Help: error when calling the GrabCut image segmentation function through the OpenCV Java interface

The code is:

import org.opencv.core.*;
import org.opencv.highgui.Highgui;
import org.opencv.imgproc.*;

public class Test {
    public static void main(String[] args) {
        Mat img = Highgui.imread("image/2.jpg");
        Mat mask = Mat.eye(800, 800, CvType.CV_8UC1);
        // the image is 800x800
        Rect rect = new Rect(0, 0, 700, 700);
        Mat bgdModel = Mat.eye(1, 13*5, CvType.CV_64FC1);
        Mat fgdModel = Mat.eye(1, 13*5, CvType.CV_64FC1);
        Imgproc.grabCut(img, mask, rect, bgdModel, fgdModel, 1, 0);
        Highgui.imwrite("D:\\mask.jpg", mask);
    }
}

The error that comes up:

OpenCV Error: Assertion failed (dtrm > std::numeric_limits<double>::epsilon()) in unknown function, file ..\..\..\src\opencv\modules\imgproc\src\grabcut.cpp, line 216
Exception in thread "main" CvException [org.opencv.core.CvException: ..\..\..\src\opencv\modules\imgproc\src\grabcut.cpp:216: error: (-215) dtrm > std::numeric_limits<double>::epsilon()]
    at org.opencv.imgproc.Imgproc.grabCut_0(Native Method)
    at org.opencv.imgproc.Imgproc.grabCut(Imgproc.java:6528)

Could anyone tell me how to fix this?
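The assertion that fires is CV_Assert( dtrm > std::numeric_limits<double>::epsilon() ) in GMM::calcInverseCovAndDeterm (grabcut.cpp:216, walked through in the article below). Initializing bgdModel and fgdModel with Mat.eye is the likely culprit: the leading 1 makes the first GMM component's weight non-zero while its covariance stays all-zero, so the covariance determinant fails the epsilon check. A minimal sketch of the fix, assuming the OpenCV 2.4.x Java bindings (paths as in the question): pass empty Mats and let grabCut allocate and zero the models itself.

Mat img = Highgui.imread("image/2.jpg");
Mat mask = new Mat();        // GC_INIT_WITH_RECT fills the mask itself
Rect rect = new Rect(0, 0, 700, 700);
Mat bgdModel = new Mat();    // must be empty (or all zero), never Mat.eye
Mat fgdModel = new Mat();
Imgproc.grabCut(img, mask, rect, bgdModel, fgdModel, 1, Imgproc.GC_INIT_WITH_RECT);
Highgui.imwrite("D:\\mask.jpg", mask);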
Reply:

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.highgui.Highgui;
import org.opencv.imgproc.Imgproc;

public class ImageSegment {
    public static void main(String[] args) {
        System.loadLibrary("opencv_java244");
        Mat image = Highgui.imread("syh.jpg");
        Rect rectangle = new Rect(25, 25, image.cols() - 64, image.rows() - 64);
        Mat result = new Mat();
        Mat bgdModel = new Mat();
        Mat fgdModel = new Mat();
        // a 1x1 "scalar" Mat holding GC_PR_FGD (= 3) for the comparison below
        Mat source = new Mat(1, 1, CvType.CV_8U, new Scalar(3));
        Imgproc.grabCut(image, result, rectangle, bgdModel, fgdModel, 1, 0);
        // keep only the pixels labeled "probably foreground"
        Core.compare(result, source, result, Core.CMP_EQ);
        Mat foreground = new Mat(image.size(), CvType.CV_8UC1, new Scalar(0, 0, 0));
        image.copyTo(foreground, result);
        Highgui.imwrite("sucess1.jpg", foreground);
        System.out.println("grabcut success!");
    }
}

The image I used is 256x256, and the rectangle I segment with is (32, 32, image.cols()-64, image.rows()-64).
Image Segmentation (4): Using OpenCV's GrabCut Function and Reading the Source Code
The previous article gave an overview of GrabCut. The GrabCut algorithm in OpenCV implements the paper "GrabCut" - Interactive Foreground Extraction using Iterated Graph Cuts. I have now added some annotations to the source code so that we can understand the algorithm more deeply. I have always felt there is a sizable gap between a paper and its code: reading the paper alone gets you perhaps 70% of the way at most, another 20% or more has to come from reading the code, and the remaining 10% depends on each person's background and accumulated knowledge.
My time with this code has been limited, so if there are mistakes I hope more experienced readers will point them out; thanks. A short reading of the original paper is in the previous post.
1. Using the grabCut function
The OpenCV source tree ships a grabCut example in the samples folder; see opencv\samples\cpp\grabcut.cpp.
The API of the grabCut function is:
void cv::grabCut( InputArray _img, InputOutputArray _mask, Rect rect,
InputOutputArray _bgdModel, InputOutputArray _fgdModel,
int iterCount, int mode )
Parameters:
img — the source image to segment; it must be an 8-bit, 3-channel (CV_8UC3) image, and it is not modified during processing.
mask — the mask image. If you initialize with a mask, mask carries the initialization; the foreground/background strokes set by user interaction can also be written into mask before calling grabCut. When processing finishes, the result is stored back into mask. Each mask element can only take one of four values:
GC_BGD (=0), background;
GC_FGD (=1), foreground;
GC_PR_BGD (=2), probably background;
GC_PR_FGD (=3), probably foreground.
If no pixels were hand-labeled GC_BGD or GC_FGD, the result contains only GC_PR_BGD and GC_PR_FGD.
rect — restricts the region to segment; only the part of the image inside this rectangle is processed.
bgdModel — the background model. If null, the function creates one internally; bgdModel must be a single-channel, double-precision floating-point (CV_64FC1) Mat with exactly 1 row and 13x5 columns.
fgdModel — the foreground model, with the same requirements as bgdModel.
iterCount — the number of iterations; it must be greater than 0.
mode — tells grabCut which operation to perform. The options are:
GC_INIT_WITH_RECT (=0), initialize GrabCut from the rectangle;
GC_INIT_WITH_MASK (=1), initialize GrabCut from the mask image;
GC_EVAL (=2), run the segmentation.
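Since GC_FGD and GC_PR_FGD together mean "foreground", a common next step is collapsing the output mask into one binary foreground mask. A sketch in Java (variable names are illustrative; the 1x1-Mat-as-scalar comparison mirrors the working reply earlier on this page):

// mask and image come from a grabCut call like the ones above.
Mat fgd   = new Mat(1, 1, CvType.CV_8U, new Scalar(Imgproc.GC_FGD));    // = 1
Mat prFgd = new Mat(1, 1, CvType.CV_8U, new Scalar(Imgproc.GC_PR_FGD)); // = 3
Mat isFgd = new Mat(), isPrFgd = new Mat(), fgMask = new Mat();
Core.compare(mask, fgd, isFgd, Core.CMP_EQ);       // 255 where mask == GC_FGD
Core.compare(mask, prFgd, isPrFgd, Core.CMP_EQ);   // 255 where mask == GC_PR_FGD
Core.bitwise_or(isFgd, isPrFgd, fgMask);           // either kind of foreground
Mat foreground = new Mat();
image.copyTo(foreground, fgMask);                  // keep only the foreground pixels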
2. Reading the grabCut source code
The source pulls in gcgraph.hpp, which implements the graph construction and the max-flow/min-cut algorithm; I have not annotated that file yet and will update this post later.
#include "precomp.hpp"
#include "gcgraph.hpp"
#include <limits>

using namespace cv;

/* GMM - a Gaussian mixture model over RGB colors */
class GMM
{
public:
    static const int componentsCount = 5;

    GMM( Mat& _model );
    double operator()( const Vec3d color ) const;
    double operator()( int ci, const Vec3d color ) const;
    int whichComponent( const Vec3d color ) const;

    void initLearning();
    void addSample( int ci, const Vec3d color );
    void endLearning();

private:
    void calcInverseCovAndDeterm( int ci );
    Mat model;
    double* coefs;   // mixture weight of each component
    double* mean;    // mean color of each component
    double* cov;     // covariance of each component
    double inverseCovs[componentsCount][3][3];
    double covDeterms[componentsCount];
    // accumulators used while (re)learning the parameters
    double sums[componentsCount][3];
    double prods[componentsCount][3][3];
    int sampleCounts[componentsCount];
    int totalSampleCount;
};

GMM::GMM( Mat& _model )
{
    // one component = 3 (mean) + 9 (covariance) + 1 (weight) = 13 doubles
    const int modelSize = 3 + 9 + 1;
    if( _model.empty() )
    {
        _model.create( 1, modelSize*componentsCount, CV_64FC1 );
        _model.setTo(Scalar(0));
    }
    else if( (_model.type() != CV_64FC1) || (_model.rows != 1) || (_model.cols != modelSize*componentsCount) )
        CV_Error( CV_StsBadArg, "_model must have CV_64FC1 type, rows == 1 and cols == 13*componentsCount" );

    model = _model;

    coefs = model.ptr<double>(0);
    mean = coefs + componentsCount;
    cov = mean + 3*componentsCount;

    for( int ci = 0; ci < componentsCount; ci++ )
        if( coefs[ci] > 0 )
            calcInverseCovAndDeterm( ci );
}
double GMM::operator()( const Vec3d color ) const
{
    double res = 0;
    for( int ci = 0; ci < componentsCount; ci++ )
        res += coefs[ci] * (*this)(ci, color );
    return res;
}

double GMM::operator()( int ci, const Vec3d color ) const
{
    double res = 0;
    if( coefs[ci] > 0 )
    {
        CV_Assert( covDeterms[ci] > std::numeric_limits<double>::epsilon() );
        Vec3d diff = color;
        double* m = mean + 3*ci;
        diff[0] -= m[0]; diff[1] -= m[1]; diff[2] -= m[2];
        double mult = diff[0]*(diff[0]*inverseCovs[ci][0][0] + diff[1]*inverseCovs[ci][1][0] + diff[2]*inverseCovs[ci][2][0])
                    + diff[1]*(diff[0]*inverseCovs[ci][0][1] + diff[1]*inverseCovs[ci][1][1] + diff[2]*inverseCovs[ci][2][1])
                    + diff[2]*(diff[0]*inverseCovs[ci][0][2] + diff[1]*inverseCovs[ci][1][2] + diff[2]*inverseCovs[ci][2][2]);
        res = 1.0f/sqrt(covDeterms[ci]) * exp(-0.5f*mult);
    }
    return res;
}

int GMM::whichComponent( const Vec3d color ) const
{
    int k = 0;
    double max = 0;

    for( int ci = 0; ci < componentsCount; ci++ )
    {
        double p = (*this)( ci, color );
        if( p > max )
        {
            k = ci;
            max = p;
        }
    }
    return k;
}
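Up to the constant (2\pi)^{-3/2}, which only shifts the energy by a constant and is therefore dropped, operator() evaluates the mixture density used as the data term:

D(z) = \sum_{i=1}^{K} \pi_i \, \frac{1}{\sqrt{\det \Sigma_i}} \exp\!\left( -\tfrac{1}{2} (z - \mu_i)^{\top} \Sigma_i^{-1} (z - \mu_i) \right), \qquad K = \text{componentsCount} = 5,

and whichComponent simply returns the arg-max over i of the per-component terms.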
void GMM::initLearning()
{
    for( int ci = 0; ci < componentsCount; ci++ )
    {
        sums[ci][0] = sums[ci][1] = sums[ci][2] = 0;
        prods[ci][0][0] = prods[ci][0][1] = prods[ci][0][2] = 0;
        prods[ci][1][0] = prods[ci][1][1] = prods[ci][1][2] = 0;
        prods[ci][2][0] = prods[ci][2][1] = prods[ci][2][2] = 0;
        sampleCounts[ci] = 0;
    }
    totalSampleCount = 0;
}

void GMM::addSample( int ci, const Vec3d color )
{
    sums[ci][0] += color[0]; sums[ci][1] += color[1]; sums[ci][2] += color[2];
    prods[ci][0][0] += color[0]*color[0]; prods[ci][0][1] += color[0]*color[1]; prods[ci][0][2] += color[0]*color[2];
    prods[ci][1][0] += color[1]*color[0]; prods[ci][1][1] += color[1]*color[1]; prods[ci][1][2] += color[1]*color[2];
    prods[ci][2][0] += color[2]*color[0]; prods[ci][2][1] += color[2]*color[1]; prods[ci][2][2] += color[2]*color[2];
    sampleCounts[ci]++;
    totalSampleCount++;
}
void GMM::endLearning()
{
    const double variance = 0.01;
    for( int ci = 0; ci < componentsCount; ci++ )
    {
        int n = sampleCounts[ci];
        if( n == 0 )
            coefs[ci] = 0;
        else
        {
            coefs[ci] = (double)n/totalSampleCount;

            double* m = mean + 3*ci;
            m[0] = sums[ci][0]/n; m[1] = sums[ci][1]/n; m[2] = sums[ci][2]/n;

            double* c = cov + 9*ci;
            c[0] = prods[ci][0][0]/n - m[0]*m[0]; c[1] = prods[ci][0][1]/n - m[0]*m[1]; c[2] = prods[ci][0][2]/n - m[0]*m[2];
            c[3] = prods[ci][1][0]/n - m[1]*m[0]; c[4] = prods[ci][1][1]/n - m[1]*m[1]; c[5] = prods[ci][1][2]/n - m[1]*m[2];
            c[6] = prods[ci][2][0]/n - m[2]*m[0]; c[7] = prods[ci][2][1]/n - m[2]*m[1]; c[8] = prods[ci][2][2]/n - m[2]*m[2];

            double dtrm = c[0]*(c[4]*c[8]-c[5]*c[7]) - c[1]*(c[3]*c[8]-c[5]*c[6]) + c[2]*(c[3]*c[7]-c[4]*c[6]);
            if( dtrm <= std::numeric_limits<double>::epsilon() )
            {
                // add white noise on the diagonal to avoid a singular covariance matrix
                c[0] += variance;
                c[4] += variance;
                c[8] += variance;
            }

            calcInverseCovAndDeterm(ci);
        }
    }
}
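endLearning is nothing more than the maximum-likelihood moment estimates computed from the accumulated statistics:

\pi_i = \frac{n_i}{N}, \qquad \mu_i = \frac{1}{n_i}\sum_{z \in C_i} z, \qquad \Sigma_i = \frac{1}{n_i}\sum_{z \in C_i} z z^{\top} - \mu_i \mu_i^{\top},

plus the 0.01 of diagonal "white noise" whenever \det \Sigma_i would otherwise be numerically zero — exactly the singularity that the assertion in the next function guards against.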
void GMM::calcInverseCovAndDeterm( int ci )
{
    if( coefs[ci] > 0 )
    {
        double *c = cov + 9*ci;
        double dtrm =
              covDeterms[ci] = c[0]*(c[4]*c[8]-c[5]*c[7]) - c[1]*(c[3]*c[8]-c[5]*c[6])
                             + c[2]*(c[3]*c[7]-c[4]*c[6]);

        // this is the assertion at grabcut.cpp:216 that the question above tripped
        CV_Assert( dtrm > std::numeric_limits<double>::epsilon() );
        inverseCovs[ci][0][0] =  (c[4]*c[8] - c[5]*c[7]) / dtrm;
        inverseCovs[ci][1][0] = -(c[3]*c[8] - c[5]*c[6]) / dtrm;
        inverseCovs[ci][2][0] =  (c[3]*c[7] - c[4]*c[6]) / dtrm;
        inverseCovs[ci][0][1] = -(c[1]*c[8] - c[2]*c[7]) / dtrm;
        inverseCovs[ci][1][1] =  (c[0]*c[8] - c[2]*c[6]) / dtrm;
        inverseCovs[ci][2][1] = -(c[0]*c[7] - c[1]*c[6]) / dtrm;
        inverseCovs[ci][0][2] =  (c[1]*c[5] - c[2]*c[4]) / dtrm;
        inverseCovs[ci][1][2] = -(c[0]*c[5] - c[2]*c[3]) / dtrm;
        inverseCovs[ci][2][2] =  (c[0]*c[4] - c[1]*c[3]) / dtrm;
    }
}
// compute beta, the contrast weight of the smoothness term, from the average
// squared color difference over all pairs of neighboring pixels
static double calcBeta( const Mat& img )
{
    double beta = 0;
    for( int y = 0; y < img.rows; y++ )
    {
        for( int x = 0; x < img.cols; x++ )
        {
            Vec3d color = img.at<Vec3b>(y,x);
            if( x>0 ) // left
            {
                Vec3d diff = color - (Vec3d)img.at<Vec3b>(y,x-1);
                beta += diff.dot(diff);
            }
            if( y>0 && x>0 ) // upleft
            {
                Vec3d diff = color - (Vec3d)img.at<Vec3b>(y-1,x-1);
                beta += diff.dot(diff);
            }
            if( y>0 ) // up
            {
                Vec3d diff = color - (Vec3d)img.at<Vec3b>(y-1,x);
                beta += diff.dot(diff);
            }
            if( y>0 && x<img.cols-1 ) // upright
            {
                Vec3d diff = color - (Vec3d)img.at<Vec3b>(y-1,x+1);
                beta += diff.dot(diff);
            }
        }
    }
    if( beta <= std::numeric_limits<double>::epsilon() )
        beta = 0;
    else
        beta = 1.f / (2 * beta/(4*img.cols*img.rows - 3*img.cols - 3*img.rows + 2) );

    return beta;
}
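This is the paper's adaptive contrast weight; the denominator is just the number of neighbor pairs visited by the four branches above, so:

\beta = \left( 2\,\langle \lVert z_m - z_n \rVert^2 \rangle \right)^{-1} = \left( \frac{2 \sum_{(m,n)} \lVert z_m - z_n \rVert^2}{4cr - 3c - 3r + 2} \right)^{-1},

where c and r are the image's cols and rows and the sum runs over all left/upleft/up/upright pairs.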
// precompute the n-link weight of every pixel to its left, upleft, up and
// upright neighbors; diagonal links are scaled by 1/sqrt(2), their distance
static void calcNWeights( const Mat& img, Mat& leftW, Mat& upleftW, Mat& upW, Mat& uprightW,
                          double beta, double gamma )
{
    const double gammaDivSqrt2 = gamma / std::sqrt(2.0f);
    leftW.create( img.rows, img.cols, CV_64FC1 );
    upleftW.create( img.rows, img.cols, CV_64FC1 );
    upW.create( img.rows, img.cols, CV_64FC1 );
    uprightW.create( img.rows, img.cols, CV_64FC1 );
    for( int y = 0; y < img.rows; y++ )
    {
        for( int x = 0; x < img.cols; x++ )
        {
            Vec3d color = img.at<Vec3b>(y,x);
            if( x-1>=0 ) // left
            {
                Vec3d diff = color - (Vec3d)img.at<Vec3b>(y,x-1);
                leftW.at<double>(y,x) = gamma * exp(-beta*diff.dot(diff));
            }
            else
                leftW.at<double>(y,x) = 0;
            if( x-1>=0 && y-1>=0 ) // upleft
            {
                Vec3d diff = color - (Vec3d)img.at<Vec3b>(y-1,x-1);
                upleftW.at<double>(y,x) = gammaDivSqrt2 * exp(-beta*diff.dot(diff));
            }
            else
                upleftW.at<double>(y,x) = 0;
            if( y-1>=0 ) // up
            {
                Vec3d diff = color - (Vec3d)img.at<Vec3b>(y-1,x);
                upW.at<double>(y,x) = gamma * exp(-beta*diff.dot(diff));
            }
            else
                upW.at<double>(y,x) = 0;
            if( x+1<img.cols && y-1>=0 ) // upright
            {
                Vec3d diff = color - (Vec3d)img.at<Vec3b>(y-1,x+1);
                uprightW.at<double>(y,x) = gammaDivSqrt2 * exp(-beta*diff.dot(diff));
            }
            else
                uprightW.at<double>(y,x) = 0;
        }
    }
}
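Each n-link therefore carries the paper's smoothness term, with the Euclidean distance between pixels folded into the prefactor (gamma for horizontal/vertical neighbors, gamma/sqrt(2) for diagonal ones):

V_{m,n} = \frac{\gamma}{\operatorname{dist}(m,n)} \exp\!\left( -\beta \lVert z_m - z_n \rVert^2 \right), \qquad \operatorname{dist}(m,n) \in \{1, \sqrt{2}\}.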
static void checkMask( const Mat& img, const Mat& mask )
{
    if( mask.empty() )
        CV_Error( CV_StsBadArg, "mask is empty" );
    if( mask.type() != CV_8UC1 )
        CV_Error( CV_StsBadArg, "mask must have CV_8UC1 type" );
    if( mask.cols != img.cols || mask.rows != img.rows )
        CV_Error( CV_StsBadArg, "mask must have as many rows and cols as img" );
    for( int y = 0; y < mask.rows; y++ )
    {
        for( int x = 0; x < mask.cols; x++ )
        {
            uchar val = mask.at<uchar>(y,x);
            if( val!=GC_BGD && val!=GC_FGD && val!=GC_PR_BGD && val!=GC_PR_FGD )
                CV_Error( CV_StsBadArg, "mask element value must be equal "
                    "GC_BGD or GC_FGD or GC_PR_BGD or GC_PR_FGD" );
        }
    }
}

// initialize the mask from a rectangle: outside = GC_BGD, inside = GC_PR_FGD
static void initMaskWithRect( Mat& mask, Size imgSize, Rect rect )
{
    mask.create( imgSize, CV_8UC1 );
    mask.setTo( GC_BGD );

    rect.x = max(0, rect.x);
    rect.y = max(0, rect.y);
    rect.width = min(rect.width, imgSize.width-rect.x);
    rect.height = min(rect.height, imgSize.height-rect.y);

    (mask(rect)).setTo( Scalar(GC_PR_FGD) );
}
// initialize the two GMMs: split the pixels by the mask, cluster each set
// into componentsCount clusters with kmeans, then learn each GMM from its
// cluster assignment
static void initGMMs( const Mat& img, const Mat& mask, GMM& bgdGMM, GMM& fgdGMM )
{
    const int kMeansItCount = 10;
    const int kMeansType = KMEANS_PP_CENTERS;

    Mat bgdLabels, fgdLabels;
    vector<Vec3f> bgdSamples, fgdSamples;
    Point p;
    for( p.y = 0; p.y < img.rows; p.y++ )
    {
        for( p.x = 0; p.x < img.cols; p.x++ )
        {
            if( mask.at<uchar>(p) == GC_BGD || mask.at<uchar>(p) == GC_PR_BGD )
                bgdSamples.push_back( (Vec3f)img.at<Vec3b>(p) );
            else // GC_FGD | GC_PR_FGD
                fgdSamples.push_back( (Vec3f)img.at<Vec3b>(p) );
        }
    }
    CV_Assert( !bgdSamples.empty() && !fgdSamples.empty() );

    Mat _bgdSamples( (int)bgdSamples.size(), 3, CV_32FC1, &bgdSamples[0][0] );
    kmeans( _bgdSamples, GMM::componentsCount, bgdLabels,
            TermCriteria( CV_TERMCRIT_ITER, kMeansItCount, 0.0), 0, kMeansType );
    Mat _fgdSamples( (int)fgdSamples.size(), 3, CV_32FC1, &fgdSamples[0][0] );
    kmeans( _fgdSamples, GMM::componentsCount, fgdLabels,
            TermCriteria( CV_TERMCRIT_ITER, kMeansItCount, 0.0), 0, kMeansType );

    bgdGMM.initLearning();
    for( int i = 0; i < (int)bgdSamples.size(); i++ )
        bgdGMM.addSample( bgdLabels.at<int>(i,0), bgdSamples[i] );
    bgdGMM.endLearning();

    fgdGMM.initLearning();
    for( int i = 0; i < (int)fgdSamples.size(); i++ )
        fgdGMM.addSample( fgdLabels.at<int>(i,0), fgdSamples[i] );
    fgdGMM.endLearning();
}
// step 1 of each iteration: assign every pixel to the most likely component
// of "its" GMM (background GMM for background pixels, foreground otherwise)
static void assignGMMsComponents( const Mat& img, const Mat& mask, const GMM& bgdGMM,
                                  const GMM& fgdGMM, Mat& compIdxs )
{
    Point p;
    for( p.y = 0; p.y < img.rows; p.y++ )
    {
        for( p.x = 0; p.x < img.cols; p.x++ )
        {
            Vec3d color = img.at<Vec3b>(p);
            compIdxs.at<int>(p) = mask.at<uchar>(p) == GC_BGD || mask.at<uchar>(p) == GC_PR_BGD ?
                bgdGMM.whichComponent(color) : fgdGMM.whichComponent(color);
        }
    }
}

// step 2: re-learn the GMM parameters from the current component assignment
static void learnGMMs( const Mat& img, const Mat& mask, const Mat& compIdxs,
                       GMM& bgdGMM, GMM& fgdGMM )
{
    bgdGMM.initLearning();
    fgdGMM.initLearning();
    Point p;
    for( int ci = 0; ci < GMM::componentsCount; ci++ )
    {
        for( p.y = 0; p.y < img.rows; p.y++ )
        {
            for( p.x = 0; p.x < img.cols; p.x++ )
            {
                if( compIdxs.at<int>(p) == ci )
                {
                    if( mask.at<uchar>(p) == GC_BGD || mask.at<uchar>(p) == GC_PR_BGD )
                        bgdGMM.addSample( ci, img.at<Vec3b>(p) );
                    else
                        fgdGMM.addSample( ci, img.at<Vec3b>(p) );
                }
            }
        }
    }
    bgdGMM.endLearning();
    fgdGMM.endLearning();
}
// step 3: build the s-t graph: one vertex per pixel, t-links to source/sink
// weighted by the GMM data term, n-links weighted as computed above
static void constructGCGraph( const Mat& img, const Mat& mask, const GMM& bgdGMM, const GMM& fgdGMM,
                              double lambda, const Mat& leftW, const Mat& upleftW,
                              const Mat& upW, const Mat& uprightW, GCGraph<double>& graph )
{
    int vtxCount = img.cols*img.rows;
    int edgeCount = 2*(4*vtxCount - 3*(img.cols + img.rows) + 2);
    graph.create(vtxCount, edgeCount);
    Point p;
    for( p.y = 0; p.y < img.rows; p.y++ )
    {
        for( p.x = 0; p.x < img.cols; p.x++ )
        {
            // add node
            int vtxIdx = graph.addVtx();
            Vec3b color = img.at<Vec3b>(p);

            // set t-weights: undecided pixels pay -log of the GMM densities,
            // hard-labeled pixels get lambda (effectively infinite) on one side
            double fromSource, toSink;
            if( mask.at<uchar>(p) == GC_PR_BGD || mask.at<uchar>(p) == GC_PR_FGD )
            {
                fromSource = -log( bgdGMM(color) );
                toSink = -log( fgdGMM(color) );
            }
            else if( mask.at<uchar>(p) == GC_BGD )
            {
                fromSource = 0;
                toSink = lambda;
            }
            else // GC_FGD
            {
                fromSource = lambda;
                toSink = 0;
            }
            graph.addTermWeights( vtxIdx, fromSource, toSink );

            // set n-weights from the precomputed neighbor link weights
            if( p.x>0 )
            {
                double w = leftW.at<double>(p);
                graph.addEdges( vtxIdx, vtxIdx-1, w, w );
            }
            if( p.x>0 && p.y>0 )
            {
                double w = upleftW.at<double>(p);
                graph.addEdges( vtxIdx, vtxIdx-img.cols-1, w, w );
            }
            if( p.y>0 )
            {
                double w = upW.at<double>(p);
                graph.addEdges( vtxIdx, vtxIdx-img.cols, w, w );
            }
            if( p.x<img.cols-1 && p.y>0 )
            {
                double w = uprightW.at<double>(p);
                graph.addEdges( vtxIdx, vtxIdx-img.cols+1, w, w );
            }
        }
    }
}
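The t-links encode the data term: the source edge, which the cut severs when a pixel ends up in the background segment, costs -log of the background GMM density; the sink edge, severed when the pixel ends up foreground, costs -log of the foreground GMM density; and hard user labels get lambda = 9*gamma = 450, a weight no minimum cut can afford to sever:

w_{\text{source}}(m) = -\log D_{\text{bgd}}(z_m), \qquad w_{\text{sink}}(m) = -\log D_{\text{fgd}}(z_m).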
// step 4: run max-flow and relabel the undecided pixels from the min cut;
// pixels the user hard-labeled GC_BGD/GC_FGD are left untouched
static void estimateSegmentation( GCGraph<double>& graph, Mat& mask )
{
    graph.maxFlow();
    Point p;
    for( p.y = 0; p.y < mask.rows; p.y++ )
    {
        for( p.x = 0; p.x < mask.cols; p.x++ )
        {
            if( mask.at<uchar>(p) == GC_PR_BGD || mask.at<uchar>(p) == GC_PR_FGD )
            {
                if( graph.inSourceSegment( p.y*mask.cols+p.x /* vertex index */ ) )
                    mask.at<uchar>(p) = GC_PR_FGD;
                else
                    mask.at<uchar>(p) = GC_PR_BGD;
            }
        }
    }
}
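Each max-flow pass thus minimizes, over the undecided pixels and with the GMM parameters held fixed, the full Gibbs energy of the paper:

E(\alpha, k, \theta, z) = U(\alpha, k, \theta, z) + V(\alpha, z), \qquad V(\alpha, z) = \gamma \sum_{(m,n) \in C} [\alpha_m \neq \alpha_n] \, \frac{\exp(-\beta \lVert z_m - z_n \rVert^2)}{\operatorname{dist}(m,n)},

and iterCount of these passes, alternating with the GMM re-learning steps, make up the outer loop below.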
// the public entry point: wire all of the above together
void cv::grabCut( InputArray _img, InputOutputArray _mask, Rect rect,
                  InputOutputArray _bgdModel, InputOutputArray _fgdModel,
                  int iterCount, int mode )
{
    Mat img = _img.getMat();
    Mat& mask = _mask.getMatRef();
    Mat& bgdModel = _bgdModel.getMatRef();
    Mat& fgdModel = _fgdModel.getMatRef();

    if( img.empty() )
        CV_Error( CV_StsBadArg, "image is empty" );
    if( img.type() != CV_8UC3 )
        CV_Error( CV_StsBadArg, "image must have CV_8UC3 type" );

    GMM bgdGMM( bgdModel ), fgdGMM( fgdModel );
    Mat compIdxs( img.size(), CV_32SC1 );

    if( mode == GC_INIT_WITH_RECT || mode == GC_INIT_WITH_MASK )
    {
        if( mode == GC_INIT_WITH_RECT )
            initMaskWithRect( mask, img.size(), rect );
        else // mode == GC_INIT_WITH_MASK
            checkMask( img, mask );
        initGMMs( img, mask, bgdGMM, fgdGMM );
    }

    if( iterCount <= 0 )
        return;

    if( mode == GC_EVAL )
        checkMask( img, mask );

    const double gamma = 50;
    const double lambda = 9*gamma;
    const double beta = calcBeta( img );

    Mat leftW, upleftW, upW, uprightW;
    calcNWeights( img, leftW, upleftW, upW, uprightW, beta, gamma );

    for( int i = 0; i < iterCount; i++ )
    {
        GCGraph<double> graph;
        assignGMMsComponents( img, mask, bgdGMM, fgdGMM, compIdxs );
        learnGMMs( img, mask, compIdxs, bgdGMM, fgdGMM );
        constructGCGraph(img, mask, bgdGMM, fgdGMM, lambda, leftW, upleftW, upW, uprightW, graph );
        estimateSegmentation( graph, mask );
    }
}
How do I use Core.dft() in OpenCV's Java interface to compute the Fourier transform of an image?
I could only find the C++ interface for OpenCV's dft() online; part of that code:
// expand input image to optimal size
int m = getOptimalDFTSize( I.rows );
int n = getOptimalDFTSize( I.cols ); // on the border add zero values
copyMakeBorder(I, padded, 0, m - I.rows, 0, n - I.cols, BORDER_CONSTANT, Scalar::all(0));

Mat planes[] = {Mat_<float>(padded), Mat::zeros(padded.size(), CV_32F)};
Mat complexI;
merge(planes, 2, complexI);  // Add to the expanded another plane with zeros

dft(complexI, complexI);     // this way the result may fit in the source matrix
Now I need Java code that works on Android, so I tried porting the C++ code above to Java:
Mat bmMat = new Mat();
Utils.bitmapToMat(bm, bmMat); // bm is an already-loaded Bitmap; convert it to a Mat
Mat pad = new Mat();
MatOfFloat padded = new MatOfFloat();
int m = Core.getOptimalDFTSize(bmMat.rows());
int n = Core.getOptimalDFTSize(bmMat.cols());
Imgproc.copyMakeBorder(bmMat, pad, 0, m - bmMat.rows(), 0, n - bmMat.cols(),
        Imgproc.BORDER_CONSTANT, Scalar.all(0));
padded = (MatOfFloat) pad;
Mat[] userid = { padded, Mat.zeros(padded.size(), CvType.CV_32F) };
List<Mat> planes = Arrays.asList(userid);
Mat complexI = new Mat();
Core.merge(planes, complexI);
Core.dft(complexI, complexI);
But it fails at runtime. My feeling is that C++'s Mat_<float>(padded) and Java's MatOfFloat don't correspond, and that's where it goes wrong. Could someone help? Thanks!
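A likely fix (a sketch, not a verified answer from the original thread): Mat_<float>(padded) in C++ actually converts the pixel data to CV_32F, while the Java cast (MatOfFloat) pad only reinterprets the object reference and throws ClassCastException; Mat.convertTo does the real conversion. Sketch assuming the same 2.4-era Java bindings plus java.util imports; COLOR_RGBA2GRAY is needed because bitmapToMat produces an RGBA Mat, while this dft layout expects a single real channel:

// Reduce the RGBA bitmap Mat to one channel first.
Mat gray = new Mat();
Imgproc.cvtColor(bmMat, gray, Imgproc.COLOR_RGBA2GRAY);

int m = Core.getOptimalDFTSize(gray.rows());
int n = Core.getOptimalDFTSize(gray.cols());
Mat pad = new Mat();
Imgproc.copyMakeBorder(gray, pad, 0, m - gray.rows(), 0, n - gray.cols(),
        Imgproc.BORDER_CONSTANT, Scalar.all(0));

// The equivalent of C++ Mat_<float>(padded): convert the data, don't cast the object.
Mat padded = new Mat();
pad.convertTo(padded, CvType.CV_32F);

// Real part = the image, imaginary part = zeros, merged into a 2-channel Mat.
List<Mat> planes = new ArrayList<Mat>();
planes.add(padded);
planes.add(Mat.zeros(padded.size(), CvType.CV_32F));
Mat complexI = new Mat();
Core.merge(planes, complexI);

Core.dft(complexI, complexI); // in-place forward DFT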
iOS: a simple person cutout with OpenCV
Recently I needed a person-cutout feature. The approaches I found online fall into two camps: CoreImage color-cube (chroma) keying, and OpenCV segmentation. The first suits solid-color backgrounds and cuts precisely; the second handles complex backgrounds, but its default cutout is imprecise, as the pictures below show.
1. The photo before processing
2. The photo after processing
CoreImage keying already has many write-ups, so I will just paste my implementation; the code works if you copy it over. Also, don't forget to import CubeMap.c.
// CoreImage keying: createCubeMap(v1, v2), values in 0~360; colors between v1 and v2 are keyed out
CubeMap myCube = createCubeMap(self.slider1.value, self.slider2.value);
NSData *myData = [[NSData alloc] initWithBytesNoCopy:myCube.data length:myCube.length freeWhenDone:true];
CIFilter *colorCubeFilter = [CIFilter filterWithName:@"CIColorCube"];
[colorCubeFilter setValue:[NSNumber numberWithFloat:myCube.dimension] forKey:@"inputCubeDimension"];
[colorCubeFilter setValue:myData forKey:@"inputCubeData"];
[colorCubeFilter setValue:[CIImage imageWithCGImage:_preview.image.CGImage] forKey:kCIInputImageKey];
CIImage *outputImage = colorCubeFilter.outputImage;

CIFilter *sourceOverCompositingFilter = [CIFilter filterWithName:@"CISourceOverCompositing"];
[sourceOverCompositingFilter setValue:outputImage forKey:kCIInputImageKey];
[sourceOverCompositingFilter setValue:[CIImage imageWithCGImage:backgroundImage.CGImage] forKey:kCIInputBackgroundImageKey];
outputImage = sourceOverCompositingFilter.outputImage;

CGImageRef cgImage = [[CIContext contextWithOptions:nil] createCGImage:outputImage fromRect:outputImage.extent];
Below is how to combine OpenCV with iOS to do the cutout.
Download opencv2; for how to get OpenCV set up, see my earlier post (IOS object tracking with OpenCV): http://blog.csdn.net/wuzehai02/article/details/8439778
Import the following header:
#import "UIImage+OpenCV.h"

The UIImage+OpenCV category:

// UIImage+OpenCV.h

@interface UIImage (UIImage_OpenCV)

+ (UIImage *)imageWithCVMat:(const cv::Mat&)cvMat;
- (id)initWithCVMat:(const cv::Mat&)cvMat;

@property(nonatomic, readonly) cv::Mat CVMat;
@property(nonatomic, readonly) cv::Mat CVGrayscaleMat;

@end

// UIImage+OpenCV.mm
#import "UIImage+OpenCV.h"

static void ProviderReleaseDataNOP(void *info, const void *data, size_t size)
{
    // Do not release memory
}

@implementation UIImage (UIImage_OpenCV)
- (cv::Mat)CVMat
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(self.CGImage);
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,    // Pointer to backing data
                                                    cols,          // Width of bitmap
                                                    rows,          // Height of bitmap
                                                    8,             // Bits per component
                                                    cvMat.step[0], // Bytes per row
                                                    colorSpace,    // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}
- (cv::Mat)CVGrayscaleMat
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;

    cv::Mat cvMat = cv::Mat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,    // Pointer to backing data
                                                    cols,          // Width of bitmap
                                                    rows,          // Height of bitmap
                                                    8,             // Bits per component
                                                    cvMat.step[0], // Bytes per row
                                                    colorSpace,    // Colorspace
                                                    kCGImageAlphaNone |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);

    return cvMat;
}
+ (UIImage *)imageWithCVMat:(const cv::Mat&)cvMat
{
    return [[[UIImage alloc] initWithCVMat:cvMat] autorelease];
}

- (id)initWithCVMat:(const cv::Mat&)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];

    CGColorSpaceRef colorSpace;
    if (cvMat.elemSize() == 1)
        colorSpace = CGColorSpaceCreateDeviceGray();
    else
        colorSpace = CGColorSpaceCreateDeviceRGB();

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);

    CGImageRef imageRef = CGImageCreate(cvMat.cols,           // Width
                                        cvMat.rows,           // Height
                                        8,                    // Bits per component
                                        8 * cvMat.elemSize(), // Bits per pixel
                                        cvMat.step[0],        // Bytes per row
                                        colorSpace,           // Colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault, // Bitmap info flags
                                        provider,             // CGDataProviderRef
                                        NULL,                 // Decode
                                        false,                // Should interpolate
                                        kCGRenderingIntentDefault); // Intent

    self = [self initWithCGImage:imageRef];

    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return self;
}

@end
That was all setup; the actual cutout code is short:
cv::Mat grayFrame, _lastFrame, mask, bgModel, fgModel;

_lastFrame = [self.preview.image CVMat];
cv::cvtColor(_lastFrame, grayFrame, cv::COLOR_RGBA2BGR); // convert to 3-channel BGR
cv::Rect rectangle(1, 1, grayFrame.cols - 2, grayFrame.rows - 2); // the region to segment

// segment the image -- OpenCV's powerful cutout function
cv::grabCut(grayFrame, mask, rectangle, bgModel, fgModel, 3, cv::GC_INIT_WITH_RECT);

int nrow = grayFrame.rows;
int ncol = grayFrame.cols;
for (int j = 0; j < nrow; j++)
{
    for (int i = 0; i < ncol; i++)
    {
        uchar val = mask.at<uchar>(j, i);
        if (val == cv::GC_PR_BGD)
        {
            // paint "probably background" pixels a solid color (the exact
            // fill value was lost from the original post; white is assumed)
            grayFrame.at<cv::Vec3b>(j, i)[0] = 255;
            grayFrame.at<cv::Vec3b>(j, i)[1] = 255;
            grayFrame.at<cv::Vec3b>(j, i)[2] = 255;
        }
    }
}

cv::cvtColor(grayFrame, grayFrame, cv::COLOR_BGR2RGB); // back to RGB for display
_preview.image = [[UIImage alloc] initWithCVMat:grayFrame]; // show the result

The code above tested OK for me. The crucial piece is OpenCV's grabCut image segmentation function.
The API of grabCut is:

void cv::grabCut( InputArray _img, InputOutputArray _mask, Rect rect,
                  InputOutputArray _bgdModel, InputOutputArray _fgdModel,
                  int iterCount, int mode )

The parameters (img, mask, rect, bgdModel, fgdModel, iterCount, mode) are exactly as described in the article above.