Usage of the org.opencv.imgproc.Imgproc.cvtColor() method, with code examples

x33g5p2x · reposted 2022-01-21 · category: Other

This article collects Java code examples for the org.opencv.imgproc.Imgproc.cvtColor() method and shows how it is used in practice. The examples were extracted from selected projects on GitHub, Stack Overflow, Maven, and similar platforms, so they should have solid reference value. Details of the Imgproc.cvtColor() method:

Package: org.opencv.imgproc
Class: Imgproc
Method: cvtColor

Imgproc.cvtColor overview

Converts an image from one color space to another.

The function converts an input image from one color space to another. In case of a transformation to or from the RGB color space, the order of the channels should be specified explicitly (RGB or BGR). Note that the default color format in OpenCV is often referred to as RGB but it is actually BGR (the bytes are reversed). So the first byte in a standard (24-bit) color image will be an 8-bit Blue component, the second byte will be Green, and the third byte will be Red. The fourth, fifth, and sixth bytes would then be the second pixel (Blue, then Green, then Red), and so on.

The conventional ranges for R, G, and B channel values are:

  • 0 to 255 for CV_8U images
  • 0 to 65535 for CV_16U images
  • 0 to 1 for CV_32F images

In case of linear transformations, the range does not matter. But in case of a non-linear transformation, an input RGB image should be normalized to the proper value range to get the correct results, for example, for the RGB -> L*u*v* transformation. For example, if you have a 32-bit floating-point image directly converted from an 8-bit image without any scaling, then it will have the 0..255 value range instead of the 0..1 range assumed by the function. So, before calling cvtColor, you first need to scale the image down:

// C++ code:

img *= 1./255;

cvtColor(img, img, CV_BGR2Luv);
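
The same scaling can be done through the Java bindings with Mat.convertTo. A minimal sketch, assuming a hypothetical 8-bit BGR input Mat named img8u:

// Java code:

// Convert to 32-bit float while scaling 0..255 down to the 0..1 range
// expected by the non-linear BGR -> L*u*v* transformation.
Mat img32f = new Mat();
img8u.convertTo(img32f, CvType.CV_32FC3, 1.0 / 255.0);
Imgproc.cvtColor(img32f, img32f, Imgproc.COLOR_BGR2Luv);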

If you use cvtColor with 8-bit images, the conversion will lose some information. For many applications this will not be noticeable, but it is recommended to use 32-bit images in applications that need the full range of colors, or that convert an image before an operation and then convert back.

If the conversion adds an alpha channel, its value will be set to the maximum of the corresponding channel range: 255 for CV_8U, 65535 for CV_16U, 1 for CV_32F.

The function can do the following transformations:

  • RGB <-> GRAY (CV_BGR2GRAY, CV_RGB2GRAY, CV_GRAY2BGR, CV_GRAY2RGB). Transformations within RGB space like adding/removing the alpha channel, reversing the channel order, conversion to/from 16-bit RGB color (R5:G6:B5 or R5:G5:B5), as well as conversion to/from grayscale using:

RGB[A] to Gray: Y <- 0.299*R + 0.587*G + 0.114*B

and

Gray to RGB[A]: R <- Y, G <- Y, B <- Y, A <- max(ChannelRange)

The conversion from an RGB image to gray is done with:

// C++ code:

cvtColor(src, bwsrc, CV_RGB2GRAY);

More advanced channel reordering can also be done with "mixChannels", as in the sketch below.
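
A minimal Java sketch of such a reordering with Core.mixChannels, swapping the B and R channels of a hypothetical 3-channel Mat named bgr (equivalent to a COLOR_BGR2RGB conversion):

// Java code:

// Each pair in fromTo maps a source channel index to a destination
// channel index: 0->2, 1->1, 2->0 reverses the channel order.
Mat rgb = new Mat(bgr.size(), bgr.type());
MatOfInt fromTo = new MatOfInt(0, 2, 1, 1, 2, 0);
Core.mixChannels(java.util.Arrays.asList(bgr), java.util.Arrays.asList(rgb), fromTo);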

  • RGB <-> CIE XYZ.Rec 709 with D65 white point (CV_BGR2XYZ, CV_RGB2XYZ, CV_XYZ2BGR, CV_XYZ2RGB):

| X |    | 0.412453  0.357580  0.180423 |   | R |
| Y | <- | 0.212671  0.715160  0.072169 | * | G |
| Z |    | 0.019334  0.119193  0.950227 |   | B |

| R |    |  3.240479  -1.53715   -0.498535 |   | X |
| G | <- | -0.969256   1.875991   0.041556 | * | Y |
| B |    |  0.055648  -0.204043   1.057311 |   | Z |

X, Y and Z cover the whole value range (in case of floating-point images, Z may exceed 1).

  • RGB <-> YCrCb JPEG (or YCC) (CV_BGR2YCrCb, CV_RGB2YCrCb, CV_YCrCb2BGR, CV_YCrCb2RGB)

Y <- 0.299*R + 0.587*G + 0.114*B
Cr <- (R - Y)*0.713 + delta
Cb <- (B - Y)*0.564 + delta
R <- Y + 1.403*(Cr - delta)
G <- Y - 0.714*(Cr - delta) - 0.344*(Cb - delta)
B <- Y + 1.773*(Cb - delta)

where

delta = 128 for 8-bit images, 32768 for 16-bit images, 0.5 for floating-point images

Y, Cr, and Cb cover the whole value range.

  • RGB <-> HSV (CV_BGR2HSV, CV_RGB2HSV, CV_HSV2BGR, CV_HSV2RGB). In case of 8-bit and 16-bit images, R, G, and B are converted to the floating-point format and scaled to fit the 0 to 1 range.

V <- max(R, G, B)
S <- (V - min(R, G, B)) / V if V != 0; 0 otherwise
H <- 60*(G - B)/(V - min(R, G, B)) if V = R;
     120 + 60*(B - R)/(V - min(R, G, B)) if V = G;
     240 + 60*(R - G)/(V - min(R, G, B)) if V = B

If H < 0 then H <- H + 360. On output 0 <= V <= 1, 0 <= S <= 1, 0 <= H <= 360.

The values are then converted to the destination data type:

  • 8-bit images

V <- 255*V, S <- 255*S, H <- H/2 (to fit to 0 to 255)

  • 16-bit images (currently not supported)

V <- 65535*V, S <- 65535*S, H <- H

  • 32-bit images: H, S, and V are left as is
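
A minimal Java sketch of an HSV threshold; note that for CV_8U images OpenCV stores H as H/2, so hue bounds must lie in 0..180 (the input Mat bgr and the bounds are hypothetical):

// Java code:

Mat hsv = new Mat();
Mat mask = new Mat();
Imgproc.cvtColor(bgr, hsv, Imgproc.COLOR_BGR2HSV);
// For CV_8U: H in 0..180 (degrees/2), S and V in 0..255.
Core.inRange(hsv, new Scalar(0, 100, 50), new Scalar(180, 255, 255), mask);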
  • RGB <-> HLS (CV_BGR2HLS, CV_RGB2HLS, CV_HLS2BGR, CV_HLS2RGB).

In case of 8-bit and 16-bit images, R, G, and B are converted to the floating-point format and scaled to fit the 0 to 1 range.

V_max <- max(R, G, B)
V_min <- min(R, G, B)
L <- (V_max + V_min)/2
S <- (V_max - V_min)/(V_max + V_min) if L < 0.5;
     (V_max - V_min)/(2 - (V_max + V_min)) if L >= 0.5
H <- 60*(G - B)/(V_max - V_min) if V_max = R;
     120 + 60*(B - R)/(V_max - V_min) if V_max = G;
     240 + 60*(R - G)/(V_max - V_min) if V_max = B

If H < 0 then H <- H + 360. On output 0 <= L <= 1, 0 <= S <= 1, 0 <= H <= 360.

The values are then converted to the destination data type:

  • 8-bit images

V <- 255*V, S <- 255*S, H <- H/2 (to fit to 0 to 255)

  • 16-bit images (currently not supported)

V <- 65535*V, S <- 65535*S, H <- H

  • 32-bit images: H, S, V are left as is
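
A small Java sketch that extracts the lightness plane after an HLS conversion; note the channel order is H, L, S (the input Mat bgr is hypothetical):

// Java code:

Mat hls = new Mat();
Imgproc.cvtColor(bgr, hls, Imgproc.COLOR_BGR2HLS);
java.util.List<Mat> channels = new java.util.ArrayList<>();
Core.split(hls, channels);
Mat lightness = channels.get(1); // index 0 = H, 1 = L, 2 = S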
  • RGB <-> CIE L*a*b* (CV_BGR2Lab, CV_RGB2Lab, CV_Lab2BGR, CV_Lab2RGB).

In case of 8-bit and 16-bit images, R, G, and B are converted to the floating-point format and scaled to fit the 0 to 1 range.

[X Y Z] is computed from [R G B] with the same matrix as in the RGB <-> CIE XYZ conversion above, then:

X <- X/X_n, where X_n = 0.950456
Z <- Z/Z_n, where Z_n = 1.088754
L <- 116*Y^(1/3) - 16 for Y > 0.008856; 903.3*Y otherwise
a <- 500*(f(X) - f(Y)) + delta
b <- 200*(f(Y) - f(Z)) + delta

where

f(t) = t^(1/3) for t > 0.008856; 7.787*t + 16/116 otherwise

and

delta = 128 for 8-bit images; 0 for floating-point images

This outputs 0 <= L <= 100, -127 <= a <= 127, -127 <= b <= 127. The values are then converted to the destination data type:

  • 8-bit images

L <- L*255/100, a <- a + 128, b <- b + 128

  • 16-bit images (currently not supported)
  • 32-bit images: L, a, and b are left as is
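
The 8-bit packing above can be undone when reading pixel values back in Java; a minimal sketch, assuming a hypothetical CV_8UC3 input Mat named bgr:

// Java code:

Mat lab = new Mat();
Imgproc.cvtColor(bgr, lab, Imgproc.COLOR_BGR2Lab);
double[] p = lab.get(0, 0);      // packed 8-bit L*a*b* values
double L = p[0] * 100.0 / 255.0; // undo L <- L*255/100
double a = p[1] - 128.0;         // undo a <- a + 128
double b = p[2] - 128.0;         // undo b <- b + 128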
  • RGB <-> CIE L*u*v* (CV_BGR2Luv, CV_RGB2Luv, CV_Luv2BGR, CV_Luv2RGB).

In case of 8-bit and 16-bit images, R, G, and B are converted to the floating-point format and scaled to fit the 0 to 1 range.

[X Y Z] is computed from [R G B] with the same matrix as in the RGB <-> CIE XYZ conversion above, then:

L <- 116*Y^(1/3) - 16 for Y > 0.008856; 903.3*Y otherwise
u' <- 4*X/(X + 15*Y + 3*Z)
v' <- 9*Y/(X + 15*Y + 3*Z)
u <- 13*L*(u' - u_n), where u_n = 0.19793943
v <- 13*L*(v' - v_n), where v_n = 0.46831096

This outputs 0 <= L <= 100, -134 <= u <= 220, -140 <= v <= 122.

The values are then converted to the destination data type:

  • 8-bit images

L <- 255/100*L, u <- 255/354*(u + 134), v <- 255/262*(v + 140)

  • 16-bit images (currently not supported)
  • 32-bit images: L, u, and v are left as is
The above formulae for converting RGB to/from various color spaces have been taken from multiple sources on the web, primarily from the Charles Poynton site http://www.poynton.com/ColorFAQ.html

  • Bayer -> RGB (CV_BayerBG2BGR, CV_BayerGB2BGR, CV_BayerRG2BGR, CV_BayerGR2BGR, CV_BayerBG2RGB, CV_BayerGB2RGB, CV_BayerRG2RGB, CV_BayerGR2RGB). The Bayer pattern is widely used in CCD and CMOS cameras. It enables you to get color pictures from a single plane where R, G, and B pixels (sensors of a particular component) are interleaved. The output RGB components of a pixel are interpolated from 1, 2, or 4 neighbors of the pixel having the same color. There are several modifications of the above pattern that can be achieved by shifting the pattern one pixel left and/or one pixel up. The two letters C_1 and C_2 in the conversion constants CV_BayerC_1C_22BGR and CV_BayerC_1C_22RGB indicate the particular pattern type: these are the components from the second row, second and third columns, respectively. For example, the default pattern has a very popular "BG" type.

Code examples

Example source: RaiMan/SikuliX2

public static Mat drawContoursInImage(List<MatOfPoint> contours, Mat mBase) {
 Mat mResult = Element.getNewMat();
 Mat mWork = new Mat();
 // toGray / toColor are SikuliX constants holding Imgproc.COLOR_* codes
 Imgproc.cvtColor(mBase, mWork, toGray);
 Imgproc.cvtColor(mWork, mResult, toColor);
 Imgproc.drawContours(mResult, contours, -1, new Scalar(0, 0, 255));
 return mResult;
}

Example source: dermotte/LIRE

public LinkedList<CvSurfFeature> computeSurfKeypoints(BufferedImage img) {
    MatOfKeyPoint keypoints = new MatOfKeyPoint();
    List<KeyPoint> myKeys;
//        Mat img_object = Highgui.imread(image, 0); //0 = CV_LOAD_IMAGE_GRAYSCALE
//        detector.detect(img_object, keypoints);
    byte[] data = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
    Mat matRGB = new Mat(img.getHeight(), img.getWidth(), CvType.CV_8UC3);
    matRGB.put(0, 0, data);
    Mat matGray = new Mat(img.getHeight(),img.getWidth(),CvType.CV_8UC1);
    Imgproc.cvtColor(matRGB, matGray, Imgproc.COLOR_BGR2GRAY);              //TODO: RGB or BGR?
    byte[] dataGray = new byte[matGray.rows()*matGray.cols()*(int)(matGray.elemSize())];
    matGray.get(0, 0, dataGray);

    detector.detect(matGray, keypoints);
    myKeys = keypoints.toList();

    LinkedList<CvSurfFeature> myKeypoints = new LinkedList<CvSurfFeature>();
    KeyPoint key;
    CvSurfFeature feat;
    for (Iterator<KeyPoint> iterator = myKeys.iterator(); iterator.hasNext(); ) {
      key = iterator.next();
      feat = new CvSurfFeature(key.pt.x, key.pt.y, key.size, null);
      myKeypoints.add(feat);
    }

    return myKeypoints;
  }
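
A note on the //TODO above: if the BufferedImage has TYPE_3BYTE_BGR (the common case for JPEGs loaded via ImageIO), its data buffer stores bytes in blue-green-red order, so COLOR_BGR2GRAY is the matching conversion code despite the matRGB variable name.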

Example source: dermotte/LIRE

public LinkedList<CvSiftFeature> computeSiftKeypoints(BufferedImage img) {
    MatOfKeyPoint keypoints = new MatOfKeyPoint();
    List<KeyPoint> myKeys;
//        Mat img_object = Highgui.imread(image, 0); //0 = CV_LOAD_IMAGE_GRAYSCALE
//        detector.detect(img_object, keypoints);
    byte[] data = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
    Mat matRGB = new Mat(img.getHeight(), img.getWidth(), CvType.CV_8UC3);
    matRGB.put(0, 0, data);
    Mat matGray = new Mat(img.getHeight(),img.getWidth(),CvType.CV_8UC1);
    Imgproc.cvtColor(matRGB, matGray, Imgproc.COLOR_BGR2GRAY);              //TODO: RGB or BGR?
    byte[] dataGray = new byte[matGray.rows()*matGray.cols()*(int)(matGray.elemSize())];
    matGray.get(0, 0, dataGray);

    detector.detect(matGray, keypoints);
    myKeys = keypoints.toList();

    LinkedList<CvSiftFeature> myKeypoints = new LinkedList<CvSiftFeature>();
    KeyPoint key;
    CvSiftFeature feat;
    for (Iterator<KeyPoint> iterator = myKeys.iterator(); iterator.hasNext(); ) {
      key = iterator.next();
      feat = new CvSiftFeature(key.pt.x, key.pt.y, key.size, null);
      myKeypoints.add(feat);
    }

    return myKeypoints;
  }

Example source: RaiMan/SikuliX2

public static Mat detectEdges(Mat mSource) {
 Mat mSourceGray = Element.getNewMat();
 Mat mDetectedEdges = Element.getNewMat();
 int edgeThresh = 1;
 int lowThreshold = 100;
 int ratio = 3;
 int kernelSize = 5;
 int blurFilterSize = 3;
 if (mSource.channels() == 1) {
  mSourceGray = mSource;
 } else {
  Imgproc.cvtColor(mSource, mSourceGray, toGray);
 }
 Imgproc.blur(mSourceGray, mDetectedEdges, new Size(blurFilterSize, blurFilterSize));
 Imgproc.Canny(mDetectedEdges, mDetectedEdges,
     lowThreshold, lowThreshold * ratio, kernelSize, false);
 return mDetectedEdges;
}

Example source: RaiMan/SikuliX2

public static void logShow(Mat mat, int time) {
 Picture image = new Picture();
 if (isGray(mat)) {
  Mat colored = Element.getNewMat();
  Imgproc.cvtColor(mat, colored, toColor);
  image = new Picture(colored);
 } else if (isColored(mat)) {
  image = new Picture(mat);
 }
 if (image.isValid()) {
  image.show(time);
 }
}

Example source: kongqw/OpenCVForAndroid

@Override
public Mat rgba() {
  Imgproc.cvtColor(mYuvFrameData, mRgba, Imgproc.COLOR_YUV2RGBA_NV21, 4);
  return mRgba;
}
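
The last argument (4) is cvtColor's optional dstCn parameter, the number of channels in the destination image; COLOR_YUV2RGBA_NV21 converts the NV21 preview buffer delivered by the Android camera directly to RGBA.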

Example source: RaiMan/SikuliX2

public static List<Element> detectChanges(Mat base, Mat mChanged) {
 int PIXEL_DIFF_THRESHOLD = 3;
 int IMAGE_DIFF_THRESHOLD = 5;
 Mat mBaseGray = Element.getNewMat();
 Mat mChangedGray = Element.getNewMat();
 Mat mDiffAbs = Element.getNewMat();
 Mat mDiffTresh = Element.getNewMat();
 Mat mChanges = Element.getNewMat();
 List<Element> rectangles = new ArrayList<>();
 // compare the two frames in grayscale and suppress tiny pixel noise
 Imgproc.cvtColor(base, mBaseGray, toGray);
 Imgproc.cvtColor(mChanged, mChangedGray, toGray);
 Core.absdiff(mBaseGray, mChangedGray, mDiffAbs);
 Imgproc.threshold(mDiffAbs, mDiffTresh, PIXEL_DIFF_THRESHOLD, 0.0, Imgproc.THRESH_TOZERO);
 if (Core.countNonZero(mDiffTresh) > IMAGE_DIFF_THRESHOLD) {
  // binarize, then dilate and close gaps so changed regions form solid blobs
  Imgproc.threshold(mDiffAbs, mDiffAbs, PIXEL_DIFF_THRESHOLD, 255, Imgproc.THRESH_BINARY);
  Imgproc.dilate(mDiffAbs, mDiffAbs, Element.getNewMat());
  Mat se = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(5, 5));
  Imgproc.morphologyEx(mDiffAbs, mDiffAbs, Imgproc.MORPH_CLOSE, se);
  List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
  Mat mHierarchy = Element.getNewMat();
  Imgproc.findContours(mDiffAbs, contours, mHierarchy, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
  rectangles = contoursToRectangle(contours);
  Core.subtract(mDiffAbs, mDiffAbs, mChanges);
  Imgproc.drawContours(mChanges, contours, -1, new Scalar(255));
  //logShow(mDiffAbs);
 }
 return rectangles;
}

Example source: farkam135/GoIV

@Override
public Mat rgba() {
  if (mPreviewFormat == ImageFormat.NV21)
    Imgproc.cvtColor(mYuvFrameData, mRgba, Imgproc.COLOR_YUV2RGBA_NV21, 4);
  else if (mPreviewFormat == ImageFormat.YV12)
    Imgproc.cvtColor(mYuvFrameData, mRgba, Imgproc.COLOR_YUV2RGB_I420, 4);  // COLOR_YUV2RGBA_YV12 produces inverted colors
  else
    throw new IllegalArgumentException("Preview Format can be NV21 or YV12");
  return mRgba;
}

Example source: openpnp/openpnp

public FluentCv convertColor(int code, String... tag) {
  Imgproc.cvtColor(mat, mat, code);
  return store(mat, tag);
}

Example source: openpnp/openpnp

@Override
  public Result process(CvPipeline pipeline) throws Exception {
    Mat mat = pipeline.getWorkingImage();
    Imgproc.cvtColor(mat, mat, conversion.getCode());
    return null;
  }
}

Example source: kongqw/OpenCVForAndroid

private void rgba2Hsv(Mat rgba) {
  Imgproc.cvtColor(rgba, hsv, Imgproc.COLOR_RGB2HSV);
  // Core.inRange checks whether each element of the input array lies between
  // two given bounds and works on multiple channels. Here all three HSV
  // channels are tested: H against 0..180, S against sMin..256, and V
  // against min(vMin, vMax)..max(vMin, vMax). A mask pixel is set to 0xff
  // only if all three channels fall inside their ranges, and to 0x00 otherwise.
  int vMin = 65, vMax = 256, sMin = 55;
  Core.inRange(
      hsv,
      new Scalar(0, sMin, Math.min(vMin, vMax)),
      new Scalar(180, 256, Math.max(vMin, vMax)),
      mask
  );
}
