Class | Description
---|---
org.bytedeco.opencv.opencv_core.CvMat | CvMat is now obsolete; consider using Mat instead (see the sketch after this table).
org.bytedeco.opencv.opencv_core.CvMatND | Consider using cv::Mat instead.
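
Migrating off CvMat is usually just a constructor change. A minimal sketch, assuming the org.bytedeco.opencv 4.x bindings; the 3x3 CV_8UC1 values are arbitrary example data:

```java
import org.bytedeco.opencv.opencv_core.Mat;
import static org.bytedeco.opencv.global.opencv_core.CV_8UC1;

public class CvMatMigration {
    public static void main(String[] args) {
        // Old C-API style: CvMat m = cvCreateMat(3, 3, CV_8UC1);
        // New C++-API style: construct a cv::Mat directly (rows, cols, type).
        Mat m = new Mat(3, 3, CV_8UC1);
        System.out.println("rows=" + m.rows() + ", cols=" + m.cols());
        m.release(); // release the native memory deterministically
    }
}
```
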
Field | Description
---|---
org.opencv.core.CvType.CV_USRTYPE1 | Please use CvType.CV_16F instead (see the sketch after this table).
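
In OpenCV 4 the CV_USRTYPE1 depth slot was repurposed for the 16-bit float type. A minimal sketch with the org.opencv bindings; the 2x2 size is an arbitrary example:

```java
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class Cv16fExample {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        // Old: new Mat(2, 2, CvType.CV_USRTYPE1) -- the constant is deprecated.
        Mat half = new Mat(2, 2, CvType.CV_16F);
        System.out.println("depth matches CV_16F: " + (half.depth() == CvType.CV_16F));
    }
}
```
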
Method | Description
---|---
org.bytedeco.opencv.opencv_dnn.NormalizeBBoxLayer.acrossSpatial() |
org.bytedeco.opencv.opencv_dnn.BaseConvolutionLayer.adjustPad() |
org.bytedeco.opencv.global.opencv_stitching.createStitcher(boolean) | Use Stitcher::create instead.
org.bytedeco.opencv.global.opencv_stitching.createStitcherScans(boolean) | Use Stitcher::create instead.
org.bytedeco.opencv.opencv_dnn.BaseConvolutionLayer.dilation() |
org.bytedeco.opencv.global.opencv_aruco.drawAxis(Mat, Mat, Mat, Mat, Mat, float) | Use cv::drawFrameAxes instead (see the sketch after this table).
org.opencv.aruco.Aruco.drawAxis(Mat, Mat, Mat, Mat, Mat, float) | Use cv::drawFrameAxes instead.
org.bytedeco.opencv.global.opencv_video.estimateRigidTransform(GpuMat, GpuMat, boolean) |
org.bytedeco.opencv.global.opencv_video.estimateRigidTransform(Mat, Mat, boolean) | Use cv::estimateAffine2D or cv::estimateAffinePartial2D instead. If you are using this function with images, extract points using cv::calcOpticalFlowPyrLK and then use the estimation functions (see the sketch after this table).
org.bytedeco.opencv.global.opencv_video.estimateRigidTransform(UMat, UMat, boolean) |
org.bytedeco.opencv.opencv_dnn.Layer.finalize(MatPointerVector, MatVector) | Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead.
org.bytedeco.opencv.opencv_dnn.Layer.finalize(MatVector) | Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead.
org.bytedeco.opencv.opencv_dnn.Layer.forward(MatPointerVector, MatVector, MatVector) | Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead.
org.bytedeco.opencv.opencv_core.AbstractCvMat.get() |
org.bytedeco.opencv.opencv_core.AbstractCvMat.get(double[]) |
org.bytedeco.opencv.opencv_core.AbstractCvMat.get(int) |
org.bytedeco.opencv.opencv_core.AbstractCvMat.get(int, double[]) |
org.bytedeco.opencv.opencv_core.AbstractCvMat.get(int, double[], int, int) |
org.bytedeco.opencv.opencv_core.AbstractCvMat.get(int, int) |
org.bytedeco.opencv.opencv_core.AbstractCvMat.get(int, int, int) |
org.bytedeco.opencv.opencv_core.AbstractCvMat.getByteBuffer() |
org.bytedeco.opencv.opencv_core.AbstractArray.getByteBuffer() |
org.bytedeco.opencv.opencv_core.AbstractArray.getByteBuffer(int) |
org.bytedeco.opencv.opencv_core.AbstractCvMat.getDoubleBuffer() |
org.bytedeco.opencv.opencv_core.AbstractArray.getDoubleBuffer() |
org.bytedeco.opencv.opencv_core.AbstractArray.getDoubleBuffer(int) |
org.bytedeco.opencv.opencv_core.AbstractCvMat.getFloatBuffer() |
org.bytedeco.opencv.opencv_core.AbstractArray.getFloatBuffer() |
org.bytedeco.opencv.opencv_core.AbstractArray.getFloatBuffer(int) |
org.bytedeco.opencv.opencv_core.AbstractCvMat.getIntBuffer() |
org.bytedeco.opencv.opencv_core.AbstractArray.getIntBuffer() |
org.bytedeco.opencv.opencv_core.AbstractArray.getIntBuffer(int) |
org.bytedeco.opencv.opencv_core.Program.getPrefix() |
org.bytedeco.opencv.opencv_core.Program.getPrefix(BytePointer) |
org.bytedeco.opencv.opencv_core.Program.getPrefix(String) |
org.bytedeco.opencv.opencv_core.AbstractCvMat.getShortBuffer() |
org.bytedeco.opencv.opencv_core.AbstractArray.getShortBuffer() |
org.bytedeco.opencv.opencv_core.AbstractArray.getShortBuffer(int) |
org.bytedeco.opencv.global.opencv_core.getThreadNum() | The current implementation does not correspond to this documentation. The exact meaning of the return value depends on the threading framework used by the OpenCV library.
org.opencv.core.Core.getThreadNum() | The current implementation does not correspond to this documentation. The exact meaning of the return value depends on the threading framework used by the OpenCV library.
org.bytedeco.opencv.opencv_dnn.PoolingLayer.kernel() |
org.bytedeco.opencv.opencv_dnn.BaseConvolutionLayer.kernel() |
org.bytedeco.opencv.global.opencv_imgproc.linearPolar(Mat, Mat, Point2f, double, int) | This function produces the same result as cv::warpPolar(src, dst, src.size(), center, maxRadius, flags) (see the warpPolar sketch after this table). It transforms the source image using the following transformation (see the polar remaps reference image c)): \(\begin{array}{l} dst( \rho , \phi ) = src(x,y) \\ dst.size() \leftarrow src.size() \end{array}\) where \(\begin{array}{l} I = (dx,dy) = (x - center.x, y - center.y) \\ \rho = Kmag \cdot \texttt{magnitude} (I) \\ \phi = angle \cdot \texttt{angle} (I) \end{array}\) and \(\begin{array}{l} Kx = src.cols / maxRadius \\ Ky = src.rows / 2\pi \end{array}\).
org.opencv.imgproc.Imgproc.linearPolar(Mat, Mat, Point, double, int) | This function produces the same result as cv::warpPolar(src, dst, src.size(), center, maxRadius, flags). It transforms the source image using the following transformation (see the polar remaps reference image c)): \(\begin{array}{l} dst( \rho , \phi ) = src(x,y) \\ dst.size() \leftarrow src.size() \end{array}\) where \(\begin{array}{l} I = (dx,dy) = (x - center.x, y - center.y) \\ \rho = Kmag \cdot \texttt{magnitude} (I) \\ \phi = angle \cdot \texttt{angle} (I) \end{array}\) and \(\begin{array}{l} Kx = src.cols / maxRadius \\ Ky = src.rows / 2\pi \end{array}\).
org.bytedeco.opencv.global.opencv_text.loadOCRHMMClassifierCNN(BytePointer) | Use loadOCRHMMClassifier instead.
org.bytedeco.opencv.global.opencv_text.loadOCRHMMClassifierNM(BytePointer) | Use loadOCRHMMClassifier instead.
org.bytedeco.opencv.global.opencv_imgproc.logPolar(Mat, Mat, Point2f, double, int) | This function produces the same result as cv::warpPolar(src, dst, src.size(), center, maxRadius, flags + WARP_POLAR_LOG). It transforms the source image using the following transformation (see the polar remaps reference image d)): \(\begin{array}{l} dst( \rho , \phi ) = src(x,y) \\ dst.size() \leftarrow src.size() \end{array}\) where \(\begin{array}{l} I = (dx,dy) = (x - center.x, y - center.y) \\ \rho = M \cdot \log_e(\texttt{magnitude} (I)) \\ \phi = Kangle \cdot \texttt{angle} (I) \end{array}\) and \(\begin{array}{l} M = src.cols / \log_e(maxRadius) \\ Kangle = src.rows / 2\pi \end{array}\). The function emulates the human "foveal" vision and can be used for fast scale- and rotation-invariant template matching, for object tracking, and so forth.
org.opencv.imgproc.Imgproc.logPolar(Mat, Mat, Point, double, int) | This function produces the same result as cv::warpPolar(src, dst, src.size(), center, maxRadius, flags + WARP_POLAR_LOG). It transforms the source image using the following transformation (see the polar remaps reference image d)): \(\begin{array}{l} dst( \rho , \phi ) = src(x,y) \\ dst.size() \leftarrow src.size() \end{array}\) where \(\begin{array}{l} I = (dx,dy) = (x - center.x, y - center.y) \\ \rho = M \cdot \log_e(\texttt{magnitude} (I)) \\ \phi = Kangle \cdot \texttt{angle} (I) \end{array}\) and \(\begin{array}{l} M = src.cols / \log_e(maxRadius) \\ Kangle = src.rows / 2\pi \end{array}\). The function emulates the human "foveal" vision and can be used for fast scale- and rotation-invariant template matching, for object tracking, and so forth.
org.bytedeco.opencv.opencv_dnn.PoolingLayer.pad_b() |
org.bytedeco.opencv.opencv_dnn.PoolingLayer.pad_l() |
org.bytedeco.opencv.opencv_dnn.PoolingLayer.pad_r() |
org.bytedeco.opencv.opencv_dnn.PoolingLayer.pad_t() |
org.bytedeco.opencv.opencv_dnn.PoolingLayer.pad() |
org.bytedeco.opencv.opencv_dnn.BaseConvolutionLayer.pad() |
org.bytedeco.opencv.opencv_core.AbstractCvMat.put(double...) |
org.bytedeco.opencv.opencv_core.AbstractCvMat.put(int, double...) |
org.bytedeco.opencv.opencv_core.AbstractCvMat.put(int, double) |
org.bytedeco.opencv.opencv_core.AbstractCvMat.put(int, double[], int, int) |
org.bytedeco.opencv.opencv_core.AbstractCvMat.put(int, int, double) |
org.bytedeco.opencv.opencv_core.AbstractCvMat.put(int, int, int, double) |
org.bytedeco.opencv.opencv_core.Program.read(BytePointer, BytePointer) |
org.bytedeco.opencv.opencv_core.Program.read(String, String) |
org.bytedeco.opencv.opencv_core.AbstractCvMat.reset() |
org.opencv.dnn.Layer.run(List<Mat>, List<Mat>, List<Mat>) | This method will be removed in a future release.
org.bytedeco.opencv.opencv_dnn.Layer.run(MatVector, MatVector, MatVector) | This method will be removed in a future release.
org.bytedeco.opencv.opencv_dnn.LSTMLayer.setProduceCellOutput() |
org.bytedeco.opencv.opencv_dnn.LSTMLayer.setProduceCellOutput(boolean) | Use flag produce_cell_output in LayerParams instead. If this flag is set to true, the layer will produce \(c_t\) as its second output; the shape of the second output is the same as that of the first output.
org.bytedeco.opencv.opencv_dnn.LSTMLayer.setUseTimstampsDim() |
org.bytedeco.opencv.opencv_dnn.LSTMLayer.setUseTimstampsDim(boolean) | Use flag use_timestamp_dim in LayerParams instead. Specifies whether the first dimension of the input blob is interpreted as the timestamp dimension or as the sample dimension. If the flag is set to true, the shape of the input blob is interpreted as [T, N, [data dims]], where T is the number of timestamps and N is the number of independent streams; each forward() call then iterates through T timestamps and updates the layer's state T times. If the flag is set to false, the shape of the input blob is interpreted as [N, [data dims]]; each forward() call then makes one iteration and produces one timestamp with shape [N, [out dims]].
org.bytedeco.opencv.opencv_dnn.LSTMLayer.setWeights(Mat, Mat, Mat) | Use LayerParams::blobs instead. Sets trained weights for the LSTM layer. LSTM behavior on each step is defined by the current input, previous output, previous cell state, and learned weights. Let \(x_t\) be the current input, \(h_t\) the current output, and \(c_t\) the current state; then \(h_t = o_t \odot \tanh(c_t)\) and \(c_t = f_t \odot c_{t-1} + i_t \odot g_t\), where \(\odot\) is the per-element multiply operation and \(i_t, f_t, o_t, g_t\) are internal gates computed using learned weights. Gates are computed as follows: \(i_t = \mathrm{sigmoid}(W_{xi} x_t + W_{hi} h_{t-1} + b_i)\), \(f_t = \mathrm{sigmoid}(W_{xf} x_t + W_{hf} h_{t-1} + b_f)\), \(o_t = \mathrm{sigmoid}(W_{xo} x_t + W_{ho} h_{t-1} + b_o)\), \(g_t = \tanh(W_{xg} x_t + W_{hg} h_{t-1} + b_g)\), where \(W_{x?}\), \(W_{h?}\) and \(b_?\) are learned weights represented as matrices: \(W_{x?} \in R^{N_h \times N_x}\), \(W_{h?} \in R^{N_h \times N_h}\), \(b_? \in R^{N_h}\). For simplicity and performance, the gate weights are stored concatenated: \(W_x = [W_{xi}; W_{xf}; W_{xo}; W_{xg}] \in R^{4 N_h \times N_x}\), \(W_h = [W_{hi}; W_{hf}; W_{ho}; W_{hg}] \in R^{4 N_h \times N_h}\), and \(b = [b_i; b_f; b_o; b_g] \in R^{4 N_h}\).
org.bytedeco.opencv.opencv_core.Program.source() |
org.bytedeco.opencv.opencv_dnn.PoolingLayer.stride() |
org.bytedeco.opencv.opencv_dnn.BaseConvolutionLayer.stride() |
org.bytedeco.opencv.opencv_core.Program.write(BytePointer) |
org.bytedeco.opencv.opencv_core.Program.write(String) |
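
The migration notes for drawAxis, estimateRigidTransform, and linearPolar/logPolar above each name a concrete replacement API. The sketches below illustrate those migrations using the org.opencv (OpenCV 4.x) Java bindings; everything not named in the table rows themselves (helper names, stand-in images, feature-tracking parameters) is an illustrative assumption, not part of this deprecation list.

Replacing Aruco.drawAxis with the calib3d module's drawFrameAxes is a one-line change; a minimal sketch:

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Mat;

public class AxesMigration {
    // image, cameraMatrix, distCoeffs, rvec, and tvec are assumed to come from
    // an earlier calibration and pose-estimation step (hypothetical inputs here).
    static void drawPose(Mat image, Mat cameraMatrix, Mat distCoeffs, Mat rvec, Mat tvec) {
        // Old: Aruco.drawAxis(image, cameraMatrix, distCoeffs, rvec, tvec, 0.1f);
        Calib3d.drawFrameAxes(image, cameraMatrix, distCoeffs, rvec, tvec, 0.1f);
    }
}
```

For estimateRigidTransform, the note prescribes a two-step replacement: extract point correspondences with calcOpticalFlowPyrLK, then fit the transform with estimateAffinePartial2D (or estimateAffine2D for a full affine). A sketch assuming two grayscale frames; goodFeaturesToTrack and its parameters are one plausible way to seed the tracker:

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import org.opencv.video.Video;

public class RigidTransformMigration {
    // Replacement for estimateRigidTransform(prevGray, nextGray, false).
    static Mat estimateMotion(Mat prevGray, Mat nextGray) {
        // Seed the tracker with corners from the first frame.
        MatOfPoint corners = new MatOfPoint();
        Imgproc.goodFeaturesToTrack(prevGray, corners, 200, 0.01, 10);
        MatOfPoint2f prevPts = new MatOfPoint2f(corners.toArray());
        // Track the corners into the second frame with pyramidal Lucas-Kanade.
        MatOfPoint2f nextPts = new MatOfPoint2f();
        MatOfByte status = new MatOfByte();
        MatOfFloat err = new MatOfFloat();
        Video.calcOpticalFlowPyrLK(prevGray, nextGray, prevPts, nextPts, status, err);
        // Fit a 2x3 rotation + uniform-scale + translation matrix; the RANSAC
        // scheme inside estimateAffinePartial2D rejects badly tracked points.
        return Calib3d.estimateAffinePartial2D(prevPts, nextPts);
    }
}
```

For linearPolar and logPolar, warpPolar covers both mappings, selected by flag. A minimal sketch on a stand-in image:

```java
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

public class WarpPolarMigration {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat src = Mat.eye(256, 256, CvType.CV_8UC1); // arbitrary test image
        Mat dst = new Mat();
        Point center = new Point(src.cols() / 2.0, src.rows() / 2.0);
        double maxRadius = 128.0;
        // Old: Imgproc.linearPolar(src, dst, center, maxRadius, Imgproc.INTER_LINEAR);
        Imgproc.warpPolar(src, dst, src.size(), center, maxRadius,
                Imgproc.INTER_LINEAR + Imgproc.WARP_POLAR_LINEAR);
        // Old: Imgproc.logPolar(src, dst, center, maxRadius, Imgproc.INTER_LINEAR);
        Imgproc.warpPolar(src, dst, src.size(), center, maxRadius,
                Imgproc.INTER_LINEAR + Imgproc.WARP_POLAR_LOG);
    }
}
```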