OpenCV  4.3.0
Open Source Computer Vision
silhouette based 3D object tracking

Functions

void cv::rapid::drawCorrespondencies (InputOutputArray bundle, InputArray srcLocations, InputArray newLocations, InputArray colors=noArray())
 
void cv::rapid::drawSearchLines (InputOutputArray img, InputArray locations, const Scalar &color)
 
void cv::rapid::drawWireframe (InputOutputArray img, InputArray pts2d, InputArray tris, const Scalar &color, int type=LINE_8, bool cullBackface=false)
 
void cv::rapid::extractControlPoints (int num, int len, InputArray pts3d, InputArray rvec, InputArray tvec, InputArray K, const Size &imsize, InputArray tris, OutputArray ctl2d, OutputArray ctl3d)
 
void cv::rapid::extractLineBundle (int len, InputArray ctl2d, InputArray img, OutputArray bundle, OutputArray srcLocations)
 
void cv::rapid::filterCorrespondencies (InputOutputArray pts2d, InputOutputArray pts3d, InputArray mask)
 
void cv::rapid::findCorrespondencies (InputArray bundle, InputArray srcLocations, OutputArray newLocations, OutputArray response=noArray())
 
float cv::rapid::rapid (InputArray img, int num, int len, InputArray pts3d, InputArray tris, InputArray K, InputOutputArray rvec, InputOutputArray tvec)
 

Detailed Description

Implements "RAPID - a video rate object tracker" [95] with the dynamic control point extraction of [55].

Function Documentation

◆ drawCorrespondencies()

void cv::rapid::drawCorrespondencies ( InputOutputArray  bundle,
InputArray  srcLocations,
InputArray  newLocations,
InputArray  colors = noArray() 
)
Python:
bundle=cv.rapid.drawCorrespondencies(bundle, srcLocations, newLocations[, colors])

#include <opencv2/rapid.hpp>

Debug draw markers of matched correspondences onto a line bundle

Parameters
bundle: the line bundle
srcLocations: the corresponding source locations
newLocations: the matched locations
colors: colors for the markers. Defaults to white.

◆ drawSearchLines()

void cv::rapid::drawSearchLines ( InputOutputArray  img,
InputArray  locations,
const Scalar color 
)
Python:
img=cv.rapid.drawSearchLines(img, locations, color)

#include <opencv2/rapid.hpp>

Debug draw search lines onto an image

Parameters
img: the output image
locations: the source locations of a line bundle
color: the line color

◆ drawWireframe()

void cv::rapid::drawWireframe ( InputOutputArray  img,
InputArray  pts2d,
InputArray  tris,
const Scalar color,
int  type = LINE_8,
bool  cullBackface = false 
)
Python:
img=cv.rapid.drawWireframe(img, pts2d, tris, color[, type[, cullBackface]])

#include <opencv2/rapid.hpp>

Draw a wireframe of a triangle mesh

Parameters
img: the output image
pts2d: the 2D points obtained by projectPoints
tris: triangle face connectivity
color: line color
type: line type. See LineTypes.
cullBackface: enable back-face culling based on CCW order

◆ extractControlPoints()

void cv::rapid::extractControlPoints ( int  num,
int  len,
InputArray  pts3d,
InputArray  rvec,
InputArray  tvec,
InputArray  K,
const Size imsize,
InputArray  tris,
OutputArray  ctl2d,
OutputArray  ctl3d 
)
Python:
ctl2d, ctl3d=cv.rapid.extractControlPoints(num, len, pts3d, rvec, tvec, K, imsize, tris[, ctl2d[, ctl3d]])

#include <opencv2/rapid.hpp>

Extract control points from the projected silhouette of a mesh

See [55], Sec. 2.1, Step b.

Parameters
num: number of control points
len: search radius (used to restrict the ROI)
pts3d: the 3D points of the mesh
rvec: rotation between mesh and camera
tvec: translation between mesh and camera
K: camera intrinsic matrix
imsize: size of the video frame
tris: triangle face connectivity
ctl2d: the 2D locations of the control points
ctl3d: matching 3D points of the mesh

◆ extractLineBundle()

void cv::rapid::extractLineBundle ( int  len,
InputArray  ctl2d,
InputArray  img,
OutputArray  bundle,
OutputArray  srcLocations 
)
Python:
bundle, srcLocations=cv.rapid.extractLineBundle(len, ctl2d, img[, bundle[, srcLocations]])

#include <opencv2/rapid.hpp>

Extract the line bundle from an image

Parameters
len: the search radius. The bundle will have 2*len + 1 columns.
ctl2d: the search lines will be centered at these points and orthogonal to the contour defined by them. The bundle will have as many rows.
img: the image to read the pixel intensity values from
bundle: line bundle image with size ctl2d.rows() x (2 * len + 1) and the same type as img
srcLocations: the source pixel locations of bundle in img as CV_16SC2

◆ filterCorrespondencies()

void cv::rapid::filterCorrespondencies ( InputOutputArray  pts2d,
InputOutputArray  pts3d,
InputArray  mask 
)
Python:
pts2d, pts3d=cv.rapid.filterCorrespondencies(pts2d, pts3d, mask)

#include <opencv2/rapid.hpp>

Filter corresponding 2D and 3D points based on a mask

Parameters
pts2d: 2D points
pts3d: 3D points
mask: mask containing non-zero values for the elements to be retained

◆ findCorrespondencies()

void cv::rapid::findCorrespondencies ( InputArray  bundle,
InputArray  srcLocations,
OutputArray  newLocations,
OutputArray  response = noArray() 
)
Python:
newLocations, response=cv.rapid.findCorrespondencies(bundle, srcLocations[, newLocations[, response]])

#include <opencv2/rapid.hpp>

Find corresponding image locations by searching for a maximal Sobel edge along the search line (a single row in the bundle)

Parameters
bundle: the line bundle
srcLocations: the corresponding source image locations
newLocations: image locations with the maximal edge along the search line
response: the Sobel response for the selected point

◆ rapid()

float cv::rapid::rapid ( InputArray  img,
int  num,
int  len,
InputArray  pts3d,
InputArray  tris,
InputArray  K,
InputOutputArray  rvec,
InputOutputArray  tvec 
)
Python:
retval, rvec, tvec=cv.rapid.rapid(img, num, len, pts3d, tris, K, rvec, tvec)

#include <opencv2/rapid.hpp>

High-level function to execute a single RAPID [95] iteration:

  1. extractControlPoints
  2. extractLineBundle
  3. findCorrespondencies
  4. filterCorrespondencies
  5. solvePnPRefineLM
Parameters
img: the video frame
num: number of search lines
len: search line radius
pts3d: the 3D points of the mesh
tris: triangle face connectivity
K: camera matrix
rvec: rotation between mesh and camera. Input values are used as an initial solution.
tvec: translation between mesh and camera. Input values are used as an initial solution.
Returns
ratio of search lines that could be extracted and matched