class TrackDescriptor
Descriptor-based visual tracking.
Here we use descriptor matching to track features from one frame to the next. We track both temporally and across stereo pairs to obtain stereo constraints. Currently we use ORB descriptors, as we have found them to be the fastest to compute. Tracks are then rejected based on a ratio test and RANSAC.
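For reference, the RANSAC rejection mentioned above can be illustrated with a small OpenCV sketch (a generic illustration, not the class's own code; the helper name `ransac_inlier_mask` and the threshold values are our assumptions):

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// Reject outlier correspondences using a RANSAC-estimated fundamental matrix.
// pts0[i] and pts1[i] are assumed to be a matched pair from descriptor matching.
std::vector<uchar> ransac_inlier_mask(const std::vector<cv::Point2f> &pts0,
                                      const std::vector<cv::Point2f> &pts1) {
  std::vector<uchar> inliers;
  if (pts0.size() < 8 || pts0.size() != pts1.size())
    return inliers; // need at least 8 correspondences for the 8-point model
  cv::findFundamentalMat(pts0, pts1, cv::FM_RANSAC,
                         /*ransacReprojThreshold=*/1.0,
                         /*confidence=*/0.999, inliers);
  return inliers; // inliers[i] == 1 means match i survives RANSAC
}
```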
Base classes
- class TrackBase
- Visual feature tracking base class.
Constructors, destructors, conversion operators
- TrackDescriptor(std::unordered_map<size_t, std::shared_ptr<CamBase>> cameras, int numfeats, int numaruco, bool stereo, HistogramMethod histmethod, int fast_threshold, int gridx, int gridy, int minpxdist, double knnratio) explicit
- Public constructor with configuration variables.
Public functions
- void feed_new_camera(const CameraData& message) override
- Process a new image.
Protected functions
- void feed_monocular(const CameraData& message, size_t msg_id)
- Process a new monocular image.
- void feed_stereo(const CameraData& message, size_t msg_id_left, size_t msg_id_right)
- Process new stereo pair of images.
- void perform_detection_monocular(const cv::Mat& img0, const cv::Mat& mask0, std::vector<cv::KeyPoint>& pts0, cv::Mat& desc0, std::vector<size_t>& ids0)
- Detects new features in the current image.
- void perform_detection_stereo(const cv::Mat& img0, const cv::Mat& img1, const cv::Mat& mask0, const cv::Mat& mask1, std::vector<cv::KeyPoint>& pts0, std::vector<cv::KeyPoint>& pts1, cv::Mat& desc0, cv::Mat& desc1, size_t cam_id0, size_t cam_id1, std::vector<size_t>& ids0, std::vector<size_t>& ids1)
- Detects new features in the current stereo pair.
- void robust_match(const std::vector<cv::KeyPoint>& pts0, const std::vector<cv::KeyPoint>& pts1, const cv::Mat& desc0, const cv::Mat& desc1, size_t id0, size_t id1, std::vector<cv::DMatch>& matches)
- Find matches between two keypoint+descriptor sets.
Function documentation
ov_core::TrackDescriptor::TrackDescriptor(std::unordered_map<size_t, std::shared_ptr<CamBase>> cameras,
int numfeats,
int numaruco,
bool stereo,
HistogramMethod histmethod,
int fast_threshold,
int gridx,
int gridy,
int minpxdist,
double knnratio) explicit
Public constructor with configuration variables.
Parameters | |
---|---|
cameras | camera calibration object which has all camera intrinsics in it |
numfeats | number of features we want to track (i.e. track 200 points from frame to frame) |
numaruco | the max id of the aruco tags, so we ensure that we start our non-aruco features above this value |
stereo | if we should do stereo feature tracking (matching left to right) or independent binocular tracking |
histmethod | what type of histogram pre-processing should be done (e.g. histogram equalization) |
fast_threshold | FAST detection threshold |
gridx | size of grid in the x-direction / u-direction |
gridy | size of grid in the y-direction / v-direction |
minpxdist | features need to be at least this many pixels away from each other |
knnratio | matching ratio needed (smaller value forces top two descriptors during match to be more different) |
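For illustration, the constructor might be called as below; the header paths, the concrete camera models, the HistogramMethod value, and all numeric settings are placeholder assumptions rather than recommended values:

```cpp
#include <memory>
#include <unordered_map>

#include "cam/CamBase.h"            // assumed ov_core header locations
#include "track/TrackDescriptor.h"

using namespace ov_core;

std::unordered_map<size_t, std::shared_ptr<CamBase>> cameras;
// ... fill with the project's concrete camera models (intrinsics + distortion)
// cameras[0] = ...;  cameras[1] = ...;

auto tracker = std::make_shared<TrackDescriptor>(
    cameras,
    /*numfeats=*/200,       // track roughly 200 points frame to frame
    /*numaruco=*/1024,      // non-aruco feature ids start above this value
    /*stereo=*/true,        // enforce stereo constraints between the pair
    TrackBase::HistogramMethod::HISTOGRAM, // assumed enum value for histogram eq.
    /*fast_threshold=*/15,
    /*gridx=*/5,
    /*gridy=*/3,
    /*minpxdist=*/10,
    /*knnratio=*/0.70);
```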
void ov_core::TrackDescriptor::feed_new_camera(const CameraData& message) override
Process a new image.
Parameters | |
---|---|
message | Contains our timestamp, images, and camera ids |
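A hedged usage sketch, assuming ov_core::CameraData exposes timestamp, sensor_ids, images, and masks members (the field names and the header path are taken from ov_core's sensor-data utilities and may differ between versions):

```cpp
#include <opencv2/core.hpp>

#include "track/TrackDescriptor.h"
#include "utils/sensor_data.h" // assumed header for ov_core::CameraData

// Feed one stereo frame into an already-constructed tracker.
void feed_stereo_frame(ov_core::TrackDescriptor &tracker, double timestamp,
                       const cv::Mat &img_left, const cv::Mat &img_right) {
  ov_core::CameraData message;
  message.timestamp = timestamp;          // time in seconds
  message.sensor_ids = {0, 1};            // left and right camera ids
  message.images = {img_left, img_right}; // grayscale images
  message.masks = {cv::Mat::zeros(img_left.size(), CV_8UC1),
                   cv::Mat::zeros(img_right.size(), CV_8UC1)}; // zero = nothing masked out
  tracker.feed_new_camera(message);
}
```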
void ov_core::TrackDescriptor::feed_monocular(const CameraData& message,
size_t msg_id) protected
Process a new monocular image.
Parameters | |
---|---|
message | Contains our timestamp, images, and camera ids |
msg_id | the camera index in message data vector |
void ov_core::TrackDescriptor::feed_stereo(const CameraData& message,
size_t msg_id_left,
size_t msg_id_right) protected
Process new stereo pair of images.
Parameters | |
---|---|
message | Contains our timestamp, images, and camera ids |
msg_id_left | first image index in message data vector |
msg_id_right | second image index in message data vector |
void ov_core::TrackDescriptor::perform_detection_monocular(const cv::Mat& img0,
const cv::Mat& mask0,
std::vector<cv::KeyPoint>& pts0,
cv::Mat& desc0,
std::vector<size_t>& ids0) protected
Detects new features in the current image.
Parameters | |
---|---|
img0 | image we will detect features on |
mask0 | mask marking the regions (ROI) where we do not want features |
pts0 | vector of extracted keypoints |
desc0 | vector of the extracted descriptors |
ids0 | vector of all new IDs |
Given a set of images and their currently extracted features, this will try to add new features. We return all extracted descriptors here since we DO NOT need to do stereo tracking left to right. Our vector of IDs will be later overwritten when we match features temporally to the previous frame's features. See robust_match() for the matching.
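The detection step can be sketched conceptually as below: a generic FAST + ORB illustration with a minimum-pixel-distance occupancy check, not the member function itself; the helper name and the grid layout are our assumptions.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

// Detect FAST corners, keep only those far enough from existing features,
// then compute ORB descriptors for the survivors.
void detect_new_features(const cv::Mat &img, const cv::Mat &mask_roi,
                         const std::vector<cv::KeyPoint> &existing,
                         int fast_threshold, int min_px_dist,
                         std::vector<cv::KeyPoint> &new_pts, cv::Mat &new_desc) {
  // Occupancy grid at min_px_dist resolution: mark cells already covered.
  cv::Mat occupied = cv::Mat::zeros(img.rows / min_px_dist + 1,
                                    img.cols / min_px_dist + 1, CV_8UC1);
  for (const auto &kp : existing)
    occupied.at<uchar>((int)kp.pt.y / min_px_dist, (int)kp.pt.x / min_px_dist) = 1;

  // FAST detection with non-maximum suppression.
  std::vector<cv::KeyPoint> candidates;
  cv::FAST(img, candidates, fast_threshold, true);

  for (const auto &kp : candidates) {
    int r = (int)kp.pt.y / min_px_dist, c = (int)kp.pt.x / min_px_dist;
    if (occupied.at<uchar>(r, c))
      continue; // too close to an existing or already-accepted feature
    if (!mask_roi.empty() && mask_roi.at<uchar>((int)kp.pt.y, (int)kp.pt.x) != 0)
      continue; // nonzero mask = region where we do not want features
    occupied.at<uchar>(r, c) = 1;
    new_pts.push_back(kp);
  }

  // ORB descriptors for the accepted keypoints (keypoints without a valid
  // descriptor are dropped by compute()).
  auto orb = cv::ORB::create();
  orb->compute(img, new_pts, new_desc);
}
```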
void ov_core::TrackDescriptor::perform_detection_stereo(const cv::Mat& img0,
const cv::Mat& img1,
const cv::Mat& mask0,
const cv::Mat& mask1,
std::vector<cv::KeyPoint>& pts0,
std::vector<cv::KeyPoint>& pts1,
cv::Mat& desc0,
cv::Mat& desc1,
size_t cam_id0,
size_t cam_id1,
std::vector<size_t>& ids0,
std::vector<size_t>& ids1) protected
Detects new features in the current stereo pair.
Parameters | |
---|---|
img0 | left image we will detect features on |
img1 | right image we will detect features on |
mask0 | left mask marking the regions where we do not want features |
mask1 | right mask marking the regions where we do not want features |
pts0 | left vector of new keypoints |
pts1 | right vector of new keypoints |
desc0 | left vector of extracted descriptors |
desc1 | right vector of extracted descriptors |
cam_id0 | id of the first camera |
cam_id1 | id of the second camera |
ids0 | left vector of all new IDs |
ids1 | right vector of all new IDs |
This does the same logic as perform_detection_monocular(), but additionally matches the detected features between the left and right images so that stereo constraints can be enforced.
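As a rough sketch of the stereo bookkeeping (a hypothetical helper, not the member function): once left and right detections have been matched, only the matched features are kept and each surviving pair shares a single id.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

// Keep only features with a left-right match and assign each pair one shared id.
void keep_stereo_matches(const std::vector<cv::DMatch> &matches_lr,
                         std::vector<cv::KeyPoint> &pts_l, std::vector<cv::KeyPoint> &pts_r,
                         cv::Mat &desc_l, cv::Mat &desc_r,
                         std::vector<size_t> &ids_l, std::vector<size_t> &ids_r,
                         size_t &next_id) {
  std::vector<cv::KeyPoint> good_l, good_r;
  cv::Mat gdesc_l, gdesc_r;
  std::vector<size_t> gids_l, gids_r;
  for (const auto &m : matches_lr) {
    good_l.push_back(pts_l[m.queryIdx]);
    good_r.push_back(pts_r[m.trainIdx]);
    gdesc_l.push_back(desc_l.row(m.queryIdx).clone());
    gdesc_r.push_back(desc_r.row(m.trainIdx).clone());
    gids_l.push_back(next_id); // same id for the left and right observation
    gids_r.push_back(next_id);
    next_id++;
  }
  pts_l = good_l;   pts_r = good_r;
  desc_l = gdesc_l; desc_r = gdesc_r;
  ids_l = gids_l;   ids_r = gids_r;
}
```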
void ov_core::TrackDescriptor::robust_match(const std::vector<cv::KeyPoint>& pts0,
const std::vector<cv::KeyPoint>& pts1,
const cv::Mat& desc0,
const cv::Mat& desc1,
size_t id0,
size_t id1,
std::vector<cv::DMatch>& matches) protected
Find matches between two keypoint+descriptor sets.
Parameters | |
---|---|
pts0 | first vector of keypoints |
pts1 | second vector of keypoints |
desc0 | first vector of descriptors |
desc1 | second vector of descriptors |
id0 | id of the first camera |
id1 | id of the second camera |
matches | vector of matches that we have found |
This performs a "robust match" between the two sets of points (slow, but gives great results). First we do a simple KNN match from image 1 to 2 and from 2 to 1, followed by a ratio check and a symmetry check. The original code is from the "RobustMatcher" in the OpenCV examples and seems to give very good matches. https://github.com/opencv/opencv/blob/master/samples/cpp/tutorial_
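The strategy can be sketched with OpenCV's brute-force Hamming matcher as below (a generic re-implementation of the idea for binary descriptors such as ORB, not the class's exact code; the function name and default ratio are our assumptions):

```cpp
#include <opencv2/features2d.hpp>
#include <vector>

// Symmetric KNN matching with a ratio test, in the spirit of OpenCV's
// "RobustMatcher" example: match 1->2 and 2->1, apply the ratio test to each
// direction, then keep only matches that agree in both directions.
std::vector<cv::DMatch> robust_knn_match(const cv::Mat &desc0, const cv::Mat &desc1,
                                         double knn_ratio = 0.7) {
  cv::BFMatcher matcher(cv::NORM_HAMMING); // Hamming distance for binary (ORB) descriptors
  std::vector<std::vector<cv::DMatch>> knn01, knn10;
  matcher.knnMatch(desc0, desc1, knn01, 2);
  matcher.knnMatch(desc1, desc0, knn10, 2);

  // Ratio test: the best match must be clearly better than the second best.
  auto ratio_filter = [&](const std::vector<std::vector<cv::DMatch>> &knn) {
    std::vector<cv::DMatch> good;
    for (const auto &m : knn)
      if (m.size() >= 2 && m[0].distance < knn_ratio * m[1].distance)
        good.push_back(m[0]);
    return good;
  };
  std::vector<cv::DMatch> good01 = ratio_filter(knn01);
  std::vector<cv::DMatch> good10 = ratio_filter(knn10);

  // Symmetry test: keep (i, j) only if 1->2 maps i->j and 2->1 maps j->i.
  std::vector<cv::DMatch> matches;
  for (const auto &m01 : good01)
    for (const auto &m10 : good10)
      if (m01.queryIdx == m10.trainIdx && m01.trainIdx == m10.queryIdx) {
        matches.push_back(m01);
        break;
      }
  return matches;
}
```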