/*
 * ***** BEGIN GPL LICENSE BLOCK *****
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version 2
 * of the License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
 *
 * The Original Code is Copyright (C) 2011 Blender Foundation.
 * All rights reserved.
 *
 * Contributor(s): Blender Foundation,
 *                 Sergey Sharybin
 *
 * ***** END GPL LICENSE BLOCK *****
 */
#ifndef LIBMV_C_API_H
#define LIBMV_C_API_H
#ifdef __cplusplus
extern "C" {
#endif
struct libmv_Tracks;
struct libmv_Reconstruction;
struct libmv_Features;
struct libmv_CameraIntrinsics;
/* Logging */
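
/* libmv_initLogging() is expected to be called once with the program name
 * (argv[0]) before the other logging calls; libmv_startDebugLogging() and
 * libmv_setLoggingVerbosity() control how much debug output is produced. */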
void libmv_initLogging(const char *argv0);
void libmv_startDebugLogging(void);
void libmv_setLoggingVerbosity(int verbosity);
/* Planar tracker */
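
/* The planar tracker tracks a quad-shaped pattern region from image1 to
 * image2 using a Ceres-based minimizer. Per the original integration notes:
 * motion models range from pure translation up to a full perspective warp,
 * use_brute enables a brute-force translation-only initialization,
 * use_normalization makes tracking invariant to global illumination changes,
 * minimum_correlation fails the track when the tracked patch is not linearly
 * related to the template, and image1_mask optionally masks the search area. */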
typedef struct libmv_TrackRegionOptions {
  int motion_model;
  int num_iterations;
  int use_brute;
  int use_normalization;
  double minimum_correlation;
  double sigma;
  float *image1_mask;
} libmv_TrackRegionOptions;
typedef struct libmv_TrackRegionResult {
  int termination;
  const char *termination_reason;
  double correlation;
} libmv_TrackRegionResult;
int libmv_trackRegion(const libmv_TrackRegionOptions *options,
                      const float *image1, int image1_width, int image1_height,
                      const float *image2, int image2_width, int image2_height,
                      const double *x1, const double *y1,
                      libmv_TrackRegionResult *result,
                      double *x2, double *y2);
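
/*
 * Minimal tracking sketch (illustrative only -- the option values and the
 * five-point convention (four pattern corners plus the pattern centre) are
 * assumptions, not documented guarantees; image1/image2 are single-channel
 * float buffers):
 *
 *   libmv_TrackRegionOptions options = {0};
 *   libmv_TrackRegionResult result;
 *   double x1[5], y1[5];   // pattern corners + centre in image1
 *   double x2[5], y2[5];   // receives the tracked positions in image2
 *
 *   options.motion_model = 0;            // translation-only model (assumed mapping)
 *   options.num_iterations = 50;
 *   options.use_brute = 1;
 *   options.use_normalization = 0;
 *   options.minimum_correlation = 0.75;
 *   options.sigma = 0.9;
 *   options.image1_mask = NULL;          // no mask over the search area
 *
 *   if (libmv_trackRegion(&options, image1, width, height,
 *                         image2, width, height,
 *                         x1, y1, &result, x2, y2)) {
 *     // x2/y2 now hold the warped positions; result.correlation reports
 *     // how well the tracked patch matches the template.
 *   }
 */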
void libmv_samplePlanarPatch(const float *image,
                             int width, int height,
                             int channels,
                             const double *xs, const double *ys,
                             int num_samples_x, int num_samples_y,
                             const float *mask,
                             float *patch,
                             double *warped_position_x, double *warped_position_y);
void libmv_samplePlanarPatchByte(const unsigned char *image,
                                 int width, int height,
                                 int channels,
                                 const double *xs, const double *ys,
                                 int num_samples_x, int num_samples_y,
                                 const float *mask,
                                 unsigned char *patch,
                                 double *warped_position_x, double *warped_position_y);
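
/* The two sampling functions above resample the image region outlined by the
 * xs/ys coordinates into a regular num_samples_x by num_samples_y patch
 * (float and byte variants). When non-NULL, mask weights the samples;
 * warped_position_x/y are intended to receive the warped pattern position. */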
/* Tracks */
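
/* A libmv_Tracks container stores 2D marker positions keyed by (image, track)
 * index; weight is intended to scale how strongly a marker influences the
 * reconstruction. See the reconstruction sketch further below for usage. */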
struct libmv_Tracks *libmv_tracksNew(void);
void libmv_tracksDestroy(struct libmv_Tracks *libmv_tracks);
void libmv_tracksInsert(struct libmv_Tracks *libmv_tracks, int image, int track, double x, double y, double weight);
/* Reconstruction */
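
/* refine_intrinsics below is a bitfield of the LIBMV_REFINE_* flags. When
 * select_keyframes is non-zero, keyframes are chosen automatically (using the
 * correspondence-ratio and GRIC criteria described in the keyframe-selection
 * notes); otherwise keyframe1/keyframe2 specify the keyframes to solve from. */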
#define LIBMV_REFINE_FOCAL_LENGTH (1 << 0)
#define LIBMV_REFINE_PRINCIPAL_POINT (1 << 1)
#define LIBMV_REFINE_RADIAL_DISTORTION_K1 (1 << 2)
#define LIBMV_REFINE_RADIAL_DISTORTION_K2 (1 << 4)
enum {
  LIBMV_DISTORTION_MODEL_POLYNOMIAL = 0,
  LIBMV_DISTORTION_MODEL_DIVISION = 1,
};

typedef struct libmv_CameraIntrinsicsOptions {
  /* Common settings of all distortion models. */
  int distortion_model;
  int image_width, image_height;
  double focal_length;
  double principal_point_x, principal_point_y;

  /* Radial distortion model. */
  double polynomial_k1, polynomial_k2, polynomial_k3;
  double polynomial_p1, polynomial_p2;

  /* Division distortion model. */
  double division_k1, division_k2;
} libmv_CameraIntrinsicsOptions;
typedef struct libmv_ReconstructionOptions {
  int select_keyframes;
  int keyframe1, keyframe2;
  int refine_intrinsics;
} libmv_ReconstructionOptions;
typedef void (*reconstruct_progress_update_cb) (void *customdata, double progress, const char *message);
struct libmv_Reconstruction *libmv_solveReconstruction(const struct libmv_Tracks *libmv_tracks,
                                                       const libmv_CameraIntrinsicsOptions *libmv_camera_intrinsics_options,
                                                       libmv_ReconstructionOptions *libmv_reconstruction_options,
                                                       reconstruct_progress_update_cb progress_update_callback,
                                                       void *callback_customdata);
struct libmv_Reconstruction *libmv_solveModal(const struct libmv_Tracks *libmv_tracks,
                                              const libmv_CameraIntrinsicsOptions *libmv_camera_intrinsics_options,
                                              const libmv_ReconstructionOptions *libmv_reconstruction_options,
                                              reconstruct_progress_update_cb progress_update_callback,
                                              void *callback_customdata);
void libmv_reconstructionDestroy(struct libmv_Reconstruction *libmv_reconstruction);
int libmv_reprojectionPointForTrack(const struct libmv_Reconstruction *libmv_reconstruction, int track, double pos[3]);
double libmv_reprojectionErrorForTrack(const struct libmv_Reconstruction *libmv_reconstruction, int track);
double libmv_reprojectionErrorForImage(const struct libmv_Reconstruction *libmv_reconstruction, int image);
int libmv_reprojectionCameraForImage(const struct libmv_Reconstruction *libmv_reconstruction,
                                     int image, double mat[4][4]);
double libmv_reprojectionError(const struct libmv_Reconstruction *libmv_reconstruction);
struct libmv_CameraIntrinsics *libmv_reconstructionExtractIntrinsics(struct libmv_Reconstruction *libmv_Reconstruction);
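
/*
 * Reconstruction sketch (illustrative only -- the image size, focal length and
 * option values are placeholders, and a real progress callback matching
 * reconstruct_progress_update_cb is assumed):
 *
 *   struct libmv_Tracks *tracks = libmv_tracksNew();
 *   // One call per marker: frame number, track index, pixel position, weight.
 *   libmv_tracksInsert(tracks, frame, track_index, x, y, 1.0);
 *
 *   libmv_CameraIntrinsicsOptions camera = {0};
 *   camera.distortion_model = LIBMV_DISTORTION_MODEL_POLYNOMIAL;
 *   camera.image_width = 1920;
 *   camera.image_height = 1080;
 *   camera.focal_length = 1920.0;
 *   camera.principal_point_x = 960.0;
 *   camera.principal_point_y = 540.0;
 *
 *   libmv_ReconstructionOptions options = {0};
 *   options.select_keyframes = 0;
 *   options.keyframe1 = 1;
 *   options.keyframe2 = 30;
 *   options.refine_intrinsics = LIBMV_REFINE_FOCAL_LENGTH |
 *                               LIBMV_REFINE_RADIAL_DISTORTION_K1;
 *
 *   struct libmv_Reconstruction *reconstruction =
 *       libmv_solveReconstruction(tracks, &camera, &options,
 *                                 progress_callback, callback_customdata);
 *
 *   double error = libmv_reprojectionError(reconstruction);
 *
 *   libmv_reconstructionDestroy(reconstruction);
 *   libmv_tracksDestroy(tracks);
 */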
/* Feature detector */
enum {
  LIBMV_DETECTOR_FAST,
  LIBMV_DETECTOR_MORAVEC,
  LIBMV_DETECTOR_HARRIS,
};

typedef struct libmv_DetectOptions {
  int detector;                    /* One of the LIBMV_DETECTOR_* values above. */
  int margin;                      /* Margin, in pixels, around the image border to ignore. */
  int min_distance;                /* Minimum distance between detected features. */
  int fast_min_trackness;          /* FAST detector threshold. */
  int moravec_max_count;           /* Moravec detector: maximum number of features. */
  unsigned char *moravec_pattern;  /* Moravec detector pattern. */
  double harris_threshold;         /* Harris detector threshold. */
} libmv_DetectOptions;
struct libmv_Features *libmv_detectFeaturesByte(const unsigned char *image_buffer,
                                                int width, int height, int channels,
                                                libmv_DetectOptions *options);

struct libmv_Features *libmv_detectFeaturesFloat(const float *image_buffer,
                                                 int width, int height, int channels,
                                                 libmv_DetectOptions *options);
void libmv_featuresDestroy(struct libmv_Features *libmv_features);
int libmv_countFeatures(const struct libmv_Features *libmv_features);
void libmv_getFeature(const struct libmv_Features *libmv_features, int number, double *x, double *y, double *score,
                      double *size);
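
/*
 * Detection sketch (illustrative only -- the option values and the 4-channel
 * byte buffer are assumptions):
 *
 *   libmv_DetectOptions options = {0};
 *   options.detector = LIBMV_DETECTOR_HARRIS;
 *   options.margin = 16;
 *   options.min_distance = 120;
 *   options.harris_threshold = 1e-6;
 *
 *   struct libmv_Features *features =
 *       libmv_detectFeaturesByte(pixels, width, height, 4, &options);
 *
 *   int count = libmv_countFeatures(features);
 *   for (int i = 0; i < count; i++) {
 *     double x, y, score, size;
 *     libmv_getFeature(features, i, &x, &y, &score, &size);
 *     // Use the feature position, score and size here.
 *   }
 *   libmv_featuresDestroy(features);
 */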
/* Camera intrinsics */
struct libmv_CameraIntrinsics *libmv_cameraIntrinsicsNew(
    const libmv_CameraIntrinsicsOptions *libmv_camera_intrinsics_options);

struct libmv_CameraIntrinsics *libmv_cameraIntrinsicsCopy(const struct libmv_CameraIntrinsics *libmv_intrinsics);

void libmv_cameraIntrinsicsDestroy(struct libmv_CameraIntrinsics *libmv_intrinsics);

void libmv_cameraIntrinsicsUpdate(const libmv_CameraIntrinsicsOptions *libmv_camera_intrinsics_options,
                                  struct libmv_CameraIntrinsics *libmv_intrinsics);

void libmv_cameraIntrinsicsSetThreads(struct libmv_CameraIntrinsics *libmv_intrinsics, int threads);

void libmv_cameraIntrinsicsExtractOptions(
    const struct libmv_CameraIntrinsics *libmv_intrinsics,
    struct libmv_CameraIntrinsicsOptions *camera_intrinsics_options);

void libmv_cameraIntrinsicsUndistortByte(const struct libmv_CameraIntrinsics *libmv_intrinsics,
                                         unsigned char *src, unsigned char *dst, int width, int height,
                                         float overscan, int channels);

void libmv_cameraIntrinsicsUndistortFloat(const struct libmv_CameraIntrinsics *libmv_intrinsics,
                                          float *src, float *dst, int width, int height,
                                          float overscan, int channels);

void libmv_cameraIntrinsicsDistortByte(const struct libmv_CameraIntrinsics *libmv_intrinsics,
                                       unsigned char *src, unsigned char *dst, int width, int height,
                                       float overscan, int channels);

void libmv_cameraIntrinsicsDistortFloat(const struct libmv_CameraIntrinsics *libmv_intrinsics,
                                        float *src, float *dst, int width, int height,
                                        float overscan, int channels);

void libmv_cameraIntrinsicsApply(const libmv_CameraIntrinsicsOptions *libmv_camera_intrinsics_options,
                                 double x, double y, double *x1, double *y1);

void libmv_cameraIntrinsicsInvert(const libmv_CameraIntrinsicsOptions *libmv_camera_intrinsics_options,
                                  double x, double y, double *x1, double *y1);
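
/*
 * Undistortion sketch (illustrative only -- the thread count, zero overscan
 * and 4-channel byte buffers are assumptions):
 *
 *   struct libmv_CameraIntrinsics *intrinsics =
 *       libmv_cameraIntrinsicsNew(&camera_intrinsics_options);
 *   libmv_cameraIntrinsicsSetThreads(intrinsics, 4);
 *   libmv_cameraIntrinsicsUndistortByte(intrinsics, src, dst,
 *                                       width, height, 0.0f, 4);
 *   libmv_cameraIntrinsicsDestroy(intrinsics);
 */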
void libmv_homography2DFromCorrespondencesEuc(double (*x1)[2], double (*x2)[2], int num_points, double H[3][3]);
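
/* Sketch: estimate the 2D homography H relating the x1 and x2 point
 * correspondences (the exact mapping direction is an assumption based on the
 * argument order):
 *
 *   double x1[4][2] = {{0.0, 0.0}, {1.0, 0.0}, {1.0, 1.0}, {0.0, 1.0}};
 *   double x2[4][2] = {{0.1, 0.0}, {1.2, 0.1}, {1.1, 1.0}, {0.0, 0.9}};
 *   double H[3][3];
 *   libmv_homography2DFromCorrespondencesEuc(x1, x2, 4, H);
 */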
#ifdef __cplusplus
}
#endif
#endif // LIBMV_C_API_H