OpenGV
A library for solving calibrated central and non-central geometric vision problems
How to use

This page gives an introduction to the usage of OpenGV, including a description of the interface and explicit examples. More information can be found in [17]. However, in order to ensure clear communication and a full understanding of the library's functionality and documentation, we first need to define the meaning of a couple of terms in the present context.

Vocabulary

  • Bearing vector: A bearing vector is defined to be a 3-vector of unit norm that points from a camera reference frame towards a spatial 3D point. It has 2 degrees of freedom, namely the azimuth and elevation in the camera reference frame. Because it has only two degrees of freedom, we frequently refer to it as 2D information. It is normally expressed in a camera reference frame (for a concrete construction from pixel coordinates, see the sketch after this list).
  • Landmark: A landmark describes a 3D spatial point (usually expressed in a fixed frame called world reference frame).
  • Camera: OpenGV assumes the calibrated case, in which landmark measurements are always given in the form of bearing vectors in a camera frame. A camera therefore denotes a camera reference frame with a set of bearing vectors, all pointing from the origin to landmarks. The following figure shows a camera c with bearing vectors (in red). The bearing vectors all lie on the unit sphere centered at the camera.

    central.png

  • Viewpoint: You will notice that the documentation of the code very frequently talks about viewpoints instead of cameras. One of the advantages of OpenGV is that it can transparently handle both the central and the non-central case. The viewpoint is a generalization of a camera, and can contain an arbitrary number of cameras, each one having its own landmark measurements (e.g. bearing vectors). A practical example of a viewpoint would be the set of images and related measurements captured by a fully calibrated, rigid multi-camera rig with synchronized cameras, which therefore still represents a single (multi-)snapshot (i.e. viewpoint). Each camera has its own transformation to the viewpoint frame. In the central case the viewpoint simply contains a single camera with an identity transformation. The most general case, the generalized camera, can also be described by the viewpoint: each bearing vector would then have its own camera and related transformation, so a generalized camera is represented by an exhaustive multi-camera system. The following image shows a viewpoint vp (in blue) with three cameras c, c', and c'', each one containing its own bearing-vector measurements.

    noncentral.png

  • Pose: By a pose, we understand here the position and orientation of a viewpoint, either with respect to a fixed spatial reference frame called the "world" reference frame, or with respect to another viewpoint.
  • Absolute Pose: By absolute pose, we understand the pose of a viewpoint in the world reference frame.
  • Relative Pose: By relative pose, we understand the pose of a viewpoint with respect to another viewpoint.
  • Correspondence: By a correspondence, we understand a pair of bearing-vectors pointing at the same landmark from different viewpoints (2D-2D correspondence), a bearing vector and a world-point it is pointing at (2D-3D correspondence), or a pair of expressions of the same landmark in different frames (3D-3D correspondence).
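
As a concrete illustration of the bearing-vector concept, here is a minimal sketch showing how a pixel measurement could be turned into a unit bearing vector for a simple pinhole camera. This is not part of OpenGV's interface; the intrinsics fx, fy, cx, cy and the pixel coordinates u, v are hypothetical inputs, and OpenGV itself expects the already-normalized vectors:

#include <Eigen/Core>

// sketch (not OpenGV interface): unproject a pixel (u,v) of a pinhole
// camera with assumed intrinsics fx, fy, cx, cy to a ray ...
Eigen::Vector3d toBearingVector(
    double u, double v, double fx, double fy, double cx, double cy )
{
  Eigen::Vector3d ray( (u - cx) / fx, (v - cy) / fy, 1.0 );
  // ... and normalize it to unit norm (2 remaining degrees of freedom)
  return ray.normalized();
}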

Organization of the library

The library structure is best understood by looking at the namespace or directory hierarchy (as a matter of fact, the two are identical):

  • opengv: contains generic things such as the types used throughout the library.
  • opengv/math: contains a bunch of math functions that are used in different algorithms, mainly for root-finding and rotation-related stuff.
  • opengv/absolute_pose: contains the absolute-pose methods. methods.hpp is the main-header that contains the method-declarations. You can also find a bunch of adapters here for interfacing with the algorithms (explained in the next section). The sub-folder modules contains declarations of internal methods.
  • opengv/relative_pose: contains the relative-pose methods. methods.hpp is the main-header that contains the method-declarations. You can also find a bunch of adapters here for interfacing with the algorithms (explained in the next section). The sub-folder modules contains declarations of internal methods.
  • opengv/point_cloud: contains the point-cloud alignment methods. Again, methods.hpp contains the declarations, and the folder contains adapters for interfacing (explained below).
  • opengv/triangulation: contains the triangulation methods.
  • opengv/sac: contains base-classes for sample-consensus methods and problems. So far, only the Ransac algorithm is implemented.
  • opengv/sac_problems: contains sample-consensus problems derived from the base-class. Implements sample-consensus problems for point-cloud alignment and central as well as non-central absolute and relative-pose estimation.
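
Since the namespace hierarchy coincides with the directory hierarchy, header paths translate directly into namespaces. A few representative includes (paths as found in the library's include-folder) illustrate this:

#include <opengv/absolute_pose/methods.hpp>        // opengv::absolute_pose::*
#include <opengv/absolute_pose/CentralAbsoluteAdapter.hpp>
#include <opengv/sac/Ransac.hpp>                   // opengv::sac::Ransac
#include <opengv/sac_problems/absolute_pose/AbsolutePoseSacProblem.hpp>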

Interface

You will quickly notice that all methods in OpenGV use a variable called adapter as a function-call parameter. OpenGV is designed based on the adapter pattern. "Adapters" in OpenGV are used as "visitors" to all geometric vision methods. They contain the landmarks, bearing-vectors, poses, correspondences etc. used as input to the different methods (or references to them), and allow those elements to be accessed through a unified interface. Adapters are derived from a base-class that defines the unified interface, and they have to implement the related functions for accessing bearing-vectors, world-points, camera-transformations, viewpoint-poses, etc. There are three adapter-base-classes:

  • AbsoluteAdapterBase: Base-class for adapters holding 2D-3D correspondences for absolute-pose methods.
  • RelativeAdapterBase: Base-class for adapters holding 2D-2D correspondences for relative-pose methods.
  • PointCloudAdapterBase: Base-class for adapters holding 3D-3D correspondences for point-cloud alignment methods.

The derived adapters have the task of transforming the data from the user's format to OpenGV types. This gives the library great flexibility. Users only have to implement a couple of adapters for the specific data-format they are using, and can then access the full functionality of the library. OpenGV currently contains adapters that simply hold references to OpenGV types (no transformation needed) plus adapters for mexArrays used within the Matlab-interface. Further adapters are planned, such as an adapter for OpenCV keypoint and match-types including a camera model. The user would then be able to choose whether normalization of keypoints is done "on demand" or "once and for all" at the beginning, the latter being more efficient in sample-consensus problems.
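
To give an idea of what such a conversion amounts to, here is a minimal sketch that fills the OpenGV container types from hypothetical user arrays (rayData, pointData, and n stand in for the user's own format; bearingVectors_t, points_t, and the reference-holding CentralAbsoluteAdapter are OpenGV types):

#include <opengv/types.hpp>
#include <opengv/absolute_pose/CentralAbsoluteAdapter.hpp>

// hypothetical user data: n rays and n world points as flat double arrays
opengv::bearingVectors_t bearingVectors;
opengv::points_t points;
for( size_t i = 0; i < n; i++ )
{
  bearingVectors.push_back( opengv::bearingVector_t(
      rayData[3*i+0], rayData[3*i+1], rayData[3*i+2] ).normalized() );
  points.push_back( opengv::point_t(
      pointData[3*i+0], pointData[3*i+1], pointData[3*i+2] ) );
}
// the adapter holds references, so the containers must outlive it
opengv::absolute_pose::CentralAbsoluteAdapter adapter( bearingVectors, points );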

Note that adapters containing the tag "Central" in their name are adapters for a single camera (i.e. view-points with only one camera having identity transformation). Adapters having the tag "Noncentral" in their name are meant for view-points with multiple cameras (e.g., multi-camera systems, generalized cameras).

Please check out the doxygen documentation of the above base-classes; it contains important information on the functions that need to be overloaded for a proper implementation of an adapter.

Conventions, problem types, and examples

As already mentioned, the entire library assumes calibrated cameras/viewpoints, and it operates with 3D unit bearing vectors expressed in the camera frame. Calibrated means that the configuration of the multi-camera system (i.e. the inter-camera transformations) is known. The following introduces the different problems that can be solved with the library, and outlines the conventions for transformations (translations and rotations) in their context. Note that all problems have solutions for both the minimal and non-minimal cases, and may also be solved as sample-consensus or non-linear optimization problems. The code samples are mostly taken from the test files, which you can compile along with the library by setting BUILD_TESTS in CMakeLists.txt to ON.
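
To make these conventions tangible, the following minimal sketch (with made-up values) shows how the central absolute-pose convention introduced below relates a world point to the corresponding measurement, with $ \mathbf{t}_{c} $ the camera position seen from the world frame and $ \mathbf{R}_{c} $ the rotation from the camera to the world frame:

#include <opengv/types.hpp>

// hypothetical pose and landmark (illustration only)
opengv::rotation_t R_c = opengv::rotation_t::Identity(); // camera-to-world rotation
opengv::translation_t t_c( 1.0, 0.0, 0.0 );              // camera position in world
opengv::point_t p_world( 2.0, 0.0, 1.0 );
// convention: p_world = R_c * p_camera + t_c, hence
opengv::point_t p_camera = R_c.transpose() * ( p_world - t_c );
// the measurement OpenGV operates on is the unit bearing vector
opengv::bearingVector_t f = p_camera.normalized();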

  • Central absolute pose: The central absolute pose problem consists of finding the pose of a camera (i.e. a viewpoint with a single camera) given a number of 2D-3D correspondences between bearing vectors in the camera frame and points in the world frame. The sought transformation is given by the position $ \mathbf{t}_{c} $ of the camera seen from the world frame and the rotation $ \mathbf{R}_{c} $ from the camera to the world frame. This is what the algorithms return (or part of it), and what the adapters can hold as known or prior information.

    absolute_central.png

    The minimal variants are p2p (a solution for position if rotation is known), p3p_kneip [1], p3p_gao [2], and UPnP [19]. The non-minimal variants are epnp [4] and UPnP [19]. The non-linear optimization variant is called optimize_nonlinear. Here's how to use them:

    // create the central adapter
    absolute_pose::CentralAbsoluteAdapter adapter(
    bearingVectors, points );
    // Kneip's P2P (uses rotation from adapter)
    adapter.setR( knownRotation );
    translation_t p2p_translation =
    absolute_pose::p2p( adapter, indices1 );
    // Kneip's P3P
    transformations_t p3p_kneip_transformations =
    absolute_pose::p3p_kneip( adapter, indices2 );
    // Gao's P3P
    transformations_t p3p_gao_transformations =
    absolute_pose::p3p_gao( adapter, indices2 );
    // Lepetit's Epnp (using all correspondences)
    transformation_t epnp_transformation =
    absolute_pose::epnp( adapter );
    // UPnP (using all correspondences)
    transformations_t upnp_transformations =
    absolute_pose::upnp( adapter );
    // UPnP (using three correspondences)
    transformations_t upnp_transformations_3 =
    absolute_pose::upnp( adapter, indices2 );
    // non-linear optimization (using all correspondences)
    adapter.sett(initial_translation);
    adapter.setR(initial_rotation);
    transformation_t nonlinear_transformation =
    absolute_pose::optimize_nonlinear( adapter );

    p3p_kneip, p3p_gao, and epnp can also be used within a sample consensus context. The following shows how to do it:

    // create the central adapter
    absolute_pose::CentralAbsoluteAdapter adapter(
    bearingVectors, points );
    // create a Ransac object
    sac::Ransac<sac_problems::absolute_pose::AbsolutePoseSacProblem> ransac;
    // create an AbsolutePoseSacProblem
    // (algorithm is selectable: KNEIP, GAO, or EPNP)
    std::shared_ptr<sac_problems::absolute_pose::AbsolutePoseSacProblem>
    absposeproblem_ptr(
    new sac_problems::absolute_pose::AbsolutePoseSacProblem(
    adapter, sac_problems::absolute_pose::AbsolutePoseSacProblem::KNEIP ) );
    // run ransac
    ransac.sac_model_ = absposeproblem_ptr;
    ransac.threshold_ = threshold;
    ransac.max_iterations_ = maxIterations;
    ransac.computeModel();
    // get the result
    transformation_t best_transformation =
    ransac.model_coefficients_;

    These examples are taken from test_absolute_pose.cpp and test_absolute_pose_sac.cpp.

  • Non-central absolute pose: The non-central absolute pose problem consists of finding the pose of a viewpoint given a number of 2D-3D correspondences between bearing vectors in multiple camera frames and points in the world frame. The sought transformation is given by the position $ \mathbf{t}_{vp} $ of the viewpoint seen from the world frame and the rotation $ \mathbf{R}_{vp} $ from the viewpoint to the world frame. This is what the algorithms return, and what the adapters can hold as known or prior information.

    absolute_noncentral.png

    The minimal variant is gp3p, and the non-minimal variant is gpnp [3]. UPnP can be used for both the minimal and the non-minimal case [19]. The non-linear optimization variant is still optimize_nonlinear (it handles both cases). Here's how to use them:

    // create the non-central adapter
    absolute_pose::NoncentralAbsoluteAdapter adapter(
    bearingVectors,
    camCorrespondences,
    points,
    camOffsets,
    camRotations );
    // Kneip's GP3P
    transformations_t gp3p_transformations =
    absolute_pose::gp3p( adapter, indices1 );
    // Kneip's GPNP (using all correspondences)
    transformation_t gpnp_transformation =
    absolute_pose::gpnp( adapter );
    // UPnP (using all correspondences)
    transformations_t upnp_transformations =
    absolute_pose::upnp( adapter );
    // UPnP (using only 3 correspondences)
    transformations_t upnp_transformations_3 =
    absolute_pose::upnp( adapter, indices1 );
    // non-linear optimization
    adapter.sett(initial_translation);
    adapter.setR(initial_rotation);
    transformation_t nonlinear_transformation =
    absolute_pose::optimize_nonlinear( adapter );

    gp3p can also be used within a sample-consensus context. We still use an AbsolutePoseSacProblem, which is usable for both the central and the non-central case; we simply have to set the algorithm to GP3P:

    // create the non-central adapter
    absolute_pose::NoncentralAbsoluteAdapter adapter(
    bearingVectors,
    camCorrespondences,
    points,
    camOffsets,
    camRotations );
    // create a RANSAC object
    sac::Ransac<sac_problems::absolute_pose::AbsolutePoseSacProblem> ransac;
    // create an absolute-pose sample-consensus problem (using GP3P as the algorithm)
    std::shared_ptr<sac_problems::absolute_pose::AbsolutePoseSacProblem>
    absposeproblem_ptr(
    new sac_problems::absolute_pose::AbsolutePoseSacProblem(
    adapter, sac_problems::absolute_pose::AbsolutePoseSacProblem::GP3P ) );
    // run ransac
    ransac.sac_model_ = absposeproblem_ptr;
    ransac.threshold_ = threshold;
    ransac.max_iterations_ = maxIterations;
    ransac.computeModel();
    // get the result
    transformation_t best_transformation =
    ransac.model_coefficients_;

    These examples are taken from test_noncentral_absolute_pose.cpp and test_noncentral_absolute_pose_sac.cpp.

  • Central relative pose: The central relative pose problem consists of finding the pose of a camera (i.e. a viewpoint with a single camera) with respect to a different camera, given a number of 2D-2D correspondences between bearing vectors in the camera frames. The sought transformation is given by the position $ \mathbf{t}_{c'}^{c} $ of the second camera seen from the first one and the rotation $ \mathbf{R}_{c'}^{c} $ from the second camera back to the first camera frame. This is what the algorithms return (or part of it), and what the adapters can hold as known or prior information.

    relative_central.png

    There are many central relative-pose algorithms in the library. The minimal variants are twopt (in case the rotation is known), twopt_rotationOnly (in case there is only rotational change, and using only two points), fivept_stewenius [5], fivept_nister [6], and fivept_kneip [7]. The library also contains non-minimal variants, namely rotationOnly (in case of pure-rotation change), sevenpt [8], eightpt [9,10], and the new eigensolver [11] methods. All of them except twopt, twopt_rotationOnly, and fivept_kneip can be used for an arbitrary number of correspondences (of course at least the minimal number). The non-linear optimization variant is again called optimize_nonlinear. Here's how to use most of them (we assume a regular situation here, and thus omit the rotationOnly algorithms):

    // create the central relative adapter
    relative_pose::CentralRelativeAdapter adapter(
    bearingVectors1, bearingVectors2 );
    // Relative translation with only two point-correspondences
    // (no or known rotation)
    adapter.setR(knownRotation);
    translation_t twopt_translation =
    relative_pose::twopt( adapter, true, indices1 );
    // Stewenius' 5-point algorithm
    complexEssentials_t fivept_stewenius_essentials =
    relative_pose::fivept_stewenius( adapter, indices2 );
    // Nister's 5-point algorithm
    essentials_t fivept_nister_essentials =
    relative_pose::fivept_nister( adapter, indices2 );
    // Kneip's 5-point algorithm
    rotations_t fivept_kneip_rotations =
    relative_pose::fivept_kneip( adapter, indices2 );
    // the 7-point algorithm
    essentials_t sevenpt_essentials =
    relative_pose::sevenpt( adapter, indices3 );
    // the 8-point algorithm
    essential_t eightpt_essential =
    relative_pose::eightpt( adapter, indices4 );
    // Kneip's eigensolver
    adapter.setR(initial_rotation);
    rotation_t eigensolver_rotation =
    relative_pose::eigensolver( adapter, indices5 );
    // non-linear optimization (using all available correspondences)
    adapter.sett(initial_translation);
    adapter.setR(initial_rotation);
    transformation_t nonlinear_transformation =
    relative_pose::optimize_nonlinear( adapter );

    fivept_nister, fivept_stewenius, sevenpt, and eightpt can also be used within a random sample-consensus scheme. It is done as follows:

    // create the central relative adapter
    relative_pose::CentralRelativeAdapter adapter(
    bearingVectors1, bearingVectors2 );
    // create a RANSAC object
    sac::Ransac<sac_problems::relative_pose::CentralRelativePoseSacProblem> ransac;
    // create a CentralRelativePoseSacProblem
    // (set algorithm to STEWENIUS, NISTER, SEVENPT, or EIGHTPT)
    std::shared_ptr<sac_problems::relative_pose::CentralRelativePoseSacProblem>
    relposeproblem_ptr(
    new sac_problems::relative_pose::CentralRelativePoseSacProblem(
    adapter,
    sac_problems::relative_pose::CentralRelativePoseSacProblem::NISTER ) );
    // run ransac
    ransac.sac_model_ = relposeproblem_ptr;
    ransac.threshold_ = threshold;
    ransac.max_iterations_ = maxIterations;
    ransac.computeModel();
    // get the result
    transformation_t best_transformation =
    ransac.model_coefficients_;

    These examples are taken from test_relative_pose.cpp and test_relative_pose_sac.cpp. There are also sample consensus problems for the case of pure-rotation, known rotation, or the eigensolver method. Feel free to explore opengv/sac_problems/relative_pose.

  • Non-central relative pose: The non-central relative pose problem consists of finding the pose of a viewpoint with respect to a different viewpoint, given a number of 2D-2D correspondences between bearing vectors in multiple camera frames. The sought transformation is given by the position $ \mathbf{t}_{vp'}^{vp} $ of the second viewpoint seen from the first one and the rotation $ \mathbf{R}_{vp'}^{vp} $ from the second viewpoint back to the first viewpoint frame. This is what the algorithms return (or part of it), and what the adapters can hold as known or prior information.

    relative_noncentral.png

    There are three non-central relative-pose methods in the library: the 17-point algorithm by Li [12], the 6-point method by Stewenius [16], and the new generalized eigensolver [18]. The 17-point algorithm as well as the generalized eigensolver can be used with an arbitrary number of points. The 6-point algorithm can only be used with exactly 6 points. optimize_nonlinear is again able to handle the non-central case as well. Here's how to use these methods:

    // create the non-central relative adapter
    relative_pose::NoncentralRelativeAdapter adapter(
    bearingVectors1,
    bearingVectors2,
    camCorrespondences1,
    camCorrespondences2,
    camOffsets,
    camRotations );
    // 6-point algorithm
    rotations_t sixpt_rotations =
    relative_pose::sixpt( adapter, indices );
    // generalized eigensolver (over all points)
    geOutput_t output;
    relative_pose::ge(adapter,output);
    translation_t ge_translation = output.translation.block<3,1>(0,0);
    rotation_t ge_rotation = output.rotation;
    // 17-point algorithm
    transformation_t seventeenpt_transformation =
    relative_pose::seventeenpt( adapter, indices );
    // non-linear optimization (using all available correspondences)
    adapter.sett(initial_translation);
    adapter.setR(initial_rotation);
    transformation_t nonlinear_transformation =
    relative_pose::optimize_nonlinear( adapter );

    All algorithms are also available in a sample-consensus scheme:

    // create the non-central relative adapter
    relative_pose::NoncentralRelativeAdapter adapter(
    bearingVectors1,
    bearingVectors2,
    camCorrespondences1,
    camCorrespondences2,
    camOffsets,
    camRotations );
    // create a RANSAC object
    sac::Ransac<sac_problems::relative_pose::NoncentralRelativePoseSacProblem>
    ransac;
    // create a NoncentralRelativePoseSacProblem
    std::shared_ptr<
    sac_problems::relative_pose::NoncentralRelativePoseSacProblem>
    relposeproblem_ptr(
    new sac_problems::relative_pose::NoncentralRelativePoseSacProblem(
    adapter,
    sac_problems::relative_pose::NoncentralRelativePoseSacProblem::SEVENTEENPT)
    );
    // run ransac
    ransac.sac_model_ = relposeproblem_ptr;
    ransac.threshold_ = threshold;
    ransac.max_iterations_ = maxIterations;
    ransac.computeModel();
    // get the result
    transformation_t best_transformation =
    ransac.model_coefficients_;

    These examples are taken from test_noncentral_relative_pose.cpp and test_noncentral_relative_pose_sac.cpp. Simply replace SEVENTEENPT with GE or SIXPT in order to use the alternative algorithms.

  • Triangulation of points: OpenGV contains two methods for triangulating points. They are currently only designed for the central case, and compute the position of a point expressed in the first camera given a 2D-2D correspondence between bearing vectors from two cameras. The methods reuse the relative adapter, which needs to hold the transformation between the cameras, given by the position $ \mathbf{t}_{c'}^{c} $ of the second camera seen from the first one and the rotation $ \mathbf{R}_{c'}^{c} $ from the second camera back to the first camera frame.

    triangulation_central.png

    There are two methods, triangulate (linear) and triangulate2 (a fast non-linear approximation). They are used as follows:

    // create a central relative adapter
    // (immediately pass translation and rotation)
    relative_pose::CentralRelativeAdapter adapter(
    bearingVectors1,
    bearingVectors2,
    translation,
    rotation );
    // run method 1
    point_t point1 =
    triangulation::triangulate( adapter, index );
    // run method 2
    point_t point2 =
    triangulation::triangulate2( adapter, index );

    The example is taken from test_triangulation.cpp.

  • Alignment of two point-clouds: OpenGV also contains a method for aligning point-clouds. It is currently only designed for the central case, and computes the transformation between two frames given 3D-3D correspondences between points expressed in the two frames (here denoted by c and c', although they no longer need to be camera frames). The method returns the transformation between the frames, given by the position $ \mathbf{t}_{c'}^{c} $ of the second frame seen from the first one and the rotation $ \mathbf{R}_{c'}^{c} $ from the second frame back to the first frame.

    point_cloud.png

    The method is called threept_arun, and it can be used for an arbitrary number of points (minimum three). There is also a non-linear optimization method again called optimize_nonlinear. The methods are used as follows:

    // create the 3D-3D adapter
    point_cloud::PointCloudAdapter adapter(
    points1, points2 );
    // run threept_arun
    transformation_t threept_transformation =
    point_cloud::threept_arun( adapter, indices );
    // run the non-linear optimization over all correspondences
    transformation_t nonlinear_transformation =
    point_cloud::optimize_nonlinear( adapter );

    There is also a sample-consensus problem for the point-cloud alignment. It is set up as follows:

    // create a 3D-3D adapter
    point_cloud::PointCloudAdapter adapter(
    points1, points2 );
    // create a RANSAC object
    sac::Ransac<sac_problems::point_cloud::PointCloudSacProblem> ransac;
    // create the sample consensus problem
    std::shared_ptr<sac_problems::point_cloud::PointCloudSacProblem>
    relposeproblem_ptr(
    new sac_problems::point_cloud::PointCloudSacProblem(adapter) );
    // run ransac
    ransac.sac_model_ = relposeproblem_ptr;
    ransac.threshold_ = threshold;
    ransac.max_iterations_ = maxIterations;
    ransac.computeModel(0);
    // return the result
    transformation_t best_transformation =
    ransac.model_coefficients_;

    These examples are taken from test_point_cloud.cpp and test_point_cloud_sac.cpp.

Note that there are more unit tests in the test-directory; they show the usage of all the methods contained in the library.

Some words about the sample-consensus-classes

All the above-mentioned Ransac-methods make use of a number of super-classes, such that only the basic functions need to be implemented in the derived SacProblem (SampleConsensusProblem). The basic functions are responsible for getting valid samples for model instantiation, model instantiation itself, and model verification. SampleConsensusProblem is the base-class for any problem we want to solve, and contains a virtual interface for the basic methods that need to be implemented. The base-class SampleConsensus is for the sample-consensus method itself, calling the basic functions. So far only Ransac is implemented [15].

Ransac threshold

Since the entire library operates in 3D, we also need a way to compute and threshold reprojection errors in 3D. What we are looking at is the angle $ q $ between the original bearing-vector $ \mathbf{f}_{meas} $ and the reprojected one $ \mathbf{f}_{repr} $. By adopting a certain threshold angle $ q_{threshold} $, we hence constrain $ \mathbf{f}_{repr} $ to lie within a cone of axis $ \mathbf{f}_{meas} $ and opening angle $ q_{threshold} $.

reprojectionError.png

The threshold-angle $ q_{threshold} $ can be easily obtained from classical reprojection error-thresholds expressed in pixels $ \psi $ by assuming a certain focal length $ l $. We then have $ q_{threshold} = \arctan{\frac{\psi}{l}} $.

The threshold we are using in the end is still not quite this one, but a value derived from it in analogy with the computation of reprojection errors. The most efficient way to compute a "reprojection error" is to take the scalar product of $ \mathbf{f}_{meas} $ and $ \mathbf{f}_{repr} $, which equals $ \cos q $. Since this value is between -1 and 1, and we actually want an error that minimizes to 0, we take $ \epsilon = 1 - \mathbf{f}_{meas}^{T}\mathbf{f}_{repr} = 1 - \cos q $. The threshold error is therefore given by

$ \epsilon_{threshold} = 1 - \cos{q_{threshold}} = 1 - \cos({\arctan{\frac{\psi}{l}}}) $

In the Ransac-examples in the test-folder, you will often see something like this:

ransac.threshold_ = 1.0 - cos(atan(sqrt(2.0)*0.5/800.0));

This corresponds to the above computation of our "reprojection-error" threshold, with a focal length of 800.0 and a reprojection error of 0.5*sqrt(2.0) pixels.

The "Multi"-stuff

As you go deeper into the code you might notice that there are a number of elements (mostly in the relative-pose context) that contain the tag "multi" in their name. The adapter base-class used here is called RelativeMultiAdapterBase. The idea of this adapter is to hold multiple sets of bearing-vector correspondences originating from pairs of cameras. A pair of cameras is, as the name says, a set of two cameras in different viewpoints. The correspondences are accessed via a multi-index (a pair-index referring to a specific pair of cameras, and a correspondence-index referring to the correspondence within the camera-pair).

Subsets of camera-pairs can be identified in a number of problems, such as

  • Non-central relative pose (2 viewpoints): Non-central relative-pose problems involving two viewpoints typically originate from motion estimation with multi-camera rigs. In the special situation where the cameras are pointing in different directions, and where the motion between the viewpoints is not too big (a practically very relevant case), the correspondences typically originate from the same camera in both viewpoints. We can therefore do a camera-wise grouping of the correspondences in the multi-camera system. The following situation contains four pairs given by the black, green, blue, and orange camera in both viewpoints:

    nonoverlapping.png

  • Central multi-viewpoint problems: By multi-viewpoint we understand problems that involve more than two viewpoints. As indicated below, a problem of three central viewpoints, for instance, also allows us to identify three camera-pairs. The number of camera-pairs in an n-view problem amounts to the number of combinations of 2 out of n, i.e. n*(n-1)/2. For the below example, we have the camera-pairs (c,c'), (c',c''), and (c'',c). The first pair has a set of correspondences originating from points p1 and p4, the second one from p2 and p4, and the third one from p3 and p4.

    multi_viewpoint.png

The multi-adapters keep track of these camera-pair-wise correspondence groups. The benefit becomes apparent when moving towards random sample-consensus schemes. Have a look at the "opengv/sac/"-folder; it contains the MultiSampleConsensus, MultiRansac, and MultiSampleConsensusProblem classes. They employ the multi-indices, and the derived MultiSampleConsensusProblems exploit the fact that the correspondences are grouped:

  • The MultiNoncentralRelativePoseSacProblem is for non-central, non-overlapping viewpoints with little change, and exploits the grouping in order to do homogeneous sampling of correspondences over the cameras (a usage sketch follows this list). As an example, imagine we are computing the relative pose of a non-overlapping multi-camera rig with two cameras facing opposite directions. In terms of accuracy, it doesn't make sense to sample 16 points in one camera and one point in the other. We would preferably sample 8 points in one camera, and 9 in the other. This is exactly what MultiNoncentralRelativePoseSacProblem is able to do. It uses the derived adapter NoncentralRelativeMultiAdapter.
  • In the multi-viewpoint case, one could of course solve a central relative-pose problem for each camera-pair individually. The idea of MultiCentralRelativePoseSacProblem is to benefit from a joint solution of multiple relative-pose problems. In the above three-view problem, for instance, we can exploit additional constraints on the individual transformations, such as cycles of rotations returning identity, and cycles of translations returning zero. The corresponding adapter is called CentralRelativeMultiAdapter.
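
By analogy with the earlier Ransac examples, using the multi sample-consensus classes would look roughly as follows. This is only a sketch: the class names are the ones mentioned in this section, but the exact constructor of NoncentralRelativeMultiAdapter (here fed with per-camera-pair correspondence sets) should be verified against the headers and the corresponding test in the test-directory:

// create the multi-adapter (one correspondence-set per camera-pair)
relative_pose::NoncentralRelativeMultiAdapter adapter(
multiBearingVectors1, multiBearingVectors2, camOffsets, camRotations );
// create a MultiRansac object
sac::MultiRansac<
sac_problems::relative_pose::MultiNoncentralRelativePoseSacProblem> ransac;
// create the multi sample-consensus problem
std::shared_ptr<
sac_problems::relative_pose::MultiNoncentralRelativePoseSacProblem>
multiproblem_ptr(
new sac_problems::relative_pose::MultiNoncentralRelativePoseSacProblem(
adapter ) );
// run ransac
ransac.sac_model_ = multiproblem_ptr;
ransac.threshold_ = threshold;
ransac.max_iterations_ = maxIterations;
ransac.computeModel();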

All this stuff is highly experimental, so you probably shouldn't pay too much attention to it for the moment ;)