Motive API: Quick Start Guide




SDK/API Support Disclaimer

We provide developer tools to enable OptiTrack customers across a broad set of applications to utilize their systems in the ways that best suit them. The Motive API, the NatNet SDK, and the Camera SDK are designed to enable experienced software developers to integrate data transfer and/or system operation with their preferred systems and pipelines. Sample projects are provided alongside each tool, and we strongly recommend that users reference or use the samples as reliable starting points. The following list specifies the range of support that will be provided for the SDK tools:
  • Using the SDK tools requires background knowledge on software development; therefore, we do not provide support for basic project setup, compiling, and linking when using the SDK/API to create your own applications.
  • Although we ensure the SDK tools and their libraries work as intended, we do not provide support for custom developed applications that have been programmed or modified by users using the SDK tools.
  • Ticketed support will be provided for licensed Motive users using the Motive API and/or the NatNet SDK, and only for the included libraries and sample source code.
  • The Camera SDK is a free product, and therefore we do not provide free ticketed support for it.
  • For other questions, please check out the NaturalPoint forums. Very often, similar development issues get reported and solved there.

This guide provides detailed instructions on commonly used functions of the Motive API for developing custom applications. For a full list of the functions, refer to the Motive API Function Reference page. Also, for a sample use case of the API functions, please check out the provided marker project. In this guide, the following topics will be covered:

  • Library files and header files
  • Initialization and shutdown
  • Capture setup (Calibration)
  • Configuring camera settings
  • Updating captured frames
  • 3D marker tracking
  • Rigid body tracking
  • Data streaming


Environment Setup[edit]


Library Files[edit]

When developing a Motive API project, make sure the linker knows where to find the required library files. This can be done either by specifying their location in the project settings or by copying the files into the project folder.

NPTrackingTools[edit]

Motive API libraries (.lib and .dll) are located in the lib folder within the Motive install directory, which is C:\Program Files\OptiTrack\Motive\lib by default. In this folder, library files for both 64-bit (NPTrackingToolsx64.dll and NPTrackingToolsx64.lib) and 32-bit (NPTrackingTools.dll and NPTrackingTools.lib) platforms can be found.

When using the Motive API library, all of the required DLL files must be located in the executable directory. Copy the NPTrackingToolsx64.dll file (or the NPTrackingTools.dll file for 32-bit builds) into the folder alongside the executable file.

Third-party Libraries[edit]

Additional third-party libraries are required for the Motive API, and DLL files for these libraries can be found in the Motive install directory C:\Program Files\OptiTrack\Motive\. DLL files for the 32-bit platform are located in the lib32 folder C:\Program Files\OptiTrack\Motive\lib32. Copy and paste them into the Motive API project directory to use them. The following are the required DLL files for the Motive API:

Required third-party DLLs

  • avcodec-56.dll
  • avformat-56.dll
  • avutil-54.dll
  • libiomp5md.dll
  • Qt5Core.dll
  • Qt5Gui.dll
  • Qt5Widgets.dll
  • swscale-3.dll

Header Files[edit]

For function declarations, there are two required header files: NPTrackingTools.h and RigidBodySettings.h, both located in the C:\Program Files\OptiTrack\Motive\inc\ folder. Always include the directive for the NPTrackingTools.h header file in all programs developed against the Motive API. The NPTrackingTools.h file already includes RigidBodySettings.h, so there is no need to include it separately.

#include    "NPTrackingTools.h"
  • The NPTrackingTools.h file contains declarations for most of the functions and classes in the API.
  • The RigidBodySettings.h file contains the declaration of the cRigidBodySettings class, which is used for configuring rigid body asset properties.

Note: You can define these directories by using the NPTRACKINGTOOLS_INC and NPTRACKINGTOOLS_LIB environment variables. Check the project properties (Visual Studio) of the provided marker project for a sample project configuration.


Motive Files[edit]

To use the Motive API, you will need to export two files from Motive: the application profile (XML) and the camera calibration (CAL). The application profile is imported in order to obtain software settings and trackable asset definitions. The calibration file is needed because the Motive API does not support the calibration pipeline; you will have to use Motive to perform camera calibration, export the calibration file (CAL), and import it into the API application. Reliable 3D tracking data can be obtained only after the camera calibration is imported. Application profiles can be loaded using the TT_LoadProfile function, and calibration files can be loaded using the TT_LoadCalibration function.


When running the sample project (markers), the application will also look for an application profile named UserProfile.xml and a camera calibration file named Calibration.cal.


Project TTP files are deprecated starting with the 2.0 release. Please use profile XML files for persisting software configurations and the list of trackable assets.

Running marker sample application (x64) with the required DLL files and Motive files.

Initialization and Shutdown[edit]


When using the API, connected devices and the Motive API library need to be properly initialized at the beginning of a program and closed down at the end. The following section covers Motive API functions for initializing and closing down devices.

Initialization[edit]

To initialize all of the connected cameras, call the TT_Initialize function. This function initializes the API library and gets the cameras ready for capturing data, so always call this function at the beginning of a program. If you attempt to use the API functions without the initialization, you will get an error.

// Initializing all connected cameras
TT_Initialize();

Update[edit]

The TT_Update function is primarily used for updating captured frames, which will be covered later, but it has another use: TT_Update can also be called to update the list of connected devices. Call this function after initialization to make sure all newly connected devices are properly initialized at the beginning.

// Initializing all connected cameras
TT_Initialize();

// Update for newly arrived cameras
TT_Update();

Shutdown[edit]

When exiting out of a program, make sure to call the TT_Shutdown function to completely release and close down all of the connected devices. Cameras may fail to shut down completely when this function is not called.

// Closing down all of the connected cameras
TT_Shutdown();
return 0;

Setup the Project[edit]


Motive Application Profile[edit]

The Motive application profile (XML) stores all of the trackable assets involved in a capture as well as software configurations, including application settings and data streaming settings. When using the API, it is strongly recommended to first configure all of the settings and define trackable assets in Motive, export a profile XML file, and then load the file by calling the TT_LoadProfile function. This way, you can adjust the settings to your needs in advance and apply them to your program without worrying about configuring individual settings.

TT_LoadProfile("UserProfile.xml"); 	// Loading application profile, UserProfile.xml

Camera Calibration[edit]

Cameras must be calibrated in order to track in 3D space. However, since camera calibration is a complex process, it is not supported directly by the API. Instead, cameras must be calibrated in Motive, and the calibration needs to be exported as a camera calibration file (CAL). The exported file can then be loaded into custom applications developed against the API. Once the calibration data is loaded, the 3D tracking functions can be used. For detailed instructions on camera calibration in Motive, please read through the Calibration page.

TT_LoadCalibration("CameraCal.cal"); 	// Loading CAL file
Camera calibration overview

Loading Calibration

  1. Open Motive.
  2. [Motive] Calibrate: Calibrate camera system using the Calibration panel. Read through the Calibration page for details.
  3. [Motive] Export: After the system has been calibrated, export the calibration file (CAL) from Motive.
  4. Close Motive.
  5. [API] Load: Import the calibration into your custom application by calling the TT_LoadCalibration function.
  6. When successfully loaded, you will be able to obtain 3D tracking data using the API functions.
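The load step above can be guarded by checking the returned NPRESULT code. A minimal sketch (the file name is a placeholder):

```cpp
// Sketch: load a calibration exported from Motive and verify the result.
NPRESULT result = TT_LoadCalibration("CameraCal.cal");
if (result != NPRESULT_SUCCESS)
{
    // TT_GetResultString converts an NPRESULT code into a readable message.
    printf("Calibration load failed: %s\n", TT_GetResultString(result));
}
```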


Note:

  • Calibration Files: When using exported calibration files, make sure the calibration is still valid. An exported calibration file can be reused as long as the system setup remains unchanged; the file is no longer valid if any part of the setup has been altered after calibration. Calibration quality may also degrade over time due to environmental factors. For these reasons, we recommend re-calibrating the system routinely to guarantee the best tracking quality.
  • Tracking Bars: If you are using a tracking bar, camera calibration is not required for tracking 3D points.

Camera Settings[edit]


Connected cameras are accessible by index numbers. The camera indexes are assigned in the order the cameras are initialized. Most of the API functions for controlling cameras require an index value. When processing all of the cameras, use the TT_CameraCount function to obtain the total camera count and process each camera within a loop. To identify a specific camera, you can use the TT_CameraID or TT_CameraName functions with its index value. This section covers Motive API functions for checking and configuring the camera frame rate, video type, exposure, pixel brightness threshold, and IR illumination intensity.
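The index-based access described above can be sketched as a simple enumeration loop (assuming the system has already been initialized with TT_Initialize):

```cpp
// Sketch: list every connected camera by index, name, and ID.
int cameraCount = TT_CameraCount();
for (int i = 0; i < cameraCount; i++)
{
    printf("Camera #%d: %s (ID: %d)\n", i, TT_CameraName(i), TT_CameraID(i));
}
```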

Fetching Camera Settings[edit]

The following functions return integer values for the configured settings of a camera specified by its index number. The camera video type is returned as an integer value that represents an image processing mode, as listed in NPVIDEOTYPE.

These camera settings are equivalent to the settings that are listed in the Devices pane of Motive. For more information on each of the camera settings, refer to the Devices pane page.

TT_CameraVideoType(int cameraIndex)      // Returns Video Type
TT_CameraExposure (int cameraIndex)      // Returns Camera Exposure
TT_CameraThreshold(int cameraIndex)      // Returns Pixel Threshold
TT_CameraIntensity(int cameraIndex)      // Returns IR Illumination Intensity
TT_CameraFrameRate(int cameraIndex)      // Returns Camera Frame Rate

Configuring Settings[edit]

Now that we have covered functions for obtaining the configured settings, let's modify some of them. There are two main functions for adjusting the camera settings: TT_SetCameraSettings and TT_SetCameraFrameRate. The TT_SetCameraSettings function configures the video type, exposure, threshold, and intensity settings of a camera specified by its index number. The TT_SetCameraFrameRate function is used for configuring the frame rate of a camera. The supported frame rate range may vary for different camera models; check the device specifications and apply frame rates only within the supported range.

TT_SetCameraSettings(int cameraIndex, int videoType, int exposure, int threshold, int intensity);
TT_SetCameraFrameRate(int cameraIndex, int framerate);

If you wish to keep part of the current camera settings, you can use the above functions to obtain the configured settings (e.g. TT_CameraVideoType, TT_CameraFrameRate, TT_CameraExposure, etc.) and pass them as input arguments to the TT_SetCameraSettings function. The following example demonstrates modifying the frame rate and IR illumination intensity for all of the cameras while keeping the other settings constant.

//== Changing IR intensity and frame rate for all of the cameras ==//
int cameraCount = TT_CameraCount();
int intensity = 10;
int framerate = 100;

for (int i = 0; i < cameraCount; i++)
{
	TT_SetCameraSettings(i, TT_CameraVideoType(i), TT_CameraExposure(i),
					TT_CameraThreshold(i), intensity);

	TT_SetCameraFrameRate(i, framerate);

	//== Outputting the Settings ==//
	printf("Camera #%d:\n", i);
	printf("\tFPS: %d\n\tIntensity: %d\n\tExposure: %d\n\tThreshold: %d\n\tVideo Type: %d\n",
			TT_CameraFrameRate(i), TT_CameraIntensity(i), TT_CameraExposure(i),
			TT_CameraThreshold(i), TT_CameraVideoType(i));
}

Camera Setting Ranges[edit]

Camera Settings

  • Valid frame rate values: varies depending on the camera model; refer to the respective hardware specifications.
  • Valid exposure values: depends on the camera model and frame rate settings.
  • Valid threshold values: 0 - 255
  • Valid intensity values: 0 - 15

Video Types

  • Video Type: see the Data Recording page for more information on image processing modes.
  • Segment Mode: 0
  • Grayscale Mode: 1
  • Object Mode: 2
  • Precision Mode: 4
  • MJPEG Mode: 6
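The mode constants above can be wrapped in a small helper for readable logging. Note that videoTypeName is a hypothetical helper, not part of the Motive API; the constant-to-name mapping follows the table above:

```cpp
#include <string>

// Hypothetical helper (not part of the Motive API): maps the video type
// constants listed above to readable mode names.
std::string videoTypeName(int videoType)
{
    switch (videoType)
    {
        case 0:  return "Segment Mode";
        case 1:  return "Grayscale Mode";
        case 2:  return "Object Mode";
        case 4:  return "Precision Mode";
        case 6:  return "MJPEG Mode";
        default: return "Unknown Mode";
    }
}
```

For example, videoTypeName(TT_CameraVideoType(0)) could be used when printing a camera's configuration.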

Other Settings[edit]

There are other camera settings, such as imager gain, that can be configured using the Motive API. Please refer to the Motive API Function Reference page for descriptions on other functions.

Updating the frames[edit]


In order to process multiple consecutive frames, you must update the camera frames using one of the following API functions: TT_Update or TT_UpdateSingleFrame. Call one of the two functions repeatedly within a loop to process all of the incoming frames. In the marker sample, the TT_Update function is called within a while loop and the frameCounter variable is incremented each time a frame is updated, as shown in the example below.

// marker.cpp sample project

int main()
{
	TT_Initialize();
	int frameCounter = 0; // Frame counter variable
	while (!_kbhit())
	{
		if (TT_Update() == NPRESULT_SUCCESS)
		{
			// Each time the TT_Update function successfully updates the frame,
			// the frame counter is incremented, and the new frame is processed.
			frameCounter++;

			////// PROCESS NEW FRAME //////
		}
	}

	TT_Shutdown(); // Release the cameras before exiting
	return 0;
}


TT_Update() vs. TT_UpdateSingleFrame()[edit]

There are two functions for updating the camera frames: TT_Update and TT_UpdateSingleFrame. At the most fundamental level, these two functions both update the incoming camera frames. However, they may act differently in certain situations. When a client application stalls momentarily, it could get behind on updating the frames and the unprocessed frames may be accumulated. In this situation, each of these two functions will behave differently.

  • The TT_Update() function will disregard accumulated frames and service only the most recent frame data, but it also means that the client application will not be processing the previously missed frames.
  • The TT_UpdateSingleFrame() function ensures that only one frame is processed each time the function is called. However, when there are significant stalls in the program, using this function may result in accumulated processing latency.

In general, a user should always use TT_Update(). Consider TT_UpdateSingleFrame() only when it is important for your client application to obtain and process every single frame of tracking data and calling TT_Update() in a timely fashion is a problem.

TT_Update() 		// Process all outstanding frames of data.
TT_UpdateSingleFrame() // Process one outstanding frame of data.
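When every frame matters, the single-frame variant can be drained in a loop. A minimal sketch (this assumes TT_UpdateSingleFrame returns NPRESULT_SUCCESS for each frame it delivers):

```cpp
// Sketch: process outstanding frames one at a time so none are skipped.
while (TT_UpdateSingleFrame() == NPRESULT_SUCCESS)
{
    // Process exactly one frame of tracking data here.
}
```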

3D Marker Tracking[edit]

After loading a valid camera calibration, you can use the API functions to track retroreflective markers and get their 3D coordinates. The following section demonstrates using the API functions for obtaining the 3D coordinates. Since marker data is obtained for each frame, always call the TT_Update (or TT_UpdateSingleFrame) function each time newly captured frames are received.

Marker Index

In a given frame, each reconstructed marker is assigned a marker index number. These marker indexes are used for pointing to a particular reconstruction within a frame. You can also use the TT_FrameMarkerCount function to obtain the total marker count and use this value within a loop to process all of the reconstructed markers. Marker index values may vary between frames, but unique identifiers will always remain the same. Use the TT_FrameMarkerLabel function to obtain individual marker labels if you wish to access the same reconstructions across multiple frames.

int totalMarker = TT_FrameMarkerCount();
printf("Frame #%d: (Markers: %d)\n", framecounter, totalMarker);

//== Use a loop to access every marker in the frame ==//
for (int i = 0 ; i < totalMarker; i++) {
        printf("\tMarker #%d:\t(%.2f,\t%.2f,\t%.2f)\n\n", 
		i, TT_FrameMarkerX(i), TT_FrameMarkerY(i), TT_FrameMarkerZ(i));
}
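To follow the same reconstruction across frames, the loop above can be extended with TT_FrameMarkerLabel. This sketch assumes the function returns a Core::cUID whose HighBits()/LowBits() members expose the identifier; check NPTrackingTools.h for the exact type:

```cpp
// Sketch: print each marker's persistent label alongside its per-frame index.
int totalMarker = TT_FrameMarkerCount();
for (int i = 0; i < totalMarker; i++)
{
    Core::cUID label = TT_FrameMarkerLabel(i);
    printf("\tMarker #%d: label (%llu, %llu)\n", i,
           (unsigned long long) label.HighBits(),
           (unsigned long long) label.LowBits());
}
```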

Marker Position

For obtaining the 3D position of a reconstructed marker, you can use the TT_FrameMarkerX, TT_FrameMarkerY, and TT_FrameMarkerZ functions. These functions return the 3D coordinates (X/Y/Z) of a marker with respect to the global coordinate system, which was defined during the calibration process. You can analyze 3D movements directly from the reconstructed 3D marker positions, or you can create a rigid body asset from a set of tracked reconstructions for 6 degree of freedom (DoF) tracking data. Rigid body tracking via the API is explained in a later section.

Sample 3D marker coordinate output using the API functions.

Rigid Body Tracking[edit]


  • Retroreflective markers placed on a quadrocopter
  • The corresponding rigid body defined in Motive

For tracking the 6 degrees of freedom (DoF) movement of a rigid body, a corresponding rigid body (RB) asset must be defined. An RB asset is created from a set of reflective markers attached to a rigid object, which is assumed to be undeformable. There are two main approaches for obtaining RB assets when using the Motive API: you can either import existing rigid body data or define new rigid bodies using the TT_CreateRigidBody function. Once RB assets are defined in the project, the rigid body tracking functions can be used to obtain the 6 DoF tracking data. This section covers sample instructions for tracking rigid bodies using the Motive API.


We strongly recommend reading through the Rigid Body Tracking page for more information on how rigid body assets are defined in Motive.

Importing Rigid Body Assets[edit]

Let's go through importing RB assets into a client application using the API. In Motive, rigid body assets can be created from three or more reconstructed markers, and all of the created assets can be exported into either an application profile (XML) or a Motive rigid body file (TRA). Each rigid body asset saves the marker arrangement from when it was first created. As long as the marker locations remain the same, you can use saved asset definitions for tracking the respective objects.

Exporting all RB assets from Motive:

  • Exporting application profile: File → Save Profile
  • Exporting rigid body file (TRA): File → Export Rigid Bodies (TRA)

Exporting individual RB asset:

  • Exporting rigid body file (TRA): Under the Assets pane, right-click on a RB asset and click Export Rigid Body


When using the API, you can load exported assets by calling the TT_LoadProfile function for application profiles and the TT_LoadRigidBodies or TT_AddRigidBodies function for TRA files. When importing TRA files, the TT_LoadRigidBodies function will entirely replace the existing rigid bodies with the list of assets from the loaded TRA file. On the other hand, TT_AddRigidBodies will add the loaded assets to the existing list while keeping the existing assets. Once rigid body assets are imported into the application, the API functions can be used to configure and access them.

TT_LoadProfile("UserProfile.xml"); 	// Loading application profile
TT_LoadRigidBodies("rbfile.tra"); 	// Loading TRA file
TT_AddRigidBodies("rbfile.tra"); 

Creating New Rigid Body Assets[edit]

Rigid body assets can also be defined directly using the API. The TT_CreateRigidBody function defines a new rigid body from given 3D coordinates. This function takes in an array of float values which represent the x/y/z coordinates of multiple markers with respect to the rigid body pivot point. The float array for multiple markers should be laid out as follows: {x1, y1, z1, x2, y2, z2, …, xN, yN, zN}. You can manually enter the coordinate values or use the TT_FrameMarkerX, TT_FrameMarkerY, and TT_FrameMarkerZ functions to input the 3D coordinates of tracked markers.
When using the TT_FrameMarkerX/Y/Z functions, keep in mind that these functions return locations with respect to the global coordinate system, while TT_CreateRigidBody expects them with respect to the RB pivot point. To set the pivot point at the center of the created rigid body, first compute the pivot point location and then subtract its coordinates from the 3D coordinates of the markers obtained by the TT_FrameMarkerX/Y/Z functions. This process is shown in the following example.
NPRESULT		TT_CreateRigidBody(const char* name, int userDataID, int markerCount, float *markerList);


Example: Creating RB Assets


int markerCount = TT_FrameMarkerCount();
vector<float> markerListRelativeToGlobal;
markerListRelativeToGlobal.reserve(3 * markerCount);

// Collect the global 3D coordinates of every marker in the frame.
for (int i = 0; i < markerCount; ++i)
{
	markerListRelativeToGlobal.push_back(TT_FrameMarkerX(i));
	markerListRelativeToGlobal.push_back(TT_FrameMarkerY(i));
	markerListRelativeToGlobal.push_back(TT_FrameMarkerZ(i));
}

// Average the marker locations in x, y, and z to find the pivot point.
float sx = 0.0f, sy = 0.0f, sz = 0.0f;
for (int i = 0; i < markerCount; ++i)
{
	sx += markerListRelativeToGlobal[3*i];
	sy += markerListRelativeToGlobal[3*i + 1];
	sz += markerListRelativeToGlobal[3*i + 2];
}

float ax = sx / markerCount;
float ay = sy / markerCount;
float az = sz / markerCount;

// Subtract the pivot point location from each marker location.
vector<float> markerListRelativeToPivotPoint;
markerListRelativeToPivotPoint.reserve(3 * markerCount);
for (int i = 0; i < markerCount; ++i)
{
	markerListRelativeToPivotPoint.push_back(markerListRelativeToGlobal[3*i] - ax);
	markerListRelativeToPivotPoint.push_back(markerListRelativeToGlobal[3*i + 1] - ay);
	markerListRelativeToPivotPoint.push_back(markerListRelativeToGlobal[3*i + 2] - az);
}

TT_CreateRigidBody("Rigid Body New", 1, markerCount, markerListRelativeToPivotPoint.data());


Rigid Body 6 DoF Tracking Data[edit]

6 DoF rigid body tracking data can be obtained using the TT_RigidBodyLocation function. Using this function, you can save the 3D position and orientation of a rigid body into declared variables. The saved position values indicate the location of the rigid body pivot point, represented with respect to the global coordinate axes. The orientation is saved in both Euler and quaternion representations.
void	TT_RigidBodyLocation(int rbIndex, 				//== RigidBody Index
			float *x, float *y, float *z, 			//== Position
			float *qx, float *qy, float *qz, float *qw, 	//== Quaternion
			float *yaw, float *pitch, float *roll);   	//== Euler


Example: RB Tracking Data


//== Declared variables ==//
float	x, y, z;
float 	qx, qy, qz, qw;
float	yaw, pitch, roll;
int rbcount = TT_RigidBodyCount();

for(int i = 0; i < rbcount; i++)
{
	//== Obtaining/Saving the rigid body position and orientation ==//
	TT_RigidBodyLocation( i, &x, &y, &z, &qx, &qy, &qz, &qw, &yaw, &pitch, &roll );
	
	if( TT_IsRigidBodyTracked( i ) )
	{
		printf( "%s: Pos (%.3f, %.3f, %.3f) Orient (%.1f, %.1f, %.1f)\n", 
					TT_RigidBodyName( i ), x, y, z, yaw, pitch, roll );
	}
}


Rigid Body Properties[edit]

In Motive, each rigid body asset has rigid body properties assigned to it. Depending on how these properties are configured, the display and tracking behavior of the corresponding rigid body may vary. When using the Motive API, rigid body properties are configured and applied using the cRigidBodySettings class, which is declared in the RigidBodySettings.h header file.

Within your program, create an instance of the cRigidBodySettings class and call the API functions to obtain and adjust rigid body properties. Once the desired changes are made, use the TT_SetRigidBodySettings function to assign the properties back onto a rigid body asset.

For detailed information on individual rigid body settings, read through the Rigid Body Properties page.

NPRESULT	TT_RigidBodySettings(int rbIndex, RigidBodySolver::cRigidBodySettings &settings);
NPRESULT	TT_SetRigidBodySettings(int rbIndex, RigidBodySolver::cRigidBodySettings &settings);
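The get/modify/set cycle described above can be sketched as follows. Enabled is used here as an illustrative property name; check RigidBodySettings.h for the exact members available in your version:

```cpp
// Sketch: read the settings of the first rigid body, adjust one property,
// and write the settings back to the asset.
RigidBodySolver::cRigidBodySettings settings;
if (TT_RigidBodySettings(0, settings) == NPRESULT_SUCCESS)
{
    settings.Enabled = true;    // illustrative property; see RigidBodySettings.h
    TT_SetRigidBodySettings(0, settings);
}
```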

Data Streaming[edit]


Once the API has been successfully initialized, data streaming can be enabled or disabled by calling the TT_StreamNP, TT_StreamTrackd, or TT_StreamVRPN function. The TT_StreamNP function enables/disables data streaming via NatNet. The NatNet SDK is a client/server networking SDK designed for sending and receiving NaturalPoint data across networks; tracking data from the API can be streamed to client applications on various platforms via the NatNet protocol. Once data streaming is enabled, connect a NatNet client application to the server IP address to start receiving the data.

TT_StreamNP(true);	//Enabling NatNet Streaming.

The TT_StreamNP function is equivalent to Broadcast Frame Data from the Data Streaming pane in Motive.

Data Streaming Settings[edit]

The Motive API does not currently support configuring data streaming settings directly. To configure the streaming server IP address and the data streaming settings, use Motive to save an application profile XML file that contains the desired configuration; the exported profile can then be loaded when using the API. This way, you can set the interface IP address and decide which data to stream over the network.
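Putting the pieces together, a typical streaming setup loads the profile exported from Motive (which carries the streaming configuration) and then enables NatNet streaming; the file name is a placeholder:

```cpp
// Sketch: apply streaming settings saved from Motive, then start streaming.
TT_LoadProfile("UserProfile.xml");  // profile exported with the desired streaming settings
TT_StreamNP(true);                  // begin broadcasting frame data via NatNet
```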

For more information on data streaming settings, read through the Data Streaming Pane page.

  • Output lines from the marker sample, indicating that it is starting a NatNet server.
  • Streaming from the marker sample (API) onto the NatNet WinFormTestApp sample.