The PointCloud SDK is a library that lets you use advanced computer vision in your own iOS application. Here you will find a brief overview of the concepts and functionality of the PointCloud SDK.
The API is defined in pointcloud.h.
The PointCloud SDK is controlled through a C API. Before using the SDK you'll need to create an application key; you create one key per application. Without a valid application key, the app will exit.
A typical lifecycle of the SDK is described in the sections below.
The first thing you need to do before using the SDK is to create an instance of it with pointcloud_create(). No other calls will work until this one has been made. When you are done using the SDK, call pointcloud_destroy() to free its memory.
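The create/use/destroy flow can be sketched as below. The parameter list of pointcloud_create() is an assumption (check pointcloud.h for the real signature), and inert stubs stand in for the SDK so the sketch compiles on its own:

```c
/* Sketch only -- the real declarations live in pointcloud.h. We assume
 * pointcloud_create() takes the application key and the camera frame size;
 * the stubs below just track whether the SDK instance is alive. */
static int sdk_alive = 0;

void pointcloud_create(const char *app_key, int width, int height) {
    (void)app_key; (void)width; (void)height;
    sdk_alive = 1;        /* stub: the real call allocates the SDK instance */
}

void pointcloud_destroy(void) {
    sdk_alive = 0;        /* stub: the real call frees the SDK's memory */
}

/* Typical lifecycle: create first, use, destroy last. */
int run_session(const char *app_key) {
    pointcloud_create(app_key, 480, 360);  /* nothing else works before this */
    /* ... feed camera frames and sensor data, query poses ... */
    pointcloud_destroy();                  /* free memory when done */
    return sdk_alive;                      /* 0 once everything is torn down */
}
```

The key string ("MY-APP-KEY" in the test) is a placeholder for the application key you created for your app.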
The SDK uses both accelerometer and gyro data (if available), in addition to camera data. Use the on_accelerometer_update() method to provide raw accelerometer data. If a gyro is available, use on_device_motion_update() to provide processed device motion data, where the rotation rate is unbiased and the acceleration is user acceleration (acceleration with the gravity component filtered out).
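On iOS these values would typically come from Core Motion callbacks. The parameter lists below are assumptions (verify them against pointcloud.h); the stub bodies only record the last sample so the sketch runs standalone:

```c
/* Sketch with assumed signatures; stubs stand in for the real SDK calls. */
static double last_accel[3];       /* last raw accelerometer sample, in g's */
static double last_user_accel[3];  /* last user acceleration (gravity removed) */

/* assumed: raw accelerometer sample */
void on_accelerometer_update(double x, double y, double z) {
    last_accel[0] = x; last_accel[1] = y; last_accel[2] = z;   /* stub */
}

/* assumed: unbiased rotation rate plus gravity-filtered user acceleration,
 * as delivered by processed device-motion data */
void on_device_motion_update(double rot_x, double rot_y, double rot_z,
                             double user_ax, double user_ay, double user_az) {
    (void)rot_x; (void)rot_y; (void)rot_z;
    last_user_accel[0] = user_ax;
    last_user_accel[1] = user_ay;
    last_user_accel[2] = user_az;                              /* stub */
}

/* Forward one Core Motion-style sample pair to the SDK. */
double feed_sample(void) {
    on_accelerometer_update(0.01, -0.02, -0.98);   /* device roughly flat */
    on_device_motion_update(0.0, 0.0, 0.0,  0.0, 0.0, 0.0);
    return last_accel[2];
}
```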
You can query the SDK to find out what state it is in (for example, idle or tracking a SLAM map) using pointcloud_get_state().
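A typical use is driving UI prompts from the current state. The enum below reuses the state names mentioned in this guide, but the exact enum (including the idle value's name) is an assumption; check pointcloud.h:

```c
/* Sketch: assumed state enum; the stub returns a fixed state so the
 * dispatch below compiles and runs without the real SDK. */
typedef enum {
    POINTCLOUD_IDLE,                /* assumed name for the idle state */
    POINTCLOUD_INITIALIZING,
    POINTCLOUD_TRACKING_SLAM_MAP,
    POINTCLOUD_TRACKING_IMAGES
} pointcloud_state;

static pointcloud_state pointcloud_get_state(void) {
    return POINTCLOUD_TRACKING_SLAM_MAP;    /* stub */
}

/* Pick a status message for the UI from the current SDK state. */
const char *state_label(void) {
    switch (pointcloud_get_state()) {
    case POINTCLOUD_INITIALIZING:      return "move the device slowly";
    case POINTCLOUD_TRACKING_SLAM_MAP: return "tracking map";
    case POINTCLOUD_TRACKING_IMAGES:   return "tracking image";
    default:                           return "idle";
    }
}
```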
To detect and track a 2D image, you'll first need to generate an image target. This will create a target file which you bundle with your application. There is no firm limit on the number of image targets, but detection time goes up as you add more. Up to around 20 active targets on an iPhone 3GS or 4, and up to around 40 active targets on an iPhone 4S, iPad 2, or iPad 3, are reasonable limits. Note that you can bundle many more targets than this and activate/deactivate them programmatically to keep the number of active targets reasonable.
To load an image target, use pointcloud_add_image_target(). Note that the SDK will not start looking for the image until you activate it using pointcloud_activate_image_target(). You can deactivate an image target with pointcloud_deactivate_image_target(), or remove it from memory entirely with pointcloud_remove_image_target().
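The four-step target lifecycle can be sketched as follows. Parameter lists, the target name "poster", and the file path are all hypothetical; the stubs only count loaded and active targets so the flow can be exercised:

```c
/* Sketch with assumed signatures: add loads a bundled target file,
 * activate starts detection, deactivate stops it, remove frees it. */
static int loaded_targets = 0;  /* in memory */
static int active_targets = 0;  /* currently searched for */

void pointcloud_add_image_target(const char *name, const char *path) {
    (void)name; (void)path;
    loaded_targets++;           /* stub: loaded, but not searched for yet */
}
void pointcloud_activate_image_target(const char *name)   { (void)name; active_targets++; }
void pointcloud_deactivate_image_target(const char *name) { (void)name; active_targets--; }
void pointcloud_remove_image_target(const char *name)     { (void)name; loaded_targets--; }

/* Keep the *active* set small even when many more targets are bundled. */
int demo_target_flow(void) {
    pointcloud_add_image_target("poster", "poster.target"); /* hypothetical */
    pointcloud_activate_image_target("poster");   /* detection starts only now */
    pointcloud_deactivate_image_target("poster"); /* stays loaded, not searched */
    pointcloud_remove_image_target("poster");     /* freed entirely */
    return loaded_targets + active_targets;
}
```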
To start the creation of a SLAM map, use pointcloud_start_slam(). This will start the initialization process. The user needs to slowly move their device left/right, up/down, and in/out (viewing the scene from multiple angles) until a SLAM map has been created and verified. Ideally you want to point the camera towards an area that has plenty of visual detail, is not too far away, and contains some planar surfaces. When a map has been created, the state changes from POINTCLOUD_INITIALIZING to POINTCLOUD_TRACKING_SLAM_MAP. The map can be discarded, or the initialization restarted, with pointcloud_reset().
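The initialization phase is naturally written as a poll loop on the state, as sketched below. The enum and the signatures are assumptions, and the stub simply "finishes" after a few polls so the loop terminates:

```c
/* Sketch: start SLAM, then poll the state each frame while the user
 * sweeps the device; stubs stand in for the real SDK. */
typedef enum {
    POINTCLOUD_INITIALIZING,
    POINTCLOUD_TRACKING_SLAM_MAP
} pc_state;   /* assumed subset of the real state enum */

static int polls = 0;
static pc_state pointcloud_get_state(void) {
    /* stub: pretend the map is verified after the third poll */
    return ++polls < 3 ? POINTCLOUD_INITIALIZING : POINTCLOUD_TRACKING_SLAM_MAP;
}
static void pointcloud_start_slam(void) { polls = 0; }  /* stub */

/* Returns how many frames passed before the map was verified. */
int initialize_map(void) {
    pointcloud_start_slam();
    int frames = 0;
    while (pointcloud_get_state() == POINTCLOUD_INITIALIZING)
        frames++;   /* keep prompting: "move slowly left/right, up/down" */
    return frames;
}
```

In a real app the loop body would be your per-frame render/update callback rather than a tight while loop.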
In addition to manually initializing a SLAM map, a map can be initialized by detecting an image. After the image is detected, a SLAM map is created and then automatically and continuously expanded. To enable this, call pointcloud_set_automatic_map_expansion(true) before activating the image target(s). To clear the current SLAM map and start looking for images again, call pointcloud_reset().
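The ordering constraint (expansion enabled before activation) is the easy part to get wrong, so the sketch below makes it explicit. Signatures and the target name are assumptions; the stubs record the call order:

```c
/* Sketch: enable automatic map expansion *before* activating targets. */
static int expansion_enabled = 0;
static int enabled_before_activation = 0;

void pointcloud_set_automatic_map_expansion(int enable) {
    expansion_enabled = enable;               /* stub */
}
void pointcloud_activate_image_target(const char *name) {
    (void)name;
    enabled_before_activation = expansion_enabled;  /* stub: order check */
}

/* Image-initialized SLAM: detect the image, then expand the map from it. */
int setup_image_initialized_slam(void) {
    pointcloud_set_automatic_map_expansion(1);  /* must come first */
    pointcloud_activate_image_target("poster"); /* hypothetical target name */
    return enabled_before_activation;           /* 1 if the order was right */
}
```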
A third way of acquiring a SLAM map is to load a previously created and saved map. To save a map, first create one, then call pointcloud_save_current_map(); a serialized map will be written to the file path you provide. This file can later be loaded again using pointcloud_load_map(). After loading a map, the SDK will try to locate the camera pose within this map and begin tracking it.
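A save/load round trip can be sketched like this. Path-taking signatures are an assumption, and the stubs emulate serialization with a flag rather than writing a real file:

```c
/* Sketch: assumed signatures taking a file path; stubs fake the round trip. */
static int map_on_disk = 0;
static int map_loaded  = 0;

void pointcloud_save_current_map(const char *path) {
    (void)path;
    map_on_disk = 1;            /* stub: serialize the current SLAM map */
}
void pointcloud_load_map(const char *path) {
    (void)path;
    map_loaded = map_on_disk;   /* stub: deserialize a previously saved map */
}

/* Save the freshly built map, then restore it in a later session; after
 * loading, the SDK relocalizes the camera inside the map on its own. */
int roundtrip_map(const char *path) {
    pointcloud_save_current_map(path);  /* requires an existing SLAM map */
    pointcloud_load_map(path);
    return map_loaded;
}
```

The path passed in the test ("room.map") is a placeholder; in an iOS app you would build a path inside your app's Documents directory.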
When the state is either POINTCLOUD_TRACKING_SLAM_MAP or POINTCLOUD_TRACKING_IMAGES, you can get the current camera pose using pointcloud_get_camera_pose(). This will give you a 4x4 matrix of the camera pose, which can be used to position content relative to the real world.
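The pose matrix might be consumed as sketched below. The matrix type, element type, and column-major layout are assumptions (column-major is typical for OpenGL-era iOS code, but verify against pointcloud.h); the stub returns the identity pose:

```c
/* Sketch: assumed 4x4 pose type; the stub returns identity. */
typedef struct {
    double m[16];   /* assumed column-major, as in OpenGL */
} pointcloud_matrix_4x4;

pointcloud_matrix_4x4 pointcloud_get_camera_pose(void) {
    pointcloud_matrix_4x4 p = { {0} };
    p.m[0] = p.m[5] = p.m[10] = p.m[15] = 1.0;   /* stub: identity pose */
    return p;
}

/* Extract the camera's x translation to place content in the world. */
double camera_x(void) {
    pointcloud_matrix_4x4 pose = pointcloud_get_camera_pose();
    return pose.m[12];   /* translation sits in the last column (column-major) */
}
```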
You can get access to a copy of the current point cloud, if there is a SLAM map, using pointcloud_get_points(). Note that this is a copy; editing it will not change the internal point cloud. When you are done with it, destroy it with pointcloud_destroy_point_cloud() to free its memory.
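The get/read/destroy pattern can be sketched as follows. The layout of the returned struct (a count plus a flat x/y/z array) is an assumption; the stubs hand out a small heap copy so the ownership rule can be demonstrated:

```c
/* Sketch: assumed point-cloud layout; stubs stand in for the SDK. */
#include <stdlib.h>

typedef struct {
    int size;         /* number of feature points */
    double *points;   /* assumed: size * 3 doubles, x/y/z per point */
} pointcloud_point_cloud;

pointcloud_point_cloud *pointcloud_get_points(void) {
    /* stub: the real call returns a copy of the internal point cloud */
    pointcloud_point_cloud *pc = malloc(sizeof *pc);
    pc->size = 2;
    pc->points = calloc(6, sizeof(double));
    return pc;
}

void pointcloud_destroy_point_cloud(pointcloud_point_cloud *pc) {
    free(pc->points);   /* the caller owns the copy and must release it */
    free(pc);
}

/* Read the copy, then always destroy it to avoid leaking. */
int count_points(void) {
    pointcloud_point_cloud *pc = pointcloud_get_points();
    int n = pc ? pc->size : 0;   /* editing pc would not affect the SDK */
    pointcloud_destroy_point_cloud(pc);
    return n;
}
```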