Class: VRDisplay

new VRDisplay()

WebAR devices are exposed as VRDisplay instances. Pose estimation is exposed using the exact same methods as in any other VR display, although in the case of the underlying Tango implementation the pose is 6DOF (position and orientation). Some new methods have been added to the VRDisplay class of the WebVR spec (https://w3c.github.io/webvr/#interface-vrdisplay) to provide new functionality.
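Since the extended methods below only exist on AR-capable displays, a caller can feature-detect them before use. A minimal sketch, assuming that checking for the extended methods is a reasonable capability test (the isARDisplay helper is hypothetical; only navigator.getVRDisplays() is part of the WebVR 1.1 spec):

```javascript
// Sketch: feature-detect an AR-capable VRDisplay by checking for the
// extended methods documented below. The helper is hypothetical.
function isARDisplay(display) {
  return typeof display.getPointCloud === 'function' &&
         typeof display.getSeeThroughCamera === 'function';
}

// In a browser with the WebAR build, pick the first AR display:
if (typeof navigator !== 'undefined' && navigator.getVRDisplays) {
  navigator.getVRDisplays().then(function (displays) {
    var arDisplay = displays.filter(isARDisplay)[0] || null;
    console.log(arDisplay ? 'AR display found' : 'No AR display');
  });
}
```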

Methods

detectMarkers(markerType, markerSize) → {array.<VRMarker>}

Detects markers of the specified type and physical size that are visible to the camera at the moment of the call.
Parameters:
Name Type Description
markerType long A number that represents the type of marker to detect. The supported types are specified as the MARKER_TYPE_XXX constant properties of VRDisplay.
markerSize float The size in meters of the actual physical marker.
Returns:
- An array of VRMarker instances corresponding to the detected markers of the specified type.
Type
array.<VRMarker>
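A usage sketch that polls for markers each frame. The constant name MARKER_TYPE_QRCODE and the 5 cm marker size are illustrative assumptions; check the actual MARKER_TYPE_XXX properties on your VRDisplay instance:

```javascript
// Sketch: detect markers of an assumed QR-code type. The constant
// name and the 0.05 m (5 cm) marker size are assumptions.
function pollMarkers(display) {
  var markers = display.detectMarkers(display.MARKER_TYPE_QRCODE, 0.05);
  markers.forEach(function (marker) {
    console.log('Marker found:', marker);
  });
  return markers.length;
}
```

Calling this from the render loop keeps the returned VRMarker poses in sync with the current camera frame.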

disableADF()

Disable the last enabled ADF. Once the ADF is disabled, pose estimation is based on the start of service again.

enableADF(uuid)

Enable an ADF to be used to localize the pose estimation. Only one ADF can be enabled at a time, so any previously enabled ADF is disabled when this call is made.
Parameters:
Name Type Description
uuid string The UUID of the ADF to enable.

getADFs() → {array.<VRADF>}

Returns a list of the VRADF structures existing on the device. ADFs can be created with other apps, such as the Area Description example in the Tango C Examples: https://github.com/googlesamples/tango-examples-c/tree/master/cpp_basic_examples/hello_area_description
Returns:
- An array of VRADF instances corresponding to the ADFs existing on the device.
Type
array.<VRADF>
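A sketch tying the ADF methods together: list the ADFs on the device, enable the first one for localization, and later disable it. The uuid field on the returned VRADF structures is assumed from the enableADF parameter description:

```javascript
// Sketch: enable the first available ADF for localization.
// Assumes each VRADF exposes a uuid property matching the UUID
// expected by enableADF (inferred from the parameter docs).
function localizeWithFirstADF(display) {
  var adfs = display.getADFs();
  if (adfs.length === 0) return null; // No ADFs on this device.
  display.enableADF(adfs[0].uuid);
  return adfs[0].uuid;
}

// Later, fall back to start-of-service pose estimation:
// display.disableADF();
```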

getMaxNumberOfPointsInPointCloud() → {long}

Returns the maximum number of points/vertices that the VRDisplay is able to represent. This value will be greater than 0 only if the VRDisplay is able to provide a point cloud.
Returns:
- The maximum number of points/vertices that the VRDisplay is able to represent (0 if the underlying VRDisplay does not support point cloud provisioning).
Type
long

getPickingPointAndPlaneInPointCloud(x, y) → {VRPickingPointAndPlane}

Returns an instance of VRPickingPointAndPlane that represents the 3D point, and the normal of the plane at that point, where a ray cast from the given 2D screen position collides with the point cloud. Internally, the provided 2D point is used to cast a ray against the point cloud mesh, and the collision point and the normal of the plane at that point are returned. IMPORTANT: the point cloud must have been updated by calling getPointCloud before calling this method. The returned value will always be null if the underlying VRDisplay does not support point cloud provisioning.
Parameters:
Name Type Description
x float The horizontal normalized value (0-1) of the screen position.
y float The vertical normalized value (0-1) of the screen position.
Returns:
- An instance of a VRPickingPointAndPlane representing the collision point and plane normal of the ray traced from the passed (x, y) 2D position into the 3D mesh represented by the point cloud. null is returned if the VRDisplay does not support point clouds or if no collision has been detected.
Type
VRPickingPointAndPlane
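A sketch of picking from a touch event, normalizing screen coordinates to the 0-1 range this method expects. The normalize helper is hypothetical; the getPointCloud arguments mirror the parameter docs below, and per the note above the point cloud must be updated before picking:

```javascript
// Sketch: convert a touch position to normalized (0-1) coordinates
// and pick against the point cloud. normalize() is a hypothetical
// helper that clamps to the valid range.
function normalize(value, total) {
  return Math.min(Math.max(value / total, 0), 1);
}

function pickAtTouch(display, pointCloud, touch) {
  // Update the point cloud internally only (no points retrieved),
  // which is sufficient for picking.
  display.getPointCloud(pointCloud, true, 0, false);
  var x = normalize(touch.clientX, window.innerWidth);
  var y = normalize(touch.clientY, window.innerHeight);
  return display.getPickingPointAndPlaneInPointCloud(x, y);
}
```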

getPointCloud(pointCloud, justUpdatePointCloud, pointsToSkip, transformPoints) → {VRPointCloud}

Updates an instance of the VRPointCloud structure with the point cloud acquired by the underlying hardware at the moment of the call. This process is similar to how the WebVR 1.1 spec requires updating a VRFrameData instance in order to get a new pose.
Parameters:
Name Type Description
pointCloud VRPointCloud The VRPointCloud instance to be updated in this call.
justUpdatePointCloud boolean A flag to indicate whether the point cloud should only be updated internally or also retrieved. Updating the point cloud without retrieving the points may be useful when the point cloud won't be used in JS (for rendering it, for example) but picking will be used. Pass true to only update the point cloud (0 points are returned) and false to both update and return all the points detected up to the moment of the call.
pointsToSkip number An integer value indicating how many points to skip when the points are returned (justUpdatePointCloud = false), which allows a less dense point cloud to be returned. A value of 0 returns all the points; a value of 1 skips every other point, returning half of them (1/2); a value of 2 skips 2 of every 3 points, returning one third (1/3); and so on. In essence: numberOfPointsToReturn = numberOfDetectedPoints / (pointsToSkip + 1).
transformPoints boolean A flag to indicate whether the resulting points should be transformed on the native side. If the points are transformed natively, the VRPointCloud structure will contain an identity pointsTransformMatrix and pointsAlreadyTransformed will be true. Otherwise, the matrix needed to correctly transform the points is provided in the VRPointCloud structure's pointsTransformMatrix and pointsAlreadyTransformed will be false.
Returns:
- An instance of a VRPointCloud with the points/vertices that the VRDisplay has detected or null if the underlying VRDisplay does not support point cloud provisioning.
Type
VRPointCloud
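A per-frame update sketch combining getMaxNumberOfPointsInPointCloud with getPointCloud. The density arithmetic follows the pointsToSkip formula above; the integer-division rounding is an assumption:

```javascript
// Sketch: update the point cloud every frame, skipping points to
// reduce density. The skip arithmetic follows the documented
// formula numberOfPointsToReturn =
// numberOfDetectedPoints / (pointsToSkip + 1).
function expectedPointCount(detectedPoints, pointsToSkip) {
  // Integer division; the exact rounding behavior is an assumption.
  return Math.floor(detectedPoints / (pointsToSkip + 1));
}

function updatePointCloudEachFrame(display, pointCloud) {
  if (display.getMaxNumberOfPointsInPointCloud() === 0) {
    return null; // No point cloud support on this VRDisplay.
  }
  // Retrieve every other point (pointsToSkip = 1), untransformed,
  // so the pointsTransformMatrix can be applied in JS.
  return display.getPointCloud(pointCloud, false, 1, false);
}
```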

getSeeThroughCamera() → {VRSeeThroughCamera}

Returns an instance of VRSeeThroughCamera that represents a see-through camera (for either AR or VR). The underlying VRDisplay needs to be able to provide such a camera or this method will return null.
Returns:
- An instance of a VRSeeThroughCamera to represent a see through camera or null if no camera is supported.
Type
VRSeeThroughCamera
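A null-safe usage sketch, since the method returns null on displays without a see-through camera:

```javascript
// Sketch: query the see-through camera and guard against displays
// that do not provide one.
function describeCamera(display) {
  var camera = display.getSeeThroughCamera();
  return camera ? 'See-through camera available' : 'No see-through camera';
}
```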