Three.js Integration
The Three.js adapter provides automatic gaze tracking, performance profiling, dynamic object tracking, and scene/object export utilities.
Installation
From your Three.js project's root directory, install the SDK from npm:
npm install @cognitive3d/analytics
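The examples below import a settings module that holds your project configuration. A minimal sketch of such a file is shown here; the exact field names (API key, scene identifiers, and so on) are illustrative assumptions, not the SDK's canonical schema, so match them to your project setup on the Cognitive3D dashboard:

```javascript
// settings.js -- hypothetical configuration module; the field names below
// are placeholders, not the SDK's documented schema
export default {
  config: {
    APIKey: 'YOUR_API_KEY', // application key from your Cognitive3D dashboard
    allSceneData: [
      {
        sceneName: 'MyThreeJSScene', // must match the name passed to c3d.setScene()
        sceneId: 'YOUR_SCENE_ID',
        versionNumber: '1',
      },
    ],
  },
};
```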
Initialization and Usage
Import both the main C3D class and the C3DThreeAdapter. After creating an instance of the SDK, pass it to the adapter's constructor.
Session Management: To ensure data is captured reliably, the SDK hooks into Three.js's native WebXR session events.
Starting the Session: An event listener is attached to the renderer's session start event, which fires as soon as the participant enters VR. Inside this listener, c3d.startSession(renderer.xr.getSession()) is called to begin analytics recording. By passing the active XRSession, the SDK can handle gaze tracking automatically, without any extra code in the render loop.
Render Loop: You must call c3dAdapter.update() inside your application's render loop. This is required to track FPS and dynamic object movements, which are not handled automatically by startSession.
Ending the Session: Similarly, an event listener is attached for the session end event, which fires when the participant exits VR. Inside this listener, c3d.endSession() is called, which sends any remaining batched data to the server and finalizes the session on the Cognitive3D dashboard. This step is vital to prevent data loss.
// My ThreeJS VR app with Cognitive3D analytics
import * as THREE from 'three';
import C3D from '@cognitive3d/analytics'; // main c3d sdk
import C3DThreeAdapter from '@cognitive3d/analytics/adapters/threejs'; // c3d threejs adapter
import settings from './settings'; // SDK settings file
// --- Basic Three.js VR Setup ---
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.xr.enabled = true;
// ... add VR button and your scene objects
const interactableGroup = new THREE.Group();
scene.add(interactableGroup); // Add your trackable objects to this group
// --- Cognitive3D Initialization ---
// 1. Initialize the main SDK, passing the renderer to enable the profiler
const c3d = new C3D(settings, renderer);
// 2. Initialize the Three.js Adapter
const c3dAdapter = new C3DThreeAdapter(c3d);
// 3. Set your scene and user info
c3d.setScene('MyThreeJSScene');
c3d.setUserProperty("c3d.app.version", "1.0"); // REQUIRED PROPERTY
// --- Start Session ---
renderer.xr.addEventListener('sessionstart', async () => {
  const xrSession = renderer.xr.getSession();
  // Start the C3D session
  await c3d.startSession(xrSession);
  // This call initializes the tracking systems (FPS, Gaze, Dynamic Objects)
  c3dAdapter.startTracking(renderer, camera, interactableGroup);
  console.log('Cognitive3D Session Started!');
});
// 4. Render Loop
renderer.setAnimationLoop((timestamp, frame) => {
  // REQUIRED: Update the adapter every frame
  c3dAdapter.update();
  // Your normal render call
  renderer.render(scene, camera);
});
// --- End Session ---
renderer.xr.addEventListener('sessionend', () => {
  console.log('Cognitive3D: VR Session Ended');
  c3d.endSession().then((status) => {
    console.log('Cognitive3D SDK session ended with status:', status);
  });
});
Performance Profiling
To enable automatic performance tracking in Three.js, pass the THREE.WebGLRenderer instance as the second argument when initializing the SDK.
import C3D from '@cognitive3d/analytics';
import settings from './settings';
// Pass the renderer to the C3D constructor
const c3d = new C3D(settings, myThreeJsRenderer);
Once enabled, the profiler automatically records Draw Calls, System Memory, Main Thread Time, Average FPS, and 1% Low FPS. See the Sensors page for full details.
Scene Export
The C3DThreeAdapter provides helper methods for exporting your scene and individual objects. See the Scenes page for full export instructions.
Dynamic Objects
The Three.js adapter supports full dynamic object tracking with automatic snapshot recording, gaze raycasting, and object engagements. See the Dynamic Objects page for the complete setup guide.
If you have a question or any feedback about our documentation please use the Intercom button (purple circle) in the lower right corner of any web page or join our Discord.