This performance test was conducted on August 17th, 2018. Its purpose was to measure the performance impact of our gaze collection techniques across a variety of scene configurations, relative to a baseline. The following conditions applied to the test:
- Unity 2017.3
- Linear lighting mode
- Multipass rendering mode
- VR enabled with Oculus built-in support
- Full game screen 16:9
- Deep profiling was enabled
- Editor overhead was subtracted from the measured milliseconds
- Only one Unity Editor instance was open
- SteamVR utility not open
- No interactions or gameplay
Below are the three 3D scenes that were tested to measure the performance of gaze capture in the Cognitive3D Unity SDK. We also tested a 360 video (not pictured).
(Scene screenshots: Simple Boxes, Small Supermarket, Full Supermarket)
## Gaze Capture Techniques
Baseline - No CognitiveVR Manager is present in the scene, so no session is started at all.
Command (Default) - Implements a command buffer that renders scene depth before the full rendering pass. This is our default gaze capture technique; it requires no scene modification.
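As a rough illustration of the command-buffer approach (not the SDK's actual implementation), the sketch below attaches a CommandBuffer that copies the camera's resolved depth into a small render target after the depth pass. The class name, target size, and the omitted readback step are all assumptions:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Hypothetical sketch: copy the camera's depth texture into a small
// render target, from which depth under the gaze point could be read back.
[RequireComponent(typeof(Camera))]
public class DepthCaptureSketch : MonoBehaviour
{
    RenderTexture depthCopy;      // assumed small target for CPU readback
    CommandBuffer commandBuffer;

    void OnEnable()
    {
        var cam = GetComponent<Camera>();
        cam.depthTextureMode |= DepthTextureMode.Depth; // ensure a depth texture is rendered

        depthCopy = new RenderTexture(64, 64, 0, RenderTextureFormat.RFloat);

        commandBuffer = new CommandBuffer { name = "Gaze Depth Copy" };
        // Copy the resolved depth into our readable target after the depth pass.
        commandBuffer.Blit(BuiltinRenderTextureType.ResolvedDepth, depthCopy);
        cam.AddCommandBuffer(CameraEvent.AfterDepthTexture, commandBuffer);
    }

    void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.AfterDepthTexture, commandBuffer);
        commandBuffer.Release();
        depthCopy.Release();
    }
}
```

Because the depth pass already runs for the camera, this approach works without any changes to scene content, which is why it can serve as a default.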
Physics - Implements a physics raycast into the scene. This technique requires scene modification and assumes the developer has configured colliders. It offers the best overall performance, but is not the default because of the scene-modification requirement.
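A minimal sketch of the raycast approach, assuming the HMD camera's transform supplies the gaze origin and direction, and that scene geometry has colliders. The class name and maximum distance are placeholders:

```csharp
using UnityEngine;

// Hypothetical sketch: cast a ray along the camera's forward vector each frame.
public class GazeRaycastSketch : MonoBehaviour
{
    const float MaxGazeDistance = 100f; // placeholder range

    void Update()
    {
        // Gaze ray from the HMD camera through the center of the view.
        var ray = new Ray(transform.position, transform.forward);

        // Only objects with colliders can be hit, hence the scene requirement.
        if (Physics.Raycast(ray, out RaycastHit hit, MaxGazeDistance))
        {
            // hit.point is the world-space gaze position;
            // hit.collider identifies the gazed-at object.
            Debug.Log($"Gazing at {hit.collider.name} ({hit.point})");
        }
    }
}
```

A single raycast per frame is cheap, which matches the low overhead this technique shows in the table below; the cost is that every object the developer wants gaze data for must carry a collider.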
Times are in milliseconds. Each value below is the maximum recorded in the Unity Profiler after the scene had played for 30 seconds; values in parentheses are the overhead relative to the baseline.
| Scene | Baseline | Command (Default) | Physics |
|---|---|---|---|
| 360 Video | 2.24 | 3.96 (+1.72) | 2.63 (+0.39) |
| Simple Boxes | 2.55 | 4.52 (+1.97) | 3.24 (+0.69) |
| Small Supermarket | 7.43 | 8.60 (+1.17) | 7.43 (+0.00) |
| Full Supermarket | 12.37 | 13.29 (+0.92) | 12.37 (+0.00) |