
Dynamic Objects

The Dynamic Object component allows you to track the position and state of objects during the Participant's session. Dynamic Objects can be used to track non-player characters (NPCs), other Participants in multi-user apps, and interactive objects (anything that moves, changes orientation, or changes size).

More information is available on the Cognitive3D website.

Using Dynamic Objects in the Cognitive3D visionOS SDK

To use Dynamic Objects, you will need to add two files to your project in Xcode; these modules are distributed with the C3D framework:

  • DynamicObjectSystem.swift
  • DynamicComponent.swift

These files are not bundled directly in the SDK because of a limitation with Reality Composer Pro (RCP); at this time, the DynamicComponent module needs to be placed inside the Reality Composer Pro folder.

Configuring the SDK

Note: to use the Dynamic Object features, you need to register these two classes at runtime:

DynamicComponent.registerComponent()

DynamicObjectSystem.registerSystem()
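
These registrations only need to happen once, before any scene containing Dynamic Objects is loaded. A minimal sketch, assuming the calls are made from a setup helper invoked in the app's initializer (the name cognitiveSDKInit matches the app example later on this page; the rest of the Cognitive3D configuration is omitted):

func cognitiveSDKInit() {
    // Register the custom component and its update system with RealityKit
    // before any RealityView content is created.
    DynamicComponent.registerComponent()
    DynamicObjectSystem.registerSystem()

    // ... remaining Cognitive3D session configuration goes here.
}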

Using the SDK with a RealityView & Reality Composer Pro

The following code snippet shows how Dynamic Objects can be registered in a RealityView. The entities are added to a scene using RCP, which saves the scene as a USD file.

func configureDynamicObjects(rootEntity: Entity) {

    guard let objManager = Cognitive3DAnalyticsCore.shared.dynamicDataManager else {
        return
    }

    // get a list of all the dynamic objects
    let dynamicEntities = findEntitiesWithComponent(rootEntity, componentType: DynamicComponent.self)
    for (_, comp) in dynamicEntities {
        // Register the object with the C3D SDK. This method will post the object's information.
        objManager.registerDynamicObject(id: comp.dynamicId, name: comp.name, mesh: comp.mesh)
    }
}

func findEntitiesWithComponent<T: Component>(
    _ entity: Entity, componentType: T.Type, isDebug: Bool = false
) -> [(entity: Entity, component: T)] {
    var foundEntities: [(entity: Entity, component: T)] = []

    func searchEntities(_ currentEntity: Entity, depth: Int = 0) {
        // Optionally print the entity hierarchy while searching.
        if isDebug {
            let indent = String(repeating: "    ", count: depth)
            print("\(indent)\(currentEntity.name)")
        }

        // Check if the entity has the specified component
        if let component = currentEntity.components[componentType] {
            foundEntities.append((entity: currentEntity, component: component))
        }

        // Recursively search children
        for child in currentEntity.children {
            searchEntities(child, depth: depth + 1)
        }
    }

    // Start the search
    searchEntities(entity)

    return foundEntities
}

The configuration function is called from a RealityView once the scene content has been loaded:

RealityView { content, attachments in
    // Add the initial RealityKit content
    if let immersiveContentEntity = try? await Entity(
        named: appModel.sceneInfo.usdName, in: realityKitContentBundle)
    {
        contentEntity = immersiveContentEntity
        content.add(immersiveContentEntity)

        let core = Cognitive3DAnalyticsCore.shared
        // This is required to perform ray casts & collision detection with
        // gaze tracking & dynamic objects.
        core.contentEntity = contentEntity

        configureDynamicObjects(rootEntity: immersiveContentEntity)
    }
}

DynamicObjectSystem

The DynamicObjectSystem provides the update loop in which Dynamic Objects are registered for, or removed from, tracking. While the loop is running, the system reads the position, orientation, and scale from each RealityKit entity that has a DynamicComponent and records the data to be posted to the Cognitive3D analytics platform.

public func update(context: SceneUpdateContext) {
    // Process all entities with DynamicComponent during rendering
    for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
        guard let component = entity.components[DynamicComponent.self] else { continue }

        // Check if the entity is no longer part of the hierarchy or is inactive
        if entity.parent == nil || !entity.isActive {
            handleEnabledStateChange(entity, component: component)
            continue // Skip further processing for this entity
        }

        let properties = [["enabled" : AnyCodable(true)]]

        dynamicManager.recordDynamicObject(
            id: component.dynamicId,
            position: entity.position,
            rotation: entity.orientation,
            scale: entity.scale,
            positionThreshold: component.positionThreshold,
            rotationThreshold: component.rotationThreshold,
            scaleThreshold: component.scaleThreshold,
            updateRate: component.updateRate,
            properties: properties
        )
    }
}
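
Self.query above is an EntityQuery that matches entities carrying a DynamicComponent, and handleEnabledStateChange is not shown. As a hypothetical sketch, assuming the intended behavior is to record one final snapshot with "enabled" set to false when an entity leaves the hierarchy or becomes inactive (the shipped system may also stop tracking the object):

private func handleEnabledStateChange(_ entity: Entity, component: DynamicComponent) {
    // Record a final snapshot marking the dynamic object as disabled.
    let properties = [["enabled": AnyCodable(false)]]

    dynamicManager.recordDynamicObject(
        id: component.dynamicId,
        position: entity.position,
        rotation: entity.orientation,
        scale: entity.scale,
        positionThreshold: component.positionThreshold,
        rotationThreshold: component.rotationThreshold,
        scaleThreshold: component.scaleThreshold,
        updateRate: component.updateRate,
        properties: properties
    )
}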

DynamicComponent

The dynamic component is attached to a RealityKit entity either in RCP or in code at runtime. The custom component's properties are used to:

  • Associate the object by an ID with the Cognitive3D analytics session
  • Set parameters for the various update thresholds

When using the component with RCP, the source code needs to be copied into the RCP project folder to enable the editor to automatically load the custom component and make it available in the IDE.

Dynamic Component in RCP

See the Apple developer documentation for more details on custom components and systems.

Using the custom component in code

struct ContentView: View {
    var body: some View {
        RealityView { content in
            // Create an entity and add the custom component
            let dynamicEntity = ModelEntity(mesh: .generateSphere(radius: 0.1))
            var component = DynamicComponent()
            component.name = "Obj1"
            component.mesh = "Obj1"
            component.dynamicId = "107AAA776D144C2C9796B84A9DD3F113"
            dynamicEntity.components.set(component)

            // Add the entity to the scene
            content.add(dynamicEntity)
        }
    }
}
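
Note that an entity created in code this way, unlike one loaded from an RCP scene and handled by configureDynamicObjects above, still needs to be registered with the data manager so its data is posted. A minimal sketch reusing the registerDynamicObject call shown earlier:

// Assumes the Cognitive3D core has already been configured at this point.
if let objManager = Cognitive3DAnalyticsCore.shared.dynamicDataManager {
    objManager.registerDynamicObject(id: component.dynamicId, name: component.name, mesh: component.mesh)
}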

Using dynamic objects with SwiftUI views

In visionOS, it is useful to know the position of SwiftUI views and to record gazes that fall on them.

This is achieved by using GeometryReader3D & GeometryProxy3D with dynamic objects. A PositionTrackerView is added to the content view for each window in an application.

This also requires adding a data model and corresponding code in a RealityView to update the entities in the scene. Each such entity has the DynamicComponent added to it in code or in Reality Composer Pro.

Dynamic Proxy flow chart

In the application main

@main
struct Dynamic_windows_multipleApp: App {
    @Environment(\.openWindow) private var openWindow
    @Environment(\.scenePhase) private var scenePhase

    @State private var hasLaunched = false

    @State private var appModel = AppModel()

    /// model for working with proxy dynamic objects
    @State private var dynamicViewsModel = DynamicObjectsModel()

    init() {
        cognitiveSDKInit()
    }

    var body: some Scene {
        // The primary window.
        WindowGroup("Primary") {
            ContentView()
                .environment(appModel)
                .environment(dynamicViewsModel)
        }.onChange(of: scenePhase) {
            if !hasLaunched {
                hasLaunched = true
                openWindow(id: "Secondary")
            }
        }

        // The second window.
        WindowGroup("Secondary", id: "Secondary") {
            // The second content view that contains a Window Position Tracker View.
            Content2View()
                .environment(appModel)
                .environment(dynamicViewsModel)
        }

        ImmersiveSpace(id: appModel.immersiveSpaceID) {
            ImmersiveView()
                .environment(appModel)
                .environment(dynamicViewsModel)
                .onAppear {
                    appModel.immersiveSpaceState = .open
                }
                .onDisappear {
                    appModel.immersiveSpaceState = .closed
                }
        }
        .immersionStyle(selection: .constant(.mixed), in: .mixed)
    }
}

Adding a PositionTrackerView into a content view

Note the use of a ZStack and the placement of the tracker view at the top of the stack. If the position tracker is at the bottom layer of the ZStack, it will cause the other views to float above the window, because the tracker view has depth as a result of using a GeometryReader3D.

#Preview {
    ZStack {
        VStack {
            Text("using a Z Stack")
                .padding(.top, 50)
            Divider()
            Text("Testing...")
                .padding(.bottom, 50)
        }

        PositionTrackerView(dynamicId: "ABCD1234", displayMode: .hidden)
            .environment(ProxyDynamicObjectsModel())
    }.glassBackgroundEffect()
}
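
PositionTrackerView itself is distributed with the sample code. As a rough, hypothetical sketch of the underlying idea, a view can use GeometryReader3D to read its own 3D frame and store it in the model's viewGeometries dictionary, which the RealityView update code below consumes (the view name, closure details, and stored type here are illustrative, not the SDK's actual implementation):

// A hypothetical sketch of the geometry-tracking idea behind PositionTrackerView.
struct WindowGeometryReader: View {
    @Environment(DynamicObjectsModel.self) private var dynamicViewsModel
    let dynamicId: String

    var body: some View {
        GeometryReader3D { proxy in
            Color.clear
                .onChange(of: proxy.frame(in: .global), initial: true) { _, frame in
                    // Store the window's 3D frame so the RealityView update pass
                    // can move the proxy dynamic object to match this view.
                    dynamicViewsModel.viewGeometries[dynamicId] = frame
                }
        }
    }
}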

In the view that displays the RealityKit entities

@Environment(DynamicObjectsModel.self) private var dynamicViewsModel

// In the RealityView's update closure:
update: { content, _ in
    if isRealityViewReady {
        updateDynamicObject(content: content)
    }
}
/// Update the proxy dynamic objects with data from the SwiftUI view's transform.
private func updateDynamicObject(content: RealityViewContent) {
    dynamicEntities.forEach { entity in
        guard let component = entity.components[DynamicComponent.self],
            let windowModel = dynamicViewsModel.viewModels[component.dynamicId]
        else {
            // It is possible there are dynamic objects in the scene that are not associated
            // with a SwiftUI view & thus a window model.
            return
        }

        // Get geometry for this entity
        let geometry = dynamicViewsModel.viewGeometries[component.dynamicId]

        // Apply transforms using the model
        windowModel.applyTransformsToEntity(
            entity,
            using: metricsConverter,
            geometry: geometry,
            useOffset: useOffset
        )
    }
}
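
The dynamicEntities collection, the isRealityViewReady flag, metricsConverter, and useOffset are assumed to be properties on the view. A sketch of how the entity list could be gathered when the RealityView content first loads, reusing the findEntitiesWithComponent helper from earlier on this page (property names here are assumptions):

@State private var dynamicEntities: [Entity] = []
@State private var isRealityViewReady = false

/// Collect the entities that carry a DynamicComponent so the update closure
/// can apply the SwiftUI window transforms to them.
private func collectDynamicEntities(from root: Entity) {
    dynamicEntities = findEntitiesWithComponent(root, componentType: DynamicComponent.self)
        .map { $0.entity }
    isRealityViewReady = true
}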

If you have a question or any feedback about our documentation, please use the Intercom button (purple circle) in the lower right corner of any web page or join our Discord.