Abstract: The present disclosure provides a mixed reality synchronization system (100) for multi-user collaborative interaction and a method for synchronized spatial alignment through hybrid tracking architecture. The system includes physical reference marker (102) captured by camera (104) in mixed reality headsets, with local tracker (110) calculating six degrees of freedom pose data to generate marker-locked coordinate systems. Network synchronizer (112) distributes model transformations as data packets while synchronization coordinator (218) simultaneously processes marker-locked coordinate systems and shared transformations through matrix multiplication operations. Unlike single-layer tracking systems producing spatial inconsistencies, this dual-layered approach achieves sub-centimeter accuracy through marker-based local tracking combined with cloud-global synchronization. The hybrid architecture ensures all users view three-dimensional models at identical spatial positions relative to physical reference marker (102) while enabling gesture-based manipulation, virtual clipping planes, and real-time annotations for collaborative examination.
Description:
TECHNICAL FIELD
[0001] The present disclosure relates to the field of mixed reality systems and spatial synchronization technologies. More particularly, the present disclosure relates to a marker-based mixed reality collaboration system for synchronized multi-user interaction with three-dimensional models.
BACKGROUND
[0002] The following description of the related art is intended to provide background information pertaining to the field of disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0003] Mixed reality environments enable multiple users to visualize and interact with three-dimensional digital content overlaid onto physical spaces. Collaborative applications require participants to perceive virtual objects at consistent spatial positions relative to their physical environment, ensuring shared understanding of digital content placement and orientation across different viewing perspectives.
[0004] Existing mixed reality systems face challenges in maintaining precise spatial alignment when multiple users interact with the same virtual content from different positions and orientations. Current approaches typically rely on markerless tracking or single-layer synchronization methods, which often result in spatial inconsistencies where users perceive virtual objects at different locations relative to their physical reference frames, leading to coordination difficulties during collaborative tasks requiring precise spatial accuracy.
[0005] Therefore, there exists a requirement for a mixed reality system that provides consistent spatial anchoring across multiple devices, enables precise synchronization of virtual content transformations between users, and delivers reliable collaborative interaction capabilities while overcoming the spatial alignment limitations inherent in conventional tracking approaches.
OBJECTS OF THE PRESENT DISCLOSURE
[0006] Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are listed herein below.
[0007] An object of the present disclosure is to provide a mixed reality synchronization system that enables precise spatial alignment of three-dimensional models across multiple devices through physical reference markers and hybrid synchronization architecture.
[0008] Another object of the present disclosure is to provide a collaborative interaction system that maintains consistent spatial reference frames between users while enabling real-time synchronization of model transformations with sub-centimeter accuracy and sub-50ms latency.
[0009] Yet another object of the present disclosure is to provide a marker-based tracking mechanism that establishes unified coordinate systems through six degrees of freedom pose calculation combined with network-based state distribution across connected mixed reality devices.
SUMMARY
[0010] This section is provided to introduce certain objects and aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0011] In an aspect, the present disclosure provides a mixed reality synchronization system including at least one physical reference marker having contrasting visual features and a plurality of mixed reality headsets each including at least one camera positioned to capture the at least one physical reference marker. The system includes a local tracker connected to the at least one camera to calculate six degrees of freedom pose data and generate a marker-locked coordinate system using the calculated parameters as a transformation matrix, a network synchronizer to share model transformations as data packets between the plurality of mixed reality headsets, and a synchronization coordinator to simultaneously process the marker-locked coordinate system and the shared model transformations through matrix multiplication operations to maintain a unified spatial reference frame, where a three-dimensional model rendered by each mixed reality headset maintains identical spatial position and orientation relative to the at least one physical reference marker.
[0012] In another aspect, the present disclosure provides a method for mixed reality synchronization including capturing images of at least one physical reference marker, calculating six degrees of freedom pose data from the captured images, generating a marker-locked coordinate system using the calculated parameters as a transformation matrix, sharing model transformations as data packets between mixed reality headsets, processing simultaneously the marker-locked coordinate system and the shared model transformations through matrix multiplication operations to maintain a unified spatial reference frame, and rendering a three-dimensional model at each mixed reality headset using the unified spatial reference frame.
[0013] Various objects, features, aspects and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.
BRIEF DESCRIPTION OF DRAWINGS
[0014] The accompanying drawings are included to provide a further understanding of the present disclosure and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure. The diagrams are for illustration only, which thus is not a limitation of the present disclosure.
[0015] FIG. 1 illustrates an exemplary mixed reality synchronization system architecture, in accordance with an embodiment of the present disclosure.
[0016] FIG. 2 illustrates an exemplary control unit with processing components, in accordance with an embodiment of the present disclosure.
[0017] FIG. 3 illustrates an exemplary synchronization coordinator architecture, in accordance with an embodiment of the present disclosure.
[0018] FIG. 4 illustrates an exemplary flow diagram for a mixed reality synchronization method, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION
[0019] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth in the appended claims.
Definitions:
Mixed Reality: A technology that can merge physical and virtual environments, enabling real and virtual objects to interact in real-time through head-mounted display devices that overlay digital content onto the physical world.
Physical Reference Marker: A visual marker having contrasting visual features, including but not limited to, grid patterns of black and white squares, QR codes, or other high-contrast patterns that can serve as spatial anchor points for establishing coordinate systems.
Hybrid Synchronization Architecture: A dual-layered system that can combine marker-based local tracking for spatial accuracy with cloud-based global synchronization for state consistency across multiple devices.
Transformation Matrix: A mathematical representation that can encode spatial transformations including, but not limited to, translation, rotation, and scale operations derived from six degrees of freedom pose data.
Module: As used herein, the term "module" refers to functional components that may be implemented in hardware, software, firmware, or any combination thereof, including processing circuitry, software modules, associated memory, and interface circuits that implement specific functionalities within system architecture.
[0020] An aspect of the present disclosure relates to a mixed reality synchronization system including at least one physical reference marker having contrasting visual features and a plurality of mixed reality headsets, including but not limited to HoloLens 2 or similar devices, each including at least one camera positioned to capture the at least one physical reference marker. The system can include a local tracker connected to the at least one camera to calculate six degrees of freedom pose data and generate a marker-locked coordinate system serving as a consistent physical anchoring point for rendering holographic models, including but not limited to anatomical models for planning procedures. The system can include a network synchronizer to share model transformations as data packets between the plurality of mixed reality headsets ensuring transformations performed by one user are immediately reflected in every other user's field of view, and a synchronization coordinator to simultaneously process the marker-locked coordinate system and the shared model transformations maintaining sub-centimeter accuracy ranging from 0.1 mm to 10 mm and sub-50ms latency ranging from 1 ms to 50 ms.
[0021] Various embodiments of the present disclosure are described using FIGs. 1 to 4.
[0022] FIG. 1 illustrates an exemplary mixed reality synchronization system architecture, in accordance with an embodiment of the present disclosure.
[0023] In an embodiment, referring to FIG. 1, a mixed reality synchronization system (100) can include a physical reference marker (102), a camera (104), hand sensors (106), a control unit (108), a local tracker (110), a network synchronizer (112), a display controller (114), a user interface (116), a cloud anchor service (118), a holographic display (120), and a storage system (122). The system (100) can establish a dual-layered synchronization architecture combining marker-local and cloud-global synchronization, where marker-based tracking provides local spatial accuracy while cloud-based anchoring ensures global synchronization of model state across devices. The hybrid approach can overcome limitations of markerless or single-layer tracking systems that often result in spatial inconsistencies where users perceive virtual objects at different locations relative to their physical reference frames.
[0024] In an embodiment, the physical reference marker (102) can function as a consistent physical anchoring point by providing high-contrast image markers that can be processed through marker tracking SDKs, including but not limited to Vuforia SDK, ARCore, or similar frameworks. The physical reference marker can operate by presenting contrasting visual features arranged in predetermined configurations that enable calculation of six degrees of freedom (6DOF) pose including three translational parameters (X, Y, Z) ranging from -10 meters to +10 meters and three rotational parameters (pitch, yaw, roll) covering full 360-degree ranges. The marker can serve as a unified spatial reference where all users, regardless of their position or orientation, view and interact with the same three-dimensional model in exactly the same spatial frame of reference, enabling collaborative examination of complex structures including but not limited to anatomical models for pre-operative planning.
[0025] In an embodiment, the camera (104) positioned in each mixed reality headset can operate as a visual input device capturing images at rates ranging from 30 Hz to 240 Hz or higher for real-time marker tracking. The camera can function by providing input to marker tracking SDKs that process high-contrast patterns, maintaining calibration for accurate 6DOF pose calculation, and supporting various lighting conditions from 10 lux to 10,000 lux ranges. The camera can enable robust tracking by capturing marker images that serve as foundation for the marker-locked coordinate system, providing consistent reference across multiple viewing angles from 0 to 360 degrees, and maintaining tracking stability during user movement at speeds from 0.1 m/s to 5 m/s or higher.
[0026] In an embodiment, the local tracker (110) connected to the camera (104) can operate as the marker-local component of the hybrid synchronization architecture by calculating real-time 6DOF pose data from captured marker images. The local tracker can function by processing marker images through feature extraction to identify contrasting patterns, computing pose parameters that define marker position and orientation in camera space, and generating transformation matrices that establish the marker-locked coordinate system. The local tracker can provide local spatial accuracy by maintaining tracking precision within sub-centimeter ranges from 0.1 mm to 10 mm, updating pose calculations at rates from 30 Hz to 120 Hz or higher, and serving as the foundation for spatial alignment that ensures all users view models at identical positions relative to the physical marker regardless of their viewing angle.
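By way of non-limiting illustration, the following minimal Python sketch shows one way the six degrees of freedom pose data described above may be assembled into a 4x4 transformation matrix for the marker-locked coordinate system. The function name, the pitch-yaw-roll rotation order, and the example values are illustrative assumptions rather than requirements of the system.

```python
import numpy as np

def marker_pose_to_matrix(tx, ty, tz, pitch, yaw, roll):
    """Build a 4x4 marker-to-camera transformation matrix from 6DOF pose.

    Translations are in metres; rotations are in radians. The rotation
    order used here (roll about Z, then pitch about X, then yaw about Y)
    is only one possible convention, assumed for illustration.
    """
    cx, sx = np.cos(pitch), np.sin(pitch)   # rotation about X
    cy, sy = np.cos(yaw),   np.sin(yaw)     # rotation about Y
    cz, sz = np.cos(roll),  np.sin(roll)    # rotation about Z

    rot_x = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    rot_y = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rot_z = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])

    transform = np.eye(4)
    transform[:3, :3] = rot_y @ rot_x @ rot_z   # combined rotation
    transform[:3, 3] = [tx, ty, tz]             # translation
    return transform

# Example: marker 0.5 m in front of the camera, rotated 90 degrees in yaw.
print(marker_pose_to_matrix(0.0, 0.0, 0.5, 0.0, np.pi / 2, 0.0))
```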
[0027] In an embodiment, the network synchronizer (112) can operate as the cloud-global component of the hybrid synchronization architecture by managing real-time data exchange for collaborative interactions. The network synchronizer can function through integration with cloud services including but not limited to Azure Spatial Anchors, Google Cloud Anchors, or similar platforms for global synchronization, and real-time networking frameworks including but not limited to Photon Unity Networking (PUN), Mirror, or similar systems for data packet transmission. The network synchronizer can ensure that transformations, annotations, and modifications performed by one user are immediately reflected in every other user's field of view by transmitting model state updates with latencies ranging from 1 ms to 50 ms, synchronizing position, rotation, and scale parameters across all connected devices, and maintaining consistency even during dynamic collaboration with 2 to 100 or more simultaneous users. The network synchronizer (112) operates at the system level and coordinates with a network synchronizer module (220) within the control unit (108) for implementation of real-time data exchange.
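As a purely illustrative sketch of how a model-transformation update might be serialized into a data packet for distribution between headsets, the following Python example packs position, rotation, and scale parameters into a compact binary record. The field layout, field names, and encoding are assumptions for illustration and do not represent the wire format of Photon Unity Networking or any other named framework.

```python
import struct
import time

# Assumed layout: sender id, timestamp, position (x, y, z),
# rotation quaternion (x, y, z, w), uniform scale.
PACKET_FORMAT = "<I d 3f 4f f"

def pack_transform(sender_id, position, rotation_quat, scale):
    """Serialize one model-transformation update into a compact packet."""
    return struct.pack(PACKET_FORMAT, sender_id, time.time(),
                       *position, *rotation_quat, scale)

def unpack_transform(packet):
    """Deserialize a packet back into its fields on a receiving headset."""
    fields = struct.unpack(PACKET_FORMAT, packet)
    return {"sender_id": fields[0], "timestamp": fields[1],
            "position": fields[2:5], "rotation": fields[5:9],
            "scale": fields[9]}

packet = pack_transform(1, (0.1, 0.0, 0.5), (0.0, 0.0, 0.0, 1.0), 2.0)
print(unpack_transform(packet))
```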
[0028] FIG. 2 illustrates an exemplary control unit with processing components, in accordance with an embodiment of the present disclosure.
[0029] In an embodiment, referring to FIG. 2, the control unit (108) can operate as a processing platform coordinating the dual-layered synchronization system through integrated components. The control unit (108) can include processor(s) (202), memory (204), interface(s) (206), a processing engine (208), and a database (210). The processing engine (208) can include specialized modules for collaborative interaction: a feature extraction engine (212) for marker pattern processing, a pose estimation processor (214) for 6DOF calculation, a hand gesture detector (216) for controller-free manipulation, a synchronization coordinator (218) for hybrid synchronization management, a network synchronizer module (220) for real-time data exchange, a transform module (222) for model manipulations, a clipping plane generator (224) for cross-sectional visualization, an error calculator (226) for alignment validation, and an annotation module (228) for collaborative marking.
[0030] In an embodiment, the processor(s) (202) can operate by executing parallel processing for the dual-layered synchronization system, handling both marker-local tracking and cloud-global synchronization simultaneously. The processor(s) can function by computing transformation matrices at rates supporting sub-50ms latency requirements, processing hand tracking data for gesture-based manipulation without controllers, and managing real-time synchronization across multiple collaborative sessions. The processor(s) can optimize performance for high-stakes collaborative environments by prioritizing marker tracking for spatial accuracy, implementing prediction modules to compensate for network latency, and maintaining synchronization even during complex manipulations including scaling from 0.1x to 10x, rotation through full 360-degree ranges, and translation across workspace volumes from 1 cubic meter to 1000 cubic meters or larger.
[0031] In an embodiment, the memory (204) can operate by storing critical synchronization data for the hybrid architecture including transformation matrices from marker tracking, cloud anchor data from spatial anchor services, and real-time state information from collaborative sessions. The memory can function by maintaining buffers for incoming transformations to handle network variations, storing annotation data including virtual pen drawings and measurement markers, and preserving clipping plane parameters for synchronized cross-sectional views. The memory can support collaborative features by caching model data ranging from 1 MB to 10 GB for complex anatomical structures, maintaining user avatar positions for multi-user awareness, and storing measurement data for dimensional analysis of three-dimensional models.
[0032] In an embodiment, the feature extraction engine (212) can operate by processing high-contrast marker images to extract features for 6DOF pose calculation, implementing modules optimized for contrasting visual patterns including but not limited to corner detection, edge extraction, or pattern matching. The feature extraction engine can function by identifying marker features within 1 ms to 10 ms processing windows, extracting sufficient feature points ranging from 4 to 100 or more for robust pose estimation, and providing feature data that enables sub-centimeter tracking accuracy. The engine can enhance marker tracking reliability by adapting to lighting variations while maintaining feature detection, filtering noise while preserving marker pattern integrity, and supporting multiple marker types for extended tracking ranges.
[0033] In an embodiment, the pose estimation processor (214) connected to the feature extraction engine (212) can operate by computing the transformation matrix from extracted features. The pose estimation processor can function by implementing perspective-n-point modules for pose calculation, refining pose estimates through iterative optimization ranging from 1 to 100 iterations, and generating the transformation matrix representing marker pose in camera space. The processor can ensure pose accuracy by applying temporal filtering across 2 to 60 frames, validating pose consistency through reprojection error analysis, and producing the six degrees of freedom pose data used by the local tracker (110).
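By way of non-limiting illustration, the following Python sketch shows a perspective-n-point pose estimate with reprojection-error validation of the kind described above, here using OpenCV. The marker size, camera intrinsics, and corner ordering are placeholder assumptions for illustration only.

```python
import numpy as np
import cv2

# Placeholder values: a 10 cm square marker and an idealised pinhole camera.
MARKER_SIZE = 0.10  # metres
object_points = np.array([[-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
                          [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
                          [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
                          [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0]], dtype=np.float32)
camera_matrix = np.array([[800.0, 0.0, 640.0],
                          [0.0, 800.0, 360.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

def estimate_marker_pose(image_corners):
    """Estimate 6DOF marker pose from detected corner pixels via solvePnP.

    image_corners: (4, 2) float32 array of detected marker corners in pixels.
    Returns rotation vector, translation vector and mean reprojection error.
    """
    ok, rvec, tvec = cv2.solvePnP(object_points, image_corners,
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    # Validate the estimate by reprojecting the model points and measuring
    # the mean pixel error; a large error suggests a poor detection.
    projected, _ = cv2.projectPoints(object_points, rvec, tvec,
                                     camera_matrix, dist_coeffs)
    error = np.mean(np.linalg.norm(projected.reshape(-1, 2) - image_corners, axis=1))
    return rvec, tvec, error
```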
[0034] In an embodiment, the hand gesture detector (216) connected to the hand sensors (106) can operate by enabling intuitive gesture-based model manipulation designed for sterile environments where controllers cannot be used. The hand gesture detector can function by recognizing manipulation gestures including but not limited to pinch for grabbing, two-handed scaling, rotation gestures, and reset commands, converting hand positions into transformation parameters without requiring physical controllers, and enabling natural interaction suitable for environments requiring sterility including but not limited to operating rooms or clean rooms. The detector can support collaborative manipulation by allowing multiple users to simultaneously interact with models, providing visual feedback showing hand positions to other users, and preventing conflicting manipulations through gesture priority systems.
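As a minimal illustration of how gestures such as pinch-to-grab and two-handed scaling might be derived from tracked hand positions, the following Python sketch is provided; the joint names, the 2 cm threshold, and the scaling rule are assumptions for illustration only.

```python
import numpy as np

PINCH_THRESHOLD_M = 0.02  # assumed 2 cm fingertip separation for a pinch

def is_pinching(thumb_tip, index_tip):
    """Return True when thumb and index fingertips are close enough to grab."""
    return np.linalg.norm(np.asarray(thumb_tip) - np.asarray(index_tip)) < PINCH_THRESHOLD_M

def two_hand_scale_factor(left_hand, right_hand, initial_separation):
    """Derive a uniform scale factor from the change in hand separation
    during a two-handed scaling gesture."""
    separation = np.linalg.norm(np.asarray(left_hand) - np.asarray(right_hand))
    return separation / initial_separation

# Example: fingertips 1 cm apart register as a pinch; hands moved apart
# from 0.3 m to 0.6 m scale the model by 2x.
print(is_pinching((0.0, 0.0, 0.0), (0.01, 0.0, 0.0)))
print(two_hand_scale_factor((0.0, 0.0, 0.0), (0.6, 0.0, 0.0), 0.3))
```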
[0035] In an embodiment, the synchronization coordinator (218) connected to both the local tracker (110) and the network synchronizer module (220) can operate as the core component managing the dual-layered marker-local and cloud-global synchronization system. The synchronization coordinator can function by simultaneously processing marker-locked coordinate systems from the local tracker (110) providing spatial accuracy and shared model transformations from the network synchronizer module (220) ensuring global consistency, implementing matrix multiplication operations to combine local and global transformations, and maintaining unified spatial reference frames with origin at the physical reference marker (102). The coordinator can resolve synchronization challenges by implementing weighted averaging when discrepancies between local and global coordinates exceed thresholds ranging from 1 mm to 10 mm, prioritizing marker-based alignment for maintaining sub-centimeter accuracy, and ensuring all users experience identical model positioning regardless of their physical location or viewing angle.
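By way of non-limiting illustration, the matrix multiplication step performed by the synchronization coordinator may be sketched as follows in Python; the function and variable names are illustrative assumptions.

```python
import numpy as np

def compose_render_transform(camera_from_marker, marker_from_model):
    """Combine the two layers of the hybrid architecture.

    camera_from_marker: 4x4 matrix from the local tracker (marker-locked frame).
    marker_from_model:  4x4 matrix from the network synchronizer (shared model
                        position, rotation, and scale expressed in marker space).
    The product places the model in each headset's camera space, so every
    device renders it at the same pose relative to the physical marker.
    """
    return camera_from_marker @ marker_from_model

# Example: model shifted 0.2 m along the marker's X axis in the shared frame.
marker_from_model = np.eye(4)
marker_from_model[0, 3] = 0.2
print(compose_render_transform(np.eye(4), marker_from_model))
```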
[0036] In an embodiment, the clipping plane generator (224) can operate by creating synchronized virtual clipping planes that allow users to expose and examine internal structures of three-dimensional models. The clipping plane generator can function by generating 2D slicing planes that can be moved through models to reveal cross-sectional anatomy, synchronizing plane position and orientation across all connected devices in real-time, and enabling collaborative examination of internal features critical for planning procedures. The generator can enhance collaborative analysis by supporting multiple simultaneous clipping planes ranging from 1 to 10 or more, allowing any user to control plane movement with changes reflected immediately for all participants, and maintaining plane synchronization with latencies under 50 ms ensuring smooth collaborative interaction.
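As a minimal illustration, a clipping plane can be represented by a point and a normal, with a signed-distance test deciding which mesh vertices remain visible; the Python sketch below uses this convention as an assumption for illustration only.

```python
import numpy as np

def clip_mask(vertices, plane_point, plane_normal):
    """Return a boolean mask of vertices on the visible side of a clipping plane.

    A vertex is kept when its signed distance to the plane is non-negative;
    vertices on the other side are culled to expose internal structure.
    """
    normal = np.asarray(plane_normal, dtype=float)
    normal /= np.linalg.norm(normal)
    signed_distance = (np.asarray(vertices) - np.asarray(plane_point)) @ normal
    return signed_distance >= 0.0

# Example: a plane through the origin facing +Z hides everything with z < 0.
verts = np.array([[0.0, 0.0, 0.5], [0.0, 0.0, -0.5]])
print(clip_mask(verts, plane_point=(0, 0, 0), plane_normal=(0, 0, 1)))
```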
[0037] In an embodiment, the annotation module (228) can operate by enabling synchronized manual annotation capabilities where users can draw or scribble directly onto three-dimensional models or in surrounding mixed reality space using virtual pen tools. The annotation module can function by capturing hand-drawn marks, notes, and visual cues in three-dimensional space, rendering annotations that are shared in real-time across all connected devices, and allowing multiple users to collaboratively highlight regions of interest or indicate procedural notes. The module can support collaborative planning by providing virtual pen tools with adjustable colors and thickness ranging from 0.1 mm to 10 mm, maintaining annotation persistence throughout collaborative sessions, and enabling measurement annotations showing distances between anatomical features or model components.
[0038] In an embodiment, the processing engine (208) can further include a measurement module (230) connected to the synchronization coordinator (218) for providing collaborative distance measurement capabilities on three-dimensional models. The measurement module (230) can operate by enabling users to measure distances in millimeters between anatomical features or model components through virtual measurement tools. The measurement module (230) can function by detecting user selection of measurement start and end points on three-dimensional model surfaces, calculating precise distances using three-dimensional coordinate geometry with accuracy ranging from 0.1 mm to 1 mm, and generating visual rulers or markers that are accurately anchored to the selected measurement points. The module can support collaborative measurement by synchronizing measurement data across all connected mixed reality headsets through the network synchronizer module (220), allowing all participants to view or adjust measurement annotations collaboratively, and maintaining measurement persistence throughout collaborative sessions. The measurement module (230) can enhance quantitative analysis by providing measurement tools with selectable units including millimeters, centimeters, and inches, displaying real-time distance calculations as users position measurement endpoints, and storing measurement data for dimensional analysis and decision-making support.
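By way of non-limiting illustration, the distance calculation and unit conversion performed by the measurement module may be sketched as follows in Python; the function name and unit table are illustrative assumptions, while the conversion factors are standard.

```python
import numpy as np

UNIT_FACTORS = {"mm": 1000.0, "cm": 100.0, "in": 39.3701}  # from metres

def measure_distance(start_point, end_point, unit="mm"):
    """Euclidean distance between two selected points on a model surface,
    converted from metres into the requested display unit."""
    metres = np.linalg.norm(np.asarray(end_point) - np.asarray(start_point))
    return metres * UNIT_FACTORS[unit]

# Example: two measurement endpoints 35 mm apart along one axis.
print(measure_distance((0.0, 0.0, 0.0), (0.035, 0.0, 0.0), unit="mm"))
```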
[0039] In an embodiment, the transform module (222) can operate by managing model transformations including scale, rotate, translate, and reset operations synchronized across all users. The transform module can function by processing transformation commands from the hand gesture detector (216), applying transformations to three-dimensional models while maintaining spatial alignment, and broadcasting transformation updates through the network synchronizer module (220). The module can enable collaborative manipulation by supporting simultaneous transformations from multiple users with conflict resolution, maintaining transformation history for undo/redo operations, and ensuring transformations preserve the model's relationship to the physical reference marker (102).
[0040] In an embodiment, the error calculator (226) connected to the synchronization coordinator (218) can operate by computing translational and rotational differences between the marker-locked coordinate system and coordinate data from each mixed reality headset to maintain alignment accuracy. The error calculator can function by measuring alignment errors to detect when recalibration is needed, identifying systematic drift that may affect synchronization quality, and providing metrics for assessing system performance. The calculator can maintain sub-centimeter accuracy by triggering realignment when errors exceed thresholds ranging from 1 mm to 10 mm, compensating for accumulated errors through incremental corrections, and validating that all users maintain consistent spatial reference frames.
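As a minimal illustration of the translational and rotational difference computation described above, the following Python sketch compares two 4x4 poses; the 5 mm realignment threshold is an assumed value within the disclosed 1 mm to 10 mm range.

```python
import numpy as np

def alignment_error(marker_locked_pose, headset_pose):
    """Translational (metres) and rotational (degrees) difference between
    the marker-locked coordinate system and a headset's reported pose."""
    translation_error = np.linalg.norm(marker_locked_pose[:3, 3] - headset_pose[:3, 3])
    # Relative rotation between the two frames; its angle follows from the trace.
    relative_rotation = marker_locked_pose[:3, :3].T @ headset_pose[:3, :3]
    cos_angle = np.clip((np.trace(relative_rotation) - 1.0) / 2.0, -1.0, 1.0)
    rotation_error = np.degrees(np.arccos(cos_angle))
    return translation_error, rotation_error

def needs_realignment(translation_error, threshold_m=0.005):
    """Flag recalibration when drift exceeds an illustrative 5 mm threshold."""
    return translation_error > threshold_m
```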
[0041] FIG. 3 illustrates an exemplary synchronization coordinator architecture, in accordance with an embodiment of the present disclosure.
[0042] In an embodiment, referring to FIG. 3, the synchronization coordinator (218) architecture can operate through parallel processing of local tracker (110) components providing marker-local accuracy and global sync (302) components ensuring cloud-global consistency. The local tracker (110) components can include a marker interface (110-2) for marker detection, an SDK interface (110-4) for integration with tracking libraries including but not limited to Vuforia SDK, a 6DOF processor (110-6) for pose calculation, a local frame generator (110-8) for coordinate system creation, a marker local manager (110-10) for local state management, and a local state buffer (110-12) for transformation storage. The global sync (302) components can include a cloud anchor service interface (302-2) for Azure Spatial Anchors or similar services, a cloud frame manager (302-4) for global coordinate management, an RT comm protocol handler (302-6) for Photon Unity Networking or similar real-time communication, a state broadcaster (302-8) for transformation distribution, a conflict resolver (302-10) for discrepancy resolution, and a global state buffer (302-12) for synchronized state storage.
[0043] In an embodiment, the marker interface (110-2) within the local tracker (110) can operate by managing communication with the physical reference marker (102) detection systems. The marker interface can function by receiving marker detection events from the camera (104) processing, extracting marker identification and quality metrics ranging from 0.0 to 1.0, and providing marker data to the SDK interface (110-4). The interface can support marker tracking by validating marker detection confidence, handling marker switching for extended tracking ranges, and maintaining marker state information for the local tracker (110).
[0044] In an embodiment, the SDK interface (110-4) can operate by integrating with marker tracking SDKs, particularly including but not limited to Vuforia SDK for processing high-contrast image markers. The SDK interface can function by initializing tracking libraries with marker pattern definitions, invoking SDK functions to calculate 6DOF pose from marker images, and receiving pose data with precision supporting sub-centimeter accuracy requirements. The interface can leverage SDK capabilities by utilizing optimized computer vision modules for marker detection, accessing hardware acceleration for real-time performance, and maintaining compatibility with multiple SDK versions for flexibility.
[0045] In an embodiment, the 6DOF processor (110-6) connected to the SDK interface (110-4) can operate by processing six degrees of freedom pose data including three translational and three rotational parameters calculated from marker detection. The 6DOF processor can function by refining raw pose data from the SDK interface (110-4), applying filters to reduce jitter while maintaining responsiveness, and generating stable transformation matrices for the local frame generator (110-8). The processor can ensure tracking quality by validating pose consistency across sequential frames, interpolating during brief marker occlusions lasting 0.1 to 1 second, and maintaining numerical stability for transformation calculations.
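By way of non-limiting illustration, jitter reduction of the kind described above may be implemented as an exponential low-pass filter on translation and spherical interpolation on orientation; the smoothing factor and quaternion convention in the Python sketch below are assumptions for illustration.

```python
import numpy as np

def smooth_translation(previous, current, alpha=0.3):
    """Exponential low-pass filter: a small alpha favours the previous value,
    reducing jitter at the cost of some responsiveness."""
    return (1.0 - alpha) * np.asarray(previous) + alpha * np.asarray(current)

def smooth_rotation(prev_quat, new_quat, alpha=0.3):
    """Spherical linear interpolation between the previous and newly measured
    orientation quaternions (x, y, z, w), blended by the same factor."""
    p = np.asarray(prev_quat, dtype=float)
    q = np.asarray(new_quat, dtype=float)
    dot = np.dot(p, q)
    if dot < 0.0:            # take the shorter arc
        q, dot = -q, -dot
    if dot > 0.9995:         # nearly identical: fall back to normalised lerp
        result = (1.0 - alpha) * p + alpha * q
        return result / np.linalg.norm(result)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1.0 - alpha) * theta) * p + np.sin(alpha * theta) * q) / np.sin(theta)
```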
[0046] In an embodiment, the local frame generator (110-8) connected to the 6DOF processor (110-6) can operate by creating the marker-locked coordinate system from processed pose data. The local frame generator can function by defining coordinate axes aligned with marker orientation, establishing origin at the marker center position, and generating transformation matrices for the synchronization coordinator (218). The generator can maintain frame consistency by ensuring orthogonality of coordinate axes, updating frames at tracking frequency rates from 30 Hz to 120 Hz, and providing the marker-locked coordinate system to the marker local manager (110-10).
[0047] In an embodiment, the cloud anchor service interface (302-2) within the global sync (302) can operate by establishing connections with cloud-based spatial anchor services, particularly including but not limited to Azure Spatial Anchors for global synchronization. The cloud anchor service interface can function by uploading local spatial anchors to cloud infrastructure, downloading shared anchors from other session participants, and maintaining anchor consistency across distributed devices. The interface can enable cloud-global synchronization by managing authentication with cloud services, handling network interruptions with automatic reconnection, and providing anchor data to the cloud frame manager (302-4).
[0048] In an embodiment, the real-time communication (RT comm) protocol handler (302-6) can operate by managing real-time communication through networking frameworks, particularly including but not limited to Photon Unity Networking (PUN) for low-latency data exchange. The RT comm protocol handler can function by serializing transformation data, annotations, and clipping plane parameters into network packets, implementing reliable delivery for critical synchronization data, and managing bandwidth to maintain sub-50ms latency targets. The handler can optimize real-time collaboration by prioritizing time-sensitive transformation updates, compressing data at ratios ranging from 2:1 to 10:1 to reduce network overhead, and providing the data packets to the state broadcaster (302-8).
[0049] In an embodiment, the conflict resolver (302-10) connected to both the local state buffer (110-12) and the global state buffer (302-12) can operate as a critical component of the dual-layered system by reconciling discrepancies between marker-local tracking and cloud-global anchoring. The conflict resolver can function by detecting when local and global coordinates diverge beyond acceptable thresholds, implementing weighted averaging based on tracking confidence and network latency, and generating resolved transformations that balance local accuracy with global consistency. The resolver can maintain spatial fidelity by gradually aligning coordinate systems to prevent jarring corrections, prioritizing marker-based alignment when tracking quality is high, and ensuring the dual-layered system provides both accuracy and consistency benefits.
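As a minimal illustration of the confidence-weighted averaging and gradual correction performed by the conflict resolver, the following Python sketch blends a marker-local position with a cloud-global position; the threshold, weighting scheme, and correction rate are assumptions within the disclosed ranges.

```python
import numpy as np

DIVERGENCE_THRESHOLD_M = 0.005  # illustrative 5 mm threshold

def resolve_position(local_pos, global_pos, local_confidence, correction_rate=0.1):
    """Blend local (marker-based) and global (cloud-anchored) positions.

    When the two layers diverge beyond the threshold, a confidence-weighted
    target is computed and approached gradually to avoid jarring corrections.
    """
    local_pos = np.asarray(local_pos, dtype=float)
    global_pos = np.asarray(global_pos, dtype=float)
    divergence = np.linalg.norm(local_pos - global_pos)
    if divergence <= DIVERGENCE_THRESHOLD_M:
        return local_pos  # trust the marker-based estimate when layers agree
    weight = np.clip(local_confidence, 0.0, 1.0)
    target = weight * local_pos + (1.0 - weight) * global_pos
    # Move only part of the way towards the target each update cycle.
    return local_pos + correction_rate * (target - local_pos)

# Example: 2 cm divergence with high marker confidence is corrected gently.
print(resolve_position((0.0, 0.0, 0.0), (0.02, 0.0, 0.0), local_confidence=0.9))
```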
[0050] In an embodiment, the state broadcaster (302-8) connected to the RT comm protocol handler (302-6) can operate by distributing synchronized state information including model transformations, annotation data, and clipping plane parameters to all connected participants. The state broadcaster can function by packaging state updates into efficient data structures, transmitting updates at rates from 30 Hz to 120 Hz based on change frequency, and ensuring all users receive consistent state information. The broadcaster can support collaborative features by distributing virtual pen annotations as three-dimensional polylines, sharing measurement data with distance and angle information, synchronizing clipping plane positions for collaborative cross-sectional analysis, and distributing measurement annotations with precise distance calculations and endpoint coordinates for collaborative dimensional analysis.
[0051] FIG. 4 illustrates an exemplary flow diagram for a mixed reality synchronization method, in accordance with an embodiment of the present disclosure.
[0052] In an embodiment, referring to FIG. 4, block (402), capturing images of at least one physical reference marker having contrasting visual features, can operate by activating the camera (104) in each mixed reality headset to acquire marker images for the dual-layered synchronization system. The capturing process can function by acquiring images of high-contrast markers designed for robust detection, synchronizing capture timing across multiple headsets within 1 ms to 10 ms windows, and providing image data to both local tracking and cloud anchoring pipelines. The image capture can enable the hybrid synchronization by serving as input for marker-based local tracking providing spatial accuracy, initiating the spatial anchoring process for global synchronization, and establishing the physical reference point that all users share regardless of viewing position.
[0053] In an embodiment, block (404), calculating six degrees of freedom pose data including three translational and three rotational parameters, can operate by processing the captured marker images through tracking SDKs to determine precise spatial relationships. The calculation process can function by extracting marker features through computer vision modules, computing pose parameters that define marker position and orientation, and generating 6DOF data serving as the foundation for the marker-locked coordinate system. The pose calculation can achieve sub-centimeter accuracy by implementing robust estimation resistant to partial occlusions, maintaining tracking stability during user movement, and providing the pose data that ensures all users perceive models at identical spatial positions.
[0054] In an embodiment, block (406), generating a marker-locked coordinate system using the calculated parameters as a transformation matrix, can operate by establishing the local component of the dual-layered synchronization architecture. The generation process can function by creating coordinate frames with the origin at the marker position, aligning axes based on marker orientation, and producing the transformation matrix that converts between marker space and camera space. The coordinate system generation can provide spatial anchoring by establishing consistent reference frames across all devices, enabling precise model positioning relative to the physical marker, and serving as the foundation for spatial alignment in collaborative scenarios.
[0055] In an embodiment, block (408), sharing model transformations including position, rotation, and scale parameters as data packets between the plurality of mixed reality headsets, can operate through real-time networking protocols. The sharing process can function by transmitting transformation updates through frameworks including but not limited to Photon Unity Networking, broadcasting changes to all session participants, and ensuring transformations by one user are immediately visible to others. The model transformation sharing can enable collaborative manipulation by distributing scaling operations ranging from 0.1x to 10x, sharing rotations through full 360-degree ranges, and synchronizing translations across shared workspace volumes.
[0056] In an embodiment, block (410), receiving the transformation matrix from the local tracker and the data packets from the network synchronizer, can operate by collecting inputs for the synchronization coordinator. The reception process can function by buffering local transformation matrices from marker tracking, queuing network packets containing global state updates, and preparing data for simultaneous processing. The data reception can support the dual-layered architecture by maintaining separate channels for local and global data, handling different update rates from marker tracking (30-120 Hz) and network updates (10-60 Hz), and ensuring both data streams remain available for synchronization processing.
[0057] In an embodiment, block (412), processing simultaneously the marker-locked coordinate system and the shared model transformations through matrix multiplication operations, can operate as the core synchronization mechanism of the hybrid system. The processing can function by combining local marker-based transformations with global cloud-anchored states, resolving discrepancies through weighted averaging modules, and maintaining the unified spatial reference frame with its origin at the physical reference marker. The simultaneous processing can achieve the benefits of both approaches by preserving sub-centimeter local accuracy from marker tracking, maintaining global consistency through cloud synchronization, and ensuring all users experience identical model positioning and behavior.
[0058] In an embodiment, block (414), rendering a three-dimensional model at each mixed reality headset using the unified spatial reference frame, can operate by displaying synchronized content that maintains identical spatial position and orientation relative to the physical reference marker. The rendering process can function by applying the unified transformations to complex models including but not limited to anatomical structures for procedure planning, displaying synchronized annotations and measurements from all users, and showing clipping planes that reveal internal structures for collaborative examination. The model rendering can maintain collaborative consistency by ensuring models appear at exactly the same position for all users, updating at rates from 30 fps to 120 fps for smooth interaction, and compensating for display latency through prediction modules.
[0059] The described mixed reality synchronization system (100) presents a hybrid architecture combining marker-local and cloud-global synchronization to overcome limitations of single-layer approaches. The system can achieve sub-centimeter accuracy ranging from 0.1 mm to 10 mm through marker-based tracking while maintaining sub-50ms latency ranging from 1 ms to 50 ms through optimized networking. The dual-layered approach enables precise collaborative examination of complex three-dimensional models including but not limited to anatomical structures, engineering designs, or architectural models. The synchronization coordinator (218) ensures transformations, annotations, and incisions performed by one user are immediately reflected across all connected devices, enabling effective collaboration in high-precision environments.
[0060] The system's collaborative features including gesture-based manipulation, synchronized clipping planes, virtual pen annotations, and real-time measurements can support various applications requiring precise spatial coordination. The hybrid synchronization architecture maintains spatial fidelity even during dynamic collaboration, with the marker-local component providing accuracy and the cloud-global component ensuring consistency. The dual-layered system enables users to collaborate effectively regardless of their physical positions, all viewing and interacting with the same three-dimensional model in exactly the same spatial frame of reference.
[0061] While considerable emphasis has been placed herein on the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
ADVANTAGES OF THE PRESENT DISCLOSURE
[0062] The present disclosure provides a mixed reality synchronization system that eliminates spatial alignment inconsistencies through dual-layered marker-local and cloud-global synchronization architecture, enabling precise collaborative interaction where all users view and manipulate the same three-dimensional model in exactly the same spatial frame of reference regardless of their physical positions.
[0063] The present disclosure provides a mixed reality synchronization system that implements hybrid synchronization combining marker-based tracking for sub-centimeter spatial accuracy with cloud-based anchoring for global state consistency, ensuring transformations, annotations, and modifications performed by one user are immediately reflected across all connected devices with sub-50ms latency.
[0064] The present disclosure provides a mixed reality synchronization system that enables intuitive collaborative examination through gesture-based manipulation, synchronized virtual clipping planes, and real-time annotation tools, supporting high-precision applications while maintaining spatial fidelity during dynamic multi-user interactions without requiring physical controllers.
Claims:
1. A mixed reality synchronization system (100), comprising:
at least one physical reference marker (102) having contrasting visual features;
a plurality of mixed reality headsets, each comprising at least one camera (104) positioned to capture the at least one physical reference marker (102);
a local tracker (110) connected to the at least one camera (104) to receive captured images and calculate six degrees of freedom pose data comprising three translational and three rotational parameters of the at least one physical reference marker (102), the local tracker (110) further generating a marker-locked coordinate system using the calculated parameters as a transformation matrix;
a network synchronizer (112) communicatively coupled to each of the plurality of mixed reality headsets to receive and share model transformations comprising position, rotation, and scale parameters as data packets between the plurality of mixed reality headsets;
a synchronization coordinator (218) connected to both the local tracker (110) and the network synchronizer (112) to receive the transformation matrix from the local tracker (110) and the data packets from the network synchronizer (112), the synchronization coordinator (218) simultaneously processing the marker-locked coordinate system and the shared model transformations through matrix multiplication operations to maintain a unified spatial reference frame with origin at the at least one physical reference marker (102);
wherein a three-dimensional model rendered by each mixed reality headset maintains identical spatial position and orientation relative to the at least one physical reference marker (102) through concurrent operation of the local tracker (110) generating the transformation matrix and the network synchronizer (112) distributing the data packets to all the connected mixed reality headsets.
2. The mixed reality synchronization system (100) as claimed in claim 1, wherein the contrasting visual features of the at least one physical reference marker (102) comprise a grid pattern of black and white squares arranged in a predetermined configuration.
3. The mixed reality synchronization system (100) as claimed in claim 1, wherein the local tracker (110) comprises a feature extraction engine (212) connected to the at least one camera (104) and a pose estimation processor (214) connected to the feature extraction engine (212) to compute the transformation matrix.
4. The mixed reality synchronization system (100) as claimed in claim 1, wherein the network synchronizer (112) comprises:
a cloud anchor service interface (302-2) to establish spatial anchor data in a cloud service (118); and
a real-time communication protocol handler (302-6) connected to the cloud anchor service interface (302-2) to transmit the data packets containing the spatial anchor data and the model transformations between the plurality of mixed reality headsets.
5. The mixed reality synchronization system (100) as claimed in claim 1, wherein the synchronization coordinator (218) comprises a conflict resolver (302-10) to process the marker-locked coordinate system and the shared model transformations when discrepancies between coordinate values exceed a predetermined threshold.
6. The mixed reality synchronization system (100) as claimed in claim 1, further comprising a hand gesture detector (216) connected to hand tracking sensors (106) in each of the plurality of mixed reality headsets, the hand gesture detector (216) converting captured hand positions into position, rotation, and scale parameters.
7. The mixed reality synchronization system (100) as claimed in claim 1, further comprising an error calculator (226) connected to the synchronization coordinator (218) to compute translational differences and rotational differences between the marker-locked coordinate system and coordinate data from each mixed reality headset.
8. The mixed reality synchronization system (100) as claimed in claim 1, wherein the three-dimensional model comprises patient-specific anatomical mesh data, and wherein the model transformations comprise manipulation parameters applied to the anatomical mesh data.
9. The mixed reality synchronization system (100) as claimed in claim 1, further comprising a clipping plane generator (224) connected to the synchronization coordinator (218), the clipping plane generator (224) producing plane intersection data synchronized across the plurality of mixed reality headsets through the network synchronizer (112).
10. A method (400) for mixed reality synchronization, comprising:
capturing (402) images of at least one physical reference marker (102) having contrasting visual features using at least one camera (104) in each of a plurality of mixed reality headsets;
calculating (404) six degrees of freedom pose data comprising three translational and three rotational parameters from the captured images;
generating (406) a marker-locked coordinate system from the calculated six degrees of freedom pose data using the calculated parameters as a transformation matrix;
sharing (408) model transformations comprising position, rotation, and scale parameters as data packets between the plurality of mixed reality headsets through network communication;
receiving (410) the transformation matrix and the data packets at a synchronization coordinator (218);
processing (412) simultaneously the marker-locked coordinate system derived from the transformation matrix and the shared model transformations from the data packets through matrix multiplication operations to maintain a unified spatial reference frame with origin at the at least one physical reference marker (102);
rendering (414) a three-dimensional model at each mixed reality headset using the unified spatial reference frame, wherein the three-dimensional model maintains identical spatial position and orientation relative to the at least one physical reference marker (102) through concurrent generation of the transformation matrix and distribution of the data packets to all the connected mixed reality headsets.