
System And Method For Building Multi-Level Cache For A Three Dimensional (3D) Framework

Abstract: The present disclosure relates to the field of memory management systems. More particularly, the present disclosure relates to a system and a method for building a multi-level cache for a three dimensional (3D) framework. The system (102) receives an input data from the one or more computing devices associated with at least one user. The input data corresponds to uploading one or more models in a three-dimensional (3D) framework. Further, the system (102) extracts one or more components from the one or more models and performs a modification on the extracted one or more components based on at least one parameter to obtain one or more transformed models. Furthermore, the system (102) constructs a multi-level cache and updates the multi-level cache based on the one or more transformed models. Furthermore, the system (102) synchronizes the transformed models in the at least one server by using the multi-level cache in the 3D framework. [FIGs. 1 and 3B are reference figures]


Patent Information

Application #
Filing Date
21 September 2024
Publication Number
40/2024
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application
Patent Number
Legal Status
Grant Date
2025-06-24

Applicants

Ctruh Technologies Private Limited
3rd floor, Obeya Tranquil, 1185, 5th Main Rd, Sector 7, HSR Layout, Bengaluru, Karnataka 560102, India

Inventors

1. Subrat Rajkumar Gupta
C/O: Gopal Prasad Gupta, 163B, Gosh Compound, Basharatpur East, Gorakhpur, PO: Basharatpur, Dist: Gorakhpur, Uttar Pradesh - 273004, India
2. Agastya Taraka Vinay Babu
309, Pioneer Kingstown, Singasandara, Bengaluru-560068, Karnataka, India.

Specification

Description: PREAMBLE TO THE DESCRIPTION
The following specification particularly describes the invention and the manner in which it is to be performed.

SYSTEM AND METHOD FOR BUILDING MULTI-LEVEL CACHE FOR A THREE DIMENSIONAL (3D) FRAMEWORK

TECHNICAL FIELD
The present disclosure relates to the field of memory management systems. More particularly, the present disclosure relates to a system and a method for building multi-level cache for a three dimensional (3D) framework.

BACKGROUND
Background description includes information that may be useful in understanding the present disclosure. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed disclosure, or that any publication specifically or implicitly referenced is prior art.
Generally, in the domain of model editing and updating, data caching has long been recognized as a reliable solution for various applications that require efficient data retrieval and utilization. Typically, the cached data includes multimedia elements such as images, files, and scripts, which are automatically stored on a computing device when a user opens an application or visits a website. The caching mechanism allows for rapid loading of the application's or website's information during subsequent accesses, thereby enhancing user experience by reducing load times. Despite these advantages, conventional caching mechanisms face significant challenges, particularly when transmitting highly detailed models over typical user networks. For instance, transmitting a model exceeding 300 megabytes can be unfeasible. While caching mechanisms can mitigate the need for repeated network requests, their effectiveness is substantially reduced when the model is frequently updated. In such scenarios, the cache quickly becomes outdated, necessitating a complete refresh. However, the conventional methods and systems cannot update caches without invalidating the existing data.
Conventionally, applications manage data access by storing an access grant to a cache line in a buffer and determining whether access to this cache line is permitted based on the access grant. If access is denied, any changes to the cache are blocked. However, traditional systems fall short in preventing cache updates from invalidating previously stored cache data, which is a critical requirement for maintaining cache integrity and performance.
Consequently, there is a need in the art for improved, reliable systems and methods, by providing a system and a method for building a multi-level cache for a three dimensional (3D) framework, to overcome at least the aforementioned drawbacks, limitations, and shortcomings associated with the prior art.

OBJECTS OF THE PRESENT DISCLOSURE
Some of the objects of the present disclosure, which at least one embodiment herein satisfies are as listed herein below.
It is an object of the present disclosure to overcome the above drawbacks, limitations, and shortcomings associated with the existing mechanisms, and provide a system and method for building multi-level cache for a three-dimensional (3D) framework.
It is an object of the present disclosure to provide a system and method for building multi-level cache for a 3D framework, which is a simple, real-time, non-intrusive, low-cost, and easy-to-set-up system.
It is an object of the present disclosure to provide a system and method for building multi-level cache for 3D framework, which ensures that the most relevant data is readily available, reducing the need to fetch data from slower storage layers and optimizing memory usage.
It is an object of the present disclosure to provide a system and method for building multi-level cache for a 3D framework, which reduces delays in rendering frequently updated parts, leading to smoother interactions, which is particularly important in interactive applications.
It is an object of the present disclosure to provide a system and method for building multi-level cache for a 3D framework, which provides predictive caching: by analysing usage patterns, the system can predict which parts of the model will be updated frequently and cache them proactively, improving the overall efficiency.
It is an object of the present disclosure to provide a system and method for building multi-level cache for a 3D framework, which provides adaptive caching, where the system can adapt to changing usage patterns, dynamically adjusting the cache strategy to ensure optimal performance.

SUMMARY
This summary is provided to introduce a selection of concepts, in a simple manner, which is further described in the detailed description of the disclosure. This summary is neither intended to identify key or essential inventive concepts of the subject matter nor to determine the scope of the disclosure.
In an aspect, the present disclosure provides a system for building multi-level cache for a three dimensional (3D) framework. The system communicates with one or more computing devices, establishing a secure communication channel over a network. Further, the system receives an input data from the one or more computing devices associated with at least one user. The input data corresponds to uploading one or more models in a three dimensional (3D) framework. The 3D framework comprises at least one of an Artificial Intelligence (AI) based framework and a metaverse based framework. Further, the system extracts one or more components from the one or more models, and performs a modification on the extracted one or more components based on at least one parameter to obtain one or more transformed models. Furthermore, the system constructs a multi-level cache and updates the multi-level cache based on the one or more transformed models. Finally, the system synchronizes the one or more transformed models in the at least one server by using the multi-level cache in the 3D framework.
In another aspect, the present disclosure provides a method for building multi-level cache for a three dimensional (3D) framework. The method includes receiving an input data from the one or more computing devices associated with at least one user. The input data corresponds to uploading one or more models in a three dimensional (3D) framework. The 3D framework includes at least one of an Artificial Intelligence (AI) based framework and a metaverse based framework. The method includes extracting one or more components from the one or more models, and performing a modification on the extracted one or more components based on at least one parameter to obtain one or more transformed models. Further, the method includes constructing a multi-level cache and updating the multi-level cache based on the one or more transformed models. Furthermore, the method includes synchronizing and storing the one or more transformed models in the at least one server by using the multi-level cache in the 3D framework.
Further, in the 3D editor, efficient management of model resources is achieved by segmenting the model into distinct components such as textures, geometry, and state. This segmentation allows for optimized loading, updating, and caching of each resource type independently. Further, the objective of the 3D editor is to manage textures separately for efficient updates and retrieval. Further, textures are extracted from the overall model file and stored as individual texture files. Each texture file represents a specific texture used in the 3D model, such as diffuse, specular, or normal maps. Further, the textures are stored in a dedicated texture cache, organized by texture type and usage. These textures are typically stored in common formats such as PNG or JPEG for efficiency. Further, when a 3D model is loaded, the editor first loads the associated textures from the texture cache. In case a texture is not found in the cache, the texture is fetched from the server or a Content Delivery Network (CDN). Further, on updating a texture, only the relevant texture file is modified or replaced, reducing the need for reloading other model components. Further, the objective of the 3D editor model is to handle the geometric data independently for targeted updates and rendering. Further, the geometric data, including vertices, edges, and faces, is extracted and stored in separate geometry files. These files represent the 3D mesh of the model. Further, the geometry data is cached separately from textures and typically stored in formats such as GLTF or OBJ, which efficiently represent the 3D model structure. Further, the editor retrieves the geometric data from the geometry cache. Further, in case the data is not present in the cache, the data is fetched from the server or CDN. Further, the objective of the invention is to manage the overall state and references of the model, facilitating integration and updates. Further, the state information, including pointers to textures and geometry, is maintained separately.
This state information tracks the model structure, such as which textures are applied to which parts of the geometry. Further, the state data is stored in a dedicated state cache, which references the texture and geometry files. This allows the editor to manage model configurations and relationships efficiently. Further, the editor first retrieves the state data from the state cache and then uses this data to load the associated textures and geometry files. Further, when the state of the model changes, only the state file is updated. The references to textures and geometry remain unchanged unless those components are also modified.
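The resource segmentation described above can be illustrated with a short, hypothetical sketch (the class and function names are illustrative, not part of the disclosed system): each resource type (texture, geometry, state) gets its own cache, so updating one texture does not invalidate cached geometry or state, and the editor resolves a model by loading state first and then its referenced resources.

```python
class SegmentedModelCache:
    """Separate caches per resource type, with server/CDN fallback on miss."""

    def __init__(self, fetch_from_server):
        # fetch_from_server(kind, key) is a stand-in for a server/CDN request.
        self.caches = {"texture": {}, "geometry": {}, "state": {}}
        self.fetch_from_server = fetch_from_server

    def get(self, kind, key):
        cache = self.caches[kind]
        if key not in cache:          # cache miss: fall back to server/CDN
            cache[key] = self.fetch_from_server(kind, key)
        return cache[key]

    def update(self, kind, key, data):
        # Only the touched resource is replaced; other segments stay valid.
        self.caches[kind][key] = data


def load_model(cache, state_key):
    # The editor first loads state, then resolves texture/geometry references.
    state = cache.get("state", state_key)
    textures = {t: cache.get("texture", t) for t in state["textures"]}
    geometry = cache.get("geometry", state["geometry"])
    return {"state": state, "textures": textures, "geometry": geometry}
```

Under this sketch, replacing a texture touches only the texture cache entry; a subsequent `load_model` reuses the cached state and geometry unchanged.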
To further clarify the advantages and features of the present disclosure, a more particular description of the disclosure will follow by reference to specific embodiments thereof, which are illustrated in the appended figures. It is to be appreciated that these figures depict only typical embodiments of the disclosure and are therefore not to be considered limiting in scope. The disclosure will be described and explained with additional specificity and detail with the appended figures.

BRIEF DESCRIPTION OF ACCOMPANYING DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some embodiments of system and/or methods in accordance with embodiments of the present subject matter are now described, by way of example only, and with reference to the accompanying figures, in which:
FIG. 1 illustrates an exemplary block diagram representation of a network architecture of a system for building multi-level cache for a three dimensional (3D) framework, in accordance with an embodiment of the present disclosure;
FIG. 2 illustrates an exemplary block diagram representation of a system such as those shown in FIG. 1, for building multi-level cache for a three dimensional (3D) framework, in accordance with an embodiment of the present disclosure;
FIGs. 3A-3H illustrate exemplary schematic diagram representations of a three dimensional (3D) framework using a user interface based on a multi-level cache, in accordance with an embodiment of the present disclosure;
FIG. 4 illustrates an exemplary flow diagram representation of a method for building a multi-level cache for a three dimensional (3D) framework, in accordance with an embodiment of the present disclosure;
FIG. 5 illustrates an exemplary flow diagram representation of a multi-level cache structure with resource segmentation, in accordance with an embodiment of the present disclosure;
FIG. 5A illustrates an exemplary flow diagram representation of the functioning of a multi-level cache, in accordance with an embodiment of the present disclosure;
FIG. 5B illustrates an exemplary flow diagram representation of a multi-level cache structure with separate caches for each resource type, in accordance with an embodiment of the present disclosure;
FIG. 6 illustrates an exemplary flow diagram representation of a process of data retrieval, in accordance with an embodiment of the present disclosure;
FIG. 7 illustrates an exemplary flow diagram representation of hierarchical flow of data between various caching layers, in accordance with an embodiment of the present disclosure;
FIG. 8 illustrates an exemplary flow diagram representation of a process of handling a texture request in a 3D scene, in accordance with an embodiment of the present disclosure;
FIG. 9 illustrates an exemplary flow diagram representation of a process of synchronizing changes from a 3D scene with a cloud service, in accordance with an embodiment of the present disclosure;
FIG. 10 illustrates an exemplary flow diagram representation of a Service Worker handling asset requests in a web application, in accordance with an embodiment of the present disclosure;
FIG. 11 illustrates an exemplary flow diagram representation of an asset caching and serving process in a web application using a Service Worker and IndexedDB (IDB), in accordance with an embodiment of the present disclosure;
FIG. 12 illustrates an exemplary flow chart representation of an asset caching and serving process in a web application using a Service Worker and IndexedDB (IDB), in accordance with an embodiment of the present disclosure;
FIG. 13 illustrates an exemplary flow diagram representation of a process of storing an asset in a local IndexedDB database using a Service Worker, in accordance with an embodiment of the present disclosure;
FIG. 14 illustrates an exemplary flow diagram representation of a process of dispatching a custom event named "cacheUpdated" from a Service Worker, in accordance with an embodiment of the present disclosure; and
FIG. 15 illustrates an exemplary block diagram representation of image formats and header mapping, in accordance with an embodiment of the present disclosure.
Further, those skilled in the art will appreciate that elements in the figures are illustrated for simplicity and may not have necessarily been drawn to scale. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the figures by conventional symbols, and the figures may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the figures with details that will be readily apparent to those skilled in the art having the benefit of the description herein.

DETAILED DESCRIPTION
For the purpose of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiment illustrated in the figures and specific language will be used to describe them. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended. Such alterations and further modifications in the illustrated system, and such further applications of the principles of the disclosure as would normally occur to those skilled in the art are to be construed as being within the scope of the present disclosure. It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the disclosure and are not intended to be restrictive thereof.
To make objects, technical solutions and advantages of the present disclosure more clear, the present disclosure will be described in further detail with reference to the drawings. It is apparent that the described embodiments are only some embodiments of the present disclosure, not all embodiments. All other embodiments, which may be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
The shapes and sizes of the components shown in the drawings are not necessarily drawn to scale, but are merely for the purpose of facilitating easy understanding of the contents of the present embodiments of the present disclosure.
Unless defined otherwise, technical or scientific terms used herein should have the same meaning as commonly understood by one of ordinary skill in the art to which the present disclosure belongs. The terms of “first”, “second”, and the like used in the present disclosure are not intended to indicate any order, quantity, or importance, but rather are used for distinguishing one element from another. Further, the term “a”, “an”, “the”, or the like does not denote a limitation of quantity, but rather denotes the presence of at least one element. The term “comprising”, “including”, or the like, means that the element or item preceding the term contains the element or item listed after the term and the equivalent thereof, but does not exclude the presence of any other element or item. The term “connected”, “coupled”, or the like is not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect connections. The terms “upper”, “lower”, “left”, “right”, and the like are used only for indicating relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
In the present document, the word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or implementation of the present subject matter described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The terms "comprise", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that one or more devices or sub-systems or elements or structures or components preceded by "comprises... a" does not, without more constraints, preclude the existence of other devices, sub-systems, additional sub-modules. Appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but not necessarily do, all refer to the same embodiment.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this disclosure belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.
Accordingly, the term “module” or “subsystem” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (hardwired), or temporarily configured (programmed) to operate in a certain manner and/or to perform certain operations described herein.
The embodiment of the present disclosure is not limited to the embodiments shown in the drawings, but includes modifications of configurations formed based on a manufacturing process. Thus, regions illustrated in the drawings have schematic properties, and shapes of the regions shown in the drawings illustrate specific shapes of regions of elements, but are not intended to be limiting.
Embodiments herein provide a system and a method for building multi-level cache for a three dimensional (3D) framework. The system communicates with one or more computing devices, establishing a secure communication channel over a network. Further, the system receives an input data from the one or more computing devices associated with at least one user. The input data corresponds to uploading one or more models in a three dimensional (3D) framework. The 3D framework comprises at least one of an Artificial Intelligence (AI) based framework and a metaverse based framework. Further, the system extracts one or more components from the one or more models, and performs a modification on the extracted one or more components based on at least one parameter to obtain one or more transformed models. Furthermore, the system constructs a multi-level cache and updates the multi-level cache based on the one or more transformed models. Finally, the system synchronizes the one or more transformed models in the at least one server by using the multi-level cache in the 3D framework.
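The overall flow just described (receive model, extract components, apply a modification, update the multi-level cache, synchronize with the server) can be sketched end to end. This is a minimal illustration with hypothetical names; the stub cache and server stand in for the multi-level cache and centralized server of the disclosure.

```python
class SimpleCache:
    """Stand-in for the multi-level cache: updated in place, never flushed."""
    def __init__(self):
        self.entries = {}

    def update(self, components):
        # Update only the changed entries instead of invalidating the cache.
        self.entries.update(components)


class FakeServer:
    """Stand-in for the centralized server; records synchronized models."""
    def __init__(self):
        self.synced = []

    def sync(self, transformed):
        self.synced.append(transformed)


def extract_components(model):
    # Split the uploaded model into the component types described above.
    return {k: model[k] for k in ("texture", "geometry", "state")}


def apply_modification(components, parameter, value):
    # 'parameter' is one of the model parameters (e.g. texture, position).
    transformed = dict(components)
    transformed[parameter] = value
    return transformed


def process_upload(model, parameter, value, cache, server):
    components = extract_components(model)
    transformed = apply_modification(components, parameter, value)
    cache.update(transformed)   # update, rather than invalidate, the cache
    server.sync(transformed)    # synchronize the transformed model
    return transformed
```

The key design point mirrored here is that the cache is updated with the transformed components rather than being invalidated and rebuilt.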
Referring now to the drawings, and more particularly to FIG. 1 through FIG. 15, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
FIG. 1 illustrates an exemplary block diagram representation of a network architecture 100 of a system 102 for building multi-level cache for a three dimensional (3D) framework, in accordance with an embodiment of the present disclosure. According to FIG. 1, the network architecture 100 includes the system 102, one or more computing devices (108-1, 108-2,…,108-N) (hereinafter individually referred to as the computing device 108 and collectively referred to as the computing devices 108) associated with at least one user (106-1, 106-2,…,106-N) (individually referred to as the user 106 and collectively referred to as the users 106), and a centralized server 110. The computing devices 108 may be associated with one or more users 106, and communicatively coupled to the centralized server 110 and the system 102 via a communication network 104. In an embodiment the computing device 108 may include, but is not limited to, a laptop computer, a desktop computer, a tablet computer, a smartphone, a wearable device, a digital camera, and the like. Further, the communication network 104 may be a wired network or a wireless network or combination thereof.
The centralized server 110 may be at least one of, but not limited to, a central server, a cloud server, a remote server, a rake server, an on-premises server, and the like. Further, the system 102 may be communicatively coupled to a database (not shown in FIG. 1), via the communication network 104. The database may include, but is not limited to, cached data, multimedia elements, images, files, and scripts, which are automatically stored on a computing device when a user opens an application or visits a website, any other data, and combinations thereof. The database may be any kind of databases/repositories such as, but are not limited to, relational database, dedicated database, dynamic database, monetized database, scalable database, cloud database, distributed database, any other database, and combination thereof.
Further, the computing device 108 may be associated with, but not limited to, a user, an individual, an administrator, a vendor, a technician, a worker, a specialist, a healthcare worker, an instructor, a supervisor, a team, an entity, an organization, a company, a facility, a bot, any other user, and combination thereof. The entities, the organization, and the facility may include, but are not limited to, a hospital, a healthcare facility, an exercise facility, a laboratory facility, an e-commerce company, a merchant organization, an airline company, a hotel booking company, a company, an outlet, a manufacturing unit, an enterprise, an organization, an educational institution, a secured facility, a warehouse facility, a supply chain facility, a vehicle manufacturing/managing organization, a fleet operating company, any other facility, and the like. The computing device 108 may be used to provide input and/or receive output to/from the system 102, and/or to the database, respectively. The computing device 108 may present to the user one or more user interfaces for the user to interact with the system 102 and/or to the database for building multi-level cache for a three-dimensional (3D) framework need. The computing device 108 may be at least one of, an electrical, an electronic, an electromechanical, and an electronic device. The computing device 108 may include, but is not limited to, a mobile device, a smartphone, a personal digital assistant (PDA), a tablet computer, a phablet computer, a wearable computing device, a Virtual Reality/Augmented Reality (VR/AR) device, an Artificial Intelligence (AI) based device, a Metaverse based device, a laptop, a desktop, a server, and the like.
Further, the system 102 may be implemented by way of a single device or a combination of multiple devices that may be operatively connected or networked together. The system 102 may be implemented in hardware or a suitable combination of hardware and software. The system 102 includes one or more hardware processor(s) 112, and a memory 114. The memory 114 may include a plurality of modules 116. The system 102 may be a hardware device including the hardware processor 112 executing machine-readable program instructions for building multi-level cache for a three-dimensional (3D) framework. Execution of the machine-readable program instructions by the hardware processor 112 may enable the system 102 to build a multi-level cache for a three-dimensional (3D) framework. The “hardware” may comprise a combination of discrete components, an integrated circuit, an application-specific integrated circuit, a field-programmable gate array, a digital signal processor, or other suitable hardware. The “software” may comprise one or more objects, agents, threads, lines of code, subroutines, separate software applications, two or more lines of code, or other suitable software structures operating in one or more software applications or on one or more processors.
The one or more hardware processors 112 may include, for example, microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuits, and/or any devices that manipulate data or signals based on operational instructions. Among other capabilities, hardware processor 112 may fetch and execute computer-readable instructions in the memory 114 operationally coupled with the system 102 for performing tasks such as data processing, input/output processing, and/or any other functions. Any reference to a task in the present disclosure may refer to an operation being or that may be performed on data.
Though few components and subsystems are disclosed in FIG. 1, there may be additional components and subsystems which are not shown, such as, but not limited to, ports, routers, repeaters, firewall devices, network devices, databases, network attached storage devices, servers, assets, machinery, instruments, facility equipment, emergency management devices, image capturing devices, sensors, any other devices, vehicle communication infrastructure devices, and combination thereof. The person skilled in the art should not limit the disclosure to the components/subsystems shown in FIG. 1. Although FIG. 1 illustrates the system 102 and the computing devices 108 connected to the centralized server 110, one skilled in the art can envision that the system 102 and the computing devices 108 can be connected to several servers and databases via the communication network.
Those of ordinary skill in the art will appreciate that the hardware depicted in FIG. 1 may vary for particular implementations. For example, other peripheral devices such as an optical disk drive and the like, local area network (LAN), wide area network (WAN), wireless (e.g., wireless-fidelity (Wi-Fi)) adapter, graphics adapter, disk controller, input/output (I/O) adapter, Internet of Things (IoT) devices, digital twin devices, Artificial Intelligence (AI)/ Machine Learning (ML) devices, Metaverse based devices, may also be used in addition or in place of the hardware depicted. The depicted example is provided for explanation only and is not meant to imply architectural limitations concerning the present disclosure.
To optimize performance and reduce latency, multi-level caching is essential. This involves storing frequently accessed data in memory and less frequently accessed data on disk. AI-Based Frameworks such as for example, but not limited to, TensorFlow and PyTorch may leverage their graph execution and dynamic computation models to cache intermediate results efficiently. These frameworks provide built-in data pipeline mechanisms that may be customized for multi-level caching. Metaverse-Based Frameworks such as for example, but not limited to, Unity and Unreal Engine provide asset bundles and packaging systems for caching 3D models and their dependencies. These frameworks include built-in caching mechanisms that may be optimized to specific requirements.
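The memory/disk split described above can be sketched as a two-level cache: a small, fast in-memory level in front of a larger, slower on-disk level, with entries promoted on access and demoted on eviction. This is an illustrative sketch only (the disk level is simulated with a dictionary; names are hypothetical, not taken from any particular framework).

```python
from collections import OrderedDict


class TwoLevelCache:
    """L1: size-limited in-memory level; L2: unbounded 'disk' level (simulated)."""

    def __init__(self, loader, memory_capacity=2):
        self.loader = loader              # slow source of truth (server/file)
        self.memory = OrderedDict()       # L1: ordered for LRU eviction
        self.disk = {}                    # L2: slower, unbounded here
        self.memory_capacity = memory_capacity

    def get(self, key):
        if key in self.memory:            # L1 hit: refresh recency
            self.memory.move_to_end(key)
            return self.memory[key]
        if key in self.disk:              # L2 hit: promote to L1
            value = self.disk[key]
        else:                             # full miss: load and keep in L2
            value = self.loader(key)
            self.disk[key] = value
        self._put_memory(key, value)
        return value

    def _put_memory(self, key, value):
        self.memory[key] = value
        self.memory.move_to_end(key)
        if len(self.memory) > self.memory_capacity:
            evicted, data = self.memory.popitem(last=False)  # evict LRU entry
            self.disk[evicted] = data     # demote to disk, don't discard
```

A cold item costs one loader call; afterwards it is served from disk or memory, so frequently accessed assets settle into the in-memory level.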
In some embodiments, Hybrid Approaches combine the strengths of AI and metaverse frameworks. For example, TensorFlow and Unity or PyTorch and Unreal Engine may be used to create 3D applications with AI capabilities. By leveraging the caching mechanisms provided by both frameworks, performance and responsiveness may be optimized. Key Considerations for Multi-Level Caching may include cache size, replacement policy, coherency, invalidation, and compression. These approaches have technical advantages, including graph optimization, dynamic computation, asset bundling, and rendering optimization, which may significantly improve the performance and responsiveness of 3D applications.
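One of the considerations named above is compression: a larger, less frequently accessed cache level can store payloads compressed, trading CPU time for space. A minimal sketch using standard-library `zlib` (the class name is hypothetical and not tied to any framework mentioned here):

```python
import zlib


class CompressedCacheLevel:
    """A cache level that keeps payloads zlib-compressed at rest."""

    def __init__(self):
        self._store = {}

    def put(self, key, data: bytes):
        # Compress on write; texture-like data with repetition shrinks well.
        self._store[key] = zlib.compress(data)

    def get(self, key) -> bytes:
        # Decompress on read, returning the original bytes.
        return zlib.decompress(self._store[key])

    def stored_size(self, key) -> int:
        # Bytes actually held in the cache for this entry.
        return len(self._store[key])
```

In a multi-level arrangement, such a level would typically sit below the uncompressed in-memory level, so the decompression cost is paid only on L1 misses.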
The present invention has the following technical advantages:
Reduced Latency: By storing frequently accessed data in memory, these frameworks can reduce the time it takes to retrieve and process data.
Increased Responsiveness: Faster data access leads to more responsive applications, improving the user experience.
Optimized Rendering: Metaverse frameworks often include optimizations for rendering and loading assets, which can further enhance performance.
Reduced I/O Operations: Caching data in memory can significantly reduce the number of I/O operations required, leading to improved system efficiency.
Efficient Resource Utilization: By intelligently managing cache sizes and replacement policies, these frameworks can optimize the use of system resources.
Handle Large Datasets: Multi-level caching can help handle large datasets by storing frequently accessed data in memory and less frequently accessed data on disk.
Support Growing User Base: As the number of users and the complexity of 3D applications increase, multi-level caching can help ensure that the system remains scalable and performant.
Faster Load Times: By caching frequently used assets, these frameworks can reduce load times for 3D scenes and models.
Smoother Interactions: Faster data access and optimized rendering can lead to smoother and more responsive interactions within the 3D environment.
Those skilled in the art will recognize that, for simplicity and clarity, the full structure and operation of all data processing systems suitable for use with the present disclosure are not being depicted or described herein. Instead, only so much of the system 102 as is unique to the present disclosure or necessary for an understanding of the present disclosure is depicted and described. The remainder of the construction and operation of the system 102 may conform to any of the various current implementations and practices known in the art.
In an embodiment, the server 110 may communicate with one or more computing devices 108 establishing a secure communication channel over a network 104.
In an embodiment, the system 102 may receive an input data from the one or more computing devices 108 associated with at least one user 106. The input data corresponds to uploading one or more models in a three dimensional (3D) framework. The one or more models may include, but are not limited to, a statue, a vehicle, an electronic device, a building, a book, and the like.
In an embodiment, the system 102 may extract one or more components from the one or more models, and perform a modification on the extracted one or more components based on at least one parameter to obtain one or more transformed models. The one or more transformed models correspond to one or more commands executed on the one or more models, wherein the one or more commands may include, but are not limited to, a translate, a rotate, a scale, a subdivide, and the like. Further, the at least one parameter of the one or more models may include, but is not limited to, a texture, a position, a size, an angle, and the like.
In an embodiment, the system 102 may construct a multi-level cache and update the multi-level cache based on the one or more transformed models. The multi-level cache may include, but not limited to one or more geometry caches, one or more material caches, one or more resources caches, and the like. For example, the one or more geometry caches may be storage systems designed to temporarily hold and manage data related to the geometric aspects of a model. This includes information about the shapes, structures, and spatial configurations of objects within a 3D framework. By caching this geometric data, systems can quickly access and render the physical forms of models without having to recompute or re-fetch this data from the primary storage or original source. This results in faster processing times and improved performance, particularly in applications involving complex or detailed 3D models.
Further, the one or more material caches may be specialized storage systems that temporarily hold and manage data related to the materials used in a model. This encompasses information about textures, colours, surface properties, and other material attributes that define the appearance and physical characteristics of objects within a 3D environment. By caching material data, systems can efficiently apply these properties to models, enhancing rendering speed and visual fidelity. This is especially useful in scenarios where materials are frequently reused or modified, as it reduces the need to repeatedly load or generate this information.
Furthermore, the one or more resources caches may include storage systems that temporarily hold and manage various types of auxiliary data necessary for the functioning of an application or system. This can include a wide range of resources such as scripts, configuration files, libraries, assets, and other supporting data required for rendering, processing, and interacting with models. By caching these resources, systems can quickly access and utilize them, thereby improving overall efficiency and responsiveness. Resource caching is particularly beneficial in complex applications where numerous resources are needed simultaneously or in rapid succession.
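By way of a non-limiting illustration, the geometry, material, and resource cache levels described above may be sketched as independently keyed lookup tables, so that invalidating one level does not evict data held in the others. The names here are illustrative:

```typescript
// Illustrative multi-level cache: each level is keyed separately so that
// geometry, material, and resource data can be invalidated independently.
type CacheLevel = "geometry" | "material" | "resource";

class MultiLevelCache {
  private levels: Record<CacheLevel, Map<string, unknown>> = {
    geometry: new Map(),
    material: new Map(),
    resource: new Map(),
  };

  put(level: CacheLevel, key: string, data: unknown): void {
    this.levels[level].set(key, data);
  }

  get(level: CacheLevel, key: string): unknown {
    return this.levels[level].get(key);
  }

  // Invalidate one level without touching the others, so a material
  // edit does not evict cached geometry.
  invalidate(level: CacheLevel, key: string): void {
    this.levels[level].delete(key);
  }
}
```

Keeping the levels separate is what allows, for example, a texture change to leave the cached mesh geometry of the same model valid.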
In an embodiment, the system 102 may synchronize and store the one or more transformed models of the one or more models in the at least one server by using the multi-level cache in the three dimensional (3D) framework. The synchronization process includes, but is not limited to, a fetch local changes, a push changes to cloud, a clear local changes, and the like.
In an embodiment, the system 102 may identify the one or more transformed models which are frequently updated. Further, the system can be configured to perform a selective update of the multi-level cache for the one or more transformations of the frequently updated one or more models. In an embodiment, the system 102 may upload changes corresponding to the identified one or more components of the models in the at least one server by using the multi-level cache in the 3D framework.
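By way of a non-limiting illustration, identifying frequently updated models for a selective cache update may be sketched with a per-model update counter. The class name and threshold are hypothetical:

```typescript
// Track update counts per model so the cache can be refreshed selectively
// for models that change often, while rarely changed models keep their
// existing cache entries.
class UpdateTracker {
  private counts = new Map<string, number>();

  recordUpdate(modelId: string): void {
    this.counts.set(modelId, (this.counts.get(modelId) ?? 0) + 1);
  }

  // Models whose update count meets the threshold are candidates for a
  // selective cache refresh.
  frequentlyUpdated(threshold: number): string[] {
    return [...this.counts.entries()]
      .filter(([, n]) => n >= threshold)
      .map(([id]) => id);
  }
}
```

Only the models returned by `frequentlyUpdated` would need their cache entries refreshed; all other entries remain valid.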
Further, a cache may be defined as a specialized type of high-speed memory used to store copies of frequently accessed data from slower main memory or storage. Further, a primary purpose of the cache is to speed up data retrieval and improve overall system performance by reducing the time taken to access frequently used information. Further, the time required to access data from the cache may be represented as Tcache, and the time required to access data from main memory may be represented as Tmain. Further, an access to cached data may lead to one of two possible conditions, such as, but not limited to, a cache hit, a cache miss, and the like. Further, the access time required to fetch the data from the cache memory may be represented as:
T_hit = T_cache
Further, the access time required to fetch the data from the main memory may be represented as:
T_miss = T_cache + T_main
Further, the probability of a cache hit may be represented as P_hit, and the average time required to access the cache memory may be represented as:
T_avg = P_hit × T_hit + (1 - P_hit) × T_miss
T_avg = P_hit × T_cache + (1 - P_hit) × (T_cache + T_main)
T_avg = T_cache + (1 - P_hit) × T_main
Further, the caching may significantly reduce the need for frequent access to slower (main) memory, leading to a substantial performance improvement by reducing the time taken to access the data. Further, with a high cache hit probability, the average time required to access the data from the cache memory satisfies:
T_avg ≪ T_main
Further, the caching may be used in browser-based 3D editors for managing large 3D assets efficiently. Typically, a 3D application may require files like GLTF, GLB, OBJ, or other serialized formats to represent the state and data of the 3D scene. Further, these files may be of substantial size, often spanning multiple gigabytes. Further, the 3D scene may be referred to as a digital or virtual environment where 3D objects are placed, manipulated, and rendered to simulate real or imaginary spaces. Further, when the 3D scene file is requested for the first time, the browser caches the 3D scene file. Further, the caching mechanism allows subsequent requests for the same file to be served directly from the browser's cache rather than fetching it from the network again. As a result, the 3D scene may be loaded much faster on subsequent accesses, significantly improving performance and reducing load times.
Further, in the context of a 3D editor, models are frequently updated and modified. When a model changes, the cached version of the file becomes outdated or invalid. Consequently, the next time the updated model is requested, the request may result in a cache miss, since the cache entry for that model is no longer valid. The access time in this case can be represented as:
T_cache_miss = T_cache + T_main

where Tmain may include the time to fetch the updated model from the network.
Further, the average load time may be affected by re-downloading the entire file instead of using the cached version. The average load time with caching, denoted as Tavg, may be calculated as:
T_avg = P_hit × T_cache + (1 - P_hit) × (T_cache + T_main)
T_avg = T_cache + (1 - P_hit) × T_main
where Phit may represent the probability of a cache hit. The probability of a cache hit may be calculated as:
P_hit = N_hit / (N_hit + N_miss)
where Nhit is the number of cache hits and Nmiss is the number of cache misses.
Further, the probability of cache hits in an exemplary scenario may be calculated as:
P_hit = 800 / (800 + 200) = 800 / 1000 = 0.8
Further, the above equation may represent an 80% chance that a requested data item may be found in the cache.
Further, the frequent model updates may increase the likelihood of cache misses. Further, the decrease in Phit due to frequent updates may lead to an increase in Tavg. Further, the increased Tavg may be calculated as:
T_avg ≈ T_cache + (1 - P_hit) × T_main
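The formulas above can be checked numerically. The following sketch, with illustrative timings, shows how a falling hit probability increases the average access time:

```typescript
// P_hit = N_hit / (N_hit + N_miss)
function hitProbability(hits: number, misses: number): number {
  return hits / (hits + misses);
}

// T_avg = T_cache + (1 - P_hit) * T_main
function avgAccessTime(pHit: number, tCache: number, tMain: number): number {
  return tCache + (1 - pHit) * tMain;
}
```

With 800 hits out of 1000 requests, P_hit = 0.8; taking illustrative values of T_cache = 1 ms and T_main = 100 ms gives T_avg ≈ 21 ms, while a drop to P_hit = 0.5 due to frequent model updates raises T_avg to 51 ms.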
Further, different parts of a 3D model may change at different frequencies, and not all parts of the model are updated simultaneously. Further, when any part of the model changes, the entire cached model is invalidated, even though most of the data is unchanged; a small change in one portion of the model may invalidate the entire cached file, resulting in reloading the entire model. Further, to address this issue, the model data may be segmented based on the frequency of change, so that each segment can be updated independently. Further, only the updated segments are reloaded, while unchanged segments remain valid in the cache.
Further, Si may represent the i-th segment of the model. Further, Tcache,i may represent the access time for segment Si from the cache memory and Tmain,i may represent the access time for segment Si from main memory. Further, a probability of a cache hit for segment Si may be calculated as:
P_hit,i = N_hit,i / (N_hit,i + N_miss,i)

where Nhit,i may represent the number of cache hits for segment Si, and Nmiss,i may represent the number of cache misses for segment Si. Further, Nsegments may represent the number of segments into which the model is divided. Further, the average access time for the model with segmented caching may be calculated as:
T_avg = (1/N_segments) × Σ_(i=1)^(N_segments) [P_hit,i × T_cache,i + (1 - P_hit,i) × (T_cache,i + T_main,i)]
T_avg = (1/N_segments) × Σ_(i=1)^(N_segments) [T_cache,i + (1 - P_hit,i) × T_main,i]
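By way of a non-limiting illustration, the segmented average access time formula above may be computed as follows. The segment values are illustrative:

```typescript
interface Segment {
  pHit: number;   // P_hit,i
  tCache: number; // T_cache,i
  tMain: number;  // T_main,i
}

// T_avg = (1/N_segments) * sum_i [ T_cache,i + (1 - P_hit,i) * T_main,i ]
function segmentedAvgAccessTime(segments: Segment[]): number {
  const total = segments.reduce(
    (sum, s) => sum + s.tCache + (1 - s.pHit) * s.tMain,
    0,
  );
  return total / segments.length;
}
```

For two equal-sized segments with P_hit values of 1 and 0 (T_cache = 1 ms, T_main = 100 ms), the average access time is (1 + 101) / 2 = 51 ms; as more segments stay valid in the cache, the average falls toward T_cache.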
Further, as a result of segmenting the model, Phit,i tends to increase for segments that are not frequently updated, improving overall cache hit rates. Further, the granularity of the segmentation may affect the probability of a cache hit or miss. Further, the granularity of the segmentation may include, but is not limited to, a coarse segmentation, a fine segmentation, and the like.
Further, the coarse segmentation may be configured to divide the model into fewer, larger segments. For example, if a model is divided into 10 segments, each segment is relatively large. The coarse segmentation increases the size of each segment, leading to a higher chance of cache misses, because any small change invalidates a larger portion of data. Thus, when a small part of a large segment changes, the entire segment is invalidated and must be reloaded in full, even though most of its data is unchanged. With larger segments, Nmiss,i is higher and the probability of a cache hit for a segment, Phit,i, decreases, because the likelihood of a segment being invalidated increases. As Phit,i decreases, the average access time Tavg increases.
Further, the fine segmentation may be configured to divide the model into many smaller segments. For example, if the model is divided into 1000 segments, each segment is smaller. The fine segmentation reduces the size of each segment, meaning that changes are more localized and fewer segments are invalidated. Thus, when a small part of the model changes, fewer segments are invalidated, reducing the chance of a cache miss, while unchanged segments remain valid in the cache. With finer segmentation, Nmiss,i decreases as segments are smaller and more specific to the changes, leading to a higher Phit,i. As Phit,i increases, the average access time Tavg decreases.
FIG. 2 illustrates an exemplary block diagram representation of a system 102 such as those shown in FIG. 1, for building multi-level cache for a three dimensional (3D) framework, in accordance with an embodiment of the present disclosure. The system 102 may also function as a computer-implemented system (hereinafter referred to as the system 102). The system 102 includes the one or more hardware processors 112, the memory 114, and a storage unit 204. The one or more hardware processors 112, the memory 114, and the storage unit 204 are communicatively coupled through a system bus 202 or any similar mechanism. The memory 114 comprises a plurality of modules 116 in the form of programmable instructions executable by the one or more hardware processors 112.
In an embodiment, the plurality of modules 116 may include a data acquisition module 212, a synchronization module 214, a conversion module 216, an extraction module 218, and other modules 220.
The system 102 comprises one or more processor(s) 112. The one or more processor(s) 112 are implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that manipulate data based on operational instructions. Among other capabilities, the one or more processor(s) 112 are configured to fetch and execute computer-readable instructions stored in the memory 114. The memory 114 stores one or more computer-readable instructions or routines, which are fetched and executed to create or share the data units over a network service. The memory 114 comprises any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
In an embodiment, the system 102 also comprises an interface(s) 206. The interface(s) 206 comprises a variety of interfaces, for example, interfaces for data input and output devices referred to as I/O devices, storage devices, and the like. The interface(s) 206 facilitates communication of the user device 108 with various devices or servers coupled to the user device. The interface(s) 206 also provides a communication pathway for one or more components of the computing device 108. Examples of such components comprise, but are not limited to, processing engine(s) and database. The interface(s) 206 comprises a platform for communication with the devices/servers to read and write real-time data in the system 102. The interface(s) 206 comprises a graphical interface that allows users to feed inputs and to type/write/upload data and certificates, and other software and hardware interfaces, for example, interfaces for peripheral device(s), such as a keyboard, a mouse, an external memory, and a printer.
In an embodiment, the modules 116 are implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the modules 116. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the modules 116 are processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processing engine(s) comprises a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium stores instructions that, when executed by the processing resource, implement the processing engine(s). In such examples, the system 102 comprises the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the computing device 108 and the processing resource. In other examples, the processing engine(s) 208 is implemented by electronic circuitry. Database 208 comprises data that is either stored or generated as a result of functionalities implemented by any of the components of the modules 116.
In an embodiment, the server 110 may communicate with one or more computing devices 108 establishing a secure communication channel over a network 104. In an embodiment, the data acquisition module 212 may receive an input data from the one or more computing devices 108 associated with at least one user 106. The input data corresponds to uploading one or more models in a three dimensional (3D) framework. The one or more models may include, but are not limited to, a statue, a vehicle, an electronic device, a building, a book, and the like.
In an embodiment, the extraction module 218 may extract one or more components from the one or more models, and perform a modification on the extracted one or more components based on at least one parameter to obtain one or more transformed models. The one or more transformed models correspond to one or more commands executed on the one or more models, wherein the one or more commands may include, but are not limited to, a translate, a rotate, a scale, a subdivide, and the like. Further, the at least one parameter of the one or more models may include, but is not limited to, a texture, a position, a size, an angle, and the like.
In an embodiment, the data acquisition module 212 may construct a multi-level cache and update the multi-level cache based on the one or more transformed models. The multi-level cache may include, but not limited to one or more geometry caches, one or more material caches, one or more resources caches, and the like. For example, the one or more geometry caches may be storage systems designed to temporarily hold and manage data related to the geometric aspects of a model. This includes information about the shapes, structures, and spatial configurations of objects within a 3D framework. By caching this geometric data, systems can quickly access and render the physical forms of models without having to recompute or re-fetch this data from the primary storage or original source. This results in faster processing times and improved performance, particularly in applications involving complex or detailed 3D models.
Further, the one or more material caches may be specialized storage systems that temporarily hold and manage data related to the materials used in a model. This encompasses information about textures, colours, surface properties, and other material attributes that define the appearance and physical characteristics of objects within a 3D environment. By caching material data, systems can efficiently apply these properties to models, enhancing rendering speed and visual fidelity. This is especially useful in scenarios where materials are frequently reused or modified, as it reduces the need to repeatedly load or generate this information.
Furthermore, the one or more resources caches may include storage systems that temporarily hold and manage various types of auxiliary data necessary for the functioning of an application or system. This can include a wide range of resources such as scripts, configuration files, libraries, assets, and other supporting data required for rendering, processing, and interacting with models. By caching these resources, systems can quickly access and utilize them, thereby improving overall efficiency and responsiveness. Resource caching is particularly beneficial in complex applications where numerous resources are needed simultaneously or in rapid succession.
In an embodiment, the synchronization module 214 may synchronize the one or more transformed models of the one or more models in the at least one server 110 by using the multi-level cache in the three dimensional (3D) framework. The synchronization process includes, but is not limited to, a fetch local changes, a push changes to cloud, a clear local changes, and the like.
In an embodiment, the conversion module 216 may identify the one or more transformed models which are frequently updated. Further, the system can be configured to perform a selective update of the multi-level cache for the one or more transformations of the frequently updated one or more models. In an embodiment, the system 102 may upload changes corresponding to the identified one or more components of the models in the at least one server by using the multi-level cache in the 3D framework.
In an embodiment, the one or more processors 112 may identify the one or more components on which modification is being performed. Further, the one or more processors 112 may upload changes corresponding to the identified one or more components of the models in the at least one server by using the multi-level cache in the 3D framework.
In an embodiment, the one or more processors 112 may receive a request from a user to retrieve a last used 3D model. Further, the one or more processors 112 may parse the received request to identify a specific component of the 3D model to be retrieved. Further, the one or more processors 112 may identify device configuration settings and a device type used by the user, wherein the configuration settings comprise at least one of a high-performance machine and a network speed. Furthermore, the one or more processors 112 may retrieve the specific component of the 3D model from the at least one server based on the identified device configuration settings and the device type used by the user.
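By way of a non-limiting illustration, selecting which variant of a component to retrieve based on the device configuration settings may be sketched as follows. The quality tiers and thresholds are hypothetical:

```typescript
type DeviceType = "desktop" | "mobile";

interface DeviceConfig {
  highPerformance: boolean;
  networkSpeedMbps: number;
}

// Choose an asset quality tier from the device type and configuration
// settings, so the server returns a variant the device can handle.
// The tier boundaries below are illustrative only.
function selectAssetQuality(
  device: DeviceType,
  cfg: DeviceConfig,
): "high" | "medium" | "low" {
  if (cfg.highPerformance && cfg.networkSpeedMbps >= 50) return "high";
  if (device === "desktop" || cfg.networkSpeedMbps >= 10) return "medium";
  return "low";
}
```

The server can then serve, for example, a decimated mesh or lower-resolution textures to a slow mobile client while serving the full-resolution component to a high-performance machine.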
In an embodiment, the one or more processors 112 may identify specific portions of the 3D model data that failed to load. In an embodiment, the one or more processors 112 may transmit error events to the computing devices 108 for error detection and resolution. Upon encountering an error, the system 102 provides users 106 with informative messages detailing the issue and potential solutions.
In an embodiment, the system 102 can comprise at least one synchronization module 214 that may maintain a synchronized state by performing a synchronization process with the at least one server to store the one or more transformed models which are frequently updated. The synchronization process can include, but is not limited to, a fetch local changes, a push changes to cloud, a clear local changes, and the like.
In an embodiment, the synchronization module 214 consists of several components and functionalities to ensure effective implementation of the synchronization process with cloud services. The fetchLocalChanges function of the synchronization process retrieves any local changes made to the 3D scene since the last synchronization. Local changes may include modifications, additions, or deletions of scene elements. The pushChangesToCloud function of the synchronization process is invoked once local changes are retrieved; the changes are pushed to the cloud using appropriate APIs or protocols, ensuring that the cloud storage reflects the most recent state of the scene. The clearLocalChanges function of the synchronization process is invoked after changes are successfully synchronized with the cloud; local changes are cleared to maintain consistency between local and cloud data.
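By way of a non-limiting illustration, the fetchLocalChanges, pushChangesToCloud, and clearLocalChanges functions may be sketched as follows. This is a simplified in-memory sketch; the cloud upload is represented by an injected callback, since the actual APIs or protocols are deployment-specific:

```typescript
interface SceneChange {
  elementId: string;
  kind: "modify" | "add" | "delete";
}

class SyncModule {
  private localChanges: SceneChange[] = [];

  recordChange(change: SceneChange): void {
    this.localChanges.push(change);
  }

  // fetchLocalChanges: changes made to the scene since the last sync.
  fetchLocalChanges(): SceneChange[] {
    return [...this.localChanges];
  }

  // pushChangesToCloud: push pending changes via the injected uploader,
  // then clear them so local and cloud state stay consistent.
  pushChangesToCloud(upload: (changes: SceneChange[]) => void): void {
    const changes = this.fetchLocalChanges();
    if (changes.length === 0) return;
    upload(changes);
    this.clearLocalChanges();
  }

  clearLocalChanges(): void {
    this.localChanges = [];
  }
}
```

In practice the upload callback would be asynchronous and clearing would happen only after the cloud acknowledges the push; the synchronous form here is kept for brevity.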
In an embodiment, synchronization with cloud services is crucial for maintaining data consistency across different devices and platforms. The system 102 handles synchronization with cloud services through a dedicated synchronization module 214. The synchronization module 214 ensures that changes made to the 3D scene are propagated to the cloud in a timely manner, allowing users to access the latest version of the scene from anywhere. The synchronization events are dispatched using service workers, ensuring reliable and efficient communication between the client and the cloud.
In an embodiment, the synchronization process can include an event-based synchronization, a service worker integration, a data propagation, and a conflict resolution. The event-based synchronization triggers events based on specific actions or intervals, ensuring that updates are pushed to the cloud in a timely manner. Further, the service worker integration enables the service workers to intercept synchronization events and handle the synchronization process in the background, independent of the main thread, ensuring that synchronization does not interfere with the user experience. Further, the data propagation ensures that changes made to the 3D scene, such as modifications, additions, or deletions, are captured and propagated to the cloud, so that all users have access to the most up-to-date version of the scene. Further, in case of conflicts, such as simultaneous edits from multiple users, conflict resolution mechanisms are employed to reconcile differences and maintain data integrity.
Further, one or more service workers are used extensively for caching assets and handling network requests. When a request is made for a cached asset, the one or more service workers intercept the request and check the cache. If the asset is found in the cache, the asset can be served from there, reducing network latency and improving performance. The service workers also enable efficient handling of versioning, allowing the system to serve the latest version of cached assets to users.
In an embodiment, versioning mechanisms are essential for ensuring data consistency and reliability. The system 102 implements the versioning mechanisms at multiple levels, including the client-side caching, the server-side data fetching, and synchronization with cloud services. Each asset is associated with a version number, which is used to determine whether the asset needs to be updated or refreshed. Further, versioning mechanisms ensure that users always have access to the latest version of the data, minimizing inconsistencies and errors.
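By way of a non-limiting illustration, the version check for a cached asset may be sketched as follows. The types are illustrative:

```typescript
interface CachedAsset {
  url: string;
  version: number;
  data: unknown;
}

// Serve the cached copy only when its version is current with respect to
// the server's version; otherwise the asset must be refreshed.
function needsRefresh(
  cached: CachedAsset | undefined,
  serverVersion: number,
): boolean {
  return cached === undefined || cached.version < serverVersion;
}
```

A loader can run this check before serving any cached glTF data, guaranteeing that stale assets are re-fetched rather than served.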
In an embodiment, the system 102 can comprise a conversion module 216 (refer to FIG. 2) that can be configured to separate a binary data and a metadata corresponding to the one or more models. The metadata can be extracted and synchronized with the at least one server based on the one or more transformed models by using the multi-level cache. The metadata can comprise at least one of a model information, a colour of model, a reflectivity, a transparency, and a texture path.
For example, model information encompasses the detailed structural and geometric data of a 3D object. For instance, consider a 3D model of a sports car. The model information includes specifics about the car's shape, dimensions, and individual components. This data details the vertices, edges, and surfaces that define the car's geometry, allowing software to render its precise form. In a JSON format, this might look like a nested object containing the model's name, overall dimensions (length, width, and height), and a list of its primary components such as the body, wheels, windows, and interior. This comprehensive data set forms the foundation for the model's visual and functional representation in a 3D environment. In another example, the colour of a model refers to its base coloration, which can significantly affect its visual appeal and realism. For example, the sports car model might have a vibrant orange body, represented by an RGB or HEX color code. In this JSON example, the base color is specified as "#FF5733", which corresponds to a bright, eye-catching shade of orange. This color information is crucial for rendering the model accurately and ensuring that it appears as intended in various lighting conditions within the 3D environment.
Furthermore, the reflectivity may be a property that describes how much light is reflected off the surface of a model. For a sports car, a high reflectivity value, such as 0.8, would indicate a shiny, polished finish, giving the car a sleek and realistic appearance. Reflectivity values range from 0 (completely matte) to 1 (fully reflective). This property is essential for achieving realistic lighting effects and can greatly enhance the model's visual realism by simulating how light interacts with its surfaces.
Additionally, the transparency defines the degree to which a material allows light to pass through it. In the context of the sports car model, transparency might be used to describe the windows. A transparency value of 0.3 suggests that the windows are lightly tinted, allowing some light to pass through while still providing some level of opacity. Transparency is crucial for rendering realistic glass and other semi-transparent materials, ensuring that the model accurately represents the intended material properties. Further, the texture path points to the file location of texture images used to add detailed surface characteristics to a model. For instance, the seats of the sports car might be covered with a realistic leather texture. This texture is referenced in the model data via a file path, such as "/textures/leather_seat.jpg". Applying textures enhances the model's visual richness by adding fine details that are not captured by geometry alone, such as the grain of leather or the pattern of fabric. Textures are integral to creating lifelike and visually appealing models in 3D environments.
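By way of a non-limiting illustration, the metadata fields described above may be represented as the following structure. The field names and the sports-car values are illustrative only:

```typescript
interface ModelMetadata {
  modelInfo: {
    name: string;
    dimensions: { length: number; width: number; height: number };
    components: string[];
  };
  color: string;        // base colour as a HEX code
  reflectivity: number; // 0 (completely matte) to 1 (fully reflective)
  transparency: number; // 0 (opaque) to 1 (fully transparent)
  texturePath: string;  // file location of the texture image
}

const sportsCar: ModelMetadata = {
  modelInfo: {
    name: "sports_car",
    dimensions: { length: 4.5, width: 1.9, height: 1.2 },
    components: ["body", "wheels", "windows", "interior"],
  },
  color: "#FF5733",
  reflectivity: 0.8,
  transparency: 0.3,
  texturePath: "/textures/leather_seat.jpg",
};
```

Because this metadata is small and textual, it can be synchronized with the server independently of the much larger binary geometry data.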
In an embodiment, the system 102 can be configured to execute a texture request by checking the multi-level cache: if the texture path exists in the multi-level cache, then the appropriate texture is fetched from the multi-level cache; otherwise, the texture is fetched from the at least one server and stored in the multi-level cache. Further, the system 102 can be configured to asynchronously write to an IndexDB database and store the modifications under a uniform resource locator (URL) which can be retrieved based on a future request.
In an embodiment, the IndexedDB and the CacheStorage are integral components utilized for caching data and dispatching events within the 3D scene management. The system 102 provides persistent storage and efficient management of cached assets, ensuring reliable performance and synchronization between client and server. IndexedDB offers a persistent storage mechanism for storing cached assets locally within the client's browser. This enables the system 102 to retain cached data across browser sessions, enhancing performance and reducing reliance on repeated network requests. Further, the CacheStorage facilitates efficient management of cached assets and enables the dispatching of events to notify relevant components or modules of cache updates. This allows for real-time synchronization and ensures that cached data remains up-to-date.
In an embodiment, textures play a crucial role in 3D scene rendering. The system 102 manages textures efficiently through a combination of client-side caching and server-side fetching. When a texture request is made, the system 102 first checks the cache. If the texture is found in the cache, the texture is served from the cache, minimizing network requests and improving performance. If the texture is not found in the cache, the texture is fetched from the server and stored in the cache for future use.
In an embodiment, a glTF loader module of the system 102 has been modified to enhance data caching capabilities. Specifically, modifications have been made to enable efficient caching of glTF data, including textures and other assets. When loading a glTF asset, the loader checks the cache for previously fetched data. If the data is found in the cache, it is served from there, reducing the need for repeated network requests. Additionally, the loader has been updated to support versioning mechanisms, ensuring that the latest version of the data is always served.
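The caching and versioning behaviour of the modified glTF loader described above can be illustrated with the following sketch. It is not the actual loader code: `loadGltf` and `getLatestVersion` are hypothetical injected functions standing in for the real network loader and version endpoint, so the "serve from cache unless a newer version exists" rule can be demonstrated.

```javascript
// Illustrative sketch of a caching glTF loader wrapper: cached entries
// carry a version number, and an entry is served from the cache only
// while its version matches the latest known version, ensuring the
// latest data is always served. `loadGltf` and `getLatestVersion` are
// assumed placeholder functions, not real library APIs.
function createCachingGltfLoader(loadGltf, getLatestVersion) {
  const cache = new Map(); // url -> { version, asset }
  return async function load(url) {
    const latest = await getLatestVersion(url);
    const entry = cache.get(url);
    if (entry && entry.version === latest) {
      return entry.asset; // served from cache, no repeated network request
    }
    const asset = await loadGltf(url); // stale or missing: refetch
    cache.set(url, { version: latest, asset });
    return asset;
  };
}
```

With this shape, repeated loads of an unchanged asset hit the cache, while publishing a new version on the server automatically invalidates only the affected entry.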

In an embodiment, the system 102 can be configured to create a custom event and dispatch it to a window object, providing the URL of the updated modifications so that one or more parts of the 3D framework can listen for multi-level cache updates and respond accordingly. By leveraging this event-driven approach, different components of the 3D framework can stay synchronized with the latest updates, ensuring that modifications are seamlessly integrated and reflected across the system. This mechanism enhances the interactivity and dynamic updating capabilities of the 3D environment, improving its overall functionality and user experience.
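The event-driven notification described above can be sketched as follows. In a browser this would typically be `window.dispatchEvent(new CustomEvent("cache-updated", { detail: { url } }))`; here a plain EventTarget stands in for the window object so the sketch is self-contained, and the event name and payload shape are illustrative assumptions.

```javascript
// A window-like target: in the browser, `window` itself would be used.
const cacheEvents = new EventTarget();

// Dispatch a custom "cache-updated" event carrying the URL of the
// updated modifications (event name and detail shape are assumptions).
function notifyCacheUpdate(url) {
  const event = new Event("cache-updated");
  event.detail = { url }; // URL of the updated modifications
  cacheEvents.dispatchEvent(event);
}

// A framework component subscribing to cache updates:
function onCacheUpdate(handler) {
  cacheEvents.addEventListener("cache-updated", (e) => handler(e.detail.url));
}
```

Any number of components can subscribe; each is invoked synchronously when a cache update is dispatched, keeping them in step with the multi-level cache.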
In an embodiment, the system 102 can be configured to perform error handling, implemented at various points throughout the system 102 to ensure robustness and reliability. When errors occur during texture loading, data caching, synchronization with cloud services, or image extension detection, appropriate error messages are displayed to the user 106, indicating the nature of the error and potential solutions. Additionally, error events are transmitted to the computing device 108, enabling proactive error detection and resolution.
When errors occur during texture loading, users might encounter several types of issues. A common problem is a "File Not Found" error, which arises when the specified texture file does not exist at the provided path. Another frequent issue is encountering an "Unsupported Format" error, indicating that the texture file format is not supported by the system. Additionally, users may face a "Corrupted File" error, which occurs when the texture file is damaged and cannot be read properly. In some cases, especially with large texture files, users might experience a "Memory Overload" error, where the system runs out of memory while attempting to load the textures.
Data caching errors can also pose significant challenges. A "Cache Miss" error happens when the requested data is not found in the cache, leading to delays as the system fetches the data from the primary source. A "Stale Data" error indicates that the cached data is outdated and no longer valid, which can cause inconsistencies in the application's performance. Users might also encounter a "Cache Write Failure," where the system fails to write data to the cache due to disk errors or insufficient permissions. Furthermore, a "Cache Corruption" error can occur when the cached data becomes corrupted and unusable, potentially disrupting the application's functionality.
Synchronization with cloud services introduces another set of potential errors. One common issue is a "Network Unavailable" error, which occurs when there is no internet connection available for synchronization. An "Authentication Failure" error arises when the user’s credentials are invalid, preventing access to the cloud services. Users may also experience a "Timeout" error if the synchronization process takes too long and exceeds the allowed time limit. Additionally, a "Data Conflict" error can occur when there is a discrepancy between the local data and the cloud data, leading to synchronization issues that must be resolved to maintain data integrity.
Errors related to image extension detection can also affect system performance. An "Invalid Extension" error happens when the file extension does not match the actual file format, causing the system to misinterpret the file. An "Unsupported Extension" error indicates that the image file has an extension that the system does not support. Users might also encounter an "Extension Mismatch" error, where the file extension differs from the detected file type, suggesting a possible error in file naming or format. These errors can prevent the proper loading and display of images, impacting the overall user experience. By clearly identifying and addressing these types of errors, the system can provide users with detailed and actionable error messages. This approach helps users understand the nature of the problem and offers potential solutions, enabling them to resolve issues efficiently and maintain smooth system operation.
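One simple way to realize the "detailed and actionable error messages" described above is a lookup from an error category to a user-facing message with a suggested remedy. The codes and wording below are illustrative assumptions, not taken from the actual system.

```javascript
// Hypothetical mapping from error categories to actionable messages.
// Both the codes and the message text are illustrative.
const ERROR_MESSAGES = {
  FILE_NOT_FOUND: "Texture file not found. Check that the texture path is correct.",
  UNSUPPORTED_FORMAT: "Texture format not supported. Convert the file to a supported format.",
  CACHE_MISS: "Data not in cache; fetching from the primary source may add delay.",
  STALE_DATA: "Cached data is outdated. Refresh to fetch the latest version.",
  NETWORK_UNAVAILABLE: "No internet connection. Reconnect and retry synchronization.",
  AUTH_FAILURE: "Invalid credentials. Sign in again to access cloud services.",
};

// Resolve a caught error code to a message the user 106 can act on.
function describeError(code) {
  return ERROR_MESSAGES[code] || "Unknown error (" + code + "). Please retry or contact support.";
}
```

Unrecognized codes fall through to a generic message, so the user always receives some indication of the problem.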
In an embodiment, the computing device 108 can be part of various interactive applications, where data of 3D models are compiled and stored. The computing device 108 may be personal computers, laptops, tablets, or any custom-built computing device that can connect to a network as an Internet of Things (IoT) device (not shown). Further, the network 104 can be configured with a centralized server 110 that stores compiled data received from the computing device 108. This architecture allows various facilities to construct 3D models and synchronize their data in one central database which is easily accessible via the network 104.
In an embodiment, the system 102 may receive data from the computing device 108. A person of ordinary skill in the art will understand that the computing device 108 may be individually referred to as computing device 108 and collectively referred to as computing devices 108. In an embodiment, the computing device 108 may also be referred to as User Equipment (UE). Accordingly, the terms “computing device” and “User Equipment” may be used interchangeably throughout the disclosure.
In an embodiment, the computing device 108 may transmit the at least one captured data packet over a point-to-point or point-to-multipoint communication channel or network 104 to the system 102.
In an embodiment, the computing device 108 may involve collection, analysis, and sharing of data received from the system 102 via the communication network 104. In an embodiment, the computing device 108 may be coupled to at least one reference database 218, derived from a centralized server 110, to provide reference training data that is used for building a multi-level cache for a three dimensional (3D) framework. In an exemplary embodiment, the communication network 104 may include, but not be limited to, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, or process one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. In an exemplary embodiment, the communication network 104 may include, but not be limited to, a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
A layout of the output end of the system 102 is described, as it may be implemented. The output of this system enables building a multi-level cache for a three dimensional (3D) framework. This output will typically be sent to a display, where a widget, table, or animated graphic is used to present it.
In an embodiment, the system 102 is connected to a network 104, which is connected to the at least one computing device 108, which may include, but is not limited to, personal computers, smartphones, laptops, tablets, and smart watches, as well as other IoT devices that support a display. When this output is received via the network 104, the receiver can understand the kinds of 3D models handled by the system 102, as well as act on the suggested recommendations.
In an embodiment, the network 104 is further configured with a centralized server 110 including a database, where all output is stored as part of 3D models. It can be retrieved whenever there is a need to reference this output in future.
FIGs. 3A-3H illustrate exemplary schematic diagram representations of a three dimensional (3D) framework using a user interface based on the multi-level cache, in accordance with an embodiment of the present disclosure.
In an embodiment, a returnRes function is responsible for processing fetch events, extracting relevant information such as a user ID and a scene ID from the request URL, and determining the appropriate response based on versioning information. The process of handling fetch requests with versioning can include extracting the User ID and Scene ID, checking for database existence and validity of the IDs, retrieving version information, and determining the response.
In an embodiment, extracting the User ID and Scene ID can include a function which extracts the user ID and the scene ID from the request URL using the extractUserId and extractSceneId helper functions, respectively.
In an embodiment, checking for database existence and validity of the IDs further checks whether the DBStorage object exists. If not, it throws an error indicating that the index database does not exist. It then verifies whether both the user ID and the scene ID are present in the request URL. If not, it throws an error indicating that either the user ID or the scene ID is missing.
In an embodiment, version information is retrieved from the local storage (LocalVir) and the server (serverVirson) using the DBStorage.get and getLatestVersionNumber functions, respectively. If an error occurs while fetching the server version, it is caught and logged to the console.
In an embodiment, the retrieved version information is used to determine the appropriate response: If both local and server versions are undefined and the scene is not in the local database, it returns a default response using DefualtRes. If both local and server versions are undefined but the scene is in the local database, it returns a response from the local database using IdbRes. If the local version is undefined but the server version is valid, it fetches the response from the server using FetchRes. If both local and server versions are valid and different, it fetches the response from the server using FetchRes. If both local and server versions are valid and identical, it returns a response from the local database using IdbRes. If none of the above cases match, it throws an error indicating that no possible case matched.
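The response-selection cases above can be condensed into the following sketch. It returns only a label naming which handler would be used ("default", "idb", or "fetch"), whereas the actual returnRes function would invoke DefualtRes, IdbRes, or FetchRes and return a real response; the function name and labels here are illustrative.

```javascript
// Sketch of the versioning decision logic: given the local version,
// the server version, and whether the scene exists in the local
// database, choose which response path applies.
function chooseResponse(localVersion, serverVersion, sceneInLocalDb) {
  const localValid = localVersion !== undefined;
  const serverValid = serverVersion !== undefined;
  if (!localValid && !serverValid) {
    // No version info on either side: fall back on local DB presence.
    return sceneInLocalDb ? "idb" : "default";
  }
  if (!localValid && serverValid) return "fetch"; // must fetch from server
  if (localValid && serverValid) {
    // Identical versions: local copy is current; otherwise refetch.
    return localVersion === serverVersion ? "idb" : "fetch";
  }
  throw new Error("No possible case matched");
}
```

The final throw covers the remaining combination (a local version with no server version), mirroring the "no possible case matched" error in the description.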
In an embodiment, an extension detection for an image data can be performed based on the image header. When an image is loaded, the system reads the header bytes of the image data to determine its format. Supported formats include BMP, GIF, JPG, PNG, WebP, KTX2, and Basis Universal. If the header matches any known format, the corresponding extension is returned. If the header indicates a WebP image, the ".webp" extension is returned. If the header does not match any known format, an error is thrown, indicating that the image data is invalid.
In an embodiment, the image extension detection process can include a header analysis, a format recognition, and error handling. The header analysis enables the system 102 to extract the header bytes from the image data to analyze its format and determine the appropriate extension. For format recognition, known image formats are mapped to their respective file extensions using a lookup table, which allows for quick and efficient identification of the image format. Error handling is implemented by the system 102 such that, if the header does not match any known format, an error is thrown to indicate that the image data is invalid.
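The header analysis and lookup described above can be sketched for a subset of the supported formats. This is an illustrative sketch using the well-known magic numbers for BMP, GIF, JPG, PNG, and WebP; the KTX2 and Basis Universal checks are omitted here for brevity, and the function name is an assumption.

```javascript
// Detect an image extension from the leading header bytes; throw for
// unrecognised data, matching the error-handling behaviour described.
function detectImageExtension(bytes) {
  const startsWith = (sig, offset = 0) =>
    sig.every((b, i) => bytes[offset + i] === b);

  if (startsWith([0x42, 0x4d])) return ".bmp";                 // "BM"
  if (startsWith([0x47, 0x49, 0x46, 0x38])) return ".gif";     // "GIF8"
  if (startsWith([0xff, 0xd8, 0xff])) return ".jpg";           // JPEG SOI marker
  if (startsWith([0x89, 0x50, 0x4e, 0x47])) return ".png";     // "\x89PNG"
  if (startsWith([0x52, 0x49, 0x46, 0x46]) &&                  // "RIFF" ...
      startsWith([0x57, 0x45, 0x42, 0x50], 8)) return ".webp"; // ... "WEBP" at byte 8
  throw new Error("Invalid image data: unknown header");
}
```

Note the WebP case checks two regions of the header (the RIFF container tag and the WEBP form type), which is why a simple single-prefix lookup table is extended slightly here.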
In an embodiment, FIGs. 3A-3B depict the user interface of an application in the computing device 108 which can be accessed by the user 106. The user interface can include the at least one parameter which can be implemented on the one or more models. The at least one parameter can include, but is not limited to: a texture, a position, a size, an angle, and the like. FIGs. 3C-3D depict a user interface which can be used to edit the one or more models, including an image which can be loaded and edited, based on user preference, along different axes. FIG. 3A illustrates data caching for 3D models, which speeds up applications by storing frequently accessed data locally on a device. This cached data can include multimedia like images, files, and scripts. This technique is beneficial for scenarios where data is retrieved and used repeatedly. However, caching large and frequently updated 3D models (hundreds of megabytes in size) poses challenges. Traditional caching systems invalidate the entire cache whenever a model is updated, rendering the cached data unusable and defeating the purpose of caching altogether. In an embodiment, the multi-level caching for 3D frameworks includes building a multi-level cache specifically designed for 3D frameworks.
Consider a web application designed for editing 3D scenes. The web application may include a scene preference, an environment section, a background colour option with a black background as the example selection (#323232), an assets sub-section, and the like. An "all assets visible" toggle switch presumably shows or hides all assets. Further, the web application includes a grid with a toggle switch, and includes a system camera and ambient light options. Further, the web application includes a background with a cube as the selected asset, and includes an audio section with options to select a sound file and adjust volume. The scene preferences screen allows users to configure the environment of a 3D scene, including the background colour, assets, and the audio.
FIG. 3B includes a schematic of a 3D modelling application such as an editor application. The schematic depicts a 3D cube model in the center of the workspace. A top bar contains a logo, a scene ID and User ID (UID), most likely unique identifiers for this project, and a star icon. In an example, the left panel includes project assets and a section labelled “Transform,” likely where users can manipulate the position and orientation of objects in the 3D scene. A centre panel may be a main workspace where the 3D model is displayed, and a right panel includes options for editing the material properties of the 3D model, including respective colour, transparency, and surface texture.
FIG. 3C depicts an editor application with an asset library, which may be a collection of assets that can be used to create 3D objects in a scene. The left panel of the screenshot depicts that the asset library contains folders for objects, scenes, favourites, models, backgrounds, audio, images, and videos. Users can search for assets within the library using the search bar at the top of the panel. In the screenshot, for example, the user may have selected the objects folder within the asset library. This folder likely contains a variety of pre-made 3D objects that can be inserted into a scene. The screenshot depicts a search bar at the top of the objects folder, which suggests that the user can search for specific objects within the library.
FIG. 3D depicts an editor interface for a 3D modelling application program. For example, the scene depicted is the scene preferences menu. The top bar contains the scene ID and UID, most likely unique identifiers for this project, and three buttons. One button with a star icon, another with a symbol, and a third with the text “finish update”. A left navigation panel contains various sections for creating and editing 3D scenes. In the image, the “scene preference” section is selected. A centre panel may be the main workspace where users can edit various scene properties. In the image, it shows options for “3D shape,” “text,” “camera,” “light source,” “environment,” “audio source,” and “background”. From the user interface, it appears that users may edit various aspects of the 3D scene including the geometry, materials, lighting, and audio.
FIG. 3E depicts a 3D modelling application with an editor interface for a painting on an easel in the centre of the workspace. The top bar contains the scene ID and UID, most likely unique identifiers for this project, and a star icon. The left panel contains the project assets and a section labelled “Transform,” likely where users can manipulate the position and orientation of objects in the 3D scene. The centre panel may be a main workspace where the 3D model is displayed; here it shows a 3D model of a painting on an easel. The right panel contains options for editing the material properties of the 3D model, including respective colour, transparency, and surface texture, to create a wide variety of 3D objects and scenes.
FIG. 3F appears to depict a 3D modelling or virtual environment editor, based on the interface. The editor interface includes a top bar that indicates when the application is in an editor mode, with options to undo, redo, and various other editing tools. An assets panel shows all assets visible, and a dropdown likely lets the user filter or view different asset categories. A cube may be a primary container or group for other objects. An art_board.glb may be a GLB file which seems to contain multiple plane objects. Further, a plane.009_bake, a plane.008_bake, and a plane.005_bake are individual plane objects within the art_board.glb file. Further, a 3D viewport allows the user to view and manipulate 3D objects. There are several plane objects with an image texture applied, arranged in the scene. For example, one of the plane objects may be currently selected, indicated by the blue outline and the transformation handles (red, green, and blue arrows).
The 3D object panel may include transform/surface tabs which allow the user to switch between transformation settings and surface/material settings. Sliders allow adjustment of the X, Y, and Z position of the selected object, as well as its rotation along the X, Y, and Z axes. Options are included to scale the object, with a toggle to scale evenly. The selected object appears to be a plane with an image texture, part of the larger art_board.glb group. The transformation settings allow the user to adjust its position, rotation, and scale within the scene.
FIG. 3G depicts a “share your creation” screen in a 3D creation platform that allows users to create and edit 3D models. Web URL creates a link that can be shared on social media or other platforms; anyone who clicks on the link will be able to view the 3D model in their web browser. An XR experience creates an immersive experience that can be viewed using a VR headset. Image creates a 2D image of the 3D model that can be shared on social media or other platforms. Further, 3D model includes creating a 3D file of the model that can be edited in other 3D applications. A “sync now” button, when clicked, may synchronize the creation with cloud servers, which may allow the user to access it from any device.
FIG. 3H discloses a storage for 3D resources. The system 102 centres around a model-input and splitting system, which may handle preparation of 3D resources for storage. Model-input and splitting may be where 3D resources are prepared for storage. For example, babylon.js may be a JavaScript® library used for creating 3D experiences in web browsers. Textures and geometry may be the two main components that produce a 3D model. Textures are the visual elements applied to a 3D model's surface, while geometry refers to the 3D shape of the model. Upload to backend includes uploading the processed 3D model data (textures and geometry) to a storage system. Fragmented blob storage for 3D resources provides how the 3D model data is stored. For example, a system breaks down the 3D model into chunks, and stores these chunks independently. A fragmental multi-level cache may be a caching system used to improve retrieval speeds for the 3D model data. A cache stores frequently accessed data for quicker retrieval. Here, the cache seems to be split across multiple levels. The text labels on the right side of the diagram depict different places where the 3D model data can be cached. The geometry cache may store the geometry data of the 3D model. The material cache may store the material data, which is how the textures are applied to the 3D model. Resource caches may be used for other types of data that may be part of the 3D model. The system 102 includes different cache locations: an L1 server-side Redis cache, which may be a cache on a server; an L2 network CDN cache, which may be a cache on the network; and an L3 browser Indexed Database (IDB) cache, which may be a cache stored on the user's browser using IDB technology.
FIG. 4 illustrates an exemplary flow diagram representation of a method 400 for building a multi-level cache for a three dimensional (3D) framework, in accordance with an embodiment of the present disclosure.
At step 402, the method 400 includes receiving, by the processor 112, an input data from the one or more computing devices 108 associated with at least one user 106. The input data corresponds to uploading one or more models in a three dimensional (3D) framework. The 3D framework includes, but is not limited to, an Artificial Intelligence (AI) based framework, a metaverse based framework, and the like.
At step 404, the method 400 includes extracting, by the processor 112, one or more components from the one or more models, and perform a modification on the extracted one or more components based on at least one parameter to obtain one or more transformed models.
At step 406, the method 400 includes constructing, by the processor 112, a multi-level cache and update the multi-level cache based on the one or more transformed models.
At step 408, the method 400 includes synchronizing, by the processor 112, the one or more transformed models in the at least one server by using the multi-level cache in the 3D framework.
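Steps 402-408 above can be sketched as a single pipeline. This is a high-level illustrative sketch only: each step is reduced to an injected placeholder function (`extract`, `transform`, `buildCache`, `synchronize` are assumptions standing in for the processor 112's actual operations on 3D model data).

```javascript
// Sketch of method 400: receive input, extract and transform
// components, construct the multi-level cache, and synchronize.
async function buildMultiLevelCachePipeline(inputData, deps) {
  // Step 402: receive input data (models uploaded to the 3D framework).
  const models = inputData.models;
  // Step 404: extract components and modify them per the parameters.
  const transformed = models.map((m) =>
    deps.transform(deps.extract(m), inputData.parameters)
  );
  // Step 406: construct the multi-level cache and update it.
  const cache = deps.buildCache(transformed);
  // Step 408: synchronize the transformed models with the server
  // by using the multi-level cache.
  await deps.synchronize(transformed, cache);
  return { transformed, cache };
}
```

Because the steps are injected, the flow itself can be exercised independently of any particular 3D engine or server.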
The order in which the method 400 is described is not intended to be construed as a limitation, and any number of the described method blocks may be combined or otherwise performed in any order to implement the method 400 or an alternate method. Additionally, individual blocks may be deleted from the method 400 without departing from the spirit and scope of the ongoing description. Furthermore, the method 400 may be implemented in any suitable hardware, software, firmware, or a combination thereof, that exists in the related art or that is later developed. The method 400 describes, without limitation, the implementation of the computing system 102. A person of skill in the art will understand that method 400 may be modified appropriately for implementation in various manners without departing from the scope and spirit of the ongoing description.
The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary, a variety of optional components are described to illustrate the wide variety of possible embodiments of the invention. When a single device or article is described herein, it will be apparent that more than one device/article (whether they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be apparent that a single device/article may be used in place of the more than one device or article, or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of the invention need not include the device itself.
FIG. 5 illustrates an exemplary flow diagram representation of a multi-level cache structure with resource segmentation, in accordance with an embodiment of the present disclosure. A 3D model may be segmented into distinct resources such as textures, geometry, and scene state. Further, each segment is treated independently, with the state of the scene holding referential pointers to the associated resources. This approach ensures that changes to one part of the model (e.g., geometry) do not necessitate reloading other parts (e.g., textures), thus optimizing the overall performance and resource management. Further, the segmentation allows for efficient management and updating of various parts of the model. Further, the resource segmentation may include, but is not limited to, a texture segmentation, a geometry segmentation, and a state segmentation. Further, the texture segmentation may include separating textures of the model from other components, which allows for efficient texture management and updates. Further, the geometry segmentation may include isolating geometry data of the model and enabling targeted updates and rendering. Further, the state resource segmentation may include holding references to textures and geometry and managing the overall structure and behaviour of the model. Further, the scene state segmentation may include pointers to the corresponding textures and geometry resources, facilitating integration and updates.
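The segmentation above can be illustrated with a small data-structure sketch: the scene state holds referential pointers (here, plain string keys, an illustrative choice) into independently stored texture and geometry segments, so updating one segment leaves the others, and the pointers, untouched.

```javascript
// Independently stored segments (keys and shapes are illustrative).
const textures = new Map([
  ["tex:leather_seat", { path: "/textures/leather_seat.jpg" }],
]);
const geometry = new Map([
  ["geo:car_body", { vertices: 15320 }],
]);

// The scene state holds referential pointers to the segments rather
// than embedding the data itself.
const sceneState = {
  models: [
    { name: "sports_car", textureRef: "tex:leather_seat", geometryRef: "geo:car_body" },
  ],
};

// Updating geometry touches only the geometry segment; the texture
// segment and the scene state pointers remain valid.
function updateGeometry(ref, data) {
  geometry.set(ref, data);
}
```

This is why a geometry change does not force textures to be reloaded: only the `geometry` segment is invalidated, while `sceneState` continues to reference both segments by key.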
FIG. 5A illustrates an exemplary flow diagram representation of the functioning of a multi-level cache, in accordance with an embodiment of the present disclosure. Further, the functioning of the multi-level cache may begin with a request for data. Further, the functioning may include a structured process to serve the request efficiently. Further, the structured process may include checking of the L1 cache. Further, the process may be configured to determine the fastest and the smallest cache. Further, the smallest cache may be located in memory or local storage (L1). Further, the L1 cache may be designed for quick access. Further, the data from the L1 cache may be retrieved instantly, minimizing latency. Further, the process may be configured to determine data from the L2 cache. Further, the L2 cache may be larger but slower than L1. Further, the L2 cache may be managed by a CDN or a regional storage system. A Content Delivery Network (CDN) is a system of distributed servers that work together to deliver content (like web pages, images, videos, and other media) to users based on their geographic location. CDNs are designed to improve the speed, reliability, and security of delivering content across the internet. Further, access to the L2 cache is slower compared to L1, but relatively faster compared to retrieving data from the server. Further, the process may be configured to determine the data from the L3 cache. Further, the L3 cache may be the largest and slowest. L3 is usually located on a server or in blob storage. Further, the blob storage may refer to a service or method of storing unstructured data, typically in the form of files or binary large objects (blobs). Determining the data from the L3 cache may involve more latency; however, L3 provides access to the full dataset that might not fit in the smaller caches. Further, by following the hierarchical approach, a system efficiently balances speed and capacity.
Further, frequently accessed data is quickly retrieved from the faster, smaller caches, while less frequently accessed data is still accessible from the slower, larger caches, which reduces overall data retrieval times and enhances performance by avoiding the need to always access the slower, larger storage directly.
FIG. 5B illustrates an exemplary flow diagram representation of a multi-level cache structure with separate caches for each resource type, in accordance with an embodiment of the present disclosure. Further, the diagram represents a system architecture diagram, focusing on cache levels for resources like geometry, textures, and state data, with Blob Storage as the primary data source. Further, the blob storage may include data such as, but not limited to, geometry, texture, state, and the like. Further, the architecture may be configured to use a multi-level caching system to optimize data retrieval and improve performance. Further, the caches may be configured to reduce the need to fetch data from the server or blob storage frequently, improving speed and efficiency. Further, the different caches for different resource types may include, but not limited to, geometry, texture, state, and the like. Further, the geometry resource may be configured to store model shapes and structures. Further, the geometry resource type may include Geometry Browser Cache (L1) to store geometry data that has been recently used in the browser. Further, the geometry resource type may include Geometry CDN Cache (L2) to hold geometry data for providing to multiple users from nearby servers. Further, the geometry resource type may include Geometry Server-Side Cache (L3) to store frequently accessed geometry data, reducing the need to go back to the Blob storage often. Further, the texture resource type may be configured to store graphical textures or images. Further, the texture resource type may include Textures Browser Cache (L1) to store texture files closest to the client (in the browser), providing quick access to recently used textures. Further, the texture resource type may include Textures CDN Cache (L2) to store texture data on a CDN for faster, regionalized access. 
Further, the texture resource type may include Textures Server-Side Cache (L3) to hold texture data that is frequently used, reducing server load by limiting calls to the Blob storage. Further, the state resource type may be configured to manage overall scene information. Further, the state resource type may include State Browser Cache (L1) to store state information in the browser, including pointers to textures and geometry resources. Further, the state resource type may include State CDN Cache (L2) to hold state data for quicker access across multiple users. Further, the state resource type may include State Server-Side Cache (L3) to store frequently accessed state data, which may include scene information and structural management of models.
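The per-resource-type hierarchy of FIG. 5B (L1 browser, L2 CDN, L3 server-side, for geometry, texture, and state) can be sketched as follows. This is a minimal in-memory illustration; the object and function names are assumptions for the sketch, not the actual implementation.

```javascript
// Each resource type gets its own L1/L2/L3 hierarchy, as in FIG. 5B.
function makeHierarchy() {
  return { L1_browser: new Map(), L2_cdn: new Map(), L3_server: new Map() };
}

const resourceCaches = {
  geometry: makeHierarchy(),
  texture: makeHierarchy(),
  state: makeHierarchy(),
};

// Look a key up in one resource type's hierarchy, fastest level first.
function lookup(type, key) {
  const hierarchy = resourceCaches[type];
  for (const levelName of ['L1_browser', 'L2_cdn', 'L3_server']) {
    if (hierarchy[levelName].has(key)) {
      return { level: levelName, value: hierarchy[levelName].get(key) };
    }
  }
  return null; // miss at every level: the caller falls back to blob storage
}

resourceCaches.texture.L2_cdn.set('wood.png', 'texture-bytes'); // present only at L2
const result = lookup('texture', 'wood.png');
```

Keeping the hierarchies separate per resource type means a texture flush does not evict geometry or state entries.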
FIG. 6 illustrates an exemplary flow diagram representation of a process of data retrieval, in accordance with an embodiment of the present disclosure. Further, the process may involve multiple layers of caches such as, for example, but not limited to, local cache 602, CDN cache 604, server cache 606, main server 608, blob storage 610, and the like. Further, the process may include a user request. Further, a user may request data from a 3D Scene application. Further, the process may first check the local cache 602 closest to the user, such as a browser cache or a local storage, for the requested data. Further, in case the data is obtained, the data is retrieved directly from the local cache 602 and returned to the user, completing the request. Further, the process may continue to the next level, the CDN cache 604, in case the data is not obtained. Further, the request is passed to the CDN cache 604, which stores data regionally to optimize distribution. Further, the data is retrieved and sent back to the user in case the data is obtained in the CDN cache 604. Further, the process continues to check the server cache 606 in case the data is not obtained at the CDN cache 604. Further, the request is forwarded to the server cache 606, which holds data closer to the server. The data is returned to the user in case the data is obtained at the server cache 606. Further, the request is sent to the main server 608 in case the data is not obtained at the server cache 606. Further, the main server 608 is checked for the data. In case the data is obtained, the data is returned to the user and stored in intermediate caches such as the local cache 602, the CDN cache 604, and the server cache 606 for future access. Further, in case none of the caches or the main server 608 have the requested data, the data is fetched from the Blob Storage 610.
Once retrieved from Blob storage 610, the data is passed to the main server 608 and then down through the various caches to optimize future requests.
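The retrieval waterfall of FIG. 6, including the write-back of a fetched value into the faster layers, can be sketched as below. The layer and key names are illustrative assumptions; real layers would be asynchronous network hops rather than in-memory Maps.

```javascript
// Try each layer in order (local -> CDN -> server cache -> main server); on a
// hit, backfill every faster layer so future requests stop earlier (FIG. 6).
function retrieve(key, layers, blobStorage) {
  const missed = [];
  for (const layer of layers) {
    if (layer.store.has(key)) {
      const value = layer.store.get(key);
      for (const m of missed) m.store.set(key, value); // backfill faster layers
      return { from: layer.name, value };
    }
    missed.push(layer);
  }
  // None of the caches or the main server had the data: go to blob storage,
  // then populate every layer on the way back.
  const value = blobStorage.get(key);
  for (const m of missed) m.store.set(key, value);
  return { from: 'blob-storage', value };
}

const layers = ['local-602', 'cdn-604', 'server-cache-606', 'main-server-608']
  .map((name) => ({ name, store: new Map() }));
const blob = { get: () => 'mesh-bytes' };

const first = retrieve('scene-1', layers, blob);  // miss everywhere -> blob storage
const second = retrieve('scene-1', layers, blob); // now served from the local cache
```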
FIG. 7 illustrates an exemplary flow diagram representation of a hierarchical flow of data between various caching layers, in accordance with an embodiment of the present disclosure. Further, the diagram illustrates a hierarchical caching system for managing assets in an architecture, specifically for handling model, texture, geometry, and state data. Further, the model may be configured to interact with classes such as, but not limited to, state, texture, and geometry. Further, these classes may be configured to load, update, and fetch specific files. Further, the state, geometry, and texture classes may be configured to pass the data to the plurality of caches such as, but not limited to, browser cache, CDN cache, server cache, and Blob storage.
Further, the model may be a central component responsible for coordinating the handling of State, Texture, and Geometry data. Further, the model may be configured to load the model from a file or storage. Further, the model may be configured to update the model data. Further, the model may be configured to retrieve the associated texture, geometry data, and state information of the model. Further, the state may be configured to store the data of the model such as, but not limited to, characteristics and properties of the model. Further, the state may be configured to load the state data from storage. Further, the state may be configured to update the current state. Further, the texture may be configured to load texture data, update the texture, and retrieve the texture file. Further, the methods for managing the geometry data may include loading geometrical data, updating the geometry, and retrieving the geometry of the file. Further, the system includes four primary caching layers that work together to optimize the retrieval, caching, and storage. Further, the cache may include, but not limited to, browser cache, CDN cache, server cache, and blob storage. Further, the functions of the browser cache may include caching texture data locally in the browser, caching geometry data, caching state data, retrieving cached texture from the browser, retrieving cached geometry from the browser, and retrieving cached state from the browser. Further, the CDN cache is a Content Delivery Network (CDN) caching layer, which is geographically distributed for faster asset retrieval. Further, the functions of the CDN cache may include functions similar to those of the browser cache. Further, the server cache may be configured to act as an intermediary between the CDN cache and the backend storage system. Further, the functions of the server cache may include caching textures, geometry, and state data. Further, the functions of the server cache may include retrieving data directly from blob storage whenever needed.
Further, the functions of the blob storage may include, but not limited to, fetching the original texture from storage, fetching the original geometry from storage, and fetching the original state data from storage.
FIG. 8 illustrates an exemplary flow diagram representation of a process of handling a texture request in a 3D scene, in accordance with an embodiment of the present disclosure. According to FIG. 8, a user requests a texture for the 3D scene. Further, the 3D Scene forwards the request to the Cache Manager 802 by calling handleTextureRequest(textureUrl). Further, the Cache Manager 802 checks whether the requested texture is available in the cache using checkCacheForTexture(textureUrl). Further, in case the texture is found in the cache, the Cache Manager 802 retrieves the texture by calling textureFromCache and returns the texture directly to the 3D scene. Further, the sequence ends, and the texture is delivered to the user. Further, in case the texture is not found, the Cache Manager 802 requests the texture from the Texture Server 804 by calling fetchTextureFromServer(textureUrl). Further, the Texture Server 804 fetches the texture and returns it to the Cache Manager 802. Further, after receiving the fetched texture, the Cache Manager 802 stores the texture in the cache by calling storeTextureInCache(textureUrl, fetchedTexture). Further, the fetched texture is then returned to the 3D scene. Further, the texture is delivered to the user in the 3D scene.
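The Cache Manager flow of FIG. 8 can be sketched as follows. The method names (handleTextureRequest, checkCacheForTexture, fetchTextureFromServer, storeTextureInCache) follow the figure; the bodies and the stub Texture Server are assumptions for the sketch.

```javascript
// Cache Manager from FIG. 8: serve from cache on a hit, otherwise fetch
// from the Texture Server and store the result for future requests.
class CacheManager {
  constructor(textureServer) {
    this.cache = new Map();
    this.textureServer = textureServer;
  }

  checkCacheForTexture(textureUrl) {
    return this.cache.get(textureUrl); // undefined on a miss
  }

  storeTextureInCache(textureUrl, fetchedTexture) {
    this.cache.set(textureUrl, fetchedTexture);
  }

  handleTextureRequest(textureUrl) {
    const cached = this.checkCacheForTexture(textureUrl);
    if (cached !== undefined) return cached;        // texture found in cache
    const fetched = this.textureServer.fetchTextureFromServer(textureUrl);
    this.storeTextureInCache(textureUrl, fetched);  // cache for future requests
    return fetched;
  }
}

// Stub Texture Server that counts how often it is actually hit.
const server = {
  calls: 0,
  fetchTextureFromServer(url) { this.calls++; return `texture:${url}`; },
};
const manager = new CacheManager(server);
const a = manager.handleTextureRequest('/brick.png'); // fetched from the server
const b = manager.handleTextureRequest('/brick.png'); // served from the cache
```

The second request never reaches the Texture Server, which is the load reduction the figure describes.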
FIG. 9 illustrates an exemplary flow diagram representation of a process of synchronizing changes from a 3D scene with a cloud service, in accordance with an embodiment of the present disclosure. According to FIG. 9, a user modifies the 3D scene and triggers synchronization. Further, the 3D Scene is a component representing the scene where modifications are made. Further, a Synchronization Module 902 may be configured for managing the synchronization between local changes and the cloud. Further, a cloud server 904 is a component where changes are synchronized. Further, a service worker 906 may facilitate cloud synchronization. Further, the user may be configured to make changes to the 3D scene. Further, the 3D Scene may be configured to detect the changes by calling detectChanges(). Further, the 3D Scene may be configured to send an event to the Synchronization Module 902 with dispatchSyncEvent('syncCloud') to initiate the sync process. Further, the Synchronization Module 902 may be configured to request the local changes by calling fetchLocalChanges(), and the 3D Scene returns the detected local changes. Further, the Synchronization Module 902 may be configured to initiate the process of synchronizing with the cloud server 904 by calling synchronizeWithCloud().
Further, the Synchronization Module 902 may be configured to send the local changes to the Cloud Service using pushChangesToCloud(localChanges). Further, the Cloud Service 904 may be configured to confirm the changes have been successfully pushed with confirmChangesPushed(). Further, the Synchronization Module 902 clears the local changes by calling clearLocalChanges(), after the changes are successfully synchronized. Further, the 3D Scene notifies the user with notifySuccess("Synchronization successful") after successful synchronization. Synchronization with cloud services is crucial for maintaining data consistency across different devices and platforms. In this system, synchronization is managed through a dedicated synchronization module that ensures changes made to the 3D scene are accurately propagated to the cloud, allowing users to access the latest version of the scene from anywhere. The synchronization module leverages service workers for reliable and efficient communication between the client and the cloud. Further, the synchronization events are triggered based on specific actions or time intervals, ensuring that updates are promptly pushed to the cloud. Further, the service workers 906 intercept synchronization events and manage the synchronization process in the background, ensuring that the user experience remains uninterrupted. Further, the changes to the 3D scene, including modifications, additions, or deletions, are captured and sent to the cloud server 904, keeping all users updated with the most recent version. Further, the conflict resolution mechanisms handle simultaneous edits from multiple users, reconciling differences to maintain data integrity.
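The synchronization flow of FIG. 9 can be sketched as below. The method names (fetchLocalChanges, pushChangesToCloud, clearLocalChanges, synchronizeWithCloud) follow the figure, while the scene and cloud objects are simplified in-memory stand-ins (assumptions for this sketch).

```javascript
// Synchronization Module from FIG. 9: fetch local changes, push them to the
// cloud, and clear them only after the cloud confirms the push.
class SynchronizationModule {
  constructor(scene, cloudService) {
    this.scene = scene;
    this.cloudService = cloudService;
  }

  synchronizeWithCloud() {
    const localChanges = this.scene.fetchLocalChanges();
    if (localChanges.length === 0) return 'nothing to sync';
    const confirmed = this.cloudService.pushChangesToCloud(localChanges);
    if (confirmed) {
      this.scene.clearLocalChanges(); // clear only after a confirmed push
      return 'Synchronization successful';
    }
    return 'Synchronization failed';
  }
}

// Simplified stand-ins for the 3D Scene and the Cloud Service.
const scene = {
  changes: [{ op: 'translate', target: 'cube-1' }],
  fetchLocalChanges() { return this.changes; },
  clearLocalChanges() { this.changes = []; },
};
const cloud = {
  received: [],
  pushChangesToCloud(changes) { this.received.push(...changes); return true; },
};

const status = new SynchronizationModule(scene, cloud).synchronizeWithCloud();
```

Clearing local changes only after confirmation means an unconfirmed push leaves the changes queued for a retry.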
FIG. 10 illustrates an exemplary flow diagram representation of a Service Worker handling asset requests in a web application, in accordance with an embodiment of the present disclosure. Further, the flow diagram may include a User to request an asset through the browser. Further, an asset may include an image, a script, other resources, and the like. Further, a Browser 1002 may represent the user’s web browser, which makes the asset request. Further, a Service Worker 1004 may represent a script that runs in the background and intercepts network requests, acting as a proxy between the Browser 1002 and the Network 1008 or Cache Storage 1006. Further, Cache Storage 1006 may represent a store of cached assets held locally for faster retrieval without needing to access the network 1008. Further, Network 1008 may represent the external source from which assets are fetched in case they are not available in the cache. Further, the User requests an asset through the Browser 1002. Further, the Service Worker 1004 intercepts the request made by the Browser 1002 and takes control of how the asset is fetched. Further, the Service Worker 1004 checks whether the requested asset is available in Cache Storage 1006 using checkCacheForAsset(request). Further, in case the asset is obtained in the cache, the Service Worker 1004 retrieves it from the cache by calling assetFromCache. Further, the asset is then returned to the Browser 1002 with return assetFromCache, and the request is completed without needing the network 1008. Further, in case the asset is not obtained in the cache, the Service Worker 1004 requests the asset from the Network 1008 by calling fetchAssetFromNetwork(request). Further, the Network 1008 fetches the asset and returns it to the Service Worker 1004 with return fetchedAsset. Further, the Service Worker 1004 then stores the fetched asset in Cache Storage 1006 by calling storeAssetInCache(request, fetchedAsset) for future use.
Further, the fetched asset is returned to the Browser 1002, completing the request.
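The cache-first strategy of FIG. 10 can be sketched as a plain async function so the decision logic is visible; in a real Service Worker this would run inside a `fetch` event listener over the browser's Cache Storage API. The injected cacheStorage and network objects here are in-memory stand-ins (assumptions for the sketch).

```javascript
// Cache-first asset handling from FIG. 10: check Cache Storage, and only on
// a miss go to the network, storing the fetched asset for future requests.
async function handleAssetRequest(request, cacheStorage, network) {
  const assetFromCache = await cacheStorage.match(request);
  if (assetFromCache !== undefined) {
    return assetFromCache;                        // served without the network
  }
  const fetchedAsset = await network.fetch(request);
  await cacheStorage.put(request, fetchedAsset);  // store for future use
  return fetchedAsset;
}

// Stand-ins mimicking Cache Storage and the network for this sketch.
function makeEnv() {
  const store = new Map();
  return {
    cacheStorage: {
      async match(req) { return store.get(req); },
      async put(req, asset) { store.set(req, asset); },
    },
    network: {
      calls: 0,
      async fetch(req) { this.calls++; return `asset:${req}`; },
    },
  };
}
```

In an actual Service Worker the same shape appears with `caches.match(request)`, `fetch(request)`, and `cache.put(request, response.clone())`.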
FIG. 11 illustrates an exemplary flow diagram representation of an asset caching and serving process in a web application using a Service Worker and IndexedDB (IDB), in accordance with an embodiment of the present disclosure. A client, such as a web browser, requests an asset. Further, a Service Worker 1004, acting as a proxy between the client and the server, intercepts the request. Further, the Service Worker 1004 first checks the local cache (IDB) 1102 to determine whether the asset with the requested ID exists. Further, in case the asset exists, the Service Worker 1004 checks the version of the asset stored in the cache. Further, in case the local version is the same as the server version, the cached asset is served to the client. Further, in case the local version is different from the server version, the Service Worker fetches the latest version from the server and stores it in the cache before serving it to the client. Further, in case the asset does not exist in the cache, the Service Worker 1004 checks the IDB 1102 to determine whether the asset exists there. Further, in case the asset exists in the IDB 1102, it is served to the client as a default response. Further, in case the asset does not exist in the IDB 1102, the Service Worker 1004 fetches the latest version from the server 1104 and stores it in both the cache and the IDB 1102 before serving it to the client. Further, in case the asset is not found in the cache or the IDB 1102, or in case the local version is outdated, the Service Worker 1004 fetches the latest version from the server 1104. Further, the Service Worker 1004 stores the fetched asset and its version in the cache for future use. Further, the Service Worker 1004 serves the asset to the client, either from the cache or from the server 1104. Further, the function extracts user and scene IDs from the request URL to identify the resource to be fetched.
Further, the function verifies the existence of the local database and checks the validity of the extracted IDs. Further, the function retrieves the asset's version from local storage and compares it to the latest server version. Based on the comparison results, the function determines the appropriate response: returning a default response, serving the asset from the local database, fetching the latest version from the server, or serving the cached asset if it is up-to-date. Further, in case none of these conditions are met, an error is thrown to indicate that no matching case was found.
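The version comparison at the heart of FIGs. 11 and 12 can be sketched as below: serve the cached asset when its stored version matches the server version, otherwise fetch the latest version and re-cache it. The function and field names are illustrative assumptions.

```javascript
// Version-checked asset resolution: cache hit only when the stored version
// matches the latest server version; otherwise refetch and store.
function resolveAsset(assetId, cache, server) {
  const cached = cache.get(assetId); // { version, data } or undefined
  const serverVersion = server.latestVersion(assetId);
  if (cached !== undefined && cached.version === serverVersion) {
    return { source: 'cache', data: cached.data }; // local version is current
  }
  // Missing or outdated: fetch the latest version and store it for next time.
  const data = server.fetchAsset(assetId);
  cache.set(assetId, { version: serverVersion, data });
  return { source: 'server', data };
}

const cache = new Map();
const server = {
  versions: { 'scene-7': 2 },
  latestVersion(id) { return this.versions[id]; },
  fetchAsset(id) { return `${id}@v${this.versions[id]}`; },
};

cache.set('scene-7', { version: 1, data: 'scene-7@v1' }); // stale local copy
const updated = resolveAsset('scene-7', cache, server);   // versions differ: refetched
const cachedHit = resolveAsset('scene-7', cache, server); // now served from cache
```

Storing the version alongside the data is what lets the Service Worker decide between the cached and the server copy without downloading the asset itself.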
FIG. 12 illustrates an exemplary flow chart representation of an asset caching and serving process in a web application using a Service Worker and IndexedDB (IDB), in accordance with an embodiment of the present disclosure. The process starts with a client, such as a web browser, requesting an asset from the Service Worker. Further, the Service Worker checks whether the requested asset ID is present in its local cache (IDB). Further, in case the asset ID exists, the process proceeds to the next step. Further, in case the asset ID does not exist, the process follows the "No" branch. Further, in case the ID is found in the cache, the Service Worker checks the version of the cached asset. Further, in case the local version is defined and matches the server version, the cached asset is directly provided to the client. Further, in case the local version is undefined or does not match the server version, the process proceeds to fetch the latest asset and version from the server. Further, in case the asset is not in the cache or the local version is outdated, the Service Worker fetches the latest asset and its version from the server. Further, in case the local version was undefined, the process proceeds to the next step. Further, in case the local version was defined but did not match the server version, the process continues. Further, in case the server version is not yet known, the Service Worker fetches the asset ID from the server. Further, the Service Worker checks whether the fetched server version is valid. Further, in case the server version is invalid, the process returns a default response. Further, in case both the local and server versions are valid and different, the asset ID has been updated on the server. Further, the Service Worker fetches the latest asset ID from the server and stores the asset ID in the cache. Further, in case the asset ID was not in the cache before, the Service Worker checks whether the asset ID is now in the cache after fetching the latest version.
Further, in case the asset is now in the cache, the cached asset ID is provided to the client. Further, in case the asset is still not in the cache, the process returns a default response. Further, the Service Worker serves the cached asset from the IDB as a default response. Further, in case the local and server versions match, the Service Worker serves the cached asset to the client. Further, in case the asset was fetched from the server, the Service Worker stores the asset in the cache for future use. Further, the Service Worker serves the latest asset to the client, either from the cache or after fetching it from the server.
FIG. 13 illustrates an exemplary flow diagram representation of a process of storing an asset in a local IndexedDB database using a Service Worker, in accordance with an embodiment of the present disclosure. The process begins when a client, such as a web browser, initiates a request to store an asset in the local cache. Further, a Service Worker intercepts the request and calls the storeAssetInCache function to handle the caching process. Further, the storeAssetInCache function opens an IndexedDB database named "assets-db" with version 1. Further, the IndexedDB database instance is returned to the storeAssetInCache function. Further, the storeAssetInCache function uses the put() method to store the asset in the "assets" object store of the database. The asset is indexed by its URL, which is used as the key. Further, the IndexedDB database successfully stores the asset. Further, the storeAssetInCache function returns a success message to the Service Worker, indicating that the asset was stored successfully. Further, the Service Worker receives the success message and informs the client that the asset was cached successfully.
FIG. 14 illustrates an exemplary flow diagram representation of a process of dispatching a custom event named "cacheUpdated" from a Service Worker, in accordance with an embodiment of the present disclosure. The process begins when a client, such as a web browser, initiates a request that triggers a cache update in the Service Worker. Further, the Service Worker calls the dispatchCacheUpdateEvent function to notify the main thread about the cache update. Further, the dispatchCacheUpdateEvent function creates a new custom event named "cacheUpdated". Further, the created event is dispatched to the window object, which acts as the event target.
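The event dispatch of FIG. 14 can be sketched as follows. In a page the target would be the window object; here an EventTarget stands in so the snippet also runs outside a browser, and the detail payload (the URL of the updated entry) is an assumed shape.

```javascript
// CustomEvent is a browser global; fall back to a minimal shim where absent.
const CustomEventCtor = typeof CustomEvent !== 'undefined'
  ? CustomEvent
  : class extends Event {
      constructor(type, options = {}) { super(type); this.detail = options.detail; }
    };

// In a browser this would be `window`; EventTarget is a portable stand-in.
const target = typeof window !== 'undefined' ? window : new EventTarget();

function dispatchCacheUpdateEvent(assetUrl) {
  // Notify any listening parts of the 3D framework that the cache changed.
  target.dispatchEvent(new CustomEventCtor('cacheUpdated', { detail: { assetUrl } }));
}

let lastUpdated = null;
target.addEventListener('cacheUpdated', (event) => { lastUpdated = event.detail.assetUrl; });
dispatchCacheUpdateEvent('/assets/scene-7.glb');
```

Any component of the framework can subscribe with addEventListener('cacheUpdated', …) and react to the updated URL without polling the cache.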
FIG. 15 illustrates an exemplary block diagram representation of image formats and header mapping, in accordance with an embodiment of the present disclosure. The diagram provides an overview of how specific byte sequences, called "header bytes," map to common image formats. Further, the header bytes may refer to a sequence of bytes that appears at the beginning of a file. Further, the sequence of bytes may be used to identify the image format. Further, the image format may include, but not limited to, BMP, GIF, JPEG, PNG, WebP, and the like. Further, the header byte sequence [0x42, 0x4D] may be used for BMP images. Further, the header byte sequence [0x47, 0x49] may be used for GIF images. Further, the header byte sequence [0xFF, 0xD8] may be used for JPEG images. Further, the header byte sequence [0x89, 0x50] may be used for PNG images. Further, the header byte sequence [0x52, 0x49, 0x46, 0x46] may be used for WebP images. Further, each header sequence is mapped to its respective image format, allowing recognition of the format based on these unique header bytes.
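The header-byte mapping of FIG. 15 can be sketched directly in code. The byte sequences are exactly those listed above; the function name is an assumption for the sketch.

```javascript
// Header-byte signatures from FIG. 15, longest prefix listed first.
const HEADER_SIGNATURES = [
  { format: 'WebP', bytes: [0x52, 0x49, 0x46, 0x46] },
  { format: 'BMP',  bytes: [0x42, 0x4d] },
  { format: 'GIF',  bytes: [0x47, 0x49] },
  { format: 'JPEG', bytes: [0xff, 0xd8] },
  { format: 'PNG',  bytes: [0x89, 0x50] },
];

// Identify the image format from the first bytes of a file buffer.
function detectImageFormat(buffer) {
  for (const { format, bytes } of HEADER_SIGNATURES) {
    if (bytes.every((b, i) => buffer[i] === b)) return format;
  }
  return 'unknown';
}

const pngHeader = new Uint8Array([0x89, 0x50, 0x4e, 0x47]);
const fmt = detectImageFormat(pngHeader);
```

Sniffing the header bytes lets the cache classify a stored blob without trusting its file extension or MIME type.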

The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words “comprising”, “having”, “containing”, and “including”, and other similar forms are intended to be equivalent in meaning and be open-ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise.
Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based here on. Accordingly, the embodiments of the present invention are intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.

ADVANTAGES OF THE PRESENT DISCLOSURE
The present disclosure provides a system and method for building a multi-level cache for a 3D framework, which is a simple, real-time, non-intrusive, low-cost, and easy-to-set-up system.
The present disclosure ensures that the most relevant data is readily available, reducing the need to fetch data from slower storage layers and optimizing memory usage.
The present disclosure reduces delays in rendering frequently updated parts, leading to smoother interactions, which is particularly important in interactive applications.
The present disclosure provides predictive caching: by analysing usage patterns, the system can predict which parts of the model will be updated frequently and cache them proactively, improving the overall efficiency.
The present disclosure provides adaptive caching, where the system can adapt to changing usage patterns, dynamically adjusting the cache strategy to ensure optimal performance.
The present disclosure provides improved Cache Hit Rate: Segments that are unchanged or rarely changed have a higher probability of being cached, leading to improved performance.

The present disclosure provides Reduced Load Times: Only modified segments need to be fetched from the network, while the unchanged segments are served from the cache, reducing the overall load time.
Claims:CLAIMS
We Claim:
1. A cloud-computing system (102) for building a multi-level cache for a three dimensional (3D) framework, the cloud-computing system (102) comprises:
at least one server (110) configured to communicate with one or more computing devices (108) establishing a secure communication channel over a network (104); and
one or more processors (112) coupled to a memory (114), wherein the memory (114) stores processor-executable instructions, which on execution, cause the one or more processors (112) to:
receive an input data from the one or more computing device (108) associated with at least one user (106), wherein the input data corresponds to uploading one or more models in a three dimensional (3D) framework, wherein the 3D framework comprises at least one of an Artificial Intelligence (AI) based framework and a metaverse based framework;
extract one or more components from the one or more models, and perform a modification on the extracted one or more components based on at least one parameter to obtain one or more transformed models;
construct a multi-level cache and update the multi-level cache based on the one or more transformed models; and
synchronize the one or more transformed models in the at least one server by using the multi-level cache in the 3D framework.
2. The cloud-computing system (102) as claimed in claim 1, wherein the one or more processors (112) is further configured to:
identify the one or more components on which modification is being performed; and
upload changes corresponding to the identified one or more components of the models in the at least one server by using the multi-level cache in the 3D framework.

3. The cloud-computing system (102) as claimed in claim 1, wherein the one or more transformed models corresponds to one or more commands executed on the one or more models, wherein the one or more commands comprises at least one of a translate, a rotate, a scale, and a subdivide.

4. The cloud-computing system (102) as claimed in claim 1, wherein the at least one parameter of the one or more models comprises at least one of a texture, a position, a size, and an angle.

5. The cloud-computing system (102) as claimed in claim 1, wherein the multi-level cache comprises one or more geometry caches, one or more material caches, and one or more resources caches.

6. The cloud-computing system (102) as claimed in claim 1, wherein the one or more processors (112) is further configured to:
identify the one or more transformed models of the one or more models which are frequently updated; and
perform a selective update of the multi-level cache for the one or more transformations of the frequently updated one or more models.

7. The cloud-computing system (102) as claimed in claim 6, wherein the one or more processors (112) is further configured to:
maintain a synchronized state by performing a synchronized process with the at least one server to store the one or more transformed models of the one or more models which are frequently updated.

8. The cloud-computing system (102) as claimed in claim 7, wherein the synchronized process comprises at least one of a fetch local change, a push change to cloud, and a clear local changes.

9. The cloud-computing system (102) as claimed in claim 1, wherein the one or more models comprises at least one of a statue, a vehicle, an electronic device, a building, and a book.

10. The cloud-computing system (102) as claimed in claim 1, wherein the one or more processors (112) is further configured to:
separate a binary data and a metadata corresponding to the one or more models, wherein the metadata is extracted and synchronized with the at least one server based on the one or more transformed models by using the multi-level cache, wherein the metadata comprises at least one of a model information, a colour of model, a reflectivity, a transparency, and a texture path.

11. The cloud-computing system (102) as claimed in claim 1, wherein the one or more processors (112) is further configured to:
execute a texture request by checking the multi-level cache, if the texture path exists in the multi-level cache, then an appropriate texture is fetched from the multi-level cache, else, the texture is fetched from the at least one server and stored in the multi-level cache.

12. The cloud-computing system (102) as claimed in claim 1, wherein the one or more processors (112) is further configured to:
asynchronously execute a database and store the modifications using a uniform resource locator (URL) which can be retrieved based on a future request.

13. The cloud-computing system (102) as claimed in claim 1, wherein the one or more processors (112) is further configured to:
create a custom event and dispatch it to a window object by providing the URL of the updated modifications to enable one or more parts of the 3D framework to listen for the multi-level cache and respond accordingly.

14. The cloud-computing system as claimed in claim 1, wherein the one or more processors (112) is further configured to:
receive a request from a user to retrieve a last used 3D model;
parse the received request to identify a specific component of the 3D model to be retrieved;
identify device configuration settings and a device type used by the user, wherein the configuration settings comprise at least one of a high performance machine, and a network speed; and
retrieve the specific component of the 3D model from the at least one server based on the identified device configuration setting and the device type used by the user.

15. The cloud-computing system as claimed in claim 1, wherein the one or more processor (112) is further configured to:
identify specific portions of the 3D model data that failed to load;
transmit error events to the computing devices (108) for error detection and resolution, wherein, upon encountering an error, the system (102) provides users (106) with informative messages detailing the issue and potential solutions.

16. A method for building a multi-level cache for a three dimensional (3D) framework, the method comprises:
receiving, by a processor (112) associated with a system (102), an input data from the one or more computing device (108) associated with at least one user (106), wherein the input data corresponds to uploading one or more models in a three dimensional (3D) framework, wherein the 3D framework comprises at least one of an Artificial Intelligence (AI) based framework and a metaverse based framework;
extracting, by a processor (112), one or more components from the one or more models, and performing a modification on the extracted one or more components based on at least one parameter to obtain one or more transformed models;
constructing, by a processor (112), a multi-level cache and updating the multi-level cache based on the one or more transformed models; and
synchronizing, by a processor (112), the one or more transformed models in the at least one server by using the multi-level cache in the 3D framework.
17. The method as claimed in claim 16, further comprises:
identifying, by a processor (112), the one or more components on which modification is being performed; and
uploading, by a processor (112), changes corresponding to the identified one or more components of the models in the at least one server by using the multi-level cache in the 3D framework.

18. The method as claimed in claim 16, wherein the one or more transformed models corresponds to one or more commands executed on the one or more models, wherein the one or more commands comprises at least one of a translate, a rotate, a scale, and a subdivide.

19. The method as claimed in claim 16, wherein the at least one parameter of the one or more models comprises at least one of a texture, a position, a size, and an angle.

20. The method as claimed in claim 16, wherein the multi-level cache comprises one or more geometry caches, one or more material caches, and one or more resources caches.

21. The method as claimed in claim 16 further comprises:
identifying, by a processor (112), the one or more transformed models of the one or more models which are frequently updated; and
performing, by a processor (112), a selective update of the multi-level cache for the one or more transformations of the frequently updated one or more models.

22. The method as claimed in claim 16 further comprises:
maintaining, by a processor (112), a synchronized state by performing a synchronized process with the at least one server to store the one or more transformed models of the one or more models which are frequently updated.

23. The method as claimed in claim 22, wherein the synchronized process comprises at least one of a fetch local change, a push change to cloud, and a clear local changes.

24. The method as claimed in claim 16, wherein the one or more models comprises at least one of a statue, a vehicle, an electronic device, a building, and a book.

25. The method as claimed in claim 16 further comprises:
separating, by a processor (112), binary data and metadata corresponding to the one or more models, wherein the metadata is extracted and synchronized with the at least one server based on the one or more transformed models by using the multi-level cache, and wherein the metadata comprises at least one of a model information, a colour of the model, a reflectivity, a transparency, and a texture path.
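The binary/metadata split in the step above can be sketched as two fields on a stored model, with only the lightweight metadata extracted for synchronization. The interface names and field list mirror the claim's examples but are otherwise assumptions.

```typescript
// Metadata fields taken from the claim's examples; names are assumed.
interface ModelMetadata {
  modelInfo: string;
  colour: string;
  reflectivity: number;
  transparency: number;
  texturePath: string;
}

// Heavy geometry stays as binary data; metadata travels separately.
interface StoredModel { binary: ArrayBuffer; metadata: ModelMetadata; }

// Extract a copy of the metadata for synchronization with the server,
// leaving the binary data behind.
function extractMetadata(model: StoredModel): ModelMetadata {
  return { ...model.metadata };
}
```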

26. The method as claimed in claim 16, further comprises:
executing, by a processor (112), a texture request by checking the multi-level cache, wherein, if the texture path exists in the multi-level cache, an appropriate texture is fetched from the multi-level cache; otherwise, the texture is fetched from the at least one server and stored in the multi-level cache.
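The texture request above is a cache-aside lookup, which can be sketched as follows. The `fetchFromServer` callback is a hypothetical stand-in for the server round trip.

```typescript
// Cache-aside texture lookup: serve from the cache on a hit, otherwise
// fetch from the server and store the result for next time.
function getTexture(
  cache: Map<string, Uint8Array>,
  texturePath: string,
  fetchFromServer: (path: string) => Uint8Array
): Uint8Array {
  const cached = cache.get(texturePath);
  if (cached !== undefined) return cached;       // cache hit
  const texture = fetchFromServer(texturePath);  // cache miss: go to server
  cache.set(texturePath, texture);               // store in the cache
  return texture;
}
```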

27. The method as claimed in claim 16 further comprises:
asynchronously updating, by a processor (112), a database and storing the modifications using a uniform resource locator (URL), wherein the stored modifications can be retrieved based on a future request.

28. The method as claimed in claim 16 further comprises:
creating, by a processor (112), a custom event and dispatching the custom event to a window object by providing the URL of the updated modifications, thereby enabling one or more parts of the 3D framework to listen for the multi-level cache and respond accordingly.
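The custom-event dispatch above can be sketched with the standard `EventTarget` API. Since Node has no browser `window`, a plain `EventTarget` stands in for it here; in a browser this would be `window.dispatchEvent(new CustomEvent("cache-updated", { detail: url }))`. The event name and class are assumptions.

```typescript
// Hypothetical custom event carrying the URL of the updated modifications.
// Event and EventTarget are global in Node 15+ and in all browsers.
class CacheUpdatedEvent extends Event {
  constructor(public readonly url: string) {
    super("cache-updated");
  }
}

const windowLike = new EventTarget(); // stand-in for the window object

// Dispatch the event so any part of the 3D framework listening for
// "cache-updated" can react to the new cache contents.
function notifyCacheUpdated(url: string): void {
  windowLike.dispatchEvent(new CacheUpdatedEvent(url));
}
```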

29. The method as claimed in claim 16 further comprises:
receiving, by a processor (112), a request from a user to retrieve a last used 3D model;
parsing, by a processor (112), the received request to identify a specific component of the 3D model to be retrieved;
identifying, by a processor (112), device configuration settings and device type used by the user, wherein the configuration settings comprises at least one of a high performance machine, and a network speed; and
retrieving, by a processor (112), the specific component of the 3D model from the at least one server based on the identified device configuration setting and the device type used by the user.
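The device-aware retrieval steps above amount to choosing a model variant from the device configuration. The quality tiers and the numeric thresholds below are illustrative assumptions; the claims only name the inputs.

```typescript
// Hypothetical device configuration matching the claim's examples.
interface DeviceConfig { highPerformance: boolean; networkSpeedMbps: number; }

// Pick a model variant to retrieve from the server based on the
// identified device configuration. Tiers and thresholds are assumed.
function selectVariant(config: DeviceConfig): "high" | "medium" | "low" {
  if (config.highPerformance && config.networkSpeedMbps >= 50) return "high";
  if (config.networkSpeedMbps >= 10) return "medium";
  return "low";
}
```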

30. The method as claimed in claim 16 further comprises:
identifying, by a processor (112), specific portions of the 3D model data that failed to load; and
transmitting, by a processor (112), error events to the computing devices (108) for error detection and resolution, wherein, upon encountering an error, the system (102) provides users (106) with informative messages detailing the issue and potential solutions.

Documents

Application Documents

# Name Date
1 202441071517-STATEMENT OF UNDERTAKING (FORM 3) [21-09-2024(online)].pdf 2024-09-21
2 202441071517-POWER OF AUTHORITY [21-09-2024(online)].pdf 2024-09-21
3 202441071517-FORM FOR SMALL ENTITY(FORM-28) [21-09-2024(online)].pdf 2024-09-21
4 202441071517-FORM FOR SMALL ENTITY [21-09-2024(online)].pdf 2024-09-21
5 202441071517-FORM 1 [21-09-2024(online)].pdf 2024-09-21
6 202441071517-FIGURE OF ABSTRACT [21-09-2024(online)].pdf 2024-09-21
7 202441071517-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [21-09-2024(online)].pdf 2024-09-21
8 202441071517-EVIDENCE FOR REGISTRATION UNDER SSI [21-09-2024(online)].pdf 2024-09-21
9 202441071517-DRAWINGS [21-09-2024(online)].pdf 2024-09-21
10 202441071517-DECLARATION OF INVENTORSHIP (FORM 5) [21-09-2024(online)].pdf 2024-09-21
11 202441071517-COMPLETE SPECIFICATION [21-09-2024(online)].pdf 2024-09-21
12 202441071517-FORM-9 [24-09-2024(online)].pdf 2024-09-24
13 202441071517-MSME CERTIFICATE [25-09-2024(online)].pdf 2024-09-25
14 202441071517-FORM28 [25-09-2024(online)].pdf 2024-09-25
15 202441071517-FORM 18A [25-09-2024(online)].pdf 2024-09-25
16 202441071517-Proof of Right [01-10-2024(online)].pdf 2024-10-01
17 202441071517-FER.pdf 2024-11-18
18 202441071517-OTHERS [27-03-2025(online)].pdf 2025-03-27
19 202441071517-FER_SER_REPLY [27-03-2025(online)].pdf 2025-03-27
20 202441071517-COMPLETE SPECIFICATION [27-03-2025(online)].pdf 2025-03-27
21 202441071517-CLAIMS [27-03-2025(online)].pdf 2025-03-27
22 202441071517-US(14)-HearingNotice-(HearingDate-03-06-2025).pdf 2025-05-13
23 202441071517-Correspondence to notify the Controller [27-05-2025(online)].pdf 2025-05-27
24 202441071517-Written submissions and relevant documents [17-06-2025(online)].pdf 2025-06-17
25 202441071517-PatentCertificate24-06-2025.pdf 2025-06-24
26 202441071517-IntimationOfGrant24-06-2025.pdf 2025-06-24

Search Strategy

1 SearchHistoryE_17-10-2024.pdf
