ABSTRACT
AMBIENT INTELLIGENCE MIDDLEWARE ENVIRONMENT OPERATING SYSTEM AND METHOD FOR MANAGING AND OPTIMIZING AMBIENT ENVIRONMENTS
An AmI middleware environment operating system (102) and method (900) for managing and optimizing environments (302) to provide optimized experience to end users, is disclosed. The AmI middleware environment operating method (900) comprises: receiving (902) data associated with environments (302) using sensors (702); generating (904) digital twins (802) comprising synthetic representation of environments (302) based on information from environments (302), sensors (702), and actuators (704); storing (906) data and digital twins (802), associated with environments (302), in environment databases; processing (908) data from environment databases to determine optimal actions within environments (302) using an ambient machine with AI models, wherein optimal actions are determined upon analyzing states of environments (302); and providing (910) commands to actuators (704) to execute optimal actions, wherein optimal actions comprise controlling machinery, adjusting conditions, displaying information on screens, triggering alerts, recommending suggestions to stakeholders to validate, modify and approve required actions, and updates, in environments (302).
FIG. 4
EARLIEST PRIORITY DATE:
[0001]This Application claims priority from a Provisional patent application filed in India having Patent Application No. 202421019100, filed on 15th March 2024 and titled “AMBIENT INTELLIGENCE MIDDLEWARE SYSTEM FOR PERCEIVING AMBIENT ENVIRONMENTS BASED ON USER CONFIGURATION AND METHOD THEREOF”.
FIELD OF INVENTION
[0002]Embodiments of the present invention relate to ambient intelligence-based systems and more particularly to an ambient intelligence middleware environment operating system and method for managing and optimizing one or more environments (sensing, estimating, and updating ambient environments) to provide optimized experience to one or more end users, based on user profiles and preferences.
BACKGROUND
[0003]In today's industries, the demand for intelligent systems that can efficiently manage complex environments is increasing. Many existing solutions struggle to incorporate real-time physical data into digital simulations, limiting their ability to adapt and optimize autonomously. Current approaches are often specialized for specific tasks or require extensive human intervention to function effectively. There is a critical need for an integrated system that combines sensing, simulation, decision-making, and actuation to seamlessly connect the physical and digital worlds.
[0004]As industries worldwide embrace the Fourth Industrial Revolution (Industry 4.0), the demand for intelligent and interconnected systems continues to grow. This shift highlights the importance of seamlessly integrating physical systems with digital technologies, often referred to as “phygital” environments. However, many existing industrial systems struggle to close the gap between these two layers. While advancements such as standalone smart devices and partial automation have led to incremental progress, the overall industrial ecosystem remains fragmented. Without comprehensive integration, achieving the next generation of truly smart environments remains a significant challenge.
[0005]Conventional systems function in isolation, lacking the flexibility to swiftly adapt to evolving conditions in real time. They are predominantly reactive rather than predictive and operate as independent units with limited interaction across different layers including physical machinery, digital information systems, and human operators. This fragmented approach leads to inefficiencies, slower decision-making, and an inability to fully leverage data-driven insights. Notably, significant gaps persist between the following key industrial components:
[0006]With respect to phygital systems integration, modern systems face significant challenges in seamlessly blending the physical and digital environments. Physical components, including machinery, sensors, and products, often operate independently from digital elements like data analytics, machine learning models, and user interfaces. This lack of cohesion disrupts real-time responsiveness, leading to inefficiencies in workflow management, product customization, and process optimization. Without true synchronization between physical assets and the digital control layer, there is a lag in transforming real-world data into actionable insights, ultimately limiting operational efficiency and adaptability.
[0007]With respect to information and physical layer disparities, the fragmentation of current technologies widens the gap between the information layer, where data is processed and analyzed, and the physical layer, which governs production, logistics, and operations. In many existing systems, the physical layer remains largely static, while the information layer struggles to convert complex data into meaningful, real-time adjustments. As a result, industries fail to fully capitalize on the vast amounts of sensor-generated data, limiting their ability to make instantaneous, data-driven decisions that optimize physical processes.
[0008]With respect to manufacturer-to-buyer-to-supplier collaboration: One of the significant challenges in today’s industrial landscape is the lack of seamless integration among key stakeholders, including manufacturers, suppliers, and buyers, within a dynamic, real-time ecosystem. The supply chain remains disjointed, plagued by information silos that hinder transparent communication and adaptive workflows. For instance, suppliers may lack visibility into real-time production requirements, while manufacturers struggle to adjust operations in response to shifting market demands. This disconnect leads to delays in delivering customized products and reduces the industry's ability to swiftly respond to supply chain disruptions.
[0009]With respect to lack of predictive and adaptive capabilities: Current industrial systems depend largely on predefined rules and manual inputs for decision-making, restricting their ability to adapt dynamically. These systems lack predictive capabilities, making it difficult to foresee operational challenges, anticipate user needs, or optimize processes in real time. As a result, industries miss the opportunity to leverage advanced technologies like artificial intelligence (AI) and machine learning (ML) to enhance operational efficiency and deliver personalized user experiences. Without intelligent automation, businesses struggle to remain agile and competitive in an increasingly data-driven world.
[0010]Due to these limitations, industries are unable to fully harness the potential of Industry 4.0, where machines, systems, and human users should seamlessly collaborate to form intelligent, self-regulating ecosystems. The existing gaps are not just technological but also structural, highlighting the urgent need for a comprehensive and intelligent operating system. Such a system must unify disparate layers, eliminate inefficiencies, and create a fully integrated industrial ecosystem capable of real-time adaptation, automation, and optimization.
[0011]Modern technological advancements demand systems that can dynamically adapt to changing user behaviors and complex environments. However, many existing solutions are isolated or designed for specific tasks, lacking the intelligence required for real-time responsiveness. This limitation leads to inefficiencies, reducing user satisfaction and hindering operational scalability. To fully leverage technological progress, industries must adopt intelligent, integrated systems capable of continuous learning and adaptation.
[0012]Therefore, there is a need for an improved ambient intelligence middleware environment operating system and method for managing and optimizing one or more environments to provide optimized experience to one or more end users, based on user profiles and preferences, to address the aforementioned issues. There is also a need to optimize interoperability, ensure context awareness, address privacy and security concerns, and facilitate the availability of domain-specific data for practical applications.
SUMMARY
[0013]This summary is provided to introduce a selection of concepts, in a simple manner, which is further described in the detailed description of the disclosure. This summary is neither intended to identify key or essential inventive concepts of the subject matter nor to determine the scope of the disclosure.
[0014]In order to overcome the above deficiencies of the prior art, the present disclosure solves the technical problem by providing an ambient intelligence (AmI) middleware system and method for sensing, estimating, and updating ambient environments based on user profiles and preferences.
[0015]In accordance with an embodiment of the present disclosure, an ambient intelligence middleware environment operating method for managing and optimizing one or more environments to provide optimized experience to one or more end users, is provided. The ambient intelligence middleware environment operating method comprises receiving, by one or more hardware processors, data associated with the one or more environments using one or more sensors. The data associated with the one or more environments comprise at least one of: spatial configuration data corresponding to location and layouts of one or more objects in the one or more environments, temporal data corresponding to one or more events that occurred in the one or more environments and timings during which the one or more events occurred, in the one or more environments, metadata corresponding to contextual information about the one or more environments, and visual data corresponding to one or more visual contents of the one or more objects in the one or more environments.
[0016]The ambient intelligence middleware environment operating method further comprises generating, by the one or more hardware processors, digital twins comprising synthetic representation of the one or more environments based on information from at least one of: the one or more environments, the one or more sensors, and one or more actuators. The ambient intelligence middleware environment operating method further comprises storing, by the one or more hardware processors, at least one of: the data and the digital twins, associated with the one or more environments in one or more environment databases.
[0017]The ambient intelligence middleware environment operating method further comprises processing, by the one or more hardware processors, the data from the one or more environment databases, to determine one or more optimal actions within the one or more environments using an ambient machine with one or more artificial intelligence (AI) models. The one or more optimal actions are determined upon analyzing one or more states of the one or more environments. The ambient intelligence middleware environment operating method further comprises providing, by the one or more hardware processors, one or more commands to the one or more actuators to execute the one or more optimal actions. The one or more optimal actions comprise at least one of: controlling one or more machinery, adjusting conditions, displaying information on one or more screens, triggering one or more alerts, recommending one or more suggestions to one or more stakeholders to validate, modify and approve one or more required actions, and one or more updates, in the one or more environments.
[0018]In an embodiment, processing the data from the one or more environment databases to determine the one or more optimal actions within the one or more environments using the ambient machine with the one or more AI models, comprises: (a) obtaining, by the one or more hardware processors, the data associated with the one or more environments, from the one or more environment databases; (b) converting, by the one or more hardware processors, the data associated with the one or more environments, into one or more actionable representations; (c) synchronizing, by the one or more hardware processors, one or more data streams corresponding to the data for unified analysis; and (d) performing, by the one or more hardware processors, one or more processes comprising at least one of: extracting one or more insights, detecting one or more anomalies, identifying one or more patterns, and predicting future states of the one or more environments, to determine the one or more optimal actions within the one or more environments, based on the unified analysis of the data, using the ambient machine with the one or more AI models.
[0019]In another embodiment, the ambient intelligence middleware environment operating method further comprises training, by the one or more hardware processors, the one or more AI models to optimize performance of the ambient machine, by at least one of: (a) utilizing, by the one or more hardware processors, the one or more data streams received from the one or more sensors to continuously update the one or more AI models and adapt the one or more AI models to one or more changes that occurred in the one or more environments using online and incremental learning techniques; (b) utilizing, by the one or more hardware processors, the digital twins to generate synthetic data by simulating one or more scenarios and interventions, in the one or more environments, for training the one or more AI models; (c) combining, by the one or more hardware processors, multi-modal transformer models comprising at least one of: a vision transformer (ViT) model, a vision language model (VLM), and a small language model (SLM) to perform at least one of: one or more granular tasks, contextual analysis, and translating the one or more insights into the one or more optimal actions, in the one or more environments; (d) learning, by the one or more hardware processors, one or more domain specific tasks from one or more trainer AI models to perform the one or more optimal actions in the one or more environments, wherein the one or more trainer AI models are configured to automatically train one or more trainee AI models with one or more knowledge domains, for prioritizing object detection and tracking for inventory management; and (e) analyzing, by the one or more hardware processors, one or more feedback obtained from the one or more stakeholders for adapting the one or more AI models to update at least one of: objectives, constraints, and decision-making criteria, of the ambient machine.
[0020]In yet another embodiment, the ambient intelligence middleware environment operating method further comprises providing, by the one or more hardware processors, one or more human computer interfaces (HCI) for the one or more stakeholders to interact with an ambient intelligence middleware environment operating system for one or more processes, wherein the one or more human computer interfaces are configured based on one or more roles of the one or more stakeholders.
[0021]In yet another embodiment, the ambient intelligence middleware environment operating method further comprises at least one of: (a) updating, by the one or more hardware processors, the one or more environment databases based on the data continuously received from the one or more sensors, which adapts the digital twins to reflect current states of the one or more environments; and (b) updating, by the one or more hardware processors, the one or more environment databases with the one or more changes that occurred in the one or more environments.
[0022]In yet another embodiment, the ambient intelligence middleware environment operating method further comprises analyzing, by the one or more hardware processors, the one or more states of the one or more environments, using the one or more sensors, wherein analyzing the one or more states of the one or more environments, comprises: (a) analyzing, by the one or more hardware processors, one or more user profiles within the one or more environments using the one or more sensors and signal-level interoperability; (b) modelling, by the one or more hardware processors, at least one of: noises, redundancies, outliers, and patterns, from the data received from the one or more sensors, using signal preprocessing techniques; (c) applying, by the one or more hardware processors, the one or more AI models on the data to analyze the one or more states of the one or more environments; and (d) providing, by the one or more hardware processors, the one or more commands to the one or more actuators for executing the one or more optimal actions based on the analyzed one or more states of the one or more environments.
[0023]In yet another embodiment, the ambient intelligence middleware environment operating method further comprises at least one of: (a) determining, by the one or more hardware processors, an availability state of the one or more environments, comprising at least one of: shelves, counters, shelf-fullness, on-shelf availability, and out-of-stock items; (b) determining, by the one or more hardware processors, an assortment state of the one or more environments, based on at least one of: color, categories, and planograms; (c) determining, by the one or more hardware processors, an adjacency state of the one or more environments and stock keeping units (SKU), based on one or more factors comprising at least one of: visual merchandising guidelines, campaigning, storyboarding, user engagements, volume of sales, and seasonal, regional, geographical factors; and (d) analyzing, by the one or more hardware processors, the stock keeping units (SKU) to extract one or more attributes comprising at least one of: designs, shapes, volumetric analysis, barcodes, QR codes, texts, logos, International Mobile Equipment Identity (IMEI), serial numbers and tags from at least one of: images, videos, and media contents, using a three dimensional AI and machine vision system utilizing artificial intelligence of things (AIOT) and liquid lens techniques.
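By way of a non-limiting illustration only, the following Python sketch shows one possible way such an availability state may be derived from per-shelf detection counts and planogram capacities; the data structure, field names, and thresholds below are hypothetical and do not form part of the claimed subject matter.

    # Non-limiting, illustrative sketch: deriving shelf availability states
    # from hypothetical per-shelf detection counts and planogram capacities.
    from dataclasses import dataclass

    @dataclass
    class ShelfObservation:
        shelf_id: str
        detected_items: int   # items detected on the shelf by the vision models
        capacity: int         # planogram capacity for the shelf

    def availability_state(obs: ShelfObservation, low_stock_threshold: float = 0.2) -> dict:
        """Return shelf fullness, on-shelf availability, and stock flags."""
        fullness = obs.detected_items / obs.capacity if obs.capacity else 0.0
        return {
            "shelf_id": obs.shelf_id,
            "shelf_fullness": round(fullness, 2),
            "on_shelf_availability": obs.detected_items > 0,
            "low_stock": 0 < fullness < low_stock_threshold,
            "out_of_stock": obs.detected_items == 0,
        }

    # Example: 2 of 40 facings detected -> the shelf is flagged as low on stock.
    print(availability_state(ShelfObservation("A3", detected_items=2, capacity=40)))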
[0024]In yet another embodiment, the ambient intelligence middleware environment operating method further comprises generating, by the one or more hardware processors, a directed acyclic graph (DAG) for configuring at least one of: a plurality of components, a plurality of subsystems, and information flow of the ambient machine utilizing environmental parameters, domain-specific tasks, and the one or more processes to manage inter and intra-subsystem functioning, executing ambient machines workflows, and providing adaptable infrastructure and intelligent support for the one or more stakeholders and the one or more environments.
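As a non-limiting illustration, the following Python sketch shows how a directed acyclic graph of hypothetical subsystem names may be validated and ordered for execution; the node names and dependencies are placeholders, and the actual components, subsystems, and information flow are configured per environment and per domain.

    # Non-limiting, illustrative sketch: configuring a DAG of hypothetical
    # subsystems and deriving a valid execution order for the workflow.
    from graphlib import TopologicalSorter  # Python 3.9+

    # Each node lists the subsystems whose output it depends on (its predecessors).
    ambient_machine_dag = {
        "sense": [],
        "update_digital_twin": ["sense"],
        "store": ["sense", "update_digital_twin"],
        "analyze_state": ["store"],
        "determine_actions": ["analyze_state"],
        "actuate": ["determine_actions"],
    }

    # static_order() raises CycleError if the configuration is not acyclic,
    # which guards the workflow definition against invalid (cyclic) wiring.
    execution_order = list(TopologicalSorter(ambient_machine_dag).static_order())
    print(execution_order)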
[0025]In yet another embodiment, the ambient intelligence middleware environment operating method further comprises performing, by the one or more hardware processors, online user assessments and recognition using one or more biometric signatures comprising at least one of: face, fingerprint, voice, age, gender, gait, and visual cues comprising upper and lower body apparel color.
[0026]In yet another embodiment, the ambient intelligence middleware environment operating method further comprises determining, by the one or more hardware processors, one or more preferences of the one or more end users, in the one or more environments, by: (a) detecting, by the one or more hardware processors, the one or more end users in the one or more environments through the one or more biometric signatures; (b) generating, by the one or more hardware processors, one or more tokenized identifiers to the one or more end users based on the detection of the one or more end users with the corresponding one or more biometric signatures; (c) tracking, by the one or more hardware processors, the one or more end users to generate a user journey in the one or more environments; (d) determining, by the one or more hardware processors, a positional probability of the one or more end users at one or more locations within the one or more environments by incorporating multicamera tracking and multi-modal asynchronous fusion techniques; (e) determining, by the one or more hardware processors, user engagement with the one or more environments based on the positional probability of the one or more end users at the one or more locations within the one or more environments; (f) extracting, by the one or more hardware processors, one or more user engagement matrices comprising at least one of: dwell time and the one or more events associated with one or more activities performed by the one or more end users, using the computer vision and machine learning models.
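For illustration only, the following non-limiting Python sketch shows one possible way a tokenized identifier may be derived from a biometric signature and dwell time accumulated per zone along a user journey; the hashing scheme, salt, and record format are hypothetical.

    # Non-limiting, illustrative sketch: a privacy-preserving tokenized
    # identifier and per-zone dwell times computed from timestamped sightings.
    import hashlib
    from collections import defaultdict

    def tokenized_identifier(biometric_signature: bytes, salt: bytes = b"env-001") -> str:
        """One-way token derived from a biometric signature; the raw signature
        itself is never stored as part of the user journey."""
        return hashlib.sha256(salt + biometric_signature).hexdigest()[:16]

    def dwell_times(sightings):
        """sightings: list of (token, zone, timestamp_seconds), ordered in time."""
        dwell = defaultdict(float)
        last = {}
        for token, zone, ts in sightings:
            if token in last:
                prev_zone, prev_ts = last[token]
                dwell[(token, prev_zone)] += ts - prev_ts
            last[token] = (zone, ts)
        return dict(dwell)

    token = tokenized_identifier(b"face-embedding-bytes")
    journey = [(token, "entrance", 0.0), (token, "aisle-3", 30.0), (token, "checkout", 210.0)]
    print(dwell_times(journey))   # e.g. 30 s at the entrance, 180 s in aisle-3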
[0027]In yet another embodiment, the ambient intelligence middleware environment operating method further comprises determining, by the one or more hardware processors, at least one of: one or more differences and one or more anomalies to provide the one or more alerts to the one or more stakeholders by matching a current state of the one or more environments with respect to a previous state of the one or more environments, with one or more virtual machine based set rules.
[0028]In yet another embodiment, the ambient intelligence middleware environment operating method further comprises at least one of: (a) providing, by the one or more hardware processors, the one or more alerts to the one or more stakeholders upon determining the one or more anomalies in at least one of: live shelf status, product availability, and out-of-stock items, by matching the data with the one or more virtual machine based set rules; (b) providing, by the one or more hardware processors, the one or more alerts to the one or more stakeholders upon determining the one or more anomalies in at least one of: arrangement of products on the shelves based on color, category, planograms, pricing, promotions, offers, and visual merchandising guidelines, by matching the data with the one or more virtual machine based set rules; and (c) providing, by the one or more hardware processors, the one or more alerts to the one or more stakeholders upon determining the one or more anomalies in a placement of the products on the shelves based on one or more business rules, by matching the data with the one or more virtual machine based set rules.
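By way of non-limiting example, the following Python sketch shows one possible way a current environment state may be matched against a previous state and a set of rules to raise alerts; the rule names, thresholds, and state fields are hypothetical.

    # Non-limiting, illustrative sketch: matching the current environment state
    # against simple set rules and the previous state to raise alerts.
    previous_state = {"A3": {"fullness": 0.60, "planogram_ok": True}}
    current_state = {"A3": {"fullness": 0.05, "planogram_ok": False}}

    rules = [
        ("out_of_stock_risk", lambda prev, cur: cur["fullness"] < 0.10),
        ("sudden_depletion", lambda prev, cur: prev["fullness"] - cur["fullness"] > 0.40),
        ("planogram_breach", lambda prev, cur: not cur["planogram_ok"]),
    ]

    def evaluate_alerts(previous, current):
        alerts = []
        for shelf_id, cur in current.items():
            prev = previous.get(shelf_id, cur)
            for name, rule in rules:
                if rule(prev, cur):
                    alerts.append({"shelf": shelf_id, "rule": name})
        return alerts

    # In this toy state, all three rules fire for shelf A3.
    print(evaluate_alerts(previous_state, current_state))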
[0029]In yet another embodiment, the ambient intelligence middleware environment operating method further comprises determining, by the one or more hardware processors, user profile deltas to update the one or more user profiles for providing the optimized experience to the one or more end users, wherein updating the one or more user profiles comprises at least one of: (a) updating, by the one or more hardware processors, the one or more tokenized identifiers of the one or more end users when the one or more end users are recognized in the one or more environments with optimized confidence and variability; (b) updating, by the one or more hardware processors, journey of the one or more end users, in the one or more environments, with heatmaps, dwell-time, and flow-maps, in the user master; (c) updating, by the one or more hardware processors, the user engagement matrices comprising at least one of: the dwell time and the one or more events associated with one or more activities performed by the one or more end users in the one or more environments; (d) updating, by the one or more hardware processors, assortment of the shelves, counters and products based on at least one of: user engagement, visibility, and searchability, in the one or more environments; (e) updating, by the one or more hardware processors, adjacency of the brands and shelves, and arrangement and placement of the products for providing the optimized experience to the one or more end users; (f) updating, by the one or more hardware processors, at least one of: promotions and discounts, suitable for the products to optimize user attraction; and (g) updating, by the one or more hardware processors, at least one of: offers, cashback, and loyalty programs for the one or more end users.
[0030]In an aspect, an ambient intelligence middleware environment operating system for managing and optimizing one or more environments to provide optimized experience to one or more end users, is disclosed. The ambient intelligence middleware environment operating system comprises one or more hardware processors, and a memory. The memory is coupled to the one or more hardware processors. The memory comprises a plurality of subsystems in the form of programmable instructions executable by the one or more hardware processors. The plurality of subsystems comprises a data receiving subsystem configured to receive data associated with the one or more environments using one or more sensors.
[0031]The data associated with the one or more environments comprise at least one of: spatial configuration data corresponding to location and layouts of one or more objects in the one or more environments, temporal data corresponding to one or more events that occurred in the one or more environments and timings during which the one or more events occurred, in the one or more environments, metadata corresponding to contextual information about the one or more environments, and visual data corresponding to one or more visual contents of the one or more objects in the one or more environments.
[0032]The plurality of subsystems further comprises a digital twin generating subsystem configured to generate digital twins comprising synthetic representation of the one or more environments based on information from at least one of: the one or more environments, the one or more sensors, and one or more actuators. The plurality of subsystems further comprises a storage subsystem configured to store at least one of: the data and the digital twins, associated with the one or more environments in one or more environment databases.
[0033]The plurality of subsystems further comprises an action determining subsystem configured to process the data from the one or more environment databases, to determine one or more optimal actions within the one or more environments using an ambient machine with one or more artificial intelligence (AI) models, wherein the one or more optimal actions are determined upon analyzing one or more states of the one or more environments. The plurality of subsystems further comprises an action executing subsystem configured to provide one or more commands to the one or more actuators to execute the one or more optimal actions, wherein the one or more optimal actions comprise at least one of: controlling one or more machinery, adjusting conditions, displaying information on one or more screens, triggering one or more alerts, recommending one or more suggestions to one or more stakeholders to validate, modify and approve one or more required actions, and one or more updates, in the one or more environments.
[0034]To further clarify the advantages and features of the present invention, a more particular description of the invention will follow by reference to specific embodiments thereof, which are illustrated in the appended figures. It is appreciated that these figures depict only typical embodiments of the invention and are therefore not to be considered limiting in scope. The invention will be described and explained with additional specificity and detail with the appended figures.
BRIEF DESCRIPTION OF THE DRAWINGS
[0035]The disclosure will be described and explained with additional specificity and detail with the accompanying figures in which:
[0036]FIG. 1 illustrates an exemplary block diagram representation of a network architecture of an ambient intelligence (AmI) middleware environment operating system for managing and optimizing one or more environments to provide optimized experience to one or more end users, based on user configuration, in accordance with an embodiment of the present disclosure;
[0037]FIG. 2 illustrates a detailed view of the ambient intelligence (AmI) middleware environment operating system for managing and optimizing the one or more environments to provide the optimized experience to the one or more end users, based on the user configuration, such as those shown in FIG. 1, in accordance with an embodiment of the present disclosure;
[0038]FIG. 3 illustrates an overview of the AmI middleware environment operating system, in accordance with an embodiment of the present disclosure;
[0039]FIG. 4 illustrates an architecture of the AmI middleware environment operating system, in accordance with an embodiment of the present disclosure;
[0040]FIG. 5 illustrates an exemplary visual representation depicting the AmI middleware environment operating system with ambient machine showing a learning process from the one or more environments, in accordance with an embodiment of the present disclosure;
[0041]FIG. 6 illustrates an exemplary visual representation depicting a prompt engineering and agentic AI system, in accordance with an embodiment of the present disclosure;
[0042]FIG. 7 illustrates an exemplary visual representation depicting the ambient environment with one or more sensors and one or more actuators, in accordance with an embodiment of the present disclosure;
[0043]FIG. 8 illustrates an exemplary visual representation depicting a digital twin that is associated with the ambient environment, in accordance with an embodiment of the present disclosure; and
[0044]FIG. 9 illustrates a flow chart depicting an ambient intelligence (AmI) middleware environment operating method for managing and optimizing the one or more environments to provide the optimized experience to the one or more end users, based on the user configuration, in accordance with an embodiment of the present disclosure.
[0045]Further, those skilled in the art will appreciate that elements in the figures are illustrated for simplicity and may not have necessarily been drawn to scale. Furthermore, the method steps, chemical compounds, equipment, and parameters used herein may have been represented in the figures by conventional symbols, and the figures may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the figures with details that will be readily apparent to those skilled in the art having the benefit of the description herein.
DETAILED DESCRIPTION OF THE PRESENT INVENTION
[0046]For the purpose of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiments illustrated in the figures, and specific language will be used to describe them. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended. Such alterations and further modifications in the illustrated system, and such further applications of the principles of the disclosure as would normally occur to those skilled in the art, are to be construed as being within the scope of the present disclosure.
[0047]The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such a process or method. Similarly, one or more components, compounds, and ingredients preceded by "comprises... a" does not, without more constraints, preclude the existence of other components or compounds or ingredients or additional components. Appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but not necessarily do, all refer to the same embodiment.
[0048]Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this disclosure belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.
[0049]In the following specification and the claims, reference will be made to a number of terms, which shall be defined to have the following meanings. The singular forms “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise.
[0050]Embodiments of the present invention relate to an ambient intelligence (AmI) middleware system for sensing, estimating, and updating ambient environments based on user profiles and preferences.
[0051]FIG. 1 illustrates an exemplary block diagram representation of a network architecture 100 of an ambient intelligence (AmI) middleware environment operating system 102 for managing and optimizing one or more environments to provide optimized experience to one or more end users, based on user configuration, in accordance with an embodiment of the present disclosure.
[0052]According to an exemplary embodiment of the present disclosure, FIG. 1 depicts that the network architecture 100 may include the AmI middleware environment operating system 102, a database 104, and one or more communication devices 106. The AmI middleware environment operating system 102 comprises one or more hardware processors 110, a memory 112, and a plurality of subsystems 114. The AmI middleware environment operating system 102 may be communicatively coupled to the database 104 via a communication network 108. The communication network 108 may be a wired communication network and/or a wireless communication network. The database 104 may store and manage, but is not limited to, event configuration files, extracted ambient environment state information, user profile information, key user engagement data, product profiles, and meta-data. The database 104 may be any kind of database such as, but not limited to, relational databases, non-relational databases, document databases, dedicated databases, dynamic databases, monetised databases, scalable databases, cloud databases, distributed databases, any other databases, and a combination thereof. The database 104 is configured to support the functionality of the AmI middleware environment operating system 102 and enables efficient data retrieval and storage for various aspects associated with the user configurations.
[0053]The one or more communication devices 106 may be digital devices, computing devices and/or networks. The one or more communication devices 106 may include, but not limited to, a mobile device, a smartphone, a Personal Digital Assistant (PDA), a tablet computer, a phablet computer, a wearable computing device, a laptop, a desktop, and the like. Further, an Application Programming Interface (API) layer is configured with the one or more communication devices 106. The API layer interacts with a data logging layer, acting as a data lake, and with an application layer as the one or more end users make requests by API calls. Whenever the one or more end users make a request through the one or more communication devices 106 for any service or data, privacy-protected API requests are sent to the data layer. An application layer is also configured with the one or more communication devices 106. The application layer is configured to manage all the frontend applications including web, Android, iPhone Operating System (iOS) applications, and the like.
[0054]In an exemplary embodiment, the AmI middleware environment operating system 102 is designed to work on low-compute resources, including edge devices, Original Equipment Manufacturers (OEMs), hand-held devices, and, as an instance, on one of local servers and cloud service providers.
[0055]This integrated network architecture 100 facilitates seamless communication and data exchange, enabling the AmI middleware environment operating system 102 to operate cohesively for monitoring the one or more environments (i.e., the one or more ambient environments or one or more physical environments) based on the user configurations. The capability of the AmI middleware environment operating system 102 to sense, estimate, and update the state of the one or more ambient environments is underpinned by the effective collaboration among the AmI middleware environment operating system 102, the database 104, and the one or more communication devices 106 within the communication network 108.
[0056]The AmI middleware environment operating system 102 of the present invention is configured to manage and optimize the one or more environments to provide the optimized experience to the one or more end users. The AmI middleware environment operating system 102 is initially configured to receive data associated with the one or more environments using one or more sensors. In an embodiment, the data associated with the one or more environments may include at least one of: spatial configuration data corresponding to location and layouts of one or more objects in the one or more environments, temporal data corresponding to one or more events that occurred in the one or more environments and timings during which the one or more events occurred, in the one or more environments, metadata corresponding to contextual information about the one or more environments, and visual data corresponding to one or more visual contents of the one or more objects in the one or more environments. In an alternative exemplary embodiment, the AmI middleware environment operating system 102 extends beyond warehouses, including hospitals, schools, homes, smart cities, and the like, which come under the one or more environments.
[0057]The AmI middleware environment operating system 102 is further configured to generate digital twins including synthetic representation of the one or more environments based on information from at least one of: the one or more environments, the one or more sensors, and one or more actuators. The AmI middleware environment operating system 102 is further configured to store at least one of: the data and the digital twins, associated with the one or more environments in one or more environment databases. In an embodiment, the environment database may be the database 104.
[0058]The AmI middleware environment operating system 102 is further configured to process the data from the one or more environment databases, to determine one or more optimal actions within the one or more environments using an ambient machine with one or more artificial intelligence (AI) models. The one or more optimal actions are determined upon analyzing one or more states of the one or more environments. The AmI middleware environment operating system 102 is further configured to provide one or more commands to the one or more actuators to execute the one or more optimal actions. In an embodiment, the one or more optimal actions may include at least one of: controlling one or more machinery, adjusting conditions, displaying information on one or more screens, triggering one or more alerts, recommending one or more suggestions to one or more stakeholders to validate, modify and approve one or more required actions, and one or more updates, in the one or more environments.
[0059]In an embodiment, the one or more end users may include at least one of: one or more customers, one or more visitors to the one or more environments, and the like. In an embodiment, the one or more stakeholders may include at least one of: one or more agents, one or more shopkeepers, one or more managers, and the like, in the one or more environments.
[0060]Further, the AmI middleware environment operating system 102 may be implemented by way of a single device or a combination of multiple devices that may be operatively connected or networked together. The AmI middleware environment operating system 102 may be implemented in hardware or a suitable combination of hardware and software. The AmI middleware environment operating system 102 may be a hardware device including the one or more hardware processors 110 executing machine-readable program instructions for monitoring the ambient environment based on the user configurations. Execution of the machine-readable program instructions by the one or more hardware processors 110 may enable the AmI middleware environment operating system 102 to dynamically recommend course of action sequences for monitoring the ambient environment based on the user configurations. The course of action sequences may involve various steps or decisions taken for data collecting, data interoperability, event monitoring, and protocol analysing. The “hardware” may comprise a combination of discrete components, an integrated circuit, an application-specific integrated circuit, a field-programmable gate array, a digital signal processor, or other suitable hardware. The “software” may comprise one or more objects, agents, threads, lines of code, subroutines, separate software applications, two or more lines of code, or other suitable software structures operating in one or more software applications or on one or more processors.
[0061]The one or more hardware processors 110 may include, for example, microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuits, and/or any devices that manipulate data or signals based on operational instructions. Among other capabilities, the one or more hardware processors 110 may fetch and execute computer-readable instructions in the memory 112 operationally coupled with the AmI middleware environment operating system 102 for performing tasks such as data processing, input/output processing, and/or any other functions. Any reference to a task in the present disclosure may refer to an operation being or that may be performed on data.
[0062]Though few components and subsystems are disclosed in FIG. 1, there may be additional components and subsystems which are not shown, such as, but not limited to, ports, routers, repeaters, firewall devices, network devices, databases, network attached storage devices, servers, assets, machinery, instruments, facility equipment, emergency management devices, image capturing devices, any other devices, and combination thereof. A person skilled in the art should not limit the present disclosure to the components/subsystems shown in FIG. 1.
[0063]Those of ordinary skill in the art will appreciate that the hardware depicted in FIG. 1 may vary for particular implementations. For example, other peripheral devices such as an optical disk drive and the like, local area network (LAN), wide area network (WAN), wireless (e.g., wireless-fidelity (Wi-Fi)) adapter, graphics adapter, disk controller, input/output (I/O) adapter also may be used in addition to or in place of the hardware depicted. The depicted example is provided for explanation only and is not meant to imply architectural limitations concerning the present disclosure.
[0064]Those skilled in the art will recognise that, for simplicity and clarity, the full structure and operation of all data processing systems suitable for use with the present disclosure are not being depicted or described herein. Instead, only so much of the AmI middleware environment operating system 102 as is unique to the present disclosure or necessary for an understanding of the present disclosure is depicted and described. The remainder of the construction and operation of the AmI middleware environment operating system 102 may conform to any of the various current implementations and practices that were known in the art.
[0065]FIG. 2 illustrates a detailed view of the ambient intelligence (AmI) middleware environment operating system 102 for managing and optimizing the one or more environments to provide the optimized experience to the one or more end users, based on the user configuration, such as those shown in FIG. 1, in accordance with an embodiment of the present disclosure. The AmI middleware environment operating system 102 includes a memory 112, one or more hardware processors 110, and a storage unit 204. The memory 112, the one or more hardware processors 110, and the storage unit 204 are communicatively coupled through a system bus 202 or any similar mechanism. The memory 112 includes a plurality of subsystems 114 in the form of programmable instructions executable by the one or more hardware processors 110.
[0066]The plurality of subsystems 114 includes a data receiving subsystem 206, a digital twin generating subsystem 208, a storage subsystem 210, an action determining subsystem 212, an action executing subsystem 214, a training subsystem 216, an interface configuration subsystem 218, a database updating subsystem 220, an environment state analyzing subsystem 222, a graph generating subsystem 224, a user preference determining subsystem 226, an anomaly determining subsystem 228, and an updating subsystem 230.
[0067]The one or more hardware processors 110, as used herein, means any type of computational circuit, including, but not limited to, at least one of: a microprocessor unit, microcontroller, complex instruction set computing microprocessor unit, reduced instruction set computing microprocessor unit, very long instruction word microprocessor unit, explicitly parallel instruction computing microprocessor unit, graphics processing unit, digital signal processing unit, or any other type of processing circuit. The one or more hardware processors 110 may also include embedded controllers, including at least one of: generic or programmable logic devices or arrays, application specific integrated circuits, single-chip computers, and the like.
[0068]The memory 112 may be non-transitory volatile memory and non-volatile memory. The memory 112 may be coupled for communication with the one or more hardware processors 110, being a computer-readable storage medium. The one or more hardware processors 110 may execute machine-readable instructions and/or source code stored in the memory 112. A variety of machine-readable instructions may be stored in and accessed from the memory 112. The memory 112 may include any suitable elements for storing data and machine-readable instructions, including at least one of: read only memory, random access memory, erasable programmable read only memory, electrically erasable programmable read only memory, a hard drive, a removable media drive for handling compact disks, digital video disks, diskettes, magnetic tape cartridges, memory cards, and the like. In the present embodiment, the memory 112 includes the plurality of subsystems 114 stored in the form of machine-readable instructions on any of the above-mentioned storage media and may be in communication with and executed by the one or more hardware processors 110.
[0069]The storage unit 204 may be a cloud storage, a Structured Query Language (SQL) data store, a noSQL database or a location on a file system directly accessible by the plurality of subsystems 114.
[0070]The plurality of subsystems 114 includes the data receiving subsystem 206 that is communicatively connected to the one or more hardware processors 110. The data receiving subsystem 206 is configured to receive the data associated with the one or more environments using the one or more sensors. The data receiving subsystem 206 serves as a sensory input hub, capturing real-time information from the one or more environments. In an embodiment, the data associated with the one or more environments include at least one of: (a) the spatial configuration data corresponding to the location and layouts of the one or more objects in the one or more environments, (b) the temporal data corresponding to one or more events that occurred in the one or more environments and the timings during which the one or more events occurred, in the one or more environments, (c) the metadata corresponding to the contextual information about the one or more environments, and (d) the visual data corresponding to the one or more visual contents of the one or more objects in the one or more environments.
[0071]In an embodiment, the one or more sensors are configured to collect the data (i.e., real-time data) associated with the one or more environments and transmit the data to the one or more environment databases. The one or more sensors may include at least one of: one or more cameras for visual data, one or more motion sensors for movement detection, one or more temperature sensors, one or more specialized sensors tailored to specific applications, one or more surveillance cameras, one or more product cameras, one or more depth sensors, light detection and ranging (LiDAR), warehouse management systems (WMS), and the like. In an embodiment, the AmI middleware environment operating system 102 is configured to consider one or more humans (including end users and stakeholders) as the one or more sensors. The actions and interactions including customer’s purchase, are considered as sensor inputs, transmitting valuable information into the AmI middleware environment operating system 102.
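For illustration only, the following non-limiting Python sketch shows one possible way a single sensor observation may be structured so that the spatial, temporal, metadata, and visual components described above travel together into the one or more environment databases; the field names and values are hypothetical.

    # Non-limiting, illustrative sketch: one record format for a sensor
    # observation combining spatial, temporal, metadata, and visual components.
    from dataclasses import dataclass, field
    from typing import Optional
    import time

    @dataclass
    class EnvironmentObservation:
        sensor_id: str
        # Spatial configuration: where the observed object sits in the layout.
        zone: str
        position_xyz: tuple
        # Temporal data: what happened and when.
        event: str
        timestamp: float = field(default_factory=time.time)
        # Metadata: contextual information about the environment.
        metadata: dict = field(default_factory=dict)
        # Visual data: a reference to the captured frame rather than raw pixels.
        frame_uri: Optional[str] = None

    obs = EnvironmentObservation(
        sensor_id="cam-aisle-3",
        zone="aisle-3",
        position_xyz=(4.2, 1.1, 0.0),
        event="item_picked",
        metadata={"store": "demo", "temperature_c": 21.5},
        frame_uri="s3://frames/cam-aisle-3/000123.jpg",
    )
    print(obs)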
[0072]The plurality of subsystems 114 further includes the digital twin generating subsystem 208 that is communicatively connected to the one or more hardware processors 110. The digital twin generating subsystem 208 is configured to generate the digital twins including the synthetic representation of the one or more environments based on information from at least one of: the one or more environments, the one or more sensors, and the one or more actuators. The digital twins of the one or more environments are generated using Generative AI, 3D synthetics, and computer vision for rendering millions of photo-realistic, domain-specific and task auto-annotated images and videos for AI training without the (or with a minimal) need for real-world data.
[0073]The digital twins serve as a virtual replica of a real environment, including both the database and physical spaces. The digital twin is represented within the one or more environment databases and mirrors the real environment's data and structure while enabling manipulations and simulations that are impractical in the real world. The one or more environment databases stores digital representations of all elements within the one or more environments, ensuring an accurate virtual mirror of real-world data. The digital twin with the 3D models and schemas incorporates 3D models of physical spaces, such as store layouts and warehouse configurations, along with defined data schemas for structured and comprehensive virtual representation.
[0074]The digital twin is configured to facilitate risk-free scenario testing and planning by allowing the one or more stakeholders and the ambient machine to simulate the one or more actions and strategies. The digital twin is used to generate synthetic data (i.e., synthetic-simulated data) for training the one or more AI models, enabling the ambient machine to simulate one or more scenarios and refine ML model’s ability to optimize the real environment. The digital twin may act as a continuous bridge between real and virtual environments, ensuring synchronization and enabling informed decision-making. The digital twin evolves in real time, with the data (i.e., sensor data) feeding into the one or more environment databases to maintain synchronization with the physical environment. The digital twin is configured to allow selective representation of specific areas or aspects relevant to tasks, optimizing computational efficiency. The digital twin is used to simulate shelf reorganization and to assess impacts on workflow efficiency before physical implementation. The digital twin is further used to test product placement strategies to evaluate effects on customer behavior and sales using the synthetic data. In an embodiment, the digital twin is an indispensable tool within the AmI middleware environment operating system 102, empowering the one or more stakeholders and the ambient machine to simulate, plan, learn, and optimize the real environment more effectively.
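As a non-limiting illustration, the following Python sketch shows one possible way a simplified digital twin may be used to simulate a shelf reorganization and emit synthetic observations for model training; the twin here is reduced to a dictionary, whereas the digital twins described above hold 3D models and schemas, and all identifiers are hypothetical.

    # Non-limiting, illustrative sketch: simulating a shelf reorganization on a
    # toy digital-twin state and emitting noisy synthetic "detections".
    import random

    digital_twin = {
        "shelves": {"A3": ["sku-1", "sku-2"], "B1": ["sku-3"]},
    }

    def simulate_reorganization(twin, moves, noise=0.05, seed=0):
        """Apply hypothetical (sku, from_shelf, to_shelf) moves and emit
        synthetic observations that an AI model could be trained against."""
        rng = random.Random(seed)
        shelves = {k: list(v) for k, v in twin["shelves"].items()}
        for sku, src, dst in moves:
            if sku in shelves.get(src, []):
                shelves[src].remove(sku)
                shelves[dst].append(sku)
        synthetic = []
        for shelf, skus in shelves.items():
            for sku in skus:
                detected = rng.random() > noise   # simulate imperfect sensing
                synthetic.append({"shelf": shelf, "sku": sku, "detected": detected})
        return synthetic

    # Trial move of sku-2 from shelf A3 to shelf B1, evaluated before any
    # physical implementation in the real environment.
    print(simulate_reorganization(digital_twin, [("sku-2", "A3", "B1")]))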
[0075]The plurality of subsystems 114 further includes the storage subsystem 210 that is communicatively connected to the one or more hardware processors 110. The storage subsystem 210 is configured to allow the one or more environment databases to store at least one of: the data and the digital twins, associated with the one or more environments. The one or more environment databases are configured to store a comprehensive representation of the one or more environments, encompassing both real-time data and the synthetic-simulated data that allows for simulation and planning.
[0076]The one or more environment databases are configured to store at least one of: data associated with static components (i.e., layouts, walls, floors, shelves, and entry/exit points that provide the foundational structure), semi-dynamic components (i.e., products and stock keeping units (SKUs) across categories like fashion, beauty, and grocery, which exhibit periodic updates), dynamic components (i.e., customers, workers, and guards whose behaviors and movements are highly variable and require continuous tracking), spatio-temporal data (i.e., real-time data streams from the one or more sensors providing positional and temporal information), and contextual metadata (i.e., supplemental data describing device statuses, operational parameters, and environmental annotations). In an embodiment, the one or more environment databases are constantly updated with new data from the one or more sensors and feedback from actions performed in the one or more environments. This dynamic updating ensures that the one or more environment databases always reflects the most current state of the real world. The data stored in the one or more environment databases provide the foundation for the ambient machine's learning process. By analyzing both the real and synthetic data, the ambient machine may improve its understanding of the one or more environments and optimize its actions over time.
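By way of non-limiting example, the following Python sketch shows one possible way an environment database may keep static, semi-dynamic, and dynamic components separate while being refreshed as new sensor records arrive; the keys and the update policy are hypothetical.

    # Non-limiting, illustrative sketch: a minimal in-memory "environment
    # database" with separate component classes and continuous ingestion.
    environment_db = {
        "static": {"layout": "store-floorplan-v2", "shelves": ["A3", "B1"]},
        "semi_dynamic": {"skus": {"sku-1": {"category": "grocery", "shelf": "A3"}}},
        "dynamic": {"people": {}},        # tokenized ids -> latest position
        "spatio_temporal": [],            # append-only sensor stream
        "metadata": {"last_update": None},
    }

    def ingest(db, record):
        """Append the raw record and fold it into the dynamic component view."""
        db["spatio_temporal"].append(record)
        if record.get("kind") == "person":
            db["dynamic"]["people"][record["token"]] = record["position"]
        db["metadata"]["last_update"] = record["timestamp"]

    ingest(environment_db,
           {"kind": "person", "token": "a1b2", "position": (4.2, 1.1), "timestamp": 1710.0})
    print(environment_db["dynamic"]["people"])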
[0077]The plurality of subsystems 114 further includes the action determining subsystem 212 that is communicatively connected to the one or more hardware processors 110. The action determining subsystem 212 is configured to process the data from the one or more environment databases, to determine the one or more optimal actions within the one or more environments using the ambient machine with the one or more artificial intelligence (AI) models. The one or more optimal actions are determined upon analyzing the one or more states of the one or more environments. The ambient machine is the core intelligence and operational hub of the AmI middleware environment operating system 102. The ambient machine serves as the brain that analyzes data, makes decisions, and orchestrates the one or more actions within the one or more environments. By integrating with the one or more environment databases, the ambient machine facilitates real-time data processing and command execution through a structured workflow.
[0078]The ambient machine with the one or more AI models is configured to process and analyze the sensor data to determine the one or more states of the one or more environments. The ambient machine with the one or more AI models is further configured to determine the one or more actions based on the one or more states of the one or more environments by processing the sensor data. For processing the sensor data, the action determining subsystem 212 is configured to obtain the data (i.e., the sensor data) associated with the one or more environments, from the one or more environment databases. The action determining subsystem 212 is further configured to convert the data associated with the one or more environments into one or more actionable representations. The action determining subsystem 212 is further configured to synchronize one or more data streams corresponding to the data for unified analysis. The action determining subsystem 212 is further configured to perform the one or more processes comprising at least one of: extracting one or more insights, detecting one or more anomalies, identifying one or more patterns, and predicting future states of the one or more environments, to determine the one or more optimal actions within the one or more environments, based on the unified analysis of the data, using the ambient machine with the one or more AI models. In an embodiment, the one or more actions may include at least one of: controlling one or more machinery, adjusting conditions, displaying information on one or more screens, triggering one or more alerts, recommending one or more suggestions to one or more stakeholders to validate, modify and approve one or more required actions, and one or more updates, in the one or more environments.
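For illustration only, the following non-limiting Python sketch shows the flow described above, namely converting records into an actionable representation, synchronizing data streams on a shared clock, and analyzing the result to propose actions; the obtaining step is represented here by an in-memory list standing in for the one or more environment databases, and all names and thresholds are hypothetical.

    # Non-limiting, illustrative sketch: convert -> synchronize -> analyze.
    def convert(records):
        """Actionable representation: per-shelf item counts."""
        counts = {}
        for r in records:
            if r.get("kind") == "item":
                counts[r["shelf"]] = counts.get(r["shelf"], 0) + 1
        return counts

    def synchronize(streams, window_seconds=5.0):
        """Bucket every stream's records into shared time windows for unified analysis."""
        buckets = {}
        for stream in streams:
            for r in stream:
                buckets.setdefault(int(r["timestamp"] // window_seconds), []).append(r)
        return buckets

    def analyze(counts, restock_below=3):
        """Propose a restocking action for shelves whose count falls below a threshold."""
        return [{"action": "restock", "shelf": s} for s, c in counts.items() if c < restock_below]

    camera_stream = [{"kind": "item", "shelf": "A3", "timestamp": 1.0},
                     {"kind": "item", "shelf": "B1", "timestamp": 2.0},
                     {"kind": "item", "shelf": "B1", "timestamp": 3.0}]
    unified = synchronize([camera_stream])        # one shared window in this toy data
    for _, records in unified.items():
        print(analyze(convert(records)))          # proposes restocking A3 and B1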
[0079]The plurality of subsystems 114 further includes the training subsystem 216 that is communicatively connected to the one or more hardware processors 110. The training subsystem 216 is configured to train the one or more AI models to optimize performance of the ambient machine. The ambient machine with the one or more AI models is continuously trained by utilizing both real-time data from the one or more environments and the synthetic data generated from the digital twin within the one or more environment databases. The learning process aims to improve the performance of the ambient machine and its ability to manage the one or more environments effectively. The AI models and workflows are refined by analyzing feedback and adapting to evolving conditions.
[0080]For training the ambient machine with the one or more AI models, the ambient machine initially processes spatio-temporal data streams from a distributed network of the one or more sensors. The data include spatial configurations, temporal changes, and contextual metadata, forming a real-time dynamic representation of the one or more environments. The ambient machine further utilizes online and incremental learning techniques to continuously update its AI models and adapt to real-time changes, ensuring responsiveness and relevance.
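A minimal, non-limiting sketch of the online and incremental learning idea, assuming a scikit-learn style partial_fit interface, is shown below; the feature layout and labels are illustrative only and are not part of the disclosed training procedure.

```python
# Hedged sketch of online/incremental learning: the model is updated
# batch-by-batch as new spatio-temporal observations arrive, rather than
# retrained from scratch.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])  # e.g., 0 = normal state, 1 = anomalous state

def on_new_batch(features: np.ndarray, labels: np.ndarray) -> None:
    """Incorporate the latest sensor-derived features without full retraining."""
    model.partial_fit(features, labels, classes=classes)

# Example: one incoming mini-batch of two observations with three features each.
on_new_batch(np.array([[0.1, 0.7, 3.0], [0.9, 0.2, 1.5]]), np.array([0, 1]))
```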
[0081]The ambient machine further utilizes the digital twins to generate the synthetic data by simulating one or more scenarios and interventions, in the one or more environments, for training the one or more AI models. The synthetic data supports the training of the one or more AI models, enabling robust decision-making and effective control strategies. The simulations also explore rare or complex scenarios that may not frequently occur in the real world, accelerating learning.
[0082]The ambient machine further utilizes multi-modal transformer models including at least one of: edge AI models, vision language models (VLM), and small language models (SLM). The edge AI models are lightweight, domain-specific models deployed at the edge, such as Vision Transformers (ViTs), that perform pixel-level detection, object classification, and real-time tracking of assets or objects in the one or more environments. The vision language models are heavier models for deeper analytical tasks such as engagement understanding, activity monitoring, and process compliance evaluation. The small language models are specialized models for generating instructions and orchestrating actuation commands within the one or more environments.
[0083]The ambient machine is configured to combine the multi-modal transformer models comprising at least one of: the vision transformer (ViT) models, the vision language models (VLM), and the small language models to perform at least one of: one or more granular tasks, contextual analysis, and translating the one or more insights into the one or more optimal actions, in the one or more environments.
[0084]The ambient machine is configured to learn one or more domain specific tasks from one or more trainer (i.e., teacher model) AI models to perform the one or more optimal actions in the one or more environments. In an embodiment, the one or more trainer AI models are configured to automatically train one or more trainee (i.e., student model) AI models with one or more knowledge domains, for prioritizing object detection and tracking for inventory management. The teacher models are centralized, heavier models with broad knowledge domains that act as “teachers,” training smaller, task-specific student models deployed at the edge. The student models, focused on domain-specific tasks, learn from the teacher models to perform efficiently in their designated environments. For example, a teacher model might train a student model to prioritize asset detection and tracking for inventory management. In an embodiment, the teacher models automate the training of the student models, ensuring consistent learning and adaptability. The training subsystem 216 dynamically updates and refines these relationships based on the performance of the student models in real-world scenarios.
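The teacher-student relationship described above may be illustrated, purely as a non-limiting sketch, by a standard knowledge-distillation step in PyTorch, in which the student is trained to match the softened outputs of the teacher; the models, data, and temperature value are placeholders rather than the system's prescribed configuration.

```python
# Illustrative knowledge-distillation step: a compact student learns to match
# the softened output distribution of a larger teacher model.
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, optimizer, x, temperature=4.0):
    teacher.eval()
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    # KL divergence between softened teacher and student distributions.
    loss = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=-1),
        F.softmax(t_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```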
[0085]The ambient machine is further configured to obtain one or more feedback, insights, and guidance from the one or more stakeholders for learning. The one or more stakeholders articulate their desired outcomes and define the characteristics of an “ideal environment,” providing the ambient machine with a target to strive towards. The ambient machine thereby develops a deeper understanding of the nuances and complexities of the one or more environments, including factors that might not be readily apparent from the sensor data alone. This human input helps refine the ambient machine's objectives, constraints, and decision-making criteria through interaction with the human (e.g., the one or more stakeholders).
[0086]In an aspect, the ambient machine is conceptualized as a network of interconnected machines, organized as pipelines. Each pipeline represents a specific workflow or task. The machines are the fundamental units within a pipeline. The machines represent individual models or functions that perform specific tasks, such as data transformation or analysis. The machines can be further composed of sub-machines, creating a hierarchical structure. The pipelines connect a plurality of machines in a specific sequence, establishing an automated workflow. Data flows through the pipeline, undergoing transformations and analysis at each machine stage. The orchestration in the ambient machine emphasizes learning and adaptation at the pipeline level. When the ambient machine learns, the orchestration aims to optimize the entire pipeline, not just individual machines. This ensures that adjustments made to one part of the pipeline do not negatively impact the overall performance.
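A minimal, non-limiting sketch of the machine-and-pipeline organization, with illustrative machine names only, is given below.

```python
# Minimal sketch of "machines organized as pipelines": each machine is a
# callable stage, and a pipeline chains machines in sequence so data is
# transformed at every stage.
from typing import Any, Callable, List

class Machine:
    def __init__(self, name: str, fn: Callable[[Any], Any]):
        self.name = name
        self.fn = fn

    def run(self, data: Any) -> Any:
        return self.fn(data)

class Pipeline:
    def __init__(self, machines: List[Machine]):
        self.machines = machines

    def run(self, data: Any) -> Any:
        for machine in self.machines:          # data flows through each stage
            data = machine.run(data)
        return data

# Example: a two-stage workflow (transform, then analyze).
pipeline = Pipeline([
    Machine("normalize", lambda d: [x / max(d) for x in d]),
    Machine("summarize", lambda d: {"mean": sum(d) / len(d)}),
])
print(pipeline.run([2, 4, 6]))
```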
[0087]The plurality of subsystems 114 further includes the action executing subsystem 214 that is communicatively connected to the one or more hardware processors 110. The action executing subsystem 214 is configured to provide one or more commands to the one or more actuators to execute the one or more optimal actions. The one or more actuators may receive the one or more instructions/commands from the ambient machine, through updates in the one or more environment databases, and translate the one or more instructions/commands into one or more actions/physical actions within the one or more environments. In an embodiment, the one or more optimal actions include at least one of: controlling the one or more machinery, adjusting the conditions, displaying the information on the one or more screens, triggering the one or more alerts, recommending the one or more suggestions to the one or more stakeholders to validate, modify and approve the one or more required actions, and the one or more updates, in the one or more environments.
[0088]In an embodiment, the one or more humans may act as the one or more actuators. For example, a notification to a store employee prompting restocking translates human actions into physical changes in the one or more environments. In an embodiment, the one or more actuators may have different speeds based on the actions required. For example, the one or more actuators may have rapid speed for robotic movements and lower speed for store layout modifications.
[0089]In an embodiment, the data associated with the one or more sensors and the one or more actuators are integrated into the one or more environment databases. For example, the one or more sensors provide the real-time data into the one or more environment databases, while the one or more actuators respond to environment database updates. This integration ensures the environment operating system (OS) maintains an accurate representation of the one or more environments. The integration acts as a closed-loop system, where the one or more environments are sensed, analyzed, acted upon, and re-sensed for continuous adaptation and optimization.
[0090]The plurality of subsystems 114 further includes the interface configuration subsystem 218 that is communicatively connected to the one or more hardware processors 110. The interface configuration subsystem 218 is configured to provide one or more human computer interfaces (HCI) for the one or more stakeholders to interact with the ambient intelligence middleware environment operating system 102 for one or more processes. The one or more human computer interfaces are configured based on one or more roles of the one or more stakeholders. The one or more interfaces (HCI) are configured as one or more HCI dashboards that serve as a primary interface for the one or more stakeholders to interact with the AmI middleware environment operating system 102 and tailor the AmI middleware environment operating system 102 behavior to meet organizational needs. The one or more HCI dashboards are designed to provide different configurations to diverse user roles through specialized dashboards, offering tools and capabilities for various teams. For example, the HCI dashboard may provide a unique configuration for the one or more organizations. The HCI dashboard (a) defines strategic objectives and operational constraints, (b) monitors key performance indicators (KPIs) and system metrics, (c) customizes workflows and defines rules for the ambient machine, and (d) accesses actionable insights and recommendations to align system operations with organizational goals.
[0091]The HCI dashboard may further provide the unique configuration for an information technology (IT) team. The HCI dashboard (a) configures and onboards cameras, sensors, and other devices, (b) defines and maps areas, zones, or regions within the operational environment, (c) configures rulesets and assigns appropriate alerts to respective stakeholders, and (d) sets modes of alert delivery, such as email, messages, notifications, or calls. The HCI dashboard may further provide the unique configuration for an operations team. The HCI dashboard (a) monitors the daily status of stores, warehouses, or other operational environments, (b) tracks deviations from optimal or pre-defined KPIs, (c) receives alerts for anomalies occurring on a shop floor or in other monitored areas, and (d) accesses actuation options to promptly address and resolve the anomalies. The HCI dashboard may further provide the unique configuration for business heads or owners. The HCI dashboard (a) gains insights into key business KPIs and performance metrics, (b) identifies trends and areas for improvement to enhance revenue, and (c) makes informed decisions aimed at cost reduction and operational efficiency.
[0092]The plurality of subsystems 114 further includes the database updating subsystem 220 that is communicatively connected to the one or more hardware processors 110. The database updating subsystem 220 is configured to update the one or more environment databases based on the data continuously received from the one or more sensors, which adapts the digital twins to reflect current states of the one or more environments. The database updating subsystem 220 is further configured to update the one or more environment databases with the one or more changes that occurred in the one or more environments.
[0093]The plurality of subsystems 114 further includes the environment state analyzing subsystem 222 that is communicatively connected to the one or more hardware processors 110. The environment state analyzing subsystem 222 is configured to analyze the one or more states of the one or more environments, using the one or more sensors. For analyzing the one or more states of the one or more environments, the environment state analyzing subsystem 222 is configured to analyze one or more user profiles within the one or more environments using the one or more sensors and signal-level interoperability. The environment state analyzing subsystem 222 is further configured to model at least one of: noises, redundancies, outliers, and patterns, from the data received from the one or more sensors, using signal preprocessing techniques. The environment state analyzing subsystem 222 is further configured to apply the one or more AI models on the data to analyze the one or more states of the one or more environments. The environment state analyzing subsystem 222 is further configured to provide the one or more commands to the one or more actuators for executing the one or more optimal actions based on the analyzed one or more states of the one or more environments.
[0094]The environment state analyzing subsystem 222 is configured to determine an availability state of the one or more environments, comprising at least one of: shelves, counters, shelf-fullness, on-shelf availability, and out-of-stock items. The environment state analyzing subsystem 222 is configured to determine an assortment state of the one or more environments, based on at least one of: color (e.g., dark-to-light or light-to-dark, small-to-large horizontally or vertically), categories, and planograms.
[0095]The environment state analyzing subsystem 222 is configured to determine an adjacency state of the one or more environments and stock keeping units (SKU), based on one or more factors comprising at least one of: visual merchandising guidelines, campaigning, storyboarding, user engagements, volume of sales, and seasonal, regional, geographical factors. The environment state analyzing subsystem 222 is configured to analyze the stock keeping units (SKU) to extract one or more attributes comprising at least one of: designs, shapes, volumetric analysis, barcodes, QR codes, texts, logos, International Mobile Equipment Identity (IMEI), serial numbers and tags from at least one of: images, videos, and media contents, using a three dimensional AI and machine vision system utilizing artificial intelligence of things (AIOT) and liquid lens techniques.
[0096]The plurality of subsystems 114 further includes the graph generating subsystem 224 that is communicatively connected to the one or more hardware processors 110. The graph generating subsystem 224 is configured to generate a directed acyclic graph (DAG), using an orchestrator layer of the AmI middleware environment operating system 102, for configuring at least one of: a plurality of components, the plurality of subsystems 114, and information flow of the ambient machine utilizing environmental parameters, domain-specific tasks, and the one or more processes to manage inter- and intra-subsystem functioning, execute ambient machine workflows, and provide adaptable infrastructure and intelligent support for the one or more stakeholders and the one or more environments. Further, the API endpoints handle the raw and processed data, and the information flows from the one or more environments to the HCI endpoints for relaying live feeds, anomalies, alerts, statistics, smart search, and summarisation.
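As a non-limiting illustration of configuring subsystems and information flow as a DAG, the following Python sketch derives an execution order by topological sort; the node names are illustrative and do not prescribe the orchestrator layer's actual format.

```python
# Hedged sketch: subsystems expressed as a DAG (node -> set of predecessors)
# and ordered for execution with a topological sort.
from graphlib import TopologicalSorter  # Python 3.9+

dag = {
    "action_determining":  {"storage"},             # depends on stored data
    "action_executing":    {"action_determining"},
    "anomaly_determining": {"storage"},
    "storage":             {"sensing"},
    "sensing":             set(),
}

execution_order = list(TopologicalSorter(dag).static_order())
print(execution_order)  # e.g., ['sensing', 'storage', 'anomaly_determining', ...]
```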
[0097]The plurality of subsystems 114 further includes the user preference determining subsystem 226 that is communicatively connected to the one or more hardware processors 110. The user preference determining subsystem 226 is configured to perform online user assessments and recognition using an extraction of one or more biometric signatures (soft biometric signatures) comprising at least one of: face, fingerprint, voice, age, gender, gait, and visual cues comprising upper and lower body apparel color, in a less-constrained, unobtrusive, and privacy-compliant manner. In an embodiment, the online user assessments are performed in real-time or at pre-defined intervals. In an embodiment, the soft biometric signatures are registered as metadata for the one or more end users without establishing the identity of the one or more end users inside the one or more environments.
[0098]The user preference determining subsystem 226 is configured to detect the one or more end users in the one or more environments through the one or more biometric signatures. The user preference determining subsystem 226 is further configured to generate one or more tokenized identifiers to the one or more end users based on the detection of the one or more end users with the corresponding one or more biometric signatures. The user preference determining subsystem 226 is further configured to track the one or more end users to generate a user journey in the one or more environments. The user preference determining subsystem 226 is further configured to determine a positional probability of the one or more end users at one or more locations within the one or more environments by incorporating multicamera tracking and multi-modal asynchronous fusion techniques. The user preference determining subsystem 226 is further configured to determine user engagement with the one or more environments based on the positional probability of the one or more end users at the one or more locations within the one or more environments. The user preference determining subsystem 226 is further configured to extract one or more user engagement matrices comprising at least one of: dwell time and the one or more events (e.g., trying but not taking behavior, searching but not found, average time to search and taking a product from the shelves, and total time taken for the user to complete the shopping) associated with one or more activities performed by the one or more end users, using the computer vision and machine learning models (e.g., object detection, segmentation, key-point tracking, action classification, scene understanding, and visual question answering).
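A hedged, non-limiting sketch of issuing a tokenized identifier from soft-biometric metadata is shown below; the hashing scheme and field names are assumptions chosen to emphasize that no explicit identity is stored.

```python
# Privacy-oriented sketch (an assumption, not the disclosed method) of deriving
# an opaque per-environment token from soft-biometric cues such as gait or
# apparel color, without storing the identity itself.
import hashlib
import secrets

def tokenize_signature(soft_biometrics: dict, environment_salt: bytes) -> str:
    """Map a soft-biometric signature to an opaque token for in-environment tracking."""
    canonical = "|".join(f"{k}={soft_biometrics[k]}" for k in sorted(soft_biometrics))
    return hashlib.sha256(environment_salt + canonical.encode()).hexdigest()

salt = secrets.token_bytes(16)  # generated once per environment/deployment
token = tokenize_signature(
    {"gait": "0.83", "upper_apparel": "blue", "age_band": "25-34"}, salt
)
```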
[0099]In an embodiment, the AmI middleware environment operating system 102 utilizes one or more heterogeneous sensors and capturing devices to acquire information from the one or more environments in a direct or indirect manner. The network of sensors used to obtain this information generates a large amount of sensor data. The acquired data from the one or more environments allows the designing of an ambient learning framework which can continuously detect and recognize the one or more end users, track, segment, and classify their engagements, profile shelves and SKUs, and perform anomaly detection and preventive measures to ensure a secure and reliable ambient management system for end-to-end process monitoring, automation, and optimization.
[0100]The plurality of subsystems 114 further includes the anomaly determining subsystem 228 that is communicatively connected to the one or more hardware processors 110. The anomaly determining subsystem 228 is configured to compute the key environment state metrics from the live feed/images/signals using the one or more AI models or utilize pre-computed information from a persistent storage of the ambient machine or a combination of both. The anomaly determining subsystem 228 is configured to determine at least one of: one or more differences and one or more anomalies to provide the one or more alerts to the one or more stakeholders by verifying/matching a current state (i.e., a live environment state (nth state)) of the one or more environments with respect to a previous state (n-1th state) of the one or more environments, with one or more virtual machine based set rules.
[0101]In an embodiment, the anomaly determining subsystem 228 is configured to provide the one or more alerts to the one or more stakeholders upon determining the one or more anomalies in at least one of: live shelf status, product availability, and out-of-stock items, by matching the data with the one or more virtual machine based set rules. The anomaly determining subsystem 228 is further configured to provide the one or more alerts to the one or more stakeholders upon determining the one or more anomalies in at least one of: arrangement of products on the shelves based on color, category, planograms, pricing, promotions, offers, and visual merchandising guidelines, by matching the data with the one or more virtual machine based set rules. The anomaly determining subsystem 228 is configured to provide the one or more alerts to the one or more stakeholders upon determining the one or more anomalies in a placement of the products on the shelves based on one or more business rules, by matching the data with the one or more virtual machine based set rules.
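The comparison of the live (nth) state against the previous (n-1th) state and the set rules may be illustrated, as a non-limiting sketch only, as follows; the field names, rules, and thresholds are assumptions introduced for clarity.

```python
# Illustrative comparison of a live environment state against the previous
# state and a small rule set, raising alerts on deviations.
from typing import Dict, List

RULES = {
    "shelf_fullness_min": 0.30,   # alert if shelf fullness drops below 30%
    "max_out_of_stock": 2,        # alert if more than 2 SKUs are out of stock
}

def detect_anomalies(prev_state: Dict, curr_state: Dict) -> List[str]:
    alerts = []
    if curr_state["shelf_fullness"] < RULES["shelf_fullness_min"]:
        alerts.append("alert:shelf_below_threshold")
    if curr_state["out_of_stock"] > RULES["max_out_of_stock"]:
        alerts.append("alert:out_of_stock_items")
    if curr_state["planogram_hash"] != prev_state["planogram_hash"]:
        alerts.append("alert:planogram_deviation")
    return alerts
```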
[0102]The plurality of subsystems 114 further includes the updating subsystem 230 that is communicatively connected to the one or more hardware processors 110. The updating subsystem 230 is configured to determine user profile deltas to update the one or more user profiles for providing the optimized experience to the one or more end users. The update process includes computing the key user profile matrices (including recognizing the one or more end users within the one or more environments, continuously tracking their position, action, and interactions, evaluating the preferences and the key user engagement metrics) from the live feed/images/signals using the one or more AI models or utilizing pre-computed information from the persistent storage of the ambient machine or a combination of both.
[0103]The updating subsystem 230 is configured to update the one or more tokenized identifiers of the one or more end users when the one or more end users are recognized in the one or more environments with optimized confidence and variability. The updating subsystem 230 is further configured to update the journey of the one or more end users, in the one or more environments, with heatmaps, dwell-time, and flow-maps, in the user master. The updating subsystem 230 is further configured to update the user engagement matrices including at least one of: the dwell time, an average time spent on each brand/shelf/category of SKU, most frequently visited shelves/counters/brands, most frequently purchased shelves/counters/brands, and the one or more events (e.g., trying but not taking events) associated with one or more activities performed by the one or more end users in the one or more environments.
[0104]The updating subsystem 230 is further configured to update and recommend assortment of the shelves, counters and products based on at least one of: user engagement, visibility, and searchability, in the one or more environments. The updating subsystem 230 is further configured to update and recommend adjacency of the brands and shelves, and arrangement and placement of the products for providing the optimized experience to the one or more end users. The updating subsystem 230 is further configured to update and recommend at least one of: promotions and discounts, suitable for the products to optimize user attraction. The updating subsystem 230 is further configured to update and recommend at least one of: offers, cashback, and loyalty programs (e.g., third party loyalty programs) for the one or more end users, enabling marketing in the moment for retail media.
[0105]In addition to the above said subsystems, the plurality of subsystems 114 further includes a data interoperability subsystem that is communicatively connected to the one or more hardware processors 110. The data interoperability subsystem is configured to process the sensor data. Within the data interoperability subsystem, a docker environment is employed to encapsulate different models in a modular form, specifically tailored for diverse types of the sensor data. The data interoperability subsystem serves the purpose of structuring the raw sensor data to ensure the data interoperability. The interoperability enables different information technology systems and software to communicate effectively and consistently, fostering context awareness. The context awareness refers to gathering and analysing the sensor data in the one or more environments. The data interoperability subsystem is configured with an ambient computing layer. The ambient computing layer is configured to handle computation processes including the Internet of Things (IoT), artificial intelligence, the algorithms, data transformation, and the like. The ambient computing layer is configured to provide the different docker environments specially designed for the given model. The models are defined in specific programming languages selected based on the time and space complexity of the problem.
[0106]The data interoperability process unfolds at three levels: sensor level, signal level, and model level. At the sensor level interoperability, the sensor data are merged at the hardware level to interpret an ambient context. For instance, combining vision data from a video feed with depth data from the depth sensors assists in identifying the ambient context while preserving the original data. The signal level interoperability involves transforming the sensor data within the AmI middleware environment operating system 102, extracting the ambient context in a manner akin to the sensor level interoperability.
[0107]Furthermore, the model level interoperability deals with trained AI/ML models responsible for identifying the context protocols. The trained AI/ML models, adhering to frameworks including TensorFlow, PyTorch, Keras, and the like, are transformed across different frameworks or architectures to enhance processing efficiency on edge devices. This comprehensive approach to the interoperability ensures that the real-time sensor data is amalgamated and structured effectively, facilitating the context awareness, and preserving the integrity of the original data interpretations. The trained AI/ML models may include, but are not limited to, artificial intelligence models, computer vision models, and the like.
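One plausible, non-limiting instance of such model-level transformation is exporting a PyTorch model to the ONNX format for efficient execution on edge devices; the model and input shape below are placeholders, and ONNX is used only as an illustrative target rather than a mandated path.

```python
# Illustrative framework-to-framework transformation: export a trained PyTorch
# model to ONNX so it can run on edge runtimes independent of the training framework.
import torch
import torchvision

model = torchvision.models.mobilenet_v2(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)   # example input shape

torch.onnx.export(
    model,
    dummy_input,
    "edge_model.onnx",
    input_names=["image"],
    output_names=["logits"],
    opset_version=17,
)
```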
[0108]The plurality of subsystems 114 further includes a context event recognition subsystem that is communicatively connected to the one or more hardware processors 110. The context event recognition subsystem is configured to process the interoperability data received from the data interoperability subsystem. In the context of the context event recognition subsystem, the docker environment takes on the computationally intensive tasks of executing artificial intelligence algorithms, including damage detection, human detection, compliance checks, and the like. Within the one or more environments, an exploration mode involves running the trained AI/ML models on the interoperability data to extract discriminative information from the one or more environments. The trained AI/ML models are configured to identify the state of the one or more environments at any given moment. The resulting context awareness is instrumental in recognising events defined by the one or more users, including damaged product identification, product profiling, human detection and identification, and security breach detection, as specified in an event configuration file within the WMS. Henceforth, the context event recognition subsystem is configured to provide event data, represented as time-series data capturing event statuses at an optimised frequency rate. The context event recognition subsystem is configured with a config protocols layer. The config protocols layer carries all the event configuration files in a JavaScript Object Notation (JSON) file format for further processes and the algorithms/models. The event configuration files are configured with elementary data structures that may include, but are not limited to, string, integer, float, and Boolean, in the form of paths, Uniform Resource Locator (URL), hyperparameters, and the like. The elementary data structures may change for different protocols followed by the algorithms/models. The config protocols layer provides the event configuration file to the algorithm/model, which takes the data from the data logging layer or the previous docker environment, applies the data transformation or analysis, and stores it in the data logging layer or passes it to the other docker environments.
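A non-limiting example of such an event configuration file, using elementary JSON data types, paths, URLs, and hyperparameters, and loaded in Python, is shown below; all keys and values are illustrative and not prescribed by the disclosure.

```python
# Hedged example of an event configuration file in JSON, with elementary data
# types (string, integer, float, boolean), a path, and a URL.
import json

example_config = """
{
  "event": "damaged_product_detection",
  "model_path": "/models/damage_detector.onnx",
  "stream_url": "rtsp://camera-07/stream",
  "confidence_threshold": 0.85,
  "frame_stride": 5,
  "alert_enabled": true
}
"""

config = json.loads(example_config)
assert isinstance(config["confidence_threshold"], float)
```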
[0109]The plurality of subsystems 114 further includes an event monitoring subsystem that is communicatively connected to the one or more hardware processors 110. The event monitoring subsystem is configured to observe the events within the one or more environments based on the event data received from the context event recognition subsystem. The event monitoring subsystem is focused on scanning the event data and generating corresponding protocol actuation data. The protocols are user-defined through the event configuration files in the WMS, encompassing actions including picking damaged products, logging product profiles, implementing worker management protocols, managing restricted area access, and the like. Henceforth, the event monitoring subsystem is configured to translate the detected events into the actionable protocols, facilitating a responsive and controlled operational environment.
[0110]The plurality of subsystems 114 further includes a protocol analysis subsystem that is communicatively connected to the one or more hardware processors 110. The protocol analysis subsystem is configured to analyze, by an interaction manager, the protocols generated by the event monitoring subsystem. The protocol analysis subsystem is configured to identify the protocol access rights of the one or more actuators based on the one or more events. The one or more actuators are responsible for executing predefined protocols that facilitate interaction with various autonomous entities including, but not limited to, Autonomous Guided Vehicles (AGVs), robotic arms, control panels, virtual reality (VR) goggles, and the like. In the environment, ambient awareness is derived from the sensor data, triggering the specific protocols and responses through the activation of the one or more actuators, thereby creating adaptive and responsive ecosystems. This stage introduces a hierarchical access control system, distinguishing between the different user roles. For instance, a worker may have limited access rights compared to a warehouse manager, encompassing permissions such as server room access, database 104 access, and actuation overwrite capabilities. The protocol analysis subsystem is configured to ensure that the one or more users with distinct roles and responsibilities have appropriate and secure access to the one or more actuators, enhancing the overall control and management of the AmI middleware environment operating system 102. The protocol analysis subsystem is configured to manage the one or more sensors, their data streaming to the data interoperability subsystem, and the one or more actuators to provide updates to the environment in terms of storing and updating spatio-temporal meta-data in the database 104, sending alerts, alarms, and actionable insights to the decision makers on a human-computer interaction (HCI) dashboard associated with the one or more communication devices 106. The protocol analysis subsystem is configured with a data logging layer. The data logging layer is configured to manage all the data flow and storage. The Input-Output interaction of the data with any algorithm is managed by the data logging layer. The data logging layer is configured to handle the heavy data flow and the API requests between the ambient computing layer and the API layer, respectively.
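The hierarchical access control idea may be sketched, in a non-limiting manner, as a simple role-to-permission mapping checked before protocol execution; the roles and permissions below are illustrative assumptions.

```python
# Sketch of hierarchical access control for actuators: each role is granted a
# set of permissions, and protocol execution is checked against them.
ROLE_PERMISSIONS = {
    "worker":            {"view_alerts", "acknowledge_alerts"},
    "warehouse_manager": {"view_alerts", "acknowledge_alerts",
                          "actuation_overwrite", "server_room_access",
                          "database_access"},
}

def can_execute(role: str, permission: str) -> bool:
    return permission in ROLE_PERMISSIONS.get(role, set())

assert can_execute("warehouse_manager", "actuation_overwrite")
assert not can_execute("worker", "actuation_overwrite")
```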
[0111]FIG. 3 illustrates an overview of the AmI middleware environment operating system 102, in accordance with an embodiment of the present disclosure. The AmI middleware environment operating system 102 (as shown in FIG. 3) is an intelligence middleware for retail and supply chain environments which connects with existing IoT infrastructure and transforms it into ambient machines, each performing fundamental operations: sensing the environment states and user profiles using the one or more sensors, estimating the key state metrics and user preferences (latent desires) and engagement, actuating the actions using the one or more actuators, and updating the environment states (or perception of the state) of the one or more environments 302 based on the user’s preferences.
[0112]The AmI middleware environment operating system 102 is a unique environment operating system configured to create and manage automated AI-driven workflows, enhancing interactions within real-world settings. By integrating the data from the one or more sensors, one or more user inputs, and digital simulations, the AmI middleware environment operating system 102 is configured to bridge the gap between the one or more environments (i.e., the physical environments) 302 and their digital representations. Through continuous learning, adaptation, and intelligent control, the AmI middleware environment operating system 102 optimizes key performance metrics, including cost efficiency and overall value.
[0113]The AmI middleware environment operating system 102 is configured to intelligently adapt to the one or more user inputs, behaviors, preferences, and environmental conditions in real time, enhancing both user experience and operational efficiency. The AmI middleware environment operating system 102 is configured to establish a holistic framework that combines user personalization, real-time sensor data acquisition, synthetic simulation, ambient machine generation, synthetic learning, real-world implementation, active learning, and responsive environmental control. By seamlessly integrating these elements, the AmI middleware environment operating system 102 is configured to create a highly adaptive and intelligent ecosystem.
[0114]The AmI middleware environment operating system 102 functions as a central processing layer that senses, analyzes, and dynamically updates the surrounding environment based on the user profiles, behaviors, and preferences. The AmI middleware environment operating system 102 consists of the one or more hardware processors 110, memory 112, and a distributed database 104, which can be deployed at the edge, in the cloud, or through a hybrid approach. This architecture enables real-time adaptability and intelligent decision-making for optimized interactions within the one or more environments 302. The AmI middleware environment operating system 102 is structured around a variety of programmable subsystems and the AmI middleware environment operating system 102 includes one or more key components comprising the one or more environments 302, an environment database (i.e., the storage subsystem 210), the ambient machine, the one or more digital twins, and teacher-student learning of the ambient machine.
[0115]The one or more environments 302 are real-world environments being managed, including at least one of: one or more retail stores, one or more warehouses, and an entire value chain. The one or more environments 302 are populated with the one or more sensors and the one or more actuators that collect the data from the one or more environments 302 and perform the one or more actions in the one or more environments 302.
[0116]The environment database is configured to store a current state of the one or more environments 302, including both real-time data from the one or more sensors and the one or more digital twins of the one or more environments 302. The environment database can store structured, unstructured, and synthetic data in one or more formats including at least one of: SQL, NoSQL, graph, and vector databases. The environment database serves as a single source of truth for the AmI middleware environment operating system 102.
[0117]The ambient machine is a collection of machines that operate on the environment database to sense, think, act, and learn. The ambient machine is configured to utilize the AI models to analyze the environment data and make decisions about how to actuate the one or more environments 302. The ambient machine can also learn from the environment data and improve its performance over time. The one or more digital twins are synthetic representations of the real environment, generated using information from the one or more environments 302, the one or more sensors, the one or more actuators, and the ambient machine. The one or more digital twins act as the control tower for the real environment, enabling simulation, optimization, and decision-making.
[0118]The teacher-student learning model (i.e., a part of the training subsystem 216) is configured to enable continuous adaptation and evolution of ambient machines based on real-time data and user interactions. By utilizing synthetic simulations and emulations, the teacher-student learning model refines machine learning models over time. The teacher-student learning model also analyzes operational protocols and business KPIs, ensuring that the AmI middleware environment operating system 102 performance aligns with predefined goals. This continuous learning and improvement process allows the ambient machines to evolve dynamically, enhancing their ability to respond to changing conditions and optimize the one or more environments 302 in accordance with user needs and operational objectives.
[0119]FIG. 4 illustrates an architecture of the AmI middleware environment operating system 102, in accordance with an embodiment of the present disclosure. The architecture of the AmI middleware environment operating system 102 includes a sensor and actuator layer that forms an interface connecting the one or more environments 302 to the digital system. The sensor and actuator layer includes the one or more sensors configured to collect the real-time data from the one or more environments 302 and transmit the real-time data to the environment database (i.e., the storage subsystem 210) for providing a continuous feed of environmental updates.
[0120]The AmI middleware environment operating system 102 is configured to utilize an array of the one or more sensors including at least one of: one or more cameras for visual data, one or more motion sensors for movement detection, one or more temperature sensors, one or more specialized sensors tailored to specific applications, one or more surveillance cameras, one or more product cameras, one or more depth sensors, light detection and ranging (LiDAR), warehouse management systems (WMS), and the like. In an embodiment, the AmI middleware environment operating system 102 is configured to consider one or more humans (including end users and stakeholders) as the one or more sensors. The actions and interactions, including a customer’s purchase, are considered as sensor inputs, transmitting valuable information into the AmI middleware environment operating system 102. The real-time data from the one or more sensors may include at least one of: spatial configuration data indicating location and layout of objects in the one or more environments 302, temporal data indicating events and their timing, metadata indicating contextual information about the one or more environments 302, and visual data indicating images and other related formats.
[0121]Further, the one or more actuators may receive instructions from the ambient machine, via updates in the Environment DB, and translate these instructions into physical actions within the one or more environments 302. The actions include controlling machinery such as robots or conveyor belts, adjusting lighting or temperature, displaying information on screens, and triggering notifications or alerts for human operators. In an embodiment, similar to the one or more sensors, the humans may act as actuators. For example, a notification to a store employee prompting restocking translates human actions into physical changes in the one or more environments 302. The one or more actuators exhibit varying speeds of action, from rapid robotic movements to slower processes such as store layout modifications.
[0122]The one or more sensors may provide real-time data into the environment database, while the one or more actuators respond to database updates. This integration ensures the Environment OS maintains an accurate representation of the one or more environments 302. The ambient machine acts as the brain of the AmI middleware environment operating system 102, which analyzes the sensor data, makes decisions, and issues commands to the one or more actuators in the one or more environments 302. This forms a closed-loop system in which the one or more environments 302 are sensed, analyzed, acted upon, and re-sensed for continuous adaptation and optimization. The human computer interaction (HCI) is a specialized subset of this layer. Human actions on user interfaces are treated as sensor inputs, and the responses of the AmI middleware environment operating system 102 through these HCI interfaces are considered as actuation outputs.
[0123]In an embodiment, the one or more environment databases are configured to store a comprehensive representation of the one or more environments 302, encompassing both real-time data and the synthetic-simulated data that allows for simulation and planning. The one or more environment databases are configured to store at least one of: data associated with static components (i.e., layouts, walls, floors, shelves, and entry/exit points that provide the foundational structure), semi-dynamic components (i.e., products and stock keeping units (SKUs) across categories like fashion, beauty, and grocery, which exhibit periodic updates), dynamic components (i.e., customers, workers, and guards whose behaviors and movements are highly variable and require continuous tracking), spatio-temporal data (i.e., real-time data streams from the one or more sensors providing positional and temporal information), and contextual metadata (i.e., supplemental data describing device statuses, operational parameters, and environmental annotations). In an embodiment, the one or more environment databases are constantly updated with new data from the one or more sensors and feedback from actions performed in the one or more environments 302. This dynamic updating ensures that the one or more environment databases always reflect the most current state of the real world. The data stored in the one or more environment databases provide the foundation for the ambient machine's learning process. By analyzing both the real and synthetic data, the ambient machine may improve its understanding of the one or more environments 302 and optimize its actions over time.
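A non-limiting sketch of how the static, semi-dynamic, dynamic, spatio-temporal, and metadata layers of the environment database might be modeled is given below; the field names are assumptions for illustration only.

```python
# Illustrative (assumed, not prescribed) data model for the environment
# database layers described above.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class StaticComponent:        # layouts, walls, shelves, entry/exit points
    component_id: str
    geometry: Dict

@dataclass
class SemiDynamicComponent:   # products/SKUs with periodic updates
    sku: str
    category: str
    last_updated: float

@dataclass
class DynamicComponent:       # customers, workers, guards (continuously tracked)
    token_id: str
    position: Tuple[float, float]
    timestamp: float

@dataclass
class EnvironmentSnapshot:
    statics: List[StaticComponent] = field(default_factory=list)
    semi_dynamics: List[SemiDynamicComponent] = field(default_factory=list)
    dynamics: List[DynamicComponent] = field(default_factory=list)
    metadata: Dict = field(default_factory=dict)   # device statuses, annotations
```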
[0124]The ambient machine is the core intelligence and operational hub of the AmI middleware environment operating system 102. The ambient machine serves as the brain that analyzes data, makes decisions, and orchestrates the one or more actions within the one or more environments 302. By integrating with the one or more environment databases, the ambient machine facilitates real-time data processing and command execution through a structured workflow. The detailed implementation of the ambient learning has been explained in detail with reference to FIG. 2.
[0125]FIG. 5 illustrates an exemplary visual representation depicting the AmI middleware environment operating system 102 with the ambient machine showing a learning process from the one or more environments 302, in accordance with an embodiment of the present disclosure.
[0126]As already explained with reference to FIG. 2, the ambient machine with the one or more AI models is continuously trained by utilizing both real-time data from the one or more environments 302 and the synthetic data generated from the digital twin within the one or more environment databases. The learning process aims to improve the performance of the ambient machine and its ability to manage the one or more environments 302 effectively. The AI models and workflows are refined by analyzing feedback and adapting to evolving conditions. For training the ambient machine with the one or more AI models, the ambient machine initially processes spatio-temporal data streams from a distributed network of the one or more sensors. The data include spatial configurations, temporal changes, and contextual metadata, forming a real-time dynamic representation of the one or more environments 302. The ambient machine further utilizes online and incremental learning techniques to continuously update its AI models and adapt to real-time changes, ensuring responsiveness and relevance.
[0127]The ambient machine further utilizes the digital twins to generate the synthetic data by simulating one or more scenarios and interventions, in the one or more environments 302, for training the one or more AI models. The synthetic data supports the training of the one or more AI models, enabling robust decision-making and effective control strategies. The simulations also explore rare or complex scenarios that may not frequently occur in the real world, accelerating learning. The ambient machine further utilizes multi-modal transformer models including at least one of: edge AI models, vision language models (VLM), and small language models (SLM). The edge AI models are lightweight, domain-specific models deployed at the edge, such as Vision Transformers (ViTs), that perform pixel-level detection, object classification, and real-time tracking of assets or objects in the one or more environments 302. The vision language models are heavier models for deeper analytical tasks such as engagement understanding, activity monitoring, and process compliance evaluation. The small language models are specialized models for generating instructions and orchestrating actuation commands within the one or more environments 302.
[0128]The ambient machine is configured to combine the multi-modal transformer models comprising at least one of: the vision transformer (ViT) models, the vision language models (VLM), and the small language models to perform at least one of: one or more granular tasks, contextual analysis, and translating the one or more insights into the one or more optimal actions, in the one or more environments 302. The ambient machine is configured to learn one or more domain specific tasks from one or more trainer (i.e., teacher model) AI models to perform the one or more optimal actions in the one or more environments 302. In an embodiment, the one or more trainer AI models are configured to automatically train one or more trainee (i.e., student model) AI models with one or more knowledge domains, for prioritizing object detection and tracking for inventory management. The teacher models are centralized, heavier models with broad knowledge domains that act as “teachers,” training smaller, task-specific student models deployed at the edge. The student models, focused on domain-specific tasks, learn from the teacher models to perform efficiently in their designated environments. For example, a teacher model might train a student model to prioritize asset detection and tracking for inventory management. In an embodiment, the teacher models automate the training of the student models, ensuring consistent learning and adaptability. The training subsystem 216 dynamically updates and refines these relationships based on the performance of the student models in real-world scenarios.
[0129]The ambient machine is further configured to obtain one or more feedback (through a feedback loop), insights, and guidance from the one or more stakeholders for learning. The one or more stakeholders articulate their desired outcomes and define the characteristics of an “ideal environment,” providing the ambient machine with a target to strive towards. The ambient machine thereby develops a deeper understanding of the nuances and complexities of the one or more environments 302, including factors that might not be readily apparent from the sensor data alone. This human input helps refine the ambient machine's objectives, constraints, and decision-making criteria through interaction with the human (e.g., the one or more stakeholders).
[0130]The ambient machine is conceptualized as a network of interconnected machines, organized as pipelines. Each pipeline represents a specific workflow or task. The machines are the fundamental units within a pipeline. The machines represent individual models or functions that perform specific tasks, such as data transformation or analysis. The machines can be further composed of sub-machines, creating a hierarchical structure. The pipelines connect a plurality of machines in a specific sequence, establishing an automated workflow. Data flows through the pipeline, undergoing transformations and analysis at each machine stage. The orchestration in the ambient machine emphasizes learning and adaptation at the pipeline level. When the ambient machine learns, the orchestration aims to optimize the entire pipeline, not just individual machines. This ensures that adjustments made to one part of the pipeline do not negatively impact the overall performance.
[0131]FIG. 6 illustrates an exemplary visual representation 600 depicting a prompt engineering and agentic AI system, in accordance with an embodiment of the present disclosure. The prompt engineering and agentic AI system is configured to enable real-time retrieval and generation of responses to natural language queries related to the one or more environments 302, business operations, and key performance indicators (KPIs). The framework of the prompt engineering and agentic AI system integrates Retrieval Augmented Generation (RAG), Cache Augmented Generation (CAG), and Knowledge Augmented Generation (KAG) methods, alongside Mixture of Experts and a Multi-head Self-Attention Mechanism. These elements collectively enhance the speed, adaptability, and generative capabilities of LLMs, SLMs, and VLMs while leveraging domain-specific multi-SLMs for accuracy and efficiency. The above said models may be deployed on both edge and cloud environments, ensuring precise and context-aware responses on demand.
[0132]In an embodiment, the Agentic AI system is fine-tuned to interpret complex queries related to the one or more environments 302, environmental databases, and business contexts. These models may process vast amounts of unstructured data, extracting relevant insights and generating comprehensive responses. Multi-agentic systems specialized in domain-specific tasks are optimized for both speed and resource efficiency. The Agentic AI system dynamically selects between LLMs, VLMs, and multi-SLMs based on query complexity, context, and the required information type, ensuring an optimal balance between computational efficiency and response quality. Further, the Agentic AI system seamlessly integrates with the one or more digital twins, facilitating continuous and adaptive interactions between stakeholders and the one or more environments 302.
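A minimal, non-limiting retrieval-augmented generation (RAG) sketch of the query flow is shown below; the embedding function, documents, and generation step are placeholders rather than the actual LLM/SLM/VLM stack of the disclosed system.

```python
# Minimal RAG sketch: retrieve the most relevant environment facts, then pass
# them as context to a language model (the model call is a placeholder).
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding; a real deployment would use an SLM/VLM encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=64)

DOCUMENTS = [
    "Aisle 4 shelf fullness dropped to 18% at 10:42.",
    "Weekly footfall is up 12% versus the prior week.",
    "SKU 5521 has been out of stock since Monday.",
]
DOC_VECTORS = np.stack([embed(d) for d in DOCUMENTS])

def retrieve(query: str, k: int = 2) -> list:
    q = embed(query)
    scores = DOC_VECTORS @ q / (np.linalg.norm(DOC_VECTORS, axis=1) * np.linalg.norm(q))
    return [DOCUMENTS[i] for i in np.argsort(scores)[::-1][:k]]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"[LLM would generate a response here using context]\n{context}"

print(answer("Which shelves need restocking?"))
```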
[0133]FIG. 7 illustrates an exemplary visual representation 700 depicting the one or more environments 302 with the one or more sensors 702 and the one or more actuators 704, in accordance with an embodiment of the present disclosure. The one or more environments 302 serve as a real-world setting where the AmI middleware environment operating system 102 operates and interacts with the one or more users and stakeholders. The one or more environments 302 represent a tangible counterpart to the digital twin housed within the environment database and the one or more environments 302 are critical foundations for the system's functionality. The one or more environments 302 may be characterized by continuous change, with variables including inventory levels, customer locations, and environmental conditions fluctuating rapidly. Some elements, including sensor readings, update quickly, while others, such as store layouts, change more gradually. This dynamic nature necessitates real-time adaptation and responsiveness from the AmI middleware environment operating system 102.
[0134]The one or more environments 302 may be integrated with a network of the one or more sensors 702 and the one or more actuators 704, forming the interface between the real world and the AmI middleware environment operating system 102. The one or more sensors 702 and the one or more actuators 704 are explained in detail in the above said paragraphs. The one or more environments 302 may encompass the entire value chain, including suppliers, manufacturing facilities, warehouses, distribution networks, and retail stores. The AmI middleware environment operating system 102 is configured to optimize this interconnected network of activities, enhancing value delivery to users while minimizing costs.
[0135]The one or more environments 302 may include the one or more humans who play an active role within the physical environments 302, functioning as both sensors 702 and actuators 704. Human actions and decisions contribute valuable data to the AmI middleware environment operating system 102, while the AmI middleware environment operating system 102 influences their behavior through instructions and feedback. This tightly coupled loop creates a dynamic interplay between human agency and automated processes.
[0136]The one or more environments 302 may further include a human-computer interface (HCI) that bridges the physical environments 302 and the digital realm of the AmI middleware environment operating system 102. The HCI includes interaction channels, such as dashboards for stakeholders, mobile applications for workers, and user interfaces within physical stores, that facilitate seamless communication between the humans and the AmI middleware environment operating system 102. The HCI layer is configured to enable the humans to perceive system-generated information (e.g., recommendations, alerts) and provide their actions and decisions back into the AmI middleware environment operating system 102.
[0137]Further, the physical environments 302 impose constraints including physical laws, safety regulations, and resource availability. While the digital twins allow for simulations and idealized scenarios, the AmI middleware environment operating system 102 must account for these real-world limitations when making decisions and executing actions. Further, the physical environments 302 provide a constant stream of feedback to the AmI middleware environment operating system 102 through its sensors 702. This real-time data enables the ambient machine to learn and adapt, refining its understanding of environmental dynamics and improving its ability to achieve stakeholder-defined objectives. The physical environments 302 in the AmI middleware environment operating system 102 are not merely a passive backdrop. The physical environments 302 are an active and integral part of the system's functionality. The complex interplay between the sensors 702, actuators 704, humans, and the ambient machine may create a dynamic and interconnected ecosystem aimed at optimizing the value chain and delivering enhanced user experiences.
[0138]The one or more end users inside the one or more environments 302 are profiled and recognised using biometric signatures including face, fingerprint, voice, and the like in a less-constrained, unobtrusive, and privacy-compliant manner. The profiling is also performed using soft biometric signatures including age, gender, gait, and other visual cues including upper and lower body apparel colour. The one or more end users are detected and tracked across multiple connected cameras, and key engagement matrices including heatmaps, flow-maps, dwell-time, and the like are captured by using computer vision and machine learning algorithms. The algorithms are also configured for object detection, segmentation, key-point tracking, action classification, scene understanding, visual question answering, and the like. Other third-party programs or software are employed for recognising and attaching the profile of the users as they arrive inside the one or more environments 302.
[0139]The algorithms are configured to improve the usability and acceptability of human environment-interaction (HEI) interfaces associated with the one or more communication devices 106. The HEI defines the scenario when the environment directly interacts with the worker, the user, or the warehouse manager in a less constrained, user-friendly, intelligent, and ubiquitous manner via the communication network 108. The HEI allows them to interact with their physiological traits (type, touch, or press), behavioural traits (voice command and gestures), and sentiments (mood, expression, behaviour). The suitable modalities for the HEI may include vision (face, age, gender, gesture, emotions), audio (voice and speech), smart glasses, mobiles, goggles, geo-position, time-tracking, motion, temperature, smell, other wearable IoT sensors, and the like.
[0140]In an aspect, the AmI middleware environment operating system 102 is configured to implement a process of estimating user configurations. The process of estimating the user configurations initially includes identifying the one or more end users upon entry into the one or more environments 302 seamlessly and involuntarily through the one or more sensors 702, ensuring minimal disruption, overt presence, and user intervention. The process further includes authenticating the one or more end users by matching the face against registered profiles, provided with user consent, to issue a tokenised identification (ID); if no user consent is given, a new user ID is assigned to ensure privacy and security. The process further includes extracting soft biometric signatures along with the visual cues and storing them as the meta-data for the one or more end users within the one or more environments 302 without explicitly establishing the identity of the one or more end users in the one or more environments 302.
[0141]The process further includes monitoring the end user's movements and generating a pathway visualisation illustrating the user's journey within the one or more environments 302. The process further includes estimating a positional probability of the one or more end users at various locations inside the one or more environments 302 in the absence of reliable tracking features and low trust level of tracklets by incorporating a multi-camera tracking and a multi-modal asynchronous fusion approach. The process further includes evaluating the user engagement with the one or more environments 302 and extracting the key engagement matrices.
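The multi-camera fusion of positional estimates may be sketched, in a non-limiting manner, as a confidence-weighted combination of per-camera observations; the values and the fused-confidence heuristic below are assumptions introduced for illustration.

```python
# Hedged sketch of positional probability estimation: each camera contributes a
# position estimate with a confidence, and the fused estimate is a
# confidence-weighted average.
import numpy as np

def fuse_positions(observations: list) -> tuple:
    """observations: [{'camera': 'cam1', 'xy': (3.2, 7.1), 'confidence': 0.6}, ...]"""
    weights = np.array([o["confidence"] for o in observations])
    points = np.array([o["xy"] for o in observations])
    fused_xy = (weights[:, None] * points).sum(axis=0) / weights.sum()
    fused_confidence = float(1.0 - np.prod(1.0 - weights))  # at least one camera is right
    return fused_xy, fused_confidence

xy, conf = fuse_positions([
    {"camera": "cam1", "xy": (3.2, 7.1), "confidence": 0.6},
    {"camera": "cam2", "xy": (3.5, 6.8), "confidence": 0.4},
])
```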
[0142]In another aspect, the AmI middleware environment operating system 102 is configured to implement a process of updating the state of the one or more environments 302 prior to the entry of the one or more end users. The process of updating the state of the one or more environments 302 initially includes computing the key engagement matrices using the pre-trained models, pre-computed information from the database 104, or a combination of both. The process further includes verifying the live environment state (the nth state) with respect to the (n-1)th state and setting the VM guidelines. The process further includes estimating the environment state anomalies and alerting the workers (i.e., the one or more stakeholders) and the warehouse managers for instant fixes.
[0143]The process further includes analysing the data, matching it against the set VM guidelines, and alerting if there is any anomaly in the live shelf status, product availability, out-of-stock items, the arrangement of the products on the shelves based on colour, category, planograms, pricing, promotions, offers, and other VM guidelines, or the placement of the products on the shelves based on business rules.
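As a non-limiting illustration of comparing the live nth state with the (n-1)th state against the set VM guidelines, the following Python sketch flags shelves whose fullness falls below an assumed guideline threshold or changes sharply between consecutive states; the field names and thresholds are assumptions introduced only for clarity.

    # Hypothetical anomaly check between consecutive environment states and VM guidelines.
    def detect_shelf_anomalies(prev_state, live_state, vm_guidelines):
        """prev_state/live_state: {shelf_id: fullness in [0, 1]};
        vm_guidelines: {"min_fullness": float, "max_drop": float}."""
        alerts = []
        for shelf_id, fullness in live_state.items():
            if fullness < vm_guidelines["min_fullness"]:
                alerts.append((shelf_id, "below minimum on-shelf availability"))
            drop = prev_state.get(shelf_id, fullness) - fullness
            if drop > vm_guidelines["max_drop"]:
                alerts.append((shelf_id, "abrupt stock drop since previous state"))
        return alerts


    alerts = detect_shelf_anomalies(
        prev_state={"S1": 0.9, "S2": 0.8},
        live_state={"S1": 0.85, "S2": 0.30},
        vm_guidelines={"min_fullness": 0.4, "max_drop": 0.3},
    )
    print(alerts)  # shelf S2 triggers both an availability alert and a stock-drop alert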
[0144]In yet another aspect, the AmI middleware environment operating system 102 is configured to implement a process of updating the state of the one or more environments 302 based on the user engagement. The process of updating the state of the one or more environments 302 initially includes computing the key user profile matrices including recognising the one or more end users inside the one or more environments 302, continuously tracking the position, action, and interactions, and further evaluating the preferences and the key engagement matrices using the pre-trained models, pre-computed information from the database 104, or a combination of both.
[0145]The process further includes estimating the user profile deltas, that is, the new user configurations versus the overlapping user configurations. The process further includes updating the profile state of the user for personalised shopping experiences. The process further includes updating the tokenised ID of the one or more end users whenever the one or more end users are recognised in the environment with higher confidence and variability, and updating the user’s journey in the environment with the key engagement matrices in the database 104.
[0146]The process further includes updating the user’s engagement matrices including the average time spent on each brand/shelf/category of SKU, the most frequently visited shelves/counters/brands, the most frequently purchased shelves/counters/brands, trying-but-not-taking events, and the like. The process further includes updating and recommending the assortment of the shelves or counters and products based on the user’s engagement matrices, visibility, and searchability. Further, the adjacency of the brands and shelves and the arrangement and placement of the products are updated and recommended for improving the user experience. Promotions and discounts suitable for the products or brands are then updated and recommended to improve user attraction. Lastly, offers, cashback, and other third-party loyalty programs are updated and recommended for the one or more end users, enabling marketing at the moment for retail media.
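The engagement-matrix bookkeeping described above may, purely as an illustration, be maintained as a simple per-user structure. The following sketch accumulates dwell time, visit counts, and trying-but-not-taking events per shelf; the field names and class structure are assumptions and not a prescribed data model of the disclosure.

    # Hypothetical running update of per-user engagement matrices (field names assumed).
    from collections import Counter, defaultdict


    class EngagementProfile:
        def __init__(self):
            self.dwell_time = defaultdict(float)   # seconds spent per shelf/brand/category
            self.visits = Counter()                # visit counts per shelf
            self.try_not_take = Counter()          # trying-but-not-taking events per shelf

        def record_visit(self, shelf_id, seconds, picked_up=False, purchased=False):
            """Fold one observed interaction into the running engagement matrices."""
            self.dwell_time[shelf_id] += seconds
            self.visits[shelf_id] += 1
            if picked_up and not purchased:
                self.try_not_take[shelf_id] += 1

        def most_frequent_shelves(self, n=3):
            """Shelves visited most often, e.g. as input to assortment recommendations."""
            return [shelf for shelf, _ in self.visits.most_common(n)]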
[0147]FIG. 8 illustrates an exemplary visual representation depicting the digital twins 802 that are associated with the ambient environment, in accordance with an embodiment of the present disclosure. The digital twins 802 serve as a virtual replica of a real environment, including both the database and physical spaces. The digital twin 802 is represented within the one or more environment databases and mirrors the real environment's data and structure while enabling manipulations and simulations that are impractical in the real world. The one or more environment databases store digital representations of all elements within the one or more environments 302, ensuring an accurate virtual mirror of real-world data. The digital twin 802 incorporates 3D models of physical spaces, such as store layouts and warehouse configurations, along with defined data schemas for a structured and comprehensive virtual representation.
[0148]The digital twin 802 is configured to facilitate risk-free scenario testing and planning by allowing the one or more stakeholders and the ambient machine to simulate the one or more actions and strategies. The digital twin 802 is used to generate synthetic data (i.e., synthetic-simulated data) for training the one or more AI models, enabling the ambient machine to simulate one or more scenarios and refine the ML models' ability to optimize the real environment. The digital twin 802 may act as a continuous bridge between real and virtual environments 302, ensuring synchronization and enabling informed decision-making. The digital twin 802 evolves in real time, with the data (i.e., sensor data) feeding into the one or more environment databases to maintain synchronization with the physical environment. The digital twin 802 is configured to allow selective representation of specific areas or aspects relevant to tasks, optimizing computational efficiency. The digital twin 802 is used to simulate shelf reorganization and to assess impacts on workflow efficiency before physical implementation. The digital twin 802 is further used to test product placement strategies to evaluate effects on customer behavior and sales using the synthetic data. In an embodiment, the digital twin 802 is an indispensable tool within the AmI middleware environment operating system 102, empowering the one or more stakeholders and the ambient machine to simulate, plan, learn, and optimize the real environment more effectively.
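A minimal, illustrative sketch of a digital twin that mirrors sensor-derived state, simulates an action (such as a shelf reorganization) on a copy of that state, and generates noisy synthetic samples for model training is given below; the state structure and noise model are assumptions chosen for brevity rather than requirements of the disclosure.

    # Hypothetical minimal digital-twin sketch: mirrors environment state, simulates an
    # action before it is applied physically, and emits synthetic training samples.
    import copy
    import random


    class DigitalTwin:
        def __init__(self):
            self.state = {}  # e.g. {shelf_id: {"product": str, "fullness": float}}

        def sync(self, sensor_snapshot):
            """Keep the twin aligned with the latest sensor-derived snapshot."""
            self.state.update(sensor_snapshot)

        def simulate(self, action):
            """Apply an action to a copy of the state; the real environment is untouched."""
            simulated = copy.deepcopy(self.state)
            action(simulated)
            return simulated

        def synthetic_samples(self, n=100, noise=0.05):
            """Generate noisy variants of the current state for training AI models."""
            samples = []
            for _ in range(n):
                sample = copy.deepcopy(self.state)
                for shelf in sample.values():
                    shelf["fullness"] = min(1.0, max(0.0, shelf["fullness"] + random.uniform(-noise, noise)))
                samples.append(sample)
            return samples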
[0149]FIG. 9 illustrates a flow chart depicting an ambient intelligence (AmI) middleware environment operating method 900 for managing and optimizing the one or more environments 302 to provide the optimized experience to the one or more end users, based on the user configuration, in accordance with an embodiment of the present disclosure.
[0150]At step 902, the data associated with the one or more environments 302 are received from the one or more sensors 702. The data associated with the one or more environments 302 include at least one of: the spatial configuration data corresponding to the location and layouts of the one or more objects in the one or more environments 302, the temporal data corresponding to the one or more events that occurred in the one or more environments 302 and the timings at which the one or more events occurred, the metadata corresponding to the contextual information about the one or more environments 302, and the visual data corresponding to the one or more visual contents of the one or more objects in the one or more environments 302.
[0151]At step 904, the digital twins 802 including the synthetic representation of the one or more environments 302 are generated based on the information from at least one of: the one or more environments 302, the one or more sensors 702, and the one or more actuators 704.
[0152]At step 906, at least one of: the data and the digital twins 802, associated with the one or more environments 302, are stored in the one or more environment databases.
[0153]At step 908, the data from the one or more environment databases, are processed to determine the one or more optimal actions within the one or more environments 302 using the ambient machine with the one or more artificial intelligence (AI) models. The one or more optimal actions are determined upon analyzing one or more states of the one or more environments 302.
[0154]At step 910, the one or more commands are provided to the one or more actuators 704 to execute the one or more optimal actions. The one or more optimal actions include at least one of: controlling the one or more machinery, adjusting the conditions, displaying the information on the one or more screens, triggering the one or more alerts, recommending the one or more suggestions to the one or more stakeholders to validate, modify and approve the one or more required actions, and the one or more updates, in the one or more environments 302.
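By way of a non-limiting illustration, the following Python sketch strings the method steps 902 to 910 together into a single operating cycle. The sensor, actuator, database, ambient-machine, and twin interfaces named here are hypothetical placeholders introduced only to show the flow of the method, not a prescribed implementation of the disclosure.

    # Hypothetical end-to-end sketch of steps 902-910 (all interfaces are assumed).
    def run_ambient_cycle(sensors, actuators, environment_db, ambient_machine, twin):
        # Step 902: receive data associated with the environment from the sensors.
        data = [sensor.read() for sensor in sensors]

        # Step 904: refresh the digital twin from the latest observations.
        twin.sync_from(data)

        # Step 906: persist both the raw data and the twin state in the environment database.
        environment_db.store(data=data, twin_state=twin.state)

        # Step 908: let the ambient machine analyse the stored state and determine optimal actions.
        optimal_actions = ambient_machine.decide(environment_db.latest())

        # Step 910: issue commands to the actuators (or recommendations to stakeholders).
        for action in optimal_actions:
            actuators[action.target].execute(action.command)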
[0155]Numerous advantages of the present disclosure may be apparent from the discussion above. In accordance with the present disclosure, the AmI middleware environment operating system 102 for perceiving the environments 302 based on the user configuration is provided. The AmI middleware environment operating system 102 is configured to optimize the user shopping experience by bridging the reality gap between the one or more end users and the one or more environments 302 and providing ground-level visibility, compliance monitoring, real-time decision support, and revenue-maximizing insights to decision-makers. The AmI middleware environment operating system 102 is configured to minimize user churn due to the unavailability of stocks on the shelves and poor assortment, planograms, and adjacency of the products. The AmI middleware environment operating system 102 is further configured to minimize retail shrinkage due to defects or damages, theft or loss, poor inventory record accuracy (IRA), and inefficient and time-consuming inventory cycle counting activities.
[0156]The AmI middleware environment operating system 102 is further configured to minimize return orders arising primarily due to mis-shipments, poor product handling, and late damage discovery. Furthermore, the AmI middleware environment operating system 102 is further configured to minimize transactional disputes and related costs due to inaccurate product profiling, volumetric assessment, and lack of an intelligent vision system for validating assured product delivery from sourcing to delivery in the hands of the users. Moreover, the AmI middleware environment operating system 102 is further configured to maximize reorder frequency by reducing empty truck miles and improving the frequency of cycle counts and the accuracy of the IRA.
[0157]The Ambient Intelligence (AmI) middleware environment operating system 102 embodies a multi-disciplinary paradigm that utilizes a network of heterogeneous IoT sensors to create unobtrusive and secure environments. The AmI envisions an intelligent and adaptive environment that prioritizes user-centric approaches, providing greater user-friendliness, efficient services, user empowerment, and support for human interactions.
[0158]The AmI middleware environment operating system 102 is configured to address a wide range of business challenges across industries, delivering transformative value through comprehensive optimization, real-time insights, and data-driven decision-making. The capabilities of the AmI middleware environment operating system 102 are anchored in its ability to optimize the entire value chain, supported by specialized intelligence in key business domains. The AmI middleware environment operating system 102 is configured to enable end-to-end optimization of value chains, ensuring seamless integration and traceability across all stages of production, distribution, and consumption.
[0159]The AmI middleware environment operating system 102 is configured to track and optimize the journey of food items from farm to fork, ensuring quality, freshness, and compliance throughout the supply chain. The AmI middleware environment operating system 102 is configured to provide complete visibility from sourcing and manufacturing to store shelves, enabling timely replenishment and quality assurance. The AmI middleware environment operating system 102 is configured to ensure stakeholders have full visibility into the status, origin, and journey of products, enhancing operational efficiency and building consumer trust.
[0160]The AmI middleware environment operating system 102 is configured to ensure that suppliers and vendors meet stringent quality, cost, and convenience criteria through data-driven evaluations and continuous monitoring. The AmI middleware environment operating system 102 is configured to track vendor reliability, delivery timelines, and adherence to quality standards. The AmI middleware environment operating system 102 is configured to assess supplier pricing to ensure cost-effectiveness without compromising on quality. The AmI middleware environment operating system 102 is configured to identify potential disruptions in the supply chain and suggest alternative vendors or strategies to mitigate risks.
[0161]The AmI middleware environment operating system 102 is configured to provide unparalleled insights into pricing and promotional strategies by analyzing internal data and competitor information. The AmI middleware environment operating system 102 is configured to analyze competitor prices and internal promotional data to recommend the most effective pricing strategies, balancing customer appeal and profitability. The AmI middleware environment operating system 102 is configured to track the impact of promotions in real-time, enabling adjustments to maximize sales and minimize revenue loss. The AmI middleware environment operating system 102 is configured to suggest data-driven price adjustments for both in-store and e-commerce platforms to remain competitive and drive customer loyalty.
[0162]The AmI middleware environment operating system 102 is configured to empower businesses with deep insights into customer profiles, behaviors, and preferences, enabling hyper-personalized engagement. The AmI middleware environment operating system 102 is configured to aggregate and analyze customer data to build detailed profiles, including purchase history, behavior patterns, and preferences. The AmI middleware environment operating system 102 is configured to enable marketing-in-the-moment strategies at physical stores, such as personalized offers based on in-store behavior. The AmI middleware environment operating system 102 is configured to utilize advanced AI models to predict customer needs and preferences, improving satisfaction and loyalty.
[0163]The AmI middleware environment operating system 102 is configured to support robust revenue management by identifying opportunities to maximize income while minimizing inefficiencies. The AmI middleware environment operating system 102 is configured to track and evaluate revenue generation across different channels, products, and services. The AmI middleware environment operating system 102 is configured to identify high-margin products, underperforming categories, and revenue-enhancing strategies. The AmI middleware environment operating system 102 is configured to utilize historical and real-time data to forecast future revenue trends and optimize strategic decisions.
[0164]The AmI middleware environment operating system 102 is configured to ensure alignment between the physical environment and its digital twin 802, enabling seamless execution of operational tasks. The AmI middleware environment operating system 102 is configured to verify that all actuations, whether performed by humans or machines, are executed as intended, ensuring the environment aligns with its digital twin 802. The AmI middleware environment operating system 102 is configured to assist field force personnel in implementing changes, offering step-by-step guidance or automating tasks where feasible. The AmI middleware environment operating system 102 is configured to track environmental changes and dynamically update workflows to ensure operational alignment with business objectives.
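As a non-limiting illustration of verifying that an actuation has left the physical environment aligned with its digital twin 802, the following sketch compares observed values against the twin's expected values within an assumed tolerance; the flat key-value state representation is an assumption made for brevity.

    # Hypothetical verification that an actuation brought the environment in line with the twin.
    def verify_actuation(expected_twin_state, observed_state, tolerance=0.05):
        """Return the keys whose observed values diverge from the twin beyond the tolerance,
        so that corrective guidance or re-actuation can be triggered."""
        mismatches = []
        for key, expected in expected_twin_state.items():
            observed = observed_state.get(key)
            if observed is None or abs(observed - expected) > tolerance:
                mismatches.append(key)
        return mismatches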
[0165]The AmI middleware environment operating system’s 102 value chain optimization integrates supplier and vendor intelligence, cost intelligence, customer intelligence, revenue intelligence, and operational intelligence into a cohesive and adaptive framework. By leveraging these interconnected domains, Amiware transcends industry silos to deliver transformative solutions across a wide range of smart ecosystems.
[0166]The AmI middleware environment operating system 102 is configured to facilitate intelligent living environments by creating a comprehensive digital twin 802 of the home. This digital twin 802 not only replicates the physical structure but also adapts dynamically to changing family dynamics, such as accommodating elderly care with fall detection systems, child safety through restricted access zones, and energy optimization based on usage patterns. Specific IoT integrations, such as smart lighting, HVAC systems, and security features, are seamlessly coordinated using Operational Intelligence to deliver a unified and adaptive smart home experience. The AmI middleware environment operating system 102 manages the flow between the physical environments 302 and the digital twins 802 by sensing (using IoT infrastructure) and syncing data to the digital twins 802 through the environment database. It actuates changes in the physical environment (adjusting devices or initiating processes) and issues instructions via the ambient machine to optimize home automation. The environments 302 are customized for different users such as elderly individuals, children, or adults, by analyzing their preferences, behaviors, and profiles, ensuring a tailored and personalized living experience.
[0167]In ports, the AmI middleware environment operating system 102 focuses on productivity and efficiency by creating a digital twin 802 that mirrors cargo movement, equipment operations, and worker activities. Worker facilitation is enhanced through better scheduling algorithms, automation of repetitive tasks, and the deployment of real-time assistance tools, such as AI-driven equipment monitoring and safety alerts, which collectively improve efficiency and reduce operational delays. The AmI middleware environment operating system 102 manages the flow by sensing port operations through IoT-enabled sensors and syncing this data to the digital twin 802. Actuations in the port like crane adjustments, cargo reallocation, or worker instructions are informed and directed by the digital twin 802 via the ambient machine to ensure seamless operations. These processes are aligned to the needs of different stakeholders, such as dockworkers requiring safety and efficiency tools or shipping managers prioritizing accurate and timely cargo movement.
[0168]The AmI middleware environment operating system’s 102 digital twin 802 framework for IoT devices provides real-time monitoring and predictive analytics, ensuring optimal performance. Supplier and Vendor Intelligence ensures the quality and availability of components, while Operational Intelligence facilitates self-optimization, minimizing downtime and extending machine life. The AmI middleware environment operating system 102 senses IoT machine performance through embedded sensors, syncing the data to the digital twins 802. Adjustments or commands such as recalibrating devices or triggering maintenance, are actuated in the physical machine through precise instructions from the ambient machine. For users, such as operators or end-users of IoT systems, the environment 302 dynamically adjusts machine behavior to meet specific needs, such as performance tuning for industrial operators or energy-saving modes for consumers.
[0169]In factories, the AmI middleware environment operating system 102 aligns physical operations with their digital counterparts through advanced Operational Intelligence. The AmI middleware environment operating system 102 enables predictive maintenance, adaptive manufacturing, and seamless workflow optimization. The Supplier and Vendor Intelligence ensures material quality, while Cost Intelligence enhances resource efficiency, and Revenue Intelligence identifies high-margin production opportunities. The AmI middleware environment operating system 102 senses factory conditions through IoT sensors, syncing data such as machine performance and resource levels to the digital twin 802. Actuations like adjusting production rates, reallocating resources, or initiating maintenance are implemented via the ambient machine to maintain operational efficiency and align with factory goals. Factory workflows are customized to meet the requirements of stakeholders, such as production managers optimizing schedules or customers receiving timely, high-quality products.
[0170]The AmI middleware environment operating system’s 102 digital twin 802 of urban infrastructure enables dynamic management of city resources, including traffic, water, and energy systems. Real-time adjustments enhance citizen experiences by reducing commute times through optimized traffic flow and ensuring consistent access to essential utilities like water and electricity. Sustainability outcomes are bolstered by adaptive energy distribution and waste reduction strategies, promoting a greener and more efficient urban environment. The AmI middleware environment operating system 102 senses urban dynamics through IoT infrastructure, syncing traffic patterns, energy usage, and resource availability to the digital twin 802. Actuations, such as adjusting traffic signals, reallocating power, or deploying public services, are directed by the ambient machine to ensure responsive and efficient city management. Citizen profiles including residents, commuters, or visitors, are leveraged to personalize services such as adaptive transportation routes, optimized utility delivery, and customized public service access, ensuring a seamless urban experience across environments 302 like hotels, airports, and offices.
[0171]For agriculture, the AmI middleware environment operating system’s 102 digital twin 802 integrates real-time data on soil, weather, and crop health to optimize precision farming. Data collection is enabled through technologies such as IoT-enabled sensors for soil moisture and nutrient levels, satellite imagery for large-scale weather monitoring, and drones equipped with multispectral cameras for crop health assessment. This comprehensive approach ensures actionable insights for efficient resource utilization and higher yields. The AmI middleware environment operating system 102 senses environmental and crop conditions through its IoT network, syncing data to the digital twin 802 for analysis. Actuations, such as adjusting irrigation schedules or applying fertilizers, are executed through precise instructions from the ambient machine to optimize farming outcomes. Farmer profiles, including preferences for organic practices or crop-specific needs, inform tailored recommendations, maximizing productivity and sustainability.
[0172]The AmI middleware environment operating system 102 transforms healthcare environments 302 with digital twins 802 of hospital operations, patient workflows, and medical devices. Predictive analytics enhance patient care by forecasting potential health issues based on real-time and historical data, enabling early intervention and personalized treatment plans. The Operational Intelligence improves operational efficiency by optimizing staff scheduling, streamlining equipment utilization, and reducing patient wait times. Amiware senses hospital operations and patient data through IoT-enabled monitoring systems, syncing this information to the digital twin 802. Actuations such as adjusting staffing levels, reallocating equipment, or initiating patient alerts, are implemented via the ambient machine to ensure seamless healthcare delivery. Patient profiles, including medical history, preferences, and real-time health data, enable tailored services such as customized treatment plans or optimized room assignments for improved outcomes.
[0173]In energy grids, the AmI middleware environment operating system’s 102 digital twin 802 monitors real-time energy flow, ensuring stability and efficient distribution. Operational Intelligence integrates renewable sources and balances demand dynamically. Cost Intelligence minimizes energy waste, while predictive insights prevent outages and enhance grid performance. Amiware senses grid performance using IoT devices and syncs this data to the digital twin 802, which models energy distribution and identifies inefficiencies. Actuations such as load balancing, integrating renewable sources, or addressing faults, are executed via the ambient machine to maintain grid stability and efficiency.
[0174]The AmI middleware environment operating system’s 102 digital twin 802 for transportation systems provides real-time traffic monitoring, fleet optimization, and commuter insights. Operational Intelligence reduces congestion and ensures reliable operations, while Cost Intelligence minimizes expenses. The Supplier and Vendor Intelligence ensures infrastructure reliability, enhancing commuter experiences through tailored services. The AmI middleware environment operating system 102 senses traffic flow and vehicle performance through IoT infrastructure, syncing this data to the digital twin 802. Actuations, such as rerouting traffic, adjusting fleet schedules, or deploying maintenance crews, are informed by the digital twin 802 and carried out via the ambient machine to optimize transportation systems. Commuter profiles, including travel history and preferences, enable personalized services like adaptive route suggestions, targeted notifications, and dynamic pricing for enhanced user satisfaction.
[0175]By creating a unified framework for digital twins 802 across these domains, the AmI middleware environment operating system 102 adapts to the unique facilitation goals and KPIs of each environment. The intelligence of the AmI middleware environment operating system 102 integrates seamlessly with physical systems, enabling real-time monitoring, predictive analytics, and operational support. This ensures that the AmI middleware environment operating system 102 delivers sustainable, scalable solutions tailored to the evolving needs of diverse industries and ecosystems.
[0176]The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[0177]The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer-readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[0178]The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid-state memory, magnetic tape, a removable computer diskette, a random-access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD.
[0179]Input/output (I/O) devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the AmI middleware environment operating system 102 either directly or through intervening I/O controllers. Network adapters may also be coupled to the AmI middleware environment operating system 102 to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.
[0180]A representative hardware environment for practicing the embodiments may include a hardware configuration of an information handling/AmI middleware environment operating system 102 in accordance with the embodiments herein. The AmI middleware environment operating system 102 herein comprises at least one processor or central processing unit (CPU). The CPUs are interconnected via the system bus 202 to various devices including at least one of: a random-access memory (RAM), read-only memory (ROM), and an input/output (I/O) adapter. The I/O adapter can connect to peripheral devices, including at least one of: disk units and tape drives, or other program storage devices that are readable by the AmI middleware environment operating system 102. The AmI middleware environment operating system 102 can read the inventive instructions on the program storage devices and follow these instructions to execute the methodology of the embodiments herein.
[0181]The AmI middleware environment operating system 102 further includes a user interface adapter that connects a keyboard, mouse, speaker, microphone, and/or other user interface devices including a touch screen device (not shown) to the bus to gather user input. Additionally, a communication adapter connects the bus to a data processing network, and a display adapter connects the bus to a display device which may be embodied as an output device including at least one of: a monitor, printer, or transmitter, for example.
[0182]A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary, a variety of optional components are described to illustrate the wide variety of possible embodiments of the invention. When a single device or article is described herein, it will be apparent that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be apparent that a single device/article may be used in place of the more than one device or article, or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of the invention need not include the device itself.
[0183]The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open-ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[0184]Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that are issued on an application based here on. Accordingly, the embodiments of the present invention are intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
CLAIMS:I/WE CLAIM:
1. An ambient intelligence middleware environment operating method (900) for managing and optimizing one or more environments (302) to provide optimized experience to one or more end users, the ambient intelligence middleware environment operating method (900) comprising:
receiving (902), by one or more hardware processors (110), data associated with the one or more environments (302) using one or more sensors (702),
wherein the data associated with the one or more environments (302) comprise at least one of: spatial configuration data corresponding to location and layouts of one or more objects in the one or more environments (302), temporal data corresponding to one or more events that occurred in the one or more environments (302) and timings at which the one or more events occurred in the one or more environments (302), metadata corresponding to contextual information about the one or more environments (302), and visual data corresponding to one or more visual contents of the one or more objects in the one or more environments (302);
generating (904), by the one or more hardware processors (110), digital twins (802) comprising synthetic representation of the one or more environments (302) based on information from at least one of: the one or more environments (302), the one or more sensors (702), and one or more actuators (704);
storing (906), by the one or more hardware processors (110), at least one of: the data and the digital twins (802), associated with the one or more environments (302) in one or more environment databases;
processing (908), by the one or more hardware processors (110), the data from the one or more environment databases, to determine one or more optimal actions within the one or more environments (302) using an ambient machine with one or more artificial intelligence (AI) models, wherein the one or more optimal actions are determined upon analyzing one or more states of the one or more environments (302); and
providing (910), by the one or more hardware processors (110), one or more commands to the one or more actuators (704) to execute the one or more optimal actions, wherein the one or more optimal actions comprise at least one of: controlling one or more machinery, adjusting conditions, displaying information on one or more screens, triggering one or more alerts, recommending one or more suggestions to one or more stakeholders to validate, modify and approve one or more required actions, and one or more updates, in the one or more environments (302).
2. The ambient intelligence middleware environment operating method (900) as claimed in claim 1, wherein processing (908) the data from the one or more environment databases to determine the one or more optimal actions within the one or more environments (302) using the ambient machine with the one or more AI models, comprises:
obtaining, by the one or more hardware processors (110), the data associated with the one or more environments (302), from the one or more environment databases;
converting, by the one or more hardware processors (110), the data associated with the one or more environments (302), into one or more actionable representations;
synchronizing, by the one or more hardware processors (110), one or more data streams corresponding to the data for unified analysis; and
performing, by the one or more hardware processors (110), one or more processes comprising at least one of: extracting one or more insights, detecting one or more anomalies, identifying one or more patterns, and predicting future states of the one or more environments (302), to determine the one or more optimal actions within the one or more environments (302), based on the unified analysis of the data, using the ambient machine with the one or more AI models.
3. The ambient intelligence middleware environment operating method (900) as claimed in claim 1, further comprising training, by the one or more hardware processors (110), the one or more AI models to optimize performance of the ambient machine, by at least one of:
utilizing, by the one or more hardware processors (110), the one or more data streams received from the one or more sensors (702) to continuously update the one or more AI models and adapt the one or more AI models to one or more changes occurred in the one or more environments (302) using online and incremental learning techniques;
utilizing, by the one or more hardware processors (110), the digital twins (802) to generate synthetic data by simulating one or more scenarios and interventions, in the one or more environments (302), for training the one or more AI models;
combining, by the one or more hardware processors (110), multi-modal transformer models comprising at least one of: a vision transformer (ViT) model, a vision language model (VLM), and a small language model (SLM) to perform at least one of: one or more granular tasks, contextual analysis, and translating the one or more insights into the one or more optimal actions, in the one or more environments (302);
learning, by the one or more hardware processors (110), one or more domain specific tasks from one or more trainer AI models to perform the one or more optimal actions in the one or more environments (302), wherein the one or more trainer AI models are configured to automatically train one or more trainee AI models with one or more knowledge domains, for prioritizing object detection and tracking inventory managements; and
analyzing, by the one or more hardware processors (110), one or more feedback obtained from the one or more stakeholders for adapting the one or more ML models to update at least one of: objectives, constraints, and decision-making criteria, of the ambient machine.
4. The ambient intelligence middleware environment operating method (900) as claimed in claim 1, further comprising providing, by the one or more hardware processors (110), one or more human computer interfaces (HCI) for the one or more stakeholders to interact with an ambient intelligence middleware environment operating system (102) for one or more processes, wherein the one or more human computer interfaces are configured based on one or more roles of the one or more stakeholders.
5. The ambient intelligence middleware environment operating method (900) as claimed in claim 1, further comprising at least one of:
updating, by the one or more hardware processors (110), the one or more environment databases based on the data continuously received from the one or more sensors (702), which adapts the digital twins (802) to reflect current states of the one or more environments (302); and
updating, by the one or more hardware processors (110), the one or more environment databases with the one or more changes occurred in the one or more environments (302).
6. The ambient intelligence middleware environment operating method (900) as claimed in claim 1, further comprising analyzing, by the one or more hardware processors (110), the one or more states of the one or more environments (302), using the one or more sensors (702), wherein analyzing the one or more states of the one or more environments (302), comprises:
analyzing, by the one or more hardware processors (110), one or more user profiles within the one or more environments (302) using the one or more sensors (702) and signal-level interoperability;
modelling, by the one or more hardware processors (110), at least one of: noises, redundancies, outliers, and patterns, from the data received from the one or more sensors (702), using signal preprocessing techniques;
applying, by the one or more hardware processors (110), the one or more AI models on the data to analyze the one or more states of the one or more environments (302); and
providing, by the one or more hardware processors (110), the one or more commands to the one or more actuators (704) for executing the one or more optimal actions based on the analyzed one or more states of the one or more environments (302).
7. The ambient intelligence middleware environment operating method (900) as claimed in claim 6, further comprising at least one of:
determining, by the one or more hardware processors (110), an availability state of the one or more environments (302), comprising at least one of: shelves, counters, shelf-fullness, on-shelf availability, and out-of-stock items;
determining, by the one or more hardware processors (110), an assortment state of the one or more environments (302), based on at least one of: color, categories, and planograms;
determining, by the one or more hardware processors (110), an adjacency state of the one or more environments (302) and stock keeping units (SKU), based on one or more factors comprising at least one of: visual merchandising guidelines, campaigning, storyboarding, user engagements, volume of sales, and seasonal, regional, geographical factors; and
analyzing, by the one or more hardware processors (110), the stock keeping units (SKU) to extract one or more attributes comprising at least one of: designs, shapes, volumetric analysis, barcodes, QR codes, texts, logos, International Mobile Equipment Identity (IMEI), serial numbers and tags from at least one of: images, videos, and media contents, using a three dimensional AI and machine vision system utilizing artificial intelligence of things (AIOT) and liquid lens techniques.
8. The ambient intelligence middleware environment operating method (900) as claimed in claim 1, further comprising generating, by the one or more hardware processors (110), a directed acyclic graph (DAG) for configuring at least one of: a plurality of components, a plurality of subsystems (114), and information flow of the ambient machine utilizing environmental parameters, domain-specific tasks, and the one or more processes to manage inter and intra-subsystem functioning, executing ambient machines workflows, and providing adaptable infrastructure and intelligent support for the one or more stakeholders and the one or more environments (302).
9. The ambient intelligence middleware environment operating method (900) as claimed in claim 1, further comprising performing, by the one or more hardware processors (110), online user assessments and recognition using one or more biometric signatures comprising at least one of: face, fingerprint, voice, age, gender, gait, and visual cues comprising upper and lower body apparel color.
10. The ambient intelligence middleware environment operating method (900) as claimed in claim 9, further comprising determining, by the one or more hardware processors (110), one or more preferences of the one or more end users, in the one or more environments (302), by:
detecting, by the one or more hardware processors (110), the one or more end users in the one or more environments (302) through the one or more biometric signatures;
generating, by the one or more hardware processors (110), one or more tokenized identifiers to the one or more end users based on the detection of the one or more end users with the corresponding one or more biometric signatures;
tracking, by the one or more hardware processors (110), the one or more end users to generate a user journey in the one or more environments (302);
determining, by the one or more hardware processors (110), a positional probability of the one or more end users at one or more locations within the one or more environments (302) by incorporating multicamera tracking and multi-modal asynchronous fusion techniques;
determining, by the one or more hardware processors (110), user engagement with the one or more environments (302) based on the positional probability of the one or more end users at the one or more locations within the one or more environments (302); and
extracting, by the one or more hardware processors (110), one or more user engagement matrices comprising at least one of: dwell time and the one or more events associated with one or more activities performed by the one or more end users, using the computer vision and machine learning models.
11. The ambient intelligence middleware environment operating method (900) as claimed in claim 1, further comprising determining, by the one or more hardware processors (110), at least one of: one or more differences and one or more anomalies to provide the one or more alerts to the one or more stakeholders by matching a current state of the one or more environments (302) with respect to a previous state of the one or more environments (302), with one or more virtual machine based set rules.
12. The ambient intelligence middleware environment operating method (900) as claimed in claim 11, further comprising at least one of:
providing, by the one or more hardware processors (110), the one or more alerts to the one or more stakeholders upon determining the one or more anomalies in at least one of: live shelf status, product availability, and out-of-stock items, by matching the data with the one or more virtual machine based set rules;
providing, by the one or more hardware processors (110), the one or more alerts to the one or more stakeholders upon determining the one or more anomalies in at least one of: arrangement of products on the shelves based on color, category, planograms, pricing, promotions, offers, and visual merchandising guidelines, by matching the data with the one or more virtual machine based set rules; and
providing, by the one or more hardware processors (110), the one or more alerts to the one or more stakeholders upon determining the one or more anomalies in a placement of the products on the shelves based on one or more business rules, by matching the data with the one or more virtual machine based set rules.
13. The ambient intelligence middleware environment operating method (900) as claimed in claim 1, further comprising determining, by the one or more hardware processors (110), user profile deltas to update the one or more user profiles for providing the optimized experience to the one or more end users, wherein updating the one or more user profiles comprises at least one of:
updating, by the one or more hardware processors (110), the one or more tokenized identifiers of the one or more end users when the one or more end users are recognized in the one or more environments (302) with optimized confidence and variability;
updating, by the one or more hardware processors (110), journey of the one or more end users, in the one or more environments (302), with heatmaps, dwell-time, and flow-maps, in the user master;
updating, by the one or more hardware processors (110), the user engagement matrices comprising at least one of: the dwell time and the one or more events associated with one or more activities performed by the one or more end users in the one or more environments (302);
updating, by the one or more hardware processors (110), assortment of the shelves, counters and products based on at least one of: user engagement, visibility, and searchability, in the one or more environments (302);
updating, by the one or more hardware processors (110), adjacency of the brands and shelves, and arrangement and placement of the products for providing the optimized experience to the one or more end users;
updating, by the one or more hardware processors (110), at least one of: promotions and discounts, suitable for the products to optimize user attraction; and
updating, by the one or more hardware processors (110), at least one of: offers, cashback, and loyalty programs for the one or more end users.
14. An ambient intelligence middleware environment operating system (102) for managing and optimizing one or more environments (302) to provide optimized experience to one or more end users, the ambient intelligence middleware environment operating system (102) comprising:
one or more hardware processors (110);
a memory (112) coupled to the one or more hardware processors (110), wherein the memory (112) comprises a plurality of subsystems (114) in form of programmable instructions executable by the one or more hardware processors (110), and wherein the plurality of subsystems (114) comprises:
a data receiving subsystem (206) configured to receive data associated with the one or more environments (302) using one or more sensors (702),
wherein the data associated with the one or more environments (302) comprise at least one of: spatial configuration data corresponding to location and layouts of one or more objects in the one or more environments (302), temporal data corresponding to one or more events that occurred in the one or more environments (302) and timings at which the one or more events occurred in the one or more environments (302), metadata corresponding to contextual information about the one or more environments (302), and visual data corresponding to one or more visual contents of the one or more objects in the one or more environments (302);
a digital twin generating subsystem (208) configured to generate digital twins (802) comprising synthetic representation of the one or more environments (302) based on information from at least one of: the one or more environments (302), the one or more sensors (702), and one or more actuators (704);
a storage subsystem (210) configured to store at least one of: the data and the digital twins (802), associated with the one or more environments (302) in one or more environment databases;
an action determining subsystem (212) configured to process the data from the one or more environment databases, to determine one or more optimal actions within the one or more environments (302) using an ambient machine with one or more artificial intelligence (AI) models, wherein the one or more optimal actions are determined upon analyzing one or more events of the one or more environments (302); and
an action executing subsystem (214) configured to provide one or more commands to the one or more actuators (704) to execute the one or more optimal actions, wherein the one or more optimal actions comprise at least one of: controlling one or more machinery, adjusting conditions, displaying information on one or more screens, triggering one or more alerts, recommending one or more suggestions to one or more stakeholders to validate, modify and approve one or more required actions, and one or more updates, in the one or more environments (302).
15. The ambient intelligence middleware environment operating system (102) as claimed in claim 14, wherein in processing the data from the one or more environment databases to determine the one or more optimal actions within the one or more environments (302) using the ambient machine with the one or more AI models, the action determining subsystem (212) is configured to:
obtain the data associated with the one or more environments (302), from the one or more environment databases;
convert the data associated with the one or more environments (302) into one or more actionable representations;
synchronize one or more data streams corresponding to the data for unified analysis; and
perform one or more processes comprising at least one of: extracting one or more insights, detecting one or more anomalies, identifying one or more patterns, and predicting future states of the one or more environments (302), to determine the one or more optimal actions within the one or more environments (302), based on the unified analysis of the data, using the ambient machine with the one or more AI models.
16. The ambient intelligence middleware environment operating system (102) as claimed in claim 14, further comprising a training subsystem (216) configured to train the one or more AI models to optimize performance of the ambient machine, by at least one of:
utilizing the one or more data streams received from the one or more sensors (702) to continuously update the one or more AI models and adapt the one or more AI models to one or more changes occurred in the one or more environments (302) using online and incremental learning techniques;
utilizing the digital twins (802) to generate synthetic data by simulating one or more scenarios and interventions, in the one or more environments (302), for training the one or more AI models;
combining multi-modal transformer models comprising at least one of: a vision transformer (ViT) model, a vision language model (VLM), and a small language model (SLM) to perform at least one of: one or more granular tasks, contextual analysis, and translating the one or more insights into the one or more optimal actions, in the one or more environments (302);
learning one or more domain specific tasks from one or more trainer AI models to perform the one or more optimal actions in the one or more environments (302), wherein the one or more trainer AI models are configured to automatically train one or more trainee AI models with one or more knowledge domains, for prioritizing object detection and tracking inventory managements; and
analyzing one or more feedback obtained from the one or more stakeholders for adapting the one or more ML models to update at least one of: objectives, constraints, and decision-making criteria, of the ambient machine.
17. The ambient intelligence middleware environment operating system (102) as claimed in claim 14, further comprising an interface configuration subsystem (218) configured to provide one or more human computer interfaces (HCI) for the one or more stakeholders to interact with the ambient intelligence middleware environment operating system (102) for one or more processes, wherein the one or more human computer interfaces are configured based on one or more roles of the one or more stakeholders.
18. The ambient intelligence middleware environment operating system (102) as claimed in claim 14, further comprising a database updating subsystem (220) configured to at least one of:
update the one or more environment databases based on the data continuously received from the one or more sensors (702), which adapts the digital twins (802) to reflect current states of the one or more environments (302); and
update the one or more environment databases with the one or more changes occurred in the one or more environments (302).
19. The ambient intelligence middleware environment operating system (102) as claimed in claim 14, further comprising an environment state analyzing subsystem (222) configured to analyze the one or more states of the one or more environments (302), using the one or more sensors (702), wherein in analyzing the one or more states of the one or more environments (302), the environment state analyzing subsystem (222) is configured to:
analyze one or more user profiles within the one or more environments (302) using the one or more sensors (702) and signal-level interoperability;
model at least one of: noises, redundancies, outliers, and patterns, from the data received from the one or more sensors (702), using signal preprocessing techniques;
apply one or more AI models on the data to analyze the one or more states of the one or more environments (302); and
provide the one or more commands to the one or more actuators (704) for executing the one or more optimal actions based on the analyzed one or more states of the one or more environments (302).
20. The ambient intelligence middleware environment operating system (102) as claimed in claim 19, wherein the environment state analyzing subsystem (222) is configured to:
determine an availability state of the one or more environments (302), comprising at least one of: shelves, counters, shelf-fullness, on-shelf availability, and out-of-stock items;
determine an assortment state of the one or more environments (302), based on at least one of: color, categories, and planograms;
determine an adjacency state of the one or more environments (302) and stock keeping units (SKU), based on one or more factors comprising at least one of: visual merchandising guidelines, campaigning, storyboarding, user engagements, volume of sales, and seasonal, regional, geographical factors; and
analyze the stock keeping units (SKU) to extract one or more attributes comprising at least one of: designs, shapes, volumetric analysis, barcodes, QR codes, texts, logos, International Mobile Equipment Identity (IMEI), serial numbers, and tags, from at least one of: images, videos, and media contents, using a three-dimensional AI and machine vision system utilizing artificial intelligence of things (AIoT) and liquid lens techniques.
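As a non-limiting illustration of the availability state of claim 20, the sketch below derives shelf-fullness and out-of-stock items from per-SKU facing counts. The detection inputs and the 20% facing threshold are assumptions, not claimed values.

```python
# Illustrative sketch only: computing shelf-fullness and out-of-stock SKUs
# (the availability state of claim 20) from facing counts. Inputs and the
# out-of-stock threshold are hypothetical.
from typing import Dict, List, Tuple


def availability_state(
    detected_facings: Dict[str, int],      # SKU -> facings seen on the shelf
    planogram_facings: Dict[str, int],     # SKU -> facings expected by the planogram
    oos_threshold: float = 0.2,
) -> Tuple[float, List[str]]:
    """Return overall shelf-fullness (0..1) and the list of out-of-stock SKUs."""
    expected = sum(planogram_facings.values()) or 1
    seen = sum(min(detected_facings.get(sku, 0), n) for sku, n in planogram_facings.items())
    fullness = seen / expected
    out_of_stock = [
        sku for sku, n in planogram_facings.items()
        if detected_facings.get(sku, 0) < oos_threshold * n
    ]
    return fullness, out_of_stock


if __name__ == "__main__":
    print(availability_state({"SKU-1": 4, "SKU-2": 0}, {"SKU-1": 6, "SKU-2": 5}))
```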
21. The ambient intelligence middleware environment operating system (102) as claimed in claim 14, further comprising a graph generating subsystem (224) configured to generate a directed acyclic graph (DAG) for configuring at least one of: a plurality of components, the plurality of subsystems (114), and information flow of the ambient machine, utilizing environmental parameters, domain-specific tasks, and the one or more processes to manage inter- and intra-subsystem functioning, executing ambient machine workflows, and providing adaptable infrastructure and intelligent support for the one or more stakeholders and the one or more environments (302).
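A non-limiting sketch of the DAG-based workflow of claim 21 follows, expressing inter-subsystem information flow as a dependency graph and executing it in topological order with Python's standard graphlib. The subsystem names used as nodes are hypothetical labels.

```python
# Illustrative sketch only: a directed acyclic graph of subsystem dependencies
# for the graph generating subsystem (224) of claim 21, executed in dependency
# order. Node names are hypothetical labels, not claimed subsystem identifiers.
from graphlib import TopologicalSorter

# node -> set of predecessor subsystems whose outputs it consumes
AMBIENT_MACHINE_DAG = {
    "state_analysis":    {"sensor_ingest"},
    "anomaly_detection": {"state_analysis"},
    "preference_update": {"state_analysis"},
    "action_planning":   {"anomaly_detection", "preference_update"},
    "actuator_dispatch": {"action_planning"},
}


def execute_workflow() -> None:
    """Run each subsystem stage only after all of its predecessors have run."""
    for subsystem in TopologicalSorter(AMBIENT_MACHINE_DAG).static_order():
        # In a real deployment each node would invoke the corresponding subsystem.
        print(f"running {subsystem}")


if __name__ == "__main__":
    execute_workflow()
```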
22. The ambient intelligence middleware environment operating system (102) as claimed in claim 14, further comprising a user preference determining subsystem (226) configured to perform online user assessments and recognition using one or more biometric signatures comprising at least one of: face, fingerprint, voice, age, gender, gait, and visual cues comprising upper and lower body apparel color.
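By way of a non-limiting illustration of claim 22, the sketch below fuses per-cue match scores for the listed biometric and visual cues into a single recognition decision; the weights and the acceptance threshold are assumptions, not claimed values.

```python
# Illustrative sketch only: weighted late fusion of biometric/visual cue scores
# for the user preference determining subsystem (226) of claim 22. Weights and
# the 0.7 threshold are hypothetical.
from typing import Dict

CUE_WEIGHTS: Dict[str, float] = {
    "face": 0.5, "voice": 0.2, "gait": 0.2, "apparel_color": 0.1,
}


def fuse_cues(scores: Dict[str, float], threshold: float = 0.7) -> bool:
    """Combine per-cue match scores in [0, 1]; missing cues simply contribute nothing."""
    total_weight = sum(CUE_WEIGHTS[c] for c in scores if c in CUE_WEIGHTS) or 1.0
    fused = sum(CUE_WEIGHTS[c] * s for c, s in scores.items() if c in CUE_WEIGHTS) / total_weight
    return fused >= threshold


if __name__ == "__main__":
    print(fuse_cues({"face": 0.9, "apparel_color": 0.6}))
```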
23. The ambient intelligence middleware environment operating system (102) as claimed in claim 22, wherein the user preference determining subsystem (226) is further configured to determine one or more preferences of the one or more end users, in the one or more environments (302), by:
detecting the one or more end users in the one or more environments (302) through the one or more biometric signatures;
generating one or more tokenized identifiers for the one or more end users based on the detection of the one or more end users with the corresponding one or more biometric signatures;
tracking the one or more end users to generate a user journey in the one or more environments (302);
determining a positional probability of the one or more end users at one or more locations within the one or more environments (302) by incorporating multi-camera tracking and multi-modal asynchronous fusion techniques;
determining user engagement with the one or more environments (302) based on the positional probability of the one or more end users at the one or more locations within the one or more environments (302); and
extracting one or more user engagement metrics comprising at least one of: dwell time and the one or more events associated with one or more activities performed by the one or more end users, using the computer vision and machine learning models.
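The following non-limiting sketch illustrates two steps of claim 23: issuing a tokenized identifier for a detected end user and accumulating dwell time per zone to form a simple journey. The salted-hash tokenization, zone labels, and timestamps are assumptions made for illustration.

```python
# Illustrative sketch only: tokenized identifiers and per-zone dwell time for
# the user journey of claim 23. The salted hash is one assumed tokenization
# scheme; the specification does not prescribe it.
import hashlib
from collections import defaultdict
from typing import Dict, List, Tuple

SALT = b"deployment-specific-salt"          # assumed per-deployment secret


def tokenize(biometric_signature: bytes) -> str:
    """Derive an opaque token so raw biometric data need not be stored directly."""
    return hashlib.sha256(SALT + biometric_signature).hexdigest()[:16]


def dwell_times(journey: List[Tuple[str, float]]) -> Dict[str, float]:
    """journey: ordered (zone, timestamp) sightings -> seconds spent per zone."""
    totals: Dict[str, float] = defaultdict(float)
    for (zone, t0), (_, t1) in zip(journey, journey[1:]):
        totals[zone] += t1 - t0
    return dict(totals)


if __name__ == "__main__":
    token = tokenize(b"face-embedding-bytes")
    print(token, dwell_times([("entrance", 0.0), ("aisle-3", 30.0), ("checkout", 210.0)]))
```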
24. The ambient intelligence middleware environment operating system (102) as claimed in claim 14, further comprising an anomaly determining subsystem (228) configured to determine at least one of: one or more differences and one or more anomalies to provide the one or more alerts to the one or more stakeholders, by comparing a current state of the one or more environments (302) with a previous state of the one or more environments (302), using one or more virtual machine based set rules.
25. The ambient intelligence middleware environment operating system (102) as claimed in claim 24, wherein the anomaly determining subsystem (228) is configured to:
provide the one or more alerts to the one or more stakeholders upon determining the one or more anomalies in at least one of: live shelf status, product availability, and out-of-stock items, by matching the data with the one or more virtual machine based set rules;
provide the one or more alerts to the one or more stakeholders upon determining the one or more anomalies in at least one of: arrangement of products on the shelves based on color, category, planograms, pricing, promotions, offers, and visual merchandising guidelines, by matching the data with the one or more virtual machine based set rules; and
provide the one or more alerts to the one or more stakeholders upon determining the one or more anomalies in a placement of the products on the shelves based on one or more business rules, by matching the data with the one or more virtual machine based set rules.
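As a non-limiting illustration of the rule-driven alerting of claims 24 and 25, the sketch below evaluates a small rule set against the current shelf state and returns an alert for every violated rule; the rule definitions, field names, and alert texts are assumptions for illustration only.

```python
# Illustrative sketch only: evaluating a hypothetical rule set against the
# current environment state, in the spirit of the anomaly determining
# subsystem (228) of claims 24-25.
from typing import Callable, Dict, List

Rule = Callable[[Dict], str]   # returns an alert message, or "" if the rule passes

RULES: List[Rule] = [
    lambda s: "out-of-stock items on shelf" if s.get("out_of_stock") else "",
    lambda s: "shelf fullness below 60%" if s.get("fullness", 1.0) < 0.6 else "",
    lambda s: "planogram mismatch" if s.get("planogram_rev") != s.get("expected_planogram_rev") else "",
]


def evaluate(current_state: Dict) -> List[str]:
    """Return an alert for every rule the current state violates."""
    return [msg for rule in RULES if (msg := rule(current_state))]


if __name__ == "__main__":
    print(evaluate({"out_of_stock": ["SKU-2"], "fullness": 0.4,
                    "planogram_rev": "rev-16", "expected_planogram_rev": "rev-17"}))
```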
26. The ambient intelligence middleware environment operating system (102) as claimed in claim 14, further comprising an updating subsystem (230) configured to determine user profile deltas to update the one or more user profiles for providing the optimized experience to the one or more end users, wherein in updating the one or more user profiles, the updating subsystem (230) is configured to:
update the one or more tokenized identifiers of the one or more end users when the one or more end users are recognized in the one or more environments (302) with optimized confidence and variability;
update the journey of the one or more end users in the one or more environments (302) with heatmaps, dwell-time, and flow-maps, in the user master;
update the user engagement metrics comprising at least one of: the dwell time and the one or more events associated with one or more activities performed by the one or more end users in the one or more environments (302);
update assortment of the shelves, counters and products based on at least one of: user engagement, visibility, and searchability, in the one or more environments (302);
update adjacency of the brands and shelves, and arrangement and placement of the products for providing the optimized experience to the one or more end users;
update at least one of: promotions and discounts, suitable for the products to optimize user attraction; and
update at least one of: offers, cashback, and loyalty programs for the one or more end users.
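A non-limiting sketch of the user-profile delta update of claim 26 follows: only fields whose observed value differs from the stored profile are computed as a delta and merged into the user master. The profile field names and the merge policy (newer observations win) are assumptions made for illustration.

```python
# Illustrative sketch only: computing a user-profile delta and folding it into
# the user master, per the updating subsystem (230) of claim 26. Field names
# and merge policy are hypothetical.
from typing import Any, Dict


def profile_delta(stored: Dict[str, Any], observed: Dict[str, Any]) -> Dict[str, Any]:
    """Return only the fields whose observed value differs from the stored profile."""
    return {k: v for k, v in observed.items() if stored.get(k) != v}


def apply_delta(user_master: Dict[str, Dict[str, Any]], token: str, delta: Dict[str, Any]) -> None:
    """Merge the delta for one tokenized identifier into the user master."""
    user_master.setdefault(token, {}).update(delta)


if __name__ == "__main__":
    master = {"ab12": {"dwell_time_s": 180, "preferred_category": "beverages"}}
    delta = profile_delta(master["ab12"], {"dwell_time_s": 240, "preferred_category": "beverages"})
    apply_delta(master, "ab12", delta)
    print(delta, master)
```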
Dated this 27th day of February, 2025
Vidya Bhaskar Singh Nandiyal
Patent Agent (IN/PA-2912)
Agent for applicant
| # | Name | Date |
|---|---|---|
| 1 | 202421019100-STATEMENT OF UNDERTAKING (FORM 3) [15-03-2024(online)].pdf | 2024-03-15 |
| 2 | 202421019100-PROVISIONAL SPECIFICATION [15-03-2024(online)].pdf | 2024-03-15 |
| 3 | 202421019100-FORM FOR SMALL ENTITY(FORM-28) [15-03-2024(online)].pdf | 2024-03-15 |
| 4 | 202421019100-FORM FOR SMALL ENTITY [15-03-2024(online)].pdf | 2024-03-15 |
| 5 | 202421019100-FORM 1 [15-03-2024(online)].pdf | 2024-03-15 |
| 6 | 202421019100-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [15-03-2024(online)].pdf | 2024-03-15 |
| 7 | 202421019100-EVIDENCE FOR REGISTRATION UNDER SSI [15-03-2024(online)].pdf | 2024-03-15 |
| 8 | 202421019100-DRAWINGS [15-03-2024(online)].pdf | 2024-03-15 |
| 9 | 202421019100-FORM-26 [04-09-2024(online)].pdf | 2024-09-04 |
| 10 | 202421019100-Proof of Right [09-09-2024(online)].pdf | 2024-09-09 |
| 11 | 202421019100-FORM-5 [27-02-2025(online)].pdf | 2025-02-27 |
| 12 | 202421019100-FORM-26 [27-02-2025(online)].pdf | 2025-02-27 |
| 13 | 202421019100-FORM FOR SMALL ENTITY [27-02-2025(online)].pdf | 2025-02-27 |
| 14 | 202421019100-EVIDENCE FOR REGISTRATION UNDER SSI [27-02-2025(online)].pdf | 2025-02-27 |
| 15 | 202421019100-DRAWING [27-02-2025(online)].pdf | 2025-02-27 |
| 16 | 202421019100-CORRESPONDENCE-OTHERS [27-02-2025(online)].pdf | 2025-02-27 |
| 17 | 202421019100-COMPLETE SPECIFICATION [27-02-2025(online)].pdf | 2025-02-27 |
| 18 | 202421019100-MSME CERTIFICATE [03-04-2025(online)].pdf | 2025-04-03 |
| 19 | 202421019100-FORM28 [03-04-2025(online)].pdf | 2025-04-03 |
| 20 | 202421019100-FORM-9 [03-04-2025(online)].pdf | 2025-04-03 |
| 21 | 202421019100-FORM 18A [03-04-2025(online)].pdf | 2025-04-03 |
| 22 | Abstract.jpg | 2025-04-11 |
| 23 | 202421019100-FER.pdf | 2025-07-07 |
| 24 | 202421019100-OTHERS [04-11-2025(online)].pdf | 2025-11-04 |
| 25 | 202421019100-FER_SER_REPLY [04-11-2025(online)].pdf | 2025-11-04 |
| 26 | 202421019100-CLAIMS [04-11-2025(online)].pdf | 2025-11-04 |
| 1 | 202421019100_SearchStrategyNew_E_SearchHistory(22)E_21-05-2025.pdf | |