Abstract: Disclosed herein is a method (400) for monitoring and managing one or more dynamic assets in an Internet of Things (IoT) environment. The method (400) comprises aggregating (402) real-time data from one or more sensors at inter-sensor and intra-sensor levels and real-time data from one or more actuators, in response to a user entering the IoT environment. The method (400) comprises determining (404) context aware data corresponding to the real-time data based on a spatial context, a temporal context and a causal context, and establishing (406) cross-correlation of the context aware data at each modality and sub-modality level. The method (400) comprises determining (408) a modification data associated with the IoT environment based on the cross-correlation data and transmitting (410) the modification data to the one or more actuators for modification in the IoT environment, thereby ensuring seamless engagement with the user, precise information dissemination, and improved operational control.
Description:FIELD OF THE INVENTION
[001] The present disclosure generally relates to an Internet of Things (IoT) environment, and in particular, to a method and a system for monitoring and managing dynamic assets in an IoT environment.
BACKGROUND
[002] With the growth of technology, the market has witnessed an unprecedented surge in the availability and diversity of smart devices, catering to a wide spectrum of needs and preferences of a user. These smart devices facilitate seamless interaction and collaboration among themselves, thus forming an Internet of Things (IoT) environment. In the IoT environment, the interconnectedness of smart devices has transformed the way smart devices are perceived and interact, and such environments are now widely used in organizations as well as households.
[003] The proliferation of IoT-enabled environments has transformed operations in domains such as retail, warehousing, and industrial facilities. These environments rely on diverse sensors, cameras, and actuators to collect real-time data. However, traditional systems remain limited to passive data collection and predefined actions, lacking the adaptability and intelligence needed for complex, dynamic environments.
[004] Dynamic assets, including customers, staff, workers, and machines (e.g., forklifts, robotic arms), require systems that integrate and analyze multi-modal data in real time. For instance, retail environments demand insights into customer profiles, preferences, and behaviours to enhance personalized experiences and operational efficiency. Warehouses require systems capable of ensuring compliance, monitoring task execution, and optimizing safety and throughput. Existing systems, however, are fragmented, fail to create cohesive digital twins of physical spaces, and lack advanced AI-driven capabilities such as behavioural analysis and predictive modelling.
[005] Moreover, traditional solutions struggle with secure data handling, often transmitting raw video streams and sensor data to centralized servers, raising significant privacy and compliance concerns. They lack robust mechanisms to encode sensitive information or localize processing, exposing environments to vulnerabilities.
[006] Thus, with the increase in commercialization and market competition, it has become essential to identify the gaps in the operation of any organization, predict the possible and potential changes that may help mitigate the gaps and effectively manage the environment in the organization.
[007] In general, most organizations nowadays employ a variety of systems or devices integrated with IoT environment which are configured to control, monitor, and manage equipment in or around a building or building area of the organization. However, such systems may perform discrete operations which are programmed actions.
[008] None of the existing solutions has cognitive access to the systems or devices, nor is any able to intelligently assist with changing situational data in order to more effectively control the building environment in sync with the requirements of the organization.
[009] In view of the foregoing disadvantages, there is a compelling need for an innovative solution that can address the shortcomings of traditional systems and meet the evolving challenges of the organization.
SUMMARY
[010] This summary is provided to introduce a selection of concepts in a simplified format that are further described in the detailed description of the invention. This summary is not intended to identify essential inventive concepts of the invention, nor is it intended to determine the scope of the invention.
[011] According to an embodiment of the present disclosure, a method for monitoring and managing dynamic assets in an Internet of Things (IoT) environment is disclosed. The method comprises aggregating real-time data from one or more sensors at inter-sensor and intra-sensor levels and real-time data from one or more actuators, in response to a user entering the IoT environment. Further, the method comprises determining context aware data corresponding to the real-time data based on a spatial context, a temporal context and a causal context. Furthermore, the method comprises establishing cross-correlation of the context aware data at each modality and sub-modality level. The method comprises determining a modification data associated with the IoT environment based on the cross-correlation data. The modification data indicates operational changes in the one or more actuators. Furthermore, the method comprises transmitting the modification data to the one or more actuators for modification in the IoT environment, thereby ensuring seamless engagement with the user, precise information dissemination, and improved operational control.
[012] According to another embodiment, a system for monitoring and managing dynamic assets in an Internet of Things (IoT) environment is disclosed. The system comprises a sensing module to aggregate real-time data from one or more sensors at inter-sensor and intra-sensor levels and real-time data from one or more actuators, in response to a user entering the IoT environment. Further, the system comprises a thinking module to determine context aware data corresponding to the real-time data based on a spatial context, a temporal context and a causal context. Furthermore, the system comprises an Artificial Intelligence (AI) based interoperability module to establish cross-correlation of the context aware data at each modality and sub-modality level. The AI based interoperability module further determines a modification data associated with the IoT environment based on the cross-correlation data. The modification data indicates operational changes in the one or more actuators. Furthermore, the system comprises an instruction module to transmit the modification data to the one or more actuators for modification in the IoT environment, thereby ensuring seamless engagement with the user, precise information dissemination, and improved operational control.
[013] According to another embodiment, the present invention discloses a system and method for real-time monitoring, analysis, and dynamic management of IoT-enabled environments through an Ambient Machine—an intelligent, context-aware system designed to transform physical spaces into adaptive, interactive ecosystems. By seamlessly integrating sensory data from diverse sources and employing cutting-edge AI models, the Ambient Machine offers advanced capabilities to monitor, analyze, and manage dynamic assets such as humans (e.g., customers, staff, workers) and machines (e.g., robotic arms, forklifts) within retail stores, warehouses, and industrial facilities.
[014] According to another embodiment, the Ambient Machine of the present invention is a hybrid computational architecture designed to convert physical environments into digital twins by onboarding IoT sensors, including CCTV cameras, RFID tags, and BLE beacons. Through spatial mapping and sensor fusion, the ambient machine creates an enriched digital representation of the environment, complete with configurable zones, regions of interest, and key performance indicators (KPIs). This digital twin enables precise tracking of dynamic assets, contextual understanding of interactions, and detailed profiling of users, including identification, preferences, and behavioural patterns.
[015] According to another embodiment, the system of the present invention employs a multi-tiered AI framework. At the edge, lightweight AI models perform real-time object detection, tracking, and encoding, ensuring low-latency, high-efficiency data processing. Advanced Vision Transformers (ViTs) and Vision-Language Transformers (ViLTs) operate on local servers to extract deeper insights from encoded metadata, including customer engagement, gaze analysis, and interaction patterns. In the cloud, Large Language Models (LLMs) and multi-modal AI frameworks provide high-level reasoning, natural language query handling, and comprehensive decision-making. This hierarchical architecture ensures scalability, real-time responsiveness, and computational efficiency.
[016] The following are the key functionalities of the Ambient Machine:
● User Profiling and Tracking: The system identifies and tracks customers, staff, and workers, associating profiles with behavioural data to enable personalized interactions and optimize resource allocation.
● Customer Engagement Analysis: The system monitors dwell times, gaze patterns, product interactions, and navigation paths to derive insights into customer preferences and behaviours, facilitating personalized advertising and recommendations.
● Operational Monitoring: The system ensures compliance with standard operating procedures (SOPs), tracks staff availability and productivity, and detects anomalies such as theft or non-compliance in real time.
● Actionable Actuation: Leveraging its advanced insights, the system triggers precise actions through integrated actuators, such as robotic controls, retail media displays, access points, and automated alerts.
[017] Further, the core innovation of the present invention lies in its privacy-centric design. The raw video streams are processed locally on edge devices, where data is encoded into spatio-temporal metadata and delta-compressed video clips. This approach minimizes data transmission to the cloud, ensuring that only encoded and anonymized insights are used for high-level processing, safeguarding sensitive information.
[018] In the present invention, the Ambient Machine’s transformative impact is evident across various use cases. In retail environments, the Ambient Machine enhances customer engagement and operational efficiency by analyzing customer journeys and facilitating personalized experiences. In warehouses, the Ambient Machine optimizes workflows, ensures safety, and monitors compliance. The Ambient Machine’s ability to adapt dynamically to changing environments makes it a versatile solution for IoT-enabled ecosystems.
[019] In the present invention, by integrating IoT technologies, AI-driven analytics, and real-time actuation, the present invention bridges the gap between static monitoring systems and intelligent, context-aware management platforms. The present invention provides an innovative approach to managing dynamic assets in complex environments, ensuring enhanced safety, efficiency, and user satisfaction.
[020] To further clarify the advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting its scope. The invention will be described and explained with additional specificity and detail in the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[021] These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
[022] Figure 1 illustrates an exemplary environment for implementing a system for monitoring and managing dynamic assets in an Internet of Things (IoT) environment, according to an embodiment of the present invention;
[023] Figure 2 illustrates a schematic architecture of the system for monitoring and managing dynamic assets in the IoT environment, according to an embodiment of the present invention;
[024] Figure 3 illustrates a schematic block diagram of the operational flow of the modules of the system for monitoring and managing dynamic assets in the IoT environment, according to an embodiment of the present invention;
[025] Figure 4 illustrates a method for monitoring and managing dynamic assets in the IoT environment, in accordance with an exemplary embodiment of the present disclosure;
[026] Figures 5a-5b illustrate an exemplary representation of cross-camera interoperability and spatial mapping of the various zones of the IoT environment by the system, in accordance with an exemplary embodiment of the present disclosure;
[027] Figure 6a illustrates the system architecture for monitoring and managing dynamic assets in the IoT environment, in accordance with an exemplary embodiment of the present disclosure;
[028] Figures 6b-6c illustrate the system architecture integrating edge computing with closed circuit television cameras to deliver real-time intelligent surveillance, in accordance with an exemplary embodiment of the present disclosure;
[029] Figure 7 illustrates an exemplary dashboard illustrating a summary of the journey of the users entering the IoT environment, in accordance with an exemplary embodiment of the present disclosure;
[030] Figure 8A illustrates a use-case of an ambient environment depicting user’s journey inside the IoT environment, in accordance with an exemplary embodiment of the present disclosure;
[031] Figure 8B illustrates a use-case of an ambient environment depicting a live activity map on the floor inside the IoT environment, in accordance with an exemplary embodiment of the present disclosure; and
[032] Figure 8C illustrates a use-case of an ambient environment depicting a live heat-map of all the users on the floor inside the IoT environment, in accordance with an exemplary embodiment of the present disclosure.
[033] Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not have necessarily been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help to improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
DETAILED DESCRIPTION
[034] It should be understood at the outset that although illustrative implementations of the embodiments of the present disclosure are illustrated below, the present invention may be implemented using any number of techniques, whether currently known or in existence. The present disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary design and implementation illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
[035] The term “some” as used herein is defined as “none, or one, or more than one, or all.” Accordingly, the terms “none,” “one,” “more than one,” “more than one, but not all” or “all” would all fall under the definition of “some.” The term “some embodiments” may refer to no embodiments, to one embodiment or to several embodiments or to all embodiments. Accordingly, the term “some embodiments” is defined as meaning “no embodiment, or one embodiment, or more than one embodiment, or all embodiments.”
[036] The terminology and structure employed herein is for describing, teaching, and illuminating some embodiments and their specific features and elements and does not limit, restrict, or reduce the spirit and scope of the claims or their equivalents.
[037] More specifically, any terms used herein such as but not limited to “includes,” “comprises,” “has,” “consists,” and grammatical variants thereof do NOT specify an exact limitation or restriction and certainly do NOT exclude the possible addition of one or more features or elements, unless otherwise stated, and furthermore must NOT be taken to exclude the possible removal of one or more of the listed features and elements, unless otherwise stated with the limiting language “MUST comprise” or “NEEDS TO include.”
[038] Whether or not a certain feature or element was limited to being used only once, either way, it may still be referred to as “one or more features” or “one or more elements” or “at least one feature” or “at least one element.” Furthermore, the use of the terms “one or more” or “at least one” feature or element does NOT preclude there being none of that feature or element, unless otherwise specified by limiting language such as “there NEEDS to be one or more . . .” or “one or more element is REQUIRED.”
[039] Hereinafter, it is understood that terms including “unit” or “module” at the end may refer to the unit for processing at least one function or operation and may be implemented in hardware, software, or a combination of hardware and software.
[040] Unless otherwise defined, all terms, and especially any technical and/or scientific terms, used herein may be taken to have the same meaning as commonly understood by one having ordinary skill in the art.
[041] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted to not unnecessarily obscure the embodiments herein. Also, the various embodiments described herein are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments. The term “or” as used herein, refers to a non-exclusive or unless otherwise indicated. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein can be practiced and to further enable those skilled in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[042] As is traditional in the field, embodiments may be described and illustrated in terms of blocks that carry out a described function or functions. These blocks, which may be referred to herein as units or modules or the like, are physically implemented by analog or digital circuits such as logic gates, integrated circuits, microprocessors, microcontrollers, memory circuits, passive electronic components, active electronic components, optical components, hardwired circuits, or the like, and may optionally be driven by firmware and software. The circuits may, for example, be embodied in one or more semiconductor chips, or on substrate supports such as printed circuit boards and the like. The circuits constituting a block may be implemented by dedicated hardware, by a processor (e.g., one or more programmed microprocessors and associated circuitry), or by a combination of dedicated hardware to perform some functions of the block and a processor to perform other functions of the block. Each block of the embodiments may be physically separated into two or more interacting and discrete blocks without departing from the scope of the invention. Likewise, the blocks of the embodiments may be physically combined into more complex blocks without departing from the scope of the invention.
[043] The accompanying drawings are used to help easily understand various technical features and it should be understood that the embodiments presented herein are not limited by the accompanying drawings. As such, the present disclosure should be construed to extend to any alterations, equivalents, and substitutes in addition to those which are particularly set out in the accompanying drawings. Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are generally only used to distinguish one element from another.
[044] An object of the present disclosure is to provide an improved technique to overcome the above-described limitations associated with the IoT environment.
[045] The above-mentioned objective is achieved by providing a system and a method for monitoring and managing dynamic assets in the IoT environment. More particularly, the present invention integrates with a wide range of sensors and actuators, including closed circuit television (CCTV) cameras, thermal cameras, IoT sensors, and various actuators, ensuring robust connectivity and real-time data aggregation. The present invention determines context aware data by incorporating spatial, temporal, and causal contexts at both intra-sensor and inter-sensor levels, and facilitates intelligent decision-making and environmental awareness.
[046] The present invention supports interoperability across diverse sensor types through a cross-correlation technique, enabling seamless interaction and data fusion while adhering to privacy standards.
[047] The present invention may be applicable in various organizations. For example, when the present invention is applied to any store, it provides deep intelligence information that is on par with e-commerce analytics, allowing for more effective product placement, strategic marketing, and resource allocation within the store. The present invention provides a personalized in-store experience, by dynamically responding to user behaviour and preferences, leading to increased conversion rates and revenue growth. Furthermore, the present invention provides operational analytics capabilities to optimize store management by reducing losses, improving staffing efficiency, and enhancing overall operational flow.
[048] The present invention relates to intelligent IoT-enabled systems and methods designed for the comprehensive monitoring, analysis, and management of dynamic assets, such as humans and machines, in physical environments including retail stores, warehouses, and industrial facilities.
[049] Specifically, the invention relates to an advanced, context-aware system, referred to as the Ambient Machine, that integrates multi-tiered artificial intelligence (AI), including vision transformers, video language models, and small language models, to process data from IoT sensors, actuators, and video streams. By transforming physical spaces into digital environments and generating actionable insights based on user profiles, preferences, and behaviors, the invention facilitates real-time decision-making, personalized interventions, and environmental modifications. The system emphasizes data privacy and security, employing spatio-temporal metadata encoding and edge processing for efficient and secure operations while addressing diverse use cases such as user profiling, customer journey mapping, staff productivity tracking, compliance monitoring, and anomaly detection.
[050] Embodiments of the present invention will be described below in detail with reference to the accompanying drawings.
[051] The detailed methodology of the disclosure is explained in the following paragraphs.
[052] Figure 1 illustrates an exemplary environment 100 for implementing a system 106 for monitoring and managing dynamic assets in an Internet of Things (IoT) environment, according to an embodiment of the present invention.
[053] Referring to Figure 1, a plurality of IoT devices 102a-102n is depicted within an IoT environment 100. In an example, the IoT environment 100 may refer to an interconnected ecosystem of the plurality of IoT devices 102a, 102b, 102c, …, 102n (interchangeably referred to hereinafter as “IoT devices 102”), equipped with sensors, software, and network connectivity, allowing the plurality of IoT devices 102 to collect and exchange data related to the activity performed by a user 104. In the IoT environment 100, the plurality of IoT devices 102 may communicate with each other and with a server or a cloud platform (not shown) via the Internet or other communication networks 108, thereby enabling a seamless exchange of information and the automation of various tasks and processes.
[054] In a non-limiting example, the plurality of IoT devices 102a-102n may include CCTV cameras, thermal cameras, liquid lenses, temperature sensors, pressure sensors, electric sensors, and sound sensors. In the example, the plurality of IoT devices 102 may be connected to the internet or other communication networks, such as Wi-Fi, cellular networks, Bluetooth, LoRa, or Zigbee. This connectivity allows the plurality of IoT devices 102a-102n to transmit and receive data related to the activity performed by the user 104, with each other or with the server. In another example, the plurality of IoT devices 102 may be interconnected using a device local network, for instance, Zigbee, thus forming a swarm network. In the swarm network of the plurality of IoT devices 102, each of the plurality of IoT devices 102 may function as an autonomous unit, capable of making local decisions and interacting with neighboring IoT devices. The plurality of IoT devices 102 forming the swarm network may collectively achieve objectives or tasks that might be challenging for a single IoT device 102 to accomplish on its own.
[055] The network interface 108 may be configured to provide network connectivity and enable communication with paired devices such as the system 106. The network connectivity may be provided via a wireless connection or a wired connection. For example, the network connectivity may be provided via cellular technology, such as the 3rd Generation (3G), 4th Generation (4G), 5th Generation (5G), pre-5G, 6th Generation (6G), or any other wireless communication technology such as Bluetooth.
[056] The network interface 108 enables communication between the IoT devices and the system 106 based on the activity of the user 104 within the IoT environment. The system 106 aggregates real-time data from one or more sensors at inter-sensor and intra-sensor levels and real-time data from one or more actuators, in response to a user entering the IoT environment 100. The system 106 determines context aware data corresponding to the real-time data based on a spatial context, a temporal context and a causal context. The system 106 establishes cross-correlation of the context aware data at each modality and sub-modality level. The system 106 determines a modification data associated with the IoT environment 100 based on the cross-correlation data. The system 106 transmits the modification data to the one or more actuators for modification in the IoT environment 100, thereby ensuring seamless engagement with the user, precise information dissemination, and improved operational control.
[057] Further, the system 106 extracts meta-information from the real-time data using one or more AI models, Vision-transformers and Vision-language transformers to determine the context aware data. The system 106 generates an alert when the meta-information detects any one or more of the incidents such as unauthorized access, suspicious behaviour, and safety violations by the user within the IoT environment 100.
[058] Figure 2 illustrates a schematic architecture of the system 106 for monitoring and managing dynamic assets in the IoT environment monitoring and managing dynamic assets in the IoT environment, according to an embodiment of the present invention.
[059] The system 106 monitors and manages dynamic assets in the IoT environment. The system 106 may include a processor 202 which is communicatively coupled to a memory 204, one or more modules 206, and a data unit 208.
[060] In an example, the processor 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor 202 may be configured to fetch and execute computer-readable instructions and data stored in the memory 204. The processor 202 may be a general-purpose processor, such as a central processing unit (CPU), an application processor (AP), or the like, or an AI-dedicated processor such as a neural processing unit (NPU). The processor 202 may control the processing of input data in accordance with a predefined operating rule or artificial intelligence (AI) model stored in the non-volatile memory and the volatile memory, i.e., the memory 204. The predefined operating rule or artificial intelligence model is provided through training or learning. Further, the processor 202 may be operatively coupled to the memory 204 and the I/O interface. The processor 202 may be configured to process, execute, or perform a plurality of operations described herein.
[061] In an example, the memory 204 may include any non-transitory computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), and/or non-volatile memory, such as read-only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. The memory 204 is communicatively coupled with the processor 202 to store processing instructions for completing the process. Further, the memory 204 may include an operating system for performing one or more tasks of the system, as performed by a generic operating system in a computing domain. The memory 204 is operable to store instructions executable by the processor 202.
[062] In some embodiments, the one or more modules 206 may include a set of instructions that can be executed to cause the system 106 to perform any one or more of the methods disclosed. The system 106 may operate as a standalone device or may be connected, e.g., using a network, to other computer systems or peripheral devices. Further, while a single system 106 is illustrated, the term “system” shall also be taken to include any collection of systems or sub-systems that individually or jointly execute a set, or multiple sets, of instructions to perform one or more computer functions.
[063] In an embodiment, the module(s) 206 may be implemented using one or more artificial intelligence (AI) modules that may include a plurality of neural network layers. Examples of neural networks include, but are not limited to, Convolutional Neural Network (CNN), Deep Neural Network (DNN), Recurrent Neural Network (RNN), and Restricted Boltzmann Machine (RBM). Further, ‘learning’ may be referred to in the disclosure as a method for training a predetermined target device (for example, a robot) using a plurality of learning data to cause, allow, or control the target device to make a determination or prediction. Examples of learning techniques include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning. At least one of a plurality of CNN, DNN, RNN, RBM models and the like may be implemented to thereby achieve execution of the present subject matter’s mechanism through an AI model. A function associated with an AI module may be performed through the non-volatile memory, the volatile memory, and the processor, as elaborated in the following paragraphs.
[064] The processor may include one or a plurality of processors. The one or a plurality of processors may be a general-purpose processor, such as a central processing unit (CPU), an application processor (AP), or the like, a graphics-only processing unit such as a graphics processing unit (GPU), a visual processing unit (VPU), and/or an AI-dedicated processor such as a neural processing unit (NPU).
[065] The one or a plurality of processors control the processing of the input data in accordance with a predefined operating rule or artificial intelligence (AI) model stored in the non-volatile memory and the volatile memory. The predefined operating rule or artificial intelligence model is provided through training or learning.
[066] Here, being provided through learning means that, by applying a learning technique to a plurality of learning data, a predefined operating rule or AI model of a desired characteristic is made. The learning may be performed in a device itself in which AI according to an embodiment is performed, and/or may be implemented through a separate server/system.
[067] The AI model may consist of a plurality of neural network layers. Each layer has a plurality of weight values and performs a layer operation through calculation of a previous layer and an operation of a plurality of weights. Examples of neural networks include, but are not limited to, convolutional neural network (CNN), deep neural network (DNN), recurrent neural network (RNN), restricted Boltzmann Machine (RBM), deep belief network (DBN), bidirectional recurrent deep neural network (BRDNN), generative adversarial networks (GAN), and deep Q-networks.
[068] The learning technique is a method for training a predetermined target device (for example, a robot) using a plurality of learning data to cause, allow, or control the target device to make a determination or prediction. Examples of learning techniques include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning.
[069] In some embodiments, the data unit 208 serves, amongst other things, as a repository for storing data processed, received, and generated by one or more of the modules 206.
[070] According to embodiments of the present disclosure, the system 106 may include one or more modules 206, such as a sensing module 210, a thinking module 212, an Artificial Intelligence (AI) based interoperability module 214, an instruction module 216, a digital twin module 218 and a data storage module 220. The sensing module 210, the thinking module 212, the AI based interoperability module 214, the instruction module 216, the digital twin module 218 and the data storage module 220 are communicably coupled with each other.
[071] In an embodiment, the sensing module 210 may be configured to aggregate real-time data from one or more sensors 102 at inter-sensor and intra-sensor levels and real-time data from one or more actuators, in response to the user 104 entering the IoT environment 100. The one or more sensors 102 include at least one of a closed-circuit television (CCTV) camera, a thermal camera, liquid lenses, and IoT sensors such as temperature, pressure, electric, and sound sensors. The real-time data includes a plurality of media in any format from multiple cameras.
[072] In an embodiment, the thinking module 212 may be configured to determine context aware data corresponding to the real-time data based on a spatial context, a temporal context and a causal context. The context aware data indicates operational monitoring metrics, sensor-based monitoring data, object and event tracking data, differences in sensor data transmission at pre-defined intervals and operating performance, event prediction, detection, and response.
[073] In an embodiment, spatial contexts of the environment include visual regions of interest (ROIs), designated zones, planes, ground floors, and multi-layered structures, modelled using sensor configurations. The sensor configuration comprises parameters such as sensor placement (x, y, z coordinates), orientation (azimuth, elevation, roll angles), detection range, resolution, and quality metrics. These configurations are mapped onto 2-dimensional or 3-dimensional layouts of the environment in various supported formats such as Portable Network Graphics (PNG), Computer-Aided Design (CAD), OBJ, or other equivalents by the digital twin module 218. The mappings on the layouts delineate specific ROIs and zones that the sensors are responsible for monitoring. Furthermore, the system 106 supports modelling and simulation of multi-sensor and cross-sensor tracking paths to create environment-specific attention mechanisms for spatial context awareness. Probabilistic weights and sensor-specific confidence values are applied to the paths to simulate coverage areas, attention regions, and zones of heightened sensitivity. This allows for enhanced accuracy and efficiency in sensor-based monitoring and object/event tracking in real-world environments to model spatial context awareness.
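By way of a non-limiting illustration, the following Python sketch shows one possible encoding of such a sensor configuration and a probabilistic coverage weight derived from it. The names (SensorConfig, coverage_weight) and the simple distance-attenuation rule are assumptions for illustration only, not the disclosed implementation.

    import math
    from dataclasses import dataclass

    @dataclass
    class SensorConfig:
        sensor_id: str
        x: float
        y: float
        z: float                  # placement coordinates
        azimuth: float            # orientation in the floor plane, degrees
        fov: float                # horizontal field of view, degrees
        detection_range: float    # metres
        confidence: float = 1.0   # sensor-specific confidence value

    def coverage_weight(s: SensorConfig, px: float, py: float) -> float:
        """Probabilistic coverage weight of floor point (px, py); 0 if uncovered."""
        dx, dy = px - s.x, py - s.y
        dist = math.hypot(dx, dy)
        if dist > s.detection_range:
            return 0.0
        bearing = math.degrees(math.atan2(dy, dx)) % 360
        delta = min(abs(bearing - s.azimuth), 360 - abs(bearing - s.azimuth))
        if delta > s.fov / 2:
            return 0.0
        # weight attenuates with distance, approximating zones of heightened sensitivity
        return s.confidence * (1.0 - dist / s.detection_range)

    cam = SensorConfig("cctv-01", x=0, y=0, z=3, azimuth=45, fov=90, detection_range=12)
    print(coverage_weight(cam, 4.0, 4.0))   # point inside the sector -> positive weight

Summing such weights across sensors yields the simulated coverage and attention regions referred to above.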
[074] In an embodiment, the present invention discloses handling the temporal synchronization of asynchronous sensor data from a plurality of sensors, such as CCTV cameras, Radio Frequency Identification (RFID) readers, Bluetooth Low Energy (BLE) beacons, and IoT devices. The data generated by the sensors is indexed with unique timestamps to ensure temporal alignment and consistency across different sensing modalities. This time-indexing mechanism ensures that even when sensors operate on different timescales (e.g., CCTV cameras operating at 30 Frames Per Second (FPS), RFID readers scanning intermittently), their data are temporally synchronized for coherent analysis. The temporal awareness is further enhanced by calculating compute cycles per unit time for each sensor. For example, in the case of CCTV cameras, FPS requirements are taken into account, whereas BLE beacons and RFID systems account for transmission intervals and operating performance (e.g., in TOPS). The system 106 accounts for differences in sensor performance and data generation rates, ensuring that sensor data is efficiently and accurately integrated into a unified temporal framework.
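A minimal sketch of such a time-indexing mechanism is given below, assuming a simple fixed-width bucketing scheme; the function name and the 100 ms bucket size are illustrative, not prescribed by the disclosure.

    from collections import defaultdict

    def synchronize(readings, bucket_ms=100):
        """readings: iterable of (sensor_id, timestamp_ms, payload) tuples.
        Returns {bucket_start_ms: {sensor_id: [payload, ...]}}, so that a camera
        frame and an intermittent RFID scan land in the same time bucket."""
        timeline = defaultdict(lambda: defaultdict(list))
        for sensor_id, ts_ms, payload in readings:
            bucket = (ts_ms // bucket_ms) * bucket_ms
            timeline[bucket][sensor_id].append(payload)
        return timeline

    mixed = [
        ("cctv-01", 1000033, "frame-31"),    # ~30 FPS camera
        ("rfid-07", 1000090, "tag-A read"),  # intermittent scan
        ("ble-02",  1000410, "rssi=-61"),    # beacon advertisement
    ]
    aligned = synchronize(mixed)
    print(sorted(aligned))   # buckets 1000000 and 1000400 hold co-indexed data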
[075] The present invention provides causal context awareness by leveraging advanced query expectation maximization techniques for the prediction and identification of events based on the causal relationships established between multisensory data streams. The present invention supports dynamically adjusting system queries based on sensor data to maximize the probability of extracting the most relevant and accurate event information. For example, in response to a security query, the system automatically adjusts to prioritize relevant data from CCTV, RFID, and BLE sensors based on the expected causality of events (e.g., motion detection, object displacement, or entry-exit data).
[076] Further, the system 106 optimizes the relationship between query complexity and the corresponding computational pipeline. Complex queries that involve multiple sensor modalities and require deep causal analysis are automatically mapped to more sophisticated computational pathways to process the data efficiently. The system 106 dynamically configures event rule sets and thresholds for specific types of queries. This ensures that the pipeline complexity scales with the computational and causal demands of the query, providing a highly efficient solution for event prediction, detection, and response.
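Purely as a hedged sketch, the mapping from query complexity to a computational pathway may be illustrated by the following heuristic; the tier names and thresholds are illustrative assumptions rather than part of the disclosure.

    def select_pipeline(modalities: set, needs_causal_analysis: bool) -> str:
        """More modalities and deeper causal demands map to heavier pipelines."""
        if needs_causal_analysis and len(modalities) > 1:
            return "cloud-multimodal"   # deep causal analysis across sensor types
        if len(modalities) > 1:
            return "local-server"       # cross-sensor correlation only
        return "edge"                   # single modality, low latency

    print(select_pipeline({"cctv", "rfid"}, needs_causal_analysis=True))
    print(select_pipeline({"cctv"}, needs_causal_analysis=False))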
[077] Further, the thinking module 212 may be configured to extract meta-information from the real-time data using one or more AI models, Vision-transformers and Vision-language transformers to determine the context aware data. The thinking module 212 may be configured to generate an alert when the meta-information detects any one or more of the incidents such as unauthorized access, suspicious behaviour, and safety violations by the user within the IoT environment. The meta-information includes one or more information related to object detection, behaviour analysis of the user, pattern recognition of the user, information about activities and interactions of the user within the IoT environment, tracking the user, tracking products across different views and security threats.
[078] The system 106 allows for event rule set configuration based on sensor data thresholds and relationships between different types of sensor data streams. For example, an alert is triggered if a CCTV camera detects motion in a secured area while an RFID reader simultaneously records a tagged item moving out of range. This causal inference provides a robust system for real-time decision-making based on sensor data.
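A minimal sketch of such an event rule is given below, assuming an illustrative event structure and a 2-second co-occurrence window, neither of which is prescribed by the disclosure.

    WINDOW_MS = 2000   # co-occurrence window, an illustrative threshold

    def check_theft_rule(events):
        """events: list of dicts with 'type', 'ts_ms' and rule-specific fields.
        Fires when secured-zone motion and an RFID out-of-range event co-occur."""
        motions = [e for e in events
                   if e["type"] == "motion" and e.get("zone") == "secured"]
        exits = [e for e in events if e["type"] == "rfid_out_of_range"]
        for m in motions:
            for x in exits:
                if abs(m["ts_ms"] - x["ts_ms"]) <= WINDOW_MS:
                    return {"alert": "possible-theft",
                            "tag": x.get("tag"), "ts_ms": m["ts_ms"]}
        return None

    print(check_theft_rule([
        {"type": "motion", "zone": "secured", "ts_ms": 10000},
        {"type": "rfid_out_of_range", "tag": "item-42", "ts_ms": 11200},
    ]))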
[079] In an embodiment, the present invention facilitates AI/Machine Learning (ML) models specifically designed to operate in low-compute resource environments such as mobile devices while maintaining high speed and fidelity. The models are optimized for mobile and tiny deployments, and enable advanced functionalities such as background subtraction, motion detection, optical flow analysis, monocular depth estimation, zero-shot detection, and classification.
[080] In an embodiment, the data storage module 220 of the present invention transforms time-series and unstructured video data into structured meta-data through the conversion of video streams into key-value pairs. This meta-data, represented in a low-dimensional embedding space, is stored efficiently in local persistent storage. This approach to miniaturized AI/ML model deployment leverages the reduced computational complexity of models tailored for edge devices, enabling real-time processing of sensor data without relying on cloud-based resources. The models utilize lightweight neural architectures, quantization techniques, and optimized inference pipelines to deliver accurate and fast processing of video data. These models function in constrained environments such as IoT devices, security cameras, and mobile systems where high-performance computing power may not be available.
[081] The converted meta-data, stored as key-value pairs in a low-dimensional space, represents the essential features extracted from the video or time-series data. These features might include object trajectories, motion vectors, detected entities, depth information, and classifications, all of which are compressed into efficient embeddings. The persistent local storage of the meta-data allows for subsequent retrieval, querying, and analysis without needing access to the full original video stream. The present invention reduces storage and computational demands while maintaining the fidelity and relevance of the extracted information.
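For illustration only, the following sketch stores frame-level meta-data as key-value pairs; sqlite is used here purely as an example of local persistent storage, and the schema and field names are assumptions.

    import json
    import sqlite3

    db = sqlite3.connect(":memory:")   # stands in for local persistent storage
    db.execute("CREATE TABLE IF NOT EXISTS meta (key TEXT PRIMARY KEY, value TEXT)")

    def store_frame_metadata(camera_id, ts_ms, detections, embedding):
        """Persist essential features (entities, boxes, a low-dimensional
        embedding) as a key-value pair instead of the raw frame."""
        key = f"{camera_id}:{ts_ms}"
        value = json.dumps({"detections": detections, "embedding": embedding})
        db.execute("INSERT OR REPLACE INTO meta VALUES (?, ?)", (key, value))
        db.commit()

    store_frame_metadata("cctv-01", 1000033,
                         detections=[{"label": "person", "bbox": [10, 20, 80, 200]}],
                         embedding=[0.12, -0.53, 0.07, 0.91])
    print(db.execute("SELECT key FROM meta").fetchall())

Because only such compact records are retained, later queries can be answered without access to the full original video stream, as described above.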
[082] The present invention also uses teacher-student versions of fine-tuned, contextualized Vision Transformer (ViT) and Vision-and-Language Transformer (ViLT) models to interpret and analyze video-derived meta-data. The present invention enables the conversion of basic video meta-data into enriched contextual-meta-data, which serves as the foundation for constructing a vectorized database. The vectorized database is specifically designed to provide deeper insights into contextual behaviours and interactions, allowing for enhanced analysis, querying, and prediction within complex environments.
[083] The ViT and ViLT models are fine-tuned to operate efficiently on resource-constrained devices while preserving their ability to perform high-quality contextual inference. ViT models are used to analyze the visual aspects of the meta-data, such as object interactions, movements, and visual patterns, while ViLT models integrate both visual and language information to generate richer context. The models are trained and optimized to handle multi-modal inputs, enabling them to process visual data alongside supplementary data, such as sensor readings or annotations, resulting in a more comprehensive understanding of the observed environment.
[084] The contextual-meta-data generated by the Vision Transformers is vectorized and stored within a specialized vectorized database. The database retains information about individual entities, and their interactions and also encodes the underlying contextual relationships, such as temporal sequences, spatial dynamics, and causality between events. This allows the system 106 to derive deeper insights into the nature of the interactions, patterns of behavior, and predictive trends based on the multi-modal data sources.
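A hedged sketch of querying such a vectorized database by cosine similarity is given below; the in-memory list stands in for the specialised store, and all record names and vectors are illustrative.

    import math

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    vector_db = [   # (contextual record, embedding) pairs
        ("customer lingered at aisle 3", [0.9, 0.1, 0.0]),
        ("forklift entered zone B",      [0.0, 0.8, 0.6]),
    ]

    def query(context_vector, top_k=1):
        """Return the stored contextual records most similar to the query vector."""
        ranked = sorted(vector_db,
                        key=lambda kv: cosine(kv[1], context_vector), reverse=True)
        return ranked[:top_k]

    print(query([0.85, 0.15, 0.05]))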
[085] In an embodiment, the present invention uses advanced Large Language Models (LLMs) and multi-agent frameworks of Small Language Models (multi-SLMs) within an agentic system and pipeline, enabling real-time retrieval and generation of responses to natural language queries concerning the environment, business operations, and key performance indicators (KPIs). The system 106 combines the powerful generative and understanding capabilities of LLMs with the efficiency and specificity of multi-SLMs to deliver accurate and context-aware responses on demand. The LLMs employed are fine-tuned to understand a wide range of queries related to complex environments and business contexts. These models process large amounts of natural language data, extracting relevant information and generating comprehensive responses. Multi-SLMs, on the other hand, are specialized for handling domain-specific tasks and are optimized for speed and resource efficiency. The system 106 dynamically selects between LLMs and multi-SLMs based on query complexity, context, and the nature of the information requested, ensuring an optimal balance between computational load and response quality.
[086] Further, the agentic system enables seamless interaction between the user and the pipeline, allowing for continuous and adaptive processing of queries. Users may pose natural language queries pertaining to any aspect of the monitored environment, operational procedures, or KPIs, and the agentic system retrieves or generates appropriate responses by drawing on real-time data streams, historical logs, and predictive models. The agentic system's ability to integrate LLMs and multi-SLMs provides a high level of responsiveness and accuracy, even in dynamic and fast-changing environments.
[087] The system 106 facilitates real-time responsiveness, the integration of advanced LLMs and multi-SLMs within a unified pipeline, and the ability to contextualize and adapt responses based on the specific requirements of the query. This capability ensures that businesses and other entities obtain actionable insights and detailed answers to complex queries efficiently, enhancing decision-making, monitoring, and operational effectiveness.
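By way of a non-limiting illustration, the dynamic selection between LLMs and multi-SLMs may be sketched as a simple router; the agent names, keywords, and the 15-word threshold are assumptions for illustration only.

    def route_query(query: str) -> str:
        """Short, domain-keyed queries go to a specialised SLM agent; open-ended
        or multi-domain queries go to the general LLM pathway."""
        domain_agents = {"footfall": "slm-retail",
                         "compliance": "slm-safety",
                         "inventory": "slm-warehouse"}
        hits = [agent for kw, agent in domain_agents.items() if kw in query.lower()]
        if len(hits) == 1 and len(query.split()) < 15:
            return hits[0]          # fast, resource-efficient pathway
        return "llm-general"        # deep-reasoning pathway

    print(route_query("What was yesterday's footfall in zone A?"))        # slm-retail
    print(route_query("Summarize anomalies across all zones last month")) # llm-general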
[088] In an embodiment, the AI based interoperability module 214 may be configured to establish cross-correlation of the context aware data at each modality and sub-modality level. The modality level indicates visual senses, auditory senses, kinesthetic senses, gustatory senses and olfactory senses, and the sub-modality level indicates the specific quality of thoughts and feelings of each sense within the modality level. The AI based interoperability module 214 may be configured to determine a modification data associated with the IoT environment based on the cross-correlation data. The modification data indicates operational changes in the one or more actuators. The AI based interoperability module 214 may be configured to enable deep insights on the engagement and behavioural analysis of the one or more dynamic assets based on methodologies such as edge-level detection, network-level tracking, and cloud-level models.
[089] The present invention incorporates sensor-level, signal-level, and data-level interoperability across a diverse range of sensors, including but not limited to CCTV cameras, RFID readers, BLE beacons, and other IoT devices, through the implementation of the cross-correlation technique. The cross-correlation methodology establishes links between different sensor types by embedding spatio-temporal-causal contexts at each modality and its transformed sub-modalities. The present invention introduces adaptive fusion mechanisms to operate on the sensor data while maintaining compliance with privacy standards and regulations. At the sensor level, interoperability is achieved by aligning sensor outputs within the spatio-temporal causal context framework.
[090] For instance, the data from CCTV cameras is correlated with RFID readings and BLE signals, with each data stream being assigned specific weights and probabilities based on its spatial, temporal, and causal relevance. The signal-level interoperability is enhanced through dynamic signal processing techniques that adapt to varying signal characteristics such as resolution, range, and noise levels. The data-level interoperability enables seamless integration of multi-modal data streams, creating a cohesive understanding of the environment. The invention further incorporates adaptive multi-modal fusion to adjust based on environmental conditions, sensor availability, and privacy constraints. The adaptive fusion mechanism is designed to be robust in handling incomplete or noisy data while ensuring that sensitive information is protected in compliance with privacy regulations. By embedding spatio-temporal-causal contexts into each modality and sub-modality, the invention provides a comprehensive and privacy-conscious approach to multi-sensor data fusion, resulting in highly accurate and contextually aware sensor networks.
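A minimal sketch of such weighted, privacy-aware fusion is given below, assuming illustrative modality weights and treating unavailable or privacy-suppressed streams as zero-weight; none of the values is prescribed by the disclosure.

    def fuse(observations):
        """observations: list of (modality, value, weight) triples, where weight
        encodes spatio-temporal-causal relevance. Returns a weighted confidence
        in [0, 1] that is robust to missing or suppressed streams."""
        usable = [(v, w) for _, v, w in observations if w > 0]
        if not usable:
            return 0.0
        total_w = sum(w for _, w in usable)
        return sum(v * w for v, w in usable) / total_w

    confidence = fuse([
        ("cctv", 0.9, 0.5),   # strong visual match, high spatial relevance
        ("rfid", 1.0, 0.3),   # tag read confirms identity
        ("ble",  0.0, 0.0),   # beacon offline or privacy-suppressed -> excluded
    ])
    print(confidence)         # 0.9375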
[091] In an embodiment, the instruction module 216 may be configured to transmit the modification data to the one or more actuators for modification in the IoT environment, thereby ensuring seamless engagement with the user, precise information dissemination, and improved operational control.
[092] In an embodiment, the digital twin module 218 may be configured to map configuration of the one or more sensors and one or more regions of interest (ROIs) to provide a real-time visualization of the one or more dynamic assets.
In an embodiment, the data storage module 220 may be configured to store the real-time data and the context aware data in a compressed form and enable retrieval of the real-time data and the context aware data by querying.
[093] The present invention provides actionable insights and operational instructions to relevant users through an intuitive and interactive interface, ensuring effective decision-making and operational efficiency across diverse environments. The interface offers real-time displays of key performance indicators (KPIs), operational data, and environmental metrics directly sourced from the underlying vectorized database, enabling a comprehensive and user-friendly overview of system performance and contextual behaviours.
[094] The interface facilitates decision-making by allowing users to access, interpret, and interact with multi-modal sensor data through customizable dashboards. The dashboards support tailored reports and real-time alerts for specific security, operational, and environmental insights, empowering users to configure the system according to their role-specific needs. The customization of data presentation and user access rights ensures that the system can be adapted to various operational contexts, enhancing both flexibility and scalability.
[095] Figure 3 illustrates a schematic block diagram of the operational flow of the modules 206 of the system 106 for monitoring and managing dynamic assets in the IoT environment, according to an embodiment of the present invention.
[096] Initially, the system 106 is configured to receive real-time data from one or more sensors 102 and from one or more actuators as soon as a user 104 enters the IoT environment 100.
[097] At operation 301, the sensing module 210 is configured to aggregate real-time data from one or more sensors 102 at inter-sensor and intra-sensor levels and real-time data from one or more actuators, in response to a user 104 entering the IoT environment 100. The one or more sensors 102 include at least one of a CCTV camera, a thermal camera, liquid lenses, and IoT sensors such as temperature, pressure, electric, and sound sensors. The real-time data includes the plurality of media in any format from multiple cameras.
[098] At operation 302, the thinking module 212 is configured to determine context aware data corresponding to the real-time data based on a spatial context, a temporal context and a causal context. The context aware data indicates operational monitoring metrics, sensor-based monitoring data, object and event tracking data, differences in sensor data transmission at pre-defined intervals and operating performance, event prediction, detection, and response.
[099] At operation 303, the AI based interoperability module 214 is configured to establish cross-correlation of the context aware data at each modality and sub-modality level. The modality level indicates visual senses, auditory senses, kinesthetic senses, gustatory senses and olfactory senses and sub-modality level indicates the specific quality of thoughts and feelings of each sense within the modality level.
[0100] In an embodiment, at operation 304a, the AI based interoperability module 214 is configured to determine a modification data associated with the IoT environment based on the cross-correlation data. The modification data indicates operational changes in the one or more actuators.
[0101] At operation 305a, the instruction module 216 is configured to transmit the modification data to the one or more actuators for modification in the IoT environment, thereby ensuring seamless engagement with the user, precise information dissemination, and improved operational control.
[0102] In another embodiment, at operation 304b, the thinking module 212 is configured to extract meta-information from the real-time data using one or more AI models, Vision-transformers and Vision-language transformers to determine the context aware data.
[0103] At operation 305b, the thinking module 212 is configured to generate an alert when the meta-information detects any one or more of the incidents such as unauthorized access, suspicious behaviour, and safety violations by the user within the IoT environment. The meta-information includes one or more of information related to object detection, behaviour analysis of the user, pattern recognition of the user, information about activities and interactions of the user within the IoT environment, tracking the user, tracking products across different views and security threats.
[0104] The present invention incorporates advanced query capabilities, allowing users to perform custom and natural language queries. These queries are processed by specialized LLM (Large Language Model) agents and multi-SLM (Small Language Model) agents, which retrieve and synthesize relevant data from the system’s knowledge graph and vector data structures. The system translates natural language input into structured queries, delivering precise and contextually relevant answers in real time, thus deepening the insights available to users. Furthermore, the invention includes an automated multi-channel communication system that delivers actionable notifications and detailed action plans to stakeholders, as illustrated in the sketch below. Real-time alerts are sent through multiple platforms, including SMS, WhatsApp, and email, ensuring prompt notifications of critical events or anomalies. These alerts are accompanied by comprehensive action plans that guide users through step-by-step resolutions, ensuring that anomalies are efficiently addressed. By integrating the interactive interface with advanced query capabilities, real-time alerting, and actionable insights, the invention provides a seamless and effective framework for operational control, decision-making, and situational awareness across various environments and industries.
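As a non-limiting sketch of this query-and-notification flow, the following code stands in for the LLM/SLM translation step and stubs the channel senders; all names and the keyword heuristics are illustrative assumptions, not the disclosed agents.

    def to_structured_query(text: str) -> dict:
        """Toy translation step standing in for the LLM/SLM agents."""
        q = {"metric": None, "zone": None}
        if "dwell" in text.lower():
            q["metric"] = "dwell_time"
        if "zone" in text.lower():
            q["zone"] = text.lower().split("zone")[-1].strip().split()[0]
        return q

    def notify(message: str, channels=("sms", "whatsapp", "email")):
        """Fan an alert out across the configured channels (senders stubbed)."""
        for ch in channels:
            print(f"[{ch}] {message}")

    print(to_structured_query("Average dwell time in zone B today"))
    notify("Anomaly detected in zone B; action plan attached")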
[0105] Figure 4 illustrates a method 400 for monitoring and modifying the IoT environment 100 using AI, in accordance with an exemplary embodiment of the present disclosure.
[0106] At step 402, the method 400 comprises aggregating real-time data from one or more sensors at inter-sensor and intra sensor level and real-time data from one or more actuators, in response to a user entering the IoT environment. The one or more sensors 102 include at least one of CCTV cameras, thermal cameras, liquid lenses, and IoT sensors such as temperature, pressure, electric, and sound sensors. The real-time data includes a plurality of media in any format from multiple cameras.
[0107] In an embodiment, at step 404, the method 400 comprises determining context aware data corresponding to the real-time data based on a spatial context, a temporal context and a causal context. The context aware data indicates operational monitoring metrics, sensor-based monitoring data, object and event tracking data, differences in sensor data transmission at pre-defined intervals and operating performance, event prediction, detection, and response.
[0108] At step 406, the method 400 comprises establishing cross-correlation of the context aware data at each modality and sub-modality level. The modality level indicates visual senses, auditory senses, kinesthetic senses, gustatory senses and olfactory senses and the sub-modality level indicates specific quality of thoughts and feelings of each sense within the modality level.
[0109] At step 408, the method 400 comprises determining a modification data associated with the IoT environment based on the cross-correlation data. The modification data indicates operational changes in the one or more actuators.
[0110] At step 410, the method 400 comprises transmitting the modification data to the one or more actuators for modification in the IoT environment, thereby ensuring seamless engagement with the user, precise information dissemination, and improved operational control.
[0111] In an embodiment, the method further comprises extracting meta-information from the real-time data using one or more AI models, Vision-transformers and Vision-language transformers to determine the context aware data. The method comprises generating an alert when the meta-information indicates any one or more incidents such as unauthorized access, suspicious behaviour, and safety violations by the user within the IoT environment. The meta-information includes one or more of information related to object detection, behaviour analysis of the user, pattern recognition of the user, information about activities and interactions of the user within the IoT environment, tracking of the user, tracking of products across different views, and security threats.
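The alert-generation step of this embodiment may be sketched, under assumptions, as a rule-based scan of meta-information records for the named incident classes; the record field names are illustrative only.

```python
# Minimal rule-based sketch of alert generation from meta-information.
INCIDENT_CLASSES = {"unauthorized_access", "suspicious_behaviour", "safety_violation"}

def generate_alerts(meta_information: list[dict]) -> list[dict]:
    alerts = []
    for record in meta_information:
        incidents = INCIDENT_CLASSES & set(record.get("detected_events", []))
        for incident in incidents:
            alerts.append({
                "incident": incident,
                "camera_id": record.get("camera_id"),
                "timestamp": record.get("timestamp"),
                "subject_track_id": record.get("track_id"),
            })
    return alerts

# Example meta-information record as might be produced by the vision models.
meta = [{"camera_id": "cam-07", "timestamp": "2024-12-23T10:15:02Z",
         "track_id": 42, "detected_events": ["safety_violation"]}]
print(generate_alerts(meta))
```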
[0112] In an embodiment, the method further comprises mapping configuration of the one or more sensors and one or more regions of interest (ROIs) to provide a real-time visualization of the one or more dynamic assets. The method also comprises storing the real-time data and the context aware data in a compressed form and enabling retrieval of the real-time data and the context aware data by querying.
[0113] In an embodiment, the method further comprises enabling deep insights on the one or more dynamic assets engagement and behavioural analysis based on methodologies such as edge-level detection, network-level tracking, and cloud-level models.
[0114] Figures 5A-5B illustrate an exemplary representation of cross-camera interoperability and spatial mapping of the various zones of the IoT environment 100 by the system 106, in accordance with an exemplary embodiment of the present disclosure.
[0115] The present invention provides a comprehensive AI-IoT system 106 to be implemented in ambient environments. The system 106 connects seamlessly to a diverse array of existing sensors and actuators, including CCTV cameras, thermal cameras, liquid lenses, IoT sensors such as temperature, pressure, electric, and sound sensors, as well as various actuators. The system integrates with these sensors and actuators in ambient environments, ensuring robust connectivity and compatibility across a wide range of devices. The system 106 supports multiple video streaming formats, including RTSP, HTTP, and HLS, as well as industry-standard protocols such as ONVIF and WebRTC, enabling efficient integration with modern IP cameras and advanced camera systems.
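For illustration, ingestion of one of the supported stream formats (RTSP) might resemble the following minimal sketch using OpenCV; the camera URL is a placeholder, and ONVIF/WebRTC onboarding would use separate client libraries not shown here.

```python
# Minimal sketch of ingesting an RTSP stream with OpenCV.
import cv2  # pip install opencv-python

def read_rtsp_frames(url: str, max_frames: int = 100):
    cap = cv2.VideoCapture(url)          # also accepts HTTP/HLS URLs
    if not cap.isOpened():
        raise ConnectionError(f"cannot open stream: {url}")
    try:
        for _ in range(max_frames):
            ok, frame = cap.read()       # one BGR ndarray per frame
            if not ok:
                break
            yield frame
    finally:
        cap.release()

for frame in read_rtsp_frames("rtsp://camera.local:554/stream1"):
    pass  # hand each frame to the analytics/thinking pipeline
```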
[0116] The system 106 provides a comprehensive connectivity with a variety of sensors and actuators through communication protocols such as UART, MIPI, TCP/UDP/IP, socket protocols, I2C, and RS485 thereby enabling seamless interaction with temperature, pressure, electric, and sound sensors, as well as other environmental monitoring devices.
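Similarly, a reading over one of the listed sensor protocols (I2C) might be sketched as follows; the bus number, device address, and register encoding are illustrative assumptions that vary by sensor part.

```python
# Hedged sketch: reading a raw temperature register over I2C.
from smbus2 import SMBus  # pip install smbus2

I2C_BUS = 1          # e.g., /dev/i2c-1 on many embedded boards (assumption)
SENSOR_ADDR = 0x48   # hypothetical temperature sensor address
TEMP_REGISTER = 0x00

def read_temperature_c() -> float:
    with SMBus(I2C_BUS) as bus:
        raw = bus.read_i2c_block_data(SENSOR_ADDR, TEMP_REGISTER, 2)
        # Assumes a 12-bit reading in the upper bits at 0.0625 C/LSB,
        # a common encoding for simple digital temperature sensors.
        value = ((raw[0] << 8) | raw[1]) >> 4
        if value & 0x800:            # sign-extend negative readings
            value -= 1 << 12
        return value * 0.0625
```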
[0117] Figures 5A-5B depict the visual field of view of the system 106 for understanding the sensor interoperability and spatial-temporal-causal contexts. The system 106 aggregates data from these diverse sources in real time, provides a holistic and integrated view of the environment, and facilitates enhanced monitoring, analysis, and control of ambient systems. The system 106 provides a multi-sensor connectivity framework which is critical for enabling intelligent, real-time decision-making and comprehensive environmental awareness.
[0118] Figure 6a illustrates the system architecture for monitoring and modifying the IoT environment 100 using AI, in accordance with an exemplary embodiment of the present disclosure.
[0119] The system architecture as depicted in Figure 6a is an AI-IoT system 106 characterized by its seamless integration of multiple subsystems. For instance, the sensor layer corresponds to the sensing module 210, the thinking layer corresponds to the thinking module 212, the acting layer, databases, and APIs correspond to the Artificial Intelligence (AI) based interoperability module 214, and the Human-Computer Interaction (HCI) Dashboard corresponds to the instruction module 216. Further, each of the subsystems operates in various deployment environments, including edge computing platforms (CPU/GPU/TPU/NPU/QPU), cloud infrastructures, distributed edge-computing frameworks, distributed cloud-computing architectures, or hybrid configurations. This flexible deployment capability ensures that the present invention facilitates intelligent, context-aware functionalities across diverse operational settings, enhancing its adaptability and effectiveness in managing complex ambient environments.
[0120] Figures 6B-6C illustrate an embodiment in which the system 106 employs an encoder which connects with the plurality of sensors such as CCTV cameras 102a-n to deliver real-time intelligent surveillance. The encoder processes and analyzes video feeds from connected cameras and ensures secure video storage locally for up to one year while transmitting critical insights to the dashboard 610b for real-time monitoring and analytics. The architecture also shows the interfaces with cloud APIs for extended functionalities, such as interoperability, signal processing, KPI monitoring, and synthetic active AI, enhancing overall system intelligence and efficiency. The system 106 optimizes surveillance operations, offering seamless integration with various client networks and devices. By leveraging advanced sensor networks, data-interoperability, AI-driven modelling, and sophisticated actuators, the system 106 delivers hyper-personalized experiences while optimizing both value and efficiency metrics.
[0121] Referring back to Figure 6a, the frontend layer 602a serves as an interface to provide an intuitive and interactive platform for user engagement and system management. It enables users to interact seamlessly with the underlying knowledge graph and perform advanced data queries, leveraging both custom database queries and specialized LLM (Large Language Model) and SLM (Small Language Model) agents, i.e., interactions between the API layer 604a and the database layer 606a.
[0122] The frontend layer 602a of the architecture depicts a login page to secure access to the system by authenticating users based on their credentials and access rights, ensuring that only authorized personnel can interact with the system 106.
[0123] The frontend layer 602a of the architecture also depicts a Dashboard Page as illustrated in Figure 7 that provides an intuitive interface that displays key performance indicators (KPIs) and graphs directly sourced from the database of the system 106, offering a comprehensive overview of operations. Further, users may also perform custom queries directly on the database or the knowledge graph of the system 106 for tailored insights. For more complex language queries, the present invention translates natural language input into structured queries which are processed by LLM (Large Language Model) agents. Then, the results are aggregated from multiple SLM (Small Language Model) agents querying the vector knowledge graph. Thus, users receive precise answers derived from a rich data set, providing deep insights based on aggregated results from various specialized databases.
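A schematic sketch of this natural-language query path is given below. The LLM call is abstracted behind a plain callable because the disclosure does not name a specific model API; the table schema, prompt, and generated SQL are assumptions for illustration.

```python
# Schematic sketch: natural-language question -> structured SQL query.
import sqlite3
from typing import Callable

PROMPT_TEMPLATE = (
    "Translate the user's question into a single SQL query over the table "
    "kpi_events(zone TEXT, metric TEXT, value REAL, ts TEXT).\nQuestion: {q}\nSQL:"
)

def answer_query(question: str, llm: Callable[[str], str],
                 db_path: str = "kpis.db") -> list[tuple]:
    sql = llm(PROMPT_TEMPLATE.format(q=question)).strip()
    # In production the generated SQL would be validated/sandboxed first.
    with sqlite3.connect(db_path) as conn:
        return conn.execute(sql).fetchall()

# Usage with a stubbed "LLM" that returns a fixed query:
fake_llm = lambda prompt: ("SELECT zone, AVG(value) FROM kpi_events "
                           "WHERE metric='dwell_time' GROUP BY zone")
# rows = answer_query("What is the average dwell time per zone?", fake_llm)
```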
[0124] The frontend layer 602a of the architecture comprises an Analytics Page that allows users to start and stop the analytics engine orchestrator. It connects to all cameras, processes video feeds, and extracts metadata based on user-selected KPIs. The metadata is stored in the database for future reference and analysis, providing a rich source of information for further exploration and decision-making.
[0125] The frontend layer 602a of the architecture further comprises a Settings Page which is a configuration hub where users are able to manage cameras, define zones, set user access rights, and adjust other system settings, thereby making the system flexible and customizable.
[0126] The API layer 604a serves as a critical bridge between the frontend and backend components, facilitating seamless interaction and data management across the system. The key functions include:
[0127] Data Management: Manages data flow between the user interface and the database, enabling real-time updates and retrievals to ensure accurate and timely information is always available.
[0128] Third-Party Integration: Facilitates communication with external third-party software, such as GIS systems, POS systems, Teams, and WhatsApp, enabling Sentinel to integrate with existing workflows and expand its functionality. Provides APIs that allow these external applications to send and receive data, enabling seamless interactions and enhancing overall system capabilities.
[0129] Query Processing: Connects with LLM (Large Language Model) agents to process complex language queries, allowing users to interact with the knowledge graph using natural language. Furthermore, supports direct custom queries to the database for users who need detailed data retrieval and manipulation capabilities.
[0130] Security: Implements robust authentication and authorization mechanisms to protect sensitive data, ensuring that user actions are tracked and audited to maintain system integrity and compliance.
[0131] The ambient machine layer of the architecture is responsible for transforming raw video data and sensor inputs into actionable insights. The key features of the layer include:
[0132] AI Model Initialization: Upon startup, it initializes multiple AI models tailored to specific tasks such as object detection, behaviour analysis, and pattern recognition, ensuring that each model is optimized for its designated function.
[0133] Real-Time Video Processing: Continuously processes video feeds from all connected cameras, extracting valuable metadata that represents coded information about activities and interactions within the monitored environment.
[0134] Multi-Camera Tracking: Integrates data from multiple sensors and cameras to track people and products across different views, as shown in Figure 8A. This post-processing of extracted knowledge enables the AI engine to understand movement patterns and interactions, providing a comprehensive view of activities in complex environments, as shown in Figure 8B, and Figure 8C. Thus, the present invention maintains continuity of tracking, even when subjects move between camera views, enhancing the accuracy and reliability of surveillance.
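One common way to maintain such cross-camera continuity is to re-associate tracks by comparing appearance embeddings, as in the following hedged sketch; the embedding source and similarity threshold are assumptions, not the disclosed method itself.

```python
# Sketch: re-associating track IDs across cameras by embedding similarity.
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def match_across_cameras(tracks_cam_a: dict[int, np.ndarray],
                         tracks_cam_b: dict[int, np.ndarray],
                         threshold: float = 0.8) -> dict[int, int]:
    """Map track IDs in camera A to the best-matching IDs in camera B."""
    matches = {}
    for id_a, emb_a in tracks_cam_a.items():
        best_id, best_sim = None, threshold
        for id_b, emb_b in tracks_cam_b.items():
            sim = cosine(emb_a, emb_b)
            if sim > best_sim:
                best_id, best_sim = id_b, sim
        if best_id is not None:
            matches[id_a] = best_id
    return matches
```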
[0135] Metadata Generation: Structures and stores extracted metadata for further analysis, allowing the system to deliver insights into customer behaviour, security threats, and employee productivity.
[0136] The present invention provides the following technical advantages:
[0137] Business Analytics and Intelligence: The present invention provides powerful in-store insights comparable to e-commerce analytics, enabling retailers to enhance product placement, marketing strategies, and resource allocation. By providing detailed data on user demographics, seasonal trends, and shopping behaviours, the present invention empowers business and operations teams to make data-driven decisions that lead to smarter range planning, more targeted in-store marketing strategies, and optimized visual merchandise through effective A/B testing.
[0138] User Analytics and Intelligence: The present invention transforms brick-and-mortar retail operations with cutting-edge Visual AI, significantly enhancing user experiences and store performance. The core objective of the present invention is to personalize the in-store shopping experience by dynamically adjusting the environment based on users’ behaviour and preferences, all while maintaining strong privacy protections. By continuously monitoring users’ gaze analytics in relation to their selection behaviour, the present invention supports seamless retail media integrations, delivering highly customized ads on digital screens. This enriches the user experience and also contributes to the overall sales and advertising revenue.
[0139] Operational Analytics and Intelligence: The present invention optimizes operational efficiency by actively monitoring self-checkout areas for anomalies, ensuring that all products are correctly scanned and paid for, thus minimizing losses. The present invention enables retailers to create a highly personalized, user-centric shopping experience thereby improving the flow of operations, driving higher conversion rates and optimizing overall costs.
[0140] For example, the present invention enables tracking user footfall and the user journey inside a store: which section users visit first by choice, which other sections of the IoT environment/store/organization they visit next, and finally whether they proceed to the cash counter to purchase goods and make payment. Using the integrated Point of Sale data, the system of the present invention is able to plan the store’s visual merchandising, placement of goods, advertisements, retail media, and other elements to attract more users and improve user conversions, leading to higher revenue.
[0141] The present disclosure may be implemented to obtain useful information related to the organisation such as:
[0142] User-Engagement Key Performance Indicators (KPIs):
● Employee Availability: Track the number of employees available in each department, highlighting departments with zero manned staff.
● Customer Response Time: Measure the time taken for users to be attended to by staff.
● Customer Attendance Metrics: Count the number of users served and the users not attended within specific departments.
● Staff vs. Brand Promoter Engagement: Record the number of users attended by in-house staff compared to brand promoters in various departments.
● Zonal Dwell Time: Analyze the time spent by users in each experience zone of the store.
● Employee-User Interaction Analytics: Evaluate whether product demonstrations provided by associates lead to sales.
● Product-User Interaction Analytics: Identify which products attract the most users’ attention.
[0143] User Journey KPIs:
● User Path Tracking: Monitor and document the paths followed by users throughout the store.
● Hot/Cold Zone Analysis: Conduct root cause analysis to understand user behaviour in hot and cold zones of the store.
● Purchase Patterns and Footfalls: Analyze buying patterns and user footfalls to understand user preferences and traffic flow.
[0144] Customer Profile KPIs:
● Demographics: Collect data on user age and gender.
● Group vs. Individual Visits: Distinguish between individual users and those visiting in groups or families.
[0145] Associate Profile Identification KPIs:
● Role Identification: Track and categorize staff members including Store Managers, Department Managers, Store Associates, and Brand Promoters.
[0146] Associate Productivity KPIs:
● Attendance Tracking: Monitor and record the attendance of associates and staff across various departments, ensuring accurate reflection of presence and absence.
● Availability Analysis: Evaluate the number of hours associates are actively engaged in work versus the periods they are unavailable, providing insights into operational coverage and staff engagement.
● Productivity Measurement: Continuously measure and analyze the performance metrics of each associate, such as items packed per hour, scans completed per hour, and loading dock efficiency. This helps in assessing individual productivity and identifying areas for improvement.
● Interaction Time: Track the average duration of interactions between store associates and users to gauge the effectiveness of customer service.
● Manager Time Allocation: Assess and compare the average time store managers spend on the sales floor versus administrative tasks in the back office, optimizing managerial presence and engagement.
● Dynamic Staffing: Implement staffing adjustments based on peak and off-peak user visit hours to ensure optimal coverage and efficient operation.
● Conversion Rate: Accurately calculate the conversion rates resulting from user interactions to evaluate the effectiveness of sales strategies and staff performance.
[0147] Store KPIs:
● Operational Hours: Record store opening and closing times.
● Footfall Tracking: Monitor overall store footfall at entry and exit gates, identifying busy and quiet periods.
● Zonal Footfall Analysis: Measure footfall in each department to identify hot and dead zones within the store.
● Average Dwell Time: Determine the average time users spend in each department.
● Demonstration Metrics: Count the number of product demonstrations versus total user interactions in each department.
● Seasonality Impact: Analyze the impact of seasonality, such as weekends, festivals, and events, on user visits and apply predictive analytics.
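By way of illustration, two of the KPIs listed above (zonal footfall and average dwell time) may be computed from zone entry/exit events as sketched below; the event schema is an illustrative assumption.

```python
# Sketch: computing zonal footfall and average dwell time from events.
from collections import defaultdict
from datetime import datetime

def zone_kpis(events: list[dict]) -> dict[str, dict[str, float]]:
    """events: records of the form {user, zone, enter (ISO), exit (ISO)}."""
    footfall = defaultdict(int)
    dwell = defaultdict(list)
    for e in events:
        seconds = (datetime.fromisoformat(e["exit"]) -
                   datetime.fromisoformat(e["enter"])).total_seconds()
        footfall[e["zone"]] += 1
        dwell[e["zone"]].append(seconds)
    return {z: {"footfall": footfall[z],
                "avg_dwell_s": sum(dwell[z]) / len(dwell[z])}
            for z in footfall}

events = [{"user": "u1", "zone": "electronics",
           "enter": "2024-12-23T10:00:00", "exit": "2024-12-23T10:04:30"}]
print(zone_kpis(events))  # {'electronics': {'footfall': 1, 'avg_dwell_s': 270.0}}
```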
[0148] The present invention relates to an intelligent Ambient Machine designed to monitor, analyze, and dynamically adapt IoT-enabled environments. The system is tailored to physical spaces such as retail stores, warehouses, and operational facilities, focusing on dynamic assets including humans (e.g., customers, staff, workers, managers) and machines (e.g., forklifts, robotic arms, and automated systems). By integrating multi-modal sensing, hierarchical AI pipelines, and actuators, the invention provides a comprehensive framework for intelligent monitoring, real-time insights, and actionable interventions.
[0149] The present invention enables:
[0150] Converting the Physical Environment into a Digital Twin: The invention transforms the physical environment into a dynamic digital representation to enable real-time monitoring and interaction.
[0151] Sensor Onboarding and Configuration:
● The system integrates IoT sensors, such as CCTV cameras, RFID tags, and BLE beacons, mapping their configurations onto a digital layout derived from CAD files or PDF blueprints.
● Cameras and sensors are registered with details such as positioning, orientation, field of view, resolution, and frame rate.
[0152] Layout and Digital Mapping:
● The environment layout is converted into a 3D digital twin, enabling spatial awareness and real-time visualization.
● Zones and regions of interest (ROIs) are defined, such as customer-facing areas, restricted zones, or operational sections.
● The system identifies overlaps between sensor coverage and ROIs, enabling optimal monitoring.
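The coverage-overlap check named in the last item may be sketched, under assumptions, by modelling each camera's projected field of view and each ROI as 2-D polygons on the digital layout; the use of shapely here is an implementation assumption for illustration.

```python
# Sketch: fraction of an ROI covered by a camera's field of view.
from shapely.geometry import Polygon  # pip install shapely

def coverage_ratio(camera_fov: Polygon, roi: Polygon) -> float:
    """Fraction of the ROI covered by the camera's projected FoV."""
    if roi.is_empty or roi.area == 0:
        return 0.0
    return camera_fov.intersection(roi).area / roi.area

fov = Polygon([(0, 0), (8, 0), (8, 6), (0, 6)])        # projected FoV footprint
checkout_roi = Polygon([(6, 4), (10, 4), (10, 8), (6, 8)])
print(f"{coverage_ratio(fov, checkout_roi):.0%} of the ROI is covered")  # 25%
```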
[0153] Operational Rules and KPIs:
● Users define rules and KPIs tailored to the environment’s purpose. Examples include customer dwell times, staff productivity, and compliance rates.
● The digital twin offers a "God's Eye" view of the environment, visualizing dynamic assets and ongoing activities.
[0154] Dynamic Asset Profiling and Behavioural Analysis: The Ambient Machine processes real-time data streams to profile dynamic assets and generate actionable insights.
[0155] Customer Profiling and Preferences:
● Identity Recognition: Customers are identified through facial recognition and other biometric techniques.
● Demographic Profiling: Age and gender are estimated for targeted analysis and engagement.
● Preference Mining: The system analyzes customer interactions with products, such as viewing, picking, comparing, or purchasing, to infer preferences.
● Behavioural Insights: Behavioural patterns, such as dwell times in specific zones or repeated interactions with certain products, are mined to refine customer profiles.
[0156] Staff and Associate Monitoring:
● Attendance Tracking: The system monitors staff attendance in the store or warehouse, logging their presence and shift timings.
● Zone Availability: Staff availability in their designated zones is tracked, ensuring alignment with operational requirements.
● Productivity Analysis: Metrics such as time spent assisting customers, responding to alerts, or fulfilling tasks are evaluated to assess productivity.
[0157] Worker Compliance and Anomaly Detection:
● Task Compliance: The system monitors whether workers follow standard operating procedures (SOPs), such as safety guidelines or task sequences.
● Anomaly Detection: Deviations from expected behaviour, such as unauthorized access or unsafe practices, are flagged in real time.
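A minimal sketch of such a task-compliance check is given below, treating the SOP as an ordered step sequence; the action labels and the in-order subsequence test are illustrative assumptions rather than the disclosed algorithm.

```python
# Sketch: checking that observed actions follow the SOP's expected order.
def sop_compliant(expected: list[str], observed: list[str]) -> tuple[bool, list[str]]:
    """True if expected steps appear in order within the observed stream."""
    it = iter(observed)
    missing = [step for step in expected if step not in it]  # in-order scan
    return (not missing, missing)

sop = ["scan_item", "pack_item", "seal_box", "print_label"]
seen = ["scan_item", "pack_item", "print_label"]            # seal step skipped
ok, missing = sop_compliant(sop, seen)
print(ok, missing)
# False ['seal_box', 'print_label'] (steps after a missed one count as out of order)
```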
[0158] Hierarchical Hybrid AI Architecture: The invention employs a multi-tiered AI pipeline to process data efficiently and extract rich insights.
[0159] Edge-Level Processing:
● Lightweight models on edge devices detect and classify dynamic assets in real time.
● Video data is converted into metadata, focusing on regions of interest (ROIs) and minimizing irrelevant information.
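The ROI-focused conversion referenced above may be sketched, under assumptions, as pixel-level delta encoding: only pixels that changed beyond a threshold are kept, with a timestamp. The threshold and ROI handling are illustrative.

```python
# Sketch: sparse pixel-level delta encoding within a region of interest.
import numpy as np

def encode_delta(prev: np.ndarray, curr: np.ndarray,
                 roi: tuple, threshold: int = 25, ts: float = 0.0) -> dict:
    """Return sparse per-pixel deltas inside the ROI for one frame pair."""
    a = prev[roi].astype(np.int16)
    b = curr[roi].astype(np.int16)
    changed = np.abs(b - a) > threshold             # boolean change mask
    ys, xs = np.nonzero(changed)
    return {"ts": ts,
            "coords": np.stack([ys, xs], axis=1),   # ROI-relative positions
            "values": b[changed].astype(np.uint8)}  # new pixel values only

prev = np.zeros((480, 640), dtype=np.uint8)
curr = prev.copy(); curr[100:110, 200:210] = 200    # a small moving object
delta = encode_delta(prev, curr, (slice(0, 240), slice(0, 320)), ts=1.25)
print(delta["coords"].shape)                        # (100, 2) changed pixels
```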
[0160] Local Network Processing:
● Quantized vision transformers refine metadata, correct errors, and perform deeper analysis.
● Temporal analysis identifies trends, such as prolonged dwell times, enabling predictive insights.
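For illustration, the temporal analysis of prolonged dwell times might resemble the following hedged sketch, which flags dwell samples far beyond a rolling baseline; the window size and z-score cutoff are assumptions.

```python
# Sketch: flagging anomalously long dwell times against a rolling baseline.
from collections import deque
import statistics

class DwellTrend:
    def __init__(self, window: int = 50, z_cutoff: float = 2.5):
        self.samples = deque(maxlen=window)  # rolling window of dwell samples
        self.z_cutoff = z_cutoff

    def observe(self, dwell_seconds: float) -> bool:
        """Record a dwell sample; return True if it is anomalously long."""
        flagged = False
        if len(self.samples) >= 10:          # wait for a minimal baseline
            mean = statistics.fmean(self.samples)
            stdev = statistics.pstdev(self.samples) or 1e-9
            flagged = (dwell_seconds - mean) / stdev > self.z_cutoff
        self.samples.append(dwell_seconds)
        return flagged

trend = DwellTrend()
for s in [30, 42, 35, 28, 39, 31, 44, 36, 33, 40]:
    trend.observe(s)
print(trend.observe(300))  # True: far beyond the learned baseline
```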
[0161] Cloud-Level Processing:
● Advanced vision-language and video-language models extract richer insights from encoded video clips.
● Examples include identifying whether a customer is interacting with SKUs, analyzing staff-customer interactions, or assessing task completion in warehouses.
● User-defined prompts are integrated to tailor system responses and generate natural language insights.
[0162] Actuation and Environmental Modifications: The system uses its insights to actively adapt the environment, enhancing safety, efficiency, and engagement.
[0163] Customer-Centric Actuation:
● Personalized Offers: Recognized customers, such as loyal shoppers, are provided personalized offers displayed on retail media screens.
● Interactive Displays: Based on customer profiles, targeted advertisements or product information are shown on digital displays.
● Customer Assistance: Alerts are sent to staff for proactive assistance if a customer appears to need help.
[0164] Operational Actuation:
● Staff Alerts: Notifications, such as emails or messages, are sent to staff regarding anomalies or customer needs.
● Robotic Interventions: Actuation commands are sent to robotic arms or automated machines for tasks like inventory management or replenishment.
● Environmental Adjustments: Access controls, lighting, and HVAC systems are adjusted based on real-time environmental conditions or safety requirements.
[0165] Anomaly Response:
● The system triggers appropriate actions for anomalies, such as:
○ Locking doors in case of unauthorized access.
○ Sending alerts for safety violations.
○ Activating security cameras or alarms for suspicious activity.
[0166] Configuration Dashboard and Instruct Mechanisms: The system provides users with actionable insights and operational control through an intuitive dashboard interface.
[0167] Comprehensive Dashboard:
● Displays real-time KPIs, such as customer engagement metrics, staff productivity scores, and compliance rates.
● Offers a digital twin view of the environment, visualizing active zones, asset movements, and detected anomalies.
[0168] Custom Queries and Reports:
● Users can query the system using natural language inputs, generating tailored reports and insights.
● Queries may include, "Which customers interacted with SKU X today?" or "What was the productivity rate of Zone A staff?"
[0169] Adaptive Feedback Loops:
● Feedback from users and system outcomes is incorporated to refine operational rules and AI models.
● For example, if bottlenecks are identified in a store layout, the system can suggest layout changes or staff reallocations.
[0170] The present invention revolutionizes IoT-enabled environments by integrating advanced sensing, hierarchical AI pipelines, and actuation mechanisms. The present invention provides real-time profiling and behaviour analysis of dynamic assets, empowering users with actionable insights and enabling intelligent environmental modifications. Whether optimizing customer experiences in retail or ensuring compliance in warehouses, the Ambient Machine offers a versatile and scalable solution for modern operational challenges.
[0171] Working Flow of the Ambient Machine: The Ambient Machine is an innovative IoT-enabled system designed to monitor, analyze, and manage dynamic physical environments through a multi-stage workflow. Each stage leverages advanced AI technologies to process real-time data and deliver actionable insights.
1. Conversion of Physical to Digital Environment: The system first creates a digital representation of the physical space by onboarding IoT sensors such as CCTV cameras and RFID tags. Using layout files (e.g., CAD, PDFs), it maps sensor configurations (pose, resolution, field of view) to the environment, generating a comprehensive "God’s Eye View." It identifies zones, regions of interest (ROIs), and operational rules, forming the foundational layer for all further operations.
2. Spatial Mining and Metadata Generation: Edge AI models run real-time object detection and spatial localization tasks. These miniaturized models identify dynamic assets such as humans and machines, classify their roles (e.g., customer, staff, worker), and encode temporal changes. The data is compressed through pixel-level delta encoding, generating metadata enriched with timestamps for further processing.
3. Insights from Vision-Language Models (VLMs): The system employs Vision-Language Models to analyze encoded video clips and metadata, extracting advanced insights. These models identify behaviours such as dwell times, interactions, and engagement with objects or zones, enriching the metadata for high-level analysis.
4. Agentic AI for Real-Time Action: Small Language Models (SLMs) power an AI assistant capable of handling real-time natural language queries. This component enables triggering alerts, sending notifications (via email, WhatsApp, etc.), and actuating devices like robotic systems, retail displays, or security mechanisms.
5. Dashboards and Actuation: User-facing dashboards provide real-time monitoring of KPIs, insights, and alerts. Configuration interfaces allow for onboarding sensors, defining rules, and customizing alerts. Actuation modules execute responsive actions, such as triggering alerts, controlling robotic arms, or adjusting digital displays, ensuring seamless operational management.
[0172] Data Security and Privacy in the Ambient Machine:
[0173] The present invention prioritizes data security and privacy, particularly in handling sensitive information, including Personally Identifiable Information (PII). The Ambient Machine achieves this by employing a transformative process wherein raw video data is converted into anonymized metadata and spatio-temporally encoded clips. This ensures that only the essential details necessary for insights, such as object classes, spatial locations, and temporal interactions, are extracted and stored, while raw video frames are not retained or transmitted.
[0174] The encoding mechanism implemented in the present invention reduces the granularity of stored data by focusing exclusively on pixel-level deltas within designated regions of interest (ROIs). By analyzing changes over time and space, the system abstracts data into actionable formats, avoiding exposure of sensitive or extraneous visual details. This abstraction inherently anonymizes the data, as it eliminates any direct visual representation of individuals or objects.
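A minimal sketch of the anonymized metadata that replaces raw frames is given below; the field names are illustrative assumptions consistent with the object-class, spatial, and temporal details described above.

```python
# Sketch: anonymized observation record; only class, location, and
# timing survive, so no identity or appearance data is retained.
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class AnonymizedObservation:
    object_class: str          # e.g., "person", "forklift"; no identity
    zone_id: str               # spatial location by zone, not raw pixels
    bbox_roi: tuple            # ROI-relative box, no appearance data
    ts_start: float            # temporal extent of the interaction
    ts_end: float

def anonymize(detection: dict) -> AnonymizedObservation:
    """Drop every field except class, location, and timing."""
    return AnonymizedObservation(
        object_class=detection["class"],
        zone_id=detection["zone"],
        bbox_roi=tuple(detection["bbox"]),
        ts_start=detection["t0"],
        ts_end=detection["t1"],
    )

obs = anonymize({"class": "person", "zone": "aisle-3",
                 "bbox": (12, 40, 60, 180), "t0": 10.0, "t1": 14.5,
                 "face_crop": b"...raw pixels dropped..."})
print(asdict(obs))
```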
[0175] The architecture of the Ambient Machine further enhances data security by employing a hierarchical processing approach. Most operations occur at the edge level, where raw video data is processed locally to generate metadata and encoded clips. These edge devices execute initial detection, classification, and encoding, thereby ensuring that sensitive data does not leave the localized environment. Only pre-processed and anonymized data is transmitted to the cloud for further analysis, where advanced AI models generate deep insights. The cloud processing is confined to encoded data formats, ensuring that PII remains inaccessible.
[0176] The present invention also incorporates rigorous encryption protocols for data transmission and storage, protecting metadata and encoded clips from unauthorized access. Role-based access controls are implemented to restrict data visibility to only those with requisite permissions, further ensuring compliance with privacy standards.
[0177] By leveraging edge computing for primary processing, anonymized data encoding, and cloud operations limited to non-PII data, the Ambient Machine establishes a robust framework for data security and privacy. This ensures that the system maintains the trust of users and stakeholders, enabling secure and ethical deployment in environments such as retail stores and warehouses.
[0178] Use Cases Supported by the Ambient Machine:
[0179] The Ambient Machine supports a broad range of use cases across retail and warehouse environments, applying the multi-stage workflow described above to deliver the following capabilities.
[0180] User Profiling
● Staff & Associates: Facial recognition tracks attendance, availability, and productivity within zones.
● Customers: Identifies loyal or repeat customers, logging demographic details such as age and gender and whether the visit is by an individual or a group/family.
[0181] Customer Journey Tracking: Tracks customers across zones and time, from entry to exit, providing insights into their navigation patterns.
[0182] Customer Engagement: Detects where customers dwell (via heatmaps and flow-maps), which products they touch, pick, or put back, and how long they interact with a specific zone or brand.
[0183] Gaze Analysis: Monitors where customers look, including promotional boards, advertisements, or SKUs, before making purchasing decisions.
[0184] Customer Preferences and Behaviours: Analyzes frequent purchases, preferred zones, and navigation paths, while identifying behaviours exhibited during interactions.
[0185] Personalized Ads and Assistance
● Displays tailored advertisements on retail media screens based on profiles and preferences.
● Sends alerts to staff for assisting customers or addressing anomalies.
[0186] Staff Monitoring: Tracks attendance, availability in assigned zones, and productivity of employees.
[0187] Premise Monitoring: Detects unusual behaviours, theft, or shrinkage events, triggering real-time alerts and notifications.
[0188] Process Compliance in Warehouses: Monitors workers’ adherence to SOPs, checks for anomalies in packing stations, dock stations, and entry/exit gates.
[0189] While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein.
[0190] The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein.
[0191] Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of embodiments is at least as broad as given by the following claims.
[0192] Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any component(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or component of any or all the claims.
Claims:
1) A method (400) for monitoring and managing one or more dynamic assets in an Internet of Things (IoT) environment, the method comprising:
aggregating (402), by a sensing module, real-time data from one or more sensors at inter-sensor and intra sensor level and real-time data from one or more actuators, in response to a user entering the IoT environment;
determining (404), by a thinking module, context aware data corresponding to the real-time data based on a spatial context, a temporal context and a causal context;
establishing (406), by an Artificial Intelligence (AI) based interoperability module, cross-correlation of the context aware data at each modality and sub-modality level;
determining (408), by the AI based interoperability module, a modification data associated with the IoT environment based on the cross-correlation data, wherein the modification data indicates operational changes in the one or more actuators; and
transmitting (410), by an instruction module, the modification data to the one or more actuators for modification in the IoT environment, thereby ensuring seamless engagement with the user, precise information dissemination, and improved operational control.
2) The method (400) as claimed in claim 1, wherein,
the one or more sensors includes at least one of CCTV cameras, thermal cameras, liquid lenses, and IoT sensors such as temperature, pressure, electric, and sound sensors; and
the real-time data includes a plurality of media in any format from multiple cameras.
3) The method (400) as claimed in claim 1, wherein the context aware data indicates operational monitoring metrics, sensor-based monitoring data, object and event tracking data, difference in sensor data transmission at pre-defined intervals and operating performance, event prediction, detection, and response.
4) The method (400) as claimed in claim 1, wherein the modality level indicates visual senses, auditory senses, kinesthetic senses, gustatory senses and olfactory senses and the sub-modality level indicates specific quality of thoughts and feelings of each sense within the modality level.
5) The method (400) as claimed in claim 1, comprising:
extracting, by the thinking module, meta-information from the real-time data using one or more AI models, Vision-transformers and Vision-language transformers to determine the context aware data; and
generating, by the thinking module, an alert when the meta-information indicates any one or more incidents such as unauthorized access, suspicious behaviour, and safety violations by the user within the IoT environment.
6) The method (400) as claimed in claim 5, wherein the meta-information includes one or more of information related to object detection, behaviour analysis of the user, pattern recognition of the user, information about activities and interactions of the user within the IoT environment, tracking of the user, tracking of products across different views, and security threats.
7) The method (400) as claimed in claim 1, comprising:
mapping, by a digital twin module, configuration of the one or more sensors and one or more regions of interest (ROIs) to provide a real-time visualization of the one or more dynamic assets.
8) The method (400) as claimed in claim 1, comprising:
storing, by a data storage module, the real-time data and the context aware data in a compressed form; and
enabling, by the data storage module, retrieval of the real-time data and the context aware data by querying.
9) The method (400) as claimed in claim 1, comprising:
enabling, by the AI based interoperability module, deep insights on the one or more dynamic assets engagement and behavioural analysis based on methodologies such as edge-level detection, network-level tracking, and cloud-level models.
10) A system (106) for monitoring and managing one or more dynamic assets in an Internet of Things (IoT) environment, the system comprising:
a sensing module (210) to aggregate real-time data from one or more sensors at inter-sensor and intra sensor level and real-time data from one or more actuators, in response to a user entering the IoT environment;
a thinking module (212) to determine context aware data corresponding to the real-time data based on a spatial context, a temporal context and a causal context;
an Artificial Intelligence (AI) based interoperability module (214) to establish cross-correlation of the context aware data at each modality and sub-modality level;
the AI based interoperability module (214) to determine a modification data associated with the IoT environment based on the cross-correlation data, wherein the modification data indicates operational changes in the one or more actuators; and
an instruction module (216) to transmit the modification data to the one or more actuators for modification in the IoT environment, thereby ensuring seamless engagement with the user, precise information dissemination, and improved operational control.
11) The system (106) as claimed in claim 10, wherein,
the one or more sensors includes at least one of CCTV cameras, thermal cameras, liquid lenses, and IoT sensors such as temperature, pressure, electric, and sound sensors; and
the real-time data includes a plurality of media in any format from multiple cameras.
12) The system (106) as claimed in claim 10, wherein the context aware data indicates operational monitoring metrics, sensor-based monitoring data, object and event tracking data, difference in sensor data transmission at pre-defined intervals and operating performance, event prediction, detection, and response.
13) The system (106) as claimed in claim 10, wherein the modality level indicates visual senses, auditory senses, kinesthetic senses, gustatory senses and olfactory senses and the sub-modality level indicates specific quality of thoughts and feelings of each sense within the modality level.
14) The system (106) as claimed in claim 10, comprising:
the thinking module (212) to extract meta-information from the real-time data using one or more AI models, Vision-transformers and Vision-language transformers to determine the context aware data; and
the thinking module (212) to generate an alert when the meta-information indicates any one or more incidents such as unauthorized access, suspicious behaviour, and safety violations by the user within the IoT environment.
15) The system (106) as claimed in claim 14, wherein the meta-information includes one or more of information related to object detection, behaviour analysis of the user, pattern recognition of the user, information about activities and interactions of the user within the IoT environment, tracking of the user, tracking of products across different views, and security threats.
16) The system (106) as claimed in claim 10, comprising:
a digital twin module (218) to map configuration of the one or more sensors and one or more regions of interest (ROIs) to provide a real-time visualization of the one or more dynamic assets.
17) The system (106) as claimed in claim 10, comprising:
a data storage module (220) to store the real-time data and the context aware data in a compressed form; and
the data storage module to enable retrieval of the real-time data and the context aware data by querying.
18) The system (106) as claimed in claim 10, comprising:
the AI based interoperability module to enable deep insights on the one or more dynamic assets engagement and behavioural analysis based on methodologies such as edge-level detection, network-level tracking, and cloud-level models.