Abstract: The present disclosure provides a system and method for forecasting device events. In particular, a set of data parameters corresponding to events triggered at one or more devices in a network is recorded and pre-processed in one or more batches. Further, one or more combinations of events are determined based on the pre-processed set of data parameters, where each combination includes an event and a frequency of occurrence of the event in a particular period of time. Based on the one or more combinations, one or more recommendations may be provided for execution at the one or more devices in the network, where the one or more recommendations correspond to forecasted device events.
RESERVATION OF RIGHTS
A portion of the disclosure of this patent document contains material which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, Integrated Circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
FIELD OF DISCLOSURE
[0001] The embodiments of the present disclosure generally relate to a forecasting system. In particular, the present disclosure relates to a forecasting system for predicting device events using artificial intelligence and machine learning based architecture.
BACKGROUND OF DISCLOSURE
[0002] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is to be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0003] In a digital world with millions of users across the globe, prediction has the power to drive the future of interaction. Prediction becomes possible by feeding a historical dataset into a system that uses machine learning algorithms to forecast outcomes.
[0004] People interact with a number of different electronic devices on a daily basis. However, the usefulness of these devices is often limited to basic and/or pre-determined tasks associated with each device. Despite advancements in technology and the growing variety of deployed devices, comparatively little progress has been made in using these devices in diverse, evolving, and unpredictable ecosystems.
[0005] There is, therefore, a need in the art for a method and a system that can overcome the shortcomings of the existing prior art.
SUMMARY
[0006] This section is provided to introduce certain objects and aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0007] In an aspect, the present disclosure relates to a system for providing one or more recommendations for executing forecasted events at one or more computing devices in a network. The system includes one or more processors and a memory operatively coupled to the one or more processors, where the memory includes processor-executable instructions, which on execution, cause the one or more processors to record a set of data parameters corresponding to one or more events triggered at the one or more computing devices in the network, pre-process the recorded set of data parameters in one or more batches, and determine at least one combination based on the pre-processed set of data parameters. The at least one combination includes an event of the one or more events and a frequency of occurrence of the event over a first period of time. Further, in response to said determination, the one or more processors are configured to provide the one or more recommendations for execution of the forecasted events at the one or more computing devices in the network, where the one or more recommendations include the at least one combination.
[0008] In an embodiment, the one or more processors are configured to determine whether a probability of occurrence of the at least one combination is equal to or exceeds a threshold. In an embodiment, the one or more processors are configured to provide the one or more recommendations for execution of the forecasted events at the one or more computing devices in response to a determination that the probability of occurrence of the at least one combination is equal to or exceeds the threshold.
[0009] In an embodiment, the one or more processors are configured to provide the one or more recommendations based on extrapolation of time stamps in accordance with the at least one combination.
[0010] In an embodiment, the one or more processors are configured to determine the at least one combination by being configured to determine at least one chain of combinations over the first period of time. The at least one chain of combinations includes more than one combination of the determined at least one combination, and the one or more recommendations include the at least one chain of combinations.
[0011] In an embodiment, the set of data parameters corresponding to the one or more events triggered at the one or more computing devices includes historical usage data of the one or more computing devices.
[0012] In another aspect, the present disclosure relates to a method for providing one or more recommendations for executing forecasted events. The method includes recording, by one or more processors, a set of data parameters corresponding to one or more events triggered at one or more computing devices in a network, selecting, by the one or more processors, a pair of the one or more computing devices in the network based on the recorded set of data parameters, and determining, by the one or more processors, a count of one or more intersection events corresponding to the selected pair of the one or more computing devices. Further, the method includes providing, by the one or more processors, the one or more recommendations for execution of the forecasted events at the selected pair of the one or more computing devices in the network. The one or more recommendations include the one or more intersection events.
[0013] In an embodiment, the method includes determining, by the one or more processors, whether the count of the one or more intersection events corresponding to the selected pair of the one or more computing devices exceeds a threshold. In an embodiment, the method includes providing, by the one or more processors, the one or more recommendations for execution of the forecasted events at the selected pair of the one or more computing devices in response to determining, by the one or more processors, that the count of the one or more intersection events corresponding to the selected pair of the one or more computing devices exceeds the threshold.
[0014] In an embodiment, the method includes providing, by the one or more processors, the one or more recommendations based on extrapolation of time stamps in accordance with the one or more intersection events.
[0015] In an embodiment, the method includes selecting, by the one or more processors, another pair of the one or more computing devices in response to determining, by the one or more processors, that the count of the one or more intersection events corresponding to the selected pair of the one or more computing devices is less than the threshold.
[0016] In another aspect, the present disclosure relates to a user equipment (UE) for executing one or more recommendations of forecasted events. The UE includes one or more processors communicatively coupled to a system, where the one or more processors are configured to detect one or more events triggered at the UE in a network, and execute the one or more recommendations provided by the system. Further, the system includes a processor configured to record a set of data parameters corresponding to the one or more events triggered at the UE in the network, pre-process the recorded set of data parameters in one or more batches, and determine at least one combination based on the pre-processed set of data parameters, where the at least one combination includes an event of the one or more events and a frequency of occurrence of the event over a first period of time. Further, the processor of the system is configured to provide the one or more recommendations for execution of the forecasted events at the UE in the network in response to said determination. The one or more recommendations include the at least one combination.
OBJECTS OF THE PRESENT DISCLOSURE
[0017] Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are listed herein below.
[0018] An object of the present disclosure is to provide a system and a method for creation of smart scenes and schedules to be automatically executed at one or more computing devices in a network.
[0019] An object of the present disclosure is to automate and forecast device schedules based on historical usage data for individual computing devices.
[0020] An object of the present disclosure is to improve user experience by dynamically creating device schedules based on historical usage data for individual computing devices.
[0021] An object of the present disclosure is to provide a system and a method that identifies hidden activities being performed consistently by users associated with respective computing devices.
[0022] An object of the present disclosure is to dynamically control computing devices, which in turn helps in reducing power consumption.
BRIEF DESCRIPTION OF DRAWINGS
[0023] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes the disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0024] FIG. 1 illustrates an exemplary network architecture (100) in which or with which a proposed system may be implemented, in accordance with an embodiment of the present disclosure.
[0025] FIG. 2 illustrates an exemplary representation (200) of the proposed system for forecasting device events, in accordance with an embodiment of the present disclosure.
[0026] FIG. 3 illustrates an exemplary block diagram representation (300) of a network architecture for a system implemented in a cloud Internet of Things (IoT) platform, in accordance with an embodiment of the present disclosure.
[0027] FIG. 4 illustrates an exemplary architecture (400) in which or with which the embodiments of the present disclosure may be implemented.
[0028] FIG. 5 illustrates a sequence diagram of a network architecture (500) in which or with which embodiments of the present disclosure may be implemented.
[0029] FIG. 6 illustrates an example method (600) for providing one or more recommendations of predicted/forecasted events at one or more computing devices in a network, in accordance with an embodiment of the present disclosure.
[0030] FIGs. 7A and 7B illustrate exemplary representations of historical data and forecasted event(s), in accordance with some embodiments of the present disclosure.
[0031] FIG. 8 illustrates an exemplary computer system (800) in which or with which embodiments of the present disclosure may be implemented.
[0032] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION OF DISCLOSURE
[0033] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0034] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0035] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0036] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0037] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
[0038] Reference throughout this specification to “one embodiment” or “an embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0039] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
[0040] The present disclosure provides a robust and effective solution for forecasting and facilitating execution of device events in a network. In particular, the disclosed solution provides for forecasting device schedules and events based on historical usage data for individual devices in the network. Further, the present disclosure improves the end user experience by dynamically creating schedules based on user patterns from the historical usage data, without any user intervention. Furthermore, the present disclosure provides for dynamically controlling the devices, which in turn reduces power consumption. For example, the present disclosure may generate appropriate schedules and scenes that may initiate switching off the devices in the network automatically at a scheduled time.
[0041] Embodiments of the present disclosure relate to forecasting or predicting events automatically in a network. In particular, a system may be provided for dynamically controlling one or more devices in a network by way of, but not limited to, automatically executing one or more recommended actions corresponding to predicted events at the one or more devices. In an embodiment, the present disclosure relates to smart device environments. In an embodiment, the present disclosure relates to the Internet of Things (IoT), and more particularly, to communication among devices via the IoT.
[0042] In accordance with the embodiments described herein, a user (e.g., customer) may interact with one or more devices in a network to trigger an event. Each event triggered at the one or more devices may be collected and recorded. Further, using the data collected, an artificial intelligence (AI)-triggered system may forecast and predict future events at the one or more devices in the network.
[0043] Accordingly, the present disclosure offers new insights across one or more devices in a network. In particular, information of one or more devices that is collected and recorded across the network (e.g., network ecosystem) provides insights on user behaviour, for example, activities performed by users associated with the one or more devices in the network. Further, the disclosed solution analyses the information and historical usage data of the one or more devices, and performs pre-processing on the collected data such that one or more future actions or events may be predicted using the AI-triggered system. In this way, end user experience is improved as user intervention or awareness is not required for such predictions. Furthermore, the disclosed solution also dynamically controls the one or more devices in the network which in turn reduces power consumption. Other like benefits and advantages are provided by the disclosed solution, which will be discussed in detail throughout the disclosure.
[0044] Certain terms and phrases have been used throughout the disclosure and will have the following meanings in the context of the ongoing disclosure.
[0045] The term “Internet of Things” may refer to a computing environment in which physical objects are embedded with devices which enable the physical objects to achieve greater value and service by exchanging data with other systems and/or other connected devices. Each physical object is uniquely identifiable through its embedded device(s) and is able to interoperate within an Internet infrastructure. The acronym “IoT,” as used herein, means “Internet of Things.”
[0046] The term “user pattern” may refer to data representing traits, acts, tendencies, and/or observable characteristics of a user. A user pattern can be generated for a user by processing data generated by a user’s interaction with one or more devices.
[0047] The term “real time” may refer to a level of processing responsiveness that a user or system senses as sufficiently immediate for a particular process or determination to be made, or that enables a processor to keep up with some external process.
[0048] The term “automatically” may refer to without user intervention.
[0049] The term “event” may refer to an act or activity performed by a user at a device.
[0050] The term “long short-term memory machine learning model” may refer to a recurrent neural network algorithm which is capable of handling long-term dependencies to identify patterns in series of events/data points. The acronym “LSTM,” as used herein, means “long short-term memory.”
[0051] The term “scene” may refer to a set of events occurring chronologically in a particular time period.
[0052] The term “event streaming” may refer to capturing data in real-time from event sources in the form of streams of events, storing these event streams durably for later retrieval, manipulating, processing, and reacting to the event streams in real-time as well as retrospectively, and routing the event streams to different destination technologies as needed.
[0053] The various embodiments throughout the disclosure will be explained in more detail with reference to FIGs. 1-8.
[0054] FIG. 1 illustrates an exemplary network architecture (100) in which or with which embodiments of the present disclosure may be implemented.
[0055] Referring to FIG. 1, the network architecture (100) may include one or more computing devices (104-1, 104-2…104-N) associated with one or more users (102-1, 102-2…102-N) deployed in an environment. A person of ordinary skill in the art will understand that one or more users may be individually referred to as the user (102) and collectively referred to as the users (102). Further, a person of ordinary skill in the art will understand that one or more computing devices may be individually referred to as the computing device (104) and collectively referred to as the computing devices (104).
[0056] In an embodiment, each computing device (104) may interoperate with every other computing device (104) in the network architecture (100). In an embodiment, the computing devices (104) may be referred to as a user equipment (UE). A person of ordinary skill in the art will appreciate that the terms “computing device(s)” and “UE” may be used interchangeably throughout the disclosure.
[0057] In an embodiment, the computing devices (104) may include, but are not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device (104) with wireless communication capabilities, and the like. In an embodiment, the computing devices (104) may include, but are not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device, wherein the computing device (104) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from a user (102) such as a touch pad, a touch-enabled screen, an electronic pen, and the like.
[0058] In an embodiment, the computing devices (104) may include smart devices operating in a smart environment, for example, the IoT system. In such an embodiment, the computing devices (104) may include, but are not limited to, smart phones, smart watches, smart sensors (e.g., mechanical, thermal, electrical, magnetic, etc.), networked appliances, networked peripheral devices, networked lighting system, communication devices, networked vehicle accessories, smart accessories, tablets, smart television (TV), computers, smart security system, smart home system, other devices for monitoring or interacting with or for users (102) and/or places, or any combination thereof. In an embodiment, the computing devices (104) may include one or more of the following components: sensor, radio frequency identification (RFID) technology, GPS technology, mechanisms for real-time acquisition of data, passive or interactive interface, mechanisms of outputting and/or inputting sound, light, heat, electricity, mechanical force, chemical presence, biological presence, location, time, identity, other information, or any combination thereof.
[0059] A person of ordinary skill in the art will appreciate that the computing devices (104) may include, but not be limited by, intelligent, multi-sensing, network-connected devices, that can integrate seamlessly with each other and/or with a central server or a cloud-computing system or any other device that is network-connected.
[0060] A person of ordinary skill in the art will appreciate that the computing devices or UEs (104) may not be restricted to the mentioned devices and various other devices may be used.
[0061] Referring to FIG. 1, the computing devices (104) may communicate with a system (110), for example, a forecasting system, through a network (106). In an embodiment, the network (106) may include at least one of a Fourth Generation (4G) network, a Fifth Generation (5G) network, or the like. The network (106) may enable the computing devices (104) to communicate between devices (104) and/or with the system (110). As such, the network (106) may enable the computing devices (104) to communicate with other computing devices (104) via a wired or wireless network. The network (106) may include a wireless card or some other transceiver connection to facilitate this communication. In an exemplary embodiment, the network (106) may incorporate one or more of a plurality of standard or proprietary protocols including, but not limited to, Wi-Fi, Zigbee, or the like. In another embodiment, the network (106) may be implemented as, or include, any of a variety of different communication technologies such as a wide area network (WAN), a local area network (LAN), a wireless network, a mobile network, a Virtual Private Network (VPN), the Internet, the Public Switched Telephone Network (PSTN), or the like.
[0062] Referring to FIG. 1, the system (110) may include an artificial intelligence (AI) engine (108) in which or with which the embodiments of the present disclosure may be implemented. In particular, the system (110), and as such, the AI engine (108) facilitates forecasting or predicting events of the computing devices (104) in the network architecture (100) based on monitoring data corresponding to usage of the computing devices (104) by the users (102) over a period of time.
[0063] Further, the system (110) may be operatively coupled to a server (112). In an embodiment, the computing devices (104) may be capable of data communications and information sharing with the server (112) through the network (106). In an embodiment, the server (112) may be a centralised server or a cloud-computing system or any device that is network connected.
[0064] In accordance with an embodiment of the present disclosure, each event at the computing devices (104) may be captured and recorded in a database (not shown in FIG. 1). In an embodiment, an event may be referred to as an act or activity performed by a user (102) at a computing device (104). It should be understood that the users (102) agree to provide data related to the triggered events at the computing devices (104). In particular, a set of data parameters may be collected at the database from the computing devices (104) based on the triggered events. In an embodiment, the system (110) may access the set of data parameters from the database via the server (112). For example, a user such as the user (102-1) may switch on a computing device such as the computing device (104-1) every day at a particular time instance. This data may be captured at the database accessible through the server (112).
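By way of illustration only, the following Python sketch shows one possible way in which a triggered event might be recorded as a set of data parameters. The field names (device_id, user_id, event_type, timestamp) and the in-memory list standing in for the database are assumptions made for this example and are not mandated by the disclosure.

```python
from dataclasses import dataclass, asdict
from datetime import datetime

@dataclass
class DeviceEvent:
    """One recorded event, i.e., an act performed by a user at a computing device."""
    device_id: str        # identifier of the computing device (104)
    user_id: str          # identifier of the user (102) who triggered the event
    event_type: str       # e.g., "switch_on", "switch_off"
    timestamp: datetime   # time instance at which the event was triggered

def record_event(database: list, event: DeviceEvent) -> None:
    """Append the event's data parameters to the store accessible via the server (112)."""
    database.append(asdict(event))

# Example: the user (102-1) switches on the computing device (104-1) at a particular time.
db: list = []
record_event(db, DeviceEvent("104-1", "102-1", "switch_on", datetime(2023, 5, 1, 18, 45)))
```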
[0065] In an embodiment, the system (110) may extract the recorded data from the server (112) (e.g., the database) and perform pre-processing on the set of data parameters in one or more batches. In an embodiment, the pre-processing of the set of data parameters may be performed by the AI engine (108) utilising one or more machine learning models such as, but not limited to, a long short-term memory (LSTM) machine learning model. A person of ordinary skill in the art will understand that the LSTM may be referred to as a recurrent neural network algorithm which is capable of handling long-term dependencies to identify patterns in series of events/data points. In an embodiment, the AI engine (108) performs pre-processing of the set of data parameters to form data in a proper time-series with equal intervals of time, for example, intervals of 1 minute. This model learning or training happens over a period of time in order to identify user patterns.
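A minimal sketch of the pre-processing step described above, assuming the recorded data parameters follow the illustrative schema from the previous example and that pandas is used to resample the events into a time series with equal 1-minute intervals; the actual pre-processing performed by the AI engine (108) may differ.

```python
import pandas as pd

def preprocess_batch(events: list, device_id: str) -> pd.Series:
    """Pre-process one batch of recorded data parameters for a single device
    into a time series with equal 1-minute intervals."""
    df = pd.DataFrame(events)
    df = df[df["device_id"] == device_id].copy()
    df["timestamp"] = pd.to_datetime(df["timestamp"])
    # Encode the device state numerically: 1 after "switch_on", 0 after "switch_off".
    df["state"] = (df["event_type"] == "switch_on").astype(int)
    series = (df.set_index("timestamp")["state"]
                .resample("1min")   # equal intervals of 1 minute
                .last()             # the last event observed within each interval
                .ffill()            # carry the last known state forward between events
                .fillna(0))
    return series
```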
[0066] In an embodiment, based on the pre-processing, the AI engine (108) predicts future events that may be triggered at the computing devices (104) by the users (102). In particular, the AI engine (108) determines one or more combinations of events based on the pre-processed set of data parameters. In an embodiment, the one or more combinations may include an event (or, action) and a frequency of occurrence of the action/event at the one or more computing devices (104) over a period of time. The AI engine (108), using appropriate machine learning models, identifies whether the determined one or more combinations are likely to be triggered at the computing devices (104) in future. In an embodiment, the AI engine (108) performs extrapolation of timestamps based on the determined one or more combinations of events. In an embodiment, the AI engine (108) determines if a probability of occurrence of the one or more combinations exceeds a threshold. In an embodiment, the threshold may be pre-determined by the system (110) based on historical usage data and the set of data parameters of the one or more computing devices (104). In an embodiment, the threshold may be a dynamically varying value determined through periodic computations based on multiple factors including the past event data, recent event data, correlation between the past predicted events and the actual events at a given time instance, any outliers in the event data, etc. In other embodiments, the threshold may be manually set by an administrator who may have the access rights to the system (110) to override the machine-set values of the threshold.
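The following sketch illustrates, under the same assumed schema, how combinations of an event and its frequency of occurrence might be determined and compared against a probability threshold. Grouping events by hour of day, the 30-day period, and the example threshold value of 0.8 are assumptions for illustration only and do not reflect the exact statistics used by the AI engine (108).

```python
from collections import Counter

def determine_combinations(events: list, period_days: int = 30) -> list:
    """Determine (event, frequency-of-occurrence) combinations over a period of time
    and estimate, for each combination, a probability of occurrence per day."""
    counts = Counter((e["device_id"], e["event_type"], e["timestamp"].hour) for e in events)
    combinations = []
    for (device_id, event_type, hour), frequency in counts.items():
        probability = frequency / period_days   # fraction of days the event occurred in this hour
        combinations.append({"device_id": device_id,
                             "event_type": event_type,
                             "hour": hour,
                             "frequency": frequency,
                             "probability": probability})
    return combinations

def select_recommendations(combinations: list, threshold: float = 0.8) -> list:
    """Keep only combinations whose probability of occurrence meets or exceeds the threshold."""
    return [c for c in combinations if c["probability"] >= threshold]
```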
[0067] Further, in an embodiment, the AI engine (108) may provide one or more recommendations for execution of forecasted events at the one or more computing devices (104). The one or more recommendations may include the determined one or more combinations that exceed the pre-determined threshold. The system (110) may dynamically control the one or more computing devices (104) through the network (106) based on the one or more recommendations provided by the AI engine (108). In an embodiment, the system (110) may automatically schedule the one or more recommendations for execution at the one or more computing devices (104) in the network (106).
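Continuing the same illustrative example, a recommended combination might be scheduled by extrapolating its time stamp to a future execution time. The extrapolate_schedule helper and the execute callable below are hypothetical placeholders for the scheduling and device-control actions performed by the system (110).

```python
from datetime import datetime, timedelta

def extrapolate_schedule(combination: dict, days_ahead: int = 1) -> datetime:
    """Extrapolate the time stamp of a recommended combination to a future execution time."""
    run_day = datetime.now().date() + timedelta(days=days_ahead)
    return datetime.combine(run_day, datetime.min.time()).replace(hour=combination["hour"])

def schedule_recommendations(recommendations: list, execute) -> None:
    """Automatically schedule each recommendation for execution at the forecasted time."""
    for rec in recommendations:
        when = extrapolate_schedule(rec)
        # 'execute' is a hypothetical callable that pushes the action to the device/gateway.
        execute(rec["device_id"], rec["event_type"], when)
```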
[0068] Therefore, it will be appreciated that the present disclosure improves the user experience by detecting and automatically executing one or more schedules (for example, for events or actions) at the computing devices (104) based on identified user patterns from historical usage data. Further, end user experience is improved as the present disclosure does not require end user intervention or awareness. More particularly, the present disclosure provides a customized user experience, based on the user’s past experiences and interactions in the network architecture (100). Furthermore, by dynamically controlling the computing devices (104) in the network (106), the present disclosure may also help in reducing power consumption in certain scenarios. For example, based on the set of data parameters collected and learning models utilized by the AI engine (108), the system (110) may automatically switch off the one or more computing devices (104) during night time to reduce the power consumption. Other like benefits and advantages may also be provided by the disclosed system.
[0069] Although FIG. 1 shows exemplary components of the network architecture (100), in other embodiments, the network architecture (100) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or alternatively, one or more components of the network architecture (100) may perform functions described as being performed by one or more other components of the network architecture (100).
[0070] FIG. 2 illustrates an exemplary representation (200) of the system (110), in accordance with embodiments of the present disclosure.
[0071] For example, the system (110) may include one or more processor(s) (202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the system (110). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as Random-Access Memory (RAM), or non-volatile memory such as Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory, and the like.
[0072] In an embodiment, the system (110) may include an interface(s) (206). The interface(s) (206) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as input/output (I/O) devices, storage devices, and the like. The interface(s) (206) may facilitate communication for the system (110). The interface(s) (206) may also provide a communication pathway for one or more components of the system (110). Examples of such components include, but are not limited to, processing unit/engine(s) (208) and a database (210).
[0073] The processing unit/engine(s) (208) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) (208). In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) (208) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) (208). In such examples, the system (110) may include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system (110) and the processing resource. In other examples, the processing engine(s) (208) may be implemented by electronic circuitry. In an aspect, the database (210) may comprise data that may be either stored or generated as a result of functionalities implemented by any of the components of the processor (202) or the processing engines (208).
[0074] In an embodiment, the processing engine (208) may include engines that receive data from one or more computing devices via a network such as the computing devices (104) via the network (106) (e.g., via the Internet) of FIG. 1, to index the data, to analyse the data, and/or to generate statistics based on the analysis or as part of the analysis. In an embodiment, the analysed data may be stored at the database (210). In an embodiment, the processing engine (208) may include one or more modules/engines such as, but not limited to, an acquisition engine (212), an AI engine (214), and other engine(s) (216). A person of ordinary skill in the art will understand that the AI engine (214) may be similar in its functionality with the AI engine (108) of FIG. 1, and hence, may not be described in detail again for the sake of brevity.
[0075] Referring to FIG. 2, the database (210) may store the data, i.e., a set of data parameters corresponding to events triggered at the one or more computing devices (104) in the network (106). In an embodiment, the database (210) may or may not reside in the system (110). In an embodiment, the system (110) may be operatively coupled with the database (210).
[0076] In an exemplary embodiment, the set of data parameters may include, but is not limited to, data indicating which computing devices (104) are active, which computing devices (104) are most active, the times at which individual computing devices (104) are most active, and location of individual computing devices (104), information indicating communications occurring between computing devices (104), etc. The set of data parameters may include, but is not limited to, information relating to which users such as the users (102) of FIG. 1 are interacting, how frequently users (102) are interacting with the computing devices (104), duration of such interactions, time period of the interactions, etc.
[0077] By way of example but not limitation, the one or more processor(s) (202) may detect when an event(s) is triggered at the one or more computing devices (104) by the one or more users (102). In an embodiment, this detection may occur, for example, by analysing microphone signals, detecting wireless signals, detecting an internet protocol (IP) address of a received signal, detecting operation of one or more computing devices (104) within a time window, or the like. Moreover, the one or more processor(s) (202) may include image recognition technology or the like to identify particular occupants or objects or users.
[0078] In an embodiment, the one or more processor(s) (202) may detect that an event is triggered at the one or more computing devices (104) based on one or more sensors (not shown in FIGs. 1 or 2). In an embodiment, the one or more computing devices (104) may include the one or more sensors. Alternatively, the one or more sensors may not reside in the one or more computing devices (104). In an embodiment, the one or more sensors may include any or a combination of Wi-Fi sensors, Zigbee sensors, Bluetooth low energy (BLE) sensors, or the like.
[0079] Further, in an embodiment, the one or more processor(s) (202) of the system (110) may cause the acquisition engine (212) to extract the set of data parameters from the database (210) for further analysis by the AI engine (214). In particular, the set of data parameters gained from various computing devices (104) may be analysed as a whole, for example, to identify various trends, user preferences, occurrence of events, etc. In an embodiment, the one or more processor(s) (202) may cause the AI engine (214) to pre-process the set of data parameters in one or more batches. As described with reference to FIG. 1 above, the AI engine (214) may utilise one or more machine learning models to pre-process the set of data parameters. In an embodiment, the AI engine (214) may perform pre-processing of the set of data parameters to form data in a proper time-series with equal intervals of time, for example, intervals of 1 minute. In an embodiment, results of the pre-processing or analysis may thereafter be transmitted back to the computing device (104), to other devices, to a server providing a web page to a user (102) of the computing device (104), or to other non-device entities.
[0080] In an embodiment, based on the pre-processing, the one or more processor(s) (202) may cause the AI engine (214) to predict future events that may be triggered at the computing devices (104) by the users (102). In particular, the AI engine (214) may determine one or more combinations of events based on the pre-processed set of data parameters. In an embodiment, the one or more combinations may include an event (or, action) and a frequency of occurrence of the action/event at the one or more computing devices (104) over a period of time. In an embodiment, the AI engine (214) may perform extrapolation of timestamps based on the determined one or more combinations of events.
[0081] In an embodiment, once the one or more combinations of events have been determined, the AI engine (214) may determine if a probability of occurrence of the one or more combinations meets or exceeds a threshold. For example, if the system (110) does not have a sufficient set of data parameters to establish a threshold level, the set of data parameters may be continuously collected over a period of time. In an embodiment, the threshold may be pre-determined by the system (110) based on historical usage data and the set of data parameters of the one or more computing devices (104).
[0082] Once the probability of occurrence of the one or more combinations meets, or in particular, exceeds the threshold, the AI engine (214) may provide one or more recommendations to be stored at the database (210) of the system (110) for execution at the one or more computing devices (104) at a later time instant. The one or more recommendations may include the determined one or more combinations that meet or exceed the pre-determined threshold. The system (110) may dynamically control the computing devices (104) through the network (106) based on the one or more recommendations provided by the AI engine (214). In an embodiment, the system (110) may automatically schedule the one or more recommendations for execution at the one or more computing devices (104) in the network (106).
[0083] By way of example but not limitation, the system (110) may schedule the one or more computing devices (104) to conserve power during night time, or when nobody is home or in a particular room, or to accord with user preferences. As another example, the system (110) may initiate one of the computing devices (104), for example, an alarm or security feature, if an unrecognized user is detected under certain conditions. A person of ordinary skill in the art will appreciate that these are mere examples, and in no way, comprise an exhaustive list of recommendations that may be provided by the system (110).
[0084] In some embodiments, multiple instances of the computing device (104) may interact with each other such that events triggered by a first computing device such as the computing device (104-1) of FIG. 1 influence actions of a second computing device such as the computing device (104-2) of FIG. 1. For example, whenever a user switches on a corridor light, the user may also switch on a main hall light. Based on pre-processing of this data set, the system (110) including the AI engine (214) may provide a recommendation of an event or an action, i.e., to switch on the main hall light once the user switches on the corridor light. In this embodiment, based on the set of data parameters, pre-processing, and determination of one or more combinations, the system (110) may append a recommended chain of events (or, combinations) to a triggered main event (in this case, switching on of the corridor light by the user). That is, for any number of sequential events (which may or may not be triggered at the same computing device), if the probability of their occurrence meets the pre-determined threshold, the system (110) may recommend the sequential events to be executed at the one or more computing devices (104). In an embodiment, a set of events occurring sequentially or chronologically in a particular time period may be referred to as a scene.
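A simplified sketch of how a chain of combinations (a scene) such as the corridor-light/main-hall-light example might be detected from the recorded events is given below. The two-minute follow-on window and the 0.8 threshold are illustrative assumptions, and the event schema is the hypothetical one used in the earlier sketches.

```python
from datetime import timedelta

def detect_chained_events(events: list, window=timedelta(minutes=2), threshold: float = 0.8) -> list:
    """Identify chains of events (scenes), e.g., 'main hall light on' consistently
    following 'corridor light on' within a short time window."""
    events = sorted(events, key=lambda e: e["timestamp"])
    follow_counts, trigger_counts = {}, {}
    for i, first in enumerate(events):
        trigger = (first["device_id"], first["event_type"])
        trigger_counts[trigger] = trigger_counts.get(trigger, 0) + 1
        for second in events[i + 1:]:
            if second["timestamp"] - first["timestamp"] > window:
                break   # events are sorted, so later ones are outside the window too
            pair = (trigger, (second["device_id"], second["event_type"]))
            follow_counts[pair] = follow_counts.get(pair, 0) + 1
    # Recommend a chain when the follow-on event occurs often enough after the trigger.
    return [pair for pair, n in follow_counts.items()
            if n / trigger_counts[pair[0]] >= threshold]
```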
[0085] A person of ordinary skill in the art will appreciate that the exemplary representation (200) may be modular and flexible to accommodate any kind of changes in the system (110). In an embodiment, the data may be collected meticulously and deposited in a cloud-based data lake to be processed to extract actionable insights. Therefore, the aspect of predictive maintenance can be accomplished.
[0086] FIG. 3 illustrates an exemplary block diagram representation of a network architecture (300) for the system (110) implemented in an IoT cloud platform, in accordance with an embodiment of the present disclosure.
[0087] In an embodiment, the network architecture (300) may include an IoT gateway (302) communicatively connected to the system (110) implemented in an IoT cloud platform. The IoT gateway (302) may be communicatively connected to devices of the IoT environment. The IoT environment may include wireless devices such as, but not limited to, ZigBee devices (310), Internet Protocol (IP) devices (312), Bluetooth Low Energy (BLE) devices (314), Narrow Band Internet of Things (NB-IoT)/Long Range (LoRa) devices (316), and the like. Further, the IoT environment may include wired devices such as, but not limited to, instrumentation and control devices using bus protocol, supervisory control and data acquisition (SCADA) devices, and the like.
[0088] In an embodiment, the IoT gateway (302) may include a smart scene managing engine (304), a sensor profile managing engine (306), and a firmware upgrade controller (308).
[0089] Further, the IoT gateway (302) may be connected to the system (110) (i.e., IoT cloud platform) through a cloud Application Programming Interface (API) (318) associated with the system (110). The IoT cloud platform may include an analytics engine (332), a rule engine (320), a firmware upgrading engine (322), an access control engine (324), a reinforcement learning engine (326), a scene managing engine (328), and a notification service engine (330). The processor (202) may execute the one or more engines for generating the smart scenes.
[0090] In an embodiment, the processor (202) may execute the analytics engine (332) to analyse the historical event data and the usage pattern data corresponding to the IoT devices (104). In an embodiment, the processor (202) may execute the reinforcement learning engine (326) to update smart scenes using a reinforcement learning ML technique. The smart scenes may be stored in the database (210). The smart scenes may be transmitted to the IoT gateway (302) to be implemented in the one or more IoT devices (104). The smart scene managing engine (304) associated with the IoT gateway (302) may implement the smart scenes in the one or more IoT devices (104).
[0091] In an embodiment, the rule engine (320) may use the event data such as a device context to determine dynamic state of each of the one or more IoT devices (104). In an embodiment, the rule engine (320) may derive automatic rules based on device capabilities of the one or more IoT devices (104). In an embodiment, the rule engine (320) may enable rule triggers for the IoT devices (104) for executing the smart scenes.
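By way of illustration, a rule engine of the kind described above might derive a rule from each smart scene and trigger the chained actions when the matching event is reported. The data structures and the dispatch callable below are assumptions made for this sketch, not the actual interfaces of the rule engine (320).

```python
def derive_rules(smart_scenes: list) -> list:
    """Derive automatic rules from smart scenes: each rule maps a trigger event to follow-on actions."""
    rules = []
    for scene in smart_scenes:
        trigger, *actions = scene   # the first event of the scene acts as the rule trigger
        rules.append({"trigger": trigger, "actions": actions})
    return rules

def on_event(event: dict, rules: list, dispatch) -> None:
    """Rule trigger: when a device reports an event matching a rule, execute the chained actions."""
    for rule in rules:
        if (event["device_id"], event["event_type"]) == rule["trigger"]:
            for device_id, event_type in rule["actions"]:
                dispatch(device_id, event_type)   # e.g., forward the action to the IoT gateway (302)
```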
[0092] FIG. 4 illustrates an exemplary architecture (400) in which or with which the embodiments of the present disclosure may be implemented.
[0093] Referring to FIG. 4, the architecture (400) comprises one or more sensors (402) communicatively coupled to a server (404). A person of ordinary skill in the art will appreciate that the functionality of the server (404) may be similar to the server (112) of FIG. 1, and hence, may not be described in detail again for the sake of brevity.
[0094] In an embodiment, the one or more sensors (402) may be associated with one or more computing devices in a network such as the one or more computing devices (104) in the network (106) of FIG. 1. In an embodiment, the one or more sensors (402) may produce a set of data parameters when an event or action is triggered at the one or more computing devices (104).
[0095] The one or more sensors (402), in certain embodiments, may detect various properties such as acceleration, temperature, humidity, water, supplied power, proximity, external motion, device motion, sound signals, ultrasound signals, light signals, fire, smoke, carbon monoxide or other gas, GPS signals, radio-frequency (RF), other electromagnetic signals or fields, or the like. In an exemplary embodiment, the one or more sensors (402) may include Wi-Fi sensors, Zigbee sensors, BLE sensors, or the like. In an embodiment, the one or more sensors (402) may or may not be housed within the one or more computing devices (104). In an embodiment, the one or more sensors (402) may be provided on the one or more computing devices (104) and may be incorporated in and/or embody an IoT system, such as that explained with reference to FIG. 3. While FIG. 4 illustrates an embodiment with one sensor block, many embodiments may include as few as a single sensor, or may include multiple sensor blocks each containing one or more sensors, up to and including multiple blocks each containing entire arrays of sensors.
[0096] Referring to FIG. 4, the set of data parameters produced by the one or more sensors (402) may be stored at the server (404). In an embodiment, the server (404) may be a centralised server or a cloud-computing system or any other device that is network-connected. A person of ordinary skill in the art will understand that the server (404) may be similar in its functionality with the server (112) of FIG. 1 and/or with the database (210) of FIGs. 2-3, and hence, may not be described in detail again for the sake of brevity.
[0097] In an exemplary embodiment, the set of data parameters may include, but is not limited to, data indicating which computing devices (104) are active, which computing devices (104) are most active, the times at which individual computing devices (104) are most active, and location of individual computing devices (104), information indicating communications occurring between computing devices (104), etc. The set of data parameters may include, but is not limited to, information relating to which users are interacting, how frequently users are interacting with the computing devices (104), duration of such interactions, time period of the interactions, etc.
[0098] Referring to FIG. 4, in an embodiment, one or more gateways, for example, an analytics gateway (406) may be utilized to facilitate communication with the one or more computing devices (104) and/or the one or more sensors (402) within a given architecture (400). A person of ordinary skill in the art will understand that the analytics gateway (406) may be similar to the IoT gateway (302) of FIG. 3. In an embodiment, the analytics gateway (406) may be utilized to provide a mechanism to communicate with the one or more computing devices (104) which may possess limited or proprietary communications capabilities.
[0099] The analytics gateway (406) may be coupled to an event streaming platform (408). A person of ordinary skill in the art will appreciate that event streaming may refer to capturing data in real-time from event sources, such as, but not limited to, databases, sensors, computing devices, and the like, in the form of streams of events, storing these event streams durably for later retrieval, manipulating, processing, and reacting to the event streams in real-time as well as retrospectively, and routing the event streams to different destination technologies as needed. Thus, the event streaming platform (408) ensures a continuous flow and interpretation of data, in particular, the set of data parameters.
[00100] In an embodiment, once an event(s) is triggered at the one or more computing devices (104) in the network (106), the one or more sensors (402) may produce a set of data parameters associated with the triggered event(s). The analytics gateway (406) may access the set of data parameters from the server (404) and communicate the same to the event streaming platform (408).
[00101] Referring to FIG. 4, the event streams, thus captured or recorded, are passed on to a stream analytics module (410) for further analytics. A person of ordinary skill in the art will understand that the stream analytics module (410) may be similar to the analytics engine (332) of FIG. 3. In particular, the stream analytics module (410) performs pre-processing on the event streams (e.g., the set of data parameters) in one or more batches. In an embodiment, the stream analytics module (410) may pre-process the event streams utilising one or more machine learning models such as, but not limited to, the LSTM machine learning model from an analytics engine (414).
[00102] In an embodiment, based on the pre-processing of the event streams, the stream analytics module (410) predicts future events that may be triggered at the computing devices (104) by users such as the users (102) of FIG. 1. In particular, the stream analytics module (410) determines one or more predicted events that may be executed at the one or more computing devices (104). In particular, based on analysing the set of data parameters and historical usage data of the one or more computing devices (104), the stream analytics module (410) determines one or more combinations, which may include an event triggered at a computing device and a frequency of occurrence of that event over a period of time. In an embodiment, a combination may include any number of events performed by one or more computing devices (104) in a particular time period. In an embodiment, the stream analytics module (410) performs extrapolation of timestamps based on the determined one or more combinations of events. In an embodiment, the stream analytics module (410) may determine intersection events, i.e., an event triggered by one computing device followed by another computing device in a pre-determined threshold time.
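A minimal sketch of counting intersection events for a selected pair of computing devices within a pre-determined threshold time, consistent with the method summarised earlier, is shown below; the five-minute threshold time and the event schema are illustrative assumptions. If the resulting count exceeds the threshold, the pair may be recommended for a forecasted event; otherwise another pair of devices may be selected.

```python
from datetime import timedelta

def count_intersection_events(events: list, device_a: str, device_b: str,
                              threshold_time=timedelta(minutes=5)) -> int:
    """Count intersection events for a selected pair of devices: an event at device_a
    followed by an event at device_b within the pre-determined threshold time."""
    a_times = sorted(e["timestamp"] for e in events if e["device_id"] == device_a)
    b_times = sorted(e["timestamp"] for e in events if e["device_id"] == device_b)
    count = 0
    for ta in a_times:
        if any(ta < tb <= ta + threshold_time for tb in b_times):
            count += 1
    return count
```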
[00103] Referring to FIG. 4, the one or more combinations of events determined by the stream analytics module (410) may be stored at a database (412) or DB (412). It will be appreciated that the terms “database” and “DB” have the same meaning throughout the disclosure. A person of ordinary skill in the art will understand that the database (412) may be similar to the database (210) of FIGs. 2-3. As shown in FIG. 4, the analytics engine (414) utilises machine learning models to perform further analysis on the determined one or more combinations of events from the database (412). This model learning or training happens over a period of time in order to identify user patterns. It may be appreciated that the analytics engine (414) may be similar to the analytics engine (332) and/or the reinforcement learning engine (326) of FIG. 3.
[00104] Further, in an embodiment, the analytics engine (414) may generate one or more recommendations for predicted or forecasted events based on the determined one or more combinations. In an embodiment, the analytics engine (414) compares a probability of occurrence of the determined one or more combinations of events with a pre-determined threshold. If the probability for a combination, for example, an intersection event corresponding to a pair of computing devices, exceeds the pre-determined threshold, then the analytics engine (414) may generate one or more recommendations for the same.
[00105] By way of example and not limitation, suppose a user switches on a main hall light every day between 06:30 PM and 07:00 PM and switches it off around 11:00 PM. The stream analytics module (410) records this data over a period of time, and the analytics engine (414), equipped with a machine learning model such as, but not limited to, an LSTM machine learning model, predicts an appropriate time for switching the main hall light on and off. For example, there are at least two approaches to predicting future events: (a) predict once a week and use the same predictions for all 7 days, which may give less accuracy for the days towards the end of the week, or (b) predict only for the next day, which may be more accurate but require more computational resources. A person of ordinary skill in the art will appreciate that these are mere examples and in no way comprise an exhaustive list of recommendations that may be provided by the analytics engine (414).
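Purely as an illustrative sketch of approach (b), the following example predicts the next day's switch-on time (in minutes past midnight) from a window of recent days using an LSTM; the use of tf.keras, the layer sizes, and the seven-day lookback window are assumptions for this example, not the claimed implementation.

```python
# Illustrative sketch of approach (b): predict only the next day's switch-on
# time from the last LOOKBACK days. Layer sizes, window length, and the use
# of tf.keras are assumptions, not the claimed implementation.
import numpy as np
import tensorflow as tf

LOOKBACK = 7  # last 7 days of switch-on times, expressed in minutes past midnight


def build_model() -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(LOOKBACK, 1)),
        tf.keras.layers.LSTM(16),
        tf.keras.layers.Dense(1),  # predicted switch-on time for the next day
    ])
    model.compile(optimizer="adam", loss="mse")
    return model


def make_windows(times: np.ndarray):
    """Slice a history of daily switch-on times into (window, next value) pairs."""
    x = np.array([times[i:i + LOOKBACK] for i in range(len(times) - LOOKBACK)])
    y = times[LOOKBACK:]
    return x[..., None], y  # shape (N, LOOKBACK, 1) inputs and (N,) targets
```

In such a sketch, the model would be retrained or fine-tuned daily on the latest history, trading the extra computation of approach (b) for the improved accuracy noted above.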
[00106] Referring to FIG. 4, the one or more recommendations generated by the analytics engine (414) may be provided to the stream analytics module (410) and to the server (404) via the analytics gateway (406). The analytics engine (414) may dynamically control the sensors (402) by providing the one or more recommendations for execution of the predicted or forecasted events at the one or more computing devices (104) in the network (106).
[00107] In this way, a notification may be sent to a user such as the user (102), where the notification may correspond to the one or more recommendations for execution of the predicted or forecasted events at the one or more computing devices (104).
[00108] A person of ordinary skill in the art will appreciate that the architecture (400) may be modular and flexible to accommodate any kind of changes. Although FIG. 4 shows exemplary components of the architecture (400), in other embodiments, the architecture (400) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 4. Additionally, or alternatively, one or more components of the architecture (400) may perform functions described as being performed by one or more other components of the architecture (400).
[00109] FIG. 5 illustrates a sequence diagram of a network architecture (500) in which or with which embodiments of the present disclosure may be implemented.
[00110] Referring to FIG. 5, the network architecture (500) comprises devices, applications, and the like. For example, the network architecture (500) comprises a user(s) (502), a device(s) (504), a gateway (506), utils (508), server Application Programming Interfaces (APIs) (510), a database (512), and an analytics engine (514). A person of ordinary skill in the art will understand that the user(s) (502) and the device(s) (504) may be similar in their functionality to the one or more users (102) and the one or more computing devices (104) of FIG. 1, respectively, and hence, may not be described in detail again for the sake of brevity. Further, a person of ordinary skill in the art will understand that the gateway (506) and the analytics engine (514) may be similar in their functionality to the analytics gateway (406) and the analytics engine (414) of FIG. 4, respectively, and hence, may not be described in detail again for the sake of brevity. Furthermore, a person of ordinary skill in the art will understand that the database (512) may be similar to the database (210) of FIGs. 2-3 and the database (412) of FIG. 4 in its functionality, and hence, may not be described in detail again for the sake of brevity.
[00111] Referring to FIG. 5, the user(s) (502) may be associated with the device(s) (504). Once an event is triggered at the device(s) (504) by the user(s) (502), data corresponding to the triggered event is transmitted to the gateway (506) in step 1. Further, in steps 2-4, the data is transmitted via utils (508) and server APIs (510) to the database (512). In an embodiment, the gateway (506), the utils (508), and the server APIs (510) may be utilized to provide a mechanism to communicate with the device(s) (504) which may possess limited or proprietary communications capabilities. In an embodiment, in steps 3 and 4, the utils (508) and the server APIs (510) may map the incoming data to appropriate backend variables in an event payload.
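As a non-limiting illustration of this mapping step, the sketch below translates vendor-specific keys of an incoming message into backend variable names in an event payload; the key names shown are hypothetical and serve only to illustrate the idea.

```python
# A minimal sketch, assuming a hypothetical field mapping: the utils and
# server APIs translate vendor-specific keys in an incoming message into
# the backend variables expected in an event payload. Key names are
# illustrative only.
FIELD_MAP = {
    "devId": "device_id",
    "evt": "event",
    "ts": "timestamp",
}


def to_event_payload(raw_message: dict) -> dict:
    """Map incoming keys to backend variable names, dropping unknown fields."""
    return {backend_key: raw_message[raw_key]
            for raw_key, backend_key in FIELD_MAP.items()
            if raw_key in raw_message}
```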
[00112] Further, in step 4, the data event is stored at the database (512). The analytics engine (514) extracts the stored data from the database (512) in step 5 for further analysis to forecast device events. In an embodiment, the data may be stored at the database (512) for a particular period of time for analysis.
[00113] The analytics engine (514) utilises an appropriate machine learning model to pre-process the extracted data and determine one or more combinations of events for the device(s) (504). In a further embodiment, the analytics engine (514) generates one or more recommended actions based on the determined one or more combinations. In some embodiments, the device(s) (504) may interact with other device(s) (504) such that events detected by a first device influence actions of a second device.
[00114] Referring to FIG. 5, the one or more recommended actions may be stored at the database (512) in step 6. Thereafter, the one or more recommended actions may be sent through the server APIs (510) (step 7) and the utils (508) (step 8), to the gateway (506) (step 9). Further, in an embodiment, the one or more recommended actions corresponding to predicted events may be executed at the device(s) (504).
[00115] Thus, the present disclosure allows for dynamic control of the device(s) (504) based on the set of data parameters and historical usage data related to user patterns learned by the analytics engine (514).
[00116] A person of ordinary skill in the art will appreciate that the architecture (500) may be modular and flexible to accommodate any kind of changes. Although FIG. 5 shows exemplary components of the architecture (500), in other embodiments, the architecture (500) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 5. Additionally, or alternatively, one or more components of the architecture (500) may perform functions described as being performed by one or more other components of the architecture (500).
[00117] FIG. 6 illustrates an example method (600) for providing one or more recommendations of predicted events at one or more computing devices in a network, in accordance with an embodiment of the present disclosure.
[00118] At step 602, the method (600) may include considering all the possible pairs of computing devices in a network such as the one or more computing devices (104) in the network (106) of FIG. 1. It will be appreciated that any number of computing devices (104) may exist in the network (106). Further, at step 604, the method (600) may include selecting a pair of computing devices (104). In an embodiment, the pair of computing devices (104) may be selected based on events triggered at each of the computing devices (104) in the network (106).
[00119] Further, at step 606, the method (600) may include counting the number of intersection events for the selected pair of computing devices (104). An intersection event may be referred to as an event triggered by one computing device followed by an event triggered by another computing device of the selected pair of computing devices in a pre-determined threshold time.
[00120] At step 608, the method (600) may include determining if the count of intersection events for the selected pair of computing devices (104) exceeds a pre-determined threshold. In an embodiment, the threshold may be based on data collected from the computing devices (104) in the network (106) over time, i.e., historical usage data of the computing devices (104).
[00121] If the count of intersection events for the selected pair of computing devices (104) does not exceed the pre-determined threshold, the method (600) may include selecting (step 614) a next pair of computing devices (104) in the network (106). Alternatively, or additionally, if the count of intersection events for the selected pair of computing devices (104) exceeds the pre-determined threshold, the method (600) may include adding (step 610) the selected pair of computing devices (104) to a desired combination list. In an embodiment, the desired combination list may include one or more combinations, where the one or more combinations may include an event triggered at one or more computing devices (104) and a frequency of occurrence of the event over a particular period of time. For example, the one or more combinations may include an event triggered by a first computing device followed by an event triggered by a second computing device in a particular time period.
[00122] Further, at step 612, the method (600) may include creating a chain scene as a recommendation and sending the recommendation to the selected pair of computing devices (104) for execution. In an embodiment, the recommendation may be sent to one or more sensors such as one or more sensors (402) of FIG. 4 deployed in the network (106) for executing the recommendation at the selected pair of computing devices (104).
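For illustration only, the following sketch condenses steps 602 to 614 into code, assuming each device's events are represented by a list of timestamps and using an illustrative five-minute intersection window and a count threshold of 5; it is a simplified reading of the method (600), not the claimed implementation.

```python
# Condensed sketch of steps 602-614 under stated assumptions: events_by_device
# maps a device identifier to a list of datetime timestamps; the 5-minute
# window and count threshold of 5 are illustrative values.
from datetime import timedelta
from itertools import permutations


def count_intersections(events_a, events_b, window=timedelta(minutes=5)) -> int:
    """Count events on the first device followed by an event on the second device within the window."""
    return sum(
        1
        for t_a in events_a
        if any(timedelta(0) <= t_b - t_a <= window for t_b in events_b)
    )


def desired_combinations(events_by_device: dict, threshold: int = 5):
    """Keep ordered device pairs whose intersection count exceeds the threshold."""
    desired = []
    for dev_a, dev_b in permutations(events_by_device, 2):   # step 602: all possible pairs
        count = count_intersections(events_by_device[dev_a],
                                    events_by_device[dev_b])  # step 606: count intersections
        if count > threshold:                                 # step 608: compare with threshold
            desired.append((dev_a, dev_b, count))             # step 610: desired combination list
    return desired  # each entry may then become a chain-scene recommendation (step 612)
```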
[00123] In an exemplary embodiment, the method (600) may be explained in detail considering the following example in Table 1.
| Combination | Triggering Device | Triggering Event | Chain Device | Chain Event | Time Cluster | Average Intersection Event |
|---|---|---|---|---|---|---|
| 1 | A | 1 | B | 1 | T1 | 7 |
| 2 | A | 1 | B | 1 | T2 | 2 |
| 3 | B | 1 | C | 1 | T1 | 5 |
| 4 | B | 1 | C | 1 | T2 | 6 |
Table 1
[00124] Consider that there are three computing devices in a network, A, B, and C. For each of combinations 1, 2, 3, and 4, an average intersection event value has been determined. If an average intersection event threshold of 5 is considered, it may be determined from the above table that three out of four combinations have an average intersection event value greater than or equal to the threshold. For example, combinations 1, 3, and 4 have average intersection event values of 7, 5, and 6, respectively, each of which is greater than or equal to 5.
[00125] Further, it may be determined that combinations 1 and 3 create a chain scene in the same time cluster, i.e., T1. For example, a trigger event by device A is followed by a chain event by device B, which in turn is followed by a chain event by device C, all within the same time cluster T1. Another combination, combination 4, may be determined in time cluster T2, such that a trigger event by device B is followed by a chain event by device C in the time cluster T2.
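For illustration only, the following sketch reproduces the worked example of Table 1 in code: combinations whose average intersection event value is at least the threshold of 5 are retained and then grouped by time cluster to form chain scenes. The dictionary representation of the table is an assumption for this example.

```python
# Sketch of the worked example: filter Table 1 by the average intersection
# event threshold (5), then group the surviving combinations by time cluster
# to form chain scenes.
TABLE_1 = [
    {"combination": 1, "trigger": "A", "chain": "B", "cluster": "T1", "avg": 7},
    {"combination": 2, "trigger": "A", "chain": "B", "cluster": "T2", "avg": 2},
    {"combination": 3, "trigger": "B", "chain": "C", "cluster": "T1", "avg": 5},
    {"combination": 4, "trigger": "B", "chain": "C", "cluster": "T2", "avg": 6},
]

THRESHOLD = 5
kept = [row for row in TABLE_1 if row["avg"] >= THRESHOLD]   # combinations 1, 3, and 4

chains = {}
for row in kept:
    chains.setdefault(row["cluster"], []).append((row["trigger"], row["chain"]))

# chains == {"T1": [("A", "B"), ("B", "C")],   -> chain scene A -> B -> C in T1
#            "T2": [("B", "C")]}               -> chain scene B -> C in T2
```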
[00126] A person of ordinary skill in the art will appreciate that these are mere examples, and in no way, limit the scope of the present disclosure.
[00127] FIGs. 7A and 7B illustrate exemplary representations of historical data and forecasted event(s).
[00128] For example, FIGs. 7A and 7B depict historical events recorded over a period of time, such as from D1 to DN at T1 to TN, and forecasted event(s) predicted utilizing the proposed methods and systems.
[00129] FIG. 8 illustrates an exemplary computer system (800) in which or with which embodiments of the present disclosure may be utilized. As shown in FIG. 8, the computer system (800) may include an external storage device (810), a bus (820), a main memory (830), a read-only memory (840), a mass storage device (850), communication port(s) (860), and a processor (870). A person skilled in the art will appreciate that the computer system (800) may include more than one processor and communication ports. The processor (870) may include various modules associated with embodiments of the present disclosure. The communication port(s) (860) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port(s) (860) may be chosen depending on a network, such as a Local Area Network (LAN), a Wide Area Network (WAN), or any network to which the computer system (800) connects. The main memory (830) may be random access memory (RAM), or any other dynamic storage device commonly known in the art. The read-only memory (840) may be any static storage device(s) including, but not limited to, Programmable Read Only Memory (PROM) chips for storing static information, e.g., start-up or basic input/output system (BIOS) instructions for the processor (870). The mass storage device (850) may be any current or future mass storage solution, which may be used to store information and/or instructions.
[00130] The bus (820) communicatively couples the processor (870) with the other memory, storage, and communication blocks. The bus (820) can be, e.g., a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, a Small Computer System Interface (SCSI) bus, a universal serial bus (USB), or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor (870) to the computer system (800).
[00131] Optionally, operator and administrative interfaces, e.g. a display, keyboard, and a cursor control device, may also be coupled to the bus (820) to support direct operator interaction with the computer system (800). Other operator and administrative interfaces may be provided through network connections connected through the communication port(s) (860). In no way should the aforementioned exemplary computer system (800) limit the scope of the present disclosure.
[00132] Thus, the present disclosure enables creation of smart scenes and schedules to automatically execute at one or more devices in a network. The present disclosure facilitates an improved user experience, as user intervention or awareness is not required. For example, a coffee machine may start brewing just when a user thinks it is a good time for an espresso, office lights may dim when it is sunny and workers do not need them, a user’s favourite music application may play a tune depending on the user’s mood, or a user’s car may suggest an alternative route when the user hits a traffic jam. Various other such benefits and advantages may be provided with the implementation of the present disclosure.
[00133] While considerable emphasis has been placed herein on the preferred embodiments, it will be appreciated that many other embodiments can be made and that many changes can be made to the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
ADVANTAGES OF THE PRESENT DISCLOSURE
[00134] The present disclosure provides a system and a method for creation of smart scenes and schedules to be automatically executed at one or more computing devices in a network.
[00135] The present disclosure provides automation and forecasting of device schedules based on historical usage data for individual computing devices.
[00136] The present disclosure provides improved user experience by dynamically creating device schedules based on historical usage data for individual computing devices.
[00137] The present disclosure provides a system and a method that identifies hidden activities being performed consistently by users associated with respective computing devices.
[00138] The present disclosure dynamically controls computing devices, which in turn helps in reducing power consumption.
CLAIMS:
1. A system (110) for providing one or more recommendations for executing forecasted events at one or more computing devices (104) in a network (106), said system (110) comprising:
one or more processors (202); and
a memory (204) operatively coupled to the one or more processors (202), wherein the memory (204) comprises processor-executable instructions, which on execution, cause the one or more processors (202) to:
record a set of data parameters corresponding to one or more events triggered at the one or more computing devices (104) in the network (106);
pre-process the recorded set of data parameters in one or more batches;
determine at least one combination based on the pre-processed set of data parameters, wherein the at least one combination comprises an event of the one or more events and a frequency of occurrence of the event over a first period of time; and
in response to said determination, provide the one or more recommendations for execution of the forecasted events at the one or more computing devices (104) in the network (106), wherein the one or more recommendations comprise the at least one combination.
2. The system (110) as claimed in claim 1, wherein the memory (204) comprises processor-executable instructions, which on execution, cause the one or more processors (202) to determine whether a probability of occurrence of the at least one combination is equal to or exceeds a threshold.
3. The system (110) as claimed in claim 2, wherein the memory (204) comprises processor-executable instructions, which on execution, cause the one or more processors (202) to provide the one or more recommendations for execution of the forecasted events at the one or more computing devices (104) in response to a determination that the probability of occurrence of the at least one combination is equal to or exceeds the threshold.
4. The system (110) as claimed in claim 1, wherein the memory (204) comprises processor-executable instructions, which on execution, cause the one or more processors (202) to provide the one or more recommendations based on extrapolation of time stamps in accordance with the at least one combination.
5. The system (110) as claimed in claim 1, wherein the memory (204) comprises processor-executable instructions, which on execution, cause the one or more processors (202) to determine the at least one combination by being configured to determine at least one chain of combinations over the first period of time, wherein the at least one chain of combinations comprises more than one combination of the determined at least one combination, and wherein the one or more recommendations comprise the at least one chain of combinations.
6. The system (110) as claimed in claim 1, wherein the set of data parameters corresponding to the one or more events triggered at the one or more computing devices (104) comprises historical usage data of the one or more computing devices (104).
7. A method (500) for providing one or more recommendations for executing forecasted events, said method (500) comprising:
recording, by one or more processors (202), a set of data parameters corresponding to one or more events triggered at one or more computing devices in a network;
selecting (504), by the one or more processors (202), a pair of the one or more computing devices in the network based on the recorded set of data parameters;
determining (506), by the one or more processors (202), a count of one or more intersection events corresponding to the selected pair of the one or more computing devices; and
providing (512), by the one or more processors (202), the one or more recommendations for execution of the forecasted events at the selected pair of the one or more computing devices in the network, wherein the one or more recommendations comprise the one or more intersection events.
8. The method (500) as claimed in claim 7, comprising determining (508), by the one or more processors (202), whether the count of the one or more intersection events corresponding to the selected pair of the one or more computing devices exceeds a threshold.
9. The method (500) as claimed in claim 8, comprising providing (512), by the one or more processors (202), the one or more recommendations for execution of the forecasted events at the selected pair of the one or more computing devices in response to determining (508), by the one or more processors (202), that the count of the one or more intersection events corresponding to the selected pair of the one or more computing devices exceeds the threshold.
10. The method (500) as claimed in claim 7, comprising providing (512), by the one or more processors (202), the one or more recommendations based on extrapolation of time stamps in accordance with the one or more intersection events.
11. The method (500) as claimed in claim 8, comprising selecting (514), by the one or more processors (202), another pair of the one or more computing devices in response to determining (508), by the one or more processors (202), that the count of the one or more intersection events corresponding to the selected pair of the one or more computing devices is less than the threshold.
12. The method (500) as claimed in claim 7, wherein the set of data parameters corresponding to the one or more events triggered at the one or more computing devices comprises historical usage data of the one or more computing devices.
13. A user equipment (UE) (104) for executing one or more recommendations of forecasted events, said UE (104) comprising:
one or more processors communicatively coupled to a system (110), wherein the one or more processors are configured to:
detect one or more events triggered at the UE (104) in a network (106); and
execute the one or more recommendations provided by the system (110),
wherein the system (110) comprises a processor (202) configured to:
record a set of data parameters corresponding to the one or more events triggered at the UE (104) in the network (106);
pre-process the recorded set of data parameters in one or more batches;
determine at least one combination based on the pre-processed set of data parameters, wherein the at least one combination comprises an event of the one or more events and a frequency of occurrence of the event over a first period of time; and
in response to said determination, provide the one or more recommendations for execution of the forecasted events at the UE (104) in the network (106), wherein the one or more recommendations comprise the at least one combination.