Abstract: The present disclosure relates to a smart fan system for monitoring, entertainment, and surveillance. The smart fan is configured with a visual subsystem and an audio subsystem to provide entertainment to users. Additionally, the smart fan system is configured with one or more sensors to achieve the desired ambient temperature in a predefined space. Further, the smart fan system is configured with an image acquisition unit to provide surveillance in the predefined space.
RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material, which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, IC layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
FIELD OF INVENTION
[0002] The embodiments of the present disclosure generally relate to a field of mechatronics systems, and more particularly, to a smart fan for monitoring and surveillance of an enclosed space.
BACKGROUND OF THE INVENTION
[0003] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is intended only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0004] Nuclear families are a growing phenomenon in today’s society, where parents have to multitask to manage their infants. Hence, baby monitors are being used to keep a watch on the infant. Commercially available infant monitors are expensive and have a short shelf life. Their installation is prone to technical glitches and consumes a lot of time. Further, these infant monitors are unstable and need to be maintained constantly. The infant monitors are also unadaptable and cannot be used with cloud-based surveillance systems. As the infant grows into a child, the monitor may not be able to adapt to the child’s interests and may require frequent upgrades. The upgrades may be expensive and add heavily to the cost of purchasing and maintaining the infant monitor.
[0005] Traditionally, conventional fans are manually operated through a button to maintain the required ambient temperature in homes and commercial spaces. There is a possibility that the user might forget to turn off the fan manually, which may lead to power wastage. Also, conventional fans have a speed controller that needs to be operated manually to achieve the required speed. Hence, a fan and a baby monitor exist as two separate individual systems with different functionalities, both of which may be required to provide a safe and comfortable ambience for the infant.
[0006] Hence, there is a need for a multifunctional system that incorporates value adding services such as monitoring, surveillance, entertainment and educational services and provides them at a low cost.
OBJECTS OF THE PRESENT DISCLOSURE
[0007] Some of the objects of the present disclosure, which at least one embodiment herein satisfies are as listed herein below.
[0008] It is an object of the present disclosure to provide a smart fan system for monitoring an infant while providing value addition through entertainment and educational services.
[0009] It is an object of the present disclosure to provide a smart fan with a ceiling projector that would be a source of entertainment to teenagers and to kids from the age of 4 to 5 years.
[0010] It is an object of the present disclosure to provide a smart fan system that provides a comfortable ambience to an infant while ensuring its safety.
[0011] It is an object of the present disclosure to provide a smart fan system that aids in monitoring the infant and reporting dangerous scenarios to a caregiver.
[0012] It is an object of the present disclosure to provide a smart fan system that facilitates voice commands and intelligence to monitor and provide ease of operation.
[0013] It is an object of the present disclosure to provide a smart fan system that uses Wi-Fi access point functionality and aids in integration with multiple devices for infant surveillance.
[0014] It is an object of the present disclosure to provide a smart fan system that prevents power wastage while providing high quality surveillance.
SUMMARY
[0015] This section is provided to introduce certain objects and aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0016] In an aspect, the present disclosure relates to a system including a processor coupled to a memory having instructions to be executed. The processor may receive a first signal from a first sensor coupled to the processor, where the first signal may be indicative of an ambient temperature in a predefined space. The processor may receive a second signal from a second sensor coupled to the processor, where the second signal may be indicative of an angular speed of a hub. The processor may receive a third signal from an image acquisition unit coupled to the processor, where the third signal may be indicative of one or more images of a region of interest in the predefined space. The processor may receive a fourth signal from an audio subsystem coupled to the processor, where the fourth signal may be indicative of a voice input of a user. The processor may receive a fifth signal from a visual subsystem coupled to the processor, where the fifth signal may be indicative of one or more themes to be displayed in the region of interest in the predefined space. Based on any or a combination of the first signal and the second signal, the processor may operate the hub at a first angular speed to obtain the desired ambient temperature in the predefined space. Additionally, the processor may, based on the third signal, monitor the region of interest through the image acquisition unit. Further, the processor may, based on any or a combination of the fourth signal and the fifth signal, display one or more theme images and generate a first sound. In an embodiment, the system may be configured to include a temperature sensor as the first sensor to measure the ambient temperature.
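By way of a simplified, non-limiting illustration, the following sketch shows one hypothetical way the above signal handling could be organized in software; all class, field, and function names (for example, Signals and SmartFanProcessor.step) are assumptions made only for this example and are not taken from the disclosure.

```python
# Illustrative sketch only: a hypothetical dispatch loop over the five signals
# described above. All names and constants are assumptions for this example.
from dataclasses import dataclass


@dataclass
class Signals:
    ambient_temp_c: float   # first signal: ambient temperature in the space
    hub_speed_rpm: float    # second signal: angular speed of the hub
    frame: bytes            # third signal: an image of the region of interest
    voice_command: str      # fourth signal: the user's voice input
    theme_request: str      # fifth signal: theme to be displayed


class SmartFanProcessor:
    def __init__(self, target_temp_c: float = 26.0):
        self.target_temp_c = target_temp_c

    def step(self, s: Signals) -> dict:
        """One pass over the five signals; returns the actions issued to the
        fan, camera, projector, and speaker."""
        actions = {}
        # First/second signal: nudge the hub speed toward the desired temperature.
        error = s.ambient_temp_c - self.target_temp_c
        actions["hub_rpm_setpoint"] = max(0.0, s.hub_speed_rpm + 40.0 * error)
        # Third signal: forward the captured frame for monitoring.
        actions["monitor_frame_bytes"] = len(s.frame)
        # Fourth/fifth signal: select a theme image and a first sound to render.
        if s.theme_request or s.voice_command:
            theme = s.theme_request or s.voice_command
            actions["display_theme"] = theme
            actions["play_sound"] = f"audio for '{theme}'"
        return actions


if __name__ == "__main__":
    proc = SmartFanProcessor()
    demo = Signals(29.5, 300.0, b"\x00" * 1024, "play jungle book theme", "")
    print(proc.step(demo))
```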
[0017] In an embodiment, the system may further include a switched mode power supply (SMPS) configured on the hub and integrated with the processor to optimize power efficiency of the processor.
[0018] In an embodiment, the system may be configured with one or more synchronized pico projectors on the visual subsystem. The processor may configure the one or more synchronized pico projectors to render, using a mobile application, the one or more themes on the ceiling of a room in the region of interest.
[0019] In an embodiment, the system may further include a microphone and a speaker on the audio subsystem. The processor may configure the microphone to receive the audio input from the user and the speaker to generate the audio output.
[0020] In an aspect, the system may include a fan assembly having a hub and one or more wings attached to the hub. Further, a first sensor may be attachably disposed on the circumference of the hub, where the first sensor may be configured to sense a first signal indicative of an ambient temperature in a predefined space. A second sensor may be operably coupled to a motor configured within the hub, where the second sensor may be configured to sense an angular speed of the motor driving the hub. An audio subsystem may be configured on the circumference of the hub to receive a voice input from a user and provide an audio output. Also, a visual subsystem may be configured on the circumference of the hub to display one or more themes in a region of interest in the predefined space. An image acquisition unit may be attachably disposed on the hub to capture one or more images in the region of interest in the predefined space. A processor may be communicatively coupled to the fan assembly, the first sensor, the second sensor, the audio subsystem, and the visual subsystem through a network. The processor may be configured with a memory, the memory having one or more instructions to be executed by the processor. The processor may be configured to receive the first signal from the first sensor and a second signal from the second sensor. The processor may operate the hub through the motor to obtain the desired ambient temperature in the predefined space, based on the first signal and the second signal. The processor may receive a third signal from the image acquisition unit and operate the image acquisition unit to capture the one or more images from the region of interest in the predefined space. The processor may receive a fourth signal from the audio subsystem and instruct the audio subsystem to execute a first set of actions based on a voice input provided by the user. The processor may receive a fifth signal from the visual subsystem and instruct the visual subsystem to execute a second set of actions based on the voice input of the user.
[0021] In an embodiment, the first set of actions executed by the audio subsystem may include playing any or a combination of a musical, a story and a sound effect.
[0022] In an embodiment, the second set of actions executed by the visual subsystem may include rendering any or a combination of the one or more themes such as a story character, a setting, a sound effect, a visual image, an animation, a video, and a story logic.
[0023] In an embodiment, the system may be further configured with a light source attachably disposed on the hub and configured by the processor to illuminate the region of interest. The light source may sense a sixth signal indicative of ambient light and illuminate the region of interest when the sixth signal is below a threshold value.
[0024] In an embodiment, the system may be configured to include an LED as the light source. The processor may be configured to operate the LED through an LED driver.
[0025] In an embodiment, the system may further comprise a database having a reference set of signals associated with the voice input of the user. The processor may be configured to compare the received fourth signal with the reference set of signals in the database to determine an action of the first set of actions to be performed.
[0026] In an embodiment, the system may further comprise a second database having a reference set of signals associated to the one or more theme images. The processor may be configured to compare the received fifth signal with the reference set of signals in the second database to determine an action of the second set of actions to be performed.
[0027] In an embodiment, the system may be configured to include a BLDC motor as the motor to drive the hub.
[0028] In an embodiment, the system may further include a Wi-Fi module configured within the hub to enable communication with one or more external devices.
[0029] In an embodiment, the one or more external devices may comprise any or a combination of a mobile device, a tablet, and a laptop.
[0030] In an embodiment, the system may be further configured to receive the third signal from the image acquisition unit and record one or more gestures of the user responsive to receiving the third signal.
[0031] In an embodiment, the system may be further configured with a third database having a reference set of signals associated with the one or more gestures of the user. The processor may be configured to compare the received third signal with the reference set of signals in the third database and identify the one or more gestures of the user.
[0032] In an embodiment, the system may be further configured to alert the user based on the identified one or more gestures through the one or more external devices.
[0033] In an aspect, a user equipment (UE) may be configured with one or more processors communicatively coupled to a processor comprised in the system. The one or more processors may be coupled with a memory that stores instructions to be executed by the processor of the UE. The UE may receive one or more parameters from the processor of the system, where the one or more parameters may be associated with a first signal, a second signal, a third signal, a fourth signal, and a fifth signal. The UE may modify the one or more parameters and transmit the modified one or more parameters to the processor of the system. The processor of the system may receive the modified one or more parameters from the processor of the UE and extract one or more of a modified first signal, a modified second signal, a modified third signal, a modified fourth signal, and a modified fifth signal from the modified one or more parameters. Further, the processor of the system may, based on any or a combination of the modified first signal and the modified second signal, operate the hub at a first angular speed to maintain a desired ambient temperature in a predefined space. Also, the processor of the system may, based on the modified third signal, monitor a region of interest in the predefined space through the image acquisition unit. Additionally, the processor of the system may, based on any or a combination of the modified fourth signal and the modified fifth signal, display one or more theme images and generate a first sound.
[0034] In an aspect, the method for the system may include a processor coupled to a memory with instructions to be executed by the processor. The method may include receiving, by the processor, a first signal from a first sensor. The first sensor may be coupled to the processor and the first signal may be indicative of an ambient temperature in a predefined space. The method may include receiving, by the processor, a second signal from a second sensor. The second sensor may be coupled to the processor and the second signal may be indicative of an angular speed of a hub. Also, the method may include receiving, by the processor, a third signal from an image acquisition unit. The image acquisition unit may be coupled to the processor and the third signal may be indicative of one or more images of a region of interest in the predefined space. The method may further include receiving, by the processor, a fourth signal from an audio subsystem. The audio subsystem may be coupled to the processor and the fourth signal may be indicative of a voice input of a user. The method may include receiving, by the processor, a fifth signal from a visual subsystem. The visual subsystem may be coupled to the processor and the fifth signal may be indicative of one or more themes to be displayed in the region of interest in the predefined space. The method may include operating, by the processor, based on any or a combination of the first signal and the second signal, the hub at a first angular speed to generate the desired ambient temperature in the predefined space. Further, the method may include monitoring, by the processor, based on the third signal, the region of interest through the image acquisition unit in the predefined space. Also, the method may include displaying, by the processor, based on any or a combination of the fourth signal and the fifth signal, one or more theme images, and generating a first sound.
BRIEF DESCRIPTION OF DRAWINGS
[0035] The accompanying drawings, which are incorporated herein and constitute a part of this invention, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present invention. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that such drawings include the electrical components, electronic components, or circuitry commonly used to implement such components.
[0036] FIG. 1 illustrates an exemplary network architecture (100) of the system, in accordance with an embodiment of the present disclosure.
[0037] FIG. 2 illustrates an exemplary representation (200) of system (110), in accordance with an embodiment of the present disclosure.
[0038] FIG. 3 illustrates an exemplary block diagram (300) depicting the proposed system, in accordance with an embodiment of the present disclosure.
[0039] FIG. 4 illustrates an exemplary representation (400) of a fan motor of the proposed system, in accordance with an embodiment of the present disclosure.
[0040] FIG. 5 illustrates an exemplary block diagram representation (500) of the proposed system with reference to FIG. 3, in accordance with an embodiment of the present disclosure.
[0041] FIG. 6 illustrates an exemplary computer system (600) of the present invention which can be utilized in accordance with embodiments of the present disclosure.
[0042] The foregoing shall be more apparent from the following more detailed description of the invention.
DETAILED DESCRIPTION OF INVENTION
[0043] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0044] Embodiments of the present invention may be provided as a computer program product, which may include a machine-readable storage medium tangibly embodying thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process. The machine-readable medium may include, but is not limited to, fixed (hard) drives, magnetic tape, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), and magneto-optical disks; semiconductor memories, such as ROMs, random access memories (RAMs), programmable read-only memories (PROMs), erasable PROMs (EPROMs), electrically erasable PROMs (EEPROMs), and flash memory; magnetic or optical cards; or other types of media/machine-readable medium suitable for storing electronic instructions (e.g., computer programming code, such as software or firmware).
[0045] The present disclosure relates to a system, and more particularly to a smart fan that can be ultra-low cost yet can have features that can monitor one or more users, especially babies, by acting as a baby monitor. The system can further act as a projector giving a 3D experience at home along with sound effects. The system can further act as a spy camera. Further, with temperature sensors, the system can facilitate regulating the speed of the fan based on the temperature of the surrounding environment.
[0046] Referring to FIG. 1, an exemplary network architecture (100) of the system is illustrated, in accordance with an embodiment of the present disclosure. As illustrated, the exemplary architecture (100) includes a system (110) that includes one or more components in communication with each other. One or more commands may be received from one or more computing devices (104-1, 104-2…104-n) (hereinafter interchangeably referred to as a smart computing device and collectively referred to as computing devices (104)). A user (102), or collectively the users (102), may interact with the system (110) by using their respective computing device (104). The computing device (104) and the system (110) may communicate with each other over a network (106). Examples of the computing devices (104) can include, but are not limited to, a computing device (104) associated with media entities and entertainment-based assets, the education sector, a smart phone, a portable computer, a personal digital assistant, a handheld phone, and the like. The users (102) can include babies, kids, young adults, parents, caregivers, babysitters, and the like. Additionally, the system (110) may be powered through a power supply (112).
[0047] In an embodiment, a hub (122) may be configured in the system (110) with multiple wings. The system may include a first sensor (118) that may be attachably disposed on the hub (122) to generate a first signal indicative of the ambient temperature in the predefined space. Further, the system (110) may include a second sensor (124) that may be operably coupled to a motor (126) configured within the hub (122). The second sensor (124) may be configured to sense an angular speed of the motor (126) driving the hub (122) and generate a corresponding signal. Further, the system (110) may include an audio subsystem (114) that may be configured on the circumference of the hub (122) to receive a voice input from the users (102) and provide an audio output. Further, the system (110) may include a visual subsystem (116) that may be configured on the circumference of the hub (122) to display one or more themes in a region of interest in a predefined space. A predefined space may include an immediate vicinity, an area of a room, or a field of view that includes a subject to be monitored. Additionally, a subject may include a child or an infant to be monitored in the predefined space. Further, the system (110) may include an image acquisition unit (108) that may be attachably disposed on the hub (122) to capture one or more images in the region of interest.
[0048] Further, the network (106) can be a wireless network, a wired network, a cloud, or a combination thereof that can be implemented as one of the different types of networks, such as an Intranet, BLUETOOTH, an MQTT broker cloud, a Local Area Network (LAN), a Wide Area Network (WAN), the Internet, and the like. Further, the network (106) can either be a dedicated network or a shared network. The shared network can represent an association of the different types of networks that can use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like. In an exemplary embodiment, the network (106) can be an HC-05 Bluetooth module, which is an easy-to-use Bluetooth SPP (Serial Port Protocol) module designed for transparent wireless serial connection setup.
[0049] In an embodiment, the first sensor (118) may be a temperature sensor that indicates the ambient temperature in the predefined space. In an embodiment, the first set of actions executed by the audio subsystem (114) may include playing any or a combination of a musical, a story and a sound effect. In an embodiment, the system (110) may comprise a database having a reference set of signals associated to the voice input of the users (102).
[0050] In an embodiment, the second set of actions executed by the visual subsystem (116) may include rendering any or a combination of the one or more themes such as a story character, a setting, a sound effect, a visual image, an animation, a video, and a story.
[0051] In an embodiment, the system may comprise a second database having a reference set of signals associated to the one or more theme images. The processor (202) may be configured to match the received fourth signal with the reference set of signals in the second database. For example, the second database may comprise a reference set of signals such as “play jungle book theme”, “play thomas the train”, etc. The users (102) may say “play jungle book theme”, and the processor (202) may match the voice input of the users (102) with the reference set of signals (“play jungle book theme” in this case). Further, the processor (202) may configure the visual subsystem (116) to project the jungle book theme, while the audio subsystem (114) may be configured to play the audio associated with the jungle book theme.
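By way of a simplified, non-limiting illustration, one hypothetical way to match a spoken command against such a reference set is sketched below; the command strings and the fuzzy matching via Python's difflib are assumptions made only for this example.

```python
# Illustrative sketch: matching a voice command against a hypothetical
# reference set of theme commands, as in the "play jungle book theme" example.
import difflib

REFERENCE_THEMES = {
    "play jungle book theme": ("jungle_book_images", "jungle_book_audio"),
    "play thomas the train": ("thomas_images", "thomas_audio"),
}


def match_theme_command(voice_input: str):
    """Return the (visual, audio) assets for the closest reference command,
    or None when no reference command is close enough."""
    candidates = difflib.get_close_matches(
        voice_input.lower().strip(), REFERENCE_THEMES.keys(), n=1, cutoff=0.6
    )
    return REFERENCE_THEMES[candidates[0]] if candidates else None


# The projector would render the first asset while the speaker plays the second.
print(match_theme_command("Play Jungle Book theme"))
```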
[0052] In an embodiment, the system may be further configured to receive the third signal from the image acquisition unit (108) and extract the first set of attributes indicative of one or more gestures of the users (102). Further, the system (110) may be configured with a third database having a reference set of signals associated with the one or more gestures of the users (102). The processor (202) may be configured to match the received third signal with the reference set of signals in the third database and identify the one or more gestures of the users (102).
[0053] In an embodiment, the system (110) may alert the users (102) based on the identified one or more gestures through the one or more external devices. For example, a baby might be in distress and crying loudly; the system (110) may identify the corresponding gesture and alert the users (102) through the computing devices (104).
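By way of a simplified, non-limiting illustration, the sketch below compares a gesture descriptor derived from the third signal against a small reference set and raises an alert; the feature vectors, the distance threshold, and the notify helper are hypothetical assumptions.

```python
# Illustrative sketch: nearest-reference gesture identification and alerting.
REFERENCE_GESTURES = {
    "crying": [0.9, 0.1, 0.8],             # hypothetical feature vectors
    "missing_from_region": [0.0, 0.0, 0.0],
}


def identify_gesture(descriptor, threshold=0.2):
    """Return the reference gesture closest to the descriptor, if close enough."""
    best, best_dist = None, float("inf")
    for name, ref in REFERENCE_GESTURES.items():
        dist = sum((a - b) ** 2 for a, b in zip(descriptor, ref)) ** 0.5
        if dist < best_dist:
            best, best_dist = name, dist
    return best if best_dist <= threshold else None


def notify(devices, message):
    # Stand-in for a push notification to the user's paired computing devices.
    for device in devices:
        print(f"alert -> {device}: {message}")


gesture = identify_gesture([0.85, 0.15, 0.78])
if gesture:
    notify(["parent_phone"], f"detected gesture: {gesture}")
```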
[0054] In an embodiment, the image acquisition unit (108) may be a camera configured to capture the one or more images in the region of interest available in the predefined space. In an exemplary embodiment, the system (110) may be operatively coupled to a camera (108) for monitoring. The camera (108) can have a physical lid that can cover the camera (108) to ensure privacy. The physical lid may be controlled by commands received by the system (110) from the users (102) through the computing devices (104).
[0055] In an embodiment, the system may be configured with a Wi-Fi module (316) communicatively coupled to the one or more external devices to enable access to the system (110). The Wi-Fi module (316) may pair the computing device (104) with the system (110) and enable the users (102) to configure the system (110) based on their requirements. In an embodiment, the one or more external devices include any or a combination of a mobile device, a tablet, and a laptop. For example, the users (102) may use the system (110) as an entertainment hub and configure the audio subsystem (114) and the visual subsystem (116) to play the audio and the one or more themes of their choice.
[0056] In an embodiment, the system (110) may be configured with a light source (120) attachably disposed on the hub (122) and configured by the processor (202) to illuminate the region of interest. The light source (120) may sense the sixth signal indicative of ambient light and send it to the processor (202). The processor (202) may configure the light source (120) to illuminate the region of interest when the sixth signal is below a threshold value. In an embodiment, the system (110) may include an LED (320) as the light source (120), wherein the processor (202) may be configured to operate the LED (320) through an LED driver (322).
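By way of a simplified, non-limiting illustration, the ambient-light threshold logic described above may be sketched as follows, assuming a hypothetical lux threshold and a stand-in for the LED driver.

```python
# Illustrative sketch: drive the LED only when ambient light is below a threshold.
AMBIENT_LIGHT_THRESHOLD_LUX = 15.0  # hypothetical threshold value


class LedDriver:
    def __init__(self):
        self.on = False

    def set_output(self, on: bool):
        self.on = on
        print("LED", "on" if on else "off")


def update_illumination(ambient_lux: float, driver: LedDriver):
    """Illuminate the region of interest only when the sixth signal is low."""
    driver.set_output(ambient_lux < AMBIENT_LIGHT_THRESHOLD_LUX)


driver = LedDriver()
update_illumination(8.2, driver)    # dark room   -> LED on
update_illumination(240.0, driver)  # bright room -> LED off
```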
[0057] In an embodiment, a cloud based platform (126) may be provided for communicating with the system (110). The cloud based platform (126) may include one or more processors (128) communicatively coupled to the processor (202), and a memory (130) storing a set of instructions that causes the one or more processors (128) to receive the one or more parameters from the processor (202). The one or more parameters may be associated with the first signal, the second signal, the third signal, the fourth signal, and the fifth signal received by the processor (202). Further, the cloud based platform (126) may train a model based on the one or more parameters, by an AI engine (328) configured in the processor (324), to generate an optimized model. Additionally, the cloud based platform (126) may recommend one or more threshold values for the one or more parameters based on the optimized model.
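By way of a simplified, non-limiting illustration, a cloud-side routine of this kind is sketched below; the per-temperature-band averaging stands in for the AI engine's model training and is an assumption made only for this example.

```python
# Illustrative sketch: "train" on logged (temperature, hub-speed) parameters and
# recommend a hub-speed value per 2-degree temperature band.
from collections import defaultdict
from statistics import mean


def train_and_recommend(samples):
    """samples: iterable of (ambient_temp_c, hub_speed_rpm) pairs reported by
    the fan's processor. Returns a recommended speed per 2-degree band."""
    bands = defaultdict(list)
    for temp, rpm in samples:
        bands[int(temp // 2) * 2].append(rpm)
    return {band: round(mean(rpms)) for band, rpms in sorted(bands.items())}


logged = [(27.1, 310), (27.8, 330), (29.3, 420), (29.9, 445), (31.2, 520)]
print(train_and_recommend(logged))  # e.g. {26: 320, 28: 432, 30: 520}
```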
[0058] FIG. 2 illustrates an exemplary representation (200) of the system (110), in accordance with an embodiment of the present disclosure.
[0059] In an aspect, the system (110) may comprise one or more processor(s) (202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the system (110). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
[0060] In an embodiment, the system (110) may include an interface(s) (206). The interface(s) (206) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) (206) may facilitate communication of the system (110). The interface(s) (206) may also provide a communication pathway for one or more components of the system (110). Examples of such components include, but are not limited to, processing engine(s) (208) and a database (210). The database (210) may include a first database with a reference set of signals associated with the voice of the users (102). Additionally, the database (210) may include a second database with a reference set of signals associated with one or more theme images. Further, the database (210) may include a third database having a reference set of signals associated with the one or more gestures of the users (102).
[0061] The processing engine(s) (208) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) (208). In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) (208) may be processor executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) (208) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) (208). In such examples, the system (110) may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system (110) and the processing resource. In other examples, the processing engine(s) (208) may be implemented by electronic circuitry.
[0062] Further, the system (110) may include a processor (202) that may be communicatively coupled to the hub (122), the first sensor (118), the second sensor (124), the audio subsystem (114), and the visual subsystem (116) through the network (106). The processor (202) may be configured with a memory (204) having one or more instructions to execute various functions. The processor (202) may be configured to receive the first signal from the first sensor (118) and the second signal from the second sensor (124). Based on the first signal and the second signal, the processor (202) may operate the hub (122) through the motor (126) to maintain the ambient temperature in a predefined space. Also, the processor (202) may receive the third signal from the image acquisition unit (108) and configure the image acquisition unit (108) to capture the one or more images in the region of interest in the predefined space. Further, the processor (202) may receive the fourth signal from the audio subsystem (114) and configure the audio subsystem (114) to execute the first set of actions based on the voice input provided by the users (102). Additionally, the processor (202) may receive the fifth signal from the visual subsystem (116) and configure the visual subsystem (116) to execute the second set of actions based on the voice input of the users (102). The fifth signal may be indicative of the one or more themes to be displayed in the region of interest in the predefined space. Based on any or a combination of the first signal and the second signal, the processor (202) may operate the hub (122) at the first angular speed to obtain the desired ambient temperature in the predefined space. Also, based on the third signal, the processor (202) may monitor the region of interest through the image acquisition unit (108). Based on any or a combination of the fourth signal and the fifth signal, the processor (202) may display the one or more theme images and generate the first sound.
[0063] The processing engine(s) (208) may include one or more engines selected from any of a signal acquisition engine (212) and an extraction engine (214). The signal acquisition engine (212) may receive a first signal from the first sensor (118) coupled to the processor (202). The first signal may be indicative of an ambient temperature in the predefined space. The signal acquisition engine (212) may receive the second signal from a second sensor (124) coupled to the processor (202). The second signal may be indicative of an angular speed of the hub (122). Also, the signal acquisition engine (212) may receive the third signal from an image acquisition unit (108) coupled to the processor (202). The third signal may be indicative of the one or more images of the region of interest in the predefined space. Further, the signal acquisition engine (212) may be configured to receive a fourth signal from the audio subsystem (114) coupled to the processor (202). The fourth signal may be indicative of the voice input of the users (102). The signal acquisition engine (212) may also receive the fifth signal from the visual subsystem (116) coupled to the processor (202). The fifth signal may be indicative of the one or more themes to be displayed in the region of interest in the predefined space.
[0064] In an embodiment, the extraction engine (214) may extract a first set of attributes from the first signal indicative of an ambient temperature in the predefined space and store it in the database (210). The extraction engine (214) may extract a second set of attributes from the second signal indicative of an angular speed of the hub (122) and store it in the database (210). The extraction engine (214) may extract a third set of attributes indicative of the one or more images of the region of interest in the predefined space and store the extracted third set of attributes in the database (210). The extraction engine (214) may extract a fourth set of attributes from the fourth signal indicative of the voice input of the users (102) and store it in the database (210). Further, the extraction engine (214) may also extract a fifth set of attributes indicative of the one or more themes to be displayed in the region of interest in the predefined space. For example, the region of interest may include a selected area in the predefined space. The extraction engine (214) may also store the extracted fifth set of attributes in the database (210). Based on the extracted first set of attributes and the second set of attributes, the processor (202) may operate the hub (122) at the first angular speed to generate the desired ambient temperature in the predefined space. Based on the extracted third set of attributes, the processor (202) may monitor the region of interest through the image acquisition unit (108). Also, based on the extracted fourth set of attributes and the fifth set of attributes, the processor (202) may display one or more theme images through the visual subsystem (116) and generate the first sound through the audio subsystem (114).
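By way of a simplified, non-limiting illustration, the sketch below shows an in-memory stand-in for the extraction engine, storing one attribute record per received signal; the record fields are assumptions made only for this example.

```python
# Illustrative sketch: extract a small attribute record from each raw signal and
# store it in an in-memory stand-in for the database (210).
import time


class ExtractionEngine:
    def __init__(self):
        self.database = []  # stand-in for the database (210)

    def extract(self, signal_name: str, raw_value):
        """Store and return one attribute record for the given signal."""
        record = {
            "signal": signal_name,
            "value": raw_value,
            "timestamp": time.time(),
        }
        self.database.append(record)
        return record


engine = ExtractionEngine()
engine.extract("first_signal_ambient_temp_c", 28.4)
engine.extract("second_signal_hub_speed_rpm", 350)
print(len(engine.database), "attribute records stored")
```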
[0065] In an embodiment, the processing engine (208) may include an AI engine (216) to use the stored extracted attributes from the database (210) and compute the optimized parameters with respect to each of the signals. For example, the AI engine (216) may access the extracted second set of attributes indicative of the angular speed of the hub (122) and compute multiple optimized parameters/angular speeds to achieve the desired ambient temperature in the predefined space.
[0066] The multiple optimized parameters may provide a range of angular speeds and their corresponding ambient temperatures in the predefined space. In an exemplary embodiment, the processor (202) may receive another speed signal from the motor (308) configured to rotate the hub (122), extract another set of attributes through the extraction engine (214), and store it in the database (210). The AI engine (216) may access the extracted set of attributes from the database (210) indicative of the speed of the motor (308) and provide optimized parameters with respect to the optimized speed of the motor (308) to enable higher power efficiency of the motor (308). Additionally, the processor (202) may receive another signal from the SMPS (switched mode power supply) (302), extract another set of attributes through the extraction engine (214), and store it in the database (210). The AI engine (216) may access the extracted set of attributes from the database (210) indicative of the signal received from the SMPS (302) and provide optimized parameters to enable efficient power modes of the SMPS (302).
[0067] In an embodiment, the processor (202) may be configured to receive the first signal from the first sensor (118) coupled to the processor (202). The first signal may be indicative of an ambient temperature in a predefined space. Additionally, the processor (202) may be configured to receive the second signal from a second sensor (124) coupled to the processor (202). The second signal may be indicative of an angular speed of a hub (122). In an embodiment, the processor (202) may receive the first signal from the temperature sensor and rotate the hub (122) at the first angular speed to obtain the desired ambient temperature in the predefined space. Similarly, the processor (202) may rotate the hub (122) at various angular speeds to generate the desired ambient temperature in the predefined space.
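By way of a simplified, non-limiting illustration, one hypothetical mapping from the measured ambient temperature to a hub angular speed is sketched below; the set point, gain, and speed limit are illustrative values only.

```python
# Illustrative sketch: pick a hub speed from the ambient temperature so the fan
# works toward the desired set point. Constants are illustrative assumptions.
def select_hub_speed(ambient_temp_c: float,
                     desired_temp_c: float = 26.0,
                     rpm_per_degree: float = 80.0,
                     max_rpm: float = 600.0) -> float:
    """Return a hub speed in RPM: faster when the space is warmer than desired,
    zero once the desired ambient temperature has been reached."""
    excess = max(0.0, ambient_temp_c - desired_temp_c)
    return min(max_rpm, rpm_per_degree * excess)


for t in (25.0, 27.5, 30.0, 34.0):
    print(t, "->", select_hub_speed(t), "rpm")
```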
[0068] Also, the processor (202) may be configured to receive the third signal from an image acquisition unit (108) coupled to the processor (202). The third signal may be indicative of the one or more images of the region of interest in the predefined space. Further, the processor (202) may be configured to receive a fourth signal from the audio subsystem (114) coupled to the processor (202). The fourth signal may be indicative of the voice input of the users (102). The processor (202) may also identify a missing baby by matching a “missing or unidentified in the predefined space” condition with the reference set of signals and alert the users (102) through the computing devices (104).
[0069] In an embodiment, the processor (202) may be configured to match the gesture with the reference set of signals associated with the third signal and identify the gesture of crying. Further, the processor (202) may send an alert to the users (102) through the computing devices (104). Similarly, the processor (202) may receive the third signal from the image acquisition unit (108) indicating that a baby is missing from the predefined space.
[0070] In an embodiment, the processor (202) may be configured to match the received fourth signal with the reference set of signals in the database. For example, the database could have a reference set of signals such as “play twinkle twinkle”, “play lion king story”, etc. Based on the voice input of the users (102), the processor (202) may match the fourth signal with the reference set of signals and play the corresponding audio through the audio output unit.
[0071] FIG. 3 illustrates an exemplary block diagram (300) depicting the proposed system, in accordance with an embodiment of the present disclosure. In an aspect, the block diagram (300) may include a Wi-Fi module (316) to connect the system (110) to the one or more external devices. The system (110) may be configured with the temperature sensor (318) to measure the temperature of the predefined space. The system (110) may also be configured with an LED (320), or light source (120), driven through an LED driver (322). The LED (320) may be configured through the processor (202) and provide the sixth signal indicative of the ambient light to the processor (202). The processor (202) may operate the LED (320) through the LED driver (322) and illuminate the region of interest when the received sixth signal indicates that the ambient light is below the threshold value.
[0072] In an embodiment, the system (110) may be configured with an SMPS (switched mode power supply) (302) configured on the hub (122) and integrated with the processor (202) to optimize power efficiency of the processor (202). The switched-mode power supply (302) (also referred to as a switching-mode power supply, switch-mode power supply, switched power supply, SMPS, or switcher) may provide an electronic power supply to the system (110) and may include a switching regulator to convert electrical power efficiently. The SMPS (302) may be operatively coupled to the processor (202), such as an MCU (304). The MCU (304) may be further coupled to an inverter (306) of a motor (308) of the hub (122). In an embodiment, the system (110) may include a BLDC motor as the motor (308) to drive the hub (122). The inverter (306) may convert direct current (DC) into alternating current (AC) and supply it to the motor (308) to drive the hub (122) at the angular speed. The motor (308) may drive the hub (122) at the angular speed to generate the desired ambient temperature in the predefined space. The audio subsystem (310), the image acquisition unit (312), the visual subsystem (314), and the like may be operatively coupled to the MCU (304) (processor (202)). The system (110) may be further configured with the camera (312)/image acquisition unit (108) to capture the one or more images of the region of interest in the predefined space. Further, the system (110) may be configured with a projector (314)/visual subsystem (116) to display the one or more themes in the predefined space based on the voice input of the users (102).
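By way of a simplified, non-limiting illustration, the sketch below shows a proportional speed loop in which a controller compares the angular speed reported by the second sensor with a set point and adjusts the inverter drive; the 0-100 % drive command, the gain, and the crude plant model are assumptions and do not reflect any particular BLDC drive.

```python
# Illustrative sketch: proportional closed-loop speed control with feedback from
# the second sensor. All interfaces and constants are hypothetical.
class InverterStub:
    def __init__(self):
        self.drive_percent = 0.0

    def set_drive(self, percent: float):
        # Clamp the drive command to a 0-100 % range.
        self.drive_percent = max(0.0, min(100.0, percent))


def speed_loop_step(setpoint_rpm, measured_rpm, inverter, kp=0.05):
    """One control step: nudge the inverter drive toward the speed set point."""
    error = setpoint_rpm - measured_rpm
    inverter.set_drive(inverter.drive_percent + kp * error)
    return inverter.drive_percent


inv = InverterStub()
measured = 0.0
for _ in range(5):                 # a few iterations of the loop
    drive = speed_loop_step(400.0, measured, inv)
    measured = drive * 6.0         # crude plant model: rpm ~ 6 x drive %
    print(f"drive={drive:.1f}%  measured={measured:.0f} rpm")
```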
[0073] In an embodiment, the audio subsystem (114) may be configured with a microphone (310-2) and a speaker (310-1). For example, the speaker (310-1) and the microphone (310-2) may be used to soothe babies by playing musical melodies and lullabies. Additionally, the light source (120) configured in the system may be automatically lit whenever the child cries.
[0074] In an exemplary embodiment, the hub (122) can act as a Wi-Fi access point, which can be useful for rooms/sections where LAN cabling already exists, reducing the cost of setting up additional access points and maintaining the décor.
[0075] FIG. 4 illustrates an exemplary block diagram representation (400) of the motor (308), in accordance with an embodiment of the present disclosure. As illustrated, the motor (308) may include synchronized pico projectors (402) mounted on the back side of the hub (122).
[0076] In an embodiment, the visual subsystem (116) may include one or more synchronized pico projectors (402), wherein the processor (202) may configure the one or more synchronized pico projectors (402) to render the one or more themes in the region of interest. The one or more synchronized pico projectors (402) may project the one or more themes based on the voice input of the users (102). The one or more themes projected on the ceiling may include themes such as a galaxy, a solar system, and the like. The users (102) may change the one or more themes through the computing devices (104). Additionally, the one or more themes may be downloaded through the computing devices (104).
[0077] FIG. 5 illustrates an exemplary block diagram (500) depicting the proposed system with reference to FIG. 3, in accordance with an embodiment of the present disclosure. The front side of the hub motor cover (502) may include the audio subsystem (114). The audio subsystem (114) may be further configured with the speaker (310-1) and the microphone (310-2). The microphone (310-2) may receive the voice input of the users (102), while the processor (202) in the system (110) may execute a first set of actions such as playing any or a combination of a musical, a story, and a sound effect. The speaker (310-1) may be configured by the processor (202) to provide an audio output based on the voice input of the users (102). The image acquisition unit (108) may be configured with the camera (312) on the front side of the hub motor cover (502). The camera (312) may be configured to capture the one or more gestures of the users (102). The processor (202) may further identify the one or more gestures of the users (102) and provide an alert through the computing device (104). A Wi-Fi antenna (316) (part of the Wi-Fi module) may be configured to provide an access point to the one or more external devices, including any or a combination of a mobile device, a tablet, and a laptop.
[0078] FIG. 6 illustrates an exemplary computer system that can be utilized in accordance with embodiments of the present disclosure. As shown in FIG. 6, computer system 600 can include an external storage device 610, a bus 620, a main memory 630, a read only memory 640, a mass storage device 650, a communication port 660, and a processor 670. A person skilled in the art will appreciate that the computer system may include more than one processor and communication ports. Processor 670 may include various modules associated with embodiments of the present invention. Communication port 660 can be any of an RS-232 port for use with a modem-based dialup connection, an Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. Communication port 660 may be chosen depending on a network, such as a Local Area Network (LAN), a Wide Area Network (WAN), or any network to which the computer system connects. Memory 630 can be Random Access Memory (RAM), or any other dynamic storage device commonly known in the art. Read-only memory 640 can be any static storage device(s), e.g., but not limited to, a Programmable Read Only Memory (PROM) chip for storing static information, e.g., start-up or BIOS instructions for processor 670. Mass storage 650 may be any current or future mass storage solution, which can be used to store information and/or instructions.
[0079] Bus 620 communicatively couples processor(s) 670 with the other memory, storage, and communication blocks. Bus 620 can be, e.g., a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), USB, or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects processor 670 to the software system.
[0080] Optionally, operator and administrative interfaces, e.g. a display, keyboard, joystick and a cursor control device, may also be coupled to bus 620 to support direct operator interaction with a computer system. Other operator and administrative interfaces can be provided through network connections connected through communication port 660. Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system limit the scope of the present disclosure.
[0081] While considerable emphasis has been placed herein on the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the invention. These and other changes in the preferred embodiments of the invention will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the invention and not as a limitation.
CLAIMS:
1. A system (110) for providing an intelligent response, the system (110) comprising:
a processor (202) configured in a housing (128) and coupled to a memory having instructions that when executed causes the processor (202) to:
receive a first signal from a first sensor (118) configured in the housing (128), and coupled to the processor (202), the first signal indicative of an ambient temperature in a predefined space;
receive a second signal from a second sensor (124) configured in the housing (128), and coupled to the processor (202), the second signal indicative of an angular speed of a hub (122);
receive a third signal from an image acquisition unit (108) configured in the housing (128) and coupled to the processor (202), the third signal indicative of one or more images of a region of interest in the predefined space;
receive a fourth signal from an audio subsystem (114) configured in the housing (128) and coupled to the processor (202), the fourth signal indicative of an audio input of a user;
receive a fifth signal from a visual subsystem (116) configured in the housing (128) and coupled to the processor (202), the fifth signal indicative of one or more themes to be displayed in the region of interest in the predefined space;
based on any or a combination of the first signal and the second signal, operate the hub (122) at a first angular speed to maintain the desired ambient temperature in the predefined space;
based on the third signal, monitor the region of interest through the image acquisition unit (108); and
based on any or a combination of the fourth signal and the fifth signal, display the one or more theme images and generate a first sound.
2. The system (110) as claimed in claim 1, wherein the system (110) is configured to include a temperature sensor as the first sensor (118) to measure the ambient temperature.
3. The system (110) as claimed in claim 1, wherein the system (110) is configured with an SMPS (302) configured on the hub (122) and integrated with the processor (202) to optimize power efficiency of the processor (202).
4. The system (110) as claimed in claim 1, wherein the system (110) is configured with one or more synchronized pico projectors (402) on the visual subsystem (116), and wherein the processor (202) configures the one or more synchronized pico projectors (402) to render the one or more themes in the region of interest.
5. The system (110) as claimed in claim 1, wherein the system (110) is configured to include a microphone and a speaker on the audio subsystem (114), and wherein the processor (202) configures the microphone to receive the audio input from the user and the speaker to generate an audio output.
6. A system (110) for providing an intelligent response, the system (110) comprising:
a fan assembly having a hub (122) and one or more wings attached to the hub (122);
a first sensor (118) attachably disposed on a circumference of the hub (122), the first sensor (118) configured to sense a first signal indicative of an ambient temperature in a predefined space;
a second sensor (124) operably coupled to a motor (308) configured within the hub (122), wherein the second sensor (124) is configured to sense an angular speed of the motor (308) driving the hub (122);
an audio subsystem (114) configured on the circumference of the hub (122) to receive a voice input from a user and provide an audio output;
a visual subsystem (116) configured on the circumference of the hub (122) to display one or more themes in a region of interest in the predefined space;
an image acquisition unit (108) attachably disposed on the hub (122) to capture one or more images in the region of interest in the predefined space;
a processor (202) communicatively coupled to the fan assembly, the first sensor (118), the second sensor (124), the audio subsystem (114), and the visual subsystem (116) through a network (106), and also coupled with a memory, the memory having one or more instructions which when executed causes the processor (202) to:
receive the first signal from the first sensor (118) and a second signal from the second sensor (124);
operate the hub (122) through the motor (308) to obtain the desired ambient temperature in the predefined space based on the first signal and the second signal;
receive a third signal from the image acquisition unit (108) and operate the image acquisition unit (108) to capture the one or more images from the region of interest in the predefined space based on the third signal;
receive a fourth signal from the audio subsystem (114) and instruct the audio subsystem (114) to execute a first set of actions based on the voice input provided by the user and generate the audio output; and
receive a fifth signal from the visual subsystem (116) and instruct the visual subsystem (116) to execute a second set of actions based on the voice input of the user and the fifth signal.
7. The system (110) as claimed in claim 6, wherein the first set of actions executed by the audio subsystem (114) comprise playing any or a combination of a musical, a story, and a sound effect.
8. The system (110) as claimed in claim 6, wherein the second set of actions executed by the visual subsystem (116) comprise rendering any or a combination of the one or more themes such as a story character, a setting, a sound effect, a visual image, an animation, a video, and a story.
9. The system (110) as claimed in claim 6, wherein the system (110) is configured with a light source (322) attachably disposed on the hub (122) and configured by the processor (202) to illuminate the region of interest, and wherein the light source (322) senses a sixth signal indicative of ambient light and illuminates the region of interest when the sixth signal is below a threshold value.
10. The system (110) as claimed in claim 9, wherein the system (110) is configured to include an LED (320) as the light source, and wherein the processor (202) is configured to operate the LED through an LED driver (322).
11. The system (110) as claimed in claim 6, wherein the system (110) comprises a first database having a reference set of signals associated with the voice input of the user, and wherein the processor (202) is configured to compare the received fourth signal with the reference set of signals in the database to determine an action of the first set of actions to be performed.
12. The system (110) as claimed in claim 6, wherein the system (110) comprises a second database having a reference set of signals associated to the one or more theme images, and wherein the processor (202) is configured to compare the received fifth signal with the reference set of signals in the second database to determine an action of the second set of actions to be performed.
13. The system (110) as claimed in claim 6, wherein the system (110) is configured to include a brushless direct current (BLDC) motor as the motor (308) to drive the hub (122).
14. The system (110) as claimed in claim 6, wherein the system (110) is configured with a Wi-Fi module (316) configured within the hub (122) to enable communication with one or more external devices.
15. The system (110) as claimed in claim 14, wherein the one or more external devices comprise any or a combination of a mobile device, a tablet, and a laptop.
16. The system (110) as claimed in claim 6, wherein the system (110) is configured to receive the third signal from the image acquisition unit (108) and record one or more gestures of the user responsive to receiving the third signal.
17. The system (110) as claimed in claim 16, wherein the system (110) is configured with a third database having a reference set of signals associated with the one or more gestures of the user, and wherein the processor (202) is configured to compare the received third signal with the reference set of signals in the third database and identify the one or more gestures of the user.
18. The system (110) as claimed in claim 17, wherein the system (110) is configured to alert the user based on the identified one or more gestures through one or more external devices.
19. A user equipment (UE) (104) for providing an intelligent system response, said UE (104) comprising:
one or more processors (220) communicatively coupled to a processor (202) comprised in a system (110), the one or more processors (220) coupled with a memory (222), wherein said memory (222) stores instructions which when executed by the one or more processors (220) causes said UE (104) to:
receive one or more parameters from the processor (202), wherein the one or more parameters are associated with a first signal, a second signal, a third signal, a fourth signal, and a fifth signal;
modify the one or more parameters and transmit the modified one or more parameters to the processor (202), wherein the processor (202) is configured to:
receive the modified one or more parameters from the one or more processors (220);
extract one or more of a modified first signal, a modified second signal, a modified third signal, a modified fourth signal, and a modified fifth signal from the modified one or more parameters;
based on any or a combination of the modified first signal and the modified second signal, operate a hub (122) at a first angular speed to maintain a desired ambient temperature in a predefined space;
based on the modified third signal, monitor a region of interest in the predefined space through an image acquisition unit (108); and
based on any or a combination of the modified fourth signal and the modified fifth signal, display one or more theme images and generate a first sound.
20. A cloud based platform (126) for communicating with an intelligent system via one or more computing devices, said platform (126) comprising one or more processors (128) communicatively coupled to a system (110), and a memory (130), said memory (130) storing a set of instructions, which when executed, causes the one or more processors (128) to:
receive one or more parameters from a processor (202) comprised in the system (110), wherein the one or more parameters are associated with a first signal, a second signal, a third signal, a fourth signal, and a fifth signal;
train a model based on the one or more parameters by an artificial intelligence (AI) engine configured in the processor (128), to generate an optimized model; and
recommend, to the processor (202), one or more threshold values for the one or more parameters based on the optimized model.
21. A method (700) for providing an intelligent response, said method (700) comprising:
receiving, by a processor (202), a first signal from a first sensor (118), wherein the first sensor (118) is coupled to the processor (202) and the first signal is indicative of an ambient temperature in a predefined space;
receiving, by the processor (202), a second signal from a second sensor (124), wherein the second sensor (124) is coupled to the processor (202) and the second signal is indicative of an angular speed of a hub (122);
receiving, by the processor (202), a third signal from an image acquisition unit (108), wherein the image acquisition unit (108) is coupled to the processor (202) and the third signal is indicative of one or more images of a region of interest in the predefined space;
receiving, by the processor (202), a fourth signal from an audio subsystem (114), wherein the audio subsystem (114) is coupled to the processor (202) and the fourth signal is indicative of a voice input of a user;
receiving, by the processor (202), a fifth signal from a visual subsystem (116), wherein the visual subsystem (116) is coupled to the processor (202) and the fifth signal is indicative of one or more themes to be displayed in the region of interest in the predefined space;
operating, by the processor (202), based on any or a combination of the first signal and the second signal, the hub (122) at a first angular speed to generate the desired ambient temperature in the predefined space;
monitoring, by the processor (202), based on the third signal, the region of interest through the image acquisition unit (108) in the predefined space; and
displaying, by the processor (202), based on any or a combination of the fourth signal and the fifth signal, one or more theme images and generating a first sound.