Abstract: The present disclosure pertains to a system (100) to assist a subject having autism and to facilitate enhancement of the physiological capacity of the subject. The system (100) includes a device (102) adapted to be coupled to a spectacle worn by the subject. The device (102) includes an image acquisition unit (104) configured to capture images of a person interacting with the subject, so as to detect emotions of the person and assist the subject in the interaction; a first set of sensors (106) configured to sense a heart rate value of the subject, sound therapies being provided when the heart rate is not normal; and a second set of sensors (108) configured to detect head movement of the subject while focusing on one or more objects, so as to track the attention span of the subject. Also, the system (100) assists the subject in character recognition and in practicing characters by displaying the characters on a display unit (114).
The present disclosure relates generally to the health and medical field. More particularly, the present disclosure provides a system to assist children having autism spectrum disorder, to enhance the physiological capacity of the children and to increase the possibility of recovery from autism.
BACKGROUND
[0002] The background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art. [0003] Autism spectrum disorder is a non-progressive neurological disorder, typically appearing before the age of three years, that relates to the development of the brain and affects a person's understanding and the way they socialize with others. The two main things that get affected are behavior and communication. The word "autism" denotes a developmental disability significantly affecting verbal and non-verbal communication and social interaction. Many children have this condition and must cope with challenges of social skills, disinterest, repetitive behaviors, and difficulties with speech and nonverbal communication. Children with autism face many challenges when it comes to social or communication skills, which in turn makes it difficult for the child to react properly at home and at school. This can leave the child isolated, unable to represent their point of view, and emotionally affected. A continuous state of this behavior leads to severe consequences; depending on their position in the spectrum, some children may be able to talk a lot, while others may not talk at all. Also, different autistic children display different patterns of behaviors and interests.
[0004] Problems relating to communication may range from difficulty in understanding important day-to-day gestures, which are easily recognized by other children, to an inability to engage in effective verbal and written communication, affecting their academics. This uneasiness leads to irritation and low self-confidence, which may be seen in how little eye contact autistic people make. Children with autism may also repeat certain patterns of behavior, show resistance to change in any activity, have sensory issues, and may become angry, cry, or laugh for reasons unknown.
[0005] Specifically, persons with autism often find it hard to understand emotions such as happiness, sadness, or anger from a spoken voice, a basic process that usually occurs naturally in others without autism. Not being able to recognize emotions as effectively as a person without autism prevents an autistic person from creating emotional attachments. Current methods of treatment and remediation, particularly for children, include specialized education, behavior and social skills therapy, and placing affected individuals in highly structured environments, all of which have met with limited success. [0006] An existing solution facilitates recognition of the emotions of a child having autism, but fails to provide real-time responses that would help the child process things easily and communicate with people in an effective manner. Other solutions providing physical and emotional tracking systems for autistic children are not reliable, do not solve the problems of lack of attention and slow learning, and are limited in their applications. Furthermore, existing technologies are costly, demanding patience and draining energy from caregivers.
[0007] Hence, there is a need in the art for a solution that changes the way we take care of autistic children by providing suitable responses, along with calming the child down in case of a hyperactive breakdown or irritability.
OBJECTS OF THE PRESENT DISCLOSURE
[0008] Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are as listed herein below.
[0009] It is an object of the present disclosure to provide a system that assists a person having autism spectrum disorder.
[0010] It is another object of the present disclosure to provide a system to recognize one or more skills of the patient and accordingly provide assistance to improve those skills.
[0011] It is another object of the present disclosure to provide a system to recognize the learning speed of a person having autism spectrum disorder and accordingly provide assistance.
[0012] It is another object of the present disclosure to provide a system to
provide sound therapies to the person.
[0013] It is another object of the present disclosure to provide a system to
calm down the person in case of hyperactive breakdown or irritability.
[0014] It is yet another object of the present disclosure to provide a system that helps avoid expensive therapies, medicinal treatment, and other means of treating autism spectrum disorder.
SUMMARY
[0015] The present disclosure relates generally to the health and medical field. More particularly, the present disclosure provides a system to assist children having autism spectrum disorder, to enhance the physiological capacity of the children and to increase the possibility of recovery from autism.
[0016] An aspect of the present disclosure pertains to a system for aiding a subject with autism. The system may include a device adapted to be coupled with the frame of a spectacle of the subject. The device may include an image acquisition unit configured to capture a first set of images of one or more entities in a pre-defined area and correspondingly generate a first set of signals, a first set of sensors configured to sense one or more physiological parameters of the subject and correspondingly generate a second set of signals, and a controller operatively coupled to the image acquisition unit and the first set of sensors.
[0017] In an aspect, the controller may include one or more processors coupled with a memory, the memory storing instructions which, when executed, configure the one or more processors to: receive the first set of signals and the second set of signals; extract a third set of signals from the first set of signals and a fourth set of signals from the second set of signals, wherein the third set of signals pertains to facial parameters associated with the one or more entities and the fourth set of signals pertains to a heart rate value of the subject; match the extracted facial parameters with a first dataset, where the first dataset comprises pre-stored facial parameters pertaining to emotions of the one or more entities; and transmit a first set of actuation signals to a display unit, where the first set of actuation signals may actuate the display unit to display one or more emoticons associated with emotions of the one or more entities found in the pre-defined area. [0018] In an aspect, the controller may compare the extracted heart rate value with a second dataset, where the second dataset may include a pre-determined heart rate limit, and transmit a set of acoustic signals to an audio unit when the compared heart rate value is found to be beyond the pre-determined heart rate limit, where the set of acoustic signals may pertain to one or more audios stored in a third dataset, the one or more audios providing sound therapies to the subject.
[0019] In an aspect, the controller may further be configured to receive a request from one or more associated computing devices and transmit a first set of actuation signals to actuate the image acquisition unit, where the image acquisition unit may facilitate capturing a second set of images of one or more objects and correspondingly generate a fifth set of signals, and transmit a second set of actuation signals to actuate a second set of sensors, where the second set of sensors may facilitate detecting head movement of the subject and correspondingly generate a sixth set of signals.
[0020] In an aspect, the controller may be configured to receive the fifth set of signals and the sixth set of signals, extract edge parameters of the one or more objects from the fifth set of signals and head attributes from the sixth set of signals, and determine a time duration from the extracted edge parameters and the extracted head attributes to detect the attention span of the subject. [0021] In an aspect, the audio unit may include any or a combination of a speaker, an earphone, an ear bud, and a headset.
[0022] In an aspect, the image acquisition unit may include any or a combination of a camera, a scanner, and a face recognition sensor, the first set of sensors may include any or a combination of heart rate sensors, heartbeat sensors, and pulse sensors, and the second set of sensors may include any or a combination of an accelerometer and a gyroscope.
[0023] In an aspect, the one or more emoticons may include any or a combination of happy, sad, anger, distress, weep, laugh, joy, neutral, tired, shock, love, nervous, and scared.
[0024] In an aspect, upon receiving a request from the associated one or more mobile computing devices, the controller may be configured to transmit a set of control signals to the display unit, where the display unit may be actuated to display any or a combination of pre-defined characters, and transmit a third set of actuation signals to actuate the image acquisition unit, where the image acquisition unit may facilitate capturing a third set of images and correspondingly generate a seventh set of signals; extract one or more characters drawn by the subject on the one or more objects from the received seventh set of signals; match the extracted one or more characters with a pre-defined set of one or more characters stored in a fourth dataset, where the pre-defined set of one or more characters may pertain to one or more images of digits, uppercase letters, and lowercase letters drawn by the one or more entities; and determine a level of accuracy of the one or more characters drawn by the subject, the level of accuracy being transmitted to the associated one or more computing devices.
[0025] In an aspect, the controller may be configured to transmit a second set of acoustic signals to the audio unit, where the second set of acoustic signals may pertain to any or a combination of rhymes, lullabies, and music to entertain the subject.
[0026] In an aspect, the controller may be in communication with the one or more mobile computing devices through a communication module, wherein the communication module comprises any or a combination of Wireless Fidelity (Wi-Fi) Module, Bluetooth Module, Li-Fi Module, optical fiber, Wireless Local Area Network (WLAN), and ZigBee.
[0027] In an aspect, the one or more mobile computing devices may include any or a combination of a mobile terminal, a laptop, and a tablet, wherein the one or more mobile computing devices are configured to transmit the one or more requests to the system, thereby facilitating remote control of the system.
[0028] In an aspect, the display unit may include any or a combination of light
emitting diode (LED), liquid crystal display (LCD), organic light emitting diode
(OLED), and LED matrix.
[0029] In an aspect, the system may include a power source operatively
coupled to the controller, the image acquisition unit, the first set of sensors, the
second set of sensors, and the display unit where the power source may facilitate
in providing electric power to the system.
[0030] In an aspect, the power source may include any or a combination of
cell, battery, electric power line, inverter, capacitor bank, and inductor.
BRIEF DESCRIPTION OF THE DRAWINGS
[0031] The accompanying drawings are included to provide a further
understanding of the present disclosure, and are incorporated in and constitute a
part of this specification. The drawings illustrate exemplary embodiments of the
present disclosure and, together with the description, serve to explain the
principles of the present disclosure.
[0032] The diagrams are for illustration only and thus do not limit the present disclosure, wherein:
[0033] FIG. 1 illustrates a block diagram of the proposed system to provide assistance in autism spectrum disorder, in accordance with an embodiment of the present disclosure.
[0034] FIG. 2 illustrates exemplary functional components of a controller of
the proposed system to provide assistance in autism spectrum disorder, in
accordance with an embodiment of the present disclosure.
[0035] FIG. 3 illustrates an exemplary view of the hardware components of
the proposed system to provide assistance in autism spectrum disorder, in
accordance with an embodiment of the present disclosure.
[0036] FIG. 4 illustrates an exemplary view of the proposed system to be coupled with spectacles to provide assistance in autism spectrum disorder, in accordance with an embodiment of the present disclosure.
[0037] FIG. 5A-5C illustrate exemplary views of the proposed system to provide assistance in autism spectrum disorder, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION
[0038] In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without some of these specific details. [0039] The present disclosure relates generally to the health and medical field. More particularly, the present disclosure provides a system to assist children having autism spectrum disorder, to enhance the physiological capacity of the children and to increase the possibility of recovery from autism.
[0040] In an embodiment, a system is provided for assisting a subject, especially a child, having autism. The system can assist in recognizing the writing skill of the subject, recognizing emotions of others who are talking to the subject so as to improve interaction, enhancing cognitive skills of the subject by repetition, and calming the subject down during any type of panic attack.
[0041] As illustrated in FIG. 1, the proposed system (100) for aiding a subject with autism (also referred to as system (100) herein) can include a device (102) adapted to be coupled with the frame of a spectacle of the subject, and a controller (112). The device (102) can include an image acquisition unit (104), a first set of sensors (106), a second set of sensors (108), a display unit (114), and an audio unit (116). The image acquisition unit (104), the first set of sensors (106), the second set of sensors (108), the display unit (114), and the audio unit (116) can be operatively coupled with the controller (112).
[0042] In an embodiment, the device (102) can be coupled to the spectacle of the subject on either side of the frame. The device can be detachably coupled to the spectacle, which facilitates coupling and decoupling the device (102) from any spectacle effortlessly. In an exemplary embodiment, the device (102) is compact and not bulky, which enables the subject to carry the device (102) on the spectacle comfortably.
[0043] In an embodiment, the image acquisition unit (104) can be configured to capture a first set of images of one or more entities in a pre-defined area and correspondingly generate a first set of signals, where the first set of signals can be transmitted to the controller (112) in electric form. In another embodiment, the image acquisition unit (104) can include any or a combination of a camera, a camcorder, a video recorder, a scanner, a face recognition sensor, and the like.
[0044] In an embodiment, the image acquisition unit (104) can be configured to capture a second set of images of the one or more objects and correspondingly generate a fifth set of signals. The generated fifth set of signals can be transmitted to the controller (112) in electric form. In an exemplary embodiment, the one or more objects can be a paper, a book, or any electronic device such as a computer, a mobile phone, and the like.
[0045] In an embodiment, the device (102) can include the first set of sensors (106), configured to sense one or more physiological parameters of the subject and correspondingly generate a second set of signals. The generated second set of signals can be transmitted to the controller (112) in electric form. In an exemplary embodiment, the first set of sensors can include any or a combination of heart rate sensors, heartbeat sensors, and pulse sensors.
[0046] In an embodiment, the device (102) can include the second set of sensors (108), configured to detect head movement of the subject and correspondingly generate a sixth set of signals. The generated sixth set of signals can be transmitted to the controller (112) in electric form. In an exemplary embodiment, the second set of sensors can include any or a combination of an accelerometer, a MEMS accelerometer, a gyroscope, and the like.
[0047] In an embodiment, the device (102) can include the display unit (114) to display any or a combination of a pre-defined set of one or more characters. In another embodiment, the display unit (114) can display one or more emoticons associated with the emotion of the one or more entities found in the pre-defined area. In yet another embodiment, the display unit (114) can include any or a combination of a light emitting diode (LED), a liquid crystal display (LCD), an organic light emitting diode (OLED), an LED matrix, and the like.
[0048] In an embodiment, the device (102) can include the audio unit (116) to produce first acoustic signals, where the first acoustic signals pertain to one or more audios, such as binaural beats. In another embodiment, the audio unit (116) can be configured to produce second acoustic signals, where the second acoustic signals pertain to any or a combination of rhymes, lullabies, music, and the like to entertain the subject. In yet another embodiment, the binaural beats can be stored in a third dataset to provide sound therapies to the subject when the subject is panicking, and the rhymes, lullabies, music, and the like can also be stored in the third dataset to entertain the subject when the subject is crying, sad, and the like.
[0049] In an embodiment, the audio unit (116) can include any or a combination of a speaker, an earphone, an ear bud, a headset, and the like. [0050] In an embodiment, the controller (112) can be configured to receive the first set of signals from the image acquisition unit (104) and the second set of signals from the first set of sensors (106) in electric form. In another embodiment, the controller (112) can be any or a combination of a microprocessor, a microcontroller, an Arduino Uno, an ATmega328, a Raspberry Pi, or another similar processing unit. In yet another embodiment, the controller (112) can include one or more processors coupled with a memory, the memory storing instructions executable by the one or more processors.
[0051] In an embodiment, the controller (112) can be configured to extract a third set of signals from the first set of signals and a fourth set of signals from the second set of signals, where the third set of signals can pertain to facial parameters associated with the one or more entities and the fourth set of signals pertains to a heart rate value of the subject. In another embodiment, the controller (112) can be configured to match the extracted facial parameters with a first dataset, where the first dataset can include pre-stored facial parameters pertaining to emotions of the one or more entities. In yet another embodiment, the controller (112) can be configured to transmit a first set of actuation signals to the display unit (114), where the first set of actuation signals can actuate the display unit (114) to display the one or more emoticons associated with emotions of the one or more entities found in the pre-defined area.
[0052] In an embodiment, the one or more emoticons include any or a combination of happy, sad, anger, distress, weep, laugh, joy, neutral, tired, shock, love, nervous, scared, and the like.
[0053] In an exemplary embodiment, the camera (104) can be configured to capture images of a person talking to the user or standing near the user, and the captured images can be transmitted to the controller (112). The controller (112) can extract facial parameters from the received images, recognize emotions of the person, and display the associated emoticon on the display unit (114), thereby assisting the subject having autism to react according to the emotion of the entity talking to the subject.
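The matching step described above can be sketched in Python as a nearest-neighbour lookup against a pre-stored dataset. The disclosure does not specify the facial parameterization, the matching algorithm, or the emoticon mapping, so the feature vectors, labels, and emoticons below are purely illustrative assumptions.

```python
import math

# Hypothetical "first dataset": pre-stored facial parameter vectors
# (e.g. mouth curvature, eye openness, brow position, normalized to
# the range 0..1) mapped to emotion labels. Values are illustrative.
FIRST_DATASET = {
    "happy":   (0.9, 0.6, 0.5),
    "sad":     (0.2, 0.4, 0.3),
    "anger":   (0.3, 0.7, 0.9),
    "neutral": (0.5, 0.5, 0.5),
}

# Emoticons shown on the display unit (114) for each matched emotion.
EMOTICONS = {"happy": ":)", "sad": ":(", "anger": ">:(", "neutral": ":|"}

def match_emotion(facial_params):
    """Match extracted facial parameters against the first dataset by
    nearest-neighbour (Euclidean) distance; return the emotion label."""
    return min(FIRST_DATASET,
               key=lambda label: math.dist(FIRST_DATASET[label], facial_params))

def emoticon_for(emotion):
    """Return the emoticon to display for a matched emotion."""
    return EMOTICONS.get(emotion, ":|")
```

For example, an extracted vector close to the stored "happy" parameters, such as `(0.85, 0.6, 0.5)`, would be matched to "happy" and mapped to its emoticon for display.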
[0054] In an embodiment, the controller (112) can be configured to compare the extracted heart rate value with a second dataset, where the second dataset can include a pre-determined heart rate limit. In another embodiment, when the compared heart rate value is beyond the pre-determined heart rate limit, the controller (112) can transmit the set of acoustic signals to the audio unit (116), where the set of acoustic signals can pertain to the one or more audios that provide sound therapies to the subject.
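The comparison in the paragraph above amounts to a threshold check. A minimal sketch follows; the disclosure gives no numeric limits, so the values below are assumptions for illustration only, not clinical thresholds.

```python
# Illustrative pre-determined limits standing in for the "second
# dataset"; these numbers are assumptions, not clinical guidance.
UPPER_LIMIT_BPM = 120
LOWER_LIMIT_BPM = 50

def needs_sound_therapy(heart_rate_bpm):
    """Compare the extracted heart rate value with the pre-determined
    limits. True means the controller should transmit acoustic signals
    (e.g. binaural beats from the third dataset) to the audio unit."""
    return heart_rate_bpm > UPPER_LIMIT_BPM or heart_rate_bpm < LOWER_LIMIT_BPM
```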
[0055] In an embodiment, the controller (112) can be in communication with one or more mobile computing devices through a communication module, where the communication module can include any or a combination of a Wireless Fidelity (Wi-Fi) module, a Bluetooth module, a Li-Fi module, optical fiber, a Wireless Local Area Network (WLAN), ZigBee, and the like. In another embodiment, the one or more mobile computing devices can include any or a combination of a mobile terminal, a laptop, and a tablet, and the one or more mobile computing devices can be configured to transmit the one or more requests to the system (100), thereby facilitating remote control of the system (100).
[0056] In an embodiment, the controller (112) can be configured to receive one or more requests from the associated one or more computing devices and transmit the first set of actuation signals to actuate the image acquisition unit (104), where the image acquisition unit (104) can capture a second set of images of one or more objects and correspondingly generate the fifth set of signals. Also, the controller (112) can transmit the second set of actuation signals to actuate the second set of sensors, where the second set of sensors can detect head movement of the subject and correspondingly generate the sixth set of signals. In another embodiment, the controller (112) can be configured to receive the fifth set of signals and the sixth set of signals, and can extract edge parameters of the one or more objects from the fifth set of signals and head attributes from the sixth set of signals. The controller (112) can be configured to determine a time duration from the extracted edge parameters and the extracted head attributes to detect the attention span of the subject.
[0057] In an exemplary embodiment, when the child having autism is looking at a paper and trying to read letters or digits, the system (100) can detect the time period of focusing on the paper. To calculate the time period, firstly the edges of the paper can be detected to determine how long the paper is positioned in front of the child, and secondly the head movement of the child can be detected. When the child's head is down towards the paper, a timer can be initiated; when the child moves his head in any other direction, the gyroscope (108) can detect the head movement and the timer can be paused. The timer can be activated again when the head is down towards the paper, so that the accumulated time period can be evaluated to detect the attention span of the child.
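The pause-and-resume timer logic described above can be sketched as a small state machine. Timestamps are passed in explicitly here to keep the sketch testable; an actual device would use a monotonic clock and real gyroscope/accelerometer readings, and the class and method names below are hypothetical.

```python
class AttentionTimer:
    """Accumulates the time the subject's head is oriented toward the
    object (e.g. a paper whose edges are detected in view), pausing
    whenever the head turns away and resuming when it returns."""

    def __init__(self):
        self.focused_seconds = 0.0   # total time focused so far
        self._focus_start = None     # timestamp when focus last began

    def update(self, head_down, timestamp):
        """Call on each sensor sample: start timing when the head turns
        toward the paper, pause when it turns away."""
        if head_down and self._focus_start is None:
            self._focus_start = timestamp
        elif not head_down and self._focus_start is not None:
            self.focused_seconds += timestamp - self._focus_start
            self._focus_start = None

    def total(self, timestamp):
        """Attention span so far, including any in-progress interval."""
        extra = timestamp - self._focus_start if self._focus_start is not None else 0.0
        return self.focused_seconds + extra
```

With the worked example from the disclosure (focus for three seconds, look away, focus for four more seconds within one minute), the accumulated attention span would be seven seconds.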
[0058] In an exemplary embodiment, a paper is detected in the child's hand for one minute. During this one minute, the child focused on the paper for the first three seconds, then moved his head in the left direction for some seconds, then focused on the paper again for four seconds, and similarly moved his head again. Here, the total time period within the one minute during which the child focused on the paper can be determined, to detect the attention span of the child. [0059] In an embodiment, upon receiving at least one of the one or more requests from the associated one or more mobile computing devices, the controller (112) can transmit a set of control signals to the display unit (114) and a third set of actuation signals to the image acquisition unit (104). Upon receiving the set of control signals, the display unit (114) can be actuated to display any or a combination of the pre-defined characters such as digits, uppercase letters, and lowercase letters, and upon receiving the third set of actuation signals, the image acquisition unit (104) can be actuated to capture a third set of images and correspondingly generate a seventh set of signals. In another embodiment, the controller (112) can be configured to extract the one or more characters drawn by the subject on the one or more objects from the received seventh set of signals, and match the extracted one or more characters with the pre-defined set of one or more characters stored in a fourth dataset, where the pre-defined set of one or more characters can pertain to one or more images of digits, uppercase letters, and lowercase letters written by the one or more entities. The controller (112) can be configured to determine a level of accuracy of any of the one or more characters drawn by the subject and correspondingly transmit the level of accuracy to the associated one or more computing devices.
[0060] In an exemplary embodiment, when the letter practice option is chosen from the associated mobile computing device, letters of one or more languages, such as the English alphabets A, B, C, D, and the like, can be displayed on the display unit (114). When the child practices the letter on the paper, the camera (104) can be actuated by the controller (112) to capture images of the paper, and the controller (112) can facilitate extracting the letter from the image, comparing it with the images stored in the memory, and deriving a level of accuracy, which can further be used to assist the child accordingly to enhance writing skills.
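One simple way to picture the accuracy determination above is a pixel-agreement score between the drawn character and a stored template. The disclosure does not name an algorithm, so this overlap measure is an illustrative stand-in; a practical system would more likely use a trained handwriting classifier.

```python
def character_accuracy(drawn, template):
    """Rough accuracy score for a practiced character: the fraction of
    pixels agreeing between the drawn character and the stored template,
    both given as equally sized binary bitmaps (lists of 0/1 rows).
    Returns a value in [0, 1]; 1.0 means a perfect match."""
    total = 0
    agree = 0
    for drawn_row, template_row in zip(drawn, template):
        for d, t in zip(drawn_row, template_row):
            total += 1
            agree += (d == t)  # bools count as 0/1 when summed
    return agree / total if total else 0.0
```

The resulting score could then be transmitted to the associated computing device, as described in [0059], to track the child's writing progress.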
[0061] In an illustrative embodiment, the controller (112) can be in communication with one or more mobile computing devices through a communication module, where the one or more mobile computing devices can be configured to receive the set of alert signals, enabling analysis of the deviation of movement associated with the entity. In another illustrative embodiment, the one or more mobile computing devices can include any or a combination of a cell phone, a portable digital handheld device, a laptop, a digital assistant, and the like. In another illustrative embodiment, the communication module can include any or a combination of a Wireless Fidelity (Wi-Fi) module, a Bluetooth module, a Li-Fi module, optical fiber, a Wireless Local Area Network (WLAN), a ZigBee module, and the like.
[0062] In an embodiment, the system (100) can include a power source operatively coupled to the image acquisition unit (104), the first set of sensors (106), the second set of sensors (108), the controller (112), and the display unit (114), where the power source can facilitate providing electric power to the system (100). In another embodiment, the power source can include any or a combination of a cell, a battery, a capacitor bank, an inductor, an electric power line, and the like.
[0063] As illustrated in FIG. 2, the controller (112) can include one or more processor(s) (202). The one or more processor(s) (202) can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that manipulate data based on operational instructions. Among other capabilities, the one or more processor(s) (202) are configured to fetch and execute computer-readable instructions stored in a memory (204) of the controller (112). The memory (204) can store one or more computer-readable instructions or routines, which may be fetched and executed to create or share the data units over a network service. The memory (204) can include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
[0064] In an embodiment, the controller (112) can also include an interface(s) (206). The interface(s) (206) may include a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) (206) may facilitate communication of the controller (112) with various devices coupled to the controller (112). The interface(s) (206) may also provide a communication pathway for one or more components of the controller (112). Examples of such components include, but are not limited to, processing engine(s) (208) and database (222). [0065] In an embodiment, the processing engine(s) (208) can be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) (208). In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processing engine(s) (208) may include a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) (208). In such examples, the controller (112) can include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the controller (112) and the processing resource. In other examples, the processing engine(s) (208) may be implemented by electronic circuitry.
[0066] In an embodiment, the processing engine(s) (208) can include an extraction unit (210), a matching unit (212), a comparison unit (214), a classification and training unit (216), a signal generation unit (218), and other unit(s) (220). The other unit(s) (220) can implement functionalities that supplement applications or functions performed by the system (100) or the processing engine(s) (208).
[0067] The database (222) can include data that is either stored or generated as a result of functionalities implemented by any of the components of the processing engine(s) (208).
[0068] It would be appreciated that the units described are only exemplary units, and any other unit or sub-unit may be included as part of the system (100). These units too may be merged or divided into super-units or sub-units as may be configured.
[0069] As illustrated in FIG. 2, the controller (112) can be configured to receive a first set of signals from an image acquisition unit (104) and a second set of signals from a first set of sensors (106), in electric form. In an embodiment, the image acquisition unit (104) can include any or a combination of a camera, camcorder, video recorder, scanner, face recognition sensor, and the like, and the image acquisition unit (104) can be configured to sense one or more entities in a pre-defined area, where the pre-defined area can extend, for example, from ten centimeters to fifteen centimeters, but is not limited thereto. In another embodiment, the first set of sensors (106) can include any or a combination of a heart rate sensor, heartbeat sensor, pulse sensor, and the like.
[0070] In an embodiment, the extraction unit (210) can be configured to receive the first set of signals from the image acquisition unit (104) and the second set of signals from the first set of sensors (106), respectively, in electric form, where the extraction unit (210) can be configured to extract a third set of signals from the first set of signals, and a fourth set of signals from the second set of signals, in machine readable form or binary form, where the third set of signals pertain to facial parameters associated with the one or more entities and the fourth set of signals pertain to a heart rate value of the subject. In another embodiment, the extraction unit (210) can be configured to transmit the facial parameters to the matching unit (212) and the heart rate value to the comparison unit (214).
[0071] In an embodiment, the matching unit (212) can be configured to receive the facial parameters from the extraction unit (210) in machine readable form or binary form. The matching unit (212) can be configured to match the facial parameters with a first dataset, where the first dataset can include pre-stored facial parameters pertaining to emotions of the one or more entities. The first dataset can be stored in the database (222). In another embodiment, the controller (112) can be configured to transmit a first set of actuation signals to a display unit (114) with the help of the signal generation unit (218), where the first set of actuation signals can actuate the display unit (114) to facilitate displaying one or more emoticons associated with emotions of the one or more entities found in the pre-defined area, where the one or more emoticons can include any or a combination of happy, sad, anger, distress, weep, laugh, joy, neutral, tired, shock, love, nervous, scared, and the like.
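By way of a non-limiting illustration (not part of the original disclosure), the matching performed by the matching unit (212) could be sketched as a nearest-neighbour lookup against the pre-stored first dataset; the parameter vectors, the three-value encoding, and the labels below are hypothetical placeholders:

```python
from math import dist

# Hypothetical "first dataset": pre-stored facial parameter vectors mapped to
# emotion labels. The vectors and three-value encoding are illustrative
# assumptions, not values taken from the disclosure.
FIRST_DATASET = {
    "happy":   (0.8, 0.1, 0.9),
    "sad":     (0.2, 0.7, 0.1),
    "neutral": (0.5, 0.5, 0.5),
}

def match_emotion(facial_params):
    """Return the emotion label whose stored parameters are nearest
    (Euclidean distance) to the extracted facial parameters."""
    return min(FIRST_DATASET, key=lambda e: dist(FIRST_DATASET[e], facial_params))
```

The controller could then map the returned label to the emoticon displayed on the display unit (114).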
[0072] In an exemplary embodiment, the display unit can include any or a
combination of light emitting diode (LED), liquid crystal display (LCD), organic light emitting diode (OLED), and LED matrix.
[0073] In an embodiment, the comparison unit (214) can be configured to compare the extracted heart rate value with a second dataset, where the second dataset can include a pre-determined heart rate limit. In an exemplary embodiment, the heart rate limit can vary according to age; for example, the pre-determined heart rate limit of a child between one and three years of age can be in the range of eighty to one hundred thirty beats per minute (bpm), but is not limited thereto. In another embodiment, the signal generation unit (218) can be configured to transmit a set of acoustic signals to an audio unit (116) when the compared heart rate value is beyond the pre-determined heart rate limit, where the set of acoustic signals can pertain to one or more audios stored in a third dataset, where the one or more audios can include binaural beats to provide sound therapies to the subject.

[0074] In an embodiment, the controller (112) can be configured to receive a request from the associated one or more computing devices, such as mobile terminals, tablets, and the like, to initiate an attention span tracking module. Upon receiving the request, the controller (112) can transmit a first set of actuation signals to the image acquisition unit (104) with the help of the signal generation unit (218), where upon receiving the first set of actuation signals the image acquisition unit (104) can be actuated to facilitate capturing a second set of images of one or more objects, and a fifth set of signals can be generated with the
help of the signal generation unit (218). Also, upon receiving the request, the controller (112) can transmit a second set of actuation signals to the second set of sensors (108) with the help of the signal generation unit (218), where upon receiving the second set of actuation signals the second set of sensors (108) can be actuated to facilitate detecting head movement of the subject, and a sixth set of signals can be generated with the help of the signal generation unit (218). In another embodiment, the controller (112) can be configured to receive the fifth set of signals and the sixth set of signals from the signal generation unit (218), and, with the help of the extraction unit (210), edges parameters of the one or more objects can be extracted from the fifth set of signals and head attributes from the sixth set of signals. In an exemplary embodiment, the one or more objects can be a paper, a book, or any electronic device such as a computer, a mobile phone, and the like. In yet another embodiment, the controller (112) can be configured to determine a time duration from the extracted edges parameters and the extracted head attributes to detect the attention span of the subject with the help of the classification and training unit (216).
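The disclosure does not specify how the time duration is computed from the head attributes; one minimal sketch, assuming the second set of sensors (108) yields timestamped head-angle readings relative to the object direction, could accumulate the time during which the head stays within a tolerance of the object (the sample format, angle units, and threshold are all assumptions):

```python
def attention_span(samples, max_deviation=15.0):
    """samples: list of (timestamp_seconds, head_angle_degrees) readings,
    where 0 degrees is assumed to mean the head is oriented toward the
    object. The subject is treated as attending over an interval only if
    both endpoint readings lie within max_deviation degrees. Returns the
    total attending time in seconds."""
    total = 0.0
    for (t0, a0), (t1, a1) in zip(samples, samples[1:]):
        if abs(a0) <= max_deviation and abs(a1) <= max_deviation:
            total += t1 - t0
    return total
```

A real implementation would likely also fuse the edges parameters extracted from the fifth set of signals to confirm the object remains in view.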
[0075] In an embodiment, the controller (112) can be configured to receive a request from the associated one or more mobile computing devices to initiate a letter recognition module. Upon receiving the request, the controller (112) can transmit a set of control signals to the display unit (114) with the help of the signal generation unit (218), where upon receiving the set of control signals the display unit (114) can be actuated to facilitate displaying any or a combination of pre-defined characters. In another embodiment, the controller (112) can transmit a third set of actuation signals to actuate the image acquisition unit (104), where upon receiving the third set of actuation signals, the image acquisition unit (104) can facilitate capturing a third set of images, and can generate a seventh set of signals accordingly with the help of the signal generation unit (218). In yet another embodiment, the extraction unit (210) can be configured to extract one or more characters drawn by the subject on the one or more objects from the received seventh set of signals, and the extracted one or more characters can be transmitted to the matching unit (212), and the matching unit (212) can be
configured to match the extracted one or more characters with the pre-defined set of one or more characters stored in a fourth dataset, where the pre-defined set of one or more characters can pertain to one or more images of digits, uppercase letters, and lowercase letters drawn by the one or more entities. The controller (112) can be configured to determine a level of accuracy of the one or more characters drawn by the subject with the help of the classification and training unit (216), and the determined level of accuracy can be transmitted to the associated one or more computing devices with the help of the communication module.

[0076] In an embodiment, the controller (112) can be configured to receive a request from the associated one or more mobile computing devices to initiate a music module. Upon receiving the request, the controller (112) can transmit a second set of acoustic signals to the audio unit (116) with the help of the signal generation unit (218), where the second set of acoustic signals can pertain to any or a combination of rhymes, lullabies, and music to entertain the subject.

[0077] In an embodiment, the classification and training unit (216) can be configured to receive the extracted facial parameters, the heart rate value, the edges parameters of the one or more objects, the head attributes, and the one or more characters from the extraction unit (210) in machine readable form or binary form, and to update and train itself based on these received parameters.

[0078] In an embodiment, the classification and training unit (216) can be trained and updated based on the received facial parameters, the heart rate value, the edges parameters of the one or more objects, the head attributes, and the one or more characters.
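The disclosure does not define how the level of accuracy is scored; as one hypothetical sketch (not the claimed method), a drawn character and its pre-defined reference could each be represented as sets of "ink" pixel coordinates and scored by their percentage overlap (intersection over union):

```python
def accuracy_level(drawn, reference):
    """drawn, reference: sets of (row, col) pixel coordinates marking ink.
    Returns a crude level-of-accuracy score in percent: the intersection
    over union of the two pixel sets. Representation and metric are
    illustrative assumptions."""
    union = len(drawn | reference)
    if union == 0:
        return 0.0
    return 100.0 * len(drawn & reference) / union
```

In practice, the classification and training unit (216) would first normalize position and scale of the drawn character before such a comparison.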
In another embodiment, a deep learning model can be trained based on the received facial parameters, the heart rate value, the edges parameters of the one or more objects, the head attributes, and the one or more characters, where the deep learning model can be stored in the database (222). In yet another embodiment, once the first dataset, the second dataset, the third dataset, and the fourth dataset are trained correctly, a deep learning algorithm can be configured to perform repetitive and routine tasks within a shorter period of time.
[0079] In an exemplary embodiment, the processing engine (208) can be further configured in the form of an artificial neural network such as, but not limited to, a Convolutional Neural Network (CNN) or a Deep Neural Network (DNN). In an exemplary embodiment, the processing engine (208) can include machine learning based classifiers, where the classifiers can include KNN classifiers, MLP neural networks, and the like. In another exemplary embodiment, the processing engine (208) can include a letters MNIST dataset of handwritten characters, which can facilitate recognizing the level of accuracy of the characters drawn by the subject.
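As a non-limiting sketch of the KNN classifier mentioned above (the feature vectors, labels, and choice of k are hypothetical, and a deployed system would train on a handwritten-character dataset rather than toy points):

```python
from collections import Counter
from math import dist

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs; query: feature vector.
    Classifies the query by a majority vote among its k nearest training
    examples under Euclidean distance."""
    nearest = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

For the letter recognition module, each feature vector would be derived from the captured image of a drawn character, and the labels would be digits, uppercase letters, and lowercase letters.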
[0080] As illustrated in FIG. 3, the system (100) can include an image acquisition unit (104), such as a camera, camcorder, video recorder, scanner, or face recognition sensor, to capture images; a first set of sensors (106), such as a heart rate sensor, heartbeat sensor, pulse sensor, and the like, to sense the heart rate value of the subject; a display unit (114) to display emotions, letters, digits, and the like; an audio unit (116) to provide one or more audios; and a controller (112). The controller (112) can be operatively coupled with the image acquisition unit (104), the first set of sensors (106), the display unit (114), and the audio unit (116).

[0081] In an embodiment, the image acquisition unit (104) can be configured to capture a first set of images of one or more entities in a pre-defined area, a second set of images of one or more objects such as a paper or a book, and a third set of images of one or more characters drawn by the subject on the one or more objects, and can transmit the captured images to the controller (112). The controller (112), in response to the first set of images, can display one or more emoticons associated with the emotions of the one or more entities talking to or standing near the subject. In response to the second set of images, the controller (112) can determine a time period to analyze the attention span of the subject. Furthermore, in response to the third set of images, the controller (112) can detect the level of accuracy of one or more characters drawn on a paper, writing board, slate, and the like by the subject.
[0082] In an embodiment, the audio unit (116) can be configured to play binaural beats when the heart rate value of the subject is beyond a pre-determined heart limit to calm the subject. In another embodiment, when a music option is
chosen from the associated mobile computing device, the music can be played, and the subject can enjoy the music through the audio unit (116), such as a headphone or earbuds.
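The binaural-beat trigger described in [0082] can be sketched as a threshold check against an age-banded limit; the disclosure gives the 80-130 bpm range for ages one to three, while the other bands, the function name, and the band boundaries below are assumptions for illustration:

```python
# Age-banded resting heart-rate limits in bpm. The (1, 3) band follows the
# disclosure's 80-130 bpm example; the remaining bands are hypothetical.
HEART_RATE_LIMITS = {
    (1, 3): (80, 130),
    (4, 6): (75, 120),
    (7, 12): (70, 110),
}

def needs_sound_therapy(age_years, bpm):
    """Return True when the measured heart rate falls outside the
    pre-determined limit for the subject's age band, i.e. when the
    controller should play binaural beats through the audio unit."""
    for (age_lo, age_hi), (lo, hi) in HEART_RATE_LIMITS.items():
        if age_lo <= age_years <= age_hi:
            return not (lo <= bpm <= hi)
    return False  # no limit defined for this age
```

The controller (112) would poll the first set of sensors (106) and call such a check on each new reading.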
[0083] In an embodiment, the controller (112) can be configured to detect emotions of the one or more entities in the pre-defined area and correspondingly display emoticons on the display unit (114) to assist the subject in recognizing the emotions of others, where the one or more emoticons can include any or a combination of happy, sad, anger, distress, weep, laugh, joy, neutral, tired, shock, love, nervous, scared, and the like. In another embodiment, the controller (112) can be configured to detect the time period when the subject is looking at the one or more objects, such as a paper, a book, and the like, to determine the attention span of the subject. In yet another embodiment, the controller (112) can be configured to assist the subject to practice characters such as digits, uppercase letters, and lowercase letters by detecting the level of accuracy of the writing skills of the subject.

[0084] In an embodiment, the controller (112) can be configured to calm down the subject during a panic attack by playing one or more audios of binaural beats through the audio unit (116). Also, the controller (112) can be configured to enhance the cognitive skills of the subject by playing one or more audios of music, rhymes, lullabies, and the like through the audio unit (116), such as a headphone or earbuds.
[0085] In an embodiment, the controller (112) can be communicatively coupled with the one or more mobile computing devices through a communication module (302), where the communication module comprises any or a combination of a Wireless Fidelity (Wi-Fi) module, Bluetooth module, Li-Fi module, optical fiber, Wireless Local Area Network (WLAN), ZigBee, and the like.
[0086] In an embodiment, the system (100) can include a power source (306) operatively coupled with the image acquisition unit (104), the first set of sensors (106), the second set of sensors (108), the controller (112), and the display unit (114), where the power source can facilitate providing electric power to the controller (112). In another embodiment, the power source (306) can include any
or a combination of cell, battery, electric power line, inverter, capacitor bank, inductor, and the likes.
[0087] As illustrated in FIG. 4, the system (100) can include a device (102) adapted to be attached to a frame of a spectacle. The device (102) comprises an image acquisition unit (104), such as a camera, camcorder, video recorder, scanner, or face recognition sensor, to capture images; a display unit (114) to display emoticons, letters, digits, and the like; an audio unit (116), such as an earphone, to provide one or more audios; and a controller (112), where the controller (112) can be operatively coupled with the image acquisition unit (104), the display unit (114), and the audio unit (116).
[0088] In an embodiment, the device (102) can include a transparent screen (402) attached to the device (102) such that the transparent screen (402) can be positioned over at least one of a glass attached to a spectacle, and a mirror can be attached with the device (102) to reflect information displayed on the display unit (114) to the transparent screen (402), where the transparent screen (402) and the mirror can be moved to a pre-defined angle to adjust the transparent screen (402) according to the user's requirement.
[0089] In an exemplary embodiment, when the letters of the one or more languages, such as the English alphabets A, B, C, D, and the like, are displayed on the display unit (114), the reflection of the letters can be displayed on the transparent screen (402) through the attached mirror. In another exemplary embodiment, when the one or more emoticons associated with emotions of the one or more entities found in the pre-defined area are displayed on the display unit (114), the reflection of the emoticons can be displayed on the transparent screen (402) through the attached mirror.
[0090] As illustrated in FIG. 5A, a system (100) is communicatively coupled with one or more mobile computing devices through a communication module (302) to access the system (100) remotely. In an exemplary embodiment, the associated mobile computing devices comprise an application where registration is required; when a registered user initiates the application, the registered user's information, such as name and age, is required to be entered, and if the entered name
and age are found in the database, the user can access the system (100) through the associated mobile computing devices.
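The registration check described above can be sketched in a few lines; the record format and names are hypothetical, and a real application would authenticate against a proper user store rather than a name-and-age pair:

```python
# Hypothetical registered records: (name, age) pairs stored at registration.
REGISTERED_USERS = {("Asha", 4), ("Ravi", 6)}

def can_access(name, age):
    """Grant access only when the entered name and age match a record."""
    return (name, age) in REGISTERED_USERS
```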
[0091] In an exemplary embodiment, one or more features such as reading training, writing training, play rhymes, and the like are available in the application installed on the mobile computing device, as shown in FIG. 5B. The user can select at least one of the features, such as reading training, whereupon the characters can be played aloud, facilitating the user to listen to the alphabets or words through the audio unit (116).
[0092] In an exemplary embodiment, when writing training is selected from the mobile computing device, the characters are displayed on a display unit (114), and the user practices the characters on a paper; an image acquisition unit (104) captures images of the paper and transmits them to a controller (112), where the controller (112) analyzes the writing skills and displays the time engaged and attention on the mobile computing devices, as shown in FIG. 5C.
[0093] In an exemplary embodiment, the system (100) can enhance the confidence of the subject having autism, as they can easily detect the emotions of others. Also, the system (100) can provide the level of accuracy of the writing skills of the subject and enable parents, caregivers, and the like to take suitable action to enhance the writing skills. Similarly, the system (100) can track the concentration of the subject on a task, and parents or caregivers can take the required action to increase the attention span of the subject.
[0094] While the foregoing describes various embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.
ADVANTAGES OF THE PRESENT DISCLOSURE
[0095] The present disclosure provides a system that helps in assisting a person having autism disorder.
[0096] The present disclosure provides a system to recognize one or more skills of the patient and accordingly provide assistance to improve the skills.
[0097] The present disclosure provides a system to recognize the learning speed of the person having autism disorder and accordingly provide assistance.
[0098] The present disclosure provides a system to provide sound therapies to the person.
[0099] The present disclosure provides a system to calm down the person in case of hyperactive breakdown or irritability.
[00100] The present disclosure provides a system that helps in avoiding expensive therapies, medicinal treatment, and other means for treating autism disorder.
We Claim:
1. A system (100) for aiding a subject with autism, the system (100) comprising:
a device (102) adapted to be coupled with frame of a spectacle of
the subject, the device (102) comprising:
an image acquisition unit (104) configured to capture a first set of images of one or more entities in a pre-defined area, and correspondingly generate a first set of signals;
a first set of sensors (106) configured to sense one or more
physiological parameters of the subject, and correspondingly
generate a second set of signals; and
a controller (112) operatively coupled to the image acquisition unit (104) and the first set of sensors (106), wherein the controller (112) includes one or more processors coupled with a memory, the memory storing instructions executable by the one or more processors to:
receive the first set of signals and the second set of signals;
extract a third set of signals from the first set of signals, a fourth set of signals from the second set of signals, wherein the third set of signals pertain to facial parameters associated with the one or more entities and the fourth set of signals pertain to heart rate value of the subject;
match the extracted facial parameters with a first dataset, wherein the first dataset comprises pre-stored facial parameters pertaining to emotions of the one or more entities;
transmit a first set of actuation signals to a display unit (114), wherein the first set of actuation signals actuate the display unit (114) that facilitates in displaying one or more emoticons associated with emotions of the one or more entities found in the pre-defined area;
compare the extracted heart rate value with a second dataset, wherein the second dataset includes a pre-determined heart rate limit;
transmit a set of acoustic signals to an audio unit (116) when the compared heart rate value is beyond the pre-determined heart rate limit, wherein the set of acoustic signals pertains to one or more audios stored in a third dataset, wherein the one or more audios provide sound therapies to the subject; and wherein the controller (112) is further configured to:
receive a request from associated one or more computing devices,
transmit a first set of actuation signals to actuate the image acquisition unit (104), wherein the image acquisition unit (104) facilitates in capturing a second set of images of one or more objects, and correspondingly generate a fifth set of signals;
transmit a second set of actuation signals to actuate a second set of sensors (108), wherein the second set of sensors facilitates in detecting head movement of the subject, and correspondingly generate a sixth set of signals;
receive the fifth set of signals and the sixth set of signals;
extract edges parameters of the one or more objects from the fifth set of signals, and head attributes from the sixth set of signals; and
determine a time duration from the extracted edges parameters and the extracted head attributes to detect attention span of the subject.
2. The system (100) as claimed in claim 1, wherein the audio unit (116) comprises any or a combination of speaker, earphone, ear bud, and headset.
3. The system (100) as claimed in claim 1, wherein the image acquisition unit (104) comprises any or a combination of camera, scanner, and face recognition sensor, and wherein the first set of sensors include any or a combination of heart rate sensor, heartbeat sensors, and pulse sensors, and wherein the second set of sensors include any or a combination of accelerometer and gyroscope.
4. The system (100) as claimed in claim 1, wherein the one or more emoticons comprise any or a combination of happy, sad, anger, distress, weep, laugh, joy, neutral, tired, shock, love, nervous, and scared.

5. The system (100) as claimed in claim 1, wherein upon receiving a request from the associated one or more mobile computing devices, the controller (112) is configured to:
transmit a set of control signals to the display unit (114), wherein the display unit (114) actuates and facilitates in displaying any or a combination of pre-defined characters;
transmit a third set of actuation signals to actuate the image acquisition unit (104), wherein the image acquisition unit facilitates in capturing a third set of images, and correspondingly generate a seventh set of signals;
extract one or more characters drawn by the subject on the one or more objects from the received seventh set of signals; and
match the extracted one or more characters with the pre-defined set of one or more characters stored in a fourth dataset, wherein the pre-defined set of one or more characters pertain to one or more images of digits, uppercase letters, and lowercase letters drawn by the one or more entities; and
determine level of accuracy from one or more characters drawn by the subject, and correspondingly transmit the level of accuracy to the associated one or more computing devices.
6. The system (100) as claimed in claim 1, wherein the controller (112) is configured to transmit a second set of acoustic signals to the audio unit (116), wherein the second set of acoustic signals pertains to any or a combination of rhymes, lullaby, and music to entertain the subject.

7. The system (100) as claimed in claim 1, wherein the controller (112) is in communication with the one or more mobile computing devices through a communication module, wherein the communication module comprises any or a combination of Wireless Fidelity (Wi-Fi) Module, Bluetooth
Module, Li-Fi Module, optical fiber, Wireless Local Area Network (WLAN), and ZigBee.
8. The system (100) as claimed in claim 1, wherein the one or more mobile computing devices comprises any or a combination of mobile terminal, laptop and tablet, wherein the one or more mobile computing devices are configured to transmit the one or more requests to the system (100) that facilitates in controlling the system (100) remotely.
9. The system (100) as claimed in claim 1, wherein the display unit (114) comprises any or a combination of light emitting diode (LED), liquid crystal display (LCD), organic light emitting diode (OLED), and LED matrix.
10. The system (100) as claimed in claim 1, wherein the system (100) includes a power source operatively coupled to the image acquisition unit (104), the first set of sensors (106), the second set of sensors (108), the controller (112), and the display unit (114) wherein the power source facilitates in providing electric power to the system (100), and wherein the power source includes any or a combination of cell, battery, electric power line, inverter, capacitor bank, and inductor.