
Device To Detect Emotion And Content Delivery

Abstract: The present disclosure pertains to a device 100 that detects the emotion of a subject and correspondingly provides content automatically. The device 100 stores Holy Scriptures in digital format and applies natural language processing techniques to extract verses; based on the emotion of the subject, it delivers the related verse on a display unit 108 of the device 100. Audio recordings such as devotional songs and verses may also be played automatically by an audio unit 110 provided on the device 100. Sensors 104 provided on the device 100 are configured to detect facial expressions such as anger and sadness, whereupon the device extracts the related verse from the memory, displays it on the display unit 108, and produces sound through the audio unit 110 to uplift the mood of the subject.


Patent Information

Application #
Filing Date
19 January 2022
Publication Number
45/2022
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

Chitkara Innovation Incubator Foundation
SCO: 160-161, Sector - 9c, Madhya Marg, Chandigarh- 160009, India.

Inventors

1. NIJJER, Shivinder
Chitkara Business School, Chitkara University, Chandigarh-Patiala National Highway, Village Jhansla, Rajpura, Punjab - 140401, India.
2. KAUR, Bhalinder
Baba Banda Singh Bahadur Engineering College, Fatehgarh Sahib, Punjab - 140407, India.
3. SAKHUJA, Sumit
Chitkara Business School, Chitkara University, Chandigarh-Patiala National Highway, Village Jhansla, Rajpura, Punjab - 140401, India.

Specification

TECHNICAL FIELD
[0001] The present invention generally relates to emotion-based content recommendation. More particularly, it relates to a device that detects the emotion of a user, recommends content such as verses of holy books, and also plays related songs to uplift the mood of the user.

BACKGROUND
[0002] Background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[0003] Users of computing devices spend increasing amounts of time browsing streams of posts on social networks, news articles, video, audio, and other digital content. The amount of information available to users is also increasing. Thus, a need exists for delivering content to a user that may be of current interest to them. In existing devices, when users require information, they need to search for it manually on the device, for example, the music they want to listen to, the video they want to watch, the webpage they want to browse, or the article they want to read. With today's huge volume of information, it takes users a lot of time to manually select content of interest from a variety of information, and the required information cannot be obtained quickly.
[0004] Nowadays, ancient Holy Scriptures like Sri Guru Granth Sahib ji, the Bible, Bhaagwad Geeta, Ved Puraans, and the Quran are available in electronic format, enabling users to read them easily on the go. Mobile applications have been designed to display daily wisdom fetched from these scriptures and may be installed on users' mobile computing devices. Besides, holy songs are also played by music apps on the user's request. However, there is a lack of a feature that senses emotions to judge the mood of the person and then plays the relevant holy song or extracts the relevant verse for the user to read, to uplift his or her mood.
[0005] Existing Bluetooth headphones include simple storage for playing pre-recorded songs and verses, and music apps like Gaana, YouTube, and Spotify can build a song list based on user history; however, they do not incorporate mood prediction or emotion-sensing technology. Various applications contain Holy Scriptures in electronic format, but they do not integrate emotion sensing to display a matching verse.
[0006] There is a need for a solution that overcomes the above-mentioned and other limitations of existing solutions: a device that recognizes the emotion of the user and plays related content automatically based on the user's emotions.

OBJECTS OF THE PRESENT DISCLOSURE
[0007] Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are listed herein below.
[0008] An object of the present disclosure is to provide a device to detect emotional state of a user.
[0009] Another object of the present disclosure is to provide content delivery based on the emotional state of the user, to uplift the mood of the user.
[0010] Another object of the present disclosure is to provide a device that classifies verses of Holy Scriptures and enables the user to read and listen to holy verses and songs.
[0011] Another object of the present disclosure is to provide a hand-held device that is an efficient and cost-effective solution.
[0012] Various objects, features, aspects and advantages of the present disclosure will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like features.

SUMMARY
[0013] Various aspects of the present disclosure relate to emotion-based content recommendation. In particular, they relate to a device that detects the emotion of a user, recommends content such as verses of holy books, and also plays related songs to uplift the mood of the user.
[0014] An aspect of the present disclosure discloses a device to detect the emotion of a subject. The device may include a housing; one or more sensors positioned on the housing to detect one or more parameters of a subject; and a processing unit operatively coupled to the one or more sensors. The processing unit may include a learning engine coupled with a memory, the memory storing instructions executable by the learning engine and configured to: receive one or more parameters of the subject; extract data depicting a face of the subject; analyse the data to identify a plurality of points on the face; identify an emotional state of the subject using the extracted plurality of points on the face; extract at least one content item from a dataset based on the identified emotional state of the subject, wherein the dataset stores a plurality of contents; display the extracted content on a display unit positioned on the housing; and generate an acoustic signal and transmit it to an audio unit coupled to the housing, wherein upon receiving the acoustic signal the audio unit generates sound based on the acoustic signal.
[0015] In an aspect, the acoustic signal may include any or a combination of an audio recording, song, poem, verse from a scripture, and verse of a holy person.
[0016] In an aspect, the one or more sensors may include any or a combination of camera, infrared sensor, 3D sensor, image sensor, tactile sensor, touch sensor, heart rate sensor, and temperature sensor.
[0017] In an aspect, the one or more parameters may include any or a combination of facial image, heart rate, temperature, and hand movement on device.
[0018] In an aspect, a communication unit may be coupled to the housing, and the communication unit may include at least one of Wi-Fi (Wireless Fidelity), Worldwide Interoperability for Microwave Access (WiMAX), Bluetooth, Wireless LAN (WLAN), and Wireless USB (Wireless Universal Serial Bus).
[0019] In an aspect, the display unit may be selected from a group consisting of light emitting diode (LED), liquid crystal display (LCD), organic light emitting diode (OLED), and LED matrix.
[0020] In an aspect, the audio unit may be selected from a group consisting of a speaker, headphones, and air pods, wherein the audio unit is communicatively coupled with the processing unit via the communication unit.
[0021] In an aspect, a power source may be coupled to the housing to provide electricity to the one or more sensors, the processing unit, the display unit, and the audio unit.
[0022] In an aspect, the power source may include any or a combination of rechargeable battery, lithium (Li) ion cell, rechargeable cells, electrochemical cells, storage battery, and secondary cell.

BRIEF DESCRIPTION OF THE DRAWINGS
[0023] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims.
[0024] In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without some of these specific details.
[0025] FIG. 1 illustrates a block diagram of a proposed device for detecting emotion of a subject and delivering content, in accordance with an embodiment of the present disclosure.
[0026] FIG. 2 illustrates exemplary functional components of a processing unit of the proposed device, in accordance with an embodiment of the present disclosure.
[0027] FIG. 3 illustrates an exemplary flow diagram of detecting emotion of a subject and delivering content, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION
[0028] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
[0029] In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without some of these specific details. Embodiments explained herein relate to emotion-based content recommendation. In particular, the present disclosure relates to a device that detects the emotion of a user, recommends content such as verses of holy books, and also plays related songs to uplift the mood of the user.
[0030] FIG. 1 illustrates a block diagram of a proposed device for detecting emotion of a subject and delivering content, in accordance with an embodiment of the present disclosure.
[0031] As illustrated in FIG. 1, a device 100 for detecting the emotion of a subject (interchangeably referred to as user, hereinafter) and correspondingly delivering content is disclosed. The device 100 can include a housing 102, such as that of a tablet. The housing 102 can include one or more sensors 104 positioned on it to detect one or more parameters of the subject, a processing unit 106 for analyzing the received one or more parameters, a display unit 108, an audio unit 110, a communication unit 112, and a power source 114.
[0032] In an embodiment, the one or more sensors 104 (collectively referred to as sensors 104, and individually as sensor 104) can include any or a combination of a camera, infrared sensor, 3D sensor, image sensor, tactile sensor, touch sensor, and heart rate sensor. The sensors 104 can be configured to monitor one or more parameters such as facial image, heart rate, and hand movement of the user. For example, when the user moves his or her hand on the device, especially on the display, the device can detect the hand movement; similarly, the heart rate sensor can detect the heart rate of the user, and facial images of the user can be collected with the camera. The camera can be a CMOS camera.
[0033] In an embodiment, the display unit 108 can be positioned on the front side of the housing and configured to display recommended content. The display unit 108 can be realized through several known technologies such as, but not limited to, at least one of a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, a plasma display, an Organic LED (OLED) display, and an LED matrix.
[0034] In an embodiment, the audio unit 110 can be, but is not limited to, a speaker, headphones, a microphone, or air pods. The audio unit 110 can be communicatively coupled with the processing unit 106 via a communication unit 112. The audio unit 110 can be configured to play, but is not limited to, an audio recording, song, poem, verse from a scripture, or verse of a holy person, associated with different emotional states of the user. In addition, the audio unit 110 can be built into the housing 102, or an external audio unit can be used. For example, the user can connect air pods via Bluetooth, and audio recordings selected based on his or her mood can be played automatically.
[0035] In an embodiment, the communication unit 112 can be configured to facilitate wireless Internet technology. Examples of such wireless Internet technology include GSM, Wireless LAN (WLAN), Wireless Fidelity (Wi-Fi), Wi-Fi Direct, Digital Living Network Alliance (DLNA), Wireless Broadband (WiBro), Worldwide Interoperability for Microwave Access (WiMAX), High Speed Downlink Packet Access (HSDPA), HSUPA (High Speed Uplink Packet Access), Long Term Evolution (LTE), LTE-A (Long Term Evolution-Advanced), and the like.
[0036] In addition, the communication unit 112 can be configured to facilitate short-range communication. For example, short-range communication can be supported using at least one of Bluetooth, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra-Wideband (UWB), ZigBee, Near Field Communication (NFC), Wireless-Fidelity (Wi-Fi), Wi-Fi Direct, Wireless USB (Wireless Universal Serial Bus), and the like.
[0037] In an embodiment, a power source 114 can be operatively coupled to the housing to provide electricity to one or more sensors 104, the processing unit 106, the display unit 108, and the audio unit 110. The power source 114 can include any or a combination of rechargeable battery, lithium (Li) ion cell, rechargeable cells, electrochemical cells, storage battery, Lithium Polymer, Lithium Ion, Nickel Cadmium, Nickel Hydride and secondary cell.
[0038] In an embodiment, the processing unit 106 can be operatively coupled to the one or more sensors 104. The processing unit 106 can include a learning engine coupled with a memory, the memory storing instructions executable by the learning engine and configured to: receive one or more parameters of the subject; extract data depicting a face of the subject; analyse the data using face recognition techniques to identify a plurality of points on the face; identify an emotional state of the subject using the extracted plurality of points; and extract at least one content item from a dataset based on the identified emotional state, wherein the dataset stores a plurality of contents. The processing unit 106 can be configured to display the extracted content on a display unit 108 positioned on the housing 102.
[0039] In an embodiment, the processing unit 106 can be further configured to generate an acoustic signal and transmit it to the audio unit 110. Upon receiving the acoustic signal, the audio unit 110 can generate sound based on it. The acoustic signal can pertain to information such as an audio recording, song, poem, verse from a scripture, or verse of a holy person. For example, the scriptures include Holy Scriptures such as, but not limited to, Sri Guru Granth Sahib ji, the Bible, Bhaagwad Geeta, Ramayana, Ved Puraans, and the Quran.
[0040] In an embodiment, the device 100 can include either a web client or an application to facilitate communication and interaction between the user and the device 100. In various embodiments, the device 100 can provide user-selected functions through one or more user interfaces (UIs). The UIs may be specifically associated with the web client (e.g., a browser) or the application. Accordingly, the device 100 may provide a set of machine-readable instructions that, when interpreted by the device using the web client or the application, cause the device to present the UI and transmit user input received through such UIs back to the device 100. As an example, the UIs provided on the device 100 enable the user to check information such as verses and audio recordings on the display unit 108.
[0041] FIG. 2 illustrates exemplary functional components of a processing unit of the proposed device, in accordance with an embodiment of the present disclosure.
[0042] As illustrated in FIG. 2, a processing unit 106 is disclosed, the processing unit 106 can include one or more processor(s) 202 that can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that manipulate data based on operational instructions. Among other capabilities, the one or more processor(s) 202 can be configured to fetch and execute computer-readable instructions stored in a memory 204 of the processing unit 106. The memory 204 can store one or more computer-readable instructions or routines, which may be fetched and executed to create or share the data units over a network service. The memory 204 can include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
[0043] In an embodiment, the processing unit 106 can also include an interface(s) 206. The interface(s) 206 may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) 206 may facilitate communication of device 100. The interface(s) 206 may also provide a communication pathway for one or more components of the device 100. Examples of such components include, but are not limited to, natural language processing engine(s) 208 (also referred as learning engine(s) 208, herein) and a database 210.
[0044] In an embodiment, the natural language processing engine(s) 208 can be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the natural language processing engine(s) 208. In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the natural language processing engine(s) 208 may be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the natural language processing engine(s) 208 may include a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the natural language processing engine(s) 208. In such examples, the processing unit 106 can include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the processing unit 106 and the processing resource. In other examples, the natural language processing engine(s) 208 may be implemented by electronic circuitry. The database 210 can include data that is either stored or generated as a result of functionalities implemented by any of the components of the natural language processing engine(s) 208.
[0045] In an embodiment, the natural language processing engine(s) 208 can include a document classification unit 212, a topic extraction unit 214, a text mining unit 216, a prediction unit 218, a recommendation engine 220, and other unit(s) 222. The other unit(s) 222 can implement functionalities that supplement applications or functions performed by the device 100 or the natural language processing engine(s) 208.
[0046] It would be appreciated that the units described are only exemplary, and any other unit or sub-unit may be included as part of the device 100. These units too may be merged or divided into super-units or sub-units as may be configured.
[0047] In an embodiment, the processing unit 106 can be configured to receive Holy Scriptures in the form of PDF or Word files. These Holy Scriptures can be downloaded to the device 100 directly, or can be transmitted to the device 100 from an external device connected via a data cable or the communication unit 112. The received Holy Scriptures can be stored in the database 210. The document classification unit 212 can be configured to classify the documents based on their type, such as health-related verses, "God loves you" verses, and the like. The topic extraction unit 214 can extract topics from each of the classified documents. Further, the text mining unit 216 can be configured to extract text from each of the topics and store the information in the database 210.
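The classification described above can be sketched as a simple keyword-lexicon classifier. The theme names, keywords, and verse texts below are illustrative placeholders, not drawn from any actual scripture database; a production device would likely use a trained document-classification model instead.

```python
# Hypothetical theme lexicon -- not taken from any real scripture corpus.
THEME_KEYWORDS = {
    "happy": {"joy", "rejoice", "bless", "love"},
    "sad": {"sorrow", "grief", "weep", "despair"},
    "neutral": {"path", "truth", "wisdom", "duty"},
}

def classify_verse(verse: str) -> str:
    """Assign a verse to the theme whose keywords it matches most."""
    words = set(verse.lower().split())
    scores = {theme: len(words & kws) for theme, kws in THEME_KEYWORDS.items()}
    return max(scores, key=scores.get)

def build_theme_index(verses):
    """Group verses by theme, mimicking the grouping stored in database 210."""
    index = {theme: [] for theme in THEME_KEYWORDS}
    for verse in verses:
        index[classify_verse(verse)].append(verse)
    return index
```

In practice the document classification unit 212 and topic extraction unit 214 would replace the keyword sets with statistically learned features, but the index structure would serve the same lookup role.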
[0048] In an embodiment, upon receiving the one or more parameters of the subject from the one or more sensors 104 coupled to the device 100, the prediction unit 218 can extract data depicting a face of the subject, analyse the data using face recognition techniques to identify a plurality of points on the face, and identify an emotional state of the subject using the extracted plurality of points. Further, the recommendation engine 220 extracts at least one content item from a dataset stored in the database 210, based on the identified emotional state of the subject.
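A minimal sketch of how the prediction unit 218 and recommendation engine 220 might cooperate is given below. The landmark names, the y-grows-downward coordinate convention, and the thresholds are hypothetical; an actual device would use a trained facial-expression model rather than a hand-tuned geometric rule.

```python
def predict_emotion(landmarks):
    """Map named (x, y) facial points to an emotional state.

    Hypothetical rule: mouth corners raised above the mouth centre
    suggest a smile (happy); lowered corners suggest sadness.
    """
    left = landmarks["mouth_left"]
    right = landmarks["mouth_right"]
    centre = landmarks["mouth_centre"]
    # In image coordinates y grows downward, so a centre that sits
    # lower (larger y) than the corners means the corners are lifted.
    corner_lift = centre[1] - (left[1] + right[1]) / 2
    if corner_lift > 2.0:
        return "happy"
    if corner_lift < -2.0:
        return "sad"
    return "neutral"

def recommend(emotion, theme_index):
    """Fetch stored verses for the detected emotion, as engine 220 would."""
    return theme_index.get(emotion, [])
```

The two-function split mirrors the description: the prediction unit turns facial points into a label, and the recommendation engine turns that label into content from the database.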
[0049] In an embodiment, verses recommended by the recommendation engine 220 can be displayed on the display unit 108. The processing unit 106 can also generate an acoustic signal and transmit it to the audio unit 110. Upon receiving the acoustic signal, the audio unit 110 can generate sound based on it. The acoustic signal can pertain to information such as an audio recording, song, poem, verse from a scripture, or verse of a holy person. For example, the scriptures include Holy Scriptures such as, but not limited to, Sri Guru Granth Sahib ji, the Bible, Bhaagwad Geeta, Ramayana, Ved Puraans, and the Quran.
[0050] In an exemplary embodiment, emotional information can be detected using sensors 104 which capture information about the user's physical state or behavior. The information gathered is analogous to the cues humans use to perceive emotions in others. For example, a camera can capture facial expressions, while a microphone might capture speech. Other sensors can detect emotional cues by directly measuring physiological data, such as skin temperature and galvanic resistance. In some embodiments, a camera or IR camera can detect temperature changes in a person's skin. For instance, if a user is stressed, the blood rushing to a person's face may elevate the heat pattern or sensed heat from that person's face.
[0051] In an exemplary embodiment, the way a user alters his or her speech can be used as information to produce a device capable of recognizing affect based on extracted features of speech. For example, speech produced in a state of fear, anger, or joy becomes faster, louder, and precisely enunciated, with a higher and wider pitch range. Other emotions, such as tiredness, boredom, or sadness, lead to slower, lower-pitched, and slurred speech.
[0052] In an exemplary embodiment, emotional speech processing recognizes the user's emotional state by analyzing speech patterns. Vocal parameters and prosody features such as pitch variables and speech rate may be analyzed through pattern recognition, e.g., using a microphone provided on the device 100. In addition, the camera positioned on the device may also detect facial expressions, which may be used to detect the mood (i.e., emotional state) of the user.
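As an illustration of such prosody features, the sketch below computes two crude proxies from a raw mono waveform: zero-crossing rate (related to pitch) and RMS energy (related to loudness). These hand-rolled estimates are for illustration only; real emotional speech processing would feed richer features into a trained classifier.

```python
import numpy as np

def zero_crossing_rate(signal: np.ndarray, sr: int) -> float:
    """Zero crossings per second -- a crude proxy for pitch."""
    crossings = np.sum(np.abs(np.diff(np.sign(signal))) > 0)
    return float(crossings) * sr / len(signal)

def rms_energy(signal: np.ndarray) -> float:
    """Root-mean-square amplitude -- a crude proxy for loudness."""
    return float(np.sqrt(np.mean(signal ** 2)))

# A 440 Hz tone sampled at 8 kHz crosses zero roughly 880 times per second.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t + 0.1)  # phase offset avoids exact-zero samples
```

Faster, louder speech (fear, anger, joy) would push both proxies up relative to the user's baseline; slower, quieter speech (sadness, tiredness) would push them down.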
[0053] Recognizing emotional information requires the extraction of meaningful patterns from the gathered data. This can be done using machine learning techniques that process different modalities, such as, but not limited to, natural language processing, speech recognition, speech waveforms, or facial expression detection, and produce labels such as "sad," "mad," "happy," "hurried," or "stressed."
[0054] FIG. 3 illustrates an exemplary flow diagram of detecting emotion of a subject and delivering content, in accordance with an embodiment of the present disclosure.
[0055] As illustrated in FIG. 3, firstly, in step 302, digitalization of ancient scriptures (i.e., Holy Scriptures) can be done by a processing unit 106 to create a database 210. Upon receiving the digital format of the Holy Scriptures, the processing unit 106 extracts the themes of the different verses written in these scriptures through an application of natural language processing coded in a high-level language (as shown in step 304). The processing unit 106 can further classify each theme, such as happy, neutral, or sad, through the application of document classification techniques, and save this database to the memory (as shown in step 306).
[0056] In an embodiment, in step 308, a touch sensor 104 placed on the device can monitor heart rate and predict the current emotion of the user, i.e., whether the user is happy, neutral, or sad. In step 310, the processing unit 106 (i.e., the processor(s)) can extract matching themes based on the predicted emotion and display the information on the device 100 (i.e., on the display unit 108). The user can select a preferred theme on the device 100 (as shown in step 312), and in step 314, the processor(s) can fetch the keywords of the verses falling under the theme chosen by the user. Further, in step 316, the processor(s) search for the keywords in a linked music app, and the recommendation engine can generate a list of appropriate holy songs matching the keywords of those verses (as shown in step 318), and the user can choose a particular song, which can be played on the device 100 (as shown in step 320).
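Steps 310 to 318 above can be sketched as a keyword-overlap ranking. The verse keywords and the song catalogue below are invented for illustration; in practice the keywords would come from the text mining unit 216 and the songs from a linked music app.

```python
# Hypothetical verse keywords per theme and a hypothetical song catalogue.
VERSE_KEYWORDS = {
    "happy": ["grace", "joy", "light"],
    "sad": ["comfort", "hope", "solace"],
    "neutral": ["wisdom", "path", "truth"],
}

SONG_CATALOGUE = {
    "Morning Hymn": {"joy", "light", "praise"},
    "Evening Solace": {"comfort", "solace", "rest"},
    "Song of the Path": {"path", "truth", "journey"},
}

def playlist_for(emotion: str, top_n: int = 2):
    """Rank catalogue songs by keyword overlap with the emotion's verses."""
    keywords = set(VERSE_KEYWORDS.get(emotion, []))
    ranked = sorted(
        SONG_CATALOGUE.items(),
        key=lambda item: len(item[1] & keywords),
        reverse=True,
    )
    # Keep only songs that share at least one keyword with the theme.
    return [title for title, kws in ranked[:top_n] if kws & keywords]
```

The returned list corresponds to the song list of step 318, from which the user picks a song to play in step 320.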
[0057] In an exemplary embodiment, based on the user's emotion, a song playlist can be generated and displayed on the display unit 108 of the device 100, and the user can easily play a song from the list.
[0058] The above-described features, configurations, effects, and the like are included in at least one of the embodiments of the present invention, and should not be limited to only one embodiment. In addition, the features, configurations, effects, and the like illustrated in each embodiment may be implemented with regard to other embodiments as they are combined with one another or modified by those skilled in the art. Thus, content related to these combinations and modifications should be construed as included in the scope and spirit of the invention as disclosed in the accompanying claims.
[0059] Further, although the embodiments have been mainly described until now, they are merely exemplary and do not limit the present invention. Thus, those skilled in the art to which the present invention pertains will know that various modifications and applications which have not been exemplified may be performed within a range which does not deviate from the essential characteristics of the embodiments. For instance, the constituent elements described in detail in the exemplary embodiments can be modified. Further, the differences related to such modifications and applications shall be construed to be included in the scope of the present invention specified in the attached claims.
[0060] The present invention encompasses various modifications to each of the examples and embodiments discussed herein. According to the invention, one or more features described above in one embodiment or example can be equally applied to another embodiment or example described above. The features of one or more embodiments or examples described above can be combined into each of the embodiments or examples described above. Any full or partial combination of one or more embodiment or examples of the invention is also part of the invention.

ADVANTAGES OF THE PRESENT DISCLOSURE
[0061] The present disclosure provides a device to detect emotional state of a user.
[0062] The present disclosure provides a device for delivering content based on the emotional state of the user, to uplift the mood of the user.
[0063] The present disclosure provides a device that classifies verses of Holy Scriptures and enables the user to read and listen to holy verses and songs.
[0064] The present disclosure provides a hand-held device that is an efficient and cost-effective solution.

We Claim:

1. A device 100 to detect emotion of a subject comprising:
a housing 102;
one or more sensors 104 positioned on the housing 102 to detect one or more parameters of a subject;
a processing unit 106 operatively coupled to the one or more sensors 104, wherein the processing unit 106 comprises a learning engine coupled with a memory, the memory storing instructions executable by the learning engine and configured to:
receive one or more parameters of the subject;
extract data depicting a face of the subject;
analyse the data to identify a plurality of points on the face;
identify an emotional state of the subject using the extracted plurality of points on the face;
extract at least one content item from a dataset based on the identified emotional state of the subject, wherein the dataset stores a plurality of contents;
display the extracted at least one content item on a display unit 108 positioned on the housing; and
generate an acoustic signal and transmit it to an audio unit 110 coupled to the housing, wherein upon receiving the acoustic signal the audio unit 110 generates sound based on the acoustic signal.
2. The device as claimed in claim 1, wherein the acoustic signal pertains to any or a combination of an audio recording, song, poem, verse from a scripture, and verse of a holy person.
3. The device as claimed in claim 1, wherein the one or more sensors 104 comprise any or a combination of a camera, infrared sensor, 3D sensor, image sensor, tactile sensor, touch sensor, heart rate sensor, and temperature sensor.
4. The device as claimed in claim 1, wherein the one or more parameters comprise any or a combination of facial image, heart rate, temperature, and hand movement on the device.
5. The device as claimed in claim 1, wherein a communication unit 112 is coupled to the housing, wherein the communication unit comprises at least one of Wi-Fi (Wireless Fidelity), Worldwide Interoperability for Microwave Access (WiMAX), Bluetooth, Wireless LAN (WLAN), and Wireless USB (Wireless Universal Serial Bus).
6. The device as claimed in claim 1, wherein the display unit 108 is selected from a group consisting of light emitting diode (LED), liquid crystal display (LCD), organic light emitting diode (OLED), and LED matrix.
7. The device as claimed in claim 1, wherein the audio unit 110 is selected from a group consisting of a speaker, headphones, and air pods, wherein the audio unit is communicatively coupled with the processing unit via the communication unit.
8. The device as claimed in claim 1, wherein a power source 114 is coupled to the housing to provide electricity to the one or more sensors 104, the processing unit 106, the display unit 108, and the audio unit 110, wherein the power source comprises any or a combination of a rechargeable battery, lithium (Li) ion cell, rechargeable cells, electrochemical cells, storage battery, and secondary cell.

Documents

Application Documents

# Name Date
1 202211002970-STATEMENT OF UNDERTAKING (FORM 3) [19-01-2022(online)].pdf 2022-01-19
2 202211002970-POWER OF AUTHORITY [19-01-2022(online)].pdf 2022-01-19
3 202211002970-FORM FOR STARTUP [19-01-2022(online)].pdf 2022-01-19
4 202211002970-FORM FOR SMALL ENTITY(FORM-28) [19-01-2022(online)].pdf 2022-01-19
5 202211002970-FORM 1 [19-01-2022(online)].pdf 2022-01-19
6 202211002970-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [19-01-2022(online)].pdf 2022-01-19
7 202211002970-EVIDENCE FOR REGISTRATION UNDER SSI [19-01-2022(online)].pdf 2022-01-19
8 202211002970-DRAWINGS [19-01-2022(online)].pdf 2022-01-19
9 202211002970-DECLARATION OF INVENTORSHIP (FORM 5) [19-01-2022(online)].pdf 2022-01-19
10 202211002970-COMPLETE SPECIFICATION [19-01-2022(online)].pdf 2022-01-19
11 202211002970-Proof of Right [04-02-2022(online)].pdf 2022-02-04
12 202211002970-FORM-9 [03-11-2022(online)].pdf 2022-11-03
13 202211002970-FORM 18 [06-11-2023(online)].pdf 2023-11-06
14 202211002970-FER.pdf 2025-04-02
15 202211002970-FORM 3 [02-07-2025(online)].pdf 2025-07-02
16 202211002970-FORM-5 [18-08-2025(online)].pdf 2025-08-18
17 202211002970-FORM-26 [18-08-2025(online)].pdf 2025-08-18
18 202211002970-FER_SER_REPLY [18-08-2025(online)].pdf 2025-08-18
19 202211002970-CORRESPONDENCE [18-08-2025(online)].pdf 2025-08-18
20 202211002970-CLAIMS [18-08-2025(online)].pdf 2025-08-18

Search Strategy

1 SearchStrategyMatrix202211002970E_22-03-2024.pdf