Abstract: NON-INVASIVE EMOTIONAL STATE MONITORING DEVICE ABSTRACT A non-invasive emotional state monitoring device (100) is disclosed. The device (100) comprises: a microphone array (102) configured to capture vocal sounds of an animal; and a processor (104) configured to: receive the captured vocal sounds; employ Digital Signal Processing (DSP) techniques on the received vocal sounds; derive Mel-Frequency Cepstral Coefficients (MFCCs) and spectrograms from the received vocal sounds; map the derived Mel-Frequency Cepstral Coefficients (MFCCs) and spectrograms with a prestored dataset of coefficients and spectrograms; and determine emotional states of the animal based on the mapped Mel-Frequency Cepstral Coefficients (MFCCs) and the spectrograms. The device (100) provides real-time analysis of the animal's vocal sounds, allowing immediate detection of stress, anxiety, or discomfort. Claims: 10, Figures: 3
Description: BACKGROUND
Field of Invention
[001] Embodiments of the present invention generally relate to an understanding of animal emotions and behavior and particularly to a non-invasive emotional state monitoring device.
Description of Related Art
[002] The health and nutritional well-being of animals, including pets, domestic animals, and livestock, play a crucial role in their overall development, productivity, and longevity. Proper nutrition is essential to support their physiological functions, immune response, and disease resistance. Over time, various dietary supplements, fortified feeds, and therapeutic formulations have been introduced to enhance animal health and ensure optimal growth. These formulations often include vitamins, minerals, amino acids, probiotics, and herbal extracts aimed at improving digestion, metabolism, and overall vitality. However, existing nutritional solutions often suffer from limitations such as poor bioavailability, inadequate absorption, and difficulty in administration, reducing their overall efficacy.
[003] Advancements in animal nutrition have led to the development of novel formulations that aim to overcome these limitations. Research has shown that the method of nutrient delivery, ingredient synergy, and formulation stability significantly impact the effectiveness of nutritional supplements. Additionally, with growing concerns over synthetic additives and their potential side effects, there has been an increasing demand for natural and organic formulations. Factors such as the animal's age, breed, and specific dietary requirements further complicate the challenge of formulating universally effective nutritional solutions. Despite improvements, many available supplements still fail to provide sustained benefits, particularly in cases requiring long-term dietary intervention.
[004] In recent years, there has been a significant shift towards targeted nutrition solutions that cater to specific health concerns such as joint health, skin and coat maintenance, digestive health, and immune support. The pet care and livestock industries have increasingly adopted advanced bioactive compounds, controlled-release mechanisms, and enhanced delivery systems to maximize nutrient absorption and efficacy. However, ensuring palatability, ease of administration, and compatibility with existing feeding practices remains a challenge. As the industry continues to evolve, there is a pressing need for innovative nutritional formulations that address these concerns while providing comprehensive health benefits to a wide range of animals.
[005] There is thus a need for an improved and advanced non-invasive emotional state monitoring device that addresses the aforementioned limitations in a more efficient manner.
SUMMARY
[006] Embodiments in accordance with the present invention provide a non-invasive emotional state monitoring device. The device comprises a microphone array configured to capture vocal sounds of an animal. The device further comprises a processor communicatively connected to the microphone array. The processor is configured to receive the captured vocal sounds; employ Digital Signal Processing (DSP) techniques on the received vocal sounds from the microphone array; derive Mel-Frequency Cepstral Coefficients (MFCCs) and spectrograms from the received vocal sounds; map the derived Mel-Frequency Cepstral Coefficients (MFCCs) and spectrograms with a prestored dataset of coefficients and spectrograms; and determine emotional states of the animal based on the mapped Mel-Frequency Cepstral Coefficients (MFCCs) and the spectrograms.
[007] Embodiments in accordance with the present invention further provide a method for non-invasive emotional state monitoring. The method comprises the steps of receiving captured vocal sounds from a microphone array; employing Digital Signal Processing (DSP) techniques on the received vocal sounds; deriving Mel-Frequency Cepstral Coefficients (MFCCs) and spectrograms from the received vocal sounds; mapping the derived Mel-Frequency Cepstral Coefficients (MFCCs) and spectrograms with a prestored dataset of coefficients and spectrograms; and determining emotional states of an animal based on the mapped Mel-Frequency Cepstral Coefficients (MFCCs) and the spectrograms.
[008] Embodiments of the present invention may provide a number of advantages depending on their particular configuration. First, embodiments of the present application may provide a non-invasive emotional state monitoring device.
[009] Next, embodiments of the present application may provide a non-invasive device that provides real-time analysis of animals' vocal sounds, allowing immediate detection of stress, anxiety, or discomfort.
[0010] Next, embodiments of the present application may provide a non-invasive device that helps maintain emotional stability, leading to more effective vocal identification sessions and improved outcomes for the animal.
[0011] Next, embodiments of the present application may provide a non-invasive device that is designed to be compact and wearable, such as attaching to a collar or vest, ensuring minimal disruption to animals while allowing continuous monitoring.
[0012] Next, embodiments of the present application may provide a non-invasive device that seamlessly connects with mobile applications and hospital management systems, providing real-time notifications, data logging, and trend analysis for better welfare of animals.
[0013] Next, embodiments of the present application may provide a non-invasive device that differentiates between multiple animals in group settings, allowing handlers to monitor and respond to each animal’s emotional state effectively.
[0014] These and other advantages will be apparent from the present application of the embodiments described herein.
[0015] The preceding is a simplified summary to provide an understanding of some embodiments of the present invention. This summary is neither an extensive nor exhaustive overview of the present invention and its various embodiments. The summary presents selected concepts of the embodiments of the present invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the present invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] The above and still further features and advantages of embodiments of the present invention will become apparent upon consideration of the following detailed description of embodiments thereof, especially when taken in conjunction with the accompanying drawings, and wherein:
[0017] FIG. 1A illustrates a non-invasive emotional state monitoring device, according to an embodiment of the present invention;
[0018] FIG. 1B illustrates an exemplary implementation of the non-invasive emotional state monitoring device, according to an embodiment of the present invention; and
[0019] FIG. 2 depicts a flowchart of a method for non-invasive emotional state monitoring, according to an embodiment of the present invention.
[0020] The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including but not limited to. To facilitate understanding, like reference numerals have been used, where possible, to designate like elements common to the figures. Optional portions of the figures may be illustrated using dashed or dotted lines, unless the context of usage indicates otherwise.
DETAILED DESCRIPTION
[0021] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood, that there is no intention to limit the invention to the specific form disclosed, but, on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the scope of the invention as defined in the claims.
[0022] In any embodiment described herein, the open-ended terms "comprising", "comprises", and the like (which are synonymous with "including", "having", and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of", "consists essentially of", and the like, or the respective closed phrases "consisting of", "consists of", and the like.
[0023] As used herein, the singular forms “a”, “an”, and “the” designate both the singular and the plural, unless expressly stated to designate the singular only.
[0024] FIG. 1A illustrates a non-invasive emotional state monitoring device 100 (hereinafter referred to as the device 100), according to an embodiment of the present invention. The device 100 may be adapted to non-invasively record and recognize vocal sounds of an animal. The recorded and recognized vocal sounds may be further utilized for assessing the emotional state of the animal. Further, the device 100 may possess a portable and pocket-friendly design. The device 100 may further be adapted to be attached to an accessory of the animal to ensure minimal disruption to the behaviour of the animal. The animal may be, but not limited to, a pet animal, a farm animal, a domestic animal, a wild animal, and so forth. In a preferred embodiment of the present invention, the animal may be a trained animal specialized in emotional support and well-being therapy. Embodiments of the present invention are intended to include or otherwise cover any type of the animal.
[0025] The device 100 may comprise a microphone array 102, a processor 104, a communication unit 106, and a power supply unit 108.
[0026] In an embodiment of the present invention, the microphone array 102 may be configured to capture the vocal sounds of the animal. The microphone array 102 may comprise an arrangement of directional microphones configured to isolate sounds based on spatial origin and reduce background noise.
[0027] In an embodiment of the present invention, the processor 104 may be connected to the microphone array 102. The processor 104 may be configured to receive the captured vocal sounds. The processor 104 may be configured to singulate the captured vocal sounds by differentiating between animals in a multi-animal environment. The processor 104 may be configured to employ Digital Signal Processing (DSP) techniques on the received vocal sounds. The Digital Signal Processing (DSP) techniques may be, but not limited to, noise filtering to remove unwanted environmental sounds, beamforming to enhance signals from targeted directions, source separation algorithms to distinguish between multiple simultaneous animal vocal sounds, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the Digital Signal Processing (DSP) techniques, including known, related art, and/or later developed technologies.
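By way of illustration only, one simple instance of the noise-filtering step described above is a spectral gate that suppresses frequency bins falling below a power threshold. The gate approach, the threshold value, and the function name are illustrative assumptions for exposition, not features disclosed by the specification:

```python
import numpy as np

def spectral_noise_gate(signal, threshold_db=-40.0):
    """Suppress spectral components whose power falls more than
    `threshold_db` below the spectral peak (illustrative sketch only;
    the device's actual DSP chain is not limited to this form)."""
    spectrum = np.fft.rfft(signal)
    power_db = 20.0 * np.log10(np.abs(spectrum) + 1e-12)
    # Keep only bins within the gate window relative to the peak bin.
    mask = power_db >= (power_db.max() + threshold_db)
    return np.fft.irfft(spectrum * mask, n=len(signal))
```

A strong vocal component (e.g., a bark's fundamental) passes the gate while diffuse low-level environmental noise is zeroed out.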
[0028] The processor 104 may be configured to derive Mel-Frequency Cepstral Coefficients (MFCCs) and spectrograms from the received vocal sounds. The processor 104 may be configured to map the derived Mel-Frequency Cepstral Coefficients (MFCCs) and spectrograms with a prestored dataset of coefficients and spectrograms. The processor 104 may be configured to determine emotional states of the animal based on the mapped Mel-Frequency Cepstral Coefficients (MFCCs) and the spectrograms. The processor 104 may utilize trained Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) models for mapping and determining the emotional states of the animals. The emotional states may be, but not limited to, a calm emotional state, a happy emotional state, an excited emotional state, an anxious emotional state, an aggressive emotional state, a hungry emotional state, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the emotional states, including known, related art, and/or later developed technologies.
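The derivation and mapping steps above can be sketched as follows. This is a minimal, self-contained MFCC computation (framing, mel filterbank, DCT-II) followed by a nearest-template match against a prestored dataset; all frame sizes, filter counts, and the nearest-neighbour matching rule are illustrative assumptions, and the specification itself contemplates trained CNN/RNN models rather than this simplified matcher:

```python
import numpy as np

def mfcc(signal, sr, n_fft=512, hop=256, n_mels=26, n_coeff=13):
    """Derive MFCCs from a mono signal (simplified illustrative sketch)."""
    # Frame the signal and take the windowed power spectrum of each frame.
    frames = np.lib.stride_tricks.sliding_window_view(signal, n_fft)[::hop]
    power = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)) ** 2
    # Build a triangular mel filterbank between 0 Hz and Nyquist.
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = mel_to_hz(np.linspace(0.0, hz_to_mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * mel_pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    log_mel = np.log(power @ fbank.T + 1e-10)
    # DCT-II of the log-mel energies yields the cepstral coefficients.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeff), (2 * n + 1) / (2 * n_mels)))
    return log_mel @ dct.T

def classify(mfccs, prestored):
    """Map mean MFCCs onto the nearest prestored emotional-state template."""
    feat = mfccs.mean(axis=0)
    return min(prestored, key=lambda k: np.linalg.norm(feat - prestored[k]))
```

Here `prestored` stands in for the prestored dataset of coefficients, keyed by emotional-state label.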
[0029] In an embodiment of the present invention, the processor 104 may be configured to command a plurality of sub-processors to carry out the aforementioned actions. The processor 104 may be, but not limited to, a Programmable Logic Control (PLC) unit, a microprocessor, a development board, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the processor 104 including known, related art, and/or later developed technologies.
[0030] In an embodiment of the present invention, the communication unit 106 may be adapted to establish a communicative link between the processor 104 and a computing unit 110. The communication unit 106 may further be adapted to establish a communicative link between the processor 104 and a cloud database (not shown). The communication unit 106 may enable a transmission of the vocal sounds and the emotional states of the animal concluded from the corresponding vocal sounds among the processor 104, the computing unit 110, and the cloud database. In a preferred embodiment of the present invention, the communication unit 106 may be a Wireless Fidelity (Wi-Fi) or a Bluetooth. Embodiments of the present invention are intended to include or otherwise cover any type of the communication unit 106, including known, related art, and/or later developed technologies.
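Purely for illustration, a message that the communication unit 106 might transmit to the computing unit 110 or the cloud database could resemble the following. The field names, JSON encoding, and function name are hypothetical assumptions, as the specification does not prescribe a transmission format:

```python
import json
import time

def build_payload(device_id, state, confidence):
    """Assemble an illustrative status message for transmission over
    Wi-Fi or Bluetooth (field names are assumptions, not disclosed)."""
    return json.dumps({
        "device": device_id,          # identifier of the device 100
        "state": state,               # determined emotional state
        "confidence": confidence,     # model confidence in [0, 1]
        "timestamp": int(time.time()),
    })
```

The computing unit would decode such a message to drive its real-time display and timestamped history.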
[0031] In an embodiment of the present invention, the power supply unit 108 may be adapted to supply operational power to the processor 104. In a preferred embodiment of the present invention, the power supply unit 108 may be a rechargeable lithium-ion battery. Embodiments of the present invention are intended to include or otherwise cover any type of the power supply unit 108, including known, related art, and/or later developed technologies.
[0032] In an embodiment of the present invention, the computing unit 110 may be an electronic device used by a user such as, but not limited to, an owner of the animal, a handler, a caretaker, a veterinarian, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the user. The computing unit 110 may display real-time emotional state insights of the animal. The computing unit 110 may further display a previously exhibited emotional states of the animal along with a timestamp. The computing unit 110 may provide a visual and a textual emotion analytics over-time of the animal. The computing unit 110 may further be adapted to generate and display alerts for stress or unusual behavior in the animal. The computing unit 110 may further be integrated with hospital management systems for centralized monitoring. The computing unit 110 may be, but not limited to, a personal computer, a desktop, a server, a laptop, a tablet, a mobile phone, a notebook, a netbook, a smartphone, a wearable device, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the computing unit 110 including known, related art, and/or later developed technologies.
[0033] FIG. 1B illustrates an exemplary implementation of the non-invasive emotional state monitoring device 100, according to an embodiment of the present invention. In an exemplary embodiment of the present invention, the non-invasive emotional state monitoring device 100 may be attached to a collar band 114 of a canine 112. The device 100 may continuously capture the vocal sounds of the canine 112 using the microphone array 102 (as shown in FIG. 1A) and process the captured sounds using the processor 104 (as shown in FIG. 1A). The processor 104 may analyze the vocal sounds, extract relevant acoustic features, and determine the emotional state of the canine 112 using trained Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs).
[0034] The device 100 may be configured to wirelessly transmit the processed data and determined emotional state to the computing unit 110 (as shown in FIG. 1A). The computing unit 110 may be a laptop 110a, a smartphone 110n, or any other smart device associated with a user or a caretaker. The computing unit, such as the laptop 110a or the smartphone 110n, may receive the transmitted data and display the real-time emotional state of the canine 112 via a graphical user interface (not shown). The graphical user interface may visualize the emotional state using color-coded indicators, audio alerts, or textual notifications. Additionally, the interface may store historical emotional state data to track behavioral patterns and detect anomalies over time.
[0035] According to the embodiments of the present invention, the device 100 may prevent an attack or aggressive behavior by providing early detection of signs of agitation, stress, or aggression in the canine 112. The device 100 may generate preemptive alerts to the caretaker or user, allowing timely intervention before the canine 112 exhibits potentially dangerous behavior. This feature may be particularly beneficial in public spaces, training environments, or households with children, reducing the risk of unintended harm or conflicts. Embodiments of the present invention may be applicable to various species of animals apart from canines, with modifications in vocal analysis parameters tailored to the specific species. The device 100 may serve as a valuable tool for pet owners, veterinarians, and animal behavior researchers to gain insights into an animal’s emotional state and well-being in a non-invasive manner.
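The preemptive alerting described above can be sketched as a simple threshold check over the determined state probabilities. The set of agitation-linked states, the threshold value, and the return format are illustrative assumptions only:

```python
def check_alert(state_probs, threshold=0.7):
    """Raise a preemptive alert when agitation-linked emotional states
    dominate the classifier output (illustrative sketch; the threshold
    and the chosen risky states are assumptions, not disclosed values)."""
    risky = ("anxious", "aggressive")
    score = sum(state_probs.get(s, 0.0) for s in risky)
    return {"alert": score >= threshold, "score": round(score, 3)}
```

A caretaker's application could poll this check on each new classification and notify the user before the animal escalates.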
[0036] FIG. 2 depicts a flowchart of a method 200 for the non-invasive emotional state monitoring, according to an embodiment of the present invention.
[0037] At step 202, the device 100 may receive the captured vocal sounds.
[0038] At step 204, the device 100 may employ the Digital Signal Processing (DSP) techniques on the received vocal sounds.
[0039] At step 206, the device 100 may derive the Mel-Frequency Cepstral Coefficients (MFCCs) and the spectrograms from the received vocal sounds.
[0040] At step 208, the device 100 may map the derived Mel-Frequency Cepstral Coefficients (MFCCs) and spectrograms with the prestored dataset of coefficients and spectrograms.
[0041] At step 210, the device 100 may determine the emotional states of the animal based on the mapped Mel-Frequency Cepstral Coefficients (MFCCs) and the spectrograms.
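The ordering of steps 202 through 210 can be summarized as a small orchestration routine. The step callables passed in are hypothetical placeholders standing in for the DSP, feature-derivation, and mapping/determination stages described above:

```python
def monitor(samples, sr, dsp, derive, match):
    """Run the method 200 steps in order on captured samples.

    `dsp`, `derive`, and `match` are hypothetical callables standing in
    for steps 204, 206, and 208-210 respectively (illustrative only).
    """
    cleaned = dsp(samples)           # step 204: DSP techniques
    features = derive(cleaned, sr)   # step 206: MFCCs / spectrograms
    return match(features)           # steps 208-210: map and determine
```

The value returned is the determined emotional state, ready for transmission to the computing unit.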
[0042] While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it is to be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
[0043] This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined in the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
CLAIMS
I/We Claim:
1. A non-invasive emotional state monitoring device (100), the device (100) comprising:
a microphone array (102) configured to capture vocal sounds of an animal;
a processor (104) communicatively connected to the microphone array (102), characterized in that the processor (104) is configured to:
receive the captured vocal sounds from the microphone array (102);
employ Digital Signal Processing (DSP) techniques on the received vocal sounds;
derive Mel-Frequency Cepstral Coefficients (MFCCs) and spectrograms from the received vocal sounds;
map the derived Mel-Frequency Cepstral Coefficients (MFCCs) and spectrograms with a prestored dataset of coefficients and spectrograms; and
determine emotional states of the animal based on the mapped Mel-Frequency Cepstral Coefficients (MFCCs) and the spectrograms.
2. The device (100) as claimed in claim 1, wherein the microphone array (102) comprises an arrangement of directional microphones configured to isolate sounds based on spatial origin and reduce background noise.
3. The device (100) as claimed in claim 1, wherein the Digital Signal Processing (DSP) techniques are selected from noise filtering to remove unwanted environmental sounds, beamforming to enhance signals from targeted directions, source separation algorithms to distinguish between multiple simultaneous animal vocal sounds, or a combination thereof.
4. The device (100) as claimed in claim 1, wherein the processor (104) utilizes trained Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) models to determine the emotional states.
5. The device (100) as claimed in claim 1, wherein the emotional states are selected from a calm emotional state, a happy emotional state, an excited emotional state, an anxious emotional state, an aggressive emotional state, a hungry emotional state, or a combination thereof.
6. The device (100) as claimed in claim 1, wherein the processor (104) is configured to singulate the vocal sounds by differentiating between animals in a multi-animal environment.
7. The device (100) as claimed in claim 1, comprising a computing unit (110) adapted to display real-time emotional state insights of the animal.
8. The device (100) as claimed in claim 1, wherein the device (100) is portable and is attachable to an accessory of the animal to ensure minimal disruption to a behaviour of the animal.
9. The device (100) as claimed in claim 1, comprising a power supply unit (108) adapted to supply operational power to the processor (104).
10. A method (200) for non-invasive emotional state monitoring, the method (200) characterized by the steps of:
receiving captured vocal sounds of an animal from a microphone array (102);
employing Digital Signal Processing (DSP) techniques on the received vocal sounds;
deriving Mel-Frequency Cepstral Coefficients (MFCCs) and spectrograms from the received vocal sounds;
mapping the derived Mel-Frequency Cepstral Coefficients (MFCCs) and spectrograms with a prestored dataset of coefficients and spectrograms; and
determining emotional states of the animal based on the mapped Mel-Frequency Cepstral Coefficients (MFCCs) and the spectrograms.
Date: March 05, 2025
Place: Noida
Nainsi Rastogi
Patent Agent (IN/PA-2372)
Agent for the Applicant
| # | Name | Date |
|---|---|---|
| 1 | 202541019976-STATEMENT OF UNDERTAKING (FORM 3) [06-03-2025(online)].pdf | 2025-03-06 |
| 2 | 202541019976-REQUEST FOR EARLY PUBLICATION(FORM-9) [06-03-2025(online)].pdf | 2025-03-06 |
| 3 | 202541019976-POWER OF AUTHORITY [06-03-2025(online)].pdf | 2025-03-06 |
| 4 | 202541019976-OTHERS [06-03-2025(online)].pdf | 2025-03-06 |
| 5 | 202541019976-FORM-9 [06-03-2025(online)].pdf | 2025-03-06 |
| 6 | 202541019976-FORM FOR SMALL ENTITY(FORM-28) [06-03-2025(online)].pdf | 2025-03-06 |
| 7 | 202541019976-FORM 1 [06-03-2025(online)].pdf | 2025-03-06 |
| 8 | 202541019976-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [06-03-2025(online)].pdf | 2025-03-06 |
| 9 | 202541019976-EDUCATIONAL INSTITUTION(S) [06-03-2025(online)].pdf | 2025-03-06 |
| 10 | 202541019976-DRAWINGS [06-03-2025(online)].pdf | 2025-03-06 |
| 11 | 202541019976-DECLARATION OF INVENTORSHIP (FORM 5) [06-03-2025(online)].pdf | 2025-03-06 |
| 12 | 202541019976-COMPLETE SPECIFICATION [06-03-2025(online)].pdf | 2025-03-06 |
| 13 | 202541019976-Proof of Right [13-05-2025(online)].pdf | 2025-05-13 |