
System and Method for Evaluating Cognitive and Emotional State of a User Using ML Model

Abstract: A system and method for evaluating the cognitive and emotional state of a user using a machine learning model is provided. The system 100 includes a user 102, a first data source 104, a second data source 106, and a server 112 that includes a first machine learning (ML) model 114 and a second ML model 116. The server 112 receives first input data and second input data from the first data source 104 and the second data source 106. The first machine learning model 114 interacts with the user and obtains a third data. The server 112 analyses the first, second, and third input data to (i) derive time periods when the user experiences difficult emotional states; (ii) determine one or more triggers; (iii) generate content using the second machine learning model; and (iv) determine a granular description of the cognitive and emotional state of the user using the second machine learning model. FIG. 1


Patent Information

Application #
Filing Date
10 June 2022
Publication Number
38/2023
Publication Type
INA
Invention Field
BIO-MEDICAL ENGINEERING
Status
Email
Parent Application
Patent Number
Legal Status
Grant Date
13 February 2025
Renewal Date

Applicants

BRAINSIGHT TECHNOLOGY PRIVATE LIMITED
No 22, 4th Floor, Hosur Road, Koramangala, 7th Block, Salarpuria Towers I, Bangalore Karnataka India 560095

Inventors

1. Rimjhim Agrawal
640, 14th Cross JP Nagar, 2nd Phase Bangalore Karnataka India 560078
2. Laina Emmanuel
204, Block 2, Sun city apartment Iblur Bengaluru Karnataka India 560102
3. Simran Rana
503, Silver Bill, Nyati enclave Mohammedwadi Pune Maharashtra India 411060

Specification

DESC:Technical Field
[0001] The embodiments herein generally relate to mental health assessment tools, and more particularly to a system and method for evaluating the cognitive and emotional state of a user based on one or more triggers that are determined from a plurality of data points using a machine learning model.
Description of the Related Art
[0002] Mental health assessment tools consist of a series of questions, interviews, and mental and physical examinations in a paper or digital format to provide an opportunity for clinicians to understand their patients better — their struggles, concerns, habits, and behaviors. The information collected through the use of assessment tools offers valuable insight into patient health and provides a starting point for diagnosis and treatment. It is necessary to design the assessment tools by considering all the factors that affect the person’s mental health conditions to obtain a better outcome.
[0003] Nowadays, many mental health assessment tools are available. They are based on at least one of biological, psychological, social, environmental, and demographic factors. Some existing mental health measuring tools use at least one of (i) data related to the “inside body” of the patients, such as neuroimaging (e.g. connectomics data, i.e. the structural and functional connectivity of brain networks); (ii) data related to the “outside body” of the patients, such as reports and symptoms by doctors, patients, and caretakers; behavior changes; digital phenotyping data (i.e. data from networked sensors), including symptoms picked up by digital sources, for example, sleep patterns, motion changes, falls, etc.; and environmental changes, including area, circumstances, traumatic incidents, and other stimulants and triggers; or (iii) both, for assessing mental health conditions. However, the existing mental health measuring tools do not provide a granular description of a person’s mental health conditions as they do not consider all the factors that affect those conditions.
[0004] One of the important factors that should be focused on for assessing mental health conditions is the “intersectionality and environmental triggers of a person”, as this factor affects a person’s conception of mental health issues. Examples of intersectionality factors include personality, cognitive ability, social skills, risk behaviours, motivation, trauma, gender, language, genetic vulnerability, temperament, and physical health. Examples of environmental triggers include separation from family, structure, parental health, communication, socioeconomic status, discrimination, social activities, connection to peers, housing, school environment, community supports, and neighbourhood. For example, a transgender black person with HIV in a developing country has a very different lived experience than a white cis person with depression in a developed country. Moreover, the environmental triggers that trigger a mental health episode in either of these are different. Hence, it is vital to include the factor “intersectionality and environmental triggers” in the assessment, especially to obtain a granular description of the person’s mental health conditions.
[0005] Moreover, it is essential to ask very subtle questions for understanding the intersectionality and triggers of the person, as adaptive emotional well-being strategies often require masking intersectionalities and triggers. Currently, the questions for understanding the intersectionality and triggers of the person are based on existing research “Bauer, G. R., & Scheim, A. I. (2019). Advancing quantitative intersectionality research methods: Intracategorical and intercategorical approaches to shared and differential constructs. Social Science & Medicine, 226, 260–262” and Spoon Theory developed by Christine Miserandino in 2003. However, the existing assessment tools with these questions have the following drawbacks: (i) they are paper-based and hence do not have sufficient ease of operation; (ii) they do not employ a graded method of ensuring that the person is comfortable revealing relevant information; and (iii) they do not sufficiently incentivize the person to answer questions.
[0006] Therefore, there arises a need to address the aforementioned technical drawbacks in the existing mental health evaluation tools in measuring a person’s mental health conditions at a granular level.
SUMMARY
[0007] In view of the foregoing, a system for evaluating the cognitive and emotional state of a user based on one or more triggers that are determined from a plurality of data points using a machine learning model is provided. The system includes a first data source that is configured to provide a first input data. The first input data includes at least one of a digital usage pattern, digital symptomatic data, or behavioral data. The system includes a second data source that is configured to provide a second input data. The second input data comprises data associated with at least one of the brain networks, molecular details, genetics details, cell details, physiological details of the user, neuroimaging data, neurotransmitter profile, or neuro-genetic profile of the user. The system includes a first machine learning model that is configured to provide a third input data, wherein the third input data comprises at least one of (i) responses for one or more contextual questions from a caregiver of the user; (ii) responses for pre-set patterns from the user; (iii) responses for one or more contextual questions from the user; or (iv) one or more first triggers. The system includes a server that acquires the first input data, the second input data, and the third input data of the user from the first data source, the second data source, and the first machine learning model and processes the first input data, the second input data, and the third input data using the first machine learning model. The server includes a memory that stores a database and a processor that is configured to execute the machine learning model and is configured to (i) derive time periods when the user experiences difficult emotional states by automatically analysing the third input data from the first machine learning model and the first input data from the first data source; (ii) determine one or more second triggers based on the time periods that are derived, wherein the one or more second triggers comprise at least one of psychiatric, neurological, or neurosurgical conditions; (iii) generate, using a second machine learning model, content associated with the one or more second triggers; and (iv) determine, using the second machine learning model, a granular description of the cognitive and emotional state of the user by correlating the content associated with the one or more second triggers to the second input data. The second machine learning model is trained by correlating historical contents and historical second triggers with historical second input data associated with historical users.
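Purely as an illustrative sketch, the four processor steps above can be laid out in Python. All names and the scoring, trigger, and content logic below are hypothetical placeholders for the trained ML models the specification describes, not an implementation of them.

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    periods: list       # time periods of difficult emotional states
    triggers: list      # second triggers determined from those periods
    content: dict       # generated content per trigger
    description: str    # granular description of the user's state

def evaluate(first_input, second_input, third_input):
    # (i) derive time periods of difficult emotional states
    #     (here: days whose response score falls below an assumed threshold)
    periods = [t for t, score in third_input.items() if score < 0.4]
    # (ii) determine second triggers from observations during those periods
    triggers = sorted({first_input.get(t, "unknown") for t in periods})
    # (iii) generate content per trigger (placeholder for the second ML model)
    content = {trig: f"coping guidance for {trig}" for trig in triggers}
    # (iv) correlate content with second input data into a granular description
    description = "; ".join(f"{trig}: linked to {second_input.get(trig, 'n/a')}"
                            for trig in triggers)
    return Evaluation(periods, triggers, content, description)
```

A run over toy inputs would yield, for example, `periods=["tue", "wed"]` when only those days score below the threshold.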
[0008] In some embodiments, the first machine learning model is configured to associate with a user device that comprises a caregiver interface and a user interface that is displayed with at least one of (i) a set of pre-set patterns, and (ii) a set of questions for a response by at least one of the user or the caregiver of the user, to obtain the third input data.
[0010] In some embodiments, the first machine learning model is configured to probe the caregiver on whether the user is going through a good emotional phase currently or a bad emotional phase by displaying the one or more contextual questions to the caregiver on the caregiver interface.
[0011] In some embodiments, the second machine learning model is trained to generate the content by identifying intersectionalities of one or more users for presenting the one or more contextual questions to the user; the intersectionalities are overlapping social identities that the one or more users possess, which influence the experiences of the one or more users.
[0012] In some embodiments, the first machine learning model is trained by correlating a historical set of contextual questions and a historical set of patterns with historical responses and historical users.
[0013] In some embodiments, the time periods when the user experiences the emotional difficulties are derived by calculating scores from the third input data.
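As a minimal sketch of such score-based derivation: responses are mapped to numeric values, and a period is flagged when its mean score falls below a threshold. The response vocabulary, mapping, and threshold below are assumptions for illustration, not values from the specification.

```python
# Hypothetical mapping from free-form responses to numeric scores.
RESPONSE_SCORES = {"good": 1.0, "okay": 0.5, "low": 0.2, "very low": 0.0}

def difficult_periods(responses_by_day, threshold=0.4):
    """Return the days whose average response score falls below threshold."""
    flagged = []
    for day, responses in responses_by_day.items():
        scores = [RESPONSE_SCORES.get(r, 0.5) for r in responses]
        if scores and sum(scores) / len(scores) < threshold:
            flagged.append(day)
    return flagged
```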
[0014] In one aspect, a method for evaluating the cognitive and emotional state of a user based on one or more triggers that are determined from a plurality of data points using a machine learning model is provided. The method includes receiving a first input data from a first data source by a server. The first input data includes at least one of a digital usage pattern, digital symptomatic data, or behavioral data. The method includes receiving a second input data from a second data source by the server. The second input data includes data associated with at least one of the brain networks, molecular details, genetics details, cell details, physiological details of the user, neuroimaging data, neurotransmitter profile, or neuro-genetic profile of the user. The method includes obtaining a third input data from a first machine learning model by the server. The third input data comprises at least one of (i) responses for one or more contextual questions from a caregiver of the user; (ii) responses for pre-set patterns from the user; (iii) responses for one or more contextual questions from the user; or (iv) one or more first triggers. The method includes deriving time periods when the user experiences difficult emotional states by automatically analyzing the third input data from the first machine learning model and the first input data from the first data source. The method includes determining one or more second triggers based on the time periods that are derived. The one or more second triggers include at least one of psychiatric, neurological, or neurosurgical conditions. The method includes generating content associated with the one or more second triggers using a second machine learning model. The method includes determining, using the second machine learning model, a granular description of the cognitive and emotional state of the user by correlating the content associated with the one or more second triggers to the second input data. The second machine learning model is trained by correlating historical contents and historical second triggers with historical second input data associated with historical users.
[0015] In some embodiments, the first machine learning model is configured to associate with a user device that comprises a caregiver interface and a user interface that is displayed with at least one of (i) a set of pre-set patterns, and (ii) a set of questions for a response by at least one of the user or the caregiver of the user, in order to obtain the third input data.
[0017] In some embodiments, the first machine learning model is configured to probe the caregiver on whether the user is going through a good emotional phase currently or a bad emotional phase by displaying the one or more contextual questions to the caregiver on the caregiver interface.
[0018] The system combines all the factors that affect the mental health conditions of the user, including intersectionality factors, “inside the body” factors (e.g. neuroimaging, neurotransmitter profile, and neuro-genetic profile), “outside the body” factors (e.g. digital symptoms including sleep patterns, falls, etc.), and the like, for measuring the mental health conditions of the user. Hence, it is possible to obtain the mental health conditions of the user at a granular level with this system. Further, the system is non-intrusive. With this system, mental health conditions are caught in time; hence, the system helps in clinical studies and trials for measuring the effect of drugs on the users (patients). For obtaining the data related to the intersectionality of the user, the system uses well-designed questions and artificial intelligence-based approaches that allow for contextual probing into situations and encourage the users to disclose their intersectionalities and triggers. Hence, the system is effective and reliable in measuring the mental health conditions of the users.
[0019] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
BRIEF DESCRIPTION OF THE DRAWINGS
[0020] The embodiments herein will be better understood from the following detailed description with reference to the drawings, in which:
[0021] FIG. 1 is a block diagram of a system to evaluate the cognitive and emotional state of a user based on one or more triggers that are determined from a plurality of data points using a first machine learning model according to some embodiments herein;
[0022] FIG. 2 is a block diagram of a server according to some embodiments herein;
[0023] FIG. 3 is a block diagram of a first machine learning model of FIG. 1 according to some embodiments herein;
[0024] FIG. 4 is an exemplary diagram of a user interface associated with a user device of FIG. 2 according to some embodiments herein;
[0025] FIG. 5 is a block diagram of a second machine learning model of FIG. 2 according to some embodiments herein;
[0026] FIG. 6 is a flow diagram that illustrates a method for evaluating the cognitive and emotional state of a user based on one or more triggers that are determined from a plurality of data points using a machine learning model according to some embodiments herein; and
[0027] FIG. 7 is a schematic diagram of a computer architecture in accordance with the embodiments herein.
DETAILED DESCRIPTION OF THE DRAWINGS
[0028] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0029] As mentioned, there is a need for mental health assessment tools to evaluate the cognitive and emotional state of a user based on one or more triggers that are determined from a plurality of data points using a machine learning model. Embodiments herein provide a system and method for evaluating the cognitive and emotional state of a user based on one or more triggers that are determined from a plurality of data points using a machine learning model. Referring now to the drawings, and more particularly to FIGS. 1 through 7, where similar reference characters denote corresponding features consistently throughout the figures, preferred embodiments are shown.
[0030] FIG. 1 is a block diagram of a system 100 to evaluate the cognitive and emotional state of a user based on one or more triggers that are determined from a plurality of data points using a machine learning model according to some embodiments herein. The system 100 includes a user 102, a first data source 104, a second data source 106, a user device 108, and a server 112. The server 112 includes a first machine learning model 114 and a second machine learning model 116.
[0031] The server 112 includes a device processor and a non-transitory computer-readable storage medium storing one or more sequences of instructions, which when executed by the device processor cause the evaluation of the cognitive and emotional state of a user. The server 112 may be a handheld device, a mobile phone, a Kindle, a Personal Digital Assistant (PDA), a tablet, a music player, a computer, an electronic notebook, or a Smartphone.
[0032] The server 112 is configured to connect with at least one of the first data source 104, the second data source 106, and the user device 108 through a network 110. The network 110 may be a wireless network, a wired network, or a combination of both. In some embodiments, the network 110 is the Internet.
[0033] The server 112 is further configured to receive a first input data from the first data source 104. The first data source 104 may be any personal device, sensor network, or digital source that provides the first input data associated with the user 102. The first input data may include at least one of symptomatic data of the user 102 from doctors, professionals, experts, or caregivers; self-reporting data; digital symptomatic data from digital sources, including sleep patterns, motion changes, falls, etc.; behavioral data, including energy, anxiety, etc.; data associated with speech or verbal changes; self-entries; notes; and diaries. The first input data may be any digital symptomatic data from digital sources. The first input data may be obtained with digital phenotyping. Digital phenotyping refers to the moment-by-moment, in situ quantification of the individual-level human phenotype using data from personal digital devices.
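One day of such digitally phenotyped first input data might be represented by a simple record. The field names below are illustrative assumptions rather than a structure defined by the embodiments.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FirstInputSample:
    """Hypothetical one-day record of first input data for a user."""
    date: str
    sleep_hours: Optional[float] = None   # sleep pattern from a wearable
    steps: Optional[int] = None           # motion changes
    fall_detected: bool = False           # digital symptomatic data
    self_report: Optional[str] = None     # self-entries, notes, diaries
```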
[0034] The server 112 further receives a second input data from the second data source 106. The second data source 106 may be an image processing system that processes a medical image taken using an imaging modality and provides the second input data. The medical image may include a brain image of the user 102. The brain image may be a whole-brain image or an image of sub-regions of the brain. The imaging modality may include at least one of a Magnetic Resonance Imaging (MRI) scan, including but not limited to T1-weighted imaging (T1w), T2-weighted imaging (T2w), T1c, Diffusion Weighted Imaging (DWI), Fluid-attenuated inversion recovery (FLAIR), etc., Computed Tomography (CT), Positron Emission Tomography (PET), or any other brain image capturing modality.
[0035] The image processing system may be a handheld device, a mobile phone, a Kindle, a Personal Digital Assistant (PDA), a tablet, a laptop, a music player, a computer, an electronic notebook, or a Smartphone. The second input data may be data associated with the brain of the user 102. The second input data may be data associated with a brain connectome of the user 102. The brain connectome is a comprehensive map of neural connections in the brain. The second input data may be data associated with the brain networks of the user 102. The second input data may be data associated with the structural and functional connectivity of the brain. The brain networks may include eloquent cortex mapping (ECM) networks like motor networks, visual networks, language networks, corticospinal networks, auditory networks, etc., resting-state networks (RSN), default mode networks (DMN), frontoparietal networks, etc.
[0036] The second data source 106 may be a modality that provides at least one of molecular, genetic, cell, and physiological details. The second data source 106 may be the Internet, a website, or any database from which the second input data is collected. The second input data may include data associated with at least one of the brain networks, molecular, genetic, cell, and physiological details of the user 102. The second input data may include neuroimaging data, a neurotransmitter profile, and a neuro-genetic profile of the user 102.
[0037] The user device 108 may be a handheld device, a mobile phone, a Kindle, a Personal Digital Assistant (PDA), a tablet, a laptop, a music player, a computer, an electronic notebook, or a Smartphone.
[0038] The first machine learning model 114 is configured to interact with the user 102 to obtain a third input data. The first machine learning model 114 may be an artificial intelligence (AI) driven contextual questioning interface in the server 112. The first machine learning model 114 includes a caregiver interface and a user interface that is presented with at least one of (i) a set of pre-set patterns, and (ii) a set of questions for a response by the user 102 or a caregiver of the user 102, in order to obtain the third input data. The questions may be contextual. The caregiver may be a doctor, clinician, medical professional, etc. The third input data comprises at least one of (i) responses for one or more contextual questions from a caregiver of the user 102; (ii) responses for pre-set patterns from the user 102; (iii) responses for one or more contextual questions from the user 102; or (iv) one or more first triggers. The one or more first triggers may be digital usage patterns, symptomatic data, or behavioral data. The third input data may be associated with intersectionalities and environmental triggers of the user 102.
[0039] The server 112 receives and analyses the first input data, the second input data, and the third input data of the user 102 from the first data source 104, the second data source 106, and the first machine learning model 114. The server 112 derives time periods when the user 102 experiences difficult emotional states by automatically analysing the third input data from the first machine learning model 114 and the first input data from the first data source. In some embodiments, the time periods when the user experiences the emotional difficulties are derived by calculating scores from the third input data. The time periods may also be derived by direct proxy markers like anxiety triggers, phone-using behavior, and energy calculation, or by answering direct or indirect questions on mood and emotion.
[0040] The server 112 determines one or more second triggers based on the time periods that are derived. The one or more second triggers may be associated with a medical condition, in particular with an emotional condition. The medical condition may be any one of psychiatric, neurological, or neurosurgical conditions.
[0041] The server 112 generates content associated with the one or more second triggers using the second machine learning model 116. The server 112 determines a granular description of the cognitive and emotional state of the user 102 by correlating the content associated with the one or more second triggers to the second input data using the second machine learning model 116. The second machine learning model 116 is trained by correlating historical contents and historical second triggers with historical second input data associated with historical users. In some embodiments, the system 100 determines a granular description of the cognitive and emotional state of the user 102 using a logical method or a statistical method.
[0042] FIG. 2 is a block diagram of the server 112 according to some embodiments herein. The server 112 includes a database 200, an input receiving module 202, the first machine learning model 114, a difficult emotional states time periods deriving module 204, a second triggers determining module 206, a content generating module 208, a granular description determining module 210, and the second machine learning model 116.
[0043] The input receiving module 202 receives a first input data from the first data source 104. The first input data includes at least one of a digital usage pattern, digital symptomatic data, or behavioral data. The input receiving module 202 receives a second input data from the second data source 106. The second input data includes data associated with at least one of the brain networks, molecular details, genetics details, cell details, physiological details of the user, neuroimaging data, neurotransmitter profile, or neuro-genetic profile of the user 102. The input receiving module 202 obtains a third input data from the first machine learning model 114. The third input data comprises at least one of (i) responses for one or more contextual questions from a caregiver of the user 102; (ii) responses for pre-set patterns from the user 102; (iii) responses for one or more contextual questions from the user 102; or (iv) one or more first triggers. The one or more first triggers may be digital usage patterns, symptomatic data, or behavioral data.
[0044] The difficult emotional states time periods deriving module 204 derives time periods when the user 102 experiences difficult emotional states by automatically analysing the third input data from the first machine learning model 114 and the first input data from the first data source. For example, the responses from the user to one or more contextual questions may be analyzed to identify the time periods when the user 102 may have experienced difficult emotional states. The user may be enabled to input pre-set patterns or markers that indicate difficult periods; for example, the user may mark certain dates or time periods as challenging or stressful.
[0045] The second triggers determining module 206 determines one or more second triggers based on the time periods that are derived. The one or more second triggers include at least one of psychiatric, neurological, or neurosurgical conditions. For example, if the user experiences significant disruptions in sleep patterns, then sleep disturbance can be a potential second trigger that indicates underlying psychiatric conditions like anxiety or mood disorders.
[0046] For example, if the user avoids social interactions or isolates themselves during certain time periods, then social withdrawal can be a second trigger. Social withdrawal can be linked to conditions such as depression or social anxiety disorder.
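One minimal way to picture the examples in paragraphs [0045] and [0046] is a rule table mapping observations within the derived time periods to candidate second triggers. The observation keys, thresholds, and trigger names below are hypothetical illustrations, not values taken from the specification.

```python
# Each rule: (observation key, condition on its value, candidate second trigger).
TRIGGER_RULES = [
    ("sleep_hours", lambda v: v < 5.0, "sleep disturbance"),
    ("social_interactions", lambda v: v == 0, "social withdrawal"),
]

def second_triggers(period_observations):
    """Map per-period observations to candidate second triggers."""
    triggers = set()
    for obs in period_observations:
        for key, condition, trigger in TRIGGER_RULES:
            if key in obs and condition(obs[key]):
                triggers.add(trigger)
    return sorted(triggers)
```

In practice the specification attributes this determination to the server's analysis rather than to fixed rules; the table simply makes the example concrete.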
[0047] The content generating module 208 generates content associated with the one or more second triggers using the second machine learning model 116. The content may provide relevant information, insights, or recommendations related to the one or more second triggers.
[0048] The second machine learning model 116 may generate informational content that provides an overview of the second triggers that explains symptoms, causes, potential treatments, and strategies related to the second triggers. The second machine learning model 116 may generate content that includes self-help techniques or strategies to deal with the second trigger. The self-help techniques or strategies may include mindfulness exercises, relaxation techniques, stress management methods, or specific behavioral changes.
[0049] The granular description determining module 210 determines a granular description of the cognitive and emotional state of the user 102 by correlating the content associated with the one or more second triggers to the second input data. For example, if the second input data includes neuroimaging data, such as fMRI scans, then the granular description may include how specific brain regions or networks are affected during difficult emotional states. If the second input data includes genetic or molecular details, then the granular description may include insights into the underlying biological mechanisms contributing to the user’s cognitive and emotional state.
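The correlation step can be pictured as joining trigger-keyed content with trigger-keyed findings from the second input data. Both dictionaries and the output format below are illustrative assumptions standing in for the second machine learning model.

```python
def granular_description(trigger_content, neuro_findings):
    """Join generated content with second-input findings per trigger."""
    lines = []
    for trigger, content in sorted(trigger_content.items()):
        finding = neuro_findings.get(trigger)
        if finding:
            # A finding from the second input data refines the description.
            lines.append(f"{trigger}: {content} (observed: {finding})")
        else:
            lines.append(f"{trigger}: {content}")
    return "\n".join(lines)
```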
[0050] The second machine learning model 116 is trained by correlating historical contents and historical second triggers with historical second input data associated with historical users.
[0051] In some embodiments, the first machine learning model 114 probes the caregiver on whether a ward is going through a good phase currently or a bad phase by presenting the one or more contextual questions to the caregiver on the caregiver interface. The first machine learning model 114 further receives responses for the one or more contextual questions from the caregiver of the user 102 and stores them in a database. The first machine learning model 114 may further provide information to the caregivers on how to best take care of the ward on the caregiver interface. The good phase and the bad phase may be an indicator of the mood of the user 102.
[0052] In some embodiments, the first machine learning model 114 receives a response for the pre-set patterns on the patient interface from the user 102 and stores them in the database. The pre-set patterns may indicate a low mood in the user 102. The pre-set patterns may be pre-programmed based on the disorder.
[0053] In some embodiments, the first machine learning model 114 displays one or more contextual questions to the user 102 on the patient interface. The first machine learning model 114 may use a standard question module to present the one or more contextual questions to the user 102 on the patient interface. The one or more contextual questions to the user 102 may be based on intersectionalities of the people and taken from interviewing patterns of the most empathetic interviewers and guided mediators. The intersectionalities are overlapping social identities that one or more users possess, which influence the experiences of the one or more users. The one or more contextual questions to the user 102 may be based on spoon theory and associated with environmental triggers.
[0054] In some embodiments, the first machine learning model 114 uses artificial intelligence techniques to present one or more contextual questions to the user 102 around intersectionality and environmental triggers. The first machine learning model 114 may use a machine learning model that is trained with content that helps in understanding the intersectionalities of the people for presenting one or more contextual questions to the user 102. The first machine learning model 114 is trained by correlating a historical set of contextual questions and a historical set of patterns with historical responses and historical users. The first machine learning model 114 may pop up the one or more intersectionality-based questions to the user 102 using the machine learning model when the user 102 is going through the bad phase. The first machine learning model 114 further receives responses for the one or more contextual questions from the user 102 and stores them in the database.
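A sketch of such intersectionality-aware question selection: questions are tagged with intersectionality factors, and those matching the user's declared factors surface first during a bad phase. The question bank, factor tags, and phase gating below are invented for illustration; the specification's model is trained on historical questions, patterns, and responses rather than a fixed table.

```python
# Hypothetical bank: (required intersectionality factors, question text).
QUESTION_BANK = [
    ({"caregiver_role"}, "Has caring for someone else left you time for rest?"),
    ({"chronic_illness"}, "How many 'spoons' did today's tasks use up?"),
    (set(), "How has your energy been this week?"),  # generic fallback
]

def select_questions(user_factors, in_bad_phase, limit=2):
    """Surface questions whose factor tags match the user, only in a bad phase."""
    if not in_bad_phase:
        return []
    matched = [q for tags, q in QUESTION_BANK if tags <= user_factors]
    return matched[:limit]
```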
[0055] The first machine learning model 114 is configured to combine (i) the responses to the one or more contextual questions from the caregiver of the user 102; (ii) the responses to the pre-set patterns from the user 102; and (iii) the responses to the one or more contextual questions from the user 102 to obtain the third input data. The third input data may include information on one or more first triggers. The one or more first triggers may be associated with the intersectionality of the user 102. The triggers may include thoughts, feelings, actions, and the like.
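The combination described in paragraph [0055] can be sketched in code. The following is a minimal illustrative sketch, not the patented implementation: the class name, field names, and the keyword-matching rule for flagging first triggers are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class ThirdInputData:
    """Hypothetical container for the three combined response sets."""
    caregiver_responses: Dict[str, str]
    pattern_responses: Dict[str, str]
    user_responses: Dict[str, str]
    first_triggers: List[str] = field(default_factory=list)

def combine_responses(
    caregiver: Dict[str, str],
    patterns: Dict[str, str],
    user: Dict[str, str],
    trigger_keywords: Tuple[str, ...] = ("thought", "feeling", "action"),
) -> ThirdInputData:
    """Merge the three response sets into one record and flag any answer
    mentioning a trigger keyword as a candidate first trigger (a stand-in
    heuristic; the source does not specify how triggers are extracted)."""
    merged = {**caregiver, **patterns, **user}
    triggers = [
        answer for answer in merged.values()
        if any(k in answer.lower() for k in trigger_keywords)
    ]
    return ThirdInputData(caregiver, patterns, user, triggers)
```

In practice the trigger extraction would itself be a learned component; the keyword scan above only marks where that logic plugs in.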
[0056] FIG. 3 is a block diagram of the first machine learning model 114 of FIG. 1 according to some embodiments herein. The first machine learning model 114 includes a pre-set patterns module 302, a set of questions to a response module 304, a contextual questions module 306, a display module 308, a caregiver interface 308A, and a user interface module 308B.
[0057] The pre-set patterns module 302 includes a set of pre-set patterns, and the set of questions to a response module 304 includes a set of questions to a response. The pre-set patterns may be based on the disorder.
[0058] The contextual questions module 306 includes one or more contextual questions. The caregiver interface 308A and the user interface module 308B of the display module 308 display the set of pre-set patterns and the set of questions through the user device 108. The pre-set patterns module 302 displays the pre-set patterns to the user 102 on the user interface 308B. The response to the pre-set patterns is received on the user interface 308B from the user 102 and stored in the database 200. The pre-set patterns may be indicative of a mood in the user 102.
[0059] The contextual questions module 306 displays one or more contextual questions to the caregiver on the caregiver interface 308A to probe whether the ward is currently going through a good phase or a bad phase. The caregiver may be provided with information on how to best take care of the ward on the caregiver interface 308A. The response from the caregiver of the user 102 to the one or more contextual questions displayed to the caregiver is received and stored in the database 200.
[0060] The contextual questions module 306 displays one or more contextual questions to the user 102 on the user interface 308B. The contextual questions module 306 may be a standard question module. The one or more contextual questions may be based on intersectionalities of people and drawn from the interviewing patterns of the most empathetic interviewers and guided mediators. The one or more contextual questions may also be based on spoon theory and associated with one or more environmental triggers.
[0061] The contextual questions module 306 uses the second machine learning model 116 to present one or more contextual questions to the user 102 around intersectionality and environmental triggers. The second machine learning model 116 may be trained with content that helps in understanding the intersectionalities of people in order to present the one or more contextual questions around intersectionality and environmental triggers to the user 102.
[0062] The content used for training the second machine learning model 116 may be based on generalizing from many people to an individual. For example, the model may learn, across 100 people from a disempowered community ABC, that belonging to ABC leads to denial of health services. When the 101st person from the community ABC comes in, there will be content related to how that person can deal with this disempowerment and take care of their mental health at the same time. The content used for training the second machine learning model 116 may also be based on generalizing from an individual to many people. For example, if a person has put in data for a year, and the most important trigger for a mental health episode for them appears to be "when they are pushed over, due to XYZ", that insight will be used to understand intersectionalities for the entire community.
[0063] The first machine learning model 114 is trained with responses from the user 102 to the one or more contextual questions displayed to the user 102 and stored in the database 200. The first machine learning model 114 is trained with responses obtained by combining (i) the responses to the one or more contextual questions from the caregiver of the user 102; (ii) the responses to the pre-set patterns from the user 102; and (iii) the responses to the one or more contextual questions from the user 102 to obtain a third input data. The third input data may be associated with intersectionalities and environmental triggers of the user 102. The third input data may include information on one or more first triggers. The one or more first triggers may be associated with the intersectionality of the user 102. The triggers may include thoughts, feelings, actions, and the like.
[0064] FIG. 4 is an exemplary diagram of a user interface 408B associated with the user device 108 of FIG. 2 according to some embodiments herein. The patient interface 212 includes a button 402, a questions display provision 404, one or more intersectionality-based questions 406A, 406B, and one or more options buttons 408A, 408B. The button 402 enables the user 102 to select the pre-set patterns. The questions display provision 404 is presented with one or more contextual questions by the second question module 216. In some exemplary embodiments, the second question module 216 pops up the one or more intersectionality-based questions 406A, 406B to the user 102 using the second machine learning model 116 when the user 102 is going through a bad phase. The options buttons 408A and 408B enable the user 102 to respond to the one or more intersectionality-based questions 406A and 406B. The options may be based on common intersectionality triggers, which are improved upon through artificial intelligence.
[0065] FIG. 5 is a block diagram of the second machine learning model 116 of FIG. 2 according to some embodiments herein. The second machine learning model 116 includes historical contents 502, historical second triggers 504, a historical second input data module 506, historical users, and historical intersectionalities 510.
[0066] The second machine learning model 116 is trained by correlating historical contents and historical second triggers with historical second input data associated with historical users. The second machine learning model 116 is trained to generate the content by identifying intersectionalities of one or more users for presenting one or more contextual questions to the user. The intersectionalities are overlapping social identities that one or more users possess, which influence the experiences of the one or more users.
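The training scheme of paragraph [0066] — correlating historical content and triggers with historical user data — can be illustrated with a deliberately simple stand-in. This is a toy frequency-table sketch, not the actual second machine learning model 116; the class name, the `(trigger, profile)` keying, and the most-frequent-content rule are assumptions made for illustration only.

```python
from collections import Counter, defaultdict
from typing import Dict, Iterable, Optional, Tuple

class ContentRecommenderSketch:
    """Toy stand-in for the second ML model: remembers which content item
    historically accompanied each (trigger, user-profile) pair and replays
    the most frequent one. A real model would generalize across profiles."""

    def __init__(self) -> None:
        self._counts: Dict[Tuple[str, str], Counter] = defaultdict(Counter)

    def train(self, records: Iterable[Tuple[str, str, str]]) -> None:
        # records: (historical trigger, historical user profile, content id)
        for trigger, profile, content in records:
            self._counts[(trigger, profile)][content] += 1

    def generate_content(self, trigger: str, profile: str) -> Optional[str]:
        counts = self._counts.get((trigger, profile))
        return counts.most_common(1)[0][0] if counts else None
```

The lookup table makes the correlation step explicit; in the described system, a learned model would replace it so that content generalizes to unseen trigger/profile combinations.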
[0067] FIG. 6 is a flow diagram that illustrates a method for evaluating the cognitive and emotional state of a user based on one or more triggers that are determined from a plurality of data points using the first machine learning model 114 according to some embodiments herein. At step 602, the method includes receiving, by a server, a first input data from the first data source 104. The first input data includes at least one digital usage pattern, digital symptomatic data, or behavioral data.
[0068] At step 604, the method includes receiving, by the server, a second input data from the second data source 106. The second input data includes data associated with at least one of the brain networks, molecular details, genetics details, cell details, physiological details of the user, neuroimaging data, neurotransmitter profile, or neuro-genetic profile of the user.
[0069] At step 606, the method includes obtaining a third input data from a first machine learning model by the server. The third input data comprises at least one of (i) responses for one or more contextual questions from a caregiver of the user 102; (ii) response for pre-set patterns from the user 102; (iii) responses for one or more contextual questions from the user 102, or (iv) one or more first triggers.
[0070] At step 608, the method includes deriving time periods when the user 102 experiences difficult emotional states by automatically analysing the third input data from the first machine learning model 114 and the first input data from the first data source.
[0071] At step 610, the method includes determining one or more second triggers based on the time periods that are derived. The one or more second triggers include at least one of psychiatric, neurological, or neurosurgical conditions.
[0072] At step 612, the method includes generating a content associated with the one or more second triggers using the second machine learning model 116.
[0073] At step 614, the method includes determining, using the second machine learning model 116, a granular description of the cognitive and emotional state of the user 102 by correlating the content associated with the one or more second triggers to the second input data. The second machine learning model 116 is trained by correlating historical contents and historical second triggers with historical second input data associated with historical users.
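Steps 602 through 614 form a linear pipeline, which can be summarized as a single orchestrating function. This is a structural sketch only: each stage is passed in as a callable, since the source does not specify how any stage is implemented, and all parameter names here are assumptions.

```python
from typing import Any, Callable

def evaluate_state(
    first_input: Any,    # step 602: digital usage / symptomatic / behavioral data
    second_input: Any,   # step 604: neuroimaging, genetics, physiological data
    third_input: Any,    # step 606: combined caregiver/user responses
    derive_periods: Callable[[Any, Any], Any],
    determine_triggers: Callable[[Any], Any],
    generate_content: Callable[[Any], Any],
    describe_state: Callable[[Any, Any], Any],
) -> Any:
    """Orchestrate the FIG. 6 flow: each callable stands in for one stage."""
    periods = derive_periods(third_input, first_input)   # step 608
    triggers = determine_triggers(periods)               # step 610
    content = generate_content(triggers)                 # step 612
    return describe_state(content, second_input)         # step 614
```

Writing the flow this way makes the data dependencies explicit: the second input data is consumed only at the final correlation step, while the first and third inputs drive the period and trigger derivation.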
[0074] In some embodiments, the first machine learning model is configured to associate with a user device that comprises a caregiver interface and a user interface that is displayed with at least one of (i) a set of pre-set patterns, and (ii) a set of questions to a response by at least one of the user or the caregiver of the user in order to obtain the third input data.
[0076] In some embodiments, the first machine learning model is configured to probe the caregiver on whether the user is going through a good emotional phase currently or a bad emotional phase by displaying one or more contextual questions to the caregiver on the caregiver interface.
[0077] In some embodiments, the second machine learning model is trained to generate the content by identifying intersectionalities of one or more users for presenting one or more contextual questions to the user, wherein the intersectionalities are overlapping social identities that one or more users possess, which influence the experiences of the one or more users.
[0078] In some embodiments, the first machine learning model is trained by correlating a historical set of contextual questions, a historical set of patterns, with historical responses, and historical users.
[0079] In some embodiments, the time periods when the user experiences emotional difficulties are derived by calculating scores from the third input data.
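The score-based derivation of difficult time periods can be sketched as a thresholding pass over per-day scores. This is an illustrative assumption: the source does not define the scoring scale, the threshold value, or the period granularity, so the `[0, 1]` scores, the `0.6` cut-off, and day-level keys below are all hypothetical.

```python
from typing import Dict, List, Tuple

def difficult_periods(
    daily_scores: Dict[int, float], threshold: float = 0.6
) -> List[Tuple[int, int]]:
    """Given per-day difficulty scores computed from the third input data,
    return contiguous runs of days whose score exceeds the threshold,
    as (first_day, last_day) pairs."""
    periods: List[Tuple[int, int]] = []
    run: List[int] = []
    for day in sorted(daily_scores):
        if daily_scores[day] > threshold:
            run.append(day)
        elif run:
            periods.append((run[0], run[-1]))
            run = []
    if run:  # close a run that extends to the last recorded day
        periods.append((run[0], run[-1]))
    return periods
```

Grouping adjacent high-score days into runs, rather than reporting days individually, matches the notion of "time periods" in the claim language.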
[0080] The method receives symptoms of various neurological-psychiatric illnesses/disorders and reports them to the caregiver and psychiatrist/medical practitioner. The method identifies high risk by taking in this multitude of data, such as self-reports, digital phenotype, answers to the questions, behavioral patterns, phone usage patterns, and energy patterns. The method identifies and reports the triggers using the input data and also helps in notifying the caregiver. The method understands the pathway of the disorder and recommends content based on it to the caregiver and the user to help them better manage the disorder. The method helps to understand the profile of the patient and helps clinicians/medical practitioners and their assistants by being a recommendation system.
[0081] The method ingests other medical data, such as imaging, blood, genetics, clinical scores or scales, and psychiatric scores or scales, and performs the computing accordingly to provide recommendations, suggest triggers, or predict condition changes. The method identifies possible targets in the brain or identifies symptoms for treatment and therapies. The method helps individuals with neuro-psychiatric conditions, or high-risk individuals, to manage their condition better or to seek help by identifying details from their data. The method identifies potential high risk for movement disorder, dementia disorder, and other such illnesses using the digital phenotype and other data. The method takes outside-the-body parameters and inside-the-body parameters to create the profile of the users for personalized, individualized treatment/management and recommendations/assistance. The method may capture the umwelt of the user.
[0082] A representative hardware environment for practicing the embodiments herein is depicted in FIG. 7, with reference to FIGS. 1 through 6. This schematic drawing illustrates a hardware configuration of a server 110/computer system/computing device in accordance with the embodiments herein. The system includes at least one processing device CPU 10 that may be interconnected via system bus 15 to various devices such as a random-access memory (RAM) 12, read-only memory (ROM) 16, and an input/output (I/O) adapter 18. The I/O adapter 18 can connect to peripheral devices, such as disk units 58 and program storage devices 50 that are readable by the system. The system can read the inventive instructions on the program storage devices 50 and follow these instructions to execute the methodology of the embodiments herein. The system further includes a user interface adapter 22 that connects a keyboard 28, mouse 50, speaker 52, microphone 55, and/or other user interface devices such as a touch screen device (not shown) to the bus 15 to gather user input. Additionally, a communication adapter 20 connects the bus 15 to a data processing network 52, and a display adapter 25 connects the bus 15 to a display device 26, which provides a graphical user interface (GUI) 56 of the output data in accordance with the embodiments herein, or which may be embodied as an output device such as a monitor, printer, or transmitter, for example.
[0083] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the scope.
CLAIMS:
I/We Claim:

1. A system (100) for evaluating cognitive and emotional state of a user based on one or more triggers that are determined from a plurality of data points using a machine learning model, the system comprising:
a first data source (104) that is configured to provide a first input data, wherein the first input data comprises at least one digital usage pattern, digital symptomatic data, or behavioral data;
a second data source (106) that is configured to provide a second input data, wherein the second input data comprises data associated with at least one of brain networks, molecular details, genetics details, cell details, and physiological details of the user (102), neuroimaging data, neurotransmitter profile, or neuro-genetic profile of the user (102);
characterized in that,
a first machine learning model (114) that is configured to provide a third input data, wherein the third input data comprises at least one of (i) responses for one or more contextual questions from a caregiver of the user (102); (ii) response for pre-set patterns from the user (102); (iii) responses for one or more contextual questions from the user (102), or (iv) one or more first triggers;
a server (112) that acquires the first input data, the second input data, and the third input data of the user (102) from the first data source (104), the second data source (106), and the first machine learning model (114), and processes the first input data, the second input data, and the third input data using a machine learning model (212), wherein the server (112) comprises:
a memory that stores a database;
a processor that is configured to execute the machine learning model (110) and is configured to,
derive time periods when the user (102) experiences difficult emotional states by automatically analysing the third input data from the first machine learning model (114) and the first input data from the first data source (104);
determine one or more second triggers based on the time periods that are derived, wherein the one or more second triggers comprise at least one of psychiatric, neurological, or neurosurgical conditions;
generate, using a second machine learning model (116), a content associated with the one or more second triggers; and
determine, using the second machine learning model (116), a granular description of cognitive and emotional state of the user (102) by correlating the content associated with the one or more second triggers to the second input data, wherein the second machine learning model (116) is trained by correlating historical contents, historical second triggers with historical second input data associated with historical users.


2. The system as claimed in claim 1, wherein the first machine learning model (114) is configured to associate with a user device (108) that comprises a caregiver interface and a user interface that is displayed with at least one of (i) a set of pre-set patterns, and (ii) a set of questions to a response by at least one of the user (102) or the caregiver of the user (102) to obtain the third input data.


3. The system as claimed in claim 1, wherein the first machine learning model (114) is configured to probe the caregiver on whether the user (102) is going through a good emotional phase currently or a bad emotional phase by displaying the one or more contextual questions to the caregiver on the caregiver interface.


4. The system as claimed in claim 1, wherein the second machine learning model (116) is trained to generate the content by identifying intersectionalities of one or more users for showing up the one or more contextual questions to the user (102), wherein the intersectionalities are overlapping social identities that the one or more users possess, which influence experiences of the one or more users.

5. The system as claimed in claim 1, wherein the first machine learning model (114) is trained by correlating a historical set of contextual questions, a historical set of patterns, with historical responses, and historical users.

6. The system as claimed in claim 1, wherein the time periods when the user (102) experiences the emotional difficulties are derived by calculating scores from the third input data.

7. A method for evaluating cognitive and emotional state of a user based on one or more triggers that are determined from a plurality of data points using a first machine learning model (114), the method comprising:
receiving, by a server, a first input data from a first data source (104), wherein the first input data comprises at least one digital usage pattern, digital symptomatic data, or behavioral data;
receiving, by the server, a second input data from a second data source (106), wherein the second input data comprises data associated with at least one of brain networks, molecular details, genetics details, cell details, physiological details of the user (102), neuroimaging data, neurotransmitter profile, or neuro-genetic profile of the user (102);
obtaining, by the server, a third input data from a first machine learning model (114), wherein the third input data comprises at least one of (i) responses for one or more contextual questions from a caregiver of the user (102); (ii) response for pre-set patterns from the user (102); (iii) responses for one or more contextual questions from the user (102), or (iv) one or more first triggers;
deriving time periods when the user (102) experiences difficult emotional states by automatically analysing the third input data from the first machine learning model (114) and the first input data from the first data source (104);
determining one or more second triggers based on the time periods that are derived, wherein the one or more second triggers comprise at least one of psychiatric, neurological, or neurosurgical conditions;
generating, using a second machine learning model (116), a content associated with the one or more second triggers; and
determining, using the second machine learning model (116), a granular description of cognitive and emotional state of the user (102) by correlating the content associated with the one or more second triggers to the second input data, wherein the second machine learning model (116) is trained by correlating historical contents, historical second triggers with historical second input data associated with historical users.

8. The method as claimed in claim 7, wherein the first machine learning model (114) is configured to associate with a user device (108) that comprises a caregiver interface and a user interface that is displayed with at least one of (i) a set of pre-set patterns, and (ii) a set of questions to a response by at least one of the user (102) or the caregiver of the user (102) to obtain the third input data.


9. The method as claimed in claim 7, wherein the first machine learning model (114) is configured to probe the caregiver on whether the user (102) is going through a good emotional phase currently or a bad emotional phase by displaying the one or more contextual questions to the caregiver on the caregiver interface.


10. The method as claimed in claim 7, wherein the second machine learning model (116) is trained to generate the content by identifying intersectionalities of one or more users for showing up the one or more contextual questions to the user (102), wherein the intersectionalities are overlapping social identities that the one or more users possess, which influence experiences of the one or more users.

Dated this 9th June, 2023
Signature of the Patent Agent

(ARJUN KARTHIK BALA)
IN/PA-1021
Agent for Applicant.

Documents

Application Documents

# Name Date
1 202241033455-STATEMENT OF UNDERTAKING (FORM 3) [10-06-2022(online)].pdf 2022-06-10
2 202241033455-PROVISIONAL SPECIFICATION [10-06-2022(online)].pdf 2022-06-10
3 202241033455-PROOF OF RIGHT [10-06-2022(online)].pdf 2022-06-10
4 202241033455-POWER OF AUTHORITY [10-06-2022(online)].pdf 2022-06-10
5 202241033455-FORM FOR STARTUP [10-06-2022(online)].pdf 2022-06-10
6 202241033455-FORM FOR SMALL ENTITY(FORM-28) [10-06-2022(online)].pdf 2022-06-10
7 202241033455-FORM 1 [10-06-2022(online)].pdf 2022-06-10
8 202241033455-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [10-06-2022(online)].pdf 2022-06-10
9 202241033455-EVIDENCE FOR REGISTRATION UNDER SSI [10-06-2022(online)].pdf 2022-06-10
10 202241033455-DRAWINGS [10-06-2022(online)].pdf 2022-06-10
11 202241033455-DRAWING [09-06-2023(online)].pdf 2023-06-09
12 202241033455-CORRESPONDENCE-OTHERS [09-06-2023(online)].pdf 2023-06-09
13 202241033455-COMPLETE SPECIFICATION [09-06-2023(online)].pdf 2023-06-09
14 202241033455-Request Letter-Correspondence [14-07-2023(online)].pdf 2023-07-14
15 202241033455-Power of Attorney [14-07-2023(online)].pdf 2023-07-14
16 202241033455-FORM28 [14-07-2023(online)].pdf 2023-07-14
17 202241033455-Form 1 (Submitted on date of filing) [14-07-2023(online)].pdf 2023-07-14
18 202241033455-Covering Letter [14-07-2023(online)].pdf 2023-07-14
19 202241033455-FORM-9 [21-09-2023(online)].pdf 2023-09-21
20 202241033455-STARTUP [26-09-2023(online)].pdf 2023-09-26
21 202241033455-FORM28 [26-09-2023(online)].pdf 2023-09-26
22 202241033455-FORM 18A [26-09-2023(online)].pdf 2023-09-26
23 202241033455-FER.pdf 2023-10-06
24 202241033455-FORM 3 [15-12-2023(online)].pdf 2023-12-15
25 202241033455-FORM 3 [27-12-2023(online)].pdf 2023-12-27
26 202241033455-FORM 3 [25-01-2024(online)].pdf 2024-01-25
27 202241033455-OTHERS [05-04-2024(online)].pdf 2024-04-05
28 202241033455-FER_SER_REPLY [05-04-2024(online)].pdf 2024-04-05
29 202241033455-DRAWING [05-04-2024(online)].pdf 2024-04-05
30 202241033455-CORRESPONDENCE [05-04-2024(online)].pdf 2024-04-05
31 202241033455-COMPLETE SPECIFICATION [05-04-2024(online)].pdf 2024-04-05
32 202241033455-CLAIMS [05-04-2024(online)].pdf 2024-04-05
33 202241033455-ABSTRACT [05-04-2024(online)].pdf 2024-04-05
34 202241033455-US(14)-HearingNotice-(HearingDate-17-01-2025).pdf 2024-12-19
35 202241033455-Correspondence to notify the Controller [11-01-2025(online)].pdf 2025-01-11
36 202241033455-Annexure [11-01-2025(online)].pdf 2025-01-11
37 202241033455-FORM-26 [30-01-2025(online)].pdf 2025-01-30
38 202241033455-Written submissions and relevant documents [31-01-2025(online)].pdf 2025-01-31
39 202241033455-Proof of Right [31-01-2025(online)].pdf 2025-01-31
40 202241033455-POA [31-01-2025(online)].pdf 2025-01-31
41 202241033455-MARKED COPIES OF AMENDEMENTS [31-01-2025(online)].pdf 2025-01-31
42 202241033455-FORM 13 [31-01-2025(online)].pdf 2025-01-31
43 202241033455-AMMENDED DOCUMENTS [31-01-2025(online)].pdf 2025-01-31
44 202241033455-Response to office action [12-02-2025(online)].pdf 2025-02-12
45 202241033455-PatentCertificate13-02-2025.pdf 2025-02-13
46 202241033455-IntimationOfGrant13-02-2025.pdf 2025-02-13

Search Strategy

1 SearchHistory202241033455E_05-10-2023.pdf

ERegister / Renewals

3rd: 22 Apr 2025

From 10/06/2024 - To 10/06/2025

4th: 22 Apr 2025

From 10/06/2025 - To 10/06/2026