
A Controller For A Conversational System For Conflict Identification And Resolution And Method Thereof

Abstract: The controller 110 is configured to receive conversational input through the input means 142. The controller 110 processes the conversational input using a first dataset comprising user data, a user profile and conversation history stored in a memory element 106, through any one of a rule based model and a learning based model. The controller 110 provides conversational output, based on the determined context, through the output means 144. The controller 110 is characterized in that, to process the conversational input, the controller is configured to monitor, by an aggregator module 112, a second dataset in addition to the first dataset. The second dataset comprises environmental and situational data of the user. The controller 110 further identifies, by an analyzer module 114, a conflict and its severity based on the first dataset and the second dataset. The controller 110 further determines, by a planner module 116, a solution to avert the conflict. Figure 1


Patent Information

Application #
Filing Date
01 September 2023
Publication Number
10/2025
Publication Type
INA
Invention Field
MECHANICAL ENGINEERING
Status
Email
Parent Application

Applicants

Bosch Global Software Technologies Private Limited
123, Industrial Layout, Hosur Road, Koramangala, Bangalore – 560095, Karnataka, India
Robert Bosch GmbH
Postfach 30 02 20, D-70442 Stuttgart, Germany

Inventors

1. Khalpada Purvish
C/O Gordhan Vallabh, 2183, Shree Gokul Niwas, Near Statue of Gandhi, Aazad Chowk, Kapadwanj, Kheda, Gujarat – 387620, India
2. Karthikeyani Shanmuga Sundaram
3/58, AKG Nagar, Ponnalamman durai, Sethumadai(Po), Pollachi(Tk), Coimbatore – 642133, Tamilnadu, India
3. Swetha Shankar Ravisankar
Tower 4, 304 Salarpuria Sattva Cadenza Apartments, Near Nandi Toyota Office, Kudlu Gate Signal, Hosur Main Road, Bengaluru – 560068, Karnataka, India
4. Arvind Devarajan Sankruthi
P-207, Purva Bluemont, Trichy Road, Singanallur, Coimbatore – 641005, Tamilnadu, India

Specification

Description: Complete Specification
The following specification describes and ascertains the nature of this invention and the manner in which it is to be performed.
Field of the invention:
[0001] The present invention relates to a controller for a conversational system for conflict identification and resolution and method thereof.

Background of the invention:
[0002] A majority of the existing devices/systems today deliver command-driven conversations, especially in automobiles, homes or the like. For example, consider the following typical conversational scenario of such command-driven conversational systems.
User: I am feeling very hot.
System: Okay! AC temperature reduced to 21 degrees.

[0003] Such systems are more voice-controlled systems than conversational systems. A voice-controlled system translates physical controls like dials, buttons, etc. into vocal controls. Such translations, being very passive in participation, do not leverage the true potential of a conversational system.

[0004] Often, a multi-dimensional, multi-variable equation for each decision is not computed, especially if the decision seems perceivably small or complex. However, such decisions can be fatal. According to a 2014 survey carried out by the Ahmedabad police, 15% of fatal accidents on the highway were primarily due to wrong tire pressure. Similar data is known for other countries as well. A blinking light that only considers a preset pressure threshold is not sufficient, as it cannot consider the weather conditions, predict the speed of the vehicle, etc. Similarly, all the conversational systems that solely rely on the “blinking light sensor” are also not sufficient.

[0005] Imagine the user is returning home from the office on a Friday evening. Early the next morning, the user is going for a weekend getaway. The car shows the predicted range to be a little more than the distance to the destination. However, early in the morning, the fuel stations near the user’s home and along the route will be closed. Also, as it is a popular weekend getaway, there is usually high traffic at the approach point. Additionally, it is expected to rain during that time. Hence, the real range of the car will be less than what the car is showing. So, even though there is no traditional reason to blink the low fuel light, the personal companion will start a conversation about fueling with the user on Friday evening when approaching any fuel station.

[0006] Patent literature 4271/CHE/2012 discloses methods and systems for providing personalized and context-aware suggestions. The patent literature relates to methods and systems for providing personalized and context-aware suggestions to a user. The method includes providing a user profile. Further, the method includes establishing contextual information regarding the user. Thereafter, one or more suggestions are provided to the user based on the user profile and the contextual information. Subsequently, the user profile is modified based on the user feedback in response to the suggestion. The user profile may be modified using a machine learning algorithm executed on a processor in order to improve the quality of the personalized and context-aware suggestions. In certain embodiments, the personalized and context-aware suggestions can be provided while the user is in a vehicle or while the user is operating a vehicle.

Brief description of the accompanying drawings:
[0007] An embodiment of the disclosure is described with reference to the following accompanying drawings,
[0008] Fig. 1 illustrates a block diagram of a controller for a conversational system, according to an embodiment of the present invention;
[0009] Fig. 2 illustrates a block diagram of analyzer module and planner module of the conversational system, according to an embodiment of the present invention, and
[0010] Fig. 3 illustrates a method of operating the conversational system, according to the present invention.

Detailed description of the embodiments:
[0011] Fig. 1 illustrates a block diagram of a controller for a conversational system, according to an embodiment of the present invention. The conversational system 100 facilitates contextual conversation with a user 146. The conversational system 100 comprises the controller 110 interfaced with an input means 142 and an output means 144. The controller 110 is configured to receive conversational input through the input means 142. The controller 110 processes the conversational input using a first dataset comprising user data, a user profile and conversation history stored in a memory element 106, through any one of a rule based model and a learning based model. The controller 110 provides conversational output, based on the determined context, through the output means 144. The controller 110 is characterized in that, to process the conversational input, the controller 110 is configured to monitor or collect, by an aggregator module 112, a second dataset in addition to the first dataset. The second dataset comprises environmental and situational data of the user. The controller 110 further identifies, by an analyzer module 114, a conflict and its severity based on the first dataset and the second dataset. The controller 110 further determines, by a planner module 116, a solution to avert the conflict. The first dataset can be considered as primary data and the second dataset as secondary data for ease of understanding. Further, it is to be noted that Automatic Speech Recognition or Speech-to-Text conversion is also performed, but the same is not explained as it is state of the art.

[0012] The conversational input refers to a dialogue between two or more humans, between humans and the conversational system 100, or a query or a question to humans or the conversational system 100. Further, the input means is at least one of a microphone and a keyboard, either over a touch screen or a conventional keyboard.

[0013] The aggregator module 112 comprises a featurizer sub-module which extracts feature vectors from each of the first dataset and the second dataset (called featurization). The analyzer module 114 is configured to analyze the feature vectors derived/extracted from the monitored data and determine a potential conflict and its severity. The planner module 116 is configured to determine solutions to prevent the identified conflicts. The controller 110 also comprises a context module 118 configured to prompt a resolution to the user through the output means 144. The output means 144 is either a speaker or a display screen connected to the controller 110. Similarly, the input means 142 is either a microphone, a keyboard, a pointer or a touch screen. All the modules are based on Artificial Intelligence (AI) or Machine Learning (ML) based concepts, which are briefly explained below.
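For illustration only, the featurization described above can be sketched as follows; the snapshot fields, their units and the fixed feature layout are assumptions made for this example and are not part of the specification.

```python
# Minimal sketch of a featurizer sub-module of the aggregator (112).
# All names and the fixed feature layout are illustrative assumptions.
from dataclasses import dataclass
from typing import List


@dataclass
class Snapshot:
    """One monitored sample combining first-dataset and second-dataset values."""
    vehicle_speed_kmph: float      # built-in sensors (second dataset)
    tire_pressure_psi: float       # built-in sensors (second dataset)
    ambient_temp_c: float          # weather / environmental data (second dataset)
    fuel_level_pct: float          # built-in sensors (second dataset)
    user_drives_fast: bool         # learned profile trait (first dataset)


def featurize(snapshot: Snapshot) -> List[float]:
    """Flatten a snapshot into a numeric feature vector for the analyzer (114)."""
    return [
        snapshot.vehicle_speed_kmph,
        snapshot.tire_pressure_psi,
        snapshot.ambient_temp_c,
        snapshot.fuel_level_pct,
        1.0 if snapshot.user_drives_fast else 0.0,
    ]


if __name__ == "__main__":
    sample = Snapshot(118.0, 38.5, 41.0, 55.0, True)
    print(featurize(sample))  # [118.0, 38.5, 41.0, 55.0, 1.0]
```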

[0014] It is important to understand some aspects of Artificial Intelligence (AI) technology and AI based devices, which can be explained as follows. Depending on the architecture of the implementation, AI devices may include many components. One such component is an AI model or AI module. The AI model can be defined as a reference or an inference set of data, which uses different forms of correlation matrices. Using these AI models and the data from these AI models, correlations can be established between different types of data to arrive at some logical understanding of the data. A person skilled in the art would be aware of the different types of AI models such as linear regression, naïve Bayes classifiers, support vector machines, neural networks and the like. It must be understood that this disclosure is not specific to the type of model being executed and can be applied to any AI module irrespective of the AI model being executed. A person skilled in the art will also appreciate that the AI model may be implemented as a set of software instructions, a combination of software and hardware, or any combination of the same. The modules used in the present invention are AI modules.

[0015] Some of the typical tasks performed by AI systems are classification, clustering, regression, etc. A majority of classification tasks depend upon labeled datasets; that is, the datasets are labeled manually in order for a neural network to learn the correlation between labels and data. This is known as supervised learning. Some of the typical applications of classification are face recognition, object identification, gesture recognition, voice recognition, etc. In a regression task, the model is trained based on labeled datasets where the target labels are numeric values. Some of the typical applications of regression are weather forecasting, stock price prediction, house price estimation, energy consumption forecasting, etc. Clustering or grouping is the detection of similarities in the inputs. Clustering techniques do not require labels to detect similarities.

[0016] According to an embodiment of the present invention, the second dataset is selected for processing based on availability. The first dataset and the second dataset are selected from a group comprising facial expressions/emotions extracted from a camera 128, physiological parameters from a wearable device 130 worn by the user, external data comprising weather data, environmental parameters and traffic data from a map based service provider through an internet connected data source 132, conversation history 134, a calendar entry, a to-do list obtained from a smart device 136, a location through a satellite based navigation system 138, vehicle data from built-in sensors 140 of the vehicle, and the like. The first dataset and the second dataset are both categorizable into static data and dynamic data. The static data refers to information provided by the user which is fixed and changes infrequently, such as favorite places, songs, cuisines, restaurants, friends and weather. The dynamic data refers to information which keeps changing, such as the current location, temperature, emotion, expression, physiological parameters, vehicle parameters such as speed and fuel level, weather conditions, browsing history, etc. The user data is updated regularly as and when the user changes any preferences or performs any activity.
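As a hedged illustration of selecting the second dataset based on availability and of the static/dynamic categorization, the following sketch uses hypothetical data-source probes; the source names and returned values are assumptions, not the actual data sources 102.

```python
# Illustrative sketch only: selecting second-dataset sources by availability and
# tagging each reading as static or dynamic, as described in paragraph [0016].
from typing import Callable, Dict, Optional

# Hypothetical probes for the data sources; a real system would query the camera,
# wearable, internet service, navigation unit, vehicle bus, etc.
SOURCE_PROBES: Dict[str, Callable[[], Optional[dict]]] = {
    "camera_emotion": lambda: {"value": "calm", "kind": "dynamic"},
    "wearable_heart_rate": lambda: None,  # wearable not paired -> unavailable
    "weather_service": lambda: {"value": {"temp_c": 41, "rain": False}, "kind": "dynamic"},
    "favorite_places": lambda: {"value": ["Chamundi Hills"], "kind": "static"},
}


def collect_available(probes: Dict[str, Callable[[], Optional[dict]]]) -> Dict[str, dict]:
    """Keep only sources that actually return data (selection based on availability)."""
    collected = {}
    for name, probe in probes.items():
        reading = probe()
        if reading is not None:
            collected[name] = reading
    return collected


if __name__ == "__main__":
    for name, reading in collect_available(SOURCE_PROBES).items():
        print(f"{name}: {reading['kind']} -> {reading['value']}")
```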

[0017] In accordance with an embodiment of the present invention, the controller 110 is provided with the necessary signal detection, acquisition and processing circuits. The controller 110 comprises an input interface 104 and output interfaces 106 having pins or ports, the memory element 108 such as Random Access Memory (RAM) and/or Read Only Memory (ROM), an Analog-to-Digital Converter (ADC) and a Digital-to-Analog Converter (DAC), clocks, timers, counters and at least one processor (capable of implementing machine learning) connected with each other and to other components through communication bus channels. The memory element 108 is pre-stored with logics or instructions or programs or applications or modules/models and/or threshold values/ranges, reference values, predefined/predetermined criteria/conditions, which is/are accessed by the at least one processor as per the defined routines. The internal components of the controller 110 are not explained, being state of the art, and the same must not be understood in a limiting manner. The controller 110 may also comprise communication units such as transceivers to communicate through wireless or wired means such as Global System for Mobile Communications (GSM), 3G, 4G, 5G, Wi-Fi, Bluetooth, Ethernet, serial networks, and the like. The controller 110 is implementable in the form of a System-in-Package (SiP) or System-on-Chip (SoC) or any other known type. Examples of the controller 110 comprise, but are not limited to, a microcontroller, microprocessor, microcomputer, etc.

[0018] Further, the processor may be implemented as any or a combination of one or more microchips or integrated circuits interconnected using a parent board, hardwired logic, software stored in the memory element 108 and executed by a microprocessor, firmware, an application specific integrated circuit (ASIC), and/or a field programmable gate array (FPGA). The processor is configured to exchange and manage the processing of various AI models.

[0019] According to an embodiment of the present invention, the controller 110, through the analyzer module 114, is configured to apply common sense ontologies 122 and inference rules 124 to identify the conflict. Further, the controller 110, through the analyzer module 114, determines the severity of the conflict using causal chain analysis or root cause analysis.

[0020] According to an embodiment of the present invention, the controller 110 comprises a ranking module 126 to rank at least two or more solutions based on the monitored/collected second dataset or the feature vectors of the monitored data in real time.

[0021] According to an embodiment of the present invention, the controller 110 is part of at least one of an infotainment unit of the vehicle, a smartphone, a wearable device 130 and a cloud computer. In other words, the controller 110 is part of an internal device of the vehicle or part of an external device which is connected to the vehicle through known wired or wireless means as described earlier. The conversational device 100 is an infotainment unit of the vehicle, the smartphone, the wearable device 130, the cloud computer or a smart speaker. In case of the controller 110 being the cloud computer, a first controller 110 is in the cloud and a second controller 110 is in the vehicle or the device. The first controller 110 and the second controller 110 communicate, share the processing and perform the functions as described above. Alternatively, the controller 110 is in the cloud and communicates with the existing control unit of the device or vehicle through communication means as known in the art.

[0022] Fig. 2 illustrates a block diagram of analyzer module and planner module of the conversational device, according to an embodiment of the present invention. The analyzer module 114 and the planner module 116 are explained with the help of a working example provided below.

[0023] The analyzer module 114 (also known as a conflict prediction component) uses the common sense ontology and the set of inference rules 124 to predict and detect a potential conflict. For example, it uses inference rules 124 like “if you drive fast, your tire/tyre pressure increases”, “if we go to a place with significantly lower temperature, we may feel cold”, and so on to detect and predict a potential conflict. This conflict is then classified based on its severity and effect on the user. For example, a life-threatening conflict is given the highest priority and minor inconveniences take the lowest priority. The controller 110 uses the causal chains to derive the end outcome. If the controller 110 reaches an end such as an accident, death or health hazard in less than a threshold number of hops, the conflict is labelled to be life-threatening or highly severe. Consider that the user is driving a car at high speed and the external temperature is high, as acquired from the aggregator module 112. The analyzer module 114 receives the monitored data (or collected data) or the feature vectors of the monitored data in real time from the aggregator module 112. The analyzer module 114 continuously applies the Natural Language Processing (NLP) models, the common sense ontologies 122 and the set of inference rules 124 to the monitored data (or the feature vectors) to determine any potential conflict. The operation of the analyzer module 114 is explained through sub-figure 200. The analyzer module 114 determines from a first rule 202 that high speed can increase tire pressure, and from a second rule 204 that high temperature increases tire pressure. Thus, two sensory inputs, which are vehicle speed and external temperature, lead to a possible first conflict node 206 of high tire pressure. The analyzer module 114 further considers a third rule 208, that high tire pressure can burst the tires, which leads to a second conflict node 210 of a tire burst event. Still further, the analyzer module 114 considers a fourth rule 212, that a tire burst at high speed causes loss of steering control, which leads to a third conflict node 214 of a fatal accident or crash. The third conflict node 214 is determined to be adverse to the driver/user and is the final conflict. Since there is a threat to the life of the user/driver/occupants of the vehicle, the analyzer module 114 considers the severity of the final conflict to be high.
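The causal chaining of sub-figure 200 and the hop-counted severity labelling can be sketched, purely for illustration, as a small forward-chaining loop; the rule encoding, fact names and hop threshold below are assumptions, not the claimed implementation.

```python
# Hedged sketch of the analyzer's (114) forward chaining over inference rules (124)
# and hop-counted causal-chain severity, mirroring the tire-pressure example of Fig. 2.

# Each rule: (set of required facts, inferred fact), loosely following rules 202, 204, 208, 212.
RULES = [
    ({"high_speed"}, "high_tire_pressure"),
    ({"high_ambient_temperature"}, "high_tire_pressure"),
    ({"high_tire_pressure"}, "tire_burst"),
    ({"tire_burst", "high_speed"}, "loss_of_steering_control"),
    ({"loss_of_steering_control"}, "fatal_crash"),
]

FATAL_OUTCOMES = {"fatal_crash", "health_hazard"}
SEVERITY_HOP_THRESHOLD = 5  # assumed: fatal end reached within this many hops => "high"


def chain_conflicts(observed_facts):
    """Forward-chain the rules and record at which hop each new fact was derived."""
    facts = set(observed_facts)
    hop_of = {f: 0 for f in facts}
    changed = True
    while changed:
        changed = False
        for preconditions, conclusion in RULES:
            if preconditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                hop_of[conclusion] = 1 + max(hop_of[p] for p in preconditions)
                changed = True
    return hop_of


def severity(hop_of):
    """Label the conflict 'high' if a fatal outcome appears within the hop threshold."""
    for outcome in FATAL_OUTCOMES:
        if outcome in hop_of and hop_of[outcome] <= SEVERITY_HOP_THRESHOLD:
            return "high"
    return "low"


if __name__ == "__main__":
    hops = chain_conflicts({"high_speed", "high_ambient_temperature"})
    print(hops)            # 'fatal_crash' is derived after a short causal chain
    print(severity(hops))  # -> "high"
```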

[0024] The operation of the planner module 116 for searching for the solution is explained through sub-figure 222. Once the conflict is identified along with its priority, the planner module 116 uses planning to devise a plan to avoid the identified conflict. The planning traverses from the first conflict node 206 to the current state node and searches for a branched resolution that deviates the user’s state machine from running into the state of the conflict. The conflict is predicted as a path in a stateful machine. An advantage of the path traversal in the stateful machine is that the search performed by the planner module 116 is bounded, unlike traditional artificial intelligence algorithms. Hence, the planner module 116 searches for a “solution” that takes the user away from the predicted path of the conflict or dissatisfies the requirements of the conflict. In Fig. 2, the planner module 116 devises “solutions” such as: First solution: the user should drive slowly; Second solution: reduce the tire pressure.
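A minimal sketch of this bounded search is given below, assuming a toy action catalogue in which each action negates one fact on the predicted conflict path; the names and effects are illustrative only and not the patented planner.

```python
# Illustrative sketch of the planner (116): bounded search over the predicted
# conflict path, proposing actions that remove a fact the conflict depends on.

CONFLICT_PATH = ["high_speed", "high_ambient_temperature",
                 "high_tire_pressure", "tire_burst", "fatal_crash"]

# Hypothetical actions mapped to the fact each one would negate.
ACTIONS = {
    "ask_user_to_drive_slowly": "high_speed",
    "correct_tire_pressure_at_fuel_station": "high_tire_pressure",
    "switch_on_cabin_cooling": "cabin_too_hot",  # irrelevant here, filtered out below
}


def plan_solutions(conflict_path, actions):
    """Return actions whose effect removes a node on the conflict path.

    The search is bounded by the length of the path, not open-ended.
    """
    path_facts = set(conflict_path)
    return [name for name, negated_fact in actions.items() if negated_fact in path_facts]


if __name__ == "__main__":
    print(plan_solutions(CONFLICT_PATH, ACTIONS))
    # -> ['ask_user_to_drive_slowly', 'correct_tire_pressure_at_fuel_station']
```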

[0025] Both the solutions are ranked according to feasibility (effort count) and impact. Of the above two solutions, asking the user to drive slowly is ranked lower because the controller 110 of the conversational device 100 understands the user’s pattern of driving (acceptably) fast on the highway. Asking the user to change behavior demands/requires more active effort from the user and has reduced feasibility compared to stopping at a fuel station as per a fifth rule 216 and getting the tire pressure corrected as per a sixth rule 218. However, if there had not been any fuel station on the way or nearby, then the feasibility of the second solution would have decreased and that of the first solution would have increased, i.e. the user would have been requested to moderate the driving speed.
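The feasibility/impact ranking could, for instance, be approximated as below; the numeric effort and impact scores and the simple score formula are assumptions for the example, not the patented ranking module 126.

```python
# Sketch of ranking candidate solutions by feasibility (user effort, availability of
# a nearby fuel station) and impact; scoring weights are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Solution:
    name: str
    user_effort: float      # 0 (none) .. 1 (constant active effort by the user)
    impact: float           # 0 .. 1, how completely it breaks the conflict chain
    requires_fuel_station: bool


def rank(solutions, fuel_station_nearby: bool):
    """Higher score = preferred. Infeasible options (no fuel station) drop out."""
    feasible = [s for s in solutions
                if not s.requires_fuel_station or fuel_station_nearby]
    return sorted(feasible, key=lambda s: s.impact - s.user_effort, reverse=True)


if __name__ == "__main__":
    candidates = [
        Solution("drive_slowly", user_effort=0.8, impact=0.6, requires_fuel_station=False),
        Solution("correct_tire_pressure", user_effort=0.2, impact=0.9, requires_fuel_station=True),
    ]
    print([s.name for s in rank(candidates, fuel_station_nearby=True)])
    # -> ['correct_tire_pressure', 'drive_slowly']
    print([s.name for s in rank(candidates, fuel_station_nearby=False)])
    # -> ['drive_slowly']
```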

[0026] The planner module 116 sends the selected solution to the context module 118 (also known as the conversational context manager). The context module 118 evaluates the priority of the solution (or resolution prompt) against any ongoing conversation with the user. For example, if the fuel station is ten km away and the current conversation is about changing the route or about the user feeling drowsy, the context module 118 does not act on the solution. The context module 118 waits till the fuel station is nearby or the higher priority conversation is complete. However, if the user is not conversing, or is conversing about a low priority topic, the context module 118 stores the ongoing context and utterances in the memory element 108 and sends the new context and prompt to the conversational device 100. Following that, the controller 110 takes the new prompt and the new context, and already has the old context. The controller 110 then switches the conversational flow from the old context to the new context. The controller 110 alerts the user about the potential conflict and suggests a possible resolution. When the user agrees to a resolution, the controller 110 updates an action module 120 (or action manager), which then triggers the respective APIs, like temporarily changing navigation to the nearest fuel station, etc., through the output means 144.
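One way the priority gating and context switching of the context module 118 might look is sketched below; the numeric priority scale, topic names and class layout are assumptions made for illustration, not the described module itself.

```python
# Hedged sketch of the context module (118): interrupt only when the resolution
# prompt outranks the ongoing conversation, otherwise hold it back.
from collections import deque

PRIORITY = {"drowsiness": 3, "conflict_resolution": 2, "sightseeing_chat": 1, "none": 0}


class ContextModule:
    def __init__(self):
        self.current_topic = "none"
        self.suspended = deque()   # stored old contexts/utterances (memory element)
        self.pending = deque()     # resolutions waiting for a better moment

    def offer_resolution(self, prompt: str, topic: str = "conflict_resolution"):
        if PRIORITY[topic] > PRIORITY[self.current_topic]:
            # Store the ongoing context, then pivot to the resolution prompt.
            if self.current_topic != "none":
                self.suspended.append(self.current_topic)
            self.current_topic = topic
            return prompt                       # sent to the output means (144)
        self.pending.append((prompt, topic))    # wait for the right moment
        return None

    def conversation_finished(self):
        # Pivot back to the previous context once the resolution is handled.
        self.current_topic = self.suspended.pop() if self.suspended else "none"


if __name__ == "__main__":
    ctx = ContextModule()
    ctx.current_topic = "sightseeing_chat"
    print(ctx.offer_resolution("Fuel station ahead; let's correct the tire pressure."))
    ctx.conversation_finished()
    print(ctx.current_topic)  # back to 'sightseeing_chat'
```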

[0027] According to an embodiment of the present invention, the action module 120 is configured to perform the action based on the confirmation of the user, such as changing the navigation to the fuel station in the above example or controlling the speed of the vehicle to a lower speed, etc. The action may also be performed in the absence of the confirmation in order to avoid the conflict.

[0028] According to an embodiment of the present invention, the conversational system 100 is applicable not just as a digital companion in the vehicle, but also in the healthcare industry for patients, in homes, and for industrial workers for assisting in safety or in avoiding any hazards, etc., where the data is collected based on the availability of data sources 102 to identify the conflict and provide a resolution for the same.

[0029] In another example, the user is at home and conversing with a smart speaker, where a conflict is identified and resolved. In yet another example, the user is in a hospital wearing a health monitoring band with speaker functionality, and the conflict resolution is performed.

[0030] Fig. 3 illustrates a method of operating the conversational device, according to the present invention. The method comprises a plurality of steps, of which a step 302 comprises receiving/detecting the input from the user through the input means 142. A step 304 comprises processing the conversational input using the first dataset comprising user data, user profile and conversation history stored in the memory element 106, through any one of the rule based model and the learning based model. A step 306 comprises providing the conversational output corresponding to the conversational input through at least one output means 144. The step 304 of processing the conversational input by the controller 110 is characterized by a step 308 which comprises monitoring, by the controller 110, the second dataset from the aggregator module 112 in addition to the first dataset, the second dataset comprising the environmental data and the situational data of the user. The method also comprises extracting or deriving feature vectors of each type of data monitored from the respective data source 102. A step 310 comprises analyzing the monitored data (or the feature vectors of the collected data), through the analyzer module 114, for determining the potential conflict and the severity. A step 312 comprises planning, by the planner module 116, solutions to prevent the identified conflicts. A step 314 comprises prompting the resolution through the output means 144.
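Purely as an illustration of how steps 302 to 314 could be wired together, the following self-contained stub mirrors the flow of Fig. 3; the function bodies are assumptions, and the fixed thresholds are used only to keep the sketch short, whereas the invention adapts the decision to weather and driver behaviour rather than a prefixed range.

```python
# Minimal, self-contained sketch of the method flow of Fig. 3 (steps 302..314).
# Each stage is a stub standing in for the corresponding module; all logic here
# is an assumption for illustration, not the claimed method.

def monitor(first_dataset: dict, second_dataset: dict) -> dict:        # step 308, aggregator 112
    return {**first_dataset, **second_dataset}

def analyze(monitored: dict):                                          # step 310, analyzer 114
    conflict = (monitored.get("tire_pressure_psi", 0) > 36
                and monitored.get("speed_kmph", 0) > 100)
    return ("high_tire_pressure_at_speed", "high") if conflict else (None, "none")

def plan(conflict):                                                    # step 312, planner 116
    return "correct the tire pressure at the next fuel station" if conflict else None

def respond(user_text: str, first: dict, second: dict) -> str:         # steps 302..314
    conflict, sev = analyze(monitor(first, second))
    if conflict and sev == "high":
        return f"Before we continue: please {plan(conflict)}."          # step 314, prompt
    return f"(normal reply to: {user_text})"                            # step 306

if __name__ == "__main__":
    print(respond("Suggest places to visit in Mysore.",
                  {"drives_fast": True},
                  {"speed_kmph": 118, "tire_pressure_psi": 39}))
```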

[0031] According to the method, the second dataset is monitored from the respective data source 102 based on availability. The first dataset and the second dataset are selected from the group comprising the facial expressions/emotions extracted from the camera 122, physiological parameters from the wearable device 124 worn by the user, external data comprising weather data, environmental parameters and traffic data from the map based service provider through the internet connected data source 126, the conversation history 128, the calendar entry, the to-do list obtained from the smart device 130, the location through the satellite based navigation system 132, vehicle data from the built-in sensors 134 of the vehicle, and the like.

[0032] According to the present invention, the step 310 comprises sub-steps, of which a step 316 comprises applying the common sense ontologies 122 and the inference rules 124 for identifying the conflict. A step 318 comprises determining the severity of the conflict through causal chain analysis. Further, the step 312 also comprises a step 320 of ranking the at least two or more solutions based on the collected data (or features of the collected data).

[0033] According to the present invention, the working of the conversational system 100 is illustrated through the following sample dialogue between the user and the conversational system 100.

System 100: Hey! Sorry for the interruption. Before we move forward, a fuel station is coming up on the left side. We may get a fuel refill, but let’s also get the tire pressure corrected. The current tire pressure is above the recommended range for hot weather and high-speed driving. It can be dangerous for high-speed driving, especially on hot days.
User: Thanks for the tip. Let’s do that.
System 100: Okay! Showing you the navigation to the upcoming fuel station. Please get into the leftmost lane.

System 100: The tire pressure is good now. Let’s get back to places to visit in Mysore. Because you like foggy weather, you can visit Chamundi Hills tomorrow evening. You will like the fog and the evening light of Mysore.

[0034] Here, the controller 110 of the conversational system 100 notices, through the respective data sources 102, the high tire pressure and that the user is going to drive on an expressway. The controller 110 determines, using the user’s driving pattern and history or profile, that the user drives fast on expressways and highways, and also that driving fast with high tire pressure can be fatal, especially on hot days. So, as soon as the car is near the fuel station, where the user can alter the tire pressure, the controller 110 pivots the ongoing conversation towards prompting the user about the situation. Once the tire pressure is corrected, the controller 110 pivots the conversation back to the previous path. In the above example, the tire pressure could have been fine for cold weather or low speed city driving. The controller 110 does not trigger the conversation based only on a prefixed range for all situations. The decision is based on the environment (weather, terrain, etc.), the behavioral pattern of the user (high-speed driver, etc.) and the relation between them based on the current context.

[0035] According to an embodiment of the present invention, the conversational system 100 is preferably used in a vehicle to provide more convenience to the driver or passengers. The conversational system 100 may also be referred to as a digital companion or virtual companion, which is more than a digital assistant in the sense that the conversational system 100 is able to extract/derive and give more information for a detected or asked query. The conversational system 100 is able to start conversations about conflicts if there are no ongoing conversations, or can pivot an ongoing low priority conversation to discuss the resolution of a potential conflict.

[0036] According to the present invention, an environment aware, conflict avoidant personal companion is disclosed. The conversational device 100 is considered to be a companion rather than an assistant, as the controller 110 is enabled to determine conflicts in advance and provide a possible solution to avert the conflict. The controller 110 uses environmental context along with Artificial Intelligence (AI) and/or Machine Learning (ML) based models to understand user behavior, analyze the situation, predict the conflict, devise a plan to resolve the conflict and converse about it with the user. As discussed, a majority of the current conversational systems are more voice control than an active personal companion. Their participation is limited to traditional sensors and explicit triggers from the user. There is hardly a companion system that actively works to predict and avoid conflicts. The digital/virtual companion of the present invention learns the behavior and the causality of the conflicts/problems, predicts the unfavorable scenarios, and attempts to prevent such scenarios from arising. The conversational device 100 of the present invention is capable of understanding the situational and environmental context, understanding the user behavior and preferences, predicting the conflict, understanding the priority of the conflict, devising the solution to the problem, and pivoting the conversation to and from the conflict resolution.

[0037] According to the present invention, the controller 110 and method offer companionship. The controller 110 and method minimize the potential conflicts and maximize the well-being of the user. The aim of the present invention is conflict avoidance, or the well-being of the user, and such decisions may not fall in line with the user profile. For example, if the user is recovering, the controller 110 and method recommend “Satvik” restaurants (which offer Satvik food), which might not fit the user profile. If it is cold weather, or if there is a child passenger, the controller 110 and method might not recommend the desert restaurants at all, even though they might fall under the user profile. So, the primary differentiation is the methodology of going from sensors to suggestions. The present invention uses the conflict prediction and solution modules to drive the suggestions. In addition, the present invention prioritizes conflicts by severity, such that if something is life-threatening, the controller 110 and method prioritize it over something time-sensitive, like taking the next left as per the navigation.

[0038] It should be understood that the embodiments explained in the description above are only illustrative and do not limit the scope of this invention. Many such embodiments and other modifications and changes in the embodiment explained in the description are envisaged. The scope of the invention is only limited by the scope of the claims.
Claims:
We claim:
1. A controller (110) for a conversational system (100), said conversational system (100) facilitates contextual conversation with a user (146), said conversational system (100) comprises said controller (110) interfaced with an input means (120) and an output means (118), said controller (110) configured to,
receive conversational input through said input means (120),
process said conversational input using a first dataset comprising user data, user profile and conversation history stored in a memory element (106), through any one of a rule based model and a learning based model, and
provide conversational output, after said conversational input is processed, through said output means (144), characterized in that, to process conversational input, said controller configured to,
monitor, by an aggregator module, a second dataset in addition to said first dataset, said second dataset comprises environmental and situational data of said user;
identify, by an analyzer module, a conflict and severity based on said first dataset and said second dataset, and
determine, by a planner module, solution to avert said conflict.

2. The controller (110) as claimed in claim 1, wherein said second dataset is selected based on availability, said second dataset is selected from a group comprising a facial expression/emotions extracted from a camera (122), physiological parameter from a wearable device (124) worn by said user, external data comprising weather data, environmental parameters, traffic data from a map based service provider from an internet connected data source (126), conversation history (128), a calendar entry, a to-do list obtained from a smart device (130), a location through a satellite based navigation system (132), vehicle data from built-in sensors (134) of a vehicle, and the like.

3. The controller (110) as claimed in claim 1, wherein said analyzer module (114) configured to:
apply common sense ontologies and inference rules to identify said conflict, and
determine a severity of said identified conflict through causal chain analysis.

4. The controller (110) as claimed in claim 1 comprises a ranking module (126) to rank at least two or more solutions based on said collected data or feature vector of collected data.

5. The controller (110) as claimed in claim 1 is part of at least one of an infotainment unit of a vehicle, a smartphone, a wearable device, a cloud computer.

6. A method for operating a conversational system (100), said method comprising the steps of:
receiving conversational input from an input means (120),
processing said conversational input using a first dataset comprising user data, user profile and conversation history stored in a memory element (106), through any one of a rule based model and a learning based model, and
providing conversational output, based on said processed data, through an output means (118), said step of determining said context is characterized by,
monitoring, by an aggregator module (112), a second dataset in addition to said first dataset, said second dataset comprising environmental and situational data of said user;
identifying, by an analyzer module (114), a conflict and severity based on said first dataset and said second dataset, and
determining, by a planner module (116), solution to avert said conflict.

7. The method as claimed in claim 6, wherein said second dataset is selected based on availability, said second dataset is selected from a group comprising a facial expression/emotions extracted from a camera (122), physiological parameter from a wearable device (124) worn by said user, external data comprising weather data, environmental parameters, traffic data from a map based service provider from an internet connected data source (126), conversation history (128), a calendar entry, a to-do list obtained from a smart device (130), a location through a satellite based navigation system (132), vehicle data from built-in sensors (134) of a vehicle, and the like.

8. The method as claimed in claim 6, wherein analyzing said monitored data comprises,
applying common sense ontologies and inference rules for identifying said conflict, and
determining a severity of said identified conflict through causal chain analysis.

9. The method as claimed in claim 6 comprises ranking said at least two or more solutions based on said collected data.

10. The method as claimed in claim 6, wherein said controller (110) is part of at least one of an infotainment unit of a vehicle, a smartphone, a wearable device, a cloud computer.

Documents

Application Documents

# Name Date
1 202341058658-POWER OF AUTHORITY [01-09-2023(online)].pdf 2023-09-01
2 202341058658-FORM 1 [01-09-2023(online)].pdf 2023-09-01
3 202341058658-DRAWINGS [01-09-2023(online)].pdf 2023-09-01
4 202341058658-DECLARATION OF INVENTORSHIP (FORM 5) [01-09-2023(online)].pdf 2023-09-01
5 202341058658-COMPLETE SPECIFICATION [01-09-2023(online)].pdf 2023-09-01
6 202341058658-Power of Attorney [29-08-2024(online)].pdf 2024-08-29
7 202341058658-Form 1 (Submitted on date of filing) [29-08-2024(online)].pdf 2024-08-29
8 202341058658-Covering Letter [29-08-2024(online)].pdf 2024-08-29