Abstract: Disclosed herein is an artificial intelligence (AI)-driven, atmosphere-centric and mood-based audio content recommendation system and method thereof (100) that comprises a biometric acquisition unit (102) configured to capture physiological signals such as heart rate and blood perfusion patterns via a fingerprint sensor integrated into an electronic device or wearable device, a voice input unit (104) configured to capture voice commands from the user through one or more integrated or external microphones, and a processing unit (106), wherein the processing unit (106) comprises a weather monitoring module (108) adapted to identify real-time environmental weather parameters through artificial intelligence-based weather data acquisition algorithms, a mood and atmosphere fusion module (110), an audio content selection module (112), a calendar integration module (114), a voice recognition module (116), a content playback control module (118), and a voice response module (120). The system further comprises a communication network (122), a display unit (124), a speaker unit (126), and a cloud database (128).
Description:
FIELD OF DISCLOSURE
[0001] The present disclosure generally relates to audio recommendation systems and, more specifically, to an artificial intelligence (AI)-driven, atmosphere-centric and mood-based audio content recommendation system and method thereof.
BACKGROUND OF THE DISCLOSURE
[0002] One of the key innovations of the system is its capacity to deliver audio content dynamically as a function of real-time atmospheric conditions. By drawing on global positioning system (GPS) information, the time of day, and meteorological forecast data (whether it is sunny, rainy, cloudy, or stormy), the system performs weather analysis to ascertain the prevailing environmental conditions. It uses this information to select and play audio material contextually appropriate to the mood. For instance, in rainy or cloudy weather, the system might play soothing instrumental music, narrative content, or reflective podcasts to resonate with the user's probable emotional state. On bright days, it might play cheerful, energetic music to lift spirits. This ambient synchronization is powered by artificial intelligence (AI) modules that assess past and current weather patterns alongside user behavior to guarantee that the chosen audio augments the user experience. This functionality is especially suited to users who wish to synchronize their sensory environment with external phenomena, thus creating harmony between their emotional and environmental states. In contrast to traditional recommendation systems that recommend content based only on listening history or genre preference, this system adds a dynamic level of personalization grounded in meteorological reality, rendering the listening experience more immersive, mood-congruent, and situationally attuned.
[0003] A second unique characteristic of this system is its ability to change audio playback in real time according to the emotional state of the user. Contrary to other systems that rely on manual input or pre-designed mood profiles, this system uses biometric information (heart rate variability, skin temperature, and galvanic skin response) gathered via electronic fingerprint sensors or connected wearables. These physiological signals are analysed by an artificial intelligence (AI)-powered mood identification module, which correlates biometric changes to particular emotional states, such as stress, calmness, excitement, or tiredness. After the mood is detected, the system chooses audio content from its pool that corresponds with, or attempts to have a positive effect on, the user's mood. For instance, if stress signals are picked up, the system could play calming instrumental or guided meditation material, while energetic tracks could be played during times of detected happiness or energy. Furthermore, the system adjusts playback in real time according to ongoing mood tracking, so content remains contextually appropriate even when the user's state has altered. This mood-sensitive architecture provides a richly personalized aspect to the listening experience, encouraging emotional health through custom audio interaction. Whereas mood recommender systems merely recommend playlists, this system automatically starts and adapts playback without requiring user intervention. The innovation here is not just the accuracy of emotional sensing using passive sensors but also the smooth incorporation of that information into dynamic audio feedback, creating a continuously mood-sensitive audio environment.
[0004] A further novel ability of the system is to incorporate calendar events and cultural celebrations into the audio content selection process, supporting socially and temporally appropriate playback. The system retrieves a synchronized digital calendar, either user-defined or system-wide, to determine notable personal dates, national holidays, religious celebrations, and internationally recognized observances. It also identifies time-sensitive patterns such as weekends, mornings, or evenings. A calendar integration module matches this temporal data with an event-sensitive content repository of music, tales, and podcasts specially prepared for different occasions. It can play classical music, religious content, or occasion-based storytelling associated with the mood of the celebration. Additionally, the module accommodates regional calendars and diverse user profiles across cultures, making it inclusive and relevant. By automating the selection of audio content for celebratory or solemn occasions, the system imparts emotional and social value to audio playback, providing an enriched, meaningful listening experience. Users are freed from the burden of searching for event-specific content, as the system itself curates and starts playing it automatically. This capability converts the system from a passive media player into an engaging cultural companion that participates in and elevates the user's temporal experience throughout the year.
[0005] Current music recommendation systems are mostly based on user preferences, past listening records, and collaborative filtering. These systems examine past listening trends and demographic data and make suggestions on the basis of patterns observed over time. They mostly do not consider dynamic environmental conditions such as weather, which have a significant effect on a user's mood and musical taste. Weather-based recommendations are uncommon in music, and where attempted, they have been generic and not personalized enough to appeal to users in real time. One of the biggest drawbacks of existing systems is that they cannot adjust recommendations according to weather, such as sunny or rainy conditions, which might be more in sync with the mood of the user and improve the listening experience. The absence of context-based adjustment makes music recommendations seem out of touch with atmospheric reality, leaving scope for systems that better incorporate real-time environmental information for more immersive and personalized music experiences.
[0006] Mood-based music recommendation systems have emerged, using biometric data such as heart rate and facial recognition in combination with machine learning algorithms to determine emotional states. Some provide mood-based playlists, attempting to pair music with a user's emotional state. These systems are limited in accuracy, with static mood detection that is incapable of tracking real-time mood shifts. Although some systems attempt to read emotional signals from users, the complexity and variability of human emotion are not always accurately captured. In addition, mood-based suggestions tend to be overly broad and not always representative of the nuances of the user's unique emotional status. The resulting playlists are often generated from history, resulting in decreased relevance in the current moment. This static method does not offer music matched to the user's current feeling, making it less engaging and less enjoyable. The largest drawback of existing systems is that they cannot dynamically adjust music recommendations as moods shift, resulting in music that might not reflect the user's actual real-time emotions.
[0007] Music recommendation systems that try to incorporate calendar events or celebrations usually concentrate on creating broad, pre-defined playlists according to holiday or seasonal themes. Such playlists, though convenient, tend to be non-personalized and do not reflect the user's specific cultural heritage or individual tastes. Systems that provide generic holiday music or celebration playlists do so without considering personal or regional differences in the observance of festivals. For instance, such systems can provide the same collection of songs for widely acknowledged holidays, but they do not take into account the variety of cultural practices or the particular importance of a holiday to the user. The limitations of such an approach are evident: the recommended music can be out of context with the user's cultural background or individual celebrations. Additionally, such systems commonly provide static recommendations without dynamically adapting based on user interaction or unique calendar entries. The outcome is a generic and sometimes inapplicable experience devoid of the substance and relevance necessary to add emotional value to particular celebrations. Such systems also tend not to identify niche or local celebrations that may be significant to users, losing the chance to customize the audio experience for particular calendar events, whether cultural or personal.
[0008] Thus, in light of the above discussion, there exists a need for an artificial intelligence (AI)-driven, atmosphere-centric and mood-based audio content recommendation system and method thereof.
SUMMARY OF THE DISCLOSURE
[0009] The following is a summary description of illustrative embodiments of the invention. It is provided as a preface to assist those skilled in the art to more rapidly assimilate the detailed design discussion which ensues and is not intended in any way to limit the scope of the claims which are appended hereto in order to particularly point out the invention.
[0010] According to illustrative embodiments, the present disclosure focuses on an artificial intelligence (AI)-driven, atmosphere-centric and mood-based audio content recommendation system that overcomes the above-mentioned disadvantages or provides users with a useful or commercial choice.
[0011] An objective of the present disclosure is to enable live audio playback to dynamically change depending on existing atmospheric conditions like temperature, humidity, rain, or sunlight.
[0012] Another objective of the present disclosure is to sense and understand the user's present mood using biometric or behavioral data, allowing for mood-based playback of audio content without any input from the user.
[0013] Another objective of the present disclosure is to use calendar information and cultural background to suggest and play music, podcasts, or tales matching upcoming festivals, events, or personal celebrations.
[0014] Another objective of the present disclosure is to provide a personalized and engaging audio experience by combining weather, mood, and calendar information into one recommendation engine.
[0015] Another objective of the present disclosure is to transcend the static and generic character of traditional music recommendation systems by providing dynamic, context-sensitive playback that adapts to the user's surroundings and emotional life.
[0016] Another objective of the present disclosure is to provide a wide range of audio content, such as music, stories, and podcasts, to ensure rich content diversity for different user situations.
[0017] Another objective of the present disclosure is to promote user well-being and interest by matching audio content with environmental and psychological context, enhancing emotional connection and listener satisfaction.
[0018] Another objective of the present disclosure is to enable manual override and personalization features so users can modify the recommendations based on their individual needs or preferences.
[0019] Another objective of the present disclosure is to build a continuously learning recommendation engine that updates and improves based on user feedback, seasonal trends and user habits for better future forecasting.
[0020] Yet another objective of the present disclosure is to make it culturally and geographically relevant by acknowledging local holidays, user-created events, and geographical location information to improve contextual synchronization of audio playback.
[0021] In light of the above, in one aspect of the present disclosure, an artificial intelligence (AI)-driven, atmosphere-centric and mood-based audio content recommendation system is disclosed herein. The system comprises a biometric acquisition unit configured to capture physiological signals such as heart rate and blood perfusion patterns via a fingerprint sensor integrated into an electronic device or wearable device. The system includes a voice input unit connected to the biometric acquisition unit and configured to capture voice commands from the user through one or more integrated or external microphones. The system also includes a processing unit connected to the biometric acquisition unit and the voice input unit and being configured to process mood data and atmospheric data. The processing unit comprises a weather monitoring module adapted to identify real-time environmental weather parameters through artificial intelligence-based weather data acquisition algorithms. The processing unit includes a mood and atmosphere fusion module adapted to merge the real-time mood information, weather information, and calendar event information into a context-oriented emotional profile. The processing unit also includes an audio content selection module configured to select and prioritize audio content such as music, podcasts, and stories from a content database according to the created emotional profile. The processing unit also includes a calendar integration module adapted to retrieve details of festivals, events, and celebrations from the calendar database of a user. The processing unit also includes a voice recognition module configured to detect wake words, transcribe user voice into text, and identify intended commands. The processing unit also includes a content playback control module configured to manage play, pause, skip, and volume functions and automatically play the selected audio content. The processing unit also includes a voice response module configured to generate and output natural language audio responses using a text-to-speech engine. The system also includes a communication network connected to the processing unit and being configured to support real-time retrieval and transmission of audio content metadata, weather information, and calendar information. The system also includes a display unit connected to the processing unit and being configured to show information about the audio content currently playing, volume levels, playback controls, and system alerts. The system also includes a speaker unit connected to the processing unit and being configured to output both the selected audio content and the generated voice responses to the user. The system also includes a cloud database connected to the communication network and being configured to hold user mood profiles, weather history, calendar events, and metadata of audio content for adaptive learning and personalization.
[0022] In one embodiment, the biometric acquisition unit is configured to obtain physiological measures such as heart rate variability, skin conductivity, and facial expression data through the use of photoplethysmography (PPG), galvanic skin response (GSR), and computer vision modules.
[0023] In one embodiment, the system further comprises a privacy and consent management module configured to provide user-specific consent controls, manage permission-based access to physiological, environmental, and calendar data, and implement real-time anonymization for regulatory compliance and ethical data handling.
[0024] In one embodiment, the voice input unit is configured to pick up emotional indicators via changes in vocal tone, pitch, and speech tempo.
[0025] In one embodiment, the weather monitoring module within the processing unit has an artificial intelligence (AI)-based pattern analysis engine that relates environmental sensor data to historical weather patterns to anticipate atmospheric transitions and pre-emptively adjust audio content.
[0026] In one embodiment, the mood and atmosphere fusion module within the processing unit uses a weighted decision fusion method integrating mood probabilities, environmental information, and temporal events to derive a composite context measure that controls the selection of content.
[0027] In one embodiment, the audio content selection module within the processing unit employs a neural network model trained on a multi-label dataset to map audio content to metadata tags such as weather type, mood category, and event type for the best match.
[0028] In one embodiment, the calendar integration module within the processing unit is functionally linked to a geolocation engine that screens calendar events according to regional, cultural, and linguistic preferences, thus adding cultural specificity to content playback.
[0029] In one embodiment, the content playback control module within the processing unit dynamically modifies audio output parameters such as tempo, rhythm, and volume according to the detected mood and environmental changes to ensure emotional synchrony.
[0030] In light of the above, in another aspect of the present disclosure, a method for an artificial intelligence (AI)-driven, atmosphere-centric and mood-based audio content recommendation system is disclosed herein. The method comprises receiving user-specific input data via a biometric acquisition unit and a voice input unit. The method includes determining the user's emotional or mood state based on said physiological data via a processing unit connected to the biometric acquisition unit and the voice input unit. The method includes detecting real-time atmospheric conditions using artificial intelligence (AI)-based environmental data sources via a weather monitoring module. The method also includes fusing the detected mood data, real-time atmospheric data, and calendar event data into a composite emotional-environmental context via a mood and atmosphere fusion module. The method also includes selecting corresponding audio content comprising music, podcasts, and stories based on the composite emotional-environmental context via an audio content selection module. The method also includes retrieving calendar event data, including festivals and celebrations, via a calendar integration module. The method also includes processing voice commands and identifying user intent via a voice recognition module. The method also includes controlling the playback of selected audio content by managing play, pause, skip, and volume functions via a content playback control module. The method also includes delivering auditory system feedback and verbal responses to user commands via a voice response module. The method also includes facilitating real-time data exchange between system components and external services via a communication network connected to the processing unit. The method also includes displaying playback information, user prompts, and system status via a display unit connected to the processing unit. The method also includes delivering audio content and voice feedback to the user via a speaker unit connected to the processing unit. The method also includes storing user preferences, audio content metadata, and interaction history for personalization via a cloud database connected to the communication network.
[0031] These and other advantages will be apparent from the present application of the embodiments described herein.
[0032] The preceding is a simplified summary to provide an understanding of some embodiments of the present invention. This summary is neither an extensive nor exhaustive overview of the present invention and its various embodiments. The summary presents selected concepts of the embodiments of the present invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the present invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
[0033] These elements, together with the other aspects of the present disclosure and various features are pointed out with particularity in the claims annexed hereto and form a part of the present disclosure. For a better understanding of the present disclosure, its operating advantages, and the specified object attained by its uses, reference should be made to the accompanying drawings and descriptive matter in which there are illustrated exemplary embodiments of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
[0034] To describe the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the following briefly describes the accompanying drawings required for describing the embodiments or the prior art. Apparently, the accompanying drawings in the following description merely show some embodiments of the present disclosure, and a person of ordinary skill in the art can derive other implementations from these accompanying drawings without creative efforts. All of the embodiments or the implementations shall fall within the protection scope of the present disclosure.
[0035] The advantages and features of the present disclosure will become better understood with reference to the following detailed description taken in conjunction with the accompanying drawing, in which:
[0036] FIG. 1 illustrates a block diagram of an artificial intelligence (AI)-driven, atmosphere-centric and mood-based audio content recommendation system in accordance with an exemplary embodiment of the present disclosure;
[0037] FIG. 2 illustrates a flowchart of a method of an artificial intelligence (AI)-driven, atmosphere-centric and mood-based audio content recommendation system, outlining the sequential steps, in accordance with an exemplary embodiment of the present disclosure;
[0038] FIG. 3 illustrates a flowchart of a method of an artificial intelligence (AI)-driven, atmosphere-centric and mood-based audio content recommendation system, in accordance with an exemplary embodiment of the present disclosure; and
[0039] FIG. 4 illustrates an architectural flow diagram of the atmosphere-centric and mood-based audio content recommendation system in accordance with an exemplary embodiment of the present disclosure.
[0040] Like reference numerals refer to like parts throughout the description of the several views of the drawings.
[0041] The atmosphere-centric and mood-based audio content recommendation system is illustrated in the accompanying drawings, in which like reference numerals indicate corresponding parts in the various figures. It should be noted that the accompanying figures are intended to present illustrations of exemplary embodiments of the present disclosure. These figures are not intended to limit the scope of the present disclosure. It should also be noted that the accompanying figures are not necessarily drawn to scale.
DETAILED DESCRIPTION OF THE DISCLOSURE
[0042] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure.
[0043] In the following description, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be apparent to one skilled in the art that embodiments of the present disclosure may be practiced without some of these specific details.
[0044] Various terms as used herein are shown below. To the extent a term is used, it should be given the broadest definition persons in the pertinent art have given that term as reflected in printed publications and issued patents at the time of filing.
[0045] The terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items.
[0046] The terms “having”, “comprising”, “including”, and variations thereof signify the presence of a component.
[0047] Reference is now made to FIG. 1 to FIG. 4 to describe various exemplary embodiments of the present disclosure. FIG. 1 illustrates a block diagram of an artificial intelligence (AI)-driven, atmosphere-centric and mood-based audio content recommendation system in accordance with an exemplary embodiment of the present disclosure.
[0048] The system 100 may include a biometric acquisition unit 102 configured to capture physiological signals such as heart rate and blood perfusion patterns via a fingerprint sensor integrated into an electronic device or wearable device, a voice input unit 104 connected to the biometric acquisition unit 102 and configured to capture voice commands from the user through one or more integrated or external microphones, a processing unit 106 connected to the biometric acquisition unit 102 and the voice input unit 104 and being configured to process mood data and atmospheric data, a weather monitoring module 108 adapted to identify real-time environmental weather parameters through artificial intelligence-based weather data acquisition algorithms, a mood and atmosphere fusion module 110 adapted to merge the real-time mood information, weather information, and calendar event information into a context-oriented emotional profile, an audio content selection module 112 configured to select and prioritize audio content such as music, podcasts, and stories from a content database according to the created emotional profile, a calendar integration module 114 adapted to retrieve details of festivals, events, and celebrations from the calendar database of a user, a voice recognition module 116 configured to detect wake words, transcribe user voice into text, and identify intended commands, a content playback control module 118 configured to manage play, pause, skip, and volume functions and automatically play the selected audio content, a voice response module 120 configured to generate and output natural language audio responses using a text-to-speech engine, a communication network 122 connected to the processing unit 106 and being configured to support real-time retrieval and transmission of audio content metadata, weather information, and calendar information, a display unit 124 connected to the processing unit 106 and being configured to show information about the audio content currently playing, volume levels, playback controls, and system alerts, a speaker unit 126 connected to the processing unit 106 and being configured to output both the selected audio content and the generated voice responses to the user, and a cloud database 128 connected to the communication network 122 and being configured to hold user mood profiles, weather history, calendar events, and metadata of audio content for adaptive learning and personalization.
[0049] The biometric acquisition unit 102 is configured to obtain physiological measures such as heart rate variability, skin conductivity, and facial expression data through the use of photoplethysmography (PPG), galvanic skin response (GSR), and computer vision modules.
[0050] The system 100 further comprises a privacy and consent management module configured to provide user-specific consent controls, manage permission-based access to physiological, environmental, and calendar data, and implement real-time anonymization for regulatory compliance and ethical data handling.
[0051] The voice input unit 104 is configured to pick up emotional indicators via changes in vocal tone, pitch, and speech tempo.
[0052] The weather monitoring module 108 within the processing unit 106 has an artificial intelligence (AI)-based pattern analysis engine that relates environmental sensor data to historical weather patterns to anticipate atmospheric transitions and pre-emptively adjust audio content.
[0053] The mood and atmosphere fusion module 110 within the processing unit 106 uses a weighted decision fusion method integrating mood probabilities, environmental information, and temporal events to derive a composite context measure that controls the selection of content.
[0054] The audio content selection module 112 within the processing unit 106 employs a neural network model trained on a multi-label dataset to map audio content to metadata tags such as weather type, mood category, and event type for the best match.
[0055] The calendar integration module 114 within the processing unit 106 is functionally linked to a geolocation engine that screens calendar events according to regional, cultural, and linguistic preferences, thus adding cultural specificity to content playback.
[0056] The content playback control module 118 within the processing unit 106 dynamically modifies audio output parameters such as tempo, rhythm, and volume according to the detected mood and environmental changes to ensure emotional synchrony.
[0057] The method may include receiving user-specific input data via a biometric acquisition unit 102 and a voice input unit 104, determining the user's emotional or mood state based on said physiological data via a processing unit 106 connected to the biometric acquisition unit 102 and the voice input unit 104, detecting real-time atmospheric conditions using artificial intelligence (AI)-based environmental data sources via a weather monitoring module 108, fusing the detected mood data, real-time atmospheric data, and calendar event data into a composite emotional-environmental context via a mood and atmosphere fusion module 110, selecting corresponding audio content comprising music, podcasts, and stories based on the composite emotional-environmental context via an audio content selection module 112, retrieving calendar event data including festivals and celebrations via a calendar integration module 114, processing the voice commands and identifying user intent via a voice recognition module 116, controlling the playback of selected audio content by managing play, pause, skip, and volume functions via a content playback control module 118, delivering auditory system feedback and verbal responses to user commands via a voice response module 120, facilitating real-time data exchange between system components and external services via a communication network 122 connected to the processing unit 106, displaying playback information, user prompts, and system status via a display unit 124 connected to the processing unit 106, delivering audio content and voice feedback to the user via a speaker unit 126 connected to the processing unit 106, and storing user preferences, audio content metadata, and interaction history for personalization via a cloud database 128 connected to the communication network 122.
[0058] A biometric acquisition unit 102 configured to capture physiological signals such as heart rate and blood perfusion patterns via a fingerprint sensor integrated into an electronic device or wearable device. The biometric acquisition unit 102 is a primary component in the system 100, responsible for acquiring the user's physiological signals to determine their mood or emotional state. Fundamentally, the biometric acquisition unit 102 consists of a fingerprint sensor mounted inside an electronic or wearable device that allows for unobtrusive and effortless acquisition of biometric information. The sensor works by sensing key vital signs like heart rate, skin conductivity, and blood perfusion patterns. These physiological indicators reflect the emotional arousal and stress of the user. Variations in heart rate and peripheral blood flow can indicate excitement, calmness, or anxiety states.
[0059] The incorporation of the fingerprint sensor into widely used devices provides passive and continuous monitoring without interfering with the user's daily routine. This ongoing data collection enables real-time evaluation of the emotional state of the user, making it possible for the system to adjust its responses and content presentation accordingly. This fusion serves as the foundation for mood-based content delivery, such that the output of the system is custom-tuned to the user's immediate emotional scenario. Through the utilization of sophisticated biometric sensing technology, the biometric acquisition unit 102 maximizes the capability of the system 100 to deliver context-aware and personalized interactions.
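By way of a non-limiting illustration, the following minimal Python sketch shows how a heart rate and a simple heart rate variability (HRV) measure might be derived from a raw PPG waveform and mapped to a coarse mood label. The peak-detection scheme, threshold values, and mood labels are illustrative assumptions and are not prescribed by this disclosure.

import numpy as np

def estimate_heart_rate(ppg: np.ndarray, fs: float) -> tuple[float, float]:
    """Estimate heart rate (BPM) and a simple HRV measure (RMSSD, ms)
    from a raw PPG waveform sampled at fs Hz. The naive threshold-based
    peak detection below is a placeholder for a validated algorithm."""
    x = (ppg - ppg.min()) / (np.ptp(ppg) + 1e-9)              # normalize to [0, 1]
    above = x > 0.6                                           # crude beat threshold
    edges = np.flatnonzero(np.diff(above.astype(int)) == 1)   # rising edges = beats
    if len(edges) < 3:
        return 0.0, 0.0
    ibi = np.diff(edges) / fs                                 # inter-beat intervals, s
    bpm = 60.0 / ibi.mean()
    rmssd = np.sqrt(np.mean(np.diff(ibi * 1000.0) ** 2))      # HRV proxy, ms
    return bpm, rmssd

def classify_mood(bpm: float, rmssd: float) -> str:
    """Map coarse physiology to an illustrative mood label.
    Thresholds are placeholder assumptions, not clinical values."""
    if bpm > 95 and rmssd < 20:
        return "stressed"
    if bpm < 70 and rmssd > 40:
        return "calm"
    if bpm > 90:
        return "excited"
    return "neutral"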
[0060] A voice input unit 104 connected to the biometric acquisition unit 102 configured to capture voice commands from the user through one or more integrated or external microphones. The voice input unit 104 is a key module designed to provide smooth interaction between the user and the system with voice commands. With one or more built-in or external microphones, this voice input unit 104 records the voice inputs of the user with high fidelity, allowing for correct recognition even in noisy environments. Enhanced noise-cancellation and signal-processing techniques are utilized to extract the user's voice from ambient sounds, providing improved voice data clarity and reliability.
[0061] This voice input unit 104 supports hands-free operation, where users can control different system functions like starting playback, changing volume, skipping tracks, or asking for information without manual input. Through the interpretation of natural language commands, the voice input unit 104 helps to create an intuitive and user-friendly interface. Integration with the biometric acquisition unit 102 permits the system 100 to put voice commands into context relative to the user's prevailing emotional state. The tone and pattern of a user's speech can yield more information about their mood, which, when added to physiological information, allows the system to adapt responses and content presentation more effectively. This multimodal design maximizes the capabilities of the system 100 to offer personalized and empathetic interactions.
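The vocal-tone cues described above may, for illustration, be approximated with simple pitch and speech-activity features. The following sketch assumes a mono voice signal held in a NumPy array; the autocorrelation pitch estimate and the energy-based tempo proxy are stand-ins for the more sophisticated analysis contemplated for the system, and all constants are assumptions.

import numpy as np

def estimate_pitch_hz(frame: np.ndarray, fs: float,
                      fmin: float = 75.0, fmax: float = 400.0) -> float:
    """Estimate the fundamental frequency of a voiced frame by
    autocorrelation; returns 0.0 when no plausible pitch is found."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)        # plausible lag range
    if hi >= len(ac) or lo >= hi:
        return 0.0
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag if ac[lag] > 0 else 0.0

def speech_activity_rate(signal: np.ndarray, fs: float,
                         frame_ms: float = 25.0) -> float:
    """Fraction of frames carrying voice energy, a crude proxy for
    speech tempo used only for illustration."""
    n = int(fs * frame_ms / 1000.0)
    if len(signal) < n:
        return 0.0
    frames = signal[: len(signal) // n * n].reshape(-1, n)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return float((rms > 0.1 * rms.max()).mean()) if rms.max() > 0 else 0.0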
[0062] A processing unit 106 connected to the biometric acquisition unit 102 and the voice input unit 104 is configured to process mood data and atmospheric data. The processing unit 106 is the central core of the system 100, coordinating the combination and processing of data from multiple inputs to provide context-aware and personalized audio content. It accepts physiological information from the biometric acquisition unit 102 and verbal instructions from the voice input unit 104 and uses these to infer the user's intent and emotional state. Through the examination of heart rate, skin conductivity, and blood perfusion patterns in combination with voice tone and command content, the processing unit 106 builds a nuanced understanding of the user's present mood and intentions.
[0063] The processing unit 106 consolidates information from a variety of sources, ranging from real-time weather data to calendar events, to develop a thorough emotional record, which is utilized in the selection and playback of audio materials to ensure that the responses of the system 100 are contextual and based on the user's setting. The processing unit 106 also oversees the adaptive learning functions of the system. Through the storage and analysis of past data regarding user preferences, emotional reactions, and environmental conditions, it optimizes its algorithms to improve future interactions. Through this ongoing learning process, the system 100 is able to better predict user needs and preferences over time.
[0064] A weather monitoring module 108 adapted to identify real-time environmental weather parameters through artificial intelligence-based weather data acquisition algorithms. The weather monitoring module 108 is intended to capture and translate environmental weather parameters in real-time, feeding this data into the adaptive content delivery of the system. Through artificial intelligence-driven algorithms, the weather monitoring module 108 collects data on temperature, humidity, precipitation, and atmospheric pressure from multiple sources, including onboard sensors and external weather sources. This comprehensive strategy guarantees accurate and timely weather data pertinent to the user's geolocation. Through the analysis of these environmental conditions, the weather monitoring module 108 determines the current weather and forecasts short-term changes. This data is essential in adapting audio content to the environment of the user, improving the relevance and personalization of the system's output. The incorporation of weather information into the emotional profile of the user provides the system with an ability to account for outside factors affecting behavior and mood. Weather can profoundly affect an individual's mood; therefore, adding this information gives the system 100 more educated choices about selecting content. The weather monitoring module 108 automatically updates its information to account for fluctuating conditions, keeping the response of the system valid throughout the day.
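As one simplified, non-limiting realization, the weather monitoring module 108 might reduce the raw parameters it gathers to one of the coarse conditions referenced throughout this disclosure. The field names and cut-off values below are illustrative assumptions; no particular schema or thresholds are mandated herein.

from dataclasses import dataclass

@dataclass
class WeatherSnapshot:
    # Field names are illustrative; the disclosure does not fix a schema.
    temperature_c: float
    humidity_pct: float
    precipitation_mm: float
    cloud_cover_pct: float

def weather_context(w: WeatherSnapshot) -> str:
    """Reduce raw parameters to the coarse conditions the system
    reasons over (sunny, rainy, cloudy, stormy)."""
    if w.precipitation_mm > 7.5 and w.cloud_cover_pct > 80:
        return "stormy"
    if w.precipitation_mm > 0.5:
        return "rainy"
    if w.cloud_cover_pct > 60:
        return "cloudy"
    return "sunny"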
[0065] A mood and atmosphere fusion module 110 adapted to merge the real-time mood information, weather information, and calendar event information into a context-oriented emotional profile. The mood and atmosphere fusion module 110 is tasked with the integration of varied data inputs to build a complete emotional portrait of the user. The mood and atmosphere fusion module 110 combines real-time physiological information, including heart rate and skin conductivity derived from the biometric acquisition unit, with external factors like weather conditions and calendar events. By combining these streams of data, the mood and atmosphere fusion module 110 identifies the user's immediate emotional state and situates it within their immediate context.
[0066] The fusion is carried out by sophisticated algorithms that weigh internal bodily signals against external stimuli. This subtle understanding allows the system to provide audio material that connects with the emotional and environmental context of the user. In addition, the mood and atmosphere fusion module 110 regularly refreshes the emotional profile by tracking changes in physiological signals and environmental conditions, so that the system's responses remain current and responsive. This dynamic process makes real-time adjustments to content delivery possible, maximizing user engagement and satisfaction.
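A minimal sketch of the weighted decision fusion contemplated for the mood and atmosphere fusion module 110 follows. Each source votes over a shared set of context tags and the votes are combined with per-source weights; the priors and weights shown are placeholder assumptions that a deployed system would tune or learn.

def fuse_context(mood_probs: dict[str, float],
                 weather: str,
                 events: list[str],
                 weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> dict[str, float]:
    """Combine mood, weather, and calendar cues into one normalized
    context profile via weighted decision fusion."""
    # Static source-to-tag priors; placeholders for learned mappings.
    weather_prior = {
        "sunny":  {"energetic": 0.7, "calm": 0.3},
        "rainy":  {"reflective": 0.6, "calm": 0.4},
        "cloudy": {"reflective": 0.5, "calm": 0.5},
        "stormy": {"calm": 0.8, "reflective": 0.2},
    }
    event_prior = {"festival": {"celebratory": 1.0}}

    w_mood, w_weather, w_event = weights
    fused: dict[str, float] = {}
    for tag, p in mood_probs.items():
        fused[tag] = fused.get(tag, 0.0) + w_mood * p
    for tag, p in weather_prior.get(weather, {}).items():
        fused[tag] = fused.get(tag, 0.0) + w_weather * p
    for ev in events:
        for tag, p in event_prior.get(ev, {}).items():
            fused[tag] = fused.get(tag, 0.0) + w_event * p
    total = sum(fused.values()) or 1.0
    return {tag: p / total for tag, p in fused.items()}

# Example: a stressed reading on a rainy day near a festival biases the
# profile toward calm, reflective, and celebratory content.
profile = fuse_context({"stressed": 0.7, "calm": 0.3}, "rainy", ["festival"])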
[0067] An audio content selection module 112 configured to select and prioritize audio content such as music, podcasts, and stories from a content database according to the created emotional profile. The audio content selection module 112 is a key piece that deals with selecting and prioritizing audio content such as music, podcasts, and stories according to the emotional profile of the user. This audio content selection module 112 draws on information from the mood and atmosphere fusion module 110, which blends physiological signals, environmental conditions, and calendar events to calculate the user's immediate emotional state. By matching audio content to this emotional profile, the system 100 works to increase user well-being and engagement. The audio content selection module 112 taps into a rich content library, using sophisticated algorithms to map audio choices to the user's mood.
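For illustration, the matching performed by the audio content selection module 112 can be sketched as an affinity score between an item's metadata tags and the fused emotional profile. The disclosure contemplates a trained neural network for this mapping; the dot-product ranking below is only a minimal stand-in.

from dataclasses import dataclass, field

@dataclass
class AudioItem:
    title: str
    kind: str                                             # "music" | "podcast" | "story"
    tags: dict[str, float] = field(default_factory=dict)  # tag -> affinity

def rank_content(library: list[AudioItem],
                 profile: dict[str, float],
                 top_k: int = 5) -> list[AudioItem]:
    """Score each item by the dot product of its metadata tags with the
    fused emotional profile and return the best matches."""
    def score(item: AudioItem) -> float:
        return sum(profile.get(t, 0.0) * a for t, a in item.tags.items())
    return sorted(library, key=score, reverse=True)[:top_k]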
[0068] A calendar integration module 114 adapted to retrieve details of festivals, events, and celebrations from the calendar database of a user. The calendar integration module 114 is structured to integrate the user's booked events, holidays, and celebrations seamlessly into the adaptive content delivery framework of the system. With access to the user's calendar database, the calendar integration module 114 fetches relevant information on upcoming events and notable dates. Such information proves vital in grounding the user's emotional state and personalizing audio content to correlate with their routine activities and milestone events. By applying intelligent parsing and analysis, the calendar integration module 114 recognizes patterns and recurring events and allows the system 100 to predict the user's preferences and needs.
[0069] The combination of calendar information with physiological signals and external environmental conditions enables the user's context to be comprehensively understood. This holistic approach prevents the system's responses from being merely reactive but rather proactive based on the user's schedule and expected emotional states. The system 100, by correlating audio content with the user's calendar, promotes a more personalized and empathetic experience.
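The retrieval performed by the calendar integration module 114 may be sketched as a look-ahead window over a synchronized calendar store. The entry layout and the three-day horizon below are illustrative assumptions.

from datetime import date, timedelta

def upcoming_events(calendar: list[dict], today: date,
                    horizon_days: int = 3) -> list[dict]:
    """Return calendar entries falling within the look-ahead window.
    Entries are assumed to carry 'date' and 'name' keys; a deployed
    module would read the user's synchronized calendar database."""
    window_end = today + timedelta(days=horizon_days)
    return [e for e in calendar if today <= e["date"] <= window_end]

# Example: a Diwali entry two days ahead is surfaced so the fusion
# module can bias selection toward celebratory content.
events = upcoming_events(
    [{"date": date(2025, 10, 20), "name": "Diwali", "kind": "festival"}],
    today=date(2025, 10, 18),
)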
[0070] A voice recognition module 116 configured to detect wake words, transcribe user voice into text, and identify intended commands. The voice recognition module 116 is a key part intended to understand and process spoken-language inputs from the user. It works by detecting audio signals through built-in or external microphones, translating the signals into digital information, and processing them to identify specific words, phrases, or commands. The voice recognition module 116 utilizes sophisticated algorithms and machine learning methodologies to correctly transcribe speech into text despite background noise or differing accents. When the voice recognition module 116 receives voice input, the signal undergoes preprocessing, including noise filtering and normalization, to remove unwanted noise and improve audio quality. The voice recognition module 116 then makes use of speech recognition engines to interpret the linguistic content, extracting keywords and phrases that match predefined queries or commands. This allows the system 100 to interpret user intent and respond accordingly.
[0071] Apart from command recognition, the voice recognition module 116 can identify wake words or activation phrases to enable hands-free usage. The voice recognition module 116 also allows for continuous learning, adjusting to the user's voice patterns over time to enhance responsiveness and accuracy. In addition, the voice recognition module 116 is able to examine vocal features to infer emotional states, which complements the system's general knowledge of the user's mood. By integrating linguistic material with voice tone analysis, the voice recognition module 116 improves the system's capability to offer personalized and contextual interactions.
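A minimal sketch of wake-word gating and intent identification follows. The wake word and the command grammar are hypothetical placeholders; the module contemplated above would rely on trained speech recognition engines rather than regular expressions over a transcript.

import re

# Hypothetical command patterns; the disclosure does not enumerate the
# actual grammar used by the voice recognition module 116.
INTENT_PATTERNS = {
    "play":   re.compile(r"\bplay\b(?:\s+(?P<what>.+))?", re.I),
    "pause":  re.compile(r"\bpause\b|\bstop\b", re.I),
    "skip":   re.compile(r"\bskip\b|\bnext\b", re.I),
    "volume": re.compile(r"\bvolume\s+(?P<level>up|down|\d{1,3})\b", re.I),
}

def parse_command(transcript: str, wake_word: str = "hey aura"):
    """Gate on the (hypothetical) wake word, then match the remainder of
    the utterance against known intents; returns (intent, slots) or None."""
    text = transcript.strip().lower()
    if not text.startswith(wake_word):
        return None
    text = text[len(wake_word):].strip()
    for intent, pattern in INTENT_PATTERNS.items():
        m = pattern.search(text)
        if m:
            return intent, {k: v for k, v in m.groupdict().items() if v}
    return "unknown", {}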
[0072] A content playback control module 118 configured to manage play, pause, skip, and volume functions and automatically play the selected audio content. The content playback control module 118 is a critical building block that governs audio content playback in a smooth, intuitive manner. The content playback control module 118 interacts with the processing unit 106 to carry out audio playback-related commands, that is, play, pause, skip, and volume control. By decoding user input, be it voice commands, touch, or automated triggers, it enables real-time control over the delivery of audio content. One of the most important features of this content playback control module 118 is its contextual response capability: it can, for instance, adjust playback parameters as the detected mood or environment changes. This dynamic adaptation helps keep the audio experience intact and in accordance with the current context of the user. Additionally, the content playback control module 118 allows for user-based customization. By analyzing past events and user behaviors, it is able to learn optimal volume levels for specific types of content or settings and then adjust playback parameters accordingly.
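For illustration, the play, pause, skip, and volume functions of the content playback control module 118 may be modeled as the small state machine below; the audio backend itself is abstracted away, as this disclosure does not tie the module to any particular media framework.

class PlaybackController:
    """Minimal playback state machine mirroring the functions attributed
    to the content playback control module 118."""

    def __init__(self, queue: list, volume: int = 50):
        self.queue = list(queue)
        self.index = 0
        self.playing = False
        self.volume = volume                    # 0..100

    def play(self) -> None:
        if self.queue:
            self.playing = True

    def pause(self) -> None:
        self.playing = False

    def skip(self) -> None:
        if self.index + 1 < len(self.queue):
            self.index += 1

    def set_volume(self, level: int) -> None:
        self.volume = max(0, min(100, level))   # clamp to valid range

    def current(self):
        return self.queue[self.index] if self.queue else None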
[0073] A voice response module 120 configured to generate and output natural language audio responses using a text-to-speech engine. The voice response module 120 is a central element intended to produce and present natural language audio responses to users, providing effortless and intuitive interaction. This voice response module 120 captures the user's spoken input and converts it into text using speech-to-text (STT) technology. The transcribed text is then processed to determine the user's intent, enabling the system to generate an appropriate response. The interaction flow involves the voice response module 120 converting the user's speech into text, which the system 100 processes to formulate a response. The voice response module 120 then converts this response text back into speech, delivering it audibly to the user. This seamless integration enables natural, hands-free communication between the user and the system 100. In addition, the voice response module 120 is multilingual-enabled, meaning that it can talk to users in the languages of their choice. This broadens the system's reach and makes users feel at home.
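As a simplified stand-in for the text-to-speech engine contemplated for the voice response module 120, the sketch below renders canned response templates through the open-source pyttsx3 engine; both the templates and the choice of engine are assumptions rather than requirements of this disclosure.

import pyttsx3  # offline text-to-speech engine, used here as a stand-in

RESPONSES = {
    # Canned templates; the module contemplated above would generate
    # natural-language responses dynamically.
    "play":   "Playing {what} for you.",
    "pause":  "Playback paused.",
    "skip":   "Skipping to the next track.",
    "volume": "Volume set to {level}.",
}

def speak_response(intent: str, slots: dict) -> None:
    """Fill the template for the recognized intent and synthesize it."""
    template = RESPONSES.get(intent, "Sorry, I did not catch that.")
    text = template.format(**{"what": "something", "level": "default", **slots})
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()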
[0074] A communication network 122 connected to the processing unit 106 and being configured to support real-time retrieval and transmission of audio content metadata, weather information, and calendar information. This communication network 122 is coupled with the processing unit 106 and is specifically designed to allow the real-time acquisition and conveyance of the various forms of data that shape audio content delivery, namely audio content metadata, weather data, and calendar data, enabling concurrent data exchange between external data sources and internal processing components and thereby supporting dynamic personalization of the audio experience. The communication network 122 ensures the responsiveness, personalization, and context-sensitivity of the system 100, making the user's auditory experience more engaging based on real-world environmental and temporal conditions.
[0075] A display unit 124 connected to the processing unit 106 and being configured to show information about the audio content currently playing, volume levels, playback controls, and system alerts. The display unit 124 acts as the system's main visual interface. The embedded unit is designed to present real-time information about the operation and output of the audio system so that users can observe and control the system directly without depending on other devices. The display unit 124 displays detailed information regarding the audio content in play, such as content name, type, artist or speaker, length, playback status, and metadata tags describing mood matching, weather context, or calendar-related thematic context. Such contextual indicators improve the user's comprehension of the real-time selection of content.
[0076] Furthermore, the display unit 124 offers a clear visual indication of volume levels, usually by an on-screen slider or numeric indicator. On-device playback controls, such as play, pause, stop, skip, and rewind operations, can be accessed by users through capacitive touch buttons, a touch-sensitive screen interface, or physical buttons around the display, depending on the system design. In addition, the display unit 124 notifies the user of system alerts, network status, upcoming scheduled audio, and sensor and privacy permission notifications. Being embedded, the display is ergonomically placed within the system's housing, providing a contained, user-friendly, and responsive interface that enhances autonomy and user experience.
[0077] A speaker unit 126 connected to the processing unit 106 and being configured to output both the selected audio content and the generated voice responses to the user. The speaker unit 126 is the system's main auditory interface with the user, providing both the selected audio content and generated voice feedback. Attached to the processing unit 106, it provides high-quality sound output to the user, thus improving the user experience. This speaker unit 126 is engineered to handle a broad variety of audio outputs, from entertainment content to system notifications and voice assistance. Its design prioritizes fidelity and clarity, with audio content presented faithfully and agreeably. The speaker unit 126 can also include advanced features such as noise cancellation and volume adjustment, which adapt its output in response to surrounding noise and individual control. Aside from playing the chosen audio material, the speaker unit 126 communicates the voice responses produced by the system's text-to-speech engine, offering users instantaneous feedback, confirmations, and information, thus allowing for free-flowing communication.
[0078] A cloud database 128 connected to the communication network 122 and being configured to hold user mood profiles, weather history, calendar events, and metadata of audio content for adaptive learning and personalization. The cloud database 128 is the main repository for data storage and management, integral to the system's personalization and adaptive learning features. The cloud database 128 stores metadata of audio materials, weather histories, calendar information, and mood profiles of users, enabling the system 100 to provide a personalized, context-sensitive experience. User mood profiles, computed from physiological and behavioral information, are maintained and updated continuously to enable the system to identify patterns and modify responses over time. Weather history data provides insight into environmental conditions potentially affecting the user's mood, improving the accuracy of mood inferences and recommendations. Calendar events retained in the cloud database 128 inform the system 100 of impending events so that audio material keeps pace with the user's agenda and milestones. Furthermore, audio content metadata, such as genre, tempo, and theme, allows the system 100 to match material accurately with the user's current emotional state and personal preferences. The architecture of the cloud database 128 is scalable and real-time, with data held in readiness for processing and decision-making.
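The data categories held in the cloud database 128 may be visualized through the illustrative record layouts below. This disclosure names the categories (mood profiles, weather history, calendar events, content metadata) but does not mandate any concrete schema; the fields shown are assumptions.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class MoodSample:
    user_id: str
    timestamp: datetime
    mood_probs: dict            # e.g. {"calm": 0.7, "stressed": 0.1}

@dataclass
class WeatherRecord:
    timestamp: datetime
    location: str
    condition: str              # "sunny" | "rainy" | "cloudy" | "stormy"

@dataclass
class ContentMeta:
    content_id: str
    kind: str                   # "music" | "podcast" | "story"
    genre: str
    tempo_bpm: float
    tags: dict                  # mood/weather/event affinities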
[0079] Consider, as a simple example, a scenario in which the intelligent audio system flawlessly adjusts music and voice feedback to your mood, surroundings, and calendar activities. On a rainy day, if it senses stress, it may play soothing instrumental songs to calm you down. On a sunny morning when you are full of energy, it may pick lively songs to match your mood. During holiday seasons such as Diwali, identifying the holiday from your calendar, it prepares celebratory songs to add festive fervour. At the end of a busy workday, when you are feeling tired, it may recommend soothing podcasts to help you unwind. With evening approaching, identifying your unwinding routine, it switches to gentle melodies to help you sleep well. This smart system keeps adapting, keeping your listening experience consistent with your present context and mood.
[0080] FIG. 2 illustrates a flowchart of a method of an artificial intelligence (AI)-driven, atmosphere-centric and mood-based audio content recommendation system, outlining the sequential steps, in accordance with an exemplary embodiment of the present disclosure.
[0081] At 202, the user-specific input data are received via a biometric acquisition unit and a voice input unit.
[0082] At 204, the emotional or mood state of the user is determined based on said physiological data via a processing unit connected to the biometric acquisition unit and the voice input unit.
[0083] At 206, the real-time atmospheric conditions are detected using artificial intelligence (AI)-based environmental data sources via a weather monitoring module.
[0084] At 208, the detected mood data, real-time atmospheric data and calendar event data are fused into a composite emotional-environmental context via a mood and atmosphere fusion module.
[0085] At 210, corresponding audio content comprising music, podcasts, and stories is selected based on the composite emotional-environmental context via an audio content selection module.
[0086] At 212, the calendar event data including festivals and celebrations are retrieved via a calendar integration module.
[0087] At 214, the voice commands are processed and user intent is identified via a voice recognition module.
[0088] At 216, the playback of selected audio content is controlled by managing play, pause, skip, and volume functions via a content playback control module.
[0089] At 218, the auditory system feedback and verbal responses are delivered to user commands via a voice response module.
[0090] At 220, real-time data exchange between system components and external services is facilitated via a communication network connected to the processing unit.
[0091] At 222, the playback information, user prompts, and system status are displayed via a display unit connected to the processing unit.
[0092] At 224, the audio content and voice feedback are delivered to the user via a speaker unit connected to the processing unit.
[0093] At 226, the user preferences, audio content metadata, and interaction history for personalization are stored via a cloud database connected to the communication network.
[0094] FIG. 3 illustrates a flowchart of a method of an artificial intelligence (AI)-driven, atmosphere-centric and mood-based audio content recommendation system, in accordance with an exemplary embodiment of the present disclosure.
[0095] At 302, receive user-specific input data via a biometric acquisition unit and a voice input unit.
[0096] At 304, determine the user's emotional or mood state based on said physiological data via a processing unit connected to the biometric acquisition unit and a voice input unit.
[0097] At 306, detect real-time atmospheric conditions using artificial intelligence (AI)-based environmental data sources via a weather monitoring module.
[0098] At 308, fuse the detected mood data, real-time atmospheric data, and calendar event data into a composite emotional-environmental context via a mood and atmosphere fusion module.
[0099] At 310, select corresponding audio content comprising music, podcasts, and stories based on the composite emotional-environmental context via an audio content selection module.
[0100] At 312, retrieve calendar event data including festivals and celebrations via a calendar integration module.
[0101] At 314, process the voice commands and identify user intent via a voice recognition module.
[0102] At 316, control the playback of selected audio content by managing play, pause, skip, and volume functions via a content playback control module.
[0103] At 318, deliver auditory system feedback and verbal responses to user commands via a voice response module.
[0104] At 320, facilitate real-time data exchange between system components and external services via a communication network connected to the processing unit.
[0105] At 322, display playback information, user prompts, and system status via a display unit connected to the processing unit.
[0106] At 324, deliver audio content and voice feedback to the user via a speaker unit connected to the processing unit.
[0107] At 326, store user preferences, audio content metadata, and interaction history for personalization via a cloud database connected to the communication network.
[0108] FIG. 4 illustrates the architectural flow diagram of an atmosphere-centric and mood-based audio content recommendation system in accordance with an exemplary embodiment of the present disclosure.
[0109] At 402, the system captures the user's geographical location from GPS information to decide local weather conditions appropriate for audio selection.
[0110] At 404, current time data is gathered to frame recommendations, like playing energetic music during the morning or soothing music during the evening.
[0111] At 406, live or predicted weather data is retrieved to evaluate atmospheric conditions such as sunshine, rain, or clouds.
[0112] At 408, the combination of location, time, and weather forecast inputs is processed to derive the current weather context.
[0113] At 410, based on weather analysis, the system queries the music database to fetch content that suits the detected weather condition.
[0114] At 412, the music database serves as the central content repository from which audio tracks are selected for playback. It houses a wide variety of audio content, including music, podcasts, and stories, all systematically tagged with metadata such as genre, mood, tempo, weather compatibility, and contextual themes.
[0115] At 414, the system evaluates user interest by explicit feedback or passive interaction signals such as skipping or listening time. If the user is interested in the suggestion, the chosen music is inserted into the playlist and played. If the user is not interested, feedback is recorded and the system adjusts future suggestions accordingly.
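The feedback handling at 414 may be sketched as a per-tag preference update driven by skip and listening-time signals. The reward shaping and learning rate below are illustrative assumptions, not parameters fixed by this disclosure.

def update_tag_preferences(prefs: dict, item_tags: dict,
                           listened_fraction: float, skipped: bool,
                           lr: float = 0.1) -> dict:
    """Nudge per-tag preferences from passive signals: a skip is a
    negative signal, a long listen a positive one."""
    reward = -1.0 if skipped else (2.0 * listened_fraction - 1.0)
    out = dict(prefs)
    for tag, affinity in item_tags.items():
        out[tag] = out.get(tag, 0.0) + lr * reward * affinity
    return out

# Example: skipping a high-energy track lowers the "energetic" weight,
# so future suggestions drift toward the user's demonstrated taste.
prefs = update_tag_preferences({"energetic": 0.5}, {"energetic": 0.9},
                               listened_fraction=0.1, skipped=True)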
[0116] At 416, the system continuously observes user preferences and environmental changes to dynamically improve future music suggestions.
[0117] At 418, the final output, a curated selection of music matched to the user's context and preferences, is prepared for playback. This playlist is dynamically generated from the music database and the emotional profile to ensure that the audio content is contextually relevant and emotionally engaging. The playlist is then presented to the user via the speaker or display interface, providing a seamless, immersive listening experience tailored to the user's environment and emotional requirements.
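A minimal sketch of the FIG. 4 flow may aid understanding: it derives a weather context from location, time, and forecast (402-408), queries a metadata-tagged music database (410-412), and adjusts a simple preference score from user feedback (414-416). The record layout, scoring weights, and toy data are illustrative assumptions only, not the disclosed data model.

```python
# Illustrative sketch of the FIG. 4 flow (402-418), under assumed data shapes.

from dataclasses import dataclass, field


@dataclass
class TrackRecord:
    """Step 412: one metadata-tagged entry in the music database."""
    title: str
    mood: str                                        # e.g. "soothing"
    weather_tags: set = field(default_factory=set)   # e.g. {"rainy"}
    score: float = 0.0                               # learned preference weight


def weather_context(location, hour, forecast):
    """Steps 402-408: combine location, time, and forecast into one context."""
    period = "morning" if hour < 12 else "evening"
    return {"location": location, "period": period, "weather": forecast}


def suggest(db, ctx):
    """Step 410: fetch the best-scoring track matching the weather context."""
    matches = [t for t in db if ctx["weather"] in t.weather_tags]
    return max(matches, key=lambda t: t.score) if matches else None


def record_feedback(track, accepted):
    """Steps 414-416: nudge future suggestions from explicit/passive feedback."""
    track.score += 0.1 if accepted else -0.1


# Usage with toy data:
db = [TrackRecord("Quiet Rain", "soothing", {"rainy", "cloudy"}),
      TrackRecord("Sunrise Run", "energetic", {"sunny"})]
ctx = weather_context((17.4, 78.5), hour=9, forecast="rainy")
pick = suggest(db, ctx)
if pick:
    record_feedback(pick, accepted=True)   # step 414: user kept listening
```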
[0118] In the best mode of operation, the system begins by triggering the biometric acquisition unit 102, which passively gathers real-time physiological data such as heart rate variability and blood perfusion from a linked wearable sensor or an embedded sensor. At the same time, the voice input unit 104 keeps its always-on microphone array active to capture voice commands and infer emotional tone from user speech. The processing unit 106 receives the biometric and voice information and matches it with external environmental information gathered through the weather monitoring module 108, which continuously streams localized atmospheric conditions such as temperature, humidity, and precipitation through artificial intelligence (AI) enhanced weather services. The calendar integration module 114 synchronizes with the user's digital calendar to retrieve cultural, national, and personal events.
[0119] These multimodal inputs, biometric, voice, weather, and calendar, are subsequently fused within the mood and atmosphere fusion module 110, which dynamically creates a contextual emotional profile through adaptive weighting schemes. The fused emotional state is passed to the audio content selection module 112, which queries the system's cloud-integrated audio database to fetch music, podcasts, or stories that correspond to the user's current mood and atmosphere. The selection is filtered and enriched according to the user's history and preferences through internal algorithms in the processing unit 106. Playback is controlled via the content playback control module 118, permitting real-time playback control such as play, pause, skip, and volume adjustment. The voice recognition module 116 identifies wake words and handles speech commands, which facilitates hands-free use, and also forwards emotional tone analysis to enable mood refinement. User feedback and confirmation are provided by the voice response module 120, which employs text-to-speech synthesis to give natural responses. All modules exchange information through the communication network 122, providing coordinated data flow and access to online services. Output is provided through the speaker unit 126 for audio playback and the display unit 124, which displays current track information, mood indicators, and user control options.
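The adaptive weighting described above can be pictured with a minimal sketch of weighted decision fusion in the spirit of the mood and atmosphere fusion module 110: per-label mood probabilities are blended with weather and calendar biases under assumed weights, then normalized into a composite context measure. The weights, mood labels, and normalization scheme below are illustrative assumptions, not the disclosed algorithm.

```python
# Minimal weighted decision fusion sketch; all weights and labels are assumed.

def fuse_context(mood_probs, weather_bias, event_bias,
                 weights=(0.5, 0.3, 0.2)):
    """Return a normalized composite score per mood label.

    mood_probs   -- dict label -> probability from biometric/voice analysis
    weather_bias -- dict label -> bias implied by current weather
    event_bias   -- dict label -> bias implied by calendar events
    """
    w_mood, w_weather, w_event = weights
    labels = set(mood_probs) | set(weather_bias) | set(event_bias)
    fused = {
        label: (w_mood * mood_probs.get(label, 0.0)
                + w_weather * weather_bias.get(label, 0.0)
                + w_event * event_bias.get(label, 0.0))
        for label in labels
    }
    total = sum(fused.values()) or 1.0      # avoid division by zero
    return {label: score / total for label, score in fused.items()}


# Example: rainy weather and a festival entry nudge a borderline mood reading.
composite = fuse_context(
    mood_probs={"calm": 0.6, "energetic": 0.4},
    weather_bias={"calm": 0.8, "energetic": 0.2},   # rainy afternoon
    event_bias={"energetic": 1.0},                   # festival in calendar
)
target_mood = max(composite, key=composite.get)      # drives content selection
```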
[0120] The atmosphere centric mood-driven audio content recommendation system improves user well-being by synchronizing audio playback with real-time mood and environmental context. In contrast to traditional systems that depend entirely on user interaction or listening history, this system combines biometric feedback, tone of voice, weather conditions, and calendar entries to dynamically personalize music, podcasts, or stories to the current mood and environment of the user. This system aids in lowering stress levels, improving mood, and producing engaging experiences that appeal to the emotional state of the user. It is particularly useful for users who require therapeutic sound treatments, situation-sensitive entertainment, or affect regulation during the day. The hands-free nature of the system through voice control and adaptive learning also enhances accessibility and user convenience. Through the integration of technology and emotional intelligence, this system provides a new, intuitive method of experiencing content, which makes digital interaction more responsive, empathetic, and human-centered in a world that is growing more connected.
[0121] While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it will be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
[0122] A person of ordinary skill in the art may be aware that, in combination with the examples described in the embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware, computer software, or a combination thereof.
[0123] The foregoing descriptions of specific embodiments of the present disclosure have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the present disclosure to the precise forms disclosed, and many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described to best explain the principles of the present disclosure and its practical application, and to thereby enable others skilled in the art to best utilize the present disclosure and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but such omissions and substitutions are intended to cover the application or implementation without departing from the scope of the present disclosure.
[0124] Disjunctive language such as the phrase “at least one of X, Y, Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
[0125] In a case that no conflict occurs, the embodiments in the present disclosure and the features in the embodiments may be mutually combined. The foregoing descriptions are merely specific implementations of the present disclosure, but are not intended to limit the protection scope of the present disclosure. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present disclosure shall fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims:
I/We Claim:
1. An artificial intelligence (AI) driven atmosphere centric and mood-based audio content recommendation system (100), the system (100) comprising:
a biometric acquisition unit (102) configured to capture physiological signals such as heart rate and blood perfusion patterns via a fingerprint sensor integrated into an electronic device or wearable device;
a voice input unit (104) connected to the biometric acquisition unit (102) and configured to capture voice commands from the user through one or more integrated or external microphones;
a processing unit (106) connected to the biometric acquisition unit (102) and the voice input unit (104) and being configured to process mood data and atmospheric data, wherein the processing unit (106) comprises:
a weather monitoring module (108) adapted to identify real-time environmental weather parameters through artificial intelligence-based weather data acquisition algorithms;
a mood and atmosphere fusion module (110) adapted to merge the real-time mood information, weather information, and calendar event information into a context-oriented emotional profile;
an audio content selection module (112) configured to select and prioritize audio content such as music, podcasts, and stories from a content database according to the created emotional profile;
a calendar integration module (114) adapted to retrieve details of festivals, events, and celebrations from the calendar database of a user;
a voice recognition module (116) configured to detect wake words, transcribe user voice into text, and identify intended commands;
a content playback control module (118) configured to manage play, pause, skip, and volume functions and to automatically play the selected audio content;
a voice response module (120) configured to generate and output natural language audio responses using a text-to-speech engine;
a communication network (122) connected to the processing unit (106) and being configured to support real-time retrieval and transmission of audio content metadata, weather information, and calendar information;
a display unit (124) connected to the processing unit (106) and being configured to show information about ongoing playing audio content, volume levels, playback controls, and system alerts;
a speaker unit (126) connected to the processing unit (106) and being configured to output both the selected audio content and the generated voice responses to the user;
a cloud database (128) connected to the communication network (122) and being configured to hold user mood profiles, weather history, calendar events, and metadata of audio content for adaptive learning and personalization.
2. The system (100) as claimed in claim 1, wherein the biometric acquisition unit (102) is configured to obtain physiological measures such as heart rate variability, skin conductivity, and facial expression data through photoplethysmography (PPG), galvanic skin response (GSR), and computer vision modules.
3. The system (100) as claimed in claim 1, wherein the system (100) further comprises a privacy and consent management module configured to provide user-specific consent controls, manage permission-based access to physiological, environmental, and calendar data, and implement real-time anonymization for regulatory compliance and ethical data handling.
4. The system (100) as claimed in claim 1, wherein the voice input unit (104) is configured to detect emotional indicators through variations in vocal tone, pitch, and speech tempo.
5. The system (100) as claimed in claim 1, wherein the weather monitoring module (108) within the processing unit (106) has an artificial intelligence (AI) based pattern analysis engine that relates environmental sensor data to historical weather patterns to anticipate atmospheric transitions and pre-emptively adjust audio content.
6. The system (100) as claimed in claim 1, wherein the mood and atmosphere fusion module (110) within the processing unit (106) uses a weighted decision fusion method integrating mood probabilities, environmental information, and temporal events to derive a composite context measure that controls selection of content.
7. The system (100) as claimed in claim 1, wherein the audio content selection module (112) within the processing unit (106) employs a neural network model that has been trained on a multi-label dataset to map audio content to metadata tags such as weather-type, mood category, and event-type for best match.
8. The system (100) as claimed in claim 1, wherein the calendar integration module (114) within the processing unit (106) is functionally linked to a geolocation engine that screens calendar events according to regional, cultural, and linguistic preferences, thus adding cultural specificity to content playback.
9. The system (100) as claimed in claim 1, wherein the content playback control module (118) within the processing unit (106) dynamically modifies audio output parameters such as tempo, rhythm, and volume according to the detected mood and environmental changes to ensure emotional synchrony.
10. A method for real-time atmosphere centric and mood-based audio content recommendation using the system (100), the method comprising:
receiving user-specific input data via a biometric acquisition unit and a voice input unit;
determining the user's emotional or mood state based on the received physiological data via a processing unit connected to the biometric acquisition unit and the voice input unit;
detecting real-time atmospheric conditions using artificial intelligence (AI)-based environmental data sources via a weather monitoring module;
fusing the detected mood data, real-time atmospheric data, and calendar event data into a composite emotional-environmental context via a mood and atmosphere fusion module;
selecting corresponding audio content comprising music, podcasts, and stories based on the composite emotional-environmental context via an audio content selection module;
retrieving calendar event data including festivals and celebrations via a calendar integration module;
processing the voice commands and identifying user intent via a voice recognition module;
controlling the playback of selected audio content by managing play, pause, skip, and volume functions via a content playback module;
delivering auditory system feedback and verbal responses to user commands via a voice response module;
facilitating real-time data exchange between system components and external services via a communication network connected to the processing unit;
displaying playback information, user prompts, and system status via a display unit connected to the processing unit;
delivering audio content and voice feedback to the user via a speaker unit connected to the processing unit;
storing user preferences, audio content metadata, and interaction history for personalization via a cloud database connected to the communication network.
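By way of illustration only, and forming no part of the claims, the following sketch indicates how a multi-label neural mapping in the spirit of claim 7 might attach metadata tags such as weather-type, mood category, and event-type to an audio feature vector. The architecture, feature dimensionality, tag vocabulary, and random weights are assumptions for illustration; no trained model or dataset from the disclosure is reproduced.

```python
# Hypothetical multi-label tagger sketch (cf. claim 7), with assumed shapes.

import numpy as np

TAGS = ["rainy", "sunny", "calm", "energetic", "festival"]  # assumed labels

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (16, 32))          # input: 16-dim audio feature vector
W2 = rng.normal(0, 0.1, (32, len(TAGS)))   # output: one logit per tag


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def tag_probabilities(features):
    """Forward pass: independent per-tag probabilities (multi-label output)."""
    hidden = np.tanh(features @ W1)
    return sigmoid(hidden @ W2)


# Tags whose probability clears a threshold are attached as content metadata.
features = rng.normal(size=16)   # stand-in for extracted audio features
probs = tag_probabilities(features)
assigned = [tag for tag, p in zip(TAGS, probs) if p > 0.5]
```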