
Brain Computer Interface Based Sound Source Localization For Attending Tasks In An Industrial Environment Via Human Robot Interaction

Abstract: Embodiments of the present disclosure relate to a method and system for multimodal collaboration between a human and a robot, wherein a robot identifies a gesture and an audio input provided by a human as input, correlates the gesture with the audio input to perform a task, and then performs the task. Other embodiments are also disclosed. Figure 1.


Patent Information

Application #:
Filing Date: 20 December 2023
Publication Number: 01/2024
Publication Type: INA
Invention Field: ELECTRONICS
Status:
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2025-03-20
Renewal Date:

Applicants

INDIAN INSTITUTE OF SCIENCE
C V RAMAN AVENUE, BANGALORE 560012, INDIA

Inventors

1. ABHRA ROY CHOWDHURY
INDIAN INSTITUTE OF SCIENCE, C V RAMAN AVENUE, BANGALORE 560012, INDIA
2. MUKIL SARAVANAN
INDIAN INSTITUTE OF SCIENCE, C V RAMAN AVENUE, BANGALORE 560012, INDIA

Specification

Description:

TECHNICAL FIELD
Embodiments of the present disclosure relate to an architecture of a brain computer interface for collaboration between humans and robots in an industrial environment, and more specifically to a brain computer interface based audio source localization for attending tasks in an industrial environment via a human robot interaction.

BACKGROUND
Generally, Human-Robot Collaboration is a collaborative process in which human and robot agents work jointly in order to perform tasks that may be shared and thereby achieve shared goals. There are a number of technological areas and applications where robots may be required to work alongside humans, making them part of the team and capable, efficient members of human-robot teams. In an industrial set-up, collaboration is a special type of coordinated activity, one in which two or more agents work jointly with each other, together performing a task or carrying out the activities needed to satisfy a shared goal. Some of the areas where such collaborations find use for robots include homes, hospitals, offices, space exploration, manufacturing, etc. In many industrial set-ups, it becomes imperative to use robots, especially where an operator may be located at a distant place and can instruct robot(s) to perform tasks by way of collaboration, depending on the requirements identified within the industrial setup.
Industrial applications of human-robot collaboration are gaining relatively high importance in recent times. Robots used in an industrial environment are known to physically interact with humans, take commands from the humans (operators) within the given environment, for example the industrial setup, and complete tasks associated with the environment. For effective human-robot collaboration in a given environment, for example an industrial environment, it becomes generally imperative that the robot should be capable of understanding and interpreting several communication mechanisms similar to the mechanisms involved in human-human interaction. This poses a challenge with respect to human-robot collaboration, especially in an industrial environment. Further, these robots must also communicate with the interacting humans in order to coordinate their actions properly, execute the shared plan, and achieve the overall task by interacting with the humans. There is therefore a need in the art for better and more efficient human-robot collaboration, especially in industrial environments, to address tasks quickly and efficiently.

SUMMARY
Embodiments of the present disclosure relate to a method and system for operating a human-actuated robotic system using a brain computer interface for audio source localization within a given environment, for example an industrial environment, wherein the environment may be a noisy environment or a noise-free environment, for attending tasks in the environment (references to the environment in the present disclosure mean an industrial environment unless otherwise explicitly mentioned) via a human robot interaction. In an embodiment, the system includes at least one human operator interacting with a robot. In an embodiment, the human operator may be provided with a monitoring device, wherein the monitoring device (a brain computer interface or BCI; references to the monitoring device throughout the disclosure refer to the BCI) is placed on the scalp of the human operator. In an embodiment, the monitoring device is configured to acquire in-vivo signals (non-invasive signals) from the human operator, where the in-vivo signal is based on a reaction or response of the human operator to at least one audio signal from a localized audio source placed within the environment. In an embodiment, each of the audio sources placed in the environment may have a distinct frequency, such that they are distinguishable from each other.
In an embodiment, as the non-invasive signal gathered by the monitoring device is weak, the monitoring device is configured to amplify the non-invasive signal acquired from the human operator in response to the distinct audio signal. In an embodiment, the monitoring device is coupled to a computing device, and the monitoring device is configured to transmit the amplified signal to the computing device for further processing. In an embodiment, the computing device is configured to determine the coordinates and/or a location of the audio source, and on determining the location of the audio source, which is a localized audio source, the computing device directs a robot to the location of the audio source emitting the localized audio signal. In an embodiment, on reaching the location of the localized audio source, the robot may be assigned to perform tasks at the location and/or report back to the operator the particular scenario due to which the audio source was activated. In an embodiment, the audio source discontinues emitting the audio signal (alarm) after the robot has reached the location and/or the task has been attended to. In an embodiment, the discontinuation of the alarm may be automatically or manually controlled. Other embodiments are also disclosed.

BRIEF DESCRIPTION OF THE DRAWINGS
The detailed description is described with reference to the accompanying figures. Features, aspects, and advantages of the subject matter of the present disclosure will be better understood with regard to the following description and the accompanying drawings. The figures are intended to be illustrative, not limiting, and are generally described in context of the embodiments, and it should be understood that it is not intended to limit the scope of the disclosure to these particular embodiments. In the figures, the same numbers may be used throughout the drawings to reference features and components. In order that the present disclosure may be readily understood and put into practical effect, reference will now be made to exemplary embodiments and/or cases as illustrated with reference to the accompanying figures. The figures together with detailed description below, are incorporated in and form part of the specification, and serve to further illustrate the embodiments and explain various principles and advantages.
Figure 1 is an illustration of a monitoring device 100 (referred to also as a brain computer interface or BCI) configured to be placed on the scalp of a human operator to acquire in-vivo (non-invasive) signals from the human operator in accordance with an embodiment of the present disclosure.
Figure 2 is an exemplary setup illustrating the human-robot interaction that can be used in a given environment (reference to environment should be read as an industrial environment), in accordance with an embodiment of the present disclosure.
Figure 3A is an exemplary pictorial representation of an environment wherein the human-robot interaction is illustrated with two localized audio sources in the environment in accordance with an embodiment of the present disclosure.
Figure 3B is an exemplary pictorial representation of an environment wherein the human-robot interaction is illustrated with multiple localized audio sources in the environment in accordance with an embodiment of the present disclosure.
Figure 3C is an exemplary pictorial representation of an environment wherein the human-robot interaction is illustrated with localized audio sources in the environment wherein the robot is sent to the localized audio source after determining the location of the localized audio source which needs to be attended to in accordance with an embodiment of the present disclosure.
Figure 3D is an exemplary pictorial representation of an environment wherein the human-robot interaction is illustrated with multiple localized audio sources in the environment wherein the robot is sent to the localized audio source which needs to be attended to after determining the location of the localized audio source that needs attention in accordance with an embodiment of the present disclosure.
Figure 4 is an exemplary embodiment of a method for a human robot interaction within a given environment for performing a task in accordance with an embodiment of the present disclosure.
Throughout the drawings, identical reference numbers designate similar, but not necessarily identical elements. The figures are not necessarily to scale, and the size of some parts may be exaggerated to more clearly illustrate the example shown. Moreover, the drawings provide examples and/or implementations consistent with the description; however, the description is not limited to the examples and/or implementations provided in the drawings.

DETAILED DESCRIPTION
The following describes technical solutions in exemplary embodiments of the subject matter of the present disclosure with reference to the accompanying drawings. In this application as disclosed herein, "at least one" means one or more, and "a plurality of" means two or more. The term "and/or" describes an association relationship for describing associated objects and represents that three relationships may exist. For example, A and/or B may represent the following cases: Only A exists, both A and B exist, and only B exists, where A and B may be singular or plural. The character "/" usually indicates an "or" relationship between the associated objects. "At least one item (piece) of the following" or a similar expression thereof means any combination of the items, including any combination of singular items (piece) or plural items (pieces). For example, at least one item (piece) of a, b, or c may represent a, b, c, a and b, a and c, b and c, or a, b, and c, where a, b, and c each may be singular or plural.
It should be noted that in this application the articles “a”, “an” and “the” are used to refer to one or to more than one (i.e., to at least one) of the grammatical object of the article. The terms “comprise” and “comprising” are used in the inclusive, open sense, meaning that additional elements may be included. They are not intended to be construed as “consists of only”. Throughout this specification, unless the context requires otherwise, the word “comprise”, and variations such as “comprises” and “comprising”, will be understood to imply the inclusion of a stated element or step or group of elements or steps but not the exclusion of any other element or step or group of elements or steps. The term “including” is used to mean “including but not limited to”. “Including” and “including but not limited to” are used interchangeably. In the structural formulae given herein and throughout the present disclosure, the following terms have the indicated meanings, unless specifically stated otherwise.
Unless otherwise defined, all terms used in the disclosure, including technical and scientific terms, have the meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. By means of further guidance, term definitions are included for better understanding of the present disclosure. The term ‘about’ as used herein, when referring to a measurable value such as a parameter, an amount, a temporal duration, and the like, is meant to encompass variations of ±10% or less, preferably ±5% or less, more preferably ±1% or less, and still more preferably ±0.1% or less of and from the specified value, insofar as such variations are appropriate to perform the present disclosure. It is to be understood that the value to which the modifier ‘about’ refers is itself also specifically, and preferably, disclosed.
It should be noted that in this application, a term such as "example" or "for example" or “exemplary” is used to represent giving an example, an illustration, or a description. Any embodiment or design scheme described as an "example" or "for example" in this application should not be construed as being more preferable or having more advantages than another embodiment or design scheme. Rather, use of a word such as "example" or "for example" is intended to present a related concept in a specific manner.
It should be understood that in the embodiments of the present subject matter that "B corresponding to A" indicates that B is associated with A, and B can be determined based on A. However, it should be further understood that determining B based on A does not mean that B is determined based on only A. B may alternatively be determined based on A and/or other information.
In the embodiments of this application, "a plurality of" means two or more than two. Descriptions such as "first", "second" in the embodiments of this application are merely used for indicating and distinguishing between described objects, do not show a sequence, do not indicate a specific limitation on a quantity of devices in the embodiments of this application, and do not constitute any limitation on the embodiments of this application. Through the disclosure reference to monitoring device specifically refers to a brain computer interface or BCI, and reference to environment specifically refers to an industrial environment.
Embodiments of the present disclosure relate to a method and system for operating a human-actuated robotic system using a brain computer interface for audio source localization within a given environment, for example an industrial environment, wherein the environment may be a noisy environment or a noise-free environment. In an exemplary case, the system includes at least one human operator interacting with a robot. In an embodiment, the human operator may be provided with a monitoring device (the monitoring device throughout the present disclosure refers to a brain computer interface or BCI unless otherwise specified), wherein the monitoring device is placed on the scalp of the human operator. In an exemplary case, the monitoring device is configured to acquire in-vivo signals (non-invasive signals) from the human operator, where the in-vivo signal is based on a reaction or response of the human operator to at least one audio signal from a localized audio source placed within the environment (reference to the environment in the present disclosure refers to an industrial environment unless otherwise specified). In an exemplary case, each of the audio sources placed in the environment may have a distinct frequency, such that each of the audio sources is distinguishable from the others within the environment.
In an exemplary case, the monitoring device is configured to first acquire a signal from the human operator, and the signal acquired from the human operator is a weak signal that needs to be amplified before being transmitted and processed. In an exemplary embodiment, an amplifier attached to the monitoring device amplifies the acquired signal. In an exemplary case, the monitoring device is coupled to a computing device, and the monitoring device transmits the amplified signal to the computing device for further processing. In an exemplary case, the computing device determines the coordinates and/or a location of the audio source, and on determining the coordinates and/or location of the audio source, which is a localized audio source, the computing device directs a robot to the coordinates and/or location of the localized audio source that is emitting the audio signal. In an exemplary case, on reaching the location of the localized audio source, the robot may be assigned to perform tasks at the location and/or report back to the operator the particular scenario due to which the audio source was activated. In an exemplary case, the audio source discontinues emitting the audio signal (alarm) after the robot has reached the location and/or has attended to the task. In an exemplary case, discontinuation of the audio signal (hereinafter also cross-referenced as the alarm) may be automatically or manually controlled.
In an exemplary case, the audio signal is received by the human operator as an input, which is generated from at least one of a plurality of audio sources within the given environment. The human operator and the audio source are located within the given environment. In an exemplary case, any audio signal from an audio source outside the given environment may not be processed by the monitoring device. In an exemplary case, each of the plurality of audio sources within the environment is configured with a pre-defined distinct frequency and placed at a pre-defined location within the given environment. In an exemplary case, the audio sources within the given environment preferably emit continuously ringing amplitude-modulated sinusoidal tones at distinct frequencies, which may be identified by the human operator irrespective of the side from which the audio signal is received. In an exemplary case, the audio source may be placed on the left side, in which case generally the left ear of the human operator may pick up the audio signal, and due to a reaction of the human operator to the audio signal having the distinct frequency, a non-invasive signal is acquired by the monitoring device. In another exemplary case, the audio source could be placed on the left side while the human operator is deaf in the left ear; if the right ear of the human operator is active, then the human operator may pick up the audio signal with the right ear, and in reaction or response to the distinct frequency, a non-invasive signal is acquired by the monitoring device. In an exemplary case, irrespective of where the audio source is in the given environment, when the human operator identifies and acknowledges the distinct audio signal from the localized audio source having a pre-defined frequency, on acknowledgement (which is a response or reaction of the human operator to the audio signal) a non-invasive signal is acquired by the monitoring device, and based on the location of the source of the audio signal and its frequency, the computing device directs the robot to the specific location of the audio source. In an exemplary case, the audio sources have distinct frequencies, and the location and/or coordinates of each audio source may be mapped prior to operating the robot in the environment. In another exemplary case, the robot may be provided with an imaging device and a LiDAR to create a map in real-time and navigate to the location of the audio source.
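For illustration only, a minimal sketch of such a frequency-to-location mapping follows; the source names, frequencies, coordinates, and tolerance are hypothetical values assumed for the example, not values specified in the disclosure.

```python
# Hypothetical mapping of each audio source's distinct frequency (Hz)
# to its pre-defined coordinates in the environment map.
AUDIO_SOURCES = {
    37.0: {"name": "source_1", "xy": (2.5, 4.0)},
    43.0: {"name": "source_2", "xy": (8.0, 1.5)},
}

def locate_source(detected_freq_hz: float, tol_hz: float = 1.0):
    """Return the mapped location of the source whose distinct
    frequency is closest to the detected one, within tolerance."""
    freq = min(AUDIO_SOURCES, key=lambda f: abs(f - detected_freq_hz))
    if abs(freq - detected_freq_hz) > tol_hz:
        return None  # no known source matches the detected frequency
    return AUDIO_SOURCES[freq]

print(locate_source(37.2))  # -> source_1 at (2.5, 4.0)
```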
In an exemplary case, the in-vivo signal from the human operator, in reaction and/or response to the distinct audio frequency, is a non-invasive electroencephalography (EEG) signal. It should be obvious to a person skilled in the art that other forms of in-vivo signals may be picked up from the human operator in response to the audio signal from the localized audio source, depending on the configuration of the monitoring device, and all of these fall within the scope of the present disclosure. In an exemplary case, the in-vivo signal is picked up or obtained by the monitoring device in response to a reaction of the human operator, wherein the reaction of the human operator is to the audio signal of a pre-defined distinct frequency from the at least one localized audio source. In an exemplary case, the reaction of the human operator to the audio signal from the localized audio source may be raising an eyebrow, and this reaction (also referred to as a response in this disclosure) is picked up and/or gathered by the monitoring device. The reaction is picked up as a non-invasive signal by the monitoring device, and these signals are weak signals. It should also be obvious to a person of ordinary skill in the art that various other variations of the reaction to the audio signal from the localized audio source may be made by the human operator, and all such variations fall within the scope of the present disclosure.
In an exemplary case, the monitoring device has electrodes that are placed on the scalp of the human operator, and the monitoring device is in contact with the human operator. The monitoring device is configured to acquire the non-invasive signal from the human operator, which, as previously mentioned, is a signal of relatively very weak intensity. In an exemplary case, the non-invasive signal is based on the reaction/response of the human operator to the distinct pre-defined frequency audio signal from the audio source, and the audio signal arises from a localized audio source placed at a particular location within the given environment. In an exemplary case, the audio source which produces the audio signal (alarm) requires the attention of the human operator at the location of the audio source, and may also alert the human operator that attention may be required at the location of the audio source to perform a task or activity.
In an exemplary case, the monitoring device, after receiving the non-invasive signal, which is based on the human operator's reaction, amplifies the non-invasive signal to an amplified signal; this is performed by an amplifier coupled to the monitoring device, as the strength of the acquired non-invasive signal may be relatively very weak. In an exemplary case, once the acquired non-invasive signal is amplified at the monitoring device, the amplified signal may be transmitted to the computing system over a network, via a routing device coupled to the monitoring device. In an exemplary case, the computing system may be within the same given environment or located outside the given environment, but constantly interacting with the monitoring device and the robot. In an exemplary case, the network may include at least one of a wired means and/or a wireless means and/or a cloud and/or a combination thereof. In a preferred embodiment, the network is a communication system which may cover satellite communication (especially if the computing system is remotely located), infrared communication, broadcast radio, microwave communication, Wi-Fi, mobile communications, Bluetooth, NFC, etc. It should be obvious to a person of ordinary skill in the art that other wireless communications are available, with different technologies and aerial designs to support such communication, and all such technologies generally fall within the scope of the present disclosure. In a preferred embodiment, Bluetooth may be advantageously used within a given environment where the range of the application is limited or fixed. In another preferred embodiment, WiFi may be advantageously used if the range of operation is larger.
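As a non-authoritative illustration of transmitting the amplified signal over such a network, the sketch below sends framed samples as UDP datagrams over a Wi-Fi link; the host address, port, JSON payload format, and channel names are all assumptions made for the example.

```python
import json
import socket

# Hypothetical endpoint of the computing device on the plant Wi-Fi network.
COMPUTE_HOST, COMPUTE_PORT = "192.168.1.50", 9000

def send_amplified_frame(samples, channel_names):
    """Send one frame of amplified EEG samples as a JSON datagram."""
    payload = json.dumps({"channels": channel_names,
                          "samples": samples}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (COMPUTE_HOST, COMPUTE_PORT))

# Example: one frame of six-channel data
send_amplified_frame([[0.1, 0.2, 0.0, -0.1, 0.3, 0.05]],
                     ["C3", "Cz", "C4", "T7", "T8", "Oz"])
```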
In an example case, the robot may be provided with an imaging device and a LiDAR. In an example case, the imaging device and the LiDAR may be used to create a map of the area or environment. In an exemplary case, previously created maps stored in the robot and/or computing device may be updated as and when the robot is active and scanning the area for new obstacles introduced within the area or old obstacles removed from the area. In an exemplary case, if no map of the area has been created, the robot may create a map of the area and store the map for future use and updating. In an exemplary case, the environment may include a number of obstacles, which can be a plurality of stationary objects and/or obstacles and dynamically moving objects and/or obstacles. In an exemplary case, the robot, using the imaging device and the LiDAR, may create a map of the environment at a given instant of time and continuously update the map, which may include all such stationary obstacles and dynamic obstacles in the given environment. In an exemplary case, as the robot updates the positions of obstacles, the map of the environment may dynamically change depending on the positions of the objects/obstacles within the environment.
In an exemplary case, the robot may be provided with a map of the environment, and/or the computing system may alternatively have the map, which may be provided to the robot. In an exemplary case, the map may be stored in the robot, and if the map is stored on the computing device, the robot may access the map from the computing device. In an exemplary case, based on the audio signal received from the localized audio source, the computing system may compute the most optimal path to reach the location of the audio source, wherein the optimal path computed is an obstacle-free path to the location of the audio source. In an exemplary case, the optimal path may be the shortest and fastest path to the localized audio source avoiding all obstacles. In an exemplary case, the path computed is provided to the robot, and the robot is configured to navigate the environment to the localized audio source avoiding the objects and/or obstacles. In an exemplary case, as mentioned previously, the object and/or obstacle is at least one of a stationary object or a moving object and can be monitored by the robot periodically within the environment to create an updated map of the environment. In an exemplary case, the entire computing device may be placed within the robot, such that the monitoring device directly interacts with the robot over the network.
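A minimal sketch of such an obstacle-free shortest-path computation on a 2D occupancy grid, here using Dijkstra's algorithm (which the disclosure later names as one option for global path planning); the list-of-lists grid representation, 4-connected moves, and unit costs are assumptions of this example.

```python
import heapq

def dijkstra_path(grid, start, goal):
    """Shortest obstacle-free path on a 2D occupancy grid.
    grid[r][c] == 1 marks an obstacle; 4-connected moves of unit cost."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    if goal != start and goal not in prev:
        return None  # no obstacle-free path exists
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

grid = [[0, 0, 0],
        [1, 1, 0],   # a wall forces a detour
        [0, 0, 0]]
print(dijkstra_path(grid, (0, 0), (2, 0)))
```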
In an exemplary case, the monitoring device may acquire the non-invasive signal from a central region and/or a temporal region and/or an occipital region and/or a combination thereof on the head of the human operator, as the monitoring device is placed on the scalp of the human operator. In a preferred embodiment, the non-invasive signal may correspond to the motor, auditory, and visual cortices of the human operator's brain reacting to an alert, which is the distinct audio signal from the localized audio source in the environment. It should also be obvious to a person skilled in the art that the in-vivo (non-invasive) signal related to the distinct audio frequency may be picked up from regions of the human operator's brain other than those mentioned, depending on the capabilities and configuration of the monitoring device in use, and all such non-invasive signals picked up by the monitoring device in response to the distinct localized audio source using the methodology described herein fall within the scope of the present disclosure.
Reference is now made to Figure 1, which illustrates a monitoring device 100 (BCI) that may be placed on the scalp of a human operator, wherein the monitoring device acquires in-vivo signals from the human operator in response/reaction to an audio signal from a localized audio source placed within a given environment, in accordance with an embodiment of the present disclosure. As illustrated, monitoring device 100 includes a plurality of electrodes 107 set up on a cap-like structure, wherein the cap-like structure is flexible and can be easily placed on the scalp of a human operator. Monitoring device 100 is placed on the scalp/head of human operator 105 and generally covers the head portion of human operator 105. Each electrode 107 of the plurality of electrodes is coupled to wire 120, wherein wire 120 is a carrier of the non-invasive signal picked up by electrode 107 of monitoring device 100. The non-invasive signal picked up by electrode 107 is in response to a distinct audio signal, and the non-invasive signal from monitoring device 100 is sent to amplifier 130. The non-invasive signal from human operator 105 is generally a relatively very weak signal and is based on the reaction of human operator 105 to a distinct audio signal, when the audio signal of the distinct frequency is emitted from a localized audio source. Non-invasive signals from electrodes 107 are provided to amplifier 130 via the plurality of wires 120 coupled to each of the plurality of electrodes 107 of monitoring device 100.
The signals from human operator 105 in reaction to the localized audio signal of a distinct frequency are sent to amplifier 130. The non-invasive signal may, for example, be an EEG signal. The non-invasive signal is amplified at amplifier 130, and the amplified signal is then transmitted to a computing device (not shown in the figure) and/or a robot (not shown in the figure), wherein, in a preferred embodiment, the robot may be configured to perform the role of the computing device, or the entire computing device may be placed within the robot. The amplified signal is transmitted to the computing device (also referred to as the computing system interchangeably) by means of communication system 140, where communication system 140 is also coupled to monitoring device 100. Communication system 140 may be either a wired means of communication or a wireless means of communication or a combination thereof. In a preferred embodiment in accordance with the present disclosure, Bluetooth may be used as a preferred means of communication between monitoring device 100 and the computing device and/or the robot, depending on the range of operation. In an alternate preferred embodiment, WiFi may be used as a preferred means of communication between monitoring device 100 and the computing device and/or the robot, depending on the range of operation, wherein the range of WiFi is larger than the range of Bluetooth.
Reference is now made to Figure 2, which is an exemplary setup illustrating the human-robot interaction that can be used in a given environment, in accordance with an embodiment of the present disclosure. Embodiments of monitoring device 210 as illustrated in Figure 2 have been described in detail with respect to monitoring device 100 of Figure 1 and will not be described again. Monitoring device 210 is placed on the scalp of human operator 205 and is configured to pick up non-invasive signals from human operator 205, where the non-invasive signals arise in response to human operator 205's reaction to an audio signal having a definite/distinct frequency, the audio signal arising from an audio source (not shown in the figure) placed at a pre-defined location within the given environment. Monitoring device 210 has a plurality of electrodes 207, is generally flexible in nature, and takes the shape of the scalp of the human operator. In an exemplary case, an OpenBCI device may be used as the monitoring device, where the number of electrodes 207 on the scalp area may vary. In general, a 10-20 configuration may be used, where the numbers “10” and “20” refer to the distances between adjacent electrodes, which are either 10% or 20% of the total distance (front-back or right-left) of the skull. It should be obvious to a person of ordinary skill in the art that various other configurations of the electrodes may be placed within the distance depending on the application and the sensitivity of the monitoring device, and all such variations of the configurations for the monitoring device fall within the scope of the present disclosure.
Non-invasive signals are picked up by at least one of the plurality of electrodes 207 of monitoring device 210. The non-invasive signals are a reaction of human operator 205 in response to the distinct frequency heard within the given environment. Non-invasive signals from electrodes 207 are relatively weak signals and need to be amplified before processing. The acquired non-invasive signals are transmitted to amplifier 230 via wires 220 from electrodes 207. Amplifier 230 amplifies the non-invasive signal, and the amplified signal is transmitted to communication system 240, which is coupled with monitoring device 210. The amplified signal from human operator 205 is then transmitted to computing system 260 and/or robot 270 over network 250. As disclosed previously, network 250 may be wired and/or wireless and/or a combination thereof, and may also include cloud-based services and the like. Amplified signals from monitoring device 210 are sent over network 250 to computing device 260, where computing device 260 is a processing system including at least a memory and a processor and, additionally, hardware elements and software elements. Computing device 260 is operated by the various software elements and hardware elements that power and run the device. Computing device 260 interacts with monitoring device 210 and is configured to process the amplified signals received from monitoring device 210. Computing device 260 is also interfaced with robot 270, which is configured to receive commands from computing device 260 and perform tasks as assigned by a user and/or alert a user and/or report back to a user on a specific condition or requirement at the location of the audio source.
In an alternate case, robot 270 may be directly coupled with monitoring device 210, wherein robot 270 may be configured to directly process the data, i.e., the audio signal, received from monitoring device 210 and take necessary action based on the signals received, essentially also performing the role of computing device 260. Essentially, all functions associated with computing device 260 may be built into robot 270, in which case a separate computing device is not required, and robot 270, in addition to performing its own role and function, will perform the function of computing device 260; in this case, monitoring device 210 is directly coupled to robot 270 over network 250.
Robot 270 also has an imaging device, for example a camera, and/or a LiDAR/RADAR, which is configured to create a map of the environment, wherein the map includes all obstacles and/or objects within the given environment. The map of the environment created by robot 270 may be used by robot 270 to navigate around the given environment and arrive at the destination, i.e., the location of the audio source from which the distinct frequency is received by human operator 205, by choosing the most optimal path to the location. The most optimal path includes avoiding any objects and/or obstacles in the path of robot 270 to the location of the audio source. The map of the environment may be stored in computing device 260 and/or robot 270 and may be dynamically updated as robot 270 monitors the environment and moves around the environment. It should be obvious that Figure 2 is only exemplary in nature, and various other modifications and variations of the exemplary case disclosed herein may be possible; all such modifications and variations fall within the scope of the present disclosure.
Reference is now made to Figure 3A, which is an exemplary pictorial representation of an environment 300A wherein the human-robot interaction via a brain computer interface is illustrated with two localized audio sources in the environment, in accordance with an embodiment of the present disclosure. In environment 300A, the human operator is coupled to computing device 310. Two distinct audio sources, first audio source 332 and second audio source 334, are located within environment 300A at distinct locations; the distinct locations may be pre-defined or pre-determined fixed locations. Each of the audio sources, first audio source 332 and second audio source 334, is associated with a distinct audio frequency, such that the audio source may be easily located in the environment. Environment 300A may also include a plurality of static objects 342, 343, 344, 345 and a plurality of dynamic objects 341, 346, wherein dynamic objects 341, 346 may be moving within environment 300A.
Robot 320 may be configured to create a map of environment 300A, wherein the locations of static objects 342, 343, 344, 345 and the locations of dynamic objects 341, 346 are dynamically updated by robot 320 periodically and may be stored in the computing device and/or the memory of robot 320. It should be obvious to a person of ordinary skill in the art that various kinds of robots may be used in the embodiments and implementation of the present disclosure.
First audio source 332 and second audio source 334 may be at specific locations where attention may be required in environment 300A. Whenever attention is required at first audio source 332 and/or second audio source 334, the audio source at that location in the environment emits its distinct frequency, which is picked up by the human operator, and the robot is sent to the location of the audio source to either perform a particular task and/or attend to any requirements at the audio source.
Reference is now made to Figure 3B, which is an exemplary pictorial representation of an environment wherein the human-robot interaction via a BCI is illustrated with multiple localized audio sources in the environment, in accordance with an embodiment of the present disclosure. The description of the elements is the same as that associated with Figure 3A. In environment 300B, the human operator is coupled to computing device 310. Three distinct audio sources, first audio source 332, second audio source 334, and third audio source 336, are located within environment 300B at distinct locations, only for the purpose of illustrating that multiple audio sources may be present in the given environment. Each of the audio sources, first audio source 332, second audio source 334, and third audio source 336, is associated with one of three distinct audio frequencies, such that the audio source may be easily located in the environment. Environment 300B may also include a plurality of static objects 342, 343, 344, 345 and a plurality of dynamic objects 341, 346, wherein dynamic objects 341, 346 may be moving within environment 300B. It should be obvious to a person of ordinary skill in the art that more than three audio sources may be placed at strategic locations within the environment, and these multiple audio sources each have a distinct frequency by which they may be identified.
Robot 320 may be configured to create a map of environment 300B, wherein the locations of static objects 342, 343, 344, 345 and the locations of dynamic objects 341, 346 are dynamically updated by robot 320 periodically and may be stored in the computing device and/or the memory of robot 320.
The multiple audio sources are placed at specific and strategic locations where attention may be required in environment 300B. Whenever attention is required at any of the multiple audio sources, that audio source emits an audio signal at its distinct frequency requiring the attention of the human operator in the environment; the signal is picked up by the human operator, and the robot is sent to the location of the audio source to either perform a particular task and/or attend to any requirements at the location of the audio source. If multiple audio sources are emitting their distinct frequencies, then, depending on the receipt of the audio signals, the tasks for the robot may be put in a queue; for example, a first-in-first-out order may be followed, or the locations may be prioritized. In an alternate embodiment, each of the locations may also be provided with an emergency frequency, where, if two or more audio sources are calling for attention and one has an emergency situation, the location emitting the emergency frequency can divert the robot and/or cause the computing system to reassign priorities and assign the robot to address the emergency situation first.
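A minimal sketch of the queueing behaviour described above, assuming a two-level priority scheme (emergency before normal) with first-in-first-out ordering within each level; the source identifiers are hypothetical.

```python
import heapq
import itertools

EMERGENCY, NORMAL = 0, 1       # lower value = served first
_counter = itertools.count()   # FIFO tie-break within a priority level

task_queue = []

def alarm_received(source_id, emergency=False):
    """Queue an attention request; emergencies preempt the FIFO order."""
    priority = EMERGENCY if emergency else NORMAL
    heapq.heappush(task_queue, (priority, next(_counter), source_id))

def next_task():
    """Pop the highest-priority (then oldest) pending location."""
    return heapq.heappop(task_queue)[2] if task_queue else None

alarm_received("source_2")
alarm_received("source_1", emergency=True)
assert next_task() == "source_1"  # emergency served before the earlier alarm
```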
Figure 3C is an exemplary pictorial representation of an environment 300C wherein the human-robot interaction via a BCI is illustrated with localized audio sources in the environment, wherein the robot is sent to the localized audio source after determining the location of the localized audio source, in accordance with an embodiment of the present disclosure. As illustrated in Figure 3C, all elements are like those of Figure 3A, and hence reference is made to the description of Figure 3A for the elements described herein. As illustrated, Figure 3C indicates two audio sources, a first audio source 332 located at a first location and a second audio source 334 located at a second location within the environment.
The human operator, coupled to computing device 310 and wearing the monitoring device, picks up a distinct audio signal from the location of second audio source 334, which matches the distinct frequency of second audio source 334. The monitoring device placed on the scalp of the human operator picks up a non-invasive signal from the human operator, where the non-invasive signal is a response of the human operator to the distinct sound frequency of second audio source 334. The human operator has a reaction when the distinct audio signal is heard, and this reaction is treated as a response, which is a non-invasive signal picked up by the electrodes of the monitoring device. After the monitoring device picks up the signal of the human operator in response to the distinct frequency, the signal is amplified and then transmitted to the computing device and/or directly to the robot via a communication system.
On detection of the signal by the human operator, and on the computing device being intimated about the received audio signal, the computing device is configured to compute the most optimal path, for example the shortest and/or fastest path, to second audio source 334 and to instruct robot 320 to proceed to second audio source 334 using the optimal path, which will be the path of least resistance.
In an exemplary case, second audio source 334 emits the distinct frequency to alert the human operator that a particular task needs to be performed/accomplished at second audio source 334, or to draw the attention of the human operator to any untoward incident that may be occurring at second audio source 334. In the exemplary case, the second audio source is only illustrative in nature, and it should be obvious to a person of ordinary skill in the art that the same can be performed at the first audio source; this is part of the variations that are included in the embodiments of the present disclosure. In fact, when both audio sources emit their distinct frequencies, the human operator may pick up both and, in that case, may prioritize which location to handle first and then proceed to the next location, depending on several factors associated with the locations of the audio sources. In an alternate embodiment, if the first audio source emits an emergency frequency, then the robot may be configured to abandon or leave incomplete the task at the second audio source, attend to the emergency situation at the first audio source, and then move back to complete the task at the second audio source.
Reference is now made to Figure 3D, which is an exemplary pictorial representation of an environment 300D wherein the human-robot interaction via a BCI is illustrated with multiple localized audio sources in the environment, wherein the robot is sent to the localized audio source after determining the location of the localized audio source, in accordance with an embodiment of the present disclosure. As compared to the illustration of Figure 3C, in the illustration of Figure 3D there could be multiple audio sources, a first source 332, a second source 334, a third source 336, etc., where each audio source is identified by a particular specific and strategic location within the environment. As an exemplary case, only three audio sources are illustrated, but in principle there could be multiple audio sources, each identified by a distinct frequency and by a location, thereby localizing the audio signal to a particular location in the environment. Based on the audio signal received from the localized audio source, i.e., third source 336, the computing device may compute multiple paths for robot 320 to reach the location and then determine the most optimal path from amongst the paths determined, or present the paths as a ranked list to the human operator, who can then assign the most optimal path to robot 320 to reach the location of third audio source 336. All other embodiments remain similar to those described previously.
As disclosed previously, if there are two or more audio sources calling for attention, in an exemplary case the human operator may prioritize the locations to be attended. In an alternate exemplary case, when there are multiple audio sources and one of them emits an emergency frequency, the system and/or the human operator may prioritize the robot to attend to the location of the emergency frequency first and then to the other audio sources. In an alternate exemplary case, where multiple audio sources are emitting the audio signal, a priority queue may be created based on the reception or activation of the audio signals from the sources.
Reference is now made to Figure 4, which is an exemplary embodiment of a method for a human robot interaction using a BCI within a given environment for performing a task, in accordance with an embodiment of the present disclosure. As illustrated in Figure 4, in a first step, data is acquired from the human operator, wherein the data is a non-invasive signal in response to a distinct audio frequency that can be picked up by the human operator. The acquired data is a relatively weak brain signal or neuro signal, which is a response to the distinct audio frequency emitted from an audio source placed within a given environment. Essentially, the source at a particular location, i.e., a localized sound/audio source, is configured to emit a distinct frequency when a task needs to be performed and/or the attention of a human operator is required at the location. The non-invasive signal is then amplified and, after amplification, transmitted by a wired means and/or a wireless means and/or a combination thereof.
In step 420, the amplified signal from the human operator is transmitted to a computing system and/or a robot, which may be coupled to the human operator. In an exemplary embodiment, the robot itself may be the computing system, wherein elements associated with the computing system may be built into the robot. In step 430, the signal is processed at the computing unit (system/device). The computing device is configured to determine the location of the audio source from the distinct frequency received by the human operator, and also to gather information regarding the environment in real-time to identify objects and/or obstacles in the environment. The objects/obstacles may be static or dynamic objects.
In step 440, the computing device is configured to determine the location of the audio source based on the distinct frequency of the audio signal received. In step 450, once the audio source is identified, i.e., the location of the audio source or the localized audio source is identified, the computing device is configured to compute multiple paths and determine the optimal path, for example the shortest and/or the fastest path, to reach the location of the audio source requiring attention. The computing device may choose the most optimal path for the robot. In an example case, a planner algorithm automatically finds the most optimal path depending on the location of the audio source and the obstacles in the given environment. In an example case, a list of paths may be presented as a ranked list to the human operator, and the human operator may select a preferred path from the list. In step 460, once the path is selected/assigned, the robot is provided with instructions on the location of the audio source and the path to be taken, with alternate paths if required. In step 470, the robot is sent to the location to perform the assigned task or to send back information regarding the happenings at the location. In an exemplary case, the computing system is continuously provided with inputs from the robot on the obstacles, and can dynamically reassign a path for the robot to reach the location of the audio source if the first path develops some unexpected blockage.
In an exemplary case, a human computer interface (human Brain Robot Interface (BRI)) architecture uses a bi-modal Brain-Computer Interface (BCI) framework to prioritize one among multiple continuously ringing distinct sound sources, such as alarms and/or sirens, and to localize and navigate, by motion planning, an autonomous assistive robot to attend to the sound (audio) source in an environment, for example an industrial or manufacturing environment. In an exemplary case, the BRI introduces a sensor fusion technique that incorporates human motion responses using Motor Imagery (MI) and intended sound directions using the Auditory Steady State Response (ASSR) with a mobile robot in audio-aware environments. In an exemplary case, a Common Spatial Pattern (CSP)-based Support Vector Machine (SVM) classification algorithm may be utilized so that the direction of the intended sound source is localized from the human electroencephalography (EEG) signals acquired by electrodes mounted over the temporal, occipital, and parietal cortices, and the approach may not be limited to this technique. It should be obvious to a person of ordinary skill in the art that various other classifications and signals from other regions may be used, and all such classifications and regions of obtaining the signals fall within the scope of the present disclosure. In an exemplary case, perceived acoustic information through EEG signals may be mapped to a location in the map of the physical space defining the environment, and a path planner may generate a navigational path from the robot's current position to the estimated position of the sound source, supporting its autonomy in movement.
In an exemplary case, auditory BCI may be considered an intuitive and non-voluntary method of decision making to attend to warnings or sirens in an environment. In an exemplary case, to overcome the limitations of conventional BCI paradigms, auditory stimuli may be used as an alternative to visual stimuli. In an exemplary case, the feasibility of using auditory steady-state responses (ASSRs), elicited by selective attention to a specific sound source, as an electroencephalography (EEG)-based BCI paradigm is conceived. In an exemplary case, in addition to ASSR-based BCI, the Motor Imagery (MI) BCI technique may be used to extract the human intention, to enhance the BCI paradigm. In an exemplary case, motor imagery is defined as the cognitive process of imagining the movement of a body part without actually moving that body part. In an exemplary case, different types of MI may stimulate corresponding event-related synchronization/desynchronization (ERS/ERD) in the β and µ rhythms.
In an exemplary case, in real-world scenarios with EEG signals, which have low spatial resolution and provide a noisy overview of the ongoing brain activity, decoder design and training of the system are critical, and in some cases feature extraction and pattern classification may also be useful. In an exemplary case, features of EEG signals may be expressed in the frequency domain, time domain, and spatial domain. In an exemplary case, commonly used algorithms for EEG feature extraction include common spatial patterns, the power spectrum, and independent component analysis. In some exemplary cases, machine learning algorithms such as linear discriminant analysis, support vector machines, and learning vector quantization may be used in the BCI system for EEG decoders.
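As an illustrative sketch of such a classification stage, the following trains an SVM on per-trial feature vectors using scikit-learn; the random placeholder data (standing in for real CSP/PLV features), the linear kernel, the regularization constant, and the cross-validation settings are all assumptions of this example, not values from the disclosure.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# X: one feature vector per trial; y: 0 = left-hand MI, 1 = right-hand MI.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))          # placeholder features for illustration
y = rng.integers(0, 2, size=120)       # placeholder labels

clf = SVC(kernel="linear", C=1.0)      # assumed kernel/regularization
scores = cross_val_score(clf, X, y, cv=5)
print("CV accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```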
In an exemplary case, as discussed previously, the brain-actuated system in accordance with the present disclosure includes a human operator, a central computer with an OpenBCI device, and a mobile robot, wherein the mobile robot has an imaging device and a LiDAR, all connected through a network. In an exemplary case, the RGBD camera (whose image size is 640×480 at 30 fps) and the 2D LiDAR sensor (scanning range of 0.2 m to 16 m with an 8 kHz sampling rate) perceive the environment and build an Occupancy Grid Map of the environment with all static and dynamic objects/obstacles in the environment. It should be obvious to a person of ordinary skill in the art that other imaging forms, for example high-resolution imaging devices and LiDAR/RADAR sensors, may be provided to the robot to perceive the environment and create the grid map of the environment, and all such devices and variations fall within the scope of the principles of operation of the present disclosure.
In an exemplary case, when one or more audio sources are triggered, the human operator focuses on the audio source and performs a motor-intended motion with either his/her left hand or right hand, and a sensor fusion algorithm as discussed in the present disclosure interprets the EEG signals to estimate the direction of the audio source intended by the human operator. In an exemplary case, the estimated direction information is sent to the BRI system, in which the robot localizes the attention-specific audio source and navigates to the goal position in the map as disclosed previously, wherein the robot may be maneuvering through obstacles in the environment.
In an exemplary case, a supervisor wearing the OpenBCI cap may be located at a particular location in an environment that may be surrounded by machines. In the exemplary case, the OpenBCI electrode cap with 19 channels was used to record brain signals of the operators to obtain the intention of the human. In an exemplary case, the sampling frequency of the EEG signals was 256 Hz. In an exemplary case, a notch filter (50 Hz) and a bandpass filter (1-40 Hz) may be employed to attenuate baseline noise and band-limit the EEG signals. In an exemplary case, the electrode impedance influences the Signal to Noise Ratio (SNR) of the EEG signals and is set preferably below 5 kΩ. In an exemplary case, the channel FPZ is regarded as the reference signal, and the main electrodes (C3, Cz, C4, T7, T8, Oz), which represent the motor, auditory, and visual cortical areas, are used to monitor data.
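A minimal sketch of the preprocessing chain just described (50 Hz notch, 1-40 Hz band-pass, 256 Hz sampling) using SciPy; the Butterworth filter order and the notch quality factor are assumed values not specified in the disclosure.

```python
import numpy as np
from scipy.signal import iirnotch, butter, filtfilt

FS = 256.0  # sampling frequency of the EEG signals (Hz), per the text

def preprocess_eeg(raw, fs=FS):
    """Apply a 50 Hz notch and a 1-40 Hz band-pass to EEG data
    of shape (channels, samples)."""
    b_notch, a_notch = iirnotch(w0=50.0, Q=30.0, fs=fs)   # Q is assumed
    x = filtfilt(b_notch, a_notch, raw, axis=-1)
    b_bp, a_bp = butter(4, [1.0, 40.0], btype="bandpass", fs=fs)
    return filtfilt(b_bp, a_bp, x, axis=-1)

# Example: one 20 s trial across the six main electrodes
trial = np.random.randn(6, int(20 * FS))
clean = preprocess_eeg(trial)
```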
In an exemplary case, ASSR may be a technique related to an electrical response elicited from the brain when a human operator is hearing periodic amplitude-modulated sinusoidal tones or click sounds. In an exemplary case, the ASSR shows an increased spectral density around the modulation frequency of the sound stream. In an exemplary case, the optimal modulation frequency may be found to range from 30 Hz to 50 Hz, peaking around 40 Hz. In an exemplary case, to obtain a sufficient signal-to-noise ratio (SNR) of the ASSR, two frequencies around the 40 Hz range were chosen as the modulation frequencies, a first frequency at 37 Hz and a second frequency at 43 Hz. In an exemplary case, the carrier frequencies of the two auditory stimuli were set to 2.5 kHz and 1 kHz, respectively, so that the subjects could easily distinguish each sound stream. In an exemplary case, the pulse widths of the 37 Hz and 43 Hz pure-tone pulses were found to be about 13.5 ms and 11.6 ms, respectively, using theoretical simulation. In an exemplary case, the duration of each trial was 20 s.
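For illustration, a sketch generating the two amplitude-modulated stimuli described above (2.5 kHz carrier modulated at 37 Hz, 1 kHz carrier modulated at 43 Hz, 20 s trials); the audio sampling rate and full modulation depth are assumptions of this example.

```python
import numpy as np

FS_AUDIO = 44100  # audio sampling rate (Hz), an assumed value

def am_tone(carrier_hz, mod_hz, duration_s, fs=FS_AUDIO):
    """Amplitude-modulated sinusoidal tone: a carrier whose envelope
    oscillates at the modulation frequency mod_hz."""
    t = np.arange(int(duration_s * fs)) / fs
    envelope = 0.5 * (1.0 + np.sin(2 * np.pi * mod_hz * t))
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

# The two stimuli from the text: each stream distinguishable by its
# carrier, each tagged by its distinct modulation frequency.
stim_a = am_tone(2500.0, 37.0, 20.0)
stim_b = am_tone(1000.0, 43.0, 20.0)
```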
In an exemplary case, four electrodes (FPZ, C3, C4, GND) were utilized to extract motion intention from the human operator. In the exemplary case, the EEG signals generated by the regular activities in the sensorimotor cortex may be identified clearly as a sensorimotor rhythm (SMR) at frequencies between 12.5 and 15.5 Hz. In the exemplary case, the SMR signal may decrease drastically when the sensorimotor area is activated, which is referred to as Event-Related Desynchronization (ERD). In an exemplary case, after the activation in the sensorimotor cortex is finished, the SMR may return to the normal stage, which is called Event-Related Synchronization (ERS). In the exemplary case, detection of the SMR may generally be difficult in the presence of the strong α rhythm (8-15 Hz) of the brain signals.
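A sketch of how the SMR band power could be monitored for an ERD-style drop, assuming Welch's method and a resting baseline for comparison; the window length and the random data standing in for recorded EEG are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

def smr_band_power(eeg_channel, fs=256.0, band=(12.5, 15.5)):
    """Mean power in the sensorimotor-rhythm band; a drop relative to a
    resting baseline suggests sensorimotor activation (ERD)."""
    freqs, psd = welch(eeg_channel, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Example: compare a task window against a resting baseline
baseline = smr_band_power(np.random.randn(int(20 * 256)))
task = smr_band_power(np.random.randn(int(20 * 256)))
activated = task < baseline  # ERD-style decrease in SMR power
```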
In an exemplary case, a Common Spatial Pattern (CSP) may be used for the feature extraction process of the EEG signals. In an exemplary case, let D denote a collection of samples in a sampling period; the number of EEG recording channels M and the number of samples per period N determine the size of D, which is an M x N matrix. The covariance matrix of the samples is denoted as H, which leads to

$$H_L(i) = D_L(i) \, D_L^T(i)$$
$$H_R(i) = D_R(i) \, D_R^T(i)$$

where i represents the i-th sampling period, and the subscripts L and R represent the data of imagination of the left and right hand, respectively. The average covariance matrices are

$$\bar{H}_L = \frac{1}{N} \sum_{i=1}^{N} H_L(i), \qquad \bar{H}_R = \frac{1}{N} \sum_{i=1}^{N} H_R(i)$$

After computing the united matrix $\bar{H}$ of the average covariance matrices, $\bar{H} = \bar{H}_L + \bar{H}_R$, the singular value decomposition of $\bar{H}$ may be given by

$$\bar{H} = U \Lambda U^T$$
$$P = \Lambda^{-1/2} U^T$$

Through the whitening matrix P and the common eigenvector matrix V of the whitened class covariances, the feature matrix F is expressed by $F = V^T P D$.
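The CSP computation above may be sketched as follows; the trace normalisation of the covariances and the eigenvalue sorting are common implementation choices assumed here, and the symmetric eigendecomposition stands in for the SVD since $\bar{H}$ is symmetric positive semi-definite:

```python
import numpy as np

def csp_projection(trials_L, trials_R):
    """CSP projection following the equations above.

    trials_L, trials_R: lists of M x N arrays (channels x samples)
    for left- and right-hand motor imagery.
    """
    def avg_cov(trials):
        # H(i) = D(i) D(i)^T; trace normalisation is an assumed extra step
        covs = [D @ D.T / np.trace(D @ D.T) for D in trials]
        return np.mean(covs, axis=0)

    H_L, H_R = avg_cov(trials_L), avg_cov(trials_R)
    H = H_L + H_R                      # united matrix H-bar

    # H = U Lambda U^T; whitening matrix P = Lambda^{-1/2} U^T
    lam, U = np.linalg.eigh(H)
    P = np.diag(lam ** -0.5) @ U.T

    # The whitened class covariances share the eigenvector matrix V
    w, V = np.linalg.eigh(P @ H_L @ P.T)
    V = V[:, np.argsort(w)[::-1]]      # sort by discriminative power

    return V.T @ P                     # apply to a trial: F = V^T P D

# Example with random 6-channel, 256-sample trials
rng = np.random.default_rng(0)
W = csp_projection([rng.standard_normal((6, 256)) for _ in range(10)],
                   [rng.standard_normal((6, 256)) for _ in range(10)])
F = W @ rng.standard_normal((6, 256))  # feature matrix for a new trial
```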
In the exemplary case, the phase locking value (PLV) is used as the feature value. In the exemplary case, the PLV outperforms the other tested methods, namely the discrete Fourier transform (DFT) and coherence analysis. In an exemplary case, the PLV quantifies the phase coupling between two signals recorded at different electrodes. In the exemplary case, when two EEG signals are coupled, a hypothesis may be drawn that they originate from the same brain source or that they are related to the same perceptual or cognitive process. In the case of very noisy multichannel recordings such as EEG, a strong signal distributed among many electrodes may be expected to result in multiple phase couplings. In an exemplary case, unlike amplitude-based methods such as coherence analysis, the PLV may detect a signal of very small amplitude, like the ASSR, buried in a very low-SNR recording environment such as EEG captured from the human scalp.
In an exemplary case, PLV may be obtained as follows:
Band-pass filtering is performed on the given signal $x_a(t)$, which is recorded at the a-th electrode. In the exemplary case, the filter band is set from $(f - 2)$ Hz to $(f + 2)$ Hz, where f is the modulation frequency of interest. In an exemplary case, the instantaneous phase of the signal is computed by the following formula:

$$\phi_a(t) = \arctan\left(\frac{\tilde{x}_a(t)}{x_a(t)}\right)$$

where $\tilde{x}_a(t)$ is the Hilbert-transformed version of $x_a(t)$. In the exemplary case, the difference between the instantaneous phases of two signals $x_a(t)$ and $x_b(t)$ is defined as $\Delta\phi_{a,b}(t)$ and is obtained by $\Delta\phi_{a,b}(t) = \phi_a(t) - \phi_b(t)$. In the exemplary case, the PLV is quantified by averaging the unit vector $\exp(j\,\Delta\phi_{a,b}(t))$ over time:

$$PLV_{a,b} = \left| \frac{1}{L} \sum_{l=1}^{L} \exp\!\left(j \, \Delta\phi_{a,b}(l)\right) \right|$$

where L is the length of the analysed signal window. In this case, L is equal to the stimulus length.
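A minimal sketch of the PLV computation above, assuming SciPy's Hilbert transform for the analytic signal (whose angle equals the arctan expression above); the filter order is an illustrative choice:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x_a, x_b, f_mod, fs=256):
    """PLV between two channels around modulation frequency f_mod,
    following the steps above (4th-order filter is illustrative)."""
    # Band-pass both channels from (f_mod - 2) Hz to (f_mod + 2) Hz
    b, a = butter(4, [f_mod - 2.0, f_mod + 2.0], btype="bandpass", fs=fs)
    xa, xb = filtfilt(b, a, x_a), filtfilt(b, a, x_b)

    # Instantaneous phase from the analytic (Hilbert) signal
    dphi = np.angle(hilbert(xa)) - np.angle(hilbert(xb))

    # |mean over the window of the unit vector exp(j * dphi)|
    return np.abs(np.mean(np.exp(1j * dphi)))

# Example: PLV around the 37 Hz stimulus over one 20 s trial
fs = 256
t = np.arange(20 * fs) / fs
sig = np.sin(2 * np.pi * 37.0 * t)
print(plv(sig + 0.5 * np.random.randn(t.size), sig, f_mod=37.0))
```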
In an exemplary case, the robot uses LiDAR Inertial SLAM (Gmapping) and Particle Filter-based Adaptive Monte Carlo Localization (AMCL), respectively, to create a 2D map and localize itself in the environment, and uses Dijkstra's algorithm to generate a collision-free global path from the robot's current state to the goal state. In another exemplary case, the Dynamic Window Approach (DWA) produces the velocity commands sent to the mobile base. Several other approaches may be used, and it should be obvious to a person of ordinary skill in the art that all such approaches fall within the scope of the present disclosure.
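For illustration only, assuming a ROS 1 navigation stack (consistent with the Gmapping, AMCL, Dijkstra and DWA components named above, though the disclosure does not specify the middleware), the estimated audio-source coordinate could be dispatched to move_base as follows:

```python
#!/usr/bin/env python
# Sketch: dispatch the localized audio-source coordinate to move_base,
# which wraps a Dijkstra-based global planner and the DWA local planner;
# the topic and frame names are conventional ROS 1 defaults.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def send_goal(x, y):
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"   # AMCL localizes in this frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0  # arbitrary final heading

    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()

if __name__ == "__main__":
    rospy.init_node("audio_goal_sender")
    send_goal(2.0, 1.5)  # example coordinate of the attended audio source
```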
Although the present disclosure has been described with reference to several preferred embodiments, it should be understood that the present disclosure is not limited to the preferred embodiments disclosed here. Embodiments of the present disclosure are intended to cover various modifications and equivalent arrangements within the spirit and scope of the appended claims. Although the foregoing disclosure has been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications may be practised within the scope of the appended claims. Examples of the present disclosure have been described in language specific to structural features and/or methods. It should be noted that there are many alternative ways of implementing both the process and apparatus of the present invention. Accordingly, embodiments of the present disclosure are to be considered illustrative and not restrictive, and the invention is not to be limited to the details given herein but may be modified within the scope and equivalents of the appended claims. It should be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed and explained as examples of the present disclosure.
Claims: We Claim:
1. An environment 200 having a brain-computer interface for audio source localization to attend tasks via human-robot interaction, the environment comprising:
- a human operator 205, the human operator 205 provided with a monitoring device 210;
- the monitoring device 210 placed on the scalp of the human operator 205, wherein the monitoring device is configured to acquire an in-vivo signal from the human operator 205, the in-vivo signal being a reaction by the human operator 205 to at least one audio signal from a localized audio source within the environment 200;
- the monitoring device 210 coupled to a computing device 260, the monitoring device 210 configured to transmit the in-vivo signal from the human operator 205 to the computing device 260;
- the computing device 260 configured to determine a coordinate of the audio source and direct a robot 270 to the audio source emitting the localized audio signal.
2. The environment as claimed in claim 1, wherein the audio signal received by the human operator 205 as input is from at least one of a plurality of audio sources 332, 334, 336.
3. The environment as claimed in claim 2, wherein each of the plurality of audio sources is configured to emit a pre-defined distinct frequency and/or an emergency frequency.
4. The environment as claimed in claim 1, wherein the in-vivo signal from the human operator 205 is a non-invasive electro-encephalography (EEG) signal.
5. The environment as claimed in claim 1, wherein electrodes 207 on the monitoring device 210 are in contact with the scalp of the human operator 205, and the electrodes 207 are configured to acquire the non-invasive signal.
6. The environment as claimed in claim 5, wherein the non-invasive signal is based on the reaction of the human operator 205 to the pre-defined distinct frequency from the audio source.
7. The environment as claimed in claim 1, wherein the monitoring device 210 is configured to convert the non-invasive signal to an amplified signal by means of an amplifier 230 coupled to the monitoring device.
8. The environment as claimed in claim 7, wherein the amplified signal is transmitted to the computing system 260 over a network 250.
9. The environment as claimed in claim 8, wherein the network 250 comprises at least one of a wired means, a wireless means, a cloud, or a combination thereof.
10. The environment as claimed in claim 1, wherein the robot 270 is provided with an imaging device and a LiDAR.
11. The environment as claimed in claim 10, wherein the robot is configured to create a map of the environment containing objects and/or obstacles.
12. The environment as claimed in claim 11, wherein the robot is configured to navigate the environment to the localized audio source avoiding the objects and/or obstacles to a specified location.
13. The environment as claimed in claim 11, wherein the object and/or obstacle is at least one of a stationary object or a moving object.
14. The environment as claimed in claim 1, wherein the monitoring device is configured to acquire the in-vivo signal from at least one of a central region, a temporal region, and an occipital region of the head, wherein the central region is associated with motor cortices, the temporal region with auditory cortices, and the occipital region with visual cortices.
15. The environment as claimed in claim 1, wherein, based on processing of the in-vivo signal, the computing device is configured to instruct the robot to move to a position of the attended alert.
16. The environment as claimed in claim 1, wherein, on receiving multiple audio signals as input, the computing system and/or the human operator is configured to prioritize the location of the audio source to be attended.
17. The environment as claimed in claim 3, wherein on detection of the emergency frequency, the computing system and/or the operator assigns the robot to attend the location of the emergency frequency.
18. The environment as claimed in claim 1, wherein a path planner computes the optimal path for the robot, wherein the optimal path is the shortest path to the location of the audio source avoiding all objects and/or obstacles.
19. The environment as claimed in claim 15, wherein the robot is configured to perform a task based on the attended alert.
20. The environment as claimed in claim 1, wherein the computing system is integrated into the robot.
21. A system comprising a human operator 205 provided with a brain-computer interface 100 placed on the scalp of the human operator, and at least one robot, configured to operate as claimed in any one of claims 1 to 20.

Dated this 20th day of December 2023 Indian Institute of Science
By their Agent & Attorney

Dr. Eric W B Dias
Reg No IN/PA- 1058
of Khaitan & Co

Documents

Application Documents

# Name Date
1 202341087196-STATEMENT OF UNDERTAKING (FORM 3) [20-12-2023(online)].pdf 2023-12-20
2 202341087196-PROOF OF RIGHT [20-12-2023(online)].pdf 2023-12-20
3 202341087196-POWER OF AUTHORITY [20-12-2023(online)].pdf 2023-12-20
4 202341087196-FORM FOR SMALL ENTITY(FORM-28) [20-12-2023(online)].pdf 2023-12-20
5 202341087196-FORM 1 [20-12-2023(online)].pdf 2023-12-20
6 202341087196-FIGURE OF ABSTRACT [20-12-2023(online)].pdf 2023-12-20
7 202341087196-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [20-12-2023(online)].pdf 2023-12-20
8 202341087196-EVIDENCE FOR REGISTRATION UNDER SSI [20-12-2023(online)].pdf 2023-12-20
9 202341087196-EDUCATIONAL INSTITUTION(S) [20-12-2023(online)].pdf 2023-12-20
10 202341087196-DRAWINGS [20-12-2023(online)].pdf 2023-12-20
11 202341087196-DECLARATION OF INVENTORSHIP (FORM 5) [20-12-2023(online)].pdf 2023-12-20
12 202341087196-COMPLETE SPECIFICATION [20-12-2023(online)].pdf 2023-12-20
13 202341087196-FORM-9 [21-12-2023(online)].pdf 2023-12-21
14 202341087196-FORM-8 [21-12-2023(online)].pdf 2023-12-21
15 202341087196-FORM 18A [22-12-2023(online)].pdf 2023-12-22
16 202341087196-EVIDENCE OF ELIGIBILTY RULE 24C1f [22-12-2023(online)].pdf 2023-12-22
17 202341087196-FER.pdf 2024-01-31
18 202341087196-RELEVANT DOCUMENTS [16-05-2024(online)].pdf 2024-05-16
19 202341087196-POA [16-05-2024(online)].pdf 2024-05-16
20 202341087196-FORM 13 [16-05-2024(online)].pdf 2024-05-16
21 202341087196-OTHERS [27-06-2024(online)].pdf 2024-06-27
22 202341087196-FER_SER_REPLY [27-06-2024(online)].pdf 2024-06-27
23 202341087196-CLAIMS [27-06-2024(online)].pdf 2024-06-27
24 202341087196-ABSTRACT [27-06-2024(online)].pdf 2024-06-27
25 202341087196-Proof of Right [17-07-2024(online)].pdf 2024-07-17
26 202341087196-US(14)-HearingNotice-(HearingDate-20-01-2025).pdf 2025-01-03
27 202341087196-Correspondence to notify the Controller [16-01-2025(online)].pdf 2025-01-16
28 202341087196-Written submissions and relevant documents [03-02-2025(online)].pdf 2025-02-03
29 202341087196-PatentCertificate20-03-2025.pdf 2025-03-20
30 202341087196-IntimationOfGrant20-03-2025.pdf 2025-03-20

Search Strategy

1 SearchHistoryE_31-01-2024.pdf

ERegister / Renewals

3rd: 30 Apr 2025 (From 20/12/2025 To 20/12/2026)
4th: 30 Apr 2025 (From 20/12/2026 To 20/12/2027)
5th: 30 Apr 2025 (From 20/12/2027 To 20/12/2028)
6th: 30 Apr 2025 (From 20/12/2028 To 20/12/2029)
7th: 30 Apr 2025 (From 20/12/2029 To 20/12/2030)
8th: 30 Apr 2025 (From 20/12/2030 To 20/12/2031)
9th: 30 Apr 2025 (From 20/12/2031 To 20/12/2032)
10th: 30 Apr 2025 (From 20/12/2032 To 20/12/2033)