Abstract:
TITLE: SYSTEM AND METHOD FOR FACE RECOGNITION
ABSTRACT
A face recognition system (100) comprising: a data collection module (206) configured to receive captured images and/or videos from an imaging device (102); a data storage module (208) configured to store the received images and/or the videos in a memory (112); a training module (210) configured to extract facial features from each of the images and/or the videos stored in the memory (112) to generate reference facial data; a data processing module (212) configured to match extracted features from the images and/or the videos captured by the imaging device (102) with the reference facial data stored in the memory (112) to generate display data; and a display module (214) configured to display a match percentage and an image and/or a video associated with the matched reference facial data through a user device (106).
No. of Claims: 10; No. of Figures: 3; Figure selected for the Abstract: Figure 1.
Claims:
CLAIMS
I/We Claim:
1. A face recognition system (100) comprising:
a data collection module (206) configured to receive captured images and/or videos from an imaging device (102) over a communication network (108);
a data storage module (208) configured to store the received images and/or videos in a memory (112);
a training module (210) configured to extract facial features from each of the images and/or the videos stored in the memory (112) to generate reference facial data for each of the images and/or videos;
a data processing module (212) configured to match extracted features from the images and/or the videos captured by the imaging device (102) with the reference facial data stored in the memory (112) to generate display data, wherein the display data comprises a match percentage and an image and/or a video of the images and/or the videos associated with the matched reference facial data; and
a display module (214) configured to display the match percentage and the image and/or the video of the images and/or the videos associated with the matched reference facial data through a user device (106).
2. The face recognition system (100) as claimed in claim 1, wherein the training module (210) is further configured to utilize at least one of supervised learning and unsupervised learning for training operations.
3. The face recognition system (100) as claimed in claim 1, wherein the data storage module (208) is configured to store the received images and/or the videos in the memory (112) by means of a first communication bus (204).
4. The face recognition system (100) as claimed in claim 1, wherein the training module (210) is further configured to access the stored images and/or the videos from the memory (112).
5. The face recognition system (100) as claimed in claim 1, wherein the training module (210) is further configured to store the generated reference facial data in a look up table of the memory (112).
6. The face recognition system (100) as claimed in claim 5, wherein the generated reference facial data is mapped with the corresponding image and/or the video of the stored images and/or the videos in the look up table of the memory (112).
7. The face recognition system (100) as claimed in claim 1, wherein the data processing module (212) is further configured to extract the features from the images and/or the videos captured by the imaging device (102) by using an image processing technique.
8. The face recognition system (100) as claimed in claim 1, wherein the match percentage defines an extent to which the extracted features from the received images and/or the videos match the reference facial data.
9. The face recognition system (100) as claimed in claim 1, wherein the imaging device (102) is a web camera.
10. A method for recognizing a face using a face recognition system (100), the method comprising the steps of:
identifying a facial structure in a visible frame of an imaging device (102);
capturing images and/or videos through the imaging device (102) based on the identified facial structure;
extracting features from the captured images and/or the videos;
matching the extracted features with reference facial data stored in a memory (112) to generate display data; and
displaying the display data comprising a match percentage and an image and/or a video of the images and/or the videos associated with the matched reference facial data through a user device (106).
Date: 09 November, 2021
Place: Noida
Dr. Keerti Gupta
Agent for the Applicant
(IN/PA-1529)
Description:
FORM 2
THE PATENT ACT 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See Section 10, and rule 13)
SYSTEM AND METHOD FOR FACE RECOGNITION
APPLICANT(S)
NAME: SR UNIVERSITY
NATIONALITY: INDIAN
ADDRESS: S R University Ananthasagar, Warangal, Telangana, India
The following specification particularly describes the invention and the manner in which it is to be performed
BACKGROUND
Field of Invention
[001] Embodiments of the present invention generally relate to biometric recognition, and more particularly to a system and a method for face recognition.
Description of Related Art
[002] In today’s world, face recognition is an important step for security and surveillance and has been widely applied in daily activities, such as in matters of statehood. Some programs have deliberately been implemented without integrating face recognition biometrics into an identity database of users, so that the users do not have to worry about potential abuse. Face recognition in the security sector has evolved considerably in the modern era; for example, automatic face recognition (AFR) technology maps faces in a crowd and compares them against watchlist images of suspects, missing persons, and people sought by the police. The AFR technology has been widely installed in public places such as streets, shopping centers, stadiums, and so forth.
[003] Also, the face recognition technology can be implemented in rail transportation such that people no longer need to use a swipe card to be able to use the rail transportation. The face recognition technology is expected to facilitate the people in using transportation facilities and is also able to replace the function of a payment card commonly used by the people for traveling. However, such face recognition systems are usually expensive. In addition, training of such face recognition systems is difficult.
[004] Thus, there is a need for a technical solution that overcomes the aforementioned problems of the conventional face recognition systems.
SUMMARY
[005] Embodiments in accordance with the present invention provide a face recognition system. The face recognition system comprises a data collection module configured to receive captured images and/or videos from an imaging device over a communication network. The face recognition system further comprises a data storage module configured to store the received images and/or the videos in a memory. The face recognition system further comprises a training module configured to extract facial features from each of the images and/or the videos stored in the memory to generate reference facial data for each of the images and/or the videos. The face recognition system further comprises a data processing module configured to match extracted features from the images and/or the videos captured by the imaging device with the reference facial data stored in the memory to generate display data. The display data comprises a match percentage and an image and/or a video of the images and/or the videos associated with the matched reference facial data. The face recognition system further comprises a display module configured to display the match percentage and the image and/or the video of the images and/or the videos associated with the matched reference facial data through a user device.
[006] Embodiments in accordance with the present invention further provide a method for recognizing a face using a face recognition system. The method comprises the steps of: identifying a facial structure in a visible frame of an imaging device; capturing images and/or videos through the imaging device based on the identified facial structure; extracting features from the captured images and/or the videos; matching the extracted features with reference facial data stored in a memory to generate display data; and displaying the display data comprising a match percentage and an image and/or a video of the images and/or the videos associated with the matched reference facial data through a user device.
[007] Embodiments of the present invention may provide a number of advantages depending on the particular configuration. First, embodiments of the present invention may provide a low-cost face recognition system that self-trains for unknown users and displays details of matched users.
[008] Next, embodiments of the present invention may provide a lightweight face recognition system having a lower charging time.
[009] These and other advantages will be apparent from the present application of the embodiments described herein.
[0010] The preceding is a simplified summary to provide an understanding of some embodiments of the present invention. This summary is neither an extensive nor exhaustive overview of the present invention and its various embodiments. The summary presents selected concepts of the embodiments of the present invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the present invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] The above and still further features and advantages of embodiments of the present invention will become apparent upon consideration of the following detailed description of embodiments thereof, especially when taken in conjunction with the accompanying drawings, and wherein:
[0012] FIG. 1 illustrates a block diagram of a face recognition system, in accordance with an embodiment of the present invention;
[0013] FIG. 2 illustrates a block diagram depicting a server of the face recognition system, in accordance with an embodiment of the present invention; and
[0014] FIG. 3 illustrates a flow chart of a method for recognizing a face using the face recognition system, in accordance with an embodiment of the present invention.
[0015] The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including but not limited to. To facilitate understanding, like reference numerals have been used, where possible, to designate like elements common to the figures. Optional portions of the figures may be illustrated using dashed or dotted lines, unless the context of usage indicates otherwise.
DETAILED DESCRIPTION
[0016] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood that there is no intention to limit the invention to the specific form disclosed, but, on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention as defined in the claims.
[0017] In any embodiment described herein, the open-ended terms "comprising", "comprises", and the like (which are synonymous with "including", "having", and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of", "consists essentially of", and the like, or the respective closed phrases "consisting of", "consists of", and the like.
[0018] As used herein, the singular forms “a”, “an”, and “the” designate both the singular and the plural, unless expressly stated to designate the singular only.
[0019] FIG. 1 illustrates a block diagram of a face recognition system 100, in accordance with an embodiment of the present invention. The face recognition system 100 may be a cost-effective, high performance, and easy to use system for facial recognition operations. The face recognition system 100 may include an imaging device 102, a server 104, and a user device 106. As illustrated in the FIG. 1, the imaging device 102, the server 104, and the user device 106 may be communicatively coupled to each other through a communication network 108. In other embodiments of the present invention, the imaging device 102, the server 104, and the user device 106 may be communicably coupled through separate communication networks established therebetween.
[0020] The imaging device 102 may be configured to capture images and/or videos of a user. In an embodiment of the present invention, the imaging device 102 may be configured to process a visible frame to determine a presence of a facial structure. In an embodiment of the present invention, the imaging device 102 may have a capability to recognize the presence of the facial structure. In another embodiment of the present invention, the imaging device 102 in communication with a processing circuitry 110 of the server 104 may be configured to identify the presence of the facial structure in the visible frame. Upon identifying the presence of the facial structure, the imaging device 102 may be configured to capture the images and/or the videos. According to embodiments of the present invention, the imaging device 102 may be further configured to transmit the captured images and/or the videos to the server 104 over the communication network 108. In an embodiment of the present invention, the imaging device 102 may comprise a communication unit (not shown) that may be configured to facilitate a transmission of the captured images and/or the videos to the server 104 over the communication network 108. The imaging device 102 may be, but not limited to, a digital camera, a Closed-Circuit Television (CCTV) camera, a mirrorless camera, and so forth. In a preferred embodiment of the present invention, the imaging device 102 may be a web camera. Embodiments of the present invention are intended to include or otherwise cover any type of the imaging device 102 including known, related art, and/or later developed technologies.
[0021] Further, the server 104 may be a network of computers, a software framework, or a combination thereof, that may provide a generalized approach to create a server implementation. Examples of the server 104 may be, but not limited to, personal computers, laptops, mini-computers, mainframe computers, any non-transient and tangible machine that can execute a machine-readable code, cloud-based servers, distributed server networks, or a network of computer systems. Embodiments of the present invention are intended to include or otherwise cover any type of the server 104 including known, related art, and/or later developed technologies. The server 104 may be realized through various web-based technologies such as, but not limited to, a Java web-framework, a .NET framework, a personal home page (PHP) framework, or any web-application framework. Embodiments of the present invention are intended to include or otherwise cover any type of the web-based technologies including known, related art, and/or later developed technologies. The server 104 may be maintained by a third-party entity that facilitates the face recognition operations of the face recognition system 100. The server 104 may include the processing circuitry 110 and a memory 112.
[0022] The processing circuitry 110 may include, but not limited to, a suitable logic, instructions, circuitry, interfaces, and/or codes for executing various operations, such as face recognition and displaying output through the user device 106. The processing circuitry 110 may be configured to perform the operations associated with the face recognition system 100 by communicating one or more commands and/or instructions over the communication network 108. Examples of the processing circuitry 110 may include any type of processor such as, but not limited to, an Application-Specific Integrated Circuit (ASIC) processor, a Reduced Instruction Set Computer (RISC) processor, a Complex Instruction Set Computer (CISC) processor, Field Programmable Gate Arrays (FPGA), and the like. In a preferred embodiment of the present invention, the processing circuitry 110 may be realized on a Raspberry Pi. Embodiments of the present invention are intended to include or otherwise cover any type of the processor including known, related art, and/or later developed technologies. In an embodiment of the present invention, an external power source may be used for powering the processing circuitry 110 for operation. In an embodiment of the present invention, the external power source may be an Alternating Current (AC) power source. Embodiments of the present invention are intended to include or otherwise cover any type of the external power source.
[0023] The memory 112 may be configured to store the logic, the instructions, the circuitry, the interfaces, and/or the codes of the processing circuitry 110 for executing the various operations associated with the face recognition system 100. Examples of the memory 112 may be, but not limited to, a Read Only Memory (ROM), a Random-Access Memory (RAM), a flash memory, a removable storage drive, a Hard Disk Drive (HDD), a solid-state memory, a magnetic storage drive, a Programmable Read Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), and/or an Electronically Erasable Programmable Read-Only Memory (EEPROM). Embodiments of the present invention are intended to include or otherwise cover any type of the memory 112 including known, related art, and/or later developed technologies. In some embodiments of the present invention, a set of centralized or distributed network of peripheral memory devices may be interfaced with the server 104, as an example, on a cloud server.
[0024] The user device 106 may be capable of facilitating the user to input data, receive the data, and/or transmit the data within the face recognition system 100. Examples of the user device 106 may be, but not limited to, a desktop, a notebook, a laptop, a handheld computer, a touch sensitive device, a computing device, a smart-phone, and/or a smart watch. Embodiments of the present invention are intended to include or otherwise cover any type of the user device 106 including known, related art, and/or later developed technologies. In the illustrated embodiment of the FIG. 1, the user device 106 may comprise a user interface 114, a processing unit 116, a storage element 118, and a communication interface 120.
[0025] The user interface 114 may include an input interface for receiving inputs from the user. Examples of the input interface may be, but not limited to, a touch interface, a mouse, a keyboard, a motion recognition unit, a gesture recognition unit, a voice recognition unit, and the like. Embodiments of the present invention are intended to include or otherwise cover any type of the input interface including known, related art, and/or later developed technologies. The user interface 114 may further include an output interface for displaying (or presenting) the output to the user. Examples of the output interface may be, but not limited to, a display device, a printer, a projection device, and/or a speaker. Embodiments of the present invention are intended to include or otherwise cover any type of the output interface including known, related art, and/or later developed technologies. Examples of the user interface 114 may be, but not limited to, a digital display, an analog display, a graphical user interface, a website, a webpage, the keyboard, the mouse, a light pen, an appearance of a desktop, and/or illuminated characters. In a preferred embodiment of the present invention, the user interface 114 may be a touch screen display. Embodiments of the present invention are intended to include or otherwise cover any type of the user interface 114 including known, related art, and/or later developed technologies.
[0026] The processing unit 116 may include the suitable logic, the instructions, the circuitry, the interfaces, and/or the codes for executing the various operations, such as the operations associated with the user device 106, or the like. In some embodiments of the present invention, the processing unit 116 may be configured to control the operations executed by the user device 106 in response to the input received at the user device 106 from the user. Examples of the processing unit 116 may be, but not limited to, the Application-Specific Integrated Circuit (ASIC) processor, the Reduced Instruction Set Computing (RISC) processor, the Complex Instruction Set Computing (CISC) processor, the Field-Programmable Gate Array (FPGA), a Programmable Logic Control unit (PLC), and the like. Embodiments of the present disclosure are intended to include or otherwise cover any type of the processing unit 116 including known, related art, and/or later developed processing units.
[0027] The storage element 118 may be configured to store the logic, the instructions, the circuitry, the interfaces, and/or the codes of the processing unit 116, the data associated with the user device 106, and the data associated with the face recognition system 100. Examples of the storage element 118 may be, but are not limited to, the Read-Only Memory (ROM), the Random-Access Memory (RAM), the flash memory, the removable storage drive, the Hard Disk Drive (HDD), the solid-state memory, the magnetic storage drive, the Programmable Read Only Memory (PROM), the Erasable PROM (EPROM), and/or the Electrically EPROM (EEPROM). Embodiments of the present disclosure are intended to include or otherwise cover any type of the storage element 118 including known, related art, and/or later developed storage elements.
[0028] The communication interface 120 may be configured to enable the user device 106 to communicate with the server 104 and other components of the face recognition system 100 over the communication network 108, according to embodiments of the present invention. Examples of the communication interface 120 may be, but not limited to, a modem, a network interface such as an Ethernet card, a communication port, and/or a Personal Computer Memory Card International Association (PCMCIA) slot and card, an antenna, a Radio Frequency (RF) transceiver, amplifiers, a tuner, oscillators, a digital signal processor, a coder-decoder (CODEC) chipset, a Subscriber Identity Module (SIM) card, and a local buffer circuit. It will be apparent to a person of ordinary skill in the art that the communication interface 120 may include any device and/or apparatus capable of providing wireless or wired communications between the server 104 and the user device 106.
[0029] The communication network 108 may include, but not limited to, the suitable logic, the circuitry, and the interfaces that may be configured to provide network ports and communication channels for transmission and reception of the data related to the operations of various entities (such as the server 104 and the user device 106) of the face recognition system 100. Each network port may correspond to a virtual address (or a physical machine address) for transmission and reception of communication data. For example, the virtual address may be an Internet Protocol version 4 (IPv4) address (or an IPv6 address) and the physical machine address may be a Media Access Control (MAC) address. The communication network 108 may be associated with an application layer for implementation of communication protocols based on communication requests from the imaging device 102, the server 104, and the user device 106. The communication data may be transmitted or received through the communication protocols. Examples of the communication protocols may be, but not limited to, a Hypertext Transfer Protocol (HTTP), a File Transfer Protocol (FTP), a Simple Mail Transfer Protocol (SMTP), a Domain Name System (DNS) protocol, a Common Management Interface Protocol (CMIP), a Transmission Control Protocol and Internet Protocol (TCP/IP), a User Datagram Protocol (UDP), Long Term Evolution (LTE) communication protocols, or any combination thereof. Embodiments of the present disclosure are intended to include or otherwise cover any type of the communication protocols including known, related art, and/or later developed communication protocols.
[0030] In an embodiment of the present invention, the communication data may be transmitted or received through at least one communication channel of the communication channels in the communication network 108. The communication channels may be, but not limited to, a wireless channel, a wired channel, a combination of wireless and wired channel, and so forth. The wireless or wired channel may be associated with a data standard which may be defined by one of a Local Area Network (LAN), a Personal Area Network (PAN), a Wireless Local Area Network (WLAN), a Wireless Sensor Network (WSN), a Wide Area Network (WAN), a Wireless Wide Area Network (WWAN), a Metropolitan Area Network (MAN), a satellite network, the Internet, a fiber optic network, a coaxial cable network, an infrared (IR) network, a Radio Frequency (RF) network, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the communication channel, including known, related art, and/or later developed technologies.
[0031] FIG. 2 illustrates a block diagram depicting the server 104 of the face recognition system 100, in accordance with an embodiment of the present invention. The server 104 may include the processing circuitry 110 and the memory 112, as discussed. The server 104 may further include a network interface 200 and an Input/Output (I/O) interface 202. The processing circuitry 110, the memory 112, the network interface 200, and the Input/Output (I/O) interface 202 may communicate with each other by means of a first communication bus 204. The processing circuitry 110 may include a data collection module 206, a data storage module 208, a training module 210, a data processing module 212, and a display module 214 that communicate with each other by means of a second communication bus 216. It will be apparent to a person having ordinary skill in the art that the server 104 is for illustrative purposes and not limited to any specific combination of hardware circuitry and/or software.
[0032] The I/O interface 202 may include the suitable logic, the circuitry, the interfaces, and/or the code that may be configured to receive the inputs (e.g., commands) and transmit the outputs through a plurality of data ports in the server 104. The I/O interface 202 may include various input and output data ports for different I/O devices. Examples of such I/O devices may be, but not limited to, the touch screen, the keyboard, the mouse, a joystick, a projector, an audio output device, a microphone, an image-capture device, a Liquid Crystal Display (LCD) screen, and/or a speaker. Embodiments of the present invention are intended to include or otherwise cover any type of the I/O devices, including known, related art, and/or later developed technologies.
[0033] The processing circuitry 110 may be configured to perform the face recognition operations by means of the data collection module 206, the data storage module 208, the training module 210, the data processing module 212, and the display module 214. In an embodiment of the present invention, the data collection module 206 may be configured to receive the captured images and/or the videos from the imaging device 102. The data collection module 206 may be configured to receive the captured images and/or the videos in a digital format and further configured to transmit the captured images and/or the videos to the data storage module 208.
[0034] The data storage module 208 may be configured to receive the captured images and/or the videos in the digital format from the data collection module 206. Further, the data storage module 208 may be configured to store the received images and/or the videos in the memory 112 by means of the first communication bus 204.
[0035] The training module 210 may be configured to access the images and/or the videos stored in the memory 112. Further, the training module 210 may be configured to utilize the stored images and/or the videos as training data to recognize faces. In an embodiment of the present invention, the training module 210 may be configured to utilize supervised learning with the training data for training operations. In another embodiment of the present invention, the training module 210 may be configured to utilize unsupervised learning for the training operations. The training module 210 may be further configured to extract facial features from each of the images and/or the videos stored in the memory 112 to generate reference facial data for each of the images and/or the videos. Further, the training module 210 may be configured to store the generated reference facial data in a look up table of the memory 112. In an embodiment of the present invention, the generated reference facial data may be mapped with the corresponding stored images and/or the videos.
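As an illustration of the look up table described above, the mapping from each stored image to its reference facial data may be sketched in plain Python. The feature extractor, the function names, and the sample pixel data below are hypothetical stand-ins for the training module 210, not part of the specification:

```python
# Illustrative sketch of the look up table of paragraph [0035]: each
# stored image is reduced to a feature vector (here a stand-in 4-bin
# intensity histogram) and mapped back to its source image identifier.

def extract_features(pixels):
    """Stand-in feature extractor: a 4-bin intensity histogram,
    normalized so vectors from images of different sizes compare."""
    bins = [0, 0, 0, 0]
    for p in pixels:
        bins[min(p // 64, 3)] += 1
    total = len(pixels)
    return [b / total for b in bins]

def build_reference_table(stored_images):
    """Map an image identifier to its reference facial data, mirroring
    the mapping kept in the look up table of the memory (112)."""
    return {image_id: extract_features(pixels)
            for image_id, pixels in stored_images.items()}

# Hypothetical stored images: flat lists of 8-bit grayscale pixels.
stored = {
    "user_a.jpg": [10, 20, 200, 220, 30, 240],
    "user_b.jpg": [100, 110, 120, 130, 140, 150],
}
reference_table = build_reference_table(stored)
print(reference_table["user_a.jpg"])  # → [0.5, 0.0, 0.0, 0.5]
```

A real deployment would replace the histogram with a trained facial-feature extractor; the point here is only the image-to-reference-data mapping.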
[0036] The data processing module 212 may be configured to receive the captured images and/or the videos from the imaging device 102 in real-time. For the sake of the ongoing discussion, it is assumed that the images and/or the videos received by the data processing module 212 may be the same as the images and/or the videos utilized by the training module 210. However, the scope of the present invention is not limited thereto. In an alternate embodiment of the present invention, the images and/or the videos received by the data processing module 212 may be different from the images and/or the videos utilized by the training module 210. Further, the data processing module 212 may be configured to extract features from the received images and/or videos by using an image processing technique. The image processing technique may comprise algorithms such as, but not limited to, the Haar Cascade algorithm, the Local Binary Pattern Histogram (LBPH) algorithm, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the image processing technique, including known, related art, and/or later developed technologies.
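The Local Binary Pattern step underlying the LBPH algorithm mentioned above can be sketched as follows. The function name and the sample image are illustrative assumptions, and a full LBPH implementation would additionally pool these per-pixel codes into regional histograms:

```python
# Pure-Python sketch of the Local Binary Pattern step of LBPH: each
# interior pixel is encoded by thresholding its 8 neighbours against
# the centre value, producing an 8-bit texture code.

def lbp_code(image, y, x):
    """8-bit LBP code for the pixel at (y, x) of a 2-D grayscale image."""
    center = image[y][x]
    # Neighbours taken clockwise from the top-left corner.
    neighbours = [image[y - 1][x - 1], image[y - 1][x], image[y - 1][x + 1],
                  image[y][x + 1], image[y + 1][x + 1], image[y + 1][x],
                  image[y + 1][x - 1], image[y][x - 1]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= center:  # neighbour at least as bright as centre -> 1
            code |= 1 << bit
    return code

# Hypothetical 3x3 grayscale patch.
img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
print(lbp_code(img, 1, 1))  # → 120 (bits set for neighbours 60, 90, 80, 70)
```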
[0037] Furthermore, the data processing module 212 may be configured to match the extracted features from the received images and/or videos with the reference facial data stored in the memory 112. In an embodiment of the present invention, when the data processing module 212 determines that the extracted features from the received images and/or videos are matched with the reference facial data of each of the images and/or the videos stored in the memory 112, then the data processing module 212 may be configured to determine a match percentage that may define an extent to which the extracted features from the received images and/or videos match the reference facial data. Further, the data processing module 212 may be configured to generate display data that may comprise the determined match percentage and the image and/or the video associated with the matched reference facial data. The data processing module 212 may be further configured to transmit the generated display data to the display module 214.
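A minimal sketch of how such a match percentage might be computed, assuming feature vectors are plain lists of floats and using cosine similarity as the comparison measure. The specification does not mandate a particular measure, and all names below are illustrative:

```python
# Illustrative match-percentage computation for paragraph [0037]:
# cosine similarity between a probe feature vector and each reference
# vector, scaled to a percentage (100 = identical direction).
import math

def match_percentage(probe, reference):
    """Extent to which probe features match reference features."""
    dot = sum(p * r for p, r in zip(probe, reference))
    norm = (math.sqrt(sum(p * p for p in probe))
            * math.sqrt(sum(r * r for r in reference)))
    if norm == 0:
        return 0.0
    return 100.0 * dot / norm

def best_match(probe, reference_table):
    """Return display data: the best-matching image identifier and its
    match percentage, as generated by the data processing module (212)."""
    image_id, pct = max(
        ((iid, match_percentage(probe, ref))
         for iid, ref in reference_table.items()),
        key=lambda pair: pair[1])
    return {"image": image_id, "match_percentage": round(pct, 1)}

# Hypothetical reference table and probe vector.
table = {"user_a.jpg": [0.5, 0.0, 0.0, 0.5],
         "user_b.jpg": [0.25, 0.25, 0.25, 0.25]}
print(best_match([0.5, 0.0, 0.1, 0.4], table))
```

Here the probe lands closest to `user_a.jpg` with a match percentage of about 98.2; the display module would then render this dictionary alongside the matched image.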
[0038] The display module 214 may be configured to receive the display data from the data processing module 212. Further, the display module 214 may be configured to transmit the display data to the user device 106 such that the display data enables the processing unit 116 of the user device 106 to decode and display the match percentage and the image and/or the video associated with the matched reference facial data embedded in the display data through the user interface 114 of the user device 106.
[0039] FIG. 3 illustrates a flow chart of a method 300 for recognizing the face using the face recognition system 100, in accordance with an embodiment of the present invention.
[0040] At step 302, the face recognition system 100 may identify the facial structure in the visible frame of the imaging device 102.
[0041] At step 304, if the face recognition system 100 identifies the facial structure, then the method 300 may proceed to a step 306, otherwise the method 300 may return to the step 302.
[0042] At the step 306, the face recognition system 100 may capture the images and/or videos through the imaging device 102.
[0043] At step 308, the face recognition system 100 may extract the features from the captured images and/or the videos and further compare the extracted features with the reference facial data mapped with each of the images and/or videos stored in the memory 112.
[0044] At step 310, if the face recognition system 100 determines that the extracted features from the captured images and/or the videos match the reference facial data, then the method 300 may proceed to a step 312; otherwise, the method 300 may conclude.
[0045] At the step 312, the face recognition system 100 may generate the display data that may comprise the determined match percentage and the image and/or the video associated with the matched reference facial data.
[0046] At step 314, the face recognition system 100 may display the determined match percentage and the image and/or the video associated with the matched reference facial data of the display data through the user interface 114 of the user device 106.
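The steps 302 through 314 above can be wired together as a simple loop. The detection, extraction, and matching functions below are hypothetical caller-supplied stand-ins (the specification leaves the concrete algorithms open); only the control flow mirrors the flow chart of FIG. 3:

```python
def run_recognition(frames, detect_face, extract_features, reference_data,
                    match, threshold=50.0):
    """Mirror steps 302-314: detect, capture, extract, compare, display.

    ``detect_face``, ``extract_features`` and ``match`` are caller-supplied
    stand-ins for the imaging and processing modules; ``match`` returns an
    (identifier, percentage) pair against ``reference_data``.
    """
    results = []
    for frame in frames:
        if not detect_face(frame):          # steps 302-304: wait for a face
            continue
        features = extract_features(frame)  # steps 306-308: capture, extract
        identifier, percentage = match(features, reference_data)  # step 310
        if percentage >= threshold:         # steps 312-314: build display data
            results.append({"match_percentage": percentage,
                            "matched_image": identifier})
    return results
```

The threshold value is an assumption; the specification only states that matched results proceed to display while non-matches conclude the method.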
[0047] Embodiments of the invention are described above with reference to block diagrams and schematic illustrations of methods and apparatuses according to embodiments of the invention. It will be understood that each block of the diagrams and combinations of blocks in the diagrams can be implemented by computer program instructions. These computer program instructions may be loaded onto one or more general purpose computers, special purpose computers, or other programmable data processing apparatus to produce machines, such that the instructions which execute on the computers or other programmable data processing apparatus create means for implementing the functions specified in the block or blocks. Such computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the function specified in the block or blocks.
[0048] While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it is to be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
[0049] This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined in the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
| # | Name | Date |
|---|---|---|
| 1 | 202141057818-FORM 13 [15-02-2025(online)].pdf | 2025-02-15 |
| 2 | 202141057818-STATEMENT OF UNDERTAKING (FORM 3) [13-12-2021(online)].pdf | 2021-12-13 |
| 3 | 202141057818-FORM 18 [15-02-2025(online)].pdf | 2025-02-15 |
| 4 | 202141057818-REQUEST FOR EARLY PUBLICATION(FORM-9) [13-12-2021(online)].pdf | 2021-12-13 |
| 5 | 202141057818-POWER OF AUTHORITY [13-12-2021(online)].pdf | 2021-12-13 |
| 6 | 202141057818-POA [15-02-2025(online)].pdf | 2025-02-15 |
| 7 | 202141057818-RELEVANT DOCUMENTS [15-02-2025(online)].pdf | 2025-02-15 |
| 8 | 202141057818-OTHERS [13-12-2021(online)].pdf | 2021-12-13 |
| 9 | 202141057818-Proof of Right [12-03-2022(online)].pdf | 2022-03-12 |
| 10 | 202141057818-FORM-9 [13-12-2021(online)].pdf | 2021-12-13 |
| 11 | 202141057818-FORM FOR SMALL ENTITY(FORM-28) [13-12-2021(online)].pdf | 2021-12-13 |
| 12 | 202141057818-COMPLETE SPECIFICATION [13-12-2021(online)].pdf | 2021-12-13 |
| 13 | 202141057818-FORM 1 [13-12-2021(online)].pdf | 2021-12-13 |
| 14 | 202141057818-DECLARATION OF INVENTORSHIP (FORM 5) [13-12-2021(online)].pdf | 2021-12-13 |
| 15 | 202141057818-FIGURE OF ABSTRACT [13-12-2021(online)].pdf | 2021-12-13 |
| 16 | 202141057818-DRAWINGS [13-12-2021(online)].pdf | 2021-12-13 |
| 17 | 202141057818-EDUCATIONAL INSTITUTION(S) [13-12-2021(online)].pdf | 2021-12-13 |
| 18 | 202141057818-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [13-12-2021(online)].pdf | 2021-12-13 |