Abstract: CENTRALIZED SMART ATTENDANCE SYSTEM ABSTRACT A centralized smart attendance system (100) is disclosed. The system (100) comprises a data acquisition unit (104) adapted to receive an image of an individual from an input unit (102) and receive a premise location corresponding to the captured image. A processing unit (106) is configured to receive the captured image and the premise location from the data acquisition unit (104); extract facial features from the captured image using a deep learning-based facial recognition algorithm (108); compare the extracted facial features with a database (110) of pre-registered facial data to authenticate an identity of the individual; and record the attendance of the individual along with a timestamp and the detected premise location, upon successful authentication. The system (100) records attendance locally and synchronizes automatically when internet connectivity is restored, allowing the system (100) to function even in offline environments. Claims: 10, Figures: 3
Description: BACKGROUND
Field of Invention
[001] Embodiments of the present invention generally relate to an attendance system and particularly to a centralized smart attendance system.
Description of Related Art
[002] Manual attendance systems prevail in many educational and organizational settings. These systems often rely on roll calls, paper registers, or ID card sign-ins. Such practices tend to introduce delays, human errors, and inefficiencies in record maintenance. Over time, the dependency on human intervention and physical documentation leads to discrepancies and challenges in verification, especially in large institutions.
[003] Some modern solutions attempt to automate attendance by using biometric methods, mobile applications, or GPS-based tracking. Despite offering partial automation, these systems face limitations in scalability, data accuracy, and security. Proxy attendance, device dependency, and inconsistent synchronization between devices and central servers reduce their effectiveness. Furthermore, many systems lack adaptability to various operational environments or infrastructural constraints.
[004] Existing technologies also fall short in providing predictive insights and comprehensive administrative control. Such tools often fail to offer proactive decision support or seamless integration with existing academic or HR systems. Privacy concerns and compliance with data protection laws remain ongoing issues for many institutions adopting digital attendance solutions.
[005] There is thus a need for an improved and advanced centralized smart attendance system that can address the aforementioned limitations in a more efficient manner.
SUMMARY
[006] Embodiments in accordance with the present invention provide a centralized smart attendance system. The system comprises a data acquisition unit. The data acquisition unit is adapted to receive an image from an input unit. The input unit is installed in a premise and adapted to capture the image of an individual; and receive a premise location corresponding to the captured image. The system further comprises a processing unit communicatively connected to the data acquisition unit. The processing unit is configured to receive the captured image and the premise location from the data acquisition unit; extract facial features from the captured image using a deep learning-based facial recognition algorithm; compare the extracted facial features with a database of pre-registered facial data to authenticate an identity of the individual; and record the attendance of the individual along with a timestamp and the detected premise location, upon successful authentication.
[007] Embodiments in accordance with the present invention further provide a method for recording attendance using a centralized smart attendance system. The method comprises the steps of receiving a captured image and a premise location from the data acquisition unit; extracting facial features from the captured image using a deep learning-based facial recognition algorithm; comparing the extracted facial features with a database of pre-registered facial data to authenticate an identity of the individual; and recording the attendance of the individual along with a timestamp and the detected premise location, upon successful authentication.
[008] Embodiments of the present invention may provide a number of advantages depending on their particular configuration. First, embodiments of the present application may provide a centralized smart attendance system.
[009] Next, embodiments of the present application may provide an attendance system that ensures only authorized individuals can mark attendance by using AI-powered facial recognition, thereby eliminating proxy or fraudulent entries.
[0010] Next, embodiments of the present application may provide an attendance system that records and synchronizes automatically when internet connectivity restores, allowing the system to function even in offline environments.
[0011] Next, embodiments of the present application may provide an attendance system that generates automatic notifications for late arrivals, absences, or suspicious activity, providing instant administrative insights.
[0012] Next, embodiments of the present application may provide an attendance system that ensures data integrity while aligning with data privacy regulations.
[0013] Next, embodiments of the present application may provide an attendance system that marks attendance when users are present at a pre-approved location, which adds an extra layer of authenticity and control.
[0014] These and other advantages will be apparent from the present application of the embodiments described herein.
[0015] The preceding is a simplified summary to provide an understanding of some embodiments of the present invention. This summary is neither an extensive nor exhaustive overview of the present invention and its various embodiments. The summary presents selected concepts of the embodiments of the present invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the present invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] The above and still further features and advantages of embodiments of the present invention will become apparent upon consideration of the following detailed description of embodiments thereof, especially when taken in conjunction with the accompanying drawings, and wherein:
[0017] FIG. 1 illustrates a schematic block diagram of a centralized smart attendance system, according to an embodiment of the present invention;
[0018] FIG. 2 illustrates a block diagram of a processing unit, according to an embodiment of the present invention; and
[0019] FIG. 3 depicts a flowchart of a method for recording attendance using a centralized smart attendance system, according to an embodiment of the present invention.
[0020] The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including but not limited to. To facilitate understanding, like reference numerals have been used, where possible, to designate like elements common to the figures. Optional portions of the figures may be illustrated using dashed or dotted lines, unless the context of usage indicates otherwise.
DETAILED DESCRIPTION
[0021] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description that the invention is not limited to these illustrated embodiments, but also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood that there is no intention to limit the invention to the specific form disclosed; on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the scope of the invention as defined in the claims.
[0022] In any embodiment described herein, the open-ended terms "comprising", "comprises", and the like (which are synonymous with "including", "having", and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of", "consists essentially of", and the like, or the respective closed phrases "consisting of", "consists of", and the like.
[0023] As used herein, the singular forms “a”, “an”, and “the” designate both the singular and the plural, unless expressly stated to designate the singular only.
[0024] FIG. 1 illustrates a schematic block diagram of a centralized smart attendance system 100 (hereinafter referred to as the system 100), according to an embodiment of the present invention. The system 100 may be adapted to recognize an individual at a physical location of a premise using facial recognition computation. Further, the system 100 may compare the recognized individual and the physical location with corresponding details stored in a remote storage medium. If the recognized individual and the physical location match the stored corresponding details, the system 100 may mark the individual as present. Otherwise, the system 100 may mark the individual as absent.
[0025] The system 100 may be installed in the premise such as, but not limited to, a school, an office, a seminar hall, a movie theater, a train, a plane, a bus, an event locale, a workshop, a training auditorium, and so forth. Embodiments of the present invention are intended to include or otherwise cover any premise, including known, related art, and/or later developed technologies, for installation of the system 100.
[0026] According to the embodiments of the present invention, the system 100 may incorporate non-limiting hardware components to enhance processing speed and efficiency. The system 100 may comprise an input unit 102, a data acquisition unit 104, a processing unit 106, a deep learning-based facial recognition algorithm 108, a database 110, and a cloud server 112. In an embodiment of the present invention, the hardware components of the system 100 may be integrated with computer-executable instructions for overcoming the challenges and the limitations of the existing systems.
[0027] In an embodiment of the present invention, the input unit 102 may be installed in the premise. The input unit 102 may be adapted to capture an image of the individual. Further, the input unit 102 may be adapted to interpolate a premise location corresponding to the captured image. The premise may be geofenced, such that the input unit 102 may automatically be activated within an inner periphery of a predefined geofenced area.
[0028] In another embodiment of the present invention, the input unit 102 may be adapted to be worn by the individual. The input unit 102 worn by the individual may be, but not limited to, a strap, a watch, a bracelet, a tag, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the input unit 102 worn by the individual, including known, related art, and/or later developed technologies. The input unit 102 may be worn by the individual at locations such as, but not limited to, a hand, a wrist, a neck, an arm, and so forth. Embodiments of the present invention are intended to include or otherwise cover any location for wearing of the input unit 102, including known, related art, and/or later developed technologies.
[0029] In an embodiment of the present invention, the input unit 102 may be adapted to pinpoint the premise location of the individual in the premise. In an exemplary embodiment of the present invention, the pinpointed premise location may be represented in a decimal x° North, y° East coordinate format. In another exemplary embodiment of the present invention, the pinpointed premise location may be in an x° North, y minutes, z seconds and a° East, b minutes, c seconds coordinate format. In yet another exemplary embodiment of the present invention, the pinpointed premise location may be in any format.
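The two coordinate formats above are interchangeable. As a non-limiting illustration (not part of the claimed subject matter), a degrees/minutes/seconds reading can be converted to the decimal x°, y° form as follows; the sample coordinate is hypothetical:

```python
def dms_to_decimal(degrees, minutes, seconds):
    """Convert a degrees/minutes/seconds coordinate component to decimal degrees."""
    return degrees + minutes / 60.0 + seconds / 3600.0

# Hypothetical reading: 28° 36' 50" North -> approximately 28.6139° North
latitude = dms_to_decimal(28, 36, 50)
```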
[0030] According to embodiments of the present invention, the input unit 102 may comprise a tracker such as, but not limited to, a Global Navigation Satellite System (GLONASS), a Real-Time Locating System (RTLS), and so forth. In a preferred embodiment of the present invention, the tracker may be a Global Positioning System (GPS). Embodiments of the present invention are intended to include or otherwise cover any type of the tracker, including known, related art, and/or later developed technologies, that may be encompassed in the input unit 102.
[0031] The input unit 102 may be, but not limited to, a handheld device used by a premise manager, a self-owned mobile device of the individual, a centrally installed surveillance camera, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the input unit 102, including known, related art, and/or later developed technologies.
[0032] In an embodiment of the present invention, the data acquisition unit 104 may be adapted to receive the captured image and the premise location corresponding to the captured image from the input unit 102.
[0033] In an embodiment of the present invention, the processing unit 106 may be connected to the data acquisition unit 104. The processing unit 106 may further be configured to execute computer-executable instructions to generate an output relating to the system 100. According to embodiments of the present invention, the processing unit 106 may be, but not limited to, a Programmable Logic Control (PLC) unit, a microprocessor, a development board, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the processing unit 106, including known, related art, and/or later developed technologies. In an embodiment of the present invention, the processing unit 106 may further be explained in conjunction with FIG. 2.
[0034] FIG. 2 illustrates a block diagram of the processing unit 106, according to an embodiment of the present invention. The processing unit 106 may comprise the computer-executable instructions in the form of programming modules, such as a data receiving module 200, a data extraction module 202, a data comparison module 204, and an attendance module 206.
[0035] In an embodiment of the present invention, the data receiving module 200 may be configured to receive the captured image and the premise location from the data acquisition unit 104. The data receiving module 200 may be configured to transmit the received images to the data extraction module 202. The data receiving module 200 may be configured to transmit the received premise location to the attendance module 206.
[0036] The data extraction module 202 may be activated upon receipt of the captured image and the premise location from the data receiving module 200. In an embodiment of the present invention, the data extraction module 202 may be configured to extract facial features from the captured image using the deep learning-based facial recognition algorithm 108. The deep learning-based facial recognition algorithm 108 renders the extracted facial features by subtracting accessories such as glasses, masks, hoods, caps, and so forth. The data extraction module 202 may be configured to transmit the extracted facial features to the data comparison module 204.
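The specification does not fix a particular network architecture for the algorithm 108. A minimal stand-in for the extraction step, assuming the embedder reduces a face crop to an L2-normalised feature vector (the pixel input shown is hypothetical), might look like:

```python
import math

def embed(pixels):
    """Stand-in for the deep-learning face embedder (108): L2-normalise a flat
    pixel vector so that downstream matching can use cosine similarity.
    A production system would instead run a trained CNN (e.g. a FaceNet-style
    model) over the face crop here."""
    norm = math.sqrt(sum(p * p for p in pixels)) or 1.0
    return [p / norm for p in pixels]

# Hypothetical two-value "image": the result is a unit-length feature vector.
features = embed([3.0, 4.0])
```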
[0037] The data comparison module 204 may be activated upon receipt of the extracted facial features from the data extraction module 202. In an embodiment of the present invention, the data comparison module 204 may be configured to compare the extracted facial features with the database 110 of pre-registered facial data to authenticate an identity of the individual. Further, if the extracted facial features match the pre-registered facial data, the individual may be authenticated. The database 110 may be, for example, but not limited to, a distributed database, a personal database, an end-user database, a commercial database, a Structured Query Language (SQL) database, a non-SQL database, an operational database, a relational database, an object-oriented database, a graph database, and so forth. In a preferred embodiment of the present invention, the database 110 may be a cloud database. Embodiments of the present invention are intended to include or otherwise cover any type of the database 110, including known, related art, and/or later developed technologies.
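One common way to realise the comparison in the data comparison module 204, offered as a sketch rather than as the claimed method, is cosine similarity between the extracted feature vector and each enrolled vector, with a tunable acceptance threshold. The registry contents and the threshold of 0.8 below are illustrative assumptions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def authenticate(extracted, registry, threshold=0.8):
    """Return the best-matching enrolled identity, or None when no enrolled
    vector clears the acceptance threshold (authentication failure)."""
    best_id, best_score = None, threshold
    for person_id, enrolled in registry.items():
        score = cosine_similarity(extracted, enrolled)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```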
[0038] Upon authentication, the data comparison module 204 may transmit an attendance signal to the attendance module 206.
[0039] The attendance module 206 may be activated upon receipt of the premise location from the data receiving module 200 and the attendance signal from the data comparison module 204. In an embodiment of the present invention, the attendance module 206 may be configured to record the attendance of the individual as present. Moreover, the attendance module 206 may be configured to record the attendance of the individual as present when the individual is within the inner periphery of the predefined geofenced area of the premise. Along with the recording of the attendance, the attendance module 206 may be configured to record a timestamp and the detected premise location to provide supporting evidence for the recorded attendance.
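The "inner periphery" test in the attendance module 206 can be sketched as a distance check against a circular geofence. This is a simplifying assumption, since the specification does not restrict the geofence shape, and the centre and radius used below are hypothetical:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(point, centre, radius_m):
    """True when the detected premise location lies within the geofence radius."""
    return haversine_m(point[0], point[1], centre[0], centre[1]) <= radius_m
```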
[0040] In an embodiment of the present invention, the attendance module 206 may be configured to store the recorded attendance locally and may synchronize the locally stored recorded attendance with the cloud server 112. In an embodiment of the present invention, the cloud server 112 may be remotely located. In an exemplary embodiment of the present invention, the cloud server 112 may be a public cloud server. In another exemplary embodiment of the present invention, the cloud server 112 may be a private cloud server. In yet another embodiment of the present invention, the cloud server 112 may be a dedicated cloud server. According to embodiments of the present invention, the cloud server 112 may be, but not limited to, a Microsoft Azure cloud server, an Amazon AWS cloud server, a Google Compute Engine (GCE) cloud server, an Amazon Elastic Compute Cloud (EC2) cloud server, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the cloud server 112, including known, related art, and/or later developed technologies.
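The local-storage-then-synchronise behaviour described above can be modelled as a pending queue that is flushed only when connectivity returns. The `upload` callable below is a hypothetical stand-in for the cloud server 112 API:

```python
class AttendanceStore:
    """Buffer attendance records locally and flush them to the cloud when online."""

    def __init__(self, upload):
        self.pending = []     # records awaiting synchronization
        self.upload = upload  # callable standing in for the cloud server API

    def record(self, person_id, timestamp, location):
        """Store a recorded attendance entry locally (works offline)."""
        self.pending.append({"id": person_id, "ts": timestamp, "loc": location})

    def sync(self, online):
        """Push pending records to the cloud when connectivity restores.
        Returns the number of records synchronized."""
        if not online:
            return 0  # stay local until connectivity is available
        sent = len(self.pending)
        for rec in self.pending:
            self.upload(rec)
        self.pending.clear()
        return sent
```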
[0041] In an embodiment of the present invention, the attendance module 206 may be configured to transmit a notification to the cloud server 112 indicating late arrivals, absentees, suspicious activities, and so forth. The notification received on the cloud server 112 may be in a pre-defined form, in an embodiment of the present invention. According to embodiments of the present invention, the pre-defined form of the notification received on the cloud server 112 may be, but not limited to a pop-up notification, a flash notification, a ringer notification, a silent notification, a push notification, a hidden notification, an electronic mail notification, a Short Message Service (SMS) notification, an always on-screen notification, and so forth. Embodiments of the present invention are intended to include or otherwise cover any pre-defined form of the notification that may be received on the cloud server 112, including known, related art, and/or later developed technologies.
[0042] In an exemplary scenario, a metro loco pilot may be appointed to drive a metro locomotive in a city. Every day before starting work, the metro loco pilot may have to report to a headquarters to have their attendance marked; thereafter, the metro loco pilot may have to arrive at an origination station of an assigned metro locomotive. On several days, the origination station of the assigned metro locomotive may be close to the headquarters, and there may be no hurdle in visiting both the headquarters and the origination station for taking over and driving the assigned metro locomotive.
[0043] However, in many scenarios, the origination station may be far away from the headquarters, and traversing from the headquarters to the origination station may result in inefficiency and wastage of time and resources.
[0044] To address this shortcoming, the system 100 may be deployed at the origination station. When the metro loco pilot arrives at the origination station, the input unit 102 may capture the image of the metro loco pilot and mark them as present. Along with marking the attendance as present, the input unit 102 may pinpoint the premise location, ensuring the presence of the metro loco pilot at the correct origination station.
[0045] This may prevent a trip by the metro loco pilot to the headquarters for getting their attendance marked. Further, the recorded attendance, along with the premise location (here, the origination station), may be stored in the cloud server 112. The storage in the cloud server 112 may ensure transparency and integrity in the recorded attendance.
[0046] FIG. 3 depicts a flowchart of a method 300 for recording attendance using the system 100, according to an embodiment of the present invention.
[0047] At step 302, the system 100 may receive the captured image and the premise location from the data acquisition unit 104.
[0048] At step 304, the system 100 may extract the facial features from the captured image using the deep learning-based facial recognition algorithm 108.
[0049] At step 306, the system 100 may compare the extracted facial features with the database 110 of pre-registered facial data to authenticate the identity of the individual. Upon successful authentication, the method 300 may proceed to a step 308. Else, the method 300 may proceed to a step 310.
[0050] At step 308, the system 100 may compare the detected premise location with the inner periphery of the predefined geofenced area. If within the inner periphery, the method 300 may proceed to a step 312. Else, the method 300 may proceed to a step 310.
[0051] At step 310, the system 100 may record the attendance of the individual as absent.
[0052] At step 312, the system 100 may record the attendance of the individual as present along with the timestamp and the detected premise location.
[0053] At step 314, the system 100 may store the recorded attendance locally.
[0054] At step 316, the system 100 may synchronize the locally stored recorded attendance with the cloud server 112.
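The branch structure of steps 306 through 312 can be summarised in a small decision function. This is a sketch that assumes the authentication and geofence results are already available as inputs:

```python
from datetime import datetime, timezone

def attendance_record(match_id, in_fence, location):
    """Mirror steps 306-312 of the method 300: authentication (306) and the
    geofence check (308) must both pass before a 'present' record is made
    (312); otherwise the individual is recorded as absent (310)."""
    if match_id is None or not in_fence:
        return {"status": "absent"}
    return {
        "status": "present",
        "id": match_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),  # supporting evidence
        "location": location,
    }
```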
[0055] While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it is to be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
[0056] This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined in the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.
Claims: CLAIMS
I/We Claim:
1. A centralized smart attendance system (100), the system (100) comprising:
a data acquisition unit (104) adapted to:
receive an image from an input unit (102), wherein the input unit (102) is installed in a premise and adapted to capture the image of an individual; and
receive a premise location corresponding to the captured image; and
a processing unit (106) communicatively connected to the data acquisition unit (104), characterized in that the processing unit (106) is configured to:
receive the captured image and the premise location from the data acquisition unit (104);
extract facial features from the captured image using a deep learning-based facial recognition algorithm (108);
compare the extracted facial features with a database (110) of pre-registered facial data to authenticate an identity of the individual; and
record the attendance of the individual along with a timestamp and the detected premise location, upon successful authentication.
2. The system (100) as claimed in claim 1, wherein the attendance is recorded when the detected premise location of the individual is within an inner periphery of a predefined geofenced area.
3. The system (100) as claimed in claim 1, wherein the processing unit (106) is configured to store the recorded attendance locally and synchronize the locally stored recorded attendance with a cloud server (112).
4. The system (100) as claimed in claim 1, wherein the input unit (102) is an integrated camera in a mobile device, a centrally installed surveillance camera, or a combination thereof.
5. The system (100) as claimed in claim 1, wherein the processing unit (106) is configured to transmit a notification to a cloud server (112) indicating late arrivals, absentees, suspicious activities, or a combination thereof.
6. The system (100) as claimed in claim 1, wherein the deep learning-based facial recognition algorithm (108) renders the extracted facial features by subtraction of accessories selected from glasses, masks, hoods, caps, or a combination thereof.
7. A method (300) for recording attendance using a centralized smart attendance system (100), the method (300) characterized by the steps of:
receiving a captured image and a premise location from the data acquisition unit (104);
extracting facial features from the captured image using a deep learning-based facial recognition algorithm (108);
comparing the extracted facial features with a database (110) of pre-registered facial data to authenticate an identity of the individual; and
recording the attendance of the individual along with a timestamp and the detected premise location, upon successful authentication.
8. The method (300) as claimed in claim 7, wherein the input unit (102) is an integrated camera in a mobile device, a centrally installed surveillance camera, or a combination thereof.
9. The method (300) as claimed in claim 7, wherein the deep learning-based facial recognition algorithm (108) renders the extracted facial features by subtraction of accessories selected from glasses, masks, hoods, caps, or a combination thereof.
10. The method (300) as claimed in claim 7, wherein the attendance is recorded when the detected premise location of the individual is within an inner periphery of a predefined geofenced area.
Date: June 02, 2025
Place: Noida
Nainsi Rastogi
Patent Agent (IN/PA-2372)
Agent for the Applicant