ABSTRACT
CONTACTLESS MASK MONITORING AND BIOMETRIC ATTENDANCE SYSTEM
A contactless mask monitoring and biometric attendance system comprising: a first imaging module (IM) to determine a bounding box from an imaged face; a processor to process coordinate data; a ‘four degrees of motion’ robotic arm to automatically adjust said imaging module (IM) for the task of biometric recognition of a person standing in front of said system, said imaging module being configured to start recording images / video and to capture one frame at a time once a subject’s face is collinear to said second imaging module. [[FIGURE 5]]
FIELD OF THE INVENTION:
This invention relates to the field of electronics engineering.
Particularly, this invention relates to a contactless mask monitoring and biometric attendance system.
BACKGROUND OF THE INVENTION:
Covid-19 has impacted everyone’s way of living in a huge way; examples of which are evident in every aspect of life. Some basic measures to prevent transmission of Covid-19 are wearing a mask and social distancing.
In situations, such as the current pandemic, which has necessitated compulsory usage of masks, monitoring compliance is a big task.
According to the COVID-19 safety norms, wearing a mask is necessary; monitoring such compliance, however, requires some kind of human intervention.
Therefore, face mask detection has to be acknowledged as a requirement in present society, especially in defined environments, which can help ensure a healthy and safe environment for students and front-line workers. Common biometric authentication systems, like fingerprint scanners, require contact with a publicly used surface. Non-biometric authentication systems can be easily cheated in order to falsely record someone else’s attendance.
Hence, there is a dire need of a system that can monitor and record the entries of the working population.
There is a need for a contactless attendance tracking system which is efficient and fast.
OBJECTS OF THE INVENTION:
An object of the invention is to provide a system, for monitoring compliance of masks’ protocol, that does not involve contact between people or surfaces.
Another object of the invention is to provide a system which records attendance, efficiently, by verifying identity along with mask detection.
Yet another object of the invention is to provide a system which saves time and is resource efficient whilst recording attendance and whilst verifying identity along with mask detection; simultaneously.
Still another object of the invention is to provide a system which allows for repetitive monitoring with minimum time lag.
An additional object of the invention is to record attendance, automatically, to reduce time taken for recording attendance.
Yet an additional object of the invention is to minimise and eliminate interpersonal contact and contact with public surfaces to prevent the spread of diseases like Covid-19.
Still an additional object of the invention is to minimise attendance cheating which is common in non-biometric methods of recording attendance.
Another additional object of the invention is to allow people to enter a defined environment only if they are wearing a mask.
Yet another additional object of the invention is to eliminate the need of human intervention for monitoring if people entering a defined environment are wearing a mask or not.
Still another additional object of the invention is to record attendance for every hour automatically.
Another additional object of the invention is to provide autonomous biometric recognition.
Yet another additional object of the invention is to present a scalable and adaptable system that allows scanning of multiple biometrics for improved accessibility and security.
SUMMARY OF THE INVENTION:
According to this invention, there is provided a contactless mask monitoring and biometric attendance system, said system comprising:
- a first imaging module configured to determine a face of a person standing before said system, said first imaging module being configured to determine a bounding box from an imaged face obtained from said first imaging module;
- a processor configured to process x-coordinate data, y-coordinate data, and z-coordinate data from said determined bounding box;
- a ‘four degrees of motion’ robotic arm configured to automatically adjust said imaging module for the task of biometric recognition of a person standing in front of said system, in that, said arm comprising:
o three links extending from an operatively vertically displaceable linear motion platform, wherein:
▪ said linear motion platform comprises a linear joint causing operatively vertical displacement of said linear motion platform;
▪ a first link comprises a rotary joint at both its ends, in that, a first rotary joint connects said first link to said linear motion platform and a second rotary joint connects said first link to a second link,
▪ said second link comprises a rotary joint at both its ends, in that, a second rotary joint connects said second link to said first link and a third rotary joint connects said second link to a third link, and
▪ a third link comprises a third rotary joint with an end effector hosting a second imaging module, said third rotary joint causing angular displacement of said end effector; and
- said imaging modules being configured to start recording images / video and to capture one frame at a time once a subject’s face is collinear to said second imaging module.
In at least an embodiment, said one linear joint and said three revolute (rotary) joints span a cylindrical work envelope, in that:
- said linear joint causing linear displacement (Y-axis movement) of said end effector; and
- said rotary joints causing angular displacement (X-axis, Z-axis) of said end effector.
In at least an embodiment, said system comprises one or more stepper motors coupled to each of said joints to cause actuation of the links via said joints.
In at least an embodiment, said system comprises one or more stepper motors coupled to each of said joints to cause actuation of the links via said joints, characterized in that:
- a first stepper motor being coupled to said first linear joint, being a lead screw setup, causing operative linear vertical displacement of said arm;
- a second stepper motor actuating a first rotary joint, through a first timing belt and pulley assembly;
- a third stepper motor actuating said second rotary joint, through a second timing belt and pulley assembly; and
- a servo motor actuating said third rotary joint, through a third timing belt and pulley assembly, for causing angular displacement of said end effector.
In at least an embodiment, said system comprises one or more stepper motors coupled to each of said joints to cause actuation of the links via said joints, characterized in that:
- a first stepper motor being coupled to said first linear joint, being a lead screw setup, causing operative linear vertical displacement of said arm;
- a second stepper motor actuating a first rotary joint, through a first timing belt and pulley assembly, said first rotary joint, having a 180° range of motion;
- a third stepper motor actuating said second rotary joint, through a second timing belt and pulley assembly, said second rotary joint having a 250° range of motion; and
- a servo motor actuating said third rotary joint, through a third timing belt and pulley assembly, for causing angular displacement of said end effector, said third rotary joint having a 260° range of motion.
In at least an embodiment, a mask detection module is configured to detect the presence of a mask once said subject’s face is collinear to said second imaging module, successful detection being determined by a first comparator configured to compare said detected face, in said bounding box, with preset mask detection parameters in order to determine if the face has a mask on it or not.
In at least an embodiment, a mask detection module is configured to mark attendance, upon successful detection, once said subject’s face is collinear to said second imaging module.
In at least an embodiment, a second comparator is configured to compare, and check, if computed area of said bounding box is greater than a predefined threshold and further configured to compute co-ordinates in order to be fed to a movement control module controlling movement of said robotic arm via corresponding stepper motors.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS:
The invention will now be described in relation to the accompanying drawings, in which:
Figure 1 illustrates a schematic block diagram of the system of this invention;
Figure 2 illustrates a flowchart of the method of this invention;
Figure 3 shows a flowchart for the algorithm that the arm follows in order to scan the biometric of the user present in the work envelope;
Figure 4 shows a basic electronics block diagram of the system;
Figure 5 shows a realistic 3D rendering of the system;
Figure 5a illustrates a top view of the arm with various links and joints in consonance with the various link lengths and angles (to be) achieved;
Figure 6 shows an isometric view of the system;
Figure 7 shows a top view of the system;
Figure 8 illustrates a face mask detection algorithm;
Figure 9(a) illustrates a flowchart of the face detection algorithm;
Figure 9(b) illustrates a flowchart of the translation algorithm;
Figure 9(c) illustrates a flowchart of the communication algorithm;
Figure 10 illustrates the overall flowchart of the algorithm; and
Figures 11(a), 11(b), 11(c), and 11(d) illustrate various positions of a face for which the arm, of this invention, moves or does not move.
DETAILED DESCRIPTION OF THE ACCOMPANYING DRAWINGS:
With reference to the figure where elements have been given like numerical designations to facilitate the reader’s understanding of the present invention, the preferred embodiments of the present invention are set forth below. The enclosed text and drawings are merely illustrative of the preferred embodiments. Although specific components, materials, configurations and uses are illustrated, it should be understood that the number of variations to the components and to the configuration of those components described herein and in the accompanying figures can be made without changing the scope and the function of the invention set forth herein.
According to this invention, there is provided a contactless mask monitoring and biometric attendance system.
FIGURE 1 illustrates a schematic block diagram of the system of this invention.
FIGURE 2 illustrates a flowchart of the method of this invention.
In at least an embodiment, there is provided a first node, being an entry node, configured with a biometric authentication mechanism (BIM) in order to identify a person based on their confirmed biometric authentication. In at least an embodiment, this biometric authentication mechanism (BIM) is communicably coupled with a second node, being an entry restriction mechanism (ERM) configured to allow a person into a defined environment, only upon confirmed biometric authentication by the biometric authentication mechanism (BIM).
In at least an embodiment, the biometric authentication mechanism (BIM) comprises an iris scanner (IS) with a communicably coupled biometric identity database (DB1) such that when a user’s iris is scanned, in a non-contact manner, it is checked with data from the database to check for authenticity. Further, the iris scanner (IS) is communicably coupled with a time-stamping mechanism (TSM) configured to record a timestamp along with identity of an authenticated user, when authenticity is confirmed by the biometric authentication mechanism (BIM); this time-stamped identity is logged in a timestamp-identity database (DB2).
In at least an embodiment, there is provided a third node, also being an entry node, configured with an imaging module (IM) in order to image a person’s face, at an entry point of the defined environment, in order to check for mask-wearing compliance. In at least an embodiment, this imaging module (IM) is communicably coupled with the second node, being the entry restriction mechanism (ERM) configured to allow a person, into a defined environment, only upon confirmed mask compliance by the imaging module (IM).
The present invention discloses a combination of iris biometric authentication, mask detection, and attendance generation; configured with an automatic robotic arm to seamlessly carry out attendance recording, automatically, for different times in a day, following guidelines for prevention of Covid-19.
Figure 3 shows a flowchart for the algorithm that the arm follows in order to scan the biometric of the user present in the work envelope.
In at least an embodiment, the tasks of ‘mask detection’ and ‘iris recognition’ are implemented using a convolutional neural network model, trained with a database of numerous images. One of the aspects of this invention is that the inventors varied the parameters for training the model to find the best fit for it to work satisfactorily.
In at least an embodiment, the system, of this invention, employs a ‘four degrees of motion’ robotic arm that can automatically adjust itself for the task of iris recognition of a person standing in front of it.
Figure 4 shows a basic electronics block diagram of the system.
Figure 5 shows a realistic 3D rendering of the system.
Figure 6 shows an isometric view of the system.
Figure 7 shows a top view of the system.
In at least an embodiment, the arm is at least a robotic arm having four degrees-of-motion. Typically, this arm is controlled by a microcontroller which acts as a slave to a main processor.
In at least an embodiment, the robotic arm, of this invention, incorporates at least a set of linear movements and at least a set of rotary movements, with the ability to connect multiple links, together, offering a range of motion that covers a cylindrical workspace.
In at least an embodiment, the arm comprises three links extending from a linear motion platform (11), wherein:
- a linear motion platform (11) comprises a linear joint (13a);
- a first link (12a) comprises a rotary joint at its both ends i.e. a first rotary joint (14a) which connects the first link (12a) to the linear motion platform (11) and a second rotary joint (14b) which connects the first link (12a) to a second link (12b),
- a second link (12b) comprises a rotary joint at its both ends i.e. a second rotary joint (14b) which joins the second link (12b) to the first link (12a) and a third rotary joint (14c) which joins the second link (12b) to a third link (12c), and
- a third link (12c) comprises a third rotary joint (14c) with an end effector, the third rotary joint (14c) causing angular displacement of the end effector.
The linear joint has a vertical axis of motion and the rotary joints have an axis of motion in a horizontal plane.
In at least an embodiment, the one linear joint (13a) and three revolute (rotary) joints (14a, 14b, 14c) span a cylindrical work envelope.
Typically, the linear joint (13a) causes linear displacement (Y-axis movement) of the end effector (EE).
Typically, the rotary joints (14a, 14b, 14c) cause angular displacement (X-axis, Z-axis) of the end effector (EE).
Reference numeral SM refers to the corresponding stepper motors which actuate the joints to cause actuation of the links.
Reference numeral D refers to a display.
In a preferred embodiment, the linear motion of the arm is actuated using a first stepper motor coupled to a lead screw setup. This lead screw setup moves, in a vertical linear motion, a platform on which the whole arm is mounted. The platform is supported using smooth rods and corresponding linear guide bearings. The platform houses a second stepper motor which actuates a first revolute joint through a first timing belt and pulley assembly. This first revolute joint, preferably, has a 150° range of motion and is connected to a second link that is supported with flanges and bearings. This second link houses a similar setup with a third stepper motor which actuates a second revolute joint through a second timing belt and pulley assembly. The second revolute joint, preferably, has a 250° range of motion and, on the other end of this second link, the third link is mounted using a flange and bearing. A servo motor is mounted on the third link and is used to angularly displace the end effector i.e. the scanner mount (EE). The servo motor is coupled to the scanner mount flange, holding the end effector, using a third timing belt and pulley assembly. The range of angular displacement of the scanner mount, preferably, is 260°.
It is to be noted that the three-link configuration, as defined by this current invention, is important because having a static end effector (EE) would not work. An end effector holds an imaging module (camera) and a person may approach this invention’s system in an oblique / lateral / side-wise / tilted fashion. In other words, it is not required that a person be collinear with the imaging module. However, it is important that a face of a person, while being imaged / scanned, by the imaging module on the end effector, be collinear with the end effector (hosting the imaging module). An angularly displaceable end effector (EE) effected by the third rotary joint (14c) is what allows the end effector (EE) to be dynamically configurable, move in the z-plane, and be collinear with a face to be scanned.
It is to be noted that the three-link configuration (12a, 12b, 12c), as defined by this current invention, is important because removing the second link (12b) would not work. If the second link (12b) were to be removed, the working envelope drastically reduces (shown by the dotted line in Figure 7). Essentially, the addition of the second link (12b) allows for depth control (Z-axis movement) and allows the end effector (EE) to cover a large work envelope as seen in Figure 7.
In at least an embodiment, the imaging module (IM) is configured to start recording images / video and is configured to capture one frame at a time. The imaging module (IM) is, further, configured to detect a face in one or more captured frames using a mask detection algorithm. The mask detection algorithm uses the captured frame/s, a detection algorithm, and trained datasets in order to output the location and probability of a face in a frame. This algorithm runs a captured face through the detection algorithm, using trained datasets, in order to output a prediction and corresponding probability of a face being with mask or being without mask. These probabilities are then compared against each other and the image is classified into the category which has the relatively larger probability.
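The probability comparison described above can be sketched as follows. This is an illustrative sketch only, not the trained model itself; the `classify_mask()` helper and the example probability values are hypothetical.

```python
# Hypothetical sketch: given the two class probabilities produced by the
# detector for a face crop, classify the frame into the category with the
# relatively larger probability, as described in the text.

def classify_mask(p_with_mask: float, p_without_mask: float) -> str:
    """Return the label whose predicted probability is larger."""
    return "mask" if p_with_mask > p_without_mask else "no_mask"

print(classify_mask(0.91, 0.09))  # a face predicted as wearing a mask
```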
Using this, a bounding box is created, by the imaging module (IM), for the detected face. A first comparator is configured to compare the detected face, in the bounding box, with preset mask detection parameters in order to determine if the face has a mask on it or not. Further, coordinates from the real world are mapped, via a mapper, on to a working envelope and the area of the configured bounding box is computed. For a bounding box, there are four corner coordinates which, together, form the bounding box’s coordinates. In at least a non-limiting exemplary embodiment, the original dimensions of a video stream, being captured, are 400 x 300 (width x height) whereas the working envelope has the dimensions of 30 x 30. Thus, the mapper maps the width and height of the frame to the working envelope dimensions by multiplying the coordinates by a factor of 0.075 and 0.07 respectively.
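A minimal sketch of the mapper described above, using the scale factors 0.075 (width) and 0.07 (height) stated in the text; the function name `map_to_envelope()` is illustrative, not from the source.

```python
# Map pixel coordinates from the 400 x 300 frame onto the working
# envelope, by the factors given in the text (0.075 and 0.07).

def map_to_envelope(x_px: float, y_px: float,
                    kx: float = 0.075, ky: float = 0.07) -> tuple:
    """Scale frame coordinates (pixels) to working-envelope units."""
    return (x_px * kx, y_px * ky)

# Mapping the full frame corner lands at roughly (30, 21) envelope units.
print(map_to_envelope(400, 300))
```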
A second comparator is configured to compare, and check, if computed area of the bounding box is greater than a predefined threshold. If yes, then a co-ordinate determination module is configured to compute co-ordinates, preferably in a Cartesian coordinate system, in order to be fed to a movement control module controlling movement of the arm via corresponding stepper motors. Multiple threshold points were decided based on data derived from observations of placing a variety of faces, at varied distances, from an imaging module (comprising at least a camera). Setting the right threshold ensures that only intentional monitoring is conducted and no resource is wasted in monitoring and checking masks for unintended people. According to a non-limiting exemplary embodiment, the set of thresholds on the area calculated by the bounding box height and width on the working envelope scale were [14, 20, 25, 40, 65, 80]. As the threshold value increases, a user has to stand closer to the setup in order to get their iris scanned.
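The second comparator's area check can be sketched as below, using the threshold set [14, 20, 25, 40, 65, 80] given in the text; `exceeds_threshold()` is an illustrative helper name.

```python
# Hedged sketch of the second comparator: the bounding-box area (on the
# working-envelope scale) is compared against a chosen threshold. A larger
# threshold forces the user to stand closer before a scan is triggered.

THRESHOLDS = [14, 20, 25, 40, 65, 80]  # from the exemplary embodiment

def exceeds_threshold(box_w: float, box_h: float, threshold: float) -> bool:
    """True if the scaled bounding-box area crosses the threshold."""
    return box_w * box_h > threshold

# A 5 x 5 box (area 25) passes the lowest threshold but not the highest.
print(exceeds_threshold(5, 5, THRESHOLDS[0]))   # True  (25 > 14)
print(exceeds_threshold(5, 5, THRESHOLDS[-1]))  # False (25 < 80)
```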
In at least an embodiment, the bounding box coordinates are scaled to constraints of a working envelope in order to determine a midpoint and to, correlatively, determine X and Y coordinates for the arm to position itself in a Cartesian coordinate system. An initialiser is configured to initialise, and store, a starting X-coordinate value and a starting Y-coordinate value. Width and height of the bounding box are determined, in that, a middle point is determined based on the determined width and height of the bounding box. Middle point value/s are added to initialised X-coordinate value/s and Y-coordinate value/s in order to compute final X-coordinate value/s and Y-coordinate value/s. Further, the depth coordinate (Z-coordinate) is computed using inverse kinematics. A computing processor receives, and transmits, the X-coordinate value/s and Z-coordinate value/s which the endpoint of the arm should reach. An angle computation module is configured to compute the angles that the links (12a, 12b, 12c) have to traverse in order to reach a particular X-coordinate and Z-coordinate.
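The midpoint step above can be sketched as follows; `target_xy()` and its argument names are hypothetical, and the example values are arbitrary.

```python
# Illustrative sketch: the bounding-box midpoint is added to the
# initialised starting X/Y values to give the final target coordinates.

def target_xy(start_x: float, start_y: float,
              box_x: float, box_y: float,
              box_w: float, box_h: float) -> tuple:
    """Return final (X, Y) = start + bounding-box midpoint."""
    mid_x = box_x + box_w / 2.0
    mid_y = box_y + box_h / 2.0
    return (start_x + mid_x, start_y + mid_y)

# A 4 x 6 box whose corner is at (10, 8), with a zero starting offset.
print(target_xy(0.0, 0.0, 10.0, 8.0, 4.0, 6.0))  # -> (12.0, 11.0)
```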
FIGURE 5a illustrates a top view of the arm with various links and joints in consonance with the various link lengths and angles (to be) achieved.
Here:
• L1 is the length of the first rotating link, L2 is the length of the second rotating link.
• P (X, Z) is the point to which Robotic arm’s end effector has to traverse.
• R is the distance between the first rotary joint and the point P, i.e. R = √(X² + Z²) …. (1)
• Theta2 is the angle between the second link and a normal passing through the first link, given by:
Theta2 = acos((X² + Z² − L1² − L2²) / (2 · L1 · L2)) …. (2)
• Theta1 is the angle between the first link and the Z axis of the system, given, from geometry, by:
Theta1 = atan(X / Z) − atan((L2 · sin(Theta2)) / (L1 + L2 · cos(Theta2))) …. (3)
From the inverse kinematics, two angle values are obtained: θ1 and θ2.
The first angle value, θ1, is given to a stepper motor which controls the first rotary joint.
The second angle value, θ2, is given to a stepper motor which controls the second rotary joint.
Thus, movement in the X-Z plane is achieved so that the end effector (EE) is at an accurate, collinear, spaced-apart position from a face of a person.
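The inverse-kinematics step above, equations (2) and (3), can be transcribed directly. One deliberate substitution: `atan2` is used in place of `atan(X / Z)` so the computation remains defined when Z is zero; the link lengths here are arbitrary illustrative values.

```python
import math

# Two-link planar inverse kinematics per equations (2) and (3):
# given target (X, Z) and link lengths L1, L2 (same units), return the
# joint angles (theta1, theta2) measured as in the text.

def inverse_kinematics(x: float, z: float, l1: float, l2: float) -> tuple:
    theta2 = math.acos((x * x + z * z - l1 * l1 - l2 * l2)
                       / (2.0 * l1 * l2))
    theta1 = math.atan2(x, z) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Fully stretched along the Z axis: both angles are zero.
print(inverse_kinematics(0.0, 300.0, 150.0, 150.0))  # -> (0.0, 0.0)
```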
Thus, once the coordinates are detected, the arm along with the end effector moves directly to the point where the face is detected based only on the bounding box size and coordinates. The arm moves towards a detected face, automatically.
In at least an embodiment, a serial communication line is configured with the computing processor. The determined X-coordinate value/s, Y-coordinate value/s, and Z-coordinate value/s are converted into a string of the format: X=12.4,Y=22.7,Z=17.7; where all the values are of 4 characters including the decimal point. The computing processor receives the string values and decodes them into values of X-coordinate, Y-coordinate, and Z-coordinate. Simultaneously, these values are also converted from character to float in order to be used for calculations. A structure is defined in the computing processor that contains the current status of each of the motors at all times.
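A sketch of the serial message format described above, with each value rendered as 4 characters including the decimal point; the `encode()`/`decode()` helpers are illustrative names, not from the source.

```python
# Encode/decode the X=12.4,Y=22.7,Z=17.7 message format described in the
# text. "%4.1f" yields exactly 4 characters for values like 12.4.

def encode(x: float, y: float, z: float) -> str:
    return "X=%4.1f,Y=%4.1f,Z=%4.1f" % (x, y, z)

def decode(msg: str) -> dict:
    """Split on commas and '=' signs, converting each value to float."""
    return {k: float(v) for k, v in
            (part.split("=") for part in msg.split(","))}

line = encode(12.4, 22.7, 17.7)
print(line)          # X=12.4,Y=22.7,Z=17.7
print(decode(line))  # {'X': 12.4, 'Y': 22.7, 'Z': 17.7}
```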
In at least an embodiment, it is to be noted that, for movement of the arm in the Y-direction (up movement, down movement), one revolution of the shaft leads to, preferably, 8 mm of movement in the Y-direction. Thus, a 360 degree angular displacement of the shaft corresponds to the pre-defined 8 mm of movement in the Y-direction. Thus, to achieve a predefined amount of linear displacement, the number of motor steps required, for the motor governing Y-direction movement, will be the distance (preferably, in millimeters)*1000.
For the remaining two motors (the X-direction governing motor and the Z-direction governing motor), the system actuates the joints through angles instead of linear motion. The ranges of motion for the remaining two motors are, preferably, 150 degrees and 250 degrees, respectively.
In at least an embodiment, once the arm moves to a determined location (as per determined X-coordinate, determined Y-coordinate, determined Z-coordinate), it sends back a confirmation signal to the computing processor. After this confirmation signal is received by the computing processor, a flag is released and a next frame is captured, by the imaging module (IM), where the entire procedure is repeated.
Linear motion calculations (Joint 1):
The lead screw used in the system has following specifications
• Pitch (p) = 2mm
• No. Of Start (N) = 4
• Lead = p * N = 8 mm
Hence, for one rotation of the lead screw, the platform attached to it will move by 8 mm.
The Nema17 steppers used in the system require 400 steps to make a complete 360° movement; hence, in order to traverse “Y” mm linearly, the motor has to move the number of steps calculated as given below:
Number of steps: Y *(400/8)
Number of steps for Y cm traversal: (Y*10) *(400/8)
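The step-count arithmetic above can be transcribed directly: lead = pitch x number of starts = 8 mm/rev, 400 steps per revolution, so traversing Y mm requires Y x (400 / 8) steps. The helper name `steps_for_mm()` is illustrative.

```python
# Lead-screw step calculation per the specifications in the text.

PITCH_MM = 2.0        # lead screw pitch
STARTS = 4            # number of starts
STEPS_PER_REV = 400.0  # Nema17 steps per 360 degrees, as stated
LEAD_MM = PITCH_MM * STARTS  # 8 mm of travel per revolution

def steps_for_mm(y_mm: float) -> float:
    """Motor steps needed to traverse y_mm of linear travel."""
    return y_mm * (STEPS_PER_REV / LEAD_MM)

print(steps_for_mm(8.0))   # 400.0 - one full revolution
print(steps_for_mm(10.0))  # 500.0 - i.e. 1 cm of travel
```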
Rotary Joints calculation (Joint 2, 3, and 4):
Joint 2 (First rotary joint):
The timing pulleys and belts used are of 2 mm pitch.
Reduction ratio: Pulley on the motor has 60 teeth and the pulley on the flange connected on the other end has 40 teeth, hence reduction of 2:3 is achieved.
Joint 3 (Second rotary joint):
The timing pulleys and belts used are of 2 mm pitch.
Reduction ratio: Pulley on the motor has 60 teeth and the pulley on the flange connected on the other end has 40 teeth, hence reduction of 2:3 is achieved.
Joint 4 (Fourth rotary joint / Servo joint):
The timing pulleys and belts used are of 2 mm pitch.
Reduction ratio: Pulley on the motor has 60 teeth and the pulley on the flange connected on the other end has 60 teeth, hence reduction of 1:1 is achieved.
Linear motion calculations:
According to a non-limiting preferred embodiment, the arm, of this invention, uses a lead screw of lead 8 mm/rev to convert stepper motor rotational motion into linear motion. The stepper motor used for actuation has a holding torque of 5.6 Kg-cm. Considering the efficiency of the lead screw to be 0.5, the weight this setup can pull up is more than 20 Kg. With the total weight of all the movable parts on the arm, including the scanner (which can range from 250 gm to 1 Kg), the weight that the lead screw must pull is about 1.8 Kg to 2.8 Kg, which is well within the motor capacity. If the system uses this motor at its ideal power output speed, i.e., 600 rpm, the arm will achieve a linear motion with 80 mm/s velocity. This can cover the total linear traversal (230 mm) in about 2.9 seconds.
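The timing arithmetic above can be reproduced as a quick check: at 600 rpm with an 8 mm lead, the platform moves 600 x 8 mm per minute, i.e. 80 mm/s, so the full 230 mm stroke takes roughly 2.8-2.9 s. The helper name is illustrative.

```python
# Lead-screw linear speed from motor rpm and lead, per the text.

def linear_speed_mm_s(rpm: float, lead_mm: float) -> float:
    """Platform velocity in mm/s for a given motor rpm and screw lead."""
    return rpm * lead_mm / 60.0

v = linear_speed_mm_s(600, 8)
print(v)        # 80.0 mm/s
print(230 / v)  # 2.875 s for the full 230 mm stroke
```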
Rotational motion (angular displacement) calculations:
According to a non-limiting preferred embodiment, all timing belts are GT2 type and use pulleys of the same diameter at both ends; hence, the transmission ratio is 1:1. The first revolute joint hence rotates at the same speed as the motor, which is 100 rpm for higher torque output. Hence, the tangential velocity of the first link will be 1.57 m/s. Since the second link has a similar timing-belt pulley system, the tangential velocity of the second link will also be 1.57 m/s. The total final tangential velocity of the arm will be 3.66 m/s. This can cover the rotational traversal (arc length of 916.30 mm) in 0.25 seconds.
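The tangential-velocity figure above can be checked with v = (2π·rpm/60)·r. A link radius of 150 mm is an assumed value here, chosen because it is consistent with the 1.57 m/s figure in the text; it is not stated explicitly in the source.

```python
import math

# Tip speed of a rotating link: angular velocity (rad/s) times radius (m).

def tip_speed_m_s(rpm: float, radius_m: float) -> float:
    return 2.0 * math.pi * rpm / 60.0 * radius_m

# 100 rpm with an assumed 150 mm link gives ~1.57 m/s, matching the text.
print(round(tip_speed_m_s(100, 0.15), 2))  # 1.57
```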
Figure 8 illustrates a face mask detection algorithm.
Figure 9(a) illustrates a flowchart of the face detection algorithm;
Figure 9(b) illustrates a flowchart of the translation algorithm;
Figure 9(c) illustrates a flowchart of the communication algorithm;
Figure 10 illustrates the overall flowchart of the algorithm.
The processor detects faces in a frame captured by a scanner located on the arm, of this invention, and checks if those faces are wearing masks or not. If a person wearing a mask sits inside the work envelope of the arm, the processor will send the coordinates of the ‘face’ that is detected to the microcontroller. This coordinate data consists of the X-Y position of the centre of the ROI of the face along with the Z (depth) value calculated from the area of the ROI of the face. The microcontroller converts these X-Y coordinates from the ROI to the real world by following a scale and computes the required joint actuations, using inverse kinematics, to reach the specified location in the real world. The microcontroller will then control the motors through motor drivers in a closed loop feedback system. When the end effector reaches the required location, the arm will wait until the biometric is scanned properly. Once the biometric is scanned, the processor will command the arm to go back to its home position.
According to a non-limiting exemplary embodiment, Figures 11(a), 11(b), 11(c), and 11(d) illustrate various positions of a face for which the arm, of this invention, moves or does not move.
Figure 11(a) illustrates ROI of face less than threshold – this image was captured before arm movement.
Figure 11(b) illustrates ROI of face less than threshold – hence, no movement is done by the arm.
Figure 11(c) illustrates ROI of face greater than threshold – this image was captured before arm movement.
Figure 11(d) illustrates ROI of face greater than threshold – hence, the arm reaches the face for iris scanning.
The system, and method, of this invention can be used in various defined environments such as educational institutes, offices, industrial units, and the like in order to record attendance of students, employees, staff, workforce, and the like personnel; seamlessly, without contact. The attendance record generated can be accessed easily by employers and faculty members.
The current invention can be used:
1. In industries for authentication;
2. In high security areas where multiple biometrics can be scanned for the same person in order to authenticate their entrance;
3. For the differently abled, who can scan the biometric most convenient to them.
As per one of the objects of invention, interpersonal contact and contact with public surfaces is minimised or eliminated to prevent disease transmission as there is no human intervention to monitor personally and take attendance and no contact with the surface of the arm that scans the biometric.
The present invention can be used in organisations and institutes, which are defined environments requiring daily attendance, where having an independent system that does monitoring and attendance is a necessity. Multiple modules can be deployed in a single institute at the entrances of classrooms and labs.
The current invention can be easily modified to do secondary tasks such as opening a gate when a person in the frame is wearing a mask. The gathered attendance can be added to the cloud for easy access by the authorities. Different types of scanners can be mounted on the end effector to increase its applicability.
The TECHNICAL ADVANCEMENT, of this invention, lies in providing a seamless system and method, using a robotic arm with four degrees of motion, wherein recordal of attendance and compliance in wearing masks is authenticated in a non-contact manner; thus, human-to-human interaction is eliminated.
The TECHNICAL ADVANCEMENT, of this invention, also lies in achieving the following TECHNICAL ADVANTAGES:
- Collinearity in alignment of face to be scanned with an end effector – by provisioning of an angularly displaceable end effector
- Varying Depth i.e. movement in Z-plane for the end effector – by provisioning of a second link between a first link and a third link
- Wider working envelope allowing more coverage in the X-Z plane – by provisioning of a second link between a first link and a third link.
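The geometry behind these advantages (one vertical linear joint plus links rotating in the X-Z plane) can be illustrated with a simple forward-kinematics sketch. The link lengths and joint-angle names below are assumptions for illustration; the specification does not fix numeric dimensions.

```python
import math

# Forward-kinematics sketch for the arm described above: one vertical
# linear joint and three rotary joints. Link lengths are assumed.

L1, L2 = 0.30, 0.25  # assumed link lengths in metres


def end_effector_pose(y_lift, theta1, theta2, theta3):
    """Return (x, y, z, pitch) of the end effector.

    y_lift -- vertical travel of the linear joint (m)
    theta1 -- first rotary joint angle (rad), at the platform
    theta2 -- second rotary joint angle (rad), between first and second links
    theta3 -- third rotary joint angle (rad), tilting the end effector
    """
    # Planar (X-Z) position contributed by the two arm links
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    z = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    # Vertical position comes directly from the linear joint
    y = y_lift
    # End-effector pitch accumulates all three rotations, giving the
    # angular displacement that aligns the second imaging module with a face
    pitch = theta1 + theta2 + theta3
    return x, y, z, pitch
```

The second link's contribution (`L2` terms) is what widens the X-Z working envelope and varies depth, while `theta3` supplies the end-effector angular displacement used to achieve collinearity with the face.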
While this detailed description has disclosed certain specific embodiments for illustrative purposes, various modifications will be apparent to those skilled in the art which do not constitute departures from the spirit and scope of the invention as defined in the following claims, and it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the invention and not as a limitation.
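As a final illustration of the bounding-box handling described in this specification, the comparator that checks whether the computed box area exceeds a threshold before feeding coordinates to the movement control module can be sketched as below. The threshold value and coordinate convention are assumptions for illustration.

```python
# Hedged sketch of the second comparator: the face bounding box is
# checked against an area threshold; only then are coordinates computed
# for the arm's movement control module. Values are assumed.

AREA_THRESHOLD = 5000  # assumed minimum box area in pixels^2


def box_centre_if_large_enough(x, y, w, h):
    """Return the (cx, cy) centre of the bounding box when its area
    exceeds the threshold, else None (subject too far to track)."""
    if w * h > AREA_THRESHOLD:
        return (x + w / 2, y + h / 2)
    return None
```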
CLAIMS:
WE CLAIM,
1. A contactless mask monitoring and biometric attendance system, said system comprising:
- a first imaging module (IM) configured to determine a face of a person, standing before said system, to obtain an imaged face, said first imaging module (IM) being configured to determine a bounding box from said imaged face;
- a processor configured to process x-coordinate data, y-coordinate data, and z-coordinate data from said determined bounding box;
- a ‘four degrees of motion’ robotic arm configured to automatically adjust said imaging module (IM) for the task of biometric recognition of a person standing in front of said system, in that, said arm comprising:
o three links (12a, 12b, 12c) extending from an operatively vertically displaceable linear motion platform (11), wherein:
▪ said linear motion platform (11) comprises a linear joint (13a) causing operatively vertical displacement of said linear motion platform;
▪ a first link (12a) comprises a rotary joint at both its ends, in that, a first rotary joint (14a) connects said first link (12a) to said linear motion platform (11) and a second rotary joint (14b) connects said first link (12a) to a second link (12b),
▪ said second link (12b) comprises a rotary joint at both its ends, in that, a second rotary joint (14b) connects said second link (12b) to said first link (12a) and a third rotary joint (14c) connects said second link (12b) to a third link (12c), and
▪ a third link (12c) comprises a third rotary joint (14c) with an end effector hosting a second imaging module, said third rotary joint (14c) causing angular displacement of said end effector; and
- said imaging modules being configured to start recording images / video and to capture one frame at a time once a subject’s face is collinear to said second imaging module.
2. The system as claimed in claim 1 wherein, said one linear joint (13a) and said three revolute (rotary) joints (14a, 14b, 14c) spanning a cylindrical work envelope, in that:
- said linear joint (13a) causing linear displacement (Y-axis movement) of said end effector (EE); and
- said rotary joints (14a, 14b, 14c) causing angular displacement (X-axis, Z-axis) of said end effector (EE).
3. The system as claimed in claim 1 wherein, said system comprising one or more stepper motors coupled to each of said joints to cause actuation of the links via said joints.
4. The system as claimed in claim 1 wherein, said system comprising one or more stepper motors coupled to each of said joints to cause actuation of the links via said joints, characterized in that:
- a first stepper motor being coupled to said first linear joint, being a lead screw setup, causing operative linear vertical displacement of said arm;
- a second stepper motor actuating a first rotary joint, through a first timing belt and pulley assembly;
- a third stepper motor actuating said second rotary joint, through a second timing belt and pulley assembly; and
- a servo motor actuating said third rotary joint, through a third timing belt and pulley assembly, for causing angular displacement of said end effector.
5. The system as claimed in claim 1 wherein, said system comprising one or more stepper motors coupled to each of said joints to cause actuation of the links via said joints, characterized in that:
- a first stepper motor being coupled to said first linear joint, being a lead screw setup, causing operative linear vertical displacement of said arm;
- a second stepper motor actuating a first rotary joint, through a first timing belt and pulley assembly, said first rotary joint, having a 180° range of motion;
- a third stepper motor actuating said second rotary joint, through a second timing belt and pulley assembly, said second rotary joint having a 250° range of motion; and
- a servo motor actuating said third rotary joint, through a third timing belt and pulley assembly, for causing angular displacement of said end effector, said third rotary joint having a 260° range of motion.
6. The system as claimed in claim 1 wherein, a mask detection module being configured to detect presence of a mask, upon successful detection, once said subject’s face is collinear to said second imaging module, said successful detection being determined by a first comparator configured to compare said detected face, in said bounding box, with preset mask detection parameters in order to determine if the face has a mask on it or not.
7. The system as claimed in claim 1 wherein, a mask detection module being configured to mark attendance, upon successful detection, once said subject’s face is collinear to said second imaging module.
8. The system as claimed in claim 1 wherein, a second comparator being configured to compare, and check, if computed area of said bounding box is greater than a predefined threshold and further configured to compute co-ordinates in order to be fed to a movement control module controlling movement of said robotic arm via corresponding stepper motors.
Dated this 23rd day of June, 2022
CHIRAG TANNA
of INK IDEE
APPLICANT’S PATENT AGENT
REGN. NO. IN/PA – 1785
| # | Name | Date |
|---|---|---|
| 1 | 202121046553-PROVISIONAL SPECIFICATION [12-10-2021(online)].pdf | 2021-10-12 |
| 2 | 202121046553-PROOF OF RIGHT [12-10-2021(online)].pdf | 2021-10-12 |
| 3 | 202121046553-POWER OF AUTHORITY [12-10-2021(online)].pdf | 2021-10-12 |
| 4 | 202121046553-FORM-8 [12-10-2021(online)].pdf | 2021-10-12 |
| 5 | 202121046553-FORM FOR SMALL ENTITY(FORM-28) [12-10-2021(online)].pdf | 2021-10-12 |
| 6 | 202121046553-FORM 3 [12-10-2021(online)].pdf | 2021-10-12 |
| 7 | 202121046553-FORM 1 [12-10-2021(online)].pdf | 2021-10-12 |
| 8 | 202121046553-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [12-10-2021(online)].pdf | 2021-10-12 |
| 9 | 202121046553-EVIDENCE FOR REGISTRATION UNDER SSI [12-10-2021(online)].pdf | 2021-10-12 |
| 10 | 202121046553-EDUCATIONAL INSTITUTION(S) [12-10-2021(online)].pdf | 2021-10-12 |
| 11 | 202121046553-DRAWINGS [12-10-2021(online)].pdf | 2021-10-12 |
| 12 | 202121046553-FORM-9 [23-06-2022(online)].pdf | 2022-06-23 |
| 13 | 202121046553-FORM 18 [23-06-2022(online)].pdf | 2022-06-23 |
| 14 | 202121046553-ENDORSEMENT BY INVENTORS [23-06-2022(online)].pdf | 2022-06-23 |
| 15 | 202121046553-DRAWING [23-06-2022(online)].pdf | 2022-06-23 |
| 16 | 202121046553-COMPLETE SPECIFICATION [23-06-2022(online)].pdf | 2022-06-23 |
| 17 | Abstract.jpg | 2022-07-08 |
| 18 | 202121046553-FER.pdf | 2022-11-22 |
| 19 | 202121046553-FER_SER_REPLY [24-01-2023(online)].pdf | 2023-01-24 |
| 20 | 202121046553-COMPLETE SPECIFICATION [24-01-2023(online)].pdf | 2023-01-24 |
| 21 | 202121046553-CLAIMS [24-01-2023(online)].pdf | 2023-01-24 |
| 22 | 202121046553-PatentCertificate01-03-2024.pdf | 2024-03-01 |
| 23 | 202121046553-IntimationOfGrant01-03-2024.pdf | 2024-03-01 |
| 1 | SearchHistory(90)E_22-11-2022.pdf | 2022-11-22 |