Abstract: The present disclosure provides a security system 100 that is installed at a region of interest (ROI), such as a car or a building. The security system 100 includes one or more first sensors 102, a processing unit 104, an image capturing unit 106, and a control unit 108. The image capturing unit 106 activates and captures images of a location whose kinetic parameters are detected by the first sensors 102. Based on positive authentication of the captured images, the control unit 108 is operated to provide access to a human subject in the ROI. However, in case of negative authentication of the captured images, the security system 100 sends warning signals to mobile devices of registered users. If a registered user authenticates the human subject within a specific time period, the human subject can access the ROI; otherwise, the security system 100 generates an alarm.
TECHNICAL FIELD
[0001] The present disclosure relates to the field of security. More particularly, the present
disclosure relates to a security system.
BACKGROUND
[0002] Background description includes information that may be useful in understanding
the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[0003] With an increase in the number of incidents related to robbery and crime, smart
safety/security devices and techniques have become necessary. Most people today have hectic schedules and must manage multiple things simultaneously, such as attending meetings, office work, parties, and numerous projects and tasks at hand. With such a busy routine, people cannot take out time to look after their belongings, such as a house or a car. Moreover, it is not possible to constantly keep an eye on everything and remain alert at all times in order to protect the belongings. Installation of security devices has therefore become essential for protection of the belongings.
[0004] Most security devices and techniques comprise modules, such as cameras and
other monitoring devices, which are required to be kept in an activated state at all times in order to keep a check on the belongings. However, such devices consume a lot of power to remain in a constant and continuously activated state throughout the day. This also increases the rate of depreciation of the devices, reduces their efficiency, requires many unnecessary processing steps, and consumes a large amount of the memory associated with the devices.
[0005] There is, therefore, a need in the art for an improved, efficient, cost-
effective, and reliable system that overcomes the above-mentioned problems and provides better security with minimal power consumption.
OBJECTS OF THE PRESENT DISCLOSURE
[0006] Some of the objects of the present disclosure, which at least one embodiment herein
satisfies are as listed herein below.
[0007] It is an object of the present disclosure to provide a system for monitoring and
providing security to a region where it is installed.
[0008] It is another object of the present disclosure to provide a system that is able to
distinguish between a human subject and a non-human subject.
[0009] It is another object of the present disclosure to provide a system for authenticating
a person.
[0010] It is another object of the present disclosure to provide a system for facilitating
authentication through face-recognition mechanism.
[0011] It is another object of the present disclosure to provide a system for distinguishing
between voluntary and involuntary movements/ actions around the region where it is installed, and
correspondingly alerting a registered person in case of voluntary movements/ actions.
[0012] It is another object of the present disclosure to provide a system that consumes
minimal electrical power.
[0013] It is another object of the present disclosure to provide an improved, reliable,
efficient, cost-effective, and easily available system.
SUMMARY
[0014] The present disclosure relates to the field of security. More particularly, the present
disclosure relates to a security system.
[0015] An aspect of the present disclosure pertains to a security system comprising: one or
more first sensors systematically placed at pre-configured locations at a region of interest (ROI), to sense one or more kinetic parameters at the ROI; an image capturing unit to capture one or more images; a processing unit operatively coupled to the one or more first sensors and the image capturing unit, the processing unit comprising one or more processors coupled with a memory, the memory storing instructions executable by the one or more processors and configured to: determine a location associated with at least one of the one or more first sensors based on the sensed one or more kinetic parameters; generate a first set of signals to capture one or more images of the determined location of the at least one of the one or more first sensors; extract facial
attributes of at least one human subject from the captured one or more images; and responsive to
positive matching of the extracted facial attributes with a first dataset of pre-stored facial attributes,
generate an authentication signal indicative of authentication of the at least one human subject;
and a control unit configured at the ROI, and operatively coupled to the processing unit, such that
the generated authentication signal enables the control unit to provide access to the ROI.
[0016] In an aspect, the facial attributes may comprise any or a combination of shape,
colour, and texture of a face of the at least one human subject.
[0017] In an aspect, the one or more first sensors may be any or a combination of motion
sensor, force sensor, and pressure sensor, and the kinetic parameters may be any or a combination
of movement, exerted force, and exerted pressure.
[0018] In an aspect, the processing unit may be configured to generate a set of warning
signals responsive to negative matching of the extracted facial attributes with the first dataset, and
wherein the generated set of warning signals may comprise any or a combination of the detected
kinetic parameters and the one or more captured images, and wherein the generated set of warning
signals may be transmitted to one or more mobile devices operatively coupled to the processing
unit.
[0019] In an aspect, at least one of the one or more mobile devices may generate a second
set of signals in response to the set of warning signals, and wherein the processing unit may be
configured to authenticate the at least one human subject based on the generated second set of
signals.
[0020] In an aspect, the processing unit may be configured to identify the at least one
human subject from the one or more captured images, and generate a corresponding signal based
on the identification of the at least one human subject.
[0021] In an aspect, the system may comprise an input module operatively coupled to the
processing unit to receive an authentication code, wherein the received authentication code may
be authenticated by comparing the received authentication code with a second dataset comprising
one or more pre-configured authentication codes.
[0022] In an aspect, the input module may be any or a combination of keyboard,
touchscreen, biometric module, laptop and computer, and wherein the authentication code may
comprise any or a combination of an encrypted code, a pin code, biometric data, and a real time
password.
[0023] In an aspect, the processing unit may be configured to generate a set of alarm signals
when the authentication code is not received within a first pre-determined duration of time.
[0024] In an aspect, the image capturing unit may be configured to switch to energy-saving
mode after a second pre-determined duration of time.
BRIEF DESCRIPTION OF THE DRAWINGS
[0025] The accompanying drawings are included to provide a further understanding of the
present disclosure, and are incorporated in and constitute a part of this specification. The drawings
illustrate exemplary embodiments of the present disclosure and, together with the description,
serve to explain the principles of the present disclosure.
[0026] The diagrams are for illustration only, which thus is not a limitation of the present
disclosure, and wherein:
[0027] FIG. 1 illustrates exemplary block diagram of the proposed system to illustrate its
overall working in accordance with an embodiment of the present disclosure.
[0028] FIG. 2 illustrates exemplary processing unit in accordance with an embodiment of
the present disclosure.
DETAILED DESCRIPTION
[0029] The following is a detailed description of embodiments of the disclosure depicted
in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims.
[0030] Various terms as used herein are shown below. To the extent a term used in a claim
is not defined below, it should be given the broadest definition persons in the pertinent art have
given that term as reflected in printed publications and issued patents at the time of filing.
[0031] In some embodiments, the numerical parameters set forth in the written description
and attached claims are approximations that can vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary
rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the
broad scope of some embodiments of the invention are approximations, the numerical values set
forth in the specific examples are reported as precisely as practicable. The numerical values
presented in some embodiments of the invention may contain certain errors necessarily resulting
from the standard deviation found in their respective testing measurements.
[0032] As used in the description herein and throughout the claims that follow, the
meaning of "a," "an," and "the" includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise.
[0033] The recitation of ranges of values herein is merely intended to serve as a shorthand
method of referring individually to each separate value falling within the range. Unless otherwise
indicated herein, each individual value is incorporated into the specification as if it were
individually recited herein. All methods described herein can be performed in any suitable order
unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and
all examples, or exemplary language (e.g. "such as") provided with respect to certain embodiments
herein is intended merely to better illuminate the invention and does not pose a limitation on the
scope of the invention otherwise claimed. No language in the specification should be construed
as indicating any non-claimed element essential to the practice of the invention.
[0034] Groupings of alternative elements or embodiments of the invention disclosed herein
are not to be construed as limitations. Each group member can be referred to and claimed individually or in any combination with other members of the group or other elements found herein. One or more members of a group can be included in, or deleted from, a group for reasons of convenience and/or patentability. When any such inclusion or deletion occurs, the specification is herein deemed to contain the group as modified thus fulfilling the written description of all groups used in the appended claims.
[0035] The present disclosure relates to the field of security. More particularly, the present
disclosure relates to a security system.
[0036] According to an aspect, the present disclosure pertains to a security system
including: one or more first sensors systematically placed at pre-configured locations at a region of interest (ROI), to sense one or more kinetic parameters at the ROI; an image capturing unit to capture one or more images; a processing unit operatively coupled to the one or more first sensors
and the image capturing unit, the processing unit including one or more processors coupled
with a memory, the memory storing instructions executable by the one or more processors and
configured to: determine a location associated with at least one of the one or more first sensors
based on the sensed one or more kinetic parameters; generate a first set of signals to capture one
or more images of the determined location of the at least one of the one or more first sensors;
extract facial attributes of at least one human subject from the captured one or more images; and
responsive to positive matching of the extracted facial attributes with a first dataset of pre-stored
facial attributes, generate an authentication signal indicative of authentication of the at least one
human subject; and a control unit configured at the ROI, and operatively coupled to the processing
unit, such that the generated authentication signal enables the control unit to provide access to the
ROI.
[0037] In an embodiment, the facial attributes can include any or a combination of shape,
colour, and texture of a face of the at least one human subject.
[0038] In an embodiment, the one or more first sensors can include any or a combination
of motion sensor, force sensor, and pressure sensor, and the kinetic parameters can include any or
a combination of movement, exerted force, and exerted pressure.
[0039] In an embodiment, the processing unit can be configured to generate a set of
warning signals responsive to negative matching of the extracted facial attributes with the first
dataset, and wherein the generated set of warning signals can include any or a combination of the
detected kinetic parameters and the one or more captured images, and wherein the generated set
of warning signals can be transmitted to one or more mobile devices operatively coupled to the
processing unit.
[0040] In an embodiment, at least one of the one or more mobile devices can generate a
second set of signals in response to the set of warning signals, and wherein the processing unit can
be configured to authenticate the at least one human subject based on the generated second set of
signals.
[0041] In an embodiment, the processing unit can be configured to identify the at least one
human subject from the one or more captured images, and generate a corresponding signal based
on the identification of the at least one human subject.
[0042] In an embodiment, the system can include an input module operatively coupled to
the processing unit to receive an authentication code, wherein the received authentication code can
be authenticated by comparing the received authentication code with a second dataset comprising one or more pre-configured authentication codes.
[0043] In an embodiment, the input module can be any or a combination of keyboard,
touchscreen, biometric module, laptop and computer, and wherein the authentication code can include any or a combination of an encrypted code, a pin code, biometric data, and a real time password.
[0044] In an embodiment, the processing unit can be configured to generate a set of alarm
signals when the authentication code is not received within a first pre-determined duration of time.
[0045] In an embodiment, the image capturing unit can be configured to switch to energy-
saving mode after a second pre-determined duration of time.
[0046] FIG. 1 illustrates exemplary block diagram of the proposed system to illustrate its
overall working in accordance with an embodiment of the present disclosure.
[0047] As illustrated in FIG. 1, in an embodiment, the proposed
system 100 includes one or more first sensors 102-1, 102-2... 102-N (collectively referred to as first sensors 102, and individually referred to as first sensor 102, herein). The first sensors 102 can be any or a combination of a motion sensor, a force sensor, a pressure sensor, and the like. The first sensors 102 can be systematically placed at pre-configured locations at a region of interest (ROI), and can be configured to sense one or more kinetic parameters at the ROI. The kinetic parameters can be any or a combination of movement, exerted force, exerted pressure, and the like.
[0048] In an embodiment, the proposed system 100 includes a
processing unit 104. The processing unit 104 can be operatively coupled to the first sensors 102, and can determine a location associated with at least one of the first sensors 102 based on the sensed one or more kinetic parameters. The processing unit 104 can correspondingly generate a first set of signals based on the determined location of the at least one of the first sensors 102. In an illustrative embodiment, the processing unit 104 can generate the first set of signals when the sensed one or more kinetic parameters exceed pre-defined thresholds, so that voluntary actions can be distinguished from involuntary actions.
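The threshold comparison described in this paragraph can be sketched as follows. This is a minimal illustration only: the sensor identifiers, kinetic parameter names, and threshold values below are assumptions for the example, not values stated in the disclosure.

```python
# Illustrative sketch: distinguishing voluntary from involuntary actions by
# comparing sensed kinetic parameters against pre-defined thresholds.
# Parameter names and threshold values are assumed for illustration.

THRESHOLDS = {"movement": 0.5, "force": 10.0, "pressure": 5.0}

def first_set_of_signals(readings):
    """Return IDs of sensors whose readings exceed any pre-defined threshold."""
    triggered = []
    for sensor_id, params in readings.items():
        if any(params.get(name, 0) > limit for name, limit in THRESHOLDS.items()):
            triggered.append(sensor_id)
    return triggered

# A passer-by (involuntary contact) stays below threshold; a forced-entry
# attempt (voluntary action) exceeds it.
readings = {
    "sensor_102_1": {"movement": 0.1, "force": 2.0},   # person passing by
    "sensor_102_2": {"movement": 0.9, "force": 25.0},  # prying at a door
}
```

In this sketch, only `sensor_102_2` would cause the first set of signals to be generated, mirroring the voluntary/involuntary distinction of paragraph [0048].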
[0049] In an embodiment, the proposed system 100 includes an image
capturing unit 106 to capture one or more images. The image capturing unit 106 can be operatively coupled to the processing unit 104, and can capture one or more images based on the generated
first set of signals. In an illustrative embodiment, the image capturing unit 106 can be positioned
at a pre-determined height, which can be any or a combination of height of shoulder, height of
neck, and height of face of a normal human subject. In another illustrative embodiment, the image
capturing unit 106 can be configured at a hidden place at the ROI, and can be positioned in a
manner, so that the image capturing unit 106 can cover a wide field of view. In yet another
illustrative embodiment, the image capturing unit 106 can be a rotating camera, which can be
actuated, when the first set of signals is generated by the processing unit 104 based on detection
of the kinetic parameters by the at least one of the first sensors 102. The image capturing unit 106
can rotate towards the location of the at least one of the first sensors 102, which is determined by
the processing unit 104, and can capture the one or more images associated with the determined
location. In an illustrative embodiment, the image capturing unit 106 can be configured to
automatically switch to an energy-saving mode when not in use, that is, when the first set of signals
is not transmitted to the image capturing unit 106 through the processing unit 104.
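The rotating-camera behaviour and energy-saving mode described in paragraph [0049] can be sketched as a small state machine. The sensor-to-angle mapping and the idle timeout below are illustrative assumptions, not values from the disclosure.

```python
class RotatingCamera:
    """Sketch of the image capturing unit 106: it wakes on a first set of
    signals, rotates toward the determined sensor location, and returns to
    an energy-saving mode after an idle timeout (values are assumed)."""

    SENSOR_ANGLES = {"sensor_102_1": 0, "sensor_102_2": 90}  # degrees, assumed

    def __init__(self, idle_timeout=30.0):
        self.mode = "energy_saving"
        self.angle = 0
        self.idle_timeout = idle_timeout
        self._last_signal = None

    def on_first_signal(self, sensor_id, now):
        """Activate and rotate toward the determined sensor location."""
        self.mode = "active"
        self.angle = self.SENSOR_ANGLES.get(sensor_id, self.angle)
        self._last_signal = now

    def tick(self, now):
        """Fall back to energy-saving mode when no signal arrives in time."""
        if self.mode == "active" and now - self._last_signal > self.idle_timeout:
            self.mode = "energy_saving"
```

For example, a signal from `sensor_102_2` would rotate the camera to 90 degrees and activate it; after the idle timeout elapses with no further signal, the camera reverts to energy-saving mode.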
[0050] In an embodiment, the processing unit 104 can facilitate extraction of facial
attributes of at least one human subject from the captured one or more images. The facial attributes can be any or a combination of shape, colour, and texture of a face of the at least one human subject, and the like. The extracted facial attributes can be compared with a first dataset of pre-stored facial attributes, and responsive to positive matching of the extracted facial attributes with the first dataset, the processing unit 104 can generate an authentication signal, which can be indicative of authentication of the at least one human subject. The pre-stored facial attributes in the first dataset can pertain to one or more registered users, or the owner of the proposed system 100 and their relatives. The facial attributes can be stored in the first dataset at the time of initialization, and can be appended multiple times afterwards by entering a security key, such as a password, through an input module of the proposed system 100.
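The matching of extracted facial attributes against the first dataset can be sketched as a nearest-match comparison. A real system would use a trained face-embedding model; the attribute vectors, user names, and distance threshold here are assumptions chosen for illustration.

```python
# Illustrative sketch: comparing extracted facial attributes with the first
# dataset of pre-stored attributes. Vectors and threshold are assumptions.

MATCH_THRESHOLD = 0.25  # assumed maximum distance for a positive match

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def authenticate(extracted, first_dataset):
    """Return (user_id, True) on positive matching, (None, False) otherwise."""
    for user_id, stored in first_dataset.items():
        if euclidean(extracted, stored) <= MATCH_THRESHOLD:
            return user_id, True
    return None, False

# Hypothetical pre-stored attribute vectors for registered users.
first_dataset = {"owner": [0.1, 0.8, 0.3], "relative": [0.7, 0.2, 0.5]}
```

A positive match would drive the authentication signal of paragraph [0050]; a negative match would instead trigger the warning-signal path of paragraph [0052].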
[0051] In an embodiment, the proposed system 100 includes a control
unit 108. The control unit 108 can be operatively coupled to the processing unit 104, can be
controlled by the generated authentication signal, and can provide access to the ROI. The control
unit 108 can be any or a combination of an actuator, a digital lock, and the like.
[0052] In an embodiment, when the extracted facial attributes do not match with the first
dataset, the processing unit 104 can generate a set of warning signals, where the generated set of warning signals can include any or a combination of the detected kinetic parameters and the one
or more captured images. The generated set of warning signals can be transmitted to one or more
mobile devices, which can belong to the one or more registered users or the owner, and which can be
operatively coupled to the processing unit 104 through an app, a network, WiFi, and the like.
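The warning-signal payload of paragraph [0052] bundles the detected kinetic parameters with the captured images and addresses it to each registered mobile device. A minimal sketch follows; the field names and device identifiers are illustrative assumptions.

```python
import json

def build_warning_signals(kinetic_params, image_paths, registered_devices):
    """Sketch of the set of warning signals sent on negative matching:
    each message bundles the detected kinetic parameters and the captured
    images for one registered mobile device. Field names are assumed."""
    payload = {
        "type": "warning",
        "kinetic_parameters": kinetic_params,
        "images": image_paths,
    }
    return [{"device": d, "message": json.dumps(payload)}
            for d in registered_devices]
```

In practice the transport (app push notification, network message, and the like) would sit behind this; the sketch only shows the payload construction.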
[0053] In an illustrative implementation, the one or more registered users, or the owner, can
access the one or more captured images of the at least one human subject, which are associated with the set of warning signals, and can input a set of first commands, or provide instructions, through at least one of the one or more mobile devices, based on which the at least one of the one or more mobile devices can generate a second set of signals indicative of authentication of the human subject. The processing unit 104 can authenticate the at least one human subject based on the generated second set of signals, which can further enable the control unit 108 to provide access to the ROI.
[0054] In another illustrative implementation, in response to the set of warning signals, the
one or more registered users, or the owner can input a set of second commands, or provide
instructions through at least one of the one or more mobile devices, based on which the at least
one of the one or more mobile devices can generate a third set of signals. Based on the generated
third set of signals, the processing unit 104 can be configured to generate an alarm signal that can
be audible, and can generate a set of emergency signals, simultaneously, which can be transmitted
to second mobile devices pertaining to a responsible authority, neighbours, relatives, and the like.
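The two response paths of paragraphs [0053] and [0054] can be summarized in one dispatch sketch: a first command from a registered user's mobile device yields the second set of signals (access granted), while a second command yields the third set of signals (alarm and emergency signals). The command names and return fields are illustrative assumptions.

```python
def handle_user_response(command):
    """Sketch of the mobile-device response paths: 'allow' models the set of
    first commands (second set of signals, unlock); 'deny' models the set of
    second commands (third set of signals, alarm). Names are assumed."""
    if command == "allow":
        return {"signal": "second_set", "control_unit": "unlock"}
    if command == "deny":
        return {"signal": "third_set", "control_unit": "alarm"}
    return {"signal": None, "control_unit": "wait"}  # no response yet
```

The "wait" branch corresponds to the timeout case of paragraph [0055], where an alarm is raised if no response arrives in time.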
[0055] In yet another illustrative implementation, the processing unit 104 can be
configured to generate the set of alarm signals in case the processing unit 104 does not receive any signal from the one or more mobile devices within a pre-determined duration of time after generating the set of warning signals.
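The timeout rule of paragraph [0055] reduces to a simple check, sketched below. The 60-second duration is an assumed value; the disclosure only specifies a pre-determined duration.

```python
def should_raise_alarm(warning_sent_at, response_received_at, now, timeout=60.0):
    """Sketch of paragraph [0055]: generate the set of alarm signals when no
    signal arrives from the mobile devices within the pre-determined duration
    after the warning. The 60-second default timeout is an assumption."""
    if response_received_at is not None:
        return False  # a registered user responded in time
    return (now - warning_sent_at) > timeout
```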
[0056] In an embodiment, the processing unit 104 can be configured to extract image
attributes of the captured one or more images, and can correspondingly, identify the at least one human subject in the captured one or more images based on the extracted image attributes, through human identification technique.
[0057] In an embodiment, the proposed system 100 can include an input module (not
shown) to facilitate authentication of the at least one human subject in case of any technical or process glitch, such as accidental detachment or malfunctioning of the first sensors 102, loss of internet connectivity, and the like. The input module can include any or a combination of a keyboard, a touchscreen, a biometric module, a laptop computer, and the like. An
authentication code can be entered through the input module, which can further be compared, at the processing unit 104, with a second dataset, which can include one or more pre-configured authentication codes. The authentication code can be any or a combination of an encrypted code, a pin code, biometric data, a real time password, and the like.
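The fallback comparison of paragraph [0057] can be sketched as follows. Storing salted hashes rather than plaintext codes is a design choice added here for the example and is not stated in the disclosure; the salt and code values are likewise assumptions.

```python
import hashlib

# Sketch of paragraph [0057]: a code entered through the input module is
# compared against a second dataset of pre-configured codes. Hashing the
# stored codes is an added design choice, not part of the disclosure.

def _hash(code, salt="roi-salt"):  # salt value is an assumption
    return hashlib.sha256((salt + code).encode()).hexdigest()

SECOND_DATASET = {_hash("4821"), _hash("9137")}  # assumed pre-configured codes

def verify_code(entered):
    """Return True on positive comparison with the second dataset."""
    return _hash(entered) in SECOND_DATASET
```

A positive comparison would drive the control signal of paragraph [0068]; a missing or late code would drive the alarm of paragraph [0044].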
[0058] FIG. 2 illustrates exemplary processing unit in accordance with an embodiment of
the present disclosure.
[0059] As illustrated, the processing unit 104 can include one or more processor(s) 202.
The one or more processor(s) 202 can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that manipulate data based on operational instructions. Among other capabilities, the one or more processor(s) 202 are configured to fetch and execute computer-readable instructions stored in a memory 204 of the processing unit 104. The memory 204 can store one or more computer-readable instructions or routines, which may be fetched and executed to create or share the data units over a network service. The memory 204 can include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
[0060] In an embodiment, the processing unit 104 can also include an interface(s) 206. The
interface(s) 206 may include a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) 206 may facilitate communication of the processing unit 104 with various devices coupled to the processing unit 104. The interface(s) 206 may also provide a communication pathway for one or more components of the processing unit 104. Examples of such components include, but are not limited to, processing engine(s) 208 and database 210.
[0061] In an embodiment, the processing engine(s) 208 can be implemented as a
combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) 208. In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) 208 may be processor executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) 208 may include a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable
storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) 208. In such examples, the processing unit 104 can include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the processing unit 104 and the processing resource. In other examples, the processing engine(s) 208 may be implemented by electronic circuitry. The database 210 can include data that is either stored or generated as a result of functionalities implemented by any of the components of the processing engine(s) 208.
[0062] In an embodiment, the processing engine(s) 208 can include a location determining
engine 212, an extraction engine 214, a recognition engine 216, a
control signal and alarm generation engine 218, a configuration engine 220, and other engine(s)
222. The other engine(s) 222 can implement functionalities that supplement applications or
functions performed by the processing unit 104 or the processing engine(s) 208.
[0063] In an embodiment, the location determining engine 212 of the processing unit 104
can facilitate determination of a location associated with at least one of the first sensors 102 based on the sensed one or more kinetic parameters. In an illustrative implementation, the first sensors 102 can be systematically placed at pre-configured locations at a region of interest (ROI), and can be configured to sense one or more kinetic parameters at the ROI. The kinetic parameters can be any or a combination of movement, exerted force, exerted pressure, and the like. A first set of signals can be generated correspondingly, based on the determined location of the at least one of the first sensors 102. In an illustrative embodiment, the first set of signals can be generated when the sensed one or more kinetic parameters exceed pre-defined thresholds, so that voluntary actions can be distinguished from involuntary actions. For example, consider the proposed system 100 configured in a car that is parked near a market. If a first person is randomly passing by, or standing in proximity to the car, the processing unit 104 does not generate the first set of signals. Whereas, if the first person is trying to open the car, or trying to cause damage to the car, such as scratching a part of the car or writing on the car, the processing unit 104 generates the first set of signals.
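The role of the location determining engine 212 (mapping the sensor that sensed the kinetic parameters to its pre-configured location at the ROI) can be sketched with a lookup table. The sensor identifiers and car locations below are illustrative assumptions fitting the parked-car example.

```python
# Sketch of the location determining engine 212: each first sensor 102 is
# registered at a pre-configured location at the ROI (names are assumed).

SENSOR_LOCATIONS = {
    "sensor_102_1": "driver door",
    "sensor_102_2": "rear window",
    "sensor_102_3": "hood",
}

def determine_location(triggered_sensor_id):
    """Return the pre-configured location of the sensor that triggered."""
    return SENSOR_LOCATIONS.get(triggered_sensor_id, "unknown")
```

The returned location is what the image capturing unit 106 of paragraph [0064] would then rotate toward.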
[0064] In another embodiment, an image capturing unit 106 can be operated based on the
generated first set of signals, to rotate towards the determined location and capture one or more images associated with the determined location. In an illustrative embodiment, the image capturing
unit 106 can be a rotating camera, which can be actuated based on the generated first set of signals.
The image capturing unit 106 can rotate towards the location of the at least one of the first sensors
102, which is determined by the processing unit 104, and can capture the one or more images
associated with the determined location, which can include the image of the first person.
[0065] In an embodiment, the extraction engine 214 of the processing unit 104 can
facilitate extraction of facial attributes of at least one human subject from the captured one or more images. The facial attributes can be any or a combination of shape, colour, and texture of a face of the at least one human subject, and the like. The extracted facial attributes can be compared with a first dataset of pre-stored facial attributes, which can pertain to one or more registered users, the owner, and the like. In another embodiment, the extraction engine 214 can enable extraction of image attributes of the captured one or more images, which can further be utilized for distinguishing the at least one human subject from the captured one or more images. For example, the facial attributes of the first person can be extracted from the captured one or more images. In another example, if a leaf or a bag blown over by the wind accidentally sticks to the window or handle of the car, or if the wind itself is exerting pressure on the car, then the first sensors 102 will sense the kinetic parameters, and correspondingly, the first set of signals will be generated as the sensed kinetic parameters exceed the pre-defined thresholds. The image capturing unit 106 will operate, and accordingly capture the one or more images. The image attributes of the captured one or more images will then be extracted, which can further be utilized to determine the presence of the human subject.
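The human/non-human distinction of paragraph [0065] (a wind-blown leaf or bag triggers the sensors but contains no face) can be sketched with a simple predicate over extracted image attributes. The attribute names and the rule below are assumptions; a real system would use a trained human or face detector.

```python
# Sketch of paragraph [0065]: image attributes extracted from the captured
# images decide whether a human subject is present. Attribute names are
# assumed; a real detector (not shown) would produce these values.

def contains_human(image_attributes):
    """Return True when the extracted attributes indicate a human subject."""
    return image_attributes.get("face_regions", 0) > 0

captured_frames = [
    {"face_regions": 0, "motion_source": "leaf"},    # wind-blown object
    {"face_regions": 1, "motion_source": "person"},  # human subject present
]
```

Only the second frame would proceed to the face-recognition step of paragraph [0066]; the first would be discarded as a non-human trigger.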
[0066] In an embodiment, the recognition engine 216 of the processing unit 104 can enable
authentication of the at least one human subject through face recognition techniques. In an illustrative embodiment, an authentication signal can be generated responsive to positive matching of the extracted facial attributes with a first dataset, which can include pre-stored facial attributes pertaining to one or more registered users, the owner, their relatives, and the like. The generated authentication signal can be indicative of authentication of the at least one human subject. In another illustrative embodiment, a warning signal can be generated responsive to negative matching of the extracted facial attributes with the first dataset. The generated set of warning signals can be transmitted to one or more mobile devices, which can belong to the one or more registered users or the owner, and can be operatively coupled to the processing unit 104 through an app, a network, WiFi, and the like. For example, if the extracted facial attributes of the first
person match the first dataset, then, correspondingly, the authentication signal is generated. However, if the extracted facial attributes of the first person do not match the first dataset, then, correspondingly, the warning signal is generated, which can be transmitted to the mobile device associated with the owner of the car.
[0067] In another embodiment, the recognition engine 216 can identify the at least one
human subject in the captured one or more images based on the extracted image attributes, through a human identification technique. For example, if a leaf or bag blown by the wind accidentally sticks on the window or handle of the car, or if the wind itself exerts pressure on the car, then only the leaf or the bag appears in the captured one or more images, and no human subject is detected at the processing unit 104.
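The gating of such non-human triggers can be sketched as below; representing the extracted image attributes as a list of detected object labels is a hypothetical simplification for illustration.

```python
def contains_human(image_attributes):
    """Hypothetical human-identification gate: assume the extraction
    engine tags each captured image with detected object labels, and
    recognition proceeds only when a 'human' label is present."""
    return "human" in image_attributes.get("labels", [])

def process_capture(image_attributes):
    # Non-human triggers (a wind-blown leaf or bag, wind pressure on
    # the car) are discarded without starting the authentication flow.
    if not contains_human(image_attributes):
        return "ignored"
    return "proceed_to_recognition"
```

This gate is what lets the system distinguish voluntary human actions from involuntary environmental movements, as noted in the advantages section.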
[0068] In an embodiment, the control signal and alarm generation engine 218 of the
processing unit 104 can enable operation of a control unit 108 attached to the ROI, and can provide access to the ROI. In another embodiment, the control signal and alarm generation engine 218 can generate an alarm signal in case of emergency. The control unit 108 can be any or a combination of an actuator, a digital lock, and the like. In an illustrative embodiment, the control signal and alarm generation engine 218 can generate a control signal, to operate the control unit 108, based on the generated authentication signal. The control signal can also be generated based on a set of first commands, or instructions, provided through at least one of the one or more mobile devices in response to the generated warning signals, based on which the at least one of the one or more mobile devices can generate a second set of signals indicative of authentication of the at least one human subject. Further, the control signal can be generated based on a positive comparison of an authentication code, entered through an input module, with a second dataset, which can include one or more pre-configured authentication codes. The input module can be any or a combination of a keyboard, touchscreen, biometric module, laptop computer, and the like. In an illustrative embodiment, the control signal and alarm generation engine 218 can generate an alarm signal in case a warning signal is generated. The alarm signal can also be generated in case the processing unit 104 does not receive any signal from the one or more mobile devices within a pre-determined duration of time after generating the set of warning signals. For example, if the first person is a relative of the owner of the car and wants to enter the car, then the owner can authenticate the first person, which results in automatic opening of the digital lock 108 of the car. But in case the first
person is not authenticated by the owner, the alarm signal can be generated by the proposed system 100 for alerting the owner or nearby people.
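The three access paths (authentication signal, timely mobile-device confirmation, valid authentication code) and the timeout alarm can be summarized in a minimal decision sketch; the example codes, the timeout value, and the string-valued outcomes are hypothetical placeholders.

```python
SECOND_DATASET = {"4821", "9073"}  # hypothetical pre-configured codes
TIMEOUT_SECONDS = 60               # hypothetical pre-determined duration

def control_decision(authentication_signal, mobile_response,
                     code_entered, seconds_since_warning):
    """Decide between operating the control unit (unlock) and
    raising an alarm. Access is granted when any of three paths
    succeeds; otherwise, once the pre-determined duration elapses
    without a response, the alarm signal fires."""
    if authentication_signal:
        return "unlock"  # positive face recognition
    if mobile_response == "authenticated" and seconds_since_warning <= TIMEOUT_SECONDS:
        return "unlock"  # second set of signals from a registered mobile device
    if code_entered in SECOND_DATASET:
        return "unlock"  # valid authentication code via the input module
    if seconds_since_warning > TIMEOUT_SECONDS:
        return "alarm"   # no response within the pre-determined duration
    return "wait"
```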
[0069] In an embodiment, the configuration engine 220 of the processing unit 104 can
facilitate configuration of any or a combination of the first dataset and the second dataset. In an illustrative embodiment, the facial attributes can be stored in the first dataset at the time of initialization, and can be appended multiple times thereafter by entering a security key, such as a password, through an input module of the proposed system 100. In another illustrative embodiment, the authentication code stored in the second dataset can be altered or appended whenever required.
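The key-gated appending of facial attributes can be sketched as follows; the class name, the plain-string key comparison, and the dictionary-backed dataset are illustrative assumptions (a real deployment would store only a hash of the key).

```python
class ConfigurationEngine:
    """Hypothetical sketch of engine 220: appending facial attributes
    to the first dataset requires the security key set at the time of
    initialization."""

    def __init__(self, security_key):
        self._key = security_key
        self.first_dataset = {}

    def append_user(self, key, name, attributes):
        # Reject the update unless the correct security key is supplied.
        if key != self._key:
            return False
        self.first_dataset[name] = attributes
        return True
```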
[0070] Though various embodiments of the present disclosure are explained through the
example of a car, a person skilled in the art would appreciate that the proposed system 100 is equally effective for, and can be used for, security of a building or other such premises, which is well within the scope of the present disclosure.
[0071] Thus, it will be appreciated by those of ordinary skill in the art that the diagrams,
schematics, illustrations, and the like represent conceptual views or processes illustrating systems and methods embodying this invention. The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing associated software. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the entity implementing this invention. Those of ordinary skill in the art further understand that the exemplary hardware, software, processes, methods, and/or operating systems described herein are for illustrative purposes and, thus, are not intended to be limited to any particular named manufacturer.
[0072] While embodiments of the present invention have been illustrated and described, it
will be clear that the invention is not limited to these embodiments only. Numerous modifications,
changes, variations, substitutions, and equivalents will be apparent to those skilled in the art,
without departing from the spirit and scope of the invention, as described in the claims.
[0073] In the foregoing description, numerous details are set forth. It will be apparent,
however, to one of ordinary skill in the art having the benefit of this disclosure, that the present invention may be practiced without these specific details. In some instances, well-known structures
and devices are shown in block diagram form, rather than in detail, to avoid obscuring the present invention.
[0074] As used herein, and unless the context dictates otherwise, the term "coupled to" is
intended to include both direct coupling (in which two elements that are coupled to each other contact each other) and indirect coupling (in which at least one additional element is located between the two elements). Therefore, the terms "coupled to" and "coupled with" are used synonymously. Within the context of this document, the terms "coupled to" and "coupled with" are also used euphemistically to mean "communicatively coupled with" over a network, where two or more devices are able to exchange data with each other over the network, possibly via one or more intermediary devices.
[0075] It should be apparent to those skilled in the art that many more modifications
besides those already described are possible without departing from the inventive concepts herein. The inventive subject matter, therefore, is not to be restricted except in the spirit of the appended claims. Moreover, in interpreting both the specification and the claims, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms "comprises" and "comprising" should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced. Where the specification or claims refer to at least one of something selected from the group consisting of A, B, C ... N, the text should be interpreted as requiring only one element from the group, not A plus N, or B plus N, etc.
[0076] While the foregoing describes various embodiments of the invention, other and
further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.
ADVANTAGES OF THE PRESENT DISCLOSURE
[0077] The present disclosure provides a system for monitoring and providing security to
a region where it is installed.
[0078] The present disclosure provides a system that is able to distinguish between a
human subject and a non-human subject.
[0079] The present disclosure provides a system for authenticating a person.
[0080] The present disclosure provides a system for facilitating authentication through
face-recognition mechanism.
[0081] The present disclosure provides a system for distinguishing between voluntary and
involuntary movements/ actions around the region where it is installed, and correspondingly
alarming a registered person in case of voluntary movements/ actions.
[0082] The present disclosure provides a system that consumes minimal electrical power.
[0083] The present disclosure provides an improved, reliable, efficient, cost-effective,
and easily-available system.
CLAIMS
We Claim:
1. A security system comprising:
one or more first sensors systematically placed at pre-configured locations at a region of interest (ROI), to sense one or more kinetic parameters at the ROI;
an image capturing unit to capture one or more images;
a processing unit operatively coupled to the one or more first sensors and the image capturing unit, the processing unit comprising one or more processors coupled with a memory, the memory storing instructions executable by the one or more processors and configured to:
determine a location associated with at least one of the one or more first sensors based on the sensed one or more kinetic parameters;
generate a first set of signals to capture one or more images of the determined location of the at least one of the one or more first sensors;
extract facial attributes of at least one human subject from the captured one or more images; and
responsive to positive matching of the extracted facial attributes with a
first dataset of pre-stored facial attributes, generate an authentication signal
indicative of authentication of the at least one human subject; and
a control unit configured at the ROI, and operatively coupled to the processing unit,
such that the generated authentication signal enables the control unit to provide access to
the ROI.
2. The system as claimed in claim 1, wherein the facial attributes comprise any or a combination of shape, colour, and texture of the face of the at least one human subject.
3. The system as claimed in claim 1, wherein the one or more first sensors are any or a combination of motion sensor, force sensor, and pressure sensor, and the kinetic parameters are any or a combination of movement, exerted force, and exerted pressure.
4. The system as claimed in claim 1, wherein the processing unit is configured to generate a set of warning signals responsive to negative matching of the extracted facial attributes with the first dataset, and wherein the generated set of warning signals comprises any or a combination of the detected kinetic parameters and the one or more captured images, and
wherein the generated set of warning signals is transmitted to one or more mobile devices operatively coupled to the processing unit.
5. The system as claimed in claim 4, wherein at least one of the one or more mobile devices
are able to generate a second set of signals in response to the set of warning signals, and
wherein the processing unit is configured to authenticate the at least one human subject based on the generated second set of signals.
6. The system as claimed in claim 1, wherein the processing unit is configured to identify the at least one human subject from the one or more captured images, and generate a corresponding signal based on the identification of the at least one human subject.
7. The system as claimed in claim 1, wherein the system comprises an input module operatively coupled to the processing unit to receive an authentication code, wherein the received authentication code is authenticated by comparing the received authentication code with a second dataset comprising one or more pre-configured authentication codes.
8. The system as claimed in claim 7, wherein the input module is any or a combination of keyboard, touchscreen, biometric module, laptop and computer, and wherein the authentication code comprises any or a combination of an encrypted code, a pin code, biometric data, and a real time password.
9. The system as claimed in claim 7, wherein the processing unit is configured to generate a set of alarm signals when the authentication code is not received within a first pre-determined duration of time.
10. The system as claimed in claim 1, wherein the image capturing unit is configured to switch to energy-saving mode after a second pre-determined duration of time.