Abstract: An operating framework for computer vision devices is disclosed. Said operating framework comprises: a base container (11); a service management container (12); a user management container (13); a camera management container (14); an at least a socket server (16); a frontend container (22); an API gateway (18); an at least a use case container (15); an at least an AI model container (17), and an at least an IoT service container (25). Said containers or components (11, 12, 13, 14, 15, 16, 17, 22, and 25) are independent from each other, may be distributed on different computer vision devices, and can jointly work as a single device (10). The method of working is also disclosed. The advantages of the disclosed operating framework for computer vision devices are: faster real-time results on a large scale; enhanced operational reliability; and increased security for devices and data. Figure to be Included is Figure 1
Claims: 1. An operating framework for computer vision devices, comprising:
a base container (11) that is configured to facilitate: manufacturer authentication; user activation; memory management; and device monitoring;
a service management container (12) that is configured to facilitate: adding, removing, and updating of services; activating and deactivating of services; authenticating of services; and starting and stopping of services;
a user management container (13) that is configured to facilitate performing of all user-related activities;
a camera management container (14) that is configured to facilitate: adding, removing, and editing of cameras; adding and removing of use case services to cameras; and performing of camera health check-up;
a frontend container (22) that is configured to facilitate the interacting of an end user with the operating framework, including: adding of cameras; and adding of use case services and associated AI models, without any technical knowledge;
an at least a use case container (15), said at least one use case container (15) being created dynamically, whenever a use case service is downloaded to a computer vision device (10), with: a separate use case container being created for each use case service; and one use case container being independent of another use case container;
an at least an AI model container (17), said at least one AI model container (17) being created dynamically as a parent AI model, whenever an AI model is downloaded, when downloading a use case service, with: a separate AI model container being created for each AI model, and one AI model container being independent of another AI model container;
an at least an IoT service container (25), said at least one IoT service container (25) being created dynamically, whenever an IoT device is connected to the computer vision device (10) and a service associated with the IoT device that is connected to the device (10) is downloaded, with: a separate IoT container being created for each IoT service associated with the IoT device connected; and one IoT service container being independent of another IoT service container;
an at least a socket server (16) that is configured to facilitate the establishing of communication between:
the at least one use case container (15), the at least one AI model container (17), and the at least one IoT service container (25); and
the base container (11), the service management container (12), the user management container (13), the camera management container (14), and the frontend container (22), and: the at least one use case container (15), the at least one AI model container (17), and the at least one IoT service container (25);
an API gateway (18) that is configured to establish communication between:
said base container (11), said service management container (12), said user management container (13), said camera management container (14), said at least one socket server (16), and said frontend container (22) on the computer vision device (10);
said base container (11), said service management container (12), said user management container (13), said camera management container (14), said at least one socket server (16), and said frontend container (22), and: an authentication cloud (19), a ticketing cloud (20), a hybrid cloud (21), a data collection and model making pipeline cloud (23), and a service registry cloud (24); and
the computer vision device (10) with an at least an external device; and
an at least a wrapper module (26) that is configured to facilitate internal communication between the at least one use case container (15), the at least one AI model container (17), the at least one IoT service container (25), and the at least one socket server (16),
with:
said base container (11), said service management container (12), said user management container (13), said camera management container (14), said at least one socket server (16), said frontend container (22), said API gateway (18), said at least one use case container (15), said at least one AI model container (17), and said at least one IoT service container (25) being independent from each other, and comprising their own data store;
said base container (11), said service management container (12), said user management container (13), said camera management container (14), said at least one socket server (16), said frontend container (22), said at least one use case container (15), said at least one AI model container (17), and said at least one IoT service container (25) being distributed among different computer vision devices, with each of the computer vision devices comprising the API gateway (18), and all the computer vision devices jointly working as a single computer vision device (10); and
the downtime of the operating framework being minimized through a swarm optimization-based load balancing mechanism.
2. The operating framework for computer vision devices as claimed in claim 1, wherein the base container (11) comprises:
a manufacturer authentication unit (111) that facilitates the authenticating of the computer vision device (10) based on an at least a first parameter to avoid the duplication of the computer vision device (10);
a user device activation unit (112) that facilitates the activating of the computer vision device (10) by a user, with the device activation being performed on the authentication cloud (19) based on an at least a second parameter;
an alert syncing and viewing unit (113) that facilitates the syncing of alerts and reports generated on the computer vision device (10) with the ticketing cloud (20);
a memory management unit (114) that facilitates the: detecting of an at least a storage device connected with the computer vision device (10); and prioritizing of the at least one storage device for storing: images captured, and generated alerts and reports;
a user notification unit (115) that facilitates the sending of notifications to the user, if alerts are generated based on user preferences;
a communication unit (116) that facilitates the establishing of communication with the at least one external device connected with the computer vision device (10);
a double check mechanism (117) that facilitates the verifying of the accuracy of alerts generated on the computer vision device (10), by sending the images to the hybrid cloud (21) for rechecking, with an alert being generated only if the accuracy of the alert is confirmed by the hybrid cloud (21);
an alert analysis unit (118) that facilitates the generating of an analytical summary of all alerts and reports generated on the computer vision device (10); and
a device and network analysis unit (119) that facilitates the monitoring of the computer vision device (10) and network performance parameters continuously for generating analytical reports periodically.
3. The operating framework for computer vision devices as claimed in claim 2, wherein the at least one first parameter includes: manufacturer key; device type; device serial number; and password.
4. The operating framework for computer vision devices as claimed in claim 2, wherein the at least one second parameter includes: access key; device serial number; device name; and location.
5. The operating framework for computer vision devices as claimed in claim 1, wherein the service management container (12) comprises:
a service authentication unit (121) that facilitates the authenticating of the validity of a use case service purchased by the user through the authentication cloud (19);
an add/remove/update service unit (122) that facilitates the adding and removing of the use case services to the computer vision device (10) that is already running, from the authentication cloud (19), without disturbing or restarting the computer vision device (10);
an activate/deactivate service unit (123) that facilitates the activating or deactivating of the use case services added to the computer vision device (10);
a service analysis unit (124) that facilitates the: analysing of each use case service on the computer vision device (10); generating of a performance report for each use case service; and calculating a rating for each use case service;
a service discovery unit (125) that facilitates the identifying of hardware limitations and resource availability of the computer vision device (10) before downloading and adding a use case service or an AI model; and
a service start/stop unit (126) that facilitates the starting and stopping of the use case services as per their usage.
6. The operating framework for computer vision devices as claimed in claim 5, wherein the hardware limitations of the computer vision device (10) include: number of allowed use case services; and number of allowed AI models.
7. The operating framework for computer vision devices as claimed in claim 5, wherein the resource availability of the computer vision device (10) includes: availability of RAM; availability of CPU; and availability of storage.
8. The operating framework for computer vision devices as claimed in claim 1, wherein the user management container (13) comprises:
an add/remove/edit user unit (131) that facilitates the adding, removing, and editing of users associated with the computer vision device (10);
a user permission management unit (132) that facilitates the managing of permissions at the level of an individual user;
a user login/logout unit (133) that facilitates the: managing of login and logout operations of each user; and enabling the user to retrieve a forgotten password;
a user authentication token generation unit (134) that facilitates the generating of an authentication token, whenever the user logs into the computer vision device (10); and
a user log unit (135) that facilitates the storing and tracking of the activities of each user logging into the computer vision device (10).
9. The operating framework for computer vision devices as claimed in claim 1, wherein the camera management container (14) comprises:
an add/remove/edit camera unit (141) that facilitates the adding, removing, and editing of cameras with the computer vision device (10) by the user;
an add/remove service unit (142) that facilitates the adding and removing of use case services to the cameras connected with the computer vision device (10);
a Region of Interest configuration unit (143) that facilitates the configuring of a region of interest for a use case service within which the parent AI model is to perform its detection, on an input camera feed;
a camera health check unit (144) that facilitates the detecting of the status of the cameras connected with the computer vision device (10);
a scheduler unit (145) that facilitates the switching of the use case services on each camera connected with the computer vision device (10) based on user-defined preferences;
an Open Network Video Interface IN unit (146) that facilitates the detecting of an at least an ONVIF camera communicatively associated with the computer vision device (10) through the same network;
an ONVIF OUT unit (147) that facilitates the detecting of standalone Mobile Industry Processor Interface/USB-based AI cameras readable by external Network Video Recorders or ONVIF compatible devices; and
an API configuration unit (148) that facilitates the selecting of a relevant Application Programming Interface for configuring an AI model.
10. The operating framework for computer vision devices as claimed in claim 9, wherein the user-defined preferences include: working hours; holidays; and non-working hours.
11. The operating framework for computer vision devices as claimed in claim 1, wherein the computer vision device (10) supports: Real-Time Streaming Protocol streams from IP cameras; USB web cameras; and MIPI Camera Serial Interface cameras.
Description:
TITLE OF THE INVENTION: AN OPERATING FRAMEWORK FOR COMPUTER VISION DEVICES
FIELD OF THE INVENTION
The present disclosure is generally related to computer vision. Particularly, the present disclosure is related to an operating framework for computer vision devices.
BACKGROUND OF THE INVENTION
Computer vision is related to technology that allows computing devices to use visual information to interpret and understand the visual world in either broad or limited sense. Generally, computer vision is an artificial intelligence (AI)-based technology that enables the extracting of information from images. The images can be in any form, such as single images, video sequences, views from multiple cameras, or higher dimensional data.
Computer vision has several applications, ranging from relatively simple tasks, such as industrial systems used to count objects passing by on a production line, to more complicated tasks, such as facial recognition and perceptual tasks.
The number of use cases for applying AI that performs at human level or better to understand the visual world is increasing exponentially. AI inference requires a considerable amount of processing power, especially for real-time, data-intensive applications.
Generally, computer vision is done in a cloud-based environment that requires heavy computing capacity. The AI solutions are normally deployed on cloud environments in order to take advantage of simplified management and scalable computing assets. However, in most circumstances, cloud is not an adequate environment for deploying Artificial Intelligence (AI).
Cloud deployment has the following drawbacks: the response time is slow, and hence may not be suitable for real-time applications; analysing massive amounts of data on the cloud incurs high operating costs; and sending and storing video material may create privacy issues.
Computer vision techniques are resource-hungry: they require high-end GPUs (Graphics Processing Units) and considerable processing power to deploy. Hence, the overall costs increase drastically.
Further, computer vision systems/devices require a skilled person to configure the AI server at the user’s premises; hence, they are not scalable for the masses. Each user will have his/her unique requirements of monitoring parameters. Thus, customization is required every time a new user is added. This takes time to deploy and, in turn, affects the scalability of the system.
Furthermore, there are many AI frameworks available in the market for computer vision, but none of the frameworks are made for the end user. All the frameworks available in the market only target the developer community. The end user cannot use such frameworks without technical skills, such as programming knowledge and detailed understanding of computer vision mechanisms.
There is, therefore, a need in the art for an operating framework for computer vision devices, which overcomes the aforementioned drawbacks and shortcomings.
SUMMARY OF THE INVENTION
An operating framework for computer vision devices is disclosed. Said operating framework comprises: a base container; a service management container; a user management container; a camera management container; an at least a socket server; a frontend container; an API gateway; an at least a use case container; an at least an AI model container, and an at least an IoT (Internet of Things) service container.
Said base container, said service management container, said user management container, said camera management container, said at least one socket server, said frontend container, said API gateway, said at least one use case container, said at least one AI model container, and said at least one IoT service container are independent from each other, and each container comprises its own data store.
The communication between the base container, the service management container, the user management container, the camera management container, and the frontend container is performed through the API gateway.
The communication between the containers or components (the base container, the service management container, the user management container, the camera management container, and the frontend container), and: the at least one use case container, the at least one AI model container, and the at least one IoT service container is performed only through the at least one socket server.
Said base container, said service management container, said user management container, said camera management container, said at least one socket server, said frontend container, said API gateway, said at least one use case container, said at least one AI model container, and said at least one IoT service container are distributed among different computer vision devices (each computer vision device comprises the API gateway), and jointly work as a single device. The downtime of the operating framework is minimized through a swarm optimization-based load balancing mechanism.
The operating framework further comprises: an authentication cloud; a ticketing cloud; a hybrid cloud; a data collection and model making pipeline cloud; and a service registry cloud.
The base container is configured to facilitate: manufacturer authentication; user activation; memory management; and device monitoring functionalities. The base container comprises: a manufacturer authentication unit; a user device activation unit; an alert syncing and viewing unit; a memory management unit; a user notification unit; a communication unit; a double check mechanism; an alert analysis unit; and a device and network analysis unit.
The service management container is configured to facilitate: adding, removing, and updating of services; activating and deactivating of services; authenticating of services; and starting and stopping of services. The service management container comprises: a service authentication unit; an add/remove/update service unit; an activate/deactivate service unit; a service analysis unit; a service discovery unit; and a service start/stop unit.
The user management container is configured to facilitate performing of all user-related activities. The user management container comprises: an add/remove/edit user unit; a user permission management unit; a user login/logout unit; a user authentication token generation unit; and a user log unit.
The camera management container is configured to facilitate: adding, removing, and editing of cameras; adding and removing of use case services to cameras; and performing of camera health check-up. The camera management container comprises: an add/remove/edit camera unit; an add/remove services unit; a ROI (Region of Interest) configuration unit; a camera health check unit; a scheduler unit; an ONVIF (Open Network Video Interface) IN unit; an ONVIF OUT unit; and an API configuration unit.
The frontend container is configured to facilitate the interacting of an end user with the operating framework, including: adding of cameras; and adding of use case services and associated AI models, without any technical knowledge.
The at least one use case container is created dynamically, whenever a use case service is downloaded to a computer vision device. A separate use case container is created for each use case service, and one use case container is independent of another use case container.
The at least one AI model container is created dynamically as a parent AI model, whenever an AI model is downloaded, when downloading a use case service. A separate AI model container is created for each AI model, and one AI model container is independent of another AI model container.
The at least one IoT service container is created dynamically, whenever an IoT device is connected to the computer vision device, and a service associated with the IoT device that is connected to the device is downloaded. A separate IoT service container is created for each service associated with the IoT device, and one IoT service container is independent of another IoT service container.
The at least one socket server is configured to facilitate the establishing of communication between the at least one use case container, the at least one AI model container, and the at least one IoT service container.
The API gateway is configured to establish communication between: the containers on the device or the components of the device; the containers or components, and: the authentication cloud, the ticketing cloud, the hybrid cloud, the data collection and model making pipeline cloud, and the service registry cloud; and the device and an at least an external device.
An at least a wrapper module is configured to facilitate internal communication between the at least one use case container, the at least one AI model container, the at least one IoT service container, and the at least one socket server.
The method of working of the operating framework for computer vision devices is also disclosed. The advantages of the disclosed operating framework for computer vision devices are: faster real-time results on a large scale; enhanced operational reliability; and increased security for devices and data.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a block diagram illustrating the architecture of an operating framework for computer vision devices, in accordance with an embodiment of the present disclosure;
Figure 2 is a block diagram illustrating a base container of an operating framework for computer vision devices, in accordance with an embodiment of the present disclosure;
Figure 3 is a block diagram illustrating a service management container of an operating framework for computer vision devices, in accordance with an embodiment of the present disclosure;
Figure 4 is a block diagram illustrating a user management container of an operating framework for computer vision devices, in accordance with an embodiment of the present disclosure;
Figure 5 is a block diagram illustrating a camera management container of an operating framework for computer vision devices, in accordance with an embodiment of the present disclosure; and
Figure 6 illustrates the working of an operating framework for computer vision devices, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION OF THE INVENTION
Throughout this specification, the use of the words "comprise", “have”, “contain”, and “include”, and variations such as "comprises", "comprising", “having”, “contains”, “containing”, “includes”, and “including” may imply the inclusion of an element or elements not specifically recited. The disclosed embodiments may be embodied in various other forms as well.
Throughout this specification, the phrases “at least a”, “at least an”, and “at least one” are used interchangeably.
Throughout this specification, where applicable, the use of the phrase “at least” is to be construed in association with the suffix “one” i.e. it is to be read along with the suffix “one” as “at least one”, which is used in the meaning of “one or more”. A person skilled in the art will appreciate the fact that the phrase “at least one” is a standard term that is used in Patent Specifications to denote any component of a disclosure that may be present or disposed in a single quantity or more than a single quantity.
Throughout this specification, the phrases “Computer Vision”, “Vision Computing”, and “Machine Vision” are used interchangeably with the same meaning.
Throughout this specification, the word “device”, and the phrases “Computer Vision Device”, “Vision Computing Device”, and their variations are to be construed as a device that is used for computer vision use cases. The device may include, but is not limited to: edge devices; smart cameras; AI (Artificial Intelligence) cameras; computer vision-based IoT devices; and AI computer servers.
Throughout this specification, the phrases “Artificial Intelligence Model”, “AI Model”, and their variations are to be construed as a processing block that takes inputs, like images or videos, and predicts or returns pre-learned concepts or labels. Said models can be trained to see and recognize almost anything humans can see and recognize.
Throughout this specification, the phrase “use case” and its variations are to be construed as being inclusive of: safety, sustainability, etc., for Government authorities; theft detection; product recommendation; facial recognition; suspicious behaviour recognition; quality management; productivity analytics; counting and sorting; diagnosing symptoms by analysing medical images; cell classification; mask detection; movement analysis; check-in or check-out detection; monitoring physical therapy exercises; object detection and identification; image recognition; player tracking; ball tracking; plant recognition; animal monitoring; farm automation; autonomous driving; number plate recognition; collision avoidance; traffic analytics; and social distancing.
Throughout this specification, the use of the word “framework” is to be construed as a set of technical and/or functional components that are communicatively and/or operably associated with each other, and function together as part of a framework to operate computer vision devices.
Throughout this specification, the use of the words “communication”, “couple”, and their variations (such as communicatively) are to be construed as being inclusive of: one-way communication (or coupling); and two-way communication (or coupling), as the case may be, irrespective of the direction of arrows in the drawings.
Also, it is to be noted that embodiments may be described as a method. Although the operations in a method are described as a sequential process, many of the operations may be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations may be re-arranged. A method may be terminated when its operations are completed, but may also have additional steps.
Figure 1 is a block diagram that illustrates an operating framework for computer vision devices, in accordance with the embodiments of the present disclosure. The operating framework comprises: a base container (11); a service management container (12); a user management container (13); a camera management container (14); an at least a socket server (16); a frontend container (22); an API gateway (18); an at least a use case container (15); an at least an AI model container (17); and an at least an IoT (Internet of Things) service container (25).
In an embodiment of the present disclosure, said base container (11), said service management container (12), said user management container (13), said camera management container (14), said at least one socket server (16), said frontend container (22), said API gateway (18), said at least one use case container (15), said at least one AI model container (17), and said at least one IoT service container (25) are independent from each other, and comprise their own data store.
Said operating framework further comprises: an authentication cloud (19); a ticketing cloud (20); a hybrid cloud (21); a data collection and model making pipeline cloud (23); and a service registry cloud (24).
In another embodiment of the present disclosure, the base container (11), the service management container (12), the user management container (13), the camera management container (14), the at least one socket server (16), the frontend container (22), the API gateway (18), the at least one use case container (15), the at least one AI model container (17), and the at least one IoT service container (25) reside on a single computer vision device (10).
The communication between the base container (11), the service management container (12), the user management container (13), the camera management container (14), the at least one socket server (16), and the frontend container (22) is performed through Application Programming Interfaces (APIs), and said communication is facilitated by the API gateway (18). This mechanism or configuration increases the stability of the operating framework. Any crash or damage in one of the containers or components (11, 12, 13, 14, 15, 16, 17, or 22) does not affect the other containers or components.
In yet another embodiment of the present disclosure, the communication between: the base container (11), the service management container (12), the user management container (13), the camera management container (14), and the frontend container (22), and: the at least one use case container (15), the at least one AI model container (17), and the at least one IoT service container (25) is performed only through the at least one socket server (16). In other words, there is no direct connection between the containers or components (11, 12, 13, 14, and 22), and the containers or components (15, 17, and 25).
In yet another embodiment of the present disclosure, the base container (11), the service management container (12), the user management container (13), the camera management container (14), the at least one socket server (16), the frontend container (22), the at least one use case container (15), the at least one AI model container (17), and the at least one IoT service container (25) are distributed among different computer vision devices, and each computer vision device comprises the API gateway (18). Even though the containers or components (11, 12, 13, 14, 15, 16, 17, 22, and 25) are distributed on different computer vision devices, they jointly work as a single device (10).
In yet another embodiment of the present disclosure, the containers or components (11, 12, 13, 14, 15, 16, 17, 22, and 25) of the operating framework have their own virtual environment, thereby enabling them to work on different computer vision devices. The major benefit of this mechanism or configuration is that the operating framework is of a plug-and-play nature, where containers or components can be individually added, removed, updated, etc., without affecting the other containers or components of the operating framework. Hence, this mechanism or configuration enables a user to add, remove, or update a use case service and/or an AI model and/or an IoT device on the go, even after the installation of the computer vision device (10) that is running at the user’s premises.
In yet another embodiment of the present disclosure, the IoT device added and/or removed and/or updated to/from/on the computer vision device (10) includes, but is not limited to: a boom barrier; a hooter; a thermal sensor; a relay; a proximity sensor; a temperature sensor; a fingerprint sensor; and/or the like.
In yet another embodiment of the present disclosure, the downtime of the operating framework is minimized through a swarm optimization-based load balancing mechanism.
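By way of a non-limiting illustration, a minimal sketch of such a swarm optimization-based load balancer is given below, assuming particle swarm optimization as the swarm technique; the service loads, device count, and cost function are hypothetical placeholders and not the claimed implementation.

```python
import random

SERVICE_LOADS = [2.0, 1.5, 3.0, 0.5, 2.5]   # hypothetical CPU cost per use case service
NUM_DEVICES = 3                              # devices jointly acting as one device (10)

def fitness(position):
    """Max per-device load for an assignment encoded as floats in [0, NUM_DEVICES)."""
    loads = [0.0] * NUM_DEVICES
    for svc, x in enumerate(position):
        loads[int(x) % NUM_DEVICES] += SERVICE_LOADS[svc]
    return max(loads)  # lower is better: a balanced load reduces downtime risk

def pso(num_particles=20, iters=100, w=0.7, c1=1.4, c2=1.4):
    dim = len(SERVICE_LOADS)
    pos = [[random.uniform(0, NUM_DEVICES) for _ in range(dim)] for _ in range(num_particles)]
    vel = [[0.0] * dim for _ in range(num_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)[:]
    for _ in range(iters):
        for i in range(num_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], 0.0), NUM_DEVICES - 1e-9)
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=fitness)[:]
    return [int(x) % NUM_DEVICES for x in gbest], fitness(gbest)

assignment, max_load = pso()
print("service -> device:", assignment, "worst device load:", max_load)
```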
In yet another embodiment of the present disclosure, the disclosed operating framework is deployed on computer vision devices through a micro-service architecture, thereby increasing the stability.
In yet another embodiment of the present disclosure, the API gateway (18) is configured to establish communication between: the components or containers (11, 12, 13, 14, 16, and 22) on the device (10); the components or containers (11, 12, 13, 14, 16, and 22) and the authentication cloud (19), the ticketing cloud (20), the hybrid cloud (21), the data collection and model making pipeline cloud (23), and the service registry cloud (24); and the device (10) with an at least an external device. The API gateway (18) accepts all API calls, aggregates the various services required to fulfil them, and returns appropriate results.
In yet another embodiment of the present disclosure, the APIs are REST APIs.
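By way of illustration, a minimal sketch of how the API gateway (18) may accept REST calls and route them to the independent containers is given below, using only the Python standard library; the path prefixes, ports, and container addresses are illustrative assumptions.

```python
# A minimal sketch of the API gateway (18) routing REST calls to the
# independent containers. Routes and ports are hypothetical assumptions.
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

ROUTES = {                      # path prefix -> internal container address (assumed)
    "/base":    "http://localhost:5001",   # base container (11)
    "/service": "http://localhost:5002",   # service management container (12)
    "/user":    "http://localhost:5003",   # user management container (13)
    "/camera":  "http://localhost:5004",   # camera management container (14)
}

class Gateway(BaseHTTPRequestHandler):
    def do_GET(self):
        for prefix, target in ROUTES.items():
            if self.path.startswith(prefix):
                try:
                    # Aggregate: forward the call and return the container's reply.
                    with urllib.request.urlopen(target + self.path) as resp:
                        body = resp.read()
                        self.send_response(resp.status)
                        self.end_headers()
                        self.wfile.write(body)
                except OSError:
                    self.send_response(502)   # one container down: others unaffected
                    self.end_headers()
                return
        self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Gateway).serve_forever()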
In yet another embodiment of the present disclosure, the base container (11) is configured to facilitate: manufacturer authentication; user activation; memory management; and device monitoring functionalities.
As illustrated in Figure 2, said base container (11) comprises: a manufacturer authentication unit (111); a user device activation unit (112); an alert syncing and viewing unit (113); a memory management unit (114); a user notification unit (115); a communication unit (116); a double check mechanism (117); an alert analysis unit (118); and a device and network analysis unit (119).
The manufacturer authentication unit (111) facilitates the authenticating of the device (10) based on an at least a first parameter, to avoid the duplication of the device (10). The at least one first parameter includes, but is not limited to: manufacturer key; device type (standalone, server, etc.); device serial number; and/or password. The at least one first parameter is sent to the authentication cloud (19) for authenticating the device (10).
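By way of illustration, the sending of the at least one first parameter to the authentication cloud (19) may resemble the following sketch; the endpoint URL and field names are hypothetical assumptions, not the actual cloud API.

```python
import json
import urllib.request

def authenticate_device(manufacturer_key, device_type, serial_number, password,
                        cloud_url="https://auth.example.com/device/authenticate"):
    """Send the first parameters to the authentication cloud (19); the field
    names and endpoint are illustrative assumptions."""
    payload = json.dumps({
        "manufacturer_key": manufacturer_key,
        "device_type": device_type,          # e.g. "standalone" or "server"
        "device_serial_number": serial_number,
        "password": password,
    }).encode()
    req = urllib.request.Request(cloud_url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("authenticated", False)
```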
The user device activation unit (112) facilitates the activating of the device (10) by the user. The device activation is performed on the authentication cloud (19) based on an at least a second parameter sent by the user device activation unit (112). The at least one second parameter includes, but is not limited to: access key; device serial number; device name; and/or location.
The user activation is performed by combining all the devices owned or purchased by the user, and syncing the alerts or reports generated from all the devices, thereby enabling the user to easily track, monitor, and control all the devices.
The alert syncing and viewing unit (113) facilitates the syncing of alerts and reports generated on the device (10) with the ticketing cloud (20).
The memory management unit (114) facilitates the: detecting of an at least a storage device connected with the computer vision device (10); and prioritizing the at least one storage device for storing: images captured, and generated alerts and reports. The memory management unit (114) supports First in First out (FIFO) logic if the at least one storage device is full.
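A minimal sketch of the FIFO logic described above is given below, assuming a POSIX filesystem and that the detected storage devices are represented as a priority-ordered list of mount directories containing only stored image files.

```python
import os

def store_with_fifo(image_bytes, filename, mounts, min_free_bytes=50 * 2**20):
    """Write to the highest-priority storage device; if it is full, delete the
    oldest stored files first (FIFO). `mounts` is a priority-ordered list of
    directories -- an assumed representation of the detected storage devices."""
    for mount in mounts:
        while os.statvfs(mount).f_bavail * os.statvfs(mount).f_frsize < min_free_bytes:
            existing = sorted(
                (os.path.join(mount, f) for f in os.listdir(mount)),
                key=os.path.getmtime)           # oldest first
            if not existing:
                break                           # nothing left to evict; try anyway
            os.remove(existing[0])              # FIFO eviction
        path = os.path.join(mount, filename)
        try:
            with open(path, "wb") as fh:
                fh.write(image_bytes)
            return path
        except OSError:
            continue                            # device unusable; fall back to next
    raise RuntimeError("no storage device available")
```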
The user notification unit (115) facilitates the sending of notifications to the user, if alerts are generated, based on user preferences. The notifications may include, but are not limited to: emails; text messages; and/or phone calls. The user preferences may include: notifying all alerts; notifying only specific alerts; etc. The base container (11) detects requisite facilities (availability of internet for sending emails; availability of GSM for sending text messages or making phone calls; etc.) for sending different types of notifications or alerts.
The communication unit (116) facilitates the establishing of communication with the at least one external device connected with the computer vision device (10). The at least one external device may include, but is not limited to: buzzers; sensors; and/or IoT devices. For example, triggering of a buzzer or reading a sensor value, which are connected externally to the device (10), are performed through the communication unit (116).
The double check mechanism (117) facilitates the verifying of the accuracy of alerts generated on the device (10), by sending image(s) to the hybrid cloud (21) for rechecking. An alert is generated only if its accuracy is confirmed by the hybrid cloud (21). Double checking is performed only if the user enables this feature for specific situations and/or use cases. For example, in the case of security alerts (weapon detection during security check), the double check mechanism (117) is useful in avoiding false alerts.
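By way of illustration, the double check mechanism (117) may operate along the lines of the following sketch; the hybrid cloud endpoint, response fields, and alert helper are hypothetical assumptions.

```python
import json
import urllib.request

def double_check_alert(image_bytes, local_label,
                       hybrid_url="https://hybrid.example.com/recheck"):
    """Recheck a locally detected alert on the hybrid cloud (21); the endpoint
    and response fields are hypothetical. The alert is raised only when the
    cloud confirms the local detection."""
    req = urllib.request.Request(
        hybrid_url + "?label=" + local_label, data=image_bytes,
        headers={"Content-Type": "application/octet-stream"})
    with urllib.request.urlopen(req) as resp:
        confirmed = json.load(resp).get("confirmed", False)
    if confirmed:
        raise_alert(local_label)   # placeholder for the ticketing/notification path
    return confirmed

def raise_alert(label):
    print("ALERT:", label)
```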
The alert analysis unit (118) facilitates the generating of an analytical summary of all alerts and reports generated on the device (10).
The device and network analysis unit (119) facilitates the monitoring of the device (10) and network performance parameters continuously for generating analytical reports periodically (e.g. daily). The performance parameters include, but are not limited to: RAM usage; availability of free storage space; device down time; device up time; network up time; and/or network down time.
In yet another embodiment of the present disclosure, the service management container (12) is configured to facilitate: adding, removing, and/or updating of services; activating and/or deactivating of services; authenticating of services; and starting and/or stopping of services.
As illustrated in Figure 3, the service management container (12) comprises: a service authentication unit (121); an add/remove/update service unit (122); an activate/deactivate service unit (123); a service analysis unit (124); a service discovery unit (125); and a service start/stop unit (126).
The service authentication unit (121) facilitates the authenticating of the validity of a use case service purchased (e.g. on a subscription basis) by the user through the authentication cloud (19). The use case service authentication is performed periodically (for example, every 24 hours, or at every boot and/or restart of the device (10)). If the use case service is suspended on the authentication cloud (19), said use case service is disabled inside the device (10) as well.
The add/remove/update service unit (122) facilitates the adding and/or removing of the use case services to the computer vision device (10) (that is already running) from the authentication cloud (19), without disturbing or restarting the device (10). Said add/remove/update service unit (122) also facilitates the updating of the use case services added to the device (10) over the air in real-time.
The activate/deactivate service unit (123) facilitates the activating and/or deactivating of the use case services added to the computer vision device (10). The activation and/or deactivation of use case services may be performed in different situations.
For example, if the user customizes a use case to be activated only for a scheduled period, said use case is active only for the scheduled period (like 9 AM to 6 PM) and deactivated during the rest of the time (6 PM to next day 9 AM).
If the subscription of a use case service expires, said status is updated on the authentication cloud (19) as suspended for the particular device (10). In such cases, at the scheduled time, when the device (10) fetches the status of the use case service from the authentication cloud (19), since it is suspended, it is automatically deactivated on the device (10) as well.
If the user manually removes a use case service from the authentication cloud (19), said use case service is automatically deactivated and deleted from the device (10) as well.
The service analysis unit (124) facilitates the: analysing of each use case service on the device (10); generating of a performance report for each use case service; and calculating a rating for each use case service. Said analysis is performed based on factors, including, but not limited to: the accuracy of each use case service; number of false alerts generated by each use case service; and/or average resources consumed by each use case service (RAM usage, storage usage, CPU/GPU usage, network usage, etc.).
The service discovery unit (125) facilitates the identifying of hardware limitations and the resource availability of the vision computing device (10) before downloading and adding a use case service or an AI model. Whenever a new use case service and/or AI model is downloaded, said service discovery unit (125) collects service identity parameters of said use case service and/or AI model, and checks the hardware limitations and the resource availability of the vision computing device (10) against the service identity parameters collected. The service discovery unit (125) allows the downloading of the use case service and/or AI model only if the use case service and/or AI model is within the allowed limit.
In yet another embodiment of the present disclosure, the hardware limitations of the vision computing device (10) include, but are not limited to: number of allowed use case services; number of allowed AI models; and/or number of allowed cameras.
In yet another embodiment of the present disclosure, the resource availability of the vision computing device (10) includes, but is not limited to: availability of RAM; availability of CPU; and/or availability of storage.
In yet another embodiment of the present disclosure, the service identity parameters include, but are not limited to: service name; service ID; maximum RAM usage; maximum CPU usage; maximum GPU usage; parent container name (AI model or AI container name in case of use case service download); output (alerts or reports or both); and default setting.
In yet another embodiment of the present disclosure, the service discovery unit (125) further facilitates the identifying of the presence of the parent AI model (AI container) related to the use case service that is downloaded on the computer vision device (10). If the parent AI container is not present, the service discovery unit (125) collects the service identity parameters of the parent AI container, checks the hardware limitations and the resource availability of the vision computing device (10), and decides whether or not to download the use case service along with the parent AI container.
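A minimal sketch of the service discovery check described above is given below; the service identity parameter names, thresholds, and hardware limits are illustrative assumptions.

```python
import shutil

MAX_SERVICES = 10          # hypothetical hardware limit of the device (10)

def may_download(identity, running_services, total_ram_free_mb, cpu_free_pct):
    """Gate a download on the service identity parameters; the names below are
    shortened, illustrative stand-ins for the parameters listed above."""
    if len(running_services) >= MAX_SERVICES:
        return False
    if identity["max_ram_mb"] > total_ram_free_mb:
        return False
    if identity["max_cpu_pct"] > cpu_free_pct:
        return False
    # Storage check: the downloaded service must fit on disk with headroom.
    free_bytes = shutil.disk_usage("/").free
    return identity["image_size_bytes"] * 2 < free_bytes

identity = {"service_name": "footfall-counter", "service_id": "svc-001",
            "max_ram_mb": 512, "max_cpu_pct": 20, "image_size_bytes": 300 * 2**20,
            "parent_container": "person-detector"}
print(may_download(identity, running_services=[], total_ram_free_mb=2048, cpu_free_pct=60))
```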
The service start/stop unit (126) facilitates the effective starting and/or stopping of the use case services as per their usage. If any use case service is not assigned to any camera connected with the vision computing device (10), said use case service is stopped by the service start/stop unit (126) to optimize the resource utilization of the vision computing device (10).
In yet another embodiment of the present disclosure, the user management container (13) is configured to facilitate the performing of all user-related activities.
As illustrated in Figure 4, the user management container (13) comprises: an add/remove/edit user unit (131); a user permission management unit (132); a user login/logout unit (133); a user authentication token generation unit (134); and a user log unit (135).
The add/remove/edit user unit (131) facilitates the adding, removing, and/or editing of users associated with the computer vision device (10). The type of user includes, but is not limited to: super admin; admin; operator; and/or manufacturer.
Super admin is the user who is the owner of the device (10). There is only one super admin per device (10). No user can create or delete the super admin. The super admin creates multiple admins and/or operators for the device (10).
An admin is the user who adds, removes, and/or edits cameras and use case services. The admin may also create operators under him/her. He/she may view the alerts or reports generated on the device (10) locally. An operator is the user who may view the alerts and/or reports.
Initially, when the device (10) is authenticated, the manufacturer is added as a user. The role of the manufacturer is to add cameras, whenever the device (10) is created as a standalone device, and perform QC tests. No user can add or delete the manufacturer. There will be only one manufacturer per device (10).
The user permission management unit (132) facilitates the managing of permissions at the level of an individual user. This unit (132) allows the assigning of users with different permission levels, depending on their use. For example, the user permission management unit (132) enables the selecting of exactly what a user can see and edit.
The user login/logout unit (133) facilitates the managing of login and/or logout operations of each user, apart from enabling the user to retrieve his/her password, if he/she forgets the password.
The user authentication token generation unit (134) facilitates the generating of an authentication token, whenever a user logs into the computer vision device (10). Operations performed by any user, such as adding, removing, and/or editing cameras, adding and/or removing use case services, adding, removing, editing and/or deleting users, are carried out depending on the level of permission associated with the respective user who performs the operations, after verifying the authentication token generated. An operation is denied if the authentication token is invalid, and an alert message is issued to the user.
In yet another embodiment of the present disclosure, said authentication token is a JWT authentication token and comprises: a user name; a user role; and a secret key.
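By way of illustration, the generation and verification of such a JWT authentication token may resemble the following sketch, assuming the PyJWT package; the secret key and time-to-live are placeholders.

```python
import time
import jwt   # PyJWT -- an assumed implementation choice

SECRET_KEY = "replace-with-device-secret"   # hypothetical secret key

def issue_token(user_name, user_role, ttl_seconds=3600):
    """Generate the JWT described above: user name, user role, signed with a secret key."""
    payload = {"user": user_name, "role": user_role,
               "exp": int(time.time()) + ttl_seconds}
    return jwt.encode(payload, SECRET_KEY, algorithm="HS256")

def verify_token(token, required_role):
    """Deny the operation when the token is invalid or under-privileged."""
    try:
        claims = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return False                 # invalid token: operation denied, user alerted
    return claims.get("role") == required_role
```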
The user log unit (135) facilitates the storing and tracking of the activities of each user logging into the computer vision device (10).
In yet another embodiment of the present disclosure, the camera management container (14) is configured to facilitate the: adding, removing, and/or editing of cameras; adding and/or removing use case services to cameras; and performing camera health check-up.
As illustrated in Figure 5, the camera management container (14) comprises: an add/remove/edit camera unit (141); an add/remove services unit (142); a ROI (Region of Interest) configuration unit (143); a camera health check unit (144); a scheduler unit (145); an ONVIF (Open Network Video Interface) IN unit (146); an ONVIF OUT unit (147); and an API configuration unit (148).
The add/remove/edit camera unit (141) facilitates the adding, removing, and/or editing of cameras with the computer vision device (10) by the user. In yet another embodiment of the present disclosure, the computer vision device (10) supports three types of camera feeds: Real-Time Streaming Protocol (RTSP) stream (from IP Cameras); USB web cameras; and MIPI (Mobile Industry Processor Interface) CSI (Camera Serial Interface) cameras.
The add/remove services unit (142) facilitates the adding and/or removing of use case services to the camera(s) connected with the computer vision device (10). Multiple use case services may be added to a single camera. Similarly, a single use case service may be added to multiple cameras. Once the use case service(s) is/are added to a particular camera, the camera feed is inputted to the respective parent AI model of the added use case service(s), and alerts and/or reports are generated accordingly.
The ROI configuration unit (143) facilitates the configuring of a region of interest for a use case service within which the parent AI model is to perform its detection on an input camera feed. For example, in people IN/OUT detection use case service, the region of interest (the IN direction of people) is configured in the input feed from the camera.
For example, the IN direction is configured as the movement of people from a red line to a yellow line, by drawing the red line and the yellow line on the input feed. The IN/OUT detection use case detects and/or counts the people coming in, whenever a person moves from the red line to the yellow line in the input feed from the camera, and vice versa for OUT detection.
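A minimal sketch of such red-line/yellow-line IN/OUT counting is given below; the line positions and the per-person centroid tracks are illustrative assumptions, with the tracks assumed to come from the parent AI model's detections inside the configured region of interest.

```python
def count_in_out(tracks, red_y=200, yellow_y=260):
    """tracks: per-person list of centroid y-coordinates over time.
    A move from the red line's side to the yellow line's side counts as IN;
    the opposite order counts as OUT. Line positions are illustrative."""
    ins = outs = 0
    for ys in tracks:
        zones = []
        for y in ys:
            zone = "red_side" if y < red_y else ("yellow_side" if y > yellow_y else None)
            if zone and (not zones or zones[-1] != zone):
                zones.append(zone)          # record each new side entered
        for a, b in zip(zones, zones[1:]):
            if (a, b) == ("red_side", "yellow_side"):
                ins += 1
            elif (a, b) == ("yellow_side", "red_side"):
                outs += 1
    return ins, outs

# Example: one person walks down across both lines (IN), another walks up (OUT).
print(count_in_out([[150, 190, 230, 280], [290, 250, 210, 160]]))   # -> (1, 1)
```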
The camera health check unit (144) facilitates the detecting of the status (active or inactive) of the cameras connected with the computer vision device (10). Said camera health check unit (144) captures an image from a camera at every pre-defined interval (for example, every half an hour or one hour), and checks the dimensions of the image. If the dimensions are greater than 0, the camera is detected as active; otherwise, it is detected as inactive.
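By way of illustration, the camera health check may resemble the following sketch, assuming the OpenCV package for frame capture; the RTSP URL is a placeholder.

```python
import cv2   # OpenCV -- an assumed implementation choice

def camera_is_active(rtsp_url):
    """Capture one image and check its dimensions, as described above:
    dimensions greater than 0 mean the camera is active."""
    cap = cv2.VideoCapture(rtsp_url)
    ok, frame = cap.read()
    cap.release()
    if not ok or frame is None:
        return False
    h, w = frame.shape[:2]
    return h > 0 and w > 0

# Intended to run at the pre-defined interval, e.g. every half an hour:
# status = camera_is_active("rtsp://user:pass@192.0.2.10/stream1")
```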
The scheduler unit (145) facilitates the switching of the use case services in each camera connected with the computer vision device (10) based on user-defined preferences (e.g. working hours, holidays, non-working hours, etc.). Once the user-defined scheduling (preferences) is done, the user assigns the use case services for each schedule to each camera according to his/her requirement. This mechanism optimizes the processing power of the computer vision device (10) and increases the number of use case services present on the device (10) according to the computing capacity of the device (10).
The ONVIF IN unit (146) facilitates the detecting of an at least an ONVIF camera communicatively associated with the device (10) through the same network. Said ONVIF IN mechanism is utilized when the operating framework is embedded on an AI server and/or AI edge computing device that is/are configured to run inferences on external IP cameras.
For detecting ONVIF cameras and to establish a connection, the user provides the username and the password of the camera, along with its location details. Once the camera is connected with the device (10), details about the camera, such as encode mode, resolution, frame rate, bit rate type, video quality, RTSP link, etc., are collected.
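By way of illustration, ONVIF cameras on the same network may be detected with a WS-Discovery probe, as in the following sketch using only the standard library; full response parsing and the subsequent authenticated collection of camera details are omitted.

```python
import socket
import uuid

# WS-Discovery Probe for ONVIF devices on the local network (multicast
# 239.255.255.250:3702). The XML follows the WS-Discovery specification.
PROBE = f"""<?xml version="1.0" encoding="UTF-8"?>
<e:Envelope xmlns:e="http://www.w3.org/2003/05/soap-envelope"
            xmlns:w="http://schemas.xmlsoap.org/ws/2004/08/addressing"
            xmlns:d="http://schemas.xmlsoap.org/ws/2005/04/discovery"
            xmlns:dn="http://www.onvif.org/ver10/network/wsdl">
  <e:Header>
    <w:MessageID>uuid:{uuid.uuid4()}</w:MessageID>
    <w:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</w:To>
    <w:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</w:Action>
  </e:Header>
  <e:Body><d:Probe><d:Types>dn:NetworkVideoTransmitter</d:Types></d:Probe></e:Body>
</e:Envelope>"""

def discover_onvif_cameras(timeout=3.0):
    """Return the IP addresses of ONVIF responders on the local network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(PROBE.encode(), ("239.255.255.250", 3702))
    found = []
    try:
        while True:
            data, addr = sock.recvfrom(65535)
            found.append(addr[0])   # responder's IP; XAddrs are in the XML reply
    except socket.timeout:
        pass
    finally:
        sock.close()
    return found
```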
The ONVIF OUT unit (147) facilitates the detecting of standalone MIPI/USB-based AI cameras readable by external NVRs or ONVIF compatible devices. Said ONVIF OUT mechanism is utilized when the operating framework is embedded on a standalone AI camera. If the AI camera is made available to the network, said camera shares the details, such as encode mode, resolution, frame rate, bit rate type, video quality, RTSP link, etc., to the ONVIF compatible network device, such as NVR (Network Video Recorders), video management systems, or the like.
The API configuration unit (148) facilitates the selecting of relevant API for configuring an AI model. This mechanism enables the integrating of the disclosed operating framework with any external system, device, and/or application.
For example, if a parking lot wants to get the vehicle number, vehicle type, vehicle count, ticket number, vehicle in time, and vehicle out time in its parking management system, said user connects one or more AI cameras, all embedded with the disclosed operating framework, and calls the respective API of the operating framework for extracting the information generated by the AI model on the AI cameras.
Further, the API configuration unit (148) also helps to set up and configure the whole AI server remotely through the cloud, by just connecting to the API gateway of that AI server and sending API requests to the server for configuring the device (10).
The disclosed operating framework acts as a bridge between the developer community and the end user. To use the disclosed operating framework, the end user does not require any programming background to add use case services to the device (10) in his/her premises. For this purpose, said operating framework comprises the frontend container (22). Said frontend container (22) is configured to facilitate the interacting of the end user with the operating framework, such as adding of cameras, adding of IoT devices, adding of use case services along with the associated AI models, and adding of IoT services related to the IoT devices, without any technical knowledge.
The frontend container (22) facilitates the performing of the following activities by the user:
Acts as an interface to: add, remove, edit, and/or download use case services; add, remove, and/or edit cameras; activate and/or deactivate use case services to/from a camera; view, edit, remove, and/or download generated alerts/reports; create, edit, and/or remove users; and view or download user activity logs.
Acts as an interface to configure:
Cloud Settings: the user may choose on which cloud server he/she wants to sync the alerts or reports generated; and the user may also select on which cloud the double check mechanism is to be performed;
Storage Settings: the user may connect multiple storage devices and choose on which device the images of alerts or reports are to be stored; the user may also add cloud storage of his/her own just by entering the required credentials; if the user enters custom credentials, all the images and videos of generated alerts/reports are sent to that cloud storage;
Time Settings: the user may select the time zone and current date and time; and the user may also configure his/her own NTP server for real-time synchronization of the time;
Network Settings: the user may select the network priority, such as LAN/WiFi/4G, etc. (all the data are synced through the selected mode of communication); the user may add a new WiFi SSID; the user may also provide a static IP to the AI server/device;
Schedule Settings: the user may define the user-defined preferences (e.g. working hours, holidays, non-working hours, etc.); and the user may select which use case service is to be active in scheduled hours/days and vice versa;
Hybrid Cloud: the user may activate and/or deactivate double check mechanism for a use case service;
Data Collection and Model Making Pipeline: the user can activate and/or deactivate the provision of sharing data from a location where the device (10) is installed, with the data collection and model making pipeline cloud (23) for training the respective AI model for achieving improved accuracy;
SMTP Settings: the user may configure SMTP settings that are to be used for sending emails, if any alert or report is generated;
Notification Settings: the user may choose the mode of notification when an alert is generated (email, text message, phone call, etc.); the user may also add the email IDs and/or phone numbers, on which the notifications are to be sent;
System Update Settings: the user may update the system or use case services; the user may also schedule the updates;
System Settings: the user may safely shut down or restart the device (10);
Setting Alert Priority: the user may set the priority of alerts as HIGH/MEDIUM/LOW; the alerts generated during no internet connectivity are synced with the storage as per the priority, after internet connectivity is restored, with HIGH priority alerts being synced first, followed by MEDIUM priority and then LOW priority (see the sketch after this list);
Setting Alert Actions: the user may configure an action (triggering a hooter, or the like) to be performed against an alert; and
Profile: the user may view his/her profile, device information, etc.; the user may also perform factory reset or change the access key.
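By way of illustration (as referenced under Setting Alert Priority above), the priority-ordered syncing of alerts buffered during a connectivity outage may resemble the following sketch; the alert tuples and the send function are illustrative assumptions.

```python
import heapq

PRIORITY_RANK = {"HIGH": 0, "MEDIUM": 1, "LOW": 2}

def sync_pending_alerts(pending, send):
    """Sync alerts stored while offline, HIGH first, then MEDIUM, then LOW.
    `pending` is a list of (priority, timestamp, alert) tuples; `send` is the
    function that posts one alert to the ticketing cloud (both assumed)."""
    heap = [(PRIORITY_RANK[p], ts, alert) for p, ts, alert in pending]
    heapq.heapify(heap)
    while heap:
        _, ts, alert = heapq.heappop(heap)
        send(alert)

# Example once connectivity is restored: prints gun detected, footfall, loitering.
sync_pending_alerts(
    [("LOW", 3, "loitering"), ("HIGH", 1, "gun detected"), ("MEDIUM", 2, "footfall")],
    send=print)
```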
In yet another embodiment of the present disclosure, the at least one use case container (15) is created dynamically, whenever a use case service is downloaded to the computer vision device (10). Said at least one use case container (15) is configured to facilitate the accommodating of the use case service downloaded to the device (10). A separate use case container is created for each use case service, and one use case container is independent of another use case container.
In yet another embodiment of the present disclosure, the at least one AI model container (17) is created dynamically, whenever an AI model is downloaded as a parent AI model, when downloading a use case service. A separate AI model container is created for each AI model, and one AI model container is independent of another AI model container.
In yet another embodiment of the present disclosure, a single AI model is a parent AI model for one or more use case services.
In yet another embodiment of the present disclosure, the at least one IoT service container (25) is created dynamically, whenever an IoT device is connected/added to the computer vision device (10). Said at least one IoT service container (25) is configured to facilitate the accommodating of a service associated with the IoT device that is connected to the device (10). A separate IoT service container is created for each IoT service associated with the IoT device connected, and one IoT service container is independent of another IoT service container.
In yet another embodiment of the present disclosure, the use case services, the AI models, and the IoT services are developed and made available to the users by third parties. The operating framework further comprises an at least a wrapper module (26; Figure 6), which enables the third parties to develop the use case services, AI models, and IoT services with ease.
Said at least one wrapper module (26) is configured to facilitate internal communication between the use case services [i.e., the at least one use case container (15)], the AI models [i.e., the at least one AI model container (17)], the IoT services [i.e., the at least one IoT service container (25)], and the at least one socket server (16).
In yet another embodiment of the present disclosure, the at least one socket server (16) is configured to facilitate the establishing of communication between the at least one use case container (15), the at least one AI model container (17), and the at least one IoT service container (25).
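A minimal sketch of such a socket server (16), together with a wrapper module (26) that hides the socket plumbing from a third-party service, is given below; the JSON-lines registration and addressing scheme is an illustrative assumption.

```python
import json
import socket
import threading

# Sketch of the socket server (16): containers connect over TCP, register a
# name, and exchange JSON messages addressed by container name (assumed scheme).
clients = {}
lock = threading.Lock()

def handle(conn):
    name = None
    try:
        for line in conn.makefile():
            msg = json.loads(line)
            if msg.get("register"):                  # e.g. {"register": "usecase-1"}
                name = msg["register"]
                with lock:
                    clients[name] = conn
            else:                                    # e.g. {"to": "aimodel-1", "data": ...}
                with lock:
                    target = clients.get(msg["to"])
                if target:
                    target.sendall((json.dumps(msg) + "\n").encode())
    finally:
        with lock:
            if name and clients.get(name) is conn:
                del clients[name]
        conn.close()

def serve(port=9000):
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

class Wrapper:
    """Sketch of the wrapper module (26): hides the socket plumbing from a
    third-party use case, AI model, or IoT service."""
    def __init__(self, name, host="localhost", port=9000):
        self.sock = socket.create_connection((host, port))
        self.sock.sendall((json.dumps({"register": name}) + "\n").encode())
    def send(self, to, data):
        self.sock.sendall((json.dumps({"to": to, "data": data}) + "\n").encode())
```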
In yet another embodiment of the present disclosure, the service registry cloud (24) is configured to facilitate: the registering of the third-party developers with the operating framework; uploading of the use case services, AI models, IoT services, and/or the like; assigning of a unique service ID for each: use case service or AI model or IoT service; and making the use case services, AI models, IoT services, and/or the like available for the end users to purchase, and download them to the device (10).
In yet another embodiment of the present disclosure, the data collection and model making pipeline cloud (23) is configured to facilitate the improving of the performance accuracy of the device (10), by collecting data from the location where the device (10) is installed, and training the respective AI model, with the user's consent.
The device (10) captures images from all the cameras connected with the device (10) regularly at a predetermined interval. Said images are shared with the data collection and model making pipeline cloud (23) every day at a scheduled time. The image data from all such devices is stored on the data collection and model making pipeline cloud (23), and annotated for the improvement of the AI model. After the completion of data annotation, the transfer learning of the AI model is performed. Finally, the AI model is redeployed on the respective device, after the completion of testing on the data collection and model making pipeline cloud (23).
In yet another embodiment of the present disclosure, the predetermined interval is 5 minutes.
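Purely as an illustration, the periodic capture and deferred upload described above could look like the following sketch, which assumes OpenCV for frame grabbing; the save_frame callable and the camera stream links are hypothetical.

```python
# Hypothetical sketch: grab one frame per camera every 5 minutes; frames
# are queued locally and uploaded to the pipeline cloud at a scheduled time.
import time
import cv2

CAPTURE_INTERVAL_S = 5 * 60  # the disclosed 5-minute interval

def capture_loop(rtsp_urls, save_frame):
    while True:
        for url in rtsp_urls:
            cap = cv2.VideoCapture(url)
            ok, frame = cap.read()
            cap.release()
            if ok:
                save_frame(url, frame)  # queued until the scheduled upload
        time.sleep(CAPTURE_INTERVAL_S)
```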
The working of the operating framework for computer vision devices, particularly, the at least one use case container (15), the at least one socket server (16), the at least one AI model container (17), and the at least one IoT service container (25) shall now be explained with the help of Figure 6.
For the purpose of illustration, it is assumed that three cameras (C1, C2, and C3) and three IoT devices [a boom barrier (27), a hooter (28), and a thermal sensor (29)] are connected to the computer vision device (10) embedded with the disclosed operating framework.
The camera C1 is for generating alerts on detection of a gun.
The camera C2 is for: generating alerts on detection of a gun; generating alerts on detection of a person loitering; and generating a daily footfall count.
The camera C3 is for generating a daily footfall count.
The boom barrier (27) is for allowing or restricting a person entering or leaving, depending on the condition.
The hooter (28) is for generating an audio alert, depending on the condition.
The thermal sensor (29) is for capturing the body temperature of a person.
At the time of embedding the disclosed operating framework on the computer vision device (10), the manufacturer-level authentication is performed to avoid the duplication of the device (10). During this process, a unique serial number is assigned to the computer vision device (10), and said device serial number, along with other details related to the device (10), such as manufacturer key, device type (standalone, AI server, etc.), password, etc. are collected and maintained on the authentication cloud (19). This information is used later when the end user tries to activate the device (10).
Initially, the computer vision device (10) does not contain any use case services or AI models, and does not have any IoT devices connected with it. After the device (10) is activated by the user and the necessary IoT devices are connected, the required use case services and IoT services are to be downloaded from the service registry cloud (24) and added to the device (10). When downloading the required use case services and IoT services, the service management container (12) collects the service identity parameters of said use case services and IoT services, and checks the hardware limitations and the resource availability of the vision computing device (10) against the service identity parameters collected. The service management container (12) allows the downloading of the use case services and IoT services, only if the use case services and the IoT services are within the allowed limits.
Further, the service management container (12) tries to identify the presence of the parent AI model related to the use case service that is downloaded on the computer vision device (10). If the parent AI model is not present, the service management container (12) collects the service identity parameters of the parent AI model, checks the hardware limitations and the resource availability of the vision computing device (10), and decides whether or not to download the use case service along with the parent AI model container.
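A minimal sketch of this pre-download check follows; the service identity parameter fields (disk_bytes, ram_bytes) are hypothetical, as the disclosure does not enumerate them, and the psutil library is assumed for memory inspection.

```python
# Hypothetical sketch: allow a download only if the device can accommodate
# the service, per its (illustrative) service identity parameters.
import shutil
import psutil  # assumed available for RAM inspection

def may_download(service_params: dict) -> bool:
    free_disk = shutil.disk_usage("/").free
    free_ram = psutil.virtual_memory().available
    return (service_params.get("disk_bytes", 0) <= free_disk
            and service_params.get("ram_bytes", 0) <= free_ram)
```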
In the present illustration, three use case services GUN DETECTION, LOITERING WITH GUN, and PEOPLE COUNT are downloaded from the online repository. Since no use case services or AI models are available on the device (10), the parent AI models AIGUN and AIPERSON associated with the use case services are also to be downloaded.
Once the use case services with their related parent AI models, and the IoT services, are downloaded, separate containers for each use case service, AI model, and IoT service are created dynamically by the operating framework, and the downloaded services are stored within the respective containers. In the present illustration, the use case services GUN DETECTION, LOITERING WITH GUN, and PEOPLE COUNT are stored inside the use case containers 151, 152, and 153, respectively. The AI models WEAPON DETECTION and PERSON DETECTION are stored within the AI model containers 171 and 172, respectively. Similarly, the IoT services BOOM BARRIER TRIGGERING, HOOTER TRIGGERING, and THERMAL SENSOR READING are stored within the IoT service containers 251, 252, and 253, respectively.
After the required use case services, the AI models, and the IoT services are downloaded, the cameras are added to the device (10). The frontend container (22) facilitates the user in finding the cameras connected to the device (10). The camera management container (14) facilitates the adding of the cameras to the device (10) and assigns a name to each camera added (C1, C2, and C3).
After adding the cameras (C1, C2, and C3) to the computer vision device (10), the required use case services are to be activated on each camera. In the present illustration, the use case service GUN DETECTION and the IoT service BOOM BARRIER TRIGGERING are to be added (assigned) to camera C1; the use case services GUN DETECTION, LOITERING, and PEOPLE COUNT, and the IoT services BOOM BARRIER TRIGGERING and HOOTER TRIGGERING are to be added to camera C2; and the use case service PEOPLE COUNT, and the IoT service THERMAL SENSOR READING, are to be added to camera C3.
Once the use case services and the IoT services are added to the respective cameras, the camera management container (14) stores the details of the cameras (C1, C2, and C3), along with the IoT services and the use case services (with their relevant parent AI model details) added to each camera. The camera management container (14) then shares these details with the at least one socket server (16).
After the receipt of this information, the at least one socket server (16) dynamically creates rooms (C11, C21, C22, C23, and C33) for each use case service associated with each camera. The added use case services and the relevant AI models also dynamically create rooms (U1, U2, and U3) and (A1 and A2), respectively. Similarly, the added IoT services dynamically create rooms (IOT1, IOT2, and IOT3). These rooms facilitate the establishing of communication between the at least one socket server (16), the use case containers (151, 152, and 153), the AI model containers (171 and 172), and the IoT service containers (251, 252, and 253).
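For illustration, the dynamic room creation could be realised as in the following sketch, assuming the python-socketio library on the socket server side; the "join" event and its payload shape are hypothetical.

```python
# Hypothetical sketch: each container announces its room on connect; the
# socket server creates the room on first join (e.g. "A1", "U2", "C21").
import socketio

sio = socketio.Server()
app = socketio.WSGIApp(sio)  # served by any WSGI server when deployed

@sio.event
def join(sid, data):
    sio.enter_room(sid, data["room"])  # room is created implicitly on first join
```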
Subsequently, the at least one socket server (16) shares the link of each camera (C1, C2, and C3) with the AI model containers (171 and 172), on which the AI model containers (171 and 172) have to run inferences.
For running the inferences, firstly, the AI model containers (171 and 172) create an image buffer, where all the frames generated by all the cameras (C1, C2, and C3) are stored.
Secondly, the AI model containers (171 and 172) access the image buffer and run the inferences on each frame stored on the image buffer.
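Illustratively, the image buffer and inference loop inside an AI model container might resemble the following sketch, assuming OpenCV capture; infer() stands in for the actual AI model, publish() for delivery to the model's room, and a real implementation would bound the buffer.

```python
# Hypothetical sketch: buffer frames from all assigned cameras and run the
# model on each one, tagging results with camera ID and buffer index.
import cv2

def run_inference(camera_links: dict, infer, publish):
    image_buffer = []  # unbounded here; a real buffer would be size-limited
    caps = {cam: cv2.VideoCapture(link) for cam, link in camera_links.items()}
    while True:
        for cam, cap in caps.items():
            ok, frame = cap.read()
            if not ok:
                continue
            image_buffer.append(frame)
            result = infer(frame)  # detections for this frame
            result.update(camera_id=cam, buffer_index=len(image_buffer) - 1)
            publish(result)        # sent to the model's room (A1 or A2)
```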
After running the inferences, the AI model containers (171 and 172) generate a result for each inferred frame; said result may include the following parameters (a representative structure is sketched after this list):
a. Camera ID or Source ID: It indicates the source ID or camera ID of the inferred frame. This is useful if multiple cameras are connected to a single AI model container;
b. Object Name: If an object is detected by an AI model, it outputs the object name, i.e., it indicates which object is detected (Person, Gun, Mask). This parameter is useful if the model is trained on multiple object classes;
c. Object ID: Whenever an object is detected, the AI model container assigns an object ID to each object. The object ID is useful when multiple objects of the same object class are detected in a single frame;
d. Coordinates of Each Object ID: It gives the X and Y coordinates of each object ID detected in a particular frame;
e. Buffer Index: It gives the frame ID in the image buffer; and
f. Current Hour: It gives the current working hour, i.e., whether it is a scheduled or an unscheduled hour. This is useful if a single use case has two behaviours.
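The representative structure referred to above is sketched below; the field names are illustrative, as the disclosure lists the parameters but not a concrete format.

```python
# Hypothetical sketch of one inference result, mirroring parameters a-f.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class InferenceResult:
    camera_id: str                      # a. source/camera ID of the inferred frame
    object_name: str                    # b. e.g. "Person", "Gun", "Mask"
    object_id: int                      # c. unique per detected object
    coordinates: List[Tuple[int, int]]  # d. X/Y coordinates of each object ID
    buffer_index: int                   # e. frame ID in the image buffer
    current_hour: str                   # f. "scheduled" or "unscheduled"
```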
The result generated by the AI model container 171 is sent to the room A1, and the result generated by the AI model container 172 is sent to the room A2.
Then, the at least one socket server (16) reads the results on the rooms A1 and A2, runs the segregation logic to generate pruned results according to the associations between the use case services and the cameras, and stores the pruned results on the relevant rooms (C11, C21, C22, C23, and C33).
For example, the room A1 has the results of all the inferences run on the AI model container (171). After the at least one socket server (16) runs the segregation logic, the room C11 only has the results of the GUN DETECTION use case, because that is the use case service assigned to the camera C1.
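A minimal sketch of the segregation logic, using the camera-to-use-case assignments of the present illustration, is given below; the mapping table and the send_to_room callable are hypothetical.

```python
# Hypothetical sketch: route a raw model result into the per-camera,
# per-use-case rooms, per the assignments in the present illustration.
ROOM_FOR = {
    ("C1", "GUN DETECTION"): "C11",
    ("C2", "GUN DETECTION"): "C21",
    ("C2", "LOITERING"): "C22",
    ("C2", "PEOPLE COUNT"): "C23",
    ("C3", "PEOPLE COUNT"): "C33",
}

def segregate(result: dict, use_cases_for_model, send_to_room):
    for use_case in use_cases_for_model:
        room = ROOM_FOR.get((result["camera_id"], use_case))
        if room is not None:
            send_to_room(room, result)  # pruned result for that use case
```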
At the same time, the switching of the use case services assigned to each camera, according to the user-defined preferences set through the scheduler unit (145), is also managed by the at least one socket server (16).
The use case services run their logic on the pruned results, and generate relevant alerts and/or reports, apart from storing them on their own data stores. The generated alerts and/or reports are stored on the rooms U1, U2, and U3. At the same time, when an alert and/or a report is generated, the use case service communicates with the base container (11), and transfers the alert and/or report details to the base container (11), which stores them for future reference.
In the present illustration, the use case container 151 generates an alert whenever a gun is detected in the image feed from camera C1 or camera C2. The generated alerts and/or reports are stored on the room IOT1. The IoT service container 251 reads the alerts and/or reports stored on the room IOT1, and instructs the boom barrier (27) to open or close accordingly.
The use case container 152, in association with the AI model containers 171 and 172, can generate an alert upon the detection of a person loitering (with or without a gun). The generated alerts and/or reports are stored on the room IOT2. The IoT service container 252 reads the alerts and/or reports stored on the room IOT2, and instructs the boom barrier (27) to open or close, and the hooter (28) to generate an audio alert, accordingly.
Similarly, the use case containers 152 and 153 individually generate people count reports, based on the image feeds from cameras C2 and C3, respectively. The generated alerts and/or reports are stored on the room IOT3. Further, since the connected IoT device, i.e., the thermal sensor (29), is an input device, the values of the thermal sensor (29) are stored on the room IOT3, with the help of the IoT service container (253). The thermal sensor (29) values are shared with the PEOPLE COUNT use case through the room IOT3. Said PEOPLE COUNT use case generates an alert, if a person with a high temperature is detected.
If the use case services want to save the source images related to an alert and/or a report, they share their request, along with the parent container ID and the buffer indexes (frame IDs), with the at least one socket server (16), and the at least one socket server (16) instructs the respective parent AI model containers to save the images related to those frame IDs.
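Illustratively, such a save-image request could be emitted as in the sketch below, assuming the same Socket.IO-style client as in the wrapper sketch above; the event name and payload keys are hypothetical.

```python
# Hypothetical sketch: a use case asks the socket server to have the parent
# AI model container persist the frames behind an alert or report.
import socketio

def request_source_images(sio: socketio.Client,
                          parent_container_id: str,
                          frame_ids: list) -> None:
    sio.emit("save_frames", {
        "parent_container_id": parent_container_id,
        "buffer_indexes": frame_ids,  # frame IDs in the parent's image buffer
    })
```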
Finally, the base container (11) syncs the alerts and/or the reports to the ticketing cloud (20) based on the user preference.
The disclosed operating framework for computer vision devices offers the following advantages:
Faster real-time results at a large scale: By running image recognition on premise and analysing images or video streams in real time, the framework provides significantly faster insights and compliance levels, delivered at high availability and at a large scale.
Enhanced operational reliability: It allows users to locally process images and receive actionable insights from connected devices, without worrying about connectivity issues in their premises.
Increased security for devices and data: By analysing images locally on the devices instead of sending raw data to the cloud, it helps the users by eliminating the need to send large and potentially sensitive data to the cloud.
Flexible and scalable: The users can download the use case services according to their requirements, without any technical skills such as programming knowledge or a detailed understanding of computer vision mechanisms. Similarly, the manufacturers can manufacture computer vision devices without such technical skills.
Ease of Installation: Any person with only basic knowledge can install and configure the computer vision devices with the disclosed operating framework.
It will be apparent to a person skilled in the art that the above description is for illustrative purposes only and should not be considered as limiting. Various modifications, additions, alterations and improvements without deviating from the spirit and the scope of the disclosure may be made by a person skilled in the art. Such modifications, additions, alterations and improvements should be construed as being within the scope of this disclosure.
LIST OF REFERENCE NUMERALS
10 – Computer Vision Device/Vision Computing Device
11 – Base Container
12 – Service Management Container
13 – User Management Container
14 – Camera Management Container
15 – At Least One Use Case container
16 – At Least One Socket Server
17 – At Least One AI Model Container
18 – API Gateway
19 – Authentication Cloud
20 – Ticketing Cloud
21 – Hybrid Cloud
22 – Frontend Container
23 – Data Collection and Model Making Pipeline Cloud
24 – Service Registry Cloud
25 – IoT Service Container
26 – At Least One Wrapper Module
27 – Boom Barrier
28 – Hooter
29 – Thermal Sensor
111 – Manufacturer Authentication Unit
112 – User Device Activation Unit
113 – Alert Syncing and Viewing Unit
114 – Memory Management Unit
115 – User Notification Unit
116 – Communication Unit
117 – Double Check Mechanism
118 – Alert Analysis Unit
119 – Device and Network Analysis Unit
121 – Service Authentication Unit
122 – Add/Remove/Update Service Unit
123 – Activate/Deactivate Service Unit
124 – Service Analysis Unit
125 – Service Discovery Unit
126 – Service Start/Stop Unit
131 – Add/Remove/Edit User Unit
132 – User Permission Management Unit
133 – User Login/Logout Unit
134 – User Authentication Token Generation Unit
135 – User Log Unit
141 – Add/Remove/Edit Camera Unit
142 – Add/Remove Service Unit
143 – ROI Configuration Unit
144 – Camera Health Check Unit
145 – Scheduler Unit
151, 152, 153 – Containers for the Use Case Services GUN DETECTION, LOITERING, and PEOPLE COUNT, Respectively
171, 172 – Containers for the AI Models WEAPON DETECTION and PERSON DETECTION, Respectively
251, 252, 253 – IoT Service Containers for BOOM BARRIER TRIGGERING, HOOTER TRIGGERING, and THERMAL SENSOR READING, Respectively
C1, C2, C3 – Cameras
U1, U2, U3 – Rooms for the Use Case Services GUN DETECTION, LOITERING, and PEOPLE COUNT, Respectively
A1, A2 – Rooms for the AI Models WEAPON DETECTION and PERSON DETECTION, Respectively
IOT1, IOT2, IOT3 – Rooms for the IoT Services BOOM BARRIER TRIGGERING, HOOTER TRIGGERING, and THERMAL SENSOR READING, Respectively
C11 – Room for Camera C1, GUN DETECTION use case service in WEAPON DETECTION AI Model
C21 – Room for Camera C2, GUN DETECTION use case service in WEAPON DETECTION AI Model
C22 – Room for Camera C2, LOITERING use case service in PERSON DETECTION AI Model
C23 – Room for Camera C2, PEOPLE COUNT use case service in PERSON DETECTION AI Model
C33 – Room for Camera C3, PEOPLE COUNT use case service in PERSON DETECTION AI Model