ABSTRACT
Title: METHOD AND SYSTEM ADAPTED FOR ASYMMETRICALLY SCALABLE CLOUD STACK FOR VIDEO COMPUTING.
The present invention discloses a system for asymmetrically scalable cloud stacking of micro-services relating to video computing applications according to user-requested video computing servicing, and a method thereof. The present system includes storage devices embodying different micro-services for different video computing applications, and a plurality of servers operatively connected to each other, having access to said micro-services and enabled for operating on video data from a plurality of sources based on users' requests for one or more video computing services on said video data, including selectively breaking down said user-requested video computing services into a stack of independent micro-services to cater to each of said user-requested video computing services on the video data. Figure 1
FIELD OF THE INVENTION:
The present invention relates to video computing servicing on a cloud-based network. More specifically, the present invention is directed to a system and method for carrying out an asymmetrically scalable cloud stack for video computing, involving a plurality of servers, storage devices and communication channels to receive video data from a plurality of devices and a plurality of users' requests for one or more video computing services, with advancements in breaking down those requests into a set of independent micro-services embodied in the system to cater to a variety of video computing tasks. The system and method interconnect those micro-services automatically to service the requests and deploy those micro-services in various relatable servers, storage devices and communication channels. Advantageously, the system is further scalable automatically by replicating required micro-services through a cloud-resident auto-scaling framework, depending on the computing load demanded by those micro-services. As different video computing requests get broken down into different sets of micro-services, and a particular micro-service has a different computing requirement compared to others, the system achieves the unique and user-friendly capability of being asymmetrically scaled up dynamically and automatically to meet the overall computing load generated by a particular video computing request. A set of micro-services has been designed, and an architecture developed, so that different user-selectable video computing requests can be served through interconnection of those micro-services only.
BACKGROUND OF THE INVENTION:
For many years, video computing services, including services related to video management, video analytics, etc., have been deployed to serve various requirements in the video surveillance and video summarization domains, but are not limited to the same. In a traditional Video Management System (VMS), the installation of servers and the software applications hosted in those servers is on-premise, with very little flexibility to share the computing capability of the servers and other hardware devices across multiple users having their own separate requirements. For example, in an on-premise VMS, even if there are multiple users, they belong to a particular organization having a common set of cameras, sensors and other devices to monitor and manage. The organization has to procure the hardware by calculating the combined expected load. A cloud computing environment delivers a solution with isolated services where the hardware resource pool is shared amongst multiple independent users (the SaaS model). However, when there are users with complex requirements consisting of multiple interconnected services, particularly dealing with streamed and voluminous data like video, no such system is available. There are systems that offer video management functionality as a service in a semi-automated manner, where hardware resources are pre-provisioned in the cloud based on a fixed, predetermined set of services requested by users when the users are inducted into the system, and additional hardware resources are provisioned manually against the user account if users request additional services, thus not providing a truly automatically scalable service-based model for video computing tasks when the tasks consist of a combination of multiple interconnected services.
OBJECT OF THE INVENTION:
It is thus the basic object of the present advancement to provide for advancements in systems and/or methods for carrying out asymmetrically scalable cloud stack required for value addition and end user benefits in variety of video computing servicing.
Another object of the present advancement is directed to advancements in systems and/or methods for carrying out an asymmetrically scalable cloud stack adapted for a plurality of servers, storage devices and communication channels to receive video data from a plurality of devices, and a plurality of users' requests for one or more video computing services, involving selective breaking down of those requests into a set of independent micro-services to cater to a wide variety of video computing tasks.
Yet another object of the present invention is to develop a video management/computing system which would be adapted to provide video management/computing functionality as a service involving minimum hardware resource to cater users request for additional video management/computing functionality services.
Another object of the present invention is to develop a video management/computing system which would be adapted to provide video management/computing functionality as a service involving any pre-provisioned hardware resources in a cloud-based network adapted for automatic scalability and establishing availability of a combination of multiple hardware resources corresponding to interconnected services for the video-related management/computing functionality as per end-user requirements and demands, thereby making it cost-effective and operationally more flexible and accommodative of more loads with optimized resources and facilities.
A still further object of the present invention is directed to advancements in computer implemented systems and methods for provisioning hardware resources in cloud based network for any fixed predetermined set of services/operation and automatically establishing a combination of multiple hardware resources corresponding to interconnected services/operations for completion of an assignment.
Yet another object of the present invention is to develop a cloud-network-based method or system to receive video data and users' requests for one or more video computing services relating to the received video data, and to break down the requests into a set of independent micro-services to cater to the video computing tasks relating to the requested video computing services by interconnecting those micro-services automatically to service the requests.
SUMMARY OF THE INVENTION:
Thus, according to the basic aspect of the present invention, there is provided a system for carrying out an asymmetrically scalable cloud stack of micro-services relating to video computing applications according to user-requested video computing servicing, comprising
storage devices embodying different micro-services for different video computing applications;
a plurality of servers operatively connected to each other, having access to said micro-services and enabled for operating on video data from a plurality of sources based on users' requests for one or more video computing services on said video data, including selectively breaking down said user-requested video computing services into a stack of independent micro-services to cater to each of said user-requested video computing services on the video data.
In a preferred embodiment of the present system, the plurality of servers includes
at least one receiver server to receive the video computing service requests from the users over a computer network channel;
at least one micro-service chain creator server connected to said receiver server over the network to receive the service requests passed by the receiver server and break down each of the service requests into a set of the micro-services;
at least one micro-service mapping server in the network, operatively connected to the micro-service chain creator server, to create a micro-service interconnect map for each set of the micro-services corresponding to a service request.
In a preferred embodiment of the present system, the micro-service mapping server specifies feeding of output (video stream) of a micro-service as input of a subsequent micro-service (video analytics application) in the map.
In a preferred embodiment of the present system, the micro-services generate output in the form of actions, for example, storing the result of a computation in a data pool or creating a video clip from time T1 to time T2.
In a preferred embodiment, the present system comprises at least one micro-service server in the network having access to the task lists of all the micro-services, said micro-service server reading the micro-service interconnect map and accordingly populating the task-lists of the respective micro-services included in the micro-service interconnect map, avoiding replication of micro-service activities in the system.
In a preferred embodiment of the present system, the task list of a specific micro-service contains information of (i) the location of input data and (ii) the location where the output needs to be sent for a specific task; said list includes multiple tasks, each having different and independent input and output specifications, and the input and output source can be a data pool in a Cloud-resident storage device or a uniform resource locator (URL).
In a preferred embodiment of the present system, the receiver server includes a Graphical User Interface (GUI) to receive the video computing service requests from the users and checks the credentials of the users by using a standard authentication and authorization server.
In a preferred embodiment of the present system, the micro-service chain creator server is equipped with the capability to select the necessary micro-service chains to serve a specific video computing request, and the micro-services are parameterized so that their input, output and QoS attributes can be defined per instance of the micro-service.
In a preferred embodiment, present system is adapted to provide video management/computing functionality as a service comprising
hardware resources operatively connectable in the network; and
automatically configurable interconnection of selective hardware resources from said set of hardware resources corresponding to interconnected services for the video related management/computing functionality involving selective breaking down those requests into the set of independent micro-services to cater to wide variety of video computing tasks.
According to another aspect of the present invention, there is provided a method for asymmetrically scalable cloud stacking of micro-services relating to video computing applications according to user-requested video computing servicing, comprising
involving the storage devices embodying different micro-services for different video computing applications;
involving the receiver server to receive the video computing service requests from the users over a computer network channel;
involving the micro-service chain creator server, connecting the same to the receiver server over the network to receive the service requests passed by the receiver server, and breaking down each of the service requests into a set of the micro-services;
involving the micro-service mapping server in the network and connecting the same to the micro-service chain creator server to create a micro-service interconnect map for each set of the micro-services corresponding to a service request;
involving the micro-service server in the network and providing it access to the task lists of all the micro-services, such that the micro-service server reads the micro-service interconnect map and accordingly populates the task-lists of the respective micro-services included in the micro-service interconnect map, avoiding replication of micro-service activities in the system.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS:
Fig. 1 shows top level architecture of the present system in accordance with the present invention.
Fig. 2 shows request to micro-service mapping in accordance with the present system.
Fig. 3(a) shows interconnection of micro-services using Datapool in accordance with the present system.
Fig. 3(b) shows creation of micro-service chains, and entry in task-lists for the "Record and stream live" video computing request, in accordance with the present system.
Fig. 4(a) shows micro-service deployment mechanism in accordance with the present system.
Fig. 4(b) shows hardware resource allocation strategy in accordance with the present system.
DESCRIPTION OF THE INVENTION WITH REFERENCE TO THE ACCOMPANYING DRAWINGS:
As stated hereinbefore, the present invention discloses a system and a method for carrying out an asymmetrically scalable cloud stack for value addition and end-user benefits in a variety of video computing servicing. The system of the present invention includes a plurality of servers, storage devices and communication channels to receive video data from a plurality of devices, and a plurality of users' requests for one or more video computing services, involving selective breaking down of those requests into a set of independent micro-services to cater to a wide variety of video computing tasks in a more cost-effective and end-user-friendly manner.
The present system and its operating method can include hardware resources in a cloud-based network, even in networks involving any fixed, predetermined set of services/operations, enabling an automatically and dynamically scalable combination of multiple hardware resources corresponding to interconnected services/operations for more effective and end-user-compliant completion of an assignment. The present system and its operating method are unique in the sense that a user can add her devices with various time-variant video computing service requests, and a common computing backbone consisting of various micro-services auto-scales to serve those requests. There is no need to pre-assign any computing resource to any particular user or to a particular service request as a whole. Rather, the service requests from the users are automatically broken down into a pre-defined set of micro-services specific to video computing needs, an interconnection within those micro-services is automatically established, and only the relevant micro-services are scaled up to serve the requests. If the user adds or deletes any device, or modifies her demand for any particular video computing task, the system automatically decides the change in interconnection of the micro-services and readjusts its state in terms of interconnection of micro-services and its overall demand for computing resources. For example, if a particular user had previously requested recording of a 25 fps video generated by a specified device in Cloud storage and now changes her request to record only a single image per second, the micro-services related to video grabbing and video recording are scaled down, whereas those related to image grabbing and image recording are scaled up at the backend computing platform.
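The readjustment described above can be sketched as a set difference over the micro-services each request decomposes into; only the changed micro-services are scaled. The mapping and names below (e.g. `MICRO_SERVICE_MAP`, `videoGrabber`) are illustrative assumptions, not part of the specification:

```python
# Illustrative only: which micro-services each user-level request decomposes
# into. The keys and micro-service names are hypothetical examples.
MICRO_SERVICE_MAP = {
    "record_video_25fps": {"videoGrabber", "videoRecorder"},
    "record_image_1fps": {"imageGrabber", "imageRecorder"},
}

def rescale_plan(old_request, new_request):
    """Return (scale_down, scale_up) micro-service sets for a request change.

    Micro-services common to both requests are left untouched; only the
    difference is scaled, which is the asymmetric-scaling behaviour.
    """
    old = MICRO_SERVICE_MAP[old_request]
    new = MICRO_SERVICE_MAP[new_request]
    return old - new, new - old

# The 25 fps video example from the text: video micro-services scale down,
# image micro-services scale up.
down, up = rescale_plan("record_video_25fps", "record_image_1fps")
```

With this decomposition, a request change never forces a wholesale redeployment; the backend adjusts only the micro-services whose demand actually changed.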
As another example, if a user requests video analytics applications related to object classification, the micro-service that classifies objects from the video stream is automatically connected to the micro-service related to the video grabber. Thus, the system can be used to serve many geographically distributed independent users for their video computing needs, including but not limited to hosting a time-variant, reconfigurable, auto-scalable VMS with shared and optimal usage of hardware resources.
In a preferred embodiment, the system of the present invention consists of at least one server, one storage unit, and a network communication channel to enable users to send video computing service requests to the system. There are one or more independent users with their own sets of devices that generate video data. The users use this proposed system to perform various tasks related to video computing, viz. recording, streaming to directed recipients, running various video analytics algorithms, receiving metadata generated by those video analytics algorithms, searching the metadata and video based on various filters, etc.
Reference is now invited to the accompanying Fig. 1, which describes the top-level functionality of the system. As shown in Fig. 1, the receiver server (101) of the present system receives video computing service requests from users over a computer network channel. Users create those requests by means of a Graphical User Interface (GUI) provided by the receiver server. The user interface converts the user's request into a JSON object and passes it on to the receiver server (101). The receiver server (101) checks the credentials of the user using a standard authentication and authorization server (001), e.g. OAuth2.
The receiver server (101) passes this service request to the micro-service chain creator server (102) in the network. The creator server (102) internally breaks down the service request into a set of micro-services (103) and also creates a micro-service interconnect map by involving a micro-service mapping server (105) in the network.
It is to be noted that the different micro-services (103) for different video computing applications are already embodied and running in the different storage elements of the system as part of system installation, and the new workload generated by ingestion of new service requests is added to them. The micro-services (103) generate output in the form of actions (104). Actions can be, for example, storing the result of a computation in a data pool or creating a video clip from time T1 to time T2. If the service request is, say, "to detect whether there is any person in the field of view (FOV) of a particular camera", then connecting the camera and receiving video from the specific camera is an independent micro-service, while analyzing the video frames and detecting the presence of a person using standard video analytics algorithms is another micro-service. The micro-service mapping server (105) specifies how the output (video stream) of the first micro-service above is to be fed as input to the second micro-service (video analytics application). A micro-service server (107) uses this rule to update the task-lists (106) associated with the said two micro-services.
Fig. 2 describes the internal working mechanism of the micro-service chain creator server (102). The receiver server (101) receives a video computing service request (1021) from a user. The creator server (102) breaks down the service request into one or more micro-service chains and constructs a micro-service interconnect map, by using the server (105), with those micro-service chains as its content. The micro-service server (107), which has access to the task lists of all the micro-services, reads this micro-service interconnect map and populates the task-lists (106) of the respective micro-services. The task list of a specific micro-service contains the information of (i) the location of input data and (ii) the location where the output needs to be sent for a specific task. There can be many tasks in the list, each having different and independent input and output specifications. The input and output source can be a data pool in a Cloud-resident storage device; it can also be a uniform resource locator (URL).
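The mechanism above can be sketched as follows. All names (`grabCamera`, the `rtsp://` and `pool://` locations, the tuple layout of a chain) are hypothetical illustrations, not taken from the specification; the point is that each task-list entry records only an input location and an output location, and identical tasks shared by several chains are entered once:

```python
# Hedged sketch of how a micro-service server (107) might populate
# per-micro-service task lists (106) from an interconnect map.
from collections import defaultdict

def populate_task_lists(interconnect_map):
    """interconnect_map: list of chains; each chain is an ordered list of
    (micro_service, input_location, output_location) task tuples."""
    task_lists = defaultdict(list)
    for chain in interconnect_map:
        for service, src, dst in chain:
            entry = {"input": src, "output": dst}
            # Skip replicated tasks so a shared micro-service runs only once.
            if entry not in task_lists[service]:
                task_lists[service].append(entry)
    return dict(task_lists)

# Two chains for one camera: record to storage, and stream live.
imap = [
    [("grabCamera", "rtsp://cam1", "pool://frames1"),
     ("recordVideo", "pool://frames1", "pool://storage1")],
    [("grabCamera", "rtsp://cam1", "pool://frames1"),
     ("streamLive", "pool://frames1", "https://viewer.example/live")],
]
lists = populate_task_lists(imap)
```

Here `grabCamera` appears in both chains but receives a single task entry, matching the redundancy-avoidance behaviour described for the micro-service server (107).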
The micro-service chain creator server (102) is equipped with the capability to select the necessary micro-service chains to serve a specific video computing request. The micro-services are parameterized so that their input, output and QoS attributes can be defined per instance of the micro-service. The micro-service server (107) avoids replication of activities within the system. For example, let one micro-service chain consist of the following micro-services:
1(i) analysis of video frames to detect presence of a car (first micro-service in this chain),
1(ii) recognize the number plate of the car (second micro-service),
while another micro-service chain consists of
2(i) analysis of video frames to detect the presence of a car (first micro-service in this chain),
2(ii) identify whether the driver is wearing a seat-belt (second micro-service in this chain), and
2(iii) recognize the number plate of the car (third micro-service in this chain).
Since the micro-services 1(i) and 2(i), as well as 1(ii) and 2(iii), are identical (assuming they run on the same camera video feed), only one instance of each service will run in the system.
Defining a set of micro-services as basic building blocks of video computing and interconnecting them to accomplish a given task while avoiding these types of redundancy is unique and innovative. Considering the voluminous nature of video and the extent of computation or storage, or both, it takes to store, stream or analyse it, this mechanism of defining a set of micro-services, connecting them using chains, including them within an interconnect map and executing the micro-services while avoiding redundancy is highly cost-effective.
service_request_i = {interconnect_map_i}
interconnect_map_i = {{chain_1, chain_2, …, chain_n}, {chain_2, chain_7}, …}
chain_1 = {task_a, task_b, task_k}, chain_2 = {task_b, task_x, task_y, task_k}
There can be multiple video computing tasks that are common to two or more chains. Identifying those tasks related to video computing services so that they can be installed and executed independently of one another and included in multiple chains is a key invention. That, although a particular task is included in more than one chain, only one instance of the corresponding micro-service runs in the system is a key innovation.
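This de-duplication across chains can be sketched for the car-detection example above. The task names (`detectCar`, `recognizePlate`, `checkSeatBelt`) are hypothetical stand-ins for the micro-services 1(i)–2(iii):

```python
# Hedged sketch: chains may share tasks, but each distinct task runs once.
chain_1 = ["detectCar", "recognizePlate"]
chain_2 = ["detectCar", "checkSeatBelt", "recognizePlate"]

interconnect_map = [chain_1, chain_2]

def running_instances(interconnect_map):
    """One running instance per distinct task, however many chains
    reference it (assuming all chains operate on the same camera feed)."""
    seen = []
    for chain in interconnect_map:
        for task in chain:
            if task not in seen:
                seen.append(task)
    return seen

# Five task references across the two chains collapse to three instances.
instances = running_instances(interconnect_map)
```

The saving grows with the data volume: each eliminated duplicate is one fewer copy of a video stream being grabbed, stored or analysed.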
Fig. 3 illustrates a video computing service named "Record and stream live" that comes from a device (Cell phone 3). Note that though the micro-service (grabCamera) appears in both chains, the activity will be performed only once, as the task list will contain only one entry for that task. The mapping micro-service is equipped to eliminate redundant and replicated entries in the task-lists.
Fig 4(a) describes architecture of a typical micro-service deployment. Micro-service orchestrator (1031) consults the task list 106 (generated by 107), and adds the tasks to the servers (1033) in a load balanced way. The orchestrator continuously monitors the resource utilization parameters (e.g. CPU usage, Memory usage, Network usage etc) of the servers and decides whether to allocate additional resource from the resource pool and whether to redistribute tasks to balance the computing load across the servers. A mechanism for allocating additional resource is described in Fig 4(b). The resource pool is provided by the Cloud platform. Once the orchestrator decides to spawn a new instance of its service, it allocates the resource and put the dockerized application (1033) corresponding to the micro-service in the newly allocated resource. Though auto-scaling and load distribution is a standard practice in industry, but converting the service request to task-lists and thereupon allowing the micro-service orchestrator to consult the task-list and the orchestrator taking its own decision to generate resource requirement demand for itself or relinquishing the same is a unique property of this architecture.
The micro-services have pluggable modules so that different algorithms can be run within the micro-services. For example, any change in streaming protocol or any change in video analytics applications can be accommodated by replacing the video grabbing driver or the analytics engine, respectively, without affecting the overall architecture in any way. Also, new micro-services can be added and new micro-service chains can be created for any new type of video computing request.
The proposed system is thus directed to an advancement in the art to serve a given video computing task efficiently by identifying and prioritizing one or more micro-service chains and accordingly deploying those micro-services seamlessly and automatically, avoiding any redundant operation or replication of jobs. This not only utilizes the resources in an optimal way but also avoids data redundancy.
It also has the benefit of applying software upgrades to all the users in a single go, as it is the set of micro-services that is upgraded, and the micro-services are not executed in the scope of any specific user but reside in a common shared computing backbone.
CLAIMS:
WE CLAIM:
1. A system for carrying out an asymmetrically scalable cloud stack of micro-services relating to video computing applications according to user-requested video computing servicing, comprising
storage devices embodying different micro-services for different video computing applications;
a plurality of servers operatively connected to each other, having access to said micro-services and enabled for operating on video data from a plurality of sources based on users' requests for one or more video computing services on said video data, including selectively breaking down said user-requested video computing services into a stack of independent micro-services to cater to each of said user-requested video computing services on the video data.
2. The system as claimed in claim 1, wherein the plurality of servers includes
at least one receiver server to receive the video computing service requests from the users over a computer network channel;
at least one micro-service chain creator server connected to said receiver server over the network to receive the service requests passed by the receiver server and break down each of the service requests into a set of the micro-services;
at least one micro-service mapping server in the network, operatively connected to the micro-service chain creator server, to create a micro-service interconnect map for each set of the micro-services corresponding to a service request.
3. The system as claimed in claim 2, wherein the micro-service mapping server specifies feeding of output (video stream) of a micro-service as input of a subsequent micro-service (video analytics application) in the map.
4. The system as claimed in claim 1, wherein the micro-services generate output in the form of actions, for example, storing the result of a computation in a data pool or creating a video clip from time T1 to time T2.
5. The system as claimed in claim 1, comprising at least one micro-service server in the network having access to the task lists of all the micro-services, said micro-service server reading the micro-service interconnect map and accordingly populating the task-lists of the respective micro-services included in the micro-service interconnect map, avoiding replication of micro-service activities in the system.
6. The system as claimed in claim 5, wherein the task list of a specific micro-service contains information of (i) the location of input data and (ii) the location where the output needs to be sent for a specific task, said list including multiple tasks, each having different and independent input and output specifications, and the input and output source can be a data pool in a Cloud-resident storage device or a uniform resource locator (URL).
7. The system as claimed in claim 2, wherein the receiver server includes a Graphical User Interface (GUI) to receive the video computing service requests from the users and checks the credentials of the users by using a standard authentication and authorization server.
8. The system as claimed in claim 5, wherein the micro-service chain creator server is equipped with capability to select the necessary micro-service chains to serve a specific video computing request and the micro-services are parameterized so that their input, output and QoS attributes can be defined per instance of the micro-service.
9. The system as claimed in claim 1 is adapted to provide video management/computing functionality as a service comprising
hardware resources operatively connectable in the network; and
automatically configurable interconnection of selective hardware resources from said set of hardware resources corresponding to interconnected services for the video related management/computing functionality involving selective breaking down those requests into the set of independent micro-services to cater to wide variety of video computing tasks.
10. A method for asymmetrically scalable cloud stacking of micro-services relating to video computing applications according to user requested video computing servicing comprising
involving storage devices embodying different micro-services for different video computing applications;
involving a receiver server to receive the video computing service requests from the users over a computer network channel;
involving a micro-service chain creator server, connecting the same to the receiver server over the network to receive the service requests passed by the receiver server, and breaking down each of the service requests into a set of the micro-services;
involving a micro-service mapping server in the network and connecting the same to the micro-service chain creator server to create a micro-service interconnect map for each set of the micro-services corresponding to a service request;
involving a micro-service server in the network and providing it access to the task lists of all the micro-services, such that the micro-service server reads the micro-service interconnect map and accordingly populates the task-lists of the respective micro-services included in the micro-service interconnect map, avoiding replication of micro-service activities in the system.