Abstract: METHOD FOR RENDERING PARAMETERS AND A MINIMUM NUMBER OF CAMERAS REQUIRED FOR MAXIMUM COVERAGE
A method for determining a minimum number of cameras required for observing a maximum space of an environment (100) to be covered by the cameras, the method comprising creating a 3-Dimensional mesh grid (101) for the environment (100), performing a grid search on the 3-D mesh grid (101), and initializing parameters (202) for a first camera (103a) while fixing the other cameras (103b; 103c, etc.) at fixed positions; and further feeding the data to a computer implemented global parameter handling application which optimizes the data and re-feeds it to an optimizer to obtain the best optimized parameters for each of the cameras and a minimum number of cameras such that each voxel (102) is covered by at least two cameras.
Claims: WE CLAIM:
1. A method for determining a minimum number of cameras required for observing a maximum space of an environment (100) to be covered by the cameras, the method comprising:
- creating a 3-Dimensional mesh grid (101) for the environment;
- determining an initial position and angles for a plurality of cameras placed in the chosen space (100);
- performing a grid search on the 3-D mesh grid (101) for a first camera (103a) while fixing the other cameras (103b; 103c, etc.) at fixed positions,
wherein the grid search further comprises the steps of:
- initializing (401) R (rotation) and T (translation) for the first camera (103a);
- fixing translation and optimizing rotation (402) for the first camera (103a);
- fixing rotation and optimizing translation (403) for the first camera (103a),
wherein the steps of fixing rotation and optimizing translation are repeated for at least 10 iterations;
- computing (404) the camera count matrix and the distance and weight matrices for the first camera (103a);
- computing (405) the score using an objective function;
- feeding (406) the parameters obtained in the previous steps to the global parameter handling application; and
- optimizing (407) final parameters for the first camera (103a), performing a grid search on the mesh grid (101) for each of the other cameras (103b; 103c, etc.), optimizing final parameters for each camera by repeating (408) the iterative process, and optimizing (409) the minimum number of cameras for observing maximum space and overlap between the plurality of cameras.
2. The method as claimed in claim 1 wherein the mesh grid (101) comprises voxels (102) which are created using the dimensions of the space or environment for which camera optimization is to be obtained.
3. The method as claimed in claim 1 wherein the grid search comprises scanning the mesh grid (101) to estimate an optimal rotation and translation for the plurality of cameras (103a; 103b; 103c, etc.) for obtaining maximum coverage of the space (100).
4. The method as claimed in claim 3 wherein the grid search builds a computer-generated model for each parameter combination obtained in step (407) and stores each model corresponding to each parameter combination.
5. The method as claimed in claim 1 wherein the optimization is performed using a voxel (102) and camera distance scoring mechanism which maximizes the coverage of the space (100) by the plurality of cameras (103a; 103b; 103c, etc.).
6. The method as claimed in claim 1 wherein the grid searching comprises scanning the mesh grid (101) to estimate an optimal rotation and translation for the plurality of cameras (103a; 103b; 103c, etc.) for obtaining maximum coverage of the space (100) by the plurality of cameras.
7. The method as claimed in claim 6 wherein the grid search builds a 3-Dimensional model based on each parameter combination obtained by the optimizing step (407) and stores each model corresponding to each parameter combination.
8. The method as claimed in claim 7 wherein the 3-Dimensional model is built with a computer implemented application.
9. The method as claimed in claim 7 wherein each of the 3-Dimensional models is stored in a computer device and compared with the others to obtain an optimized model for implementation.
Description: FIELD OF INVENTION
[0001] The present invention generally relates to a method of estimating optimal camera positions. In particular, the present invention relates to a method for obtaining the parameters of each camera and the minimum number of cameras required for achieving maximum coverage of a chosen space.
BACKGROUND AND PRIOR ART
[0002] The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
[0003] In the present era it has become an inevitable practice to install cameras at public and private spaces for multiple reasons, including safety and hazard prevention. Generally, video surveillance cameras are installed at various public places such as stores, airports, areas of importance, railway stations, etc., and these cameras are provided to cover the large range of activities happening at such spaces. Often, it is vital to cover even a pin-sized area with a camera to capture activities that could happen at that point. Traditionally, the issue of capturing each point of a chosen space is resolved by installing the maximum number of cameras, which adds cost, as two or more cameras might cover a single space.
[0004] In some applications, higher resolution images may be required for specific areas. Higher resolution may be obtained by setting specific cameras to higher resolutions. However, the use of high-resolution cameras for all applications may increase the cost and the load on system resources such as bandwidth and storage capacity. Inefficient camera placement in a surveillance system is one of the main factors that increase the cost and complexity of system maintenance.
[0005] Finding the optimal camera placement is a difficult problem because of the constraints that need to be taken into consideration: the complexity of the environment, diverse camera properties, and numerous performance metrics for different applications. Each solution is based on different requirements and constraints but has the same objective, which is to find the minimum number of cameras.
[0006] In existing systems, the problem of optimally placing a mixture of directional and omni-directional cameras is present. Further, it is difficult to compute which cells of the grid lie within a camera's field of view when several types of cameras with different sensor resolutions and fields of view are used.
[0007] Further, in existing systems, methods of camera placement estimation may be inherently sensitive to misclassified pixels in the foreground-background segmentation, the presence of which is expected when camera views are occluded by one or more objects. Therefore, there is a need for a method that determines, for every voxel, whether it is occluded for any of the cameras.
[0008] Therefore, there is a need for a method for finding the minimum number of cameras that can observe the maximum space and/or maximum overlap between the cameras and/or important zones of any given environment, subject to constraints on the camera placements.
OBJECTS OF THE INVENTION
[0009] Some of the objects of the present disclosure are described herein below:
[00010] A main objective of the present invention is to find parameters of cameras that minimize the number of cameras while ensuring maximum coverage.
[00011] Still another object of the present invention is to obtain a minimum number of cameras required for maximum coverage of a chosen space.
[00012] Still another objective of the present invention is to provide the depth estimation of each voxel or 3-D cell.
[00013] Yet another objective of the present invention is to focus on the entire 3-D space, with user-provided constraints, and to estimate the depth at each voxel/3-D cell.
[00014] Yet another objective of the present invention is to use a computer implemented grid search process to compute the camera count matrix and a camera distance scoring mechanism in order to maximize the coverage.
SUMMARY OF THE INVENTION
[00015] In view of the foregoing, embodiments herein provide a method of obtaining parameters of cameras that are installed at a chosen space. Further, the method herein disclosed provides the steps to be followed to achieve a minimum number of cameras that ensures maximum coverage by covering each voxel or 3-D cell of a mesh grid. The present invention discloses an iterative process in which the first step begins with scanning a chosen space, which is then built into a 3-D computer-generated model and scaled down into voxels. Each voxel represents a particular point of the chosen space. The 3-D model serves as a mesh grid which is further scanned, and the data obtained is fed into a computer implemented global parameter handling application. Further, the cameras are initialized individually while keeping the rest of the cameras fixed. First, translation is optimized for a single camera while fixing its rotation; similarly, rotation is optimized while keeping the translation of the camera fixed. The data from the mesh grid search and the optimized camera are further processed and fed to the global parameter handling application, which processes them and provides final optimized parameters for each camera and a minimum number of cameras such that each voxel of the mesh grid is covered by at least two of the cameras.
BRIEF DESCRIPTION OF DRAWINGS
[00016] The detailed description is set forth with reference to the accompanying figures. In all figures except Fig. 1 and Fig. 2, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items.
[00017] Fig. 1 illustrates a 3-D mesh grid model of a space in an exemplary form.
[00018] Fig. 2 is a block diagram illustrating the method of obtaining an optimized number of cameras.
[00019] Fig. 3 is a schematic flow chart diagram illustrating the iterative processes that are assimilated in the present invention.
[00020] Fig. 4 is a process diagram illustrating the steps of the present invention in a schematic fashion.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[00021] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[00022] As per one embodiment, the present invention discloses a method which assists in determining parameters for a given set of cameras for a chosen space. The invention disclosed herein provides a method to obtain the minimum number of cameras required for maximum coverage, ensuring that each point of the space is covered by at least two of the cameras installed in that particular realm.
[00023] For the purpose of achieving one of the objectives of the invention, which is to obtain a minimum number of cameras and the parameters of each camera, a chosen space (100) as depicted in Fig. 1 is first thoroughly scanned using a set of cameras, and the scan is fed via a cloud camera surveillance system into a computer implemented application. The chosen space is then built into a 3-D model by the computer implemented application. The 3-D model of the chosen space is further scaled down into a mesh grid (101) in X-Y-Z coordinates. The mesh grid (101) is divided into a number of small unit cells/voxels (102). Each voxel (102) represents a particular point of the chosen space and has unique X-Y-Z coordinates. The mesh grid (101) is built into a computer-generated 3-D model in a way that replicates the real-time scanned image of the chosen space (100), and the voxels (102) are created so as to cover the whole of the chosen space (100). Initially, a set of any number of cameras (103a; 103b; 103c, etc.) (the number and positions of the cameras in Fig. 1 are for exemplary and illustrative purposes) are installed at any of the voxels (102).
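By way of a non-limiting illustration, the mesh grid creation described above may be sketched as follows. The Python/NumPy environment, the uniform cubic voxel size, and the function name create_voxel_grid are illustrative assumptions and are not part of the claimed method.

```python
import numpy as np

def create_voxel_grid(dims_m, voxel_size_m=0.5):
    """Build voxel centre coordinates for a box-shaped space.

    dims_m       : (x, y, z) extents of the chosen space (100) in metres.
    voxel_size_m : edge length of each cubic voxel (assumed uniform).
    Returns an (N, 3) array; each row is the X-Y-Z centre of one voxel (102).
    """
    axes = [np.arange(voxel_size_m / 2, d, voxel_size_m) for d in dims_m]
    gx, gy, gz = np.meshgrid(*axes, indexing="ij")
    return np.stack([gx.ravel(), gy.ravel(), gz.ravel()], axis=-1)

# Example: a 10 m x 8 m x 3 m room discretised into 0.5 m voxels.
voxels = create_voxel_grid((10.0, 8.0, 3.0))
```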
[00024] Once an initial set of cameras (103a; 103b; 103c, etc.) is virtually installed at the voxels (102) of the mesh grid (101), an iterative process of obtaining the minimum number of cameras required can be initiated. Fig. 2 delineates a block diagram illustrating the method of obtaining an optimized number of cameras. The first step (201) is to create the mesh grid (101) as illustrated in Fig. 1 and as described in the above paragraphs. The mesh grid (101) is then scanned, that is, a search of the mesh grid is carried out. Grid searching is the process of scanning the mesh grid (101) to estimate the optimal rotation and translation for each of the cameras (103a; 103b; 103c, etc.) to achieve maximum coverage. The grid search further builds a computer-generated model for each possible parameter combination; it iterates through every parameter combination and stores a model for each combination.
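A non-limiting sketch of such a grid search for a single camera is given below. The discretisation of the rotation space into pan/tilt angles and the coverage_fn scoring callable are hypothetical placeholders introduced only for illustration.

```python
import itertools

def grid_search(candidate_positions, pan_angles, tilt_angles, coverage_fn):
    """Enumerate every (translation, rotation) combination for one camera
    and keep a scored record ("model") for each combination.

    candidate_positions : candidate voxel centres where the camera may sit.
    pan_angles, tilt_angles : candidate rotations in degrees.
    coverage_fn : callable scoring one combination (hypothetical placeholder,
                  e.g. the number of voxels the camera would cover).
    """
    models = []
    for t, pan, tilt in itertools.product(candidate_positions,
                                          pan_angles, tilt_angles):
        score = coverage_fn(t, pan, tilt)
        models.append({"T": t, "R": (pan, tilt), "score": score})
    # Every model is retained; the best combination maximises coverage.
    best = max(models, key=lambda m: m["score"])
    return best, models
```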
[00025] Secondly, the set of cameras (103a; 103b; 103c, etc.) is initialized (202), which connotes that the cameras (103a; 103b; 103c, etc.) are installed at any of the voxels (102). This step is carried out in furtherance of an iterative process to obtain the internal Rotation and Translation (R & T) and external parameters at which the set of cameras (103a; 103b; 103c, etc.) is to be installed.
[00026] At the next step, the data, that is, the scanned data of the mesh grid (101) and the parameters for the set of cameras (103a; 103b; 103c, etc.) obtained in step (202), are projected (203) to a computer implemented global parameter handling application. The projected data are further processed to obtain an optimized camera count matrix with respect to the voxels, together with the distance and weight matrices. Fig. 2 briefly illustrates the major steps involved in obtaining the required minimum number of cameras and also the parameters for the set of cameras (103a; 103b; 103c, etc.).
[00027] Fig. 3 is a schematic flow chart diagram illustrating the iterative processes that are assimilated in the present invention. The first input box of the flow chart provides the input information, namely the voxel (102) grid, the Rotation and Translation (R & T) data for each of the cameras (103a; 103b; 103c, etc.) and the parameter space of the mesh grid (101). At the next step, an optimizer, which is a computer implemented application, processes the fed data. The fed data are processed separately for each individual camera, followed by scanning and searching the mesh grid (101) to obtain optimized parameters for each of the cameras (103a; 103b; 103c, etc.). The optimized parameters are extracted and then updated in the computer implemented global parameter handling application. Further, the updated parameters are stored as updated data in the computer implemented global parameter handling application and are then re-fed to the optimizer to repeat the iterative process in order to obtain better optimized parameters for the cameras and the minimum number of cameras required to achieve maximum coverage so that each voxel (102) is covered by at least two cameras. The iterative process is repeated at least 10 times, but is not limited thereto, to obtain better optimized parameters for each of the cameras (103a; 103b; 103c, etc.).
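Purely by way of illustration, the alternating loop of Fig. 3 may be sketched as follows. The optimise_one and coverage_count callables are hypothetical stand-ins for the optimizer and the global parameter handling application described above; their internals are not specified here.

```python
def optimise_all_cameras(cameras, voxels, optimise_one, coverage_count,
                         n_iters=10):
    """Alternating per-camera optimisation loop sketched from Fig. 3.

    cameras        : list of per-camera parameter dicts, e.g. {"R": ..., "T": ...}.
    optimise_one   : callable optimising one camera while the others are held
                     fixed (hypothetical stand-in for the optimizer block).
    coverage_count : callable returning a per-voxel NumPy array of how many
                     cameras observe each voxel (hypothetical stand-in).
    """
    for _ in range(n_iters):                     # at least 10 iterations
        for i in range(len(cameras)):
            fixed = cameras[:i] + cameras[i + 1:]  # hold every other camera fixed
            cameras[i] = optimise_one(cameras[i], fixed, voxels)
        # Global parameter handling: stop early once every voxel (102)
        # is observed by at least two cameras.
        if (coverage_count(cameras, voxels) >= 2).all():
            break
    return cameras
```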
[00028] Fig. 4 is a process diagram illustrating the steps assimilated in the present invention in a schematic fashion. As discussed above, the iterative process starts with the step of initializing (401) R (rotation) and T (translation) for one individual camera (103a) from the set of the plurality of cameras (103a; 103b; 103c, etc.).
[00029] A particular camera is chosen from the given set of the plurality of cameras (103a; 103b; 103c, etc.). Here, the camera (103a) has been chosen for exemplary purposes. All the other cameras (103b; 103c, etc.) are kept fixed at arbitrary parameters at different voxels (102). For the selected camera (103a), at step (402) the translation (T) is fixed and the rotation is optimized by following the process described in Fig. 3 using the computer implemented optimizer and global parameter handling application.
[00030] Further, the translation parameter for the selected camera (103a) is also optimized (403) while fixing the rotation of the camera (103a). After optimizing the rotation and translation for the chosen camera (103a), the camera count matrix and the distance and weight matrices are computed (404) by feeding the optimized parameters into a computer implemented application. The score for the camera (103a), that is, for its rotational, translational and angular parameters, is computed (405) using a computer implemented application. The score obtained at step (405) is then fed (406) to the computer implemented global parameter handling application, as also shown in Fig. 3. The global parameter handling application processes the parameters and then re-feeds them to the optimizer. After processing the updated and optimized parameters for the camera (103a), a final and best parameter set is obtained for the camera (103a).
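One possible reading of steps (404) and (405) is sketched below. The inverse-distance weighting and the particular objective function are illustrative assumptions, as the specification does not fix their exact form; the boolean visibility matrix is taken as given.

```python
import numpy as np

def compute_matrices(cam_positions, visible, voxels):
    """Step (404): camera count matrix and distance/weight matrices.

    cam_positions : (n_cams, 3) camera centres.
    visible       : boolean (n_cams, n_voxels) matrix; visible[c, v] is True
                    when camera c sees voxel v (visibility test assumed given).
    voxels        : (n_voxels, 3) voxel centres.
    """
    count = visible.sum(axis=0)                           # cameras per voxel
    dist = np.linalg.norm(                                # (n_cams, n_voxels)
        cam_positions[:, None, :] - voxels[None, :, :], axis=-1)
    weights = np.where(visible, 1.0 / (1.0 + dist), 0.0)  # nearer => heavier
    return count, dist, weights

def objective_score(count, weights, min_views=2):
    """Step (405): one illustrative objective -- reward voxels observed by at
    least `min_views` cameras, plus a small distance-weighted term."""
    covered = (count >= min_views).mean()
    return covered + 0.01 * weights.sum(axis=0).mean()
```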
[00031] Similarly, for all the other cameras (103b; 103c, etc.) the parameters are optimized by repeating the iterative process (408). At the final stage, the minimum number of cameras required is optimized (409) so that each voxel (102) is covered by at least two of the cameras from the set of installed cameras.
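The specification does not prescribe how the minimum camera count of step (409) is reached; the sketch below assumes a simple greedy removal pass, which is one possible realisation of the requirement that every voxel remain covered by at least two cameras.

```python
def minimise_cameras(visible):
    """Greedily drop redundant cameras while every voxel stays observed by
    at least two cameras.

    visible : boolean NumPy (n_cams, n_voxels) visibility matrix from the
              optimisation above. Greedy removal is an assumption; the claims
              only require the two-camera coverage property.
    """
    keep = list(range(visible.shape[0]))
    order = sorted(keep, key=lambda k: visible[k].sum())   # weakest camera first
    for c in order:
        trial = [k for k in keep if k != c]
        if (visible[trial].sum(axis=0) >= 2).all():
            keep = trial                                   # camera c was redundant
    return keep
```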