Abstract: A system and method for generating an incident storyboard is provided. The system includes a command centre in wireless communication with one or more user devices and a surveillance unit having a plurality of cameras. Based on incident details received from the one or more user devices, a processor of the command centre identifies video clips from at least one camera; a mapping engine places each video clip, along with at least one comment, in at least one frame of a storyboard template; and a formatting engine formats the video clip along with the at least one comment in the at least one frame depending upon user requirements. Further, a storyboard generation unit generates the incident storyboard by combining the plurality of frames in a pre-set or user-defined sequence. Ref. Figure 1
Claims:
WE CLAIM:
1. A system for generating an incident storyboard, the system comprising a command centre in wireless communication with one or more user devices and a surveillance unit, the command centre comprising:
a transceiver for receiving incident details from the one or more user devices, wherein the incident details include time of an incident and location particulars;
a processor configured for identifying a plurality of video clips pertaining to the incident details, wherein the video clips are received at a database of the command centre from a plurality of cameras of the surveillance unit, wherein each camera is configured to capture at least one location;
a mapping engine configured for:
placing each of the video clips in at least one frame of a plurality of frames of a storyboard template; and
mapping at least one comment to each of the video clips present in the at least one frame of the storyboard template, wherein the at least one comment is received from the one or more user devices in response to viewing the video clip on a user interface of the one or more user devices;
a formatting engine configured for formatting each of the video clips along with the at least one comment present in the at least one frame of the storyboard template to video, audio or text information depending on user requirements; and
an incident storyboard generation unit configured for generating the incident storyboard by combining the plurality of frames of the storyboard template in a pre-set or user-defined sequence, wherein the at least one frame includes the formatted video clip along with the at least one comment.
2. The system as claimed in claim 1, wherein the incident details are received as video, audio or text information at the command centre from the user device operated by the user.
3. The system as claimed in claim 1, wherein the incident details include source data, user data and metadata.
4. The system as claimed in claim 3, wherein the source data includes information of the user device, such as the location of the user device when the incident details were transmitted.
5. The system as claimed in claim 3, wherein the user data includes information such as identity of the user.
6. The system as claimed in claim 3, wherein the metadata includes time and location particulars of the incident details.
7. The system as claimed in claim 1, wherein the plurality of frames of the storyboard template are adapted to accommodate video clips, still images and text information.
8. The system as claimed in claim 1, wherein the transceiver of the command centre transmits a copy of the storyboard template to the one or more user devices, wherein the copy of the storyboard template includes the plurality of frames, each of the frames including the video clip.
9. The system as claimed in claim 8, wherein the one or more users operating the one or more user devices input at least one comment in response to viewing the video clip present in each of the frames of the copy of the storyboard template.
10. The system as claimed in claim 1, wherein the pre-set sequence is based on parameters such as time and location of capturing the video clip by the at least one camera of the surveillance unit, wherein the video clip is present in the at least one frame.
11. The system as claimed in claim 10, wherein the plurality of frames of the storyboard template are combined in a chronological order when the pre-set sequence is based on the time parameter.
12. The system as claimed in claim 1, wherein the at least one frame of the storyboard template includes a pointer indicating the time and location of capturing the video clip present in the at least one frame by the at least one camera of the surveillance unit.
13. A method for generating an incident storyboard utilizing a command centre, the method comprising the steps of:
receiving incident details at a command centre from one or more user devices, wherein the incident details include time of the incident and location particulars;
identifying a plurality of video clips pertaining to the incident details, wherein the video clips are received at the command centre from a plurality of cameras of a surveillance unit, wherein each camera is configured to capture at least one location;
placing each of the video clips in at least one frame of a plurality of frames of a storyboard template;
mapping at least one comment to each of the video clips present in the at least one frame of the storyboard template, wherein the at least one comment is received from one or more user devices in response to viewing the video clip on a user interface of the one or more user devices;
formatting each video clip along with the at least one comment present in the at least one frame of the storyboard template to video, audio or text information depending on user requirements; and
generating the incident storyboard by combining the plurality of frames of the storyboard template in a pre-set or user-defined sequence, wherein the at least one frame includes the formatted video clip along with the at least one comment.
14. The method as claimed in claim 13, wherein the step of identifying a plurality of video clips pertaining to the incident details using a processor of the command centre further comprises the steps of:
identifying at least one camera configured for capturing at least one location based on the location particulars of the incident details; and
aggregating video clips received from the identified camera based on the time particulars of the incident details.
15. The method as claimed in claim 13, wherein the step of mapping at least one comment to each of the video clips present in the at least one frame of the storyboard template further comprises the step of:
tagging the incident details to the storyboard template, wherein the storyboard template includes the video clip along with the at least one comment in the at least one frame of the storyboard template, and wherein the incident details include source data, user data and metadata.
16. The method as claimed in claim 13, wherein the step of generating the incident storyboard by combining the plurality of frames of the storyboard template in a pre-set or user-defined sequence further includes combining the incident details with the plurality of frames of the storyboard template in the same pre-set or user-defined sequence.
Description:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
[See section 10, Rule 13]
A SYSTEM AND METHOD FOR GENERATING AN INCIDENT STORYBOARD
PRICEWATERHOUSECOOPERS PVT. LTD., A CORPORATION ORGANISED AND EXISTING UNDER THE LAWS OF INDIA, WHOSE ADDRESS IS 252, VEER SAVARKAR MARG, NEXT TO MAYOR'S BUNGALOW, SHIVAJI PARK, DADAR, MUMBAI, MAHARASHTRA 400028, INDIA
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
FIELD OF THE INVENTION
[0001] The present invention generally relates to storyboards. More particularly, the present invention relates to a system and method for generating an incident storyboard.
BACKGROUND OF THE INVENTION
[0002] Generally, surveillance units, such as camera surveillance units, include a plurality of cameras. The cameras may be closed-circuit television (CCTV) cameras or Internet protocol (IP) cameras. The cameras capture video clips and images of the surveillance region(s).
[0003] While working on a reported incident, the surveillance unit operator does multiple things, such as viewing live and recorded video clips and images from concerned cameras of the surveillance units, talking to supervisors and police officers, adding comments to the video clips or images, etc. Review of the entire incident details of the reported incident may be needed for forensics, audit, process improvement or training. Currently, all the requisite information gathered regarding an incident, such as the video clips, images, observations, comments, etc., is stored at different locations. Therefore, to review the entire incident, all the requisite information has to be gathered and arranged in a proper order. The task of gathering requisite information regarding the incident is cumbersome and time-consuming. Further, during the task, some important information may be left out, which may lead to an erroneous review of the reported incident.
BRIEF SUMMARY OF THE INVENTION
[0004] One or more embodiments of the present invention provide a system and method for generating an incident storyboard.
[0005] Accordingly, the invention provides a system for generating an incident storyboard. The system includes a command centre in wireless communication with one or more user devices and a surveillance unit. The command centre comprises: a transceiver for receiving incident details from the one or more user devices, wherein the incident details include time of an incident and location particulars; a processor configured for identifying a plurality of video clips pertaining to the incident details, wherein the video clips are received at a database of the command centre from a plurality of cameras of the surveillance unit, wherein each camera is configured to capture at least one location; a mapping engine configured for placing each of the video clips in at least one frame of a plurality of frames of a storyboard template, and mapping at least one comment to each of the video clips present in the at least one frame of the storyboard template, wherein the at least one comment is received from the one or more user devices in response to viewing the video clip on a user interface of the one or more user devices; a formatting engine configured for formatting each of the video clips along with the at least one comment present in the at least one frame of the storyboard template to video, audio or text information depending on user requirements; and an incident storyboard generation unit configured for generating the incident storyboard by combining the plurality of frames of the storyboard template in a pre-set or user-defined sequence, wherein the at least one frame includes the formatted video clip along with the at least one comment.
[0006] In another aspect of the invention, a method for generating an incident storyboard utilizing a command centre is provided. The method comprises the steps of: receiving incident details at a command centre from one or more user devices, wherein the incident details include time of the incident and location particulars; identifying a plurality of video clips pertaining to the incident details, wherein the video clips are received at the command centre from a plurality of cameras of a surveillance unit, wherein each camera is configured to capture at least one location; placing each of the video clips in at least one frame of a plurality of frames of a storyboard template; mapping at least one comment to each of the video clips present in the at least one frame of the storyboard template, wherein the at least one comment is received from one or more user devices in response to viewing the video clip on a user interface of the one or more user devices; formatting each video clip along with the at least one comment present in the at least one frame of the storyboard template to video, audio or text information depending on user requirements; and generating the incident storyboard by combining the plurality of frames of the storyboard template in a pre-set or user-defined sequence, wherein the at least one frame includes the formatted video clip along with the at least one comment.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] Reference will be made to embodiments of the invention, examples of which may be illustrated in the accompanying figures. These figures are intended to be illustrative, not limiting. Although the invention is generally described in the context of these embodiments, it should be understood that it is not intended to limit the scope of the invention to these particular embodiments.
[0008] Figure 1 illustrates a system for generating an incident storyboard according to an embodiment of the present invention.
[0009] Figure 2 illustrates a format of the storyboard template in accordance with an embodiment of the invention.
[0010] Figure 3 illustrates a sample of a frame of a storyboard template along with at least one comment.
[0011] Figure 4 illustrates an example of an incident storyboard generated based on pre-set time sequence in accordance with an embodiment of the invention.
[0012] Figure 5 illustrates a flowchart of a method for generating an incident storyboard according to an embodiment of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0013] Figure 1 illustrates a system 100 for generating an incident storyboard, according to an embodiment of the present invention. The system 100 includes one or more user devices 110, a command centre 120 and a surveillance unit 140.
[0014] The one or more user devices 110 include a communication module 112 and a user interface 114. The user interface 114 includes an input unit 116 and an incident storyboard viewer 118.
[0015] The command centre 120 includes a transceiver 122, a database 124, a processor 126, a mapping engine 128, a formatting engine 130 and an incident storyboard generation unit (ISGU) 132.
[0016] The surveillance unit 140 is in communication with the command centre 120. The command centre 120 is in communication with the one or more user devices 110.
[0017] Within the command centre 120, the transceiver 122, the processor 126, the mapping engine 128, the formatting engine 130 and the ISGU 132 are in communication with the database 124.
[0018] Within the user device 110, the communication module 112 is in communication with the user interface 114 having the input unit 116 and the incident storyboard viewer 118.
[0019] With reference to figure 1 of an embodiment of the invention, the system 100 is configured for generating an incident storyboard. At the outset, one or more users report an incident by transmitting incident details via the one or more user devices 110. The incident details are received at the transceiver 122 of the command centre 120 as video, audio or text information, or a combination thereof. For instance, if the incident details are transmitted as video information by the one or more users operating the one or more user devices 110, the incident details include a video clip of the incident. Further, the video clip will include text information of the time of occurrence of the incident and the location(s) where the incident occurred. If the incident details are transmitted as audio information, the users operating the one or more user devices 110 dial a designated number of the command centre, and during the call, the audio information of the incident details is transmitted. Further, if the incident details are transmitted as text information, the one or more users operating the one or more user devices 110 transmit the text information of the incident details to the command centre 120. The incident details include the time and location of the incident.
[0020] In accordance with an embodiment of the invention, the incident details can be modified or edited by the one or more users via the one or more user devices 110, even after the incident details are transmitted to the command centre 120.
[0021] In accordance with an embodiment of the invention, the incident details received at the transceiver 122 are stored at the database 124 of the command centre 120. The formatting engine 130 of the command centre 120 formats the incident details, received as video, audio or text information or a combination thereof, into text information and stores the formatted incident details at the database 124 of the command centre 120. The incident details include the source data, user data and metadata, as shown in figure 4. The source data of the incident details includes information of the user device 110, such as the location of the user device 110 when the incident details were transmitted. The user data of the incident details represents the identity of the user transmitting the incident details from the one or more user devices 110. The metadata of the incident details indicates the time and location at which the incident occurred.
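The three-part structure of the incident details described above can be sketched as a simple record. This is an illustrative assumption only; the field names below (such as "device_location" and "incident_start") are hypothetical and chosen merely to mirror the source data, user data and metadata named in the specification.

```python
from datetime import datetime

# Hypothetical representation of incident details; all field names are
# illustrative, not prescribed by the specification.
incident_details = {
    "source_data": {
        # location of the user device when the details were transmitted
        "device_location": "12.9716N, 77.5946E",
    },
    "user_data": {
        # identity of the reporting user
        "user_id": "user-001",
    },
    "metadata": {
        # time and location particulars of the incident itself
        "incident_start": datetime(2016, 5, 1, 10, 0),
        "incident_end": datetime(2016, 5, 1, 11, 0),
        "incident_location": "X",
    },
}
```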
[0022] In accordance with an embodiment of the invention, the processor 126 of the command centre 120 is configured for identifying a plurality of video clips pertaining to the incident details. The plurality of video clips are received at the database 124 of the command centre 120 from a plurality of cameras of the surveillance unit 140. Each camera is configured for capturing the video clips of at least one location. For instance, the metadata of the incident details indicates that the time duration of the incident is from 10 AM to 11 AM on May 1st, 2016 and the location of the incident is X. The processor 126 initially identifies one or more cameras 142 of the surveillance unit 140 configured for capturing the video clips of the location X. Thereafter, the processor 126 aggregates the video clips received from the identified one or more cameras based on time, herein between 10 AM and 11 AM on May 1st, 2016. The processor 126 is not limited to identifying only one camera. In instances such as a car accident, the car may have travelled from location X to location Y. In this situation, the processor 126 has to identify more than one camera covering location X and location Y. Each video clip received at the database 124 of the command centre 120 includes metadata, as shown in figure 3. The metadata of the video clip indicates the location where the video clip was captured. Further, the metadata of the video clip also includes information of the time when the video clip was captured. The processor 126 identifies the video clip by correlating with the metadata of the video clip at the database 124 of the command centre 120.
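The selection logic described above, matching a clip's metadata against the incident's location and time window, can be sketched as follows. This is a minimal illustration under assumed data shapes; the `VideoClip` record and `identify_clips` function are hypothetical names, not part of the specification.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class VideoClip:
    camera_id: str
    location: str     # location covered by the capturing camera
    start: datetime   # capture start time (from the clip's metadata)
    end: datetime     # capture end time

def identify_clips(clips, incident_location, incident_start, incident_end):
    """Keep clips whose camera covers the incident location and whose
    capture window overlaps the incident time window."""
    return [
        clip for clip in clips
        if clip.location == incident_location
        and clip.start <= incident_end
        and clip.end >= incident_start
    ]

# Example: incident at location X between 10 AM and 11 AM on May 1st, 2016.
clips = [
    VideoClip("cam-1", "X", datetime(2016, 5, 1, 9, 55), datetime(2016, 5, 1, 10, 15)),
    VideoClip("cam-2", "Y", datetime(2016, 5, 1, 10, 0), datetime(2016, 5, 1, 10, 30)),
    VideoClip("cam-1", "X", datetime(2016, 5, 1, 12, 0), datetime(2016, 5, 1, 12, 20)),
]
matches = identify_clips(clips, "X",
                         datetime(2016, 5, 1, 10, 0), datetime(2016, 5, 1, 11, 0))
# Only the first clip covers location X within the incident window.
```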
[0023] In accordance with an embodiment of the invention, the database 124 of the command centre 120 includes a storyboard template, as shown in figure 2. The storyboard template is divided into a plurality of frames. The mapping engine 128 of the command centre 120 accesses the video clips pertaining to the incident details from the database 124. The plurality of frames of the storyboard template are formed in various shapes such as squares, rectangles, triangles, etc. The mapping engine 128 places each video clip in at least one frame of the storyboard template. The storyboard template is adapted for accommodating video clips, still images and text information. Each frame of the storyboard template includes a pointer, as shown in figure 2. The pointer indicates the time and location of capturing the video clip. The mapping engine 128 retrieves the time and location particulars of each video clip present in the said frame by correlating with the metadata of the video clip present in each of the frames of the storyboard template. Further, the mapping engine 128 populates the pointer of each frame with the retrieved time and location particulars of the video clip. Therefore, the pointer of each frame includes the time and location particulars of the video clip present in the at least one frame.
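The placement of a clip into a frame, with the pointer carrying the clip's capture time and location, can be sketched as below. The dictionary layout and the `place_in_frame` name are assumptions made for illustration only.

```python
from datetime import datetime

def place_in_frame(clip_metadata):
    """Wrap a clip in a storyboard frame whose pointer carries the time
    and location at which the clip was captured (from its metadata)."""
    return {
        "clip": clip_metadata,
        "pointer": {
            "time": clip_metadata["captured_at"],
            "location": clip_metadata["location"],
        },
    }

# Hypothetical clip metadata, mirroring figure 3's time/location fields.
frame = place_in_frame({"clip_id": "clip-1",
                        "captured_at": datetime(2016, 5, 1, 10, 5),
                        "location": "X"})
```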
[0024] A copy of the storyboard template is transmitted to the one or more user devices 110. The copy of the storyboard template includes each video clip present in the at least one frame. The one or more users operating the one or more user devices 110 access the copy of the storyboard template at the user interface 114 of the user device 110. The user accessing the video clip present in the at least one frame of the storyboard template inputs at least one comment at the input unit 116 in response to viewing the video clip. The mapping engine 128 of the command centre 120 maps, to the video clip, the at least one comment received from the one or more user devices 110 in response to the one or more users viewing the video clip. While mapping the at least one comment to the video clip, the identity of the one or more users inputting the at least one comment and the time at which the at least one comment was inputted are also retrieved and mapped along with the video clip present in the at least one frame. For instance, the at least one comment can be received from an incident investigator, police inspector, etc. via the user devices 110. Further, the mapping engine 128 tags the incident details to the storyboard template, wherein the storyboard template includes the video clip along with the at least one comment in the at least one frame of the storyboard template. The incident details include user data, source data and metadata, with reference to figure 4 of an embodiment of the invention.
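The comment-mapping step above, attaching each comment to a frame together with the commenter's identity and the time of input, can be sketched as follows; the `map_comment` function and its fields are hypothetical, intended only to illustrate the mapping described.

```python
from datetime import datetime

def map_comment(frame, comment_text, commenter_id, commented_at):
    """Attach a viewer's comment to a storyboard frame, recording who
    commented and when, alongside the clip already placed in the frame."""
    frame.setdefault("comments", []).append({
        "text": comment_text,
        "user": commenter_id,
        "time": commented_at,
    })
    return frame

# Example: a police inspector comments on the clip in a frame.
frame = {"clip": "clip-1"}
map_comment(frame, "Suspect enters from the left.", "inspector-7",
            datetime(2016, 5, 1, 11, 30))
```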
[0025] In accordance with an embodiment of the invention, the formatting engine 130 of the command centre 120 formats each of the video clips along with the at least one comment present in the at least one frame of the storyboard template. Further, the formatting engine 130 also formats the incident details tagged along with the storyboard template. The formatting engine 130 formats the video clip, the at least one comment and the incident details to video, audio or text information depending upon user requirements. For instance, if the user intends that the video clip along with the at least one comment and the incident details be formatted as video information, then, based on input by the user via the input unit 116 of the user interface 114 of the user device 110, the video clip along with the at least one comment is formatted as video information; the same applies for formatting into audio or text information.
[0026] In accordance with an embodiment of the invention, the incident storyboard generation unit 132 is configured for generating the incident storyboard by combining the plurality of frames of the storyboard template in a pre-set or user-defined sequence, wherein the at least one frame includes the formatted video clip along with the at least one comment. Further, the incident details tagged along with the storyboard template are also combined with the plurality of frames of the storyboard template in the same pre-set or user-defined sequence.
[0027] In accordance with an embodiment of the invention, the pre-set sequence is based on parameters such as the time and location of capture of the video clip by the at least one camera of the surveillance unit 140, wherein the video clip is present in the at least one frame. For instance, when the pre-set sequence is based on the time parameter, the plurality of frames of the storyboard template along with the incident details are combined in chronological order. In an example embodiment illustrated in figure 4, let us consider that there are four video clips pertaining to the incident details that are captured by the camera 142 of the surveillance unit 140. The mapping engine 128 places each video clip in at least one frame of the storyboard template. Hence, if there are four video clips, a storyboard template including four frames is selected by the mapping engine 128, i.e. Frame 1, Frame 2, Frame 3 and Frame 4, as shown in figure 4. Frame 1 includes video clip 1; similarly, Frame 2 includes video clip 2, Frame 3 includes video clip 3 and Frame 4 includes video clip 4. Video clip 1 is captured at 5 PM, video clip 2 at 4.50 PM, video clip 3 at 5.20 PM and video clip 4 at 5.30 PM, all on the same day. A pointer is included for each of the four frames, as shown in figure 2. The pointer indicates the location and time particulars of the video clip present in each frame. In view of the mentioned time data of the video clips, the incident storyboard generation unit 132 generates the incident storyboard by combining frames 1 to 4 based on the times at which video clips 1 to 4 were captured, i.e. in chronological order, as shown in figure 4. The incident storyboard generation unit 132 combines the frames in the chronological order by correlating with the pointer associated with each frame.
Based on the chronological order, the incident details are arranged first, followed by frame 2; subsequently, frames 1, 3 and 4 are combined with the incident details and frame 2 in the storyboard template, as shown in figure 4. In view of the combined plurality of frames 1 to 4, the incident storyboard is generated. In the above-mentioned example, the pre-set sequence is based on the capture times of the video clips in chronological order, i.e. the frame including the video clip with the earliest capture time is arranged first, and the subsequent frames are arranged in a similar fashion. Further, the incident details pertaining to the video clips are combined with the storyboard template as well. Since the processor 126 identifies the video clips pertaining to the incident details once the incident details are received at the command centre, the incident details tagged along with the storyboard template are combined first, before the plurality of frames of the storyboard template.
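The chronological assembly described above can be sketched as a sort over the frames' pointer times, with the incident details placed first. This is a minimal illustration under assumed data shapes; the `generate_storyboard` function and dictionary keys are hypothetical names.

```python
from datetime import datetime

def generate_storyboard(incident_details, frames):
    """Combine frames in chronological order of capture time (read from
    each frame's pointer), with the incident details arranged first."""
    ordered = sorted(frames, key=lambda f: f["pointer"]["time"])
    return [incident_details] + ordered

# The four frames from the example, captured at 5 PM, 4.50 PM, 5.20 PM
# and 5.30 PM respectively on the same day.
frames = [
    {"name": "Frame 1", "pointer": {"time": datetime(2016, 5, 1, 17, 0)}},
    {"name": "Frame 2", "pointer": {"time": datetime(2016, 5, 1, 16, 50)}},
    {"name": "Frame 3", "pointer": {"time": datetime(2016, 5, 1, 17, 20)}},
    {"name": "Frame 4", "pointer": {"time": datetime(2016, 5, 1, 17, 30)}},
]
storyboard = generate_storyboard({"name": "Incident details"}, frames)
# Resulting order: incident details, Frame 2, Frame 1, Frame 3, Frame 4.
```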
[0028] In accordance with an embodiment of the invention, once the incident storyboard is generated, the incident storyboard is stored at the database 124 of the command centre 120. Thereafter, a copy of the incident storyboard is transmitted to the one or more user devices 110 from the transceiver 122 of the command centre 120. The one or more users can view the incident storyboard via the incident storyboard viewer 118 of the one or more user devices 110, as shown in figure 1. In the above-mentioned example, the incident storyboard is generated by combining Frames 1 to 4. The frames are combined in the sequence Frame 2, Frame 1, Frame 3 and Frame 4. Therefore, when the user views the incident storyboard on the user device 110, the incident details appear first, followed by video clips 2, 1, 3 and 4 present in frames 2, 1, 3 and 4 respectively. Therefore, the entire incident reported by means of the incident details can be viewed in a pre-set sequence on the user device 110.
[0029] In accordance with an embodiment of the invention, the incident storyboard is generated based on a user-defined sequence as well. The user can customize the manner in which the plurality of frames of the storyboard template should be combined.
[0030] With reference to figure 5 of an embodiment of the invention, a flowchart of a method for generating an incident storyboard utilizing a command centre is illustrated. The method comprises the following steps:
[0031] At step 502, incident details are received at the command centre from one or more user devices. The incident details include time of the incident and location particulars. The one or more user devices are operated by one or more users. The one or more users input the incident details via a user input of the one or more user devices. The incident details are received at the command centre as video, audio or text information. The incident details received at the command centre are formatted into text information by a formatting engine and stored at a database of the command centre.
[0032] At step 504, a plurality of video clips pertaining to the incident details are identified by a processor of the command centre. The video clips are received at the command centre from a plurality of cameras of a surveillance unit. Each camera of the surveillance unit is configured to capture at least one location. The processor identifies the video clips based on the time and location particulars of the incident details, wherein the incident details are stored as text information at the database of the command centre.
[0033] At step 506, each video clip is placed in at least one frame of a plurality of frames of a storyboard template by a mapping engine of the command centre.
[0034] At step 508, the mapping engine maps at least one comment to each of the video clip present in the at least one frame of the storyboard template, wherein the at least one comment is received from one or more user devices in response to viewing the video clip on a user interface of the one or more user devices.
[0035] At step 510, each of the video clips along with the at least one comment present in the at least one frame of the storyboard template is formatted to video, audio or text information depending on user requirements. For instance, if the user intends to format the video clip along with the at least one comment to video information, the user has to select, via the user input on the user interface, the video format among the other available formats, namely audio and text.
[0036] At step 512, the incident storyboard is generated by combining the plurality of frames of the storyboard template in a pre-set or user-defined sequence, wherein the at least one frame includes the formatted video clip along with the at least one comment. In accordance with an embodiment of the invention, the pre-set sequence is based on parameters such as the time and location of capture of the video clip by the at least one camera of the surveillance unit, wherein the video clip is present in the at least one frame. For instance, when the pre-set sequence is based on the time parameter, the plurality of frames are combined in a chronological sequence. In an example embodiment, let us consider that there are four video clips pertaining to the incident details that are captured by the camera of the surveillance unit. The mapping engine places each video clip in at least one frame of the storyboard template. Hence, if there are four video clips, a storyboard template including four frames is selected by the mapping engine, i.e. Frame 1, Frame 2, Frame 3 and Frame 4. Frame 1 includes video clip 1; similarly, Frame 2 includes video clip 2, Frame 3 includes video clip 3 and Frame 4 includes video clip 4. Video clip 1 is captured at 5 PM, video clip 2 at 4.50 PM, video clip 3 at 5.20 PM and video clip 4 at 5.30 PM, all on the same day. A pointer is included for each of the four frames. The pointer indicates the location and time particulars of the video clip present in each frame. In view of the mentioned time data of the video clips, the incident storyboard generation unit generates the incident storyboard by combining frames 1 to 4 based on the times at which video clips 1 to 4 were captured, i.e. in the chronological sequence.
The incident storyboard generation unit combines the frames in the chronological sequence by correlating with the pointer associated with each frame. Based on the chronological sequence, frame 2 is arranged first; subsequently, frames 1, 3 and 4 are combined with frame 2 in the storyboard template. In view of the combined plurality of frames 1 to 4, the incident storyboard is generated. In the above-mentioned example, the pre-set sequence is based on the capture times of the video clips in chronological sequence, i.e. the frame including the video clip with the earliest capture time is arranged first, and the subsequent frames are arranged in a similar fashion. Further, the incident details pertaining to the video clips are combined with the storyboard template as well. Since the processor identifies the video clips pertaining to the incident details once the incident details are received at the command centre, the incident details tagged along with the storyboard template are combined first, before the plurality of frames of the storyboard template.
[0037] In accordance with an embodiment of the invention, once the incident storyboard is generated, the one or more users can view the generated incident storyboard on the one or more user devices via the incident storyboard viewer. In the above-mentioned example, the incident storyboard is generated by combining Frames 1 to 4 in the sequence Frame 2, Frame 1, Frame 3 and Frame 4. Therefore, when the user views the incident storyboard on the user device, the incident details appear first, and thereafter video clips 2, 1, 3 and 4, present in frames 2, 1, 3 and 4 respectively, appear. Thus, the entire incident that was reported by means of the incident details can be viewed in the pre-set sequence on the user device.
[0038] In accordance with an embodiment of the invention, the incident storyboard can be generated based on a user-defined sequence as well. The user can customize the manner in which the plurality of frames of the storyboard template should be combined. In accordance with an embodiment of the invention, the incident storyboard is viewed on the user interface of the user device.
[0039] In accordance with an embodiment of the invention, the step 504 of identifying a plurality of video clips pertaining to the incident details further comprises the steps of identifying at least one camera configured for capturing at least one location based on the location particulars of the incident details, and aggregating video clips received from the identified camera based on the time particulars of the incident details. For instance, the metadata of the incident details indicates that the time duration of the incident is from 10 AM to 11 AM on May 1st, 2016 and the location of the incident is X. The processor initially identifies the camera configured for capturing video clips of the location X. Thereafter, the processor aggregates the video clips received from the identified camera based on time, herein between 10 AM and 11 AM on May 1st, 2016.
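The two sub-steps of step 504 can be sketched as follows. This Python fragment is illustrative only; the camera registry, clip tuples and function name are assumptions made for the sketch, not elements of the specification:

```python
from datetime import datetime
from typing import Dict, List, Tuple

def identify_clips(cameras: Dict[str, str],
                   clips: List[Tuple[str, datetime, str]],
                   location: str,
                   start: datetime,
                   end: datetime) -> List[str]:
    # Step 1: identify the camera(s) configured for capturing the
    # location given in the incident details.
    matching = {cam_id for cam_id, loc in cameras.items() if loc == location}
    # Step 2: aggregate the clips received from the identified camera(s)
    # whose capture time falls within the incident time window.
    return [clip for cam_id, captured_at, clip in clips
            if cam_id in matching and start <= captured_at <= end]

# The example from the description: incident at location X, 10 AM to 11 AM.
cameras = {"cam1": "X", "cam2": "Y"}
clips = [
    ("cam1", datetime(2016, 5, 1, 10, 30), "clip_a.mp4"),  # location X, in window
    ("cam1", datetime(2016, 5, 1, 12, 0), "clip_b.mp4"),   # location X, outside window
    ("cam2", datetime(2016, 5, 1, 10, 15), "clip_c.mp4"),  # wrong location
]
selected = identify_clips(cameras, clips, "X",
                          datetime(2016, 5, 1, 10, 0),
                          datetime(2016, 5, 1, 11, 0))
```

Only the clip from the camera covering location X, captured within the 10 AM to 11 AM window, is aggregated.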
[0040] Step 508, of mapping at least one comment to each of the video clips present in the at least one frame of the storyboard template, further comprises the step of tagging the incident details to the storyboard template, wherein the storyboard template includes the video clip along with the at least one comment in the at least one frame of the storyboard template.
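The mapping and tagging of step 508 can be illustrated with a minimal sketch. The dictionary layout, key names and function name below are assumptions for illustration, not part of the specification:

```python
def tag_storyboard(template: dict, incident_details: str, comments: dict) -> dict:
    # Map each comment, received from a user device after viewing the clip,
    # to the video clip present in the corresponding frame.
    for frame in template["frames"]:
        frame["comment"] = comments.get(frame["clip_id"], "")
    # Tag the incident details to the storyboard template itself.
    template["incident_details"] = incident_details
    return template

template = {"frames": [{"clip_id": 1}, {"clip_id": 2}]}
tagged = tag_storyboard(template, "incident details",
                        {1: "suspect enters lobby", 2: "suspect exits lobby"})
```

After tagging, each frame carries its clip's comment and the template carries the incident details, matching the structure combined at step 512.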
[0041] In accordance with an embodiment of the invention, the step 512 of generating the incident storyboard by combining the plurality of frames of the storyboard template in a pre-set or user-defined sequence further includes combining the incident details in the same pre-set or user-defined sequence.
[0042] The foregoing description of the invention has been set forth merely to illustrate the invention and is not intended to be limiting. Since modifications of the disclosed embodiments incorporating the spirit and substance of the invention may occur to persons skilled in the art, the invention should be construed to include everything within the scope of the appended claims.
| # | Name | Date |
|---|---|---|
| 1 | Form 5 [27-04-2017(online)].pdf | 2017-04-27 |
| 2 | Form 3 [27-04-2017(online)].pdf | 2017-04-27 |
| 3 | Form 20 [27-04-2017(online)].pdf | 2017-04-27 |
| 4 | Form 1 [27-04-2017(online)].pdf | 2017-04-27 |
| 5 | Drawing [27-04-2017(online)].pdf | 2017-04-27 |
| 6 | Description(Complete) [27-04-2017(online)].pdf_27.pdf | 2017-04-27 |
| 7 | Description(Complete) [27-04-2017(online)].pdf | 2017-04-27 |
| 8 | PROOF OF RIGHT [08-06-2017(online)].pdf | 2017-06-08 |
| 9 | Form 26 [08-06-2017(online)].pdf | 2017-06-08 |
| 10 | 201721014883-ORIGINAL UNDER RULE 6 (1A)-12-06-2017.pdf | 2017-06-12 |
| 11 | 201721014883-FORM-9 [10-10-2017(online)].pdf | 2017-10-10 |
| 12 | 201721014883-FORM18 [27-04-2018(online)].pdf | 2018-04-27 |
| 13 | Abstract1.jpg | 2018-08-11 |
| 14 | 201721014883-FER.pdf | 2020-06-08 |
| 15 | 201721014883-OTHERS [07-12-2020(online)].pdf | 2020-12-07 |
| 16 | 201721014883-FER_SER_REPLY [07-12-2020(online)].pdf | 2020-12-07 |
| 17 | 201721014883-COMPLETE SPECIFICATION [07-12-2020(online)].pdf | 2020-12-07 |
| 18 | 201721014883-CLAIMS [07-12-2020(online)].pdf | 2020-12-07 |
| 19 | 201721014883-US(14)-HearingNotice-(HearingDate-07-08-2023).pdf | 2023-07-19 |
| 20 | 201721014883-Correspondence to notify the Controller [04-08-2023(online)].pdf | 2023-08-04 |
| 21 | 201721014883-Written submissions and relevant documents [18-08-2023(online)].pdf | 2023-08-18 |
| 22 | 201721014883-PatentCertificate31-10-2023.pdf | 2023-10-31 |
| 23 | 201721014883-IntimationOfGrant31-10-2023.pdf | 2023-10-31 |
| 24 | Search_StrategyE_15-05-2020.pdf | 2020-05-15 |
| 25 | Search_Strategy_amendedstageAE_30-12-2020.pdf | 2020-12-30 |