
A Method For Generating A 3D Real Model

Abstract: According to an embodiment, a method for generating a real-life customized scenario from a video having a plurality of image frames is disclosed. Each image frame may be divided into a plurality of regions, and at least one characteristic of the image frame may be determined. The method may include identifying image pixels related to at least one object in the image frame using a convolution network method. The method may further include identifying image pixels related to the at least one object by segmenting and parsing the image frame. The image pixels of the objects identified through both methods may be compared on a pixel-by-pixel basis to generate an object identification map. After the objects are identified, the characteristics of the image frame may be corrected, and a three-dimensional model may be generated using the corrected image frame and the object identification map.


Patent Information

Application #:
Filing Date: 26 December 2017
Publication Number: 26/2019
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email: patents@ltts.com
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2024-05-22
Renewal Date:

Applicants

L&T TECHNOLOGY SERVICES LIMITED
DLF IT SEZ PARK, 2ND FLOOR - BLOCK 3, 1/124, MOUNT POONAMALLEE ROAD, RAMAPURAM, CHENNAI.

Inventors

1. GOPINATH C
M.49, Manickamapalayam, Anna Nagar, Collectorate (P.O), Erode, Tamil Nadu, 638011, India.
2. RAMESH BABU D
E-I2, Arihant Amara, No. 49, Goparasanallur, Kumananchavadi, Chennai, Tamil Nadu, 600077, India.

Specification

FIELD OF INVENTION
The invention generally relates to image and video processing technology and more particularly to a method for generating a real-life customized scenario from a video.
BACKGROUND
With the rapid growth in technology, the ability to provide a computer-generated virtual environment has become a reality. Such virtual environments have proven popular for training systems, such as driver training, pilot training, surgical procedures, etc. These systems combine pre-recorded or computer-generated visual information with a real-world environment to provide the perception of a desired environment. For example, a driver-training simulator may include a physical representation of the driver's seat of an automobile together with a video or computer-generated image of a road. The image is made reactive to the actions of the driver by changing speed and perspective in response to acceleration, braking and steering by the driver.
The major disadvantage of the virtual environment in testing is that it does not satisfy all real-time conditions. Besides, the virtual environment appears cartoonish and does not provide the same feel as real-life driving. There is a further possibility that an ADAS (Advanced Driver Assistance Systems) algorithm that works well in the virtual environment might not work under real-time conditions, because the environmental conditions change continuously.
Testing in real time has also been performed, but it is difficult to test all the scenarios. For recording all possible test cases, with each test case covering all possible scenarios, we need a huge database, or we need to record a video for more than a million hours. As per the Euro NCAP (New Car Assessment Programme) statement, at least 9.5 billion hours of data is required for real-time testing of an ADAS algorithm.
Hence there is a need for an improved system and method for generating a real-life customized scenario from a video.
SUMMARY OF THE INVENTION
According to an exemplary embodiment of the invention, a method for generating a customized real-life scenario from a video is disclosed. The video may include a plurality of image frames, where each of the image frames may be divided into a plurality of regions. The method may include determining at least one characteristic of the image frame. The method may further include identifying one or more image pixels related to at least one object in the image frame using a convolution network. The image pixels related to at least one object in the image frame may again be identified by segmenting and parsing the image frame. The image pixels of the objects identified through the convolution network method and through segmenting and parsing may be compared on a pixel-by-pixel basis to generate an object identification map. After the objects are identified, the characteristics of the image frame identified earlier may be corrected as required. A three-dimensional model may be generated using the corrected image frame and the object identification map.
BRIEF DESCRIPTION OF DRAWINGS

Other objects, features, and advantages of the invention will be apparent from the following description when read with reference to the accompanying drawings. In the drawings, wherein like reference numerals denote corresponding parts throughout the several views:
Figure 1 illustrates a flow chart of a method for generating a scenario from a video having a plurality of image frames, according to an exemplary embodiment of the invention.
DETAILED DESCRIPTION OF DRAWINGS
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of exemplary embodiments. It includes various specific details to assist in that understanding, but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
Figure 1 illustrates a flow chart of a method 100 for generating a real-life customized scenario from a video, according to an exemplary embodiment of the invention. The video may include a plurality of image frames that, when viewed in sequence, play as the video. The video having a plurality of image frames may be of a real-life environment. According to an embodiment, the video may be of a real-life road traffic condition consisting of multiple objects such as, but not limited to, vehicles, pedestrians, buildings, trees, etc. According to another embodiment, the video may be of a real-life flight condition. According to an embodiment, for capturing the road traffic conditions, a video recording device may be mounted on the windshield of a vehicle and the vehicle may be driven. According to an embodiment, the distance driven by the vehicle along with the video recording device may depend on the test to be performed on the scenario and may vary from a few metres to a few kilometres. The video recording device may be a digital camera, and the captured video may be stored in a video repository.
At step 102, the method 100 obtains the video from the video repository. The video repository may include data storage space such as, but not limited to, cloud, hard drive, pen drive, compact disk, etc. The captured video may be fed to a real drive simulator system. The real drive simulator system may be used for driver training and for research in human factors such as, but not limited to, monitoring driver behaviour, performance, attention, etc. The real drive simulator system may further be used in the vehicle industry to design and evaluate new vehicles or new advanced driver assistance systems.
At step 104, the method 100 divides each image frame of the video into a plurality of regions. According to an embodiment, the image frame may be divided into 3×3 regions. According to another embodiment, the image frame may be divided into 4×4 regions. According to yet another embodiment, the number of regions into which the image frame is divided may be decided by the user based on parameters such as, but not limited to, the size of the image frame, the number of objects in the image frame, the clarity of the image frame, etc.
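The specification does not prescribe how the frames are partitioned; the following is a minimal Python sketch, assuming the frame is available as a NumPy array and the grid size (for example 3×3) is supplied by the user. The function name divide_into_regions is illustrative only.

    import numpy as np

    def divide_into_regions(frame, rows=3, cols=3):
        """Split an H x W x C frame into rows x cols rectangular regions."""
        h, w = frame.shape[:2]
        regions = []
        for r in range(rows):
            for c in range(cols):
                y0, y1 = r * h // rows, (r + 1) * h // rows
                x0, x1 = c * w // cols, (c + 1) * w // cols
                regions.append(frame[y0:y1, x0:x1])
        return regions
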
At step 106, the method 100 determines the value of at least one characteristic of the image frame. The characteristics of the image frames may include hue, saturation, gamma and/or white balance information. The values of the characteristics of the image frames may be altered to modify the image frame as per user requirements.
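The specification does not state how these characteristic values are measured; the sketch below shows one hypothetical way to estimate hue, saturation and a grey-world white-balance proxy for a frame, assuming OpenCV and NumPy. Gamma would be handled analogously and is omitted here.

    import cv2
    import numpy as np

    def frame_characteristics(frame_bgr):
        """Estimate simple per-frame characteristics (illustrative only)."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        mean_hue = float(hsv[..., 0].mean())
        mean_saturation = float(hsv[..., 1].mean())
        # Grey-world assumption: per-channel means act as a white-balance proxy.
        b, g, r = (float(frame_bgr[..., i].mean()) for i in range(3))
        return {"hue": mean_hue,
                "saturation": mean_saturation,
                "white_balance_bgr": (b, g, r)}
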

At step 108, the method 100 identifies one or more image pixels of at least one object in the image frame. According to an embodiment, the image pixels of the object may be identified by a convolution network. The convolution network analyses the image frame with convolution layers to identify the objects in the image frame. The objects in the image frame may include elements such as, but not limited to, road, sky, person, trees, vehicles, buildings, clouds, etc.
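The patent does not identify a particular network architecture. As a stand-in, the sketch below uses a pretrained semantic-segmentation model from torchvision (assumed to be version 0.13 or later) to label each pixel with an object class; the helper cnn_object_pixels is hypothetical.

    import torch
    import torchvision.transforms.functional as TF
    from torchvision.models.segmentation import deeplabv3_resnet50

    # Stand-in convolution network: a pretrained per-pixel classifier.
    model = deeplabv3_resnet50(weights="DEFAULT").eval()

    def cnn_object_pixels(frame_rgb):
        """Return an H x W map of class indices for one RGB frame (uint8 array)."""
        x = TF.to_tensor(frame_rgb)                       # 3 x H x W in [0, 1]
        x = TF.normalize(x, mean=[0.485, 0.456, 0.406],
                            std=[0.229, 0.224, 0.225])
        with torch.no_grad():
            logits = model(x.unsqueeze(0))["out"]         # 1 x C x H x W
        return logits.argmax(dim=1).squeeze(0).numpy()    # H x W class indices
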
At step 110, the method 100 repeats the step of identifying one or more image pixels related to at least one object in the image frame. In this step, the image pixels related to the object are identified by segmenting and parsing the image frame. The objects in the image frame may include elements such as, but not limited to, road, sky, person, trees, vehicles, buildings, clouds, etc.
At step 112, the image pixels of the objects identified in step 108 by the convolution network and the image pixels of the objects identified in step 110 by segmenting and parsing the image frame may be compared on a pixel-by-pixel basis to generate an object identification map. The image pixels of the objects are collected across multiple image frames.
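As one possible reading of this comparison step, the two per-pixel label maps may be merged by keeping the labels on which both methods agree and flagging disagreements for later refinement; this merge rule is an assumption for illustration, not the claimed procedure.

    import numpy as np

    def build_object_id_map(cnn_labels, seg_labels):
        """Merge two H x W label maps into one object identification map.
        Pixels where the two methods agree keep their label; disagreements
        are marked 0 (unresolved)."""
        agree = cnn_labels == seg_labels
        return np.where(agree, cnn_labels, 0)
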
At step 114, the characteristics of the image frames identified in step 106 may be corrected. The characteristics of each of the image frames may be corrected to perform changes in the image frame. These characteristics may enable a user to convert the lighting conditions of the environment to day or night conditions in the image frame. The characteristics may further be corrected to synchronise each object with the other objects in the image frame.
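A minimal sketch of one such correction, assuming the frame is an 8-bit BGR array, is a combined gamma and brightness adjustment; for example, a gamma above 1 together with a brightness below 1 pushes a daytime frame towards a night-like appearance. The exact correction used by the method is not specified in the patent.

    import numpy as np

    def correct_frame(frame_bgr, gamma=1.0, brightness=1.0):
        """Apply gamma and brightness corrections to an 8-bit BGR frame."""
        x = frame_bgr.astype(np.float32) / 255.0
        x = np.clip((x ** gamma) * brightness, 0.0, 1.0)
        return (x * 255.0).astype(np.uint8)
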

At step 116, the method 100 maps all the image frames into a three-dimensional space to generate a three-dimensional model. The three-dimensional model may include the objects in a three-dimensional view.
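The patent does not describe how the two-dimensional frames are lifted into three dimensions. One conventional approach, assuming a per-pixel depth estimate and known pinhole camera intrinsics (fx, fy, cx, cy), is to back-project every pixel into a 3-D point, as sketched below; this is an assumption for illustration only.

    import numpy as np

    def backproject_to_3d(depth, fx, fy, cx, cy):
        """Map every pixel (u, v) with depth d to a 3-D point (X, Y, Z)."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
        z = depth
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.stack([x, y, z], axis=-1)              # H x W x 3 point map
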
According to an embodiment, the objects in the image frame may be removable from the video captured by the video recording device. To remove an object, the pixels of the object may be removed from the image frame, and the space left behind may be filled by the neighbouring pixels.
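One way to realise this replacement, assuming OpenCV, is to build a mask from the object identification map and inpaint the masked region from the surrounding pixels. The use of Telea inpainting here is an illustrative choice, not the method claimed.

    import cv2
    import numpy as np

    def remove_object(frame_bgr, object_id_map, object_label):
        """Erase one object's pixels and fill the hole from neighbouring pixels."""
        mask = (object_id_map == object_label).astype(np.uint8) * 255
        # Telea inpainting propagates surrounding pixel values into the hole.
        return cv2.inpaint(frame_bgr, mask, 3, cv2.INPAINT_TELEA)
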
According to an embodiment, objects may be added to or moved in the video captured by the video recording device. To add an object, the three-dimensional object may be added at the required position in the three-dimensional or two-dimensional model. The added object may further be synchronized with the other objects in the three-dimensional or two-dimensional model.
It is understood that the above description is intended to be illustrative, and not restrictive. It is intended to cover all alternatives, modifications and equivalents as may be included within the spirit and scope of the invention as defined in the appended claims. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the terms "comprising" and "wherein," respectively.

We claim:
1. A method for generating a real-life customized scenario from a video, the video
having a plurality of image frames, the method comprising:
step 1 - dividing each image frame into a plurality of regions;
step 2 - determining at least one of the characteristics of the image frame;
step 3 - identifying one or more image pixels related to at least one object in the image frame using a convolution network method;
step 4 - repeating the step of identifying the image pixels related to at least one object in the image frame by segmenting and parsing the image frame;
step 5 - comparing the image pixels of the objects identified in step 3 and step 4 on a pixel-by-pixel basis to generate an object identification map;
step 6 - correcting the characteristics of the image frame identified in step 2; and
step 7 - generating a three-dimensional model using the corrected image frame and the object identification map.
2. The method for generating a real-life customized scenario from the video as claimed
in claim 1, wherein the characteristics of the image frame include hue, saturation,
gamma and white balance information.

3. The method for generating a scenario from the video as claimed in claim 1, wherein the image pixels of the objects identified in the object identification map are mapped in a three-dimensional space to generate a three-dimensional view of the objects.
4. The method for generating a scenario from the video further comprising:
removing the image pixels of the objects identified in the object identification
map; and
replacing the image pixels of the object with the neighboring pixels.
5. The method for generating a scenario from the video further comprising:
placing the three-dimensional object at a desired position in the three-dimensional or two-dimensional model; and
synchronizing the three-dimensional object with the three-dimensional or two-dimensional model.

Documents

Application Documents

# Name Date
1 Form5_As Filed_26-12-2017.pdf 2017-12-26
2 Form3_As Filed_26-12-2017.pdf 2017-12-26
3 Form2 Title Page_Provisional_26-12-2017.pdf 2017-12-26
4 Form1_As Filed_26-12-2017.pdf 2017-12-26
5 Drawing_As Filed_26-12-2017.pdf 2017-12-26
6 Description Provisional_As Filed_26-12-2017.pdf 2017-12-26
7 Correspondence by Applicant_As Filed_26-12-2017.pdf 2017-12-26
8 Claims_As Filed_26-12-2017.pdf 2017-12-26
9 Abstract_As Filed_26-12-2017.pdf 2017-12-26
10 abstract 201741046582.jpg 2017-12-28
11 Form1_Proof of Right_18-01-2018.pdf 2018-01-18
12 Correspondence by Applicant_Form1_18-01-2018.pdf 2018-01-18
13 Form2 Title Page_Complete_26-12-2018.pdf 2018-12-26
14 Form1_After Filing_26-12-2018.pdf 2018-12-26
15 Drawing_After Filing_26-12-2018.pdf 2018-12-26
16 Description Complete_After Filing_26-12-2018.pdf 2018-12-26
17 Correspondence by Applicant_Complete Specification_26-12-2018.pdf 2018-12-26
18 Claims_After Filing_26-12-2018.pdf 2018-12-26
19 Abstract_After Filing_26-12-2018.pdf 2018-12-26
20 Form18_Normal Request_21-06-2019.pdf 2019-06-21
21 Correspondence by Applicant_Form18_21-06-2019.pdf 2019-06-21
22 201741046582-FER.pdf 2021-10-17
23 201741046582-Correspondence_Amend the email addresses_14-12-2021.pdf 2021-12-14
24 201741046582-OTHERS [30-03-2022(online)].pdf 2022-03-30
25 201741046582-FER_SER_REPLY [30-03-2022(online)].pdf 2022-03-30
26 201741046582-CORRESPONDENCE [30-03-2022(online)].pdf 2022-03-30
27 201741046582-COMPLETE SPECIFICATION [30-03-2022(online)].pdf 2022-03-30
28 201741046582-PatentCertificate22-05-2024.pdf 2024-05-22
29 201741046582-IntimationOfGrant22-05-2024.pdf 2024-05-22

Search Strategy

1 amendAE_29-12-2022.pdf
2 2021-02-1014-32-26E_10-02-2021.pdf

ERegister / Renewals

3rd: 11 Jul 2024 (From 26/12/2019 To 26/12/2020)
4th: 11 Jul 2024 (From 26/12/2020 To 26/12/2021)
5th: 11 Jul 2024 (From 26/12/2021 To 26/12/2022)
6th: 11 Jul 2024 (From 26/12/2022 To 26/12/2023)
7th: 11 Jul 2024 (From 26/12/2023 To 26/12/2024)
8th: 11 Jul 2024 (From 26/12/2024 To 26/12/2025)
9th: 11 Jul 2024 (From 26/12/2025 To 26/12/2026)
10th: 11 Jul 2024 (From 26/12/2026 To 26/12/2027)
11th: 11 Jul 2024 (From 26/12/2027 To 26/12/2028)
12th: 11 Jul 2024 (From 26/12/2028 To 26/12/2029)