Method And System For Extracting Regions Of Interest From Images And Post Processing Images

Abstract: The present disclosure relates to a method and system for content identification, extraction of regions of interest from images, and post-processing of images. The method comprises receiving a set of multiple images of an object; generating a set of augmented images from the set of multiple images; extracting one or more features from the augmented images of the set of augmented images; extracting object information based at least on the one or more features; predicting one or more regions of interest in the set of multiple images; and generating a set of final data based at least on the one or more regions of interest and the object information, wherein the set of final data comprises at least one of graphical data and audio data. Finally, the method comprises allowing users to interact with the system in real time by adding further information related to the set of final data for display to users.

Patent Information

Application #
Filing Date
28 October 2022
Publication Number
04/2023
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

FLIPKART INTERNET PRIVATE LIMITED
Buildings Alyssa, Begonia & Clover, Embassy Tech Village, Outer Ring Road, Deverabeesanahalli Village, Bengaluru - 560103, Karnataka, India

Inventors

1. Karimulla Shaik
#301, Belur Homes, 6th Cross, NanjaReddy Colony, Murugesh Palya, Bangalore-560017

Specification

We claim:

1. A method for extracting regions of interest from images and post-processing images, the method comprising:

- receiving, by a processing unit [102], a set of multiple images of an object;

- generating, by a data augmentation unit [104], a set of augmented images from the set of multiple images;

- extracting, by a feature extraction unit [106], one or more features from one or more augmented images of the set of augmented images;

- extracting, by a feature fusion unit [108], an object information based at least on the one or more features;

- predicting, by a prediction unit [110], one or more regions of interest in the set of multiple images;

- generating, by the prediction unit [110], a set of final data based at least on the one or more regions of interest, and the object information, wherein the set of final data comprises at least one of a graphical data and an audio data related to the object; and

- adding, by a post-processing unit [112], further information related to the set of final data for displaying to one or more users via a user interface [114].
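The pipeline of claim 1 can be sketched, purely as an illustration, with hypothetical stand-in functions. The unit names ([102]–[114]) follow the claims, but every function body below is an assumption, not taken from the specification:

```python
# Illustrative sketch of the claimed pipeline; all function bodies are
# hypothetical stand-ins, not the applicant's implementation.
import numpy as np

def augment(images):                      # data augmentation unit [104]
    out = []
    for img in images:
        out.append(img)                   # original image
        out.append(np.rot90(img))         # rotation technique
        out.append(img[1:-1, 1:-1])       # cropping technique
    return out

def extract_features(images):             # feature extraction unit [106]
    # toy descriptor: mean intensity and variance per image
    return [np.array([img.mean(), img.var()]) for img in images]

def fuse(features):                       # feature fusion unit [108]
    # object information as an average of the per-image descriptors
    return np.mean(np.stack(features), axis=0)

def predict_rois(images, obj_info):       # prediction unit [110]
    # toy ROI: location of the brightest pixel in each input image
    rois = []
    for img in images:
        r, c = np.unravel_index(np.argmax(img), img.shape)
        rois.append((int(r), int(c)))
    return rois

def make_final_data(rois, obj_info):
    # graphical and audio components as simple placeholders
    return {"graphical": rois, "audio": obj_info.tolist()}

def post_process(final_data, extra):      # post-processing unit [112]
    final_data["extra"] = extra           # further user-facing information
    return final_data

images = [np.arange(16.0).reshape(4, 4)]
aug = augment(images)
info = fuse(extract_features(aug))
rois = predict_rois(images, info)
result = post_process(make_final_data(rois, info), "user note")
print(result["graphical"])  # [(3, 3)]
```

In this toy run the single 4x4 input has its maximum at the bottom-right corner, so the predicted ROI anchor is (3, 3).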

2. The method as claimed in claim 1, wherein the generating, by the data augmentation unit [104], the set of augmented images from the set of multiple images includes application of data augmentation techniques including one or more of a cropping technique, a color-augmentation technique, a rotation technique, a resolution correction technique, and a compression technique, or a combination thereof.
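The augmentation techniques listed in claim 2 can be sketched minimally on a numpy array; a production system would more likely use a library such as torchvision or albumentations (an assumption here), and every function below is a simplified stand-in:

```python
# Hedged, minimal stand-ins for the augmentation techniques of claim 2.
import numpy as np

def crop(img, top, left, h, w):
    # cropping technique: take an h x w window
    return img[top:top + h, left:left + w]

def rotate90(img):
    # rotation technique (restricted here to 90-degree turns)
    return np.rot90(img)

def color_augment(img, gain=1.2):
    # brightness-style colour augmentation, clipped to the valid range
    return np.clip(img * gain, 0, 255)

def downsample(img, factor=2):
    # crude resolution change by striding
    return img[::factor, ::factor]

def quantize(img, levels=16):
    # lossy "compression" stand-in: coarse intensity quantisation
    step = 256 // levels
    return (img // step) * step

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
print(crop(img, 2, 2, 4, 4).shape)   # (4, 4)
print(downsample(img).shape)         # (4, 4)
```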

3. The method as claimed in claim 1, wherein the processing unit [102] is a pretrained unit trained based on machine learning.

4. The method as claimed in claim 1, wherein the step of extracting, by the feature extraction unit [106], the one or more features from the set of multiple images is further based on manual inputs, or application of a pooling technique, or a combination of both, on a convolution layer output.
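The pooling step that claim 4 applies to a convolution layer output can be sketched as non-overlapping 2x2 max pooling; the feature-map values below are invented for illustration:

```python
# Minimal numpy sketch of max pooling on a (hypothetical) convolution
# layer output, as referenced in claim 4.
import numpy as np

def max_pool2d(fmap, k=2):
    """Non-overlapping k x k max pooling on an (H, W) feature map."""
    h, w = fmap.shape
    h, w = h - h % k, w - w % k            # trim to a multiple of k
    trimmed = fmap[:h, :w]
    # group pixels into k x k blocks, then take the block-wise maximum
    return trimmed.reshape(h // k, k, w // k, k).max(axis=(1, 3))

conv_out = np.array([[1., 2., 0., 1.],
                     [4., 3., 2., 2.],
                     [0., 1., 5., 0.],
                     [2., 0., 1., 3.]])
pooled = max_pool2d(conv_out)
print(pooled)  # [[4. 2.]
               #  [2. 5.]]
```

Each 2x2 block of the feature map collapses to its maximum, halving the spatial resolution while keeping the strongest activations.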

5. The method as claimed in claim 1, the method further comprising:

- displaying, by the processing unit [102], the set of final data to the one or more users via the user interface [114]; and

- receiving, via the user interface [114], one or more input data points on the set of final data from the one or more users.

6. The method as claimed in claim 5, the method further comprising:

- storing, by a memory unit [116], the one or more input data points on the set of final data received from the one or more users; and

- retrieving, by the processing unit [102], the one or more input data points on the set of final data stored in the memory unit [116] for further training of the processing unit [102].
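The feedback loop of claims 5 and 6, where user input on the final data is stored by the memory unit [116] and later retrieved for further training, can be sketched as follows; the class and method names are hypothetical:

```python
# Illustrative stand-in for the memory unit [116] of claims 5 and 6:
# user input data points on the final data are stored and later
# retrieved as training examples for the processing unit [102].
class FeedbackStore:
    def __init__(self):
        self._points = []

    def store(self, final_data_id, user_input):
        # one input data point, keyed to the final data it annotates
        self._points.append({"id": final_data_id, "input": user_input})

    def retrieve_for_training(self):
        # hand the accumulated data points back for further training
        return list(self._points)

store = FeedbackStore()
store.store("img-001", {"label": "shoe", "roi_ok": True})
store.store("img-002", {"label": "bag", "roi_ok": False})
batch = store.retrieve_for_training()
print(len(batch))  # 2
```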

7. A system for extracting regions of interest from images and post-processing images, the system comprising:

- a processing unit [102] configured to receive a set of multiple images of an object;

- a data augmentation unit [104] configured to generate a set of augmented images from the set of multiple images;

- a feature extraction unit [106] configured to extract one or more features from one or more augmented images of the set of augmented images;

- a feature fusion unit [108] configured to extract an object information based at least on the one or more features;

- a prediction unit [110] configured to:

o predict one or more regions of interest in the set of multiple images; and

o generate a set of final data based at least on the one or more regions of interest, and the object information, wherein the set of final data comprises at least one of a graphical data and an audio data related to the object; and

- a post-processing unit [112] configured to add further information related to the set of final data for displaying to one or more users via a user interface [114].

8. The system as claimed in claim 7, wherein the data augmentation unit [104], for generating the set of augmented images from the set of multiple images, is configured to apply data augmentation techniques including one or more of a cropping technique, a color-augmentation technique, a rotation technique, a resolution correction technique, and a compression technique, or a combination thereof.

9. The system as claimed in claim 7, wherein the processing unit [102] is a pretrained unit trained based on machine learning.

10. The system as claimed in claim 7, wherein the feature extraction unit [106], for extracting the one or more features from the one or more augmented images of the set of augmented images, is configured to receive manual inputs, or apply a pooling technique, or a combination of both, on a convolution layer output.

11. The system as claimed in claim 7, wherein:

- the processing unit [102] is further configured to display the set of final data to the one or more users via the user interface [114]; and

- the user interface [114] is further configured to receive one or more input data points on the set of final data from the one or more users.

12. The system as claimed in claim 11, the system further comprising:

- a memory unit [116] configured to store the one or more input data points on the set of final data received from the one or more users.

13. The system as claimed in claim 12, wherein the processing unit [102] is further configured to retrieve the one or more input data points on the set of final data stored in the memory unit [116] for further training of the processing unit [102].

Documents

Application Documents

# Name Date
1 202241061611-STATEMENT OF UNDERTAKING (FORM 3) [28-10-2022(online)].pdf 2022-10-28
2 202241061611-REQUEST FOR EXAMINATION (FORM-18) [28-10-2022(online)].pdf 2022-10-28
3 202241061611-REQUEST FOR EARLY PUBLICATION(FORM-9) [28-10-2022(online)].pdf 2022-10-28
4 202241061611-PROOF OF RIGHT [28-10-2022(online)].pdf 2022-10-28
5 202241061611-POWER OF AUTHORITY [28-10-2022(online)].pdf 2022-10-28
6 202241061611-FORM-9 [28-10-2022(online)].pdf 2022-10-28
7 202241061611-FORM 18 [28-10-2022(online)].pdf 2022-10-28
8 202241061611-FORM 1 [28-10-2022(online)].pdf 2022-10-28
9 202241061611-FIGURE OF ABSTRACT [28-10-2022(online)].pdf 2022-10-28
10 202241061611-DRAWINGS [28-10-2022(online)].pdf 2022-10-28
11 202241061611-DECLARATION OF INVENTORSHIP (FORM 5) [28-10-2022(online)].pdf 2022-10-28
12 202241061611-COMPLETE SPECIFICATION [28-10-2022(online)].pdf 2022-10-28
13 202241061611-Request Letter-Correspondence [31-10-2022(online)].pdf 2022-10-31
14 202241061611-Power of Attorney [31-10-2022(online)].pdf 2022-10-31
15 202241061611-Form 1 (Submitted on date of filing) [31-10-2022(online)].pdf 2022-10-31
16 202241061611-Covering Letter [31-10-2022(online)].pdf 2022-10-31
17 202241061611-Correspondence_Form-1 And POA_21-11-2022.pdf 2022-11-21
18 202241061611-FER.pdf 2023-02-15
19 202241061611-MARKED COPY [13-03-2023(online)].pdf 2023-03-13
20 202241061611-CORRECTED PAGES [13-03-2023(online)].pdf 2023-03-13
21 202241061611-FER_SER_REPLY [14-08-2023(online)].pdf 2023-08-14

Search Strategy

1 SearchStrategyE_15-02-2023.pdf
2 SearchHistoryamended(17)AE_26-06-2024.pdf