Abstract: A camera for generating distortion free images and a method thereof are disclosed. The camera includes a plurality of lenses, wherein each of the plurality of lenses has a dedicated sensor. The camera further includes a processor communicatively coupled to the plurality of lenses. The camera further includes a memory communicatively coupled to the processor and having instructions stored thereon, causing the processor, on execution, to capture a plurality of images through the plurality of lenses and to generate a single distortion free image from the plurality of images based on a deep learning technique trained using a mapping of each of a plurality of sets of low resolution images generated in one or more environments to an associated distortion free image, wherein one or more low resolution images in each of the plurality of sets are distorted. FIG. 4
Claims: WE CLAIM
1. A method of generating distortion free images, the method comprising:
capturing a plurality of images through a plurality of lenses in a camera, wherein each of the plurality of lenses has a dedicated sensor; and
generating, by the camera, a single distortion free image from the plurality of images based on a deep learning technique trained using a mapping of each of a plurality of sets of low resolution images generated in one or more environments to an associated distortion free image, wherein one or more low resolution images in each of the plurality of sets are distorted.
2. The method of claim 1, wherein generating comprises identifying at least one distorted image from the plurality of images using the deep learning technique, wherein each of the at least one distorted image comprises a plurality of distorted pixels.
3. The method of claim 2, wherein generating further comprises:
discarding the at least one distorted image; and
combining a remaining set of images in the plurality of images to generate the single distortion free image, wherein the remaining set of images is obtained after discarding the at least one distorted image.
4. The method of claim 1, further comprising creating the mapping between each of the plurality of sets of low resolution images captured in the one or more environments and the associated distortion free image.
5. The method of claim 4, wherein each pixel in each image within a set from the plurality of sets of low resolution images is mapped to a corresponding pixel of the associated distortion free image in the mapping.
6. The method of claim 5, further comprising assigning weights, using the deep learning technique, to each pixel within each of the plurality of images captured by the plurality of lenses, based on the mapping and an environment in which the plurality of images is captured, wherein a distorted pixel is assigned a zero weight.
7. The method of claim 6, further comprising determining a total weight for each of the plurality of images by computing an average of weights assigned to associated pixels.
8. The method of claim 7, further comprising combining the plurality of images based on associated total weights to generate the single distortion free image.
9. The method of claim 6, further comprising:
discarding each distorted pixel from each of the plurality of images to generate a plurality of modified images; and
combining the plurality of modified images obtained after discarding each distorted pixel from each of the plurality of images to generate the single distortion free image.
10. The method of claim 1, wherein each of the plurality of images is a low resolution image and the single distortion free image is a high resolution image.
11. The method of claim 1, wherein the one or more environments comprise at least one of rain, fog, dust, lighting conditions, landscape, or an obstacle.
12. The method of claim 1, wherein the plurality of lenses is arranged as a lens array in the camera.
13. The method of claim 1, wherein the one or more environments are simulated to generate the plurality of sets of low resolution images used to create the mapping.
14. A camera comprising:
a plurality of lenses, wherein each of the plurality of lenses has a dedicated sensor;
a processor communicatively coupled to the plurality of lenses; and
a memory communicatively coupled to the processor and having instructions stored thereon, causing the processor, on execution, to:
capture a plurality of images through the plurality of lenses; and
generate a single distortion free image from the plurality of images based on a deep learning technique trained using a mapping of each of a plurality of sets of low resolution images generated in one or more environments to an associated distortion free image, wherein one or more low resolution images in each of the plurality of sets are distorted.
15. The camera of claim 14, wherein processor instructions further cause the processor to identify at least one distorted image from the plurality of images using the deep learning technique, wherein each of the at least one distorted image comprises a plurality of distorted pixels.
16. The camera of claim 15, wherein processor instructions further cause the processor to:
discard the at least one distorted image; and
combine a remaining set of images in the plurality of images to generate the single distortion free image, wherein the remaining set of images is obtained after discarding the at least one distorted image.
17. The camera of claim 14, wherein processor instructions further cause the processor to create the mapping between each of the plurality of sets of low resolution images captured in the one or more environments and the associated distortion free image.
18. The camera of claim 17, wherein each pixel in each image within a set from the plurality of sets of low resolution images is mapped to a corresponding pixel of the associated distortion free image in the mapping.
19. The camera of claim 18, wherein processor instructions further cause the processor to assign weights, using the deep learning technique, to each pixel within each of the plurality of images captured by the plurality of lenses, based on the mapping and an environment in which the plurality of images is captured, wherein a distorted pixel is assigned zero weight.
20. The camera of claim 19, wherein processor instructions further cause the processor to determine a total weight for each of the plurality of images by computing an average of weights assigned to associated pixels.
21. The camera of claim 20, wherein processor instructions further cause the processor to combine the plurality of images based on associated total weights to generate the single distortion free image.
22. The camera of claim 19, wherein processor instructions further cause the processor to:
discard each distorted pixel from each of the plurality of images to generate a plurality of modified images; and
combine the plurality of modified images obtained after discarding each distorted pixel from each of the plurality of images to generate the single distortion free image.
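The weighting and combination scheme recited in claims 6 to 8 (and their apparatus counterparts, claims 19 to 21) can be illustrated with a short sketch. This is a hypothetical illustration only, not the patented implementation: the function name `combine_images`, the array shapes, and the use of NumPy are all assumptions introduced here. It assumes per-pixel weights in [0, 1] have already been assigned by the deep learning technique, with distorted pixels given a zero weight; each image's total weight is then the average of its pixel weights, and the output is the weight-normalised combination of the image stack.

```python
import numpy as np

def combine_images(images, pixel_weights):
    """Combine low resolution captures into one image.

    images: list of HxW float arrays, one per lens.
    pixel_weights: matching list of HxW arrays in [0, 1],
    where 0 marks a distorted pixel.
    """
    stack = np.stack(images)             # shape (N, H, W)
    weights = np.stack(pixel_weights)    # shape (N, H, W)

    # Total weight per image: average of its pixel weights.
    total = weights.mean(axis=(1, 2))    # shape (N,)

    # Weighted combination of the stack, normalised so the totals sum
    # to 1. A wholly distorted image (total weight 0) contributes nothing,
    # which mirrors discarding it.
    combined = (stack * total[:, None, None]).sum(axis=0) / total.sum()
    return combined
```

The alternative pathway of claims 9 and 22 (discarding each distorted pixel and combining the modified images) would instead normalise per pixel, dividing the per-pixel weighted sum by the per-pixel sum of weights so that only undistorted pixels at each location contribute.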
Dated this 27th day of June, 2017
Swetha SN
Of K&S Partners
Agent for the Applicant
Description: TECHNICAL FIELD
This disclosure relates generally to image generation, and more particularly to a camera for generating distortion free images and a method thereof.
| # | Name | Date |
|---|---|---|
| 1 | Power of Attorney [27-06-2017(online)].pdf | 2017-06-27 |
| 2 | Form 5 [27-06-2017(online)].pdf | 2017-06-27 |
| 3 | Form 3 [27-06-2017(online)].pdf | 2017-06-27 |
| 4 | Form 18 [27-06-2017(online)].pdf_974.pdf | 2017-06-27 |
| 5 | Form 18 [27-06-2017(online)].pdf | 2017-06-27 |
| 6 | Form 1 [27-06-2017(online)].pdf | 2017-06-27 |
| 7 | Drawing [27-06-2017(online)].pdf | 2017-06-27 |
| 8 | Description(Complete) [27-06-2017(online)].pdf_973.pdf | 2017-06-27 |
| 9 | Description(Complete) [27-06-2017(online)].pdf | 2017-06-27 |
| 10 | REQUEST FOR CERTIFIED COPY [28-06-2017(online)].pdf | 2017-06-28 |
| 11 | 201741022467-Proof of Right (MANDATORY) [01-09-2017(online)].pdf | 2017-09-01 |
| 12 | Correspondence by Agent_Form30,Form1_05-09-2017.pdf | 2017-09-05 |
| 13 | 201741022467-PETITION UNDER RULE 137 [23-02-2021(online)].pdf | 2021-02-23 |
| 14 | 201741022467-OTHERS [23-02-2021(online)].pdf | 2021-02-23 |
| 15 | 201741022467-FORM 3 [23-02-2021(online)].pdf | 2021-02-23 |
| 16 | 201741022467-FER_SER_REPLY [23-02-2021(online)].pdf | 2021-02-23 |
| 17 | 201741022467-DRAWING [23-02-2021(online)].pdf | 2021-02-23 |
| 18 | 201741022467-COMPLETE SPECIFICATION [23-02-2021(online)].pdf | 2021-02-23 |
| 19 | 201741022467-CLAIMS [23-02-2021(online)].pdf | 2021-02-23 |
| 20 | 201741022467-FER.pdf | 2021-10-17 |
| 21 | 201741022467-US(14)-HearingNotice-(HearingDate-13-12-2023).pdf | 2023-11-23 |
| 22 | 201741022467-POA [28-11-2023(online)].pdf | 2023-11-28 |
| 23 | 201741022467-FORM 13 [28-11-2023(online)].pdf | 2023-11-28 |
| 24 | 201741022467-Correspondence to notify the Controller [28-11-2023(online)].pdf | 2023-11-28 |
| 25 | 201741022467-AMENDED DOCUMENTS [28-11-2023(online)].pdf | 2023-11-28 |
| 26 | 201741022467-Written submissions and relevant documents [28-12-2023(online)].pdf | 2023-12-28 |
| 27 | 201741022467-FORM 3 [28-12-2023(online)].pdf | 2023-12-28 |
| 28 | 201741022467-PatentCertificate29-12-2023.pdf | 2023-12-29 |
| 29 | 201741022467-IntimationOfGrant29-12-2023.pdf | 2023-12-29 |
| 1 | search022467E_25-09-2020.pdf | |