
A Method And System For Extended Depth Of Field Calculation For Optical Microscopic Images

Abstract: The invention relates to an image processing method and system for constructing a composite image with extended depth of field. The composite image may be constructed from a plurality of source images of a scene stored in an image stack. The method includes aligning the images in the image stack such that every image in the image stack is aligned with the other images in the stack; performing illumination and color correction on the aligned images in the image stack; generating an energy matrix for each illumination and color corrected image in the image stack by computing the energy content of each pixel; generating a raw index map that contains the location of every pixel having the maximum energy level among all the images in the image stack; generating a degree of defocus map; and constructing the composite image. Figure 1 (for publication)


Patent Information

Application #
5010/CHE/2012
Filing Date
30 November 2012
Publication Number
23/2014
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Granted
Email
Parent Application
Patent Number
332458
Legal Status
Granted
Grant Date
19 February 2020
Renewal Date

Applicants

Larsen & Toubro Limited
L&T Integrated Engineering Services, Mysore Campus, KIADB Industrial Area, Hebbal, Hootagalli, Mysore 570018, Karnataka, India

Inventors

1. Rabi Rout
L&T Integrated Engineering Services, Mysore Campus, KIADB Industrial Area, Hebbal, Hootagalli, Mysore 570018, Karnataka, India
2. Vajendra Desai
L&T Integrated Engineering Services, Mysore Campus, KIADB Industrial Area, Hebbal, Hootagalli, Mysore 570018, Karnataka, India
3. Harikrishnan Ramaraju
L&T Integrated Engineering Services, Mysore Campus, KIADB Industrial Area, Hebbal, Hootagalli, Mysore 570018, Karnataka, India
4. Gineesh Sukumaran
L&T Integrated Engineering Services, Mysore Campus, KIADB Industrial Area, Hebbal, Hootagalli, Mysore 570018, Karnataka, India
5. Sistu Ganesh
L&T Integrated Engineering Services, Mysore Campus, KIADB Industrial Area, Hebbal, Hootagalli, Mysore 570018, Karnataka, India
6. Devvrata Priyadarshi
L&T Integrated Engineering Services, Mysore Campus, KIADB Industrial Area, Hebbal, Hootagalli, Mysore 570018, Karnataka, India
7. Eldho Abraham
L&T Integrated Engineering Services, Mysore Campus, KIADB Industrial Area, Hebbal, Hootagalli, Mysore 570018, Karnataka, India
8. Saurabh Mishra
L&T Integrated Engineering Services, Mysore Campus, KIADB Industrial Area, Hebbal, Hootagalli, Mysore 570018, Karnataka, India
9. Sudipta Kumar Sahoo
L&T Integrated Engineering Services, Mysore Campus, KIADB Industrial Area, Hebbal, Hootagalli, Mysore 570018, Karnataka, India
10. Nivedita Tripathi
L&T Integrated Engineering Services, Mysore Campus, KIADB Industrial Area, Hebbal, Hootagalli, Mysore 570018, Karnataka, India
11. Siddhartha Goutham
L&T Integrated Engineering Services, Mysore Campus, KIADB Industrial Area, Hebbal, Hootagalli, Mysore 570018, Karnataka, India
12. Kunal Gauraw
L&T Integrated Engineering Services, Mysore Campus, KIADB Industrial Area, Hebbal, Hootagalli, Mysore 570018, Karnataka, India

Specification

Field of the Invention

The invention relates to the field of image processing in general, and particularly to a method and system for extending depth of field in an imaging device.

Background of the Invention

Limited depth of field is a common problem in imaging devices such as the conventional light microscope. Objects imaged in such cases are sharply in focus over a limited distance known as the depth of field. The depth of field (DOF) is the distance between the nearest and farthest objects in a scene that appear acceptably sharp in an image. Typically, sharpness decreases with depth, resulting in blurriness in at least some part of the image. Also, in order to capture a large amount of light from a small specimen under the imaging device, one needs a high numerical aperture. However, a high numerical aperture results in a very shallow depth of field, due to which it is not possible to have all regions of the scene in focus.

To improve the depth of field in a captured image, one or more digital image processing techniques may be employed. Using such techniques, images taken at different depths of field of the same scene may be combined to produce a single composite image. These techniques involve capturing multiple images of the same scene to form an image stack, identifying the focused parts in the images of the stack, and recreating a single image with better depth of field by combining the focused parts. During processing, index information of the images in the stack is collected and processed to generate a depth map and a composite image and/or a 3D model of the scene. Typically, the greater the number of images in the image stack, the greater the DOF in the composite image. However, with an increase in the number of images in the stack, the complexity, the time required for processing the images, the errors in the composite image and the memory requirement also increase.

There are many processing techniques which provide solutions to improve the depth of field. However, the known solutions suffer from one or more drawbacks, such as misalignment of the images in the stack, illumination variations in the composite image, noise in the depth map and composite image, low quality of the composite image with blotchy background, edge shadowing and depth cross-over, time complexity of the processes involved in depth of field calculations, too many manually configurable parameters, and inability to manage large image stacks. Therefore, there is a need for an improved method and system for digital image processing that may address at least one of the above-mentioned limitations.

Summary of the Invention

According to embodiments of the invention, an image processing method for constructing a composite image with extended depth of field is disclosed. The composite image may be constructed from a plurality of source images of a scene stored in at least one image stack. The plurality of source images may be taken at substantially identical fields of view.
The method includes aligning the images in the image stack such that every image in the image stack is aligned with the other images in the stack; performing illumination and color correction on the aligned images in the image stack; generating an energy matrix for each illumination and color corrected image in the image stack by computing the energy content of each pixel; generating a raw index map that contains the location of every pixel having the maximum energy level among all the images in the image stack; generating a degree of defocus map by comparing the energy content at a particular pixel in all the images against a reference signal and repeating the process for all the pixels; and constructing the composite image using the raw index map and the degree of defocus map.

According to another embodiment, a system for constructing a composite image with extended depth of field from a plurality of source images of a scene is disclosed. The disclosed system includes a memory for storing a plurality of source images of a scene taken at substantially identical fields of view; a processing unit for processing the images stored in the memory, to align the images in the image stack such that every image in the image stack is aligned with the other images in the stack, perform illumination and color correction on the aligned images in the image stack, generate an energy matrix for each illumination and color corrected image in the image stack by computing the energy content of each pixel, generate a raw index map that contains the location of every pixel having the maximum energy level among all the images in the image stack, generate a degree of defocus map by comparing the energy content at a particular pixel against a reference signal, and construct the composite image using the raw index map and the degree of defocus map; and an output unit for displaying the composite image received from the processing unit.

Other aspects, advantages, and salient features of the invention will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses exemplary embodiments of the invention.

Brief Description of the Drawings

The above and other aspects, features, and advantages of certain exemplary embodiments of the invention will be more apparent from the following description taken in conjunction with the accompanying drawings, in which: FIG. 1 illustrates a flow chart of a method for processing a plurality of images taken of a scene to generate a composite image with extended depth of field, according to one embodiment of the invention; FIG. 2 illustrates a method for performing illumination and color correction according to an embodiment of the invention; and FIG. 3 illustrates a block diagram of a system for constructing a composite image with extended depth of field from a plurality of source images of a scene, according to one embodiment of the invention.

Persons skilled in the art will appreciate that elements in the figures are illustrated for simplicity and clarity and may not have been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of various exemplary embodiments of the disclosure. Throughout the drawings, it should be noted that like reference numbers are used to depict the same or similar elements, features, and structures.
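[Editor's illustration] Before turning to the detailed description, the following is a minimal sketch, in Python with OpenCV and NumPy, of the basic stack → per-pixel energy → raw index map → composite flow that FIG. 1 elaborates. It is not the claimed method: it assumes pre-aligned, illumination-matched frames and substitutes a simple Laplacian energy measure for the complex wavelet filter bank described later; all names are illustrative.

```python
import cv2
import numpy as np

def naive_focus_stack(images):
    """Baseline extended-depth-of-field composite.

    images: list of same-size BGR uint8 frames of one scene, assumed
    already aligned and illumination/color corrected.
    Returns (composite, raw_index_map).
    """
    grays = [cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) for im in images]
    # Per-pixel focus energy: absolute Laplacian response, lightly
    # smoothed so isolated noisy pixels do not win the argmax.
    energy = np.stack([
        cv2.GaussianBlur(np.abs(cv2.Laplacian(g, cv2.CV_64F)), (9, 9), 0)
        for g in grays
    ])                                        # shape (N, H, W)
    index = np.argmax(energy, axis=0)         # raw index map (cf. step 110)
    rows, cols = np.indices(index.shape)
    # Take each output pixel from the frame in which it is sharpest.
    composite = np.stack(images)[index, rows, cols]
    return composite, index
```

The claimed method improves on this baseline chiefly through the bi-directional alignment, illumination correction and degree-of-defocus steps described next.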
Detailed Description of the Invention

The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of exemplary embodiments of the invention as defined by the claims and their equivalents. It includes various specific details to assist in that understanding, but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.

FIG. 1 illustrates a flow chart of a method 100 for processing a plurality of images taken of a scene to generate a composite image with extended depth of field, according to an exemplary embodiment of the invention. The extended depth of field indicates a greater depth of field in the composite/processed image as compared to the original images before processing.

At step 102, the method obtains a plurality of images captured from a defined subject, where the images are taken from different positions for said subject. According to an embodiment, the images may be captured by any known imaging device, such as, but not limited to, an optical microscope, a digital camera, etc. According to another embodiment, the images may be obtained from an archive having images of said subject captured from different positions. According to yet another embodiment, the different positions include capturing images from different 'Z' levels of the imaging device. The obtained images are arranged in a stack to form an image stack.

At step 104, the method performs image alignment. Image alignment is the process of transforming the different sets of image data in the image stack so that all the images share one coordinate system. The image data may be obtained from different sensors, at different times, or from different viewpoints. The image alignment process enables processing of images obtained from different measurements/sources. It involves finding an optimal one-to-one mapping between the pixels in one image and those in the other images in the stack.

According to one embodiment, a bi-directional image alignment method may be used for aligning the images in the image stack. The bi-directional image alignment method may include arranging the images in a sequence, such that the images are arranged as per their respective distance from the 'Z' level. From the image sequence, a first reference image may be selected, such that the first reference image is the substantially central image in the image sequence. The alignment process further includes comparing the first reference image with the immediate left side and immediate right side images in the sequence. The geometric transformation of the immediate left side image and the immediate right side image may be calculated with respect to the first reference image. Based on these calculations, the immediate left side image and the immediate right side image may be aligned with the first reference image. Subsequently, the immediate left side image may be compared with the second image on the left side of the first reference image in the image sequence, and the immediate right side image may be compared with the second image on the right side of the first reference image in the image sequence, and these may be aligned with the immediate left side and the immediate right side images respectively. The process may be repeated for all the images, thereby resulting in an aligned image stack that has substantially all images aligned with each other. The first and last images in the stack may have large variations, and processing the images in one direction, that is from first to last, may not provide an effectively aligned image stack; moreover, processing images in one direction may consume considerable time and memory. The disclosed two-way processing of images, on the other hand, reduces time and results in a better aligned stack of images.

According to an embodiment of the invention, the process of comparison and alignment may be performed by any known conventional image alignment method. According to another embodiment of the invention, the process of comparison and alignment may be performed by a parametric image alignment process. The parametric image alignment process includes specifying a particular geometric parametric transformation and then estimating its parameters by means of any known optimization method. Misaligned images generally differ by an affine transformation, for which six parameters (a, b, c, d, e, and f) need to be estimated:

x' = a*x + b*y + c
y' = d*x + e*y + f

These parameters may be identified using any known suitable optimization method. According to an exemplary embodiment of the invention, the disclosed method uses a hierarchical coarse-to-fine approach to parametric image alignment, using a gradient descent optimization method with normalized cross correlation as the cost function, based on image pyramids. The exemplary method of comparison and alignment includes reducing the resolution of the images in the image stack by a scale factor in the range of 1/2 to 1/16. According to yet another embodiment, the resolution of the images may be reduced by a scale factor of 1/8 of the original resolution to generate a stack of downsampled images. The method further includes creating an image pyramid of the downsampled images and performing a coarse estimation of the transformation on the downsampled images.

In microscopic imaging, the whole series of images, or some part of a series of images, may undergo the same transformations. Hence, if the second image undergoes substantially the same transformation, the transformation parameters obtained from the first image may be used as a clue, or initial guess, for the transformation of the next image. According to an embodiment, the hierarchical coarse-to-fine parametric image alignment process is implemented by a guided optimization process. The guided optimization process implements an initial-guess method/algorithm that searches for the global optimum of the cost function faster, by starting the iterations near the global optimum position of the previous image's cost function. The illustrated process of comparison and alignment is only exemplary in nature and may not be construed as limiting the invention; any other known method/process of comparison and alignment may be used without going beyond the scope of the invention.
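[Editor's illustration] A sketch of the bi-directional, guided alignment described above, assuming OpenCV >= 4.1. OpenCV's ECC maximization (cv2.findTransformECC) stands in for the gradient-descent/normalized-cross-correlation optimizer named in the text, and a single coarse level replaces the full image pyramid; names are illustrative.

```python
import cv2
import numpy as np

def align_stack_bidirectional(images, scale=0.125):
    """Align a Z-stack outward from its central frame (cf. step 104).

    Each frame is registered to its already-aligned inner neighbour;
    the previous frame's affine estimate seeds the next optimization
    (the 'initial guess' of the guided optimization process).
    scale=0.125 matches the 1/8 coarse factor mentioned in the text.
    """
    n = len(images)
    ref = n // 2                               # substantially central image
    h, w = images[0].shape[:2]
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)

    def small_gray(img):
        g = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
        return cv2.resize(g, None, fx=scale, fy=scale)

    aligned = [None] * n
    aligned[ref] = images[ref]
    for step in (-1, +1):                      # leftward, then rightward
        warp = np.eye(2, 3, dtype=np.float32)  # identity initial guess
        i = ref + step
        while 0 <= i < n:
            template = small_gray(aligned[i - step])     # inner neighbour
            _, warp = cv2.findTransformECC(
                template, small_gray(images[i]), warp,
                cv2.MOTION_AFFINE, criteria, None, 5)
            full = warp.copy()
            full[:, 2] /= scale                # rescale translation part
            aligned[i] = cv2.warpAffine(
                images[i], full, (w, h),
                flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
            i += step                          # warp seeds the next frame
    return aligned
```

Because each frame is registered against its already-warped neighbour, the per-frame estimate absorbs the accumulated transform, so no explicit composition of affine matrices is needed.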
At step 106, the method performs illumination and color correction on the aligned stack of images. FIG. 2 illustrates an exemplary flow chart 200 of the method for performing illumination and color correction according to an embodiment of the invention. RI1, RI2, RI3, ..., RIn refer to the aligned images in the stack of images. The method 200 of performing illumination and color correction, at step 202, may include selecting at least two consecutive images from the stack of aligned images, where one of the images is considered a second reference image and the other is considered a first sample image. At step 204, the selected images are converted from the RGB colour space to the HSV colour space. According to an embodiment of the invention, the conversion from RGB to HSV may be performed by any known method. At step 206, the HSV color images may be split into their HSV channels. Further, at step 208 and step 210, the method computes the average value of luminance and the average value of saturation for both HSV images respectively. At step 212 and step 214, the percentage deviation of average luminance and average saturation may be calculated respectively for the first sample image with respect to the second reference image. At step 216 and step 218, the percentage deviation of average luminance and the percentage deviation of average saturation are compared with a predefined threshold value respectively. According to an embodiment, the threshold value is a deviation of more than 2 per cent. According to another embodiment, the threshold value may be a deviation of more than 5 per cent. If the percentage deviation of average luminance is more than the predefined threshold value, then the first sample image may be multiplied by a luminance correction factor at step 220; else the image may be retained without incorporating any change. The luminance correction factor is the ratio of the average value of the luminance of the first sample image divided by the average value of the luminance of the second reference image. Similarly, if the percentage deviation of average saturation is more than the predefined threshold value, then the first sample image may be multiplied by a saturation correction factor at step 222; else the image may be retained without incorporating any change. The saturation correction factor is the ratio of the average value of the saturation of the first sample image divided by the average value of the saturation of the second reference image. At step 224, the HSV channels of the processed images are merged together. The disclosed process may be repeated for all the images in the aligned image stack, considering the corrected first sample image as the second reference image for the next image, and similarly repeating the process for the other images. Once corrected, the images may be converted back to the RGB color space at step 226 and stored in the image stack.
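[Editor's illustration] A sketch of the FIG. 2 correction, assuming BGR uint8 frames and OpenCV. One deliberate deviation is flagged in the comments: the factor used here is reference mean / sample mean, so that multiplying the sample moves it toward the reference average; the ratio as worded in the text reads as the inverse.

```python
import cv2
import numpy as np

def match_illumination(reference, sample, threshold=0.02):
    """Equalize a sample frame's V (luminance) and S (saturation)
    against a reference frame (cf. steps 202-226 of FIG. 2).
    threshold: fractional deviation above which correction is applied
    (the text suggests 2% or 5%)."""
    ref = cv2.cvtColor(reference, cv2.COLOR_BGR2HSV).astype(np.float32)
    smp = cv2.cvtColor(sample, cv2.COLOR_BGR2HSV).astype(np.float32)
    for ch in (1, 2):                      # channel 1 = S, channel 2 = V
        ref_mean = ref[..., ch].mean()
        smp_mean = smp[..., ch].mean()
        if abs(smp_mean - ref_mean) / ref_mean > threshold:
            # reference/sample, so the product lands on the reference
            # average (the text's wording states the inverse ratio).
            smp[..., ch] *= ref_mean / smp_mean
    smp = np.clip(smp, 0, 255).astype(np.uint8)
    return cv2.cvtColor(smp, cv2.COLOR_HSV2BGR)

def correct_stack(aligned):
    """Chain the correction through the stack: each corrected frame
    serves as the reference for the next frame, as described above."""
    corrected = [aligned[0]]
    for frame in aligned[1:]:
        corrected.append(match_illumination(corrected[-1], frame))
    return corrected
```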
At step 108, the method computes the energy content of each pixel of the illumination and color corrected stacked images, to generate an energy matrix for each image. According to an exemplary embodiment, a complex wavelet decomposition method may be used for the wavelet decomposition. According to the complex wavelet decomposition method, the step of computing the energy content includes selecting one of the images from the illumination and color corrected image stack. The selected image is converted from RGB color scale to grayscale for wavelet decomposition. The method further includes downsampling the grayscale image to a lower resolution, for example by one level, and normalizing the intensity values to the range of 0 to 1. Processing the image at a lower resolution may reduce the impulse noise present in the images and hence may provide better results. The method further includes convolving the downsampled image with a complex wavelet filter bank to generate an energy matrix for said image. The process may be repeated for all the images in the illumination and color corrected image stack, so as to have at least one energy matrix for each image in the stack. According to another embodiment, the energy matrix may be generated using any other known process, such as, but not limited to, real wavelets (Haar, Daubechies), difference of Gaussians, variance, Tenengrad, Fourier transform and high pass filter.

At step 110, the method generates a raw index map for the scene. The process of generating the raw index map includes analyzing the energy matrices on a pixel-by-pixel basis for all the images and identifying the maximally focused pixel for a particular pixel position in the image stack. The process is repeated for all the pixels of the scene, and an index of all the focused pixels may be used to generate the raw index map.

At step 112, the method generates a degree of defocus map by comparing the energy content at a particular pixel in all the images against a reference signal.
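[Editor's illustration] A sketch of steps 108-110 under the stated downsample-and-normalize scheme. Tenengrad (squared gradient magnitude), one of the alternatives the text itself lists, substitutes for the complex wavelet filter bank; names are illustrative.

```python
import cv2
import numpy as np

def energy_matrix(image):
    """Per-pixel focus energy for one corrected frame (cf. step 108)."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Downsample by one pyramid level and normalize to [0, 1];
    # working at lower resolution suppresses impulse noise.
    small = cv2.pyrDown(gray).astype(np.float32) / 255.0
    gx = cv2.Sobel(small, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(small, cv2.CV_32F, 0, 1, ksize=3)
    return gx * gx + gy * gy               # Tenengrad energy

def raw_index_map(stack):
    """cf. step 110: per pixel, the index of the frame with the
    maximum energy across the whole stack."""
    energies = np.stack([energy_matrix(im) for im in stack])
    index = np.argmax(energies, axis=0).astype(np.uint8)
    # Upsample the map back to source resolution for composition.
    h, w = stack[0].shape[:2]
    return cv2.resize(index, (w, h), interpolation=cv2.INTER_NEAREST)
```

The composite can then be assembled by taking each pixel from the frame the index map names, optionally regularized by the degree of defocus map that step 112 goes on to describe.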

Documents

Application Documents

# Name Date
1 Form-5.pdf 2012-12-06
2 Form-3.pdf 2012-12-06
3 Form-1.pdf 2012-12-06
4 Drawings.pdf 2012-12-06
5 5010-CHE-2012 POWER OF ATTORNEY 13-05-2013.pdf 2013-05-13
6 5010-CHE-2012 FORM-1 13-05-2013.pdf 2013-05-13
7 5010-CHE-2012 CORRESPONDENCE OTHERS 13-05-2013.pdf 2013-05-13
8 5010-CHE-2012 FORM-5 03-10-2013.pdf 2013-10-03
9 5010-CHE-2012 FORM-3 03-10-2013.pdf 2013-10-03
10 5010-CHE-2012 FORM-2 03-10-2013.pdf 2013-10-03
11 5010-CHE-2012 FORM-1 03-10-2013.pdf 2013-10-03
12 5010-CHE-2012 DRAWING 03-10-2013.pdf 2013-10-03
13 5010-CHE-2012 DESCRIPTION (COMPLETE) 03-10-2013.pdf 2013-10-03
14 5010-CHE-2012 CORRESPONDENCE OTHERS 03-10-2013.pdf 2013-10-03
15 5010-CHE-2012 CLAIMS 03-10-2013.pdf 2013-10-03
16 5010-CHE-2012 ABSTRACT 03-10-2013.pdf 2013-10-03
17 5010-CHE-2012 FORM-18 04-11-2013.pdf 2013-11-04
18 5010-CHE-2012 FORM-13 04-11-2013.pdf 2013-11-04
19 5010-CHE-2012 CORRESPONDENCE OTHERS 04-11-2013.pdf 2013-11-04
20 5010-CHE-2012 CORRESPONDENCE OTHERS 04-11-2013..pdf 2013-11-04
21 abstract5010-che-2012.jpg 2014-03-03
22 5010-CHE-2012-Form 3-220316.pdf 2016-03-28
23 5010-CHE-2012-Correspondence-220316.pdf 2016-03-28
24 5010-CHE-2012-OTHERS-Assignment-020516.pdf 2016-07-19
25 5010-CHE-2012-Form 6-020516.pdf 2016-07-19
26 5010-CHE-2012-Correspondence-Form 3-020516.pdf 2016-07-19
27 5010-CHE-2012-Form 3-020516.pdf 2016-07-19
28 5010-CHE-2012-Correspondence-Form 6,Others-020516.pdf 2016-07-19
29 Correspondence by Applicant_Request for FER_19-04-2018 .pdf 2018-04-19
30 Correspondence by Applicant_Form3, Granted US Patent_23-05-2018.pdf 2018-05-23
31 Form3_After Filing_23-05-2018.pdf 2018-05-23
32 Granted US Patent_After Filing_23-05-2018.pdf 2018-05-23
33 5010-CHE-2012-FER.pdf 2019-03-20
34 Correspondence by Agent_ E-mail ID Updation_22-04-2019.pdf 2019-04-22
35 Abstract_FER Reply_17-05-2019.pdf 2019-05-17
36 Claims_FER Reply_17-05-2019.pdf 2019-05-17
37 Correspondence by Applicant_Reply to Examination Report_17-05-2019.pdf 2019-05-17
38 Drawing_FER Reply_17-05-2019.pdf 2019-05-17
39 Marked copy_FER Reply_17-05-2019.pdf 2019-05-17
40 Petition ur 137_Fer Reply_17-05-2019.pdf 2019-05-17
41 5010-CHE-2012-Abstract_Granted 332458_19-02-2020.pdf 2020-02-19
42 5010-CHE-2012-Claims_Granted 332458_19-02-2020.pdf 2020-02-19
43 5010-CHE-2012-Description_Granted 332458_19-02-2020.pdf 2020-02-19
44 5010-CHE-2012-Drawings_Granted 332458_19-02-2020.pdf 2020-02-19
45 5010-CHE-2012-IntimationOfGrant19-02-2020.pdf 2020-02-19
46 5010-CHE-2012-Marked up Claims_Granted 332458_19-02-2020.pdf 2020-02-19
47 5010-CHE-2012-PatentCertificate19-02-2020.pdf 2020-02-19
48 5010-CHE-2012-Covering Letter [30-05-2022(online)].pdf 2022-05-30
49 5010-CHE-2012-PETITION u-r 6(6) [30-05-2022(online)].pdf 2022-05-30
50 5010-CHE-2012-Power of Authority [30-05-2022(online)].pdf 2022-05-30
51 5010-CHE-2012-Correspondence_Request Form Mail Id Update_30-06-2022.pdf 2022-06-30
52 5010-CHE-2012-RELEVANT DOCUMENTS [29-09-2022(online)].pdf 2022-09-29
53 5010-CHE-2012-RELEVANT DOCUMENTS [26-09-2023(online)].pdf 2023-09-26

Search Strategy

1 SearchStrategy_19-03-2019.pdf

ERegister / Renewals

3rd: 11 May 2020

From 30/11/2014 - To 30/11/2015

4th: 11 May 2020

From 30/11/2015 - To 30/11/2016

5th: 11 May 2020

From 30/11/2016 - To 30/11/2017

6th: 11 May 2020

From 30/11/2017 - To 30/11/2018

7th: 11 May 2020

From 30/11/2018 - To 30/11/2019

8th: 11 May 2020

From 30/11/2019 - To 30/11/2020

9th: 11 May 2020

From 30/11/2020 - To 30/11/2021

10th: 30 May 2022

From 30/11/2021 - To 30/11/2022

11th: 30 May 2022

From 30/11/2022 - To 30/11/2023

12th: 28 Nov 2023

From 30/11/2023 - To 30/11/2024

13th: 28 Nov 2023

From 30/11/2024 - To 30/11/2025