
Encoding Apparatus And Encoding Method Including Encoding Of Error Transform Coefficients

Abstract: Provided is a speech encoding device which can accurately encode the spectrum shape of a signal having a strong tonality, such as a vowel. The device includes: a sub-band forming unit (151) which divides first layer error transform coefficients to be encoded into M sub-bands so as to generate M sub-band transform coefficients; a shape vector encoding unit (152) which encodes each of the M sub-band transform coefficients so as to obtain M items of shape encoded information, and calculates a target gain for each of the M sub-band transform coefficients; a gain vector forming unit (153) which forms one gain vector using the M target gains; a gain vector encoding unit (154) which encodes the gain vector so as to obtain gain encoded information; and a multiplexing unit (155) which multiplexes the shape encoded information with the gain encoded information.


Patent Information

Application #
Filing Date
20 August 2009
Publication Number
42/2012
Publication Type
INA
Invention Field
ELECTRONICS
Status
Parent Application
Patent Number
Legal Status
Grant Date
2017-02-28
Renewal Date

Applicants

PANASONIC CORPORATION
1006, OAZA KADOMA, KADOMA-SHI, OSAKA 571-8501, JAPAN.

Inventors

1. OSHIKIRI MASAHIRO
C/O PANASONIC CORPORATION 1006, OAZA KADOMA, KADOMA-SHI, OSAKA 571-8501, JAPAN.
2. MORII TOSHIYUKI
C/O PANASONIC CORPORATION 1006, OAZA KADOMA, KADOMA-SHI, OSAKA 571-8501, JAPAN.
3. YAMANASHI TOMOFUMI
C/O PANASONIC CORPORATION 1006, OAZA KADOMA, KADOMA-SHI, OSAKA 571-8501, JAPAN.

Specification

FORM 2
THE PATENTS ACT, 1970 (39 of 1970) & THE PATENTS RULES, 2003
COMPLETE SPECIFICATION [See section 10, Rule 13]
ENCODING DEVICE AND ENCODING METHOD; PANASONIC CORPORATION LTD, A CORPORATION ORGANIZED AND EXISTING UNDER THE LAWS OF JAPAN, WHOSE ADDRESS IS 1006, OAZA KADOMA, KADOMA-SHI, OSAKA 571-8501, JAPAN. THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

DESCRIPTION

Technical Field
The present invention relates to an encoding apparatus and encoding method used in a communication system that encodes and transmits input signals such as speech signals.

Background Art
In a mobile communication system, speech signals are required to be compressed to low bit rates for transmission, to efficiently utilize radio wave resources and so on. On the other hand, improved quality in phone call speech and high-fidelity call services are also demanded, and, to meet these demands, it is preferable not only to provide high-quality speech signals but also to encode, with high quality, signals other than speech, such as wider-band audio signals. The technique of integrating a plurality of coding techniques in layers is promising for these two contradictory demands. This technique combines, in layers, a base layer for encoding input signals at low bit rates in a form suited to speech signals, and an enhancement layer for encoding the differential signals between the input signals and the base layer decoded signals in a form suited to signals other than speech.
The technique of performing layered coding in this way has the characteristic of providing scalability in the bit streams acquired from an encoding apparatus, that is, of allowing decoded signals to be acquired from part of the information in the bit streams, and is therefore generally referred to as "scalable coding (layered coding)." Thanks to this characteristic, the scalable coding scheme can flexibly support communication between networks of varying bit rates, and is consequently suited to a future network environment where various networks will be integrated by IP (Internet Protocol). For example, Non-Patent Document 1 discloses a technique for realizing scalable coding using the technique standardized by MPEG-4 (Moving Picture Experts Group phase-4). This technique uses CELP (Code Excited Linear Prediction) coding, which is suited to speech signals, in the base layer, and uses transform coding such as AAC (Advanced Audio Coder) and TwinVQ (Transform Domain Weighted Interleave Vector Quantization), applied to the residual signal obtained by subtracting the base layer decoded signal from the original signal, in the enhancement layer. Further, to flexibly support a network environment in which transmission speed fluctuates dynamically due to handover between different types of networks and the occurrence of congestion, scalable coding with small bit rate steps needs to be realized, and the coder accordingly needs to be configured with multiple layers of low bit rates. Patent Document 1 and Patent Document 2 disclose transform coding techniques that transform the signal to be encoded into the frequency domain and encode the resulting frequency domain signal. In such transform coding, first, an energy component of the frequency domain signal, that is, a gain (i.e. scale factor), is calculated and quantized on a per-subband basis, and then a fine component of the frequency domain signal, that is, a shape vector, is calculated and quantized.
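The conventional gain-then-shape order described above can be sketched as follows. The codebooks and the nearest-neighbor searches here are illustrative assumptions, not details taken from Patent Documents 1 and 2; the point is only that the shape is normalized by an already-quantized gain and therefore inherits its quantization error.

```python
import numpy as np

def quantize_gain_then_shape(subband, gain_codebook, shape_codebook):
    """Conventional order: quantize the subband gain (scale factor) first,
    then quantize the normalized fine structure (shape vector)."""
    gain = np.sqrt(np.mean(subband ** 2))            # energy component of the subband
    g_idx = int(np.argmin((gain_codebook - gain) ** 2))
    q_gain = gain_codebook[g_idx]
    shape = subband / q_gain                          # shape inherits the gain's quantization error
    errs = [np.sum((shape - c) ** 2) for c in shape_codebook]
    s_idx = int(np.argmin(errs))
    return g_idx, s_idx, q_gain * shape_codebook[s_idx]
```

Because the gain is fixed before the shape search begins, any distortion in the quantized gain warps the normalized target that the shape codebook must match, which is the tendency criticized in the Disclosure of the Invention below.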
Non-Patent Document 1: "All about MPEG-4," written and edited by Sukeichi MIKI, first edition, Kogyo Chosakai Publishing, Inc., September 30, 1998, pages 126 to 127
Patent Document 1: Japanese Translation of PCT Application Laid-Open No. 2006-513457
Patent Document 2: Japanese Patent Application Laid-Open No. HEI 7-261800

Disclosure of the Invention
Problems to be Solved by the Invention
However, when two successive parameters are quantized in order, the parameter quantized later is influenced by the quantization distortion of the parameter quantized earlier, and is therefore inclined to show increased quantization distortion. Consequently, in the transform coding disclosed in Patent Document 1 and Patent Document 2, which quantizes the gain and then the shape vector in that order, the shape vectors tend to show increased quantization distortion and cannot represent the spectral shape accurately. This problem produces significant quality deterioration for signals of strong tonality such as vowels, that is, signals whose spectral characteristic is that multiple peak shapes are observed, and it becomes more pronounced at lower bit rates. It is therefore an object of the present invention to provide an encoding apparatus and encoding method that accurately encode the spectral shapes of signals of strong tonality such as vowels, that is, of signals whose spectral characteristic is that multiple peak shapes are observed, and that improve the quality of decoded signals, such as the sound quality of decoded speech.
Means for Solving the Problem
The encoding apparatus according to the present invention employs a configuration which includes: a base layer encoding section that encodes an input signal to acquire base layer encoded data; a base layer decoding section that decodes the base layer encoded data to acquire a base layer decoded signal; and an enhancement layer encoding section that encodes a residual signal representing the difference between the input signal and the base layer decoded signal, to acquire enhancement layer encoded data, and in which the enhancement layer encoding section has: a dividing section that divides the residual signal into a plurality of subbands; a first shape vector encoding section that encodes the plurality of subbands to acquire first shape encoded information, and that calculates target gains of the plurality of subbands; a gain vector forming section that forms one gain vector using the plurality of target gains; and a gain vector encoding section that encodes the gain vector to acquire first gain encoded information.
The encoding method according to the present invention includes: dividing transform coefficients, acquired by transforming an input signal into the frequency domain, into a plurality of subbands; encoding the transform coefficients of the plurality of subbands to acquire first shape encoded information, and calculating target gains of the transform coefficients of the plurality of subbands; forming one gain vector using the plurality of target gains; and encoding the gain vector to acquire first gain encoded information.

Advantageous Effects of Invention
The present invention can accurately encode the spectral shapes of signals of strong tonality such as vowels, that is, of signals whose spectral characteristic is that multiple peak shapes are observed, and can improve the quality of decoded signals, such as the sound quality of decoded speech.
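The claimed order reverses the conventional one: each subband's shape is encoded first, the resulting ideal (target) gains are collected into one gain vector, and that vector is quantized in a single step. A minimal sketch follows; the correlation-maximizing shape search and both codebooks are hypothetical stand-ins, and the target gain formula used is the least-squares gain for the chosen shape, a common choice for such schemes rather than a quotation from the claims.

```python
import numpy as np

def encode_shape_then_gain(subbands, shape_codebook, gain_vector_codebook):
    """Shape-first encoding: a shape index per subband, then one vector
    quantization over the M target gains."""
    shape_idxs, target_gains = [], []
    for sb in subbands:
        # Pick the shape maximizing normalized cross-correlation with the subband.
        scores = [np.dot(sb, c) ** 2 / np.dot(c, c) for c in shape_codebook]
        i = int(np.argmax(scores))
        shape_idxs.append(i)
        c = shape_codebook[i]
        # Least-squares (target) gain for the selected, unquantized shape.
        target_gains.append(np.dot(sb, c) / np.dot(c, c))
    g = np.array(target_gains)                       # one gain vector from M target gains
    errs = [np.sum((g - v) ** 2) for v in gain_vector_codebook]
    return shape_idxs, int(np.argmax(np.equal(errs, min(errs))))
```

Because the target gains are computed against shapes that carry no prior quantization error, the shape search is undistorted, and quantizing all M gains jointly as one vector can exploit correlation between neighboring subband gains.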
Brief Description of Drawings
FIG.1 is a block diagram showing the main configuration of a speech encoding apparatus according to Embodiment 1 of the present invention;
FIG.2 is a block diagram showing the configuration inside a second layer encoding section according to Embodiment 1 of the present invention;
FIG.3 is a flowchart showing steps of second layer encoding processing in the second layer encoding section according to Embodiment 1 of the present invention;
FIG.4 is a block diagram showing the configuration inside a shape vector encoding section according to Embodiment 1 of the present invention;
FIG.5 is a block diagram showing the configuration inside the gain vector forming section according to Embodiment 1 of the present invention;
FIG.6 illustrates in detail the operation of a target gain arranging section according to Embodiment 1 of the present invention;
FIG.7 is a block diagram showing the configuration inside a gain vector encoding section according to Embodiment 1 of the present invention;
FIG.8 is a block diagram showing the main configuration of a speech decoding apparatus according to Embodiment 1 of the present invention;
FIG.9 is a block diagram showing the configuration inside a second layer decoding section according to Embodiment 1 of the present invention;
FIG.10 illustrates a shape vector codebook according to Embodiment 2 of the present invention;
FIG.11 illustrates multiple shape vector candidates included in the shape vector codebook according to Embodiment 2 of the present invention;
FIG.12 is a block diagram showing the configuration inside the second layer encoding section according to Embodiment 3 of the present invention;
FIG.13 illustrates range selecting processing in a range selecting section according to Embodiment 3 of the present invention;
FIG.14 is a block diagram showing the configuration inside the second layer decoding section according to Embodiment 3 of the present invention;
FIG.15 shows a variation of the range selecting section according to Embodiment 3 of the present invention;
FIG.16 shows a variation of a range selecting method in the range selecting section according to Embodiment 3 of the present invention;
FIG.17 is a block diagram showing a variation of the configuration of the range selecting section according to Embodiment 3 of the present invention;
FIG.18 illustrates how range information is formed in the range information forming section according to Embodiment 3 of the present invention;
FIG.19 illustrates the operation of a variation of a first layer error transform coefficient generating section according to Embodiment 3 of the present invention;
FIG.20 shows a variation of the range selecting method in the range selecting section according to Embodiment 3 of the present invention;
FIG.21 shows a variation of the range selecting method in the range selecting section according to Embodiment 3 of the present invention;
FIG.22 is a block diagram showing the configuration inside the second layer encoding section according to Embodiment 4 of the present invention;
FIG.23 is a block diagram showing the main configuration of the speech encoding apparatus according to Embodiment 5 of the present invention;
FIG.24 is a block diagram showing the main configuration inside the first layer encoding section according to Embodiment 5 of the present invention;
FIG.25 is a block diagram showing the main configuration inside the first layer decoding section according to Embodiment 5 of the present invention;
FIG.26 is a block diagram showing the main configuration of the speech decoding apparatus according to Embodiment 5 of the present invention;
FIG.27 is a block diagram showing the main configuration of the speech encoding apparatus according to Embodiment 6 of the present invention;
FIG.28 is a block diagram showing the main configuration of the speech decoding apparatus according to Embodiment 6 of the present invention;
FIG.29 is a block diagram showing the main configuration of the speech encoding apparatus according to Embodiment 7 of the present invention;
FIG.30 illustrates processing of selecting the range which is the target to be encoded in encoding processing in the speech encoding apparatus according to Embodiment 7 of the present invention;
FIG.31 is a block diagram showing the main configuration of the speech decoding apparatus according to Embodiment 7 of the present invention;
FIG.32 illustrates a case where the target to be encoded is selected from range candidates arranged at equal intervals, in encoding processing in the speech encoding apparatus according to Embodiment 7 of the present invention; and
FIG.33 illustrates a case where the target to be encoded is selected from range candidates arranged at equal intervals, in encoding processing in the speech encoding apparatus according to Embodiment 7 of the present invention.

Best Mode for Carrying Out the Invention
Hereinafter, embodiments of the present invention will be explained in detail with reference to the accompanying drawings. A speech encoding apparatus and speech decoding apparatus will be used below as an example of the encoding apparatus and decoding apparatus according to the present invention.

(Embodiment 1)
FIG.1 is a block diagram showing the main configuration of speech encoding apparatus 100 according to Embodiment 1 of the present invention. An example will be explained where the speech encoding apparatus and speech decoding apparatus according to the present embodiment employ a scalable configuration of two layers. The first layer constitutes the base layer and the second layer constitutes the enhancement layer. In FIG.1, speech encoding apparatus 100 has frequency domain transforming section 101, first layer encoding section 102, first layer decoding section 103, subtractor 104, second layer encoding section 105 and multiplexing section 106.
Frequency domain transforming section 101 transforms a time domain input signal into a frequency domain signal, and outputs the resulting input transform coefficients to first layer encoding section 102 and subtractor 104. First layer encoding section 102 performs encoding processing on the input transform coefficients received from frequency domain transforming section 101, and outputs the resulting first layer encoded data to first layer decoding section 103 and multiplexing section 106. First layer decoding section 103 performs decoding processing using the first layer encoded data received from first layer encoding section 102, and outputs the resulting first layer decoded transform coefficients to subtractor 104. Subtractor 104 subtracts the first layer decoded transform coefficients received from first layer decoding section 103 from the input transform coefficients received from frequency domain transforming section 101, and outputs the resulting first layer error transform coefficients to second layer encoding section 105. Second layer encoding section 105 performs encoding processing on the first layer error transform coefficients received from subtractor 104, and outputs the resulting second layer encoded data to multiplexing section 106. Second layer encoding section 105 will be described in detail later. Multiplexing section 106 multiplexes the first layer encoded data received from first layer encoding section 102 with the second layer encoded data received from second layer encoding section 105, and outputs the resulting bit stream to a transmission channel.
FIG.2 is a block diagram showing the configuration inside second layer encoding section 105. In FIG.2, second layer encoding section 105 has subband forming section 151, shape vector encoding section 152, gain vector forming section 153, gain vector encoding section 154 and multiplexing section 155.
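The flow through sections 101 to 106 can be traced with a toy implementation. Here `np.fft.rfft` stands in for the unspecified frequency domain transform, and coarse and fine uniform quantizers stand in for the first and second layer coders, whose actual designs the specification develops in the embodiments; only the section-to-section wiring is taken from the text.

```python
import numpy as np

def coarse_quantize(coeffs, step):
    # Uniform quantizer used as a stand-in for a layer coder; np.round
    # rounds the real and imaginary parts of complex values independently.
    return np.round(coeffs / step)

def encode(x, step1=1.0, step2=0.1):
    """Toy version of speech encoding apparatus 100 (sections 101-106)."""
    coeffs = np.fft.rfft(x)                   # 101: time domain -> frequency domain
    layer1 = coarse_quantize(coeffs, step1)   # 102: first layer encoding (coarse)
    decoded1 = layer1 * step1                 # 103: first layer local decoding
    error = coeffs - decoded1                 # 104: first layer error transform coefficients
    layer2 = coarse_quantize(error, step2)    # 105: second layer encodes the residual (fine)
    return layer1, layer2                     # 106: "multiplexed" encoded data

def decode(layer1, layer2, n, step1=1.0, step2=0.1):
    # The enhancement layer refines the base layer reconstruction;
    # dropping layer2 still yields a (coarser) decodable signal.
    return np.fft.irfft(layer1 * step1 + layer2 * step2, n)
```

Decoding with `layer2` zeroed out demonstrates the scalability property: part of the bit stream still yields a valid, lower-quality decoded signal.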
Subband forming section 151 divides the first layer error transform coefficients received from subtractor 104 into M subbands, and outputs the resulting M subband transform coefficients to shape vector encoding section 152. Here, when the first layer error transform coefficients are represented as e1(k), the m-th subband transform coefficients e(m,k) (where 0 ≤ m ≤ M−1) are represented by the following equation 1.

e(m,k) = e1(k + F(m))  (0 ≤ k < F(m+1) − F(m))   ... (Equation 1)
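Equation 1 amounts to slicing e1(k) at the subband boundary indices F(m): the m-th subband consists of the coefficients from index F(m) up to (but not including) F(m+1). A direct rendering, with an illustrative boundary table:

```python
import numpy as np

def form_subbands(e1, F):
    """Subband forming section 151: split the first layer error transform
    coefficients e1(k) into M subbands per Equation 1,
    e(m, k) = e1(k + F(m)) for 0 <= k < F(m+1) - F(m)."""
    return [e1[F[m]:F[m + 1]] for m in range(len(F) - 1)]
```

With F = [0, 3, 7, 10], for example, a 10-coefficient vector is split into subbands of widths 3, 4 and 3; non-uniform boundary tables like this allow narrower subbands where finer spectral resolution is wanted.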

Documents

Orders

Section Controller Decision Date

Application Documents

# Name Date
1 1568-MUMNP-2009-ENGLISH TRANSLATION-(08-03-2016).pdf 2016-03-08
1 1568-MUMNP-2009-RELEVANT DOCUMENTS [22-09-2023(online)].pdf 2023-09-22
2 1568-MUMNP-2009-CORRESPONDENCE-(08-03-2016).pdf 2016-03-08
2 1568-MUMNP-2009-RELEVANT DOCUMENTS [20-09-2022(online)].pdf 2022-09-20
3 Petition Under Rule 137 [23-06-2016(online)].pdf 2016-06-23
3 1568-MUMNP-2009-RELEVANT DOCUMENTS [13-08-2021(online)].pdf 2021-08-13
4 OTHERS [23-06-2016(online)].pdf 2016-06-23
4 1568-MUMNP-2009-RELEVANT DOCUMENTS [04-03-2020(online)].pdf 2020-03-04
5 Other Patent Document [23-06-2016(online)].pdf 2016-06-23
5 1568-MUMNP-2009-RELEVANT DOCUMENTS [21-02-2019(online)].pdf 2019-02-21
6 Form 3 [23-06-2016(online)].pdf 2016-06-23
6 1568-MUMNP-2009-ABSTRACT(7-9-2009).pdf 2018-08-10
7 Examination Report Reply Recieved [23-06-2016(online)].pdf 2016-06-23
8 Description(Complete) [23-06-2016(online)].pdf 2016-06-23
8 1568-mumnp-2009-abstract.pdf 2018-08-10
9 1568-MUMNP-2009-CERTIFICATE(7-9-2009).pdf 2018-08-10
9 Claims [23-06-2016(online)].pdf 2016-06-23
10 1568-MUMNP-2009-CLAIMS(7-9-2009).pdf 2018-08-10
10 Abstract [23-06-2016(online)].pdf 2016-06-23
11 Other Patent Document [05-10-2016(online)].pdf 2016-10-05
12 1568-mumnp-2009-claims.pdf 2018-08-10
12 Other Patent Document [07-02-2017(online)].pdf 2017-02-07
13 1568-MUMNP-2009-CORRESPONDENCE(12-3-2012).pdf 2018-08-10
13 Other Patent Document [13-02-2017(online)].pdf 2017-02-13
14 1568-MUMNP-2009-CORRESPONDENCE(16-2-2010).pdf 2018-08-10
14 1568-MUMNP-2009-RELEVANT DOCUMENTS [16-02-2018(online)].pdf 2018-02-16
15 1568-MUMNP-2009-CORRESPONDENCE(7-9-2009).pdf 2018-08-10
15 POA,FORM-1,2.pdf 2018-08-10
16 1568-MUMNP-2009-CORRESPONDENCE(8-2-2011).pdf 2018-08-10
16 Others.pdf_42.pdf 2018-08-10
17 Others.pdf 2018-08-10
17 1568-MUMNP-2009-CORRESPONDENCE(IPO)-(28-2-2017).pdf 2018-08-10
18 1568-MUMNP-2009-CORRESPONDENCE(IPO)-(DECISION)-(28-2-2017).pdf 2018-08-10
18 FORM-6.pdf 2018-08-10
19 1568-MUMNP-2009-CORRESPONDENCE(IPO)-(HEARING NOTICE)-(5-1-2017).pdf 2018-08-10
19 FER RESPONSE.pdf_46.pdf 2018-08-10
20 1568-mumnp-2009-correspondence.pdf 2018-08-10
20 FER RESPONSE.pdf 2018-08-10
21 1568-MUMNP-2009-DESCRIPTION(COMPLETE)-(7-9-2009).pdf 2018-08-10
21 Complete Specification.pdf_43.pdf 2018-08-10
22 Complete Specification.pdf 2018-08-10
23 1568-mumnp-2009-description(complete).pdf 2018-08-10
23 Claims - Clean.pdf_45.pdf 2018-08-10
24 Claims - Clean.pdf 2018-08-10
24 1568-MUMNP-2009-DRAWING(7-9-2009).pdf 2018-08-10
25 1568-mumnp-2009-drawing.pdf 2018-08-10
25 ASSIGNMENT.pdf 2018-08-10
26 1568-MUMNP-2009-FORM 1(7-9-2009).pdf 2018-08-10
26 abstract1.jpg 2018-08-10
27 1568-mumnp-2009-form 1.pdf 2018-08-10
27 ABSTRACT.pdf_44.pdf 2018-08-10
28 1568-MUMNP-2009-FORM 18(8-2-2011).pdf 2018-08-10
28 ABSTRACT.pdf 2018-08-10
29 1568-mumnp-2009-form 2(title page).pdf 2018-08-10
29 1568-MUMNP-2009_EXAMREPORT.pdf 2018-08-10
30 1568-MUMNP-2009-POWER OF ATTORNEY(7-9-2009).pdf 2018-08-10
31 1568-MUMNP-2009-FORM 2.pdf 2018-08-10
31 1568-mumnp-2009-pct-isa-210.pdf 2018-08-10
32 1568-MUMNP-2009-FORM 3(16-2-2010).pdf 2018-08-10
32 1568-mumnp-2009-pct-ib-304.pdf 2018-08-10
33 1568-mumnp-2009-form 3.pdf 2018-08-10
33 1568-mumnp-2009-other document.pdf 2018-08-10
34 1568-mumnp-2009-form 5.pdf 2018-08-10
34 1568-mumnp-2009-international publication report a1.pdf 2018-08-10
35 1568-mumnp-2009-international publication report a1.pdf 2018-08-10
35 1568-mumnp-2009-form 5.pdf 2018-08-10
36 1568-mumnp-2009-other document.pdf 2018-08-10
36 1568-mumnp-2009-form 3.pdf 2018-08-10

ERegister / Renewals

3rd: 25 May 2017 (From 28/02/2010 to 28/02/2011)
4th: 25 May 2017 (From 28/02/2011 to 28/02/2012)
5th: 25 May 2017 (From 28/02/2012 to 28/02/2013)
6th: 25 May 2017 (From 28/02/2013 to 28/02/2014)
7th: 25 May 2017 (From 28/02/2014 to 28/02/2015)
8th: 25 May 2017 (From 28/02/2015 to 28/02/2016)
9th: 25 May 2017 (From 28/02/2016 to 28/02/2017)
10th: 25 May 2017 (From 28/02/2017 to 28/02/2018)
11th: 12 Jan 2018 (From 28/02/2018 to 28/02/2019)
12th: 17 Jan 2019 (From 28/02/2019 to 28/02/2020)
13th: 09 Jan 2020 (From 28/02/2020 to 28/02/2021)
14th: 13 Jan 2021 (From 28/02/2021 to 28/02/2022)
15th: 12 Jan 2022 (From 28/02/2022 to 28/02/2023)
16th: 09 Jan 2023 (From 28/02/2023 to 28/02/2024)
17th: 10 Jan 2024 (From 28/02/2024 to 28/02/2025)