
Method And System For Processing Data In An Inbuilt Memory

Abstract: A method and system for processing data in a memory on a big data platform is provided. In the present invention, the input data is loaded into the memory as a resilient distributed dataset. When the data is received at a node in the memory, the data is filtered, transformed and aggregated. The system is further configured to dynamically select one or more rules out of a plurality of rules based on attributes of the aggregated input data using a rule generation module. Further, a set of data is retrieved from the input data corresponding to the selected one or more rules. Finally, the selected one or more rules are executed on the set of data, in parallel, using a rule execution module. Each node is an individual execution unit, independent of the other execution nodes.


Patent Information

Application #: 201721003039
Filing Date: 27 January 2017
Publication Number: 31/2018
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status: Granted
Email: ip@legasis.in
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2023-10-26
Renewal Date:

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point, Mumbai-400021, Maharashtra, India

Inventors

1. JAYAVEER, Deepa Devi
Tata Consultancy Services Limited, No. 79, IT Highway, Near Scope International, Karapakkam, Chennai - 600096, Tamil Nadu, India
2. VELLAIYATHA, Sirajudeen
Tata Consultancy Services Limited, No. 79, IT Highway, Near Scope International, Karapakkam, Chennai - 600096, Tamil Nadu, India
3. PAKKIRIKANDAR MARAN, Jaidish Hari Raj
Tata Consultancy Services Limited, No. 79, IT Highway, Near Scope International, Karapakkam, Chennai - 600096, Tamil Nadu, India

Specification

Claims:

1. A method for parallel processing of data in a memory, the method comprising the processor-implemented steps of:

retrieving an input data and information pertaining to a plurality of rules from a storage;
loading the input data in the memory, wherein the memory comprises a root node, a plurality of execution nodes, a plurality of parent nodes and a plurality of leaf nodes;
filtering, by the processor, the input data;
transforming, by the processor, the filtered input data;
aggregating, by the processor, the transformed input data;
dynamically selecting one or more rules out of the plurality of rules based on attributes of aggregated input data using a rule generation module;
retrieving a set of data from the input data corresponding to the selected one or more rules;
distributing the plurality of rules to the plurality of execution nodes; and
executing the selected one or more rules on the set of data, in parallel, using the rule execution module, wherein the execution results in generation of an output.

2. The method of claim 1 further comprising consolidating the output from various executed rules to generate consolidated output on an output module.

3. The method of claim 1, wherein the storage is a distributed storage.

4. The method of claim 1, wherein the filtering is performed based on a predefined set of rules.

5. The method of claim 1, wherein the input data is stored on a big data platform.

6. A system for parallel processing of data in a memory, the system comprising:

a storage for storing the input data and information pertaining to a plurality of rules;
an input / output interface for retrieving the input data and information pertaining to the plurality of rules from the storage;
a memory, the memory further comprising a root node, a plurality of execution nodes, a plurality of parent nodes and a plurality of leaf nodes;
a processor in communication with the memory, the processor further comprising:
a filtration module for filtering the input data;
a transformation module for transforming the filtered input data;
an aggregation module for aggregating the transformed input data;
a rule generation module for dynamically selecting one or more rules out of the plurality of rules based on attributes of aggregated input data;
a retrieval module for retrieving a set of data from the input data corresponding to the selected one or more rules;
a distribution module for distributing the plurality of rules to the plurality of execution nodes; and
a rule execution module for executing the selected one or more rules on the set of data, in parallel, wherein the execution results in generation of an output.

7. The system of claim 6, wherein the memory is an internal memory of a device.

8. The system of claim 6, further comprising an output module for presenting the output.
Description:

FORM 2

THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003

COMPLETE SPECIFICATION
(See Section 10 and Rule 13)

Title of invention:
METHOD AND SYSTEM FOR PROCESSING DATA IN AN INBUILT MEMORY

Applicant:
Tata Consultancy Services Limited
A company incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th Floor,
Nariman Point, Mumbai 400021,
Maharashtra, India

The following specification particularly describes the invention and the manner in which it is to be performed.

FIELD OF THE INVENTION
[0001] The present application generally relates to the field of in-memory big data processing. More particularly, but not specifically, the invention provides a system and method for parallel processing of data within the memory on a big data platform.

BACKGROUND OF THE INVENTION
[0002] In recent times, large amounts of data are being produced in the form of big data from various sources. The analysis of big data may result in the generation of important information. Improvements in the field of big data have prompted much research to develop systems that support ultra-low latency service and real-time data analytics. In the last decade, multi-core processors and the availability of large amounts of main memory at plummeting cost have created new breakthroughs, making it viable to build in-memory systems where a significant part, if not the entirety, of the database fits in memory. Growing main memory capacity has fueled the development of in-memory big data management and processing. However, in-memory systems are much more sensitive to other sources of overhead that do not matter in traditional I/O-bounded disk-based systems. Some issues such as fault-tolerance and consistency are also more challenging to handle in an in-memory environment.

[0003] In the present scenario, there is a need for parallel data processing in the memory. Such in-memory data processing is normally governed by a rule engine. Current environments use dynamic rule engines which include multiple nodes. These dynamic rule engines are very slow and may run for several hours. The input data for each node is fetched from a secondary storage and the results are stored back to the secondary storage. This I/O operation (data transfer between primary memory and secondary memory) is resource intensive and slow. In addition, in the existing systems the rules are executed on high-configuration servers, which are costly and not scalable.

[0004] Moreover, current systems do not handle dynamic rules efficiently, nor do they perform in-memory rule processing that works for big data while providing high performance and horizontal scalability. In existing rule engine models, a rule is usually pre-defined and is executed when a business case demands its execution. When the user has to apply another rule over another set of data, he then selects the appropriate rule and applies it over the data. In existing systems, when a rule has to be applied, an execution unit is created during runtime and the data is passed into the created unit. When this output is passed as input to another rule execution unit, another execution unit has to be created to handle the filtered data.

SUMMARY OF THE INVENTION
[0005] The following presents a simplified summary of some embodiments of the disclosure in order to provide a basic understanding of the embodiments. This summary is not an extensive overview of the embodiments. It is not intended to identify key/critical elements of the embodiments or to delineate the scope of the embodiments. Its sole purpose is to present some embodiments in a simplified form as a prelude to the more detailed description that is presented below.

[0006] In view of the foregoing, an embodiment herein provides a system for parallel processing of data in a memory. The system comprises a storage, an input / output interface, a memory and a processor. The storage stores the input data and information pertaining to a plurality of rules. The input / output interface retrieves the input data and information pertaining to the plurality of rules from the storage. The memory comprises a root node, a plurality of execution nodes, a plurality of parent nodes and a plurality of leaf nodes. The processor is in communication with the memory. The processor further comprises a filtration module, a transformation module, an aggregation module, a rule generation module, a retrieval module, a distribution module and a rule execution module. The filtration module filters the input data. The transformation module transforms the filtered input data. The aggregation module aggregates the transformed input data. The rule generation module dynamically selects one or more rules out of the plurality of rules based on attributes of the aggregated input data. The retrieval module retrieves a set of data from the input data corresponding to the selected one or more rules. The distribution module distributes the plurality of rules to the plurality of execution nodes. The rule execution module executes the selected one or more rules on the set of data, in parallel, wherein the execution results in generation of an output.

[0007] Another embodiment provides a method for parallel processing of data in a memory. Initially, the input data and information pertaining to a plurality of rules are retrieved from the storage. At the next step, the input data is loaded in the memory. The memory comprises a root node, a plurality of execution nodes, a plurality of parent nodes and a plurality of leaf nodes. The input data is then filtered. The filtered input data is then transformed. The transformed input data is then aggregated by the processor. At the next step, one or more rules out of the plurality of rules are dynamically selected based on attributes of the aggregated input data using a rule generation module. In the next step, a set of data is retrieved from the input data corresponding to the selected one or more rules. The plurality of rules are then distributed to the plurality of execution nodes. And finally, the selected one or more rules are executed on the set of data, in parallel, using the rule execution module. The execution results in generation of an output.

BRIEF DESCRIPTION OF THE DRAWINGS
[0008] The embodiments herein will be better understood from the following detailed description with reference to the drawings, in which:

[0009] Fig. 1 illustrates a block diagram of a system for parallel processing of data in a memory, in accordance with an embodiment of the disclosure;

[0010] Fig. 2 is a flowchart illustrating the steps involved for parallel processing of data in a memory, in accordance with an embodiment of the disclosure; and

[0011] Fig. 3 shows an example of rule structure, in accordance with an embodiment of the present disclosure.

DETAILED DESCRIPTION OF THE INVENTION
[0012] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.

[0013] The words "comprising," "having," "containing," and "including," and other forms thereof, are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items.

[0014] It must also be noted that as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural references unless the context clearly dictates otherwise. Although any systems and methods similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present disclosure, the preferred systems and methods are now described.

[0015] Some embodiments of this disclosure, illustrating all its features, will now be discussed in detail. The disclosed embodiments are merely exemplary of the disclosure, which may be embodied in various forms.

[0016] Before setting forth the detailed explanation, it is noted that all of the discussion below, regardless of the particular implementation being described, is exemplary in nature, rather than limiting.

[0017] Referring now to the drawings, and more particularly to FIG. 1, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.

[0018] In the context of the present disclosure, the expression “big data” refers to a term for data sets that are so large or complex that traditional data processing applications are inadequate to deal with them.

[0019] According to an embodiment of the disclosure, a system 100 for parallel processing of data in a memory is shown in Fig. 1. The system 100 is configured to operate on a big data platform / framework such as Hadoop®, though the use of any other platform is well within the scope of this disclosure. The invention provides parallel processing of data in multiple nodes, wherein the data is loaded into the memory as a resilient distributed dataset, the rules are decomposed into individual nodes, and the data is filtered, transformed and aggregated and sent to the execution nodes.

[0020] According to an embodiment of the disclosure, a block diagram of the system 100 is shown in Fig. 1. The system 100 includes a storage 102, an input/output interface 104, a memory 106 and a processor 108 in communication with the memory 106. The memory 106 is configured to store a plurality of algorithms. The processor 108 further includes a plurality of modules for performing various functions. The plurality of modules include a filtration module 110, a transformation module 112, an aggregation module 114, a rule generation module 116, a retrieval module 118, a distribution module 120 and a rule execution module 122. The plurality of modules access the plurality of algorithms stored in the memory 106 to perform various functions.

The input / output interface (I/O interface) 104 is configured to retrieve an input data. The I/O interface 104 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like, and can facilitate multiple communications within a wide variety of network and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface device(s) 104 can include one or more ports for connecting a number of devices to one another or to another server.

[0022] According to an embodiment of the disclosure, the storage 102 is configured to store the input data and information pertaining to a plurality of rules. In an embodiment, the storage 102 acts as a rule repository 102. The rule repository 102 is the database of all business rules which have been designed for fulfilling the business requirements. Depending on the business requirements, the business rules can be stored in the storage 102 using the I/O interface 104. For example, in a retail industry application the storage 102 may contain rules like “Lowest price among its competitors”, “Match a competitor’s price”, or “Fix the price with a 10% margin”, etc.
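
By way of illustration only, a minimal Python sketch of such a rule repository is given below. The "Rule" structure, the rule names and the numeric margins are assumptions made for this example and are not part of the specification.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    """One business rule: a name, the context in which it applies, and its logic."""
    name: str
    context: Dict[str, str]           # e.g. {"industry": "retail"}
    apply: Callable[[float], float]   # input price/cost -> output price

# Illustrative retail pricing rules mirroring the examples in the text.
REPOSITORY: List[Rule] = [
    Rule("lowest_among_competitors", {"industry": "retail"},
         lambda competitor_min: competitor_min - 0.01),
    Rule("match_competitor_price", {"industry": "retail"},
         lambda competitor_price: competitor_price),
    Rule("fixed_10_percent_margin", {"industry": "retail"},
         lambda cost: cost * 1.10),
]
```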

[0023] According to an embodiment of the disclosure, the storage 102 is in communication with the memory 106. The memory 106 is a distributed memory. The input data is loaded into the memory as a Resilient Distributed Dataset (RDD). The data can come from different data sources such as HBase, a relational database or a flat file. The memory 106 further comprises a root node, a plurality of execution nodes, a plurality of parent nodes and a plurality of leaf nodes, as shown in the example of Fig. 3. Each of the nodes has a different function. The root node is a node which has no parent node. An execution node is a node which is used for execution; each of the plurality of execution nodes has filtering and reduce logic. A parent node is a node which has child nodes coming out of it. The leaf nodes are nodes which do not have child nodes. In Fig. 3, N1 is the root node, N1 – N12 are the plurality of execution nodes, N1 – N5 are the plurality of parent nodes and N6 – N12 are the plurality of leaf nodes.
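
As a rough sketch of this loading step, assuming Apache Spark's Python API (whose resilient distributed datasets match the RDD terminology used here): the file path and the comma-separated record layout are placeholders, and HBase or relational sources would use their own connectors.

```python
from pyspark import SparkConf, SparkContext

# Minimal sketch: load input data into distributed memory as an RDD.
# A flat file is shown; the path and schema are illustrative only.
conf = SparkConf().setAppName("in-memory-rule-engine").setMaster("local[*]")
sc = SparkContext(conf=conf)

lines = sc.textFile("hdfs:///data/retail_prices.csv")
records = lines.map(lambda line: line.split(","))  # [retailer, zone, price]
records.persist()                                  # keep the dataset in memory
```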

[0024] According to an embodiment of the disclosure, the system 100 includes the filtration module 110, the transformation module 112 and the aggregation module 114 for the processing of the input data. The filtration module 110 is configured to filter the input data. The input data is filtered based on a predefined set of rules. Initially, all the required data is loaded in the memory 106. Each node has the intelligence to know what data it wants, and only that data is filtered by the filtration module 110; the filtering intelligence is decided only at run time. The data is filtered in-memory. The transformation module 112 is configured to transform the filtered input data, and the aggregation module 114 is configured to aggregate the transformed input data. The aggregated input data is then provided for further in-memory processing.
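
Continuing the PySpark sketch above, the three in-memory stages might look as follows; the record layout and the per-zone minimum are assumptions chosen to match the retail example used later in this description.

```python
# Filtration: keep only the records this node has declared an interest in.
filtered = records.filter(lambda r: r[0] == "Retailer 5")

# Transformation: reshape each record into a (zone, price) pair.
transformed = filtered.map(lambda r: (r[1], float(r[2])))

# Aggregation: minimum price per zone, computed in memory.
aggregated = transformed.reduceByKey(min)
print(aggregated.collect())
```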

[0025] According to an embodiment of the disclosure, the system 100 comprises the rule generation module 116. The rule generation module 116 is configured to dynamically select one or more rules out of the plurality of rules based on attributes of the aggregated input data. In the rule generation module 116, the rule selection is dynamic, i.e., the user predefines all possible rules and their context of execution. So instead of the user determining the rule to be executed by giving manual input, the context determines the rules to be applied; the contextual data drives both rule selection and application. This can be explained with the help of the following example: to apply a location-specific business rule, the proposed rule engine dynamically loads the appropriate predefined rules for that location and applies them over the master data, instead of the user selecting the location data and applying the appropriate rule for that location data.
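
A minimal sketch of this context-driven selection, reusing the hypothetical rule repository sketched earlier: rules are matched against attributes derived from the aggregated data rather than being chosen manually by the user.

```python
def select_rules(repository, context):
    """Pick every predefined rule whose declared context is satisfied by
    the attributes of the aggregated input data."""
    return [rule for rule in repository
            if all(context.get(key) == value
                   for key, value in rule.context.items())]

# The attributes below stand in for attributes of the aggregated data.
active = select_rules(REPOSITORY, {"industry": "retail", "location": "chennai"})
```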

[0026] According to an embodiment of the disclosure, the retrieval module 118 is configured to retrieve a set of data from the input data corresponding to the selected one or more rules. Conventionally, when a rule is executed, an execution node is created and the set of input data is passed as input to this execution unit. In the present disclosure, the execution nodes are created when the rule generation module 116 is initiated, and no unit has to be created during runtime. So when a series of rules has to be applied, appropriate units are chosen instead of creating an execution node every time.
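
A small sketch of this reuse, again under the assumed repository from earlier: execution units are built once at initialisation and merely looked up at runtime, so applying a series of rules never constructs a new unit.

```python
class ExecutionUnit:
    """A pre-created unit that applies one rule to a batch of values."""
    def __init__(self, rule):
        self.rule = rule
    def run(self, values):
        return [self.rule.apply(v) for v in values]

# Built once when the engine initialises, keyed by rule name.
UNITS = {rule.name: ExecutionUnit(rule) for rule in REPOSITORY}

def apply_series(rule_names, values):
    # Choose existing units in order; each output feeds the next unit.
    for name in rule_names:
        values = UNITS[name].run(values)
    return values
```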

[0027] According to an embodiment of the disclosure, the system 100 is further configured to distribute the plurality of rules to the plurality of execution nodes using the distribution module 120. The rule execution module 122 is configured to execute the selected one or more rules on the set of data, in parallel, wherein the execution results in generation of an output. Since each execution is independent, the executions can be performed in parallel.
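
Because each execution is independent, they can be dispatched concurrently. The thread-based sketch below illustrates the idea; the threads stand in for distributed execution nodes, and "active" refers to the hypothetical rule list selected in the earlier sketch.

```python
from concurrent.futures import ThreadPoolExecutor

def execute(rule, values):
    # Each selected rule runs on its own slice of data.
    return rule.name, [rule.apply(v) for v in values]

# Placeholder data slices, one per selected rule.
data_per_rule = {rule.name: [100.0, 250.0, 75.5] for rule in active}

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(execute, rule, data_per_rule[rule.name])
               for rule in active]
    outputs = dict(f.result() for f in futures)  # consolidated output
```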

[0028] According to an embodiment of the disclosure, the information processing of the system 100 is improved because of the in-memory data processing, which helps in reducing the execution time. It should be appreciated that the rule generation module can operate on big data and can execute a plurality of rules simultaneously.

[0029] According to an embodiment of the disclosure, the system 100 also includes an output module 124. The output from the various executed rules can be consolidated by the processor 108 to generate the consolidated output. The consolidated output is then displayed on the output module 124.

[0030] In operation, a flowchart 200 for parallel processing of data in the memory is shown in Fig. 2 according to an embodiment of the disclosure. Initially at step 202, the input data and information pertaining to a plurality of rules are retrieved from the storage 102. At the next step 204, the input data is loaded in the memory 106. The input data is normally a large amount of data. The memory 106 comprises a root node, a plurality of execution nodes, a plurality of parent nodes and a plurality of leaf nodes. At step 206, the input data is filtered by the filtration module 110. Each node has the intelligence to know what data it wants, and only that data is filtered by the filtration module 110; the filtering intelligence is decided only at run time. At the next step 208, the filtered input data is transformed by the transformation module 112. At step 210, the transformed input data is aggregated by the aggregation module 114. The aggregated input data is then provided for further processing.

[0031] At step 212, one or more rules out of the plurality of rules are dynamically selected based on attributes of the aggregated input data using the rule generation module 116. The rule selection is dynamic, i.e., the user predefines all possible rules and their context of execution. At the next step 214, the set of data is retrieved from the aggregated input data corresponding to the selected one or more rules. At step 216, the plurality of rules are distributed to the plurality of execution nodes. And finally at step 218, the selected one or more rules are executed on the set of data, in parallel, using the rule execution module 122. Since each execution is independent, the executions can be performed in parallel. The execution results in generation of an output.

[0032] According to an embodiment of the disclosure, the method for processing of data in the memory can also be explained with the help of the example shown in Fig. 3. Fig. 3 shows a sample rule structure which is being used here. The nodes N1 – N12 are execution nodes, as each of the nodes can execute the rules in parallel. Nodes N1 – N5 are parent nodes, as each of them gives out further nodes. Node N1 is the root node, as it has no parent node. And nodes N6 – N12 are the leaf nodes, as they do not have child nodes.

[0033] The process flow for this example can be explained as follows: Initially, all the data which is relevant for the calculation is loaded in the memory. This process is repeated until it is ascertained that the current node is not a root node. Further, the data is then passed to all the leaf nodes. In the next step, the data from the leaf nodes and the data from the child nodes is filtered. For example, the node N12 needs only “Retailer 5” data, and the node N10 needs “Retailer 3” data only for a particular zone. If a node expects data from child nodes, all the child nodes have to complete processing before the parent starts processing the data. In the next step, the data is aggregated, for example into a ‘National minimum’. And finally, the result is provided as an output for the post-processing. In this example, all the execution nodes are executed in parallel. For example, all leaf nodes can be processed in parallel. N9 and N8 can be processed in parallel, but N4 can start only after N9 and N8 are complete, since N4 is dependent on the output of N9 and N8.
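
A small sketch of that ordering constraint, using an assumed fragment of the Fig. 3 tree: leaf nodes run in parallel, and a parent such as N4 starts only once N8 and N9 have both finished.

```python
from concurrent.futures import ThreadPoolExecutor

# Fragment of the Fig. 3 tree; the full tree shape is not reproduced here.
children = {"N4": ["N8", "N9"], "N8": [], "N9": [], "N12": []}

def run(node, pool):
    # Submit all children to the pool, then block until each completes:
    # a parent never starts aggregating before its children finish.
    futures = [pool.submit(run, child, pool) for child in children[node]]
    results = [f.result() for f in futures]
    return f"{node}({','.join(results)})" if results else node

with ThreadPoolExecutor() as pool:
    print(run("N4", pool))   # -> N4(N8,N9)
```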

[0034] It is, however, to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.

[0035] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.

[0036] The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD.

[0037] A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.

[0038] Input/output (I/O) devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.

[0039] A representative hardware environment for practicing the embodiments may include a hardware configuration of an information handling/computer system in accordance with the embodiments herein. The system herein comprises at least one processor or central processing unit (CPU). The CPUs are interconnected via system bus to various devices such as a random access memory (RAM), read-only memory (ROM), and an input/output (I/O) adapter. The I/O adapter can connect to peripheral devices, such as disk units and tape drives, or other program storage devices that are readable by the system. The system can read the inventive instructions on the program storage devices and follow these instructions to execute the methodology of the embodiments herein.

[0040] The system further includes a user interface adapter that connects a keyboard, mouse, speaker, microphone, and/or other user interface devices such as a touch screen device (not shown) to the bus to gather user input. Additionally, a communication adapter connects the bus to a data processing network, and a display adapter connects the bus to a display device which may be embodied as an output device such as a monitor, printer, or transmitter, for example. The preceding description has been presented with reference to various embodiments. Persons having ordinary skill in the art and technology to which this application pertains will appreciate that alterations and changes in the described structures and methods of operation can be practiced without meaningfully departing from the principle, spirit and scope.

Documents

Application Documents

# Name Date
1 Form 3 [27-01-2017(online)].pdf 2017-01-27
2 Form 20 [27-01-2017(online)].jpg 2017-01-27
3 Form 18 [27-01-2017(online)].pdf 2017-01-27
4 Form 18 [27-01-2017(online)].pdf_329.pdf 2017-01-27
5 Description(Complete) [27-01-2017(online)].pdf 2017-01-27
6 Description(Complete) [27-01-2017(online)].pdf_328.pdf 2017-01-27
7 Drawing [27-01-2017(online)].pdf 2017-01-27
8 Form 26 [20-03-2017(online)].pdf 2017-03-20
9 Other Patent Document [20-03-2017(online)].pdf 2017-03-20
10 201721003039-ORIGINAL UNDER RULE 6 (1A)-27-03-2017.pdf 2017-03-27
11 ABSTRACT1.jpg 2018-08-11
12 201721003039-FER.pdf 2020-06-16
13 201721003039-CLAIMS [16-12-2020(online)].pdf 2020-12-16
14 201721003039-COMPLETE SPECIFICATION [16-12-2020(online)].pdf 2020-12-16
15 201721003039-FER_SER_REPLY [16-12-2020(online)].pdf 2020-12-16
16 201721003039-OTHERS [16-12-2020(online)].pdf 2020-12-16
17 201721003039-IntimationOfGrant26-10-2023.pdf 2023-10-26
18 201721003039-PatentCertificate26-10-2023.pdf 2023-10-26

Search Strategy

1 2020-06-1515-58-36E_15-06-2020.pdf

ERegister / Renewals

3rd: 24 Jan 2024 (from 27/01/2019 to 27/01/2020)
4th: 24 Jan 2024 (from 27/01/2020 to 27/01/2021)
5th: 24 Jan 2024 (from 27/01/2021 to 27/01/2022)
6th: 24 Jan 2024 (from 27/01/2022 to 27/01/2023)
7th: 24 Jan 2024 (from 27/01/2023 to 27/01/2024)
8th: 24 Jan 2024 (from 27/01/2024 to 27/01/2025)
9th: 27 Jan 2025 (from 27/01/2025 to 27/01/2026)