
Method And System For Efficient Execution Of Ordered And Unordered Tasks In Multi Threaded And Networked Computing

Abstract: The present disclosure provides methods for concurrently executing ordered and unordered tasks using a plurality of processing units. Certain embodiments of the present disclosure may store the ordered and unordered tasks in the same processing queue. Further, processing tasks in the processing queue may comprise concurrently preprocessing ordered tasks, thereby reducing the amount of processing unit idle time and improving load balancing across processing units. Embodiments of the present disclosure may also dynamically manage the number of processing units based on a rate of unordered tasks being received in the processing queue, a processing rate of unordered tasks, a rate of ordered tasks being received in the processing queue, a processing rate of ordered tasks, and/or the number of sets of related ordered tasks in the processing queue. Also provided are related systems and non-transitory computer-readable media. FIG. 1
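The scheduling behavior the abstract describes — one shared queue for both task kinds, workers that execute any runnable task, preprocessing of ordered tasks whose group is already in flight, and no idling — can be sketched in Python. This is a minimal illustration under stated assumptions: the `Task`/`Scheduler` names, the `group` tag for related ordered tasks, and the requeue-after-preprocessing policy are mine, not the filing's.

```python
import queue
import threading
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    """Illustrative task record; field names are assumptions."""
    seq: int                      # arrival position within its group
    group: Optional[str] = None   # related ordered tasks share a group; None = unordered
    preprocessed: bool = False    # set once optional preprocessing has run

class Scheduler:
    """Workers drain one shared queue holding ordered and unordered tasks."""

    def __init__(self, num_workers: int = 4) -> None:
        self.q: queue.Queue = queue.Queue()
        self.lock = threading.Lock()
        self.busy: set = set()    # groups whose main processing is in flight
        self.next_seq: dict = {}  # group -> next sequence number allowed to run
        self.done: list = []      # (group, seq) tuples in completion order
        self.threads = [threading.Thread(target=self._worker)
                        for _ in range(num_workers)]
        for t in self.threads:
            t.start()

    def submit(self, task: Task) -> None:
        self.q.put(task)

    def _worker(self) -> None:
        while True:
            task = self.q.get()
            if task is None:      # shutdown sentinel
                return
            with self.lock:
                # An unordered task always runs; an ordered task runs only if
                # it is next in its group and no worker is executing that group.
                runnable = task.group is None or (
                    task.group not in self.busy
                    and self.next_seq.get(task.group, 0) == task.seq)
                if runnable and task.group is not None:
                    self.busy.add(task.group)
            if runnable:
                # Main processing, modeled here as recording completion.
                with self.lock:
                    self.done.append((task.group, task.seq))
                    if task.group is not None:
                        self.next_seq[task.group] = task.seq + 1
                        self.busy.discard(task.group)
                self.q.task_done()
            else:
                # A related ordered task is in flight elsewhere: preprocess
                # now rather than idle, then requeue for main processing.
                task.preprocessed = True
                self.q.put(task)
                self.q.task_done()

    def shutdown(self) -> None:
        self.q.join()             # wait for every submitted task to finish
        for _ in self.threads:
            self.q.put(None)
        for t in self.threads:
            t.join()

# Usage: five related ordered tasks plus five unordered ones.
sched = Scheduler(num_workers=3)
for i in range(5):
    sched.submit(Task(seq=i, group="A"))
for i in range(5):
    sched.submit(Task(seq=i))
sched.shutdown()
a_order = [seq for grp, seq in sched.done if grp == "A"]
```

The per-group `next_seq` counter is what preserves ordering even though a blocked ordered task is requeued to the tail; without it, a later task in the same group could overtake an earlier one after both had been requeued.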


Patent Information

Application #
Filing Date
09 October 2013
Publication Number
43/2013
Publication Type
INA
Invention Field
COMMUNICATION
Status
Email
ipo@knspartners.com
Parent Application
Patent Number
Legal Status
Grant Date
2023-06-07
Renewal Date

Applicants

WIPRO LIMITED
Doddakannelli, Sarjapur Road, Bangalore 560035, Karnataka, India.

Inventors

1. Maheshwaran Govindarajeswaran
15/1 Periyar Ramaswamy Street, Ethiraj Nagar, West Mambalam, Chennai 600033, Tamil Nadu, India.
2. Arun Jeyaprasad Arjun Jeyarani
76/230, Yadhaval Street, Adambakkam, Chennai 600088, Tamil Nadu, India.

Specification

CLAIMS: We claim:
1. A method for concurrently executing ordered and unordered tasks using a plurality of processing units, the method comprising:
receiving an unordered task or an ordered task into a processing queue;
processing a task in the processing queue using at least one processing unit,
wherein if the task is an unordered task or an ordered task with no other processing unit processing a related ordered task, the processing unit processes the task,
wherein if the task is an ordered task and another processing unit is processing a related ordered task, the processing unit preprocesses the ordered task if the ordered task needs preprocessing; and
dynamically managing the number of processing units.

2. The method according to claim 1, wherein ordered and unordered tasks are stored in the same processing queue.

3. The method according to claim 1, wherein the processing queue comprises a linear data structure.

4. The method according to claim 1, wherein the processing queue comprises a linear data structure, and ordered and unordered tasks are stored in the same processing queue.

5. The method according to claim 1, wherein the processing unit comprises a hardware processor.

6. The method according to claim 1, wherein the processing unit comprises a software processing unit.

7. The method according to claim 6, wherein the software processing unit comprises a thread.

8. The method according to claim 1, wherein dynamically managing the number of processing units comprises:
determining a required number of processing units based on at least one of a rate of unordered tasks being received in the processing queue, a processing rate of unordered tasks, a rate of ordered tasks being received in the processing queue, a processing rate of ordered tasks, and the number of sets of related ordered tasks in the processing queue; and
adjusting the number of processing units based on the required number of processing units.

9. The method according to claim 8, wherein the required number of processing units is equal to the lesser of a maximum number of processing units and a combined estimate, wherein the combined estimate is determined from a first quantity for unordered tasks and a second quantity for ordered tasks,
wherein the first quantity is determined from the rate of unordered tasks being received in the processing queue, the processing rate of unordered tasks, and a threshold number of processing units per unordered task,
wherein the second quantity is determined based on a preprocessing quantity, a throughput quantity, and the number of sets of related ordered tasks in the processing queue,
wherein the preprocessing quantity is determined from a number of tasks to be preprocessed concurrently and a threshold number of processing units per ordered task,
wherein the throughput quantity is determined from the rate of ordered tasks being received in the processing queue and the processing rate of ordered tasks,
and wherein the second quantity is equal to the greater of the preprocessing quantity and the throughput quantity under a first condition, or to the lesser of the two under a second condition.

10. A system for executing ordered and unordered tasks, the system comprising one or more hardware processors and a computer-readable medium storing instructions, wherein the one or more hardware processors are configured by the instructions to:
receive an unordered task or an ordered task into a processing queue;
process a task in the processing queue using at least one processing unit,
wherein if the task is an unordered task or an ordered task with no other processing unit processing a related ordered task, the processing unit processes the task,
wherein if the task is an ordered task and another processing unit is processing a related ordered task, the processing unit preprocesses the ordered task if the ordered task needs preprocessing; and
dynamically manage the number of processing units.

11. The system according to claim 10, wherein ordered and unordered tasks are stored in the same processing queue.

12. The system according to claim 10, wherein the processing queue comprises a linear data structure.

13. The system according to claim 10, wherein the processing queue comprises a linear data structure, and ordered and unordered tasks are stored in the same processing queue.

14. The system according to claim 10, wherein the processing unit comprises a hardware processor.

15. The system according to claim 10, wherein the processing unit comprises a software processing unit.

16. The system according to claim 15, wherein the software processing unit comprises a thread.

17. The system according to claim 10, wherein dynamically managing the number of processing units comprises:
determining a required number of processing units based on at least a rate of unordered tasks being received in the processing queue, a processing rate of unordered tasks, a rate of ordered tasks being received in the processing queue, a processing rate of ordered tasks, and the number of sets of related ordered tasks in the processing queue; and
adjusting the number of processing units based on the required number of processing units.

18. The system according to claim 17, wherein the required number of processing units is equal to the lesser of a maximum number of processing units and a combined estimate, wherein the combined estimate is determined from a first quantity for unordered tasks and a second quantity for ordered tasks,
wherein the first quantity is determined from the rate of unordered tasks being received in the processing queue, the processing rate of unordered tasks, and a threshold number of processing units per unordered task,
wherein the second quantity is determined based on a preprocessing quantity, a throughput quantity, and the number of sets of related ordered tasks in the processing queue,
wherein the preprocessing quantity is determined from a number of tasks to be preprocessed concurrently and a threshold number of processing units per ordered task,
wherein the throughput quantity is determined from the rate of ordered tasks being received in the processing queue and the processing rate of ordered tasks,
and wherein the second quantity is equal to the greater of the preprocessing quantity and the throughput quantity under a first condition, or to the lesser of the two under a second condition.

19. A non-transitory computer-readable medium storing instructions for executing ordered and unordered tasks, wherein upon execution of the instructions by one or more hardware processors, the one or more hardware processors are configured by the instructions to:
receive an unordered task or an ordered task into a processing queue;
process a task in the processing queue using at least one processing unit,
wherein if the task is an unordered task or an ordered task with no other processing unit processing a related ordered task, the processing unit processes the task,
wherein if the task is an ordered task and another processing unit is processing a related ordered task, the processing unit preprocesses the ordered task if the ordered task needs preprocessing; and
dynamically manage the number of processing units.

20. The non-transitory computer-readable medium according to claim 19, wherein the processing queue comprises a linear data structure.

21. The non-transitory computer-readable medium according to claim 19, wherein the processing queue comprises a linear data structure, and ordered and unordered tasks are stored in the same processing queue.

22. The non-transitory computer-readable medium according to claim 19, wherein dynamically managing the number of processing units comprises:
determining a required number of processing units based on at least a rate of unordered tasks being received in the processing queue, a processing rate of unordered tasks, a rate of ordered tasks being received in the processing queue, a processing rate of ordered tasks, and the number of sets of related ordered tasks in the processing queue; and
adjusting the number of processing units based on the required number of processing units.

23. The non-transitory computer-readable medium according to claim 22, wherein the required number of processing units is equal to the lesser of a maximum number of processing units and a combined estimate, wherein the combined estimate is determined from a first quantity for unordered tasks and a second quantity for ordered tasks,
wherein the first quantity is determined from the rate of unordered tasks being received in the processing queue, the processing rate of unordered tasks, and a threshold number of processing units per unordered task,
wherein the second quantity is determined based on a preprocessing quantity, a throughput quantity, and the number of sets of related ordered tasks in the processing queue,
wherein the preprocessing quantity is determined from a number of tasks to be preprocessed concurrently and a threshold number of processing units per ordered task,
wherein the throughput quantity is determined from the rate of ordered tasks being received in the processing queue and the processing rate of ordered tasks,
and wherein the second quantity is equal to the greater of the preprocessing quantity and the throughput quantity under a first condition, or to the lesser of the two under a second condition.

24. A method for dynamically managing processing units concurrently processing ordered tasks and unordered tasks stored in a processing queue, the method comprising:
determining a required number of processing units based on at least a rate of unordered tasks being received in the processing queue, a processing rate of unordered tasks, a rate of ordered tasks being received in the processing queue, a processing rate of ordered tasks, and the number of sets of related ordered tasks in the processing queue; and
adjusting the number of processing units based on the required number of processing units.

25. The method according to claim 24, wherein the required number of processing units is equal to the lesser of a maximum number of processing units and a combined estimate, wherein the combined estimate is determined from a first quantity for unordered tasks and a second quantity for ordered tasks,
wherein the first quantity is determined from the rate of unordered tasks being received in the processing queue, the processing rate of unordered tasks, and a threshold number of processing units per unordered task,
wherein the second quantity is determined based on a preprocessing quantity, a throughput quantity, and the number of sets of related ordered tasks in the processing queue,
wherein the preprocessing quantity is determined from a number of tasks to be preprocessed concurrently and a threshold number of processing units per ordered task,
wherein the throughput quantity is determined from the rate of ordered tasks being received in the processing queue and the processing rate of ordered tasks,
and wherein the second quantity is equal to the greater of the preprocessing quantity and the throughput quantity under a first condition, or to the lesser of the two under a second condition.
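Claims 8 and 24 describe a two-step control loop: estimate a required number of processing units from arrival and service rates, then adjust the pool toward that number. The sketch below follows that shape; the input names mirror the claimed quantities, but the combining formulas, ceilings, and thresholds are illustrative assumptions, not the claimed expressions (whose exact form the filing defines).

```python
import math

def required_units(rate_u: float, proc_u: float, th_u: int,
                   n_pre: int, th_o: int,
                   rate_o: float, proc_o: float,
                   n_sets: int, t_max: int) -> int:
    """Estimate how many processing units the queue currently needs.

    Inputs mirror claim 8's quantities; the formulas are assumptions.
    """
    # Unordered work: units proportional to the arrival/service ratio,
    # capped by a per-unordered-task threshold.
    t_u = min(math.ceil(rate_u / proc_u), th_u) if proc_u > 0 else th_u
    # Ordered work: enough units to preprocess n_pre tasks concurrently
    # (capped by th_o) and to keep pace with ordered arrivals ...
    t_pre = min(n_pre, th_o)
    t_rate = math.ceil(rate_o / proc_o) if proc_o > 0 else th_o
    # ... but never more than the number of independent sets of related
    # ordered tasks, since only one unit per set can do main processing.
    t_o = min(max(t_pre, t_rate), n_sets)
    # Overall requirement is bounded by the maximum pool size.
    return min(t_max, t_u + t_o)

def adjust_pool(current: int, required: int) -> int:
    """Move the pool size one step toward the required number (claim 8, step 2)."""
    if required > current:
        return current + 1
    if required < current:
        return current - 1
    return current
```

Stepping the pool one unit at a time, rather than jumping straight to the estimate, is one common way to damp oscillation when arrival rates are noisy; the claims only require that the number be adjusted based on the requirement.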

Dated this 9th day of October, 2013

MADHUSUDAN S.T.
OF K & S PARTNERS
ATTORNEY FOR THE APPLICANTS

FIELD OF THE INVENTION
BACKGROUND
Over the past sixty years, the use of computing devices to perform large-scale calculations has become an integral force in almost all technology sectors. Over much of that time, the paradigm of computing has largely adhered to a serial execution model in which calculations are performed sequentially until a desired solution has been achieved. Archetypal problems falling into this paradigm include iterative solutions to non-linear differential equations, such as the determination of solutions for many-body problems in Newtonian and non-Newtonian systems. Serial execution is also the manner in which humans have naturally approached most problems, including, for example, the familiar algorithms for long division or multiplication taught to young elementary-school students. A hallmark of serial execution is the reliance of subsequent calculations on solutions determined by earlier calculations. This reliance implies the need for serial execution because, without the solution obtained in an initial phase, a later phase is unable to begin.

Documents

Application Documents

# Name Date
1 4584-CHE-2013 FORM-3 21-02-2013.pdf 2013-02-21
2 4584-CHE-2013 CORRESPONDENCE OTHERS 21-02-2013.pdf 2013-02-21
3 Form-9(Online).pdf 2013-10-17
4 IP25115-Specification.pdf 2013-10-18
5 IP25115 Drawings.pdf 2013-10-18
6 FORM 5.pdf 2013-10-18
7 abstract4584-CHE-2013.jpg 2013-10-18
8 Form 3.pdf 2014-01-15
9 4584-CHE-2013-FER.pdf 2019-04-04
10 4584-CHE-2013-FER_SER_REPLY [04-10-2019(online)].pdf 2019-10-04
11 4584-CHE-2013-FORM 3 [04-10-2019(online)].pdf 2019-10-04
12 4584-CHE-2013-Information under section 8(2) (MANDATORY) [04-10-2019(online)].pdf 2019-10-04
13 4584-CHE-2013-US(14)-HearingNotice-(HearingDate-27-01-2023).pdf 2023-01-16
14 4584-CHE-2013-AMENDED DOCUMENTS [18-01-2023(online)].pdf 2023-01-18
15 4584-CHE-2013-Correspondence to notify the Controller [18-01-2023(online)].pdf 2023-01-18
16 4584-CHE-2013-FORM 13 [18-01-2023(online)].pdf 2023-01-18
17 4584-CHE-2013-POA [18-01-2023(online)].pdf 2023-01-18
18 4584-CHE-2013-FORM 3 [10-02-2023(online)].pdf 2023-02-10
19 4584-CHE-2013-FORM-26 [10-02-2023(online)].pdf 2023-02-10
20 4584-CHE-2013-PETITION UNDER RULE 137 [10-02-2023(online)].pdf 2023-02-10
21 4584-CHE-2013-PETITION UNDER RULE 137 [10-02-2023(online)]-1.pdf 2023-02-10
22 4584-CHE-2013-Written submissions and relevant documents [10-02-2023(online)].pdf 2023-02-10
23 4584-CHE-2013-PETITION UNDER RULE 137 [02-06-2023(online)].pdf 2023-06-02
24 4584-CHE-2013-IntimationOfGrant07-06-2023.pdf 2023-06-07
25 4584-CHE-2013-PatentCertificate07-06-2023.pdf 2023-06-07

Search Strategy

1 searchstrat_04-04-2019.pdf

ERegister / Renewals

3rd: 02 Sep 2023

From 09/10/2015 - To 09/10/2016

4th: 02 Sep 2023

From 09/10/2016 - To 09/10/2017

5th: 02 Sep 2023

From 09/10/2017 - To 09/10/2018

6th: 02 Sep 2023

From 09/10/2018 - To 09/10/2019

7th: 02 Sep 2023

From 09/10/2019 - To 09/10/2020

8th: 02 Sep 2023

From 09/10/2020 - To 09/10/2021

9th: 02 Sep 2023

From 09/10/2021 - To 09/10/2022

10th: 02 Sep 2023

From 09/10/2022 - To 09/10/2023

11th: 04 Oct 2023

From 09/10/2023 - To 09/10/2024

12th: 08 Oct 2024

From 09/10/2024 - To 09/10/2025

13th: 06 Oct 2025

From 09/10/2025 - To 09/10/2026