
A Method And Apparatus Of Scheduling Resources In System Architecture

Abstract: The present invention relates to a scheduling method and apparatus based on request arrival order. In one embodiment this is accomplished by receiving at least one task by a priority tagging module from at least one input line, wherein each task is tagged with a priority tag; packetizing the received task into a plurality of packets, wherein each packet is provided with a sequence number; and serving the packets from the queues to the output lines, wherein the serving of the packets is based on the sequence number, and wherein the same queue is not served consecutively but waits for a serve timer to expire.


Patent Information

Application #
Filing Date
31 March 2012
Publication Number
40/2013
Publication Type
INA
Invention Field
COMMUNICATION
Status
Email
Parent Application
Patent Number
Legal Status
Grant Date
2024-03-05
Renewal Date

Applicants

Tejas Networks Limited
2nd floor, GNR Tech Park, 46/4, Garbebhavi Palya, Kudlu Gate, Hosur main road, Bangalore 560 068, Karnataka, India

Inventors

1. Venkata Suman Kumar M
Flat No. 102, Sesha’s Bhanu Residency-2, 6th Cross, 4th Main, NS Palaya, BTM 2nd Stage, BANGALORE – 560 076

Specification

FORM 2

THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10, rule 13)


“A method and apparatus of scheduling resources in system architecture”

Tejas Networks Limited
2nd floor, GNR Tech Park, 46/4, Garbebhavi Palya,
Kudlu Gate, Hosur main road,
Bangalore 560 068, Karnataka, India

The following specification particularly describes the invention and the manner in which it is to be performed.

Field of the Invention
The disclosure generally relates to multiple task execution processes. More particularly, the present invention relates to a scheduling method based on request arrival order.
Background of the Invention
As is known in the art, an operating system is a set of software routines used to provide functionality to a computer system. One function of an operating system is to schedule and execute multiple tasks. The multiple tasks provide various functionality, including reading and writing to memory and secondary storage, input and output to a computer display, execution of application programs, input and output to peripheral devices, and others. Many operating systems, such as UNIX of Unix Systems Laboratories (owned by Novell of Provo, Utah) or Windows 95/NT by Microsoft Corporation of Redmond, Wash., use priority schemes to schedule and execute tasks. Such priority schemes include various levels of priority, from lower priority tasks to higher priority tasks.
Operating systems known in the art use queues to schedule tasks of different priorities. When tasks of varying priorities are used, one or more scheduling methods are used to specify an order in which to satisfy priority tasks in the queue. Examples of common scheduling methods include First-Come, First-Served ("FCFS"), Shortest-Job-First ("SJF"), Round Robin ("RR") and preemptive priority scheduling methods. Some of these methods ensure that higher priority tasks are executed before lower priority tasks, but also ensure that lower priority tasks are not "starved out" of execution time.
In a First-Come, First-Served scheduling method, the first task received in a queue is the first one executed. As the name suggests, in a Shortest-Job-First scheduling method the shortest task in the queue, in terms of execution time, is executed first. A Round Robin scheduling method, by contrast, grants each task a set amount of time, or "time-slice", to execute before moving on to the next task, even if the previous one is not completed. After each task in the queue has executed for the time-slice, the round robin scheduling method repeats, or continues to "cycle", until all tasks in the queue are satisfied. Preemptive priority scheduling methods assign a priority to each task as it enters a queue. Tasks in the queue are executed in the order of the assigned priority; however, the method allows specific tasks to execute immediately, thereby preempting higher priority tasks from executing. Each of these scheduling methods presents various problems.
In the First-Come, First-Served scheduling method, an important task at the end, or "tail", of a queue must wait to be executed until the tasks ahead of it have been executed. Furthermore, a time-intensive task at the beginning, or "head", of the queue may prevent the remaining tasks from executing. This is called "starving", which can lead to catastrophic events. For example, if a task in the queue, such as one for memory maintenance, is starved out, the computer system may fail due to lack of resources. To prevent the starving of shorter tasks by longer, more time-intensive tasks, the Shortest-Job-First scheduling method allows the shorter tasks to execute before the longer tasks. However, this method runs the risk of delaying higher priority tasks. Round Robin attempts to prevent the delaying of high priority tasks by allocating only a "time-slice" to each task in the queue. Although each task is treated equally in priority, longer tasks may take many cycles of time-slices before completely executing, which may actually delay the execution of a high priority task. Thus, priority scheduling was developed to prevent lower priority tasks from delaying or "starving out" higher priority tasks.
Round Robin scheduling introduces latency and burstiness to the traffic when used for arbitrating requests across the banks of some external memories. Achieving low latency is not addressed in the prior art, except for the case where a request that has been waiting for a long time can have its priority increased.
Thus, there is a need for a scheduling method which overcomes the limitations described above.

Summary of the Invention
The following presents a simplified summary of one or more embodiments in order to provide a basic understanding of such embodiments. This summary is not an extensive overview of all contemplated embodiments, and is intended to neither identify key or critical elements of all embodiments nor delineate the scope of any or all embodiments. Its sole purpose is to present some concepts of one or more embodiments in a simplified form as a prelude to the more detailed description that is presented later.

In accordance with one aspect of the present invention is a method of scheduling resources in system architecture, the method comprising: receiving at least one task by a priority tagging module from at least one input line, wherein each task is tagged with a priority tag; packetizing the received task into a plurality of packets, wherein each packet is provided with a sequence number; and serving the packets from the queues to the output lines, wherein the serving of the packets is based on the sequence number, and wherein the same queue is not served consecutively but waits for a serve timer to expire.

In another aspect of the present invention is an apparatus for scheduling resources in system architecture, the apparatus comprising: a scheduler or a processor including a memory, wherein the scheduler or processor is configured for receiving at least one task by a priority tagging module from at least one input line, wherein each task is tagged with a priority tag; packetizing the received task into a plurality of packets, wherein each packet is provided with a sequence number; storing the packets along with the sequence number into the plurality of queues; and serving the packets from the queues to the output lines, wherein the serving of the packets is based on the sequence number, and wherein the same queue is not served consecutively but waits for a serve timer to expire.

The foregoing has outlined rather broadly the features and technical advantages of the present invention so that those skilled in the art may better understand the detailed description of the invention that follows. Additional features and advantages of the invention, which form the subject of the claims of the invention, will be described hereinafter. Those skilled in the art should appreciate that they may readily use the conception and the specific embodiment disclosed as a basis for modifying or designing other structures for carrying out the same purposes of the present invention. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the invention in its broadest form.

Before undertaking the detailed description of the invention below, it may be advantageous to set forth definitions of certain words and phrases used throughout this patent document: the terms “include” and “comprise,” as well as derivatives thereof, mean inclusion without limitation; the term “or” is inclusive, meaning and/or; the phrases “associated with” and “associated therewith,” as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, or the like; and the term “controller” means any device, system or part thereof that controls at least one operation; such a device may be implemented in hardware, firmware or software, or some combination of at least two of the same. It should be noted that the functionality associated with any particular controller may be centralized or distributed, whether locally or remotely. Definitions for certain words and phrases are provided throughout this patent document; those of ordinary skill in the art should understand that in many, if not most, instances such definitions apply to prior as well as future uses of such defined words and phrases.

Brief Description of the Drawings

For a more complete understanding of the present invention, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, wherein like numbers designate like objects, and in which:

Figure 1 shows a conventional apparatus which uses a round robin scheduling process in system architecture according to the prior art.
Figure 2 shows an apparatus for scheduling resources in system architecture according to one embodiment of the present invention.
Figure 3 shows a method of scheduling resources in system architecture according to one embodiment of the present invention.
Figure 4 is a diagrammatic block view of a processing system, according to various embodiments.
Persons skilled in the art will appreciate that elements in the figures are illustrated for simplicity and clarity and may not have been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of various exemplary embodiments of the present disclosure.

Throughout the drawings, it should be noted that like reference numbers are used to depict the same or similar elements, features, and structures.

Detailed Description of the Invention

In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular architectures, interfaces, techniques, etc., in order to provide a thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. That is, those skilled in the art will be able to devise various arrangements which, although not explicitly described or shown herein, embody the principles of the invention and are included within its spirit and scope. In some instances, detailed descriptions of well-known devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail. All statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.

Thus, for example, it will be appreciated by those skilled in the art that block diagrams herein can represent conceptual views of illustrative circuitry embodying the principles of the technology. Similarly, it will be appreciated that any flow charts, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.

The functions of the various elements, including functional blocks labeled or described as "computer", "processor" or "controller", may be provided through the use of dedicated hardware as well as hardware capable of executing software in the form of coded instructions stored on a computer readable medium. A computer is generally understood to comprise one or more processors, and the terms computer and processor may be employed interchangeably herein. When provided by a computer or processor, the functions may be provided by a single dedicated computer or processor, by a single shared computer or processor, or by a plurality of individual computers or processors, some of which may be shared or distributed. Such functions are to be understood as being computer-implemented and thus machine-implemented. Moreover, use of the term "processor" or "controller" shall also be construed to refer to other hardware capable of performing such functions and/or executing software, and may include, without limitation, digital signal processor (DSP) hardware, reduced instruction set processors, hardware (e.g., digital or analog) circuitry, and (where appropriate) state machines capable of performing such functions.

Figure 1 shows a conventional apparatus which uses a round robin scheduling process in system architecture according to the prior art. Queuing and scheduling are two critical function blocks that are used in a memory. Usually a data packet enters a memory from one end and departs from another. The processor which is coupled with the memory can receive and forward data packets simultaneously.
During normal operation, multiple packets may be received from several ingress ports and leave the processor on one egress port. These packets must be queued in front of the egress port to wait for an opportunity to be forwarded.
The apparatus may include multiple queues, as shown in the figure, for storing data packets received from the ingress ports, and a scheduler or processor (not shown). Each of the queues is controlled by the scheduler or processor. Based on QoS requirements, the scheduler selects one of the queues to send data packets over the egress port. Different queuing and scheduling algorithms are implemented in managing the memory in order to reduce the processing time and also to meet various QoS requirements. The idea of round-robin scheduling, in essence, is that traffic from different flows is queued separately in their own queues and the scheduler circularly and repeatedly “visits” all the packet queues, sending one packet from each queue during the visit. In terms of distributing bandwidth among different flows, round-robin scheduling is a fairer solution than FIFO scheduling because each flow is guaranteed the opportunity to send a packet in each round-robin scheduling cycle. Using the simple round robin scheduler produces the output shown in figure 1, where request 2 is served only after a long time.
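The round-robin latency described above can be illustrated with a minimal sketch (Python is used purely for illustration of a hardware mechanism; the queue contents are hypothetical):

```python
from collections import deque

def round_robin(queues):
    """Visit the queues circularly and repeatedly, sending one packet
    from each non-empty queue per visit; return the service order."""
    order = []
    while any(queues):
        for q in queues:
            if q:
                order.append(q.popleft())
    return order

# Hypothetical contents: requests named by arrival order. r1 and r2
# arrived first but sit in the last-visited queue, so round robin
# serves the later arrivals r3 and r4 ahead of them.
queues = [deque(["r3"]), deque(["r4"]), deque(["r1", "r2"])]
print(round_robin(queues))  # ['r3', 'r4', 'r1', 'r2']
```

Because the visit order is fixed and ignores arrival order, the earliest request can wait an entire cycle, which is the latency the invention targets.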
Figure 2 shows an apparatus for scheduling resources in system architecture according to one embodiment of the present invention. As shown in the figure, the apparatus, including a priority tagging module, receives one or more tasks from one or more input lines, where each task is tagged with a priority tag (e.g. Queue 1, Queue 2, Queue 3 and Queue 4). The number of priorities is at least equal to the number of queues. Further, the apparatus packetizes the received task into a plurality of packets. Each packet is provided with a sequence number (1, 2, 3, 4, 5 … 13). The apparatus further stores the packets along with the sequence number into the plurality of queues. When the packets have to be served from the queues to the output lines, the apparatus serves the packets based on the sequence number and, further, the apparatus is configured such that the same queue is not served consecutively until the serve timer expires. Memories such as DDR-SDRAM do not permit accessing the same memory bank consecutively. In this particular example, each queue can be seen as a request for a bank of DDR-SDRAM. A certain fixed time has to elapse on the timer before the memory bank can be accessed again for reading operations. Therefore the same queue cannot be served consecutively and one has to wait for the timer to expire. The serving of the packets is based on the sequence number, starting with the lowest sequence number, in order to serve all packets of the request immediately if the wait-to-serve timer permits. The general round robin scheduling takes five scheduling opportunities to complete the task, as shown in figure 1, whereas the present mechanism would take only three scheduling opportunities to complete it. By giving an additional priority to the request order along with the scheduling rules, one can reduce the time of completion of tasks.
In contrast with figure 1, with the request order combined with the scheduling rule (no back-to-back service of the same queue), request 2 is served in the third opportunity rather than the fifth (as shown in figure 2).
Figure 3 shows a method of scheduling resources in system architecture according to one embodiment of the present invention.
At step 310, the method receives one or more tasks by a priority tagging module from at least one input line, where each task is tagged with a priority tag. The number of priorities is at least equal to the number of queues.

At step 320, the method packetizes the received task into a plurality of packets, where each packet is provided with a sequence number. The task includes a message, a frame, a request, etc.
At step 330  the method stores the packets along with the sequence number into the plurality of queues.

At step 340, the method serves the packets from the queues to the output lines, where the serving of the packets is based on the sequence number, and where the same queue is not served consecutively but waits for the serve timer to expire. The packets are served based on the sequence number, starting with the lowest sequence number, in order to serve all packets of the request immediately if the wait-to-serve timer permits. The packets are served from the queues according to a Round Robin scheduling scheme.
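Steps 310-340 can be sketched as follows. This is a hedged software model with assumed names, not the claimed hardware: each packet is a (sequence number, payload) pair already stored in its queue per step 330, and the DDR-style wait-to-serve timer is simplified to a ban on serving the same queue on two consecutive opportunities.

```python
from collections import deque

def serve(queues):
    """Serve packets in global sequence-number order (step 340),
    never serving the same queue on two consecutive opportunities.
    If only the just-served queue holds packets, an idle slot (None)
    stands in for waiting on the serve timer."""
    order = []
    last_served = None
    while any(queues):
        # Non-empty queues, ranked by the sequence number at their head.
        candidates = sorted(
            (i for i, q in enumerate(queues) if q),
            key=lambda i: queues[i][0][0],
        )
        # Take the lowest sequence number whose queue is not blocked.
        pick = next((i for i in candidates if i != last_served), None)
        if pick is None:
            order.append(None)  # idle while the timer expires
            last_served = None
            continue
        order.append(queues[pick].popleft())
        last_served = pick
    return order

# Sequence number 2 sits in the just-served queue, so packet 3 from
# another queue is taken first, then the scheduler returns for 2.
queues = [deque([(1, "a"), (2, "b")]), deque([(3, "c")])]
print(serve(queues))  # [(1, 'a'), (3, 'c'), (2, 'b')]
```

The example shows both rules at once: arrival order (sequence numbers) drives the choice, while the no-back-to-back constraint diverts service to the next eligible queue instead of stalling.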

By using the above method, the memory requirement is reduced, as the tasks/packets need not be stored for a long time, which further reduces the cost. Further, the method reduces the jitter in packets and also the delay.

Although the flowchart 300 includes steps 310-340 that are arranged serially in the exemplary embodiments, other embodiments of the subject matter may execute two or more steps in parallel, using multiple processors or a single processor organized as two or more virtual machines or sub-processors. Moreover, still other embodiments may implement the steps as two or more specific interconnected hardware modules with related control and data signals communicated between and through the modules, or as portions of an application-specific integrated circuit. Thus, the exemplary process flow diagrams are applicable to software, firmware, and/or hardware implementations.

FIG. 4 is a diagrammatic block view of a processing system 400 according to various embodiments. The processing system 400 may include a central processing unit (CPU) 410, which may include any digital device capable of receiving data and programmed instructions, and processing the data according to the programmed instructions. Accordingly, the CPU 410 may include a microprocessor, such as a general purpose single-chip or multi-chip microprocessor. In particular, the multi-chip microprocessor may be structured as a three-dimensional multi-chip package, such as a system in package (SiP) or a chip-stack multi-chip module (MCM), and may include one or more of the redundant signal transmission systems, according to one or more of the embodiments, such as, for example, the redundant transmission system of FIG. 2. The CPU 410 is generally configured to communicate with a memory unit 420 over a suitable communications bus 430. The memory unit 420 may also be structured as a three-dimensional multi-chip package, and may include one or more of the redundant signal transmission systems, according to one or more of the embodiments. The processing system 400 may also include various other devices that are operably coupled to the bus 430, which are configured to cooperatively interact with the CPU 410 and the memory unit 420. For example, the processing system 400 may include one or more input/output (I/O) devices 440, such as a printer, a display device, a keyboard, a mouse, or other known input/output devices. The processing system 400 may also include a mass storage device 450, which may include a hard disk drive, a floppy disk drive, an optical disk device (CD-ROM), or other similar devices. It is understood that FIG. 4 provides a simplified representation of the processing system 400. Accordingly, it is understood that other devices not shown in FIG. 4, but known in the art (such as, for example, a memory controller and other similar devices), may nevertheless be present in the processing system 400. As the various figures have shown, there may be multiple local paths and global paths in a memory system.

The processing system 400 may also form a part of other larger systems, such as a wireless device, which may include devices such as a wireless telephone, a personal digital assistant (PDA), or another of a variety of known wireless devices.

FIGS. 1-4 are merely representational and are not drawn to scale. Certain portions thereof may be exaggerated, while others may be minimized. FIGS. 1-4 illustrate various embodiments of the invention that can be understood and appropriately carried out by those of ordinary skill in the art.

While various embodiments have been illustrated and described, as noted above, changes can be made without departing from the disclosure. The accompanying drawings that form a part hereof show, by way of illustration and not of limitation, various embodiments in which the subject matter may be practiced. The embodiments illustrated are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed herein. Other embodiments may be utilized and derived therefrom.

This Detailed Description, therefore, is not to be taken in a limiting sense. Although specific embodiments have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the various embodiments shown. Furthermore, although the various embodiments have described redundant signal transmission systems, it is understood that the various embodiments may be employed in a variety of known electronic systems and devices without modification. This disclosure is intended to cover any and all adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those skilled in the art upon reviewing the above description.

We Claim:
1. A method of scheduling resources in system architecture, the method comprising:
receiving at least one task by a priority tagging module from at least one input line, wherein each task is tagged with a priority tag;
packetizing the received task into a plurality of packets, wherein each packet is provided with a sequence number; and
serving the packets from the queues to the output lines, wherein the serving of the packets is based on the sequence number, and wherein the same queue is not served consecutively but waits for a serve timer to expire.

2. The method of claim 1, further comprising:
storing the packets along with the sequence number into the plurality of queues.

3. The method of claim 1, wherein the step of serving the packets based on the sequence number comprises starting with the lowest sequence number of the packet in order to serve all packets of the request immediately if the wait-to-serve timer permits.

4. The method of claim 1, wherein the number of priorities is at least equal to the number of queues.

5. The method of claim 1, wherein the task includes a message, a frame, a request, etc.

6. The method of claim 1, wherein the packets are served from the queues according to a Round Robin scheduling scheme.

7. An apparatus for scheduling resources in system architecture, the apparatus comprising:
a scheduler or a processor including a memory, wherein the scheduler or processor is configured for:
receiving at least one task by a priority tagging module from at least one input line, wherein each task is tagged with a priority tag;
packetizing the received task into a plurality of packets, wherein each packet is provided with a sequence number;
storing the packets along with the sequence number into the plurality of queues; and
serving the packets from the queues to the output lines, wherein the serving of the packets is based on the sequence number, and wherein the same queue is not served consecutively but waits for a serve timer to expire.

8. The apparatus of claim 7, wherein the scheduler or processor may include at least one programmable logic and/or logic gates.

9. The apparatus of claim 7, wherein the system architecture includes at least one node.

10. The apparatus of claim 7, wherein the node includes a router, switch, scheduler, operating system, data communication system, etc.
Dated this the 30th day of March, 2012

S Afsar
Of Krishna & Saurastri Associates
Agent for the Applicant
Registration No. IN/PA-1073

Abstract
A method and apparatus of scheduling resources in system architecture

The present invention relates to a scheduling method and apparatus based on request arrival order. In one embodiment this is accomplished by receiving at least one task by a priority tagging module from at least one input line, wherein each task is tagged with a priority tag; packetizing the received task into a plurality of packets, wherein each packet is provided with a sequence number; and serving the packets from the queues to the output lines, wherein the serving of the packets is based on the sequence number, and wherein the same queue is not served consecutively but waits for a serve timer to expire.

Figure 3

Documents

Application Documents

# Name Date
1 Form-5.pdf 2012-04-09
2 Form-3.pdf 2012-04-09
3 Form-1.pdf 2012-04-09
4 Drawings.pdf 2012-04-09
5 abstract1282-CHE-2012.jpg 2013-04-20
6 1282-CHE-2012-OTHERS [16-02-2021(online)].pdf 2021-02-16
7 1282-CHE-2012-FER_SER_REPLY [16-02-2021(online)].pdf 2021-02-16
8 1282-CHE-2012-DRAWING [16-02-2021(online)].pdf 2021-02-16
9 1282-CHE-2012-COMPLETE SPECIFICATION [16-02-2021(online)].pdf 2021-02-16
10 1282-CHE-2012-CLAIMS [16-02-2021(online)].pdf 2021-02-16
11 1282-CHE-2012-ABSTRACT [16-02-2021(online)].pdf 2021-02-16
12 1282-CHE-2012-FER.pdf 2021-10-03
13 1282-CHE-2012-Response to office action [12-09-2022(online)].pdf 2022-09-12
14 1282-CHE-2012-PatentCertificate05-03-2024.pdf 2024-03-05
15 1282-CHE-2012-IntimationOfGrant05-03-2024.pdf 2024-03-05

Search Strategy

1 SEARCHQUERYE_27-07-2020.pdf

ERegister / Renewals

3rd: 04 Jun 2024

From 31/03/2014 - To 31/03/2015

4th: 04 Jun 2024

From 31/03/2015 - To 31/03/2016

5th: 04 Jun 2024

From 31/03/2016 - To 31/03/2017

6th: 04 Jun 2024

From 31/03/2017 - To 31/03/2018

7th: 04 Jun 2024

From 31/03/2018 - To 31/03/2019

8th: 04 Jun 2024

From 31/03/2019 - To 31/03/2020

9th: 04 Jun 2024

From 31/03/2020 - To 31/03/2021

10th: 04 Jun 2024

From 31/03/2021 - To 31/03/2022

11th: 04 Jun 2024

From 31/03/2022 - To 31/03/2023

12th: 04 Jun 2024

From 31/03/2023 - To 31/03/2024

13th: 04 Jun 2024

From 31/03/2024 - To 31/03/2025

14th: 27 Mar 2025

From 31/03/2025 - To 31/03/2026