
System And Method For Scheduling In A Multi Processor

Abstract: The present invention relates to a method for scheduling in a multi-processor system. At step 201, the method comprises receiving, by a receiving unit of a scheduler, a request for execution of a task. The said task is a foreground task belonging to a visible process. At step 202, the method comprises determining, by a determining unit of the scheduler, in response to receiving the request for execution of said task, a list of available processors, wherein a processor forms part of the list of available processors if it is in an un-masked state. At step 203, the method comprises scheduling the request for execution of the said task to a processor selected from the said list of available processors.


Patent Information

Filing Date: 01 October 2015
Publication Number: 14/2017
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Email: mail@lexorbis.com
Grant Date: 2023-10-31

Applicants

Samsung India Electronics Pvt. Ltd.
Logix Cyber Park, Plot No. C 28-29, Tower D - Ground to 10th Floor, Tower C - 7th to 10th Floor, Sector-62, Noida – 201301, Uttar Pradesh, India

Inventors

1. SHANDILYA, Onkar
Professor's Colony, Station Road, Sheikhpura, Bihar - 811105, India
2. SINGH, Neetesh
Vill - Rampura, Post - Chandrapura, Tehsil - Karhail, Dist - Mainpuri (205261), Uttar Pradesh, India
3. MODI, Anurag
85 Jyoti Nagar, RK Puri, Thatipur, Gwalior 474011, Madhya Pradesh, India
4. ANAND, Amitesh
C/o Dilip Kr. Yarpur, Khagaul Road, P.O. GPO, Patna - 800001, India

Specification

TECHNICAL FIELD

The present invention generally relates to the field of scheduling. More particularly, the present invention relates to a scheduler and a method of scheduling in a multi-processor system.

BACKGROUND

Operating systems perform a variety of tasks, of which scheduling is one of the most basic. The sharing of resources among multiple processes is referred to as scheduling and is achieved by means of a scheduler.

The existing prior art discloses schedulers and methods for scheduling multiple processing units/processors in a multi-processor system. Such prior art discloses that if a first foreground task of a visible process is being executed by a first processor and a new foreground task with a higher priority comes into the queue, the scheduler re-schedules said first task from said first processor to another processor.

Figure 1 illustrates an example of a prior art scheduling method 100. The multi-processor system in said prior art comprises a first processor P1, a second processor P2, a third processor P3 and a fourth processor P4. As can be seen, a first task T1 belonging to a visible foreground process is assigned by the scheduler to the first processor at time t = t1.

After some time, at t = t2, due to a process interruption such as the queuing of a task T2 having a priority higher than that of said first task T1, the scheduler re-assigns said first task T1 to the fourth processor P4. Such shifting of the first task T1 from one processor P1 to another processor P4 causes performance overhead due to the addition of task T1 to a new run-queue of said processor P4. Also, the scheduler 101 has to create a new cache for processor P4 for execution of task T1, which takes time and results in a cache flush, due to which the performance of the system is degraded. Further, in most cases, a frame drop is observed during the rescheduling of tasks from one processor to another processor, which results in a degraded user experience.

Accordingly, it can be seen that there is an unmet need for a scheduler and a method for scheduling in a multi-processor system which overcomes the disadvantages mentioned above, increases performance of a foreground task belonging to a visible process and enhances user experience for said foreground tasks belonging to a visible process.

OBJECTS OF THE INVENTION

Apart from overcoming the disadvantages discussed above, an object of the present invention is to provide a scheduler and a method for scheduling in a multi-processor system that avoids the switching of a foreground application/task belonging to a visible process from one processor to another processor. In other words, the processor on which a foreground task is being executed will be masked for any other high priority tasks/applications.

Another object of the invention is to upgrade the performance of the system and enhance user experience by avoiding cache flush and latency.

These and other objects as well as advantages will be more clearly understood from the detailed description taken in conjunction with the accompanying drawings and claims.

SUMMARY OF THE INVENTION

In accordance with the purposes of the invention, the present invention as embodied and broadly described herein, comprises a scheduler and a method of scheduling multiple processors in a multi-processor system.

According to one aspect of the invention, the present invention provides a method for scheduling in a multi-processor system wherein a foreground task belonging to a visible process and executed on a processor is prioritized over another high priority task, thereby resulting in continued execution of said foreground task. Said other high priority tasks are assigned to other available processors.

According to another aspect of the invention, the present invention provides a method for scheduling in a multi-processor system wherein the processors, on which a foreground task belonging to a visible process is being executed, are masked.

According to another aspect of the invention, the present invention provides a method for scheduling in a multi-processor system wherein a scheduler determines, in response to receiving a request for execution of a task, a list of available processors. The said task is a foreground task belonging to a visible process. A processor forms a part of the list of available processors if said processor is in an un-masked state. The scheduler, thereafter, schedules the request for execution of said task to a processor from a list of available processors.

According to another aspect of the invention, the present invention provides a scheduler for scheduling in a multi-processor system comprising a receiving module for receiving a request for execution of a task, a determining module for determining a list of available or unmasked processors, and a scheduling module for scheduling said task to a processor selected from the list of available/unmasked processors. The scheduler of the present invention may further comprise a masking unit for masking the processors on which foreground tasks of visible processes are being executed.

These and other aspects as well as advantages will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings and claims.

BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS:

To further clarify the advantages and aspects of the invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings in accordance with various embodiments of the invention, wherein:

Figure 1 illustrates an example of a prior art method for scheduling processors in a multiprocessor system.

Figure 2 illustrates a method for scheduling processors in a multi-processor system, in accordance with an embodiment of the present invention.

Figure 3 illustrates a method for scheduling in a multi-processor system, in accordance with another embodiment of the present invention.

Figure 4 is a block diagram illustrating scheduling in a multi-processor system, in accordance with embodiments of the present invention.

Figure 5 illustrates a flowchart comparing the scheduling method of a prior art system with the scheduling method of the present invention.

Figures 6(a) and 6(b) are systrace data comparing prior art scheduling methods with the scheduling method of the present invention.

Figure 7 illustrates a scheduler 700 for scheduling in a multi-processor system, in accordance with an embodiment of the present invention.

Figure 8 illustrates a typical hardware configuration of a computing device 800, which is representative of a hardware environment for implementing the present invention. As would be understood, the computing devices as described above include the hardware configuration as described below.

It may be noted that to the extent possible, like reference numerals have been used to represent like elements in the drawings. Further, those of ordinary skill in the art will appreciate that elements in the drawings are illustrated for simplicity and may not have been necessarily drawn to scale. For example, the dimensions of some of the elements in the drawings may be exaggerated relative to other elements to help to improve understanding of aspects of the invention. Furthermore, the one or more elements may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having benefit of the description herein.

DETAILED DESCRIPTION

It should be understood at the outset that although illustrative implementations of the embodiments of the present disclosure are illustrated below, the present invention may be implemented using any number of techniques, whether currently known or in existence. The present disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary design and implementation illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.

The term “some” as used herein is defined as “none, or one, or more than one, or all.” Accordingly, the terms “none,” “one,” “more than one,” “more than one, but not all” or “all” would all fall under the definition of “some.” The term “some embodiments” may refer to no embodiments or to one embodiment or to several embodiments or to all embodiments. Accordingly, the term “some embodiments” is defined as meaning “no embodiment, or one embodiment, or more than one embodiment, or all embodiments.”

The terminology and structure employed herein is for describing, teaching and illuminating some embodiments and their specific features and elements and does not limit, restrict or reduce the spirit and scope of the claims or their equivalents.

More specifically, any terms used herein such as but not limited to “includes,” “comprises,” “has,” “consists,” and grammatical variants thereof do NOT specify an exact limitation or restriction and certainly do NOT exclude the possible addition of one or more features or elements, unless otherwise stated, and furthermore must NOT be taken to exclude the possible removal of one or more of the listed features and elements, unless otherwise stated with the limiting language “MUST comprise” or “NEEDS TO include.”

Whether or not a certain feature or element was limited to being used only once, either way it may still be referred to as “one or more features” or “one or more elements” or “at least one feature” or “at least one element.” Furthermore, the use of the terms “one or more” or “at least one” feature or element do NOT preclude there being none of that feature or element, unless otherwise specified by limiting language such as “there NEEDS to be one or more . . . ” or “one or more element is REQUIRED.”

Unless otherwise defined, all terms, and especially any technical and/or scientific terms, used herein may be taken to have the same meaning as commonly understood by one having an ordinary skill in the art.

Reference is made herein to some “embodiments.” It should be understood that an embodiment is an example of a possible implementation of any features and/or elements presented in the attached claims. Some embodiments have been described for the purpose of illuminating one or more of the potential ways in which the specific features and/or elements of the attached claims fulfill the requirements of uniqueness, utility and non-obviousness.

Use of the phrases and/or terms such as but not limited to “a first embodiment,” “a further embodiment,” “an alternate embodiment,” “one embodiment,” “an embodiment,” “multiple embodiments,” “some embodiments,” “other embodiments,” “further embodiment”, “furthermore embodiment”, “additional embodiment” or variants thereof do NOT necessarily refer to the same embodiments. Unless otherwise specified, one or more particular features and/or elements described in connection with one or more embodiments may be found in one embodiment, or may be found in more than one embodiment, or may be found in all embodiments, or may be found in no embodiments. Although one or more features and/or elements may be described herein in the context of only a single embodiment, or alternatively in the context of more than one embodiment, or further alternatively in the context of all embodiments, the features and/or elements may instead be provided separately or in any appropriate combination or not at all. Conversely, any features and/or elements described in the context of separate embodiments may alternatively be realized as existing together in the context of a single embodiment.

Any particular and all details set forth herein are used in the context of some embodiments and therefore should NOT be necessarily taken as limiting factors to the attached claims. The attached claims and their legal equivalents can be realized in the context of embodiments other than the ones used as illustrative examples in the description below.

Figure 2 illustrates a method for scheduling in a multi-processor system, in accordance with an embodiment of the present invention.

At step 201, the method comprises receiving, by a receiving unit of a scheduler, a request for execution of a task. The said task is a foreground task belonging to a visible process. At step 202, the method comprises determining, by a determining unit of the scheduler, in response to receiving the request for execution of said task, a list of available processors, wherein a processor forms part of the list of available processors if it is in an un-masked state. At step 203, the method comprises scheduling the request for execution of the said task to a processor selected from the said list of available processors.

In one embodiment, the method further comprises determining, by the determining unit, in response to said scheduling of the request for execution of the said task, a masked as well as un-masked state of the processors, as shown at step 204. A processor is in a masked state if it is executing a foreground task belonging to a visible process. A processor is in an un-masked state if it is not executing a foreground task belonging to a visible process.

In another embodiment, the method further comprises changing a masked state of a processor to an un-masked state in response to satisfaction of at least one criterion, as shown at step 205. The said at least one criterion includes any one of the following: (a) changing of a state of the visible foreground process to a background process; (b) completion of the task; and (c) termination of the task.

In another embodiment, the method further comprises masking, by a masking unit, processors on which a foreground task belonging to a visible process is being executed as shown at step 206.
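Steps 201 to 206 above can be sketched as a minimal, hypothetical model of the scheduler. The class and method names below are illustrative only and do not correspond to any actual kernel code:

```python
class Scheduler:
    """Toy model of steps 201-206: schedule tasks only onto un-masked CPUs."""

    def __init__(self, num_cpus):
        # Step 204: track the masked/un-masked state of every processor
        self.masked = [False] * num_cpus

    def available(self):
        # Step 202: a processor is "available" only while it is un-masked
        return [cpu for cpu, m in enumerate(self.masked) if not m]

    def schedule(self, task):
        # Steps 201 and 203: receive a request and pick an un-masked CPU
        cpus = self.available()
        if not cpus:
            return None
        cpu = cpus[0]
        if task.get("visible_foreground"):
            # Step 206: mask the CPU so no later task can displace this one
            self.masked[cpu] = True
        return cpu

    def unmask(self, cpu):
        # Step 205: invoked on task completion/termination, or when the
        # process moves from the visible foreground to the background
        self.masked[cpu] = False
```

With four CPUs, a visible foreground task lands on CPU 0 and masks it; the next request is then steered away from CPU 0, and CPU 0 becomes available again only after `unmask` is called under one of the step-205 criteria.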

It is understood that the said method of scheduling is applicable to multi-processor operating systems, including those based on the Linux kernel 2.6.18.
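As a loose user-space analogy only (not the in-kernel mechanism the specification describes), Linux exposes per-process CPU affinity masks that can keep a process away from a given CPU. The snippet below removes one CPU from the current process's affinity set, which is roughly analogous to treating that CPU as "masked" for this process:

```python
import os

# Read this process's current CPU affinity mask (Linux only).
all_cpus = os.sched_getaffinity(0)

if len(all_cpus) > 1:
    reserved = min(all_cpus)
    # "Mask" one CPU for this process by removing it from the affinity set,
    # loosely analogous to reserving a processor for a foreground task.
    os.sched_setaffinity(0, all_cpus - {reserved})
    assert reserved not in os.sched_getaffinity(0)
    os.sched_setaffinity(0, all_cpus)  # restore the original mask
```

Note the direction is inverted relative to the invention: affinity restricts where a given task may run, whereas the specification's masking restricts which tasks may be placed on a given processor.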

Figure 3 illustrates a method for scheduling processors in a Linux based multi-processor system, in accordance with an embodiment of the present invention.

At step 301, the method comprises receiving, by a receiving unit, a plurality of tasks. At step 302, the method comprises scheduling, by a scheduling unit, execution of said plurality of tasks to a plurality of processors of a multi-processor system in accordance with a priority based scheduling method defined by:

stop_sched_class → rt_sched_class → fair_sched_class → idle_sched_class

wherein a task of a visible foreground process belonging to “fair_sched_class” under execution by a first processor is prioritized over another task belonging to “fair_sched_class”, thereby resulting in continued execution of the said task of the visible foreground process belonging to “fair_sched_class”. In other words, if a task is from “fair_sched_class” and running as a foreground task on a first processor, said first processor is masked by a masking unit such that other high priority tasks from “fair_sched_class” and/or real time tasks from “rt_sched_class” cannot be assigned to said first processor.

As already known in the prior art, “stop_sched_class” refers to the stop scheduling class, “rt_sched_class” refers to the real time scheduling class, “fair_sched_class” refers to the fair scheduling class and “idle_sched_class” refers to the idle scheduling class. Also, the priority given to “rt_sched_class” tasks in the prior art scheduling methods is higher than the priority given to “fair_sched_class” tasks. Accordingly, if an “rt_sched_class” task is received by a scheduler in prior art methods, said “rt_sched_class” task will be scheduled on a processor executing a foreground task belonging to a visible process and the said foreground task will be migrated to another processor.

On the contrary, the present invention masks all the processors on which a foreground task belonging to a visible process is running. As a result, all the other new tasks from either “rt_sched_class” or “fair_sched_class” are assigned to other available/unmasked processors. In other words, the new tasks are assigned to processors on which no foreground tasks/applications are running.
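The difference between the prior art policy and the masking policy described above can be stated as a pair of small, hypothetical decision functions (illustrative only; the real schedulers operate on run-queues, not lists):

```python
def prior_art_pick(new_task_is_rt, cpus, foreground_cpu):
    # Prior art: an rt_sched_class task may claim the processor running
    # the visible foreground task, forcing that task to migrate.
    if new_task_is_rt:
        return foreground_cpu
    return next(c for c in cpus if c != foreground_cpu)

def masking_pick(new_task_is_rt, cpus, foreground_cpu):
    # Present scheme: the foreground processor is masked, so any new task,
    # whether from rt_sched_class or fair_sched_class, goes elsewhere.
    return next(c for c in cpus if c != foreground_cpu)
```

With T1 running on CPU 0, an incoming real time task lands on CPU 0 under the prior art policy (migrating T1) but on another CPU under the masking policy (T1 continues undisturbed).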

Figure 4 is a block diagram illustrating scheduling in a multi-processor system, in accordance with embodiments of the present invention.

As indicated in Figure 4, a foreground task T1 belonging to a visible process is assigned to processor P1 at t = t1. As the processor is executing a foreground task, the scheduling method of the present invention masks the processor P1. After some time, i.e. at t = t2, the scheduler receives a request for scheduling another high priority task T2. As processor P1 is in a masked state and is not available to the scheduler for assignment, the scheduler assigns the task T2 to one of the available processors P2, P3 and P4.

Figure 5 illustrates a flowchart comparing the scheduling method of a prior art system with the scheduling method of the present invention.

At step 501, a foreground task T1 belonging to a visible process starts. The multi-processor system comprises four processors P1, P2, P3 and P4. The scheduler decides on which processor foreground task T1 is to be scheduled. At step 502, the scheduler assigns said foreground task T1 to a first processor P1. In the meantime, the scheduler receives a request for execution of another high priority task T2.

In the case of prior art scheduling methods, as shown at step 503, all the processors P1, P2, P3 and P4 will be available for assignment of the task T2. If the priority of task T2 is higher than the priority of task T1 and the capacity of processor P1 is higher than that of the remaining processors, processor P1 will be assigned to task T2 and task T1 will be rescheduled to any of P2, P3 or P4. However, in the case of the present invention, as shown at step 504, P1 will be masked, i.e. unavailable, for execution of task T2. The task T2 will therefore be assigned to any one of the remaining processors P2, P3 and P4.

Figures 6(a) and 6(b) are systrace data comparing prior art scheduling methods with the scheduling method of the present invention.

Figure 6(a) illustrates a prior art method of scheduling wherein the Unity main thread of a game, i.e. a foreground task T1, is running on processor CPU 0 at t = t1. After some time, i.e. at t = t2, the scheduler receives a request to schedule another high priority task T2. The scheduler re-assigns the said Unity main thread of the game, i.e. T1, to processor CPU 2 and assigns T2 to CPU 0.

Figure 6(b) illustrates the scheduling method of the present invention wherein the Unity main thread of the game, i.e. a foreground task T1, is running on processor CPU 0 at t = t1. As the processor is executing a foreground task, the scheduling method of the present invention masks the processor CPU 0. After some time, i.e. at t = t2, the scheduler receives a request for scheduling another high priority task T2. As processor CPU 0 is in a masked state and is not available to the scheduler for assignment, the said CPU 0 continues executing task T1 and said high priority task is assigned to another available processor such as CPU 2 (as shown in Figure 6(b)).

Figure 7 illustrates a scheduler 700 for scheduling multiple processors in a multi-processor system, in accordance with an embodiment of the present invention.

The scheduler comprises a receiving unit 701, a determining unit 702 and a scheduling unit 703. The scheduler may further comprise a masking unit 704.

The said receiving unit 701 is configured for receiving a request for execution of a task. The said receiving unit 701 is in communication with the determining unit 702. The said determining unit is configured for determining, in response to receiving the request from said receiving unit, a list of available or unmasked processors. The determining unit is in communication with the scheduling unit 703, wherein said scheduling unit is configured for allocating/assigning said task to a processor from the list of available or unmasked processors.

In one embodiment, the scheduler further comprises a masking unit 704, wherein said masking unit is in communication with the processors of the multi-processor system. The said masking unit detects the processors on which a task of a visible foreground process is being executed and masks the said processors. The said masking unit is in communication with the determining unit and provides a list of masked processors to said determining unit.
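The cooperation among units 701 to 704 might be sketched as follows. The class names and interfaces are illustrative assumptions, not taken from the specification's implementation:

```python
class MaskingUnit:                    # 704: tracks masked processors
    def __init__(self):
        self.masked = set()

    def mask(self, cpu):
        self.masked.add(cpu)

class DeterminingUnit:                # 702: lists available (un-masked) CPUs
    def __init__(self, masking_unit, all_cpus):
        self.masking = masking_unit
        self.cpus = all_cpus

    def available(self):
        return [c for c in self.cpus if c not in self.masking.masked]

class SchedulingUnit:                 # 703: assigns a task to an available CPU
    def assign(self, task, available):
        return available[0] if available else None

class Scheduler700:                   # 700: wires the units together
    def __init__(self, all_cpus):
        self.masking = MaskingUnit()
        self.determining = DeterminingUnit(self.masking, all_cpus)
        self.scheduling = SchedulingUnit()

    def handle_request(self, task):   # 701: receiving unit accepts a request
        cpu = self.scheduling.assign(task, self.determining.available())
        if cpu is not None and task.get("visible_foreground"):
            self.masking.mask(cpu)    # 704 masks the foreground CPU
        return cpu
```

The masking unit feeding the determining unit, which in turn feeds the scheduling unit, mirrors the communication paths described for Figure 7.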

Figure 8 illustrates a typical hardware configuration of a computing device 800, which is representative of a hardware environment for implementing the present invention. As would be understood, the computing devices as described above include the hardware configuration as described below.

In a networked deployment, the computing device 800 may operate in the capacity of a server or as a client user computer in a server-client user network environment, or as a peer computer system in a peer-to-peer (or distributed) network environment. The computing device 800 can also be implemented as or incorporated into various devices, such as a personal computer (PC), a tablet PC, a personal digital assistant (PDA), a smart phone, a palmtop computer, a laptop, a desktop computer, and a communications device.

The computing device 800 includes multiple processors 801. The multiple processors may include central processing units (CPUs) or graphics processing units (GPUs), or both. The processors 801 may be a component in a variety of systems. For example, the processors 801 may be part of a standard personal computer or a workstation. The processors 801 may be general processors, digital signal processors, application specific integrated circuits, field programmable gate arrays, servers, networks, digital circuits, analog circuits, combinations thereof, or other now known or later developed devices for analysing and processing data. The processors 801 may implement a software program, such as code generated manually (i.e., programmed).

The computing device 800 includes a scheduler 802 communicating with the processor via communicating means such as a bus 803. The said scheduler comprises a receiving unit, a determining unit and a scheduling unit. The scheduler may further comprise a masking unit. The said scheduler and its components have been discussed in detail in Figure 7.

The computing device 800 may include a memory 804 communicating with the processor 801 via the bus 803. The memory 804 may be a main memory, a static memory, or a dynamic memory. The memory 804 may include, but is not limited to, computer readable storage media such as various types of volatile and non-volatile storage media, including but not limited to random access memory, read-only memory, programmable read-only memory, electrically programmable read-only memory, electrically erasable read-only memory, flash memory, magnetic tape or disk, optical media and the like. The memory 804 may be an external storage device or database for storing data. Examples include a hard drive, compact disc ("CD"), digital video disc ("DVD"), memory card, memory stick, floppy disc, universal serial bus ("USB") memory device, or any other device operative to store data. The memory 804 is operable to store instructions executable by the processor 801. The functions, acts or tasks illustrated in the figures or described may be performed by the programmed processor 801 executing the instructions stored in the memory 804. The functions, acts or tasks are independent of the particular type of instruction set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firmware, microcode and the like, operating alone or in combination. Likewise, processing strategies may include multiprocessing, multitasking, parallel processing and the like.

The computing device 800 may further include a display unit 805, such as a liquid crystal display (LCD), an organic light emitting diode (OLED), a flat panel display, a solid state display, a cathode ray tube (CRT), or other now known or later developed display device for outputting determined information.

Additionally, the computing device 800 may include an input device 806 configured to allow a user to interact with any of the components of computing device 800. The input device 806 may be a number pad, a keyboard, a stylus, an electronic pen, or a cursor control device, such as a mouse, or a joystick, touch screen display, remote control or any other device operative to interact with the computing device 800.

The computing device 800 may also include a disk or optical drive unit 807. The drive unit 807 may include a computer-readable medium 808 in which one or more sets of instructions 809, e.g. software, can be embedded. In addition, the instructions 809 may be separately stored in the processor 801 and the memory 804.

The computing device 800 may further be in communication with other devices over a network 810 to communicate voice, video, audio, images, or any other data over the network 810. Further, the data and/or the instructions 809 may be transmitted or received over the network 810 via a communication port or interface 811 or using the bus 803. The communication port or interface 811 may be a part of the processor 801 or may be a separate component. The communication port 811 may be created in software or may be a physical connection in hardware. The communication port 811 may be configured to connect with the network 810, external media, the display 805, or any other components in system 800, or combinations thereof. The connection with the network 810 may be a physical connection, such as a wired Ethernet connection, or may be established wirelessly as discussed later. Likewise, the additional connections with other components of the system 800 may be physical connections or may be established wirelessly. The network 810 may alternatively be directly connected to the bus 803.

The network 810 may include wired networks, wireless networks, Ethernet AVB networks, or combinations thereof. The wireless network may be a cellular telephone network, or an 802.11, 802.16, 802.20, 802.1Q or WiMax network. Further, the network 810 may be a public network, such as the Internet, a private network, such as an intranet, or combinations thereof, and may utilize a variety of networking protocols now available or later developed including, but not limited to, TCP/IP based networking protocols.

In an alternative example, dedicated hardware implementations, such as application specific integrated circuits, programmable logic arrays and other hardware devices, can be constructed to implement various parts of the computing device 800.
Applications that may include the present systems can broadly include a variety of electronic and computer systems. One or more examples described may implement functions using two or more specific interconnected hardware modules or devices with related control and data signals that can be communicated between and through the modules, or as portions of an application-specific integrated circuit. Accordingly, the present system encompasses software, firmware, and hardware implementations.

The computing device 800 may be implemented by software programs executable by the processor 801. Further, in a non-limiting example, implementations can include distributed processing, component/object distributed processing, and parallel processing. Alternatively, virtual computer system processing can be constructed to implement various parts of the system.

The computing device 800 is not limited to operation with any particular standards and protocols. For example, standards for Internet and other packet switched network transmission (e.g., TCP/IP, UDP/IP, HTML, HTTP) may be used. Such standards are periodically superseded by faster or more efficient equivalents having essentially the same functions. Accordingly, replacement standards and protocols having the same or similar functions as those disclosed are considered equivalents thereof.

The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, the orders of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. In addition, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of embodiments is at least as broad as given by the following claims.
While certain present preferred embodiments of the invention have been illustrated and described herein, it is to be understood that the invention is not limited thereto. Clearly, the invention may be otherwise variously embodied, and practiced within the scope of the following claims.

Claims:

WE CLAIM:

1. A method for scheduling in a multi-processor system, said method comprising:
receiving (201) a request for execution of a task;
determining (202) in response to receiving a request for execution of said task, a list of available processors; and
scheduling (203) the request for execution of the said task to a processor selected from the said list of available processors;
wherein a processor forms part of the list of available processors if the same is in an un-masked state.

2. The method as claimed in claim 1 wherein said task is a foreground task belonging to a visible process.

3. The method as claimed in claim 1 wherein said processor is in masked state if the processor is executing a foreground task belonging to a visible process.

4. The method as claimed in claim 1 further comprising: determining (204) in response to said scheduling of the request for execution of the said task, a masked/un-masked state of the processor.

5. The method as claimed in claim 1 further comprising: changing (205) a masked state of a processor to an un-masked state in response to satisfaction of at least one criterion.

6. The method as claimed in claim 5 wherein the said at least one criterion includes:
a. changing of a state of the visible foreground process to a background process;
b. completion of the task; and
c. termination of the task.

7. The method as claimed in claim 1, further comprising masking (206) processors on which a foreground task belonging to a visible process is being executed.
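The method of claims 1 to 7 can be sketched as follows. This is a minimal illustrative model, not the patented implementation: class, method, and field names (`MaskedCpuScheduler`, `foreground_visible`, etc.) are hypothetical, and the step numbers in the comments refer to the reference numerals used in the claims.

```python
# Illustrative sketch of the claimed scheduling steps (names are hypothetical,
# not from the specification): a CPU is eligible only while un-masked, and a
# CPU running a visible-process foreground task is masked until that task
# completes, terminates, or its process moves to the background.

class MaskedCpuScheduler:
    def __init__(self, num_cpus):
        self.masked = set()                 # CPUs currently masked
        self.num_cpus = num_cpus

    def available_cpus(self):
        """Step 202: list of processors in an un-masked state."""
        return [c for c in range(self.num_cpus) if c not in self.masked]

    def schedule(self, task):
        """Steps 201/203: receive a request and place it on an un-masked CPU."""
        candidates = self.available_cpus()
        if not candidates:
            return None                     # no un-masked processor free
        cpu = candidates[0]
        if task.get("foreground_visible"):
            self.masked.add(cpu)            # step 206: mask the chosen CPU
        return cpu

    def release(self, cpu):
        """Step 205: un-mask on completion, termination, or move to background."""
        self.masked.discard(cpu)


sched = MaskedCpuScheduler(num_cpus=2)
cpu0 = sched.schedule({"name": "ui", "foreground_visible": True})
cpu1 = sched.schedule({"name": "render", "foreground_visible": True})
print(sched.available_cpus())   # both CPUs now masked, so the list is empty
sched.release(cpu0)
print(sched.available_cpus())   # CPU 0 is un-masked again
```

The masking set here plays the role the claims assign to the masked/un-masked processor state: it keeps a second foreground task of a visible process from preempting one already running.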

8. A method for scheduling in a multi-processor system for execution of tasks, said method comprising: scheduling (302) the execution of a plurality of tasks to a plurality of processors of a multi-processor system in accordance with a priority based scheduling method defined by:

stop_sched_class → rt_sched_class → fair_sched_class → idle_sched_class

wherein, a task of visible foreground process belonging to “fair_sched_class” under execution by a processor is prioritized over another task belonging to “fair_sched_class”, thereby resulting in continued execution of the said task of visible foreground process belonging to “fair_sched_class”.

9. The method as claimed in claim 8 wherein the processor executing said task of visible foreground process is in masked state.
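The class ordering in claim 8 mirrors the scheduling classes of the Linux kernel. The selection logic it describes can be sketched as below; the function and dictionary keys (`pick_next_task`, `visible_foreground`, `running`) are illustrative stand-ins, not kernel API.

```python
# Sketch of the priority ordering in claim 8: walk the scheduling classes from
# highest to lowest priority; within fair_sched_class, a visible-foreground
# task already running on a processor is preferred, so it keeps the CPU.

SCHED_CLASSES = ["stop_sched_class", "rt_sched_class",
                 "fair_sched_class", "idle_sched_class"]

def pick_next_task(runnable):
    """Return a runnable task of the highest-priority class; within
    fair_sched_class, prefer a running visible-foreground task."""
    for cls in SCHED_CLASSES:
        tasks = [t for t in runnable if t["class"] == cls]
        if not tasks:
            continue
        if cls == "fair_sched_class":
            fg = [t for t in tasks
                  if t.get("visible_foreground") and t.get("running")]
            if fg:
                return fg[0]
        return tasks[0]
    return None

runnable = [
    {"name": "bg_job",  "class": "fair_sched_class"},
    {"name": "ui_task", "class": "fair_sched_class",
     "visible_foreground": True, "running": True},
]
print(pick_next_task(runnable)["name"])  # the visible foreground task wins
```

Note that the fair-class tie-break only reorders tasks within `fair_sched_class`; a task of a higher class (e.g. `rt_sched_class`) would still be picked first, consistent with the ordering in the claim.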

10. A scheduler (700) for scheduling in a multi-processor system, said scheduler comprising:
a receiving unit (701) for receiving a request for execution of a task;
a determining unit (702) for determining, in response to receiving the request for execution of the task, a list of available processors;
a scheduling unit (703) for scheduling the request for execution of the task to a processor selected from the said list of available processors;
wherein a processor forms part of the list of processors if it is in an un-masked state.

11. The scheduler (700) as claimed in claim 10, further comprising a masking unit (704) configured for masking processors on which a foreground task belonging to a visible process is being executed.

Documents

Orders

Section Controller Decision Date

Application Documents

# Name Date
1 Power of Attorney [01-10-2015(online)].pdf 2015-10-01
2 Form 5 [01-10-2015(online)].pdf 2015-10-01
3 Form 3 [01-10-2015(online)].pdf 2015-10-01
4 Form 18 [01-10-2015(online)].pdf 2015-10-01
5 Drawing [01-10-2015(online)].pdf 2015-10-01
6 Description(Complete) [01-10-2015(online)].pdf 2015-10-01
7 3168-del-2015-Form-1-(07-10-2015).pdf 2015-10-07
8 3168-del-2015-Correspondence Others-(07-10-2015).pdf 2015-10-07
9 3168-DEL-2015-PA [19-09-2019(online)].pdf 2019-09-19
10 3168-DEL-2015-ASSIGNMENT DOCUMENTS [19-09-2019(online)].pdf 2019-09-19
11 3168-DEL-2015-8(i)-Substitution-Change Of Applicant - Form 6 [19-09-2019(online)].pdf 2019-09-19
12 3168-DEL-2015-OTHERS-101019.pdf 2019-10-14
13 3168-DEL-2015-Correspondence-101019.pdf 2019-10-14
14 3168-DEL-2015-FER.pdf 2020-01-31
15 3168-DEL-2015-CLAIMS [20-07-2020(online)].pdf 2020-07-20
16 3168-DEL-2015-COMPLETE SPECIFICATION [20-07-2020(online)].pdf 2020-07-20
17 3168-DEL-2015-DRAWING [20-07-2020(online)].pdf 2020-07-20
18 3168-DEL-2015-FER_SER_REPLY [20-07-2020(online)].pdf 2020-07-20
19 3168-DEL-2015-OTHERS [20-07-2020(online)].pdf 2020-07-20
20 3168-DEL-2015-US(14)-HearingNotice-(HearingDate-15-06-2023).pdf 2023-05-29
21 3168-DEL-2015-Correspondence to notify the Controller [13-06-2023(online)].pdf 2023-06-13
22 3168-DEL-2015-FORM-26 [14-06-2023(online)].pdf 2023-06-14
23 3168-DEL-2015-Written submissions and relevant documents [26-06-2023(online)].pdf 2023-06-26
24 3168-DEL-2015-IntimationOfGrant31-10-2023.pdf 2023-10-31
25 3168-DEL-2015-PatentCertificate31-10-2023.pdf 2023-10-31

Search Strategy

1 3168Search_31-01-2020.pdf

ERegister / Renewals

3rd: 12 Dec 2023 (From 01/10/2017 To 01/10/2018)
4th: 12 Dec 2023 (From 01/10/2018 To 01/10/2019)
5th: 12 Dec 2023 (From 01/10/2019 To 01/10/2020)
6th: 12 Dec 2023 (From 01/10/2020 To 01/10/2021)
7th: 12 Dec 2023 (From 01/10/2021 To 01/10/2022)
8th: 12 Dec 2023 (From 01/10/2022 To 01/10/2023)
9th: 12 Dec 2023 (From 01/10/2023 To 01/10/2024)
10th: 28 Sep 2024 (From 01/10/2024 To 01/10/2025)
11th: 11 Sep 2025 (From 01/10/2025 To 01/10/2026)