
Methods And Systems For Prioritising File Transfer Based On User Interactions

Abstract: The present invention relates to methods and systems for prioritising file transfer of a plurality of processes concurrently running on a computing device based on user interactions and accordingly dynamically allocating a network bandwidth amongst the plurality of processes. Accordingly, a user input is received corresponding to a first set of processes from the plurality of processes. Based on the user input, network bandwidth is allocated to each of the processes in the first set of processes.


Patent Information

Application #
Filing Date
19 May 2015
Publication Number
49/2016
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
mail@lexorbis.com
Parent Application
Patent Number
Legal Status
Grant Date
2023-03-28
Renewal Date

Applicants

Samsung India Electronics Pvt. Ltd.
Logix Cyber Park, Plot No. C 28-29, Tower D - Ground to 10th Floor, Tower C - 7th to 10th Floor, Sector-62, Noida – 201301, Uttar Pradesh, India

Inventors

1. SINDHWANI, Jatin
627/24, DLF Colony, Rohtak – 124001, Haryana, India

Specification

TECHNICAL FIELD
The present invention relates to methods and systems for improving network
performance and in particular relates to methods and systems for prioritising file transfer
based on user interactions.

BACKGROUND
With the advent of Internet, users are now able to access wide variety of data
such as emails, videos, music, and research documents on their electronic devices such as
smart phones, laptops, and tablets. In addition, the users are now able to connect with other
users and share data such as images, videos, and music through various applications such as
chat application and voice over IP (VOIP) applications. Moreover, quality of data being
shared or accessed has improved dramatically. For example, present videos being shared or
accessed are in high definition format. However, such an extensive usage has led to dramatic
increase in consumption of network bandwidth. Consequently, network overload problems
have increased.
Various solutions are available that allow a user to manage the network
bandwidth among different applications and/or processes. In one solution, download
parameters are provided for downloading desired content items. Upon selecting desired
values for the one or more download parameters, the desired content item is downloaded in
accordance with the desired values, and is then presented to a user. In another solution, a
higher network bandwidth is provided to an upload or a download process based on a user
input such as by touching a button. In one another solution, enhancement of upload and/or
download performance of a process is based on client and/or server feedback information. In
yet another solution, dynamic bandwidth adjustment is performed for browsing or streaming
activity in a wireless network based on prediction of user behaviour. In yet one another
solution, a network performance of a current foreground application is increased by
allocating of entire network bandwidth to the current foreground application only.
However, all these solutions are directed towards enhancing network
performance· or increasing network bandwidth for a single process or application only.
Consequently, the other processes or applications are deprived of the network bandwidth.
Further, these solutions do not provide flexibility to a user for allocating network bandwidth
to multiple file transfer processes such as uploading process and downloading processes.
Thus, there exists a need for a solution to enable a user to prioritize file transfer processes
based on user interactions.
SUMMARY OF THE INVENTION
In accordance with the purposes of the invention, the present invention as
embodied and broadly described herein, enables users to prioritize file transfer processes
based on user interactions. Accordingly, a plurality of file transfer processes (hereinafter
referred to as processes) are depicted on a computing device. The process can be either a
downloading of a file or uploading of a file. A user can provide a user input corresponding to
one or more processes from the plurality of processes. Upon receiving the user input, the user
input is analysed and a priority is assigned to each of the plurality of processes. Based on
the assigned priority, a network bandwidth is allocated to the plurality of processes such that
a process with a highest priority is allocated maximum network bandwidth and a process with
a lowest priority is allocated minimum network bandwidth.
The advantages of the invention include, but are not limited to, enabling a user
to quickly prioritize and/or de-prioritize various processes according to the user's requirements.
Thus, flexibility is provided to the user to prioritize and/or de-prioritize the various processes
based on user interactions or user inputs from a window in which the various processes are
depicted.
These aspects and advantages will be more clearly understood from the
following detailed description taken in conjunction with the accompanying drawings and
claims.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS:
To further clarify advantages and aspects of the invention, a more particular
description of the invention will be rendered by reference to specific embodiments thereof,
which are illustrated in the appended drawings. It is appreciated that these drawings depict
only typical embodiments of the invention and are therefore not to be considered limiting of
its scope. The invention will be described and explained with additional specificity and detail
with the accompanying drawings, which are listed below for quick reference.
Figures 1a to 1b and Figure 2 illustrate exemplary methods for prioritising
file transfer based on user interactions, in accordance with various embodiments of present
invention.
Figure 3 illustrates an exemplary computing device prioritising file transfer
based on user interactions, in accordance with an embodiment of present invention.
Figures 4 to 13 illustrate prioritising two or more file transfers based on one or
more user interactions, in accordance with various embodiments of present invention.
Figures 14 to 17 illustrate example manifestations depicting the implementation
of the present invention; and
Figure 18 illustrates a typical hardware configuration of a computing device,
which is representative of a hardware environment for practicing the present invention.
It may be noted that to the extent possible, like reference numerals have been
used to represent like elements in the drawings. Further, those of ordinary skill in the art will
appreciate that elements in the drawings are illustrated for simplicity and may not have been
necessarily drawn to scale. For example, the dimensions of some of the elements in the
drawings may be exaggerated relative to other elements to help to improve understanding of
aspects of the invention. Furthermore, the one or more elements may have been represented
in the drawings by conventional symbols, and the drawings may show only those specific
details that are pertinent to understanding the embodiments of the invention so as not to
obscure the drawings with details that will be readily apparent to those of ordinary skill in the
art having benefit of the description herein.
DETAILED DESCRIPTION
It should be understood at the outset that although illustrative implementations
of the embodiments of the present disclosure are illustrated below, the present invention may
be implemented using any number of techniques, whether currently known or in existence.
The present disclosure should in no way be limited to the illustrative implementations,
drawings, and techniques illustrated below, including the exemplary design and
implementation illustrated and described herein, but may be modified within the scope of the
appended claims along with their full scope of equivalents.
The term "some" as used herein is defined as "none, or one, or more than one,
or all." Accordingly, the terms "none," "one," "more than one," "more than one, but not all"
or "all" would all fall under the definition of "some." The term "some embodiments" may
refer to no embodiments or to one embodiment or to several embodiments or to all
embodiments. Accordingly, the term "some embodiments" is defined as meaning "no
embodiment, or one embodiment, or more than one embodiment, or all embodiments."
The terminology and structure employed herein is for describing, teaching and
illuminating some embodiments and their specific features and elements and does not limit,
restrict or reduce the spirit and scope of the claims or their equivalents.
More specifically, any terms used herein such as but not limited to "includes,"
"comprises," "has," "consists," and grammatical variants thereof do NOT specify an exact
limitation or restriction and certainly do NOT exclude the possible addition of one or more
features or elements, unless otherwise stated, and furthermore must NOT be taken to exclude
the possible removal of one or more of the listed features and elements, unless otherwise
stated with the limiting language "MUST comprise" or "NEEDS TO include."
Whether or not a certain feature or element was limited to being used only
once, either way it may still be referred to as "one or more features" or "one or more
elements" or "at least one feature" or "at least one element." Furthermore, the use of the
terms "one or more" or "at least one" feature or element does NOT preclude there being none
of that feature or element, unless otherwise specified by limiting language such as "there
NEEDS to be one or more" or "one or more element is REQUIRED."
Unless otherwise defined, all terms, and especially any technical and/or
scientific terms, used herein may be taken to have the same meaning as commonly
understood by one having an ordinary skill in the art.
Reference is made herein to some "embodiments." It should be understood
that an embodiment is an example of a possible implementation of any features and/or
elements presented in the attached claims. Some embodiments have been described for the
purpose of illuminating one or more of the potential ways in which the specific features
and/or elements of the attached claims fulfil the requirements of uniqueness, utility and non-obviousness.
Use of the phrases and/or terms such as but not limited to "a first
embodiment," "a further embodiment," "an alternate embodiment," "one embodiment," "an
embodiment," "multiple embodiments," "some embodiments," "other embodiments,"
"further embodiment", "furthermore embodiment", "additional embodiment" or variants
thereof do NOT necessarily refer to the same embodiments. Unless otherwise specified, one
or more particular features and/or elements described in connection with one or more
embodiments may be found in one embodiment, or may be found in more than one
embodiment, or may be found in all embodiments, or may be found in no embodiments.
Although one or more features and/or elements may be described herein in the context of
only a single embodiment, or alternatively in the context of more than one embodiment, or
further alternatively in the context of all embodiments, the features and/or elements may
instead be provided separately or in any appropriate combination or not at all. Conversely,
any features and/or elements described in the context of separate embodiments may
alternatively be realized as existing together in the context of a single embodiment.
Any particular and all details set forth herein are used in the context of some
embodiments and therefore should NOT be necessarily taken as limiting factors to the
attached claims. The attached claims and their legal equivalents can be realized in the context
of embodiments other than the ones used as illustrative examples in the description below.
Figures 1a to 1b illustrate exemplary method 100 implemented in a computing
device for prioritising file transfer or process based on user interactions or user inputs and
consequently dynamically allocating a network bandwidth amongst a plurality of processes
concurrently running on the computing device, according to an embodiment of present
invention. In said embodiment, the method comprises: depicting 101 the plurality of
processes concurrently running on the computing device, the processes being either
downloading a file or uploading a file; receiving 102 a user input, the user input
corresponding to a first set of processes from the plurality of processes; and allocating 103
network bandwidth to each of the processes in the first set of processes based on the user
input.
In a further embodiment, the method 100 further comprises: allocating 104
remaining network bandwidth amongst remaining of the plurality of processes.
In a further embodiment, the method 100 further comprises: reordering 105
the depiction of the plurality of processes based on the allocated network bandwidth.
In a further embodiment, the method 100 further comprises: analysing 106 the
user input.
In a further embodiment, the analysis 106 of the user input comprises:
determining 107 at least one characteristic of the user input; and assigning 108 a priority to
each of the processes in the plurality of processes based on the at least one characteristic such
that a maximum bandwidth is allocated to a process having a highest priority assigned thereto
and a minimum bandwidth is allocated to a process having a lowest priority assigned thereto.
In a further embodiment, the first set of the processes is a non-null set.
In a further embodiment, the plurality of processes are depicted in a window.
In a further embodiment, the plurality of processes are depicted in a
notification window.
In a further embodiment, the user input is one of a touch gesture input, a non-touch
gesture input, and an input from an input device communicatively coupled to the
computing device.
In a further embodiment, the file is one of an audio file, a video file, an image
file, a data file, and an application.
Figure 2 illustrates exemplary method 200 implemented in a computing device
for prioritising file transfer based on user interactions, in accordance with various
embodiments of present invention. In such embodiment, a network bandwidth is dynamically
allocated amongst a first set of files being downloaded or uploaded by the computing device
via an application. In said embodiment, the method 200 comprises: providing 201 a window
depicting the application; depicting 202 in the window the first set of files that are
concurrently being downloaded or uploaded; receiving 203 at least one user input on the
window, the at least one user input corresponding to at least one file from the first set; and
allocating 204 network bandwidth to the at least one file based on the at least one user input.
Figure 3 illustrates an exemplary computing device 300 prioritising file
transfer based on user interactions.
In one embodiment the computing device 300 dynamically allocates a network
bandwidth amongst a concurrently running plurality of processes based on the prioritization.
In said embodiment, said computing device 300 comprises: a display unit 301 to depict the
concurrently running plurality of processes, the processes being either downloading a file or
uploading a file; a receiving unit 302 to receive a user input, the user input corresponding to a
first set of processes from the plurality of processes; and a network allocating unit 303 to
allocate the network bandwidth to each of the process in the first set of the processes based
on the user input.
In a further embodiment, the computing device 300 further comprises: an
analysing unit 304 to analyse the user input.
In another embodiment, said computing device 300 dynamically allocates a
network bandwidth amongst a first set of files being downloaded or uploaded via an
application based on the prioritization. In said embodiment, said computing device 300
comprises: a display unit 301 to: depict the application in a window; and depict the first set of files in the window; a receiving unit 302 to receive at least one user input through the
window, the at least one user input corresponding to at least one file from the first set; and a
network allocating unit 303 to allocate a network bandwidth to the at least one file based on
the at least one user input.
It would be understood that the computing device 300, the receiving unit 302,
the network allocating unit 303, and the analysing unit 304 may include various software
components or modules as necessary for implementing the invention.
Figures 4 to 18 illustrate prioritising two or more file transfers based on one
or more user interactions, in accordance with various embodiments of present invention.
In accordance with the present invention, the file transfer, hereinafter referred
to as process, is related to downloading a file or uploading a file on a computing device.
Examples of the file include an image file, an audio file, a video file, a data file, and an
application. Examples of the computing device include desktop, notebook, tablet, smart
phone, and laptop.
Further, in accordance with the present invention, two or more processes
concurrently run on the computing device. The two or more processes are depicted in a
window on the computing device. Examples of the window include an application window,
notification window, notification panel, notification bar, notification drawer, a chat window,
and any other window configured to depict the two or more processes.
Further, the user interaction or user input, in accordance with one embodiment
of the present invention, is a swipe based user input such that each swipe based user input
corresponds to one process only. The user input can be provided through various methods. In
one embodiment, the user input can be provided as a touch based gesture input. In another
embodiment, the user input can be provided as a non-touch based gesture input. In yet
another embodiment, the user input can be provided as an input from an input device
communicatively coupled to the computing device. Examples of the input device include a
stylus, an electronic pen, and a mouse.
Furthermore, the user input is associated with one or more characteristics such
as a direction, a speed, and a sequence in which the user input is received. A direction
associated with the user input indicates whether to prioritize or de-prioritize a current process
and accordingly indicates allocation of network bandwidth. In an embodiment, a swipe in a
right direction assigns a highest priority to a process and indicates allocation of a maximum
network bandwidth out of total available network bandwidth to the process. Similarly, a
swipe in a left direction assigns a lowest priority to a process and indicates allocation of a minimum network bandwidth out of total available network bandwidth.
Further, a speed associated with the user input is indicative of a bandwidth gap
to be provided between two processes receiving the same user input. In an embodiment, a
faster swipe or a flick in a right direction for a first process and a slow swipe in the right
direction for a second process, results in allocation of network bandwidth to the first process
and the second process such that a bandwidth gap of predefined value is provided between
the allocated network bandwidths.
Furthermore, a sequence in which the user input is received indicates a
sequence of assigning priority. In an embodiment, a first swipe in a right direction for a
second process and a second swipe in a right direction for a first process, assigns a highest
priority to the second process and a second highest priority to the first process. Similarly, a
first swipe in a left direction for a first process and a second swipe in the left direction for a
second process, assigns a lowest priority to the first process and a second lowest priority to
the second process.
Based on the one or more characteristics of the user input, a priority is
assigned to a current process corresponding to which the user input is received. Accordingly,
a predefined mapping of the user input, the one or more characteristics, and the particular
action i.e. prioritizing or deprioritizing a process is stored in a memory of the computing
device.
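The predefined mapping of a user input and its characteristics to a prioritizing or de-prioritizing action, as just described, might be stored as a simple lookup table. The following is a minimal sketch under assumed names (`PREDEFINED_MAPPING`, `action_for`); it covers only the swipe-direction characteristic:

```python
# Hypothetical sketch of the predefined mapping stored in memory: a swipe's
# direction decides whether the corresponding process is prioritized or
# de-prioritized. The table and function names are illustrative only.

PREDEFINED_MAPPING = {
    ("swipe", "right"): "prioritize",    # right swipe -> highest priority
    ("swipe", "left"): "deprioritize",   # left swipe  -> lowest priority
}

def action_for(input_type, direction):
    """Return the action mapped to a user input and its direction, if any."""
    return PREDEFINED_MAPPING.get((input_type, direction))

print(action_for("swipe", "right"))
```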
Further, a predefined distribution matrix is defined for allocation of the
network bandwidth based on a priority assigned to a process. In an embodiment, allocation of
the network bandwidth is performed upon receiving user input. In an embodiment, the
predefined distribution matrix defines allocation of maximum network bandwidth to a
process with highest priority and minimum network bandwidth to a process with lowest
priority. In addition, the highest priority, the lowest priority, and subsequent priorities are
determined based on a number of processes concurrently running on the computing device.
For example, for N processes, a first user input in right direction received for a first process
may assign a highest priority to the first process, a second user input in right direction
received for a third process may assign a second highest priority to the third process, and so
on. Similarly, a third user input in left direction received for a second process may assign a
Nth priority to the second process, a fourth user input in left direction received for a fourth
process may assign a N-1 priority to the fourth process, and so on. The predefined
distribution matrix is stored in a memory of the computing device.
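The sequence-based assignment for N processes described above, where right swipes fill priorities from the top (1, 2, ...) and left swipes from the bottom (N, N-1, ...), could be sketched as follows; the helper name `assign_priorities` and the data layout are assumptions, not part of the specification:

```python
# Illustrative priority assignment for N concurrent processes: right swipes
# take priorities 1, 2, ... in the order received; left swipes take
# priorities N, N-1, ... in the order received. Names are hypothetical.

def assign_priorities(n_processes, swipes):
    """swipes: list of (process_index, direction) in the order received."""
    priorities = {}
    next_high, next_low = 1, n_processes
    for process, direction in swipes:
        if direction == "right":
            priorities[process] = next_high
            next_high += 1
        elif direction == "left":
            priorities[process] = next_low
            next_low -= 1
    return priorities

# Mirrors the N-process example in the text: right swipes on the first and
# third processes, then left swipes on the second and fourth processes.
print(assign_priorities(4, [(1, "right"), (3, "right"),
                            (2, "left"), (4, "left")]))
```

In this run, the first process gets priority 1, the third gets 2, the second gets the Nth (lowest) priority 4, and the fourth gets priority N-1 = 3, as in the example above.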
Figures 4 to 6 illustrate prioritising of two processes based on a user input in
accordance with one embodiment of present invention.
Figure 4 illustrates two processes concurrently running on a computing device
400 in accordance with an embodiment of the present invention. The computing device 400
includes hardware components (not shown in the figure) as described in reference to Figure 3.
Figure 4a illustrates a display unit 401 of the computing device 400. The
display unit 401 includes a window 402 depicting two processes 403a and 403b running
concurrently on the computing device 400. The processes 403a and 403b relate to transferring
of a file such as downloading the file and uploading the file. The window 402 further depicts
the processes 403a and 403b by way of an icon representing the process, a text indicating a
type of the process, and a status bar indicating a progress of the process.
Figure 4b illustrates a network bandwidth allocated to the processes 403a and
403b. It would be understood that the allocation of the network bandwidth would not be
depicted on the display unit 401. The allocation of the network bandwidth is depicted in the
figure for the sake of providing a better understanding of the present invention.
Accordingly, when processes related to downloading a file and uploading a
file are concurrently running on a computing device, a network bandwidth is equally
distributed amongst the processes initially. The network bandwidth allocated to such
processes is independent of a network bandwidth allocated to other applications or processes
concurrently running on the computing device. For example, an email application and a chat
application are concurrently running on the computing device. When images are being
downloaded through the chat application, network bandwidth allocated to the downloading of
the images in the chat application is independent of the network bandwidth allocated to the
email application and the chat application.
Thus, Figure 4b represents a bar graph illustrating equal distribution of the
network bandwidth to each of the processes 403a and 403b. A horizontal axis of the bar
graph represents concurrently running processes and a vertical axis of the bar graph
represents percentage of network bandwidth allocated to each of the concurrently running
processes. As such, bar 404a in the bar graph represents 50% of network bandwidth allocated
to the process 403a and bar 404b in the bar graph represents 50% of network bandwidth
allocated to the process 403b.
Figure 5 illustrates prioritizing one process out of two processes concurrently
running on a computing device 500 in accordance with an embodiment of the present
invention. The computing device 500 includes hardware components (not shown in the
figure) as described in reference to Figure 3.
Referring to Figure 5a, the computing device 500 includes a display unit 501
for displaying the two processes concurrently running on the computing device 500, as
described in reference to Figure 4a. The display unit 501 includes a window 502 depicting
two processes 503a and 503b running concurrently on the computing device 500. The
processes 503a and 503b correspond to the processes 403a and 403b described in reference to
Figure 4a.
Referring to Figures 3 and 5a, the receiving unit 302 in the computing device
500 receives a user input 504 for process 503b. In the present embodiment, the user input 504
is a swipe based user input and is indicative of a swipe in a right direction. As such, a highest
priority is assigned to the process 503b based on the predefined mapping, as described earlier.
Consequently, a lowest priority is assigned to the process 503a. Accordingly, the network
allocating unit 303 allocates a maximum network bandwidth to the process 503b and
minimum network bandwidth to the process 503a based on the predefined distribution matrix,
as described above.
Figure 5b represents a bar graph illustrating distribution of the network
bandwidth to each of the processes 503a and 503b upon prioritization of the processes based
on the user input 504. A horizontal axis of the bar graph represents concurrently running
processes and a vertical axis of the bar graph represents percentage of network bandwidth
allocated to each of the concurrently running processes. Accordingly, bar 505b in the bar
graph represents 80% of network bandwidth allocated to the process 503b and bar 505a in the
bar graph represents 20% of network bandwidth allocated to the process 503a upon receiving
the user input 504 based on the predefined distribution matrix. It would be understood that
the allocation of the network bandwidth would not be depicted on the display unit 501. The
allocation of the network bandwidth is depicted in the figure for the sake of providing a
better understanding of the present invention.
Further, upon prioritization of the processes 503a and 503b, the analysing unit
304 reorders the depiction of the processes 503a and 503b according to the allocated network
bandwidth. Figure 5c illustrates the reordered processes in the display unit 501 such that
process 503b is depicted above the process 503a in the window 502.
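The 80/20 redistribution and reordering shown in Figures 5b and 5c can be sketched as below; the 80/20 shares come from Figure 5b, while the function name, the matrix layout, and everything else are illustrative assumptions:

```python
# Sketch of the two-process case in Figures 5b/5c: a right swipe gives the
# swiped process 80% of the bandwidth and the other process 20%, then the
# depiction is reordered by allocated bandwidth. Names are illustrative.

DISTRIBUTION_MATRIX = {2: [80, 20]}  # shares by priority rank, 2 processes

def prioritise_and_reorder(processes, swiped):
    """Allocate shares by priority rank, then reorder by allocation."""
    ordered = [swiped] + [p for p in processes if p != swiped]
    shares = DISTRIBUTION_MATRIX[len(processes)]
    allocation = dict(zip(ordered, shares))
    # Reorder the depiction: highest allocation first, as in Figure 5c.
    depiction = sorted(processes, key=lambda p: -allocation[p])
    return allocation, depiction

alloc, order = prioritise_and_reorder(["503a", "503b"], "503b")
print(alloc)
print(order)
```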
Figure 6 illustrates deprioritizing one process out of two processes
concurrently running on a computing device 600 in accordance with an embodiment of the
present invention. The computing device 600 includes hardware components (not shown in
the figure) as described in reference to Figure 3.
Referring to Figure 6a, the computing device 600 includes a display unit 601
for displaying the two processes concurrently running on the computing device 600, as
described in reference to Figure 4a. The display unit 601 includes a window 602 depicting
two processes 603a and 603b running concurrently on the computing device 600. The
processes 603a and 603b correspond to the processes 403a and 403b described in reference to
Figure 4a.
Referring to Figures 3 and 6a, the receiving unit 302 in the computing device
600 receives a user input 604 for process 603b. In the present embodiment, the user input 604
is a swipe based user input and is indicative of a swipe in a left direction. As such, a lowest
priority is assigned to the process 603b based on the predefined mapping, as described earlier.
Consequently, a highest priority is assigned to the process 603a. Accordingly, the network
allocating unit 303 allocates a maximum network bandwidth to the process 603a and
minimum network bandwidth to the process 603b based on the predefined distribution
matrix, as described above.
Figure 6b represents a bar graph illustrating distribution of the network
bandwidth to each of the processes 603a and 603b upon prioritization of the processes based
on the user input 604. A horizontal axis of the bar graph represents concurrently running
processes and a vertical axis of the bar graph represents percentage of network bandwidth
allocated to each of the concurrently running processes. Accordingly, bar 605b in the bar
graph represents 20% of network bandwidth allocated to the process 603b and bar 605a in the
bar graph represents 80% of network bandwidth allocated to the process 603a upon receiving
the user input 604. It would be understood that the allocation of the network bandwidth
would not be depicted on the display unit 601. The allocation of the network bandwidth is
depicted in the figure for the sake of providing a better understanding of the present
invention.
Further, upon prioritization of the processes 603a and 603b, the analysing unit
304 reorders the depiction of the processes 603a and 603b according to the allocated network
bandwidth.
Figures 7 to 9 illustrate prioritising of four processes based on a user input in
accordance with one embodiment of present invention.
Figure 7 illustrates four processes concurrently running on a computing device
700 in accordance with an embodiment of the present invention. The computing device 700
includes hardware components (not shown in the figure) as described in reference to Figure 3.
Figure 7a illustrates a display unit 701 of the computing device 700. The
display unit 701 includes a window 702 depicting four processes 703a, 703b, 703c, and 703d
running concurrently on the computing device 700. The processes 703a, 703b, 703c, and
703d relate to transferring of a file such as downloading the file and uploading the file. The
window 702 further depicts the processes 703a, 703b, 703c, and 703d by way of an icon
representing the process, a text indicating a type of the process, and a status bar indicating a
progress of the process.
Figure 7b illustrates a network bandwidth allocated to the processes 703a,
703b, 703c, and 703d. It would be understood that the allocation of the network bandwidth
would not be depicted on the display unit 701. The allocation of the network bandwidth is
depicted in the figure for the sake of providing a better understanding of the present
invention.
Initially, when processes 703a, 703b, 703c, and 703d are concurrently running
on the computing device 700, a network bandwidth is equally distributed amongst the
processes 703a, 703b, 703c, and 703d. Thus, Figure 7b represents a bar graph illustrating
equal distribution of the network bandwidth to each of the processes 703a, 703b, 703c, and
703d. A horizontal axis of the bar graph represents concurrently running processes and a
vertical axis of the bar graph represents percentage of network bandwidth allocated to each of
the concurrently running processes. Accordingly, bar 704a in the bar graph represents 25% of
network bandwidth allocated to the process 703a, bar 704b in the bar graph represents 25% of
network bandwidth allocated to the process 703b, bar 704c in the bar graph represents 25% of
network bandwidth allocated to the process 703c, and bar 704d in the bar graph represents
25% of network bandwidth allocated to the process 703d.
Figure 8 illustrates prioritizing one process out of four processes concurrently
running on a computing device 800 in accordance with an embodiment of the present
invention. The computing device 800 includes hardware components (not shown in the
figure) as described in reference to Figure 3.
Referring to Figure 8a, the computing device 800 includes a display unit 801
for displaying the four processes concurrently running on the computing device 800, as
described in reference to Figure 7a. The display unit 801 includes a window 802 depicting
four processes 803a, 803b, 803c, and 803d running concurrently on the computing device
800. The processes 803a, 803b, 803c, and 803d correspond to the processes 703a, 703b,
703c, and 703d described in reference to Figure 7a.
Referring to Figure 3 and 8a, the receiving unit 302 in the computing device
800 receives a user input 804 for process 803c. In the present embodiment, the user input 804
is a swipe based user input and is indicative of a swipe in a right direction. As such, a highest
priority is assigned to the process 803c based on the predefined mapping, as described earlier.
Consequently, low priorities are assigned to the processes 803a, 803b, and 803d.
Accordingly, the network allocating unit 303 allocates a maximum network bandwidth to the
process 803c and lower network bandwidths to the processes 803a, 803b, and 803d based on
the predefined distribution matrix, as described above. In an embodiment, the network
allocating unit 303 equally distributes remaining bandwidth between the processes 803a,
803b, and 803d upon allocating maximum network bandwidth to the process 803c.
Figure 8b represents a bar graph illustrating distribution of the network
bandwidth to each of the processes 803a, 803b, 803c, and 803d upon prioritization of the
processes based on the user input 804. A horizontal axis of the bar graph represents
concurrently running processes and a vertical axis of the bar graph represents percentage of
network bandwidth allocated to each of the concurrently running processes. Accordingly, bar
805c in the bar graph represents 70% of network bandwidth allocated to the process 803c.
Remaining 30% of the network bandwidth is equally distributed between the processes 803a,
803b, and 803d. As such, bar 805a in the bar graph represents 10% of network bandwidth
allocated to the process 803a, bar 805b in the bar graph represents 10% of network bandwidth
allocated to the process 803b, and bar 805d in the bar graph represents 10% of network
bandwidth allocated to the process 803d. It would be understood that the allocation of the
network bandwidth would not be depicted on the display unit 801. The allocation of the
network bandwidth is depicted in the figure for the sake of providing a better understanding
of the present invention.
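The allocation just described can be sketched as follows. This is an illustrative fragment; the function name and the treatment of the 70% figure as a parameter are assumptions based on the example values in Figure 8b.

```python
def prioritize(processes, boosted, boosted_share=70):
    """Allocate boosted_share percent to the highest-priority process and
    split the remaining bandwidth equally among the other processes."""
    rest = (100 - boosted_share) / (len(processes) - 1)
    return {p: (boosted_share if p == boosted else rest) for p in processes}

# Right swipe on process 803c, as in Figure 8b: 70% to 803c, 10% to each other.
allocation = prioritize(["803a", "803b", "803c", "803d"], boosted="803c")
```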
Further, upon prioritization of the processes, the analysing unit 304 reorders
the depiction of the processes 803a, 803b, 803c, and 803d according to the allocated network
bandwidth. Figure 8c illustrates the reordered processes in the display unit 801 such that
process 803c is depicted at the top of the window 802 followed by processes 803a, 803b, and
803d.
Figure 9 illustrates deprioritizing one process out of four processes
concurrently running on a computing device 900 in accordance with an embodiment of the
present invention. The computing device 900 includes hardware components (not shown in
the figure) as described in reference to Figure 3.
Referring to Figure 9a, the computing device 900 includes a display unit 901
for displaying the four processes concurrently running on the computing device 900, as
described in reference to Figure 7a. The display unit 901 includes a window 902 depicting
four processes 903a, 903b, 903c, and 903d running concurrently on the computing device
900. The processes 903a, 903b, 903c, and 903d correspond to the processes 703a, 703b,
703c, and 703d described in reference to Figure 7a.
Referring to Figure 3 and 9a, the receiving unit 302 in the computing device
900 receives a user input 904 for process 903c. In the present embodiment, the user input 904
is a swipe based user input and is indicative of a swipe in a left direction. As such, a lowest
priority is assigned to the process 903c based on the predefined mapping, as described earlier.
Consequently, high priorities are assigned to the processes 903a, 903b, and 903d.
Accordingly, the network allocating unit 303 allocates a minimum network bandwidth to the
process 903c and higher network bandwidths to the processes 903a, 903b, and 903d based on
the predefined distribution matrix, as described above. In an embodiment, the network
allocating unit 303 equally distributes remaining bandwidth between the processes 903a,
903b, and 903d upon allocating minimum network bandwidth to the process 903c.
Figure 9b represents a bar graph illustrating distribution of the network
bandwidth to each of the processes 903a, 903b, 903c, and 903d upon prioritization of the
processes based on the user input 904. A horizontal axis of the bar graph represents
concurrently running processes and a vertical axis of the bar graph represents percentage of
network bandwidth allocated to each of the concurrently running processes. Accordingly, bar
905c in the bar graph represents 10% of network bandwidth allocated to the process 903c.
Remaining 90% of the network bandwidth is equally distributed between the processes 903a,
903b, and 903d. As such bar 905a in the bar graph represents 30% of network bandwidth
allocated to the process 903a, bar 905b in the bar graph represents 30% of network bandwidth
allocated to the process 903b and bar 905d in the bar graph represents 30% of network
bandwidth allocated to the process 903d. It would be understood that the allocation of the
network bandwidth would not be depicted on the display unit 901. The allocation of the
network bandwidth is depicted in the figure for the sake of providing a better understanding
of the present invention.
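Symmetrically, the deprioritization above can be sketched with a hypothetical helper that pins the swiped process to the minimum share; the 10% value is taken from the example in Figure 9b.

```python
def deprioritize(processes, lowered, lowered_share=10):
    """Allocate lowered_share percent to the lowest-priority process and
    split the remaining bandwidth equally among the other processes."""
    rest = (100 - lowered_share) / (len(processes) - 1)
    return {p: (lowered_share if p == lowered else rest) for p in processes}

# Left swipe on process 903c, as in Figure 9b: 10% to 903c, 30% to each other.
allocation = deprioritize(["903a", "903b", "903c", "903d"], lowered="903c")
```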
Further, upon prioritization of the processes 903a, 903b, 903c, and 903d, the
analysing unit 304 reorders the depiction of the processes 903a, 903b, 903c, and 903d
according to the allocated network bandwidth. Figure 9c illustrates the reordered processes in
the display unit 901 such that process 903a is depicted at the top of the window 902 followed
by processes 903b, 903d, and 903c.
Figures 10 and 11 illustrate prioritising of four processes based on multiple
user inputs in accordance with one embodiment of the present invention.
Figure 10 illustrates prioritising of four processes concurrently running on a
computing device 1000 in accordance with an embodiment of the present invention. The
computing device 1000 includes hardware components (not shown in the figure) as described
in reference to Figure 3.
Referring to Figure 10a, the computing device 1000 includes a display unit
1001 for displaying the four processes concurrently running on the computing device 1000,
as described in reference to Figure 7a. The display unit 1001 includes a window 1002
depicting four processes 1003a, 1003b, 1003c, and 1003d running concurrently on the
computing device 1000. The processes 1003a, 1003b, 1003c, and 1003d correspond to the
processes 703a, 703b, 703c, and 703d described in reference to Figure 7a.
Referring to Figures 3 and 10a, the receiving unit 302 in the computing device
1000 receives multiple user inputs 1004a, 1004b, and 1005 for prioritizing the processes
1003a, 1003b, 1003c, and 1003d. In the present embodiment, the user inputs 1004a and 1004b
are swipe based user inputs and are indicative of a swipe in a right direction. In said
embodiment, the user input 1005 is a swipe based user input and is indicative of a swipe in a
left direction. As indicated in the figure, the user inputs 1004a, 1004b, and 1005 are received
in a sequence such that the user input 1004a is first received for the process 1003b.
Thereafter, the user input 1004b is received for the process 1003a. Finally, the user input
1005 is received for the process 1003d. As described earlier, the assignment of priority and
the distribution of the network bandwidth are performed after receiving each user input.
In said embodiment, upon receiving the user input 1004a, the analysing unit
304 determines a direction and a sequence associated with the user input 1004a. Upon
determining that the direction associated with the user input 1004a is a right direction and the
sequence associated with the user input 1004a is 1 in the right direction, the analysing unit
304 assigns a highest priority to the process 1003b based on the predefined mapping.
Accordingly, the network allocating unit 303 allocates a maximum network bandwidth to the
process 1003b based on the predefined distribution matrix. In an example, 70% of network
bandwidth is allocated to the process 1003b. Remaining 30% of the network bandwidth is
equally distributed between the processes 1003a, 1003c, and 1003d.
Upon receiving the user input 1004b, the analysing unit 304 determines a
direction and a sequence associated with the user input 1004b. Upon determining that the
direction associated with the user input 1004b is a right direction and the sequence of the user
input 1004b is 2 in the right direction, the analysing unit 304 assigns a second highest priority
to the process 1003a based on the predefined mapping. Accordingly, the network allocating
unit 303 reallocates the network bandwidth based on the predefined distribution matrix such
that a first maximum network bandwidth is allocated to the process 1003b and a second
maximum network bandwidth is allocated to the process 1003a, and remaining bandwidth is
equally distributed among the remaining processes. In an example, 80% of network
bandwidth is allocated to the processes 1003b and 1003a. Remaining 20% of the network
bandwidth is equally distributed between the processes 1003c and 1003d.
Upon receiving the user input 1005, the analysing unit 304 determines a
direction and a sequence associated with the user input 1005. Upon determining that the
direction associated with the user input 1005 is a left direction and the sequence of the user
input 1005 is 1 (represented as 3 in the figure to avoid ambiguity) in the left direction, the
analysing unit 304 assigns a lowest priority to the process 1003d based on the predefined
mapping. Accordingly, the network allocating unit 303 reallocates the network bandwidth
based on the predefined distribution matrix such that a first maximum network bandwidth is
allocated to the process 1003b, a second maximum network bandwidth is allocated to the
process 1003a, and a minimum network bandwidth is allocated to the process 1003d.
In said embodiment, the receiving unit 302 receives no further input. In
another embodiment, the receiving unit 302 receives a third user input (not shown in the
figure) in the right direction. In either of the embodiments, the analysing unit 304 assigns a
third maximum priority to the process 1003c based on the predefined mapping. In yet another
embodiment, the receiving unit 302 receives a third user input (not shown in the figure) in the
left direction. In such embodiment, the analysing unit 304 assigns a second minimum priority
to the process 1003c based on the predefined mapping. Accordingly, the network allocating
unit 303 reallocates the network bandwidth based on the predefined distribution matrix.
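The sequence-and-direction mapping described above can be sketched as follows. The function and argument names are hypothetical, and filling the middle ranks with the unswiped processes is an assumption consistent with the embodiments described here.

```python
def rank_processes(processes, right_swipes, left_swipes):
    """Order processes by priority, highest first. Right swipes occupy the
    top ranks in the order received; left swipes occupy the bottom ranks in
    the order received; unswiped processes fill the middle ranks."""
    untouched = [p for p in processes
                 if p not in right_swipes and p not in left_swipes]
    return list(right_swipes) + untouched + list(reversed(left_swipes))

# Inputs 1004a (right, on 1003b), 1004b (right, on 1003a), 1005 (left, on 1003d).
order = rank_processes(["1003a", "1003b", "1003c", "1003d"],
                       right_swipes=["1003b", "1003a"],
                       left_swipes=["1003d"])
# order matches the reordering shown in Figure 10b.
```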
Further, upon prioritization of the processes 1003a, 1003b, 1003c, and 1003d,
the analysing unit 304 reorders the depiction of the processes 1003a, 1003b, 1003c, and
1003d according to the allocated network bandwidth. Figure 10b illustrates the reordered
processes in the display unit 1001 such that process 1003b is depicted at the top of the
window 1002 followed by processes 1003a, 1003c, and 1003d.
Figure 11 represents bar graphs illustrating distribution of the network
bandwidth to each of the processes 1003a, 1003b, 1003c, and 1003d based on the multiple
user inputs 1004a, 1004b, and 1005 and the predefined distribution matrix. A vertical axis of
a bar graph represents percentage of network bandwidth and a horizontal axis of the bar
graph represents processes. It would be understood that the allocation of the network
bandwidth would not be depicted on the display unit 1001. The allocation of the network
bandwidth is depicted in the figure for the sake of providing a better understanding of the
present invention.
In said embodiment, the predefined distribution matrix defines a distribution of
network bandwidth amongst multiple processes such that a process with the lowest priority is
set for a sequential download after all other processes are completed and therefore the process
is suspended. Accordingly, Figure 11a represents the distribution of the network bandwidth
between the processes 1003a, 1003b, and 1003c according to the predefined distribution
matrix. Bar 1101b in the bar graph represents 50% of network bandwidth allocated to the
process 1003b, bar 1101a in the bar graph represents 30% of network bandwidth allocated to
the process 1003a, and bar 1101c in the bar graph represents 20% of network bandwidth
allocated to the process 1003c. As the process 1003d is suspended, bar 1101d in the bar graph
represents 0% of network bandwidth allocated to the process 1003d.
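Taking the ranked list as input, that distribution can be sketched as follows; the 50/30/20 shares are taken from the example figures, and the helper itself is hypothetical.

```python
def allocate_with_suspension(ranked, shares=(50, 30, 20)):
    """Allocate fixed shares to the top-ranked processes and suspend the
    lowest-priority process with 0% until the others complete."""
    allocation = {process: 0 for process in ranked}
    for process, share in zip(ranked[:-1], shares):
        allocation[process] = share
    return allocation

# Ranking from Figure 10: 1003b, 1003a, 1003c prioritized; 1003d suspended.
allocation = allocate_with_suspension(["1003b", "1003a", "1003c", "1003d"])
# 1003b -> 50, 1003a -> 30, 1003c -> 20, 1003d -> 0, as in Figure 11a.
```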
Further as described earlier, upon completion of one of the processes, the
network bandwidth is reallocated among remaining active processes. In said embodiment, the
process 1003b is completed first due to various factors such as small size of a file being
uploaded or downloaded and the allotment of maximum network bandwidth. Accordingly,
the network bandwidth is reallocated among remaining active processes, i.e., processes 1003a
and 1003c, according to the predefined distribution matrix. Referring to Figure 11b, bar
1102a in the bar graph represents 60% of network bandwidth allocated to the process 1003a,
and bar 1102c in the bar graph represents 40% of network bandwidth allocated to the process
1003c. As the process 1003d is suspended, bar 1102d in the bar graph represents 0% of
network bandwidth allocated to the process 1003d.
Furthermore, as described earlier, upon completion of all of the active
processes, the suspended processes are activated and the network bandwidth is reallocated
among the reactivated processes. In said embodiment, the processes 1003a and 1003c are
completed due to various factors. Accordingly, the process 1003d is activated and the
network bandwidth is reallocated to the process 1003d according to the predefined
distribution matrix. Referring to Figure 11c, bar 1103d in the bar graph represents 100% of
network bandwidth allocated to the process 1003d as all the remaining processes 1003a,
1003b, and 1003c are completed, represented by bars 1103a, 1103b, and 1103c.
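The reallocation figures above (50/30/20 becoming 60/40, then 100%) are consistent with proportionally rescaling the shares of the still-active processes. The specification does not state the rule explicitly, so the following is a sketch under that assumption, with a hypothetical function name.

```python
def reallocate_on_completion(allocation, completed):
    """Remove a completed process and rescale the remaining active shares
    proportionally to sum to 100%. If no active process remains, wake the
    suspended (0%) processes and split the bandwidth among them."""
    remaining = {p: s for p, s in allocation.items() if p != completed}
    active = {p: s for p, s in remaining.items() if s > 0}
    if not active:
        suspended = list(remaining)
        return {p: 100 / len(suspended) for p in suspended} if suspended else {}
    total = sum(active.values())
    rescaled = {p: s * 100 / total for p, s in active.items()}
    rescaled.update({p: 0 for p, s in remaining.items() if s == 0})
    return rescaled

# 50/30/20/0 with 1003b complete: 1003a -> 60, 1003c -> 40, 1003d stays at 0.
after_b = reallocate_on_completion(
    {"1003b": 50, "1003a": 30, "1003c": 20, "1003d": 0}, completed="1003b")
```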
Figures 12 and 13 illustrate prioritising of four processes concurrently running
on computing devices 1200 and 1300, respectively, based on multiple user inputs in
accordance with another embodiment of the present invention. The computing devices
1200 and 1300 include hardware components (not shown in the figure) as described in
reference to Figure 3.
Referring to Figure 12a, the computing device 1200 includes a display unit
1201 for displaying the four processes concurrently running on the computing device 1200.
The display unit 1201 includes a window 1202 depicting four processes 1203a, 1203b, 1203c,
and 1203d running concurrently on the computing device 1200.
Referring to Figure 3 and 12a, the receiving unit 302 in the computing device
1200 receives multiple user inputs 1204 and 1205 for prioritizing the processes 1203a, 1203b,
1203c, and 1203d. In the present embodiment, the user inputs 1204 and 1205 are swipe
based user inputs and are indicative of a swipe in a right direction. As indicated in the figure,
the user input 1204 is first received for the process 1203a. Thereafter, the user input 1205 is
received for the process 1203c. As described earlier, the assignment of priority and the
distribution of the network bandwidth are performed after receiving each user input
based on the predefined mapping and the predefined distribution matrix.
In said embodiment, upon receiving the user input 1204, the analysing unit
304 determines a direction, a speed, and a sequence associated with the user input 1204.
Upon determining that the direction associated with the user input 1204 is a right direction,
the speed associated with the user input 1204 is a fast swipe, and the sequence associated
with the user input 1204 is 1 in the right direction, the analysing unit 304 assigns a highest
priority to the process 1203a based on the predefined mapping. Accordingly, the network
allocating unit 303 allocates a maximum network bandwidth to the process 1203a based on
the predefined distribution matrix.
Upon receiving the user input 1205, the analysing unit 304 determines a
direction, a speed, and a sequence associated with the user input 1205. Upon determining
that the direction associated with the user input 1205 is a right direction, the speed associated
with the user input 1205 is a slow swipe, and the sequence of the user input 1205 is 2 in the
right direction, the analysing unit 304 assigns a second highest priority to the process 1203c
based on the predefined mapping.
Further, in said embodiment, the analysing unit 304 determines that the speed
of the user input 1204 is higher than the speed of the user input 1205. Such a difference
between the speeds of the user inputs indicates that a highest priority and a highest bandwidth
is to be assigned to a process receiving a first user input with higher speed than a process
receiving a second user input in same direction with lesser speed. As a result, a higher
bandwidth gap is provided between a network bandwidth allocated to the process receiving
the first user input with higher speed and a network bandwidth allocated to the process
receiving the second user input in same direction with lesser speed. Accordingly, the network
allocating unit 303 reallocates the network bandwidth based on the predefined distribution
matrix by allocating a first maximum network bandwidth to the process 1203a and a second
maximum network bandwidth to the process 1203c such that a higher bandwidth gap of
predefined value is provided between the allocated network bandwidths. In an example, 60%
of bandwidth gap is provided between the network bandwidth allocated to the processes
1203a and 1203c. Remaining network bandwidth is distributed between the processes 1203b
and 1203d, as described earlier.
Figure 12b represents a bar graph illustrating distribution of the network
bandwidth to the processes 1203a and 1203c upon prioritization of the processes based on the
multiple user inputs 1204 and 1205, and the predefined distribution matrix. A vertical axis of
a bar graph represents percentage of network bandwidth and a horizontal axis of the bar
graph represents processes. It would be understood that the allocation of the network
bandwidth would not be depicted on the display unit 1201. The allocation of the network
bandwidth is depicted in the figure for the sake of providing a better understanding of the
present invention.
Accordingly, bar 1206a in the bar graph represents 80% of network bandwidth
allocated to the process 1203a and bar 1206c in the bar graph represents 20% of network
bandwidth allocated to the process 1203c. As can be observed, a bandwidth gap of 60% is
provided between the bars 1206a and 1206c. For the sake of clarity and brevity, the allocation
of network bandwidth to the processes 1203b and 1203d is not depicted in the figure.
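The speed-dependent gap can be sketched for the two swiped processes as follows. The gap values (60% for a fast-then-slow pair of swipes, 20% for similar speeds, per Figures 12b and 13b) are illustrative, the function name is hypothetical, and the allocation to the unswiped processes is left out, as in the figures.

```python
def gap_allocation(first, second, gap):
    """Split bandwidth between the two right-swiped processes so that the
    first-swiped process leads the second by the given percentage gap."""
    first_share = (100 + gap) / 2
    return {first: first_share, second: first_share - gap}

# Fast first swipe, slow second swipe: a wide 60% gap (Figure 12b).
fast_slow = gap_allocation("1203a", "1203c", gap=60)
# Similar swipe speeds: a narrower 20% gap (Figure 13b).
equal_speed = gap_allocation("1303a", "1303c", gap=20)
```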
Referring to Figure 13a, the computing device 1300 includes a display unit
1301 for displaying the four processes concurrently running on the computing device 1300,
as described in reference to Figure 7a. The display unit 1301 includes a window 1302
depicting four processes 1303a, 1303b, 1303c, and 1303d running concurrently on the
computing device 1300.
Referring to Figure 3 and 13a, the receiving unit 302 in the computing device
1300 receives multiple user inputs 1304 and 1305 for prioritizing the processes 1303a, 1303b,
1303c, and 1303d. In the present embodiment, the user inputs 1304 and 1305 are swipe
based user inputs and are indicative of a swipe in a right direction. As indicated in the figure,
the user input 1304 is first received for the process 1303a. Thereafter, the user input 1305 is
received for the process 1303c. As described earlier, the assignment of priority and the
distribution of the network bandwidth are performed after receiving each user input
based on the predefined mapping and the predefined distribution matrix.
In said embodiment, upon receiving the user input 1304, the analysing unit
304 determines a direction, a speed, and a sequence associated with the user input 1304.
Upon determining that the direction associated with the user input 1304 is a right direction,
the speed associated with the user input 1304 is a slow swipe, and the sequence associated
with the user input 1304 is 1 in the right direction, the analysing unit 304 assigns a highest
priority to the process 1303a based on the predefined mapping. Accordingly, the network
allocating unit 303 allocates a maximum network bandwidth to the process 1303a based on
the predefined distribution matrix.
Upon receiving the user input 1305, the analysing unit 304 determines a
direction, a speed, and a sequence associated with the user input 1305. Upon determining
that the direction associated with the user input 1305 is a right direction, the speed associated
with the user input 1305 is a slow swipe, and the sequence of the user input 1305 is 2 in the
right direction, the analysing unit 304 assigns a second highest priority to the process 1303c
based on the predefined mapping.
Further, in said embodiment, the analysing unit 304 determines that the speed
of the user input 1304 is either equal or approximately equal to the speed of the user input
1305. Such a minimal or zero difference between the speeds of the user inputs indicates that a
process receiving a first user input at particular speed is to be assigned a priority over a
process receiving a second user input in same direction at similar or same speed based on a
direction of the user input. Such an assignment of priority is independent of bandwidth being
allocated to either of the processes. For example, a process receiving a first user input at
particular speed in right direction is assigned a higher priority than a process receiving a
second user input in same direction at similar or same speed, irrespective of the bandwidth
being allocated to either of the processes. Similarly, a process receiving a first user input at
particular speed in left direction is assigned a lower priority than a process receiving a second
user input in same direction at similar or same speed, irrespective of the bandwidth being
allocated to either of the processes. As a result, a lower bandwidth gap is provided between a
network bandwidth allocated to the process receiving the first user input and a network
bandwidth allocated to the process receiving the second user input in same direction at
similar or same speed.
Accordingly, the network allocating unit 303 reallocates the network
bandwidth based on the predefined distribution matrix by allocating a first maximum network
bandwidth to the process 1303a and a second maximum network bandwidth to the process
1303c such that a lesser bandwidth gap of predefined value is provided between the allocated
network bandwidths. In an example, 20% of bandwidth gap is provided between the network
bandwidth allocated to the processes 1303a and 1303c. Remaining network bandwidth is
distributed between the processes 1303b and 1303d, as described earlier. In another example,
10% of bandwidth gap is provided between the network bandwidth allocated to the processes
1303a and 1303c (not shown in the figure). In yet another example, 5% of bandwidth gap is
provided between the network bandwidth allocated to the processes 1303a and 1303c (not
shown in the figure).
Figure 13b represents a bar graph illustrating distribution of the network
bandwidth to the processes 1303a and 1303c upon prioritization of the processes based on the
multiple user inputs 1304 and 1305, and the predefined distribution matrix. A vertical axis of
a bar graph represents percentage of network bandwidth and a horizontal axis of the bar
graph represents processes. It would be understood that the allocation of the network
bandwidth would not be depicted on the display unit 1301. The allocation of the network
bandwidth is depicted in the figure for the sake of providing a better understanding of the
present invention.
Accordingly, bar 1306a in the bar graph represents 60% of network bandwidth
allocated to the process 1303a and bar 1306c in the bar graph represents 40% of network
bandwidth allocated to the process 1303c. As can be observed, a bandwidth gap of 20% is
provided between the bar 1306a and the bar 1306c. For the sake of clarity and brevity, the
network bandwidth allocated to the processes 1303b and 1303d is not depicted in the figure.
EXEMPLARY IMPLEMENTATIONS
Figures 14-17 illustrate example manifestations depicting the implementation
of the present invention. However, it may be strictly understood that the forthcoming
examples shall not be construed as being limitations towards the present invention and the
present invention may be extended to cover analogous manifestations through other types of
like mechanisms.
Figure 14 illustrates an exemplary web page 1400 comprising various
elements such as videos, text, and links to other web pages. The web page 1400 can be
accessed by a computing device 1401. Examples of such computing device include desktop,
notebook, tablet, smart phone, and laptop. A user can select any of the elements on the web
page to access further information through an input mechanism. Upon such selection of two
or more elements, a window can open depicting a downloading status of the selected
elements. Thereafter, the user can prioritize a downloading process of the selected elements
by providing a user input corresponding to one or more of the selected elements. Examples of
the user input include touch based gesture input, non-touch based gesture input, and input
from the input device. Based on the user input, the downloading processes are prioritized and
accordingly network bandwidth is dynamically allocated as described earlier.
Figure 15 illustrates an exemplary window 1500 depicting uploading of
various files to an application or a storage device. Examples of the application include image
sharing application, media sharing application, and document sharing application. Examples
of storage device include hard disk, pen drives, CD, floppy, and network based storage
device. The window 1500 can be depicted on a computing device 1501. Examples of such
computing device include desktop, notebook, tablet, smart phone, and laptop. A user can
prioritize the uploading process of all files or selected files by providing a user input on the
window. Examples of the user input include touch based gesture input, non-touch based
gesture input, and input from the input device. Based on the user input, the uploading
processes are prioritized and accordingly network bandwidth is dynamically allocated as
described earlier.
Figure 16 illustrates an exemplary notification window 1600 depicting
downloading of various files on a smart phone 1601. Examples of the files include document,
image, video, audio, and application. A user can prioritize the downloading process of all
files or selected files by providing a user input on the window. Examples of the user input
include touch based gesture input, non-touch based gesture input, and input from the input
device. Based on the user input, the downloading processes are prioritized and accordingly
network bandwidth is dynamically allocated as described earlier.
Figure 17 illustrates an exemplary chat window 1700 depicting uploading and
downloading of various images. The chat window 1700 can be accessed on a computing
device 1701. Examples of such computing device include desktop, notebook, tablet, smart
phone, and laptop. A user can prioritize the uploading and downloading process of all images
or selected images by providing a user input on the window. Examples of the user input
include touch based gesture input, non-touch based gesture input, and input from the input
device. Based on the user input, the downloading and uploading processes are prioritized and
accordingly network bandwidth is dynamically allocated as described earlier.
EXEMPLARY HARDWARE CONFIGURATION
Figure 18 illustrates a typical hardware configuration of a computing device
1800, which is representative of a hardware environment for implementing the present
invention. As would be understood, the computing devices 300, 400, 500, 600, 700, 800, 900,
1000, and 1200, as described above, include the hardware configuration as described below.
In a networked deployment, the computing device 1800 may operate in the
capacity of a server or as a client user computer in a server-client user network environment,
or as a peer computer system in a peer-to-peer (or distributed) network environment. The
computing device 1800 can also be implemented as or incorporated into various devices, such
as a personal computer (PC), a tablet PC, a personal digital assistant (PDA), a mobile device,
a palmtop computer, a laptop, a desktop computer, and a communications device. Further,
while a single computing device 1800 is illustrated, the term "system" shall also be taken to
include any collection of systems or sub-systems that individually or jointly execute a set, or
multiple sets, of instructions to perform one or more computer functions.
The computing device 1800 may include a processor 1801, e.g., a central
processing unit (CPU), a graphics processing unit (GPU), or both. The processor 1801 may
be a component in a variety of systems. For example, the processor 1801 may be part of a
standard personal computer or a workstation. The processor 1801 may be one or more
general processors, digital signal processors, application specific integrated circuits, field
programmable gate arrays, servers, networks, digital circuits, analog circuits, combinations
thereof, or other now known or later developed devices for analysing and processing data.
The processor 1801 may implement a software program, such as code generated manually
(i.e., programmed).
The computing device 1800 may include a memory 1802 communicating with
the processor 1801 via a bus 1803. The memory 1802 may be a main memory, a static
memory, or a dynamic memory. The memory 1802 may include, but is not limited to
computer readable storage media such as various types of volatile and non-volatile storage
media, including but not limited to random access memory, read-only memory,
programmable read-only memory, electrically programmable read-only memory, electrically
erasable read-only memory, flash memory, magnetic tape or disk, optical media and the like.
The memory 1802 may be an external storage device or database for storing data. Examples
include a hard drive, compact disc ("CD"), digital video disc ("DVD"), memory card,
memory stick, floppy disc, universal serial bus ("USB") memory device, or any other device
operative to store data. The memory 1802 is operable to store instructions executable by the
processor 1801. The functions, acts or tasks illustrated in the figures or described may be
performed by the programmed processor 1801 executing the instructions stored in the
memory 1802. The functions, acts or tasks are independent of the particular type of
instructions set, storage media, processor or processing strategy and may be performed by
software, hardware, integrated circuits, firm-ware, micro-code and the like, operating alone or
in combination. Likewise, processing strategies may include multiprocessing, multitasking,
parallel processing and the like.
The computing device 1800 may further include a display unit 1804, such as a
liquid crystal display (LCD), an organic light emitting diode (OLED), a flat panel display, a
solid state display, a cathode ray tube (CRT), or other now known or later developed display
device for outputting determined information.
Additionally, the computing device 1800 may include an input device 1805
configured to allow a user to interact with any of the components of system 1800. The input
device 1805 may be a number pad, a keyboard, a stylus, an electronic pen, or a cursor control
device, such as a mouse, or a joystick, touch screen display, remote control or any other
device operative to interact with the computing device 1800.
The computer system 1800 may also include a disk or optical drive unit 1806.
The drive unit 1806 may include a computer-readable medium 1807 in which one or more
sets of instructions 1808, e.g. software, can be embedded. In addition, the instructions 1808
may be separately stored in the processor 1801 and the memory 1802.
The computing system 1800 may further be in communication with other
devices over a network 1809 to communicate voice, video, audio, images, or any other data
over the network 1809. Further, the data and/or the instructions 1808 may be transmitted or
received over the network 1809 via a communication port or interface 1810 or using the bus
1803. The communication port or interface 1810 may be a part of the processor 1801 or may
be a separate component. The communication port 1810 may be created in software or may
be a physical connection in hardware. The communication port 1810 may be configured to
connect with the network 1809, external media, the display 1804, or any other components in
system 1800 or combinations thereof. The connection with the network 1809 may be a
physical connection, such as a wired Ethernet connection or may be established wirelessly as
discussed later. Likewise, the additional connections with other components of the system
1800 may be physical connections or may be established wirelessly. The network 1809 may
alternatively be directly connected to the bus 1803.
The network 1809 may include wired networks, wireless networks, Ethernet
AVB networks, or combinations thereof. The wireless network may be a cellular telephone
network, an 802.11, 802.16, 802.20, 802.1Q, or WiMAX network. Further, the network 1809
may be a public network, such as the Internet, a private network, such as an intranet, or
combinations thereof, and may utilize a variety of networking protocols now available or
later developed including, but not limited to TCP/IP based networking protocols.
In an alternative example, dedicated hardware implementations, such as
application specific integrated circuits, programmable logic arrays and other hardware
devices, can be constructed to implement various parts of the computing system 1800.
Applications that may include the systems can broadly include a variety of
electronic and computer systems. One or more examples described may implement functions
using two or more specific interconnected hardware modules or devices with related control
and data signals that can be communicated between and through the modules, or as portions
of an application-specific integrated circuit. Accordingly, the present system encompasses
software, firmware, and hardware implementations.
The computing system 1800 may be implemented by software programs
executable by the processor 1801. Further, in a non-limiting example, implementations can
include distributed processing, component/object distributed processing, and parallel
processing. Alternatively, virtual computer system processing can be constructed to
implement various parts of the system.
The computing system 1800 is not limited to operation with any particular
standards and protocols. For example, standards for Internet and other packet switched
network transmission (e.g., TCP/IP, UDP/IP, HTML, HTTP) may be used. Such standards
are periodically superseded by faster or more efficient equivalents having essentially the
same functions. Accordingly, replacement standards and protocols having the same or similar
functions as those disclosed are considered equivalents thereof.
The drawings and the foregoing description give examples of embodiments.
Those skilled in the art will appreciate that one or more of the described elements may well
be combined into a single functional element. Alternatively, certain elements may be split
into multiple functional elements. Elements from one embodiment may be added to another
embodiment. For example, orders of processes described herein may be changed and are not
limited to the manner described herein. Moreover, the actions of any flow diagram need not
be implemented in the order shown; nor do all of the acts necessarily need to be performed.
Also, those acts that are not dependent on other acts may be performed in parallel with the
other acts. The scope of embodiments is by no means limited by these specific examples.
Numerous variations, whether explicitly given in the specification or not, such as differences
in structure, dimension, and use of material, are possible. The scope of embodiments is at
least as broad as given by the following claims.
While certain present preferred embodiments of the invention have been
illustrated and described herein, it is to be understood that the invention is not limited thereto.
Clearly, the invention may be otherwise variously embodied, and practiced within the scope
of the following claims.
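As an illustrative aid only (not part of the specification), the allocation scheme recited in the claims below can be sketched in Python. The names `Process`, `priority_from_input`, and `allocate_bandwidth`, the particular priority formula, and the 80% bandwidth reservation for the user-selected set are all assumptions made for this sketch, not details drawn from the claims.

```python
# Hypothetical sketch: assign priorities from characteristics of the user
# input (direction, speed, sequence) and split bandwidth accordingly.
from dataclasses import dataclass


@dataclass
class Process:
    name: str          # e.g. a file being downloaded or uploaded
    priority: int = 0  # higher value = higher priority


def priority_from_input(direction: str, speed: float, sequence: int) -> int:
    """Derive a priority from the input's direction, speed, and the
    sequence in which it was received (assumed weighting)."""
    base = {"up": 2, "down": 1}.get(direction, 0)
    # Faster gestures and earlier selections rank higher (assumption).
    return base + int(speed) - sequence


def allocate_bandwidth(processes, selected, total_kbps):
    """Give the user-selected set a reserved share of total_kbps in
    proportion to priority; remaining processes split what is left."""
    reserved = 0.8 * total_kbps  # assumed reservation for the selected set
    weights = sum(max(p.priority, 1) for p in selected)
    alloc = {}
    for p in selected:
        # Highest-priority process receives the maximum bandwidth.
        alloc[p.name] = reserved * max(p.priority, 1) / weights
    rest = [p for p in processes if p not in selected]
    for p in rest:
        # Remaining bandwidth is shared equally among the other processes.
        alloc[p.name] = (total_kbps - reserved) / len(rest)
    return alloc
```

For example, an upward gesture received first with speed 3 yields priority 5; selecting that one process out of three reserves it 800 kbps of a 1000 kbps link, with the other two processes sharing the remaining 200 kbps.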

We Claim:
1. A method for dynamically allocating a network bandwidth amongst a plurality of
processes concurrently running on a computing device, said method comprising:
depicting the plurality of processes concurrently running on the computing
device, the processes being either downloading a file or uploading a file;
receiving a user input, the user input corresponding to a first set of processes
from the plurality of processes; and
allocating network bandwidth to each of the processes in the first set of
processes based on the user input.
2. The method as claimed in claim 1, wherein the first set of the processes is a non-null
set.
3. The method as claimed in claim 1 further comprises analysing the user input.
4. The method as claimed in claim 3, wherein the analysis of the user input comprises:
determining at least one characteristic of the user input; and
assigning a priority to each of the processes in the plurality of processes based
on the at least one characteristic such that a maximum bandwidth is allocated
to a process having a highest priority assigned thereof and a minimum
bandwidth is allocated to a process having a lowest priority assigned thereof.
5. The method as claimed in claim 4, wherein the at least one characteristic includes a
direction associated with the user input, a speed associated with the user input, and a
sequence in which the user input is received.
6. The method as claimed in claim 1 further comprises allocating remaining network
bandwidth amongst the remaining processes of the plurality of processes.
7. The method as claimed in claim 1 further comprises reordering the depiction of the
plurality of processes based on the allocated network bandwidth.
8. The method as claimed in claim 1, wherein the plurality of processes are depicted in a
window.
9. The method as claimed in claim 1, wherein the plurality of processes are depicted in a
notification window.
10. The method as claimed in claim 1, wherein the user input is one of a touch gesture
input, a non-touch gesture input, and an input from an input device communicatively
coupled to the computing device.
11. The method as claimed in claim 1, wherein the file is one of an audio file, a video file,
an image file, a data file, and an application.
12. A method for dynamically allocating a network bandwidth amongst a first set of files
being downloaded or uploaded by a computing device via an application, said method
comprising:
providing a window depicting the application;
depicting in the window the first set of files that are concurrently being
downloaded or uploaded;
receiving at least one user input on the window, the at least one user input
corresponding to at least one file from the first set; and
allocating network bandwidth to the at least one file based on the at least one
user input.
13. A computing device for dynamically allocating a network bandwidth amongst a
concurrently running plurality of processes, said computing device comprising:
a display unit to depict the concurrently running plurality of processes, the
processes being either downloading a file or uploading a file;
a receiving unit to receive a user input, the user input corresponding to a first
set of processes from the plurality of processes; and
a network allocating unit to allocate the network bandwidth to each of the
processes in the first set of the processes based on the user input.
14. The computing device as claimed in claim 13 further comprises an analysing unit to
analyse the user input.
15. A computing device for dynamically allocating a network bandwidth amongst a first
set of files being downloaded or uploaded via an application, said computing device
comprising:
a display unit to:
depict the application in a window; and
depict the first set of files in the window;
a receiving unit to receive at least one user input through the window, the at
least one user input corresponding to at least one file from the first set; and
a network allocating unit to allocate a network bandwidth to the at least one
file based on the at least one user input.

Documents

Application Documents

# Name Date
1 1417-DEL-2015-IntimationOfGrant28-03-2023.pdf 2023-03-28
2 FORM 3.pdf 2015-05-21
3 1417-DEL-2015-PatentCertificate28-03-2023.pdf 2023-03-28
4 Form 26..pdf 2015-05-21
5 drawings.pdf 2015-05-21
6 1417-DEL-2015-Response to office action [27-03-2023(online)].pdf 2023-03-27
7 1417-DEL-2015-Written submissions and relevant documents [30-11-2022(online)].pdf 2022-11-30
8 1417-del-2015-GPA-(26-05-2015).pdf 2015-05-26
9 1417-del-2015-Form-5-(26-05-2015).pdf 2015-05-26
10 1417-DEL-2015-FORM-26 [17-11-2022(online)].pdf 2022-11-17
11 1417-del-2015-Form-3-(26-05-2015).pdf 2015-05-26
12 1417-DEL-2015-Correspondence to notify the Controller [16-11-2022(online)].pdf 2022-11-16
13 1417-DEL-2015-US(14)-HearingNotice-(HearingDate-18-11-2022).pdf 2022-11-02
14 1417-del-2015-Form-2-(26-05-2015).pdf 2015-05-26
15 1417-del-2015-Form-1-(26-05-2015).pdf 2015-05-26
16 1417-DEL-2015-CLAIMS [22-06-2020(online)].pdf 2020-06-22
17 1417-DEL-2015-COMPLETE SPECIFICATION [22-06-2020(online)].pdf 2020-06-22
18 1417-del-2015-Drawings-(26-05-2015).pdf 2015-05-26
19 1417-del-2015-Description (Complete)-(26-05-2015).pdf 2015-05-26
20 1417-DEL-2015-DRAWING [22-06-2020(online)].pdf 2020-06-22
21 1417-del-2015-Correspondence Others-(26-05-2015).pdf 2015-05-26
22 1417-DEL-2015-FER_SER_REPLY [22-06-2020(online)].pdf 2020-06-22
23 1417-del-2015-Copy Form-18-(26-05-2015).pdf 2015-05-26
24 1417-DEL-2015-OTHERS [22-06-2020(online)].pdf 2020-06-22
25 1417-del-2015-Claims-(26-05-2015).pdf 2015-05-26
26 1417-DEL-2015-FER.pdf 2019-12-27
27 1417-del-2015-Abstract-(26-05-2015).pdf 2015-05-26
28 1417-DEL-2015-Correspondence-101019.pdf 2019-10-14
29 1417-DEL-2015-OTHERS-101019.pdf 2019-10-14
30 1417-DEL-2015-PA [18-09-2019(online)].pdf 2019-09-18
31 1417-DEL-2015-8(i)-Substitution-Change Of Applicant - Form 6 [18-09-2019(online)].pdf 2019-09-18
32 1417-DEL-2015-ASSIGNMENT DOCUMENTS [18-09-2019(online)].pdf 2019-09-18

Search Strategy

1 2019-12-1317-30-26_13-12-2019.pdf

ERegister / Renewals

3rd: 26 Jun 2023 (from 19/05/2017 to 19/05/2018)
4th: 26 Jun 2023 (from 19/05/2018 to 19/05/2019)
5th: 26 Jun 2023 (from 19/05/2019 to 19/05/2020)
6th: 26 Jun 2023 (from 19/05/2020 to 19/05/2021)
7th: 26 Jun 2023 (from 19/05/2021 to 19/05/2022)
8th: 26 Jun 2023 (from 19/05/2022 to 19/05/2023)
9th: 26 Jun 2023 (from 19/05/2023 to 19/05/2024)
10th: 17 May 2024 (from 19/05/2024 to 19/05/2025)
11th: 16 May 2025 (from 19/05/2025 to 19/05/2026)