Abstract: The present invention describes a method and a device for dynamically allocating network bandwidth on an electronic device. The method includes determining a currently active foreground application from amongst a plurality of concurrently running applications. The method further includes ascertaining a network data requirement for the currently active foreground application and allocating an entire available network bandwidth to the currently active foreground application, in response to a positive ascertaining.
CLAIMS: We claim:
1. A method for dynamically allocating network bandwidth on an electronic device, said method comprising:
determining a currently active foreground application from amongst a plurality of concurrently running applications;
ascertaining a network data requirement for the currently active foreground application; and
allocating an entire available network bandwidth to the currently active foreground application, in response to a positive ascertaining.
2. The method as claimed in claim 1, wherein said allocating the entire available network bandwidth to the currently active foreground application results in de-allocating network bandwidth previously allocated to all other concurrently running applications.
3. The method as claimed in claim 1 further comprising detecting closure of the currently active foreground application and de-allocating the entire available bandwidth.
4. The method as claimed in claim 1 further comprising detecting transition of the currently active foreground application to a state of a background application and in response thereto, de-allocating at least a part or the entire available bandwidth.
5. The method as claimed in claim 1, wherein the currently active foreground application is an application that is currently being viewed or accessed by a user.
6. The method as claimed in claim 1, wherein said allocating the entire available network bandwidth to the currently active foreground application is done in presence of a predefined instruction.
7. The method as claimed in claim 6, further comprising sending a message to a user in the absence of said predefined instruction, said message prompting the user to provide one or more user inputs.
8. The method as claimed in claim 7, further comprising detecting one or more user inputs and in response thereto, allocating the entire available network bandwidth to the currently active foreground application.
9. A method for dynamically allocating a network bandwidth amongst a plurality of applications running concurrently on a plurality of electronic devices, the plurality of electronic devices accessing network data through a common source, said method comprising:
determining a currently active electronic device from amongst the plurality of electronic devices;
determining a currently active foreground application from amongst a plurality of applications running on the currently active electronic device;
ascertaining a network data requirement for the currently active foreground application; and
allocating an entire available network bandwidth to the currently active foreground application, in response to a positive ascertaining.
10. The method as claimed in claim 9, wherein said allocating the entire available network bandwidth to the currently active foreground application results in de-allocating network bandwidth previously allocated to all other concurrently running applications on the plurality of electronic devices.
11. The method as claimed in claim 9 further comprising detecting closure of the currently active foreground application and de-allocating the entire available bandwidth.
12. The method as claimed in claim 9 further comprising detecting transition of the currently active foreground application to a state of a background application and in response thereto, de-allocating at least a part or the entire available bandwidth.
13. The method as claimed in claim 9, wherein the currently active electronic device is a device that is currently being accessed by a user.
14. The method as claimed in claim 9, wherein the currently active foreground application is an application that is currently being viewed or accessed by a user of the currently active electronic device.
15. The method as claimed in claim 9, wherein said allocating the entire available network bandwidth to the currently active foreground application is done in presence of a predefined instruction.
16. The method as claimed in claim 15, further comprising sending a message to a user in the absence of said predefined instruction, said message prompting the user to provide one or more user inputs.
17. The method as claimed in claim 16, further comprising detecting one or more user inputs and in response thereto, allocating the entire available network bandwidth to the currently active foreground application.
18. A device for dynamically allocating network bandwidth, said device comprising:
a foreground application detector unit configured to determine a currently active foreground application from amongst a plurality of concurrently running applications;
a controller unit configured to ascertain a network data requirement for the currently active foreground application; and
a network bandwidth managing unit configured to allocate an entire available network bandwidth to the currently active foreground application, in response to a positive ascertaining.
19. A device for dynamically allocating a network bandwidth amongst a plurality of applications running concurrently on a plurality of electronic devices, the plurality of electronic devices accessing network data through a common source, said device comprising:
an active device application detector unit configured to determine a currently active electronic device from amongst the plurality of electronic devices;
a foreground application detector unit configured to determine a currently active foreground application from amongst a plurality of applications running on the currently active electronic device;
a controller unit configured to ascertain a network data requirement for the currently active foreground application; and
a network bandwidth managing unit configured to allocate an entire available network bandwidth to the currently active foreground application, in response to a positive ascertaining.
FIELD OF THE INVENTION:
The present invention relates to methods and systems for improving application network performance and in particular relates to dynamically allocating network bandwidth on an electronic device.
BACKGROUND OF THE INVENTION:
In today’s world, the use of smartphones and other smart wearable devices has increased significantly. These devices allow users to connect to others via instant chat, Twitter and social media in real time. These devices are further being used extensively for accessing emails, downloading data, streaming movies and videos, etc. Furthermore, the quality of content that is being streamed or accessed has increased significantly, from VGA resolution to Full HD. This extensive use has led to a dramatic increase in network bandwidth consumption.
Over time, network speeds have also increased to cope with these bandwidth requirements. However, there are places where network speed has not been able to keep pace with changing content standards and their increased bandwidth requirements. In such scenarios, a very common problem of frequent pauses and interruptions during buffering of video is observed. Even where sufficient network bandwidth is available to support continuous streaming, the many background applications that sync their content during playback lead to frequent interruption of the streaming content and hence result in a bad user experience. Hence, it is absolutely essential that network bandwidth is managed optimally between the current (or foreground) application and the applications running in the background.
Several solutions exist in the art that allow users to manage network data usage by setting a network data usage threshold range for an individual application, or by setting an overall network data usage threshold range for the device and disabling the background executable components if the amount of data exceeds a threshold range of a network data usage limit.
In one of the existing solutions, method and devices for managing data usage of computing devices are disclosed that involve disabling of an executable component if the amount of data communicated with the network interface enters within a threshold range of a network interface data usage limit for the computing device.
In yet another existing solution, a solution to manage network data based on a data usage policy is provided. The solution involves determining whether the amount of data consumed by an application is in compliance with the data usage policy according to a calculated amount of data consumed by the application.
The existing solutions give users options to allocate network interface data usage for each application and device. These network interface data usage allocations are not related to the current application's network performance. Rather, they try to control and manage application-wise network interface data usage and device network interface data usage, and disable background executable components if the amount of data communicated with the network interface enters within a threshold range of a network interface data usage limit.
However, enforcing these threshold limits on data usage, as provided in the above existing solutions, does not increase the network performance of the current foreground application. Accordingly, there exists a need for a solution that increases the network performance of the current foreground application.
OBJECT OF THE INVENTION:
It is an object of the present invention to improve the network performance of the current foreground application intelligently.
It is another object of the present invention to automatically restore or reallocate the network bandwidth after the current foreground application has been closed or goes into the background.
It is another object of the present invention to avoid the manual setting of speed limits or priorities to improve performance.
SUMMARY OF THE INVENTION:
In an embodiment, the present invention describes a method for dynamically allocating network bandwidth on an electronic device. The method includes determining a currently active foreground application from amongst a plurality of concurrently running applications. The method further includes ascertaining a network data requirement for the currently active foreground application and allocating an entire available network bandwidth to the currently active foreground application, in response to a positive ascertaining.
In another embodiment, the present invention describes a method for dynamically allocating a network bandwidth amongst a plurality of applications running concurrently on a plurality of electronic devices, the plurality of electronic devices accessing network data through a common source. The method includes determining a currently active electronic device from amongst the plurality of electronic devices and determining a currently active foreground application from amongst a plurality of applications running on the currently active electronic device. The method further includes ascertaining a network data requirement for the currently active foreground application and allocating an entire available network bandwidth to the currently active foreground application, in response to a positive ascertaining.
In another embodiment, the present invention describes a device for dynamically allocating network bandwidth. The device includes a foreground application detector unit that is configured to determine a currently active foreground application from amongst a plurality of concurrently running applications and a controller unit that is configured to ascertain a network data requirement for the currently active foreground application. The device further includes a network bandwidth managing unit configured to allocate an entire available network bandwidth to the currently active foreground application, in response to a positive ascertaining.
In yet another embodiment, the present invention describes a device for dynamically allocating a network bandwidth amongst a plurality of applications running concurrently on a plurality of electronic devices, the plurality of electronic devices accessing network data through a common source. The device includes an active device application detector unit configured to determine a currently active electronic device from amongst the plurality of electronic devices and a foreground application detector unit that is configured to determine a currently active foreground application from amongst a plurality of applications running on the currently active electronic device. The device further includes a controller unit configured to ascertain a network data requirement for the currently active foreground application and a network bandwidth managing unit configured to allocate an entire available network bandwidth to the currently active foreground application, in response to a positive ascertaining.
To further clarify advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings.
Brief Description of Figures:
These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
Figure 1 shows a flow chart for a method for dynamically allocating network bandwidth on an electronic device in accordance with an embodiment of the invention;
Figure 2 shows a construction of a device for dynamically allocating network bandwidth in accordance with an embodiment of the present invention;
Figure 3 shows a flow chart for a method for dynamically allocating a network bandwidth amongst a plurality of applications running concurrently on a plurality of electronic devices;
Figure 4 shows a construction of a device for dynamically allocating a network bandwidth amongst a plurality of applications running concurrently on a plurality of electronic devices;
Figure 5 shows an exemplary system depicting plurality of applications running concurrently on a plurality of electronic devices that are being accessed by a single user in accordance with an embodiment of the invention;
Figure 6 illustrates an exemplary system depicting a plurality of applications running concurrently on a plurality of electronic devices that are being accessed by a plurality of users in accordance with an embodiment of the invention;
Figure 7 illustrates an exemplary embodiment of a device in accordance with the present invention;
Figure 8 illustrates an exemplary control flow of a method for dynamically allocating a network bandwidth in accordance with the present invention;
Figure 9-14 illustrate example manifestations depicting the usefulness of the present invention; and
Figure 15 illustrates a typical hardware configuration of a computer system, which is representative of a hardware environment for practicing the present invention.
Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not necessarily have been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help to improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
Detailed Description:
For the purpose of promoting an understanding of the principles of the invention, reference will now be made to the embodiment illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended, such alterations and further modifications in the illustrated system, and such further applications of the principles of the invention as illustrated therein being contemplated as would normally occur to one skilled in the art to which the invention relates.
It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the invention and are not intended to be restrictive thereof.
Reference throughout this specification to “an aspect”, “another aspect” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrase “in an embodiment”, “in another embodiment” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such process or method. Similarly, one or more devices or sub-systems or elements or structures or components preceded by "comprises...a" does not, without more constraints, preclude the existence of other devices or other sub-systems or other elements or other structures or other components or additional devices or additional sub-systems or additional elements or additional structures or additional components.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The system, methods, and examples provided herein are illustrative only and not intended to be limiting.
Embodiments of the present invention will be described below in detail with reference to the accompanying drawings.
Figure 1 illustrates a flow chart for a method for dynamically allocating network bandwidth on an electronic device in accordance with one embodiment of the present invention. The method 100 includes determining a currently active foreground application from amongst a plurality of concurrently running applications on the electronic device as indicated in step 102; ascertaining a network data requirement for the currently active foreground application as indicated in step 104 and allocating, as indicated in step 106, an entire available network bandwidth to the currently active foreground application, in response to a positive ascertaining.
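The three steps of the method 100 (determining the foreground application, ascertaining its network data requirement, and allocating bandwidth) can be sketched in illustrative code form. The function `allocate_bandwidth` and the dictionary fields `name` and `needs_network` are hypothetical names introduced purely for explanation; they are an assumption for this sketch and are not part of the claimed implementation or any platform API.

```python
# Purely illustrative sketch of method 100; names are hypothetical.

def allocate_bandwidth(running_apps, foreground_app, total_bandwidth_mbps):
    """Allocate the entire available bandwidth to the foreground
    application when it requires network data (steps 102-106)."""
    # Step 102: the caller has determined the currently active
    # foreground application from amongst the running applications.
    if foreground_app not in running_apps:
        raise ValueError("foreground application is not running")
    # Step 104: ascertain the network data requirement.
    if not foreground_app.get("needs_network", False):
        return {}  # negative ascertaining: no reallocation is made
    # Step 106: positive ascertaining -> the foreground application
    # receives the entire bandwidth; all others are de-allocated.
    return {app["name"]: (total_bandwidth_mbps if app is foreground_app else 0.0)
            for app in running_apps}
```

For instance, with a streaming application in the foreground and a sync application in the background on a 4 Mbps link, the sketch yields 4 Mbps for the streaming application and 0 Mbps for the sync application.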
In one embodiment, allocating the entire available network bandwidth to the currently active foreground application results in de-allocating network bandwidth previously allocated to all other concurrently running applications.
In one embodiment, the method 100 further comprises detecting closure of the currently active foreground application and de-allocating the entire available bandwidth.
In one embodiment, the method 100 further comprises detecting transition of the currently active foreground application to a state of a background application and in response thereto, de-allocating at least a part or the entire available bandwidth.
In one embodiment, the currently active foreground application is an application that is currently being viewed or accessed by a user.
In one embodiment, allocating the entire available network bandwidth to the currently active foreground application is done in presence of a predefined instruction. The predefined instruction corresponds to a factory setting configuration made in the electronic device to automatically allocate the entire available network bandwidth to the currently active foreground application.
In one embodiment, the method 100 further comprises sending a message to a user in the absence of said predefined instruction, said message prompting the user to provide one or more user inputs. In a preferred embodiment, the message may be in the form of, but not limited to, a pop-up message, a beep, a notification etc. Based on the user inputs, the method further involves sending a further notification, in the form of, but not limited to, a toast message, to the user indicating the result of the user input.
In one embodiment, the method 100 further comprises detecting one or more user inputs and in response thereto, allocating the entire available network bandwidth to the currently active foreground application.
In an embodiment, the electronic device is selected from a group comprising a smartphone, a smart glass, a smart watch, a smart television, a PDA, a tablet, a netbook, an e-reader, a laptop, a desktop computer, and other wearable smart devices including a necklace, a band, a ring, a watch, an anklet, etc.
In an embodiment, the common source may be any of a modem, a networking switch, a router, a network adapter, a Wi-Fi hotspot, or any access point that is used to transmit and distribute data packets to all the connected devices.
Figure 2 illustrates a device 200 for dynamically allocating network bandwidth. The device 200 includes a foreground application detector unit 202 that is configured to determine a currently active foreground application from amongst a plurality of concurrently running applications and a controller unit 204 that is configured to ascertain a network data requirement for the currently active foreground application. The device 200 further includes a network bandwidth managing unit 206 configured to allocate an entire available network bandwidth to the currently active foreground application, in response to a positive ascertaining.
Figure 3 illustrates a method 300 for dynamically allocating a network bandwidth amongst a plurality of applications running concurrently on a plurality of electronic devices, wherein the plurality of electronic devices access network data through a common source. The method includes determining, as indicated in step 302, a currently active electronic device from amongst the plurality of electronic devices and determining, as indicated in step 304, a currently active foreground application from amongst a plurality of applications running on the currently active electronic device. The method further includes ascertaining, as indicated in step 306, a network data requirement for the currently active foreground application and allocating, as indicated in step 308, an entire available network bandwidth to the currently active foreground application, in response to a positive ascertaining.
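The multi-device variant of the method (steps 302-308) can likewise be sketched in illustrative code. The function `allocate_across_devices` and the device and application dictionaries below are hypothetical constructs for explanation only; determining which device is "active" in a real system is an assumption left abstract here.

```python
# Purely illustrative sketch of method 300; names are hypothetical.

def allocate_across_devices(devices, total_bandwidth_mbps):
    """Steps 302-308: find the active device and its foreground
    application, then allocate the entire bandwidth to it."""
    # Step 302: the currently active device is the one the user is
    # currently interacting with.
    active = next((d for d in devices if d.get("active")), None)
    if active is None:
        return {}
    # Step 304: determine that device's foreground application.
    fg = active.get("foreground_app")
    # Step 306: ascertain the network data requirement.
    if fg is None or not fg.get("needs_network"):
        return {}
    # Step 308: entire bandwidth to the active device's foreground
    # application; every other application on every device is
    # de-allocated to zero.
    allocation = {}
    for d in devices:
        for app in d.get("apps", []):
            allocation[(d["name"], app["name"])] = (
                total_bandwidth_mbps if app is fg else 0.0)
    return allocation
```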
In one embodiment, allocating the entire available network bandwidth to the currently active foreground application results in de-allocating network bandwidth previously allocated to all other concurrently running applications on the plurality of electronic devices.
In one embodiment, the present invention comprises detecting closure of the currently active foreground application and de-allocating the entire available bandwidth.
In one embodiment, the present invention comprises detecting transition of the currently active foreground application to a state of a background application and in response thereto, de-allocating at least a part or the entire available bandwidth.
In one embodiment, the currently active electronic device is a device that is currently being accessed by a user or the device with which the user is currently interacting.
In one embodiment, the currently active foreground application is an application that is currently being viewed or accessed by a user of the currently active electronic device.
In one embodiment, allocating the entire available network bandwidth to the currently active foreground application is done in presence of a predefined instruction. The predefined instruction corresponds to a factory setting configuration made in the electronic device to automatically allocate the entire available network bandwidth to the currently active foreground application.
In one embodiment, the present invention further comprises sending a message to a user in the absence of said predefined instruction, said message prompting the user to provide one or more user inputs. In a preferred embodiment, the message may be in the form of, but not limited to, a pop-up message, a beep, a notification etc. Based on the user inputs, the method further involves sending a further notification, in the form of, but not limited to, a toast message, to the user indicating the result of the user input.
In one embodiment, the present invention further comprises detecting one or more user inputs and in response thereto, allocating the entire available network bandwidth to the currently active foreground application.
In an embodiment, the electronic device is selected from a group comprising a smartphone, a smart glass, a smart watch, a smart television, a PDA, a tablet, a netbook, an e-reader, a laptop, a desktop computer, and other wearable smart devices including a necklace, a band, a ring, a watch, an anklet, etc.
In an embodiment, the common source may be any of a modem, networking switch, router, network adapter, a Wi-Fi hotspot, or any access point that is used to transmit and distribute data packets to all the connected devices.
Figure 4 illustrates a device 400 in accordance with an embodiment of the invention for dynamically allocating a network bandwidth amongst a plurality of applications running concurrently on a plurality of electronic devices, the plurality of electronic devices accessing network data through a common source. The device 400 includes an active device application detector unit 402 configured to determine a currently active electronic device from amongst the plurality of electronic devices and a foreground application detector unit 404 that is configured to determine a currently active foreground application from amongst a plurality of applications running on the currently active electronic device. The device 400 further includes a controller unit 404 configured to ascertain a network data requirement for the currently active foreground application and a network bandwidth managing unit 406 configured to allocate an entire available network bandwidth to the currently active foreground application, in response to a positive ascertaining.
Figure 5 illustrates an exemplary system 500 including a plurality of electronic devices that are being accessed by a single user in accordance with an embodiment of the invention. As can be seen in the figure, the plurality of electronic devices 502, 504, 506, 508 are being accessed by a single user. Furthermore, the plurality of devices 502, 504, 506, 508 access network data through a common source (not shown). The network data coming from the common source is shared amongst the plurality of electronic devices. The common source may be any of a modem, a networking switch, a router, a network adapter, a Wi-Fi hotspot, or any such access point that is used to transmit and distribute data packets to all the connected devices 502, 504, 506, 508.
Referring to Figure 4 and Figure 5 together, the active device application detector unit 402 determines the device which is currently being accessed by the user or the device with which the user is currently interacting. In the present case, let us consider that the user is interacting with device 506. The foreground application detector unit 404 scans the device 506 to determine a currently active foreground application running on device 506. Thereafter, the controller unit ascertains whether the currently active foreground application in the currently active device 506 requires network or mobile data.
As an example, the applications that require network data include, but are not limited to, browsing applications such as Chrome; chat or messenger applications such as WhatsApp, Hike Messenger, Viber, Snapchat, ChatOn, etc.; mail applications such as Gmail, Yahoo Mail, Hotmail, S mail, etc.; video streaming applications such as YouTube and Video Hub; social networking applications such as Facebook, Instagram, LinkedIn and Twitter; and sync applications, update services, etc.
Upon a positive ascertaining, the network bandwidth managing unit 406 allocates the entire (100%) network bandwidth to the currently active foreground application in the device 506. Allocating the entire available network bandwidth to the currently active foreground application results in de-allocating network bandwidth previously allocated to all other concurrently running applications on the plurality of electronic devices 502, 504, 508.
In one embodiment, the device 400 as illustrated in Figure 4 may be one of the plurality of electronic devices being accessed by the user. In another embodiment, the device 400 as illustrated in Figure 4 may be located externally to the plurality of electronic devices and configured to remotely monitor the plurality of electronic devices.
Figure 6 illustrates an exemplary system 600 depicting a plurality of applications running concurrently on a plurality of electronic devices that are being accessed by a plurality of users in accordance with an embodiment of the invention. As can be seen from the figure, the plurality of devices 604, 606, 608, 610 access network data through a common source 602. Let us consider a scenario where the common source supplies a network bandwidth of 4 Mbps that gets evenly distributed (25% each) amongst all the devices 604, 606, 608, 610 that are being accessed by a plurality of users. In such a case, the present invention shall be implemented independently for each user. A plurality of applications is shown to be concurrently running on each of the devices 604, 606, 608, 610. The 1 Mbps of network bandwidth available to each of the respective devices 604, 606, 608, 610 is shared amongst that device's plurality of concurrently running applications. The current foreground application in each of the devices 604, 606, 608, 610 is indicated in the figure. On implementing the present invention, the entire 1 Mbps of bandwidth available to each device is allocated to the current foreground application of that device, and network access is suspended for the remaining applications running in the background for as long as the current application remains in the foreground.
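The even split in the scenario above reduces to simple arithmetic, which the following minimal sketch illustrates. The helper name `per_device_share` is hypothetical, introduced only for this example.

```python
# Illustrative arithmetic for the even per-device split of the
# common source's bandwidth, before turbo mode is applied per device.

def per_device_share(total_mbps, device_count):
    """Evenly divide the common source's bandwidth across devices."""
    return total_mbps / device_count

# 4 Mbps split 25% each amongst 4 devices -> 1 Mbps per device.
# Within each device, that whole 1 Mbps share then goes to the
# device's current foreground application; background applications
# on the device have their network access suspended.
share = per_device_share(4.0, 4)  # -> 1.0 Mbps
```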
In an embodiment, the network data available to a plurality of devices from a plurality of sources may be bridged and allocated to the connected devices. Let us consider a scenario where more than one device or source independently provides network bandwidth, using different networks, to the connected plurality of devices. In such a case, the present invention provides for bridging the bandwidths to obtain a combined bandwidth and dividing the combined bandwidth equally between all the connected devices. For instance, if there exist two common sources that supply a network bandwidth of 4 Mbps each to 4 connected devices, then the network bandwidths from each of the sources are bridged, making the total combined bandwidth 8 Mbps. Thereafter, the 8 Mbps of network bandwidth gets evenly distributed (25% each) amongst the 4 connected devices, thereby making the bandwidth available to each of the connected devices 2 Mbps. Thereafter, the 2 Mbps of network bandwidth that is available to each of the devices is allocated to its respective current foreground application in accordance with the invention as explained in the previous embodiments.
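The bridging arithmetic described above can be sketched as follows; the function name `bridged_share` is an illustrative assumption, not part of the claimed device.

```python
# Illustrative sketch of bridging multiple sources and dividing the
# combined bandwidth equally between the connected devices.

def bridged_share(source_bandwidths_mbps, device_count):
    """Combine the bandwidth of several common sources and split the
    total evenly across the connected devices."""
    combined = sum(source_bandwidths_mbps)  # e.g. 4 + 4 = 8 Mbps
    return combined / device_count          # e.g. 8 / 4 = 2 Mbps each
```

With two 4 Mbps sources and 4 connected devices, each device's 2 Mbps share is then allocated to its current foreground application as in the previous embodiments.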
Figure 7 illustrates an exemplary embodiment of a device 700 in accordance with the present invention. The device 700 includes a turbo mode module 702 and a network policy manager module 704. The turbo mode module 702 further includes a foreground application detector unit 706, a turbo mode controller unit 708, a network bandwidth manager unit 710 and a turbo mode settings unit 712. The turbo mode module 702 is configured to allocate full network bandwidth to a currently active application that requires network data from amongst a plurality of applications running concurrently on an electronic device. Allocating the full network bandwidth to the currently active foreground application results in de-allocating network bandwidth previously allocated to all other concurrently running applications. The aforesaid feature of allocating the full network bandwidth to the currently active application and de-allocating the network bandwidth previously allocated to all other concurrently running applications for a certain duration, based on the current foreground application, shall be referred to as the turbo mode feature or turbo mode functionality throughout the description. The turbo mode settings unit 712 contains the turbo mode settings and is used to enable/disable the operation of the turbo mode feature. The turbo mode settings unit 712 is configured to predefine the instructions for enabling the turbo mode feature. The predefined instruction corresponds to a factory setting configuration made to automatically allocate the entire available network bandwidth to the currently active foreground application. If the turbo mode setting is enabled, it will auto enable the turbo mode feature for that duration based on the current foreground application. If the turbo mode setting is disabled, it will give a prompt message in the form of, but not limited to, a beep, a popup, a notification etc. to the user to receive one or more inputs to enable or disable the turbo mode.
When the user gets the prompt to enable turbo mode, the user can accept or decline to enable the turbo mode feature. If the user accepts to enable the turbo mode, it will auto enable the turbo mode feature for the current foreground application. If the turbo mode setting is enabled, then turbo mode will automatically suspend network activities for all other background applications and services. When the user exits that application or it goes into the background, the turbo mode feature will be automatically disabled and normal behaviour for network access will be activated. The turbo mode module 702, in operational interconnection with the network policy manager module 704, is configured to send notification messages in the form of, but not limited to, a toast, a flash message, a pop-up message etc. to the users at every stage of implementing the present invention. For instance, sending a toast message at a stage when the turbo mode feature is enabled or disabled, when the turbo mode application is detected etc.
Figure 8 illustrates an exemplary flowchart of a method for dynamically allocating a network bandwidth in accordance with the present invention. The flowchart shall be described in conjunction with the exemplary device illustrated in Figure 7. When a user launches an application (the turbo mode module operation starts) as indicated in step 802, the foreground application detector unit 706 at step 804 scans the plurality of applications running concurrently on an electronic device to determine the current foreground application. The current foreground application is the application that is currently being accessed by the user or with which the user is currently interacting. In the present case, the current foreground application is referred to as the Current App in figure 7. Once the current foreground application is determined, the turbo mode controller unit 708 ascertains whether the current foreground application needs network or mobile data as indicated in step 806. If it is ascertained that the current foreground application does not require mobile or network data, it is further ascertained whether the turbo mode feature is active or not by default as indicated in step 808. If the turbo mode feature is active for a currently active foreground application that does not need mobile data, the turbo mode controller unit 708 disables the turbo mode feature as indicated in step 810 and waits for the next foreground application to open or till the current foreground application closes or goes into the background (step 812).
However, if it is ascertained by the turbo mode controller unit 708 that the current foreground application needs network or mobile data, the turbo mode controller unit 708 ascertains if the Auto Turbo Mode setting is enabled as indicated in step 814. If it is found that the turbo mode setting is enabled, the turbo mode feature is activated by the network bandwidth manager unit 710 till that application is in the foreground as indicated in step 816. Once the turbo mode feature is activated, full network bandwidth is allocated to the currently active foreground application and the network bandwidth previously allocated to all other concurrently running applications is de-allocated till the time the current foreground application closes or goes into the background. And once the user exits the current foreground application or it goes into the background, the network bandwidth manager unit 710 disables the turbo mode feature and normal behaviour for network access will be activated. If the Auto Turbo Mode setting is not enabled, then a confirmation prompt is shown to the user to enable the turbo mode feature as indicated in step 818. If accepted by the user, turbo mode functionality is activated for the duration till that application is running in the foreground. And once the user exits that application, turbo mode functionality is disabled. Thereafter, the process shall repeat for the next foreground application.
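The decision flow of steps 804-818 can be sketched as follows. The class and function names below are illustrative assumptions standing in for the turbo mode controller unit 708 and the network bandwidth manager unit 710; they are not taken from the specification.

```python
from dataclasses import dataclass


@dataclass
class ForegroundApp:
    """Illustrative representation of a detected foreground application."""
    name: str
    needs_network_data: bool


class TurboModeController:
    """Minimal sketch of the Figure 8 decision flow (names assumed)."""

    def __init__(self, auto_turbo_setting, prompt_user):
        self.auto_turbo_setting = auto_turbo_setting
        self.prompt_user = prompt_user  # callback returning True/False
        self.turbo_active = False

    def on_foreground_app(self, app):
        # Step 806: does the current foreground application need data?
        if not app.needs_network_data:
            # Steps 808-810: disable turbo mode if it is still active.
            self.turbo_active = False
            return self.turbo_active
        # Step 814: is the Auto Turbo Mode setting enabled?
        if self.auto_turbo_setting:
            self.turbo_active = True  # step 816: activate turbo mode
        else:
            # Step 818: show a confirmation prompt to the user.
            self.turbo_active = self.prompt_user()
        return self.turbo_active

    def on_app_exit(self):
        # Turbo mode is disabled when the app closes or goes background.
        self.turbo_active = False
```

With the Auto Turbo Mode setting enabled, `on_foreground_app` activates turbo mode without prompting; with it disabled, activation depends on the user's response to the prompt, mirroring steps 814-818.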
In an embodiment, if it is determined that a foreground application needs turbo mode functionality, the turbo mode module 702, in operational interconnection with the network policy manager module 704, sets the turbo mode flag enabled for that application ID in the network policy. When background running applications, sync and update services try to access the network, the turbo mode module checks the network policy to determine whether any foreground application's turbo mode flag is enabled or not. If the turbo mode flag is enabled in the network policy, the turbo mode module will restrict the network access for the IDs of background running apps, sync and update services till that foreground application is running. After rejecting network access for background running apps, sync adapters and update services, the foreground application will have full network bandwidth, which will improve network performance and improve user experience. Once that foreground application closes or moves into the background, the turbo mode flag in the network policy is reset, and a broadcast notification that the network bandwidth is available is sent to all other applications running on the electronic device. Once the turbo mode flag is reset, it will allow those background running apps, sync and update services to access the network normally. In this way, the background running applications, sync and update services are only suspended for the duration till the foreground application with the turbo mode feature is running. Once the foreground application stops or goes into the background, those other background running apps, sync and update services can access the network normally. In case of multiple foreground applications, all applications will set the turbo mode flag in the network policy with respect to their application IDs and, once an application closes or moves into the background, each will reset the turbo mode flag for its own application ID in the network policy.
In an alternate embodiment, a method for allocating bandwidth is provided. The method includes detecting a current foreground application and determining if the current foreground application requires network data and the turbo mode functionality. If turbo mode is required for the current foreground application, the method includes setting the turbo mode flag as per the application ID. To set the turbo mode flag, the application ID and a turbo mode entry are written in the network policy. The method includes checking whether the turbo mode flag is set or not for every network request. If the turbo mode flag is enabled, then the method includes verifying whether the requesting application ID matches the application ID entry. If the ID matches, internet access is allowed; else internet access is blocked. The method further includes removing the flag from the network policy once the current foreground application exits or goes into the background. Thereafter, the turbo mode flag is reset and the entry is removed from the network policy.
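The flag lifecycle described above (set on entering the foreground, checked on every network request, reset on exit) can be sketched as a small policy object. The class, method names and application IDs below are illustrative assumptions, not the specification's actual network policy interface.

```python
class NetworkPolicy:
    """Minimal sketch of a per-application turbo mode flag (names assumed)."""

    def __init__(self):
        self._turbo_flags = set()  # application IDs with turbo mode set

    def set_turbo_flag(self, app_id):
        # Write the application ID / turbo mode entry into the policy.
        self._turbo_flags.add(app_id)

    def reset_turbo_flag(self, app_id):
        # Remove the entry when the foreground application exits.
        self._turbo_flags.discard(app_id)

    def allow_network_access(self, requesting_app_id):
        """Checked for every network request: while any turbo mode flag
        is set, only flagged foreground applications get access."""
        if not self._turbo_flags:
            return True  # normal behaviour: no flag set, allow everyone
        return requesting_app_id in self._turbo_flags


policy = NetworkPolicy()
policy.set_turbo_flag("video_hub")
policy.allow_network_access("video_hub")   # True: foreground app keeps access
policy.allow_network_access("mail_sync")   # False: background sync suspended
policy.reset_turbo_flag("video_hub")
policy.allow_network_access("mail_sync")   # True: normal access restored
```

The set-based flag store also covers the multiple-foreground-application case: each application adds and later removes its own ID independently.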
EXEMPLARY MANIFESTATION OF RESULTS:
The forthcoming description of Figures 9-14 depicts examples to illustrate the usefulness of the present invention. However, it should be strictly understood that the forthcoming examples shall not be construed as limitations on the present invention, and that the present invention may be extended to cover analogous manifestations through other like mechanisms.
Let us consider a situation where four applications (a browser application (e.g. S Search), a mail application (e.g. S Mail), a chat application (e.g. ChatOn), and a video streaming application (e.g. Video Hub)) are currently active and concurrently running on a mobile device for the purpose of depicting the examples without and with turbo mode.
Browser Application Example
Figures 9.1 and 9.2 depict the network usage statistics in case of a Browser Application (e.g. S Search) being the current foreground application running without and with turbo mode respectively in accordance with the invention.
Figure 9.1 depicts the network usage/allocation statistics when a user is browsing rich multimedia content, such as high resolution images, in the S Search application without turbo mode functionality and with other applications, such as S Mail sync, ChatOn and Video Hub, running in the background consuming shared bandwidth. The browser application takes considerably more time to load images as full bandwidth is not allocated to the S Search application.
Figure 9.2 depicts the network usage/allocation statistics when the user is using turbo mode functionality. As can be seen in the bar graph, 100% bandwidth is allocated to the browser application and the network bandwidth available to the other background activities is suspended till the browser is in the foreground. As a result of 100% bandwidth allocation, images will load faster in comparison to the S Search application running without turbo mode functionality as illustrated in figure 9.1.
Chat Application Example
Figures 10.1 and 10.2 depict the network usage statistics in case of a Chat Application, such as ChatOn, being the current foreground application running without and with turbo mode respectively in accordance with the invention.
Figure 10.1 depicts the network usage/allocation statistics when a user is uploading or downloading rich multimedia content, such as high resolution images/videos, in the ChatOn application without turbo mode functionality and with other applications, such as S Mail sync, browsing/downloading in a browser application such as S Search, and Video Hub, running in the background consuming shared bandwidth. As a result of shared bandwidth, the ChatOn application takes considerably more time to upload or download images/videos as full bandwidth is not allocated to the ChatOn application.
Figure 10.2 depicts the network usage/allocation statistics when the user is using turbo mode functionality. As can be seen in the bar graph, 100% bandwidth is allocated to the ChatOn application and the network bandwidth available to the other background activities is suspended till the ChatOn application is in the foreground. As a result of 100% bandwidth allocation, images will load faster in comparison to loading images (or videos or any documents) in the ChatOn application working without turbo mode functionality as illustrated in figure 10.1.
Mail Application Example
Figures 11.1 and 11.2 depict the network usage statistics in case of a mail application, such as S Mail, being the current foreground application without and with turbo mode respectively in accordance with the invention.
Figure 11.1 depicts the network usage/allocation statistics when a user is trying to sync mails without turbo mode functionality and with other applications, such as the ChatOn application, browsing/downloading in a browser such as S Search, and Video Hub, running in the background consuming shared bandwidth. As a result of shared bandwidth, the S Mail application takes considerably more time to load mails as full bandwidth is not allocated to the S Mail application.
Figure 11.2 depicts the network usage/allocation statistics when the user is using turbo mode functionality. As can be seen in the bar graph, 100% bandwidth is allocated to the S Mail application and the network bandwidth available to other background activities is suspended till the S Mail application is in the foreground. As a result of 100% bandwidth allocation, emails will be synced faster in comparison to that in the S Mail application working without turbo mode functionality as illustrated in figure 11.1.
Video Sharing/ Streaming Example
Figures 12.1 and 12.2 depict the network usage statistics in case of a video streaming application (e.g. Video Hub), being the current foreground application, without and with turbo mode in accordance with the invention.
Figure 12.1 depicts the network usage/allocation statistics when a user is trying to play any rich video in the Video Hub application without turbo mode functionality and with other applications, such as ChatOn, browsing/downloading in a browser application such as S Search, and S Mail sync, running in the background consuming shared bandwidth. As a result of shared bandwidth, the Video Hub application takes considerably more time to stream rich video as full bandwidth is not allocated to the Video Hub application.
Figure 12.2 depicts the network usage/allocation statistics when the user is using turbo mode functionality. As can be seen in the bar graph, 100% bandwidth is allocated to the Video Hub application in accordance with the invention and the network bandwidth available to other background activities is suspended till the Video Hub application is in the foreground. As a result of 100% bandwidth allocation, video streaming is faster in comparison to that in the Video Hub application working without turbo mode functionality as illustrated in figure 12.1.
The following example as illustrated in Figure 13 depicts a scenario when turbo mode is disabled by default.
Referring to Figure 13.1, when a video streaming application, e.g. the Video Hub application, is opened and is currently being accessed by the user, the Turbo Mode module detects that Turbo Mode is needed for this current foreground application. A Turbo Mode prompt message is shown to the user to confirm whether the user wishes to enable the turbo mode or not.
Referring to figure 13.2, if the user selects Ok, then Turbo Mode is automatically enabled and a notification message, such as a toast notifying the user that Turbo Mode has been enabled, is displayed on the device display. Once the Turbo Mode is enabled, 100% bandwidth is allocated to the Video Hub application in accordance with the invention as can be seen in the bar graph and other background activities are suspended till the Video Hub application is in the foreground.
Referring to figure 13.3, when the Video Hub application is closed, Turbo Mode is automatically disabled and a toast message is displayed to the user. Once the Turbo Mode is disabled, normal behaviour for network access is activated as is illustrated in the bar graph.
The following example as illustrated in Figure 14 depicts the scenario when turbo mode is enabled by default.
Figure 14.1 illustrates an interface depicting that the Turbo Mode setting is enabled by default. Referring to Figure 14.2, when a video streaming application, e.g. the Video Hub application, is opened, the Turbo Mode module detects that Turbo Mode is needed for this foreground application (Video Hub) and sends a toast message to the user depicting that the Turbo Mode is detected for the current foreground application.
Referring to Figure 14.3, once it is detected that Turbo Mode is needed for this foreground application (Video Hub), the Turbo Mode is automatically enabled and a toast message is displayed to the user depicting that the Turbo Mode is enabled. Once the Turbo Mode is enabled, 100% bandwidth is allocated to the Video Hub application in accordance with the invention as can be seen in the bar graph and other background activities are suspended till the Video Hub application remains in the foreground or the turbo mode feature is disabled by the user. When Video Hub closes, Turbo Mode is automatically disabled and a toast message is displayed to the user. Once the Turbo Mode is disabled, normal behaviour for network access is activated as is illustrated in the bar graph (similar to the illustration made in figure 13.3).
Referring to Figure 15, a hardware configuration of the device 200, 400 in the form of a computer system 1500 is shown. The computer system 1500 can include a set of instructions that can be executed to cause the computer system 1500 to perform any one or more of the methods disclosed. The computer system 1500 may operate as a standalone device or may be connected, e.g., using a network, to other computer systems or peripheral devices.
In a networked deployment, the computer system 1500 may operate in the capacity of a server or as a client user computer in a server-client user network environment, or as a peer computer system in a peer-to-peer (or distributed) network environment. The computer system 1500 can also be implemented as or incorporated into various devices, such as a personal computer (PC), a tablet PC, a personal digital assistant (PDA), a mobile device, a palmtop computer, a laptop computer, a desktop computer, a communications device, a wireless telephone, a land-line telephone, a control system, a camera, a scanner, a facsimile machine, a printer, a pager, a personal trusted device, a web appliance, a network router, switch or bridge, or any other machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single computer system 1500 is illustrated, the term "system" shall also be taken to include any collection of systems or sub-systems that individually or jointly execute a set, or multiple sets, of instructions to perform one or more computer functions.
The computer system 1500 may include a processor 1502, e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both. The processor 1502 may be a component in a variety of systems. For example, the processor 1502 may be part of a standard personal computer or a workstation. The processor 1502 may be one or more general processors, digital signal processors, application specific integrated circuits, field programmable gate arrays, servers, networks, digital circuits, analog circuits, combinations thereof, or other now known or later developed devices for analysing and processing data. The processor 1502 may implement a software program, such as code generated manually (i.e., programmed).
The computer system 1500 may include a memory 1504 that can communicate via a bus 1508. The memory 1504 may be a main memory, a static memory, or a dynamic memory. The memory 1504 may include, but is not limited to, computer readable storage media such as various types of volatile and non-volatile storage media, including but not limited to random access memory, read-only memory, programmable read-only memory, electrically programmable read-only memory, electrically erasable read-only memory, flash memory, magnetic tape or disk, optical media and the like. In one example, the memory 1504 includes a cache or random access memory for the processor 1502. In alternative examples, the memory 1504 is separate from the processor 1502, such as a cache memory of a processor, the system memory, or other memory. The memory 1504 may be an external storage device or database for storing data. Examples include a hard drive, compact disc ("CD"), digital video disc ("DVD"), memory card, memory stick, floppy disc, universal serial bus ("USB") memory device, or any other device operative to store data. The memory 1504 is operable to store instructions executable by the processor 1502. The functions, acts or tasks illustrated in the figures or described may be performed by the programmed processor 1502 executing the instructions stored in the memory 1504. The functions, acts or tasks are independent of the particular type of instruction set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firmware, micro-code and the like, operating alone or in combination. Likewise, processing strategies may include multiprocessing, multitasking, parallel processing and the like.
As shown, the computer system 1500 may or may not further include a display unit 1510, such as a liquid crystal display (LCD), an organic light emitting diode (OLED), a flat panel display, a solid state display, a cathode ray tube (CRT), a projector, a printer or other now known or later developed display device for outputting determined information. The display 1510 may act as an interface for the user to see the functioning of the processor 1502, or specifically as an interface with the software stored in the memory 1504 or in the drive unit 1516.
Additionally, the computer system 1500 may include an input device 1512 configured to allow a user to interact with any of the components of system 1500. The input device 1512 may be a number pad, a keyboard, or a cursor control device, such as a mouse, or a joystick, touch screen display, remote control or any other device operative to interact with the computer system 1500.
The computer system 1500 may also include a disk or optical drive unit 1516. The disk drive unit 1516 may include a computer-readable medium 1522 in which one or more sets of instructions 1524, e.g. software, can be embedded. Further, the instructions 1524 may embody one or more of the methods or logic as described. In a particular example, the instructions 1524 may reside completely, or at least partially, within the memory 1504 or within the processor 1502 during execution by the computer system 1500. The memory 1504 and the processor 1502 also may include computer-readable media as discussed above.
The present invention contemplates a computer-readable medium that includes instructions 1524 or receives and executes instructions 1524 responsive to a propagated signal so that a device connected to a network 1526 can communicate voice, video, audio, images or any other data over the network 1526. Further, the instructions 1524 may be transmitted or received over the network 1526 via a communication port or interface 1520 or using a bus 1508. The communication port or interface 1520 may be a part of the processor 1502 or may be a separate component. The communication port 1520 may be created in software or may be a physical connection in hardware. The communication port 1520 may be configured to connect with a network 1526, external media, the display 1510, or any other components in system 1500 or combinations thereof. The connection with the network 1526 may be a physical connection, such as a wired Ethernet connection or may be established wirelessly as discussed later. Likewise, the additional connections with other components of the system 1500 may be physical connections or may be established wirelessly. The network 1526 may alternatively be directly connected to the bus 1508.
The network 1526 may include wired networks, wireless networks, Ethernet AVB networks, or combinations thereof. The wireless network may be a cellular telephone network, an 802.11, 802.16, 802.20, 802.1Q or WiMax network. Further, the network 1526 may be a public network, such as the Internet, a private network, such as an intranet, or combinations thereof, and may utilize a variety of networking protocols now available or later developed including, but not limited to TCP/IP based networking protocols.
In an alternative example, dedicated hardware implementations, such as application specific integrated circuits, programmable logic arrays and other hardware devices, can be constructed to implement various parts of the system 1500.
Applications that may include the systems can broadly include a variety of electronic and computer systems. One or more examples described may implement functions using two or more specific interconnected hardware modules or devices with related control and data signals that can be communicated between and through the modules, or as portions of an application-specific integrated circuit. Accordingly, the present system encompasses software, firmware, and hardware implementations.
The system described may be implemented by software programs executable by a computer system. Further, in a non-limiting example, implementations can include distributed processing, component/object distributed processing, and parallel processing. Alternatively, virtual computer system processing can be constructed to implement various parts of the system.
The system is not limited to operation with any particular standards and protocols. For example, standards for Internet and other packet switched network transmission (e.g., TCP/IP, UDP/IP, HTML, HTTP) may be used. Such standards are periodically superseded by faster or more efficient equivalents having essentially the same functions. Accordingly, replacement standards and protocols having the same or similar functions as those disclosed are considered equivalents thereof.
The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of embodiments is at least as broad as given by the following claims.
Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any component(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or component of any or all the claims.
| # | Name | Date |
|---|---|---|
| 1 | 3085-DEL-2014-IntimationOfGrant28-07-2023.pdf | 2023-07-28 |
| 2 | FORM 5.pdf | 2014-11-13 |
| 3 | 3085-DEL-2014-PatentCertificate28-07-2023.pdf | 2023-07-28 |
| 4 | FORM 3.pdf | 2014-11-13 |
| 5 | form 26.pdf | 2014-11-13 |
| 6 | 3085-DEL-2014-Written submissions and relevant documents [30-05-2023(online)].pdf | 2023-05-30 |
| 7 | 4159IN008_Specification.pdf | 2014-11-13 |
| 8 | 3085-DEL-2014-FORM-26 [16-05-2023(online)].pdf | 2023-05-16 |
| 9 | 4159In008_Drawings.pdf | 2014-11-13 |
| 10 | 3085-DEL-2014-Correspondence to notify the Controller [15-05-2023(online)].pdf | 2023-05-15 |
| 11 | 3085-DEL-2014-US(14)-HearingNotice-(HearingDate-17-05-2023).pdf | 2023-05-01 |
| 12 | 3085-del-2014-Form-1-(17-11-2014).pdf | 2014-11-17 |
| 13 | 3085-DEL-2014-Correspondence-171114.pdf | 2014-12-04 |
| 14 | 3085-DEL-2014-CLAIMS [08-05-2020(online)].pdf | 2020-05-08 |
| 15 | 3085-DEL-2014-PA [18-09-2019(online)].pdf | 2019-09-18 |
| 16 | 3085-DEL-2014-COMPLETE SPECIFICATION [08-05-2020(online)].pdf | 2020-05-08 |
| 17 | 3085-DEL-2014-ASSIGNMENT DOCUMENTS [18-09-2019(online)].pdf | 2019-09-18 |
| 18 | 3085-DEL-2014-DRAWING [08-05-2020(online)].pdf | 2020-05-08 |
| 19 | 3085-DEL-2014-8(i)-Substitution-Change Of Applicant - Form 6 [18-09-2019(online)].pdf | 2019-09-18 |
| 20 | 3085-DEL-2014-FER_SER_REPLY [08-05-2020(online)].pdf | 2020-05-08 |
| 21 | 3085-DEL-2014-OTHERS [08-05-2020(online)].pdf | 2020-05-08 |
| 22 | 3085-DEL-2014-OTHERS-101019.pdf | 2019-10-14 |
| 23 | 3085-DEL-2014-Correspondence-101019.pdf | 2019-10-14 |
| 24 | 3085-DEL-2014-FER.pdf | 2019-11-13 |
| 25 | Search3085DEL2014_01-11-2019.pdf | |