Abstract: The present invention relates to a method and a device for displaying content. In accordance with one embodiment, the method comprises detecting an event associated with a display of content on a mobile device. An expiry of a predetermined time period from the detected event is determined. A non-reception of user-input is detected upon expiry of the predetermined time period. In response to the detection of the non-reception of user-input, a list of learning frames associated with the content is fetched from a storage unit. A learning frame is then selected from the list of learning frames and displayed on the mobile device.
FIELD OF INVENTION
The present invention relates to displaying digital content on a mobile device.
BACKGROUND OF INVENTION
Nowadays, the world around us is full of information. People are constantly busy on various devices exploring and reading information and content. They are bombarded with advertisements, notifications, alerts, and messages from companies or systems, all of which compete for attention. To top it all, people also juggle multiple priorities. This scenario has occasioned some noticeable behavioral changes. For example, people these days suffer from an overdose of information that effectively reduces the amount of time spent on individual tasks. As a result, the average attention span has gone down drastically amid so many distractions, putting a roadblock on how much content a person can consume at once. Further, patience levels have lowered substantially; for example, most people generally leave a learning exercise midway due to lack of discipline and motivation.
Some solutions are available that solve the above-mentioned problems to a certain extent. In one solution, byte-size notifications are pushed to the mobile device. Examples of such notifications include byte-size news pushed by newspaper applications, byte-size advertisements and radio jingles pushed by media content applications, and byte-size learning frames pushed by mobile-based learning applications. The byte-size content is crisp content intended for quick absorption by users. Sometimes such content is made interactive so that users engage and spend more time viewing and learning. Specifically with mobile-based learning applications, learning modules are followed by assessments at the end to assess the effectiveness of learning. However, all these solutions are still dependent upon the pull factor, i.e., the user is pulled towards the application for detailed content in which the user may or may not be interested. After a point, users become immune to all these pull notifications and alerts and stop paying attention or start ignoring them.
Further, mobile-based learning applications provide short courses, for example, in the form of downloadable or online/offline viewable lessons, videos, presentations, games, etc., for the purposes of immersive learning and competency building. Thus, upon receiving the notification, the user may access the short course. However, if such courses are longer than 5-10 minutes, the course completion rate typically remains at 10-20% due to lack of discipline and motivation, which further leads to the user discontinuing the course. Some solutions are available in the market that address the above-mentioned issues. By way of example, US 20150286383 discloses systems and methods for presenting content to users using desktop widgets. The systems and methods allow users to quickly and easily access content (such as news articles) from their home screen without having to independently start a software application to do so. However, the user might not be available to read such content at the time when it is presented. Thus, the widgets pile up at the user interface and the user might dismiss all or some of them in one go without going through them.
By way of another example, US8806378 discloses a mobile phone messaging system and method for managing the display of messages to mobile phone users. A mobile client application operates on the user's mobile phone. Mobile content providers manage the display of messages and related interactions throughout a specific period of time (e.g., daily, weekly, bi-weekly). Phone wakeup time data and message identifying data are transmitted from a mobile content provider server and stored in a mobile phone. The wakeup times are also added to a phone registry that facilitates launching of applications at the times indicated in the registry. At the specified wakeup times, the mobile client application determines the message identifying data associated with the wakeup time, connects to the mobile content provider server, and provides the message identifying data. The provider responds with a specific message and the mobile client application displays the message. However, the user might not be available to read such a message at the time when it is presented. Thus, messages pile up at the user interface and the user might dismiss all or some of them in one go without going through them. Further, the system does not keep a record or track of the learning history of the user.
By way of another example, US5827071 discloses a method, computer program product, and system for teaching and reinforcing concepts, principles, and other learned information without requiring user initiation of a learning sequence. Learning or reinforcement occurs by presenting "learning frames" in the environment automatically. The user of the environment receives these intrusive or non-intrusive opportunities for learning while doing other tasks within the environment; the user can be interrupted from the task at hand and be required to respond to the presented learning frame, or can simply have the opportunity for learning without interruption of the task at hand. However, the learning frames are presented only at pre-specified times. The presentation of learning frames does not depend upon the time elapsed since the user last accessed applications or viewed such learning frames.
By way of another example, US6301573 discloses a recurrent training method for providing training-related information on a particular subject to a user of a computer. During a first training session, the training-related information is presented to the user at a first time, and during subsequent training sessions at recurrent times thereafter under computer control. Upon initiation of the subsequent training sessions, either by presenting training-related information to the user or by prompting the user for approval to present such information, the training method interrupts the user's interaction with other on-going activities on the computer, thereby reminding the computer user, at various times, of the need to perform training and, at the user's discretion, providing the user with recurrent training sessions. The training method also enables the user to establish the recurrent times at which subsequent training sessions are initiated by the computer, by allowing the user to define a period of time between the termination of a training session and the initiation of a subsequent training session. The training method additionally enables the computer to suppress the initiation of one or more subsequent training sessions upon direction, given prior to initiation, to the computer by the user in the form of a period of time during which the initiation of subsequent training sessions is to be suppressed. However, the training sessions are presented only at pre-specified times. The presentation of training sessions does not depend upon the time elapsed since the user last accessed a training session.
Therefore, there is an unmet need for a solution that overcomes the disadvantages of the existing solutions.
SUMMARY OF THE INVENTION
In accordance with the purposes of the invention, the present invention, as embodied and broadly described herein, provides for displaying content on a mobile device. Accordingly, an event associated with a display of content on the mobile device is detected and an expiry of a predetermined time period from the detected event is determined. Upon detection of a non-reception of user-input after expiry of the predetermined time period, a list of learning frames associated with the content is fetched from a storage unit. The user-input is indicative of accessing an application associated with the content on the mobile device. The predetermined time period is based on at least one of a user-defined time period, a default time period, a pattern associated with the display of the learning frame, and a pattern associated with the access of the application. Thereafter, a learning frame is selected from the list of learning frames and displayed on the mobile device. The learning frame includes at least one snippet of the content or byte-size content.
The advantages of the present invention include, but are not limited to, continuous and more disciplined consumption of content, as the content is pushed to the mobile device rather than the user being pulled into the application. Such pushing of content is based on a predetermined time period or the user's pattern of consuming the content. Thus, the user is provided with learning frames as per the user's interest. Further, the learning frame includes a snippet of the content or byte-size content. This invokes interest in the user, as the user can view the information without having to access the application. Such continuous invocation of interest can finally enable the user to access the application and further consume the content, specifically in mobile-based learning applications.
These and other aspects as well as advantages will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings and claims.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
To further clarify the advantages and features of the invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which is illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings in which:
Figure 1 illustrates method for displaying content on a mobile device, in accordance with an embodiment of the present invention.
Figure 2 illustrates an exemplary system to implement the present invention, in accordance with an embodiment of the present invention.
Figure 3 illustrates an exemplary network environment that facilitates displaying of content on a mobile device, in accordance with an embodiment of present invention.
Figures 4 and 5 illustrate an exemplary flow chart for indicating the access flow of content application, in accordance with an embodiment of present invention.
Figure 6 illustrates an example learning frame for displaying content on the mobile device, in accordance with an embodiment of present invention.
It may be noted that to the extent possible, like reference numerals have been used to represent like elements in the drawings. Further, those of ordinary skill in the art will appreciate that elements in the drawings are illustrated for simplicity and may not have been necessarily drawn to scale. For example, the dimensions of some of the elements in the drawings may be exaggerated relative to other elements to help to improve understanding of aspects of the invention. Furthermore, the one or more elements may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having the benefits of the description herein.
DETAILED DESCRIPTION
It should be understood at the outset that although illustrative implementations of the embodiments of the present disclosure are illustrated below, the present invention may be implemented using any number of techniques, whether currently known or in existence. The present disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary design and implementation illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
The term “some” as used herein is defined as “none, or one, or more than one, or all.” Accordingly, the terms “none,” “one,” “more than one,” “more than one, but not all” or “all” would all fall under the definition of “some.” The term “some embodiments” may refer to no embodiments or to one embodiment or to several embodiments or to all embodiments. Accordingly, the term “some embodiments” is defined as meaning “no embodiment, or one embodiment, or more than one embodiment, or all embodiments.”
The terminology and structure employed herein is for describing, teaching and illuminating some embodiments and their specific features and elements and does not limit, restrict or reduce the spirit and scope of the claims or their equivalents.
More specifically, any terms used herein such as but not limited to “includes,” “comprises,” “has,” “consists,” and grammatical variants thereof do NOT specify an exact limitation or restriction and certainly do NOT exclude the possible addition of one or more features or elements, unless otherwise stated, and furthermore must NOT be taken to exclude the possible removal of one or more of the listed features and elements, unless otherwise stated with the limiting language “MUST comprise” or “NEEDS TO include.”
Whether or not a certain feature or element was limited to being used only once, either way it may still be referred to as “one or more features” or “one or more elements” or “at least one feature” or “at least one element.” Furthermore, the use of the terms “one or more” or “at least one” feature or element do NOT preclude there being none of that feature or element, unless otherwise specified by limiting language such as “there NEEDS to be one or more . . . ” or “one or more element is REQUIRED.”
Unless otherwise defined, all terms, and especially any technical and/or scientific terms, used herein may be taken to have the same meaning as commonly understood by one having an ordinary skill in the art.
Reference is made herein to some “embodiments.” It should be understood that an embodiment is an example of a possible implementation of any features and/or elements presented in the attached claims. Some embodiments have been described for the purpose of illuminating one or more of the potential ways in which the specific features and/or elements of the attached claims fulfill the requirements of uniqueness, utility and non-obviousness.
Use of the phrases and/or terms such as but not limited to “a first embodiment,” “a further embodiment,” “an alternate embodiment,” “one embodiment,” “an embodiment,” “multiple embodiments,” “some embodiments,” “other embodiments,” “further embodiment”, “furthermore embodiment”, “additional embodiment” or variants thereof do NOT necessarily refer to the same embodiments. Unless otherwise specified, one or more particular features and/or elements described in connection with one or more embodiments may be found in one embodiment, or may be found in more than one embodiment, or may be found in all embodiments, or may be found in no embodiments. Although one or more features and/or elements may be described herein in the context of only a single embodiment, or alternatively in the context of more than one embodiment, or further alternatively in the context of all embodiments, the features and/or elements may instead be provided separately or in any appropriate combination or not at all. Conversely, any features and/or elements described in the context of separate embodiments may alternatively be realized as existing together in the context of a single embodiment.
Any particular and all details set forth herein are used in the context of some embodiments and therefore should NOT be necessarily taken as limiting factors to the attached claims. The attached claims and their legal equivalents can be realized in the context of embodiments other than the ones used as illustrative examples in the description below.
Figure 1 illustrates a method 100 for displaying content on a mobile device, in accordance with an embodiment of the present invention.
At step 101, an event is detected on the mobile device. The event is associated with a display of content on the mobile device. The content may include, for example, text, images, and/or combinations thereof. Examples of content include, but are not limited to, a course, news, an advertisement, and/or combinations thereof.
At step 102, an expiry of a predetermined time period from the detected event is determined.
At step 103, a non-reception of a user-input is detected upon expiration of the predetermined time period. The user-input is indicative of accessing an application associated with the content on the mobile device. At step 104, upon detection of the non-reception of the said user-input, a list of learning frames associated with the content is fetched from the storage unit. The learning frame includes at least one snippet of the content. At step 105, a learning frame is selected from the list of learning frames and is displayed on the mobile device. The selection of the learning frame is based on a predetermined sequence of display of a plurality of learning frames within the at least one list of learning frames. The learning frame can be displayed, for example, as a pop-up notification or as a flash card on a screen of the mobile device (200).
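Steps 101 to 105 above can be sketched as follows. This is a minimal, illustrative Python sketch only; the class and method names are assumptions for the purpose of explanation and are not part of the invention:

```python
import time

# Illustrative sketch of steps 101-105 (names are assumed, not from the source).
class LearningFrameScheduler:
    def __init__(self, frames, predetermined_period_s):
        self.frames = list(frames)          # list of learning frames (step 104)
        self.period = predetermined_period_s
        self.last_event_ts = None
        self.next_index = 0                 # predetermined display sequence (step 105)

    def on_event(self, ts):
        """Step 101: record an event associated with a display of content."""
        self.last_event_ts = ts

    def on_user_input(self):
        """User accessed the content application; reset the timer."""
        self.last_event_ts = time.time()

    def maybe_display(self, now):
        """Steps 102-105: if the period has expired with no user-input,
        select and return the next learning frame in sequence."""
        if self.last_event_ts is None:
            return None
        if now - self.last_event_ts < self.period:
            return None                     # period not yet expired (step 102)
        frame = self.frames[self.next_index % len(self.frames)]
        self.next_index += 1
        self.on_event(now)                  # displaying a frame is itself an event
        return frame
```

As one usage example, with a 10-second period and an initial event at time 0, no frame is returned at time 5, while the first frame of the sequence is returned at time 10.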
Thus, the event detected at step 101 can be indicative of the display of a learning frame from one of said list of learning frames and/or a further list of learning frames. The detected event can also be indicative of the accessing of the content application (208) by the user.
Further, the predetermined time period is based on at least one of a user-defined time period, a default time period, a pattern associated with the display of the learning frame, and a pattern associated with the access of the content application (208).
Further, step 105 in the method 100 comprises additional steps. Accordingly, at step 106, before the learning frame is displayed, it is detected whether the mobile device is in an inactive mode. The inactive mode is associated with at least one of a locked screen state and a screen-off state of the mobile device.
At step 107, the mobile device is switched from the inactive mode to an active mode. The active mode is associated with at least one of an unlocked screen state and a screen-on state of the mobile device. After said switching of the mobile device, the learning frame is displayed on the mobile device.
Further, the method 100 comprises the following steps. At step 108, at least one user-selectable task associated with the learning frame is displayed along with the learning frame. The user-selectable tasks include at least one of a snooze input, a view input, and a progress input.
At step 109, a user-interaction with the mobile device is prevented until at least one of said user-selectable task is selected. On the basis of selection of the user-selectable task, the flow of the method changes as described below.
Referring to Figure 1b, at step 110, upon receiving a selection of the snooze input the user-interaction with the mobile device is allowed. As such, the learning frame is removed from the mobile device.
At step 111, the learning frame is again displayed upon expiry of a predetermined snooze time.
Further, upon receiving a selection of the view input, the user-interaction with the mobile device is allowed to access the content application and view the content within the application.
Now referring to Figure 1c, at step 112, upon receiving a selection of the progress input, the user-interaction with the mobile device is allowed. As such, the learning frame is removed from the mobile device.
At step 113, a further learning frame is displayed upon expiry of a predetermined time period, as described from steps 104 to 107.
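The branching on the user-selectable tasks (steps 108 to 113) can be illustrated with the following sketch. The constants, dictionary keys, and the handler name are assumptions introduced only for illustration:

```python
# Illustrative handling of the user-selectable tasks (steps 108-113).
SNOOZE, VIEW, PROGRESS = "snooze", "view", "progress"

def handle_task(task, state):
    """Return the next action for the selected task. `state` tracks the
    current frame index and whether device interaction is allowed."""
    state["interaction_allowed"] = True      # steps 110/112: unlock interaction
    if task == SNOOZE:
        # step 111: redisplay the same frame after the snooze time
        return {"redisplay_same_frame": True, "after": state["snooze_time_s"]}
    if task == VIEW:
        # allow access to the content application to view the content
        return {"open_application": True}
    if task == PROGRESS:
        # step 113: advance to a further learning frame after the period
        state["frame_index"] += 1
        return {"redisplay_same_frame": False, "after": state["period_s"]}
    raise ValueError(f"unknown task: {task}")
```

In this sketch, every branch first re-allows user interaction, mirroring steps 110 and 112; only the progress input advances the sequence of learning frames.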
Figure 2 illustrates an exemplary mobile device (200) for displaying content, in accordance with an embodiment of the present invention. The mobile device (200) comprises a storage unit (201), a display unit (202), a detecting unit (203), a timing unit (204), an input unit (205), a control unit (206), and a processor (207). The storage unit (201) can be used to store software and data. Examples of the display unit (202) include, but are not limited to, a liquid crystal display (LCD), an organic light emitting diode (OLED) display, a flat panel display, and a solid state display. Examples of the input unit (205) include, but are not limited to, a keypad and a stylus. The storage unit (201) can be internal to the mobile device (200) or external to the mobile device (200). The storage unit (201) can include any of a main memory, a static memory, or a dynamic memory. The storage unit (201) may be volatile or non-volatile memory, including but not limited to random access memory, read-only memory, programmable read-only memory, electrically programmable read-only memory, electrically erasable read-only memory, flash memory, magnetic tape or disk, optical media, a hard drive, a memory card, a floppy disk, and any other component capable of storing data.
Accordingly, the storage unit (201) can be used to store a content application (208) that is capable of fetching content from a web server (not shown in the figure) and displaying the content on the display unit (202) of the mobile device (200). The content may include, for example, text, images, and/or combinations thereof. Examples of content include, but are not limited to, a course, news, an advertisement, and/or combinations thereof. In one implementation, the user has to download the content application (208) on the mobile device (200). In another implementation, the content application can be a pre-downloaded application. Further, the user has to create a profile on the content application (208) for identification of the user. For example, to create the profile on the content application (208), the user may have to enter his name, mobile number, and email address. After successful verification of the user, the user has to enter one or more topics on which he wishes to receive content on the mobile device (200).
Further, the content can be delivered in the form of a learning frame on the display unit (202) of the mobile device (200). The learning frame is designed in such a manner so as to minimize the effort required for consuming the content. The learning frame includes at least one snippet of the content or byte-size content. The learning frame can be displayed, for example, as a pop-up notification or as a flash card on the display unit (202) of the mobile device (200). The content application (208) stores at least one list of learning frames (209) associated with the content in the storage unit (201). The at least one list of learning frames (209) can be downloaded periodically by the content application (208). As such, the user can define a time period for display of the learning frames. For example, the time period can be a short interval, such as 2 hours, or a longer interval, such as 24 hours. In addition, a default time period can be set by an administrator of the content application (208). The default time period can be less than or greater than the user-defined time period. As would be understood, the default time period is used when the user does not set a time period. The user-defined time period and/or the default time period can be stored as settings (210) in the storage unit (201).
In accordance with the present invention, a learning frame is displayed on the display unit (202) in an intrusive manner when the user is not consuming content via the content application (208). Accordingly, the detecting unit (203) tracks a pattern of the user accessing the content application (208) and stores the pattern in the storage unit (201) as a user-pattern (211). The said pattern associated with the access, for example, depends upon the frequency and the times at which the user has accessed the content application (208) to consume the content.
In operation, the detecting unit (203) detects an event associated with the display of content. In other words, the detecting unit (203) detects accessing of the content application (208). The timing unit (204) determines an expiry of a predetermined time period from the detected event. The predetermined time period is based on at least one of the user-pattern (211), indicative of the pattern of accessing the content application (208), and the settings (210), indicative of the user-defined time period and/or the default time period. In an example, the predetermined time period measured from the detected event can be different from the time period saved in the settings (210), based on the pattern stored in the user-pattern (211). In such an example, the predetermined time period measured based on the pattern overrides the time period saved in the settings (210). In another example, the predetermined time period measured from the detected event can be the same as the time period saved in the settings (210).
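The determination of the effective time period can be sketched as below. The averaging heuristic for deriving a period from the access pattern is only one possible choice, assumed here for illustration; the function names are likewise assumptions:

```python
# Illustrative sketch: a period derived from the stored access pattern
# (user-pattern 211) overrides the period stored in the settings (210).
def period_from_pattern(access_timestamps):
    """Derive a period from the user's access pattern as the average
    interval between consecutive accesses (one possible heuristic)."""
    if len(access_timestamps) < 2:
        return None                         # not enough history to infer a pattern
    gaps = [b - a for a, b in zip(access_timestamps, access_timestamps[1:])]
    return sum(gaps) / len(gaps)

def effective_period(settings_period_s, pattern_period_s=None):
    """Return the predetermined time period: the pattern-based period,
    when available, overrides the user-defined or default settings period."""
    if pattern_period_s is not None:
        return pattern_period_s
    return settings_period_s
```

For instance, accesses at times 0, 100, and 300 yield an inferred period of 150, which would then override a settings period of, say, 7200 seconds.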
Further, the input unit (205) receives a user-input indicative of accessing the content application (208) on the mobile device. The user-input can be received by way of non-limiting examples such as by a stylus, touch-gesture, a keypad etc. Based on the user-input received by the input unit (205), the control unit (206) tracks the pattern of accessing the content application (208).
Accordingly, if the user-input is not received upon expiry of the predetermined time period, then the control unit (206) fetches the at least one list of learning frames (209) from the storage unit (201). If the at least one list of learning frames (209) is not available, the control unit (206) may direct the content application (208) to fetch the at least one list of learning frames (209) from a web server hosting the content application (208).
Further, the control unit (206) selects a learning frame on the basis of a predetermined sequence from the fetched at least one list of learning frames (209). The predetermined sequence determines a linear arrangement of the learning frames within the at least one list of learning frames and/or among multiple lists of learning frames, in accordance with the pattern of the content consumed via the application (208) as stored in the storage unit (201). In an example of mobile-based learning, the predetermined sequence of learning frames depends upon the learning history of the user stored in the storage unit (201). Upon selecting a learning frame, the control unit (206) displays the selected learning frame on the display unit (202) of the mobile device (200).
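One possible realization of the sequence-based selection is sketched below, assuming the learning history is a set of already-consumed frames; this is an illustrative assumption, not a requirement of the invention:

```python
# Illustrative sketch of selecting a learning frame by a predetermined
# linear sequence across one or more lists, skipping frames already
# present in the user's learning history.
def select_frame(lists_of_frames, history):
    """Return the first frame, in the predetermined linear order across
    the lists, that the user has not yet seen; None when all are consumed."""
    for frame_list in lists_of_frames:
        for frame in frame_list:
            if frame not in history:
                return frame
    return None
```

For example, with lists [["a", "b"], ["c"]] and a history containing "a", the next frame selected is "b".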
In an implementation, before displaying the learning frame, the control unit (206) detects if the mobile device (200) is in an inactive mode. In such implementation, the inactive mode may be associated with at least one of locked screen state of the mobile device (200) and screen off state of the mobile device. If it is detected that the mobile device is in the inactive mode, the control unit (206) waits until the mobile device (200) is switched to an active mode. In such implementation, the active mode may be associated with at least one of un-locked screen state of the mobile device (200) and screen-on state of the mobile device (200). After the mobile device (200) is in active mode, the control unit (206) displays the learning frame on the display unit (202) of the mobile device (200).
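The waiting behavior in the inactive mode can be sketched as follows; the class and attribute names are assumptions for illustration:

```python
# Illustrative sketch: a selected learning frame is held while the mobile
# device is in the inactive mode (screen locked or off) and is displayed
# once the device switches to the active mode.
class DeferredDisplay:
    def __init__(self):
        self.pending = None     # frame waiting for the active mode
        self.displayed = []     # frames shown on the display unit

    def request_display(self, frame, active):
        """Display immediately in the active mode; otherwise defer."""
        if active:
            self.displayed.append(frame)
        else:
            self.pending = frame

    def on_mode_change(self, active):
        """Show any deferred frame when the device becomes active."""
        if active and self.pending is not None:
            self.displayed.append(self.pending)
            self.pending = None
```

Thus a frame requested while the screen is locked is not shown until an unlock (switch to active mode) occurs.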
Further, the control unit (206) displays at least one user-selectable task associated with the learning frame. The user-selectable task can be displayed in the form of buttons, icons, and the like. In an implementation, said at least one user-selectable task includes at least one of a snooze input, a view input, and a progress input. The user-interaction with the mobile device (200) is prevented until selection of said at least one user-selectable task.
Accordingly, upon receiving a selection of the snooze input, the control unit (206) allows the user-interaction with the mobile device (200). As such, the learning frame is removed from the display unit (202). Further, the control unit (206) displays the learning frame upon expiry of a predetermined snooze time. In an example, the predetermined snooze time can be a default snooze time. In another example, the predetermined snooze time can be user-defined snooze time. The predetermined snooze time is stored in the storage unit (201) as settings (210). Thus, the same learning frame along with the user-selectable tasks is displayed upon expiry of the predetermined snooze time. Again, the user has to select one of the user-selectable tasks to continue interaction with the mobile device (200).
Further, upon receiving a selection of the progress input, the control unit (206) allows the user-interaction with the mobile device (200). As such, the learning frame is removed from the display unit (202).
Thus, the control unit (206) tracks a pattern associated with the display of the learning frame. The said pattern associated with the display, for example, depends upon the frequency and time of display of the said learning frames. The control unit (206) stores the pattern in the storage unit (201) as the user-pattern (211). Based on the pattern of display of the learning frames, the control unit (206) determines the pattern of accessing the application (208) by the user. In an example, the control unit (206) determines that the user has not accessed the application (208) if learning frames are displayed periodically upon expiry of the predetermined time period.
Thus, the detecting unit (203) detects the event as indicative of display of a learning frame from the at least one list of learning frames (209). The timing unit (204) determines expiry of a further predetermined time period from the detected event. Thereafter, the control unit (206) displays a further learning frame upon expiry of the further predetermined time period when non-reception of the user-input is detected, as described earlier. In an implementation, said further predetermined time period can be a default time period. In another implementation, said further predetermined time period can be a user-defined time period. In one another implementation, said further predetermined time period is based on a pattern associated with the display of the learning frame. In yet another implementation, said further predetermined time period is based on a pattern associated with the access of the content application (208).
Further, upon receiving a selection of the view input, the control unit (206) allows the user-interaction with the mobile device to access the content application (208) and view the content within the content application (208).
Thus, the detecting unit (203) detects either the event indicative of accessing of the content application (208) or the event indicative of displaying of a learning frame. Based on the detection, the timing unit (204) determines an expiry of the time period. Upon expiry of the time period, the control unit (206) displays a learning frame upon non-reception of the user-input. The user-selectable task can further include other tasks, such as like and share, as known in the art.
The mobile device (200) further includes the processor (207) adapted to perform necessary functions of the mobile device (200). The processor (207) may be implemented as one or more processors, microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or the like. In one implementation, the above mentioned units, i.e., the detecting unit (203), the timing unit (204), and the control unit (206) are provided separately from the processor (207), as illustrated in the figure.
In another implementation, all the above mentioned units, i.e., the detecting unit (203), the timing unit (204), and the control unit (206) are present as a part of the processor (207).
In still another implementation, any one of the above mentioned units is present as a part of the processor (207) and the other units are provided separately from the processor (207). In one such implementation, only the detecting unit (203) is present as a part of the processor (207). In another such implementation, only the timing unit (204) is present as a part of the processor (207). In a further such implementation, only the control unit (206) is present as a part of the processor (207).
In yet another implementation, any combination of the above mentioned units is present as a part of the processor (207). In one such implementation, the detecting unit (203) and the timing unit (204) are present as a part of the processor (207). In another such implementation, the timing unit (204) and the control unit (206) are present as a part of the processor (207). In a further such implementation, the detecting unit (203) and the control unit (206) are present as a part of the processor (207).
In yet another implementation, the detecting unit (203), the timing unit (204), and the control unit (206) can be provided as a software unit/module. In one example of such implementation, the detecting unit (203), the timing unit (204), and the control unit (206) can be part of the content application (208). In another example of such implementation, the detecting unit (203), the timing unit (204), and the control unit (206) can be part of a separate application having access to the content application (208). In still another implementation, the detecting unit (203), the timing unit (204), and the control unit (206) can be provided as a hardware unit/module. In still another implementation, the detecting unit (203), the timing unit (204), and the control unit (206) can be provided as a combination of software and hardware units/modules.
Further, in one implementation, the display unit (202) can be the input unit (205). Examples of such a combined display unit (202) and input unit (205) include, but are not limited to, a touch screen display. Further, the mobile device (200) can include an output unit (not shown in the figure), such as a speaker. Although the illustration depicts specific units, it would be understood that the mobile device (200) and the units therein might include various software and/or hardware components or modules as necessary for implementing the invention.
Although a few combinations have been illustrated above, it is to be understood that any combination of the units is possible within the scope of the invention.
Figure 3 illustrates an exemplary network environment (300) that facilitates displaying of content on the mobile device (200), in accordance with an embodiment of the present invention. In such embodiment, the mobile device (200) and a web server (301) communicate with each other over a communication network (302). The communication network (302) can be any suitable wired or wireless communication network, including cellular communication, Wi-Fi (e.g., any suitable IEEE 802.11 protocol), the internet, 3G, LTE, etc. The content application (208) is capable of downloading content and the at least one list of the learning frames from the web server (301) on the mobile device (200) through the communication network (302).
Figure 4 illustrates an exemplary flow chart 400 indicating the access flow of the content application (208). At step 401, the user opens the content application (208) for the first time and creates a profile for user identification. After successful identification of the user, the content is downloaded on the mobile device (200) at step 402. At step 403, the user consumes content via the application (208). At step 404, the timing unit (204) counts the predetermined time period in the background. At step 405, upon expiration of the predetermined time period, it is determined whether the user-input was received in the last predetermined time period. The user-input can be indicative of accessing the content application (208) on the mobile device (200).
Accordingly, at step 406, in case the reception of user-input was detected by the detecting unit (203) in the last predetermined time period, no immediate action is required (‘Yes’ path from step 405) from the control unit (206). The flow will again resume from step 403 independently.
On the contrary, at step 407, in case the reception of the user-input was not detected by the control unit (206) in the last predetermined time period, the list of learning frames is fetched from the storage unit (201) (‘No’ path from step 405).
At step 408, the learning frame is selected from said list of learning frames based on the predetermined sequence. At step 409, the selected learning frame is displayed on the display unit (202) of the mobile device (200).
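Steps 407 to 409 above can be sketched as follows. This is an illustrative sketch only: the storage shape (a dict with a `"learning_frames"` key) and the `seq` field used for the predetermined sequence are assumptions, not details from the specification.

```python
# Hypothetical sketch of steps 407-409: fetch the stored list of learning
# frames and select the next one in the predetermined sequence.
def fetch_learning_frames(storage):
    """Step 407: fetch the at least one list of learning frames."""
    return storage.get("learning_frames", [])

def select_next_frame(frames, shown_count):
    """Step 408: pick the next frame in the predetermined sequence,
    wrapping around once every frame has been shown."""
    ordered = sorted(frames, key=lambda f: f["seq"])
    return ordered[shown_count % len(ordered)] if ordered else None

# Step 409 would hand the selected frame to the display unit (202).
storage = {"learning_frames": [
    {"seq": 2, "text": "Frame two"},
    {"seq": 1, "text": "Frame one"},
]}
frames = fetch_learning_frames(storage)
print(select_next_frame(frames, 0)["text"])  # first frame in the sequence
```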
Figure 5 illustrates an exemplary flow chart 500 for displaying of content on the mobile device (200) during operation, as discussed in Figure 2. After the selected learning frame is displayed at step 409, at step 501, at least one user-selectable task associated with the learning frame is displayed along with the learning frame. The user-selectable tasks include at least one of snooze input, view input, and progress input.
At step 502, it is determined whether the selected user-selectable task is snooze input. If the selected user-selectable task is snooze input, the process flows to step 503 (‘Yes’ path from step 502).
At step 503, settings associated with the snooze time of the learning frame are displayed. Accordingly, the setting for the snooze time can be updated/changed or kept at default at step 504. The updated/changed snooze time is stored in the settings (210). As described earlier, at step 505, upon expiry of the snooze time, the learning frame is again displayed.
If the selected user-selectable task is not the snooze input at step 502, the process flows to step 506 (‘No’ path from step 502). At step 506, it is determined whether the selected user-selectable task is view input. If the selected user-selectable task is view input, the process flows to step 404 (‘Yes’ path from step 506). In other words, once the user accesses the application (208) upon display of the learning frame to consume the content, the flow will again resume from step 403 independently.
On the contrary, if the selected user-selectable task is not the view input at step 506, the process flows to step 507 (‘No’ path from step 506). At step 507, the timing unit (204) starts counting the predetermined time period for displaying a further learning frame based on the predetermined sequence, as the selected user-selectable task is the progress input. Upon expiry of the predetermined time period, the process flows to step 505 for displaying the further learning frame.
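The task dispatch of flow chart 500 can be summarized in a small sketch. The handler shape, the returned action strings, and the default period values are assumptions introduced here for illustration; they are not taken from the specification.

```python
# Hedged sketch of the snooze / view / progress dispatch (steps 502-507).
def handle_task(task, settings):
    if task == "snooze":
        # Steps 503-505: redisplay the same frame after the snooze time,
        # which may have been updated in the settings (210).
        snooze = settings.get("snooze_time", 300)  # assumed default
        return ("redisplay_after", snooze)
    if task == "view":
        # Step 506, 'Yes' path: open the content application (208).
        return ("open_application", None)
    # Otherwise progress input (step 507): count the predetermined
    # period, then display the next frame in the sequence.
    return ("next_frame_after", settings.get("predetermined_period", 3600))

print(handle_task("snooze", {"snooze_time": 600}))  # ('redisplay_after', 600)
```

The point of the dispatch is that every path either returns control to the user immediately (view) or schedules a future frame display (snooze, progress), so the learning loop never terminates on its own.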
Figure 6 illustrates an example learning frame (600) displayed on the display unit (202) of the mobile device (200). In the example, the learning frame (600) is displayed for a mobile based soft skills learning course. The learning frame (600) can include a first portion (601) and a second portion (602). The first portion (601) includes an index (601a) identifying the sequence of the learning frame and the title of the content and/or module. In the example, the sequence of the learning frame is LeafE #001 of a module ‘Module 1’ titled ‘Assertiveness’. The first portion (601) further includes a combination of text and image to depict the snippet of the content.
Further, the second portion (602) includes the plurality of user-selectable tasks. The plurality of user-selectable tasks includes snooze input (603), view input (604), and progress input (605). Although not shown in the figure, the plurality of user-selectable tasks may include other options such as like and share, as known in the art.
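The two-portion learning frame of Figure 6 can be modeled as a small data structure. This is a minimal sketch assuming a flat record; the field names (`snippet_text`, `image_uri`, `tasks`) are illustrative and not from the specification.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical data model for the learning frame (600): the first portion
# (601) carries the index, titles, and snippet; the second portion (602)
# carries the user-selectable tasks.
@dataclass
class LearningFrame:
    index: str                    # e.g. "LeafE #001" (index 601a)
    module: str                   # e.g. "Module 1"
    title: str                    # e.g. "Assertiveness"
    snippet_text: str = ""        # text part of the content snippet
    image_uri: Optional[str] = None  # optional image in the first portion
    tasks: list = field(
        default_factory=lambda: ["snooze", "view", "progress"])

frame = LearningFrame("LeafE #001", "Module 1", "Assertiveness",
                      snippet_text="State your needs respectfully.")
print(frame.tasks)  # second-portion user-selectable tasks
```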
While certain present preferred embodiments of the invention have been illustrated and described herein, it is to be understood that the invention is not limited thereto. Clearly, the invention may be otherwise variously embodied, and practiced within the scope of the following claims.
CLAIMS:
We Claim:
1. A method for displaying content on a mobile device, the method comprising:
• detecting an event associated with a display of content on the mobile device;
• determining expiry of a predetermined time period from the detected event;
• detecting a non-reception of user-input upon expiry of the predetermined time period, the user-input being indicative of accessing an application associated with the content on the mobile device;
• fetching at least one list of learning frames associated with the content from a storage unit in response to the detection of non-reception of user-input; and
• selecting and displaying a learning frame from the list of learning frames on the mobile device.
2. The method as claimed in claim 1, wherein the event is associated with a display of content and is indicative of one of:
• display of learning frame from said at least one list of learning frames; and
• access of the application.
3. The method as claimed in claim 1, wherein the predetermined time period is based on at least one of:
• a user-defined time period;
• a default time period;
• a pattern associated with the display of the learning frame; and
• a pattern associated with the access of the application.
4. The method as claimed in claim 1, wherein the learning frame includes at least one snippet of the content.
5. The method as claimed in claim 1, wherein the selection of the learning frame is based on a predetermined sequence of display of a plurality of learning frames.
6. The method as claimed in claim 1, wherein the displaying further comprises:
• detecting the mobile device is in an inactive mode; and
• displaying the learning frame when the mobile device is switched from the inactive mode to an active mode.
7. The method as claimed in claim 6, wherein the inactive mode is associated with at least one of locked screen state of the mobile device and screen-off state of the mobile device.
8. The method as claimed in claim 6, wherein the active mode is associated with at least one of un-locked screen state of the mobile device and screen-on state of the mobile device.
9. The method as claimed in claim 1, further comprising:
• displaying at least one user-selectable task associated with the learning frame; and
• preventing a user-interaction with the mobile device until selection of the at least one user-selectable task.
10. The method as claimed in claim 9, wherein the at least one user-selectable task includes snooze input, view input, and progress input.
11. The method as claimed in claim 10, wherein upon receiving a selection of the snooze input, the method comprises:
• allowing the user-interaction with the mobile device; and
• displaying the learning frame upon expiry of a predetermined snooze time.
12. The method as claimed in claim 10, wherein upon receiving a selection of the view input, the method comprises:
• allowing the user-interaction with the mobile device to access the application and view the content within the application.
13. The method as claimed in claim 10, wherein upon receiving a selection of the progress input, the method comprises:
• allowing the user-interaction with the mobile device; and
• displaying a further learning frame upon expiry of a predetermined time period.
14. A mobile device for displaying content, the mobile device comprising:
• a detecting unit to detect an event associated with a display of content on the mobile device;
• a timing unit to determine expiry of a predetermined time period from the detected event; and
• a control unit to:
o detect a non-reception of user-input upon expiry of the predetermined time period, the user-input being indicative of accessing an application associated with the content on the mobile device;
o fetch at least one list of learning frames associated with the content from a storage unit in response to the detection of non-reception of user-input; and
o select and display a learning frame from the list of learning frames on the mobile device.
15. The mobile device as claimed in claim 14, wherein the event is one of:
• display of learning frame from said at least one list of learning frames; and
• access of the application.
16. The mobile device as claimed in claim 14, wherein the predetermined time period is based on at least one of:
• a user-defined time period;
• a default time period;
• a pattern associated with the display of the learning frame; and
• a pattern associated with the access of the application.
17. The mobile device as claimed in claim 14, wherein the learning frame includes at least one snippet of the content.
18. The mobile device as claimed in claim 14, wherein the control unit selects the learning frame based on a predetermined sequence of display of a plurality of learning frames.
19. The mobile device as claimed in claim 14, wherein the control unit further:
• detects the mobile device is in an inactive mode; and
• displays the learning frame when the mobile device is switched from the inactive mode to an active mode.
20. The mobile device as claimed in claim 19, wherein the inactive mode is associated with at least one of locked screen state of the mobile device and screen-off state of the mobile device.
21. The mobile device as claimed in claim 19, wherein the active mode is associated with at least one of un-locked screen state of the mobile device and screen-on state of the mobile device.
22. The mobile device as claimed in claim 14, wherein the control unit further:
• displays at least one user-selectable task associated with the learning frame; and
• prevents a user-interaction with the mobile device until selection of the at least one user-selectable task.
23. The mobile device as claimed in claim 22, wherein the at least one user-selectable task includes snooze input, view input, and progress input.
24. The mobile device as claimed in claim 23, wherein upon receiving a selection of the snooze input, the control unit further:
• allows the user-interaction with the mobile device; and
• displays the learning frame upon expiry of a predetermined snooze time.
25. The mobile device as claimed in claim 23, wherein upon receiving a selection of the view input, the control unit further:
• allows the user-interaction with the mobile device to access the application and view the content within the application.
26. The mobile device as claimed in claim 23, wherein upon receiving a selection of the progress input, the control unit further:
• allows the user-interaction with the mobile device; and
• displays a further learning frame upon expiry of a predetermined time period.
| # | Name | Date |
|---|---|---|
| 1 | Power of Attorney [16-12-2015(online)].pdf | 2015-12-16 |
| 2 | Form 5 [16-12-2015(online)].pdf | 2015-12-16 |
| 3 | Form 3 [16-12-2015(online)].pdf | 2015-12-16 |
| 4 | Drawing [16-12-2015(online)].pdf | 2015-12-16 |
| 5 | Description(Provisional) [16-12-2015(online)].pdf | 2015-12-16 |
| 6 | 4137-del-2015-GPA-(10-03-2016).pdf | 2016-03-10 |
| 7 | 4137-del-2015-Correspondence Others-(10-03-2016).pdf | 2016-03-10 |
| 8 | Other Patent Document [16-06-2016(online)].pdf | 2016-06-16 |
| 9 | 4137-del-2015-Form-1-(01-07-2016).pdf | 2016-07-01 |
| 10 | 4137-del-2015-Correspondence Others-(01-07-2016).pdf | 2016-07-01 |
| 11 | Other Patent Document [29-07-2016(online)].pdf | 2016-07-29 |
| 12 | 4137-DEL-2015-OTHERS-010816.pdf | 2016-08-05 |
| 13 | 4137-DEL-2015-Correspondence-010816.pdf | 2016-08-05 |
| 14 | Description(Complete) [16-12-2016(online)].pdf | 2016-12-16 |
| 15 | Description(Complete) [16-12-2016(online)].pdf_27.pdf | 2016-12-16 |
| 16 | Drawing [16-12-2016(online)].pdf | 2016-12-16 |
| 17 | Form 9 [20-12-2016(online)].pdf | 2016-12-20 |
| 18 | Form 18 [20-12-2016(online)].pdf | 2016-12-20 |
| 19 | 4137-DEL-2015-FER.pdf | 2020-02-10 |
| 20 | searchstrategy_05-02-2020.pdf | |
| 21 | 4137-DEL-2015-Response to office action [10-08-2020(online)].pdf | 2020-08-10 |
| 22 | 4137-DEL-2015-Response to office action [25-07-2022(online)].pdf | 2022-07-25 |
| 23 | 4137-DEL-2015-CLAIMS [12-09-2022(online)].pdf | 2022-09-12 |
| 24 | 4137-DEL-2015-FER_SER_REPLY [12-09-2022(online)].pdf | 2022-09-12 |
| 25 | 4137-DEL-2015-PatentCertificate13-02-2023.pdf | 2023-02-13 |
| 26 | 4137-DEL-2015-IntimationOfGrant13-02-2023.pdf | 2023-02-13 |