
Method And Device For Contextually Processing An Application

Abstract: The present invention relates to a method and a device for contextually processing an application. Accordingly, the method comprises receiving, within a first application (301), instructions to invoke a second application; receiving a selection of an area (303) within the first application (301); performing, within the selected area (303) of the first application (301), a task corresponding to the second application; receiving an output (305) from the second application in response to the task performed; determining a contextual use of the output of the second application in the first application (301) based on one or more parameters; identifying an action to be performed on the output (305) of the second application within the first application (301) based on the determined contextual use; and performing the action on the output (305) of the second application.


Patent Information

Application #:
Filing Date: 01 October 2015
Publication Number: 14/2017
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email: mail@lexorbis.com
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2020-10-20
Renewal Date:

Applicants

Samsung India Electronics Pvt. Ltd.
Logix Cyber Park, Plot No. C 28-29, Tower D - Ground to 10th Floor, Tower C - 7th to 10th Floor, Sector-62, Noida – 201301, Uttar Pradesh, India

Inventors

1. HARKAWAT, Ankur
458 Bhopalpura, Near Shastri Circle, Udaipur, Rajasthan - 313001, India

Specification

TECHNICAL FIELD
The present invention relates to a method for processing an input on a computing device and the computing device thereof. More particularly, the present invention relates to a method for contextually processing an application on the computing device in accordance with an input.

BACKGROUND
With the increasing penetration of smart phones, easy availability of and access to network infrastructure, and reduced prices of mobile data services, the use of mobile data has proliferated over the years and continues to increase. As such, users are now able to access a wide range of services over applications, which are downloaded and installed on the smart phones. Examples of such applications include chat applications, mail applications, messaging applications, social media applications, imaging applications, video applications, music applications, and document processing applications. In addition, users are now able to connect with other users and share data such as images, videos, text, audio, and music through various applications such as chat applications, email applications, and voice over IP (VOIP) applications.

Typically, the user creates data such as images, videos, audio recordings, and documents, and stores the data in memory prior to sharing the data via applications. In one example of sharing or uploading an image via an application such as an email application, a social media application, or a web page on a browser application, the user first captures the image and stores it in memory. The user then accesses the memory via the application and attaches or uploads the image in the application. In another example of sharing a document via an application such as an email application or a web page on a browser application, the user has to create the document prior to sharing it. Similarly, in yet another example of embedding a first document in a second document, the user has to pre-create and pre-store the first document and then access the pre-stored first document from the memory to embed it in the second document. However, this process requires the user to perform a number of steps on the smart phone and does not provide flexibility to the user to select any application from the underlying application in accordance with individual needs.
Various solutions are now available that reduce the number of such steps and provide flexibility and, correspondingly, a better user experience. In one solution, a camera application and an audio recording application can be accessed via corresponding icons provided in a messaging application. Thus, upon clicking on the camera application icon, the camera application is invoked. Upon capturing an image through the camera application, the image is automatically attached in a new message. In another solution, upon accessing a memo application, a window is provided with icons corresponding to a camera and audio recording. Upon accessing the camera, the camera opens and an image can be captured. Upon capturing the image, the image is saved in the memo.

However, in such solutions, only limited and specific applications, such as the camera application and the audio recording application, are configured for access based on the content acceptable by the underlying application, such as the messaging application and the memo application. Further, these specific applications can be accessed via icons available on the underlying application. As such, these icons always occupy some space on the underlying application.
In another solution, upon detecting a user input from an input region on a smart phone, an input mode from a set of input modes is identified. The set of input modes includes a text input, an equation input, and a drawing input. Based on the identified input mode, a message is created, wherein the message comprises input content processed according to the identified input mode.

In yet another solution, a message is composed in a first message composition mode according to a first message format. Upon receiving a predetermined touch input, the first message composition mode is switched to a second message composition mode for composing the message according to a second message format, such that the first message format is different from the second message format. Examples include insertion of images, contacts, and other text inputs.

However, such solutions are specific to messaging or texting applications only and cannot be used with other applications available on the smart phone.

In another solution, upon receiving a command to capture an image via pressing a button on a computing device, a camera is invoked and an image is captured. Upon capturing the image, the captured image is automatically attached to a document which was being viewed before receiving the command. In one aspect, a new page is added to the current document, the image is attached in that new page, and the metadata of the page is updated. In another aspect, the captured image is stored at the same location where the document is stored and accordingly the metadata of the document is updated. However, such a solution is specific to documents only and cannot be used with other applications available on the smart phone.

In still another solution, a feature is provided that enables selection of very few applications, such as memo, screenshot capture, and search, from anywhere on the smart phone. The feature can be enabled upon activating a stylus communicatively coupled with the smart phone. Thus, this feature allows quick access to a few selected applications. In addition, the feature provides an option to open multiple applications in specified areas on a screen. Upon selecting the option, the user is enabled to draw a shape anywhere within an underlying application using the stylus. Upon drawing the shape, the user can select a pre-mapped application and the selected application is invoked in the shape. A window of the invoked application floats on the underlying application such that the window can be moved freely, maximized, or minimized. As such, multiple icons representing minimized windows can be created for fast access. However, an output from the invoked application is stored in a memory and the user has to again follow the traditional route for accessing the output and using it in the underlying application.

Thus, the above mentioned solutions are applicable to specific applications only. Further, the above mentioned solutions restrict the user to a few specific applications that are preconfigured in accordance with the type of content processed by the underlying application. As such, the above mentioned solutions do not provide flexibility to the user to select any application from the underlying application in accordance with individual needs.
Thus, there exists a need for a solution that is generic for all applications and contextually processes applications in accordance with user actions, needs, and demands.

SUMMARY OF THE INVENTION
In accordance with the purposes of the invention, the present invention, as embodied and broadly described herein, enables contextual processing of applications in accordance with user actions, needs, and demands.
Accordingly, an instruction to invoke a second application is received within a first application. Thereafter, a selection of an area within the first application is received. Upon receiving the selection of the area, a task corresponding to the second application is performed within the selected area. In response to the task performed, an output from the second application is received. Subsequently, a contextual use of the output of the second application in the first application is determined based on one or more parameters, such that the contextual use identifies an action to be performed on the output of the second application within the first application. Upon such determination, the action on the output of the second application is performed within the first application.

The advantages of the invention include, but are not limited to, providing a generic solution for all applications, since any application can be launched within any other application inside a user-selected area. The user-selected area can be of any shape and any size. This provides a better user experience and saves processing time. In addition, a multitasking feature is provided to the user, as the user can directly launch the second application anywhere on the first application itself, inside the selected area, and can use the output of the second application contextually. Examples include, but are not limited to, automatically filling up forms using photos of business cards or ID cards, tagging saved images with audio, and attaching a captured photo on a background image to create a new image, such that the filled forms, tagged images, and new images can be shared anywhere.

Further, the action can be auto-performed in the first application. In addition, if a determination is made that the action can be performed at a plurality of locations in the first application, then a selection of a location is received from the user. Upon receiving such selection, the action is auto-performed at the selected location in the first application. This minimizes user interactions and thereby provides a better user experience.

These aspects and advantages will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings and claims.

BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS:
To further clarify the advantages and aspects of the invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings, which are listed below for quick reference.

Figure 1 illustrates exemplary method for contextually processing applications, in accordance with an embodiment of present invention.

Figure 2 illustrates exemplary computing device for contextually processing applications, in accordance with an embodiment of present invention.

Figure 3 illustrates contextually processing of an application as described in Figure 1, in accordance with an embodiment of the invention.

Figures 4-8 illustrate example manifestations depicting the implementation of the present invention.

Figure 9 illustrates a typical hardware configuration of a computing device, which is representative of a hardware environment for practicing the present invention.

It may be noted that to the extent possible, like reference numerals have been used to represent like elements in the drawings. Further, those of ordinary skill in the art will appreciate that elements in the drawings are illustrated for simplicity and may not have been necessarily drawn to scale. For example, the dimensions of some of the elements in the drawings may be exaggerated relative to other elements to help to improve understanding of aspects of the invention. Furthermore, the one or more elements may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having benefit of the description herein.

DETAILED DESCRIPTION
It should be understood at the outset that although illustrative implementations of the embodiments of the present disclosure are illustrated below, the present invention may be implemented using any number of techniques, whether currently known or in existence. The present disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary design and implementation illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.

The term “some” as used herein is defined as “none, or one, or more than one, or all.” Accordingly, the terms “none,” “one,” “more than one,” “more than one, but not all” or “all” would all fall under the definition of “some.” The term “some embodiments” may refer to no embodiments or to one embodiment or to several embodiments or to all embodiments. Accordingly, the term “some embodiments” is defined as meaning “no embodiment, or one embodiment, or more than one embodiment, or all embodiments.”

The terminology and structure employed herein is for describing, teaching and illuminating some embodiments and their specific features and elements and does not limit, restrict or reduce the spirit and scope of the claims or their equivalents.

More specifically, any terms used herein such as but not limited to “includes,” “comprises,” “has,” “consists,” and grammatical variants thereof do NOT specify an exact limitation or restriction and certainly do NOT exclude the possible addition of one or more features or elements, unless otherwise stated, and furthermore must NOT be taken to exclude the possible removal of one or more of the listed features and elements, unless otherwise stated with the limiting language “MUST comprise” or “NEEDS TO include.”
Whether or not a certain feature or element was limited to being used only once, either way it may still be referred to as “one or more features” or “one or more elements” or “at least one feature” or “at least one element.” Furthermore, the use of the terms “one or more” or “at least one” feature or element do NOT preclude there being none of that feature or element, unless otherwise specified by limiting language such as “there NEEDS to be one or more . . . ” or “one or more element is REQUIRED.”
Unless otherwise defined, all terms, and especially any technical and/or scientific terms, used herein may be taken to have the same meaning as commonly understood by one having an ordinary skill in the art.

Reference is made herein to some “embodiments.” It should be understood that an embodiment is an example of a possible implementation of any features and/or elements presented in the attached claims. Some embodiments have been described for the purpose of illuminating one or more of the potential ways in which the specific features and/or elements of the attached claims fulfil the requirements of uniqueness, utility and non-obviousness.

Use of the phrases and/or terms such as but not limited to “a first embodiment,” “a further embodiment,” “an alternate embodiment,” “one embodiment,” “an embodiment,” “multiple embodiments,” “some embodiments,” “other embodiments,” “further embodiment”, “furthermore embodiment”, “additional embodiment” or variants thereof do NOT necessarily refer to the same embodiments. Unless otherwise specified, one or more particular features and/or elements described in connection with one or more embodiments may be found in one embodiment, or may be found in more than one embodiment, or may be found in all embodiments, or may be found in no embodiments. Although one or more features and/or elements may be described herein in the context of only a single embodiment, or alternatively in the context of more than one embodiment, or further alternatively in the context of all embodiments, the features and/or elements may instead be provided separately or in any appropriate combination or not at all. Conversely, any features and/or elements described in the context of separate embodiments may alternatively be realized as existing together in the context of a single embodiment.

Any particular and all details set forth herein are used in the context of some embodiments and therefore should NOT be necessarily taken as limiting factors to the attached claims. The attached claims and their legal equivalents can be realized in the context of embodiments other than the ones used as illustrative examples in the description below.

Figure 1 illustrates exemplary method (100) for contextually processing applications, according to one embodiment. Referring to Figure 1, in said embodiment, the method (100) comprises: receiving (101), within a first application, instructions to invoke a second application; receiving (102) a selection of an area within the first application; performing (103), within the selected area of the first application, a task corresponding to the second application; receiving (104) an output from the second application in response to the task performed; determining (105) a contextual use of the output of the second application in the first application based on one or more parameters; identifying (106) an action to be performed on the output of the second application within first application based on the determined contextual use; and performing (107) the action on the output of the second application.
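For illustration only, the flow of the method (100) can be summarised as a short pipeline. The sketch below is a minimal, self-contained Python mock-up; the function and parameter names (e.g. receive_area_selection, determine_contextual_use) are hypothetical placeholders and not part of the disclosed method.

```python
# Minimal, illustrative mock-up of method (100); all names are hypothetical.

def receive_area_selection():
    # Step 102: in a real device this would come from a touch or stylus gesture.
    return (10, 10, 200, 150)                      # (x, y, width, height)

def perform_second_app_task(area):
    # Step 103: e.g. a camera application capturing an image inside `area`.
    return {"kind": "image", "contains_text": True}

def determine_contextual_use(output, parameters):
    # Step 105: decide how the output will be used inside the first application.
    if parameters["has_text_fields"] and output["contains_text"]:
        return "fill_form_from_image"
    return "insert_at_selected_area"

def identify_action(contextual_use):
    # Step 106: map the contextual use to an action to be performed (step 107).
    return {"fill_form_from_image": "extract text and auto-fill the text fields",
            "insert_at_selected_area": "insert the output where the area was drawn"}[contextual_use]

area = receive_area_selection()                     # steps 101-102
output = perform_second_app_task(area)              # steps 103-104
parameters = {"has_text_fields": True, "app_type": "browser", "area_location": area}
print(identify_action(determine_contextual_use(output, parameters)))
```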

Further, the one or more parameters include location of the selected area in the first application, content of the first application, type of the first application, and one or more user-actionable items available in the first application.

Further, the output of the second application is one of an image, text, video, and audio.

Further, the step of performing (107) the action comprises auto-performing the action on the output of the second application.

Further, the step of performing (107) the action comprises: determining a plurality of locations on the first application for performing the action; receiving a selection of a location from amongst the plurality of locations; and auto-performing the action at the selected location.

Further, the selection of the location is received via one of: a touch based gesture input and an input device.

Further, the method (100) comprises storing the output of the second application.

Further, the method (100) comprises providing a predefined list of one or more applications on the first application prior to receiving the instructions, wherein the one or more applications includes the second application.

Further, the method (100) comprises providing a user-interface corresponding to the second application within the selected area, the user-interface including a plurality of user-actionable items.

As illustrated in Figure 2, the present invention further provides a computing device (200) implementing the aforesaid method as illustrated in Figure 1, in accordance with an embodiment. Examples of the computing device (200) include a smart phone, a laptop, a tablet, and a Personal Digital Assistant (PDA). Accordingly, the computing device (200) includes a display unit (201) adapted to depict a user-interface corresponding to various features of the computing device (200) and to various applications available in the computing device (200). In accordance with the embodiment, the display unit (201) displays a first application invoked by a user. The computing device (200) further includes a receiving unit (202) adapted to receive, within the first application, instructions to invoke a second application. The receiving unit (202) is further adapted to receive a selection of an area within the first application from the user.

The computing device (200) further includes a controller (203), a memory (204), and a context determining unit (205). In accordance with the invention, upon receiving the instructions to invoke the second application and the selection of the area, the controller (203) is adapted to perform, within the selected area of the first application, a task corresponding to the second application. In response to the task performed, the context determining unit (205) receives an output from the second application. Further, the controller (203) saves the output in the memory (204).

Accordingly, the context determining unit (205) is adapted to determine a contextual use of the output of the second application in the first application based on one or more parameters. The context determining unit (205) is further adapted to identify an action to be performed on the output of the second application within first application based on the determined contextual use. Upon determination of the contextual use and subsequently identifying the action to be performed, the controller (203) performs the action on the output of the second application.

Further, the computing device (200) includes a user-input recognizing unit (206) adapted to recognize the inputs corresponding to the instructions to invoke the second application and the selection of the area. In an example, the user-input recognizing unit (206) is a touch controller capable of receiving a touch input and identifying the touch input. In such example, either the touch input can be received via a user’s finger or the touch input can be received via an input device (not shown in the figure) such as a stylus coupled to the computing device (200).

Further, the computing device (200) includes an application launching unit (207) adapted to launch an application. Thus, upon receiving the instructions to invoke the second application and the selection of the area, the controller (203) provides corresponding instructions to the application launching unit (207) to launch or invoke the second application in the selected area.
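The interplay of these units can be pictured with the schematic sketch below. It is a hypothetical Python mock-up of how the receiving unit (202), controller (203), context determining unit (205), and application launching unit (207) might cooperate; the class and method names are assumptions made for illustration, not an actual implementation of the device.

```python
# Schematic, hypothetical mock-up of the units of computing device (200).

class ReceivingUnit:                                   # (202)
    def receive(self, event):
        return event["app_id"], event["area"]

class ApplicationLaunchingUnit:                        # (207)
    def launch(self, app_id, area):
        print(f"launching {app_id} inside area {area}")
        return {"app_id": app_id, "area": area}

class ContextDeterminingUnit:                          # (205)
    def determine(self, output, first_app_state):
        # Decide the contextual use and the matching action from the parameters.
        if "upload_button" in first_app_state["actionable_items"] and output["kind"] == "image":
            return "upload_image", "activate the upload control with the captured image"
        return "insert", "insert the output at the selected area"

class Controller:                                      # (203)
    def __init__(self):
        self.receiving, self.launcher = ReceivingUnit(), ApplicationLaunchingUnit()
        self.context, self.memory = ContextDeterminingUnit(), []      # memory (204)

    def handle(self, event, first_app_state):
        app_id, area = self.receiving.receive(event)
        self.launcher.launch(app_id, area)
        output = {"kind": "image"}                     # produced by the second application's task
        self.memory.append(output)                     # the controller saves the output
        use, action = self.context.determine(output, first_app_state)
        print(f"contextual use: {use} -> action: {action}")

Controller().handle({"app_id": "camera", "area": (0, 0, 100, 80)},
                    {"actionable_items": ["upload_button"]})
```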

It would be understood that the computing device (200), the display unit (201), the receiving unit (202), and the controller (203) may include various software components or modules as necessary for implementing the invention.

Further, the user-input recognizing unit (206) and the application launching unit (207) can be implemented as hardware modules, software modules, or a combination of hardware and software modules. In one aspect of the invention, the user-input recognizing unit (206) and the application launching unit (207) can be implemented as forming a part of the controller (203). In another aspect of the invention, the user-input recognizing unit (206) and the application launching unit (207) can be implemented as forming a part of the memory (204).

Furthermore, the context determining unit (205) can be implemented as hardware module or software module or a combination of hardware and software modules to determine the contextual use of the output of the second application in the first application. In one aspect of the invention, the context determining unit (205) can be implemented as a different entity as depicted in the figure. In another aspect of the invention, the context determining unit (205) can be implemented as forming a part of the controller (203). In one another aspect of the invention, the context determining unit (205) can be implemented as forming a part of the memory (204). In yet another aspect of the invention, the context determining unit (205) can be implemented in a remote device (not shown) separate from the computing device (200).

For ease of understanding, the forthcoming description of Figure 3 illustrates contextual processing of an application as described in Figures 1 and 2, in accordance with an embodiment of the invention. As such, Figure 3 illustrates an exemplary screenshot (300) of the computing device (200) invoking a second application within a first application and determining a contextual use of the output of the second application in the first application.

Now referring to Figure 3(a), a first user-interface (301) corresponding to the first application is depicted on the display unit (201) of the computing device (200). Examples of the first application include, but are not limited to, a document application, a browser application, and an image viewing application. The first user-interface (301) can include various elements such as images, text, videos, links, and hyperlinks. The first user-interface (301) can further include one or more user-actionable items or buttons. For ease of understanding, the various elements are depicted on the first user-interface (301) as E1, E2, E3, E4, and E5, and the user-actionable items are depicted as B1, B2, and B3. However, it would be understood that the presence of the elements and the user-actionable items is not mandatory for the implementation of the present invention.

In accordance with the present invention, the receiving unit (202) receives an instruction to launch a feature for invoking a second application. Examples of the second application include, but not limited to, an image capturing application, an audio recording application, and an audio rendering application. In one example, the instruction to launch the feature can be received via a touch based gesture input such as double tap, right swipe, and rotate. In such example, the user-input recognizing unit (206) recognizes the input as launching the feature. In another example, instruction to launch the feature can be received via an input device such as a stylus (not shown) coupled to the computing device (200). In such example, the feature can be invoked by hovering the input device on the computing device (200).

Upon receiving the input, a pre-defined list of applications (302) is provided on the first user-interface (301). In an example, the pre-defined list of applications (302) is provided using miniature icons of all applications. For the ease of understanding and brevity, only five applications A1, A2, A3, A4, and A5 are depicted. In one example, the user can predefine the list of applications via a settings application available on the computing device (200). In another example, the list of applications is predefined during the manufacturing of the computing device (200).

Upon providing the pre-defined list of applications (302), the receiving unit (202) receives an instruction to invoke the second application. Accordingly, the user selects an application A1 as the second application from the pre-defined list of applications (302) (illustrated in Figure 3(a) using a dashed rectangle around A1). In one example, the instruction to invoke the second application can be received via a touch based input such as a single tap. In such example, the user-input recognizing unit (206) recognizes the input as invoking the second application. In another example, the instruction to invoke the second application can be received via an input device such as a stylus coupled to the computing device. In such example, the second application can be invoked by tapping on the application in the pre-defined list of applications (302) using the input device on the computing device (200).

Subsequently, the receiving unit (202) receives a selection of an area (303) within the first user-interface (301), as illustrated in Figure 3(b). In one aspect of the invention, the selection of the area (303) is provided by drawing a shape of any size and any dimension. In one example, the selection of the area can be received via a touch based input. In such example, the user-input recognizing unit (206) recognizes the input as selection of the area (303). In another example, the selection of the area (303) can be received via an input device such as a stylus coupled to the computing device (200). In such example, the selection of the area (303) can be provided by drawing a shape on the first user-interface (301) using the input device.

Upon receiving the inputs corresponding to invoking the second application and the selection of the area (303), the application launching unit (207) launches the second application within the selected area (303). Accordingly, Figure 3(c) illustrates a second user-interface (304) corresponding to the second application within the selected area (303). The second user-interface (304) can include one or more user-actionable items. Subsequently, the controller (203) performs a task corresponding to the second application and generates an output. Examples of the task include, but are not limited to, capturing a video, capturing an image, receiving a text, and capturing an audio. Correspondingly, examples of the output include video, image, text, and audio. As would be understood, such a task would be performed upon receiving corresponding instructions from the user. Further, the controller (203) stores the output in the memory (204).
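As one possible way of hosting the second user-interface (304), the free-form shape drawn by the user can be reduced to a bounding rectangle in which the second application's window is placed. The snippet below is a generic geometric sketch under that assumption and is not tied to any particular windowing API.

```python
# Illustrative only: reduce a drawn shape (a list of touch points) to the
# rectangle that hosts the second user-interface (304).

def bounding_box(points):
    xs, ys = [x for x, _ in points], [y for _, y in points]
    return min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)   # x, y, width, height

# Points recorded while the user draws a rough shape with a finger or stylus.
stroke = [(120, 80), (180, 60), (240, 90), (230, 150), (150, 160)]
x, y, w, h = bounding_box(stroke)
print(f"host the second application's window at ({x}, {y}), size {w}x{h}")
```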

The output is then received by the context determining unit (205). Upon receiving the output, the context determining unit (205) determines a contextual use of the output of the second application in the first application based on one or more parameters. The parameters include, but are not limited to, the location of the selected area in the first application, the content of the first application, the type of the first application, and one or more user-actionable items available in the first application. In other words, the context determining unit (205) determines how the output of the second application will be used within the first application. Examples of the contextual use include, but are not limited to, automatically filling up forms in a web-page in a browser application using a captured image, tagging images in an image viewing application with a captured audio, attaching a captured image on the background of a pre-stored image to create a new image, automatically uploading a captured image in an email application or in a web-page in a browser application, and automatically inserting a captured image in a document application.

In one aspect of the invention, the context determining unit (205) analyses the first application based on the one or more parameters and determines the contextual use of the output of the second application in the first application. Further, the context determining unit (205) may analyse the content of the output of the second application to determine the contextual use. Upon determining the contextual use, the context determining unit (205) identifies an action to be performed on the output of the second application based on the determined contextual use. In an example, if the contextual use is determined as tagging images in an image viewing application, the action is determined as tagging an image in the image viewing application with a captured audio. Upon identifying the action, the controller (203) auto-performs the action on the output. The action would be performed using methods as known in the art.
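The decision logic described above can be pictured as a small rule table. The sketch below is an assumption-laden illustration whose rules simply mirror the examples of Figures 4-7; a real context determining unit (205) could use far richer analysis.

```python
# Hypothetical rule-table sketch of the contextual-use determination; the rules
# mirror the examples of Figures 4-7 and are illustrative assumptions only.

def contextual_use(params, output):
    has_text_fields = params.get("text_fields", 0) > 0
    has_upload_item = "upload_image" in params.get("actionable_items", [])
    output_has_text = output.get("contains_text", False)

    if has_text_fields and not has_upload_item and output_has_text:
        return "auto_fill_form"            # Figure 4: OCR the captured image, fill the fields
    if has_upload_item and not output_has_text:
        return "auto_upload_image"         # Figure 5: activate the upload control
    if params.get("app_type") == "document":
        return "attach_as_background"      # Figure 6: attach the image as page background
    if params.get("app_type") == "image_viewer":
        return "embed_at_selected_area"    # Figure 7: composite over the selected region
    return "insert_at_selected_area"       # fallback: insert where the area was drawn

print(contextual_use({"text_fields": 4, "actionable_items": [], "app_type": "browser"},
                     {"contains_text": True}))      # -> auto_fill_form
```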

Further, the context determining unit (205) determines one or more locations on the first user-interface (301) of the first application for performing the action based on the one or more parameters. In an example, the location is the same as the location of the area selected on the first user-interface (301) of the first application. In another example, the location is a user-actionable item on the first user-interface (301) of the first application. In one aspect, the context determining unit (205) determines one location. Accordingly, the controller (203) auto-performs the action on the output at the determined location.

In another aspect, the context determining unit (205) determines a plurality of locations. Accordingly, the controller (203) provides a notification message to the user on the first user-interface (301) indicating the plurality of locations and requesting selection of a location from the plurality of locations. Examples of the notification message include a flash message and a pop-up message. In response, the user can select a location. In one example, the selection of the location can be received via a touch based gesture input such as a five-finger drag gesture. In such example, the user-input recognizing unit (206) recognizes the input as selection of the location. In another example, the selection of the location can be received via an input device such as a stylus coupled to the computing device (200). In such example, the selection of the location can be provided by tapping on the selected location using the input device.

Upon receiving the selection of the location, the controller (203) auto-performs the action on the output at the selected location. Referring to Figure 3(d), the output is inserted in the first user-interface (301) as a new element 305.
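The single-location and multi-location cases can be summarised with the short sketch below; the callback used to collect the user's choice is a hypothetical stand-in for the pop-up or flash message described above.

```python
# Illustrative sketch of placing the output: one candidate location is used
# directly, several candidates trigger a user selection (names are hypothetical).

def place_output(candidate_locations, ask_user):
    if len(candidate_locations) == 1:
        return candidate_locations[0]           # single location: auto-perform, no prompt
    return ask_user(candidate_locations)        # several locations: notify and wait for a pick

locations = [("Image1", (40, 300)), ("Image2", (40, 520))]
selected = place_output(locations, ask_user=lambda options: options[0])   # user taps the first slot
print(f"auto-performing the action at {selected[0]} {selected[1]}")
```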

Thus, the present invention enables contextual processing of applications according to the user's needs, demands, and actions by invoking a second application anywhere at a desired location within a first application and contextually using the output of the second application within the first application.

EXEMPLARY IMPLEMENTATIONS
Figures 4-8 illustrate example manifestations depicting the implementation of the present invention, as described with reference to Figures 1-3 above. However, it is to be strictly understood that the forthcoming examples shall not be construed as being limitations on the present invention, and the present invention may be extended to cover analogous manifestations through other types of like mechanisms.

Figure 4 illustrates a screenshot (400) depicting an exemplary web page (401) being accessed via a browser application on a computing device (402). The web page (401) includes various text elements and text fields where a user needs to fill information corresponding to instructions given in the text elements. As described earlier and depicted in Figure 4(a), the user invokes an image capturing application, I1, by selecting the image capturing application I1 from a predefined list of applications (403) (I1, A2, A3, A4, A5). As depicted in Figure 4(b), the user then selects an area (404) on the web page (401) by drawing a rectangular shape.

As depicted in Figure 4(c), the computing device (402) then invokes the image capturing application in the selected area (404) such that a user-interface (405) corresponding to the image capturing application is launched in the selected area (404). The user-interface (405) also includes a plurality of user-actionable items (406). Examples of the user-actionable items include, but not limited to, capture image, capture video, and select secondary camera.

Upon invoking the image capturing application, the user can capture an image of a document, such as a business card or an identity card, through the image capturing application. The computing device (402) then obtains the captured image and saves the captured image in a memory. Further, the computing device (402) determines a contextual use of the captured image in the web page (401) by analysing the web page (401). Accordingly, the computing device (402) detects that text fields are available in the web page (401) for filling of information and that there is an absence of any user-actionable item for uploading of the captured image. In addition, the computing device (402) may analyse the output of the image capturing application and determine the presence of text in the captured image. As such, the computing device (402) determines that the text from the captured image can be used for filling the text fields. Correspondingly, the computing device (402) identifies an action corresponding to the contextual use. As such, the computing device (402) uses an optical character reader module to extract the text from the captured image and auto-fills the text fields, as depicted in Figure 4(d).
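A rough sense of the Figure 4 flow is given by the sketch below, which OCRs a captured card image and maps a few recognised values onto form fields. It assumes the third-party Pillow and pytesseract packages and a simple card layout; the field names and heuristics are illustrative assumptions, not the actual optical character reader module of the device.

```python
# Hedged sketch of the Figure 4 auto-fill: OCR a captured card image and fill a
# few form fields. Field names, heuristics, and file paths are assumptions.

import re
from PIL import Image          # third-party: Pillow
import pytesseract             # third-party OCR wrapper (needs the tesseract binary)

def autofill_from_card(image_path, form_fields):
    text = pytesseract.image_to_string(Image.open(image_path))
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
    phone = re.search(r"\+?\d[\d ()-]{7,}\d", text)
    if email and "email" in form_fields:
        form_fields["email"] = email.group()
    if phone and "phone" in form_fields:
        form_fields["phone"] = phone.group()
    if "name" in form_fields:
        lines = [line.strip() for line in text.splitlines() if line.strip()]
        form_fields["name"] = lines[0] if lines else ""   # assume the name is the first line
    return form_fields

# Example call (commented out because it needs an actual image file):
# print(autofill_from_card("business_card.jpg", {"name": "", "email": "", "phone": ""}))
```
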
Figure 5 illustrates a screenshot (500) depicting an exemplary page (501) being accessed via an application on a computing device (502). In one example, the application is a browser application. In another example, the application is installed on the computing device (502). The page (501) includes various elements such as text, video, and image. The page (501) also includes a user-actionable item (503) for uploading an image of the user. As described earlier and depicted in Figure 5(a), the user invokes an image capturing application by selecting the image capturing application I1 from a predefined list of applications (504) (I1, A2, A3, A4, A5). As depicted in Figure 5(b), the user then selects an area (505) on the page (501) by drawing a rectangular shape.

As depicted in Figure 5(c), the computing device (502) then invokes the image capturing application in the selected area (505) such that a user-interface (506) corresponding to the image capturing application is launched in the selected area (505). The user-interface (506) also includes a plurality of user-actionable items (507). Examples of the user-actionable items include, but are not limited to, capture image, capture video, and select secondary camera.

Upon invoking the image capturing application, the user can capture an image of oneself through the image capturing application. The computing device (502) then obtains the captured image and saves it in a memory. Further, the computing device (502) determines a contextual use of the captured image in the page (501) by analysing the page (501). Accordingly, the computing device (502) detects the user-actionable item (503) for uploading an image of the user. In addition, the computing device (502) may analyse the output of the image capturing application and determine an absence of text in the captured image. As such, the computing device (502) determines that the captured image is to be uploaded in the page (501). Correspondingly, the computing device (502) identifies an action corresponding to the contextual use. As such, the computing device (502) automatically uploads (508) the captured image in the page (501), as depicted in Figure 5(d).

Figure 6 illustrates a screenshot (600) depicting a page of an exemplary document application (601) being accessed on a computing device (602). The page (601) includes text data (603). However, it would be understood that the inclusion of text data is not mandatory for the implementation of the present invention. As described earlier, the user invokes an image capturing application by selecting the image capturing application from a predefined list of applications. As depicted in Figure 6(a), the user then selects an area (604) on the page (601) by drawing an elliptical shape.

As depicted in Figure 6(b), the computing device (602) then invokes the image capturing application in the selected area (604) such that a user-interface (605) corresponding to the image capturing application is launched in the selected area (604). The user-interface (605) also includes a plurality of user-actionable items (606). Examples of the user-actionable items include, but not limited to, capture image, capture video, and select secondary camera.

Upon invoking the image capturing application, the user can capture an image through the image capturing application. The computing device (602) then obtains the captured image and saves it in a memory. Further, the computing device (602) determines a contextual use of the captured image in the page (601) by analysing the page (601). Accordingly, the computing device (602) detects that the page (601) corresponds to the document application. As such, the computing device (602) determines that the captured image is to be attached in the background of the page (601). Correspondingly, the computing device (602) identifies an action corresponding to the contextual use. As such, the computing device (602) automatically attaches (607) the captured image in the background of the page (601), as depicted in Figure 6(c).

Figure 7 illustrates a screenshot (700) depicting a pre-stored image (701) of a person being accessed via an image viewing application on a computing device (702). As described earlier, the user invokes an image capturing application by selecting the image capturing application from a predefined list of applications. As depicted in Figure 7(a), the user then selects an area (703) on the pre-stored image (701) by drawing a contour around a face in the pre-stored image (701).

As depicted in Figure 7(b), the computing device (702) then invokes the image capturing application in the selected area (703) such that a user-interface (704) corresponding to the image capturing application is launched in the selected area (703). The user-interface (704) also includes a plurality of user-actionable items (705). Examples of the user-actionable items include, but not limited to, capture image, capture video, and select secondary camera.

Upon invoking the image capturing application, the user can capture an image of oneself through the image capturing application. The computing device (702) then obtains the captured image and saves it in a memory. Further, the computing device (702) determines a contextual use of the captured image in the image viewing application by analysing the pre-stored image (701). Accordingly, the computing device (702) detects the location of the area (703) on the face of the pre-stored image (701). As such, the computing device (702) determines that the captured image is to be embedded at the location of the selected area (703) in the pre-stored image (701). Correspondingly, the computing device (702) identifies an action corresponding to the contextual use. As such, the computing device (702) automatically embeds the captured image as a foreground image on the pre-stored image (701) to create a new image (706), as depicted in Figure 7(c).
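For the compositing step of Figure 7, a minimal sketch using the third-party Pillow package is shown below; the file names, region, and simple resize-and-paste approach are illustrative assumptions rather than the device's actual embedding method.

```python
# Hedged sketch of the Figure 7 action: composite the captured image over the
# selected region of the pre-stored image (701). File names and region are assumptions.

from PIL import Image          # third-party: Pillow

def embed_capture(prestored_path, captured_path, region):
    x, y, w, h = region                                    # bounding box of the drawn contour
    background = Image.open(prestored_path).convert("RGBA")
    foreground = Image.open(captured_path).convert("RGBA").resize((w, h))
    background.paste(foreground, (x, y), foreground)       # third argument uses alpha as the mask
    return background                                      # the new image (706)

# Example call (commented out because it needs actual image files):
# embed_capture("prestored.png", "captured.png", region=(120, 60, 140, 160)).save("new_image.png")
```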

Figure 8 illustrates a screenshot (800) depicting an exemplary page (801) of a social media application being accessed on a computing device (802). The page (801) includes various video elements, image elements, and text fields where a user can post a text message. As described earlier, the user invokes an image capturing application by selecting the image capturing application from a predefined list of applications. As depicted in Figure 8(a), the user then selects an area (803) on the page (801) by drawing a rectangular shape. As depicted in Figure 8(b), the computing device (802) then invokes the image capturing application in the selected area (803) such that a user-interface (804) corresponding to the image capturing application is launched in the selected area (803). The user-interface (804) also includes a plurality of user-actionable items (805). Examples of the user-actionable items include, but are not limited to, capture image, capture video, and select secondary camera.

Upon invoking the image capturing application, the user can capture an image through the image capturing application. The computing device (802) then obtains the captured image and saves it in a memory. Further, the computing device (802) determines a contextual use of the captured image in the page (801) by analysing the page (801). Accordingly, the computing device (802) detects that the page (801) corresponds to a social media application. As such, the computing device (802) determines that the captured image can be uploaded in the page (801) as an image element. Correspondingly, the computing device (802) identifies an action corresponding to the contextual use. Further, the computing device (802) detects a plurality of locations on the page (801) corresponding to the image elements and therefore provides a notification message for selecting one location. In response, the user provides an input (806) to select the location. In an example, the input is a touch based gesture input for dragging the captured image to the location of an image element Image1, as depicted in Figure 8(c). As such, the computing device (802) automatically uploads the captured image in the page (801) at the selected location as a new image (807), as depicted in Figure 8(d).

While the above mentioned exemplary manifestations of the invention have been illustrated and described herein using an image capturing application as the second application, it is to be understood that the invention is not limited thereto. As such, any application available in the computing device can be invoked as the second application.

EXEMPLARY HARDWARE CONFIGURATION
Figure 9 illustrates a typical hardware configuration of a computing device (900), which is representative of a hardware environment for implementing the present invention. As would be understood, the computing device (200), as described above, includes the hardware configuration as described below.

In a networked deployment, the computing device (900) may operate in the capacity of a server or as a client user computer in a server-client user network environment, or as a peer computer system in a peer-to-peer (or distributed) network environment. The computing device (900) can also be implemented as or incorporated into various devices, such as, a tablet, a personal digital assistant (PDA), a palmtop computer, a laptop, a smart phone, a notebook, and a communication device.

The computing device (900) may include a processor (901), e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both. The processor (901) may be a component in a variety of systems. For example, the processor (901) may be part of a standard personal computer or a workstation. The processor (901) may be one or more general processors, digital signal processors, application specific integrated circuits, field programmable gate arrays, servers, networks, digital circuits, analog circuits, combinations thereof, or other now known or later developed devices for analysing and processing data. The processor (901) may implement a software program, such as code generated manually (i.e., programmed).

The computing device (900) may include a memory (902) communicating with the processor (901) via a bus (903). The memory (902) may be a main memory, a static memory, or a dynamic memory. The memory (902) may include, but is not limited to computer readable storage media such as various types of volatile and non-volatile storage media, including but not limited to random access memory, read-only memory, programmable read-only memory, electrically programmable read-only memory, electrically erasable read-only memory, flash memory, magnetic tape or disk, optical media and the like. The memory (902) may be an external storage device or database for storing data. Examples include a hard drive, compact disc ("CD"), digital video disc ("DVD"), memory card, memory stick, floppy disc, universal serial bus ("USB") memory device, or any other device operative to store data. The memory (902) is operable to store instructions executable by the processor (901). The functions, acts or tasks illustrated in the figures or described may be performed by the programmed processor (901) executing the instructions stored in the memory (902). The functions, acts or tasks are independent of the particular type of instructions set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firm-ware, micro-code and the like, operating alone or in combination. Likewise, processing strategies may include multiprocessing, multitasking, parallel processing and the like.

The computing device (900) may further include a display unit (904), such as a liquid crystal display (LCD), an organic light emitting diode (OLED), a flat panel display, a solid state display, a cathode ray tube (CRT), or other now known or later developed display device for outputting determined information.

Additionally, the computing device (900) may include an input device (905) configured to allow a user to interact with any of the components of system (900). The input device (905) may be a number pad, a keyboard, a stylus, an electronic pen, or a cursor control device, such as a mouse, or a joystick, touch screen display, remote control or any other device operative to interact with the computing device (900).

The computing device (900) may also include a disk or optical drive unit (906). The drive unit (906) may include a computer-readable medium (908) in which one or more sets of instructions (908), e.g. software, can be embedded. In addition, the instructions (908) may be separately stored in the processor (901) and the memory (902).
The computing device (900) may further be in communication with other devices over a network (909) to communicate voice, video, audio, images, or any other data over the network (909). Further, the data and/or the instructions (908) may be transmitted or received over the network (909) via a communication port or interface (910) or using the bus (903). The communication port or interface (910) may be a part of the processor (901) or may be a separate component. The communication port (910) may be created in software or may be a physical connection in hardware. The communication port (910) may be configured to connect with the network (909), external media, the display (904), or any other components in the system (900), or combinations thereof. The connection with the network (909) may be a physical connection, such as a wired Ethernet connection, or may be established wirelessly as discussed later. Likewise, the additional connections with other components of the system (900) may be physical connections or may be established wirelessly. The network (909) may alternatively be directly connected to the bus (903).

The network (909) may include wired networks, wireless networks, Ethernet AVB networks, or combinations thereof. The wireless network may be a cellular telephone network, or an 802.11, 802.16, 802.20, 802.1Q or WiMax network. Further, the network (909) may be a public network, such as the Internet, a private network, such as an intranet, or combinations thereof, and may utilize a variety of networking protocols now available or later developed including, but not limited to, TCP/IP based networking protocols.

In an alternative example, dedicated hardware implementations, such as application specific integrated circuits, programmable logic arrays and other hardware devices, can be constructed to implement various parts of the computing system (900).

Applications that may include the systems can broadly include a variety of electronic and computer systems. One or more examples described may implement functions using two or more specific interconnected hardware modules or devices with related control and data signals that can be communicated between and through the modules, or as portions of an application-specific integrated circuit. Accordingly, the present system encompasses software, firmware, and hardware implementations.

The computing device (900) may be implemented by software programs executable by the processor (901). Further, in a non-limited example, implementations can include distributed processing, component/object distributed processing, and parallel processing. Alternatively, virtual computer system processing can be constructed to implement various parts of the system.

The computing device (900) is not limited to operation with any particular standards and protocols. For example, standards for Internet and other packet switched network transmission (e.g., TCP/IP, UDP/IP, HTML, HTTP) may be used. Such standards are periodically superseded by faster or more efficient equivalents having essentially the same functions. Accordingly, replacement standards and protocols having the same or similar functions as those disclosed are considered equivalents thereof.

The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of embodiments is at least as broad as given by the following claims.

While certain present preferred embodiments of the invention have been illustrated and described herein, it is to be understood that the invention is not limited thereto. Clearly, the invention may be otherwise variously embodied, and practiced within the scope of the following claims.

Claims:
We Claim:
1. A method comprising:
- receiving (101), within a first application (301), instructions to invoke a second application;
- receiving (102) a selection of an area (303) within the first application (301);
- performing (103), within the selected area (303) of the first application (301), a task corresponding to the second application;
- receiving (104) an output (305) from the second application in response to the task performed;
- determining (105) a contextual use of the output (305) of the second application in the first application (301) based on one or more parameters;
- identifying (106) an action to be performed on the output (305) of the second application within first application (301) based on the determined contextual use; and
- performing (107) the action on the output of the second application (305).

2. The method as claimed in claim 1, wherein the one or more parameters include location of the selected area in the first application (301), content of the first application (301), type of the first application (301), and one or more user-actionable items available in the first application (301).

3. The method as claimed in claim 1, wherein the output of the second application is one of an image, text, video, and audio.

4. The method as claimed in claim 1, wherein performing (107) the action comprises:
- auto-performing the action on the output (305) of the second application.

5. The method as claimed in claim 1, wherein performing (107) the action comprises:
- determining a plurality of locations on the first application (301) for performing the action;
- receiving a selection of a location from amongst the plurality of locations; and
- auto-performing the action at the selected location.

6. The method as claimed in claim 5, wherein the selection is received via one of: a touch based gesture input and an input device.

7. The method as claimed in claim 1, further comprises:
- storing the output (305) of the second application.

8. The method as claimed in claim 1, further comprises:
- providing a predefined list of one or more applications (302) on the first application (301) prior to receiving the instructions, wherein the one or more applications includes the second application.

9. The method as claimed in claim 1, further comprises:
- providing a user-interface corresponding to the second application (304) within the selected area, the user-interface including a plurality of user-actionable items.

10. A computing device (200) comprising:
- a receiving unit (202) to:
- receive, within a first application (301), instructions to invoke a second application; and
- receive a selection of an area (303) within the first application (301);
- a controller (203) coupled to the receiving unit (202) to:
- perform, within the selected area (303) of the first application (301), a task corresponding to the second application; and
- a context determining unit (205) coupled to the controller (203) to:
- receive an output (305) from the second application in response to the task performed;
- determine a contextual use of the output (305) of the second application in the first application (301) based on one or more parameters; and
- identify an action to be performed on the output (305) of the second application within first application (301) based on the determined contextual use;
wherein the controller (203) performs the action on the output (305) of the second application.

Documents

Orders

Section Controller Decision Date

Application Documents

# Name Date
1 Power of Attorney [01-10-2015(online)].pdf 2015-10-01
2 Form 5 [01-10-2015(online)].pdf 2015-10-01
3 Form 3 [01-10-2015(online)].pdf 2015-10-01
4 Form 18 [01-10-2015(online)].pdf 2015-10-01
5 Drawing [01-10-2015(online)].pdf 2015-10-01
6 Description(Complete) [01-10-2015(online)].pdf 2015-10-01
7 3159-del-2015-Form-1-(07-10-2015).pdf 2015-10-07
8 3159-del-2015-Correspondence Others-(07-10-2015).pdf 2015-10-07
9 3159-DEL-2015-PA [19-09-2019(online)].pdf 2019-09-19
10 3159-DEL-2015-ASSIGNMENT DOCUMENTS [19-09-2019(online)].pdf 2019-09-19
11 3159-DEL-2015-8(i)-Substitution-Change Of Applicant - Form 6 [19-09-2019(online)].pdf 2019-09-19
12 3159-DEL-2015-OTHERS-101019.pdf 2019-10-14
13 3159-DEL-2015-Correspondence-101019.pdf 2019-10-14
14 3159-DEL-2015-FER.pdf 2019-11-29
15 3159-DEL-2015-CLAIMS [22-05-2020(online)].pdf 2020-05-22
16 3159-DEL-2015-DRAWING [22-05-2020(online)].pdf 2020-05-22
17 3159-DEL-2015-FER_SER_REPLY [22-05-2020(online)].pdf 2020-05-22
18 3159-DEL-2015-OTHERS [22-05-2020(online)].pdf 2020-05-22
19 3159-DEL-2015-US(14)-HearingNotice-(HearingDate-27-08-2020).pdf 2020-07-20
20 3159-DEL-2015-FORM-26 [25-08-2020(online)].pdf 2020-08-25
21 3159-DEL-2015-Correspondence to notify the Controller [25-08-2020(online)].pdf 2020-08-25
22 3159-DEL-2015-FORM-26 [27-08-2020(online)].pdf 2020-08-27
23 3159-DEL-2015-Written submissions and relevant documents [07-09-2020(online)].pdf 2020-09-07
24 3159-DEL-2015-PatentCertificate20-10-2020.pdf 2020-10-20
25 3159-DEL-2015-IntimationOfGrant20-10-2020.pdf 2020-10-20
26 3159-DEL-2015-RELEVANT DOCUMENTS [01-09-2022(online)].pdf 2022-09-01
27 3159-DEL-2015-RELEVANT DOCUMENTS [09-09-2023(online)].pdf 2023-09-09

Search Strategy

1 SEARCH3159DEL2015_29-11-2019.pdf

ERegister / Renewals

3rd: 29 Oct 2020 (From 01/10/2017 To 01/10/2018)
4th: 29 Oct 2020 (From 01/10/2018 To 01/10/2019)
5th: 29 Oct 2020 (From 01/10/2019 To 01/10/2020)
6th: 29 Oct 2020 (From 01/10/2020 To 01/10/2021)
7th: 27 Sep 2021 (From 01/10/2021 To 01/10/2022)
8th: 29 Aug 2022 (From 01/10/2022 To 01/10/2023)
9th: 28 Sep 2023 (From 01/10/2023 To 01/10/2024)
10th: 28 Sep 2024 (From 01/10/2024 To 01/10/2025)
11th: 11 Sep 2025 (From 01/10/2025 To 01/10/2026)