Method And System For Invoking Context Specific Actions In A Multi Window User Device

Abstract: A method and system for invoking a context specific action in a multi-window user device. The user device monitors the current context of at least two applications that are currently open on the multi-window user device. The system collects a first input on a first application window of said multi-window user device, and a second input on a second application window of said multi-window user device. Further, based on the current contexts of the applications that are open on the first application window and the second application window, the system identifies a context specific action that needs to be triggered. The identified context specific action is then triggered by the system. FIG. 3


Patent Information

Application #
Filing Date
15 July 2015
Publication Number
03/2017
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
patent@bananaip.com
Parent Application
Patent Number
Legal Status
Grant Date
2024-03-04
Renewal Date

Applicants

SAMSUNG R&D Institute India - Bangalore Private Limited
# 2870, Orion Building, Bagmane Constellation Business Park, Outer Ring Road, Doddanekundi Circle, Marathahalli Post,Bangalore-560 037, India

Inventors

1. Sukumar Moharana
Samsung R&D Institute India – Bangalore,#2870, Bagmane Constellation Business Park, Doddanekundi, Marathahalli, Bangalore - 560037
2. Shambu M T
Samsung R&D Institute India – Bangalore,#2870, Bagmane Constellation Business Park, Doddanekundi, Marathahalli, Bangalore - 560037
3. Kranti Chalamalasetti
Samsung R&D Institute India – Bangalore,#2870, Bagmane Constellation Business Park, Doddanekundi, Marathahalli, Bangalore - 560037
4. Venkata Subrahmanyam Gajavalli
Samsung R&D Institute India – Bangalore,#2870, Bagmane Constellation Business Park, Doddanekundi, Marathahalli, Bangalore - 560037
5. Pradeep Kumar Govindaraju
Samsung R&D Institute India – Bangalore,#2870, Bagmane Constellation Business Park, Doddanekundi, Marathahalli, Bangalore - 560037
6. Sushant Khanna
Samsung R&D Institute India – Bangalore,#2870, Bagmane Constellation Business Park, Doddanekundi, Marathahalli, Bangalore - 560037
7. Kumar Sasmit
Samsung R&D Institute India – Bangalore,#2870, Bagmane Constellation Business Park, Doddanekundi, Marathahalli, Bangalore - 560037
8. Ankit Kawatia
Samsung R&D Institute India – Bangalore,#2870, Bagmane Constellation Business Park, Doddanekundi, Marathahalli, Bangalore - 560037

Specification

FORM 2
The Patent Act 1970
(39 of 1970)
&
The Patent Rules, 2005

COMPLETE SPECIFICATION
(SEE SECTION 10 AND RULE 13)

TITLE OF THE INVENTION

“Method and system for invoking context specific actions in a multi-window user device”

APPLICANTS:
Name: SAMSUNG R&D Institute India - Bangalore Private Limited
Nationality: India
Address: # 2870, Orion Building, Bagmane Constellation Business Park, Outer Ring Road, Doddanekundi Circle, Marathahalli Post, Bangalore - 560 037, India

The following specification particularly describes and ascertains the nature of this invention and the manner in which it is to be performed:-

TECHNICAL FIELD
[001] The embodiments herein relate to multi-window communication devices and, more particularly, to invoking context specific actions in multi-window communication devices.
BACKGROUND
[002] Rapid growth and progress on the technology front have given new meaning to user experience in various applications. Communication devices such as mobile phones, which were meant only for calls and Short Message Service (SMS) a few decades ago, have transformed into multi-purpose devices capable of handling different applications. Many computer form factors, such as tablet PCs and laptops with advanced function support, have also been launched in the market. This progress has attracted customers, who have started giving preference to feature-loaded devices. As a result, competition has increased, and gadget manufacturers are concentrating more on introducing new features intended to improve user experience.
[003] An example is the way the display of gadgets has improved over the years. The term 'display' here refers not just to pixel values and display quality, but to the features that make displays attractive. Multi-window support is one such feature: gadgets that support it allow users to multi-task by running more than one application at a time. Another important area researchers have been focusing on is the extent to which users can customize their gadgets by automating certain processes. For example, in a multi-window user device, user experience may be improved by facilitating communication between two applications that are open at the same time. However, in existing gadgets, communication between two applications requires manual intervention, which in turn degrades user experience.
OBJECT OF INVENTION
[004] An object of the embodiments herein is to dynamically identify at least one context specific action in a multi-window user device.
[005] Another object of the embodiments herein is to dynamically invoke context specific actions in a multi-window user device.
SUMMARY
[006] In view of the foregoing, an embodiment herein provides a method for user interactions in a multi-window user device. A first input on a first application window of the multi-window user device is received, wherein the first input triggers a context of a first application in the first application window. Further, a second input on a second application window of the multi-window user device is received, wherein the second input triggers a context of a second application in the second application window. Further, at least one context specific action corresponding to a combination of contexts of the first application and the second application is invoked.
[007] Embodiments further disclose a system for user interactions in a multi-window user device. The system receives a first input on a first application window of the multi-window user device, by an auto consolidation system, wherein the first input triggers a context of a first application in the first application window. Further, a second input on a second application window of the multi-window user device is received by the auto consolidation system, wherein the second input triggers a context of a second application in the second application window. Further, at least one context specific action corresponding to a combination of contexts of the first application and the second application is invoked by the auto consolidation system.
[008] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings.
BRIEF DESCRIPTION OF THE FIGURES
[009] The embodiments herein will be better understood from the following detailed description with reference to the drawings, in which:
[0010] FIG. 1 illustrates a block diagram of an auto consolidation system, as disclosed in the embodiments herein;
[0011] FIG. 2 is a block diagram that shows components of a User Equipment (UE), as disclosed in the embodiments herein;
[0012] FIG. 3 is a flow diagram that shows steps involved in the process of invoking context specific actions in a multi-window user device, using the auto consolidation system, as disclosed in the embodiments herein; and
[0013] FIG. 4 is a flow diagram that shows steps involved in the process of identifying context specific actions in a multi-window user device, using the auto consolidation system, as disclosed in the embodiments herein.

DETAILED DESCRIPTION OF EMBODIMENTS
[0014] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0015] The embodiments herein disclose an action triggering system for a multi-window user device for identifying and triggering at least one action based on current context of application(s) that is open on the multi-window user device. Referring now to the drawings, and more particularly to FIGS. 1 through 4, where similar reference characters denote corresponding features consistently throughout the figures, as listed in the embodiments.
[0016] FIG. 1 illustrates a block diagram of an auto consolidation system, as disclosed in the embodiments herein. The auto consolidation system 100 comprises a Multi-window User Equipment (UE) 101. Please note that the terms UE and Multi-window UE are used interchangeably throughout the specification. The UE 101 can be configured to support the multi-window function, wherein the UE 101 can display and support the working of more than one application at a time. The UE 101 can be configured to invoke at least one new application, based on the context of an application that is currently open in the UE 101. The UE 101 can be further configured to capture at least one trigger event, wherein the trigger event is a user gesture and/or a change in context of at least one application that is open in the UE 101. The UE 101 can be further configured to support communication between at least two applications that are open in the multiple windows of the UE 101. The UE 101 can be further configured to trigger at least one context specific action, based on the current contexts of at least two applications that are open in the UE 101.
[0017] FIG. 2 is a block diagram that shows components of a multi-window User Equipment (UE), as disclosed in the embodiments herein. The UE 101 comprises an Input/Output (I/O) interface 201, a memory module 202, a monitoring module 203, and a triggering module 204.
[0018] The I/O interface 201 can be configured to provide at least one suitable channel with suitable communication protocol support for the UE 101 to interact with a user. The I/O interface 201 can be further configured to support multi window operation, with the help of a multi-window display system. The I/O interface 201 can be further configured to provide at least one option for the user to interact with the UE 101. For example, the I/O interface 201 can support gesture recognition, and allow the user to provide gesture inputs to the UE 101.
[0019] The memory module 202 can be configured to store all information required for the working of the auto consolidation system 100. The memory module 202 can store a reference database, wherein the reference database comprises information pertaining to the different contexts of all applications being hosted by the UE 101, and the actions that can be triggered corresponding to each context of those applications. The reference database further comprises information pertaining to the action to be triggered for each combination of different contexts of at least two of the applications being hosted by the UE 101. In various embodiments, the data in the memory module 202 is updated on a regular basis or at periodic time intervals.
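As an illustration, the reference database described in paragraph [0019] can be pictured as a lookup table keyed by a combination of application identifiers and their current contexts. The following Kotlin sketch is offered purely for clarity; the names AppContext and ReferenceDatabase and the example entries are hypothetical and do not appear in the specification.

    // Hypothetical sketch of the reference database of paragraph [0019]:
    // each entry maps a combination of (application, context) pairs to an action label.
    data class AppContext(val appId: String, val context: String)

    class ReferenceDatabase {
        // Key: the set of application contexts that are open; value: the action to trigger.
        private val entries = mutableMapOf<Set<AppContext>, String>()

        fun register(combination: Set<AppContext>, action: String) {
            entries[combination] = action
        }

        fun lookup(combination: Set<AppContext>): String? = entries[combination]
    }

    fun main() {
        val db = ReferenceDatabase()
        db.register(
            setOf(AppContext("calendar", "month:January"), AppContext("gallery", "all_images")),
            "show_images_taken_in_january"
        )
        // Prints the pre-configured action for this combination of contexts.
        println(db.lookup(setOf(AppContext("calendar", "month:January"), AppContext("gallery", "all_images"))))
    }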
[0020] The monitoring module 203 can be configured to monitor context of all applications that are open on the UE 101, and identify change of context of at least one application. The monitoring module 203 can be further configured to inform the triggering module 204 about the change of context of the application, with any supporting data, as pre-configured. The monitoring module 203 can be further configured to identify, based on data fetched from the I/O interface 201, a user gesture that can be considered as a trigger to initiate at least one function of the auto consolidation system 100.
[0021] The triggering module 204 can be configured to collect from the monitoring module 203, information pertaining to at least one triggering event. The triggering module 204 can be further configured to identify at least one action that needs to be triggered in response to the identified triggering event. The triggering module 204 can be further configured to invoke the identified action.
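To make the interplay between the monitoring module 203 and the triggering module 204 concrete, the relationship can be read as an observer pattern: the monitor watches contexts and gestures, and hands any trigger event to the triggering component, which resolves and invokes a matching action. The Kotlin sketch below is hypothetical; the types TriggerEvent, MonitoringModule and TriggeringModule are illustrative names only, and the action-resolution step is reduced to a simple lambda.

    // Hypothetical sketch of the module interplay described in paragraphs [0020]-[0021].
    sealed class TriggerEvent {
        data class ContextChange(val appId: String, val newContext: String) : TriggerEvent()
        data class Gesture(val name: String) : TriggerEvent()
    }

    class TriggeringModule(private val resolveAction: (TriggerEvent) -> String?) {
        fun onTrigger(event: TriggerEvent) {
            // Identify and invoke the action corresponding to the reported event.
            resolveAction(event)?.let { action -> println("Invoking action: $action") }
        }
    }

    class MonitoringModule(private val triggering: TriggeringModule) {
        // The monitoring module informs the triggering module about gestures and context changes.
        fun onGestureDetected(name: String) = triggering.onTrigger(TriggerEvent.Gesture(name))
        fun onContextChanged(appId: String, context: String) =
            triggering.onTrigger(TriggerEvent.ContextChange(appId, context))
    }

    fun main() {
        val triggering = TriggeringModule { event ->
            if (event is TriggerEvent.Gesture && event.name == "collate") "consolidate_open_windows" else null
        }
        MonitoringModule(triggering).onGestureDetected("collate")
    }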
[0022] FIG. 3 is a flow diagram that shows steps involved in the process of invoking context specific actions in a multi-window user device, using the auto consolidation system, as disclosed in the embodiments herein. In various embodiments, the auto consolidation system 100 can be configured to function based on the context of one or more applications that are open in the UE 101.
[0023] The monitoring module 203 monitors, at periodic intervals and/or on a regular basis, the context of the applications that are open. Further, the context information pertaining to the first application is collected (302) from the input received using the I/O interface 201. Similarly, the context information pertaining to the second application is collected (304) using the I/O interface 201. In a preferred embodiment, the context information pertaining to at least one of the two applications that are open is collected upon detecting a triggering event. In an embodiment, the triggering event can be the context, or a context change, of at least one of the two applications. In another embodiment, the trigger event can be a gesture of a pre-defined type, made by the user.
[0024] Further, based on the context information pertaining to the first application and the second application, the UE 101 identifies (306) a context specific action to be triggered. In an embodiment, the context specific action is an action that matches current contexts of the first and second applications, and can change when context of at least one of the two applications changes.
[0025] Further, the identified context specific action is triggered (308) by the UE 101. The various actions in method 300 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 3 may be omitted.
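Steps 302 to 308 can be read as a short pipeline: collect the contexts of the two open windows, look up an action for that combination, and trigger it. The following self-contained Kotlin sketch illustrates that flow under the assumption that the reference database is a simple in-memory map; the names and example entries are hypothetical.

    // Hypothetical end-to-end sketch of method 300: collect (302, 304), identify (306), trigger (308).
    fun invokeContextSpecificAction(
        firstContext: Pair<String, String>,   // (appId, context) collected from the first window (302)
        secondContext: Pair<String, String>,  // (appId, context) collected from the second window (304)
        referenceDatabase: Map<Set<Pair<String, String>>, String>
    ) {
        val action = referenceDatabase[setOf(firstContext, secondContext)]  // identify (306)
        if (action != null) {
            println("Triggering: $action")                                  // trigger (308)
        } else {
            println("No context specific action is configured for this combination")
        }
    }

    fun main() {
        val reference = mapOf(
            setOf("calendar" to "month:January", "gallery" to "all_images") to "show_images_taken_in_january"
        )
        invokeContextSpecificAction("calendar" to "month:January", "gallery" to "all_images", reference)
    }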
[0026] FIG. 4 is a flow diagram that shows steps involved in the process of identifying context specific actions in a multi-window user device, using the auto consolidation system, as disclosed in the embodiments herein. The UE 101 collects context information pertaining to the first application and the second application that are open on the multi-window screen of the UE 101. Further, the UE 101 generates (402) a communicator dataset, wherein the communicator dataset comprises information such as, but not limited to, applications that are open and are being monitored (a unique Id that represents each application can be used), and current context/state of the applications. In another embodiment, the UE 101 generates the communicator dataset automatically from the moment the application has been opened, without waiting for the trigger event.
[0027] The UE 101 further compares (404) the communicator dataset with a reference database, and identifies (406) an action to be triggered, in response to the trigger event. In a preferred embodiment, the reference database possesses information pertaining to the action to be triggered corresponding to each combination of various contexts of all applications being hosted by the UE 101. While comparing the communicator dataset with the reference database, the UE 101 initially identifies data pertaining to a combination of the first and second applications. The UE 101 further identifies data pertaining to a combination of the current contexts of the first and second applications. The UE 101 further identifies the action that has been pre-configured in the reference database, corresponding to the combination of contexts that matches the current contexts of the first and second applications.
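Under the two-stage matching described above (first the combination of applications, then the combination of their contexts), the comparison of steps 402 to 406 can be sketched as below. The Kotlin types CommunicatorEntry and ReferenceEntry are hypothetical illustrations, not structures defined by the specification.

    // Hypothetical sketch of steps 402-406: build the communicator dataset, then match it
    // against the reference database in two stages (application pair first, then context pair).
    data class CommunicatorEntry(val appId: String, val context: String)

    data class ReferenceEntry(val appIds: Set<String>, val contexts: Set<String>, val action: String)

    fun identifyAction(dataset: List<CommunicatorEntry>, reference: List<ReferenceEntry>): String? {
        val openApps = dataset.map { it.appId }.toSet()
        val openContexts = dataset.map { it.context }.toSet()
        return reference
            .filter { it.appIds == openApps }               // stage 1: match the combination of applications
            .firstOrNull { it.contexts == openContexts }    // stage 2: match the combination of contexts
            ?.action
    }

    fun main() {
        val reference = listOf(
            ReferenceEntry(setOf("calendar", "gallery"), setOf("month:January", "all_images"),
                "show_images_taken_in_january")
        )
        val dataset = listOf(
            CommunicatorEntry("calendar", "month:January"),
            CommunicatorEntry("gallery", "all_images")
        )
        println(identifyAction(dataset, reference))
    }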
[0028] For example, assume that the first application is a calendar application, and the second application is the image gallery of the UE 101. In a first scenario, assume that the calendar is in month view mode and the month ‘January’ is displayed in the current context, and that the gallery displays all images stored in a memory space associated with the UE 101. In this scenario, upon detecting a triggering event (which can be a ‘collate gesture’), the UE 101 may display all images that were clicked in the month of January. In another scenario, assume that the calendar is set to year mode, and the calendar displays the year ‘2014’ in the current context. In this case, the UE 101 may display all images that were clicked and/or stored in the year 2014. In this scenario, the UE 101 facilitates data communication between the first application and the second application such that the context specific action is invoked based on data from both applications.
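The calendar-and-gallery example is, in effect, a date filter driven by the calendar's context. A minimal Kotlin sketch of that filtering step is given below; the Image records and their capture dates are hypothetical.

    import java.time.LocalDate
    import java.time.Month

    // Hypothetical sketch of the calendar/gallery example: filter gallery images by the period
    // shown in the calendar window (a month in month view, a year in year view).
    data class Image(val name: String, val takenOn: LocalDate)

    fun imagesForMonth(images: List<Image>, month: Month): List<Image> =
        images.filter { it.takenOn.month == month }

    fun imagesForYear(images: List<Image>, year: Int): List<Image> =
        images.filter { it.takenOn.year == year }

    fun main() {
        val gallery = listOf(
            Image("beach.jpg", LocalDate.of(2014, 1, 12)),
            Image("party.jpg", LocalDate.of(2015, 3, 2))
        )
        println(imagesForMonth(gallery, Month.JANUARY).map { it.name })  // calendar in month view ("January")
        println(imagesForYear(gallery, 2014).map { it.name })            // calendar in year view ("2014")
    }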
[0029] In another embodiment, the auto consolidation system 100 can be configured to perform the auto consolidation process to trigger an action corresponding to only one application that is open on the UE 101. In this scenario, wherein only one application is open in the multi-window display unit of the UE 101, the communicator dataset comprises information pertaining to the application that is open, and the current context of that application. In this case, the reference database comprises information pertaining to at least one action that can be triggered corresponding to the identified state of the application. For example, the action can be invoking another application (i.e. a second application). Assume that the first application that is open on the UE 101 is a chat application, and in the current context of the first application, the user is discussing the trailer of a new movie with his/her friend. In this case, the UE 101 may automatically identify that the user is discussing a movie trailer, invoke a browser, and play the trailer mentioned by the user on a video streaming website. In another example, the UE 101 may invoke a dedicated application of the video streaming website and play the trailer mentioned by the user. Here, the action is invoking the second application, i.e. the browser/video streaming application, and playing the video. In a preferred embodiment, the UE 101 can be configured to initiate any suitable action other than invoking a second application, as per requirements.
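The single-application scenario can be approximated, very crudely, by keyword detection on the chat context followed by the invocation of a second application. The Kotlin sketch below is a deliberately simplified, hypothetical illustration; a real implementation would require far richer language understanding than a keyword match, and the launch of the browser or streaming application is reduced to a print statement.

    // Hypothetical sketch of the single-application scenario: detect that the chat concerns
    // a movie trailer, then "invoke" a second application to play it.
    fun detectTrailerTitle(chatText: String): String? {
        // Naive illustration only: take the word immediately preceding "trailer" as the title.
        val match = Regex("""(?i)(\w+)\s+trailer""").find(chatText) ?: return null
        return match.groupValues[1]
    }

    fun main() {
        val chat = "Did you watch the Interstellar trailer yet?"
        val title = detectTrailerTitle(chat)
        if (title != null) {
            // On a real device this step would launch a browser or a dedicated streaming application.
            println("Invoking video application to play trailer for: $title")
        }
    }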
[0030] The various actions in method 400 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 4 may be omitted.
[0031] A few use-case scenarios are explained below (please note that all use-case scenarios mentioned below are written assuming that two applications, i.e. a first application and a second application, are open on the multi-window screen, and that the trigger event is a collate gesture):
Use-case scenarios:
1) Map & Image of a person:
[0032] Assume that the first application is an image gallery and the second application is a location map. In the first scenario, assume that the image gallery is open, displaying all the images stored in the memory space, and that a person’s picture is opened in the gallery. Upon detecting the collate gesture, the current location and permanent address of that person are displayed on the location map.
2) Ecommerce site & Gallery photo:
[0033] Assume that the first application is an image gallery and the second application is an e-commerce website or an e-commerce application. If the user opens a person’s picture in the image gallery, and a collate gesture is provided, the UE identifies interests of the person in the image by detecting his/her age, gender, trends and relationship with the user. Further, based on at least one of the detected interests, the UE identifies and recommends at least one product from the e-commerce website.
3) Photo of Food dish & Map:
[0034] Assume that the first application is an image gallery and the second application is a location map. If the user opens an image of a food item from the gallery, and a collate gesture is shown, the UE automatically identifies restaurants in the identified location which sell the selected food item. The UE may further show directions to the identified restaurants, to guide the user.
4) First Food Image & Second Food image:
[0035] Assume that the first and second applications are the same or different image galleries. If the user has opened images of two different food items, then upon detecting the collate gesture, the UE identifies, by searching in an associated content reference source (for example, online websites), the ingredients of the dishes shown in both images, and displays them to the user. The system may further identify common ingredients and display those to the user.
5) To-do List & Calendar:
[0036] Assume that the first application is a To-do list application and the second application is a calendar. Upon detecting the collate gesture, the UE, based on the contents of the to-do list and the calendar, displays the time and days left to complete each task listed in the to-do list. In another scenario, if the user has selected a particular task, the UE may display the time and days left to complete the selected task.
6) Ecommerce product page & Credit card User owns:
[0037] Assume that the user has saved details of more than one credit card in the payment desk, which is the first application. When the user wants to check out after adding a few products to the cart of a particular e-commerce application (the second application), and a collate gesture is identified, the UE identifies deals and offers for each of the credit cards saved in the payment desk, identifies the best offers, and recommends to the user the best credit card to use for the purchase.
7) Video & Image of a person:
[0038] Assume that the first application is an image gallery and the second application is a video gallery. In a scenario, assume that the user opens an image of a person in the image gallery, and that a video is being played in the video gallery. Upon detecting the collate gesture, the UE identifies, by processing the image and video together, all frames in the video where the person in the selected image appears. Further, the frame details are displayed to the user in the form of thumbnails. The user may then play only selected portions of the video, using the thumbnails.
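Several of the use cases above reduce to simple joins over data that the two windows already hold. Use case 5 (to-do list and calendar), for instance, amounts to computing the time remaining until each task's due date. A minimal Kotlin sketch follows; the Task records and dates are hypothetical.

    import java.time.LocalDate
    import java.time.temporal.ChronoUnit

    // Hypothetical sketch of use case 5: combine a to-do list with the calendar's current date
    // to show the days left for each task.
    data class Task(val title: String, val dueDate: LocalDate)

    fun daysLeft(tasks: List<Task>, today: LocalDate): Map<String, Long> =
        tasks.associate { it.title to ChronoUnit.DAYS.between(today, it.dueDate) }

    fun main() {
        val tasks = listOf(
            Task("Submit report", LocalDate.of(2015, 7, 20)),
            Task("Book tickets", LocalDate.of(2015, 7, 25))
        )
        daysLeft(tasks, LocalDate.of(2015, 7, 15)).forEach { (title, days) ->
            println("$title: $days day(s) left")
        }
    }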
[0039] The embodiments disclosed herein can be implemented through at least one software program running on at least one hardware device and performing network management functions to control the network elements. The network elements shown in FIG. 1 include blocks which can be at least one of a hardware device, or a combination of a hardware device and a software module.
[0040] The embodiments disclosed herein specify a system for triggering context specific actions. The mechanism allows triggering of at least one action based on the context of at least one application that is open on a User Equipment, and provides a system thereof. Therefore, it is understood that the scope of protection is extended to such a system and, by extension, to a computer readable means having a message therein, said computer readable means containing a program code for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The method is implemented in a preferred embodiment using the system together with a software program written in, for example, Very high speed integrated circuit Hardware Description Language (VHDL) or another programming language, or implemented by one or more VHDL or several software modules being executed on at least one hardware device. The hardware device can be any kind of device which can be programmed, including, for example, any kind of computer such as a server or a personal computer, or the like, or any combination thereof, for example, one processor and two FPGAs. The device may also include means which could be, for example, hardware means such as an ASIC, or a combination of hardware and software means, such as an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means are at least one hardware means or at least one hardware-cum-software means. The method embodiments described herein could be implemented in pure hardware or partly in hardware and partly in software. Alternatively, the embodiments may be implemented on different hardware devices, for example, using a plurality of CPUs.
[0041] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the claims as described herein.

CLAIMS
What is claimed is:

1. A method for user interactions in a multi-window user device, said method comprising:
receiving a first input on a first application window of said multi-window user device, wherein said first input triggers a context of a first application in said first application window;
receiving a second input on a second application window of said multi-window user device, wherein said second input triggers a context of a second application in said second application window; and
invoking at least one context specific action corresponding to a combination of contexts of said first application and the second application.
2. The method as claimed in claim 1, wherein identifying said at least one context specific action further comprises of:
generating a communicator dataset;
comparing said communicator dataset with a reference database;
identifying at least one match for data in said communicator dataset, in said reference database; and
selecting an action corresponding to said identified match as said context specific action.
3. The method as claimed in claim 2, wherein said communicator data set is generated based on context of said first application and said second application.
4. A system for user interactions in a multi-window user device, said system configured for:
receiving a first input on a first application window of said multi-window user device, by an auto consolidation system, wherein said first input triggers a context of a first application in said first application window;
receiving a second input on a second application window of said multi-window user device, by said auto consolidation system, wherein said second input triggers a context of a second application in said second application window; and
invoking at least one context specific action corresponding to a combination of contexts of said first application and the second application, by said auto consolidation system.
5. The system as claimed in claim 4, wherein said auto-consolidation system is configured to identify said at least one context specific action by:
generating a communicator dataset, by a multi-window user device of said auto consolidation system;
comparing said communicator dataset with a reference database, by said multi-window user device;
identifying at least one match for data in said communicator dataset, in said reference database, by said multi-window user device; and
selecting an action corresponding to said identified match as said context specific action, by said multi-window user device.
6. The system as claimed in claim 5, wherein said multi-window user device is configured to generate said communicator dataset based on context of said first application and said second application.

Dated this 15th July 2015


Signature:

Name: Kalyan Chakravarthy
(Patent Agent)

ABSTRACT
A method and system for invoking a context specific action in a multi-window user device. The user device monitors the current context of at least two applications that are currently open on the multi-window user device. The system collects a first input on a first application window of said multi-window user device, and a second input on a second application window of said multi-window user device. Further, based on the current contexts of the applications that are open on the first application window and the second application window, the system identifies a context specific action that needs to be triggered. The identified context specific action is then triggered by the system.

FIG. 3

Documents

Application Documents

# Name Date
1 Samsung_SRIB20140708016_Form 2 _CS.pdf 2015-07-17
2 Samsung_SRIB20140708016_drawings _CS.pdf 2015-07-17
3 Form5.pdf 2015-07-17
4 FORM3.pdf 2015-07-17
5 3627-CHE-2015-FORM-26 [15-03-2018(online)].pdf 2018-03-15
6 3627-CHE-2015-FORM-26 [16-03-2018(online)].pdf 2018-03-16
7 3627-CHE-2015-FER.pdf 2019-12-27
8 3627-CHE-2015-FER_SER_REPLY [26-06-2020(online)].pdf 2020-06-26
9 3627-CHE-2015-COMPLETE SPECIFICATION [26-06-2020(online)].pdf 2020-06-26
10 3627-CHE-2015-CLAIMS [26-06-2020(online)].pdf 2020-06-26
11 3627-CHE-2015-ABSTRACT [26-06-2020(online)].pdf 2020-06-26
12 3627-CHE-2015-CORRESPONDENCE [26-06-2020(online)].pdf 2020-06-26
13 3627-CHE-2015-OTHERS [26-06-2020(online)].pdf 2020-06-26
14 3627-CHE-2015-US(14)-HearingNotice-(HearingDate-05-01-2024).pdf 2023-12-11
15 3627-CHE-2015-Correspondence to notify the Controller [02-01-2024(online)].pdf 2024-01-02
16 3627-CHE-2015-Annexure [02-01-2024(online)].pdf 2024-01-02
17 3627-CHE-2015-FORM-26 [03-01-2024(online)].pdf 2024-01-03
18 3627-CHE-2015-Annexure [19-01-2024(online)].pdf 2024-01-19
19 3627-CHE-2015-Written submissions and relevant documents [19-01-2024(online)].pdf 2024-01-19
20 3627-CHE-2015-Response to office action [04-03-2024(online)].pdf 2024-03-04
21 3627-CHE-2015-PatentCertificate04-03-2024.pdf 2024-03-04
22 3627-CHE-2015-IntimationOfGrant04-03-2024.pdf 2024-03-04

Search Strategy

1 SearchStrategyMatrix_19-12-2019.pdf
2 2020-09-2311-15-18AE_24-09-2020.pdf

ERegister / Renewals

3rd: 04 Jun 2024

From 15/07/2017 - To 15/07/2018

4th: 04 Jun 2024

From 15/07/2018 - To 15/07/2019

5th: 04 Jun 2024

From 15/07/2019 - To 15/07/2020

6th: 04 Jun 2024

From 15/07/2020 - To 15/07/2021

7th: 04 Jun 2024

From 15/07/2021 - To 15/07/2022

8th: 04 Jun 2024

From 15/07/2022 - To 15/07/2023

9th: 04 Jun 2024

From 15/07/2023 - To 15/07/2024

10th: 04 Jun 2024

From 15/07/2024 - To 15/07/2025

11th: 15 Jul 2025

From 15/07/2025 - To 15/07/2026