
Method And System For Augmented Reality Based Smart Classroom Environment

Abstract: The invention provides a method for providing augmented reality based environment using a portable electronic device. The method includes capturing an image of users, recognizing the users in the image, and fetching information associated with the recognized users. Further, the method includes determining location of the users in the image, mapping the fetched information associated with the users with the determined location of the users, and communicating with the users based on the mapped information. FIG. 8


Patent Information

Application #: 3116/DEL/2012
Filing Date: 05 October 2012
Publication Number: 33/2014
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2024-01-11
Renewal Date:

Applicants

Samsung India Electronics Pvt. Ltd.
Samsung India Electronics Pvt. Ltd., Logix Cyber Park, Plot No. C-28 & 29, Tower D, Noida, Sector 62

Inventors

1. Dr. Debi P Dogra
Ramaganja, P.O. Jayantipur, District Paschim Medinipur, PIN 721201, West Bengal (India)
2. Saurabh Tyagi
H. No. 6/159, Sector 2, Rajendra Nagar, Ghaziabad, PIN 201005, Uttar Pradesh (India)
3. Trilochan Verma
House No. 40, New Khadi Colony, Panipat, Haryana, PIN 132103

Specification

FIELD OF INVENTION
[001] The present invention relates to augmented reality
environment, and more particularly to processing recognition information
to provide interactive augmented reality based environment.
BACKGROUND OF INVENTION
[002] Augmented Reality (AR) applications combine real world
data and computer-generated data to create a user environment. The real
world data may be collected using any data acquisition unit such as mobile
phone, Personal Digital Assistant (PDA), smart phone, camera,
communicator, wireless electronic device, or any other data acquisition
unit. The augmented reality can be used in video games, industrial designs,
mapping, navigation, advertising, medical, visualization, military,
emergency services, or in any other application. One of the most common
approaches to the AR is the use of live or recorded videos/images, captured
with a camera or mobile phone, which are processed and augmented with
computer-generated data to provide an interactive augmented reality
environment to the user. In many augmented reality applications,
information about the surrounding real world of the user becomes
interactive and digitally manipulated. In order to interact in an augmented
reality environment, a user may need location information of other users in
the virtual environment.
[003] The present invention provides a smart and robust method and
system for providing an interactive augmented reality based environment to
a user.
OBJECT OF INVENTION
[004] The principal object of the embodiments herein is to provide a
method and system for providing augmented reality environment using a
portable electronic device.
[005] Another object of the invention is to provide a method and
system for processing recognition information to provide an augmented
reality environment to a user.
[006] Another object of the invention is to provide a mechanism for
providing an interactive augmented reality platform that allows users to
interact with each other and digitally manipulate the information.
[007] Another object of the invention is to provide a method and
system for deriving location coordinates of users to provide an interactive
user environment.
SUMMARY
[008] Accordingly the invention provides a method for providing
augmented reality based environment using a portable electronic device.
The method includes capturing an image of users, recognizing the users in
the image, and fetching information associated with the recognized users.
Further, the method includes determining location of the users in the image,
mapping the fetched information associated with the users with the
determined location of the users and communicating with the users based
on the mapped information.
[009] Furthermore, the method includes adjusting position of the
portable electronic device according to position of the users. In an
embodiment, the position of the portable electronic device is adjusted
according to a predetermined region of the portable electronic device.
[0010] Furthermore, the method includes sending the image to a
server for recognizing the users. In an embodiment, the server performs a
facial recognition function on the image to determine face portion of the
users and authenticate the determined face portion in the image to
recognize the users.
[0011] Furthermore, the method includes transferring digital
information to the users using the information and the determined locatio5 n
of the users. In an embodiment, the digital information is transferred by
dragging and dropping the digital information in the determined location of
the users.
[0012] Furthermore, the method includes performing an adaptive
communication with the users based on the fetched information.
Furthermore, the method includes using the information and the determined
location of the users to take attendance in the environment.
[0013] These and other aspects of the embodiments herein will be
better appreciated and understood when considered in conjunction with the
following description and the accompanying drawings. It should be
understood, however, that the following descriptions, while indicating
preferred embodiments and numerous specific details thereof, are given by
way of illustration and not of limitation. Many changes and modifications
may be made within the scope of the embodiments herein without departing
from the spirit thereof, and the embodiments herein include all such
modifications.
BRIEF DESCRIPTION OF FIGURES
[0014] This invention is illustrated in the accompanying drawings,
throughout which like reference letters indicate corresponding parts in the
various figures. The embodiments herein will be better understood from
the following description with reference to the drawings, in which:
[0015] FIG. 1 illustrates generally, among other things, a top view
of a classroom, according to embodiments as disclosed herein;
[0016] FIG. 2 illustrates generally, among other things, an example
of classroom environment, according to embodiments as disclosed herein;
[0017] FIG. 3 depicts an exemplary image of the classroom
environment of the FIG. 2, according to embodiments as disclosed herein;
[0018] FIG. 4 illustrates generally, among other things, exemplary
components of a system in which various embodiments of the present
invention operate;
[0019] FIG. 5 illustrates generally, among other things, an
exemplary predetermined region or field of view of the FIG. 2, according
to embodiments as disclosed herein;
[0020] FIG. 6 depicts a sequence diagram illustrating operations
performed by the system of the FIG. 4 using a server, according to
embodiments as disclosed herein;
[0021] FIG. 7 depicts a sequence diagram illustrating operations
performed by the system of the FIG. 4 without using the server, according
to embodiments as disclosed herein;
[0022] FIG. 8 depicts a flowchart illustrating a method for
providing augmented reality environment, according to embodiments as
disclosed herein;
[0023] FIG. 9 depicts a flowchart illustrating operations performed
by an instructor, according to embodiments as disclosed herein; and
[0024] FIG. 10 depicts a computing environment implementing the
application, in accordance with various embodiments of the present
invention.
DETAILED DESCRIPTION OF INVENTION
[0025] The embodiments herein and the various features and
advantageous details thereof are explained more fully with reference to the
non-limiting embodiments that are illustrated in the accompanying
drawings and detailed in the following description. Descriptions of well-
known components and processing techniques are omitted so as to not
unnecessarily obscure the embodiments herein. The examples used herein
are intended merely to facilitate an understanding of ways in which the
embodiments herein can be practiced and to further enable those skilled in
the art to practice the embodiments herein. Accordingly, the examples
should not be construed as limiting the scope of the embodiments herein.
[0026] The embodiments herein achieve a method and system for
providing augmented reality based environment using a portable electronic
device (hereinafter “PED”). The method allows an instructor to capture an
image of the audience using the PED. The instructor adjusts the position of the
PED according to position of the audience. The instructor sends the
captured image to a server for performing a facial recognition function to
recognize the audience. The server recognizes the audience face(s) and
fetches information associated with the recognized audience. Further, the
server determines location coordinates of the audience and sends them to the
PED. The instructor maps the fetched information associated with the
audience with the determined location of the audience. The instructor
communicates with the audience based on the mapped information.
Furthermore, the instructor performs an adaptive communication with the
audience based on the fetched information.
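By way of illustration only, the server-assisted flow summarized above could be sketched in Python as follows. The specification defines no software interfaces; the RecognizedUser structure and the capture() and recognize_and_locate() calls are assumptions of this sketch, not elements of the invention.

from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class RecognizedUser:
    user_id: str
    name: str
    profile: dict                    # fetched information: previous records, field information, etc.
    location: Tuple[float, float]    # derived (x, y) coordinates of the user in the image

def run_session(ped_camera, server) -> Dict[Tuple[float, float], RecognizedUser]:
    # Capture an image after the instructor has framed the audience.
    image = ped_camera.capture()
    # The server recognizes faces, fetches profiles, and derives coordinates.
    users: List[RecognizedUser] = server.recognize_and_locate(image)
    # Map each fetched profile onto the user's location in the image; the
    # resulting overlay is rendered over the live view for interaction.
    return {user.location: user for user in users}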
[0027] The method and system disclosed herein are simple, robust,
and reliable to provide an intelligent and smart augmented reality based
environment. The method and system can be used to take attendance,
interact, or perform any other activity inside a classroom, meeting room, or
any other gathering. Further, the method and system provide an
interactive platform to the instructor to easily interact and exchange digital
information with the audience.
[0028] Referring now to the drawings, and more particularly to
FIGS. 1 through 10, where similar reference characters denote
corresponding features consistently throughout the figures, there are shown
preferred embodiments.
[0029] Throughout the description, the terms audience and one or
more users are used interchangeably.
[0030] FIG. 1 illustrates generally, among other things, a top view
100 of a classroom 102, according to embodiments as disclosed herein.
The classroom 102 can include an instructor 104 conducting a session with
audience 106. The instructor 104 is standing or sitting in front of the
audience 106 such that the instructor 104 is able to make direct eye-to-eye
contact with the audience 106. In an embodiment, the instructor 104
described herein can be, for example, teacher, demonstrator, speaker,
professor, presenter, guider, controller, or any other person. In an
embodiment, the audience 106 described herein can be, for example, an
individual or a group of students, users, participants, attendees, or any other
person. The audience 106 can see, hear, and interact with the instructor
104 and among each other easily.
[0031] The classroom 102 described in the FIG. 1 is only for
illustrative purposes and does not limit the scope of the invention. Further,
in practice, the classroom 102 can be circular, square, rectangular, or of
any other shape. Though the FIG. 1 is described with respect to a
classroom, the present invention can be used in any type of gathering
such as meeting, conferencing, social platform or environment, public or
private event, company or organization workspace, or any other gathering.
[0032] FIG. 2 illustrates generally, among other things, an example
of classroom environment 200, according to embodiments as disclosed
herein. In an embodiment, the classroom 102 includes audience 106 facing
towards the instructor 104. The classroom 102 provides a room space 202
for the seating arrangement of the audience 106 as shown in the FIG. 2. In an
example, a sequential seating arrangement of the audience 106 is made in
which some of the audience appears to be close to the instructor 104 as
shown at 204 and some of the audience 106 appears to be far from the
instructor 104 as shown at 206.
[0033] In an embodiment, the instructor 104 can have a portable
electronic device (PED) 208 to capture an image or video of the audience
106. The PED 208 described herein can include, for example, mobile
phone, personal digital assistant, smart phone, tablet, or any other wireless
customer electronic device. The PED 208 is capable of including imaging
sensor 210 to capture single or multiple images or videos of the audience
106. In an embodiment, the instructor 104 can adjust the position of the
PED 208 according to position of audience 106 such that the room space
202 is visible within a predetermined region 212 (or field of view 212) of
the imaging sensor 210. The instructor 104 can adjust the position of the
PED 208 according to the position of audience 106 such that the face of every
individual in the audience 106 can be clearly visible in the image. In an
embodiment, the PED 208 can be placed at any specific location of the
classroom 102 in a way that the predetermined region 212 of the imaging
sensor 210 covers the entire room space 202. The specific location
described herein can provide a clear facial view of the audience 106 present
in the classroom 102.
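As an illustration of this framing step, a check that every detected face lies inside the predetermined region 212 might look like the following; the face detector, the pixel rectangles, and the rectangular shape of the region are assumptions of this sketch.

def faces_within_region(face_boxes, region):
    # face_boxes: list of (left, top, right, bottom) pixel rectangles from any face detector.
    # region: (x_min, y_min, x_max, y_max) of the predetermined region 212.
    x_min, y_min, x_max, y_max = region
    return all(
        left >= x_min and top >= y_min and right <= x_max and bottom <= y_max
        for (left, top, right, bottom) in face_boxes
    )

# The PED could prompt the instructor to re-aim until this check passes.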
[0034] FIG. 3 depicts an exemplary image 300 of the classroom
environment 200 of the FIG. 2, according to embodiments as disclosed
herein. The exemplary image 300 can be displayed on the display screen of
the PED 208. In an embodiment, the image 300 can be viewed on any
other display device, for example, liquid crystal display device, cathode ray
tube monitor, plasma display, light-emitting diode (LED) display device,
image projection device, or any other type of display device capable of
presenting the image 300. The image 300 includes a scene having a visual
representation of the audience 106, physical items, locality (for example,
exact coordinates of where an individual user is currently located in the
classroom 102), and other objects of the classroom 102.
[0035] FIG. 4 illustrates generally, among other things, exemplary
components of a system 400 in which various embodiments of the present
invention operate. The system 400 can include a server 402 configured to
be connected to the PED 208 through a wired or wireless communication
network 404. In an embodiment, the instructor 104 can use the PED 208 to
capture and send the image of the audience 106 to the server 402.
[0036] In an example, multiple images can be continuously sent to
the server 402 as a video stream. Each image generally includes a scene at
which the imaging sensor 210 of the PED 208 is pointed. Such scene can
include visual representation of the audience, physical items, location
coordinates of the audience, or any other object present in the classroom
102. In an embodiment, the instructor 104 sends the captured image to the
server 402 for further processing. The operations performed by the system
400 to provide augmented reality environment using the server 402 are
described in conjunction with FIG. 6.
[0037] In an embodiment, the PED 208 creates an image in memory
of the PED 208 and uses the image for further processing without sending it
to the server 402. The operations performed by the system 400 to provide
the augmented reality environment, without using the server 402, are
described in conjunction with FIG. 7.
[0038] FIG. 5 illustrates generally, among other things, an
exemplary predetermined region 212 or field of view of the FIG. 2,
according to embodiments as disclosed herein. The use of a common
coordinate system can assist in presenting the captured image to the instructor 104
for providing an interactive augmented reality environment. The
coordinates (x1, y1), (x2, y2), (x3, y3), and (x4, y4) can define the
predetermined region 212 that is adjusted, by the instructor 104, according
to the position of the audience 106. In an embodiment, the server 402 is
configured to perform a facial recognition function on the image to
determine face portion 502 of the audience 106. In an example, the server
402 determines the location of the audience 106 by deriving the location
coordinates of each individual audience in the image. The server 402
determines that the audience 106 is at point 504 facing towards the
instructor 104. The server 402 is configured to determine the position of
audience 106 relative to axes of the coordinates, such as axes x, y, and z
illustrated as a part of the system 400. In an embodiment, the PED 208 can
also determine the location of the audience 106 by using local coordinates,
Global Positioning System (GPS) coordinates of the PED 208, or any other
technology known in the art.
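Purely as an illustration of deriving location coordinates, a detected face box could be reduced to a point expressed in the coordinate frame spanned by (x1, y1) and (x3, y3), assuming the predetermined region 212 is axis-aligned; the specification leaves the coordinate system to techniques known in the art.

def face_location(face_box, region):
    # face_box: (left, top, right, bottom) in image pixels.
    # region: ((x1, y1), (x3, y3)), two opposite corners of the predetermined region 212.
    (x1, y1), (x3, y3) = region
    left, top, right, bottom = face_box
    cx, cy = (left + right) / 2.0, (top + bottom) / 2.0
    # Return the face centre normalised to the region's coordinate frame.
    return ((cx - x1) / (x3 - x1), (cy - y1) / (y3 - y1))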
[0039] FIG. 6 depicts a sequence diagram 600 illustrating
operations performed by the system 400 of the FIG. 4 using the server 402,
according to embodiments as disclosed herein. In an embodiment, at 602,
the audience 106 can provide profile information (or recognition
information) for requesting registration of the recognition information with
the server 402, although such registration is not required in other
embodiments. In an embodiment, the instructor 104 can also provide the
recognition information associated with the audience 106 to the server 402.
In an embodiment, the instructor 104 can also register the audience
instantly based on the instructor's knowledge. Further, the instructor 104 can
perform functions to correct/modify or delete the recognition information
associated with the audience 106. The server 402 is configured to record
the profile information in one or more databases.
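A hypothetical registry supporting the registration, correction, and deletion described at 602 could be as simple as the following; the field names and the in-memory storage are assumptions of this sketch, and a real deployment would use the server's database.

class ProfileRegistry:
    # Minimal in-memory stand-in for the recognition-information database.
    def __init__(self):
        self._profiles = {}                              # user_id -> record

    def register(self, user_id, face_encoding, profile):
        self._profiles[user_id] = {"encoding": face_encoding, "profile": profile}

    def update(self, user_id, **fields):
        self._profiles[user_id]["profile"].update(fields)

    def delete(self, user_id):
        self._profiles.pop(user_id, None)

    def lookup(self, user_id):
        return self._profiles.get(user_id)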
[0040] At 604, the instructor 104 can use the PED 208 to capture an
image of the audience 106. In an example, the instructor 104 can adjust the
position of the PED 208 in a way that the audience 106 is within the
predetermined region 212 of the imaging sensor 210. In an example, the
instructor 104 adjusts the position of the PED 208 according to the position
of the audience 106 such that the face of every individual in the audience 106
can be clearly visible in the image.
[0041] At 606, the instructor 104 can use the PED 208 to send the
image to the server 402 through the communication network 404. In an
embodiment, the PED 208 can process the image without sending to the
server 402 as described in the FIG. 7. At 608, the server 402 is configured
to perform facial recognition functions on the image to recognize the
audience 106 in the image. The facial recognition functions described
herein are any facial recognition functions or techniques known in the art
used to determine the facial portions of the audience 106. In an example,
the server 402 is configured to recognize the audience 106 by
authenticating the determined face portions with the data stored in the
database.
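The specification only requires facial recognition functions known in the art. One possible implementation of steps 606 and 608, using the open-source face_recognition package (not named in the specification), is sketched below; the registry of known encodings is an assumption carried over from the registration step.

import face_recognition

def recognize_audience(image_path, registry):
    # registry: list of (user_id, known_face_encoding) pairs built at registration time.
    image = face_recognition.load_image_file(image_path)
    boxes = face_recognition.face_locations(image)            # (top, right, bottom, left) per face
    encodings = face_recognition.face_encodings(image, boxes)
    known_ids = [uid for uid, _ in registry]
    known_encodings = [enc for _, enc in registry]
    recognized = []
    for box, encoding in zip(boxes, encodings):
        matches = face_recognition.compare_faces(known_encodings, encoding)
        if any(matches):
            # Authenticate the face portion against the stored data (608).
            recognized.append((known_ids[matches.index(True)], box))
    return recognized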
[0042] At 610, the server 402 is configured to fetch the information
associated with the recognized audience 106. In an example, the
information extracted by the server 402 can include the profile information,
previous records, field information, or any other information. At 612, the
server 402 is configured to determine location of the audience 106 in the
image. In an example, the server 402 derives the location coordinates of
the audience 106 using the standard location coordinate systems known in
the art. At 614, the server 402 is configured to provide the information
associated with the recognized audience 106 and determined location
coordinates of the audience 106 to the PED 208 through the communication
network 404.
[0043] In an embodiment, at 616, the instructor 104 can map the
information with the determined location of the audience 106. In addition,
the instructor 104 can use the information to take attendance, view previous
records, manipulate information, or to perform any other action. At 618,
the instructor 104 can communicate with the audience 106 based on the
mapped information. In an example, an interactive user interface is
displayed on the PED 208 of the instructor 104 to transfer data to the
audience 106. The instructor 104 can use the interactive user interface to
transfer or manipulate digital information to the audience 106, through the
communication network 404, by dragging and dropping the digital
information in the location coordinates of the audience 106. In an example,
the instructor 104 can perform an adaptive communication with the
audience 106 based on the information received from the server 402.
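As an illustration of steps 616 and 618, the fetched profiles can be keyed by the derived locations, and a drag-and-drop target can be resolved to the recognized user nearest the drop point. The nearest-point resolution and the distance threshold are assumptions of this sketch; the claims only require that the digital information be dropped in the determined location of a user.

import math

def build_overlay(recognized, profiles):
    # recognized: [(user_id, (x, y)), ...]; profiles: {user_id: fetched information}.
    return [{"user_id": uid, "location": loc, "info": profiles.get(uid, {})}
            for uid, loc in recognized]

def user_at_drop_point(overlay, drop_point, max_distance=0.1):
    # Return the overlay entry whose location is closest to the drop point,
    # or None if nothing lies within max_distance (normalised coordinates assumed).
    def distance(entry):
        (x, y), (dx, dy) = entry["location"], drop_point
        return math.hypot(x - dx, y - dy)
    nearest = min(overlay, key=distance, default=None)
    return nearest if nearest is not None and distance(nearest) <= max_distance else None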
[0044] FIG. 7 depicts a sequence diagram 700 illustrating
operations performed by the system 400 of the FIG. 4, without using the
server 402, according to embodiments as disclosed herein. In an
embodiment, at 702, the instructor 104 can use the PED 208 to capture an
image of the audience 106. In an example, the instructor 104 can adjust the
position of the PED 208 in a way that the audience 106 is within the
predetermined region 212 of the imaging sensor 210. In an example, the
instructor 104 adjusts the position of the PED 208 according to the position
of the audience 106 such that the face of every individual in the audience 106
can be clearly visible in the image.
[0045] At 704, the PED 208 is configured to create an image in
internal memory and perform a facial recognition function on the image to
recognize the audience 106. The PED 208 can determine the facial
portions and recognizes the audience 106 by authenticating the determined
face portions with the data stored in the internal memory. At 706, the PED
208 is configured to fetch the information associated with the recognized
audience 106. At 708, the PED 208 is configured to determine location of
the audience 106 in the image. In an example, the PED 208 derives the
location coordinates of the audience 106 using the local coordinate system
or GPS coordinate system of the PED 208.
[0046] At 710, the PED 208 is configured to display the information
and determined location coordinates of the audience 106. In an example,
the PED 208 provides an interactive user interface to the instructor 104 to
transfer digital information to the audience 106. At 712, the instructor 104
can map the information with the determined location coordinates of the
audience 106. In an example, the instructor can use the information to take
attendance, view previous records, manipulate information, or to perform
any other action.
[0047] At 714, the instructor 104 can communicate with the
audience 106 based on the mapped information. In an example, the
instructor 104 can use the interactive user interface to transfer or
manipulate any digital information to/from the audience 106 by dragging
and dropping the digital information in the location coordinates of the
audience 106. In an example, the instructor 104 can perform an adaptive
communication with the audience 106 based on the information displayed
on the PED 208.
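The server-less path of FIG. 7 keeps the same recognition step on the PED itself. The sketch below assumes that the known encodings and profiles are serialized to a JSON file in the device's internal storage; this file layout, and the use of the face_recognition package, are assumptions rather than part of the specification.

import json
import numpy as np
import face_recognition

def recognize_on_device(image_path, registry_path="registry.json"):
    # Assumed file layout: {user_id: {"encoding": [128 floats], "profile": {...}}}.
    with open(registry_path) as fh:
        data = json.load(fh)
    ids = list(data)
    known = [np.array(data[uid]["encoding"]) for uid in ids]
    image = face_recognition.load_image_file(image_path)
    boxes = face_recognition.face_locations(image)
    results = []
    for box, encoding in zip(boxes, face_recognition.face_encodings(image, boxes)):
        matches = face_recognition.compare_faces(known, encoding)
        if any(matches):
            uid = ids[matches.index(True)]
            results.append({"user_id": uid, "box": box, "profile": data[uid]["profile"]})
    return results        # everything stays on the PED; no network round trip is needed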
[0048] FIG. 8 depicts a flowchart 800 illustrating a method for
providing augmented reality environment, according to embodiments as
disclosed herein. Various steps of the flowchart 800 are provided in
blocks, where the steps are performed by the instructor 104, the PED 208,
the server 402, or a combination thereof. The flowchart 800 starts at step
802. At step 804, the method includes capturing image of the audience
106. In an example, the instructor 104 uses the PED 208 to capture an
image of the audience 106. The instructor 104 adjusts the position of the
PED 208 in a way that the audience 106 is within the predetermined region
212 of the imaging sensor 210 and the face of every individual in the audience
106 is clearly visible in the image.
[0049] At step 806, the method includes sending the image to the
server 402. In an example, the instructor 104 uses the PED 208 to send the
image to the server 402 through the communication network 404. In an
example, the instructor 104 uses the PED 208 to further process the image
without sending it to the server 402. At step 808, the method includes
recognizing the audience 106 in the image. In an example, the server 402
performs a facial recognition function on the image to determine the face
portion of the audience 106. The server 402 recognizes the audience 106
by authenticating the determined face portions with the audience data
stored in the database.
[0050] At step 810, the method includes fetching information
related to the audience 106. In an example, the server 402 fetches the
information associated with the recognized audience 106 from the audience
data stored in the database. At step 812, the method includes determining
location of the audience 106 in the image. In an example, the server 402
derives the location coordinates of the audience 106 using the standard
location coordinate systems.
[0051] At step 814, the method includes providing the information
and determined location of audience 106. In an example, the server 402
provides the determined location and the information associated with the
audience 106 to the PED 208 through the communication network 404. At
step 816, the method includes performing an adaptive communication with
the audience 106 using the information and the determined location of the
audience 106. In an example, the instructor 104 transfers the digital
information to the audience 106 by dragging and dropping the digital
information in the location coordinates of the audience 106. In an example,
the instructor 104 communicates with the audience 106 by mapping the
received information with the determined location of the audience 106. At
step 818, if the instructor 104 wants to perform the operation again, then
the method includes repeating the steps 804-818, else the flowchart 800
stops at step 820.
[0052] FIG. 9 depicts a flowchart 900 illustrating operations
performed by the instructor 104, according to embodiments as disclosed
herein. The flowchart 900 starts at step 902. At step 904, the instructor
104 captures and sends an image to the server 402. In an example, the
instructor 104 uses the PED 208 to capture the image of the audience 106
and send the image to the server 402 over the communication network 404.
At step 906, the instructor 104 receives the location and information about
the audience 106. In an example, the server 402 recognizes the audience
106 and fetches the information associated with the recognized audience.
In an example, the server 402 determines the location coordinates of the
audience 106 in the image. Further, the server 402 sends the location
coordinates and the information associated with the audience 106 to the
instructor 104. The instructor 104 uses the PED 208 to receive the location
coordinates and the information about the audience 106 through the
communication network 404.
[0053] At step 908, the instructor 104 communicates with the
audience 106 by mapping the received information with the corresponding
location coordinates of the audience 106. At step 910, the instructor 104
uses the received information to take attendance, view previous records,
manipulate information, or to perform any other task in the classroom or
any other gathering. At step 912, if the instructor 104 wants to perform the
operations again, then the steps 904-912 of the flowchart 900 are repeated,
else the flowchart 900 stops at step 914.
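Step 910's attendance use case reduces to comparing the recognized identities against a class roster; a minimal sketch, assuming both are plain collections of user IDs, is given below.

def take_attendance(roster_ids, recognized_ids):
    # roster_ids: all registered audience members; recognized_ids: those found in the image.
    present = sorted(set(roster_ids) & set(recognized_ids))
    absent = sorted(set(roster_ids) - set(recognized_ids))
    return {"present": present, "absent": absent}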
[0054] The various steps described with respect to the FIGS. 6-9
can be performed in sequential order, in random order, simultaneously, in
parallel, or in a combination thereof. Further, in some embodiments, some of
the steps can be omitted, skipped, or added without departing from the
scope of the invention.
[0055] FIG. 10 depicts a computing environment implementing the
application, in accordance with various embodiments of the present
invention. As depicted, the computing environment comprises at least one
processing unit that is equipped with a control unit and an Arithmetic Logic
Unit (ALU), a memory, a storage unit, a clock chip, a plurality of networking
devices, and a plurality of input/output (I/O) devices. The processing unit is
responsible for processing the instructions of the algorithm. The processing
unit receives commands from the control unit in order to perform its
processing. Further, any logical and arithmetic operations involved in the
execution of the instructions are computed with the help of the ALU.
[0056] The overall computing environment can be composed of
multiple homogeneous and/or heterogeneous cores, multiple CPUs of
different kinds, special media and other accelerators. Further, the plurality
of processing units may be located on a single chip or
over multiple chips.
[0057] The algorithm comprising of instructions and codes required
for the implementation are stored in either the memory unit or the storage
or both. At the time of execution, the instructions may be fetched from the
corresponding memory and/or storage, and executed by the processing unit.
The processing unit synchronizes the operations and executes the
instructions based on the timing signals generated by the clock chip. The
embodiments disclosed herein can be implemented through at least one
software program running on at least one hardware device and performing
network management functions to control the elements. The elements
shown in the FIGS. 1-10 include various units, blocks, modules, or steps
described in relation with methods, processes, algorithms, or systems of the
present invention, which can be implemented using any general purpose
processor and any combination of programming language, application, and
embedded processor.
[0058] The foregoing description of the specific embodiments will
so fully reveal the general nature of the embodiments herein that others can,
by applying current knowledge, readily modify and/or adapt for various
applications such specific embodiments without departing from the generic
concept, and, therefore, such adaptations and modifications should and are
intended to be comprehended within the meaning and range of equivalents
of the disclosed embodiments. It is to be understood that the phraseology
or terminology employed herein is for the purpose of description and not of
limitation. Therefore, while the embodiments herein have been described
in terms of preferred embodiments, those skilled in the art will recognize
that the embodiments herein can be practiced with modification within the
spirit and scope of the embodiments as described herein.

STATEMENT OF CLAIMS
We claim:
1. A method for providing augmented reality based environment using
a portable electronic device, the method comprising:
capturing an image of at least one user;
recognizing said at least one user in said image;
fetching information associated with said at least one
recognized user;
determining location of said at least one user in said image;
mapping said fetched information associated with said at
least one user with said determined location of said at least one
user; and
communicating with said at least one user based on said
mapping.
2. The method of claim 1, wherein said method further comprises
adjusting position of said portable electronic device according to
position of said at least one user.
3. The method of claim 2, wherein said position of said portable
electronic device is adjusted in accordance with a predetermined
region.
4. The method of claim 1, wherein said method further comprises
sending said image to a server for recognizing said at least one user.
5. The method of claim 1, wherein recognizing said at least one user
comprises:
performing a facial recognition function on said image to
determine face portion of said at least one user; and
authenticating said determined face portion in said image to
recognize said at least one user.
6. The method of claim 1, wherein determining location of said at least
one user comprises deriving location coordinates of said at least one
user in said image.
7. The method of claim 1, wherein said method further comprises
transferring digital information to said at least one user using said
information and said determined location of said at least one user.
8. The method of claim 7, wherein said digital information is
transferred by dragging and dropping said digital information in
said determined location of said at least one user.
9. The method of claim 1, wherein said method further comprises
using said information and said determined location of said at least
one user to take attendance of said at least one user in said
environment.
10. The method of claim 1, wherein said method further comprises performing
an adaptive communication with said at least one user based on said
fetched information.

Documents

Orders

Section Controller Decision Date

Application Documents

# Name Date
1 Power of Authority.PDF 2012-10-10
2 Form-1.pdf 2012-10-10
3 Form-3.pdf 2012-10-10
4 Form-5.pdf 2012-10-10
5 Drawings.pdf 2012-10-10
6 3116-del-2012-GPA-(22-11-2012).pdf 2012-11-22
7 3116-del-2012-Correspondence Others-(22-11-2012).pdf 2012-11-22
8 FORM 13-change of POA - Attroney.pdf 2014-10-07
9 SEL_New POA_ipmetrix.pdf 2014-10-07
10 3116-DEL-2012-FER.pdf 2019-04-23
11 3116-DEL-2012-ASSIGNMENT DOCUMENTS [10-10-2019(online)].pdf 2019-10-10
12 3116-DEL-2012-8(i)-Substitution-Change Of Applicant - Form 6 [10-10-2019(online)].pdf 2019-10-10
13 3116-DEL-2012-FORM-26 [11-10-2019(online)].pdf 2019-10-11
14 3116-DEL-2012-PETITION UNDER RULE 137 [11-10-2019(online)].pdf 2019-10-11
15 3116-DEL-2012-FER_SER_REPLY [21-10-2019(online)].pdf 2019-10-21
16 3116-DEL-2012-PETITION UNDER RULE 137 [21-10-2019(online)].pdf 2019-10-21
17 3116-DEL-2012-Proof of Right (MANDATORY) [25-11-2019(online)].pdf 2019-11-25
18 3116-DEL-2012-FORM-26 [10-06-2021(online)].pdf 2021-06-10
19 3116-DEL-2012-Correspondence to notify the Controller [10-06-2021(online)].pdf 2021-06-10
20 3116-DEL-2012-Annexure [25-06-2021(online)].pdf 2021-06-25
21 3116-DEL-2012-Written submissions and relevant documents [25-06-2021(online)].pdf 2021-06-25
22 3116-DEL-2012-US(14)-HearingNotice-(HearingDate-16-06-2021).pdf 2021-10-17
23 3116-DEL-2012-PatentCertificate11-01-2024.pdf 2024-01-11
24 3116-DEL-2012-IntimationOfGrant11-01-2024.pdf 2024-01-11
25 3116-DEL-2012-PROOF OF ALTERATION [01-01-2025(online)].pdf 2025-01-01

Search Strategy

1 searchstrategy_22-04-2019.pdf

ERegister / Renewals

3rd: 10 Apr 2024 (from 05/10/2014 to 05/10/2015)
4th: 10 Apr 2024 (from 05/10/2015 to 05/10/2016)
5th: 10 Apr 2024 (from 05/10/2016 to 05/10/2017)
6th: 10 Apr 2024 (from 05/10/2017 to 05/10/2018)
7th: 10 Apr 2024 (from 05/10/2018 to 05/10/2019)
8th: 10 Apr 2024 (from 05/10/2019 to 05/10/2020)
9th: 10 Apr 2024 (from 05/10/2020 to 05/10/2021)
10th: 10 Apr 2024 (from 05/10/2021 to 05/10/2022)
11th: 10 Apr 2024 (from 05/10/2022 to 05/10/2023)
12th: 10 Apr 2024 (from 05/10/2023 to 05/10/2024)
13th: 04 Oct 2024 (from 05/10/2024 to 05/10/2025)
14th: 12 Sep 2025 (from 05/10/2025 to 05/10/2026)