
System And Method To Augment New Targets In A Marker Based Augmented Reality Application

Abstract: This disclosure relates generally to a system and method to augment new targets in a marker-based augmented reality (AR) application without modifying the core AR application. The extracted features of the targets and the related augmentation content need not be specified at AR application development time. The system is configured to query an external source to retrieve metadata about the targets and their related augmentation content, along with their locations. The system then retrieves the features of the targets and their related augmentation content using this metadata and dynamically prepares the target-to-augmentation-content mapping hierarchy in the AR application. Since the target features and their related augmentation content are not embedded in the AR application itself, they are retrieved from the external source. Embodiments therefore allow existing, new and emerging targets and their related augmentation content to be specified on an ongoing basis. [To be published with FIG. 3]


Patent Information

Application #
Filing Date
13 March 2020
Publication Number
38/2021
Publication Type
INA
Invention Field
ELECTRONICS
Status
Email
kcopatents@khaitanco.com
Parent Application
Patent Number
Legal Status
Grant Date
6 March 2024
Renewal Date

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point Mumbai-400021. Maharashtra. India.

Inventors

1. KAR, Debnarayan
Tata Consultancy Services Limited Block -1B, Eco Space, Plot No. IIF/12 (Old No. AA-II/BLK 3. I.T) Street 59 M. WIDE (R.O.W.) Road, New Town, Rajarhat, P.S. Rajarhat, Dist - N. 24 Parganas, Kolkata-700160. West Bengal. India.
2. JOHRI, Vansh
Tata Consultancy Services Limited Block -1B, Eco Space, Plot No. IIF/12 (Old No. AA-II/BLK 3. I.T) Street 59 M. WIDE (R.O.W.) Road, New Town, Rajarhat, P.S. Rajarhat, Dist - N. 24 Parganas, Kolkata 700160 West Bengal India
3. MISRA, Prateep
Tata Consultancy Services Limited Block -1B, Eco Space, Plot No. IIF/12 (Old No. AA-II/BLK 3. I.T) Street 59 M. WIDE (R.O.W.) Road, New Town, Rajarhat, P.S. Rajarhat, Dist - N. 24 Parganas, Kolkata 700160 West Bengal India
4. PANDA, Satanik
Tata Consultancy Services Limited Block -1B, Eco Space, Plot No. IIF/12 (Old No. AA-II/BLK 3. I.T) Street 59 M. WIDE (R.O.W.) Road, New Town, Rajarhat, P.S. Rajarhat, Dist - N. 24 Parganas, Kolkata 700160 West Bengal India
5. KOUL, Neerja
Tata Consultancy Services Limited Quadra II, Survey No. 239, Sadesataranali, Opposite Magarpatta City, Hadapsar, Pune 411028 Maharashtra India

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION (See Section 10 and Rule 13)
Title of invention
SYSTEM AND METHOD TO AUGMENT NEW TARGETS IN A MARKER BASED AUGMENTED REALITY APPLICATION
Applicant
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India
Preamble to the description
The following specification particularly describes the invention and the manner in which it is to be performed.

TECHNICAL FIELD [001] The disclosure herein generally relates to the field of marker-based augmentation without modifying the core augmented reality application and, more particularly, to a system and method to augment new targets in a marker-based augmented reality application without modifying the core augmented reality application.
BACKGROUND [002] Augmented reality (AR) applications can be marker-based or marker-less. In marker-based AR applications, the targets and their features are known beforehand, during the development of the AR application. Therefore, marker-based AR applications are also called model-based AR applications, because the features of the targets are their models. However, in marker-less or model-free AR applications, the targets and their features are not specifically known beforehand.
[003] The marker-less AR applications use a combination of various existing, improving and emerging techniques involving position tracking and understanding sensors, Simultaneous Localization and Mapping (SLAM), computer vision and deep learning to understand planar surfaces and objects without any prior knowledge of them. The marker-less AR applications do not require each instance of a target and its features to be known beforehand. Therefore, the marker-based or model-based AR application has prior (beforehand) knowledge of each specific target and its features, whereas the marker-less or model-free AR application does not have prior knowledge of the targets and their features.
[004] In a marker-based AR application, when new targets are introduced or when the content for existing targets is added or updated, the marker-based AR application also needs to be changed. The marker-based AR applications are usually deployed on devices like smartphones and smart glasses. So after a marker-based AR application is deployed on multitudes of devices, whenever

new targets are introduced to the AR application or new or updated content is brought in for the existing targets, the marker-based AR applications on the many devices need to be re-deployed to show the new augmentation. This is a big challenge for wide deployment of marker-based AR applications in dynamic environments.
SUMMARY [005] Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a processor-implemented method to augment targets in a marker-based AR application without modifying the core of the AR application is provided.
[006] The processor-implemented method includes capturing at least one media of each of a first set of targets using a multimedia component, and extracting a plurality of features of the first set of targets from the captured at least one media to augment a second set of targets. It would be appreciated that the first set of targets is dynamically updated in a marker-based AR application based on the extracted plurality of features. Further, the method includes updating a feature store with the extracted plurality of features of the first set of targets, a content store with one or more augmentation content of the first set of targets for augmenting the second set of targets, and a metadata store of a web service. The method further includes communicating with the updated metadata store to retrieve a metadata of each of the first set of targets using the marker-based augmented reality application, parsing the retrieved metadata to obtain the identity of at least one target and the augmentation content of the first set of targets, comparing each of the plurality of extracted features of the first set of targets with each feature of the second set of targets of the marker-based augmented reality application, and displaying the one or more augmentation content of the first set of targets mapped over the second set of targets in the form of augmentation.
[007] In another aspect, a system to augment targets in a marker-based AR application without modifying the core of the AR application is provided. The

system includes at least one user interface, at least one memory storing a plurality of instructions, and one or more hardware processors communicatively coupled with the at least one memory. Herein, the one or more hardware processors are configured to execute one or more modules. Further, the system includes a capturing module configured to capture at least one media of each of a first set of targets using a multimedia component, an extraction module configured to extract a plurality of features of the first set of targets from the captured at least one media to augment a second set of targets dynamically updated in a marker-based AR application based on the extracted plurality of features, wherein the second set of targets is a subset of the first set of targets, and an updating module configured to update a feature store with the extracted plurality of features of the first set of targets, wherein the feature store is updated dynamically.
[008] Further, the system comprises a communication module configured to communicate with the updated metadata store to retrieve a metadata of a subset of the first set of targets using a marker-based augmented reality (AR) application, a parsing module configured to parse the retrieved metadata to obtain the identity of at least one target and the corresponding augmentation content of the first set of targets, a comparing module configured to compare each of the plurality of extracted features of the first set of targets with each feature of the second set of targets of the marker-based augmented reality application, and a display module configured to display the one or more augmentation content of the first set of targets mapped over the second set of targets in the form of augmentation.
[009] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS

[010] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
[011] FIG. 1 illustrates an exemplary system to augment targets in a marker-based AR application without modifying the core of the AR application, according to some embodiments of the present disclosure.
[012] FIG. 2 illustrates a schematic diagram in accordance with some embodiments of the present disclosure.
[013] FIG. 3 is a flow diagram illustrating a method to augment targets in a marker-based AR application without modifying the core of the AR application, in accordance with some embodiments of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS [014] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope being indicated by the following claims.
[015] Referring now to the drawings, and more particularly to FIG. 1 through FIG. 3, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.

[016] The embodiments herein provide a system and method to augment a marker-based augmented reality (AR) application without modifying the core of the marker-based AR application. It is to be noted that a conventional marker-based AR application contains each feature and augmentation content of the second set of targets at the time of development. The marker-based AR application of the present disclosure instead retrieves a metadata using a web-based application. A location of the web-based application program interface (API) uniform resource locator (URL) end points needs to be known to the marker-based AR application, along with an access credential to the remote API. The retrieved metadata comprises a first set of targets and corresponding one or more augmentation content of each of the first set of targets in a predefined exchange format.
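The retrieval described above can be sketched as follows. This is an illustrative sketch only: the endpoint URL, the bearer-token header, and the JSON field names are assumptions for demonstration, not part of the disclosure, which leaves the exchange format and credential scheme open.

```python
import json
import urllib.request

METADATA_URL = "https://example.com/ar/api/v1/targets"  # hypothetical endpoint


def build_metadata_request(url, access_key):
    # Attach the access credential required to securely reach the remote API.
    return urllib.request.Request(
        url,
        headers={"Authorization": "Bearer " + access_key,
                 "Accept": "application/json"},
    )


def parse_metadata(payload):
    # The predefined exchange format is assumed here to be JSON with a
    # top-level "targets" list; the disclosure does not fix the format.
    doc = json.loads(payload)
    return {t["id"]: t for t in doc["targets"]}


sample = ('{"targets": [{"id": "t1",'
          ' "features": "https://example.com/f/t1",'
          ' "content": "https://example.com/c/t1"}]}')
targets = parse_metadata(sample)
```

Only the URL and credential need to be known at development time; the targets themselves arrive in the response.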
[017] Referring to FIG. 1, a system (100) to augment a marker-based augmented reality (AR) application without modifying the core of the marker-based AR application is illustrated. In the preferred embodiment, the system (100) comprises at least one memory (102) with a plurality of instructions, at least one user interface (104) and one or more hardware processors (106), wherein the one or more processors are communicatively coupled with the at least one memory (102) to execute modules therein.
[018] The hardware processor (106) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the hardware processor (106) is configured to fetch and execute computer-readable instructions stored in the memory (102). Further, the system comprises a capturing module (108), an extraction module (110), an updation module (112), a communication module (114), a parsing module (116), a comparison module (118), and a display module (120).

[019] In the preferred embodiment of the disclosure, the capturing module (108) of the system (100) is configured to capture at least one media of each of a first set of targets using a multimedia component. The multimedia component comprises a camera to capture an image or a video.
[020] In the preferred embodiment of the disclosure, the extraction module (110) of the system (100) is configured to extract a plurality of features of the first set of targets from the captured at least one media, and corresponding one or more augmentation content of each of the first set of targets, to augment a second set of targets in a marker-based AR application based on the extracted plurality of features. Herein, the second set of targets is a subset of the first set of targets. It would be appreciated that the extracted augmentation content includes static augmentation content and dynamic augmentation content. The static augmentation content is extracted from a predefined location and displayed as an overlay on the captured media. Further, if the augmentation content is a dynamic API response, then a predefined API Uniform Resource Locator (URL) end point with an access credential is called, and the response is received and parsed to display as an overlay on the captured media.
[021] In one example, distinctive and distinguishable objects like a Bar Code, QR Code, text, 2D image or even a 3D object which trigger the marker-based AR application are called targets. The patterns or distinct structures like points, edges and corners found in the targets are extracted as features. When the marker-based AR application needs to decide if a target in the current camera feed matches any of the set of targets available with the marker-based AR application, the marker-based AR application compares the extracted features of each frame from the camera feed with the features of the sets of targets. It would be appreciated that when two instances of the same features of the same target are compared, then even if one instance of the target is scaled bigger or smaller, or rotated, a match will be found.
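The comparison decision above can be sketched in miniature. In a real AR SDK the features would be scale- and rotation-invariant descriptors (such as ORB or SIFT keypoint descriptors); here they are reduced to plain hashable tokens, and the 0.7 threshold is an arbitrary assumption, so this is only a shape of the logic, not an implementation.

```python
def match_score(frame_features, target_features):
    # Fraction of the target's descriptors also found in the camera frame.
    target = set(target_features)
    if not target:
        return 0.0
    return len(set(frame_features) & target) / len(target)


def find_matching_target(frame_features, target_store, threshold=0.7):
    # Compare the live frame against every known target; report the best
    # match only if it clears the (assumed) threshold.
    best_id, best_score = None, 0.0
    for target_id, feats in target_store.items():
        score = match_score(frame_features, feats)
        if score > best_score:
            best_id, best_score = target_id, score
    return best_id if best_score >= threshold else None
```

Because invariant descriptors survive scaling and rotation, the same set-overlap test matches a target even when its live instance appears bigger, smaller or rotated.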

[022] In the preferred embodiment of the disclosure, the updation module (112) of the system (100) is configured to update a feature store with the extracted plurality of features of the first set of targets, a content store with one or more augmentation content of the first set of targets for augmenting the second set of targets, and a metadata store of a web service. The web service is an arrangement designed to support interoperable machine-to-machine interaction over a network. The web service provides an object-oriented web-based interface to a database. Herein, the updation of the feature store, the content store and the metadata store is dynamic. The metadata store of the web service is updated with a mapping between the identity of one or more extracted features of the first set of targets, one or more augmentation content, and the location of the one or more extracted features. It is to be noted that each of the plurality of features of each of the second set of targets is obtained from a predefined location in the feature store.
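The three stores and their mapping can be pictured with a minimal sketch, using in-memory dictionaries as a stand-in for the web service's database; the store layout and the `register_target` helper are illustrative assumptions, not named anywhere in the disclosure.

```python
feature_store = {}   # target id -> extracted features
content_store = {}   # target id -> augmentation content reference
metadata_store = {}  # target id -> locations of its features and content


def register_target(target_id, features, content_ref, feature_url, content_url):
    # One dynamic update covering all three stores for a new or changed
    # target; the AR application itself is never modified.
    feature_store[target_id] = features
    content_store[target_id] = content_ref
    metadata_store[target_id] = {
        "features": feature_url,  # addressable location of the features
        "content": content_url,   # addressable location of the content
    }
```

Because every update goes through the stores rather than the deployed application, introducing a target is a data operation, not a re-deployment.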
[023] In the preferred embodiment of the disclosure, the communication module (114) of the system (100) is configured to communicate with the updated metadata store to retrieve a metadata of a subset of the first set of targets using a marker-based augmented reality (AR) application. The metadata includes mapping information, an addressable location of each of the first set of targets, and the extracted one or more features of the first set of targets. When the mapping information is retrieved from the metadata store, then based on a user's role, the mapping information with a subset of the second set of targets is returned. The second set of targets cannot be more than the first set of targets.
[024] In the preferred embodiment of the disclosure, the parsing module (116) of the system (100) is configured to parse the retrieved metadata to obtain the identity of at least one target and the corresponding augmentation content of the first set of targets. Herein, at least one feature of the received one or more features corresponding to the identified at least one target is obtained from a feature store specified in the metadata.

[025] In the preferred embodiment of the disclosure, the comparison module (118) of the system (100) is configured to compare each of the plurality of extracted features of the first set of targets with the features of each target of the second set of targets of the marker-based augmented reality application to find a match between at least one of the first set of targets and at least one target from the second set of targets.
[026] In the preferred embodiment of the disclosure, the display module (120) of the system (100) is configured to display the one or more augmentation content of the first set of targets mapped over the second set of targets in the form of augmentation. Herein, the one or more augmentation contents are displayed over the second set of targets when one of the first set of targets matches one target from the second set of targets. Further, the augmentation content is mapped to the matched target from the second set of targets.
[027] Referring to FIG. 2, a marker-based AR augmentation arrangement is illustrated, wherein an AR application is deployed on a smartphone. Other components of the arrangement include a tablet computer, a wearable AR smart glass, or other AR devices with a camera, CPU, GPU, display and internet connectivity, among other things. The AR application plays a central role in the functioning of the arrangement. Herein, the AR application does not contain a priori knowledge of the AR targets and the related content to be augmented. The arrangement makes use of one or more AR software development kits (SDKs) to process, map, detect, and track target features prepared for that particular AR SDK. The AR application makes a request to a remote web application using REST or another suitable web-based protocol. In response, the remote web application provides metadata about a second set of targets along with the related augmentation content in a predefined format like JSON. Further, the AR application retrieves features for each of the second set of targets from the respective predefined feature location. The location of the web application API URL end point needs to be known to the AR application, along with an access credential like an access key or user ID and password, which are required to securely access the remote API.
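A hypothetical instance of such a JSON response is shown below; every field name (`feature_location`, `content`, `type`, and so on) is an illustrative assumption, since the disclosure only requires some predefined exchange format.

```python
import json

# Assumed JSON exchange format for the metadata response.
response_body = """
{
  "targets": [
    {"id": "pump-7",
     "feature_location": "https://example.com/features/pump-7.dat",
     "content": {"type": "static",
                 "location": "https://example.com/overlays/pump-7.png"}},
    {"id": "valve-2",
     "feature_location": "https://example.com/features/valve-2.dat",
     "content": {"type": "dynamic",
                 "api": "https://example.com/telemetry/valve-2"}}
  ]
}
"""
metadata = json.loads(response_body)

# Static content can be fetched directly from its location; dynamic content
# needs a further call to the listed API end point before it is overlaid.
dynamic_targets = [t["id"] for t in metadata["targets"]
                   if t["content"]["type"] == "dynamic"]
```

The split between static and dynamic content mirrors paragraph [020]: a static overlay is a file at a known location, while a dynamic overlay is the parsed response of a further API call.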

[028] In the preferred embodiment of the disclosure, the comparison module (118) of the system (100) is configured to compare each of the plurality of extracted features of the first set of targets with each feature of the second set of targets of the marker-based augmented reality application. Each of the compared one or more features of the first set of targets is in a mutually comprehensible format with each of the second set of targets.
[029] In the preferred embodiment of the disclosure, the display module (120) of the system (100) is configured to display one or more augmentation content over the first set of targets whose features match the features of the second set of targets, and the augmentation content is determined based on the mapping information of the matched target in the second set of targets.
[030] Referring to FIG. 3, a flow chart illustrates a processor-implemented method (300) to augment a marker-based AR application without modifying the core of the AR application. The method comprises one or more steps as follows.
[031] Initially, at the step (302), capturing at least one media of each of a first set of targets using a multimedia component, wherein the multimedia component comprises a camera to capture an image or a video.
[032] In the preferred embodiment of the disclosure, at the next step (304), extracting a plurality of features of the first set of targets from the captured at least one media, and corresponding one or more augmentation content of each of the first set of targets, to augment a second set of targets in a marker-based AR application based on the extracted plurality of features. The capturing (302) and feature extraction (304) are usually performed only once for each target, and the extracted features for the targets are stored for future reference in the subsequent step (306). The second set of targets is a subset of the first set of targets. In one embodiment of the disclosure, the second set of targets can include all the instances of the first set of targets. In another embodiment of the disclosure, the second set of targets can be a subset of the first set of targets. Let us explain this with an example. At a particular instance

there are 100 targets in the first set of targets. Users in the Manager role have access to all the 100 targets, but users in the Worker role have access to only 50 targets. So, for a Manager, the second set of targets contains all the 100 targets, but for a Worker, the second set of targets contains 50 targets. Any set is a subset of itself, and thus the word subset includes any valid subset, including the whole of the set itself. The metadata extraction is related to each of the first set of targets, and the augmentation content includes static augmentation content and/or dynamic augmentation content.
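The Manager/Worker example above can be sketched as a role-based filter; the role names and the access table are taken from the example, while the function name and data layout are illustrative assumptions.

```python
first_set = ["target-%d" % i for i in range(100)]
role_access = {
    "Manager": set(first_set),      # access to all 100 targets
    "Worker": set(first_set[:50]),  # access to only 50 targets
}


def second_set_for(role):
    # The second set returned for a role is always a subset of the first
    # set; for the Manager it is the whole set, since a set is a subset
    # of itself.
    allowed = role_access.get(role, set())
    return [t for t in first_set if t in allowed]
```

Because the subsetting happens in the metadata service, different AR views per role need no change to the deployed AR application.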
[033] In the preferred embodiment of the disclosure, at the next step (306), updating a feature store with the extracted plurality of features of the first set of targets, a content store with one or more augmentation content of the first set of targets for augmenting the second set of targets, and a metadata store of a web service. The updation of the feature store, the content store and the metadata store is dynamic. The metadata store of the web service is updated with a mapping between the identity of one or more extracted features of the first set of targets, one or more augmentation content, and the location of the one or more extracted features. When a new target is introduced, or when existing features of an existing target need to be updated, only the steps 302, 304 and 306 need to be performed for that target.
[034] In the preferred embodiment of the disclosure, at the next step (308), communicating with the updated metadata store to retrieve a metadata of a subset of the first set of targets using a marker-based augmented reality (AR) application. The metadata includes mapping information, an addressable location of each of the first set of targets, and the extracted one or more features of the first set of targets. Herein, the marker-based AR application is provided with the features of the first set of targets and the location of the content. The features of the subset of the first set of targets are updated dynamically in the marker-based AR application.
[035] In the preferred embodiment of the disclosure, at the next step (310), parsing the retrieved metadata to obtain the identity of at least one target and the corresponding augmentation content of the second set of targets. The at least

one feature of the received one or more features corresponding to the identified at least one target is obtained from a feature store specified in the metadata.
[036] In one example, a user wants to see augmentation against a target of the marker-based AR application. A live target is captured, and the corresponding features are extracted. Herein, a target on which the live capture and feature extraction are performed is called a live target, to distinguish the target from the set of targets. The term live target is not a permanent attribute of a target. It would be appreciated that the extraction part is the same as the regular extraction, and it is performed against the target against which the user intends to see augmentation at that time instance. When the user wants to see augmentation against target 1, then target 1 is the live target for that user. When the user wants to see augmentation against target 2, then target 2 is the live target for that user. A target becomes the live target every time the user wants to see augmentation against it, and thus its features are captured and extracted for immediate use.
[037] In the preferred embodiment of the disclosure, at the next step (312), comparing each of the plurality of extracted features of the first set of targets with each feature of the second set of targets of the marker-based augmented reality application.
[038] In the preferred embodiment of the disclosure, at the last step (314), displaying the one or more augmentation content of the first set of targets mapped over the second set of targets in the form of augmentation.
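Steps (308) through (314) can be sketched end to end as one small routine; the function signature, the metadata layout and the toy stand-ins for the remote feature and content stores are all illustrative assumptions, and the simple superset test stands in for the SDK's real feature matching.

```python
def augment(live_features, metadata, fetch_features, fetch_content, display):
    # Parse the retrieved metadata (310), compare the live features with
    # each target's features (312), and display the mapped content on a
    # match (314).
    for target in metadata["targets"]:
        target_features = fetch_features(target["features"])
        if set(live_features) >= set(target_features):
            display(fetch_content(target["content"]))
            return target["id"]
    return None


# Toy stand-ins for the remote metadata, feature and content stores:
meta = {"targets": [{"id": "t1", "features": "f/t1", "content": "c/t1"}]}
features = {"f/t1": ["p", "q"]}
contents = {"c/t1": "overlay-1"}
shown = []
matched = augment(["p", "q", "r"], meta, features.get, contents.get, shown.append)
```

Passing the fetch and display operations in as callables mirrors the arrangement of FIG. 2: the AR application core stays fixed while the stores behind those callables change freely.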
[039] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.

[040] The embodiments of the present disclosure herein address the unresolved problem of using a marker-based augmented reality application to augment new targets which were unknown at the time of AR application development. Changes in targets are not reflected in real time in a conventional marker-based AR application. It is also not possible to dynamically support multiple AR views for different roles without changes to the marker-based AR application. The marker-based AR applications are usually deployed on devices like smartphones and smart glasses. So after a marker-based AR application is deployed on multitudes of devices, whenever new targets are introduced to the AR application or new or updated content is brought in for the existing targets, the marker-based AR applications on the many devices need to be re-deployed to show the new augmentation.
[041] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed, including any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be hardware means, like an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs.
[042] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not

limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[043] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[044] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or

stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[045] It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.

We Claim:
1. A processor-implemented method comprising:
capturing, via one or more hardware processors, at least one media of each of a first set of targets using a multimedia component, wherein the multimedia component comprises a camera to capture the at least one media;
extracting, via one or more hardware processors, a plurality of features of the first set of targets and corresponding one or more augmentation content of each of the first set of targets to augment a second set of targets in a marker-based AR application based on the plurality of features, wherein the second set of targets is a subset of the first set of targets;
updating, via the one or more hardware processors, a feature store with the extracted plurality of features of the first set of targets, a content store with one or more augmentation content of the first set of targets for augmenting the second set of targets, and a metadata store of a web service, wherein the updation of the feature store, the content store and the metadata store is dynamic;
communicating, via the one or more hardware processors, with the updated metadata store to retrieve a metadata of a subset of the first set of targets using a marker-based augmented reality (AR) application, wherein the metadata includes mapping information, an addressable location of each of the first set of targets, and the extracted one or more features of the first set of targets;
parsing, via the one or more hardware processors, the retrieved metadata to obtain an identity of at least one of the first set of targets and the corresponding augmentation content of the identified first set of targets, and at least one feature of the extracted plurality of features corresponding to the identified at least one of the first set of targets from the updated feature store specified in the metadata;
comparing, via the one or more hardware processors, the obtained at least one feature of the extracted plurality of features of the first set of targets with each feature of the second set of targets of the marker-based augmented reality application to find a match between the first set of targets and the second set of targets; and
displaying, via a display screen, the one or more augmentation content of the first set of targets which are mapped over the second set of targets in the form of augmentation.
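The steps of claim 1 can be illustrated with a minimal sketch. All function names, data shapes, and the equality-based matcher below are hypothetical illustrations, not the patented implementation; a real AR application would use a feature-descriptor matcher from an AR SDK.

```python
# Minimal sketch of the claimed method: compare features of the first set
# of targets against features of the second set (the targets known to the
# marker-based AR application), then map augmentation content over matches.
# Every name here is hypothetical; equality stands in for a real matcher.

def find_matches(first_set_features, second_set_features):
    """Find which targets of the first set match targets of the second set."""
    matches = {}
    for target_id, features in first_set_features.items():
        for app_target_id, app_features in second_set_features.items():
            if features == app_features:   # real code: descriptor distance
                matches[target_id] = app_target_id
    return matches

def augment(matches, content_store):
    """Return the augmentation content to display over each matched target."""
    return {app_id: content_store[t_id] for t_id, app_id in matches.items()}

first = {"t1": [0.1, 0.2], "t2": [0.3, 0.4]}   # extracted first-set features
second = {"m1": [0.3, 0.4]}                    # AR app knows a subset
content = {"t1": "video-a", "t2": "overlay-b"}
print(augment(find_matches(first, second), content))  # {'m1': 'overlay-b'}
```

Because the feature and content stores are queried at run time rather than compiled into the application, new targets can be matched without rebuilding the AR application itself.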
2. The method claimed in claim 1, wherein the metadata extraction is related to each of the second set of targets.
3. The method claimed in claim 1, wherein the augmentation content includes static augmentation content and dynamic augmentation content.
4. The method claimed in claim 1, wherein the web service provides an interoperable machine-to-machine interaction over a network.
5. The method claimed in claim 1, wherein the metadata store of the web service is updated with a mapping between the identity of one or more extracted features of the first set of targets, one or more augmentation content, and the location of the one or more extracted features.
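Claim 5's mapping can be pictured as a record in the metadata store. The field names and URLs below are purely illustrative assumptions about one possible shape of such a record; the patent does not prescribe a schema.

```python
# Hypothetical shape of one metadata-store record (claim 5): a mapping
# between the identity of a target's extracted features, the related
# augmentation content, and the addressable locations of both.
record = {
    "target_id": "poster-42",
    "feature_location": "https://store.example.com/features/poster-42.dat",
    "augmentation": {
        "kind": "dynamic",   # static or dynamic content (claim 3)
        "content_location": "https://store.example.com/content/poster-42.mp4",
    },
}

def parse_metadata(rec):
    """Obtain the target identity and the locations named in the metadata."""
    return (rec["target_id"],
            rec["feature_location"],
            rec["augmentation"]["content_location"])
```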
6. The method claimed in claim 1, wherein the marker-based AR application is updated with the features of the first set of targets and the location of content, and wherein the features of the subset of the first set of targets are updated dynamically in the marker-based AR application.

7. A system comprising:
at least one user interface;
at least one memory storing a plurality of instructions;
one or more hardware processors communicatively coupled with the at least one memory, wherein the one or more hardware processors are configured to execute one or more modules;
a capturing module configured to capture at least one media of each of a first set of targets using a multimedia component, wherein the multimedia component comprises at least one camera to capture at least one media;
an extraction module configured to extract a plurality of features of the first set of targets from the captured at least one media and corresponding one or more augmentation content of each of the first set of targets to augment a second set of targets in a marker-based augmented reality (AR) application based on the plurality of features, wherein the second set of targets is a subset of the first set of targets;
an updation module configured to update a feature store with the extracted plurality of features of the first set of targets, a content store with one or more augmentation content of the first set of targets for augmenting the second set of targets, and a metadata store of a web service, wherein the updation of the feature store, the content store and the metadata store is dynamic;
a communication module configured to communicate with the updated metadata store to retrieve a metadata of a subset of the first set of targets using a marker-based AR application, wherein the metadata includes mapping information, an addressable location of each of the first set of targets, and the extracted one or more features of the first set of targets;
a parsing module configured to parse the retrieved metadata to obtain an identity of at least one target and the corresponding augmentation content of the first set of targets, and at least one feature of the received one or more features corresponding to the identified at least one target from the feature store specified in the metadata;
a comparison module configured to compare each of the plurality of extracted features of the first set of targets with each feature of the second set of targets of the marker-based augmented reality application to find a match between at least one of the first set of targets and at least one target from the second set of targets; and
a display module configured to display via a display screen the one or more augmentation content of the first set of targets which are mapped over the second set of targets in a form of augmentation, wherein the augmentation content is mapped to the matched target from the second set of targets.
8. The system claimed in claim 7, wherein the metadata extraction is related to each of the second set of targets.
9. The system claimed in claim 7, wherein the augmentation content includes static augmentation content and dynamic augmentation content.
10. The system claimed in claim 7, wherein the web service provides an interoperable machine-to-machine interaction over a network.
11. The system claimed in claim 7, wherein the metadata store of the web service is updated with a mapping between the identity of one or more extracted features of the first set of targets, one or more augmentation content, and the location of the one or more extracted features.
12. The system claimed in claim 7, wherein the marker-based AR application is updated with the features of the first set of targets and the location of content, and wherein the features of the subset of the first set of targets are updated dynamically in the marker-based AR application.
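The system of claims 7 through 12 decomposes into a pipeline of modules (capture, extract, update, communicate, parse, compare, display). The sketch below is a hypothetical composition of those stages under assumed data shapes; the class name, field names, and the equality-based matcher are illustrations, not the patented design.

```python
# Hypothetical composition of the claimed modules: the stores are updated
# dynamically outside the AR application, and one pass of the pipeline
# retrieves metadata, resolves features and content, matches targets, and
# returns what the display module should overlay.
class ARPipeline:
    def __init__(self, feature_store, content_store, metadata_store):
        self.feature_store = feature_store    # updated dynamically (claim 7)
        self.content_store = content_store
        self.metadata_store = metadata_store

    def run(self, app_targets):
        """Return augmentation content mapped over matched second-set targets."""
        displayed = {}
        for target_id, meta in self.metadata_store.items():
            features = self.feature_store[meta["feature_id"]]
            for app_id, app_features in app_targets.items():
                if features == app_features:   # stand-in feature matcher
                    displayed[app_id] = self.content_store[meta["content_id"]]
        return displayed

pipeline = ARPipeline(
    feature_store={"f1": [0.3, 0.4]},
    content_store={"c1": "overlay"},
    metadata_store={"t1": {"feature_id": "f1", "content_id": "c1"}},
)
print(pipeline.run({"m1": [0.3, 0.4]}))   # {'m1': 'overlay'}
```

Because the three stores are constructor arguments rather than compiled-in data, new targets and content can be introduced on an ongoing basis without modifying the pipeline code, which is the core point of the disclosure.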

Documents

Orders


Application Documents

# Name Date
1 202021010922-IntimationOfGrant06-03-2024.pdf 2024-03-06
2 202021010922-PatentCertificate06-03-2024.pdf 2024-03-06
3 202021010922-FER_SER_REPLY [21-01-2022(online)].pdf 2022-01-21
4 202021010922-CLAIMS [21-01-2022(online)].pdf 2022-01-21
5 202021010922-OTHERS [21-01-2022(online)].pdf 2022-01-21
6 202021010922-FER.pdf 2021-11-01
7 202021010922-FORM-26 [16-10-2020(online)].pdf 2020-10-16
8 202021010922-Proof of Right [14-09-2020(online)].pdf 2020-09-14
9 Abstract1.jpg 2020-03-20
10 202021010922-STATEMENT OF UNDERTAKING (FORM 3) [13-03-2020(online)].pdf 2020-03-13
11 202021010922-REQUEST FOR EXAMINATION (FORM-18) [13-03-2020(online)].pdf 2020-03-13
12 202021010922-FORM 18 [13-03-2020(online)].pdf 2020-03-13
13 202021010922-FORM 1 [13-03-2020(online)].pdf 2020-03-13
14 202021010922-FIGURE OF ABSTRACT [13-03-2020(online)].jpg 2020-03-13
15 202021010922-DRAWINGS [13-03-2020(online)].pdf 2020-03-13
16 202021010922-DECLARATION OF INVENTORSHIP (FORM 5) [13-03-2020(online)].pdf 2020-03-13
17 202021010922-COMPLETE SPECIFICATION [13-03-2020(online)].pdf 2020-03-13

Search Strategy

1 searchdtrategyE_22-10-2021.pdf

ERegister / Renewals

3rd: 14 Mar 2024

From 13/03/2022 - To 13/03/2023

4th: 14 Mar 2024

From 13/03/2023 - To 13/03/2024

5th: 14 Mar 2024

From 13/03/2024 - To 13/03/2025

6th: 10 Feb 2025

From 13/03/2025 - To 13/03/2026