
System And Method Of Decoding Encoded Information Based On Colour Code

Abstract: A system and method for decoding an encoded information based on a colour code. The method encompasses receiving media content/s comprising the colour code. Thereafter, the method leads to detecting circle/s associated with the colour code, present in the at least one media content. The method further detects one or more corners within each detected circle based on a pixel intensity variation. Thereafter, one or more false circles are removed from the detected circle/s. The method then leads to determining a set of Hue values for each remaining circle comprising four corners. Further, the method encompasses matching in a sequence the determined set of Hue values of each remaining circle with one or more pre-defined sets of Hue values stored at a storage unit. The method thereafter comprises decoding the encoded information based on an extraction of an information from the storage unit based on a successful matching. FIGURE 1

Patent Information

Application #:
Filing Date: 28 February 2020
Publication Number: 36/2021
Publication Type: INA
Invention Field: TEXTILE
Status:
Email: patent@saikrishnaassociates.com
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2025-03-13
Renewal Date:

Applicants

RELIANCE JIO INFOCOMM LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad-380006, Gujarat, India

Inventors

1. GAURAV DUGGAL
Flat 305, Block 18, Rain Tree Park, Kukatpally, Hyderabad 500072, Telangana, India
2. BHUPENDRA SINHA
Flat B 504, Sahil heights, Pimple Nilakh, Pune 411027, Maharashtra, India
3. PARAS AHUJA
A2, Sumukhi Orbit, Madhapur, Hyderabad 500081, Telangana, India
4. ANIL KUMAR GOYAL
137, Primrose Towers, L&T Serene county, Telecom Nagar, Gacchibowli, Hyderabad 500032, Telangana, India
5. NIKHIL KUMAR SINGH
W4, Westend Villas, Near Reliance Paradise, Masjid Banda, Kondapur, Hyderabad 500084, Telangana, India
6. SAMEER MEHTA
Flat E-2103, Mahindra Splendor, LBS Marg, Bhandup West, Mumbai 400078, Maharashtra, India

Specification

FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
AND
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“SYSTEM AND METHOD OF DECODING ENCODED INFORMATION BASED ON COLOUR CODE”
We, Reliance Jio Infocomm Limited, an Indian National, of 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad-380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.

TECHNICAL FIELD:
The present invention generally relates to the field of image/video fingerprinting to retrieve information from transformed version/s of a query, and more particularly, to a system and method of decoding an encoded information based on a colour code.
BACKGROUND OF THE DISCLOSURE:
The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
With the advancement in digital technologies, image processing techniques have also evolved to a great extent. An ‘image’ generally refers to a picture or code that has been created or copied and stored in electronic form, which can be described in terms of vector graphics or raster graphics. An image stored in a raster form is sometimes called a bitmap, while an image map is a file containing information that associates different locations on a specified image with hypertext links. Further, an ‘image’ in the field of image processing is described as consisting of an array, or a matrix, of square pixels (picture elements) arranged in columns and rows. In particular, an image in an image processing space is defined as a two-dimensional function F(x, y), where x and y are spatial coordinates, and the amplitude of F at any pair of coordinates (x, y) is called the intensity of the image at that point. When the x, y, and amplitude values of F are finite, it is called a digital image.
In other words, an image can be defined by a two-dimensional array specifically arranged in rows and columns. A digital image is composed of a finite number of elements, each of which has a particular value at a particular location. These elements are referred to as picture elements, image elements, or pixels, with ‘pixel’ being the term most widely used to denote the elements of a digital image.
Further, with respect to Colour Codes (RGB/CMYK/HSV Colour Space) in image processing, there are, in general, two main colour spaces, which are RGB and CMYK. The RGB colour model relates very closely to the way human eyes perceive colour with the r, g and b receptors in the retinas. RGB uses additive colour mixing and is the basic colour model used in television or any other medium that projects colour with light. It is the basic colour model used in computers and for web graphics, but it cannot be used for print production. The secondary colours of RGB – cyan, magenta, and yellow – are formed by mixing two of the primary colours (red, green or blue) and excluding the third colour: red and green combine to make yellow, green and blue make cyan, and blue and red form magenta. The combination of red, green, and blue (RGB) in full intensity makes white. HSV (hue, saturation, value), the colour space used in the colour picker of graphics software, is closer to how humans perceive colour than RGB and CMYK, which use primary colours. It has three components: hue, saturation, and value (HSV). This colour space describes colours (hue or tint) in terms of their shade (saturation or amount of gray) and their brightness value. Hue is the colour portion of the model, expressed as a number from 0 to 360 degrees (see the sketch after the following list):
• Red falls between 0 and 60 degrees.
• Yellow falls between 61 and 120 degrees.
• Green falls between 121 and 180 degrees.
• Cyan falls between 181 and 240 degrees.
• Blue falls between 241 and 300 degrees.
• Magenta falls between 301 and 360 degrees.
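For illustration only, the following minimal Python sketch maps a hue angle to the colour bands listed above; the function name and the boundary handling are our own assumptions, not part of this disclosure.

```python
# A minimal sketch mapping a hue angle (0-360 degrees) to the colour
# bands listed above; band boundaries follow this document's ranges.
def hue_to_colour(hue_deg: float) -> str:
    bands = [(60, "red"), (120, "yellow"), (180, "green"),
             (240, "cyan"), (300, "blue"), (360, "magenta")]
    hue_deg = hue_deg % 360
    for upper_bound, name in bands:
        if hue_deg <= upper_bound:
            return name
    return "red"  # not reached after the modulo; kept for completeness

print(hue_to_colour(45))    # -> red
print(hue_to_colour(200))   # -> cyan
```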
The 4-colour CMYK model used in printing lays down overlapping layers of varying percentages of transparent cyan (C), magenta (M) and yellow (Y) inks, to which a layer of black (K) ink can be added. The CMYK model uses the subtractive colour model, wherein the colours created by the subtractive model of CMYK do not look exactly like the colours created in the additive model of RGB; most importantly, CMYK cannot reproduce the brightness of RGB colours in images.
Further, in image analysis the ‘Hough transform’ technique can be used to isolate features of a particular shape within an image. Since it requires that the desired features be specified in some parametric form, the classical Hough transform is most commonly used for the detection of regular curves such as lines, circles, ellipses, etc. A generalized Hough transform can be employed in applications where a simple analytic description of a feature is not possible.
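As an illustrative sketch, circle detection with the classical Hough transform can be performed via OpenCV's cv2.HoughCircles; the file name and all parameter values below are assumptions chosen for the example, not values prescribed by this specification.

```python
import cv2
import numpy as np

# Sketch: detect circles with the circle Hough transform via OpenCV.
image = cv2.imread("poster.jpg")                  # illustrative file name
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)                    # suppress noise before voting

circles = cv2.HoughCircles(
    gray,
    cv2.HOUGH_GRADIENT,
    dp=1.5,        # accumulator resolution relative to the image
    minDist=40,    # minimum distance between detected centres
    param1=100,    # upper Canny threshold used internally
    param2=60,     # accumulator (vote) threshold
    minRadius=10,
    maxRadius=200,
)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"circle at ({x}, {y}) with radius {r}")
```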
Furthermore, ‘Video Processing’ techniques generally cover most of the image processing methods, but also include methods where the temporal nature of video data is exploited. The goal of image analysis is to analyse the image to first find objects of interest and then extract some parameters of these objects, for example, an object's position, size, colour, or an embedded advertisement link in the image. Further, there are three stages in image processing, which are as follows:
• Image acquisition: This stage of image processing is directly related to the camera and the setup of the system, e.g., camera type, camera settings, optics, and light sources. At the pre-processing stage, the image is analysed before the actual processing commences, e.g., converted from colour to grey-scale or cropped to the most interesting part of the image.
• Image segmentation: In this stage of image processing, the relevant information of interest is extracted from the image or video data, which is often the “heart” or “crux” of the system, for example, an advertisement link embedded into the image.
• Image classification: In this stage, the classification block examines the information produced by the previous stage in a grid and classifies each object as being an object of interest or not, to finally select the intended image and fulfil the objective of image identification and/or advertisement extraction.
Further, an ‘image fingerprinting system’ refers to a system designed to retrieve transformed versions of a query image from a large database for a number of use case scenarios. In this technique, a hash of each image is stored in a grid. It is a unique and reduced code that represents the visual content of the image, which allows the user to recognize similar images and the content embedded into them. Similar images have similar image fingerprints: if the image fingerprints of two images match completely, they are the same image and contain the same embedded code. If the image fingerprints differ just a bit, the images also differ just a bit. If the image fingerprints differ a lot, the images are different. The more the image fingerprints differ, the more their visual content differs and, subsequently, the more their embedded code differs.
Also, a ‘multimedia fingerprinting system’ refers to the ability to generate associated identifiable data, referred to as a fingerprint, from multimedia image, audio, and video content. A ‘multimedia fingerprinting and search system’ is used where the fingerprint for a piece of multimedia content is not limited to image, audio, and video but is composed of a number of compact signatures, including traversal hash signatures and associated metadata. The compact signatures and traversal hash signatures are constructed to be easily searchable when scaling to a large database of multimedia fingerprints. The multimedia content is also represented by many signatures that relate to various aspects of the multimedia content that are relatively independent of each other. Furthermore, an image/video/audio fingerprint ideally has several properties, which are as follows:
• The fingerprint should be much smaller than the original data.
• The fingerprint should be designed in such a way, that it can be searched for in a large database of fingerprints.
• The original multimedia content should not be able to be reconstructed from the fingerprint.
• Finally, for multimedia content that is a distorted version of another multimedia content, the fingerprints of the original and distorted versions should be similar.
Also, the term ‘code/hash’, also referred to as ‘dHash’, contains logs/values related to a particular content which help in the identification of the particular content embedded into it. The process of embedding these codes/hashes in advertisements generally refers to image hashing, wherein the contents of an image are examined and a value is constructed that uniquely identifies the image based on the embedded contents.
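By way of a hedged illustration, a minimal difference-hash (dHash) sketch of the kind referred to above might look as follows; the hash size of 8 and the use of OpenCV are assumptions for the example, not part of this disclosure.

```python
import cv2

# Minimal dHash (difference hash) sketch, assuming an OpenCV BGR image:
# the image is shrunk to (hash_size+1) x hash_size greyscale pixels and
# each bit records whether a pixel is brighter than its left neighbour.
def dhash(image_bgr, hash_size=8):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (hash_size + 1, hash_size))
    bits = 0
    for row in small:
        for left, right in zip(row[:-1], row[1:]):
            bits = (bits << 1) | int(left < right)
    return bits  # a 64-bit integer for the default hash_size of 8

# Similar images yield hashes with a small Hamming distance.
def hamming_distance(h1, h2):
    return bin(h1 ^ h2).count("1")
```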
The media applications, which include image, video, and audio database management, database browsing, and image identification, are undergoing explosive growth and are expected to continue to grow. However, there is currently no solution for efficiently and distinctly capturing from a media (image/video) a content along with an embedded code comprising distinct colour codes or colour hues, visible from a distance, owing to the challenge of increased invariance of pixel intensity with distance, and for subsequently synchronizing the content, such as an advertisement, in a user device for an interactive user engagement. Therefore, there is a need for a comprehensive solution related to the problem of capturing content from a distance owing to the increased invariance of pixel intensity with distance and synchronizing the content, such as an advertisement, in the user device for interactive user engagement.
Also, unlike multimedia watermarking and other image/video processing techniques, image fingerprinting does not change the content where it is embedded. Capturing the fingerprint from a distance is, however, a very challenging problem. Increasing demand for such image fingerprinting solutions, which include standard resolution (SR) and high resolution (HR) formats of images, requires increasing sophistication, flexibility, and performance in the supporting systems and existing hardware. The sophistication, flexibility, and performance that are desired exceed the capabilities of current generations of solutions, in many cases by an order of magnitude. A few existing limitations/challenges are as follows:
• Due to the absence of image processing techniques for efficiently and effectively capturing colour codes, there is a requirement to provide a solution to clearly identify colour codes with distinct hues or colours from a distance.
• Various techniques exist today which capture the information embedded in a grid using pixel intensity and colour histograms, but these have limitations with respect to images captured from a distance and the decoding of the codes.
• The existing QR code technique, a type of barcode that contains a matrix of dots that can be scanned using a QR scanner or a smartphone with a built-in camera, has the limitation of capturing images at a small distance only. Existing methods like barcode scanning do not work for long-range images, because images pixelate at large distances and pixel intensity changes with distance.
• When images are captured from a distance using a normal user device camera, owing to the increased invariance of pixel intensity with increasing distance, the captured image is not of sufficient quality and hence cannot be used for extracting the distinct content features embedded inside it. Known processes like the FAST, SIFT and Congas feature extraction methods are extensively used, but are computationally very expensive. Also, the images may get pixelated, which causes the error rate to increase with increasing distance.
There are various inherent limitations and existing drawbacks in the available solutions, which include multimedia distortions due to the large size of multimedia databases and the density of media files. Simultaneously, there exists a necessity for high-performance, accurate multimedia identification and search technologies in the current scenario. Furthermore, the current technology for robust image/video content identification, management, and copyright protection should be resistant to intentional or unintentional image/video content change or distortion within the limits of parameters such as reasonable viewability from a distance.
Hence, there is a need for a novel system and method for efficiently and distinctly capturing content/s along with the embedded distinct colour code/s or encoded colour hues, visible from a distance owing to the challenge of increased invariance of pixel intensity with distance, to decode an information based on the encoded colour code/s and to subsequently synchronize the decoded content/information, such as an advertisement, in a user device for an interactive user engagement.
SUMMARY OF THE DISCLOSURE
This section is provided to introduce certain objects and aspects of the present invention in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
In order to overcome at least some of the drawbacks mentioned in the previous section and those otherwise known to persons skilled in the art, an object of the present invention is to provide a system for enhancing the user experience of the conventional image fingerprinting system by providing custom-designed coloured circles for better recognition from a distance. Also, an object of the present invention is to provide a system and method for determining a decoded unique identifier based upon the logo shape and an encoded logo colour code which is composed of hue values of grids surrounding corner points within bounding boxes. Another object of the present invention is to provide recognizable colour-coded identifiers within advertisement/media contents viewable from a far distance. An object of the present invention is to provide a novel system and method for decoding codes/hashes embedded in an advertisement (image/video advertisement) using image hashing/image processing. Also, an object of the present invention is to provide a signature to be generated from an image using the colour components of the HSV colour space. Another object of the present invention is to provide an image recognition system that caters to users by displaying an advertisement on a user device in real time based on user-clicked images/videos of the advertisement from the user device. One other object of the present invention is to provide a system and method for decoding appropriate hue values corresponding to the encoded HSV colour space present inside the grid within a circle in an advertisement. Another object of the present invention is to make the overall design scalable enough to efficiently handle very large databases of images and an arbitrary length of a query sequence, for speedy processing of multimedia contents for identification and decoding purposes on a user device. Another object of the present invention is to provide a system and method to facilitate the processing and identification of embedded code/s in an image. Yet another object of the present disclosure is to retrieve transformed versions of a query image from a large database to give a user an interactive advertisement experience.
Furthermore, in order to achieve the aforementioned objectives, the present invention provides a method and system for decoding an encoded information based on colour code. A first aspect of the present invention relates to a method for decoding an encoded information based on a colour code. The method encompasses receiving, at a processing unit, at least one media content comprising the colour code. Thereafter, the method leads to detecting, by a detection unit, one or more circles present in the at least one media content, based on an analysis of the at least one media content, wherein the one or more circles are associated with the colour code. The method further comprises detecting, by the detection unit, one or more corners within each circle of the one or more circles based on a pixel intensity variation. Further, the method encompasses removing, by the processing unit, one or more false circles from the one or more circles based on an analysis of the one or more corners. The method then leads to determining, by the processing unit, a set of Hue values for each remaining circle, wherein each remaining circle comprises four corners. Thereafter, the method comprises matching in a sequence, by the processing unit, the determined set of Hue values of each remaining circle of the one or more circles with one or more pre-defined set of Hue values stored at a storage unit. The method further encompasses decoding, by the processing unit, the encoded information based on an extraction of an information corresponding to the encoded information from the storage unit based on a successful matching.
Another aspect of the present invention relates to a system for decoding an encoded information based on a colour code. The system comprises a processing unit, configured to receive, at least one media content comprising the colour code. The system further comprises a detection unit, configured to detect one or more circles present in the at least one media content, based on an analysis of the at least one media content, wherein the one or more circles are associated with the colour code. The detection unit is also configured to detect one or more corners within each circle of the one or more circles based on a pixel intensity variation. Further, the processing unit is configured to remove one or more false circles from the one or more circles based on an analysis of the one or more corners. Also, the processing unit is further configured to determine a set of Hue values for each remaining circle, wherein each remaining circle comprises four corners. The processing unit is thereafter configured to match in a sequence, the determined set of Hue values of each remaining circle of the one or more circles with one or more pre-defined set of Hue values stored at a storage unit. Further the processing unit is configured to decode the encoded information based on an extraction of an information corresponding to the encoded information from the storage unit based on a successful matching.
BRIEF DESCRIPTION OF DRAWINGS

The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
Figure 1 illustrates an exemplary block diagram of a system [100] for decoding an encoded information based on a colour code, in accordance with exemplary embodiments of the present invention.
Figure 2 illustrates an exemplary diagram of a circle present in at least one media content, wherein the circle comprises an embedded colour code, in accordance with exemplary embodiments of the present invention.
Figure 3 illustrates an exemplary method flow diagram [300], depicting a method for decoding an encoded information based on a colour code, in accordance with exemplary embodiments of the present invention.
Figure 3(a) illustrates an exemplary use case, in accordance with exemplary embodiments of the present invention.
Figure 4 illustrates an exemplary flow diagram, depicting an instance implementation of an exemplary process of decoding an encoded information based on a colour code, in accordance with exemplary embodiments of the present invention.
Figure 5 illustrates an exemplary flow diagram, depicting an instance implementation of an exemplary process of decoding an encoded information based on a colour code, in accordance with exemplary embodiments of the present invention.
The foregoing shall be more apparent from the following more detailed description of the disclosure.

DESCRIPTION OF THE INVENTION
In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the invention as set forth.
Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a sequence diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
Furthermore, embodiments may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks (e.g., a computer-program product) may be stored in a machine-readable medium. A processor(s) may perform the necessary tasks.
The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
As utilized herein, terms “component,” “system,” “platform,” “node,” “layer,” “selector,” “interface,” and the like are intended to refer to a computer-related entity, hardware, software (e.g., in execution), and/or firmware. For example, a component can be a process running on a processor, a processor, an object, an executable, a program, a storage device, and/or a computer. By way of illustration, an application running on a server and the server can be a component. One or more components can reside within a processor and a component can be localized on one computer and/or distributed between two or more computers.
Further, these components can execute from various computer-readable media having various data structures stored thereon. The components may communicate via local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems via the signal). As another example, a component can be an apparatus with specific functionality provided by mechanical parts operated by electric or electronic circuitry which is operated by a software application or a firmware application executed by a processor, wherein the processor can be internal or external to the apparatus and executes at least a part of the software or firmware application. As yet another example, a component can be any apparatus that provides specific functionality through electronic components without mechanical parts, the electronic components can include a processor therein to execute software or firmware that confers at least in part the functionality of the electronic components.
In addition, the disclosed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term “article of manufacture” as used herein is intended to encompass a computer program accessible from any computer-readable device, computer-readable carrier, or computer-readable media. For example, computer-readable media can include, but are not limited to, magnetic storage devices (e.g., hard disk, floppy disk, magnetic strip(s)), optical disks (e.g., compact disk (CD), digital video disc (DVD), Blu-ray Disc™ (BD)), smart card(s), and flash memory device(s) (e.g., card, stick, key drive, etc.).
Moreover, terms like “user equipment” (UE), “electronic device”, “mobile station”, “user device”, “mobile subscriber station”, “access terminal”, “terminal”, “smartphone”, “smart computing device”, “handset”, and similar terminology refer to any electrical, electronic, electro-mechanical equipment or a combination of one or more of the above devices. Smart computing devices may include, but are not limited to, a mobile phone, smart phone, virtual reality (VR) devices, augmented reality (AR) devices, pager, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device as may be obvious to a person skilled in the art. In general, a smart computing device is a digital, user-configured, computer-networked device that can operate autonomously. A smart computing device is one of the appropriate systems for storing data and other private/sensitive information. The said device operates at all seven levels of the ISO reference model, but the primary function is related to the application layer along with the network, session and presentation layers, with additional features such as a touch screen, apps ecosystem, physical and biometric security, etc. Further, a ‘smartphone’ is one type of “smart computing device” that refers to a mobility wireless cellular connectivity device that allows end-users to use services on 2G, 3G, 4G and similar mobile broadband Internet connections with an advanced mobile operating system which combines features of a personal computer operating system with other features useful for mobile or handheld use. These smartphones can access the Internet, have a touchscreen user interface, can run third-party apps including the capability of hosting online applications and music players, and are camera phones possessing high-speed mobile broadband 4G LTE internet with video calling, hotspot functionality, motion sensors, mobile payment mechanisms and enhanced security features with alarm and alert in emergencies. Mobility devices may include smartphones, wearable devices, smart-watches, smart bands, wearable augmented devices, etc. For the sake of specificity, we will use the term mobility device to refer to both feature phones and smartphones in this disclosure, but this will not limit the scope of the disclosure, which may extend to any mobility device in implementing the technical solutions. The above smart devices, including smartphones as well as feature phones and IoT devices, enable communication on the devices. Furthermore, the foregoing terms are utilized interchangeably in the subject specification and related drawings.
Furthermore, the terms “user,” “subscriber,” “customer,” “consumer,” “agent,” “owner,” and the like are employed interchangeably throughout the subject specification and related drawings, unless context warrants particular distinction(s) among the terms. It should be appreciated that such terms can refer to human entities or automated components supported through artificial intelligence, e.g., a capacity to make inferences based on complex mathematical formulations, that can provide simulated vision, sound recognition, decision making, etc. In addition, the terms “wireless network” and “network” are used interchangeably in the subject application, unless context warrants particular distinction(s) among the terms.
As used herein, a “processor” or “processing unit” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, a low-end microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
As used herein, “memory unit”, “storage unit” and/or “memory” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media.
As used herein, a “media content” may include, but is not limited to, any kind of photo/image/video or any data format that contains an image, wherein a fingerprint/code can be embedded in that image for further processing to recognize one or more embedded identifiers to provide one or more interactive services.
The present invention, in order to decode an encoded information based on a colour code, encompasses scanning/capturing from a distance, via a user device/camera, a media (image/video) associated with the encoded information (such as an advertisement). Further, one or more encoded contents/codes embedded in the captured media, in the form of grids inside a circle with distinct hue values or colours around one or more corners, are decoded. Further, once the one or more contents/codes embedded in the media are decoded, the decoded information is extracted based on a successful matching of the decoded code/s or content/s with one or more pre-stored codes. In an example, the information may comprise an advertisement-related promotional content; such promotional content may include, but is not limited to, gift coupons, recharge vouchers, and detailed information on particular products, wherein the media may be on any medium, not limited to print media, electronic display screens or billboards. More specifically, a media such as street-level images comprising fingerprints/codes having encoded features, compact signatures, traversal hash signatures, and/or associated metadata, is captured by image capturing devices. Further, one or more interest regions are identified from such media and the fingerprints/colour codes corresponding to such one or more interest regions are matched to one or more pre-stored fingerprints/colour codes. Further, one or more information is decoded based on a successful fingerprint/colour code match, wherein the one or more information may differ for different use cases, such as information related to identification of copyright infringement, promotional contents, etc.
Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily carry out the present disclosure.

Referring to FIG. 1, an exemplary block diagram of a system [100] for decoding an encoded information based on a colour code, in accordance with exemplary embodiments of the present invention, is shown.
As shown in Figure 1, the system encompasses at least one processing unit [102] and at least one detection unit [104]. The system may also comprise other units such as at least one storage unit and/or at least one transceiver unit (not shown in Figure 1). In an implementation, the system [100] may reside in a server device connected to a user device. In another implementation, the system [100] may reside in the user device. In yet another implementation, the system [100] may reside partially in the user device and partially in the server device. Also, all of the components/units of the system [100] are assumed to be connected to each other unless otherwise indicated below. Also, only a few units are shown in Fig. 1; however, the system [100] may comprise multiple such units, or any number of such units, as obvious to a person skilled in the art or as required to implement the features of the present disclosure.
The system [100], is configured to decode an encoded information based on a colour code, with the help of the interconnection between the components/ units of the system [100].
The processing unit [102] is configured to receive at least one media content comprising the colour code. The colour code is embedded in each media content of the at least one media content in the form of one or more grids inside a circle comprising distinct hue values along vertices. Furthermore, Figure 2 illustrates an exemplary diagram of a circle present in at least one media content, wherein the circle comprises an embedded colour code, in accordance with exemplary embodiments of the present invention. Figure 2 indicates various grids inside the circle comprising distinct hue values along vertices. More specifically, the values 1 to 16 as indicated in Figure 2 are the 16 different hue values along the vertices, and each hue value from the 16 hue values is associated with a grid having a particular colour.
Further, the processing unit [102] is configured to convert the at least one media content into grayscale format for further processing. In an implementation where the system [100] is present in the server device connected to the user device, once an image or video (i.e. at least one media content) is captured by a camera unit of the user device, the captured image/video is converted to a base64 string. Further, this base64 string is sent to the system [100] present at the server device. The system [100] further decodes the base64 string to retrieve the image/video content. Subsequently, the image/video content is sent to the processing unit [102]. The processing unit [102] is further configured to convert the image/video content into grayscale format. The processing unit [102] is connected to the detection unit [104], and in an implementation the processing unit [102] is further configured to transmit to the detection unit [104] the at least one media content in grayscale format.
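A minimal sketch of this capture-and-decode round trip is shown below, assuming a local JPEG file stands in for the camera capture; the file name is illustrative.

```python
import base64

import cv2
import numpy as np

# User device side: encode the captured image as a base64 string.
with open("captured.jpg", "rb") as f:                 # illustrative file name
    b64_string = base64.b64encode(f.read()).decode("ascii")

# Server side: decode the base64 string back into an image and
# convert it to grayscale for the detection steps that follow.
raw_bytes = base64.b64decode(b64_string)
image = cv2.imdecode(np.frombuffer(raw_bytes, np.uint8), cv2.IMREAD_COLOR)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
```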
The detection unit [104] is thereafter configured to detect one or more circles present in the at least one media content, based on an analysis of the at least one media content, wherein the one or more circles are associated with the colour code. Also, each of the one or more circles is associated with a varying radius for every pixel. The detection unit [104] is further configured to detect the one or more circles in the at least one media content based on a voting mechanism based on a 3D Hough Transform Matrix. Furthermore, in an implementation, for detecting the one or more circles in the at least one media content, the colour space of the at least one media content is converted to Hough space and the one or more circles are further detected based on the Hough space’s voting principle. For example, if there is a particular shape such as the one or more circles in the Hough space, all the points that lie on that shape (i.e. in the current scenario, points lying on the one or more circles) satisfy that particular shape’s equation, i.e. in the current scenario the circle’s equation: (x − h)² + (y − k)² = r², where the centre point is denoted by (h, k), r is the radius of the circle, and x and y are values corresponding to the X and Y axes respectively. Hence, the number of points satisfying a particular shape directly denotes the number of votes for that shape in the corresponding Hough space. So, for a media containing the one or more circles, the Hough space comprises votes for the one or more circles, wherein the one or more circles can be filtered out using some threshold on the number of votes. Furthermore, in an implementation the Hough transform is used for circle detection, for different combinations of accumulator matrix threshold, radius and accumulator resolution.
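The voting principle itself can be illustrated with a toy accumulator, as sketched below; the image size, radius range and stand-in edge pixels are assumptions for the example and not part of this specification.

```python
import numpy as np

# Toy sketch of 3D Hough voting: every edge pixel votes for all (h, k, r)
# combinations whose circle passes through it; the accumulator cell with
# the most votes (above a threshold, in practice) names the best circle.
height, width = 240, 320
radii = list(range(10, 40))
accumulator = np.zeros((height, width, len(radii)), dtype=np.uint32)

edge_points = [(120, 160), (90, 160), (120, 190)]  # stand-in (y, x) pixels
thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
for y, x in edge_points:
    for r_index, r in enumerate(radii):
        h = np.rint(x - r * np.cos(thetas)).astype(int)  # candidate centres
        k = np.rint(y - r * np.sin(thetas)).astype(int)
        valid = (h >= 0) & (h < width) & (k >= 0) & (k < height)
        # np.add.at handles repeated indices correctly when voting
        np.add.at(accumulator, (k[valid], h[valid], r_index), 1)

k_best, h_best, r_best = np.unravel_index(accumulator.argmax(),
                                          accumulator.shape)
print(f"best circle: centre ({h_best}, {k_best}), radius {radii[r_best]}")
```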

The detection unit [104] is further configured to detect one or more corners within each circle of the one or more circles based on a pixel intensity variation. The pixel intensity variation comprises detecting a change in one or more pixel intensity values, wherein the change is above a threshold. The threshold is a pre-defined value to indicate a significant change in one or more pixel intensity values. Furthermore, in an implementation the detection unit [104] is configured to detect one or more corners within each circle based on creating, by the processing unit [102], a bounding box around each circle of the one or more circles and thereafter detecting, by the detection unit [104], the one or more corners within each bounding box. The creation of a bounding box heavily reduces the computational load, and it is hence much faster to detect the one or more corners within each bounding box. More specifically, in order to detect one or more corners within each circle, a significant change in one or more pixel intensity values in one or more directions is determined. Further, to determine such change in the one or more pixel intensity values in all directions, a small window on the at least one media content is selected and said small window is then shifted in a direction to analyse the change in appearance. Shifting this small window in any direction results in a large change in pixel intensity values if that particular window happens to be located on a corner. For points that do not lie on a corner, no significant change in pixel intensity values is observed. For points along an edge, a large change is observed only in the direction perpendicular to the edge. Therefore, using this, the one or more corners are filtered from all the rest of the points within each circle of the one or more circles. Furthermore, for a window (W) located at (X, Y) with pixel intensity I(X, Y), the formula for corner detection is:
f(X, Y) = Σ (I(Xₖ, Yₖ) − I(Xₖ + ΔX, Yₖ + ΔY))², where (Xₖ, Yₖ) ∈ W
According to the above formula, if the at least one media content is scanned with a window, in the same way as a kernel, and it is noticed that there is an area where there is a major change irrespective of the direction of scanning, then a corner is identified in that space.
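The window-shift measure above underlies Moravec/Harris-style corner detectors. A sketch using OpenCV's Harris implementation, restricted to the bounding box of one detected circle, is given below; the file name, circle coordinates and thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

# Sketch: detect corners inside the bounding box of one detected circle.
gray = cv2.cvtColor(cv2.imread("poster.jpg"), cv2.COLOR_BGR2GRAY)
cx, cy, r = 160, 120, 50                     # one circle from the Hough step
roi = gray[cy - r:cy + r, cx - r:cx + r]     # bounding box around the circle

# Harris response: large where shifting a window changes intensity a lot.
response = cv2.cornerHarris(np.float32(roi), blockSize=2, ksize=3, k=0.04)
ys, xs = np.where(response > 0.01 * response.max())
corners = [(x + cx - r, y + cy - r) for x, y in zip(xs, ys)]  # image coords
print(f"{len(corners)} candidate corner pixels detected")
```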
Once the one or more corners within each circle are detected, the processing unit [102] is further configured to remove one or more false circles from the one or more circles based on an analysis of the one or more corners. Further, the processing unit [102] is configured to detect a circle as a false circle, in order to remove the one or more false circles, based on detection of fewer than four corners within the circle, detection of a same colour in adjacent corners within the circle, and detection of a position of a corner wherein the position of the corner with respect to the radius of the circle is not in a pre-defined ratio.
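A hedged sketch of this false-circle filter is given below; the expected corner-to-radius ratio and its tolerance are assumed values, since the specification leaves the pre-defined ratio unspecified.

```python
import math

# Assumed values for the pre-defined corner-to-radius ratio check; the
# specification does not fix these numbers.
EXPECTED_RATIO = 0.5
TOLERANCE = 0.15

def is_true_circle(centre, radius, corners, corner_colours):
    # Rule 1: a true circle must contain exactly four corners.
    if len(corners) != 4:
        return False
    # Rule 2: adjacent corners (sorted by angle around the centre)
    # must not share the same colour.
    order = sorted(range(4), key=lambda i: math.atan2(
        corners[i][1] - centre[1], corners[i][0] - centre[0]))
    for a, b in zip(order, order[1:] + order[:1]):
        if corner_colours[a] == corner_colours[b]:
            return False
    # Rule 3: every corner must sit at the expected fraction of the radius.
    for x, y in corners:
        ratio = math.hypot(x - centre[0], y - centre[1]) / radius
        if abs(ratio - EXPECTED_RATIO) > TOLERANCE:
            return False
    return True
```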
Further, the processing unit [102] is configured to determine a set of Hue values for each remaining circle, wherein each remaining circle comprises four corners and is considered a true circle. Also, the processing unit [102] is further configured to determine the set of Hue values for each remaining circle based on sorting the four corners of each of the remaining circles in a clockwise manner and extracting a corresponding hue value associated with the four grids surrounding each corner of the four corners. For instance, the exemplary circle with the embedded colour code as indicated in Figure 2 is a true circle having four corners, and the hue values associated with the four grids surrounding each corner of the four corners are 1 to 4 for the first corner, 5 to 8 for the second corner, 9 to 12 for the third corner and 13 to 16 for the fourth corner. Furthermore, the set of corresponding hue values for the given circle, i.e. hue values 1 to 16, is then determined based on sorting the four corners of the circle in a clockwise manner and extracting a corresponding hue value associated with each grid from the four grids surrounding each corner from the four corners.
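A minimal sketch of this clockwise sorting and hue extraction follows; sample_hue is a hypothetical callback that reads the hue of one grid cell near a corner, and the grid offsets are illustrative.

```python
import math

# Sketch: sort a true circle's four corners clockwise around the centre
# and assemble the 16-value hue sequence (four grids per corner).
def extract_hue_sequence(centre, corners, sample_hue):
    cx, cy = centre
    # Ascending atan2 with image coordinates (y grows downward) walks
    # the corners clockwise as seen on screen.
    clockwise = sorted(corners,
                       key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
    hues = []
    for corner in clockwise:
        # Four grid cells surround each corner; offsets are illustrative.
        for dx, dy in ((-1, -1), (1, -1), (1, 1), (-1, 1)):
            hues.append(sample_hue(corner, dx, dy))
    return hues  # 16 hue values in a fixed decode order
```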
Thereafter, the processing unit [102] is configured to match in a sequence, the determined set of Hue values of each remaining circle of the one or more circles with one or more pre-defined set of Hue values stored at a storage unit. For example, considering the exemplary circle as indicated in Figure 2, once the set of hue values (i.e. 1 to 16 hue values) is determined based on a sorting of the four corners of the circle in a clockwise manner, each hue value from the determined 1 to 16 hue values is then matched one to one with at least one pre-defined set of Hue values (i.e. at least one set of pre-defined 16 hue values) associated with at least one information, stored at the storage unit.
The processing unit [102] is further configured to decode the encoded information based on an extraction of an information corresponding to the encoded information from the storage unit, wherein the information is extracted based on a successful matching. Also, the information further comprises at least one of an advertisement-related promotional content, product-related data, service-related data and similar data. Further, in an implementation, a matching in sequence of the determined set of Hue values with the one or more pre-defined sets of Hue values is considered a successful matching in the event that such matching is above a certain threshold matching value. For example, considering the exemplary circle as indicated in Figure 2, once the determined 1 to 16 hue values are matched one to one with a set of pre-defined 16 hue values associated with at least one information stored at the storage unit, and the result of such matching is above a pre-defined matching threshold value, the corresponding at least one information is extracted from the storage unit.
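A hedged sketch of this threshold-gated sequence matching is given below; the threshold, the per-grid hue tolerance and the shape of the stored codes are assumptions for the example, not values taken from this specification.

```python
# Assumed matching parameters; the specification only requires that the
# match exceed some pre-defined threshold.
MATCH_THRESHOLD = 0.8   # fraction of the 16 positions that must agree
HUE_TOLERANCE = 15      # degrees of hue drift tolerated per grid

def hue_distance(a, b):
    d = abs(a - b) % 360
    return min(d, 360 - d)  # circular distance on the hue wheel

def match_score(decoded, stored):
    # One-to-one, in-sequence comparison of the 16 hue values.
    hits = sum(hue_distance(d, s) <= HUE_TOLERANCE
               for d, s in zip(decoded, stored))
    return hits / len(stored)

def decode_information(decoded_hues, stored_codes):
    """stored_codes maps an information key to its 16-hue sequence."""
    best_key = max(stored_codes,
                   key=lambda k: match_score(decoded_hues, stored_codes[k]))
    if match_score(decoded_hues, stored_codes[best_key]) >= MATCH_THRESHOLD:
        return best_key   # key of the information to extract from storage
    return None           # no successful match
```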
Further, the extracted/decoded information is provided on the user device via a user interface. In an implementation, if the system [100] is placed in the server device connected to the user device, or the system [100] is partially placed in the server device and the user device, the information extracted from the storage unit of the server device/system is then transmitted to the user device from the server device. Also, in an implementation where the system [100] is placed in the user device, the information can be extracted locally from a storage unit of the user device/system, or the information can be extracted from an external storage/remote storage connected to the user device based on the successful matching. Further, such extracted information is provided on the user device via the user interface as decoded information.
Referring to Figure 3 an exemplary method flow diagram [300], depicting a method for decoding an encoded information based on a colour code, in accordance with exemplary embodiments of the present invention is shown. In an implementation the method can be implemented at a user device. In another implementation the method can be implemented at a server device. In yet another implementation the method can be implemented partially at the server device and the user device. As shown in Fig. 3, the method begins at step [302].
At step [304], the method comprises receiving, at a processing unit [102], at least one media content comprising the colour code. The colour code is embedded in each media content of the at least one media content in the form of one or more grids inside a circle comprising distinct hue values along vertices around a corner. For example, the at least one media content comprises one or more circles, wherein each circle of the one or more circles has various grids comprising distinct hue values along vertices around a corner. More specifically, in a valid/true circle, 16 different hue values are present and each hue value from the 16 hue values is associated with a grid having a particular colour.
Also, in an implementation the method encompasses converting the at least one media content into grayscale format for further processing. In such an implementation the method further encompasses transmitting to the detection unit [104] the at least one media content in grayscale format.
Next, at step [306], the method comprises detecting, by a detection unit [104], the one or more circles present in the at least one media content, based on an analysis of the at least one media content, wherein the one or more circles are associated with the colour code. Also, each of the one or more circles is associated with a varying radius for every pixel. Furthermore, the process of detecting, by the detection unit [104], one or more circles in the at least one media content is further based on a voting mechanism based on a 3D Hough Transform Matrix. For instance, in an implementation, for detecting the one or more circles in the at least one media content, the colour space of the at least one media content is converted to Hough space and the one or more circles are further detected based on the Hough space’s voting principle. For example, if a circle is present in the Hough space associated with a media, all the points that lie on the circle satisfy the circle’s equation, and the number of points satisfying the equation directly denotes the number of votes for that circle in the corresponding Hough space. Therefore, in the given example, for the media containing the circle, the Hough space comprises votes for the circle, wherein the circle can be filtered out using a threshold on the number of votes. Furthermore, in an implementation the Hough transform is used for circle detection, for different combinations of accumulator matrix threshold, radius and accumulator resolution.
Further, at step [308], the method comprises detecting, by the detection unit [104], one or more corners within each circle of the one or more circles based on a pixel intensity variation. The pixel intensity variation comprises detecting a change in one or more pixel intensity values, wherein the change is above a threshold. The threshold is a pre-defined value to indicate a significant change in one or more pixel intensity values. Furthermore, in an implementation the method encompasses detecting, by the detection unit [104], the one or more corners within each circle based on creating, by the processing unit [102], a bounding box around each circle of the one or more circles; after creating the bounding box, the method leads to detecting, by the detection unit [104], the one or more corners within each bounding box. Furthermore, in order to detect the one or more corners within each circle, the method encompasses determining a significant change in one or more pixel intensity values in one or more directions. Further, to determine such change in the one or more pixel intensity values in all directions, the method comprises selecting a small window on the at least one media content; said small window is then shifted in a direction to analyse the change in appearance. Shifting this small window in any direction results in a large change in pixel intensity values if that particular window happens to be located on a corner. For points that do not lie on a corner, no significant change in pixel intensity values is observed. For points along an edge, a large change is observed only in the direction perpendicular to the edge. Therefore, using this variation in the one or more pixel intensity values, the one or more corners are filtered from all the rest of the points present within each circle of the one or more circles.
Once the one or more corners within each circle are detected, the method, at step [310], comprises removing, by the processing unit [102], one or more false circles from the one or more circles based on an analysis of the one or more corners. Also, in an implementation, removing, by the processing unit [102], the one or more false circles further comprises detecting, from the one or more circles, a circle as a false circle based on at least one of: detection of fewer than four corners within the circle, detection of a same colour in adjacent corners/grids within the circle, and detection of a position of a corner wherein the position of the corner with respect to the radius of the circle is not in a pre-defined ratio. Also, in an implementation, the method encompasses removing, by the processing unit [102], one or more false corners present within each circle of the one or more circles based on one or more mathematical properties of a circle.

Next, at step [312], the method comprises determining, by the processing unit [102], a set of Hue values for each remaining circle, wherein each remaining circle comprises four corners and is considered a true/valid circle. Further, the process of determining, by the processing unit [102], a set of Hue values for each remaining circle further comprises sorting the four corners of each remaining circle in a clockwise manner and extracting a corresponding hue value associated with the four grids surrounding each corner of the four corners. Therefore, for each true circle, a set of Hue values comprising 16 Hue values corresponding to different grids is determined, wherein the 16 hue values are determined based on sorting the four corners of each remaining/true circle in the clockwise manner and then extracting a corresponding hue value associated with the four grids surrounding each corner.
Further, at step [314], the method comprises matching in a sequence, by the processing unit [102], the determined set of Hue values of each remaining circle of the one or more circles with one or more pre-defined set of Hue values stored at a storage unit. For example, once a set of 16 hue values is determined for a true circle, each hue value from the determined 1 to 16 hue values is then matched one to one with at least one pre-defined set of 16 Hue values associated with at least one information (such as a product detail), stored at the storage unit.
Next, at step [316], the method comprises decoding, by the processing unit [102], the encoded information based on an extraction of an information corresponding to the encoded information from the storage unit, wherein the information is extracted based on a successful matching. The information comprises at least one of an advertisement related promotional content, a product related data, a service related data and such similar data. Further, in an implementation, the matching in sequence of the determined set of Hue values with the one or more pre-defined sets of Hue values is considered a successful matching in an event such matching is above a certain threshold matching value. For example, once each hue value of a determined set of 16 hue values is matched one to one with the hue values of a pre-defined set of 16 hue values associated with at least one information stored at the storage unit, and the result of such matching is above a pre-defined matching threshold value, the corresponding at least one information is extracted from the storage unit. Further, Figure 3(a) illustrates an exemplary use case depicting, at [300 A], a colour code with an actual colour sequence and, at [300 B], the colour code with a captured colour sequence. More specifically, a variance between the actual colour sequence depicted at [300 A] and the captured colour sequence depicted at [300 B] is shown. If the set of Hue values associated with the captured colour sequence matches in sequence above the pre-defined threshold matching value, the corresponding at least one information is extracted from the storage unit. A minimal sketch of this matching step follows.
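The sketch below gates the sequential one-to-one comparison on a threshold; the per-hue tolerance and the 12-of-16 match threshold are illustrative assumptions, not values given in the specification:

```python
def match_code(detected, product_codes, hue_tol=10, min_matches=12):
    """detected: list of 16 hues; product_codes: {product_id: [16 hues]}."""
    for product_id, stored in product_codes.items():
        # Compare position by position, allowing a small circular hue
        # tolerance (OpenCV hues span 0-179) to absorb the capture-time
        # colour variance shown in Figure 3(a).
        hits = sum(1 for d, s in zip(detected, stored)
                   if min(abs(d - s), 180 - abs(d - s)) <= hue_tol)
        if hits >= min_matches:  # matching above the pre-defined threshold
            return product_id
    return None
```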
Further, the method encompasses providing the extracted information as the decoded information on the user device via a user interface. In an implementation, the method [300] is performed by the system [100]. If the system [100] is placed in a server device connected to the user device, or is partially placed in the server device and the user device, the information extracted from the storage unit of the server device/system is transmitted from the server device to the user device. In an implementation where the system [100] is placed in the user device, the information can be extracted locally from a storage unit of the user device/system, or from an external/remote storage connected to the user device, based on the successful matching. Such extracted information is then provided on the user device via the user interface as the decoded information.
The method thereafter terminates at step [318].
Referring to Figure 4, an exemplary flow diagram depicting an instance implementation of the process of decoding an encoded information based on a colour code, in accordance with exemplary embodiments of the present invention, is shown. As shown in Figure 4, the exemplary process starts at step [402].
At step [404], the method comprises searching for one or more circles in a captured image. In an implementation, a 3D Hough Transform Matrix is used for circle detection. In another implementation, the Hough transform is used for circle detection with different combinations of accumulator matrix threshold, radius and accumulator resolution. Further, if one or more circles are identified, the method leads to step [410]; if no circles are identified in the image, the method leads to step [406]. A minimal sketch of the circle search is given below.
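As one illustration of this step, OpenCV's cv2.HoughCircles implements a voting-based Hough circle search of the kind described above; every parameter value below (dp, minDist, thresholds, radius bounds) is an illustrative assumption to be tuned per deployment:

```python
import cv2
import numpy as np

def find_circles(gray):
    """Return detected circles as (x, y, r) rows, or None if none found."""
    blurred = cv2.medianBlur(gray, 5)  # suppress noise before voting
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT,
        dp=1.2,          # accumulator resolution relative to the image
        minDist=40,      # minimum spacing between circle centres
        param1=100,      # upper Canny threshold for edge detection
        param2=40,       # accumulator (voting) threshold
        minRadius=15, maxRadius=120)
    return None if circles is None else np.round(circles[0]).astype(int)
```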

Next, at step [406], the method indicates a false result and leads to step [408], where the method terminates.
Next, at step [410], the method comprises detecting one or more corners within each detected circle. The one or more corners are detected based on a pixel intensity variation determined from an analysis of the image, as sketched below. Further, if one or more corners are detected, the method leads to step [412]; otherwise the method leads to step [406] and thereafter terminates at step [408].
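The specification ties corner detection to pixel intensity variation; the sketch below stands in for that test with OpenCV's Shi-Tomasi detector applied inside a bounding box around each circle (one possible realisation, not necessarily the authors' exact method; parameter values are assumptions):

```python
import cv2

def corners_in_circle(gray, circle, max_corners=8):
    """Detect corners within the bounding box of one detected circle."""
    x, y, r = circle
    h, w = gray.shape
    # Bounding box around the circle, clipped to the image borders.
    x0, y0 = max(x - r, 0), max(y - r, 0)
    x1, y1 = min(x + r, w), min(y + r, h)
    roi = gray[y0:y1, x0:x1]
    pts = cv2.goodFeaturesToTrack(roi, maxCorners=max_corners,
                                  qualityLevel=0.1,   # min intensity variation
                                  minDistance=max(r // 2, 1))
    if pts is None:
        return []
    # Shift back to full-image coordinates; keep points inside the circle.
    return [(int(px) + x0, int(py) + y0) for px, py in pts.reshape(-1, 2)
            if (px + x0 - x) ** 2 + (py + y0 - y) ** 2 <= r * r]
```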
Further, at step [412], the method comprises identifying whether at least four corners are present in each detected circle. The method also encompasses filtering out false corner/s based on basic geometry and boundary condition/s. If at least four corners are found in one or more detected circles, the method leads to step [414]. If no circle with at least four corners is identified, the method leads to step [406] and thereafter terminates at step [408].
Next, at step [414], the method confirms that the number of corners in each circle is four after filtering out false corners, wherein such a circle with four corners is considered a true circle. If no true circle is present, the method leads to step [406] and thereafter terminates at step [408]; otherwise the method leads to step [416].
Thereafter, at step [416], the method comprises getting a hue value for each of the four grids surrounding each corner of a true circle, so that a total of 16 Hue values is obtained for the true circle. The method further encompasses matching, individually and one to one, the 16 Hue values of each detected circle with the 16 Hue values of different products stored as a list of product codes at a storage unit. If the matching is below a threshold level, the method leads to step [406] and thereafter terminates at step [408]; otherwise the method leads to step [418].
Next, at step [418], the method comprises providing, at a user device, one or more product details (i.e. the decoded information) based on a successful matching.
Thereafter, at step [420] the method terminates.

Referring to Figure 5, an exemplary flow diagram depicting an instance implementation of the process of decoding an encoded information based on a colour code, in accordance with exemplary embodiments of the present invention, is shown.
As shown in Figure 5, a user device [502] is connected to a server device [504]. The server device [504] is further connected to the system [100], and the system [100] is connected to a storage unit [506].
Figure 5, at step 1, indicates that an image of an encoded advertisement is captured at the user device [502]. Such image is converted into a Base64 string, which is then sent to the server device [504].
Next, at step 2, the server device [504] decodes the Base64 string to retrieve the image content. The decoded image content is then sent to the processing unit [102] of the system [100].
Further, at step 3, the image is converted into a grayscale format by the processing unit [102]. In an implementation, the image is also resized. A minimal sketch of steps 1 to 3 follows.
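The sketch below shows Base64 encoding on the device, then decoding, grayscale conversion and resizing on the server; the 800-pixel target width is an assumption:

```python
import base64
import cv2
import numpy as np

def encode_capture(path):
    """Step 1: encode a captured image file as a Base64 string."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

def decode_and_preprocess(b64_string, target_width=800):
    """Steps 2-3: decode the string, convert to grayscale and resize."""
    raw = base64.b64decode(b64_string)
    image = cv2.imdecode(np.frombuffer(raw, np.uint8), cv2.IMREAD_COLOR)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    scale = target_width / gray.shape[1]
    return cv2.resize(gray, None, fx=scale, fy=scale)
```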
Further, at step 4, one or more circles are then detected over the whole image by the detection unit [104] of the system [100].
Next, step 5 indicates that corner detection is carried out in parallel within each circular region. More specifically, one or more corners are identified in each detected circle based on a pixel intensity variation, and the per-circle searches run concurrently, as sketched below.
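A short sketch of the parallel per-circle search, reusing the corners_in_circle helper sketched earlier; the thread pool is an illustrative choice, not mandated by the specification:

```python
from concurrent.futures import ThreadPoolExecutor

def corners_per_circle(gray, circles):
    """Run the corner search concurrently, one task per detected circle."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda c: corners_in_circle(gray, c), circles))
```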
Thereafter, step 6 indicates that if any false corner/s are detected in a circular region, such false corner/s are subsequently removed by the processing unit [102].
Further, at step 7, a set of hue values comprising 16 Hue values for each valid/true circular region is determined, wherein the true circle comprises 4 corners and each corner is associated with 4 grids. Each grid is further associated with a hue value.
Next, at step 8, to identify the determined hue combination (i.e. the set of hue values) in a list of product codes, the 16 Hue values of each circular region are matched one to one with the 16 Hue values of each existing product code present in the storage unit [506].

Further, at step 9, if a match is found, the corresponding product details are sent to the server device [504], which in turn sends them to the user device [502] at step 10. Based on the receipt of the product details, particular product-related promotions or other interactive services (i.e. the decoded information) are provided on the user device [502].
Thus, the present invention provides a novel solution for decoding an encoded information based on a colour code. Since, according to the present invention, corner detection depends on pixel intensity variation, image fingerprinting can be performed for distant images with similar accuracy. Furthermore, the present invention provides a novel solution for efficiently and distinctly capturing a content (such as an advertisement) along with an embedded code in the form of grids inside a circle, with distinct colour codes or colour hues along the vertices around a corner, visible from a distance despite the challenge of increased variance of pixel intensity with distance, and for subsequently synchronizing the content/advertisement on a user device for an interactive user engagement. Also, the present invention provides a system and method that is robust to distortion of the multimedia content in terms of clarity and colour variance, even when only small portions of the multimedia content are visible on a screen from a distance.
While considerable emphasis has been placed herein on the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made to the preferred embodiments without departing from the principles of the invention. These and other changes in the preferred embodiments of the invention will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the invention and not as a limitation.

We Claim:
1. A method of decoding an encoded information based on colour code, the method comprising:
- receiving, at a processing unit [102], at least one media content comprising the colour code;
- detecting, by a detection unit [104], one or more circles present in the at least one media content, based on an analysis of the at least one media content, wherein the one or more circles are associated with the colour code;
- detecting, by the detection unit [104], one or more corners within each circle of the one or more circles based on a pixel intensity variation;
- removing, by the processing unit [102], one or more false circles from the one or more circles based on an analysis of the one or more corners;
- determining, by the processing unit [102], a set of Hue values for each remaining circle, wherein each remaining circle comprises four corners;
- matching in a sequence, by the processing unit [102], the determined set of Hue values of each remaining circle of the one or more circles with one or more pre-defined set of Hue values stored at a storage unit; and
- decoding, by the processing unit [102], the encoded information based on an extraction of an information corresponding to the encoded information from the storage unit, wherein the information is extracted based on a successful matching.

2. The method as claimed in claim 1, wherein the colour code is embedded in each media content of the at least one media content in form of one or more grids inside a circle comprising distinct hue values along vertices.
3. The method as claimed in claim 1, wherein the information further comprises at least one of an advertisement related promotional content, a product related data and a service related data.
4. The method as claimed in claim 1, wherein detecting, by a detection unit [104], one or more circles in the at least one media content is further based on a voting mechanism based on 3D Hough Transform Matrix.
5. The method as claimed in claim 1, wherein detecting, by the detection unit [104], one or more corners within each circle further comprises:

- creating, by the processing unit [102], a bounding box around each circle of the one or more circles, and
- detecting, by the detection unit [104], the one or more corners within each bounding box.

6. The method as claimed in claim 1, wherein the pixel intensity variation comprises detecting a change in one or more pixel intensity values, wherein the change is above a threshold.
7. The method as claimed in claim 1, wherein removing, by the processing unit [102], one or more false circles further comprises detecting from the one or more circles, a circle as a false circle based on at least one of:

- detection of less than four corners within the circle,
- detection of a same colour in adjacent corners within the circle, and
- detection of a position of a corner, wherein the position of the corner with respect to radius of the circle is not in a pre-defined ratio.
8. The method as claimed in claim 1, wherein determining, by the processing unit [102], a set of Hue values for each remaining circle further comprises:

- sorting, the four corners of each of the remaining circles in a clockwise manner; and
- extracting, a corresponding hue value associated with four grids surrounding each corner.
9. A system of decoding an encoded information based on colour code, the system comprises:
- a processing unit [102], configured to receive, at least one media content comprising the colour code;
- a detection unit [104], configured to detect:
one or more circles present in the at least one media content, based on an analysis of the at least one media content, wherein the one or more circles are associated with the colour code, and
one or more corners within each circle of the one or more circles based on a pixel intensity variation;
wherein the processing unit [102], is further configured to:
remove, one or more false circles from the one or more circles based on an analysis of the one or more corners,
determine, a set of Hue values for each remaining circle, wherein each remaining circle comprises four corners,
match in a sequence, the determined set of Hue values of each remaining circle of the one or more circles with one or more pre-defined set of Hue values stored at a storage unit, and

decode, the encoded information based on an extraction of an information corresponding to the encoded information from the storage unit, wherein the information is extracted based on a successful matching.
10. The system as claimed in claim 9, wherein the colour code is embedded in each media content of the at least one media content in form of one or more grids inside a circle comprising distinct hue values along vertices.
11. The system as claimed in claim 9, wherein the information further comprises at least one of an advertisement related promotional content, a product related data and a service related data.
12. The system as claimed in claim 9, wherein the detection unit [104] is further configured to detect one or more circles in the at least one media content based on a voting mechanism based on 3D Hough Transform Matrix.
13. The system as claimed in claim 9, wherein the detection unit [104] is further configured to detect, one or more corners within each circle based on:

- creating, by the processing unit [102], a bounding box around each circle of the one or more circles, and
- detecting, by the detection unit [104], the one or more corners within each bounding box.

14. The system as claimed in claim 9, wherein the pixel intensity variation comprises detecting a change in one or more pixel intensity values, wherein the change is above a threshold.
15. The system as claimed in claim 9, wherein the processing unit [102] is further configured to detect a circle as a false circle to remove the one or more false circles, based on at least one of:

- detection of less than four corners within the circle,
- detection of a same colour in adjacent corners within the circle, and

- detection of a position of a corner, wherein the position of the corner with respect to radius of the circle is not in a pre-defined ratio.
16. The system as claimed in claim 9, wherein the processing unit [102] is further configured to determine the set of Hue values for each remaining circle based on:
- sorting, the four corners of each of the remaining circles in a clockwise manner; and
- extracting, a corresponding hue value associated with four grids surrounding each corner.

Documents

Application Documents

# Name Date
1 202021008661-FORM 1 [28-02-2020(online)].pdf 2020-02-28
2 202021008661-PROVISIONAL SPECIFICATION [28-02-2020(online)].pdf 2020-02-28
3 202021008661-STATEMENT OF UNDERTAKING (FORM 3) [28-02-2020(online)].pdf 2020-02-28
4 202021008661-FIGURE OF ABSTRACT [28-02-2020(online)].pdf 2020-02-28
5 202021008661-FORM-26 [15-07-2020(online)].pdf 2020-07-15
6 202021008661-Proof of Right [07-08-2020(online)].pdf 2020-08-07
7 202021008661-COMPLETE SPECIFICATION [27-02-2021(online)].pdf 2021-02-27
8 202021008661-DRAWING [27-02-2021(online)].pdf 2021-02-27
9 202021008661-ENDORSEMENT BY INVENTORS [27-02-2021(online)].pdf 2021-02-27
10 202021008661-FORM 18 [27-02-2021(online)].pdf 2021-02-27
11 Abstract1.jpg 2021-10-19
12 202021008661-8(i)-Substitution-Change Of Applicant - Form 6 [26-02-2022(online)].pdf 2022-02-26
13 202021008661-ASSIGNMENT DOCUMENTS [26-02-2022(online)].pdf 2022-02-26
14 202021008661-PA [26-02-2022(online)].pdf 2022-02-26
15 202021008661-FER.pdf 2022-03-22
16 202021008661-Response to office action [05-04-2022(online)].pdf 2022-04-05
17 202021008661-FER_SER_REPLY [14-09-2022(online)].pdf 2022-09-14
18 202021008661-FORM-8 [17-09-2024(online)].pdf 2024-09-17
19 202021008661-US(14)-HearingNotice-(HearingDate-31-01-2025).pdf 2025-01-10
20 202021008661-Correspondence to notify the Controller [22-01-2025(online)].pdf 2025-01-22
21 202021008661-FORM-26 [22-01-2025(online)].pdf 2025-01-22
22 202021008661-PETITION UNDER RULE 137 [12-02-2025(online)].pdf 2025-02-12
23 202021008661-Proof of Right [13-02-2025(online)].pdf 2025-02-13
24 202021008661-Written submissions and relevant documents [14-02-2025(online)].pdf 2025-02-14
25 202021008661-ORIGINAL UR 6(1A) FORM 1-030325.pdf 2025-03-12
26 202021008661-PatentCertificate13-03-2025.pdf 2025-03-13
27 202021008661-IntimationOfGrant13-03-2025.pdf 2025-03-13

Search Strategy

1 Search008661E_22-03-2022.pdf

ERegister / Renewals

3rd: 19 Mar 2025 (From 28/02/2022 - To 28/02/2023)
4th: 19 Mar 2025 (From 28/02/2023 - To 28/02/2024)
5th: 19 Mar 2025 (From 28/02/2024 - To 28/02/2025)
6th: 19 Mar 2025 (From 28/02/2025 - To 28/02/2026)