Abstract: Methods (300, 400, 500) and systems (200) for determining a dominant color of an object in an image (102) are disclosed. A color matrix module (210) stores the image (102) in the form of a matrix and initializes a color matrix comprising an occurrence count value for each of one or more colors of the image. A processing module (212) may identify an object within a bounding box of coordinates and iterate through the pixels within the bounding box using one or more approaches related to average windowing, pixel skipping, and pixel-by-pixel processing. An occurrence count is updated for each color in the matrix in each of the exemplary approaches. A dominant color prediction module (214) may determine the dominant color having the highest occurrence count upon completion of the iteration through the complete bounding box.
Claims:
1. A method (300) of determining a dominant color of an object in an image (102), the method (300) comprising:
storing (302) data of an image (102) in the form of a matrix;
initializing (306) a color matrix comprising an occurrence count value for each of one or more colors of the image (102);
selecting (310) a window size based on a uniformity factor of the object;
identifying (312) at least one window of pixels of the object, wherein each of the at least one window corresponds to the window size;
for each of the at least one window:
computing (314) an average of the RGB channels in the window of the pixels, and
updating (316) the occurrence count value for each of the one or more colors based on the computed average; and
determining (318) the dominant color, of the one or more colors, of the object based on maximum count of the occurrence count value.
2. The method (300) as claimed in claim 1, wherein the uniformity factor of the object is based on an average pixel width of a binary matrix associated with the object.
3. The method (300) as claimed in claim 1, comprising:
classifying (304) Hue Saturation Value (HSV) color space into fifteen classes of colors for the image (102), wherein the fifteen classes of colors correspond to the one or more colors.
4. A method (400) of predicting the dominant color of an object in an image (102), the method (400) comprising:
storing (402) data of an image (102) in the form of a matrix;
initializing (406) a color matrix comprising an occurrence count value for each of one or more colors of the image (102);
identifying (412) the object within the image (102) for predicting the dominant color of the object, wherein the object is enclosed by bounding box coordinates;
selecting (414) two pixels from the bounding box coordinates of the object, the two pixels being separated by a threshold number of pixels, wherein the threshold number of pixels is based on a uniformity factor of the object;
comparing (416) the two pixels separated by the threshold number of pixels to determine whether the two pixels are of the same color;
updating (418) the occurrence count value of a color of the one or more colors in the color matrix, corresponding to the color of the two pixels, when the two pixels are of the same color;
selecting (420) next two pixels separated by the threshold number of pixels, wherein the selecting the next two pixels comprises skipping the pixels corresponding to the threshold number of pixels between the two pixels when the two pixels are of the same color;
in response to selecting the next two pixels, repeating (422) the steps of comparing, updating, and selecting for the bounding box of coordinates; and
determining (424) the dominant color, of the one or more colors, of the image object based on maximum count of the occurrence count value.
5. The method (400) as claimed in claim 4, comprising discarding the two pixels when the two pixels are not of the same color.
6. The method (400) as claimed in claim 4, comprising predicting the threshold number of pixels to be of the same color as the two pixels, when the two pixels are of the same color.
7. The method (400) as claimed in claim 4, wherein the uniformity factor of the image object is based on an average pixel width of a binary matrix associated with the image object.
8. A method (500) of predicting the dominant color of an object in an image (102), the method (500) comprising:
storing (502) data of an image (102) in the form of a matrix;
initializing (506) a color matrix comprising an occurrence count value for each of one or more colors of the image (102);
identifying (510) the object within the image (102) for predicting the dominant color of the object, wherein the object is enclosed by bounding box coordinates and is associated with a masked portion;
updating (512), for each pixel of the bounding box coordinates of the object, the occurrence count value of the color, of the one or more colors, in the color matrix in response to determining that the respective pixel of the masked portion is equal to the masked color; and
determining (514) the dominant color, of the one or more colors, of the image object based on maximum count of the occurrence count value.
9. The method (500) as claimed in claim 8, comprising:
classifying (504) hue saturation value (HSV) color space into fifteen classes of colors for the image (102), wherein the fifteen classes of colors correspond to the one or more colors.
10. A system (200) for determining a dominant color of an object in an image (102), the system (200) comprising:
a color matrix module (210) configured to:
store data of an image (102) in the form of a matrix, and
initialize a color matrix comprising an occurrence count value for each of one or more colors of the image (102);
a processing module (212) in communication with the color matrix module (210) and configured to:
select a window size based on a uniformity factor of the object,
identify at least one window of pixels of the object, wherein each of the at least one window corresponds to the window size,
compute, for each of the at least one window, an average of the RGB channels in the window of the pixels, and
update, for each of the at least one window, the occurrence count value for each of the one or more colors based on the computed average; and
a dominant color prediction module (214) in communication with the color matrix module (210) and processing module (212), and configured to determine the dominant color, of the one or more colors, of the object based on maximum count of the occurrence count value.
11. The system (200) as claimed in claim 10, wherein the uniformity factor of the object is based on an average pixel width of a binary matrix associated with the object.
12. The system (200) as claimed in claim 10, wherein the color matrix module is configured to classify Hue Saturation Value (HSV) color space into fifteen classes of colors for the image (102), wherein the fifteen classes of colors correspond to the one or more colors.
13. A system (200) for determining a dominant color of an object in an image (102), the system (200) comprising:
a color matrix module (210) configured to:
store data of an image (102) in the form of a matrix, and
initialize a color matrix comprising an occurrence count value for each of one or more colors of the image (102);
a processing module (212) in communication with the color matrix module (210) and configured to:
identify the object within the image (102) for predicting the dominant color of the object, wherein the object is enclosed by bounding box coordinates,
select two pixels from the bounding box coordinates of the object, the two pixels being separated by a threshold number of pixels, wherein the threshold number of pixels is based on a uniformity factor of the object,
compare the two pixels separated by the threshold number of pixels to determine whether the two pixels are of the same color,
update the occurrence count value of a color of the one or more colors in the color matrix, corresponding to the color of the two pixels, when the two pixels are of the same color,
select next two pixels separated by the threshold number of pixels, wherein the selecting the next two pixels comprises skipping the pixels corresponding to the threshold number of pixels between the two pixels when the two pixels are of the same color, and
in response to selection of the next two pixels, repeat the steps of comparing, updating, and selecting for the bounding box of coordinates; and
a dominant color prediction module (214) in communication with the color matrix module (210) and processing module (212), and configured to determine the dominant color, of the one or more colors, of the object based on maximum count of the occurrence count value.
14. The system (200) as claimed in claim 13, wherein the processing module (212) is configured to discard the two pixels when the two pixels are not of the same color.
15. The system (200) as claimed in claim 13, wherein the processing module (212) is configured to predict the threshold number of pixels to be of the same color as the two pixels, when the two pixels are of the same color.
16. The system (200) as claimed in claim 13, wherein the uniformity factor of the image object is based on an average pixel width of a binary matrix associated with the image object.
17. A system (200) for determining a dominant color of an object in an image (102), the system (200) comprising:
a color matrix module (210) configured to:
store data of an image (102) in the form of a matrix, and
initialize a color matrix comprising an occurrence count value for each of one or more colors of the image (102);
a processing module (212) in communication with the color matrix module (210) and configured to:
identify the object within the image (102) to predict the dominant color of the object, wherein the object is enclosed by bounding box coordinates and is associated with a masked portion,
update, for each pixel of the bounding box coordinates of the object, the occurrence count value of the color, of the one or more colors, in the color matrix in response to determining that the respective pixel of the masked portion is equal to the masked color; and
a dominant color prediction module (214) in communication with the color matrix module (210) and processing module (212), and configured to determine the dominant color, of the one or more colors, of the object based on maximum count of the occurrence count value.
18. The system (200) as claimed in claim 17, wherein the color matrix module (210) is configured to classify Hue Saturation Value (HSV) color space into fifteen classes of colors for the image (102), wherein the fifteen classes of colors correspond to the one or more colors.
Description:
FIELD OF THE INVENTION
The present invention generally relates to predicting a dominant color in an image, and more particularly relates to systems and methods for predicting a dominant color among one or more colors of an object in the image based on an efficient occurrence count.
BACKGROUND
The proliferation of robotics and automated applications across diverse aspects of everyday life has drawn increased attention from researchers to the detection and identification of objects. This capability equips robots to mimic human behaviour. Object detection plays a significant role in the field of automation by increasing precision and simplifying work, and it is therefore integrated with several robotic applications. Applications of object detection can be found in video surveillance, security cameras, and self-driving systems. Various object detection models have been introduced in the past decade, such as Region-based Convolutional Neural Networks (R-CNN), Fast R-CNN, You Only Look Once (YOLO), and Mask R-CNN. All of these models are improved versions of earlier models and operate using convolutional neural networks. Although object detection is widely used, it lacks color prediction for the detected objects.
Accordingly, there is a need to provide color prediction along with object detection. Additionally, there is a need to provide efficient mechanisms to identify colored objects within images.
SUMMARY
This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the invention. This summary is neither intended to identify key or essential inventive concepts of the invention, nor is it intended to determine the scope of the invention.
According to one embodiment of the present disclosure, a method for determining a dominant color of an object in an image is disclosed. The method includes storing data of an image in the form of a matrix. Further, the method includes initializing a color matrix comprising an occurrence count value for each of one or more colors of the image. Furthermore, the method includes selecting a window size based on a uniformity factor of the object. Additionally, the method includes identifying at least one window of pixels of the object, wherein each of the at least one window corresponds to the window size. Moreover, the method includes for each of the at least one window, computing an average of the RGB channels in the window of the pixels, and updating the occurrence count value for each of the one or more colors based on the computed average. Also, the method includes determining the dominant color, of the one or more colors, of the object based on maximum count of the occurrence count value.
According to another embodiment of the present disclosure, a method for predicting a dominant color of an object in an image is disclosed. The method includes storing data of an image in the form of a matrix. Further, the method includes initializing a color matrix comprising an occurrence count value for each of one or more colors of the image. Furthermore, the method includes identifying the object within the image for predicting the dominant color of the object, wherein the object is enclosed by bounding box coordinates. Yet further, the method includes selecting two pixels from the bounding box coordinates of the object, the two pixels being separated by a threshold number of pixels, wherein the threshold number of pixels is based on a uniformity factor of the object. Still further, the method includes comparing the two pixels separated by the threshold number of pixels to determine whether the two pixels are of the same color. In addition, the method includes updating the occurrence count value of a color of the one or more colors in the color matrix, corresponding to the color of the two pixels, when the two pixels are of the same color. Furthermore, the method includes selecting next two pixels separated by the threshold number of pixels, wherein the selecting the next two pixels comprises skipping the pixels corresponding to the threshold number of pixels between the two pixels when the two pixels are of the same color. Additionally, the method includes repeating, in response to selecting the next two pixels, the steps of comparing, updating, and selecting for the bounding box of coordinates. Further, the method includes determining the dominant color, of the one or more colors, of the image object based on maximum count of the occurrence count value.
According to yet another embodiment of the present disclosure, a method for predicting a dominant color of an object in an image is disclosed. The method includes storing data of an image in the form of a matrix. Further, the method includes initializing a color matrix comprising an occurrence count value for each of one or more colors of the image. Furthermore, the method includes identifying the object within the image for predicting the dominant color of the object, wherein the object is enclosed by bounding box coordinates and is associated with a masked portion. Additionally, the method includes updating, for each pixel of the bounding box coordinates of the object, the occurrence count value of the color, of the one or more colors, in the color matrix in response to determining that the respective pixel of the masked portion is equal to the masked color. Yet further, the method includes determining the dominant color, of the one or more colors, of the image object based on maximum count of the occurrence count value.
According to one embodiment of the present disclosure, a system for determining a dominant color of an object in an image is disclosed. The system comprises a color matrix module configured to store data of an image in the form of a matrix, and configured to initialize a color matrix comprising an occurrence count value for each of one or more colors of the image. Further, the system comprises a processing module configured to select a window size based on a uniformity factor of the object, to identify at least one window of pixels of the object, to compute for each of the at least one window an average of the RGB channels in the window of the pixels, and to update the occurrence count value for each of the one or more colors based on the computed average. Also, the system comprises a dominant color prediction module configured to determine the dominant color, of the one or more colors, of the object based on maximum count of the occurrence count value.
According to another embodiment of the present disclosure, a system for determining a dominant color of an object in an image is disclosed. The system comprises a color matrix module configured to store data of an image in the form of a matrix, and configured to initialize a color matrix comprising an occurrence count value for each of one or more colors of the image. Further, the system comprises a processing module configured to identify the object within the image to predict the dominant color of the object, wherein the object is enclosed by bounding box coordinates. Further, the processing module is configured to select two pixels from the bounding box coordinates of the object, the two pixels being separated by a threshold number of pixels, wherein the threshold number of pixels is based on a uniformity factor of the object. Still further, the processing module is configured to compare the two pixels separated by the threshold number of pixels to determine whether the two pixels are of the same color. In addition, the processing module is configured to update the occurrence count value of a color of the one or more colors in the color matrix, corresponding to the color of the two pixels, when the two pixels are of the same color. Furthermore, the processing module is configured to select next two pixels separated by the threshold number of pixels, wherein the selecting the next two pixels comprises skipping the pixels corresponding to the threshold number of pixels between the two pixels when the two pixels are of the same color. Additionally, the processing module is configured to repeat, in response to selecting the next two pixels, the steps of comparing, updating, and selecting for the bounding box of coordinates. Further, the system comprises a dominant color prediction module configured to determine the dominant color, of the one or more colors, of the image object based on maximum count of the occurrence count value.
According to yet another embodiment of the present disclosure, a system for determining a dominant color of an object in an image is disclosed. The system comprises a color matrix module configured to store data of an image in the form of a matrix, and configured to initialize a color matrix comprising an occurrence count value for each of one or more colors of the image. Furthermore, the system comprises a processing module configured to identify the object within the image for predicting the dominant color of the object, wherein the object is enclosed by bounding box coordinates and is associated with a masked portion. Additionally, the processing module is configured to update, for each pixel of the bounding box coordinates of the object, the occurrence count value of the color, of the one or more colors, in the color matrix in response to determining that the respective pixel of the masked portion is equal to the masked color. Yet further, the system comprises a dominant color prediction module configured to determine the dominant color, of the one or more colors, of the image object based on maximum count of the occurrence count value.
To further clarify the advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
Figure 1 illustrates a schematic flow diagram for dominant color prediction in an image, according to various embodiments of the present invention;
Figure 2 illustrates a schematic block diagram of a system for dominant color prediction in an image, according to various embodiments of the present invention;
Figure 3 illustrates an exemplary process flow for dominant color prediction in an image, according to an embodiment of the present invention;
Figures 4A and 4B illustrate another exemplary process flow for dominant color prediction in an image, according to an embodiment of the present invention;
Figure 5 illustrates yet another exemplary process flow for dominant color prediction in an image, according to an embodiment of the present invention; and
Figures 6A-6C illustrate exemplary use cases for dominant color prediction in images, according to various embodiments of the present invention.
Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not necessarily have been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved, to help improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention, so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
DETAILED DESCRIPTION
For the purpose of promoting an understanding of the principles of the invention, reference will now be made to the various embodiments and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended, such alterations and further modifications in the illustrated system, and such further applications of the principles of the invention as illustrated therein being contemplated as would normally occur to one skilled in the art to which the invention relates.
It will be understood by those skilled in the art that the foregoing general description and the following detailed description are explanatory of the invention and are not intended to be restrictive thereof.
Reference throughout this specification to “an aspect”, “another aspect” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrase “in an embodiment”, “in another embodiment” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
The terms “comprises”, “comprising”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such process or method. Similarly, one or more devices or sub-systems or elements or structures or components preceded by “comprises... a” does not, without more constraints, preclude the existence of other devices or other sub-systems or other elements or other structures or other components or additional devices or additional sub-systems or additional elements or additional structures or additional components.
This disclosure relates to integration of object detection and color prediction technologies. This integration is facilitated by a Dominant Color Prediction Color Map (DCPCM) model based technology, as discussed herein.
Dominant Color Prediction Color Map (DCPCM) model
There are many shades of different colors available in each color space. Some of the exemplary color spaces include Hue Saturation Value (HSV), Cyan Magenta Yellow Key (CMYK), and Red Green Blue (RGB). Humans generally predict the color of an object, merely by a glance, as we are already trained about various color shades, unlike a machine that perceives the image in the form of a pixel having three channels, i.e., RGB. In order to predict the dominant color of an object, algorithms have to iterate through all the pixels of an object image and maintain a count of the colors, to predict the dominant color.
According to various embodiments of the present invention, the DCPCM model classifies the available color shades of the entire HSV color space into 15 basic color classes: red, green, blue, yellow, cyan, magenta, orange, yellow-green, green-cyan, cyan-blue, violet, pink, grey, white, and black. These 15 colors are the most widely used that include primary, secondary, tertiary, and neutral colors. The DCPCM model extracts each pixel in the object and maps it into one of the 15 pre-determined classes of colors.
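By way of a non-limiting illustration, the following Python sketch shows one possible mapping of a pixel into the fifteen DCPCM color classes. The saturation/value cut-offs for the neutral colors (black, white, grey) and the 30-degree hue bands are illustrative assumptions only; the DCPCM model itself does not prescribe these exact thresholds.

```python
import colorsys

# Fifteen color classes used by the DCPCM model (order follows C_SET in this disclosure).
CSET = ["red", "green", "blue", "yellow", "cyan", "magenta", "orange",
        "yellow-green", "green-cyan", "cyan-blue", "violet", "pink",
        "grey", "white", "black"]

def classify_pixel(r, g, b):
    """Map one RGB pixel (0-255 per channel) to one of the 15 DCPCM classes.

    The neutral-color checks and hue boundaries below are illustrative
    assumptions; the disclosure does not specify exact thresholds.
    """
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if v < 0.15:
        return "black"
    if s < 0.10:
        return "white" if v > 0.85 else "grey"
    hue = h * 360.0
    # Twelve chromatic classes, roughly 30 degrees of hue each (assumed split).
    bands = ["red", "orange", "yellow", "yellow-green", "green", "green-cyan",
             "cyan", "cyan-blue", "blue", "violet", "magenta", "pink"]
    return bands[int(hue // 30) % 12]
```

For example, under these assumed thresholds, classify_pixel(255, 140, 0) falls in the "orange" band.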
The proposed DCPCM technologies of the present disclosure relate to pixel-by-pixel, Average Windowing (AVW), and Pixel Skip (PXS) methodologies, which are respectively based on per-pixel, averaging, and skipping techniques, and which add an extra block to the Mask Region-based Convolutional Neural Network (Mask R-CNN), leading to a decrease in time and an increase in accuracy. Further, a comparison of the AVW and PXS methodologies with existing clustering techniques, such as K-means and Fuzzy C-means, is presented in the later sections. The accuracy and the average time taken for the color prediction are 95.4% and 9.2 seconds respectively for the PXS methodology, and 93.6% and 6.5 seconds respectively for the AVW methodology. Hence, the methodologies of the present disclosure provide superior results compared to previously known clustering algorithms.
Figure 1 illustrates an operational exemplary flow diagram for dominant color prediction in an image, according to various embodiments of the present invention.
Specifically, the flow diagram illustrates the workflow of various embodiments, such as the AVW and PXS methodologies, that predict the object class along with its color property using the Mask R-CNN framework. As may be appreciated, the use of Mask R-CNN is merely exemplary, and other frameworks such as, but not limited to, CNN, R-CNN, and Faster R-CNN may also be used. In operation, when the proposed methodology is performed, an image 102 may be assigned or input to the system 100 for dominant color prediction in the image. Further, the image 102 may be presented in an RGB model at 104. It may be apparent that the RGB model is exemplary, and various other models may also be used to represent the image 102. The image 102 may subsequently be converted to grey scale using a standard grey scale converter 106, and stored in the form of a matrix.
Using the grey scale image, an object detection function is triggered using an exemplary model, such as Mask R-CNN 108. It may be apparent that other models may be used in place of the Mask R-CNN model 108. As an output of the Mask R-CNN model 108, the objects along with other parameters such as, but not limited to, bounding boxes and mask-color may be predicted. If objects are detected in the image 102, then the proposed methodologies store, at 110, the location of each object, i.e., its bounding box coordinates along with its masked portion. Such data may be stored within a memory.
Further, at 112, an edge detection model may be employed on the data stored in the memory at 110. An exemplary edge detection model may be the Canny edge detector. In addition, at 114, further pre-processing may be performed, such as calculation of a window size and/or skip size based on a uniformity factor obtained from the Canny edge detector for the AVW and PXS techniques discussed later herein. At 116, the various exemplary embodiments related to DCPCM models may be implemented to predict the dominant color 118 in the image 102. As an output of the processing of the DCPCM models, an image 120 with the dominant colors may be output. The detailed operation of the various embodiments of the DCPCM models is discussed in conjunction with subsequent figures, as discussed hereinbelow.
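As a further non-limiting illustration of the workflow of Figure 1, the following Python sketch outlines the overall flow under stated assumptions: detect_objects is a placeholder for a Mask R-CNN (or similar) detector returning bounding boxes, binary masks, and labels, and dcpcm_for_object is a placeholder for one of the DCPCM routines (AVW, PXS, or all-pixel); neither is a real library call. OpenCV is used for the grey-scale conversion and Canny edge detection.

```python
import cv2
import numpy as np

def predict_dominant_colors(image_bgr, detect_objects, dcpcm_for_object):
    """Illustrative end-to-end flow of Figure 1 (sketch only).

    detect_objects(image) -> list of ((x1, y1, x2, y2), mask, label)   [assumed]
    dcpcm_for_object(image, box, mask, size) -> color class name       [assumed]
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)          # grey-scale matrix [Gscale]
    results = []
    for (x1, y1, x2, y2), mask, label in detect_objects(image_bgr):
        crop = gray[y1:y2, x1:x2].copy()
        crop[~mask[y1:y2, x1:x2]] = 255                          # whiten unmasked pixels
        edges = cv2.Canny(crop, 100, 200)                        # uniformity estimate (block 114)
        apw = 100.0 * np.count_nonzero(edges) / edges.size       # percentage of edge pixels
        size = max(1, int(round(100.0 / apw))) if apw > 0 else crop.size
        results.append((label, dcpcm_for_object(image_bgr, (x1, y1, x2, y2), mask, size)))
    return results
```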
Figure 2 illustrates a schematic block diagram of a system for dominant color prediction in an image, according to various embodiments of the present invention.
In an embodiment, the system 200 may be configured to determine the dominant color in an image. The system 200 may correspond to a server that processes image data, in accordance with various embodiments of the present invention. The system 200 may include at least one processor 202, an Input/Output (I/O) interface 204, a transceiver 206, and a memory 208. The memory 208 may further include a color matrix module 210, a processing module 212, a dominant color prediction module 214, and a database 216. In an alternative embodiment, the processor 202 may be a controller.
The processor 202 may include at least one data processor for executing processes in Virtual Storage Area Network. The processor 202 may include specialized processing units such as, integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc.
The processor 202 may be disposed in communication with one or more input/output (I/O) devices via the I/O interface 204 and the transceiver 206. The I/O interface 204 may employ communication protocols/methods such as, without limitation, audio, analog, digital, monaural, RCA, stereo, IEEE-1394, serial bus, universal serial bus (USB), infrared, PS/2, BNC, coaxial, component, composite, digital visual interface (DVI), high-definition multimedia interface (HDMI), radio frequency (RF) antennas, S-Video, VGA, IEEE 802.11a/b/g/n/x, Bluetooth, cellular (e.g., code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMax, or the like), etc.
Using the I/O interface 204, the system 200 may communicate with one or more I/O devices. For example, the input device may be an antenna, keyboard, mouse, joystick, (infrared) remote control, camera, card reader, fax machine, dongle, biometric reader, microphone, touch screen, touchpad, trackball, stylus, scanner, storage device, transceiver, video device/source, etc. The output devices may be a printer, fax machine, video display (e.g., cathode ray tube (CRT), liquid crystal display (LCD), light-emitting diode (LED), plasma, Plasma Display Panel (PDP), Organic light-emitting diode display (OLED) or the like), audio speaker, etc.
The processor 202 may be disposed in communication with a communication network via a network interface. The network interface may be the I/O interface 204. The network interface may connect to a communication network and may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc. The communication network may include, without limitation, a direct interconnection, a local area network (LAN), a wide area network (WAN), a wireless network (e.g., using Wireless Application Protocol), the Internet, etc. Using the network interface and the communication network, the system 200 may communicate with one or more devices to receive image-based input data for processing and to transmit output data such as color and object detection data.
In some embodiments, the memory 208 may be communicatively coupled to the at least one processor 202. The memory 208 stores data and instructions executable by the at least one processor 202. In one embodiment, the memory 208 may include one or more modules 210-214 and a database 216 to store data. The one or more modules 210-214 may be configured to perform the steps of the present disclosure using the image data stored in the database 216 or received at the I/O interface 204, to predict color and identify objects in the image. In an embodiment, each of the one or more modules 210-214 may be a hardware unit which may be outside the memory 208.
Among other capabilities, the processor 202 may be configured to fetch and execute computer-readable instructions and data stored in the memory 208. The memory 208 may include any non-transitory computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), and/or non-volatile memory, such as read-only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes.
The modules 210-214, amongst other things, include routines, programs, objects, components, data structures, etc., which perform particular tasks or implement data types. The modules 210-214 may also be implemented as signal processor(s), state machine(s), logic circuitries, and/or any other device or component that manipulates signals based on operational instructions. Further, the modules 210-214 may be implemented in hardware, in instructions executed by at least one processing unit (for example, the processor 202), or by a combination thereof. The processing unit may comprise a computer, a processor, a state machine, a logic array, and/or any other suitable device capable of processing instructions. The processing unit may be a general-purpose processor which executes instructions to cause the general-purpose processor to perform operations, or the processing unit may be dedicated to performing the required functions. In some example embodiments, the module(s) 210-214 may be machine-readable instructions (software, such as a web application, mobile application, program, etc.) which, when executed by a processor/processing unit, perform any of the functionalities related to color prediction, as discussed throughout this disclosure.
In one embodiment, the color matrix module 210 may be configured to convert the image into grayscale and to store the data of the image in the form of a matrix [Gscale]. Further, the color matrix module 210 may be configured to classify the Hue Saturation Value (HSV) color space into one or more classes of colors for the image. In one embodiment, the HSV color space may be classified into fifteen classes of colors for the image. Each of these fifteen classes of colors corresponds to the one or more colors. For each of these colors of the image, the color matrix module 210 may be configured to initialize a color matrix comprising an occurrence count value.
In a first embodiment, the processing module 212 may be configured to iterate through the image and store the pixels in a list [L] from the original image data that contribute to the masked portion of the masked image data determined through the Mask R-CNN framework. Further, in the same embodiment, the processing module 212 may be configured to identify a window of size f. This window slides through the pixels and predicts the average of the RGB channels, at each stop. If the window size is too high, the average of the values may get disrupted and the prediction may tend to lose accuracy, whereas if the window size is too small, the time complexity does not vary much as compared to a pixel-by-pixel approach. Hence, the value of f should be chosen precisely. The value of f may vary according to the uniformity factor of the image, i.e., the value of f increases with an increase in the uniformity factor and vice-versa. A matrix may be maintained for all colors, and the count is increased for each color at every averaging step. Finally, once the complete object is iterated, the processing module 212 may be configured to determine the dominant color based on highest occurrence count in the maintained matrix.
In a second embodiment, the processing module 212 may be configured to predict the dominant color of an image object by selectively skipping the pixels based on a uniformity factor of the object. This algorithm consumes less time compared to an approach where the dominant color is predicted based on iterating the process pixel-by-pixel for the complete object. The object identification and a determination of masked data for an image may be performed using Mask R-CNN framework. Specifically, in the same embodiment, to predict the dominant color, the processing module 212 may be configured to compare two pixels of a current object that are inside the masked portion and separated by (S-2) pixels. If the compared pixels are of the same color, then all the (S-2) pixels are skipped and are assumed to be the same color as their corner pixels. Otherwise, the (S-2) pixels are skipped and not considered for predicting the dominant color. The accuracy and time complexity of the process/model may be dependent on the S factor. A matrix is maintained for all colors, and the count is increased for each color at every comparison step. Finally, once the complete object is iterated, the processing module 212 may be configured to determine the dominant color based on highest occurrence count in the maintained matrix.
If the S factor increases, then the number of pixels skipped in the object increases, which may lead to a decrease in accuracy. If the S factor decreases, then the time difference between a pixel-by-pixel (or all-pixel) approach and the pixel skip methodology is minimal. Hence, the value of S should be chosen precisely. The value of S is predicted using the uniformity approach mentioned previously. The complete step-by-step process is discussed in conjunction with Figures 4A and 4B.
In a third embodiment, the processing module 212 may be configured to predict the dominant color of an image object based on iterating pixel-by-pixel within a bounding box and masked data corresponding to a detected object in an image. The object identification and a determination of masked data for the image may be performed using Mask R-CNN framework. Specifically, in the same embodiment, to predict the dominant color, the processing module 212 may be configured to update, for each pixel of the bounding box coordinates of the object, the occurrence count value of the color, of the one or more colors, in the color matrix in response to determining that the respective pixel of the masked portion is equal to the masked color. A matrix may be maintained for all colors, and the count is increased for each color at every iteration step. Finally, once the complete object is iterated, the processing module 212 may be configured to determine the dominant color based on highest occurrence count in the maintained matrix.
In various embodiments, the processing module 212 may be configured to convert the non-masked pixels of the object to a single RGB value. For example, the single RGB value may be white. Thereby, only the masked pixels contribute to the uniformity estimate.
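A minimal sketch of this conversion is given below, assuming the object crop is an H x W x 3 RGB array and the mask is an H x W boolean array; the function and variable names are illustrative.

```python
import numpy as np

def whiten_unmasked(object_crop, object_mask):
    """Set every pixel outside the object mask to white (255, 255, 255).

    object_crop is the H x W x 3 RGB crop of the bounding box and
    object_mask is the corresponding H x W boolean mask (assumed shapes).
    """
    out = object_crop.copy()
    out[~object_mask] = 255          # non-masked pixels become a single RGB value
    return out
```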
In one embodiment, the dominant color prediction module 214 may be configured to determine the dominant color, of the one or more colors, of the image object based on the maximum count of the occurrence count value in the matrix.
Figure 3 illustrates an exemplary average windowing method 300 for dominant color prediction in an image, according to an embodiment of the present invention. In one embodiment, the method may include predicting the dominant color in an image as an extra block to a Mask R-CNN framework. Accordingly, the dominant color may be predicted using the detailed method provided below, in addition to the known steps executed for object detection using Mask R-CNN framework, as also illustrated in Figure 1. In other words, the mask output and the object detection is performed using the Mask R-CNN framework. This acts as an input for color detection using the detailed embodiments related to AVW, PXS, etc. throughout this disclosure. From the masked data, pixels are extracted and further processing is performed using the embodiments discussed herein for color prediction. As mentioned previously, using Mask R-CNN is merely exemplary, and other frameworks such as, but not limited to, CNN, R-CNN, and Faster R-CNN may also be used.
At step 302, the method includes converting the image into grayscale and storing data of an image in the form of a matrix [Gscale].
At step 304, the method includes classifying Hue Saturation Value (HSV) color space into one or more classes of colors for the image. In one embodiment, the HSV color space may be classified into fifteen classes of colors for the image. Each of these fifteen classes of colors correspond to the one or more colors.
At step 306, the method includes initializing a color matrix comprising an occurrence count value for each of one or more colors (or classes of colors) of the image. In one embodiment, where the HSV color space is classified into fifteen colors, [A]1*15 may be the 1-D array where each index corresponds to the respective color class.
At step 308, the method includes iterating through the image and storing the pixels in a list [L] from the original image data that contribute to the masked portion of the masked image data determined through the Mask R-CNN framework. Each of the stored pixels from the original image data correspond to the pixels inside the bounding box coordinates of a current object identified through the Mask R-CNN framework. In one embodiment, the list [L] may correspond to:
[L] = Io[r][c] if Im[r][c] = [Mc], ∀ {r ∈ W, y1 ≤ r ≤ y2} and {c ∈ W, x1 ≤ c ≤ x2}
wherein the definition of each symbol is provided herein below:
Symbol | Description
m | Length of the original image.
n | Width of the original image.
[Mc]1*3 | The masked color of the original image. This notation implies that Mc is a matrix of dimension 1*3 (since it has 3 channels, i.e., R, G, B).
[Io]m*n | Original image data. This notation implies that Io is a matrix of size m*n.
[Im]m*n | Masked image data. This notation implies that Im is a matrix of size m*n.
r | Variable that iterates through the rows of [Io] and [Im].
c | Variable that iterates through the columns of [Io] and [Im].
W | The set of whole numbers.
Table 1
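By way of illustration, the list [L] of the equation above may be built as in the following sketch, where Io and Im are the original and masked image arrays, Mc is the masked color, and box holds the bounding box coordinates (x1, y1, x2, y2); the function name is illustrative.

```python
import numpy as np

def masked_pixel_list(Io, Im, Mc, box):
    """Collect the original-image pixels inside the bounding box whose
    masked-image counterpart equals the masked color [Mc] (the list [L])."""
    x1, y1, x2, y2 = box
    L = []
    for r in range(y1, y2 + 1):
        for c in range(x1, x2 + 1):
            if np.array_equal(Im[r][c], Mc):
                L.append(Io[r][c])
    return L
```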
At step 310, the method includes performing edge detection on the stored pixel data using the gray scale image of the original image, and selecting a window size "f" based on a uniformity factor of the object. For a specific object, the object bounding box is extracted from the gray scale image, and the unmasked pixels are changed to a uniform color (i.e., white). This matrix is used as an input for edge detection. In one exemplary embodiment, the edge detection may be performed using the Canny edge detector framework. This edge detection framework works only on grayscale images, and outputs a binary matrix with the same dimensions as the original image, indicating whether the respective pixel contributes to an edge or not. The average pixel width (APW) is the average of the entire binary matrix. The APW is more likely to give the percentage of non-uniformity in an image.
At step 312, the method includes identifying at least one window of pixels of the object, wherein each of the at least one window corresponds to the window size f.
At step 314, the method includes computing, for each of the at least one window, an average of the RGB channels in the window of the pixels. This window slides through the pixels and predicts the average of the RGB channels at each stop, i.e., for the selected window. If the window size is too high, the average of the values may get disrupted and the prediction may tend to lose accuracy, whereas if the window size is too small, the time complexity does not vary much as compared to the all-pixel approach, where the color occurrence for each color is counted pixel-by-pixel for the object. Hence, the value of f should be chosen precisely. The value of f can vary according to the uniformity factor of the image, i.e., the value of f increases with an increase in the uniformity factor and vice-versa.
In one exemplary embodiment, the value of f may be predicted in the following manner. As stated earlier, the APW of the current object is more likely to predict the uniformity of an object.
If APW = 0, then the object may be presumed to be 100% uniform.
If APW = 100, then the object may be presumed to be 100% non-uniform.
Let k be the number of pixels contributing to edges, wherein k = (APW/100) · T and T is the total number of pixels.
Assuming that the k edge pixels are uniformly distributed across the object (average case), and considering that each window has one pixel contributing to an edge (assuming that this one value does not disrupt the average value), then f = T/k.
Thus, f = T / ((APW/100) · T),
and on simplification, f = 100/APW.
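A non-limiting sketch of this selection of f from the Canny output is shown below; the Canny thresholds and the handling of the fully uniform case (APW = 0) are illustrative assumptions.

```python
import cv2
import numpy as np

def window_size_from_uniformity(gray_object_crop):
    """Estimate the AVW window size f = 100/APW for one object crop (sketch).

    The crop is assumed to be a grayscale image in which the unmasked pixels
    were already set to a uniform value (white); thresholds are illustrative.
    """
    edges = cv2.Canny(gray_object_crop, 100, 200)
    apw = 100.0 * np.count_nonzero(edges) / edges.size    # percentage of edge pixels
    if apw == 0:                                           # fully uniform object
        return gray_object_crop.size                       # a single window suffices
    return max(1, int(100.0 / apw))
```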
In one embodiment, the representation of average of the HSV values for a particular window may correspond to:
[h, s, v] = (1/f) · Σ_{i=j}^{j+f-1} R2H(L[i]); j = n·f, n ∈ [0, T/f)
wherein h, s, and v are the Hue, Saturation, and Value components,
f = window size,
R2H = function that converts an RGB value to HSV,
L = list containing the masked pixels, and
T = size of the list L, i.e., the total number of masked pixels.
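The windowed averaging may be sketched as follows, with each entry of L assumed to be an (R, G, B) triple in the 0-255 range; in this sketch the RGB channels are averaged first and the result is then converted to HSV using colorsys.rgb_to_hsv as a stand-in for R2H.

```python
import colorsys

def window_averages(L, f):
    """Yield the average color of each size-f window over the masked-pixel list L.

    Each entry of L is assumed to be an (R, G, B) triple in 0-255. The RGB
    channels are averaged over the window and then converted to HSV.
    """
    for j in range(0, len(L) - f + 1, f):
        window = L[j:j + f]
        r = sum(p[0] for p in window) / f
        g = sum(p[1] for p in window) / f
        b = sum(p[2] for p in window) / f
        yield colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
```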
At step 316, the method includes updating the occurrence count value for each of the one or more colors based on the computed average. Accordingly, the count for each of the one or more colors may be updated in the matrix [A] based on repetition of step 314 until the complete object is covered by the sliding window. In one embodiment, the mathematical representation of [A] and the resultant array [K] for the AVW method may correspond to:
[A] = Σ_{r=y1}^{y2} Σ_{c=x1}^{x2} [K], if Im[r][c] = [Mc]
K[r][c][i] = { 1, if C_SET[i] = C_P(h, s, v); 0, if C_SET[i] ≠ C_P(h, s, v) }, ∀ i ∈ [0, 15)
Wherein [K]m*n*15 is the matrix representing the result of the DCPCM model for the respective pixel. The number 15 in the notation represents the fifteen predetermined color classes of the HSV color space.
Further, C_P denotes the color-prediction function with parameters h, s, and v. It takes the values of h, s, and v as input, and returns the color corresponding to the pixel based on the DCPCM models as discussed herein. Also, C_SET is an array that contains all the color classes. In particular, C_SET = [“red”, “green”, “blue”, “yellow”, “cyan”, “magenta”, “orange”, “yellow-green”, “green-cyan”, “cyan-blue”, “violet”, “pink”, “grey”, “white”, “black”].
At step 318, the method includes determining the dominant color, of the one or more colors, of the object based on maximum count of the occurrence count value. In one embodiment, the extraction of dominant color from the complete set of colors is represented as:
Dc = C_SET[i], where A[i] > A[j], ∀ ((j ∈ [0, 15)) ∩ (j ≠ i))
Wherein Dc is the dominant color of the object in the image.
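Steps 316 and 318 may then be sketched as below, reusing the window_averages generator from the preceding sketch; classify_hsv is a stand-in for the color-prediction function C_P.

```python
from collections import Counter

def dominant_color_avw(L, f, classify_hsv, window_averages):
    """Sketch of steps 316-318: count one occurrence per window and return
    the color class with the maximum occurrence count (Dc)."""
    A = Counter()                                  # occurrence counts, one entry per color class
    for h, s, v in window_averages(L, f):          # average color of each size-f window
        A[classify_hsv(h, s, v)] += 1
    return A.most_common(1)[0][0] if A else None
```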
Figures 4A and 4B illustrate an exemplary pixel skip (PXS) method 400 for dominant color prediction in an image, according to an embodiment of the present invention. In the PXS method 400, pixels may be selectively skipped based on a uniformity factor of the object. Accordingly, the PXS method requires less time than methods that iterate through all the pixels of a particular image, without significantly affecting accuracy.
At step 402, the method includes converting the image into grayscale and storing data of an image in the form of a matrix [Gscale].
At step 404, the method includes classifying Hue Saturation Value (HSV) color space into one or more classes of colors for the image. In one embodiment, the HSV color space may be classified into fifteen classes of colors for the image. Each of these fifteen classes of colors correspond to the one or more colors.
At step 406, the method includes initializing a color matrix comprising an occurrence count value for each of one or more colors (or classes of colors) of the image. In one embodiment, where the HSV color space is classified into fifteen colors, [A]1*15 may be the 1-D array where each index corresponds to the respective color class.
At step 408, the method includes iterating through the image and storing the pixels in a list [L] from the original image data that contributes to the masked portion of the masked image data determined through the Mask R-CNN framework. Each of the stored pixels from the original image data correspond to the pixels inside the bounding box coordinates of a current object identified through the Mask R-CNN framework. In one embodiment, the list [L] may correspond to:
[L] = Io[r][c] if Im[r][c] = [Mc], ∀ {r ∈ W, y1 ≤ r ≤ y2} and {c ∈ W, x1 ≤ c ≤ x2}
wherein each symbol is as defined in Table 1 above.
At step 410, the method includes performing edge detection and selecting a pixel skip factor 'S' for the object. In one embodiment, the edge detection may be performed on the stored pixel data using the gray scale image of the original image. For a specific object, the object bounding box is extracted from the gray scale image, and the unmasked pixels are changed to a uniform color (i.e., white). This matrix is used as an input for edge detection. In one exemplary embodiment, the edge detection may be performed using the Canny edge detector framework. The pixel skip factor may also be denoted as the threshold number of pixels to be skipped. In one embodiment, the number of pixels skipped may be considered to be (S-2).
This edge detection framework works only on grayscale images, and outputs a binary matrix with the same dimensions as the original image, indicating whether the respective pixel contributes to an edge or not. The average pixel width (APW) is the average of the entire binary matrix. The APW is more likely to give the percentage of non-uniformity in an image.
An exemplary embodiment to predict the value of pixel skip factor S is detailed herein below. Similar to AVW methodology discussed in conjunction with Fig. 3, “k” is considered to be the number of edges in the image and the edges are considered to be uniformly distributed throughout the image (average case). Unlike AVW, rather than taking the average of the entire window, the pixels inside the window (between edge pixels) are assumed to be of similar color, since no edge in between implies that the pixels are uniform.
Hence, S=100/APW.
This exemplary equation for S may be derived in a manner similar to the derivation of f discussed with respect to Figure 3. Since (S-2) is the number of pixels to be skipped, the value of (S-2) should be a non-negative integer. Hence, considering the corner cases,
(S-2) = max(0, 100/APW - 2)
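A corresponding sketch for selecting S is given below; as with f, the Canny thresholds are illustrative, and the clamping that keeps (S - 2) non-negative is one plausible reading of the corner case noted above rather than a formula stated verbatim herein.

```python
import cv2
import numpy as np

def skip_factor_from_uniformity(gray_object_crop):
    """Estimate the pixel-skip factor S from the object's uniformity (sketch)."""
    edges = cv2.Canny(gray_object_crop, 100, 200)
    apw = 100.0 * np.count_nonzero(edges) / edges.size   # percentage of edge pixels
    if apw == 0:                                          # fully uniform object
        return gray_object_crop.shape[1]                  # skip up to a full row width
    return max(2, int(100.0 / apw))                       # keeps (S - 2) >= 0
```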
At step 412, the method includes identifying the object within the image for predicting the dominant color of the object, wherein the object is enclosed by bounding box coordinates. In one exemplary embodiment, the object may be identified using the Mask R-CNN framework.
At step 414, the method includes selecting two pixels from the bounding box coordinates of the object, the two pixels being separated by the factor ‘S’ or the threshold number of pixels, wherein the threshold number of pixels is based on a uniformity factor of the object.
At step 416, the method includes comparing the two pixels separated by the threshold number of pixels to determine whether the two pixels are of the same color.
At step 418, the method includes updating the occurrence count value of a color of the one or more colors in the color matrix, corresponding to the color of the two pixels, when the two pixels are of the same color.
At step 420, the method includes selecting next two pixels separated by the threshold number of pixels, wherein the selecting the next two pixels comprises skipping the pixels corresponding to the threshold number of pixels between the two pixels when the two pixels are of the same color.
In an exemplary embodiment, let p be the current pixel. If [p] = [Mc] and C_P(p) = C_P(p - S + 1), the middle pixels may be skipped, assuming that they are of the same color, and the count of the respective color is incremented by S. If C_P(p) ≠ C_P(p - S + 1), the pixels between them may be skipped without being considered for color prediction. Otherwise, the current pixel may be discarded, and the process moves to the next pixel to repeat the same step. The mathematical representation of [A] and [K] for PXS may correspond to:
[A] = Σ_{r=y1}^{y2} Σ_{c=x1+n·S}^{x2} [K], if Im[r][c] = [Mc]; n ∈ [0, (x2-x1)/S)
K[r][c][i] = { S, if (C_SET[i] = C_P(h, s, v)) and (K[r][c] = K[r][c-S+1]);
1, if (C_SET[i] = C_P(h, s, v)) and (K[r][c] ≠ K[r][c-S+1]);
0, if C_SET[i] ≠ C_P(h, s, v) }, ∀ i ∈ [0, 15)
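The pixel-skip iteration may be sketched as follows. classify_rgb stands in for R2H followed by C_P, Io, Im, Mc, and box are as defined in Table 1, and S is the pixel skip factor. When the two bounding pixels differ in color, this sketch discards them in line with claim 5; the [K] expression above alternatively counts the current pixel once in that case.

```python
from collections import Counter
import numpy as np

def dominant_color_pxs(Io, Im, Mc, box, S, classify_rgb):
    """Pixel-skip (PXS) sketch: compare the two pixels bounding each run of
    (S - 2) skipped pixels and update the occurrence counts (steps 414-424)."""
    x1, y1, x2, y2 = box
    counts = Counter()
    for r in range(y1, y2 + 1):
        c = x1 + S - 1
        while c <= x2:
            if np.array_equal(Im[r][c], Mc) and np.array_equal(Im[r][c - S + 1], Mc):
                color_p = classify_rgb(*Io[r][c])
                color_q = classify_rgb(*Io[r][c - S + 1])
                if color_p == color_q:
                    counts[color_p] += S        # skipped pixels assumed to share this color
                # otherwise the in-between pixels are skipped and not counted
            c += S
    return counts.most_common(1)[0][0] if counts else None
```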
At step 422, the method includes, in response to selecting the next two pixels, repeating the steps of comparing, updating, and selecting for the pixels inside the bounding box coordinates and the masked image data.
At step 424, the method includes determining the dominant color, of the one or more colors, of the image object based on maximum count of the occurrence count value. In one embodiment, the extraction of dominant color from the complete set of colors is represented as:
Dc = C_SET[i], where A[i] > A[j], ∀ ((j ∈ [0, 15)) ∩ (j ≠ i)).
Figure 5 illustrates yet another exemplary process flow 500 for dominant color prediction in an image using an all-pixel or pixel-by-pixel approach, according to an embodiment of the present invention. The all-pixel approach, which considers all the pixels in the current object image to find the dominant color, has a higher execution time for predicting the output as compared to the previously discussed AVW and PXS embodiments.
At step 502, the method includes taking as an input the object detected using an exemplary framework, as discussed previously.
At step 504, the method includes classifying Hue Saturation Value (HSV) color space into one or more classes of colors for the image. In one embodiment, the HSV color space may be classified into fifteen classes of colors for the image. Each of these fifteen classes of colors correspond to the one or more colors.
At step 506, the method includes initializing a color matrix comprising an occurrence count value for each of one or more colors (or classes of colors) of the image. In one embodiment, where the HSV color space is classified into fifteen colors, [A]1*15 may be the 1-D array where each index corresponds to the respective color class.
At step 508, the method includes iterating through the image and storing the pixels in a list [L] from the original image data that contribute to the masked portion of the masked image data determined through the Mask R-CNN framework. Each of the stored pixels from the original image data correspond to the pixels inside the bounding box coordinates of a current object identified through the Mask R-CNN framework. In one embodiment, the list [L] may correspond to:
[L] = Io[r][c] if Im[r][c] = [Mc], ∀ {r ∈ W, y1 ≤ r ≤ y2} and {c ∈ W, x1 ≤ c ≤ x2}
wherein the definition of each symbol is provided herein below:
| Symbol | Description |
|---|---|
| m | Length of the original image. |
| n | Width of the original image. |
| [M_c]_{1×3} | The masked color of the original image. This notation implies that M_c is a matrix of dimension 1×3 (since it has three channels, i.e., R, G, B). |
| [I_o]_{m×n} | Original image data. This notation implies that I_o is a matrix of size m×n. |
| [I_m]_{m×n} | Masked image data. This notation implies that I_m is a matrix of size m×n. |
| r | Variable that iterates through the rows of [I_o] and [I_m]. |
| c | Variable that iterates through the columns of [I_o] and [I_m]. |
| W | Whole numbers. |
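The collection of contributing pixels at step 508 may, for instance, be realised with boolean masking in NumPy. The sketch below assumes I_o and I_m are available as H×W×3 arrays; the function and variable names are illustrative only.

```python
import numpy as np

def masked_object_pixels(original, mask, mask_color, box):
    """Collect original-image pixels that belong to the masked portion of the
    current object, a minimal sketch of building the list [L].

    original, mask: H x W x 3 arrays (I_o and I_m); mask_color: the [M_c] RGB
    triple painted by the Mask R-CNN framework; box: (x1, y1, x2, y2)
    bounding-box coordinates.
    """
    x1, y1, x2, y2 = box
    roi_o = original[y1:y2 + 1, x1:x2 + 1]
    roi_m = mask[y1:y2 + 1, x1:x2 + 1]
    hit = np.all(roi_m == np.asarray(mask_color), axis=-1)  # I_m[r][c] == [M_c]
    return roi_o[hit]                                        # the list [L]
```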
At step 510, the method includes identifying the object within the image for predicting the dominant color of the object, wherein the object is enclosed by bounding box coordinates and is associated with a masked portion.
At step 512, the method includes updating, for each pixel of the bounding box coordinates of the object, the occurrence count value of the color, of the one or more colors, in the color matrix in response to determining that the respective pixel of the masked portion is equal to the masked color. As illustrated above, there may be two different matrices corresponding to the image: one matrix may include the pixels of the original image (I_o) and the other matrix may include the pixels of the masked image (I_m). Hence, while iterating through the bounding box coordinates, it may be verified whether the pixel in the I_m matrix is equal to the masked color. If so, the RGB value of the corresponding pixel (lying in the same row and column) of I_o may be considered for color prediction.
In one embodiment, the mathematical representation of [A] and [K] for the all-pixel (pixel-by-pixel) approach is provided below. At this stage, the pixel value is converted from RGB to HSV color space using R2H(r, g, b).
$$[A] = \sum_{r=y_1}^{y_2}\ \sum_{c=x_1}^{x_2} K[r][c], \quad \text{if } I_m[r][c] = [M_C]$$

$$K[r][c][i] = \begin{cases} 1, & C_{SET}[i] = C_P(h,s,v) \\ 0, & C_{SET}[i] \neq C_P(h,s,v) \end{cases} \quad \forall\, i \in [0, 15), \ \text{ where } (h, s, v) = R2H(I_o[r][c]).$$
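Purely for illustration, this all-pixel accumulation may be sketched as follows. Here predict_color stands in for the C_P function (discussed below), colorsys.rgb_to_hsv approximates R2H, and all names are assumptions rather than the claimed implementation.

```python
import colorsys
import numpy as np

def all_pixel_counts(original, mask, mask_color, box, predict_color, n_classes=15):
    """All-pixel (pixel-by-pixel) accumulation of the color matrix [A], a sketch.

    predict_color(h, s, v) stands in for C_P and returns a class index in
    [0, n_classes). original and mask are H x W x 3 uint8 arrays (I_o, I_m).
    """
    x1, y1, x2, y2 = box
    counts = np.zeros(n_classes, dtype=int)
    for r in range(y1, y2 + 1):
        for c in range(x1, x2 + 1):
            if not np.array_equal(mask[r, c], mask_color):
                continue                                   # not in the masked portion
            rr, gg, bb = original[r, c] / 255.0
            h, s, v = colorsys.rgb_to_hsv(rr, gg, bb)      # R2H(r, g, b)
            counts[predict_color(h, s, v)] += 1            # K[r][c][i] contribution
    return counts
```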
Further, C_P denotes the color-prediction function with parameters h, s, and v. It takes the values of h, s, and v as input and returns the color class corresponding to the pixel based on the DCPCM models discussed herein. Also, C_SET is an array that contains all the color classes; in particular, C_SET = ["red", "green", "blue", "yellow", "cyan", "magenta", "orange", "yellow-green", "green-cyan", "cyan-blue", "violet", "pink", "grey", "white", "black"].
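By way of a hedged illustration only, a stand-in for C_P might map an HSV triple onto C_SET using simple hue sectors and saturation/value cut-offs. The thresholds below are arbitrary demonstration values and are not the DCPCM models referred to in this description.

```python
def predict_color_illustrative(h, s, v):
    """Illustrative stand-in for C_P: maps (h, s, v) to an index into C_SET.

    The hue-sector boundaries and the saturation/value cut-offs are assumed
    demonstration values only, NOT the DCPCM model. h is in [0, 1); s, v in [0, 1].
    """
    C_SET = ["red", "green", "blue", "yellow", "cyan", "magenta", "orange",
             "yellow-green", "green-cyan", "cyan-blue", "violet", "pink",
             "grey", "white", "black"]
    if v < 0.15:                                  # very dark pixels
        return C_SET.index("black")
    if s < 0.15:                                  # near-achromatic pixels
        return C_SET.index("white") if v > 0.8 else C_SET.index("grey")
    deg = (h * 360.0) % 360.0
    sectors = [                                   # (upper hue bound, class) - assumed
        (15, "red"), (45, "orange"), (75, "yellow"), (105, "yellow-green"),
        (135, "green"), (165, "green-cyan"), (195, "cyan"), (225, "cyan-blue"),
        (255, "blue"), (285, "violet"), (315, "magenta"), (345, "pink"),
        (360, "red"),
    ]
    for bound, name in sectors:
        if deg < bound:
            return C_SET.index(name)
    return C_SET.index("red")
```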
At step 516, the method includes determining the dominant color, of the one or more colors, of the image object based on maximum count of the occurrence count value. In one embodiment, the extraction of dominant color from all the colors is represented as:
$$D_c = C_{SET}[i], \quad \text{where } A[i] > A[j]\ \ \forall\, j \in [0, 15),\ j \neq i.$$
Figures 6A, 6B, and 6C illustrate exemplary use cases for dominant color prediction in images, according to various embodiments of the present invention. Fig. 6A depicts identification of a plurality of vases within an image. As depicted, the dominant color of each of the three vases may be predicted using any of the three embodiments discussed above; in the current example, the three vases may be identified as a green-cyan, a yellow-green, and an orange vase. Similarly, Fig. 6B depicts identification of a yellow-green apple and a yellow banana in another image using the various embodiments of the present invention. Further, Fig. 6C depicts identification of three differently colored keyboards using the embodiments of the present invention.
Specifically, in Figs. 6A-6C, object color prediction using the all-pixel, pixel-skipping, and average windowing methodologies results in object recognition and identification, wherein bounding boxes are drawn around each object, and the color and object name are displayed together above the bounding box. Further, the indicated decimal values are the confidence scores output by the Mask R-CNN framework for the detected objects. For example, in Fig. 6B, the value 0.996 for the banana indicates that the probability of the object being a banana is 0.996 (99.6%). Figures 6A-6C show the result of object recognition and identification using the AVW algorithm.
The present invention provides one or more technical advancements over previous frameworks used in a similar domain of technology. One of the key technical advancements is higher accuracy with lower time consumption for identification of dominant colors. Table 3 represents the approach-specific prediction accuracy of an object's color and the reduction in time, compared to the all-pixel approach, for each algorithm. Examination of the results in Table 3 reveals the following:
The AVW and PXS algorithms have higher accuracies compared to all other appraised algorithms.
The PXS algorithm is more accurate than each of the appraised clustering algorithms.
The AVW algorithm has the highest reduction in time along with decent prediction accuracy.
The negative values of reduction in time for the Mean shift and Fuzzy C-means algorithms indicate that they take longer than the all-pixel approach for the color prediction task.
| Algorithm | Color Prediction Accuracy (%) | Reduction in Time Compared to All-pixel (%) |
|---|---|---|
| AVW | 93.6 | 62 |
| PXS | 95.4 | 44.3 |
| K-means | 84.1 | 45.1 |
| Mini Batch K-means | 83.8 | 47.5 |
| Mean shift | 85.2 | -70.7 |
| Fuzzy C-means | 85.9 | -53.4 |
| Gaussian mixture model | 88 | 22.4 |

Table 3: Comparison of time reduction and accuracies of all the discussed algorithms
While specific language has been used to describe the present subject matter, no limitations arising on account thereof are intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein. The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment.