Abstract: METHODS AND SYSTEMS FOR SKEW CORRECTION IN IMAGES. Embodiments herein disclose techniques for skew correction in images. An electronic device is configured to output at least one image with rotated bounding boxes that bound text contents within the image. Angles between the orientation of the rotated bounding boxes and a horizontal axis are determined, and rotated images are obtained. Coordinates of one of the four corners of each of the rotated bounding boxes in the rotated images are computed. The rotated images are divided into a plurality of vertical segments. Box distributions of each of the rotated images are determined, and the number of rotated bounding boxes whose coordinates fall within each of the plurality of vertical segments is computed. The correctly de-skewed image is output based on the box distributions. FIG. 6
Claims: STATEMENT OF CLAIMS
I/We claim:
1. A method for correcting skew in at least one image, the method comprising:
outputting, by a text detection module (108), the at least one image with at least one rotated bounding box over at least one text content within the at least one image;
determining, by a skew correction module (110), a first angle and a second angle with respect to a horizontal axis;
obtaining, by the skew correction module (110), a first resultant image and a second resultant image by rotating the at least one image by the first angle and the second angle respectively;
computing, by the skew correction module (110), coordinates of one of four corners of each of the rotated bounding boxes in each of the first and second resultant images;
dividing, by the skew correction module (110), each of the first resultant image and the second resultant image into a plurality of vertical segments;
determining, by the skew correction module (110), a first box distribution and a second box distribution corresponding to the first resultant image and the second resultant image respectively; and
determining, by the skew correction module (110), a correct image from one of the first resultant image and the second resultant image using the first box distribution and the second box distribution.
2. The method as claimed in claim 1, comprising using, by the text detection module (108), a pre-trained machine learning-based object detector to determine the rotated bounding boxes.
3. The method as claimed in claim 1, wherein the text contents of the at least one image are one of left-aligned and right-aligned.
4. The method as claimed in claim 3, wherein, in the case of the text contents being left-aligned, determining, by the skew correction module (110), one of top left coordinates and bottom left coordinates of each of the rotated bounding boxes from each of the first and second resultant images.
5. The method as claimed in claim 3, wherein, in the case of the text contents being right-aligned, determining, by the skew correction module (110), one of top right coordinates and bottom right coordinates of each of the rotated bounding boxes from each of the first and second resultant images.
6. The method as claimed in claim 1, comprising obtaining, by the text detection module (108), the at least one image from an image testing dataset (107).
7. The method as claimed in claim 1, wherein the first angle is an angle between the rotated bounding boxes and the horizontal axis in an anticlockwise direction and the second angle is an angle between the rotated bounding boxes and the horizontal axis in a clockwise direction.
8. The method as claimed in claim 1, comprising obtaining, by the skew correction module (110), each of the first resultant image and the second resultant image with the rotated bounding boxes bounding the text contents.
9. The method as claimed in claim 1, comprising determining, by the skew correction module (110), a correct image by one of calculating a maximum number of the rotated bounding boxes in each of the plurality of vertical segments and calculating a minimum number of rotated bounding boxes in the first box distribution and the second box distribution.
10. The method as claimed in claim 1, comprising computing, by the skew correction module (110), a number of the rotated bounding boxes with the coordinates falling within each of the plurality of vertical segments in the first resultant image and the second resultant image.
11. An electronic device (100) for correcting skew in at least one image, the electronic device (100) comprising:
a processor (102) coupled to a memory (104);
a text detection module (108) and a skew correction module (110) executed by the processor (102), wherein the text detection module (108) is to:
output the at least one image with rotated bounding boxes that bound over text contents within the at least one image; and
wherein the skew correction module (110) is to:
determine a first angle and a second angle with a horizontal axis;
obtain a first resultant image and a second resultant image by rotating the at least one image by the first angle and second angle;
compute coordinates of one of four corners of each of the rotated bounding boxes in the first and second resultant images;
divide the first resultant image and the second resultant image into a plurality of vertical segments;
determine a first box distribution and a second box distribution corresponding to the first resultant image and the second resultant image respectively; and
determine a correct image from one of the first resultant image and the second resultant image using the first box distribution and the second box distribution.
12. The electronic device (100) as claimed in claim 11, wherein the text detection module (108) uses a pre-trained machine learning based object detector to calculate the rotated bounding boxes.
13. The electronic device (100) as claimed in claim 11, wherein the text contents of the at least one image is one of left-aligned and right-aligned.
14. The electronic device (100) as claimed in claim 13, wherein, in the case of the text contents being left-aligned, the skew correction module (110) is to determine one of top left coordinates and bottom left coordinates of each of the rotated bounding boxes from each of the first and second resultant images.
15. The electronic device (100) as claimed in claim 13, wherein, in the case of the text contents being right-aligned, the skew correction module (110) is to determine one of top right coordinates and bottom right coordinates of each of the rotated bounding boxes from each of the first and second resultant images.
16. The electronic device (100) as claimed in claim 11, wherein the text detection module is to obtain the at least one image from an image testing dataset (107).
17. The electronic device (100) as claimed in claim 11, wherein the first angle is an angle between the rotated bounding boxes and the horizontal axis in an anticlockwise direction and the second angle is an angle between the rotated bounding boxes and the horizontal axis in a clockwise direction.
18. The electronic device (100) as claimed in claim 11, wherein the skew correction module is to obtain each of the first resultant image and the second resultant image with the rotated bounding boxes bounding the text contents.
19. The electronic device (100) as claimed in claim 11, wherein the skew correction module (110) is to determine a correct image by one of calculating a maximum number of the rotated bounding boxes in each of the plurality of vertical segments and calculating a minimum number of rotated bounding boxes in the first box distribution and the second box distribution.
20. The electronic device (100) as claimed in claim 11, wherein the skew correction module is to compute a number of the rotated bounding boxes with the coordinates falling within each of the plurality of vertical segments in the first resultant image and the second resultant image.
Description: The following specification particularly describes and ascertains the nature of this invention and the manner in which it is to be performed:
TECHNICAL FIELD
[001] Embodiments disclosed herein relate to image processing, and more particularly to methods and systems for correcting skew in images.
BACKGROUND
[002] Optical character recognition (OCR) technology automates the extraction of data, such as printed or written text, from a scanned document or image file and then converts the text into a machine-readable form. The machine-readable form is used for data processing such as editing or searching. In the real world, scanned images, such as a scanned ID card stored in a device such as a smartphone, can be skewed. It is challenging to process such skewed images for machine learning tasks, such as OCR, face recognition, and the like.
[003] Existing solutions can correct the skew if it falls in the range of -90 degrees to +90 degrees. Such solutions cannot correct the skew if the image is inverted, i.e., when the skew angle is between -180 degrees and -90 degrees or between +90 degrees and +180 degrees. Further, most current solutions are computationally intensive, as they are built for a particular document template and are neither scalable nor automatable. Further, current solutions cannot process multiple document templates. Further, existing solutions assume that the images do not contain any background artefacts, i.e., that the image space contains only document content. On the contrary, in the real world, documents captured by cameras and uploaded by users can contain background artefacts.
OBJECTS
[004] The principal object of embodiments herein is to disclose automatable and scalable methods and systems for performing skew correction in images, wherein the image contains at least one text content.
[005] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating at least one embodiment and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein, and the embodiments herein include all such modifications.
BRIEF DESCRIPTION OF FIGURES
[006] Embodiments herein are illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the various figures. The embodiments herein will be better understood from the following description with reference to the drawings, in which:
[007] FIG. 1 shows an exemplary electronic device for implementing methods for automatable and scalable skew correction in images, according to embodiments herein;
[008] FIG. 1A illustrates the concept of rotated bounding boxes, according to embodiments herein;
[009] FIG. 2a illustrates the difference between axis-aligned bounding boxes and rotated bounding boxes;
[0010] FIG. 2b shows texts with varying orientations in any image, according to embodiments as disclosed herein;
[0011] FIG. 3a illustrates an ideal image and the text detection output generated by the text detection module, according to embodiments as disclosed herein;
[0012] FIG. 3b illustrates a skewed image as typically found in a real-world scenario;
[0013] FIG. 4a shows a skewed image rotated in an anti-clockwise direction with respect to the expected frame of reference, according to embodiments as disclosed herein;
[0014] FIG. 4b shows a skewed image rotated in a clockwise direction with respect to the expected frame of reference, according to embodiments as disclosed herein;
[0015] FIG. 5a shows the box distribution obtained on dividing the resultant image as shown in FIG. 4a, according to embodiments as disclosed herein;
[0016] FIG. 5b shows the box distribution obtained on dividing the resultant image as shown in FIG. 4b, according to embodiments as disclosed herein;
[0017] FIG. 5c shows an example box distribution, according to the embodiments as disclosed herein;
[0018] FIG. 6 shows a flow chart of the method for skew correction in images, according to the embodiments herein;
[0019] FIG. 7 shows a scenario in which the techniques according to the embodiments herein are applied.
DETAILED DESCRIPTION
[0020] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0021] The embodiments herein achieve faster de-skewing of images using fewer computations. Referring now to the drawings, and more particularly to FIGS. 1 through 7, where similar reference characters denote corresponding features consistently throughout the figures, embodiments are shown.
[0022] FIG. 1 shows an exemplary electronic device 100 for implementing methods for automatable and scalable skew correction in images, according to embodiments herein. The electronic device 100 can be any device, such as a scanner, a copier, a PDA, a cell phone, a digital camera, a smart phone, or any other device that can capture or process images. The electronic device 100 comprises a processor 102 coupled to a memory 104. The memory 104 provides storage for instructions, modules, and other data that are executable by the processor 102. The memory 104 comprises an image training dataset 106, an image testing dataset 107, a text detection module 108, and a skew correction module 110. The image testing dataset 107 comprises a collection of images obtained by using an image capture device, such as a digital camera, a scanner, a smart phone, and the like. The images can be any images that comprise at least one text content, for example, ID cards with a photograph, such as an Aadhar card. In an example scenario, images such as ID cards are used in eKYC applications. In eKYC applications, a skewed image of a document, such as an ID card, may have to be read in order to extract the details from the ID card.
[0023] The images, such as scanned copies of ID cards, can be stored in, but not limited to, the image testing dataset 107 within the electronic device. Other implementations of storage of the images are possible, such as a cloud, a file server, an online storage means, a data server, and so on. In an example implementation, the images can be stored in a database external to the electronic device.
[0024] The text detection module 108 is a deep learning neural network that has been trained to detect the text contents of an image. The images used in the training process are stored in the image training dataset 106. The neural network is trained on the principles of object detection. Open-source convolutional neural network architectures that have been trained on millions of real-world images are used for the object detection. For instance, the embodiments herein employ object detectors based on neural networks such as R-CNN, Fast R-CNN, or Faster R-CNN. These networks are augmented with additional components to detect text at an arbitrary orientation.
[0025] The concept of axis-aligned bounding boxes and rotated bounding boxes is illustrated in FIG. 1A. Conventional object detectors detect objects in an image with bounding boxes that are aligned to the axes, as shown in top panel 114. The axis-aligned bounding boxes are enabled by placing object proposals around pixels of the images in the image training dataset 106 at different scales and aspect ratios. The proposals are all possible bounding boxes that can contain an object. The scales are set at sizes of 8, 16, and 32, the aspect ratios at 1:1, 1:2, and 2:1, and the angle at 0 only. From this collection of object proposals, the ones that bound the objects are picked to be learned by the neural network model. With this learning, the conventional object detectors can identify objects in unseen images in the image testing dataset 107. In the text detection module 108, the neural network model learns by creating the object proposals at different angles ranging from -π/6 to 2π/3 in steps of π/6, as shown in bottom panel 116. This is in contrast with the conventional object detectors, whose proposals are only at 0 degrees, i.e., aligned with the axes. In addition, the aspect ratios are changed: while conventional methods use 1:1, 1:2, and 2:1, the embodiments herein use 1:2, 1:5, and 1:8, which enables easy detection of text in images, as text appears in these aspect ratios in the real world. In an example implementation, the text detection module 108 can be part of a larger solution, such as an OCR system or a text content verifier, executed by the processor 102. The text detection module 108, which is a new neural network consisting of an object detector and additional components, is trained on the principles of transfer learning. The knowledge to detect objects learned by the conventionally known object detector is transferred to the new neural network, i.e., the text detection module 108, which is equipped to perform text detection at an arbitrary orientation. The new network is further trained with the help of the image training dataset 106 to detect the text content of images located at arbitrary orientations.
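The proposal settings described above (scales of 8, 16, and 32; aspect ratios of 1:2, 1:5, and 1:8; angles from -π/6 to 2π/3 in steps of π/6) can be enumerated as in the following sketch. The function name and tuple layout are illustrative assumptions, not part of the embodiments.

```python
import math
from itertools import product

def rotated_anchor_parameters():
    """Enumerate (scale, aspect_ratio, angle) tuples for the rotated
    object proposals described above. Names and layout are illustrative."""
    scales = [8, 16, 32]                      # anchor sizes
    aspect_ratios = [(1, 2), (1, 5), (1, 8)]  # width:height ratios suited to text
    # Angles -pi/6, 0, pi/6, pi/3, pi/2, 2*pi/3, i.e. steps of pi/6.
    angles = [k * math.pi / 6 for k in range(-1, 5)]
    return list(product(scales, aspect_ratios, angles))
```

Each of the 54 resulting tuples corresponds to one proposal shape placed around the image pixels during training.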
[0026] FIG. 2a is an example illustrating the difference between axis-aligned bounding boxes (prior art) and rotated bounding boxes. Once the text detection module 108 has been trained, texts with varying orientations in any image can be read, as shown in FIG. 2b.
[0027] The text detection module 108 can obtain an image from the image testing dataset 107 and can detect the text in the image. The result of detecting the text content in the image from the image testing dataset 107 is an output image in which the text contents are bounded by rotated bounding boxes, i.e., each text item is bounded by a bounding box according to the degree of skew of the document. FIG. 3a illustrates an example image and the text detection output generated by the text detection module 108. FIG. 3b illustrates a skewed image as typically found in a real-world scenario. In real-world scenarios, the images can be captured by a user and uploaded to the database. Further, real-world images can contain background artefacts, can be of lower resolution, can be blurred, or can be at an arbitrary angle with respect to the image frame, as shown in the example in FIG. 3b.
[0028] Once properly trained using the image training dataset 106, the text detection module 108 can generate rotated bounding boxes that are rotated depending on the orientation of the text content in the image. The text detection module 108 gives its output in the form of the coordinates of the four corners of the rotated bounding boxes, with the top-left position as the origin. Every word, or most of the words, in the image is detected through the rotated bounding box that encloses it. On obtaining the image with the rotated bounding boxes from the text detection module 108, the skew correction module 110 determines, from the rotated bounding boxes, an angle by which the original image must be rotated. This angle is found from the slope that the longer side of the bounding box makes with respect to the x-axis. Depending on the orientation of the rotated bounding boxes, the image can be rotated either in an anti-clockwise direction or in a clockwise direction with respect to the horizontal axis, i.e., the expected frame of reference. As a result of the two possible angles of rotation, two resultant images are obtained by the skew correction module 110. For example, if the real-world image shown in FIG. 3b is rotated by φ degrees in an anti-clockwise direction, the resultant image is an inverted image with respect to the expected frame of reference, as shown in FIG. 4a. On the other hand, if the real-world image shown in FIG. 3b is rotated by θ degrees in a clockwise direction, the resultant image is the actual image, i.e., the image in the correct form, as shown in FIG. 4b.
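The angle computation described above can be sketched as follows, assuming the detector outputs each box as four (x, y) corner coordinates listed in order around the box; the function name and input convention are illustrative assumptions.

```python
import math

def skew_angle_degrees(corners):
    """Estimate the skew of a rotated bounding box as the angle its
    longer side makes with the x-axis. `corners` holds the four (x, y)
    corner coordinates in order around the box."""
    (x0, y0), (x1, y1), (x2, y2), _ = corners
    # The two adjacent sides meeting at (x1, y1); for typical word boxes
    # the longer side runs along the text line.
    sides = [(x1 - x0, y1 - y0), (x2 - x1, y2 - y1)]
    dx, dy = max(sides, key=lambda v: math.hypot(*v))
    return math.degrees(math.atan2(dy, dx))
```

Rotating the image to cancel this angle in one direction or the other produces the two candidate resultant images described above.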
[0029] The skew correction module 110 can identify the correct image from the two resultant images. The skew correction module 110 utilizes minimal prior structural information about the document in the image. The skew correction module 110 assumes that the text in the document is either left-justified or right-justified. The skew correction module 110 does not need the structural information of the entire document to determine the correct image. The two resultant images are passed to the text detection module 108 to get new coordinates of the bounding boxes after applying the two angles of rotation.
[0030] The skew correction module 110 analyzes the two resultant images simultaneously. In the case of left-aligned text in the images, the skew correction module 110 determines the top-left coordinate of each of the rotated bounding boxes from each of the resultant images. Alternatively, the skew correction module 110 can determine the bottom-left coordinate of each of the rotated bounding boxes from each of the resultant images, depending on the preferences of the user. In the case of right-aligned text in the images, either the top-right coordinates or the bottom-right coordinates of the rotated bounding boxes are determined by the skew correction module 110. The skew correction module 110 divides each of the resultant images into vertical segments of a pre-defined size. For each segment, in the case of images with left-aligned text, the skew correction module 110 determines the number of boxes whose top-left (or bottom-left) coordinate falls within the vertical segment. The skew correction module 110 obtains a box distribution over the vertical segments and determines the correct image from the two resultant images.
[0031] In an example implementation, the resultant images shown in FIG. 4a and FIG. 4b can be considered. FIG. 5a shows the box distribution obtained on dividing the resultant image shown in FIG. 4a. The box distribution in the case of FIG. 5a is given as [1, 1, 3, 2, 2, 1, 2, 1]. Similarly, FIG. 5b shows the box distribution obtained on dividing the resultant image shown in FIG. 4b. The box distribution in the case of FIG. 5b is given as [3, 1, 6, 3].
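A box distribution of the kind quoted above can be computed from the x-coordinates of the chosen anchor corners (top-left for left-aligned text). The following is a minimal sketch; the names are illustrative, and dropping empty segments from the distribution is an assumption made to match the four-member distribution [3, 1, 6, 3] above.

```python
def box_distribution(anchor_xs, image_width, segment_width):
    """Divide the image width into vertical segments of `segment_width`
    pixels and count how many rotated bounding boxes have their anchor
    corner's x-coordinate inside each segment. Empty segments are
    dropped (an assumption)."""
    n_segments = -(-image_width // segment_width)  # ceiling division
    counts = [0] * n_segments
    for x in anchor_xs:
        counts[min(int(x // segment_width), n_segments - 1)] += 1
    return [c for c in counts if c > 0]
```

For example, with anchor x-coordinates [5, 6, 7, 25, 41] in an 80-pixel-wide image and a segment width of 20 pixels, this returns [3, 1, 1].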
[0032] According to an embodiment herein, the skew correction module 110 determines the correct image by using one of the following properties of the box distribution: i) the maximum number of boxes per vertical segment and ii) the minimum number of members in the distribution. For example, in the case of an image with left-aligned text in its correct orientation, i.e., in the case of FIG. 5b, the vertical segment that falls at the starting point of the text characters has the maximum number of words or boxes compared to the 180-degree flipped, or inverted, version shown in FIG. 5a. Similarly, in the case of the correct image, i.e., FIG. 5b, corners of the rotated bounding boxes cannot be seen within certain vertical segments, showing that the text in the image is left-aligned. According to embodiments herein, the segment width can be varied among multiple options and the decision is made based on majority voting. The segment width is important in cases where the text contents get split between two segments, as shown at reference numeral 502 of FIG. 5c. To avoid such an occurrence, the segment width is varied. Each choice of segment width gives a different box distribution, and majority voting is useful in such instances. In majority voting, for each segment width, one of the two resultant images is predicted by the skew correction module 110. The skew correction module 110 gives the final result based on the image that receives the majority of votes across the variation of the segment width.
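The majority voting step can be sketched as follows, with each candidate segment width casting one vote. Here each width votes for the image whose distribution has the larger per-segment maximum; using that single property per width is an illustrative simplification of the two properties described above, and the labels are assumptions.

```python
from collections import Counter

def majority_vote(dist_pairs):
    """Each entry of `dist_pairs` is (dist_first, dist_second): the two
    box distributions computed with one choice of segment width. Each
    width votes for the image whose distribution has the larger
    per-segment maximum; the image with the most votes wins."""
    votes = Counter(
        'first' if max(a) > max(b) else 'second'
        for a, b in dist_pairs
    )
    return votes.most_common(1)[0][0]
```

For instance, with the distributions [1, 1, 3, 2, 2, 1, 2, 1] and [3, 1, 6, 3] from FIGS. 5a and 5b computed at one width, that width votes for the second (correct) image.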
[0033] FIG. 6 shows a flow chart of the method for skew correction in images, according to the embodiments herein. At step 602, the text detection module 108 obtains an image from the image testing dataset 107. On obtaining the image, the text detection module 108 computes an output image in which the text contents are bounded by rotated bounding boxes, i.e., each of the words or text items is bounded by a bounding box according to the degree of skew of the image. As explained above, the text detection module 108 uses a pre-trained object detection network for the text detection. At step 604, the skew correction module 110 performs an initial skew correction of the image by determining two resultant images. As explained above, the text detection module 108 can provide the output in the form of the coordinates of the four corners of the rotated bounding boxes, with the top-left position as the origin. The skew correction module 110 determines, from the rotated bounding boxes, an angle by which the original image has to be rotated. The angle is obtained from the slope that the longer side of the bounding box makes with respect to the x-axis. The image can be rotated either in a clockwise direction or in an anti-clockwise direction with respect to the expected frame of reference. As a result of the above rotations, two resultant images are computed by the skew correction module 110. At step 606, the skew correction module 110 inputs the two resultant images to the text detection module 108 to obtain a new set of coordinates of the rotated bounding boxes over the text contents after applying the two angles of rotation. At step 608, the skew correction module 110 analyzes the two resultant images simultaneously. The skew correction module 110 computes coordinates of one of the four corners of each of the rotated bounding boxes in the first and second resultant images. The text contents of the at least one image are either left-aligned or right-aligned. In the case of the text contents being left-aligned, the skew correction module 110 determines one of the top-left coordinates and the bottom-left coordinates of each of the rotated bounding boxes from each of the first and second resultant images. In the case of the text contents being right-aligned, the skew correction module 110 determines one of the top-right coordinates and the bottom-right coordinates of each of the rotated bounding boxes from each of the first and second resultant images.
[0034] At step 610, the skew correction module 110 divides the first and second resultant images into a plurality of vertical segments. As explained above, the segment width can be varied among multiple options and the decision is made based on majority voting. For each segment, in the case of images with left-aligned text, the skew correction module 110 determines the number of boxes whose top-left (or bottom-left) coordinate falls within the vertical segment. At step 612, the skew correction module 110 obtains a box distribution within each of the vertical segments and determines the correct image from the two resultant images. The skew correction module 110 determines a first box distribution and a second box distribution corresponding to the first resultant image and the second resultant image respectively. The skew correction module 110 computes the number of rotated bounding boxes with coordinates falling within each of the plurality of vertical segments in the first resultant image and the second resultant image. The skew correction module 110 determines the correct image from one of the first resultant image and the second resultant image using the first box distribution and the second box distribution, by one of calculating the maximum number of rotated bounding boxes in each of the plurality of vertical segments and calculating the minimum number of rotated bounding boxes in the first box distribution and the second box distribution. The box distribution is explained in detail with reference to FIGS. 5a, 5b, and 5c.
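The selection at step 612 can be sketched as a comparison of the two box distributions using property i), the per-segment maximum, with property ii), the number of members, as a fallback on a tie. That ordering of the two properties, like the names and labels, is an illustrative assumption.

```python
def select_correct(dist_first, dist_second):
    """Choose between the two resultant images: prefer the distribution
    with the larger per-segment maximum; on a tie, prefer the one with
    fewer members (illustrative tie-breaking order)."""
    if max(dist_first) != max(dist_second):
        return 'first' if max(dist_first) > max(dist_second) else 'second'
    return 'first' if len(dist_first) <= len(dist_second) else 'second'
```

Applied to the example distributions from FIGS. 5a and 5b, [1, 1, 3, 2, 2, 1, 2, 1] and [3, 1, 6, 3], this selects the second (correctly oriented) image.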
[0035] The various actions in method 600 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 6 may be omitted.
[0036] FIG. 7 shows a scenario in which the techniques according to the embodiments herein are applied. Image 702 shows the skewed image with the rotated bounding boxes 704. On applying the techniques according to the embodiments herein, image 706 is obtained, which shows the skew corrected image.
[0037] Embodiments herein utilize minimal prior information about the document structure to retrieve the bounding boxes around text contents from a pre-trained machine learning-based object detector. Embodiments herein disclose an image processing technique for correcting skew in the rotation of images ranging from -180 degrees to +180 degrees. Embodiments herein can be used in conjunction with any document information extraction process or system. Embodiments herein utilize the coordinates and orientation of the bounding boxes to calculate the degree of skew in rotation, and the skew correction is done by applying the same degree of rotation in the opposite direction. Embodiments herein can be used to process large image databases in an automated way. Embodiments herein are robust against documents with background artefacts.
[0038] The embodiments disclosed herein can be implemented through at least one software program running on at least one hardware device and performing management functions to control the elements. The elements shown in FIG. 1 include blocks which can be at least one of a hardware device, or a combination of a hardware device and a software module.
[0039] The embodiments disclosed herein describe methods and systems for skew correction in images. Therefore, it is understood that the scope of protection extends to such a program and, in addition to a computer-readable means having a message therein, such computer-readable storage means contain program code means for implementation of one or more steps of the method, when the program runs on a server, a mobile device, or any suitable programmable device. The method is implemented in at least one embodiment through, or together with, a software program written in, e.g., Very High Speed Integrated Circuit Hardware Description Language (VHDL) or another programming language, or implemented by one or more VHDL or software modules executed on at least one hardware device. The hardware device can be any kind of portable device that can be programmed. The device may also include means which could be, e.g., hardware means such as an ASIC, or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. The method embodiments described herein could be implemented partly in hardware and partly in software. Alternatively, the invention may be implemented on different hardware devices, e.g., using a plurality of CPUs.
[0040] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of embodiments and examples, those skilled in the art will recognize that the embodiments and examples disclosed herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
| # | Name | Date |
|---|---|---|
| 1 | 202241018838-POWER OF AUTHORITY [30-03-2022(online)].pdf | 2022-03-30 |
| 2 | 202241018838-FORM 18 [30-03-2022(online)].pdf | 2022-03-30 |
| 3 | 202241018838-FORM 1 [30-03-2022(online)].pdf | 2022-03-30 |
| 4 | 202241018838-DRAWINGS [30-03-2022(online)].pdf | 2022-03-30 |
| 5 | 202241018838-COMPLETE SPECIFICATION [30-03-2022(online)].pdf | 2022-03-30 |
| 6 | 202241018838-FORM-9 [11-04-2022(online)].pdf | 2022-04-11 |
| 7 | 202241018838-Proof of Right [23-05-2022(online)].pdf | 2022-05-23 |
| 8 | 202241018838-FORM 3 [03-08-2022(online)].pdf | 2022-08-03 |
| 9 | 202241018838-FER.pdf | 2022-08-03 |
| 10 | 202241018838-ENDORSEMENT BY INVENTORS [03-08-2022(online)].pdf | 2022-08-03 |
| 11 | 202241018838-OTHERS [03-02-2023(online)].pdf | 2023-02-03 |
| 12 | 202241018838-FORM 13 [03-02-2023(online)].pdf | 2023-02-03 |
| 13 | 202241018838-FER_SER_REPLY [03-02-2023(online)].pdf | 2023-02-03 |
| 14 | 202241018838-CORRESPONDENCE [03-02-2023(online)].pdf | 2023-02-03 |
| 15 | 202241018838-CLAIMS [03-02-2023(online)].pdf | 2023-02-03 |
| 16 | 202241018838-AMMENDED DOCUMENTS [03-02-2023(online)].pdf | 2023-02-03 |
| 17 | 202241018838-PatentCertificate27-05-2024.pdf | 2024-05-27 |
| 18 | 202241018838-IntimationOfGrant27-05-2024.pdf | 2024-05-27 |
| 1 | SearchHistoryE_01-08-2022.pdf | |