Abstract: A METHOD FOR EXTRACTING CARD DETAILS FROM IMAGES OF BANK CARDS. The present invention relates to a method for extracting card details from images of bank cards. The method primarily involves two steps: detecting corner coordinates of card details such as, but not limited to, the card number and expiry date in the images of bank cards, followed by extracting patches corresponding to the location of the card details using the detected coordinates; and extracting individual characters corresponding to the card details from the extracted patches, thereby assisting in efficient card detail extraction and achieving accurate results. Figure 1
Description: FIELD OF THE INVENTION
[001] The present invention relates to a method for extracting card details from images of bank cards. More particularly, the present invention relates to a two-step process of extracting card details from images of bank cards, involving extraction of patches corresponding to the location of the card details, preferably the card number, valid from date, and expiry date, from an image of a bank card, followed by extraction of the individual characters corresponding to the card details from the extracted patches, thereby assisting in efficient card detail extraction and achieving accurate results.
BACKGROUND OF THE INVENTION
[002] Nowadays, with the rapid advancement of technology, several techniques exist to ease the process of transacting with a bank card, particularly the efficient and accurate extraction of debit/credit card information, including the card number, valid from date, and expiry date, from an image in a time-saving manner. In conventional methods, the user has to manually enter all the digits of a card number in order to process a payment when the card is either un-tokenized or used for the first time on an e-commerce platform. In order to overcome the inconvenience and difficulty that the user experiences, the detection of card details is performed through computer vision (CV), which also improves the user experience as well as the payment success rate.
[003] Conventionally, several off-the-shelf OCR algorithms exist for detecting text in various kinds of images. Such algorithms are trained on a variety of image datasets. However, such datasets fail to include real credit/debit card images. As a result, these algorithms do not perform efficiently when detecting text in card images, and there therefore arises a need for a customized CV model. However, such conventional CV models need a sufficiently large number of real, manually labelled examples. Further, the sensitive nature of card images makes the operationally heavy process of data collection and labelling even more difficult.
[004] Any machine learning model trained on card images may require at least 50K-100K images of sufficient resolution. Due to the sensitive nature of cards, it is difficult to curate such a training dataset. Hence, existing solutions depend on Optical Character Recognition (OCR) to extract card details such as the card number, valid from date, and expiry date from the image.
[005] A card image differs drastically from normal character images due to various card-specific variations, such as the number and background sharing the same color with the only differentiator being the 3D texture of the numbers, rotated and tilted numbers, embossed text, and different lighting conditions. Some well-known conventional OCR techniques include Google Tesseract, Google ML Kit and Walmart OCR, which are trained on normal text images and not card images, and thus perform inefficiently when extracting numbers from the image, especially for embossed text and for cases where the number and background are not easily distinguishable because they share the same color. Moreover, the existing solutions apply OCR to the entire card image, neglecting the fact that the card number, valid from date, and expiry date occupy only a small portion of the card. Such techniques depend on images and corresponding labels for training. Furthermore, generating card images is a tedious task, and it may require manual labor to generate labels, which include the location and class of each digit in the image. There are also several character-level pre-trained models; however, such models likewise perform inefficiently on card images.
[006] There are several patent applications that disclose methods for extracting card details from bank cards. One such Chinese patent literature, CN105431867A, relates to extracting card data using card art. The method involves receiving a digital image of a card by one or more computing devices; performing an image recognition process on the digital representation of the card; identifying an image in the digital representation of the card; comparing the identified image to an image database comprising a plurality of images and determining that the identified image matches a stored image in the image database; determining a card type associated with the stored image and associating the card type with the card based on the determination that the identified image matches the stored image; and performing a particular optical character recognition algorithm on the digital representation of the card, the particular optical character recognition algorithm being based on the determined card type. However, the cited prior art depends on curating a database of card images, which may be difficult due to the sensitive nature of bank cards such as credit and debit cards. Further, the OCR algorithm applied to the card depends on the card type, which in turn is determined by matching against card images in the database. Hence, the method may perform inefficiently for any new card type that is not in the database.
[007] Another Chinese patent application, CN105247541A, relates to capturing information from payment instruments, comprising receiving, using one or more computer devices, an image of a back side of a payment instrument, the payment instrument comprising information imprinted thereon such that the imprinted information protrudes from a front side of the payment instrument and the imprinted information is indented into the back side of the payment instrument; extracting sets of characters from the image of the back side of the payment instrument based on the imprinted information indented into the back side of the payment instrument and depicted in the image of the back side of the payment instrument; applying a first character recognition application to process the sets of characters extracted from the image of the back side of the payment instrument; and categorizing each of the sets of characters into one of a plurality of categories relating to information required to conduct a payment transaction. However, the cited prior art fails to leverage the front side of the card image, which contains the actual printed details.
[008] In view of the problems associated with the above state of the art, there is a need to develop a method for efficient and accurate extraction of card details from bank cards.
OBJECTIVES OF THE INVENTION
[009] The primary objective of the present invention is to provide a method for extracting card details from images of bank cards.
[0010] Another objective of the present invention is to generate synthetic card images resembling real card images to train the machine learning models for accurate detection of card details.
[0011] Another objective of the present invention is to extract accurate card details without any risk of error.
[0012] Yet another objective of the present invention is to provide a simple, time saving, efficient, two-step process for the extraction of card details from images of bank cards.
[0013] Other objects and advantages of the present invention will become apparent from the following description taken in connection with the accompanying drawings, wherein, by way of illustration and example, the aspects of the present invention are disclosed.
SUMMARY OF THE INVENTION
[0014] The present invention relates to a method for extracting card details from images of bank cards. The method involves training at least two machine learning models using an image dataset comprising synthetic card images generated using a standard programming language and a computer vision library; installing the trained machine learning models in a server through a computing device via a wireless connection by the administration; capturing a plurality of images of a bank card using a handheld device; transferring the captured images to the server via the internet; detecting coordinates of the quadrilateral corners of bounding boxes, each enclosing an individual character of the card details, by a first trained machine learning model; extracting patches corresponding to the location of the card details using the detected coordinates through a standard programming language and computer vision library; and extracting individual characters of the card details from the extracted patches by a second trained machine learning model. The method is a two-step process of extraction, thereby assisting in efficient card detail extraction and achieving accurate results.
BRIEF DESCRIPTION OF DRAWINGS
[0015] The present invention will be better understood after reading the following detailed description of the presently preferred aspects thereof with reference to the appended drawings, in which the features, other aspects and advantages of certain exemplary embodiments of the invention will become more apparent:
[0016] Figure 1 illustrates a flow chart for the method of extraction of card details from images of bank cards.
DETAILED DESCRIPTION OF THE INVENTION
[0017] The following description describes various features and functions of the disclosed system with reference to the accompanying figures. In the figures, similar symbols identify similar components, unless context dictates otherwise. The illustrative aspects described herein are not meant to be limiting. It may be readily understood that certain aspects of the disclosed system can be arranged and combined in a wide variety of different configurations, all of which have not been contemplated herein.
[0018] Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of invention. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
[0019] Features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments.
[0020] The terms and words used in the following description are not limited to the bibliographical meanings, but, are merely used to enable a clear and consistent understanding of the invention. Accordingly, it should be apparent to those skilled in the art that the following description of exemplary embodiments of the present invention are provided for illustrative purpose only and not for the purpose of limiting the invention.
[0021] It is to be understood that the singular forms “a”, “an” and “the” include plural referents unless the context clearly dictates otherwise.
[0022] It should be emphasized that the term “comprises/comprising” when used in this specification is taken to specify the presence of stated features, steps or components but does not preclude the presence or addition of one or more other features, steps, components or groups thereof.
[0023] Some embodiments of this disclosure, illustrating all its features, will now be discussed in detail. The words "extracting," "identifying," "determining," and other forms thereof, are intended to be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items.
[0024] The term “bank card” and “card” may interchangeably be used in the present disclosure.
[0025] The term “first machine learning model” and “first trained machine learning model” may interchangeably be used in the present disclosure.
[0026] The term “second machine learning model” and “second trained machine learning model” may interchangeably be used in the present disclosure.
[0027] The present invention may also be used to extract the user name and card verification value (CVV). However, this is not preferred, as the user name and CVV are highly sensitive and redundant.
[0028] Accordingly, the present invention relates to a method for extracting card details from images of bank cards. More particularly, the present invention relates to a two-step process of extracting card details from images of bank cards, involving extraction of patches corresponding to the location of the card details, preferably the card number, valid from date, and expiry date, from an image of a bank card using detected coordinates, followed by extraction of the individual characters corresponding to the card details from the extracted patches, thereby assisting in efficient card detail extraction and achieving accurate results. In an embodiment, a system for extracting card details from the images of bank cards comprises the following components:
(a) Handheld device – A handheld device to capture images of bank cards such as, but not limited to, credit card, debit card, and the like.
(b) Server – A server is connected to the handheld device to receive the captured images from the handheld device via the Internet. The server is installed with at least two machine learning models for extracting card details such as, but not limited to, the card number, valid from date, and expiry date from the bank cards.
(c) Computing device – A computing device is connected to the server via a wireless network to install the machine learning models onto the server by the administration. In an exemplary embodiment, the PyTorch library installed in the computing device is used to install the machine learning models onto the server to extract the card details; a minimal loading sketch is provided below.
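The following is a minimal illustrative sketch, in Python, of how the two trained models might be loaded on the server using the PyTorch library mentioned above. The file names and the TorchScript packaging are assumptions made purely for illustration and are not part of the disclosure.

```python
# Sketch: loading the two trained models on the server with PyTorch.
# The file names and TorchScript packaging below are assumptions.
import torch

def load_models(corner_model_path="corner_regressor.pt",
                char_model_path="char_detector.pt",
                device="cpu"):
    """Load the first (corner regression) and second (character detection) models."""
    corner_model = torch.jit.load(corner_model_path, map_location=device).eval()
    char_model = torch.jit.load(char_model_path, map_location=device).eval()
    return corner_model, char_model
```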
[0029] The first machine learning model facilitates detection of the corner coordinates of the card details, preferably the card number, valid from date, and expiry date, in the images of bank cards, followed by extraction of patches corresponding to the location of the card details from the images using the detected coordinates through a standard programming language and computer vision library; the second machine learning model extracts the individual characters corresponding to the card details from the extracted patches.
[0030] In an embodiment, a method for extracting card details from images of bank cards primarily involves two steps: detecting corner coordinates of card details such as, but not limited to, the card number, valid from date, and expiry date in the images of bank cards, followed by extracting patches corresponding to the location of the card details; and extracting individual characters corresponding to the card details from the extracted patches. The method comprises the following steps, described in detail herein (an illustrative sketch of the overall flow follows the list of steps):
(a) training at least two machine learning models using an image dataset comprising of synthetic card images generated using a standard programming language and a computer vision library;
(b) installing the trained machine learning models in a server through a computing device via a wireless connection by administration;
(c) capturing a plurality of images of a bank card using a handheld device;
(d) transferring the captured images to the server via internet;
(e) detecting coordinates of quadrilateral corners of bounding boxes, each enclosing an individual character of the card details, by a first trained machine learning model;
(f) extracting patches corresponding to the location of the card details using detected coordinates through a standard programming language and computer vision library; and
(g) extracting individual characters of the card details from the extracted patches by a second trained machine learning model.
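By way of illustration only, the sketch below shows how steps (e)-(g) could be wired together in Python with OpenCV and PyTorch. The model objects, helper names, patch size, and corner ordering are hypothetical assumptions, not the disclosed implementation.

```python
# Illustrative sketch of the two-step inference flow described in steps (e)-(g).
# Model objects and helper names are hypothetical placeholders for illustration.
import cv2
import numpy as np
import torch

def warp_patch(image, quad, size=(400, 64)):
    """Project a detected quadrilateral region onto a rectangular patch."""
    dst = np.float32([[0, 0], [size[0], 0], [size[0], size[1]], [0, size[1]]])
    M = cv2.getPerspectiveTransform(np.float32(quad), dst)
    return cv2.warpPerspective(image, M, size)

def extract_card_details(image_bgr, corner_model, char_model):
    """Run the first (corner) and second (character) models on one card image."""
    h, w = image_bgr.shape[:2]
    # Step (e): the first model predicts 24 values, i.e. the (x, y) corner
    # coordinates of the card-number, valid-from and expiry-date regions,
    # on a 224x224 resized copy of the image.
    resized = cv2.resize(image_bgr, (224, 224))
    tensor = torch.from_numpy(resized).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        corners = corner_model(tensor).view(3, 4, 2).cpu().numpy()
    corners *= np.array([w / 224.0, h / 224.0])  # scale back to the original image

    # Step (f): extract each region as a rectangular patch.
    patches = [warp_patch(image_bgr, quad) for quad in corners]

    # Step (g): the second model detects individual characters in each patch.
    names = ("card_number", "valid_from", "expiry_date")
    return {name: char_model(patch) for name, patch in zip(names, patches)}
```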
[0031] In an exemplary embodiment, the handheld device may be selected from a group consisting of, but not limited to, a mobile device and the like.
[0032] The images captured by the handheld device are encrypted by algorithms installed in the handheld device, thereby protecting the data from being altered, stolen, or compromised. In a preferred embodiment, a plurality of images is captured by the handheld device, which improves the accuracy of the result, as each image shows a different perspective of the same card, such as different lighting conditions, rotation, and tilting, thereby reducing the probability of error.
[0033] In another exemplary embodiment, the computing device may be selected from a group consisting of, but not limited to, a desktop, a laptop, and the like.
[0034] In another exemplary embodiment, the card details may include, but are not limited to, the card number, valid from date, and expiry date.
[0035] In yet another exemplary embodiment, the bank cards may be selected from a group consisting of, but not limited to, a debit card, a credit card, or any other card resembling a bank card in appearance, such as, but not limited to, Sodexo cards, priority passes, and the like. In a preferred embodiment, the bank cards used in the present invention may be, but are not limited to, a credit card and/or a debit card.
[0036] In an exemplary embodiment, the generation of the image dataset may involve image reshaping, concatenation, and the addition of text and noise to the image, etc. The process for creating the image dataset mainly involves placing text images comprising the card number on a black-colored blank card with a height of 414 pixels and a width of 674 pixels. Such dimensions represent the dimensions of a real card and provide sufficient pixels for each individual character in the image. The detailed steps of the process of generating synthetic card images involve: generating a valid 14-19 digit card number; drawing the card number on the black background using a desired color, font size, text font, and bottom-left corner text location; drawing a bounding box enclosing each individual character of, such as, but not limited to, the card number, valid from date, and expiry date; evaluating the x and y coordinates of the corners of each bounding box, considering that the pixels between any two consecutive texts may be black in color; and evaluating the x and y coordinates and the height and width of the text center using the evaluated x and y values of the bounding box corresponding to each character, which may be used as labels while training. The same process is repeated to add the issue date, expiry date, user name, and bank name one by one onto the same image, ensuring no overlap with existing text. The process further involves saving the images as two separate datasets, wherein one dataset comprises images with only an "expiry date" and the other dataset comprises images with both an "expiry date" and a "valid from date", reflecting real card details, as some cards contain only an "expiry date" while other cards contain both an "expiry date" and a "valid from date". The font size may be comparatively smaller for the date fields than for the other fields. The text color may also be randomly changed for each field. With a probability of 25%, the normal text is changed to embossed text using, such as, but not limited to, an embossing filter K or -K, wherein K is represented by equation 1 provided below.
[0037] The filter is applied to all the pixels inside the bounding boxes containing the characters. This process results in the creation of a synthetic card image with a black background and text written on it.
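By way of illustration, the sketch below draws card text on a black 414x674 canvas and applies an emboss-style filter inside a character bounding box. Since equation 1 is not reproduced in the text above, the 3x3 kernel used here is a common emboss kernel assumed purely for illustration and is not the exact filter of the disclosure; the example card number and text placement are likewise assumptions.

```python
# Sketch: drawing text on a black 414x674 blank card and applying an
# emboss-style filter K (or -K) inside a character bounding box.
# The kernel K below is an assumed, commonly used emboss kernel.
import cv2
import numpy as np

HEIGHT, WIDTH = 414, 674  # dimensions of the synthetic blank card

def draw_card_number(card_number="4111 1111 1111 1111"):
    canvas = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)  # black blank card
    origin = (40, 230)  # bottom-left corner of the text, per the description
    cv2.putText(canvas, card_number, origin, cv2.FONT_HERSHEY_SIMPLEX,
                1.2, (200, 200, 200), 2, cv2.LINE_AA)
    return canvas

def emboss_region(image, box, sign=1):
    """Apply the filter sign*K to the pixels inside one character bounding box."""
    K = np.array([[-2, -1, 0],
                  [-1,  1, 1],
                  [ 0,  1, 2]], dtype=np.float32)  # assumed emboss kernel
    x1, y1, x2, y2 = box
    image[y1:y2, x1:x2] = cv2.filter2D(image[y1:y2, x1:x2], -1, sign * K)
    return image
```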
[0038] The process for creating the image dataset further involves adding background images for the card as well as for the surroundings. This step involves appending the text image to a card image background, which further involves selecting a random image out of a given set of candidate images from, such as, but not limited to, natural scenes, monuments, sports, house interiors, etc. The brightness, contrast, saturation, and hue of the image may be randomly changed. The image may be horizontally and vertically flipped, each with a probability of 0.5. The background image may be cropped to a height of 414 pixels and a width of 674 pixels. Further, the cropped card background is combined with the text image using equation 2 provided below:
where α, β ~ U(0.25, 0.75) and γ ~ U(1, 100)
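Since equation 2 is not reproduced in the text above, the sketch below assumes a weighted blend of the form α·background + β·text + γ, consistent with the stated parameter ranges, implemented with OpenCV's addWeighted; this is an assumption for illustration, not the exact equation of the disclosure.

```python
# Sketch: combining the cropped card background with the text image using an
# assumed weighted blend alpha*background + beta*text + gamma.
import cv2
import numpy as np

def blend_background(text_image, background_image):
    h, w = text_image.shape[:2]                       # 414x674 text canvas
    background = background_image[:h, :w]             # crop background to card size
    alpha, beta = np.random.uniform(0.25, 0.75, size=2)
    gamma = np.random.uniform(1, 100)
    # Assumes the background image is at least as large as the card canvas.
    return cv2.addWeighted(background, alpha, text_image, beta, gamma)
```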
[0039] The process further involves perturbing 50% of the synthetic card images with four noise types, each selected independently with a probability of 0.25. The noise types blur the synthetic card image, add Gaussian noise to each pixel, add glare to the synthetic card image, and change the mean pixel value of the three channels, i.e., the Red, Green, and Blue channels. The pixel location of each character may also be projected onto the obtained image, serving as a raw card image dataset which may be used to obtain model-specific images.
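The following sketch illustrates the four noise perturbations named above, each applied independently with probability 0.25. The specific kernel sizes, noise magnitudes, and glare radius are assumptions for illustration only.

```python
# Sketch: blur, per-pixel Gaussian noise, glare, and channel-mean shift,
# each applied independently with probability 0.25 (parameter values assumed).
import cv2
import numpy as np

def perturb(image):
    out = image.astype(np.float32)
    if np.random.rand() < 0.25:                      # blur the image
        out = cv2.GaussianBlur(out, (5, 5), 0)
    if np.random.rand() < 0.25:                      # Gaussian noise on each pixel
        out = out + np.random.normal(0, 10, out.shape)
    if np.random.rand() < 0.25:                      # add a soft glare spot
        mask = np.zeros(out.shape[:2], dtype=np.float32)
        center = (int(np.random.randint(out.shape[1])),
                  int(np.random.randint(out.shape[0])))
        cv2.circle(mask, center, 80, 1.0, -1)
        out = out + 120 * cv2.GaussianBlur(mask, (101, 101), 0)[..., None]
    if np.random.rand() < 0.25:                      # shift mean of R, G, B channels
        out = out + np.random.uniform(-20, 20, size=(1, 1, 3))
    return np.clip(out, 0, 255).astype(np.uint8)
```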
[0040] In an exemplary embodiment, the standard programming language may include, such as, but not limited to, Python. In another exemplary embodiment, the computer vision library may include, such as, but not limited to, OpenCV library.
[0041] In another exemplary embodiment, the synthetic card images contained in the image dataset are resized to 224x224 pixels prior to training the first machine learning model. Such dimensions provide enhanced accuracy and reduce the training time. Further, decreasing the dimensions below 224x224 pixels may decrease the accuracy, and increasing the dimensions beyond 224x224 pixels, for example to 512x512 pixels, may increase the training and inference time. The information corresponding to the location of each individual character is updated as per the new image dimensions to reflect the new location of the characters on the respective image patches. The images are saved as two separate datasets, wherein one dataset comprises images with only an "expiry date" and the other dataset comprises images with both an "expiry date" and a "valid from date", reflecting real card details, as some cards contain only an "expiry date" while other cards contain both an "expiry date" and a "valid from date".
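A minimal sketch of the resizing step follows: the image is resized to 224x224 and the stored corner coordinates are rescaled to the new dimensions. The helper name and label layout are assumptions for illustration.

```python
# Sketch: resize a synthetic card image to 224x224 for the first model and
# rescale the stored corner coordinates accordingly.
import cv2
import numpy as np

def resize_with_labels(image, corner_points, target=(224, 224)):
    """corner_points: array of (x, y) pixel coordinates on the 674x414 image."""
    h, w = image.shape[:2]
    resized = cv2.resize(image, target)               # target is (width, height)
    scale = np.array([target[0] / w, target[1] / h], dtype=np.float32)
    return resized, np.asarray(corner_points, dtype=np.float32) * scale
```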
[0042] In an exemplary embodiment, the first machine learning model may be selected from a group of convolutional neural networks (CNNs), consisting of, but not limited to, EfficientNet. The first machine learning model is trained by an algorithm such as, but not limited to, a gradient descent algorithm, fed with the raw card image dataset (training dataset) to train it on the location of the patches on the synthetic card images. The EfficientNet detects the coordinates of the quadrilateral corners of the bounding boxes enclosing, such as, but not limited to, the card number, valid-from date, and expiry date. The detection process denotes each corner of a bounding box by the x and y coordinates of its pixel, thus generating an output of 24 dimensions, wherein the first 8 dimensions correspond to the x and y coordinates of the bounding box enclosing the card number, the last 8 dimensions correspond to the x and y coordinates of the bounding box enclosing the expiry date patch, and the intermediate 8 dimensions correspond to the x and y coordinates of the bounding box enclosing the "valid-from date" patch if a valid-from date is present on the bank card; if not, they correspond to the x and y coordinates of the expiry date patch. Therefore, for bank cards where a "valid-from date" is not present, the values of the intermediate 8 dimensions may be the same as the last 8 dimensions.
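For illustration, the sketch below configures an EfficientNet backbone with a 24-dimensional regression head (8 values each for the card-number, valid-from, and expiry-date patches) and performs one gradient-descent step. The torchvision variant, optimizer settings, and loss choice are assumptions and not necessarily those of the disclosure.

```python
# Sketch: EfficientNet backbone with a 24-dimensional corner-regression head,
# trained by gradient descent (variant, optimizer and loss are assumptions).
import torch
import torch.nn as nn
from torchvision import models

def build_corner_regressor():
    backbone = models.efficientnet_b0(weights=None)
    in_features = backbone.classifier[1].in_features
    backbone.classifier[1] = nn.Linear(in_features, 24)  # 3 patches x 4 corners x (x, y)
    return backbone

model = build_corner_regressor()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

def train_step(images, corner_labels):
    """One gradient-descent step on a batch of 224x224 images and corner labels."""
    optimizer.zero_grad()
    loss = criterion(model(images), corner_labels.view(images.size(0), 24))
    loss.backward()
    optimizer.step()
    return loss.item()
```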
[0043] The extraction of patches after the detection of coordinates may be performed using a standard programming language and computer vision library. In an exemplary embodiment, the standard programming language may include, but is not limited to, Python. In another exemplary embodiment, the computer vision library may include, but is not limited to, the OpenCV library. In an embodiment, the standard programming language and computer vision library extract and project the patches onto rectangular shapes, which are further processed by the second trained machine learning model to extract individual characters from the extracted patches.
[0044] In an exemplary embodiment, the second machine learning model may be selected from a group consisting of, but not limited to, YOLO (You Only Look Once), Faster R-CNN, SSD (Single Shot MultiBox Detector), RetinaNet, and the like. In a preferred embodiment, the second machine learning model used in the present invention is YOLO. The second machine learning model treats each individual character as an object and provides an output comprising the digit values (0, 1, 2, …, n) and their locations in the image. The second machine learning model is trained, using the raw card image dataset (training data), on the patches of the card number, valid from date, and expiry date extracted from the synthetic card images, labelled with the true character and its location, to extract all the individual characters. In an exemplary embodiment, the patch is extracted from the original image of size 414x674 for training the second machine learning model. The labels, i.e., the pixel locations of the corners of, such as, but not limited to, the card number, valid from date, and expiry date patches, and the locations of the characters in the entire synthetic card image, are generated and stored during the step of creating the synthetic card images. The pixel coordinate locations are taken with respect to the top-left corner of the synthetic card image. The label of a given character, preferably a digit, consists of the columns "digit name, which corresponds to the value of the digit in the image and is one of the values from the set {0,1,2,3,4,5,6,7,8,9,/}", "x and y coordinates of the bounding box surrounding the digit, where the x and y distances are in pixels and computed with respect to the top-left corner of the image", and "height and width of the bounding box in pixels". Once the corners of the card number, valid-from, and expiry date patches are extracted from the entire synthetic image, the location of each individual character is modified so that it represents the correct location of the digit with respect to the top-left corner of the corresponding patch. The second machine learning model is trained in such a way that each individual character and its corresponding location may be detected. The second trained model extracts all the individual characters, preferably digits, from the extracted patches of the card number as well as the valid-from and expiry dates. Of the two extracted dates, the later date is taken as the expiry date.
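The sketch below illustrates, under stated assumptions, three small pieces of the above: converting a character label from full-image coordinates to patch-relative coordinates in the (digit, x, y, width, height) form described, assembling the card number from the second model's per-digit detections ordered left to right, and taking the later of the two extracted dates as the expiry date. The function names and the left-to-right ordering heuristic are assumptions, not the disclosed implementation.

```python
# Sketch: patch-relative label conversion, left-to-right digit assembly,
# and selection of the later of two MM/YY dates as the expiry date.
CLASSES = "0123456789/"

def to_patch_label(char_label, patch_top_left):
    """char_label: (digit, x, y, w, h) with respect to the full synthetic card image."""
    digit, x, y, w, h = char_label
    px, py = patch_top_left
    return (CLASSES.index(digit), x - px, y - py, w, h)

def assemble_number(detections):
    """detections: list of (digit_string, x_center, y_center, w, h) from the second model."""
    ordered = sorted(detections, key=lambda d: d[1])  # order digits left to right
    return "".join(d[0] for d in ordered)

def pick_expiry(date_a, date_b):
    """Of the two extracted MM/YY dates, the later one is taken as the expiry date."""
    def key(d):
        mm, yy = d.split("/")
        return (int(yy), int(mm))
    return max(date_a, date_b, key=key)
```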
[0045] The present invention exhibits a plurality of advantages:
1. The image dataset containing synthetic card images generated in the present invention aids in covering various aspects, such as cards placed on random backgrounds, rotated and tilted cards, a finger blocking some portion of the card, the number and background merged such that it becomes difficult to identify characters properly, embossed/3D texture of the numbers, different font sizes, and different fonts and colors of text.
2. The generation of the image dataset and training of the machine learning models using the generated image dataset improve the accuracy of the results.
3. The plurality of card images captured by the handheld device improves the accuracy of the results, since the probability of extracting the correct card details increases as each card image shows a different perspective of the same card, such as different lighting conditions, rotation, and tilting, etc.
4. The extraction of patches corresponding to the location of the card number and expiry date on the card reduces the noise, i.e., the remaining text such as the user name and bank name on the real card, which ultimately reduces the chances of incorrect results, as only the relevant portion of the card image is used for analysis and fetching the card details.
5. The present invention does not employ any external data for the extraction of card details.
6. The present invention involves end-to-end training of the models, such that the synthetic data is generated without any dependency on external sources and the labels are generated as part of data creation. The generated synthetic card and patch locations are used to train the first model. Further, the number and date patches are extracted from the card image and used to train the second model. Therefore, the present invention involves end-to-end training without any dependency on external factors.
7. The method of the present invention is used for extracting card details in several difficult cases, such as when the same image/color is overlaid on both the number and the background, tilted and rotated card images, or a card having some additional numbers apart from the card number.
8. The use of the YOLO algorithm to train the second machine learning model ensures enhanced performance with respect to runtime and accuracy.
9. The first machine learning model provides accurate results with a minimal error of 1 pixel for each corner coordinate on a 224x224-dimensional image.
10. Once the corner coordinates are detected by the first machine learning model and the patches are extracted using a standard programming language and computer vision library, the second machine learning model may directly run to extract individual characters from the extracted patches, as both the first and second machine learning models are in the same code/script, for example, Python.
[0046] While this invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention is not limited to the disclosed embodiments, but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
Claims: WE CLAIM:
1. A method for extracting card details from images of bank cards, comprising steps of:
(a) training at least two machine learning models using an image dataset comprising of synthetic card images generated using a standard programming language and a computer vision library;
(b) installing the trained machine learning models in a server through a computing device via a wireless connection by administration;
(c) capturing a plurality of images of a bank card using a handheld device;
(d) transferring the captured images to the server via internet;
(e) detecting coordinates of quadrilateral corners of bounding boxes, each enclosing individual character of card details by a first trained machine learning model,
(f) extracting patches corresponding to the location of the card details using detected coordinates through a standard programming language and computer vision library; and
(g) extracting individual characters of the card details from the extracted patches by a second trained machine learning model.
2. The method as claimed in claim 1, wherein the card detail is selected from a group, consisting of, card number, valid from date, and expiry date.
3. The method as claimed in claim 1, wherein the bank card is selected from group, consisting of, debit card and credit card.
4. The method claimed in claim 1, wherein the standard programming language for the generating image data set is selected from a group, consisting of, python.
5. The method as claimed in claim 1, wherein the computer vision library for the generating image data set is selected from a group, consisting of, OpenCV library.
6. The method as claimed in claim 1, wherein the first machine learning model is selected from a group of convolutional neural networks (CNN), consisting of, EfficientNet.
7. The method as claimed in claim 1, wherein the standard programming language for the extraction of the patch is selected from a group, consisting of, python.
8. The method as claimed in claim 1, wherein the computer vision library for extraction of the patch is selected from a group, consisting of, OpenCV library.
9. The method as claimed in claim 1, wherein the second machine learning model is selected from a group, consisting of, YOLO (You only look once), Faster-R-CNN, SSD (Single Shot MultiBox Detector), and RetinaNet.
10. The method as claimed in claim 1, wherein the first machine learning model is trained by an algorithm fed with raw card image dataset (training dataset).
11. The method as claimed in claim 1, wherein the algorithm is selected from a group, consisting of, gradient descent algorithm.
12. A system for extracting card details from images of bank cards as claimed in claim 1, comprising:
• a handheld device to capture images of bank cards;
• a server connected to the handheld device to receive the captured images from the handheld device via an Internet; and
• a computing device connected to the server via a wireless network to install machine learning models onto the server by the administration.
13. The system as claimed in claim 12, wherein the handheld device is a mobile device.
14. The system as claimed in claim 12, wherein the computing device is selected from a group, consisting of, desktop and laptop.