
Device And Method For Assessing A Subject For Down Syndrome

Abstract: A device (100) for assessing a subject for Down Syndrome is disclosed. The device (100) includes modules configured for assessing the subject for Down Syndrome from an image (106) of the subject and data (108) associated with the subject. The modules are a codifying module (122) for codifying data (108) associated with the subject; an implanting module (124) for implanting the codified data in one or more channels of an image data file for creating a hybrid image file containing both the image data and the codified data; and a determining module (126) configured as a convolutional neural network trained a priori using hybrid images of a plurality of subjects for determining a score for a possibility of the subject having Down Syndrome by processing the hybrid image file; and an output module (128) for outputting the score determined by the determining module (126).


Patent Information

Application #
Filing Date
28 February 2022
Publication Number
10/2022
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
shivani@lexorbis.com
Parent Application
Patent Number
Legal Status
Grant Date
2022-12-23
Renewal Date

Applicants

1. MEHRA, Saanvi
ROCK 101, FOREST APARTMENT SECTOR 92, NOIDA, U.P. - 201304

Inventors

1. MEHRA, Saanvi
ROCK 101, FOREST APARTMENT SECTOR 92, NOIDA, U.P. - 201304
2. MALHOTRA, Aarna
FLAT 7011, ATS ADVANTAGE, TOWER 7, INDIRAPURA, GHAZIABAD, U.P. - 201014

Specification

The present disclosure generally relates to the field of machine learning, and in particular to the application of machine learning for detecting Down Syndrome in a human subject and more particularly to a device and method for assessing a subject for Down Syndrome.
BACKGROUND TO THE INVENTION:
[0002] The estimated incidence of Down Syndrome shows that many children globally are born with Down Syndrome. For example, in India, the prevalence of Down Syndrome is estimated to be roughly one out of every thousand children (1:1000), leading to over 30,000 children being born every year with this genetic disorder. Children afflicted by Down Syndrome are at an increased risk of congenital heart defect, developing pulmonary hypertension, and other medical conditions related to hearing and vision. Many of these conditions are manageable if detected early.
[0003] Down Syndrome is associated with high mortality, and the primary reason for this is non-diagnosis or late diagnosis due to the unavailability of qualified doctors who can visually detect Down Syndrome in children based on their facial features and recommend them for further testing. Further, lack of access to medical facilities, the expense of testing, or both (for example, pre-natal ultrasound, post-natal FISH test or karyotyping) may result in non-diagnosis or late diagnosis of Down Syndrome.
[0004] Doctors and diagnostic tests, for example, prenatal ultrasound or the FISH test, are inaccessible and prohibitively expensive for a majority of the population in developing countries. In addition, the challenges with the current pre-natal testing protocol include: lack of awareness in remote areas; prohibitively expensive testing, especially for the underprivileged; lack of access to medical infrastructure and ultrasound facilities in rural areas; inconclusive results from primary pre-natal testing through ultrasound; misuse of ultrasound testing for pre-natal sex determination and sex selection; poor detection rate (DR) and false positive rate (FPR) of the blood serum markers; risk to the fetus with follow-on pre-natal testing such as amniocentesis; and long waiting times for results with follow-on tests such as amniocentesis.
[0005] To overcome the challenges mentioned above, Artificial Intelligence (AI) and Machine Learning (ML) based tools, especially graphical pattern recognition tools, may be used: relevant facial points in photographs or physical features in an ultrasound scan can be extracted, and the necessary measurements can be computed from any image. Through Machine Learning, such facial or physical anomalies, or both, can be automatically identified. However, diagnosis of Down Syndrome requires extraction of facial or physical features or both (from photographic images or ultrasound images) as well as analysis of other associated data, for example, background information such as age, sex, ethnicity or race, mother's age at the child's birth, etc.
[0006] One of the existing methods comprises, for example, image classification CNNs with sub-segmentation based on each of the data associated with the subject. In this method, the training dataset is segmented and sub-segmented for each data element (such as ethnicity, age, etc.). Separate models are trained for each sub-segmented dataset and the results are tabulated. However, this sub-segmentation leads to a serious degradation of the dataset and of output accuracy. For example, if children were to be categorized into 5 age groups, 3 sexes, 10 ethnicities or races, and 3 age groups of mothers, there would be 5 × 3 × 10 × 3 = 450 sub-segments, requiring training and validation of 450 models.
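The combinatorial explosion described above can be illustrated with a short calculation (the category counts are those of the example, not fixed quantities):

```python
# Illustrative only: counting the sub-segments produced by the
# segment-per-category approach criticized above.
age_groups = 5
sexes = 3
ethnicities = 10
mother_age_groups = 3

# Each combination of category values needs its own trained model.
sub_segments = age_groups * sexes * ethnicities * mother_age_groups
print(sub_segments)  # 450
```

Adding even one more three-valued parameter would triple the count to 1350, which is why the sub-segmentation approach scales so poorly.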
[0007] The first challenge with image classification CNNs with sub-segmentation is managing the data and the number of models for each sub-segment. As the number of data parameters and the possible values for each parameter increase, the number of sub-segments grows to unmanageable levels. It is an extremely difficult task to train and validate the 450 models in the example above. Secondly, even if these sub-segmental models were to be managed, the total dataset would be divided across these 450 models. This leads to a severe reduction in the quality of training, since features common across various sub-segments would be under-weighted, impacting result accuracy.
[0008] The only way to solve this problem with the existing approach is to keep the number of sub-segments to a manageable level (say, less than ten) by guessing which parameters are important and which are not, leading to a compromise in the learning objectives.
[0009] Another existing method includes manual extraction of data from images to run a combined Artificial Neural Network (ANN). In this method, each image is analysed by a human for various parameters of importance (for example, the distance between the inner canthi of the eyes), and the parameter value is manually recorded (for example, 5 cm). These image data parameters are then combined with other data parameters (for example, ethnicity, sex, mother's age, etc.) to create a combined dataset. A traditional ANN is then trained with this combined dataset. However, manual extraction of data from images is extremely unproductive, sub-optimal, and often prone to human error. The amount of time and effort required to extract multiple data points (such as the distance between the inner canthi of the eyes) from each image limits the dataset by the availability of a trained human workforce. Such models rarely have datasets exceeding 500 datapoints, a critical drawback for ML models requiring datasets in the thousands, if not hundreds of thousands.
[00010] Another method is the combined Convolutional Neural Network (CNN) approach, a sequential approach that processes the image followed by clinical data. In this method, the CNN is first pre-trained with images, that is, input images are processed through the convolution and subsampling layers. Just before the final fully connected (decision) layer, other clinical data parameters are concatenated as an additional vector. The combined dataset (pre-processed image and additional data vector) is fed into the decision layer.
[00011] This approach requires modification of the ML model involving significant programming effort. Frequently, the entire model has to be recreated, which is akin to the proverbial reinventing of the wheel. Further, every time a data parameter is to be added or deleted, the program needs to be tweaked, entailing time as well as effort. This approach also restricts the building, training, and deployment of such models to AI/ML programming experts, limiting usability for others (for example, medical professionals).
SUMMARY
[00012] This summary is provided to introduce a selection of concepts in a simplified form that are further described in the detailed description of the disclosure. This summary is not intended to identify key or essential inventive concepts of the subject matter, nor is it intended to determine the scope of the disclosure.
[00013] To overcome or mitigate at least one of the problems in the state of the art, a device and a method are needed for assessing a subject for Down Syndrome which do not compromise the quality of training of the AI or ML models implemented for the assessment, and thereby the overall accuracy of the result. Existing models are based on only one type of input data, for example, either image features or data associated with the subject. Such models compromise the accuracy of the result and are inefficient. It is thus preferable to have a combination of image classification Convolutional Neural Networks (CNNs) and data inputs associated with the subject for assessing the subject for Down Syndrome.
[00014] Briefly, according to an exemplary embodiment, a device for assessing a subject for Down Syndrome is disclosed. The device comprises: a processor with a memory, the memory storing a plurality of modules configured for assessing the subject for Down Syndrome from an image of the subject and one or more data associated with the subject, wherein the modules are: a codifying module configured for codifying each of the one or more data associated with the subject; an implanting module for implanting the codified data in one or more channels of an image data file for creating a hybrid image file containing both the image data and the codified data; a determining module configured for determining a score for a possibility of the subject having Down Syndrome by processing the hybrid image file, wherein the determining module is configured as a convolutional neural network trained a priori using hybrid images of a plurality of subjects; and an output module for outputting the score determined by the determining module.
[00015] Briefly, according to an exemplary embodiment, a method for creating a hybrid image of a subject is disclosed. The method includes the steps of: converting the image to a predefined image file format; filtering out non-frontal face images; resizing the image to a predefined size; and processing the image for enabling the implanting module to implant codified data associated with the subject in one or more channels of the image file, wherein the one or more channels are additional channels created, or channels made free, in the image.
[00016] Briefly, according to an exemplary embodiment, a method for training a convolutional neural network model for determining a score for the possibility of a subject having Down Syndrome based on a hybrid image of the subject is disclosed. The method includes the steps of: creating a hybrid image file for each of a number of subjects, each of whom is independently confirmed as having Down Syndrome; labelling each of those hybrid files as Down Syndrome Positive; creating a hybrid image file for each of a number of subjects, each of whom is independently confirmed as not having Down Syndrome; labelling each of those as Down Syndrome Negative; and providing the convolutional neural network model with all the hybrid image files labelled Down Syndrome Positive and all the hybrid image files labelled Down Syndrome Negative as training sample data.
[00017] The summary above is illustrative only and is not intended to be in any way limiting. Further aspects, exemplary embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE FIGURES
[00018] These and other features, aspects, and advantages of the exemplary embodiments can be better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
[00019] FIG. 1 illustrates a device for assessing a subject for Down Syndrome, according to an embodiment of the present disclosure;
[00020] FIG. 2 illustrates an example representation of key facial characteristics for Down Syndrome detection through a subject's digital image;
[00021] FIG. 3 is a flow chart illustrating a method for creating a dataset for training a model for assessing a subject for Down Syndrome, implemented according to an embodiment of the present disclosure;
[00022] FIG. 4 is a flow chart illustrating a method for training a convolutional neural network model for determining a score for the possibility of a subject having Down Syndrome based on a hybrid image of the subject, implemented according to an embodiment of the present disclosure; and
[00023] FIG. 5 illustrates a block diagram of an electronic device, implemented according to an embodiment of the present disclosure.
[00024] Further, skilled artisans will appreciate that elements in the figures are illustrated for simplicity and may not have necessarily been drawn to scale. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the figures by conventional symbols, and the figures may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the figures with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
DETAILED DESCRIPTION
[00025] For the purpose of promoting an understanding of the principles of the invention, reference will now be made to the embodiments illustrated in the figures and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended, such alterations and further modifications in the illustrated system, and such further applications of the principles of the invention as illustrated therein being contemplated as would normally occur to one skilled in the art to which the invention relates.
[00026] It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the invention and are not intended to be restrictive thereof.
[00027] The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not comprise only those steps but may comprise other steps not expressly listed or inherent to such process or method. Similarly, one or more devices or sub-systems or elements or structures or components preceded by "comprises . . . a" does not, without more constraints, preclude the existence of other devices or other sub-systems or other elements or other structures or other components or additional devices or additional sub-systems or additional elements or additional structures or additional components. Appearances of the phrase "in an embodiment", "in another embodiment", and similar language throughout this specification may, but do not necessarily, refer to the same embodiment.
[00028] Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The device, methods, and examples provided herein are illustrative only and not intended to be limiting.
[00029] In addition to the illustrative aspects, exemplary embodiments, and features described above, further aspects, exemplary embodiments of the present disclosure will become apparent by reference to the drawings and the following detailed description.
[00030] Embodiments of the present disclosure particularly disclose a device for assessing a subject for Down Syndrome, the device comprising: a processor with a memory, the memory storing a plurality of modules configured for assessing the subject for Down Syndrome from an image of the subject and one or more data associated with the subject, wherein the modules are: a codifying module configured for codifying each of the one or more data associated with the subject; an implanting module for implanting the codified data in one or more channels of an image data file for creating a hybrid image file containing both the image data and the codified data; and a determining module configured for determining a score for a possibility of the subject having Down Syndrome by processing the hybrid image file, wherein the determining module is configured as a convolutional neural network trained a priori using hybrid images of a plurality of subjects; and an output module for outputting the score determined by the determining module.
[00031] The implanting may take multiple forms depending on the number of bits of codified data that need to be implanted. One exemplary way of implanting the data is to implant all the data sequentially in one or more bytes in the channels of each pixel, repeated throughout the whole image data file. Another exemplary method may be to form a tile of, say, eight pixels by eight pixels and implant each codified data item into one of the channels of the pixels in the tile in a predefined order. Such a tile could be the first eight pixels of the first eight columns and rows of pixels of the image. However, it is also possible to repeat the tile throughout the image file. Thus, it is a matter of convenience or even preference. Suffice to say that all such variations are deemed to fall within the scope of this disclosure, since the critical idea is to implant data related to the subject as a part of the image file so that the CNN processes or analyses the data as if it is a part of the image data. Further, it is deemed obvious that the image is one of a digital photographic image of the face of the subject or an ultrasound image of the subject in the prenatal stage.
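The tile-based variant described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name, the 8 × 8 tile size, and the row-major fill order are assumptions chosen for clarity.

```python
import numpy as np

def implant_tile(image, codified, tile=8):
    """Implant codified data bytes into the top-left tile x tile pixels.

    `image` is an H x W x C uint8 array; each codified byte overwrites
    one channel value of one pixel, filling channels first, then
    columns, then rows, in a predefined (here: row-major) order.
    A sketch of one possible implanting scheme, not the only one.
    """
    hybrid = image.copy()
    # Every (row, col, channel) slot available inside the tile.
    slots = [(r, c, ch)
             for r in range(tile)
             for c in range(tile)
             for ch in range(hybrid.shape[2])]
    if len(codified) > len(slots):
        raise ValueError("too many data bytes for one tile")
    for value, (r, c, ch) in zip(codified, slots):
        hybrid[r, c, ch] = value
    return hybrid

# Example: implant three codified bytes into a blank 64 x 64 RGB image.
img = np.zeros((64, 64, 3), dtype=np.uint8)
hybrid = implant_tile(img, [1, 2, 3])
```

Because the data lands in fixed pixel positions, the CNN simply sees it as part of the image, which is the key point of the hybrid-file idea.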
[00032] It is to be noted that the modules of the present disclosure are scalable and not limited to detection of Down Syndrome in a subject. By providing appropriate training datasets, the device may be used to detect other diseases as well, for example, skin cancer. Furthermore, it is to be noted that the modules of the present disclosure are configured for early detection of Down Syndrome not only in new-born infants and prenatal babies, but can also be implemented for older children without any age limitation by providing appropriate training datasets.
[00033] The device and method as disclosed herein are configured for assessing a subject for Down Syndrome. The device and method as disclosed are configured for Down Syndrome detection by implementing an AI/ML based approach for graphical analysis of facial dysmorphic features in children and combining the analyzed features with the data associated with the children. The device and method as disclosed are also configured for Down Syndrome detection by implementing an AI/ML based approach for graphical analysis of Down Syndrome markers in prenatal babies and combining the analyzed markers with the data associated with the prenatal babies.
[00034] In some embodiments, the word ‘subject’, ‘child’, ‘children’ and ‘individual’ used in the description may reflect the same meaning and may be used interchangeably. In some embodiments, the word ‘image’ used in the description may be a digital photographic image of a face of an infant or a child. In some embodiments, the word ‘image’ used in the description may be an ultrasound image of a subject (for example, a prenatal stage of a baby) obtained from an ultrasound scanner.
[00035] Embodiments of the present invention will be described below in detail with reference to the accompanying figures.
[00036] FIG. 1 is a block diagram of a device 100 illustrating components configured for assessing a subject for Down Syndrome, according to an embodiment of the present disclosure. In particular, FIG. 1 illustrates an input image, which may be a digital photographic image 106 or an ultrasound image 106, one or more data 108 associated with the subject, a processor with or in communication with a memory (not shown), a measurement module 120, a codifying module 122, an implanting module 124, a determining module 126, an output module 128 and an assessed result 130. It is to be noted that the device 100 may be a user device such as a mobile device, for example a smartphone or tablet and the like. Each block is explained in detail further below.
[00037] It is to be noted that the digital photographic image 106 may be a frontal image of the subject. However, it is equally possible that the image is a profile image of the subject. Each type of image may have its own advantage. For example, the frontal image may be more suitable for revealing the distance between the inner canthi of the eyes but not so suitable for assessing the angle of the bridge of the nose. In contrast, the profile image may be more suitable for measuring the angle of slant of the eyes and the angle of the bridge of the nose, but cannot be used to measure the distance between the inner canthi of the eyes at all. Thus, it is ensured that the training dataset includes both frontal images and profile images. The model learns to distinguish the two types of images and perform the assessment.
[00038] Further, in the case of the ultrasound images, the images are of what is commonly referred to as the fetal position, that is, the profile of the whole of the fetus, with the knees drawn close to the chest. Suffice to say that a person skilled in the art in the field to which this disclosure belongs understands this.
[00039] Examples of user devices include, but are not limited to, a mobile phone, a computer, a tablet, a laptop, a palmtop, a handheld device, a telecommunication device, a personal digital assistant (PDA), and the like. The disclosed device may also be configured with a computing device such as a desktop or a laptop for example. It may also be configured as a standalone special purpose device meant for the purpose of assessing subjects for Down Syndrome.
[00040] The device 100 is configured for assessing the subject for Down Syndrome. The device 100 is configured for assessing the subject for Down Syndrome by combination of an image 106 of the subject and the one or more data 108 associated with the subject.
[00041] In one example, the image 106 is one of a digital photographic image of the face of the subject – frontal or profile - and an ultrasound image of the subject in the subject’s prenatal stage.
[00042] The device 100 is configured for postnatal assessment of the subject for Down Syndrome by combining the image data of the digital photographic image 106 and the one or more data 108 associated with the subject. In one example, the digital photographic image 106 of the subject may be obtained either by the camera present on the user device 104, or from any other source.
[00043] The device 100 may also be configured for prenatal assessment of the subject for Down Syndrome by combining the data retrieved from an ultrasound image 106 and the one or more data 108 associated with the subject. It is to be noted that the subject in this case is a fetus; but for ease of description and for the purpose of brevity, it will be referred to only as a subject, and the term "subject", when associated with an ultrasound image, must be assumed to be a fetus, the assessment being prenatal assessment. The Down Syndrome markers are retrieved from the ultrasound image 106 of the subject. In one example, Down Syndrome markers such as nuchal translucency, skin folds, thickness of the neck, and angle of the nose bridge may be retrieved from an ultrasound of the subject.
[00044] The codifying module 122 is configured for codifying each of the one or more data 108 associated with the subject. The one or more data 108 associated with the subject may include, but are not limited to, the age and sex of the subject, the age of the subject's mother at the estimated date of conception, data associated with the ethnicity of the subject's biological mother, data associated with the ethnicity of the subject's biological father, one or more elements of the subject's clinical data, and pathological data of the subject. In one example, the clinical data may comprise temperature, weight, pulse rate, etc., and the pathological data may comprise data determined by tests in a pathological laboratory, such as results of measurements of the constituents of the subject's blood. In one example, the ethnicity of the subject may include details associated with the origin of the subject, the subject's biological mother and the subject's biological father. For example, details such as Caucasian, Indian, East Asian, African, Southeast Asian and such data may be captured.
[00045] The implanting module 124 is configured for implanting the codified data in one or more channels of an image data file for creating a hybrid image file containing both the image data and the codified data. The steps for creating the hybrid image file include: converting the image to a predefined image file format, resizing the image to a predefined size, and processing the image for enabling the implanting module to implant codified data associated with the subject in one or more channels of the image file, wherein the one or more channels are additional channels created, or channels made free, in the image.
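The preprocessing steps above can be sketched as follows. The target size of 128 pixels, the nearest-neighbour resize (a stand-in for a library resize such as Pillow's `Image.resize`), and the choice of appending one empty extra channel are all illustrative assumptions; the disclosure leaves these details open.

```python
import numpy as np

TARGET_SIZE = 128  # hypothetical predefined model input size

def resize_nearest(image, size):
    """Naive nearest-neighbour resize to size x size.

    A minimal stand-in for a proper library resize; used here so the
    sketch stays self-contained.
    """
    h, w = image.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return image[rows][:, cols]

def prepare_image(image):
    """Sketch of the hybrid-file preparation steps: assume the image
    has already been decoded into an H x W x 3 uint8 array (the
    'predefined format'), resize it to the predefined size, and append
    an empty fourth channel to receive the codified data."""
    resized = resize_nearest(image, TARGET_SIZE)
    extra = np.zeros((TARGET_SIZE, TARGET_SIZE, 1), dtype=resized.dtype)
    return np.concatenate([resized, extra], axis=2)

# Example: a 200 x 300 RGB image becomes a 128 x 128 x 4 array.
prepared = prepare_image(np.zeros((200, 300, 3), dtype=np.uint8))
```

The non-frontal-face filtering step mentioned in the summary is omitted here, as it would require a face-pose detector beyond the scope of this sketch.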
[00046] The term "channels" used herein can refer to the channels providing the RGB values in each pixel of the image. Digital color images are made of pixels, and pixels are made of combinations of primary colors represented by a series of values. For example, there may be three channels in an RGB image: the red channel, the green channel, and the blue channel. Each of the channels in each pixel represents the intensity of the color that constitutes that pixel. To clarify, most current image formats use three or four channels to capture various image parameters such as color, hue, intensity, transparency, etc. If the image is reduced to a black and white image, these channels are freed and may be used to implant the codified data. The implanting module 124 is configured for implanting the codified data in one or more available channels of the image data file.
[00047] If the image format allows flexibility in the number of channels, the codified data is implanted and added as one or more additional channels. In one example, each additional codified data item can be an additional channel (of 8 bits). In another example, the codified data may be implanted into fewer than 8 bits. For example, the sex can have only two values, Male or Female, and can be encoded using only 1 bit that can be either 0 or 1. This way, one additional channel can capture multiple codified data items.
[00048] If the image format does not allow flexibility in the number of channels, then the existing channels are freed either partly or fully to make space for implanting the codified data.
[00049] In one example, the color images can be translated to grayscale, thereby freeing up two channels. In another example, the color images can be compressed. For example, some bits from the blue channel can be used for implantation of the codified data.
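The grayscale variant can be sketched as below. The function name and the choice of putting the luma in channel 0 are assumptions; the luma weights are the standard ITU-R BT.601 coefficients, one common grayscale conversion.

```python
import numpy as np

def free_channels_via_grayscale(rgb):
    """Collapse an RGB image to its luma in channel 0, freeing
    channels 1 and 2 to receive codified subject data.

    A sketch of the channel-freeing idea described above; the
    implanting of data into the freed channels happens separately.
    """
    # Standard ITU-R BT.601 luma weights for grayscale conversion.
    gray = (0.299 * rgb[..., 0]
            + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2]).astype(np.uint8)
    freed = np.zeros_like(rgb)
    freed[..., 0] = gray  # image content survives in one channel
    # freed[..., 1] and freed[..., 2] are now empty and available
    return freed

# Example: a uniform mid-gray image keeps its brightness in channel 0.
out = free_channels_via_grayscale(np.full((4, 4, 3), 200, dtype=np.uint8))
```

The file keeps its original three-channel layout, so standard image tooling and the CNN input pipeline need no changes.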
[00050] The determining module 126 is configured for determining a score for a possibility of the subject having Down Syndrome by processing the combined 'image data and codified data'. The image data includes the data retrieved from the digital photographic image 106. The determining module 126 is configured as a convolutional neural network trained a priori to determine the score using images of a plurality of subjects and the codified data associated with each image of the plurality of subjects. The hybrid image file is analysed by the determining module 126 for one or more factors from a set of factors comprising, but not limited to, nuchal translucency, presence of skin folds, thickness of the neck, angle of the nose bridge, distance between the inner canthi of the eyes of the subject, and any other parameter decided by the determining module. Additionally, the hybrid image file is analysed by the determining module 126 for factors including, but not limited to, one or more physical parameters such as slanting eyes, small chin, round face, flat nasal bridge, Brushfield spots in the iris, abnormal outer ears, etc. In one example, the determining module 126 is trained with n hybrid image files of known Down Syndrome Positive subjects and m hybrid files of known Down Syndrome Negative subjects, where m and n are kept substantially equal. The CNN model learns on its own the factors that indicate Down Syndrome Positive or Down Syndrome Negative, based on the factors listed above, and may also determine factors not known to human medical professionals.
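Assembling the balanced, labelled training set described above ("m and n substantially equal") can be sketched as follows. The function name, the 10% balance tolerance, and the 1/0 label encoding are illustrative assumptions; the actual CNN training step is out of scope here.

```python
import random

def build_training_set(pos_files, neg_files, tolerance=0.1):
    """Assemble a balanced labelled training set of hybrid image files.

    Positive samples are labelled 1 (Down Syndrome Positive), negative
    samples 0. The larger class is trimmed so the two counts stay
    within a `tolerance` fraction of each other, keeping m and n
    substantially equal as described above.
    """
    k = min(len(pos_files), len(neg_files))
    limit = int(k * (1 + tolerance))  # allow a small class imbalance
    pos = pos_files[:limit]
    neg = neg_files[:limit]
    samples = pos + neg
    labels = [1] * len(pos) + [0] * len(neg)
    # Shuffle samples and labels together before training.
    order = list(range(len(samples)))
    random.shuffle(order)
    return [samples[i] for i in order], [labels[i] for i in order]

# Example: 100 positive and 120 negative files yield a 100/110 split.
samples, labels = build_training_set(
    [f"pos_{i}.png" for i in range(100)],
    [f"neg_{i}.png" for i in range(120)])
```

The shuffled (sample, label) pairs would then be fed to any standard CNN training loop as the labelled dataset.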
[00051] It is to be noted that care is to be taken that the training dataset comprises as many samples as possible representing each of the variables in the data connected with the subjects. That means the training datasets include samples from the different age groups of the biological mothers, and similarly for the race or ethnicity of the biological parents, and so on, to provide enough data for the model to learn and also to eliminate biases in the model. The training dataset may be vetted by data scientists as well to eliminate unanticipated weightage or bias.
[00052] In one example, a model trained in the determining module 126 defines a codification chart, based on which the score is determined. The number of bits for each data item can vary; the following provides just one example.
[00053] For example, a first bit captures sex: Male = 0, Female = 1;
[00054] The next three bits capture race or ethnicity: for example, Indian = 000, East Asian = 001, South-East Asian = 010, Germanic = 011, Anglo-Saxon = 100, North African = 101, South African = 110, Others = 111;
[00055] The subsequent two bits may capture the mother's age at the estimated date of conception: for example, less than 25 = 00; between 25 and 30 = 01; between 30 and 35 = 10; over 35 = 11; and so on, where 00, 01, 10, and 11 are binary representations of the numbers 0 to 3 in the decimal system.
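The codification chart above can be packed into a single byte as follows. This is a sketch of the example layout only (1 bit for sex, 3 bits for ethnicity, 2 bits for the mother's age band); the function names and the exact bit positions are illustrative assumptions.

```python
# Ethnicity codes from the example chart above (3 bits).
ETHNICITY = {
    "Indian": 0b000, "East Asian": 0b001, "South-East Asian": 0b010,
    "Germanic": 0b011, "Anglo-Saxon": 0b100, "North African": 0b101,
    "South African": 0b110, "Others": 0b111,
}

def mother_age_band(age):
    """Two-bit band for the mother's age at the estimated conception."""
    if age < 25:
        return 0b00
    if age < 30:
        return 0b01
    if age < 35:
        return 0b10
    return 0b11

def codify(sex_female, ethnicity, mother_age):
    """Pack the three data items into one 8-bit channel value."""
    bits = int(sex_female)                     # bit 0: Male=0, Female=1
    bits |= ETHNICITY[ethnicity] << 1          # bits 1-3: ethnicity
    bits |= mother_age_band(mother_age) << 4   # bits 4-5: age band
    return bits

# Example: Female, East Asian, mother aged 32 at conception.
value = codify(True, "East Asian", 32)  # 0b100011 = 35
```

The resulting byte is exactly the kind of value the implanting module writes into a freed or additional image channel.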
[00056] The output module 128 is configured for outputting an assessed result 130 based on the determined score for assessing the subject for Down Syndrome. It is to be noted that the determined score may take many forms. For example, the assessment result could simply be Down Syndrome Positive or Down Syndrome Negative. It may also be on a scale of 0 to 10, or any other range, wherein 0 means Down Syndrome Negative, 10 means Down Syndrome Positive, and all other values in between indicate how likely it is that the subject is Down Syndrome Positive or Down Syndrome Negative. The assessment result may also be configured to be expressed as a probability varying between 0 (Down Syndrome Negative) and 1 (Down Syndrome Positive).
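The three output forms described above can be sketched with a single mapping function. The 0.5 decision threshold, the style names, and the rounding choices are illustrative assumptions, not part of the disclosure.

```python
def format_result(probability, style="binary"):
    """Map the model's probability to one of the output formats
    described above: raw probability, a 0-10 scale, or a binary
    Positive/Negative assessment (assumed 0.5 threshold)."""
    if style == "probability":
        return round(probability, 3)
    if style == "scale":
        return round(probability * 10)  # 0 = Negative, 10 = Positive
    return ("Down Syndrome Positive" if probability >= 0.5
            else "Down Syndrome Negative")

# Example: the same model score rendered three ways.
score = 0.87
binary = format_result(score)                 # "Down Syndrome Positive"
scale = format_result(score, "scale")         # 9
prob = format_result(score, "probability")    # 0.87
```

Keeping the mapping in the output module, rather than in the CNN, lets the same trained model serve all three presentation styles.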
[00057] The device 100 is configured to be implemented with an image classification CNN model that can be embedded inside a mobile or web application, or both, to be used directly by a user. The mobile or web application serves as a tool for resource optimization, rather than universal testing, where resources can be deployed to confirm the detection of Down Syndrome and treat high-risk children as early as possible. The device 100 is configured to identify high-risk children and recommend them to hospitals for confirmation of diagnosis and treatment. The device 100 can be used as a supplementary tool for assisting medical and health care professionals in the early detection of Down Syndrome. It is to be noted that the device 100 can be used, as noted earlier, at pre-natal diagnostic centers to work with ultrasound scans as well.
[00058] FIG. 2 illustrates an example representation of key facial characteristics used for Down Syndrome detection from the subject's digital photographic image.
[00059] Referring to FIG. 2, one or more physical parameters such as slanting eyes, a small chin, a round face, a flat nasal bridge, Brushfield spots in the iris, abnormal outer ears, etc. are retrieved and measured by the CNN module. The determining module 126 is configured for determining a score for the possibility of the subject having Down Syndrome by processing the hybrid image data file comprising both the image data and the codified data. The image data includes the data retrieved from the digital photographic image 106.
[00060] FIG. 3 is a flow chart illustrating a method 300 for creating a dataset for training a CNN model for assessing a subject for Down Syndrome, implemented according to an embodiment of the present disclosure. FIG. 3 may be described from the perspective of a processor (not shown) that is configured for executing computer readable instructions stored in a memory to carry out the functions of the modules (described in FIG. 1) of the device 100. In particular, the steps described in FIG. 3 may be executed for assessing a subject for Down Syndrome by processing a hybrid image file obtained from the image of the subject and the codified data associated with the subject. The trained model is created for Down Syndrome detection by implementing an AI/ML based approach either for graphical analysis of facial dysmorphic features in new-born children, or for graphical analysis of markers in an ultrasound image, and combining the analysed features or markers with the codified data associated with the subjects. Each step is described in detail below. It is to be noted that the trained model is implemented for an input image which may be (a) a facial image or (b) an ultrasound image.
[00061] At step 302, images of a plurality of subjects and data associated with each image of the plurality of subjects are obtained. In one example, each obtained image may be one of a digital photographic image of the face of the subject and an ultrasound image of the subject.
[00062] At step 306, the one or more data associated with each of the plurality of subjects are codified. The one or more data associated with each of the plurality of subjects may include, but are not limited to, the age and sex of the subject, the age of the subject's mother at the estimated date of conception, data associated with the ethnicity of the subject's biological mother, data associated with the ethnicity of the subject's biological father, and one or more elements of clinical data and pathological data of the subject. In one example, the clinical data may be temperature, weight, pulse rate, etc., and the pathological data may comprise data determined by tests in a pathological laboratory, such as measurements of blood constituents. In one example, the ethnicity of the subject may include details associated with the origin of the subject, the subject's biological mother, and the subject's biological father; for example, details such as Caucasian, Indian, East Asian, African, Southeast Asian, and similar data may be captured. In one example, the elements of the clinical data may include details obtained from the blood reports of the subject, values of blood serum report parameters, and any similar data.
[00063] At step 308, the images of the plurality of subjects are processed for creating an image data file. The steps for processing the images include image format conversion, image pre-processing, and image processing for channel addition for creating the dataset. In one example, image manipulation tools available in the state of the art are used for image format conversion, image pre-processing, and image processing for channel addition.
[00064] In one example of image format conversion, the obtained images are converted to JPEG (file name extension .jpg) or to a bitmap image (file name extension .bmp) format. This applies to ultrasound images as well: one of the multiple formats that may be used for an ultrasound image is the DICOM format, which shall also be converted to JPEG, or any other predefined format, before being fed to the model either at the time of training or when the model is expected to assess a subject for Down Syndrome. In another example, image pre-processing includes the steps of:
a) resizing the image so that the face covers a certain proportion (say 75%) of the image frame;
b) colour saturation or lighting correction; and
c) background colour standardisation.
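The pre-processing steps listed above can be sketched with the Pillow imaging library, an assumed choice; the disclosure only requires state-of-the-art image manipulation tools. Resizing so that the face covers a set proportion of the frame would in practice need a face detector, so step (a) here is reduced to a plain resize.

```python
# Minimal sketch of the pre-processing in [00064], assuming Pillow.
from PIL import Image, ImageEnhance

def preprocess(path: str, out_path: str, size=(224, 224)) -> None:
    img = Image.open(path).convert("RGB")       # normalise colour mode
    # Step (a): resize. True face-proportion framing (face covering ~75%
    # of the frame) would need a face detector, omitted from this sketch.
    img = img.resize(size)
    # Step (b): colour saturation correction. Factor 1.0 is a no-op
    # placeholder; a real pipeline would estimate the correction factor.
    img = ImageEnhance.Color(img).enhance(1.0)
    # Step (c), background standardisation, is omitted for brevity.
    img.save(out_path, format="JPEG")           # format conversion to .jpg
```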
[00065] In one example of image processing, the following steps for channel addition are performed:
d) In case of channel addition:
i. Addition of one or more channels;
ii. Adding codified data into the added channel;
e) In case of no channel addition:
i. Image compression (using standard compression tools known in the state of art)
ii. Adding codified data into the freed-up space after compression
OR
iii. Image conversion from colour to grayscale (using standard tools)
iv. Adding codified data into the freed-up channel.
[00066] The codified data is now a part of the hybrid image file.
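The channel-addition branch above can be sketched with NumPy: a fourth channel is appended to an RGB image array and the codified record is written into its first bytes. The array shapes, the 32-bit record width, and the byte layout are illustrative assumptions, not the patented encoding.

```python
import numpy as np

def implant(image_rgb: np.ndarray, codified: int) -> np.ndarray:
    """Append a fourth channel to an H x W x 3 uint8 image and write the
    codified data into its first bytes, per the channel-addition branch
    of [00065]. The 32-bit record and byte layout are assumptions."""
    h, w, _ = image_rgb.shape
    extra = np.zeros((h, w), dtype=np.uint8)
    payload = codified.to_bytes(4, "big")       # 32-bit codified record
    extra.flat[: len(payload)] = list(payload)  # implant into the channel
    return np.dstack([image_rgb, extra])        # hybrid H x W x 4 array

img = np.zeros((8, 8, 3), dtype=np.uint8)
hybrid = implant(img, 0b00110)
print(hybrid.shape)  # -> (8, 8, 4)
```

A CNN reading the hybrid array simply sees a fourth channel; it has no way to distinguish the implanted bytes from ordinary pixel data, which is the effect described in [00067].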
[00067] At step 310, a dataset is created using the hybrid image files created for each of the plurality of subjects. This includes labelling each of the images as one of Down Syndrome Negative and Down Syndrome Positive. The hybrid image files of each of the plurality of subjects are fed to the CNN model. CNNs are designed to process data captured in image channels, and the images have been modified to include the associated data in one or more channels. The CNN module is unaware of this manipulation and treats the data associated with the subject as an image feature itself.
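Building the labelled dataset of step 310 can be sketched as pairing each hybrid image file with a binary label. The `positive/` and `negative/` directory convention, the `.png` extension, and the 1/0 label encoding are assumptions for illustration.

```python
from pathlib import Path

def build_dataset(root: str):
    """Pair each hybrid image file with a binary label, per step 310:
    files under 'positive/' are Down Syndrome Positive (1), files under
    'negative/' are Down Syndrome Negative (0). The directory layout is
    an illustrative assumption."""
    samples = []
    for label_dir, label in (("positive", 1), ("negative", 0)):
        for f in sorted(Path(root, label_dir).glob("*.png")):
            samples.append((str(f), label))
    return samples
```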
[00068] At step 312, the CNN is trained using the created dataset. In one example, the CNN works in a three-step process known in the state of the art.
[00069] Model training – a large dataset of hybrid images, each containing both image data and implanted codified data (such as clinical data), along with a classification (one of Down Syndrome Positive and Down Syndrome Negative), is fed to the model for training. The CNN model identifies the different image parameters that are important for image classification. The model is programmed to undertake complicated mathematical transformations on the input data to achieve this objective.
[00070] Model testing – the CNN itself may take a subset of the image dataset fed above to test the model, i.e. how accurately it is able to predict results. In this phase too, it is preferable to test the model with hybrid images, or images and data, representing all the variables in the data.
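The held-out testing subset mentioned above can be sketched framework-agnostically as a shuffled split of the hybrid-image dataset. The 80/20 proportion and the fixed seed are illustrative assumptions; the disclosure does not prescribe a split ratio.

```python
import random

def split_dataset(samples, test_fraction=0.2, seed=42):
    """Hold out a subset of the hybrid-image dataset for model testing,
    as described in [00070]. The 80/20 split is an assumption."""
    rng = random.Random(seed)
    shuffled = samples[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]   # (train, test)

train, test = split_dataset([(f"img{i}.png", i % 2) for i in range(100)])
print(len(train), len(test))  # -> 80 20
```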
[00071] It is to be noted that the trained model as disclosed may also include a feedback loop. Each time a subject is assessed using the trained model, the training dataset is updated by appending the hybrid image file of the subject to the training dataset, and the model is trained again with the updated training dataset. This has the following further advantage. It may not be possible, at the time of training the model, to provide training data representing all the possible variables in the data, such as the different age groups of the biological mothers, the race or ethnicity of the biological parents, and so on. Setting up this feedback loop ensures that, over time, the model is likely to get more and more samples representing all the variables, and the model becomes more robust.
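The feedback loop described above amounts to appending each newly assessed hybrid file to the training set and retraining. In this sketch, `predict` and `retrain` are assumed stand-ins for the CNN's inference and training routines; in practice the appended sample would carry its independently confirmed label rather than the raw score.

```python
def assess_with_feedback(model, dataset, hybrid_file, predict, retrain):
    """Score a new subject, append its hybrid file to the training set,
    and retrain, per [00071]. `predict` and `retrain` are assumed
    callables standing in for the CNN's inference and training steps."""
    score = predict(model, hybrid_file)
    dataset.append((hybrid_file, score))   # feedback: grow the dataset
    model = retrain(model, dataset)        # retrain on the updated set
    return model, score

ds = []
model, score = assess_with_feedback(
    model=None, dataset=ds, hybrid_file="subject1.png",
    predict=lambda m, f: 0.9, retrain=lambda m, d: m)
print(len(ds), score)  # -> 1 0.9
```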
[00072] Model deployment – if the testing results are satisfactory, the model is deployed on any processing device (server, laptop, mobile, etc.). It may be noted here that the method of assessing is not deemed to be a method of diagnosis to replace a human medical expert, for example a paediatrician, but a method of assisting such a professional to make more informed decisions. As mentioned earlier, there is a paucity of such trained medical experts in many of the developing and underdeveloped regions of the world. Thus, a general medical practitioner equipped with a device, say a smartphone, hosting the disclosed model takes a picture of a baby on that smartphone and uses the model to suggest the future course of action: to take no action regarding Down Syndrome with reference to the subject, or to recommend to the concerned, for example the parents of the subject, to get further tests done or consult an expert. This is especially helpful in cases where the general medical practitioner already suspects Down Syndrome in a subject and wants to decide with greater confidence.
[00073] Embodiments of the present disclosure include modules configured for post-natal Down Syndrome detection using facial photographs of subjects across, but not limited to, age groups (0-2, 2-5, 5-10, 10+), gender (M/F), ethnicity or race (Caucasian, Indian, East Asian, African, South-East Asian, etc.), mother's age at the estimated time of conception, and so on.
[00074] Embodiments of the present disclosure include modules configured for pre-natal Down Syndrome detection from, but not limited to, ultrasound scans, blood serum report parameters, mother's age at the estimated time of conception, ethnicity or race, and combinations thereof.
[00075] The technical advancements of the disclosed trained model implemented for assessing the subject for Down Syndrome include, but are not limited to:
1) No need to train multiple models,
2) Larger unified dataset captures common features across each sub-segment leading to higher accuracy,
3) No need to guess which data parameters are important, since all of them can be fed in one go and the model determines which are important,
4) Easily manageable since, any data item (for example, age) can be suppressed or activated easily through data manipulation, thus allowing flexibility to test various hypotheses.
[00076] The technical advancements of the disclosed trained model implemented for assessing the subject for Down Syndrome, with respect to existing methods mentioned in the background, such as the ANN approach, include, but are not limited to:
5) No need to translate image features into data and hence saves time with higher productivity thereby enabling processing of much larger datasets,
6) Less prone to human errors in translating image features into data.
[00077] The technical advancements of the disclosed trained model implemented for assessing the subject for Down Syndrome, with respect to existing methods mentioned in the background, such as the combined CNN approach, include, but are not limited to:
7) No programming effort required on the model, only dataset (image metadata) needs to be manipulated,
8) Standardised ‘black box’ models can be used, designed with hundreds of person-years of effort to yield best results,
9) Adding or deleting clinical parameters only requires manipulation of the dataset, and not the model, and
10) Potentially higher accuracy as the clinical data and image parameters are evaluated together rather than sequentially.
[00078] FIG. 4 is a flow chart illustrating a method 400 for training a convolutional neural network model for determining a score for the possibility of a subject having Down Syndrome based on a hybrid image of the subject, implemented according to an embodiment of the present disclosure. FIG. 4 may be described from the perspective of a processor (not shown) that is configured for executing computer readable instructions stored in a memory to carry out the functions of the modules (described in FIG. 1) of the device 100. In particular, the steps described in FIG. 4 may be executed for assessing a subject for Down Syndrome by processing a hybrid image created from the image of the subject and the codified data associated with the subject. The trained model is created for Down Syndrome detection by implementing an AI/ML based approach either for graphical analysis of facial dysmorphic features in new-born children, or for graphical analysis of markers in an ultrasound image, and combining the analysed features or markers with the codified data associated with the subjects. Each step is described in detail below. It is to be noted that the trained model is implemented for an input image which may be (a) a facial image or (b) an ultrasound scan image.
[00079] At step 402, a hybrid image file is created for each of a number of subjects, each of whom is independently confirmed as having Down Syndrome. At step 404, each of those hybrid files is labelled as Down Syndrome Positive. At step 406, a hybrid image file is created for each of a number of subjects, each of whom is independently confirmed as not having Down Syndrome. At step 408, each of those is labelled as Down Syndrome Negative. At step 410, the convolutional neural network model is provided with all the hybrid image files labelled Down Syndrome Positive and all the hybrid image files labelled Down Syndrome Negative as training sample data.
[00080] The hybrid image file is created using the steps of: converting the image to a predefined image file format; resizing the image to a predefined size; and processing the image for enabling the implanting module to implant codified data associated with the subject in one or more channels of the image file, wherein the one or more channels are additional channels created, or channels made free, in the image.
[00081] FIG. 5 is a block diagram 500 of a computing device utilized for implementing the device 100 of FIG. 1, implemented according to an embodiment of the present disclosure. The modules of the device 100 described herein are implemented in computing devices. The computing device 500 comprises one or more processors 502, one or more computer-readable RAMs 504, and one or more computer-readable ROMs 506 on one or more buses 508.
[00082] Further, the computing device 500 includes a tangible storage device 510 that stores the operating system 520 and the modules existing in the device 100. The various modules of the device 100 can be stored in the tangible storage device 510. Both the operating system and the modules existing in the device 100 are executed by the processor 502 via one or more respective RAMs 504 (which typically include cache memory).
[00083] Examples of the storage device 510 include semiconductor storage devices such as ROM 506, EPROM, flash memory, or any other computer-readable tangible storage device that can store a computer program and digital information. The computing device 500 also includes an R/W drive or interface 514 to read from and write to one or more portable computer-readable tangible storage devices 528 such as a CD-ROM, DVD, memory stick, or semiconductor storage device. Further, network adapters or interfaces 512, such as TCP/IP adapter cards, wireless Wi-Fi interface cards, 3G or 5G wireless interface cards, or other wired or wireless communication links, are also included in the computing device 500. In one embodiment, the modules existing in the device 100 can be downloaded from an external computer via a network (for example, the Internet, a local area network, or other wide area network) and the network adapter or interface 512. The computing device 500 further includes device drivers 516 to interface with input and output devices. The input and output devices may include one or more of a display monitor 518, a keyboard 525, a keypad, a touch screen, a computer mouse 526, and other suitable input devices.
[00084] While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein.
[00085] The figures and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of embodiments is at least as broad as given by the following claims.

I Claim:

1. A device (100) for assessing a subject for Down Syndrome, the device (100) comprising:
a processor with a memory, the memory storing a plurality of modules configured for assessing the subject for Down Syndrome from an image (106) of the subject and one or more data (108) associated with the subject, wherein the modules are:
a codifying module (122) configured for codifying each of the one or more data (108) associated with the subject;
an implanting module (124) for implanting the codified data in one or more channels of an image data file for creating a hybrid image file containing both the image data and the codified data; and
a determining module (126) configured for determining a score for a possibility of the subject having Down Syndrome by processing the hybrid image file, wherein the determining module (126) is configured as a convolutional neural network trained a priori using hybrid images of a plurality of subjects; and
an output module (128) for outputting the score determined by the determining module (126).

2. The device (100) as claimed in claim 1, wherein the image (106) is one of a digital photographic image of the face of the subject and an ultrasound image of the subject in the subject’s prenatal stage.

3. The device (100) as claimed in claim 1, wherein:
the hybrid image file is analysed by the determining module (126) for one or more factors from a set of factors comprising, but not limited to, nuchal translucency, presence of skin folds, thickness of neck, angle of nose bridge, distance between inner canthi of eyes of the subject and any other parameter decided by the determining module (126); or
the hybrid image file is analysed for parameters related to factors including, but not limited to, one or more physical parameters such as slanting eyes, a small chin, a round face, a flat nasal bridge, Brushfield spots in the iris, and abnormal outer ears, which are retrieved and measured.

4. The device (100) as claimed in claim 1, wherein the one or more data (108) associated with the subject include, but not limited to, age of the subject, sex of the subject, age of the subject’s mother at an estimated date of conception, ethnicity of subject’s biological mother, ethnicity of subject’s biological father, one or more elements of clinical data and pathological data of the subject.

5. The device (100) as claimed in claim 1, wherein the device (100) is configured as a mobile device (104).

6. A method for creating a hybrid image of a subject, the method comprising one or more of:
a step of converting the image (106) to a predefined image file format;
a step of resizing the image (106) to a predefined size; and
a step of processing the image (106) for enabling the implanting module (124) to implant codified data associated with the subject in one or more channels of the image file; wherein the one or more channels are additional channels created, or channels made free, in the image.

7. A method (400) for training a convolutional neural network model for determining a score for the possibility of a subject having Down Syndrome based on a hybrid image of the subject, wherein the method comprises the steps of:
a step of creating (402) a hybrid image file for each of a number of subjects, each of whom is independently confirmed as having Down Syndrome;
a step of labelling (404) each of those hybrid files as Down Syndrome Positive;
a step of creating (406) a hybrid image file for each of a number of subjects, each of whom is independently confirmed as not having Down Syndrome;
a step of labelling (408) each of those as Down Syndrome Negative; and
a step of providing (410) the convolutional neural network model with all the hybrid image files labelled Down Syndrome Positive and all the hybrid image files labelled Down Syndrome Negative as a training data set.

8. The method (400) as claimed in claim 7, wherein the convolutional neural network model is a generic machine learning model.

9. The method (400) as claimed in claim 8, wherein the convolutional neural network is selected from a list of convolutional neural networks including, but not limited to, the Google Cloud ML Vision Image Classification Model from Google®, FaceNet512 from Google®, ArcFace from Imperial College London, and DeepFace from Facebook®.

10. The method (400) as claimed in claim 7, wherein each time a subject is assessed using the trained model, the training dataset is updated by appending the hybrid image file of the subject to the training dataset, and the model is trained with the updated training dataset.

Documents

Application Documents

# Name Date
1 202211010815-STATEMENT OF UNDERTAKING (FORM 3) [28-02-2022(online)].pdf 2022-02-28
2 202211010815-REQUEST FOR EARLY PUBLICATION(FORM-9) [28-02-2022(online)].pdf 2022-02-28
3 202211010815-FORM-9 [28-02-2022(online)].pdf 2022-02-28
4 202211010815-FORM 18A [28-02-2022(online)].pdf 2022-02-28
5 202211010815-FORM 1 [28-02-2022(online)].pdf 2022-02-28
6 202211010815-DRAWINGS [28-02-2022(online)].pdf 2022-02-28
7 202211010815-DECLARATION OF INVENTORSHIP (FORM 5) [28-02-2022(online)].pdf 2022-02-28
8 202211010815-COMPLETE SPECIFICATION [28-02-2022(online)].pdf 2022-02-28
9 202211010815-FER.pdf 2022-03-17
10 202211010815-Proof of Right [11-04-2022(online)].pdf 2022-04-11
11 202211010815-FORM-26 [11-04-2022(online)].pdf 2022-04-11
12 202211010815-OTHERS [12-04-2022(online)].pdf 2022-04-12
13 202211010815-FER_SER_REPLY [12-04-2022(online)].pdf 2022-04-12
14 202211010815-CLAIMS [12-04-2022(online)].pdf 2022-04-12
15 202211010815-US(14)-HearingNotice-(HearingDate-11-10-2022).pdf 2022-09-09
16 202211010815-FORM-26 [10-10-2022(online)].pdf 2022-10-10
17 202211010815-Correspondence to notify the Controller [10-10-2022(online)].pdf 2022-10-10
18 202211010815-Written submissions and relevant documents [18-10-2022(online)].pdf 2022-10-18
19 202211010815-PatentCertificate23-12-2022.pdf 2022-12-23
20 202211010815-IntimationOfGrant23-12-2022.pdf 2022-12-23
21 202211010815-Request Letter-Correspondence [04-03-2023(online)].pdf 2023-03-04
22 202211010815-Covering Letter [04-03-2023(online)].pdf 2023-03-04

Search Strategy

1 202211010815E_15-03-2022.pdf

ERegister / Renewals

3rd: 26 Feb 2024

From 28/02/2024 - To 28/02/2025

4th: 27 Feb 2025

From 28/02/2025 - To 28/02/2026