Abstract: A SYSTEM AND A METHOD FOR ALIGNMENT AND POSITIONING OF TEETH. The present disclosure discloses a system (100) for positioning and alignment of teeth. The system (100) comprises a repository (102) configured to store a dataset of digital images, a dataset of trained and test images, a set of rules, and predefined commands. A learning module (104) splits the dataset of digital images into trained and test datasets. An imaging device (106) captures images of the teeth set, including the maxilla and mandible. An extractor module (108) extracts features from the digital image. An alignment module (110) determines the current alignment of teeth. A computing module (114) determines the desired positioning of teeth based on the current alignment of teeth. A building module (116) generates sliders (124) to be applied to the molars to enable the desired positioning of teeth. A display module (118) displays a 3D digital view of the teeth in the desired alignment and position.
Description:
FIELD OF INVENTION
The present disclosure generally relates to the field of teeth alignment and forward positioning of mandible in relation to maxilla. More particularly, the present disclosure relates to the systems for alignment and positioning of teeth and lower jaw.
BACKGROUND
The background information herein below relates to the present disclosure but is not necessarily prior art.
Historically, the correction of a deficient mandible has been done using removable appliances such as the bionator, activator, and twin block, and by fixed appliances such as the Forsus and Herbst appliances. All these appliances are constructed using cold-cure powder and liquid, are fabricated in the lab, and need to be worn full time by the patient. As these appliances are bulky, the patient encounters great difficulty in eating, swallowing, mastication, and maintaining good oral hygiene, thus compromising patient compliance. These bulky appliances are also unaesthetic, leading to a lot of resistance from the patient to wearing them, thus compromising the outcome.
To overcome the aforementioned drawbacks, fixed functional appliances have been used, which include a telescopic device having an open coil spring that needs to be placed on a fully bonded upper and lower arch. However, the telescopic devices are bulky and exert a high force, which leads to frequent breakage and de-bonding of braces and sometimes even to injury and ulceration of the oral tissues. The conventional removable and fixed functional appliances are very unaesthetic and cannot be removed while eating, leading to a lot of discomfort for the patient.
Conventionally, teeth placement can be done manually and in a digital way.
In the manual method, the current teeth position of a patient is assessed by the orthodontist, a dental plaster model of the patient's jaws is created, and the distance required to move the teeth from the current position to the desired position is then calculated. After several attempts, the position of the teeth is fixed. Teeth movement is measured at intervals of two weeks, and the measurement continues until the desired positioning of the teeth is identified and fixed. Based on the result generated by the manual treatment, braces are designed and used by the patient.
In the digital method for teeth placement, a digital image of the patient's current teeth position is captured. A digital model of the dental jaws is created, and by applying a forwarding approach the current position of the teeth is aligned to obtain the desired position of the teeth. Teeth movement is again measured at intervals of two weeks, and the measurement continues until the desired positioning of the teeth is identified and fixed. Based on the result generated by the digital method, braces are designed and used by the patient.
The main limitation of the manual and digital teeth placement procedures is that they are time consuming and require a large amount of material. The manual teeth arrangement step mainly includes creating a dental plaster model of the jaws, cutting off the teeth according to the plaster model, repositioning the teeth to be moved, and so on.
Both the manual and digital methods for designing braces for patients are time-consuming processes. In both processes there is a possibility of error and limited accuracy, which may lead to an inappropriate structure of the braces.
There is, therefore, felt a need for a system and a method for alignment and positioning of teeth, that eliminates the above-mentioned drawbacks.
OBJECTS
Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are as follows:
It is an object of the present disclosure to ameliorate one or more problems of the prior art or to at least provide a useful alternative.
An object of the present disclosure is to provide a system for alignment and positioning of teeth.
Another object of the present disclosure is to provide a system that helps in precise determination of the desired displacement of teeth for alignment and positioning.
Other objects and advantages of the present disclosure will be more apparent from the following description, which is not intended to limit the scope of the present disclosure.
SUMMARY
The present disclosure envisages a system for alignment and positioning of teeth and of the interrelationship of the mandible to the maxilla by forward positioning of the mandible. The system includes a repository, a learning module, an imaging device, an extractor module, an alignment module, an editor module, a computing module, a building module, and a display module.
The repository, the learning module, the imaging device, the extractor module, the alignment module, the editor module, the computing module, the building module, and the display module are implemented using one or more processor(s).
The repository is configured to store a dataset of digital images, a dataset of trained and test images, a set of rules, pre-stored displacement information, and predefined commands.
The learning module is configured to split the trained and test dataset of digital images in accordance with the rules stored in the repository.
The imaging device is configured to capture a digital image of a patient’s teeth set which includes the maxilla and mandible.
The extractor module is configured to receive the digital image from the imaging device, and further configured to extract features from the received digital image.
The alignment module is configured to receive the extracted features from the extractor module, and further configured to determine the current alignment of teeth.
The editor module is configured to edit the rules and commands from the repository.
The computing module is configured to determine the desired positioning of teeth based on the current alignment of teeth.
The building module is configured to generate sliders to be applied to the molars that enable the desired positioning of teeth, and to build a 3D digital view.
The display module is configured to display the built 3D digital view of the aligned and positioned teeth.
In an embodiment, the system comprises a learning module. The learning module includes a training module, a testing module, and a splitter. The training module is configured to train the digital images, and the testing module is configured to test the digital images. The splitter is used to split the trained dataset and test dataset in accordance with the rules specified in the repository.
In an embodiment, the imaging device is selected from the group consisting of a scanner device, an X-ray device, and a 3D image capturing device.
In an embodiment, the extractor module includes a feature reader and a feature extractor, the feature reader is configured to scan and read the features from the digital image received from the imaging device, and the feature extractor is configured to extract features from the digital image and store the extracted features in the repository.
In an embodiment, the alignment module includes a retrieval module, an analyzing module, and a virtual primary module, the retrieval module is configured to retrieve the digital image from the imaging device, analyzing module is configured to scan for teeth sets including maxilla and mandible region and identify the current alignment of teeth, and the virtual primary module is configured to determine current alignment of teeth based on the identified maxilla and mandible region received from the analyzing module.
In an embodiment, the editor module is configured to edit rules and the command from the repository.
In an embodiment, the computing module includes a fetching module, a determination module, a compiling module, a verifier module, a displacement module, and an iterator module. The fetching module is configured to fetch the alignment of teeth from the alignment module, and the determination module is configured to determine the desired positioning of teeth. The compiling module is configured to compile the alignment of teeth with the combination of rules and commands.
The displacement module is configured to calculate the displacement of the desired positioning of teeth in accordance with the rules from the repository. The iterator module is configured to select and change the permutation and combination of rules and commands to obtain alignment and positioning of teeth. The iterator module is further configured to iteratively compare the computed displacement with pre-stored displacements to achieve desired positioning of teeth.
In an embodiment, the building module includes a 3D digital module configured to generate the sliders to be applied onto the upper and lower molars to enable the desired positioning of teeth and to build a 3D digital view of the alignment and positioning of teeth.
The present disclosure also envisages a method for alignment and positioning of teeth. The method comprises the steps of:
• storing, in a repository a dataset of digital images, a dataset of trained and test images, a set of rules, pre-stored displacement information, and predefined commands;
• splitting, by a learning module, the trained and test datasets of digital images;
• capturing, by an imaging device, a digital image of a patient’s maxilla and mandible;
• receiving, by an extractor module, a digital image from the imaging device;
• extracting, by an extractor module, the features from the received digital image;
• receiving, by an alignment module, the extracted features from the extractor module;
• determining, by an alignment module, the current alignment of teeth;
• editing, by an editor module, the rules and commands from the repository;
• determining, by a computing module, the desired positioning of teeth based on the current alignment of teeth;
• verifying, by a verifier module, the displacement of the desired positioning of teeth;
• generating, by a building module, sliders to be applied to the molars to enable the desired positioning of teeth; and
• displaying, by a display unit, a 3D digital view of the aligned and positioned teeth.
The method further comprises the steps of:
• scanning, by a feature reader, the digital image received from the imaging device;
• reading, by the feature reader, the features from the digital image;
• extracting, by a features extractor, the features from the digital image; and
• storing, by the feature extractor, the extracted features in the repository.
The method further comprises the steps of:
• retrieving, by a retrieval module, the digital image from the imaging device;
• scanning, by an analyzing module, the maxilla and mandible region and identifying the current alignment of teeth; and
• determining, by a virtual primary module, the current alignment of teeth based on identified maxilla and mandible region received from the analyzing module.
The method further comprises the steps of:
• fetching, by a fetching module, the current alignment of teeth from the alignment module;
• fetching, by a determination module, the combination of rules and commands from the repository;
• generating, by the determination module, the desired positioning of teeth; and
• compiling, by a compilation module, the desired alignment of teeth with the combination of rules and commands.
The method further comprises the steps of:
• calculating, by a displacement module, the displacement for achieving desired positioning of teeth; and
• iteratively comparing, by an iteration module, the computed displacement with pre-stored displacements to achieve the desired positioning of teeth.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWING
A system and a method for alignment and positioning of teeth of the present disclosure will now be described with the help of the accompanying drawing, in which:
Figure 1 illustrates a block diagram of the system for alignment and positioning of teeth;
Figure 2A to Figure 2F illustrate a flow chart depicting steps involved in a method for alignment and positioning of teeth; and
Figure 3 illustrates a schematic diagram of the alignment and positioning of teeth achieved by sliders.
LIST OF REFERENCE NUMERALS
102 - Repository
104 - Learning Module
104a - Training Module
104b - Testing Module
104c - Splitter
106 - Imaging Device
108 - Extractor Module
108a - Feature Reader
108b - Feature Extractor
110 - Alignment Module
110a - Retrieval Module
110b - Analyzing Module
110c - Virtual Primary Module
112 - Editor Module
114 - Computing Module
114a - Fetching Module
114b - Determination Module
114c - Compiling Module
115 - Verifier Module
115a - Displacement Module
115b - Iterator Module
116 - Building Module
116a - 3D Digital Module
118 - Display module
124 - Sliders
DETAILED DESCRIPTION
Embodiments, of the present disclosure, will now be described with reference to the accompanying drawing.
Embodiments are provided so as to thoroughly and fully convey the scope of the present disclosure to the person skilled in the art. Numerous details are set forth, relating to specific components and methods, to provide a complete understanding of embodiments of the present disclosure. It will be apparent to the person skilled in the art that the details provided in the embodiments should not be construed to limit the scope of the present disclosure. In some embodiments, well-known processes, well-known apparatus structures, and well-known techniques are not described in detail.
The terminology used, in the present disclosure, is only for the purpose of explaining a particular embodiment and such terminology shall not be considered to limit the scope of the present disclosure. As used in the present disclosure, the forms "a,” "an," and "the" may be intended to include the plural forms as well, unless the context clearly suggests otherwise. The terms “including,” and “having,” are open ended transitional phrases and therefore specify the presence of stated features, integers, steps, operations, elements and/or components, but do not forbid the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The particular order of steps disclosed in the method and process of the present disclosure is not to be construed as necessarily requiring their performance as described or illustrated. It is also to be understood that additional or alternative steps may be employed.
When an element is referred to as being “engaged to,” "connected to," or "coupled to" another element, it may be directly engaged, connected or coupled to the other element. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed elements.
The present disclosure envisages a system (herein after referred to as “system 100”) for the desired positioning and alignment of teeth and a method for the desired positioning and alignment of teeth (herein after referred to as “method 200”). The system 100 will now be described with reference to Figure 1 and the method 200 will be described with reference to Figure 2A to Figure 2F.
Referring to Figure 1, the system 100 comprises a repository 102, a learning module 104, an imaging device 106, an extractor module 108, an alignment module 110, an editor module 112, a computing module 114, a building module 116, and a display unit 118.
The repository 102 is used to store a dataset of digital images, a dataset of trained and test images, a set of rules, pre-stored displacement information, and predefined commands.
The learning module 104 includes a training module 104a, a testing module 104b, and a splitter 104c. The training module 104a is configured to train the digital images, and the testing module 104b is configured to test the digital images. The splitter 104c is used to split the trained dataset and test dataset in accordance with the rules specified in the repository 102.
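A minimal sketch of how the splitter 104c could divide a set of digital images into trained and test datasets is given below. It assumes Python, a hypothetical split_dataset helper, and an illustrative 80/20 ratio with a fixed seed standing in for the splitting rules held in the repository 102; none of these specifics come from the specification.

import random
from typing import List, Tuple

def split_dataset(image_paths: List[str], train_fraction: float = 0.8,
                  seed: int = 42) -> Tuple[List[str], List[str]]:
    """Split a list of digital-image paths into a trained set and a test set.

    The 80/20 ratio and the fixed seed are placeholders for the splitting
    rules that the repository 102 would supply."""
    rng = random.Random(seed)
    shuffled = image_paths[:]              # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]  # (trained dataset, test dataset)

# Example: split ten scans held in an assumed repository directory.
if __name__ == "__main__":
    paths = ["repository/scan_%02d.stl" % i for i in range(10)]
    train, test = split_dataset(paths)
    print(len(train), "training images,", len(test), "test images")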
The imaging device 106 is used to capture a digital image of the patient's teeth, including the maxilla and mandible. The digital image format is selected from the group consisting of a 3D image, a scanned image, a virtual image, and a Standard Triangle Language or Standard Tessellation Language (STL) format image. The imaging device 106 may be a scanner device, an X-ray device, a 3D image capturing device, or the like.
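Since the specification names STL as one of the accepted formats, the following sketch shows one plausible way to read the triangles of a binary STL scan of the maxilla or mandible. The read_binary_stl helper is an illustrative assumption and is not part of the disclosed system; it only relies on the standard binary STL layout (80-byte header, uint32 triangle count, 50 bytes per triangle).

import struct
from typing import List, Tuple

Vertex = Tuple[float, float, float]
Triangle = Tuple[Vertex, Vertex, Vertex]

def read_binary_stl(path: str) -> List[Triangle]:
    """Read the triangles of a binary STL file.

    Binary STL layout: an 80-byte header, a little-endian uint32 triangle
    count, then 50 bytes per triangle (normal, three vertices, attribute)."""
    triangles: List[Triangle] = []
    with open(path, "rb") as fh:
        fh.read(80)                                  # header, ignored here
        (count,) = struct.unpack("<I", fh.read(4))
        for _ in range(count):
            values = struct.unpack("<12fH", fh.read(50))
            v0, v1, v2 = values[3:6], values[6:9], values[9:12]  # skip the normal
            triangles.append((v0, v1, v2))
    return triangles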
The extractor module 108 is configured to receive the captured image, and is further configured to extract features from the received digital image. The extractor module 108 includes a feature reader 108a and a feature extractor 108b. The feature reader 108a is configured to scan and read the features from the digital image received from the imaging device 106, and the feature extractor 108b is configured to extract features from the digital image and store the extracted features in the repository 102.
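As a rough illustration of the feature extractor 108b, the sketch below derives a few simple geometric features from the vertices of a scanned arch. The extract_features helper and the particular features (centroid and bounding-box extents) are assumptions made for illustration; the specification does not state which features are extracted. In practice the resulting dictionary would be stored in the repository 102, keyed by patient and arch (maxilla or mandible).

from statistics import mean
from typing import Dict, Iterable, Tuple

Vertex = Tuple[float, float, float]

def extract_features(vertices: Iterable[Vertex]) -> Dict[str, float]:
    """Compute simple geometric features from scanned vertices."""
    xs, ys, zs = zip(*vertices)
    return {
        "centroid_x": mean(xs), "centroid_y": mean(ys), "centroid_z": mean(zs),
        "width":  max(xs) - min(xs),     # transverse extent of the arch
        "depth":  max(ys) - min(ys),     # antero-posterior extent
        "height": max(zs) - min(zs),     # vertical extent
    }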
The alignment module 110 is configured to receive the extracted features and is further configured to determine the current alignment of teeth. The alignment module 110 includes a retrieval module 110a, an analyzing module 110b, and a virtual primary module 110c. The retrieval module 110a retrieves the digital image from the imaging device 106, the analyzing module 110b scans the teeth set including the maxilla and mandible region and identifies the current alignment of teeth, and the virtual primary module 110c determines the current alignment of teeth based on the identified maxilla and mandible region received from the analyzing module 110b.
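A minimal sketch of how the virtual primary module 110c might summarise the current relationship of the mandible to the maxilla from the extracted features is shown below. Using centroid offsets along the three axes as a proxy for the sagittal, vertical, and transverse discrepancy is an assumption made for illustration only; the feature names follow the hypothetical extract_features sketch above.

from typing import Dict

def current_alignment(maxilla: Dict[str, float],
                      mandible: Dict[str, float]) -> Dict[str, float]:
    """Estimate the current maxilla-mandible relationship from feature dicts."""
    return {
        "sagittal_offset":   maxilla["centroid_y"] - mandible["centroid_y"],
        "vertical_offset":   maxilla["centroid_z"] - mandible["centroid_z"],
        "transverse_offset": maxilla["centroid_x"] - mandible["centroid_x"],
    }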
The editor module 112 is configured to edit rules and commands from the repository 102 for computation.
The computing module 114 is configured to determine the desired positioning of teeth based on the current alignment of teeth. The computing module 114 includes a fetching module 114a, a determination module 114b, a compiling module 114c, and a verifier module 115. The fetching module 114a is configured to fetch the alignment of the teeth from the alignment module 110. The determination module 114b is configured to determine the desired positioning of teeth. The compiling module 114c is configured to compile the alignment of teeth with the combination of rules and commands.
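The sketch below illustrates one way the determination module 114b could derive the desired positioning from the current alignment under a rule held in the repository 102. The 2 mm target sagittal offset is an assumed, illustrative rule value; the specification does not define the rules numerically.

from typing import Dict

# Illustrative rule standing in for the rule set in repository 102 (assumed value).
RULES = {"target_sagittal_offset_mm": 2.0}

def desired_positioning(current: Dict[str, float],
                        rules: Dict[str, float] = RULES) -> Dict[str, float]:
    """Derive the mandibular advancement needed to meet the rule-defined target."""
    advance = current["sagittal_offset"] - rules["target_sagittal_offset_mm"]
    return {"mandibular_advancement_mm": advance}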
The verifier module 115 includes a displacement module 115a and an iterator module 115b. The displacement module 115a is configured to calculate the displacement of the desired positioning of teeth in accordance with the rules from the repository 102.
The iterator module 115b is configured to select and change the permutation and combination of rules and commands to obtain the alignment and positioning of teeth. The iterator module 115b is further configured to iteratively compare the computed displacement with pre-stored displacements to achieve the desired positioning of teeth.
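A simplified sketch of the verification step is shown below: the computed displacement is compared iteratively against pre-stored displacements and the closest match is reported. The verify_displacement helper and the 0.5 mm acceptance tolerance are assumptions; the actual acceptance criterion would come from the rules in the repository 102.

from typing import Dict, Iterable, Tuple

def verify_displacement(computed: Dict[str, float],
                        pre_stored: Iterable[Dict[str, float]],
                        tolerance_mm: float = 0.5) -> Tuple[bool, Dict[str, float]]:
    """Compare the computed displacement with pre-stored displacements."""
    best, best_error = None, float("inf")
    for candidate in pre_stored:
        error = abs(candidate["mandibular_advancement_mm"]
                    - computed["mandibular_advancement_mm"])
        if error < best_error:
            best, best_error = candidate, error
    return best_error <= tolerance_mm, best if best is not None else computed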
The building module 116 includes a 3D digital module 116a that is configured to generate the sliders 124 to be applied onto the upper and lower molars to enable the desired positioning of teeth and to build a 3D digital view of the alignment and positioning of teeth.
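As a rough illustration of the 3D digital module 116a, the sketch below turns the verified displacement into a simple slider specification for the left and right molars. The one-slider-per-side layout, the anchor teeth, and the idea that the working length equals the planned advancement are illustrative assumptions; the specification does not define the slider geometry.

from typing import Dict, List

def generate_sliders(displacement: Dict[str, float]) -> List[Dict[str, object]]:
    """Generate a simple slider 124 specification for each side of the arch."""
    advancement = displacement["mandibular_advancement_mm"]
    return [
        {"side": side,
         "upper_anchor": "maxillary first molar",
         "lower_anchor": "mandibular first molar",
         "working_length_mm": round(advancement, 2)}
        for side in ("left", "right")
    ]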
The display module 118 is configured to display the 3D digital view of the alignment and positioning of teeth.
Figure 2A to Figure 2F illustrates a method 200 for the desired positioning and alignment of teeth, the method 200 comprises the following steps:
At Step 202, storing, in a repository 102 a dataset of digital images, a dataset of trained and test images, a set of rules, pre-stored displacement information, and predefined commands;
At Step 204, splitting, by a learning module 104, the trained and test datasets of digital images;
At Step 206, capturing, by an imaging device 106, a digital image of a patient’s maxilla and mandible;
At Step 208, receiving, by an extractor module 108, a digital image from the imaging device 106;
At Step 210, extracting, by an extractor module 108, the features from the received digital image;
At Step 212, receiving, by an alignment module 110, the extracted features from the extractor module 108;
At Step 214, determining, by an alignment module 110, the current alignment of teeth;
At Step 216, editing, by an editor module 112, the rules and commands from the repository 102;
At Step 218, determining, by a computing module 114, the desired positioning of teeth based on the current alignment of teeth;
At Step 220, verifying, by a verifier module 115, the displacement of the desired positioning of teeth;
At Step 222, generating, by a building module 116, the sliders 124 to be applied in the molars to enable desired positioning of teeth, and
At Step 224, displaying, by a display unit 118, the 3D digital view of the aligned and positioned teeth (as shown in Figure 3).
Step 208 comprises the further steps of:
At step 208a, scanning, by a feature reader 108a, the digital image received from the imaging device 106;
At step 208b, reading, by the feature reader 108a, the features from the digital image;
At step 208c, extracting, by a features extractor 108b, the features from the digital image; and
At step 208d, storing, by the feature extractor 108b, the extracted features in the repository 102.
Step 214 comprises the further steps of:
At step 214a, retrieving, by a retrieval module 110a, the digital image from the imaging device 106;
At step 214b, scanning, by an analyzing module 110b, the maxilla and mandible region and identifying the current alignment of teeth; and
At step 214c, determining, by a virtual primary module 110c, the current alignment of teeth based on identified maxilla and mandible region received from the analyzing module 110b.
Step 218 comprises the further steps of:
At step 218a, fetching, by a fetching module 114a, the current alignment of teeth from the alignment module 110;
At step 218b, fetching, by a determination module 114b, the combination of rules and commands from the repository 102;
At step 218c, generating, by the determination module 114b, the desired positioning of teeth; and
At step 218d, compiling, by a compilation module 114c, the desired alignment of teeth with the combination of rules and commands.
Step 220 comprises the further steps of:
At step 220a, calculating, by a displacement module 115a, the displacement for achieving desired positioning of teeth; and
At step 220b, iteratively comparing, by an iteration module 115b, the computed displacement with pre-stored displacements to achieve the desired positioning of teeth.
An exemplified pseudo-code depicting the execution of various modules of system 100 for the desired positioning and alignment of teeth is given below:
Class Teeth
{
repository ()
{ store a dataset of digital images, a dataset of trained and test images, a set of rules, pre-stored displacement information, and predefined commands;
}
learning_module()
{
Call.repository();
Tr=Trained dataset;
Te=Test Dataset;
S=Splitter;
}
imaging_device()
{
Call.learning_module();
I= capture a digital image of a patient’s teeth set including maxilla and mandible;
}
extractor_module()
{
Call.imaging_device();
While (Receive== I) then
{
Select Image “I”;
do(E=extract features from I)
{
System.out.println(“Feature extracted, &E”);
} }
}
alignment_module()
{
Call.extractor_module();
If (select I and E==1)
{
CT= determine current alignment of teeth;
System.out.println(“current alignment of teeth, CT”);
}}
Editor_module()
{
Edit rules and commands for computation;
}
Computing_module()
{
Call.alignment_module();
fetching module ()
{
fetch alignment of teeth from alignment module;
}
determination module ();
{
determine the desired position of teeth;
}
compiling module ()
{compile alignment of teeth with combination of rules and commands;
}
verifier module ()
{
accuracy module ()
{
calculate the accuracy of the desired positioning of teeth in accordance with the rules from repository;
}
iterator module ()
{
selects and changes the permutation and combination of rules and commands to obtain the desired positioning of teeth;
}
determine the desired positioning of teeth based on current alignment of teeth;
}
builder_module()
{
Call.computing_module();
SL= Generate slider;
Apply sliders SL in the molars to enable the desired positioning of teeth (as shown in Figure 3);
Build 3d digital view;
}
display_unit()
{
Display the 3D digital view of the desired positioning of teeth generated by the sliders (as shown in Figure 3);
}
}
In an operative configuration, a user captures at least one digital image of the patient's teeth set, including the maxilla and mandible, in accordance with the rules stored in the repository 102 for computation. Further, the learning module 104 splits the trained dataset and test dataset in accordance with the rules stored in the repository. The imaging device 106 is used to capture the at least one digital image. The extractor module 108 includes the feature reader 108a and the feature extractor 108b, wherein the feature reader 108a scans and reads the features from the input digital image received from the imaging device 106, and the feature extractor 108b extracts the features from the digital image and stores the extracted features in the repository 102. The alignment module 110 receives the extracted features from the repository 102, and further determines the current alignment of teeth. The alignment module 110 includes the retrieval module 110a that retrieves the digital image from the imaging device 106, the analyzing module 110b that scans the teeth set including the maxilla and mandible region and identifies the current alignment of teeth, and the virtual primary module 110c that determines the current alignment of teeth based on the identified maxilla and mandible region received from the analyzing module 110b. The editor module 112 is configured to edit the rules and commands for computation.
The computing module 114 includes the fetching module 114a that fetches the alignment of teeth from the alignment module 110, and the determination module 114b that determines the desired positioning of teeth. The compiling module 114c compiles the alignment of teeth with the combination of rules and commands. The verifier module 115 includes the displacement module 115a and the iterator module 115b. The displacement module 115a is configured to calculate the displacement for the desired positioning of teeth in accordance with the rules from the repository 102, and the iterator module 115b is configured to select and change the permutation and combination of rules and commands to obtain the alignment and positioning of teeth. The iterator module 115b is further configured to iteratively compare the computed displacement with pre-stored displacements to achieve the desired positioning of teeth. The building module 116 includes the 3D digital module 116a configured to generate the sliders 124 to be applied onto the upper and lower molars to enable the desired positioning of teeth through the interrelationship of the mandible to the maxilla by forward positioning of the mandible, and to build a 3D digital view of the alignment and positioning of teeth (as shown in Figure 3). The 3D digital view displays the precise alignment and positioning of teeth.
The system 100 of the present disclosure facilitates determination of accurate and precise displacement of teeth and accordingly generates the sliders 124 to enable alignment and positioning of teeth. Further, the system 100 enables generation of braces i.e., sliders 124 as per anatomy of patient’s teeth to facilitate effective alignment and positioning of teeth.
The foregoing description of the embodiments has been provided for purposes of illustration and not intended to limit the scope of the present disclosure. Individual components of a particular embodiment are generally not limited to that particular embodiment, but are interchangeable. Such variations are not to be regarded as a departure from the present disclosure, and all such modifications are considered to be within the scope of the present disclosure.
TECHNICAL ADVANCEMENTS
The present disclosure described herein above has several technical advantages including, but not limited to, the realization of a system and a method for positioning and alignment of teeth that:
• provides the desired positioning of teeth;
• accurately determines the teeth position;
• enables generation of braces as per the anatomy of the patient’s teeth; and
• reduces the error rate while determining the teeth position.
The embodiments herein and the various features and advantageous details thereof are explained with reference to the non-limiting embodiments in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
The foregoing description of the specific embodiments so fully reveals the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
The use of the expression “at least” or “at least one” suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the disclosure to achieve one or more of the desired objects or results.
While considerable emphasis has been placed herein on the components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiment as well as other embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
Claims: WE CLAIM:
1. A system (100) for alignment and positioning of teeth, said system (100) comprises:
• a repository (102) configured to store a dataset of digital images, a dataset of trained and test images, a set of rules, pre-stored displacement information, and predefined commands;
• a learning module (104) configured to split said trained and test dataset of digital images in accordance with said rules stored in said repository (102);
• an imaging device (106) configured to capture a digital image of a patient’s teeth set which includes the maxilla and mandible;
• an extractor module (108) configured to receive said digital image from said imaging device (106), and further configured to extract features from said received digital image;
• an alignment module (110) configured to receive said extracted features from said extractor module (108), and further configured to determine the current alignment of teeth;
• an editor module (112) configured to edit said rules and said command from said repository (102);
• a computing module (114) configured to determine the desired positioning of teeth based on the current alignment of teeth;
• a building module (116) configured to generate sliders (124) to be applied to the molars that enable the desired positioning of teeth, and build a 3D digital view; and
• a display module (118) configured to display said built 3D digital view of aligned and positioned teeth,
wherein said learning module (104), said imaging device (106), said extractor module (108), said alignment module (110), said editor module (112), said computing module (114), said building module (116), and said display module (118) are implemented using one or more processor(s).
2. The system (100) as claimed in claim 1, wherein said digital image is an image selected from the set of images consisting of 3D images, virtual images, Standard Triangle Language or Standard Tessellation Language (STL) format image.
3. The system (100) as claimed in claim 1, wherein said learning module (104) consists:
• a training module (104a) is configured to train the digital images;
• a testing module (104b) is configured to test the digital images; and
• a splitter (104c) is configured to split the trained dataset and test dataset in accordance with the rules specified in said repository (102).
4. The system (100) as claimed in claim 1, wherein said imaging device (106) is a device selected from the set of an imaging device (106) consisting of a scanner device, an x-ray device, and a 3D image capturing device.
5. The system (100) as claimed in claim 1, wherein said extractor module (108) consists:
• a feature reader (108a) is configured to scan and read the features from said digital image received from said imaging device (106); and
• a feature extractor (108b) is configured to extract features from said digital image and store the extracted features in said repository.
6. The system (100) as claimed in claim 1, wherein said alignment module (110) consists:
• a retrieval module (110a) is configured to retrieve said digital image from said imaging device (106),
• an analyzing module (110b) is configured to scan for teeth set including maxilla and mandible region and identify the current alignment of teeth, and
• a virtual primary module (110c) is configured to determine current alignment of teeth based on said identified maxilla and mandible region received from said analyzing module (110b).
7. The system (100) as claimed in claim 1, wherein said computing module (114) consists:
• a fetching module (114a) is configured to fetch alignment of teeth from said alignment module (110);
• a determination module (114b) is configured to determine the desired positioning of teeth;
• a compiling module (114c) is configured to compile alignment of teeth with said combination of rules and commands;
• a verifier module (115) is configured and cooperate with a displacement module (115a) and iterator module (115b);
• said displacement module (115a) is configured to calculate the displacement of the desired positioning of teeth in accordance with the said rules from said repository (102); and
• said iterator module (115b) is configured to select and change the permutation and combination of rules and commands to obtain alignment and positioning of teeth, said iterator module (115b) further configured to iteratively compare the computed displacement with pre-stored displacements to achieve desired positioning of teeth.
8. The system (100) as claimed in claim 1, wherein said building module (116) includes a 3D digital module (116a) configured to generate said sliders (124) to be applied onto the upper and lower molars to enable the desired positioning of teeth and built a 3D digital view of alignment and positioning of teeth.
9. A method (200) for alignment and positioning of teeth, said method (200) comprising the following steps:
• storing (202), in a repository (102) a dataset of digital images, a dataset of trained and test images, a set of rules, pre-stored displacement information, and predefined commands;
• splitting (204), by a learning module (104), the train and test dataset of digital image;
• capturing (206), by an imaging device (106), a digital image of a patient’s maxilla and mandible;
• receiving (208), by an extractor module (108), a digital image from said imaging device (106);
• extracting (210), by an extractor module (108), the features from received said digital image;
• receiving (212), by an alignment module (110), the extracted features from said extractor module (108);
• determining (214), by said alignment module (110), the current alignment of teeth;
• editing (216), by an editor module (112), the rules and commands from said repository (102);
• determining (218), by a computing module (114), the desired positioning of teeth based on the current alignment of teeth;
• verifying (220), by a verifier module (115), the displacement of said desired positioning of teeth;
• generating (222), by a building module (116), the sliders (124) to be applied in the molars to enable desired positioning of teeth, and
• displaying (224), by a display unit (118), the 3D digital view of aligned and positioned of teeth.
10. The method (200) as claimed in claim 9, wherein said receiving step (208) comprises essentially of:
• scanning (208a), by a feature reader (108a), said digital image received from said imaging device (106);
• reading (208b), by said feature reader (108a), the features from said digital image;
• extracting (208c), by a features extractor (108b), the features from said digital image; and
• storing (208d), by said feature extractor (108b), the extracted features in said repository (102).
11. The method (200) as claimed in claim 9, wherein said determining step (214) comprises essentially of:
• retrieving (214a), by a retrieval module (110a), the digital image from said imaging device (106);
• scanning (214b), by an analyzing module (110b), the maxilla and mandible region and identify the current alignment of teeth; and
• determining (214c), by a virtual primary module (110c), the current alignment of teeth based on identified maxilla and mandible region received from said analyzing module (110b).
12. The method (200) as claimed in claim 9, wherein said determining step (218) comprises essentially of:
• fetching (218a), by a fetching module (114a), the current alignment of teeth from said alignment module (110);
• fetching (218b), by a determination module (114b), the combination of rules and commands from said repository (102);
• generating (218c), by said determination module (114b), the desired positioning of teeth; and
• compiling (218d), by a compilation module (114c), the desired alignment of teeth with said combination of rules and commands.
13. The method (200) as claimed in claim 9, wherein said verifying step (220) comprises essentially of:
• calculating (220a), by a displacement module (115a), the displacement for achieving desired positioning of teeth; and
• iteratively comparing (220b), by an iteration module (115b), the computed displacement with pre-stored displacements to achieve desired positioning of teeth.
Dated this 25th day of July, 2022
_______________________________
MOHAN RAJKUMAR DEWAN, IN/PA – 25
of R.K.DEWAN & CO.
Authorized Agent of Applicant
TO,
THE CONTROLLER OF PATENTS
THE PATENT OFFICE, AT MUMBAI