Abstract: ARTIFICIAL INTELLIGENCE BASED SYSTEM FOR AN ORAL SUPPORT RECEPTACLE OF AN ORAL APPLIANCE. The present disclosure provides an AI-based system for an Oral Support Receptacle (OSR) of an oral appliance. The AI-based system comprises an Electronic Device (ED) and the OSR. The OSR determines whether the oral appliance is present inside the OSR and dispenses a mist when the oral appliance is present. The OSR captures images/video of the oral appliance and transmits the captured images/video to the ED. The ED receives the captured images/video, determines a type of the oral appliance using the images/video, and determines at least one feature of the oral appliance using the images/video based on the type. The ED generates a 2D and/or 3D dental structure based on the at least one feature of the oral appliance and the images/video. The ED compares, using a trained AI model, the 2D and/or 3D dental structure with at least one of historic dental structure images and associated metadata in the ED, and provides information on a dental structure data based on the comparison. FIG. 1
DESC:TECHNICAL FIELD
The present disclosure generally relates to the field of Artificial Intelligence (AI). More particularly, the present disclosure relates to an Artificial Intelligence (AI) based system for an oral support receptacle of an oral appliance.
BACKGROUND
Conventionally, oral appliances (also referred to as dental devices or orthodontic devices) may be utilized to correct issues associated with a tooth or teeth. These oral appliances are typically of two types, i.e., a removable oral appliance and a fixed oral appliance. The removable oral appliance may be removed occasionally by a user for tasks such as cleaning, or prior to eating food, and the like, whereas the fixed oral appliance is attached to the teeth or tooth and is not removable. These existing oral appliances may cause a number of issues. For instance, the fixed oral appliances, typically made of materials such as metal, may undergo wear and tear, causing side effects such as a cut or break in the skin in the vicinity of the fixed oral appliance. On the other hand, the removable oral appliance may need to be changed frequently whenever the preset changes to be effected by the oral appliance are achieved. Further, in some cases, the removable oral appliance may carry viruses, germs, and bacteria due to continuous usage. In this situation, if the removable oral appliance is not disinfected frequently and properly, it may cause foul breath and teeth-related issues. Also, since the removable oral appliances are removable in nature, the user may not wear them as prescribed. Due to this non-compliance by the user, the oral appliance may not be effective for the user.
The information disclosed in this background of the disclosure section is only for enhancement of understanding of the general background of the disclosure and should not be taken as an acknowledgement or any form of suggestion that this information forms existing information already known to a person skilled in the art.
SUMMARY
In an embodiment, the present disclosure relates to an AI-based system for an oral support receptacle of an oral appliance. The AI-based system comprises an electronic device and the oral support receptacle communicatively coupled to the electronic device. The oral support receptacle is configured to determine whether the oral appliance is present inside the oral support receptacle and, when the oral appliance is determined to be present inside the oral support receptacle, dispense a mist using a first sensor arranged in the oral support receptacle. Thereafter, the oral support receptacle is configured to capture at least one of images and videos data of the oral appliance using a scanner arranged in the oral support receptacle and transmit the at least one of images and videos data of the oral appliance to the electronic device. The electronic device is configured to receive the at least one of images and videos data of the oral appliance from the oral support receptacle. Subsequently, the electronic device is configured to determine a type of the oral appliance using at least one of the images and the videos data of the oral appliance and determine at least one feature of the oral appliance using at least one of the images and the videos data of the oral appliance based on the type of the oral appliance. The electronic device is configured to generate at least one of a 2D and a 3D dental structure of a user using the oral appliance based on the at least one feature of the oral appliance and at least one of the images and videos data of the oral appliance, and compare, using a trained AI model, at least one of the 2D or 3D dental structure with at least one of historic dental structure images and associated metadata stored in the electronic device. Lastly, the electronic device is configured to provide information on a dental structure data of the user based on the comparison.
In an embodiment, the present disclosure relates to a method for operating an AI-based system for an oral support receptacle of an oral appliance. The method comprises determining whether the oral appliance is present inside the oral support receptacle and, when the oral appliance is determined to be present inside the oral support receptacle, dispensing a mist using a first sensor arranged in the oral support receptacle. Thereafter, the method comprises capturing at least one of images and videos data of the oral appliance using a scanner arranged in the oral support receptacle and transmitting the at least one of images and videos data of the oral appliance to an electronic device. The method comprises receiving the at least one of images and videos data of the oral appliance from the oral support receptacle. Subsequently, the method comprises determining a type of the oral appliance using at least one of the images and videos data of the oral appliance and determining at least one feature of the oral appliance using at least one of the images and videos data of the oral appliance based on the type of the oral appliance. The method comprises generating at least one of a 2D and a 3D dental structure of a user using the oral appliance based on the at least one feature of the oral appliance and at least one of the images and videos data of the oral appliance, and comparing, using a trained AI model, at least one of the 2D or 3D dental structure with at least one of historic dental structure images and associated metadata stored in the electronic device. Lastly, the method comprises providing information on a dental structure data of the user based on the comparison.
In an embodiment, the present disclosure relates to an AI-based system for an oral support receptacle of an oral appliance. The AI-based system comprises the oral support receptacle. The oral support receptacle is configured to determine whether the oral appliance is present inside the oral support receptacle and, when the oral appliance is determined to be present inside the oral support receptacle, dispense a mist using a first sensor arranged in the oral support receptacle. Thereafter, the oral support receptacle is configured to capture at least one of images and videos data of the oral appliance using a scanner arranged in the oral support receptacle. Subsequently, the oral support receptacle is configured to determine a type of the oral appliance using at least one of the images and the videos data of the oral appliance and determine at least one feature of the oral appliance using at least one of the images and the videos data of the oral appliance based on the type of the oral appliance. The oral support receptacle is configured to generate at least one of a 2D and a 3D dental structure of a user using the oral appliance based on the at least one feature of the oral appliance and at least one of the images and videos data of the oral appliance, and compare, using a trained AI model, at least one of the 2D or 3D dental structure with at least one of historic dental structure images and associated metadata stored in an electronic device. Lastly, the oral support receptacle is configured to provide information on a dental structure data of the user based on the comparison.
In an embodiment, the present disclosure relates to a method for operating an AI-based system for an oral support receptacle of an oral appliance. The method comprises determining whether the oral appliance is present inside the oral support receptacle and, when the oral appliance is determined to be present inside the oral support receptacle, dispensing a mist using a first sensor arranged in the oral support receptacle. Thereafter, the method comprises capturing at least one of images and videos data of the oral appliance using a scanner arranged in the oral support receptacle. Subsequently, the method comprises determining a type of the oral appliance using at least one of the images and videos data of the oral appliance and determining at least one feature of the oral appliance using at least one of the images and videos data of the oral appliance based on the type of the oral appliance. The method comprises generating at least one of a 2D and a 3D dental structure of a user using the oral appliance based on the at least one feature of the oral appliance and at least one of the images and videos data of the oral appliance, and comparing, using a trained AI model, at least one of the 2D or 3D dental structure with at least one of historic dental structure images and associated metadata stored in an electronic device. Lastly, the method comprises providing information on a dental structure data of the user based on the comparison.
The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and together with the description, serve to explain the disclosed principles. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some embodiments of system and methods in accordance with embodiments of the present subject matter are now described below, by way of example only, and with reference to the accompanying figures.
FIG. 1 illustrates an exemplary environment for operating an AI-based system for an oral support receptacle of an oral appliance in accordance with some embodiments of the present disclosure.
FIGS. 2a to 2c shows schematic representation depicting an overview of an oral support receptacle for an oral appliance in accordance with some embodiments of the present disclosure.
FIG. 3 shows a detailed block diagram of an oral support receptacle in accordance with some embodiments of the present disclosure.
FIG. 4 shows a detailed block diagram of an electronic device in accordance with some embodiments of the present disclosure.
FIG. 5 illustrates a flowchart showing a method for operating an AI-based system for an oral support receptacle of an oral appliance in accordance with first embodiment of the present disclosure.
FIG. 6 illustrates a flowchart showing a method for operating an AI-based system for an oral support receptacle of an oral appliance in accordance with second embodiment of the present disclosure.
It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems embodying the principles of the present subject matter. Similarly, it will be appreciated that any flowcharts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and executed by a computer or processor, whether or not such computer or processor is explicitly shown.
DESCRIPTION OF THE DISCLOSURE
In the present document, the word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or implementation of the present subject matter described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
While the disclosure is susceptible to various modifications and alternative forms, specific embodiment thereof has been shown by way of example in the drawings and will be described in detail below. It should be understood, however that it is not intended to limit the disclosure to the particular forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure.
The terms “comprises”, “comprising”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device or method that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus preceded by “comprises… a” does not, without more constraints, preclude the existence of other elements or additional elements in the system or method.
In the following detailed description of the embodiments of the disclosure, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present disclosure. The following description is, therefore, not to be taken in a limiting sense.
The present disclosure generally relates to an AI-based intelligent system for an oral support receptacle of an oral appliance. The oral appliance is one of, but not limited to, an aligner, a retainer, a denture, a mouth guard (sports or bruxism), a removable palatal expander, a pedodontics positioner, a twin block appliance, a removable space maintainer, a snoring device, a night guard, a removable habit breaking appliance, a removable bite plan and the like. Further, the present disclosure includes a scanner to capture at least one of images and videos data of the oral appliance and creating oral cavity images of a dental (teeth) structure using image processing techniques and/or AI techniques, which may be utilized for future reconstruction and 3D printing of the dental (teeth) structure. Further, these at least one of the images and videos data of the oral appliance may be sent to AI or Machine Learning (ML) models on an electronic device (i.e., a mobile device, a computer system, a remote server, or a cloud server, or a specialized processing device) for further dental structural learning by the AI/ML model. Based on results from the AI/ML models at least one of 2D and 3D image libraries or dental structure may be created. These images may be used to check wear and tear of the oral appliance and provide information, for example, report to a concerned doctor and a user of the oral appliance. Furthermore, the present disclosure may check the oral appliance in a timely manner present inside the oral receptacle to sanitize the oral appliance using a (first) sensor. The AI/ML models or techniques used in the present disclosure may be any known AI/ML models or techniques in the art.
FIG. 1 illustrates an exemplary environment for operating an AI-based system for an oral support receptacle of an oral appliance in accordance with some embodiments of the present disclosure.
With reference to FIG. 1, the environment 100 comprises an oral support receptacle 101, a communication network 103, and an electronic device 105. The oral support receptacle 101 is communicatively connected to the electronic device 105 through the communication network 103. The electronic device 105 may be a mobile device, a computer system, a remote server, a specialized processing device, or a cloud server. In one embodiment, the oral support receptacle 101 is communicatively connected to more than one electronic device 105 through the communication network 103.
In the first embodiment, the oral support receptacle 101 and the electronic device 105 together form an AI-based system for the oral support receptacle 101 of an oral appliance. In the second embodiment, only the oral support receptacle 101 forms the AI-based system for the oral support receptacle 101 of the oral appliance.
The communication network 103 may include, but is not limited to, any of the following communication protocols/methods: a direct interconnection, an e-commerce network, a Peer-to-Peer (P2P) network, Local Area Network (LAN), Wide Area Network (WAN), wireless network (for example, using Wireless Application Protocol), Internet, Wi-Fi, Bluetooth, General Packet Radio Service (GPRS), Global System for Mobile communication (GSM), Code-Division Multiple Access (CDMA), WiMAX, WLAN, ZigBee, and the like.
In the embodiment, the electronic device 105 includes an Input-Output (I-O) interface 107, a memory 109, and a processor 111. The I-O interface 107 is configured to receive the at least one of images and videos data of the oral appliance from the oral support receptacle 101. The I-O interface 107 employs communication protocols/methods such as, without limitation, audio, analog, digital, monoaural, Radio Corporation of America (RCA) connector, stereo, IEEE®-1394 high speed serial bus, serial bus, Universal Serial Bus (USB), infrared, Personal System/2 (PS/2) port, Bayonet Neill-Concelman (BNC) connector, coaxial, fibre optic, component, composite, Digital Visual Interface (DVI), High-Definition Multimedia Interface (HDMI®), Radio Frequency (RF) antennas, S-Video, Video Graphics Array (VGA), IEEE® 802.11b/g/n/x, Bluetooth, cellular e.g., Code-Division Multiple Access (CDMA), High-Speed Packet Access (HSPA+), Global System for Mobile communications (GSM®), Long-Term Evolution (LTE®), Worldwide interoperability for Microwave access (WiMax®), or the like.
The at least one of images and videos data of the oral appliance received by the I-O interface 107 is stored in the memory 109. The memory 109 is communicatively coupled to the processor 111 of the electronic device 105. The memory 109, also, stores processor-executable instructions which may cause the processor 111 to execute the instructions for processing the at least one of images and videos data of the oral appliance. The memory 109 includes, without limitation, memory drives, removable disc drives, etc. The memory drives may further include a drum, magnetic disc drive, magneto-optical drive, optical drive, Redundant Array of Independent Discs (RAID), solid-state memory devices, solid-state drives, etc.
The processor 111 includes at least one data processor for processing the at least one of images and videos data of the oral appliance. The processor 111 may include specialized processing units such as integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc.
With reference to FIGS. 2a to 2c, the oral support receptacle 101 includes a case 201 with a lid 203. In one embodiment, the oral support receptacle 101 can support one oral appliance 221 as shown in FIG. 2b. In another embodiment, the oral support receptacle 101 can support two oral appliances 221 separately (not shown in FIG. 2b) within the oral support receptacle 101 i.e., one for upper jaw oral appliance (for example, upper aligner or retainer) and one for lower jaw oral appliance (for example, lower aligner or retainer).
A scanner 205, a light source 207, and a mirror 209 are arranged in the lid 203 of the oral support receptacle 101, as shown in FIG. 2a. The scanner 205 is an optical scanner, a camera, or a camera sensor. In one embodiment, the optical scanner is an all-axis scanner. The all-axis optical scanner refers to an optical scanner that can capture images or videos along the X-axis, Y-axis and Z-axis. The light source 207 is used to illuminate the internal environment of the oral support receptacle 101 when the lid 203 is closed. In one embodiment, the light source 207 acts like a flash. The mirror 209 is used for the user’s reference. The scanner 205 and the light source 207 are communicatively coupled to a processor 303 of the oral support receptacle 101.
A Light Emitting Diode (LED) or Liquid Crystal Display (LCD) screen 211, a temperature and humidity sensor 213, an Infrared (IR) and/or laser sensor 215, 223, one or more LEDs 217, a second sensor 219, a battery 225, a first sensor 227, an accelerometer and/or gyroscope 229, a power management unit 233, and a Real Time Clock (RTC) 235 are arranged in the case 201 of the oral support receptacle 101, as shown in FIGS. 2a and 2b. The first sensor 227 is at least one of an ultrasonic disk sensor, a vibrating disc sensor, and a pressurized mist sensor. The ultrasonic disk sensor, the vibrating disc sensor, and the pressurized mist sensor may be referred to as an ultrasonic liquid dispenser sensor. A liquid dispenser 239 (also referred to as a liquid container) may be a part of the ultrasonic disk sensor, the vibrating disc sensor, and the pressurized mist sensor. The liquid dispenser 239 is used to store a liquid, which is converted to mist through the vibrating mechanism of at least one of the ultrasonic disk sensor, the vibrating disc sensor, and the pressurized mist sensor. The ultrasonic disk sensor, the vibrating disc sensor, and the pressurized mist sensor vibrate at a frequency, typically 108 kHz to 115 kHz, which causes microdroplets from the liquid contained in the liquid dispenser 239 to form a mist that is sprayed or dispensed on the oral appliance 221 inside the oral support receptacle 101 whenever required. The mist comprises ultrafine aerosolized droplets. A droplet size of the ultrafine aerosolized droplets is in a range of 1 to 10 µm. The mist is one of a coloured dye and a disinfectant solvent. At least one of the temperature and humidity sensor 213 and the scanner 205 is used to determine whether the mist was dispensed. The temperature and humidity sensor 213 is used to measure the change in temperature and humidity, respectively, when the oral appliance 221 is present or absent in the oral support receptacle 101 and/or when the mist was dispensed. In one embodiment, an Ultraviolet (UV) sensor, in addition to the first sensor 227, may be arranged in the case 201 of the oral support receptacle 101 and communicatively coupled to the processor 303 of the oral support receptacle 101. The UV sensor uses UV light rays to disinfect bacteria present on the oral appliance 221 and inside the oral support receptacle 101. The UV exposure time may be any appropriate exposure time known in the art. The second sensor 219 is at least one of a Hall effect sensor, a Reed switch sensor, an optical sensor, a tilt sensor, an ultrasonic sensor, a microswitch, a strain gauge sensor, a light dependent resistor, and an IR break-beam sensor. The second sensor 219 is used to detect opening or closing of the lid 203 of the oral support receptacle 101.
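By way of a non-limiting illustration, the dispensing burst described above may be summarised in the following Python sketch. The MistTransducer class is a hypothetical stand-in for the ultrasonic disk / vibrating disc driver; only the 108 kHz to 115 kHz drive band is taken from the present description, while the class name, default frequency, and burst duration are illustrative assumptions.

```python
# Hypothetical sketch of the mist-dispensing burst described above.
# MistTransducer stands in for the real first sensor 227 driver; the
# 108-115 kHz band is from the description, the rest is illustrative.

import time


class MistTransducer:
    """Illustrative stand-in for the ultrasonic disk / vibrating disc driver."""

    def __init__(self, min_khz: float = 108.0, max_khz: float = 115.0):
        self.min_khz = min_khz
        self.max_khz = max_khz
        self.active = False

    def start(self, freq_khz: float) -> None:
        # Clamp the drive frequency to the band that atomises the liquid in
        # the dispenser 239 into ultrafine droplets.
        freq_khz = max(self.min_khz, min(self.max_khz, freq_khz))
        self.active = True
        print(f"driving transducer at {freq_khz:.1f} kHz")

    def stop(self) -> None:
        self.active = False
        print("transducer stopped")


def dispense_mist(transducer: MistTransducer,
                  freq_khz: float = 110.0,
                  burst_seconds: float = 2.0) -> None:
    """Run one dispensing burst; duration is an illustrative assumption."""
    transducer.start(freq_khz)
    time.sleep(burst_seconds)
    transducer.stop()


if __name__ == "__main__":
    dispense_mist(MistTransducer())
```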
The opening and closing of the lid 203 of the oral support receptacle 101 may be a starting point to determine presence or absence of the oral appliance 221 in the oral support receptacle 101. At least one of the laser sensor 215, 223, the scanner 205, and the IR sensor 215, 223 is used to detect presence or absence of the oral appliance 221 inside the oral support receptacle 101. The laser sensor 215, 223 is utilized to detect the oral appliance 221, which may be transparent or of limited opacity, at both near and distant positions. In some embodiments, the laser sensor 215, 223 may be a permissible laser sensor which may be placed in a certain position to identify the presence or absence of the oral appliance 221. The accelerometer 229 is used to provide information about the oral support receptacle’s movement, orientation, and tilt. The accelerometer 229 measures acceleration along three axes: the X axis, Y axis, and Z axis. The gyroscope 229 is a sensor to measure angular velocity or rate of rotation of the oral support receptacle 101 around an axis. The gyroscope 229 is used to provide information about changes in angular position, rotation speed, and angular displacement of the oral support receptacle 101. The one or more LEDs 217 are used to provide indications to a user through blinking of the one or more LEDs 217. In some embodiments, a buzzer 237 (also referred to as a speaker) is arranged in the case 201 of the oral support receptacle 101 to provide a buzzer sound as an indicator to the user. The LED or LCD screen 211 is used to indicate information to the user. This information may be oral appliance 221 wear and usage time, presence or absence of the oral appliance 221, disinfection cycles, reminders, alarms, a type of the oral appliance 221, oral appliance 221 change reminders, images and/or videos captured, date and time, battery power indicator, charging status, lid 203 open or close status, communication indicator and settings, temperature and humidity data, orientation data, comparison with treatment plans and progress, presence of plaque or other oral conditions, and container volume and reminders. The LED or LCD screen 211, the temperature and humidity sensor 213, the IR and/or laser sensor 215, 223, the one or more LEDs 217, the second sensor 219, the battery 225, the first sensor 227, and the accelerometer and/or gyroscope 229 are communicatively coupled to the processor 303 of the oral support receptacle 101. Further, the oral support receptacle 101 includes one or more slots for the battery 225. The battery 225 may be a rechargeable battery and/or a removable coin cell battery. The rechargeable batteries and/or removable coin cell batteries, when placed in their respective slots in the oral support receptacle 101, are utilized to operate the scanner 205, the light source 207, the LED or LCD screen 211, the temperature and humidity sensor 213, the IR and/or laser sensor 215, 223, the one or more LEDs 217, the second sensor 219, the first sensor 227, and the accelerometer and/or gyroscope 229, and/or to supply power to internal hardware (i.e., a processor 303 and a memory 305) of the oral support receptacle 101. The battery can be one of, but not limited to, lithium-ion, lithium-polymer, and the like. When the battery 225 is a rechargeable battery, a charging point 231 (as shown in FIG. 2c) is arranged in the case 201 of the oral support receptacle 101 and communicatively coupled to the rechargeable battery.
The charging point 231 can be used to connect to an external power source through a cable, or to a wireless power source, for charging the rechargeable batteries. An oral appliance 221 can be placed inside the oral support receptacle 101, as shown in FIG. 2b. In another embodiment, more than one oral appliance 221 can be placed separately inside the oral support receptacle 101 (not shown in FIG. 2b), i.e., one for an upper jaw oral appliance (for example, an upper aligner or retainer) and one for a lower jaw oral appliance (for example, a lower aligner or retainer). The lid 203 may open or close automatically based on one or more notifications from one or more sensors such as the scanner 205, the IR and/or laser sensor 215, 223, and the second sensor 219. For example, if the oral appliance 221 is placed inside the oral support receptacle 101, then the lid 203 may be closed. The lid 203 may also function via a user application installed in the electronic device 105 associated with a user. In an embodiment, the oral support receptacle 101 includes an IP65, IP67, IP68, or IP69 rated casing (that is, the case 201 and the lid 203) for protecting the internal circuit board and the oral appliance 221 from dust. The power management unit 233 is used to optimize energy usage in the oral support receptacle 101 by regulating and distributing power efficiently amongst the components arranged in the case 201 and the lid 203 of the oral support receptacle 101. The power management unit 233 controls voltage levels, maintains low-power states when the oral support receptacle 101 is in a sleep mode, and oversees charging functions, thereby significantly extending battery life. The Real-Time Clock (RTC) 235 helps to maintain accurate timekeeping independently of the main power supply, enabling precise timestamping of events and data logs, which ensures the chronological accuracy crucial for various applications. Additionally, the RTC 235 facilitates synchronization with the clock of the connected electronic device 105, for consistency and data integrity. For example, when the oral support receptacle 101 is not connected to the electronic device 105, the RTC 235 helps to store events with timestamps until the oral support receptacle 101 reconnects to the electronic device 105.
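By way of a non-limiting illustration, the buffering of timestamped events until the oral support receptacle 101 reconnects to the electronic device 105 may be sketched as follows. The Event and EventLog names, the event kinds, and the flush mechanism are illustrative assumptions and not a definitive implementation.

```python
# Hypothetical sketch of RTC-timestamped event buffering while the
# receptacle is disconnected from the electronic device 105.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, List


@dataclass
class Event:
    timestamp: datetime
    kind: str                        # e.g. "lid_open", "mist_dispensed"
    payload: dict = field(default_factory=dict)


@dataclass
class EventLog:
    rtc_now: Callable[[], datetime]  # reads the RTC 235
    buffer: List[Event] = field(default_factory=list)

    def record(self, kind: str, **payload) -> None:
        # Timestamp each event from the RTC so ordering survives power cycles.
        self.buffer.append(Event(self.rtc_now(), kind, payload))

    def flush(self, send: Callable[[Event], None]) -> None:
        """Called once the receptacle reconnects to the electronic device."""
        while self.buffer:
            send(self.buffer.pop(0))


if __name__ == "__main__":
    log = EventLog(rtc_now=lambda: datetime.now(timezone.utc))
    log.record("lid_open")
    log.record("mist_dispensed", droplet_um=(1, 10))
    log.flush(send=print)
```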
Hereinafter, the operation of the AI-based system for the oral support receptacle 101 of the oral appliance 221 is explained with reference to the first embodiment and the second embodiment.
First embodiment: In the first embodiment, the operation of the AI-based system for the oral support receptacle 101 of the oral appliance 221 involves using the oral support receptacle 101 and the electronic device 105.
At first, during the execution phase, the oral support receptacle 101 determines whether the oral appliance 221 is present inside the oral support receptacle 101. In detail, the oral support receptacle 101, using the second sensor 219 arranged in the oral support receptacle 101, detects opening or closing of the lid 203 of the oral support receptacle 101. Thereafter, the oral support receptacle 101, using at least one of the laser sensor 215, 223, the scanner 205, and the IR sensor 215, 223 arranged in the oral support receptacle 101, detects presence or absence of the oral appliance 221 inside the oral support receptacle 101. In one embodiment, the oral support receptacle 101, using at least one of the laser sensor 215, 223, the scanner 205, and the IR sensor 215, 223 arranged in the oral support receptacle 101, detects presence or absence of more than one oral appliance 221 inside the oral support receptacle 101. When the oral appliance 221 is determined to be present inside the oral support receptacle 101, the oral support receptacle 101, using the first sensor 227 arranged in the oral support receptacle 101, dispenses a mist. The mist performs sanitization of the oral appliance 221 to kill bacterial particles, fungi, viral particles, or infection-causing germs in the oral support receptacle 101.
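A minimal sketch of this presence-check-then-dispense flow, assuming simple callable stand-ins for the second sensor 219, the IR/laser sensor 215, 223 or scanner 205, and the first sensor 227, could look as follows.

```python
# Hypothetical sketch of the presence-check-then-dispense flow above.
# The sensor read functions are illustrative stand-ins for the second
# sensor 219, the IR/laser sensor 215, 223, and the first sensor 227.

from typing import Callable


def handle_lid_event(lid_is_closed: Callable[[], bool],
                     appliance_detected: Callable[[], bool],
                     dispense_mist: Callable[[], None]) -> bool:
    """Return True if a disinfection mist was dispensed."""
    if not lid_is_closed():
        # Lid still open: wait for the second sensor 219 to report closure.
        return False
    if not appliance_detected():
        # IR/laser sensor or scanner found no oral appliance 221 inside.
        return False
    dispense_mist()                  # first sensor 227 releases the mist
    return True


if __name__ == "__main__":
    dispensed = handle_lid_event(
        lid_is_closed=lambda: True,
        appliance_detected=lambda: True,
        dispense_mist=lambda: print("mist dispensed"),
    )
    print("dispensed:", dispensed)
```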
In some embodiments, the oral support receptacle 101 utilizes an automatic disinfection mechanism where the mist is released or dispensed from the liquid dispenser 239 using the first sensor 227. This disinfection mechanism may run as regular disinfection cycles, automatically or manually, to disinfect the oral appliance 221 before and after the oral appliance 221 is worn by a user. In some embodiments, the dispensation of the mist from the liquid dispenser 239 by the oral support receptacle 101 is based on the analysis of at least one of images and videos data (captured using the scanner 205) to disinfect the oral appliance 221, and the oral support receptacle 101 may determine when a disinfection protocol needs to be executed. In some embodiments, the dispensing of the mist on the oral appliance 221 from the liquid dispenser 239 can also be controlled by the user via a user application installed in the electronic device 105. The disinfection protocol controls when the mist should be released in the oral support receptacle 101. An optimal droplet size may be utilized in the oral support receptacle 101 to kill viruses, germs, or bacteria. For example, in some cases, ultrafine aerosolized droplets in the range of 1 to 10 micrometres (µm) may be used to disinfect the oral appliance 221. The mist is one of a coloured dye and a disinfectant solvent. The ultrafine aerosolized droplets remain suspended for longer periods, allowing for better distribution and coverage over the oral appliance 221. Further, in some instances, the oral support receptacle 101 uses the accelerometer 229 and/or gyroscope 229 to detect the orientation of the oral support receptacle 101. The orientation may enable functioning of the liquid dispensing mechanism, which ensures efficiency of the automatic disinfection mechanism to effectively disinfect the oral appliance 221 placed in the oral support receptacle 101. Further, the accelerometer 229 may detect the orientation of the oral appliance 221 within the oral support receptacle 101 in the X axis, Y axis, and Z axis.
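By way of a non-limiting illustration, the decision of when to run a disinfection cycle may be sketched as below. The tilt threshold, the interval between cycles, and the contamination flag derived from image analysis are illustrative assumptions; the disclosure itself does not fix these values.

```python
# Hypothetical sketch of the disinfection-protocol decision described above.
# Thresholds and the contamination flag are illustrative assumptions.

import math
from dataclasses import dataclass


@dataclass
class ReceptacleState:
    appliance_present: bool
    hours_since_last_cycle: float
    accel_g: tuple                   # (x, y, z) from the accelerometer 229
    contamination_suspected: bool    # e.g. derived from scanner 205 images
    manual_request: bool = False     # user request via the app on device 105


def is_upright(accel_g: tuple, max_tilt_deg: float = 30.0) -> bool:
    """Treat the receptacle as upright if gravity is mostly along Z."""
    x, y, z = accel_g
    norm = math.sqrt(x * x + y * y + z * z) or 1.0
    tilt = math.degrees(math.acos(max(-1.0, min(1.0, z / norm))))
    return tilt <= max_tilt_deg


def should_run_cycle(state: ReceptacleState,
                     interval_hours: float = 8.0) -> bool:
    if not state.appliance_present or not is_upright(state.accel_g):
        return False
    return (state.manual_request
            or state.contamination_suspected
            or state.hours_since_last_cycle >= interval_hours)


if __name__ == "__main__":
    s = ReceptacleState(True, 9.5, (0.02, -0.01, 0.99), False)
    print("run cycle:", should_run_cycle(s))
```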
The oral support receptacle 101, using at least one of the temperature and humidity sensor 213 and the scanner 205, determines whether the mist was dispensed.
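A minimal sketch of such a verification, assuming that a dispensing burst raises the relative humidity measured by the temperature and humidity sensor 213, is given below; the thresholds are illustrative assumptions.

```python
# Hypothetical sketch of mist-dispense verification using readings from
# the temperature and humidity sensor 213. Thresholds are assumptions.

def mist_was_dispensed(humidity_before: float, humidity_after: float,
                       temp_before_c: float, temp_after_c: float,
                       min_humidity_rise: float = 5.0,
                       max_temp_drop_c: float = 2.0) -> bool:
    """A burst should raise relative humidity and only slightly cool the case."""
    humidity_rose = (humidity_after - humidity_before) >= min_humidity_rise
    temp_plausible = (temp_before_c - temp_after_c) <= max_temp_drop_c
    return humidity_rose and temp_plausible


if __name__ == "__main__":
    print(mist_was_dispensed(55.0, 68.0, 24.0, 23.5))   # expected: True
    print(mist_was_dispensed(55.0, 56.0, 24.0, 23.5))   # expected: False
```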
Post-dispensing of the mist, the oral support receptacle 101 using the scanner 205 captures at least one of images and videos data of the oral appliance 221. Subsequently, the oral support receptacle 101 transmits the at least one of images and videos data of the oral appliance 221 to the electronic device 105.
The electronic device 105 receives the at least one of images and videos data of the oral appliance 221 from the oral support receptacle 101. Thereafter, the electronic device 105 determines a type of the oral appliance 221 using at least one of the images and the videos data of the oral appliance 221. The type of the oral appliance 221 may be one of an aligner, a retainer, a denture, a mouth guard, a removable palatal expander, a pedodontics positioner, a twin block appliance, a removable space maintainer, a snoring device, a night guard, a removable habit breaking appliance, and a removable bite plan. The electronic device 105 determines at least one feature of the oral appliance 221 using at least one of the images and the videos data of the oral appliance 221 based on the type of the oral appliance 221. In some embodiments, the oral support receptacle 101 uses a scanning mechanism to obtain the at least one feature of the oral appliance 221 based on at least one of the images and videos data. The at least one feature of the oral appliance 221 comprises a shape of the oral appliance 221, a size of the oral appliance 221, a structure of the oral appliance 221, discolouration (also referred to as decolourization) of the oral appliance 221, and disfigurement (also referred to as deterioration or deformation) of the oral appliance 221. Based on the at least one feature of the oral appliance 221 and at least one of the images and videos data of the oral appliance 221, the electronic device 105, using image processing techniques, generates at least one of a two-dimensional (2D) and a three-dimensional (3D) dental structure of a user using the oral appliance 221. The image processing techniques may be any known image processing techniques in the art. In some embodiments, the image processing techniques may be used for capturing at least one of the 2D and 3D dental structure, which includes not just the size (or form) but also other characteristics such as the colour shades and shape of the dental (or teeth) structure, for creating 2D or 3D image libraries of the dental (or teeth) structure within an oral cavity, which may be further used for reconstruction of individual teeth or a complete set of teeth or dentures in the future while maintaining the original or sophisticated looks of the user at various timestamps. Further, in some embodiments, the 2D and 3D dental structure may be stored as files in the memory 305 of the oral support receptacle 101. In some instances, the 2D and 3D dental structure may be stored in the electronic device 105 with timestamps. Further, these stored images may be sent to AI or ML models on the electronic device 105 for further structural learning by these models. Based on results from these models, a 3D image library may be created. Any changes in the 3D dental structure may be recorded and stored in the 3D image library whenever the changes occur in the oral appliance 221. Thereafter, the electronic device 105, using a trained AI model, compares at least one of the 2D or 3D dental structure with at least one of historic dental structure images and associated metadata stored in the electronic device 105. Based on the comparison, the electronic device 105 provides information on a dental structure data of the user.
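By way of a non-limiting illustration, the electronic-device pipeline described above may be skeletonised as follows. The classifier, the feature set, the reconstruction step, and the correlation-based comparison are illustrative placeholders; the disclosure does not fix a particular AI architecture, and the function and class names are assumptions.

```python
# Hypothetical skeleton of the electronic-device pipeline described above.
# The models, feature set, and similarity measure are illustrative
# placeholders; the disclosure does not fix a particular AI architecture.

from dataclasses import dataclass
from typing import Dict, List, Sequence

import numpy as np

APPLIANCE_TYPES = ["aligner", "retainer", "denture", "mouth_guard",
                   "night_guard", "snoring_device"]   # illustrative subset


@dataclass
class ApplianceFeatures:
    appliance_type: str
    shape_descriptor: np.ndarray      # e.g. intensity-histogram shape features
    discolouration_score: float       # 0 = none, 1 = severe
    disfigurement_score: float


def classify_appliance(image: np.ndarray) -> str:
    """Placeholder for the trained type classifier (e.g. a CNN)."""
    return APPLIANCE_TYPES[0]


def extract_features(image: np.ndarray, appliance_type: str) -> ApplianceFeatures:
    """Placeholder feature extraction conditioned on the appliance type."""
    grey = image.mean(axis=-1) if image.ndim == 3 else image
    return ApplianceFeatures(
        appliance_type=appliance_type,
        shape_descriptor=np.histogram(grey, bins=16, range=(0, 255))[0].astype(float),
        discolouration_score=float(grey.std() / 128.0),
        disfigurement_score=0.0,
    )


def generate_structure(features: ApplianceFeatures,
                       images: Sequence[np.ndarray]) -> np.ndarray:
    """Placeholder for the 2D/3D dental-structure reconstruction."""
    return np.stack([img.mean(axis=-1) if img.ndim == 3 else img
                     for img in images]).mean(axis=0)


def compare_with_history(structure: np.ndarray,
                         history: List[np.ndarray]) -> Dict[str, float]:
    """Rough similarity of the structure against stored historic structures."""
    scores = [float(np.corrcoef(structure.ravel(), h.ravel())[0, 1])
              for h in history]
    return {"best_match": max(scores), "worst_match": min(scores)}


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    imgs = [rng.integers(0, 255, (64, 64, 3)).astype(float) for _ in range(3)]
    atype = classify_appliance(imgs[0])
    feats = extract_features(imgs[0], atype)
    structure = generate_structure(feats, imgs)
    history = [rng.integers(0, 255, (64, 64)).astype(float) for _ in range(2)]
    print(atype, compare_with_history(structure, history))
```

In practice, the placeholder classifier, feature extractor, and comparison would be replaced by the trained AI/ML models and image processing techniques referred to above.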
The information (also referred to as data or results) may relate to (1) wear and tear, damage, and cut-outs in the oral appliance 221, (2) user habits on using the oral appliance 221, and (3) the reason for failure or non-working of the oral appliance 221 for the user, for example, whether it is due to user non-compliance.
During the training phase of the AI model, the electronic device 105 receives a plurality of historic dental structure images or videos data and associated metadata of one or more users from one or more sources. The one or more sources may be a mobile device, a computer system, a remote server, a specialized processing device, or a cloud server. Thereafter, the electronic device 105 determines progress in the dental structure data for each user by comparing the plurality of historic dental structure images or videos data and associated metadata with a pre-determined dental structure and associated metadata. The progress in the dental structure data refers to detecting one of an abnormality in the dental structure data, normality in the dental structure data, or movement of the oral appliance 221 over a period of time.
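A minimal sketch of how progress labels might be derived during such a training phase, assuming a simple mean-absolute-difference metric against the pre-determined dental structure (the metric and threshold are illustrative assumptions), is given below.

```python
# Hypothetical sketch of the training-phase progress labelling described
# above. The difference metric and threshold are illustrative assumptions.

from typing import List, Tuple

import numpy as np


def structure_difference(current: np.ndarray, target: np.ndarray) -> float:
    """Mean absolute difference between a historic and the target structure."""
    return float(np.mean(np.abs(current.astype(float) - target.astype(float))))


def label_progress(history: List[Tuple[str, np.ndarray]],
                   target: np.ndarray,
                   abnormal_threshold: float = 40.0) -> List[Tuple[str, str]]:
    """Label each timestamped historic structure as abnormal/moving/normal."""
    labels = []
    previous_diff = None
    for timestamp, structure in history:
        diff = structure_difference(structure, target)
        if diff > abnormal_threshold:
            label = "abnormality"
        elif previous_diff is not None and diff < previous_diff:
            label = "appliance_movement"   # structure converging to target
        else:
            label = "normal"
        labels.append((timestamp, label))
        previous_diff = diff
    return labels


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    target = rng.integers(0, 255, (32, 32))
    history = [("2024-01-01", target + rng.integers(-60, 60, (32, 32))),
               ("2024-02-01", target + rng.integers(-20, 20, (32, 32)))]
    print(label_progress(history, target))
```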
In some embodiments, the present disclosure may be configured to identify plaque formation or cavities on the dental (or teeth) structure of the user by performing the following functions. Firstly, the liquid dispenser 239 may release a mist in the oral support receptacle 101 which includes a coloured dye. This coloured dye remains on the oral appliance 221 which the user wears. On wearing such an oral appliance 221, the coloured dye may stick to the portion of the dental (or teeth) structure where plaque has been formed or a cavity exists. The plaque picks up the coloured dye by way of a reaction causing the dye to change colour, which could be identified visually or using the oral support receptacle’s scanning techniques. The oral support receptacle 101 determines whether the oral appliance 221 is present inside the oral support receptacle 101, wherein the oral appliance 221 with the coloured dye was used by the user. Thereafter, the oral support receptacle 101 captures at least one of images and videos data of the oral appliance 221 using the scanner 205 when the oral appliance 221 is determined to be present inside the oral support receptacle 101. The oral support receptacle 101 transmits the at least one of images and videos data of the oral appliance 221 to the electronic device 105. The electronic device 105 receives the at least one of the images and videos data of the oral appliance 221 from the oral support receptacle 101 and determines an oral condition present in a dental structure of the user through a change in colour of the coloured dye on the oral appliance 221 based on at least one of the images and the videos data of the oral appliance 221. The captured images and videos data of the user’s dental structure may be utilized for analysis using the AI or ML model for determining timely corrective actions on the oral appliance 221.
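By way of a non-limiting illustration, the detection of a colour change of the dye from the captured images may be sketched with a simple hue mask, assuming OpenCV is available on the electronic device 105; the hue band for the reacted dye and the area threshold are illustrative assumptions.

```python
# Hypothetical sketch of flagging dye colour change on the appliance from
# scanner images using a simple HSV hue mask (OpenCV assumed available).
# The hue band for the "reacted" dye and the area threshold are assumptions.

import cv2
import numpy as np


def dye_reaction_fraction(image_bgr: np.ndarray,
                          reacted_hue_range=((140, 60, 60), (170, 255, 255))) -> float:
    """Fraction of pixels whose colour falls in the reacted-dye hue band."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    lower, upper = (np.array(b, dtype=np.uint8) for b in reacted_hue_range)
    mask = cv2.inRange(hsv, lower, upper)
    return float(np.count_nonzero(mask)) / mask.size


def plaque_suspected(image_bgr: np.ndarray, min_fraction: float = 0.02) -> bool:
    return dye_reaction_fraction(image_bgr) >= min_fraction


if __name__ == "__main__":
    frame = np.zeros((64, 64, 3), dtype=np.uint8)
    frame[:16, :16] = (180, 40, 160)   # small patch of purplish "reacted" dye
    print(plaque_suspected(frame))
```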
The above-mentioned operation between the oral support receptacle 101 and the electronic device 105 through the communication network 103 can be extended to an operation between the oral support receptacle 101 and more than one electronic device 105 through the communication network 103. Since the operation is the same as the above-mentioned operation, repetition of the operation between the oral support receptacle 101 and more than one electronic device 105 is omitted.
Second embodiment: In the second embodiment, the operation of the AI-based system for the oral support receptacle 101 of the oral appliance 221 involves using only the oral support receptacle 101.
At first, during the execution phase, the oral support receptacle 101 determines whether the oral appliance 221 is present inside the oral support receptacle 101. In detail, the oral support receptacle 101, using the second sensor 219 arranged in the oral support receptacle 101, detects opening or closing of the lid 203 of the oral support receptacle 101. Thereafter, the oral support receptacle 101, using at least one of the laser sensor 215, 223, the scanner 205, and the IR sensor 215, 223 arranged in the oral support receptacle 101, detects presence or absence of the oral appliance 221 inside the oral support receptacle 101. In one embodiment, the oral support receptacle 101, using at least one of the laser sensor 215, 223, the scanner 205, and the IR sensor 215, 223 arranged in the oral support receptacle 101, detects presence or absence of more than one oral appliance 221 inside the oral support receptacle 101. When the oral appliance 221 is determined to be present inside the oral support receptacle 101, the oral support receptacle 101, using the first sensor 227 arranged in the oral support receptacle 101, dispenses a mist. The mist performs sanitization of the oral appliance 221 to kill bacterial particles, fungi, viral particles, or infection-causing germs in the oral support receptacle 101.
In some embodiments, the oral support receptacle 101 utilizes an automatic disinfection mechanism where the mist is released or dispensed from the liquid dispenser 239 using the first sensor 227. This disinfection mechanism may run as regular disinfection cycles, automatically or manually, to disinfect the oral appliance 221 before and after the oral appliance 221 is worn by a user. In some embodiments, the dispensation of the mist from the liquid dispenser 239 by the oral support receptacle 101 is based on the analysis of at least one of images and videos data (captured using the scanner 205) to disinfect the oral appliance 221, and the oral support receptacle 101 may determine when a disinfection protocol needs to be executed. In some embodiments, the dispensing of the mist on the oral appliance 221 from the liquid dispenser 239 can also be controlled by the user via a user application installed in the electronic device 105. The disinfection protocol controls when the mist should be released in the oral support receptacle 101. An optimal droplet size may be utilized in the oral support receptacle 101 to kill viruses, germs, or bacteria. For example, in some cases, ultrafine aerosolized droplets in the range of 1 to 10 micrometres (µm) may be used to disinfect the oral appliance 221. The mist is one of a coloured dye and a disinfectant solvent. The ultrafine aerosolized droplets remain suspended for longer periods, allowing for better distribution and coverage over the oral appliance 221. Further, in some instances, the oral support receptacle 101 uses the accelerometer 229 and/or gyroscope 229 to detect the orientation of the oral support receptacle 101. The orientation may enable functioning of the liquid dispensing mechanism, which ensures efficiency of the automatic disinfection mechanism to effectively disinfect the oral appliance 221 placed in the oral support receptacle 101. Further, the accelerometer 229 may detect the orientation of the oral appliance 221 within the oral support receptacle 101 in the X axis, Y axis, and Z axis.
The oral support receptacle 101, using at least one of the temperature and humidity sensor 213 and the scanner 205, determines whether the mist was dispensed.
Post-dispensing of the mist, the oral support receptacle 101 using the scanner 205 captures at least one of images and videos data of the oral appliance 221.
The oral support receptacle 101 determines a type of the oral appliance 221 using at least one of the images and videos data of the oral appliance 221. The type of the oral appliance 221 may be one of an aligner, a retainer, a denture, a mouth guard, a removable palatal expander, a pedodontics positioner, a twin block appliance, a removable space maintainer, a snoring device, a night guard, a removable habit breaking appliance, and a removable bite plan. The oral support receptacle 101 determines at least one feature of the oral appliance 221 using at least one of the images and the videos data of the oral appliance 221 based on the type of the oral appliance 221. In some embodiments, the oral support receptacle 101 uses a scanning mechanism to obtain the at least one feature of the oral appliance 221 based on at least one of the images and the videos data. The at least one feature of the oral appliance 221 comprises a shape of the oral appliance 221, a size of the oral appliance 221, a structure of the oral appliance 221, discolouration (also referred to as decolourization) of the oral appliance 221, and disfigurement (also referred to as deterioration or deformation) of the oral appliance 221. Based on the at least one feature of the oral appliance 221 and at least one of the images and videos data of the oral appliance 221, the oral support receptacle 101, using image processing techniques, generates at least one of a two-dimensional (2D) and a three-dimensional (3D) dental structure of a user using the oral appliance 221. The image processing techniques may be any known image processing techniques in the art. In some embodiments, the image processing techniques may be used for capturing at least one of the 2D and 3D dental structure, which includes not just the size (or form) but also other characteristics such as the colour shades and shape of the dental (or teeth) structure, for creating 2D or 3D image libraries of the dental (or teeth) structure within an oral cavity, which may be further used for reconstruction of individual teeth or a complete set of teeth or dentures in the future while maintaining the original or sophisticated looks of the user at various timestamps. Further, in some embodiments, the 2D and 3D dental structure may be stored as files in the memory 305 of the oral support receptacle 101. In some instances, the 2D and 3D dental structure may be stored in the electronic device 105 with timestamps. Further, these stored images may be sent to AI or ML models on the electronic device 105 for further structural learning by these models. Based on results from these models, a 3D image library may be created. Any changes in the 3D dental structure may be recorded and stored in the 3D image library whenever the changes occur in the oral appliance 221. Thereafter, the oral support receptacle 101, using a trained AI model, compares at least one of the 2D or 3D dental structure with at least one of historic dental structure images and associated metadata stored in the oral support receptacle 101. Based on the comparison, the oral support receptacle 101 provides information on a dental structure data of the user through the electronic device 105. The information (also referred to as data or results) may relate to (1) wear and tear, damage, and cut-outs in the oral appliance 221, (2) user habits on using the oral appliance 221, and (3) the reason for failure or non-working of the oral appliance 221 for the user, for example, whether it is due to user non-compliance.
During the training phase of the AI model, the oral support receptacle 101 receives a plurality of historic dental structure images or videos data and associated metadata of one or more users from one or more sources. The one or more sources may be a mobile device, a computer system, a remote server, a specialized processing device, or a cloud server. Thereafter, the oral support receptacle 101 determines progress in the dental structure data for each user by comparing the plurality of historic dental structure images or videos data and associated metadata with a pre-determined dental structure and associated metadata. The progress in the dental structure data refers to detecting one of an abnormality in the dental structure data, normality in the dental structure data, or movement of the oral appliance 221 over a period of time.
In some embodiments, the present disclosure may be configured to identify plaque formation or cavities on the dental (or teeth) structure of the user by performing the following functions. Firstly, the liquid dispenser 239 may release a mist in the oral support receptacle 101 which includes a coloured dye. This coloured dye remains on the oral appliance 221 which the user wears. On wearing such an oral appliance 221, the coloured dye may stick to the portion of the dental (or teeth) structure where plaque has been formed or a cavity exists. The plaque picks up the coloured dye by way of a reaction causing the dye to change colour, which could be identified visually or using the oral support receptacle’s scanning techniques. The oral support receptacle 101 determines whether the oral appliance 221 is present inside the oral support receptacle 101, wherein the oral appliance 221 with the coloured dye was used by the user. Thereafter, the oral support receptacle 101 captures at least one of images and videos data of the oral appliance 221 using the scanner 205 when the oral appliance 221 is determined to be present inside the oral support receptacle 101. The oral support receptacle 101 determines an oral condition present in a dental structure of the user through a change in colour of the coloured dye on the oral appliance 221 based on at least one of the images and videos data of the oral appliance 221. The captured images and videos data of the user’s dental structure may be utilized for analysis using the AI or ML model for determining timely corrective actions on the oral appliance 221.
The following embodiments are common to the first embodiment and the second embodiment.
In some embodiments, upon analysis of the oral appliance 221 by the oral support receptacle 101 or by the electronic device 105, if the oral appliance 221 has been determined to be worn for a shorter duration than a predefined time, then the oral support receptacle 101 or the electronic device 105 may notify the user about the condition of wearing the oral appliance 221 for the shorter duration via blinking of the one or more LEDs 217, by a buzzer sound from the buzzer 237, or through the display screen, i.e., the LED or LCD screen 211. Further, the oral support receptacle 101 may provide a push notification when a user is away from the oral support receptacle 101 for more than a predefined distance, which may ensure that the user does not end up leaving the oral support receptacle 101 behind. The push notification on the electronic device 105 associated with the user may be utilized to provide real-time feedback to the user such that the user does not misplace the oral support receptacle 101.
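A minimal sketch of such a wear-duration compliance check and the resulting indications, with the prescribed wear time taken as an illustrative assumption, is given below.

```python
# Hypothetical sketch of the wear-duration compliance check described above.
# The prescribed daily wear time is an illustrative assumption.

from typing import Callable


def check_compliance(worn_hours_today: float,
                     prescribed_hours: float,
                     blink_leds: Callable[[], None],
                     sound_buzzer: Callable[[], None],
                     show_message: Callable[[str], None]) -> bool:
    """Return True if the user wore the appliance at least as prescribed."""
    if worn_hours_today >= prescribed_hours:
        return True
    shortfall = prescribed_hours - worn_hours_today
    blink_leds()                     # one or more LEDs 217
    sound_buzzer()                   # buzzer 237
    show_message(f"Wear the appliance {shortfall:.1f} more hours today")
    return False


if __name__ == "__main__":
    check_compliance(
        worn_hours_today=15.0,
        prescribed_hours=22.0,       # assumption, e.g. for an aligner
        blink_leds=lambda: print("LEDs 217 blinking"),
        sound_buzzer=lambda: print("buzzer 237 sounding"),
        show_message=print,
    )
```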
In some embodiments, the oral support receptacle 101 further comprises a tracking means adapted to track the location of the oral support receptacle 101, which may be arranged in the case 201 or the lid 203 of the oral support receptacle 101. The tracking means uses a GPS tracking system to keep track of the location of the oral support receptacle 101. The tracking means may also have a wireless device. The wireless device has a unique customized identification code to distinguish the wireless device of the oral support receptacle 101 from other wireless electronic devices such as mobile phones, laptops, and the like. Preferably, the unique customized identification code is, but is not limited to, an 8-bit to 1024-bit code. The tracking means receives a data log related to the lid 203 of the oral support receptacle 101 being open, detects the oral appliance 221 within the oral support receptacle 101, checks the orientation of the oral appliance 221, monitors the sanitization status of the oral appliance 221, and detects the volume of the disinfectant. The wireless device in the tracking means periodically communicates its unique customized identification code along with data to provide the location of the oral support receptacle 101. The wireless device can co-operate with any one of GPRS, GSM, CDMA, Wi-Fi, WiMAX, WLAN, Bluetooth, ZigBee, a local wireless connection, and a hardwired connection. Additionally, communication techniques such as frequency hopping, phase shift keying, and amplitude shift keying, along with encryption and decryption techniques, are used to prevent intrusion and to ensure data security in the communication between the oral support receptacle 101 and the electronic device 105. The encryption techniques and the decryption techniques used in the present disclosure may be any known techniques in the art.
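By way of a non-limiting illustration, the periodic status payload communicated by the tracking means together with its unique customized identification code may be sketched as follows. The JSON layout and the HMAC integrity tag are illustrative assumptions; the disclosure only states that encryption and related data-security techniques may be used.

```python
# Hypothetical sketch of the periodic status payload broadcast by the
# tracking means, tagged with its unique identification code. The JSON
# layout and the HMAC integrity tag are illustrative assumptions.

import hashlib
import hmac
import json
import time


def build_status_payload(device_id_hex: str,
                         secret_key: bytes,
                         lid_open: bool,
                         appliance_present: bool,
                         sanitized: bool,
                         disinfectant_ml: float,
                         latitude: float,
                         longitude: float) -> bytes:
    body = {
        "id": device_id_hex,          # unique customized identification code
        "ts": int(time.time()),       # timestamp, e.g. from the RTC 235
        "lid_open": lid_open,
        "appliance_present": appliance_present,
        "sanitized": sanitized,
        "disinfectant_ml": disinfectant_ml,
        "lat": latitude,
        "lon": longitude,
    }
    raw = json.dumps(body, sort_keys=True).encode()
    tag = hmac.new(secret_key, raw, hashlib.sha256).hexdigest()
    return json.dumps({"body": body, "tag": tag}).encode()


if __name__ == "__main__":
    print(build_status_payload("a1b2c3d4", b"shared-secret", False, True,
                               True, 12.5, 12.9716, 77.5946))
```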
Further, the oral support receptacle 101 and/or the electronic device 105 includes an automated system such that reminders and notifications are shared regarding refilling of the liquid in the liquid dispenser 239. Further, various reminders for wearing the oral appliance 221 regularly, reminders for not forgetting to carry the oral support receptacle 101, and reminders about disinfection cycles for disinfecting the oral appliance 221 may be shared in a timely manner via the oral support receptacle 101 and/or the electronic device 105. Further, the users may be provided with reminders to inspect the oral appliance 221 on a timely basis via the oral support receptacle 101 and/or the electronic device 105.
FIG. 3 shows a detailed block diagram of an oral support receptacle in accordance with some embodiments of the present disclosure.
The oral support receptacle 101 may include an I-O interface 301 (also referred to as a communication unit), a processor 303, data 307, and one or more units (also referred to as modules) 313, which are described herein in detail. In the embodiment, the data 307 may be stored within a memory 305.
The I-O interface 301 is configured to transmit captured at least one of images and videos data of the oral appliance from the oral support receptacle 101. The I-O interface 301 employs communication protocols/methods such as, without limitation, audio, analog, digital, monoaural, Radio Corporation of America (RCA) connector, stereo, IEEE®-1394 high speed serial bus, serial bus, Universal Serial Bus (USB), infrared, Personal System/2 (PS/2) port, Bayonet Neill-Concelman (BNC) connector, coaxial, component, composite, Digital Visual Interface (DVI), High-Definition Multimedia Interface (HDMI®), Radio Frequency (RF) antennas, S-Video, Video Graphics Array (VGA), IEEE® 802.11b/g/n/x, Bluetooth, cellular e.g., Code-Division Multiple Access (CDMA), High-Speed Packet Access (HSPA+), Global System for Mobile communications (GSM®), Long-Term Evolution (LTE®), Worldwide interoperability for Microwave access (WiMax®), or the like.
The at least one of images and videos data of the oral appliance 221 is stored in the memory 305. The memory 305 is communicatively coupled to the processor 303 of the oral support receptacle 101. The memory 305, also, stores processor-executable instructions which may cause the processor 303 to execute the instructions for operating the oral support receptacle 101. The memory 305 includes, without limitation, memory drives, removable disc drives, etc. The memory drives may further include a drum, magnetic disc drive, magneto-optical drive, optical drive, Redundant Array of Independent Discs (RAID), solid-state memory devices, solid-state drives, etc.
The processor 303 includes at least one data processor for operating the oral support receptacle 101. The processor 303 may include specialized processing units such as integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc.
For the first embodiment, the data 307 may include, for example, image/video data 309 and miscellaneous data 311.
The image/video data 309 may include at least one of one or more images and one or more videos data captured using the scanner 205 arranged in the oral support receptacle 101.
The miscellaneous data 311 may store data, including temporary data and temporary files, generated by units (or modules) 313 for performing various functions of the oral support receptacle 101.
For the second embodiment, in addition to the image/video data 309 and the miscellaneous data 311, the data 307 may include historic data (not shown in FIG. 3).
The image/video data 309 may include at least one of one or more images and one or more videos data captured using the scanner 205 arranged in the oral support receptacle 101.
The historic data may include at least one of historic dental structure images and associated metadata, which are stored in the electronic device 105 and received from the electronic device 105. The term historic or historical refers to data from the past (and not current data).
The miscellaneous data 311 may store data, including temporary data and temporary files, generated by units (or modules) 313 for performing various functions of the oral support receptacle 101.
In an embodiment, the data 307 in the memory 305 are processed by the one or more units (or modules) 313 present within the memory 305 of the oral support receptacle 101. In the embodiment, the one or more units (or modules) 313 may be implemented as dedicated hardware units. As used herein, the term unit (or module) refers to an Application Specific Integrated Circuit (ASIC), an electronic circuit, a Field-Programmable Gate Array (FPGA), a Programmable System-on-Chip (PSoC), a combinational logic circuit, and/or other suitable components that provide the described functionality. In some implementations, the one or more units (or modules) 313 may be communicatively coupled to the processor 303 for performing one or more functions of the oral support receptacle 101. The units (or modules) 313, when configured with the functionality defined in the present disclosure, will result in novel hardware.
For the first embodiment, the one or more units (or modules) 313 of the oral support receptacle 101 may include, but are not limited to, a transceiver 315, a determining unit 317, a capturing unit 319, and a detecting unit 321. The one or more units (or modules) 313 may also include miscellaneous units (or modules) 323 to perform various miscellaneous functionalities of the oral support receptacle 101. The scanner 205 and the light source 207 arranged in the lid 203 of the oral support receptacle 101, and the LED or LCD screen 211, the temperature and humidity sensor 213, the IR and/or laser sensor 215, 223, one or more LEDs 217, the second sensor 219, the battery 225, the first sensor 227, the accelerometer and/or gyroscope 229, the power management unit 233, and the Real Time Clock (RTC) 235 arranged in the case 201 of the oral support receptacle 101 may be part of the miscellaneous units (or modules) 323. In one embodiment, the UV sensor arranged in the case 201 of the oral support receptacle 101 may be part of the miscellaneous units (or modules) 323.
Transceiver 315:
The transceiver 315 transmits the captured at least one of images and videos data of the oral appliance 221 to the electronic device 105 and receives commands from the electronic device 105.
Determining unit 317:
The determining unit 317 determines whether the oral appliance 221 is present inside the oral support receptacle 101. When the oral appliance 221 is determined to be present inside the oral support receptacle 101, the determining unit 317 dispenses a mist using a first sensor 227 arranged in the oral support receptacle 101. The first sensor 227 is at least one of an ultrasonic disk sensor, a vibrating disc sensor, and a pressurized mist sensor. The mist comprises ultrafine aerosolized droplets. A droplet size of the ultrafine aerosolized droplets is in a range of 1 to 10 µm. The mist is one of a coloured dye or a disinfectant solvent. The determining unit 317 determines, using at least one of the temperature and humidity sensor 213 and the scanner 205 arranged in the oral support receptacle 101, whether the mist was dispensed. The determining unit 317 also determines whether the oral appliance 221 is present inside the oral support receptacle 101, wherein the oral appliance 221 with the coloured dye was used by a user.
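As an illustrative, non-limiting example, the following Python sketch captures the decision logic of the determining unit 317 described above; the three callables are hypothetical hardware interfaces, and the humidity-rise threshold is an assumption used only to illustrate verifying that the mist was dispensed.

```python
from typing import Callable

def run_determining_cycle(
    appliance_present: Callable[[], bool],       # e.g., IR/laser presence reading
    dispense_mist: Callable[[], None],           # drives the first sensor 227 (misting element)
    read_relative_humidity: Callable[[], float]  # from the temperature and humidity sensor 213
) -> bool:
    """Dispense a mist when the oral appliance is present and verify the misting happened.

    Returns True if a humidity rise consistent with a dispensed mist was observed.
    All three callables are hypothetical interfaces, not part of the disclosure.
    """
    if not appliance_present():
        return False

    humidity_before = read_relative_humidity()
    dispense_mist()
    humidity_after = read_relative_humidity()

    # A rise of a few percent relative humidity is used here as an illustrative threshold.
    return (humidity_after - humidity_before) > 2.0
```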
Capturing unit 319:
The capturing unit 319 captures at least one of images and videos data of the oral appliance 221 using the scanner 205 arranged in the oral support receptacle 101. The capturing unit 319 captures at least one of images and videos data of the oral appliance 221 using the scanner 205 arranged in the oral support receptacle 101 when the oral appliance 221 is determined to be present inside the oral support receptacle 101.
Detecting unit 321:
The detecting unit 321 detects, using a second sensor 219 arranged in the oral support receptacle 101, opening or closing of a lid 203 of the oral support receptacle 101. The second sensor 219 is at least one of a Hall effect sensor, a Reed switch sensor, an optical sensor, a tilt sensor, an ultrasonic sensor, a microswitch, a strain gauge sensor, a light dependent resistor and an Infrared (IR) break-beam sensor. Thereafter, the detecting unit 321 detects, using at least one of a laser sensor 215, 223, the scanner 205, and Infrared (IR) sensor 215, 223 arranged in the oral support receptacle 101, presence or absence of the oral appliance 221 inside the oral support receptacle 101.
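For illustration, the two detection steps of the detecting unit 321 can be sketched as follows, assuming hypothetical digital readings from a Hall effect lid sensor and from IR/laser break-beam sensors; the readings and their polarity are assumptions made for the example.

```python
from enum import Enum

class LidState(Enum):
    OPEN = "open"
    CLOSED = "closed"

def detect_lid_state(hall_sensor_high: bool) -> LidState:
    """A Hall effect sensor reads high when the lid magnet is near, i.e., the lid is closed."""
    return LidState.CLOSED if hall_sensor_high else LidState.OPEN

def detect_appliance_presence(ir_beam_broken: bool, laser_beam_broken: bool) -> bool:
    """Presence is inferred when either beam inside the receptacle is interrupted."""
    return ir_beam_broken or laser_beam_broken

# Example readings (hypothetical):
print(detect_lid_state(hall_sensor_high=False))   # LidState.OPEN
print(detect_appliance_presence(True, False))     # True
```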
For the second embodiment, the one or more units (or modules) 313 of the oral support receptacle 101 may include, but are not limited to, a transceiver 315, a determining unit 317, a capturing unit 319, a detecting unit 321, a generating unit (not shown in FIG. 3) and a providing unit (not shown in FIG. 3). The one or more units (or modules) 313 may also include miscellaneous units (or modules) 323 to perform various miscellaneous functionalities of the oral support receptacle 101. The scanner 205 and the light source 207 arranged in the lid 203 of the oral support receptacle 101, and the LED or LCD screen 211, the temperature and humidity sensor 213, the IR and/or laser sensor 215, 223, one or more LEDs 217, the second sensor 219, the battery 225, the first sensor 227, the accelerometer and/or gyroscope 229, the power management unit 233, and the Real Time Clock (RTC) 235 arranged in the case 201 of the oral support receptacle 101 may be part of the miscellaneous units (or modules) 323. In one embodiment, the UV sensor arranged in the case 201 of the oral support receptacle 101 may be part of the miscellaneous units (or modules) 323.
Transceiver 315:
During execution phase, the transceiver 315 receives, from the electronic device 105, at least one of historic dental structure images and associated metadata stored in the electronic device 105.
During training phase of an AI model, the transceiver 315 receives a plurality of historic dental structure images or videos data and associated metadata of one or more users from one or more sources. The one or more sources may be a mobile device, a computer system, a remote server, a specialized processing device, or a cloud server.
Determining unit 317:
During execution phase and/or training phase, the determining unit 317 determines whether the oral appliance 221 is present inside the oral support receptacle 101. When the oral appliance 221 is determined to be present inside the oral support receptacle 101, the determining unit 317 dispenses a mist using a first sensor 227 arranged in the oral support receptacle 101. The first sensor 227 is at least one of an ultrasonic disk sensor, a vibrating disc sensor, and a pressurized mist sensor. The mist comprises ultrafine aerosolized droplets. A droplet size of the ultrafine aerosolized droplets is in a range of 1 to 10 µm. The mist is one of a coloured dye or a disinfectant solvent. The determining unit 317 determines, using at least one of the temperature and humidity sensor 213 and the scanner 205 arranged in the oral support receptacle 101, whether the mist was dispensed.
During execution phase, the determining unit 317 determines a type of the oral appliance 221 using at least one of the images and videos data of the oral appliance 221. The type of the oral appliance 221 may be one of an aligner, a retainer, a denture, a mouth guard, a removable palatal expander, a pedodontics positioner, a twin block appliance, a removable space maintainer, a snoring device, a night guard, a removable habit breaking appliance, and a removable bite plane. Thereafter, the determining unit 317 determines at least one feature of the oral appliance 221 using at least one of the images and videos data of the oral appliance 221 based on the type of the oral appliance 221. The at least one feature of the oral appliance 221 comprises a shape of the oral appliance 221, a size of the oral appliance 221, a structure of the oral appliance 221, discolouration of the oral appliance 221, and disfigurement of the oral appliance 221. The determining unit 317 determines whether the oral appliance 221 is present inside the oral support receptacle 101, wherein the oral appliance 221 with the coloured dye was used by a user. The determining unit 317 determines an oral condition present in a dental structure of the user through a change in colour of the coloured dye on the oral appliance 221 based on at least one of the images and videos data of the oral appliance 221.
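As an illustrative, non-limiting example, the following Python sketch shows how such type and feature determination could be structured, assuming a hypothetical pretrained classifier (the `classify` callable) and a plain RGB image array; the crude thresholding and colour statistics below are stand-ins for the image processing and AI techniques described above.

```python
import numpy as np

APPLIANCE_TYPES = [
    "aligner", "retainer", "denture", "mouth guard", "removable palatal expander",
    "pedodontics positioner", "twin block appliance", "removable space maintainer",
    "snoring device", "night guard", "removable habit breaking appliance",
    "removable bite plane",
]

def determine_type(image: np.ndarray, classify) -> str:
    """`classify` is a hypothetical trained model returning one probability per type."""
    probabilities = classify(image)               # expected shape: (len(APPLIANCE_TYPES),)
    return APPLIANCE_TYPES[int(np.argmax(probabilities))]

def determine_features(image: np.ndarray, appliance_type: str) -> dict:
    """Illustrative feature extraction; a real pipeline would use segmentation models."""
    mask = image.mean(axis=-1) > 30               # crude foreground mask on an 8-bit RGB image
    area_px = int(mask.sum())
    mean_rgb = image[mask].mean(axis=0) if area_px else np.zeros(3)
    return {
        "type": appliance_type,
        "size_px": area_px,                                         # proxy for size/shape extent
        "discolouration_score": float(mean_rgb[0] - mean_rgb[2]),   # red-blue shift as a proxy
    }
```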
During training phase of an AI model, the determining unit 317 determines progress in the dental structure data for each user by comparing the plurality of historic dental structure images or videos data and associated metadata with a pre-determined dental structure and associated metadata. The progress in the dental structure data refers to detecting one of abnormality in the dental structure data, normality in the dental structure data, or movement of the oral appliance 221 over a period of time.
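The following is a minimal, non-limiting sketch of how such progress labels could be derived during training, assuming hypothetical per-tooth position measurements (in mm) extracted from the historic images and a pre-determined reference; the thresholds are illustrative only.

```python
def label_progress(historic_measurements, reference, movement_tolerance_mm=0.5):
    """Derive a coarse progress label for one user from a time-ordered series of scans.

    `historic_measurements` is a list of per-tooth positions (mm), one list per historic
    scan, and `reference` is the pre-determined dental structure; both are assumptions
    made for illustration.
    """
    latest = historic_measurements[-1]
    deviation = max(abs(a - b) for a, b in zip(latest, reference))
    if deviation > 2.0:                      # illustrative abnormality threshold
        return "abnormality"
    first = historic_measurements[0]
    movement = max(abs(a - b) for a, b in zip(latest, first))
    if movement > movement_tolerance_mm:
        return "appliance movement over time"
    return "normality"

# Example: three scans of four tooth positions (mm) versus a planned reference.
scans = [[0.0, 0.1, 0.0, 0.2], [0.2, 0.3, 0.1, 0.4], [0.6, 0.7, 0.5, 0.8]]
print(label_progress(scans, reference=[0.5, 0.6, 0.5, 0.7]))   # "appliance movement over time"
```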
Capturing unit 319:
During execution phase and/or training phase, the capturing unit 319 captures at least one of images and videos data of the oral appliance 221 using the scanner 205 arranged in the oral support receptacle 101. The capturing unit 319 captures at least one of images and videos data of the oral appliance 221 using the scanner 205 arranged in the oral support receptacle 101 when the oral appliance 221 is determined to be present inside the oral support receptacle 101.
Detecting unit 321:
During execution phase and/or training phase, the detecting unit 321 detects, using a second sensor 219 arranged in the oral support receptacle 101, opening or closing of a lid 203 of the oral support receptacle 101. The second sensor 219 is at least one of a Hall effect sensor, a Reed switch sensor, an optical sensor, a tilt sensor, an ultrasonic sensor, a microswitch, a strain gauge sensor, a light dependent resistor and an Infrared (IR) break-beam sensor. Thereafter, the detecting unit 321 detects, using at least one of a laser sensor 215, 223, the scanner 205, and Infrared (IR) sensor 215, 223 arranged in the oral support receptacle 101, presence or absence of the oral appliance 221 inside the oral support receptacle 101.
Generating unit (not shown in FIG. 3):
During execution phase, the generating unit generates at least one of 2D and 3D dental structure of a user using the oral appliance 221 based on the at least one feature of the oral appliance 221 determined by the determining unit 317 and at least one of the images and videos data of the oral appliance 221.
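As an illustrative, non-limiting sketch of what a generated 2D dental structure could look like, the following snippet fits a simple parabolic arch to appliance-derived width and depth values; a real implementation would use a learned reconstruction, and the parabolic form and the example dimensions are assumptions made for illustration.

```python
import numpy as np

def generate_arch_2d(arch_width_mm: float, arch_depth_mm: float, n_points: int = 32) -> np.ndarray:
    """Generate a simple parabolic 2D dental arch from appliance-derived dimensions."""
    x = np.linspace(-arch_width_mm / 2, arch_width_mm / 2, n_points)
    y = arch_depth_mm * (1 - (2 * x / arch_width_mm) ** 2)   # zero at both arch ends
    return np.stack([x, y], axis=1)                          # shape: (n_points, 2)

arch = generate_arch_2d(arch_width_mm=55.0, arch_depth_mm=45.0)
print(arch.shape)                                            # (32, 2)
```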
Providing unit (not shown in FIG. 3):
During execution phase, the providing unit compares, using a trained AI model, at least one of the 2D or 3D dental structure with at least one of historic dental structure images and associated metadata stored in the electronic device 105. Thereafter, the providing unit provides information on a dental structure data of the user based on the comparison.
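The comparison step can be illustrated with the following minimal sketch, which scores a generated structure against historic structures using a mean nearest-point distance; the distance measure, the threshold, and the report format are assumptions standing in for the trained AI model's learned comparison.

```python
from typing import List
import numpy as np

def compare_with_history(current: np.ndarray, historic: List[np.ndarray]) -> dict:
    """Compare a generated 2D structure (N x 2 points) against historic structures."""
    def mean_nearest_distance(a: np.ndarray, b: np.ndarray) -> float:
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)   # pairwise distances
        return float(d.min(axis=1).mean())

    distances = [mean_nearest_distance(current, h) for h in historic]
    best = int(np.argmin(distances))
    return {
        "closest_historic_index": best,
        "mean_deviation_mm": distances[best],
        "flag": "review suggested" if distances[best] > 1.0 else "within expected range",
    }
```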
In one embodiment, the generating unit and the providing unit may together form the AI model.
FIG. 4 shows a detailed block diagram of an electronic device in accordance with some embodiments of the present disclosure.
The electronic device 105, in addition to the I-O interface 107 and the processor 111 described above, may include data 401 and one or more units (also referred to as modules) 407, which are described herein in detail. In an embodiment, the data 401 may be stored within the memory 109. The data 401 may include, for example, image/video data 403 and miscellaneous data 405.
The image/video data 403 may include at least one or more images and one or more videos data received from the oral support receptacle 101.
The miscellaneous data 405 may store data, including temporary data and temporary files, generated by units (or modules) 407 for performing various functions of the electronic device 105.
In an embodiment, the data 401 in the memory 109 are processed by the one or more units (or modules) 407 present within the memory 109 of the electronic device 105. In the embodiment, the one or more units (or modules) 407 may be implemented as dedicated hardware units. As used herein, the term unit (or module) refers to an Application Specific Integrated Circuit (ASIC), an electronic circuit, a Field-Programmable Gate Array (FPGA), a Programmable System-on-Chip (PSoC), a combinational logic circuit, and/or other suitable components that provide the described functionality. In some implementations, the one or more units (or modules) 407 may be communicatively coupled to the processor 111 for performing one or more functions of the electronic device 105. The units (or modules) 407, when configured with the functionality defined in the present disclosure, will result in novel hardware.
In one implementation, the one or more units (or modules) 407 may include, but are not limited to, a transceiver 409, a determining unit 411, a generating unit 413, and a providing unit 415. The one or more units (or modules) 407 may also include miscellaneous units (or modules) 417 to perform various miscellaneous functionalities of the electronic device 105.
Transceiver 409:
During execution phase, the transceiver 409 receives the captured at least one of images and videos data of the oral appliance 221 from the oral support receptacle 101.
During training phase of an AI model, the transceiver 409 receives a plurality of historic dental structure images or videos data and associated metadata of one or more users from one or more sources. The one or more sources may be a remote server, a cloud server, a specialized processing device, or an external database.
Determining unit 411:
During execution phase, the determining unit 411 determines a type of the oral appliance 221 using at least one of the images and videos data of the oral appliance 221 received by the transceiver 409. The type of the oral appliance 221 may be one of an aligner, a retainer, a denture, a mouth guard, a removable palatal expander, a pedodontics positioner, a twin block appliance, a removable space maintainer, a snoring device, a night guard, a removable habit breaking appliance, and a removable bite plane. Thereafter, the determining unit 411 determines at least one feature of the oral appliance 221 using at least one of the images and videos data of the oral appliance 221 based on the type of the oral appliance 221. The at least one feature of the oral appliance 221 comprises a shape of the oral appliance 221, a size of the oral appliance 221, a structure of the oral appliance 221, discolouration of the oral appliance 221, and disfigurement of the oral appliance 221. The determining unit 411 determines an oral condition present in a dental structure of the user through a change in colour of the coloured dye on the oral appliance 221 based on at least one of the images and videos data of the oral appliance 221.
During training phase of an AI model, the determining unit 411 determines progress in the dental structure data for each user by comparing the plurality of historic dental structure images or videos data and associated metadata with a pre-determined dental structure and associated metadata. The progress in the dental structure data refers to detecting one of abnormality in the dental structure data, normality in the dental structure data, or movement of the oral appliance 221 over a period of time.
Generating unit 413:
During execution phase, the generating unit 413 generates at least one of 2D and 3D dental structure of a user using the oral appliance 221 based on the at least one feature of the oral appliance 221 determined by the determining unit 411 and at least one of the images and videos data of the oral appliance 221.
Providing unit 415:
During execution phase, the providing unit 415 compares, using a trained AI model, at least one of the 2D or 3D dental structure with at least one of historic dental structure images and associated metadata stored in the electronic device 105. Thereafter, the providing unit 415 provides information on a dental structure data of the user based on the comparison.
In one embodiment, the generating unit 413 and the providing unit 415 may together form the AI model.
FIG. 5 illustrates a flowchart showing a method for operating an AI-based system for an oral support receptacle of an oral appliance in accordance with the first embodiment of the present disclosure. In the first embodiment, the method for operating the AI-based system for the oral support receptacle of the oral appliance involves using the oral support receptacle 101 and the electronic device 105.
As illustrated in FIG. 5, the method 500 includes one or more blocks for operating an AI-based system for an oral support receptacle of an oral appliance in accordance with some embodiments of the present disclosure. The method 500 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions or implement particular abstract data types.
The order in which the method 500 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method. Additionally, individual blocks may be deleted from the methods without departing from the scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof.
The following blocks 501 to 507 are performed by the oral support receptacle 101.
At block 501, the determining unit 317 of the oral support receptacle 101 determines whether the oral appliance 221 is present inside the oral support receptacle 101.
At block 503, the determining unit 317 of the oral support receptacle 101 dispenses a mist using a first sensor 227 arranged in the oral support receptacle 101, when the oral appliance 221 is determined to be present inside the oral support receptacle 101. The mist comprises ultrafine aerosolized droplets. A droplet size of the ultrafine aerosolized droplets is in a range of 1 to 10 µm. The mist is one of a coloured dye or a disinfectant solvent. The first sensor 227 is at least one of an ultrasonic disk sensor, a vibrating disc sensor, and a pressurized mist sensor.
At block 505, the capturing unit 319 of the oral support receptacle 101 captures at least one of images and videos data of the oral appliance 221 using a scanner 205 arranged in the oral support receptacle 101.
At block 507, the transceiver 315 of the oral support receptacle 101 transmits the captured at least one of images and videos data of the oral appliance 221 to the electronic device 105.
The following blocks 509 to 519 are performed by the electronic device 105.
At block 509, the transceiver 409 of the electronic device 105 receives the captured at least one of images and videos data of the oral appliance 221 from the oral support receptacle 101.
At block 511, the determining unit 411 of the electronic device 105 determines a type of the oral appliance 221 using at least one of the images and videos data of the oral appliance 221.
At block 513, the determining unit 411 of the electronic device 105 determines at least one feature of the oral appliance 221 using at least one of the images and videos data of the oral appliance 221 based on the type of the oral appliance 221. The at least one feature of the oral appliance 221 comprises a shape of the oral appliance 221, a size of the oral appliance 221, a structure of the oral appliance 221, discolouration of the oral appliance 221, and disfigurement of the oral appliance 221.
At block 515, the generating unit 413 of the electronic device 105 generates at least one of 2D and 3D dental structure of a user using the oral appliance 221 based on the at least one feature of the oral appliance 221 and at least one of the images and videos data of the oral appliance 221.
At block 517, the providing unit 415 of the electronic device 105 compares, using a trained AI model, at least one of the 2D or 3D dental structure with at least one of historic dental structure images and associated metadata stored in the electronic device 105.
At block 519, the providing unit 415 of the electronic device 105 provides information on a dental structure data of the user based on the comparison performed at block 517.
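For illustration only, the following Python sketch strings blocks 501 to 519 together as one sequence; every callable is a hypothetical stand-in for the corresponding unit described above, and the transmission and reception blocks 507/509 are elided.

```python
from typing import Any, Callable, Optional

def run_method_500(
    osr_detect_presence: Callable[[], bool],            # block 501
    osr_dispense_mist: Callable[[], None],              # block 503
    osr_capture_media: Callable[[], Any],               # block 505
    ed_determine_type: Callable[[Any], str],            # block 511
    ed_determine_features: Callable[[Any, str], dict],  # block 513
    ed_generate_structure: Callable[[dict, Any], Any],  # block 515
    ed_compare_with_history: Callable[[Any], dict],     # blocks 517 and 519
) -> Optional[dict]:
    """Run blocks 501-519 in order; returns the dental structure report, or None."""
    if not osr_detect_presence():                        # 501
        return None
    osr_dispense_mist()                                  # 503
    media = osr_capture_media()                          # 505; transmission (507/509) elided
    appliance_type = ed_determine_type(media)            # 511
    features = ed_determine_features(media, appliance_type)   # 513
    structure = ed_generate_structure(features, media)   # 515
    return ed_compare_with_history(structure)            # 517 + 519
```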
FIG. 6 illustrates a flowchart showing a method for operating an AI-based system for an oral support receptacle of an oral appliance in accordance with the second embodiment of the present disclosure. In the second embodiment, the method for operating the AI-based system for the oral support receptacle of the oral appliance involves using only the oral support receptacle 101.
As illustrated in FIG. 6, the method 600 includes one or more blocks for operating the oral support receptacle of the oral appliance in accordance with some embodiments of the present disclosure. The method 600 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions or implement particular abstract data types.
The order in which the method 600 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method. Additionally, individual blocks may be deleted from the methods without departing from the scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof.
The following blocks 601 to 615 are performed by the oral support receptacle 101.
At block 601, the determining unit 317 of the oral support receptacle 101 determines whether the oral appliance 221 is present inside the oral support receptacle 101.
At block 603, the determining unit 317 of the oral support receptacle 101 dispenses a mist using a first sensor 227 arranged in the oral support receptacle 101, when the oral appliance 221 is determined to be present inside the oral support receptacle 101. The mist comprises ultrafine aerosolized droplets. A droplet size of the ultrafine aerosolized droplets is in a range of 1 to 10 µm. The mist is one of a coloured dye or a disinfectant solvent. The first sensor 227 is at least one of an ultrasonic disk sensor, a vibrating disc sensor, and a pressurized mist sensor.
At block 605, the capturing unit 319 of the oral support receptacle 101 captures at least one of images and videos data of the oral appliance 221 using a scanner 205 arranged in the oral support receptacle 101.
At block 607, the determining unit 317 of the oral support receptacle 101 determines a type of the oral appliance 221 using at least one of the images and videos data of the oral appliance 221.
At block 609, the determining unit 317 of the oral support receptacle 101 determines at least one feature of the oral appliance 221 using at least one of the images and videos data of the oral appliance 221 based on the type of the oral appliance 221. The at least one feature of the oral appliance 221 comprises a shape of the oral appliance 221, a size of the oral appliance 221, a structure of the oral appliance 221, discolouration of the oral appliance 221, and disfigurement of the oral appliance 221.
At block 611, the generating unit of the oral support receptacle 101 generates at least one of 2D and 3D dental structure of a user using the oral appliance 221 based on the at least one feature of the oral appliance 221 and at least one of the images and videos data of the oral appliance 221.
At block 613, the providing unit of the oral support receptacle 101 compares, using a trained AI model, at least one of the 2D or 3D dental structure with at least one of historic dental structure images and associated metadata stored in the electronic device 105.
At block 615, the providing unit of the oral support receptacle 101 provides information on a dental structure data of the user based on the comparison performed at block 613.
A use case for operating the AI-based system for the oral support receptacle 101 of the oral appliance 221 is given below.
1. The user/patient gets a scan of the oral cavity, including the upper and lower jaws, from which the oral appliance 221 is made for multiple purposes. This scanned data is stored in the oral support receptacle 101 and/or the electronic device 105.
2. The oral appliance 221 can be for an orthodontic treatment, any dental/medical treatment, or any other use.
3. The user/patient places this oral appliance 221 in the oral support receptacle 101.
4. The oral support receptacle 101 is connected to the electronic device 105 and exchanges sensor data and/or processed sensor data with the electronic device 105.
5. This data is sent to the user/patient and to doctors or other treatment providers for further processing and analytics.
Tracking of the oral appliance wear time is given below:
1. User places the oral appliance in the oral support receptacle 101.
2. A sensor (e.g., the second sensor 219) detects the lid 203 status (open/closed) of the oral support receptacle 101.
3. One or more sensors (IR or laser sensor) are operably coupled to the oral support receptacle 101 to capture presence/absence data of the oral appliance 221.
4. The scanner 205 is used to capture images and/or videos of the oral appliance 221 and identify the type of the oral appliance 221.
5. The electronic device 105 receives sensor and/or scanner data from the oral support receptacle 101.
6. The electronic device 105 processes the scanner/sensor data from the oral support receptacle 101 to detect the presence/absence of the oral appliance 221 in the oral support receptacle 101.
7. The sensor data and/or the processed sensor/scanner data is transmitted to the electronic device 105 and/or displayed on the LED or LCD screen 211 of the oral support receptacle 101 to indicate whether the oral appliance 221 is present/absent.
8. Wear time is calculated on the basis of the number of counts of insertion and removal of the oral appliance 221 from the oral support receptacle 101 and is indicated to the user via the LED or LCD screen 211 of the oral support receptacle 101 or the electronic device 105 (an illustrative calculation is sketched after this list).
9. This data can further be compared against an initial treatment plan and a compliance threshold to report on oral appliance wear time and compliance.
10. This data is further analyzed by the AI model, and the results are sent to the user for a future course of action.
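The wear-time calculation referred to in the list above can be illustrated with the following minimal Python sketch; the event format, the 10-hour compliance threshold, and the assumption that the appliance is worn whenever it is outside the oral support receptacle 101 are illustrative assumptions, not part of the disclosure.

```python
from datetime import datetime, timedelta
from typing import List, Tuple

def compute_wear_time(events: List[Tuple[datetime, str]]) -> timedelta:
    """Sum wear time from ('removed'/'inserted') events logged by the receptacle."""
    worn = timedelta()
    removed_at = None
    for timestamp, event in sorted(events):
        if event == "removed":
            removed_at = timestamp
        elif event == "inserted" and removed_at is not None:
            worn += timestamp - removed_at      # appliance assumed worn while out of the case
            removed_at = None
    return worn

events = [
    (datetime(2024, 6, 1, 8, 0), "removed"),
    (datetime(2024, 6, 1, 9, 0), "inserted"),
    (datetime(2024, 6, 1, 21, 0), "removed"),
    (datetime(2024, 6, 2, 7, 0), "inserted"),
]
worn = compute_wear_time(events)
print(worn, "worn;", "compliant" if worn >= timedelta(hours=10) else "below threshold")
```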
Disinfection cycle of the oral appliance is given below:
1. Once the oral appliance 221 is determined to be placed inside the oral support receptacle 101, the orientation of the oral support receptacle 101 is checked for an optimized misting cycle using the accelerometer and/or gyroscope 229.
2. The second sensor 219 checks whether the lid 203 of the oral support receptacle 101 is open or closed.
3. Disinfection of the oral appliance 221 takes place using the first sensor 227.
4. Misting cycle success/failure is validated through the temperature and humidity sensor 213 and/or the scanner 205.
5. The scanner 205 is used to verify whether the oral appliance 221 was evenly disinfected (an illustrative orchestration of this cycle is sketched below).
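For illustration, the disinfection cycle above can be orchestrated as in the following minimal Python sketch, in which every callable is a hypothetical interface to the corresponding sensor or actuator and the returned status strings are illustrative only.

```python
from typing import Callable

def run_disinfection_cycle(
    orientation_level: Callable[[], bool],       # accelerometer/gyroscope 229 check (step 1)
    lid_closed: Callable[[], bool],              # second sensor 219 lid check (step 2)
    dispense_mist: Callable[[], None],           # first sensor 227 misting element (step 3)
    humidity_rise_detected: Callable[[], bool],  # temperature and humidity sensor 213 (step 4)
    coverage_uniform: Callable[[], bool],        # scanner 205 coverage check (step 5)
) -> str:
    """Run the five-step disinfection cycle; all callables are hypothetical interfaces."""
    if not orientation_level():
        return "aborted: receptacle not oriented for an optimized misting cycle"
    if not lid_closed():
        return "aborted: lid open"
    dispense_mist()
    if not humidity_rise_detected():
        return "failed: misting not validated"
    return "success" if coverage_uniform() else "retry: uneven coverage"
```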
Some of the technical advantages of the present disclosure are listed below.
Since the oral support receptacle includes a (first) sensor, which is at least one of an ultrasonic disk sensor, a vibrating disc sensor, and a pressurized mist sensor, the claimed subject matter of the present disclosure provides the functionality of cleaning or disinfecting the oral appliance at any time, even in situations where there are limited resources for cleaning the oral appliance.
The use of an optimal droplet size (i.e., in the range of 1 to 10 µm) of the mist in the claimed subject matter of the present disclosure allows the droplets to remain suspended for longer periods, thereby allowing for better distribution and coverage of the droplets over the oral appliance 221. Consequently, this approach provides an effective disinfection mechanism to kill viruses, germs, or bacteria present on the oral appliance 221.
The claimed subject matter of the present disclosure accurately identifies wear and tear, damage, and cut-outs in the oral appliance by performing analysis on at least one of images and videos data of the oral appliance, captured from the oral support receptacle, using image processing techniques and/or AI techniques.
The claimed subject matter of the present disclosure provides useful information, data, or results on user habits, such as tracking the oral appliance wear time and the insertion and removal of the oral appliance from the oral support receptacle, by performing analysis on at least one of images and videos data of the oral appliance, captured from the oral support receptacle, using image processing techniques and/or AI techniques. Based on this information, data, or result, a doctor can track the wear time of the user and can suggest a course correction of an oral treatment for the user, if required.
The claimed subject matter of the present disclosure identifies the cause of failure or non-working of the oral appliance on the user, for example, whether it is due to user non-compliance, by tracking the oral appliance wear time and the insertion and removal of the oral appliance from the oral support receptacle through analysis of at least one of images and videos data of the oral appliance, captured from the oral support receptacle, using image processing techniques and/or AI techniques.
The oral support receptacle of the present disclosure, with multi-functionality such as a liquid dispensing mechanism for disinfecting the oral appliance and a scanner for capturing at least one of images and videos data of the oral appliance for further processing, is efficient and cost effective.
Some of the clauses are mentioned below.
[1]: An Artificial Intelligence (AI) based system for an oral support receptacle of an oral appliance, the AI-based system comprising:
an electronic device;
the oral support receptacle communicatively coupled to the electronic device and configured to:
determine whether the oral appliance is present inside the oral support receptacle; and
when the oral appliance is determined to be present inside the oral support receptacle, dispense a mist using a first sensor arranged in the oral support receptacle.
[2]: The AI-based system described in [1],
the oral support receptacle is configured to:
capture at least one of images and videos data of the oral appliance using a scanner arranged in the oral support receptacle; and
transmit the at least one of images and videos data of the oral appliance to the electronic device;
the electronic device is configured to:
receive the at least one of images and videos data of the oral appliance from the oral support receptacle;
determine a type of the oral appliance using at least one of the images and videos data of the oral appliance;
determine at least one feature of the oral appliance using at least one of the images and videos data of the oral appliance based on the type of the oral appliance;
generate at least one of 2D and 3D dental structure of a user using the oral appliance based on the at least one feature of the oral appliance and at least one of the images and videos data of the oral appliance;
compare, using a trained AI model, at least one of the 2D or 3D dental structure with at least one of historic dental structure images and associated metadata stored in the electronic device; and
provide information on a dental structure data of the user based on the comparison.
[3]: The AI-based system described in [2], the electronic device is configured to train an AI model by:
receiving a plurality of historic dental structure images or videos data and associated metadata of one or more users from one or more sources; and
determining progress in the dental structure data for each user by comparing the plurality of historic dental structure images or videos data and associated metadata with a pre-determined dental structure and associated metadata,
wherein the progress in the dental structure data refers to detecting one of abnormality in the dental structure data, normality in the dental structure data, or movement of the oral appliance over a period of time.
[4]: The AI-based system described in [1], the oral support receptacle is configured to:
detect, using a second sensor arranged in the oral support receptacle, opening or closing of a lid of the oral support receptacle; and
detect, using at least one of a laser sensor, a scanner, and Infrared sensor arranged in the oral support receptacle, presence or absence of the oral appliance inside the oral support receptacle.
[5]: The AI-based system described in [1], the oral support receptacle is configured to:
determine, using at least one of a temperature and humidity sensor, and a scanner arranged in the oral support receptacle, whether the mist was dispensed.
[6]: The AI-based system described in any of [1] to [5],
wherein the mist comprises ultrafine aerosolized droplets,
wherein a droplet size of the ultrafine aerosolized droplets is in a range of 1 to 10 µm, and
wherein the mist is one of a coloured dye or a disinfectant solvent.
[7]: The AI-based system described in [6],
the oral support receptacle is configured to:
determine whether the oral appliance is present inside the oral support receptacle, wherein the oral appliance with the coloured dye was used by a user;
capture at least one of images and videos data of the oral appliance using a scanner arranged in the oral support receptacle when the oral appliance is determined to be present inside the oral support receptacle;
transmit the at least one of images and videos data of the oral appliance to the electronic device;
the electronic device is configured to:
receive the at least one of images and videos data of the oral appliance from the oral support receptacle; and
determine an oral condition present in a dental structure of the user through a change in colour of the coloured dye on the oral appliance based on at least one of the images and the videos data of the oral appliance.
[8]: The AI-based system described in [2], wherein the at least one feature of the oral appliance comprises a shape of the oral appliance, a size of the oral appliance, a structure of the oral appliance, discolouration of the oral appliance, and disfigurement of the oral appliance.
[9]: The AI-based system described in [1], wherein the first sensor is at least one of an ultrasonic disk sensor, a vibrating disc sensor, and a pressurized mist sensor.
[10]: The AI-based system described in [4], wherein the second sensor is at least one of a Hall effect sensor, a Reed switch sensor, an optical sensor, a tilt sensor, an ultrasonic sensor, a microswitch, a strain gauge sensor, a light dependent resistor and an Infrared (IR) break-beam sensor.
[11]: An Artificial Intelligence (AI) based system for an oral support receptacle of an oral appliance, the AI-based system comprising:
the oral support receptacle configured to:
determine whether the oral appliance is present inside the oral support receptacle; and
when the oral appliance is determined to be present inside the oral support receptacle, dispense a mist using a first sensor arranged in the oral support receptacle.
[12]: The AI-based system described in [11], the oral support receptacle is configured to:
capture at least one of images and videos data of the oral appliance using a scanner arranged in the oral support receptacle;
determine a type of the oral appliance using at least one of the images and the videos data of the oral appliance;
determine at least one feature of the oral appliance using at least one of the images and the videos data of the oral appliance based on the type of the oral appliance;
generate at least one of 2D and 3D dental structure of a user using the oral appliance based on the at least one feature of the oral appliance and at least one of the images and the videos data of the oral appliance;
compare, using a trained AI model, at least one of the 2D or 3D dental structure with at least one of historic dental structure images and associated metadata stored in an electronic device; and
provide information on a dental structure data of the user based on the comparison.
[13]: The AI-based system described in [12], the oral support receptacle is configured to train an AI model by:
receiving a plurality of historic dental structure images or videos data and associated metadata of one or more users from the electronic device; and
determining progress in the dental structure data for each user by comparing the plurality of historic dental structure images or videos data and associated metadata with a pre-determined dental structure and associated metadata,
wherein the progress in the dental structure data refers to detecting one of abnormality in the dental structure data, normality in the dental structure data, or movement of the oral appliance over a period of time.
[14]: The AI-based system described in [11], the oral support receptacle is configured to:
detect, using a second sensor arranged in the oral support receptacle, opening or closing of a lid of the oral support receptacle; and
detect, using at least one of a laser sensor, a scanner, and Infrared sensor arranged in the oral support receptacle, presence or absence of the oral appliance inside the oral support receptacle.
[15]: The AI-based system described in [11], the oral support receptacle is configured to:
determine, using at least one of a temperature and humidity sensor, and a scanner arranged in the oral support receptacle, whether the mist was dispensed.
[16]: The AI-based system described in any of [11] to [15],
wherein the mist comprises ultrafine aerosolized droplets,
wherein a droplet size of the ultrafine aerosolized droplets is in a range of 1 to 10 µm, and
wherein the mist is one of a coloured dye or a disinfectant solvent.
[17]: The AI-based system described in [16], the oral support receptacle is configured to:
determine whether the oral appliance is present inside the oral support receptacle, wherein the oral appliance with the coloured dye was used by a user;
capture at least one of images and videos data of the oral appliance using a scanner arranged in the oral support receptacle when the oral appliance is determined to be present inside the oral support receptacle; and
determine an oral condition present in a dental structure of the user through a change in colour of the coloured dye on the oral appliance based on at least one of the images and the videos data of the oral appliance.
[18]: The AI-based system described in [12], wherein the at least one feature of the oral appliance comprises a shape of the oral appliance, a size of the oral appliance, a structure of the oral appliance, discolouration of the oral appliance, and disfigurement of the oral appliance.
[19]: The AI-based system described in [11], wherein the first sensor is at least one of an ultrasonic disk sensor, a vibrating disc sensor, and a pressurized mist sensor.
[20]: The AI-based system described in [14], wherein the second sensor is at least one of a Hall effect sensor, a Reed switch sensor, an optical sensor, a tilt sensor, an ultrasonic sensor, a microswitch, a strain gauge sensor, a light dependent resistor and an Infrared (IR) break-beam sensor.
With respect to the use of substantially any plural and singular terms herein, those having skill in the art can translate from the plural to the singular and from the singular to the plural as is appropriate to the context or application. The various singular or plural permutations may be expressly set forth herein for sake of clarity.
One or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which a software (program) readable by an information processing apparatus may be stored. The information processing apparatus includes a processor and a memory, and the processor executes a process of the software. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include RAM, ROM, volatile memory, non-volatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
The described operations may be implemented as a method, a system, or an article of manufacture using at least one of standard programming and engineering techniques to produce software, firmware, hardware, or any combination thereof. The described operations may be implemented as code maintained in a “non-transitory computer readable medium”, where a processor may read and execute the code from the computer readable medium. The processor is at least one of a microprocessor and a processor capable of processing and executing the queries. A non-transitory computer readable medium may include media such as magnetic storage medium (e.g., hard disk drives, floppy disks, tape, etc.), optical storage (CD ROMs, DVDs, optical disks, etc.), volatile and non-volatile memory devices (e.g., EEPROMs, ROMs, PROMs, RAMs, DRAMs, SRAMs, Flash Memory, firmware, programmable logic, etc.), etc. Further, non-transitory computer-readable media include all computer-readable media except for a transitory. The code implementing the described operations may further be implemented in hardware logic (e.g., an integrated circuit chip, PGA, ASIC, etc.).
The terms “an embodiment”, “embodiment”, “embodiments”, “the embodiment”, “the embodiments”, “one or more embodiments”, “some embodiments”, and “one embodiment” mean “one or more (but not all) embodiments of the invention(s)” unless expressly specified otherwise.
The terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless expressly specified otherwise.
The enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise.
The terms “a”, “an” and “the” mean “one or more”, unless expressly specified otherwise.
A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary, a variety of optional components are described to illustrate the wide variety of possible embodiments of the invention.
When a single device or article is described herein, it will be readily apparent that more than one device or article (whether or not they cooperate) may be used in place of a single device or article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be readily apparent that a single device or article may be used in place of the more than one device, or article, or a different number of devices or articles may be used instead of the shown number of devices or programs. At least one of the functionalities and the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality or features. Thus, other embodiments of the invention need not include the device itself.
The illustrated operations of FIGS. 5, and 6 show certain events occurring in a certain order. In alternative embodiments, certain operations may be performed in a different order, modified, or removed. Moreover, steps may be added to the above-described logic and still conform to the described embodiments. Further, operations described herein may occur sequentially or certain operations may be processed in parallel. Yet further, operations may be performed by a single processing unit or by distributed processing units.
Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based here on. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope being indicated by the following claims.
REFERRAL NUMERALS:
Reference number Description
100 Environment
101 Oral Support Receptacle (OSR)
103 Communication network
105 Electronic Device (ED)
107 I-O interface
109 Memory
111 Processor
201 Case of OSR
203 Lid of OSR
205 Scanner
207 Light source
209 Mirror
211 LED or LCD screen
213 Temperature and humidity sensor
215, 223 IR and/or laser sensor
217 LEDs
219 Second sensor
221 Oral appliance
225 Battery
227 First sensor
229 Accelerometer and/or Gyroscope
231 Charging point
233 Power management unit
235 Real Time Clock (RTC)
237 Buzzer
239 Liquid dispenser
301 I-O interface
303 Processor
305 Memory
307 Data
309 Image or Video data
311 Miscellaneous data
313 Units
315 Transceiver
317 Determining unit
319 Capturing unit
321 Detecting unit
323 Miscellaneous units
401 Data
403 Image or Video data
405 Miscellaneous data
407 Units
409 Transceiver
411 Determining unit
413 Generating unit
415 Providing unit
417 Miscellaneous units
CLAIMS:
WE CLAIM:
1. An Artificial Intelligence (AI) based system for an oral support receptacle of an oral appliance, the AI-based system comprising:
an electronic device;
the oral support receptacle communicatively coupled to the electronic device and configured to:
determine whether the oral appliance is present inside the oral support receptacle; and
when the oral appliance is determined to be present inside the oral support receptacle, dispense a mist using a first sensor arranged in the oral support receptacle.
2. The AI-based system as claimed in claim 1,
the oral support receptacle is configured to:
capture at least one of images and videos data of the oral appliance using a scanner arranged in the oral support receptacle; and
transmit the at least one of images and videos data of the oral appliance to the electronic device;
the electronic device is configured to:
receive the at least one of images and videos data of the oral appliance from the oral support receptacle;
determine a type of the oral appliance using at least one of the images and the videos data of the oral appliance;
determine at least one feature of the oral appliance using at least one of the images and the videos data of the oral appliance based on the type of the oral appliance;
generate at least one of 2D and 3D dental structure of a user using the oral appliance based on the at least one feature of the oral appliance and at least one of the images and the videos data of the oral appliance;
compare, using a trained AI model, at least one of the 2D or 3D dental structure with at least one of historic dental structure images and associated metadata stored in the electronic device; and
provide information on a dental structure data of the user based on the comparison.
3. The AI-based system as claimed in claim 2, the electronic device is configured to train an AI model by:
receiving a plurality of historic dental structure images or videos data and associated metadata of one or more users from one or more sources; and
determining progress in the dental structure data for each user by comparing the plurality of historic dental structure images or videos data and associated metadata with a pre-determined dental structure and associated metadata,
wherein the progress in the dental structure data refers to detecting one of abnormality in the dental structure data, normality in the dental structure data, or movement of the oral appliance over a period of time.
4. The AI-based system as claimed in claim 1, the oral support receptacle is configured to:
detect, using a second sensor arranged in the oral support receptacle, opening or closing of a lid of the oral support receptacle; and
detect, using at least one of a laser sensor, a scanner, and Infrared sensor arranged in the oral support receptacle, presence or absence of the oral appliance inside the oral support receptacle.
5. The AI-based system as claimed in claim 1, the oral support receptacle is configured to:
determine, using at least one of a temperature and humidity sensor, and a scanner arranged in the oral support receptacle, whether the mist was dispensed.
6. The AI-based system as claimed in claim 1,
wherein the mist comprises ultrafine aerosolized droplets,
wherein a droplet size of the ultrafine aerosolized droplets is in a range of 1 to 10 µm, and
wherein the mist is one of a coloured dye or a disinfectant solvent.
7. The AI-based system as claimed in claim 6,
the oral support receptacle is configured to:
determine whether the oral appliance is present inside the oral support receptacle, wherein the oral appliance with the coloured dye was used by a user;
capture at least one of images and videos data of the oral appliance using a scanner arranged in the oral support receptacle when the oral appliance is determined to be present inside the oral support receptacle;
transmit the at least one of images and videos data of the oral appliance to the electronic device;
the electronic device is configured to:
receive the at least one of images and videos data of the oral appliance from the oral support receptacle; and
determine an oral condition present in a dental structure of the user through a change in colour of the coloured dye on the oral appliance based on at least one of the images and the videos data of the oral appliance.
8. The AI-based system as claimed in claim 2, wherein the at least one feature of the oral appliance comprises a shape of the oral appliance, a size of the oral appliance, a structure of the oral appliance, discolouration of the oral appliance, and disfigurement of the oral appliance.
9. The AI-based system as claimed in claim 1, wherein the first sensor is at least one of an ultrasonic disk sensor, a vibrating disc sensor, and a pressurized mist sensor.
10. The AI-based system as claimed in claim 4, wherein the second sensor is at least one of a Hall effect sensor, a Reed switch sensor, an optical sensor, a tilt sensor, an ultrasonic sensor, a microswitch, a strain gauge sensor, a light dependent resistor and an Infrared (IR) break-beam sensor.
11. An Artificial Intelligence (AI) based system for an oral support receptacle of an oral appliance, the AI-based system comprising:
the oral support receptacle configured to:
determine whether the oral appliance is present inside the oral support receptacle; and
when the oral appliance is determined to be present inside the oral support receptacle, dispense a mist using a first sensor arranged in the oral support receptacle.
12. The AI-based system as claimed in claim 11, the oral support receptacle is configured to:
capture at least one of images and videos data of the oral appliance using a scanner arranged in the oral support receptacle;
determine a type of the oral appliance using at least one of the images and the videos data of the oral appliance;
determine at least one feature of the oral appliance using at least one of the images and the videos data of the oral appliance based on the type of the oral appliance;
generate at least one of 2D and 3D dental structure of a user using the oral appliance based on the at least one feature of the oral appliance and at least one of the images and the videos data of the oral appliance;
compare, using a trained AI model, at least one of the 2D or 3D dental structure with at least one of historic dental structure images and associated metadata stored in an electronic device; and
provide information on a dental structure data of the user based on the comparison.
13. The AI-based system as claimed in claim 12, the oral support receptacle is configured to train an AI model by:
receiving a plurality of historic dental structure images or videos data and associated metadata of one or more users from the electronic device; and
determining progress in the dental structure data for each user by comparing the plurality of historic dental structure images or videos data and associated metadata with a pre-determined dental structure and associated metadata,
wherein the progress in the dental structure data refers to detecting one of abnormality in the dental structure data, normality in the dental structure data, or movement of the oral appliance over a period of time.
14. The AI-based system as claimed in claim 11, the oral support receptacle is configured to:
detect, using a second sensor arranged in the oral support receptacle, opening or closing of a lid of the oral support receptacle; and
detect, using at least one of a laser sensor, a scanner, and Infrared sensor arranged in the oral support receptacle, presence or absence of the oral appliance inside the oral support receptacle.
15. The AI-based system as claimed in claim 11, the oral support receptacle is configured to:
determine, using at least one of a temperature and humidity sensor, and a scanner arranged in the oral support receptacle, whether the mist was dispensed.
16. The AI-based system as claimed in claim 11,
wherein the mist comprises ultrafine aerosolized droplets,
wherein a droplet size of the ultrafine aerosolized droplets is in a range of 1 to 10 µm, and
wherein the mist is one of a coloured dye or a disinfectant solvent.
17. The AI-based system as claimed in claim 16, the oral support receptacle is configured to:
determine whether the oral appliance is present inside the oral support receptacle, wherein the oral appliance with the coloured dye was used by a user;
capture at least one of images and videos data of the oral appliance using a scanner arranged in the oral support receptacle when the oral appliance is determined to be present inside the oral support receptacle; and
determine an oral condition present in a dental structure of the user through a change in colour of the coloured dye on the oral appliance based on at least one of the images and the videos data of the oral appliance.
18. The AI-based system as claimed in claim 12, wherein the at least one feature of the oral appliance comprises a shape of the oral appliance, a size of the oral appliance, a structure of the oral appliance, discolouration of the oral appliance, and disfigurement of the oral appliance.
19. The AI-based system as claimed in claim 11, wherein the first sensor is at least one of an ultrasonic disk sensor, a vibrating disc sensor, and a pressurized mist sensor.
20. The AI-based system as claimed in claim 14, wherein the second sensor is at least one of a Hall effect sensor, a Reed switch sensor, an optical sensor, a tilt sensor, an ultrasonic sensor, a microswitch, a strain gauge sensor, a light dependent resistor and an Infrared (IR) break-beam sensor.