
Skin-AI Virtual Dermatologist

Abstract: Urban areas face significant challenges from traffic congestion, including increased travel time, pollution, accidents, and difficulty clearing routes for emergency vehicles. The Smart Traffic System with Real-Time Vehicle Tracking aims to resolve these problems through an innovative traffic management system. The proposed system leverages high-definition cameras, sensors, and Vehicle-to-Infrastructure (V2I) communication for comprehensive traffic data collection. Embedded applications run AI algorithms for vehicle detection and monitoring, including YOLO for object detection and Deep SORT for vehicle tracking. The design provides adaptive traffic signal control based on Edge Machine Learning (Edge ML) and Reinforcement Learning (RL) algorithms to manage traffic efficiently in real time. The expected outcome is a more efficient traffic management and control system that reduces congestion, facilitates safer road use, and lowers environmental impact through smarter, data-driven decision-making.


Patent Information

Application #
Filing Date
25 July 2025
Publication Number
31/2025
Publication Type
INA
Invention Field
ELECTRONICS
Status
Email
Parent Application

Applicants

MLR Institute of Technology
Hyderabad

Inventors

1. Mrs. K. Jyothsna Reddy
Department of CSE-AI&ML, MLR Institute of Technology, Hyderabad
2. Mr. V. Sai Vinay Reddy
Department of CSE-AI&ML, MLR Institute of Technology, Hyderabad
3. Mr. D. Lokesh
Department of CSE-AI&ML, MLR Institute of Technology, Hyderabad
4. Mr. D. Hema Naga Krishna Chaitanya
Department of CSE-AI&ML, MLR Institute of Technology, Hyderabad

Specification

Description: Field of Invention

The present invention relates to the application of Artificial Intelligence (AI) in medical imaging, particularly dermatology. It provides an AI-based system for the detection and classification of skin lesions such as melanoma, basal cell carcinoma, and benign neoplasms using deep learning models applied to dermoscopic images. It represents an advancement in telemedicine and diagnostic tools for personalized healthcare.

Background of the Invention

Millions of new cases of skin cancer are diagnosed worldwide each year. Successful treatment relies on early detection, yet existing diagnostic techniques involve invasive and time-consuming methods, including biopsies and visual inspection by experts in the field. Recent advances in AI, including deep learning models such as Convolutional Neural Networks (CNNs), now allow automated detection of skin lesions with precision comparable to, if not superior to, that of dermatologists. Nonetheless, difficulties remain in implementing AI-based dermatological solutions:

Dataset bias: Current AI models fail to generalize across different skin tones because their training datasets are homogeneous.

Lack of interpretability: Most AI systems remain "black boxes" whose decisions are hard to trust in clinical settings.

Integration into workflows: Current solutions are not designed for telemedicine and mobile platforms, which limits their accessibility in underserved regions.

The invention addresses these challenges with a comprehensive diagnostic system that uses CNNs, explainable AI, and data augmentation techniques to provide reliable and interpretable diagnostic support.

US20210118550A1 describes a content-based image retrieval (CBIR) system and related methods that serve as a diagnostic aid for determining whether a dermoscopic image corresponds to a skin cancer type. Systems and methods according to aspects of that invention use as a reference a set of images of pathologically verified benign or malignant past cases, drawn from a collection of different classes of high similarity to the unknown new case in question, along with their diagnostic profiles. They predict which class of skin cancer is associated with a particular patient skin lesion, and may be employed as a diagnostic aid for general practitioners and dermatologists.

US7689016B2 describes improved methods for computer-aided analysis of identifying features of skin lesions from digital images of the lesions. Improved preprocessing of the image 1) eliminates artifacts that occlude or distort skin lesion features and 2) identifies groups of pixels within the skin lesion that represent features and/or facilitate the quantification of features, including improved digital hair removal algorithms. Improved methods for analyzing lesion features are also described.

US20170172487A1 describes an innovation in the medical analysis of basal-cell carcinoma and melanoma of the skin using 3-D tomographic reconstruction of malignant images by sub-skin infrared imaging. That invention aims to provide methods and compositions that allow data from malignant and benign lesions on or beneath the surface of human skin to be collected in a non-invasive manner.

US12118723B1 describes a method of operating a compute system comprising: receiving a patient image; segmenting a skin lesion in the patient image; constructing a normalized image by cropping the patient image and adding padding to position the skin lesion at the center of the normalized image; identifying, by a previously trained cancer artificial intelligence, a skin cancer classification, a skin cancer sub-class, and a risk level assessment; and generating a skin cancer display including the normalized image, the skin cancer classification, the skin cancer sub-class, and the risk level assessment for display on a device.

US11298072B2 describes a system and method for the diagnosis of skin cancer wherein visual data acquired from a skin lesion through passive or active optical, electronic, thermal, or mechanical means is processed into a classifier and subjected to dermoscopic classification analysis; the image and data are then transformed into raw audio signals through sonification techniques, and the raw audio signals are analyzed by a second machine learning algorithm to enhance the interpretability and precision of the diagnosis.

Summary of the Invention

The "Skin-AI Virtual Dermatologist" is an advanced AI-based diagnostic system intended to change the face of dermatology by enabling accurate, real-time classification of skin lesions. It integrates deep learning, explainable AI, and telemedicine to improve diagnostic precision and access, particularly in underserved areas.

Deep Learning-Based Image Classification: A CNN architecture trained on large datasets (ISIC, HAM10000) and fine-tuned with transfer learning classifies lesions into clinically relevant categories such as benign, malignant, or needing further study.

Explainable AI (XAI): Tools like Grad-CAM and saliency maps highlight the areas of interest in images, making predictions more interpretable for clinicians and patients alike. This builds trust, since clinicians can verify AI decisions and explain the findings to patients.

Data Augmentation and Diversity Handling: Geometric transformations, color adjustment, and synthetic data generation through GANs enhance dataset diversity, ensuring robust performance across different skin tones and populations and minimizing bias.

Mobile and Cloud-Based Deployment: Patients take pictures through a mobile app, which are uploaded to the system; cloud-based processing delivers instant diagnostics at remote locations.

Clinical Implementation: The system supports telemedicine workflows for instant consultation with prioritization of high-risk cases, and provides tools for follow-up care in collaboration with clinicians to enable efficient, high-quality service delivery.

This system democratizes dermatological care by making accurate diagnostics more accessible, trustworthy, and integrated with modern healthcare workflows.
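The classification-and-prioritization step described above can be sketched as a small triage rule that maps a model's class probabilities to a clinical category and a follow-up priority. The class labels and thresholds here are illustrative assumptions, not values from the specification:

```python
# Hypothetical triage step: map softmax class probabilities from the CNN
# to a clinical category and a follow-up priority. The class names and
# the 0.5 / 0.6 thresholds are assumptions made for illustration.
MALIGNANT = {"melanoma", "basal_cell_carcinoma"}

def triage(probabilities):
    """probabilities: dict mapping lesion class -> softmax score."""
    top_class = max(probabilities, key=probabilities.get)
    top_score = probabilities[top_class]
    if top_class in MALIGNANT and top_score >= 0.5:
        return top_class, "high-priority referral"
    if top_score < 0.6:  # low confidence in any single class
        return top_class, "needs further study"
    return top_class, "routine follow-up"

print(triage({"melanoma": 0.72, "basal_cell_carcinoma": 0.08, "benign": 0.20}))
# -> ('melanoma', 'high-priority referral')
```

In a deployed telemedicine workflow, the "high-priority referral" outcome would drive the case-prioritization behavior the summary describes.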

Brief Description of Drawings

The invention will be described in detail with reference to the exemplary embodiments shown in the figures, wherein:

Figure-1: Architecture of the CNN model, mainly distinguished by preprocessing layers, convolutional layers, attention modules, and the integration with XAI tools.

Figure-2: Workflow diagram of the Skin-AI system, describing the stages from image acquisition, preprocessing, and classification to prediction output.

Detailed Description of the Invention

Smart Urban Traffic utilizes advanced AI and computer vision technologies to revolutionize urban traffic management. At its heart lies YOLOv11, a robust object detection algorithm for on-the-fly vehicle identification. It performs even under low lighting, bad weather, or occlusion, making it well suited to urban areas with compromised visibility. YOLOv11 processes live video from high-definition cameras and accurately locates vehicles within the camera's field of view, providing critical input data to the traffic management system.
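The post-detection step this implies can be sketched as filtering raw detections by class and confidence and reducing each bounding box to a centroid the traffic manager can consume. The detection tuple layout and the 0.4 threshold are assumptions for illustration; a real deployment would take these values from the YOLOv11 output:

```python
# Sketch of post-processing YOLO-style detections for the traffic system:
# keep confident vehicle detections, convert each box to a centroid.
# Detection tuples are assumed to be (class, confidence, x1, y1, x2, y2).
VEHICLE_CLASSES = {"car", "bus", "truck", "ambulance"}

def to_centroids(detections, min_conf=0.4):
    out = []
    for cls, conf, x1, y1, x2, y2 in detections:
        if cls in VEHICLE_CLASSES and conf >= min_conf:
            out.append((cls, (x1 + x2) / 2, (y1 + y2) / 2))
    return out

frame = [("car", 0.91, 100, 200, 180, 260),
         ("person", 0.88, 300, 120, 330, 200),   # not a vehicle, dropped
         ("bus", 0.35, 400, 150, 520, 260)]      # below threshold, dropped
print(to_centroids(frame))
# -> [('car', 140.0, 230.0)]
```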


Once vehicles are detected, the BoT-SORT algorithm tracks them across video frames without interruption, despite traffic congestion and rapid movement. This ensures real-time tracking of vehicle positions and traffic flow, which is critical for an efficient and adaptive traffic system. The tracking data, combined with inputs from the high-resolution cameras and embedded sensors, feeds into the system's central processing unit, allowing continual updating of the traffic signals according to real-time traffic conditions.
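The core of cross-frame tracking is associating each new detection with an existing track so a vehicle keeps its identifier. BoT-SORT itself also uses a Kalman filter and appearance features; the greedy IoU matcher below is only a minimal sketch of the ID-continuity idea, with the 0.3 threshold chosen arbitrarily for illustration:

```python
# Minimal sketch of frame-to-frame association in the spirit of BoT-SORT:
# match new detection boxes to existing tracks by IoU overlap. The real
# algorithm adds motion prediction and re-identification features.
def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix = max(0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def associate(tracks, detections, thresh=0.3):
    """tracks: {track_id: box}. Returns updated {track_id: box}."""
    updated, next_id = {}, max(tracks, default=0) + 1
    for det in detections:
        best = max(tracks, key=lambda t: iou(tracks[t], det), default=None)
        if best is not None and best not in updated and iou(tracks[best], det) >= thresh:
            updated[best] = det          # same vehicle: keep its ID
        else:
            updated[next_id] = det       # new vehicle enters the scene
            next_id += 1
    return updated

tracks = {1: (100, 100, 160, 150)}
print(associate(tracks, [(105, 102, 166, 152)]))  # slight motion -> keeps ID 1
# -> {1: (105, 102, 166, 152)}
```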

Among the system's most significant innovations is emergency vehicle detection. The system identifies emergency vehicles such as ambulances, fire brigade vehicles, or police cars using unique visual markers and/or signal patterns. Once an emergency vehicle is identified, the traffic control module dynamically prioritizes it, giving it clear traffic signals to make way and ensuring its prompt arrival at the location. This feature is a major boost to road safety and may save lives in high-traffic situations.
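The preemption behavior described here amounts to: once an emergency vehicle is detected on an approach, force that approach green and the conflicting approaches red. The approach names are assumptions for illustration; a deployed controller would map onto the intersection's real signal phases:

```python
# Hypothetical signal-preemption rule for a four-way intersection.
# The approach names and all-red conflict handling are illustrative
# assumptions, not details taken from the specification.
APPROACHES = ["north", "south", "east", "west"]

def preempt(current_phases, emergency_approach):
    """current_phases: {approach: 'red'|'green'}. Returns new phase map
    giving the emergency approach a clear green path."""
    return {a: ("green" if a == emergency_approach else "red")
            for a in APPROACHES}

phases = {"north": "green", "south": "green", "east": "red", "west": "red"}
print(preempt(phases, "east"))
# -> {'north': 'red', 'south': 'red', 'east': 'green', 'west': 'red'}
```

A real controller would also enforce minimum clearance intervals before switching, which this sketch omits.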

A Deep Q-learning module acts as the heart of the traffic management decision-making process. This reinforcement learning algorithm learns from real-time traffic data and incorporates learned historical patterns to anticipate congestion and proactively adjust traffic signal timings. Continuous optimization of traffic flow saves time, prevents jams, and reduces the environmental footprint of idling vehicles. The Deep Q-learning model is trained on current sensor inputs and historical traffic data, preparing the system for data-driven decision-making across different urban traffic situations.
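The Q-learning update at the core of this module can be illustrated on a toy signal-control environment. Everything below is an assumption made for the sketch: states are two queue-length buckets, actions choose which road gets green, and the reward is the negative total queue; the invention would use a deep network over far richer state instead of this lookup table:

```python
import random

# Toy tabular stand-in for the Deep Q-learning traffic-signal module.
# State: (north_queue, east_queue); actions pick the green direction;
# reward favors shorter total queues. Dynamics are invented for the demo.
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
ACTIONS = ["ns_green", "ew_green"]
Q = {}

def q(state, action):
    return Q.get((state, action), 0.0)

def step(state, action):
    n, e = state
    if action == "ns_green":
        n, e = max(0, n - 2), min(5, e + 1)   # serve north, east builds up
    else:
        n, e = min(5, n + 1), max(0, e - 2)
    return (n, e), -(n + e)                    # fewer queued cars is better

random.seed(0)
state = (4, 4)
for _ in range(500):
    action = (random.choice(ACTIONS) if random.random() < EPS
              else max(ACTIONS, key=lambda a: q(state, a)))
    nxt, reward = step(state, action)
    # Standard Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
    target = reward + GAMMA * max(q(nxt, a) for a in ACTIONS)
    Q[(state, action)] = q(state, action) + ALPHA * (target - q(state, action))
    state = nxt

print(len(Q), "state-action values learned")
```

Replacing the `Q` table with a neural network and the toy dynamics with real sensor and historical data yields the Deep Q-learning setup the description refers to.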

High-Resolution Cameras: These cameras record very high-quality video for YOLOv11 processing, with advanced imaging sensors capable of 4K Ultra HD resolution.

LiDAR Sensors: Used for depth map generation and real-time object detection in three dimensions.

IMUs (Inertial Measurement Units): Provide motion and orientation data that helps track each vehicle's position.

Embedded Processors: High-performance processors, such as the NVIDIA Jetson Xavier NX, handle real-time data and deep learning workloads.

Communication Modules: Wi-Fi, LTE, and satellite communications enable seamless data sharing and coordination within the system.

TensorFlow: A specialized data management and processing framework designed to support efficient storage, retrieval, and processing of the collected data.

The flowchart of the Smart Urban Traffic System shows the chronological flow from data collection to data-driven decision-making. Initially, streaming video is collected from high-definition cameras, and YOLOv11 performs vehicle recognition on the extracted frames. The detected vehicles are then tracked across frames using the BoT-SORT algorithm to maintain identifier consistency.

Emergency vehicle detection takes place in parallel, in an analysis module dedicated to identifying the visual markers or signal patterns specific to emergency vehicles. Upon detection, the system signals the traffic control unit, which immediately adjusts traffic light patterns along the emergency vehicle's path.

The detection and tracking modules, along with feedback from the IMUs and LiDAR sensors, transmit their data to the central processing unit. The Deep Q-learning algorithm processes this information in real time against historical data, and the resulting inferences dynamically adjust traffic light timings and rerouting advice to optimize traffic flow. All of this data is also routed to the main command center for real-time monitoring and decision-making.
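One simple form the dynamic timing adjustment could take is splitting a fixed signal cycle's green time between two approaches in proportion to their tracked vehicle counts. The 60-second cycle and 10-second minimum green below are assumed values for illustration, not figures from the specification:

```python
# Illustrative proportional green-time split between two approaches,
# driven by the vehicle counts the tracking module reports. Cycle length
# and minimum green are assumptions made for this sketch.
def split_green(cycle_s, count_a, count_b, min_green_s=10):
    """Return (green_a, green_b) seconds summing to cycle_s."""
    total = count_a + count_b
    if total == 0:
        return cycle_s / 2, cycle_s / 2      # no demand: split evenly
    green_a = max(min_green_s, cycle_s * count_a / total)
    green_a = min(green_a, cycle_s - min_green_s)  # keep b's minimum green
    return green_a, cycle_s - green_a

print(split_green(60, 18, 6))   # busier approach gets the longer green
# -> (45.0, 15.0)
```

The reinforcement learning module would replace this fixed rule with learned timings, but the interface (counts in, green durations out) is the same.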

This comprehensive approach transforms traditional traffic management into a responsive, data-driven solution that adapts to evolving traffic patterns and changing environmental conditions.

Equivalents

While the present invention leverages the functionality of specific technologies, such as YOLOv11 for object detection, BoT-SORT for vehicle tracking, TensorFlow for data management, and Deep Q-learning for adaptive traffic control, it is not restricted to these implementations. Equivalent object detection algorithms, tracking algorithms, reinforcement learning models, or data optimization frameworks with similar functionality may be used without deviating from the concept and spirit of the invention. The invention also encompasses other machine learning techniques, including alternative reinforcement learning algorithms or data-handling systems that provide efficient real-time processing, and covers all systems and methods that achieve similar results for improving urban traffic flow, emergency response, and congestion management.

Claims:

The scope of the invention is defined by the following claims:

1. A smart urban traffic system with real-time vehicle tracking, comprising:
a. an object detection module configured to detect vehicles in real time using the YOLOv11 algorithm, a neural network-based detector;
b. a tracking module using the BoT-SORT algorithm for continuous tracking of detected vehicles across frames in a dynamic urban environment;
c. an emergency vehicle detection module configured to identify emergency vehicles and prioritize their movement by changing traffic signals during congestion; and
d. a Deep Q-learning reinforcement learning module configured to adaptively control traffic signals based on real-time and historical traffic data, reducing congestion and improving traffic flow.
2. The system according to claim 1, wherein the object detection module identifies vehicles in diverse and complex urban scenarios, including low-visibility situations, heavy traffic, and occlusions, and provides real-time data for adaptive traffic control using the YOLOv11 algorithm.
3. The system according to claim 1, wherein the tracking module ensures consistent identification of individual vehicles and reliable counting, even in high traffic density conditions, through the use of the BoT-SORT algorithm.
4. The system according to claim 1, wherein the emergency vehicle detection module uses specific visual or signal markers to identify and classify emergency vehicles such as ambulances, fire trucks, and police vehicles, and adjusts signals dynamically to prioritize their direction of travel.
5. The system according to claim 1, wherein the Deep Q-learning reinforcement learning module reduces congestion by predicting congestion and preparing traffic signal timings in advance, on the basis of both real-time inputs and learned patterns.

Documents

Application Documents

# Name Date
1 202541070882-REQUEST FOR EARLY PUBLICATION(FORM-9) [25-07-2025(online)].pdf 2025-07-25
2 202541070882-FORM-9 [25-07-2025(online)].pdf 2025-07-25
3 202541070882-FORM FOR STARTUP [25-07-2025(online)].pdf 2025-07-25
4 202541070882-FORM FOR SMALL ENTITY(FORM-28) [25-07-2025(online)].pdf 2025-07-25
5 202541070882-FORM 1 [25-07-2025(online)].pdf 2025-07-25
6 202541070882-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [25-07-2025(online)].pdf 2025-07-25
7 202541070882-EVIDENCE FOR REGISTRATION UNDER SSI [25-07-2025(online)].pdf 2025-07-25
8 202541070882-EDUCATIONAL INSTITUTION(S) [25-07-2025(online)].pdf 2025-07-25
9 202541070882-DRAWINGS [25-07-2025(online)].pdf 2025-07-25
10 202541070882-COMPLETE SPECIFICATION [25-07-2025(online)].pdf 2025-07-25