
System And Method For Plaque Segmentation In Intravascular Ultrasound Images

Abstract: Disclosed is a system (100) for plaque segmentation in intravascular ultrasound (IVUS) images. The system includes a user device (102), a communication network (106), and an information processing apparatus (104). The information processing apparatus includes processing circuitry (108) configured to: acquire IVUS images; preprocess the images; initialize a machine learning model including an encoder, dual decoders, and a discriminator; train the model on labeled data (308); encode unlabeled images; select informative unlabeled samples (312); extract labels for selected samples (314); add newly labeled samples to training data (316); retrain the model (318); generate plaque segmentation masks (320); and evaluate segmentation performance (322). The system iteratively improves model performance by incorporating newly labeled data and refining the training process.


Patent Information

Filing Date: 30 August 2024
Publication Number: 27/2025
Publication Type: INA
Invention Field: BIO-MEDICAL ENGINEERING

Applicants

IHUB DRISHTI FOUNDATION
Indian Institute of Technology Jodhpur, NH 62, Nagaur Road, Karwar, Jodhpur, Rajasthan, 342030, India
INDIAN INSTITUTE OF TECHNOLOGY, JODHPUR
NH 62, Surpura Bypass Rd, Karwar, Jheepasani, Rajasthan 342030, India

Inventors

1. Angshuman Paul
Department of CSE, IIT Jodhpur, NH62, Karwar, Jodhpur, Rajasthan, 342030, India
2. Mayank Vatsa
Department of CSE, IIT Jodhpur, NH62, Karwar, Jodhpur, Rajasthan, 342030, India
3. Richa Singh
Department of CSE, IIT Jodhpur, NH62, Karwar, Jodhpur, Rajasthan, 342030, India
4. Bhanu Duggal
Department of Cardiology, AIIMS Rishikesh, Virbhadra Road, Rishikesh, Uttarakhand, 249203, India
5. Anuradha Mahato
Department of CSE, IIT Jodhpur, NH62, Karwar, Jodhpur, Rajasthan, 342030, India
6. Rutvik Narendrabhai Jethava
Department of CSE, IIT Jodhpur, NH62, Karwar, Jodhpur, Rajasthan, 342030, India
7. Paromita Banerjee
Department of Cardiology, AIIMS Rishikesh, Virbhadra Road, Rishikesh, Uttarakhand, 249203, India

Specification

DESC:FIELD OF DISCLOSURE
The present disclosure relates to medical image analysis, and more particularly to a system and method for atherosclerotic plaque segmentation in intravascular ultrasound images via active learning.
BACKGROUND
Cardiovascular diseases remain a leading cause of morbidity and mortality worldwide. Intravascular ultrasound (IVUS) imaging has emerged as a powerful diagnostic tool in interventional cardiology, offering high-resolution cross-sectional images of coronary arteries. This imaging technique provides detailed information about vessel wall morphology and atherosclerotic plaque characteristics, which is crucial for accurate diagnosis and treatment planning.
Conventional methods for analyzing IVUS images often rely on manual interpretation by expert clinicians. This process is time-consuming, subject to inter-observer variability, and may lead to inconsistencies in diagnosis and treatment decisions. Automated segmentation techniques have been developed to address these limitations, but they face challenges due to the complex nature of vascular structures, variations in plaque morphology, and the presence of imaging artifacts such as speckle noise.
Recent advancements in machine learning and deep learning have shown promise in medical image analysis. However, these approaches typically require large annotated datasets for training, which are often scarce in the medical domain due to the time-intensive and expertise-dependent nature of image annotation. Additionally, existing automated methods may struggle with generalization across diverse patient populations and varying imaging conditions, limiting their clinical applicability.
Therefore, there exists a need for a technical solution that solves the aforementioned problems of conventional systems and methods for plaque segmentation in intravascular ultrasound images.
SUMMARY
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In an aspect of the present disclosure, an information processing apparatus for plaque segmentation in intravascular ultrasound (IVUS) images is disclosed. The apparatus includes processing circuitry configured to acquire IVUS images. The processing circuitry preprocesses the acquired IVUS images. The processing circuitry initializes a machine learning model including an encoder, dual decoders, and a discriminator. The processing circuitry trains the machine learning model on labeled data. The processing circuitry encodes unlabeled IVUS images using the encoder to produce latent representations. The processing circuitry selects informative unlabeled samples based on the latent representations using the discriminator. The processing circuitry extracts labels for the selected unlabeled samples. The processing circuitry adds the newly labeled samples to the training data. The processing circuitry retrains the machine learning model using the updated training data. The processing circuitry generates plaque segmentation masks for test IVUS images using the retrained machine learning model. The processing circuitry evaluates segmentation performance of the retrained machine learning model.
In some aspects of the present disclosure, the preprocessing of the acquired IVUS images comprises noise reduction, contrast enhancement, and normalization.
In some aspects of the present disclosure, the dual decoders comprise a reconstruction decoder configured to reconstruct the input IVUS image from the latent representation, and a segmentation decoder configured to generate pixel-wise segmentation predictions.
In some aspects of the present disclosure, the processing circuitry is further configured to iteratively improve model performance by incorporating newly labeled data and refining the training process until a desired performance or budget is reached.
In an aspect of the present disclosure, a system for plaque segmentation in intravascular ultrasound (IVUS) images is disclosed. The system includes a user device configured to capture IVUS images. The system includes a communication network. The system includes an information processing apparatus connected to the user device via the communication network. The information processing apparatus comprises processing circuitry configured to acquire IVUS images from the user device. The processing circuitry preprocesses the acquired IVUS images. The processing circuitry initializes a machine learning model including an encoder, dual decoders, and a discriminator. The processing circuitry trains the machine learning model on labeled data. The processing circuitry encodes unlabeled IVUS images using the encoder to produce latent representations. The processing circuitry selects informative unlabeled samples based on the latent representations using the discriminator. The processing circuitry extracts labels for the selected unlabeled samples. The processing circuitry adds the newly labeled samples to the training data. The processing circuitry retrains the machine learning model using the updated training data. The processing circuitry generates plaque segmentation masks for test IVUS images using the retrained machine learning model. The processing circuitry evaluates segmentation performance of the retrained machine learning model.
In some aspects of the present disclosure, the preprocessing of the acquired IVUS images comprises noise reduction, contrast enhancement, and normalization.
In some aspects of the present disclosure, the dual decoders comprise a reconstruction decoder configured to reconstruct the input IVUS image from the latent representation, and a segmentation decoder configured to generate pixel-wise segmentation predictions.
In an aspect of the present disclosure, a method for plaque segmentation in intravascular ultrasound (IVUS) images is disclosed. The method includes acquiring IVUS images. The method includes preprocessing the acquired IVUS images. The method includes initializing a machine learning model including an encoder, dual decoders, and a discriminator. The method includes training the machine learning model on labeled data. The method includes encoding unlabeled IVUS images using the encoder to produce latent representations. The method includes selecting informative unlabeled samples based on the latent representations using the discriminator. The method includes extracting labels for the selected unlabeled samples. The method includes adding the newly labeled samples to the training data. The method includes retraining the machine learning model using the updated training data. The method includes generating plaque segmentation masks for test IVUS images using the retrained machine learning model. The method includes evaluating segmentation performance of the retrained machine learning model.
In some aspects of the present disclosure, preprocessing the acquired IVUS images comprises applying noise reduction to the acquired IVUS images. The preprocessing includes enhancing contrast of the noise-reduced IVUS images. The preprocessing includes normalizing the contrast-enhanced IVUS images.
In some aspects of the present disclosure, the dual decoders comprise a reconstruction decoder configured to reconstruct the input IVUS image from the latent representation. The dual decoders comprise a segmentation decoder configured to generate pixel-wise segmentation predictions. The method further includes iteratively improving model performance by incorporating newly labeled data and refining the training process until a desired performance or budget is reached.
The foregoing general description of the illustrative aspects and the following detailed description thereof are merely exemplary aspects of the teachings of this disclosure and are not restrictive.
BRIEF DESCRIPTION OF FIGURES
The following detailed description of the preferred aspects of the present disclosure will be better understood when read in conjunction with the appended drawings. The present disclosure is illustrated by way of example, and not limited by the accompanying figures, in which like references indicate similar elements.
FIG. 1 illustrates a block diagram of a system for processing information, according to aspects of the present disclosure.
FIG. 2 illustrates a block diagram of an information processing apparatus, according to an aspect.
FIG. 3 illustrates a flowchart of a method for segmenting plaque in intravascular ultrasound images, in accordance with example aspects.
DETAILED DESCRIPTION
The following description sets forth exemplary aspects of the present disclosure. It should be recognized, however, that such description is not intended as a limitation on the scope of the present disclosure. Rather, the description also encompasses combinations and modifications to those exemplary aspects described herein.
The present disclosure provides a system and method for plaque segmentation in intravascular ultrasound (IVUS) images. The system includes processing circuitry configured to acquire IVUS images, preprocess the images, and initialize a machine learning model. The machine learning model includes an encoder, dual decoders, and a discriminator. The processing circuitry is further configured to train the model on labeled data, encode unlabeled images, select informative unlabeled samples, label selected samples, add newly labeled samples to training data, and retrain the model. The system generates plaque segmentation masks and evaluates segmentation performance.
The disclosed system and method leverage active learning techniques to significantly reduce the amount of manually annotated data required for training, addressing the challenge of limited labeled datasets in medical imaging. The dual-branch decoder architecture, combining image reconstruction and plaque segmentation, enables more robust feature extraction and enhances overall segmentation performance. The active learning framework strategically selects the most informative samples for annotation, optimizing the use of available data and accelerating the learning process. By iteratively incorporating diverse and informative samples into the training set, the system improves the ability to generalize across different patient populations and imaging conditions. The continuous retraining process allows the model to adapt to new data distributions and refine the performance over time, ensuring ongoing improvement and relevance in clinical settings. The automated segmentation process reduces the time and effort required for manual analysis, potentially improving clinical efficiency and decision-making in cardiovascular interventions.
FIG. 1 illustrates a block diagram of a system 100 for processing information. The system 100 comprises a user device 102, an information processing apparatus 104, and a communication network 106. The user device 102 is connected to the communication network 106, which facilitates bidirectional data transfer between the user device 102 and the information processing apparatus 104. The information processing apparatus 104 includes processing circuitry 108 and a database 110. The processing circuitry 108 is configured to process information received from the user device 102 via the communication network 106. The database 110 is connected to the processing circuitry 108 and stores data that may be used during information processing. The communication network 106 enables the exchange of information between the user device 102 and the information processing apparatus 104, allowing for the transmission of data and processed results.
The user device 102 may be adapted to facilitate a user to input data, receive data, and/or transmit data within the system 100. In some aspects of the present disclosure, the user device 102 may include, but is not limited to, a desktop, a notebook, a laptop, a handheld computer, a touch sensitive device, a computing device, a smart phone, a smart watch, and the like. The user device 102 may be configured to capture intravascular ultrasound (IVUS) images and transmit them to the information processing apparatus 104 via the communication network 106.
The information processing apparatus 104 may be a network of computers, a framework, or a combination thereof, that may provide a generalized approach to create a server implementation. In some aspects of the present disclosure, the information processing apparatus 104 may be a server. Examples of the information processing apparatus 104 may include, but are not limited to, personal computers, laptops, mini-computers, mainframe computers, any non-transient computers, any non-transient and tangible machine that can execute a machine-readable code, cloud-based servers, distributed server networks, or a network of computer systems. The information processing apparatus 104 may be realized through various web-based technologies such as, but not limited to, a Java web-framework, a .NET framework, a personal home page (PHP) framework, or any other web-application framework.
The processing circuitry 108 may be configured to execute various operations associated with the system 100, including processing IVUS images received from the user device 102. Examples of the processing circuitry 108 may include, but are not limited to, an ASIC processor, a RISC processor, a CISC processor, an FPGA, and the like. Aspects of the present disclosure are intended to include and/or otherwise cover any type of processing circuitry, including known, related art, and/or later developed technologies.
The database 110 may be configured to store logic, instructions, circuitry, interfaces, and/or codes of the processing circuitry 108 to enable the processing circuitry 108 to execute the one or more operations associated with the system 100. The database 110 may be further configured to store data associated with the system 100, such as IVUS images, processed data, and segmentation results. Examples of the database 110 may include, but are not limited to, a relational database, a NoSQL database, a cloud database, an object-oriented database, and the like. The data storage and management aspects of the system 100, including the organization and retrieval of data in the database 110, may be implemented using various database management systems and data structures as appropriate for the specific deployment environment.
The communication network 106 may include suitable logic, circuitry, and interfaces that may be configured to provide a plurality of network ports and a plurality of communication channels for transmission and reception of data related to operations of various entities in the system 100. The communication network 106 may be associated with an application layer for implementation of communication protocols based on one or more communication requests from the user device 102 and the information processing apparatus 104.
In operation, the system 100 may acquire IVUS images from the user device 102, preprocess the acquired IVUS images, initialize a machine learning model including an encoder, dual decoders, and a discriminator, train the machine learning model on labeled data, encode unlabeled IVUS images to produce latent representations, select informative unlabeled samples, extract labels for the selected unlabeled samples by way of manual annotation, add the newly labeled samples to the training data, retrain the machine learning model, generate plaque segmentation masks for test IVUS images, and evaluate segmentation performance of the retrained machine learning model using metrics such as the F1 score, Dice coefficient, or the like.
The system 100 may provide several advantages. The use of active learning may reduce the amount of labeled data required for training, which may be particularly beneficial in medical imaging where labeled data is often scarce and expensive to obtain. The dual decoder architecture may allow for both image reconstruction and segmentation, potentially improving the model's ability to learn meaningful representations. The iterative nature of the system 100 may allow for continuous improvement of the model's performance over time.
Although FIG. 1 illustrates that the system 100 includes a single user device (i.e., the user device 102), it will be apparent to a person skilled in the art that the scope of the present disclosure is not limited to it. In various other aspects, the system 100 may include multiple user devices without deviating from the scope of the present disclosure. In such a scenario, each user device is configured to perform one or more operations in a manner similar to the operations of the user device 102 as described herein.
FIG. 2 illustrates a block diagram of an information processing apparatus 104 for processing intravascular ultrasound (IVUS) images. The apparatus 104 includes a network interface 200 and an I/O interface 202 for external communication. The processing circuitry 108 forms the core of the apparatus and comprises several interconnected components.
The network interface 200 may include suitable logic, circuitry, and interfaces that may be configured to establish and enable a communication between the information processing apparatus 104 and different elements of the system 100, via the communication network 106. The network interface 200 may be implemented by use of various known technologies to support wired or wireless communication of the information processing apparatus 104 with the communication network 106. The network interface 200 may include, but is not limited to, an antenna, a RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a SIM card, and a local buffer circuit.
The I/O interface 202 may include suitable logic, circuitry, interfaces, and/or code that may be configured to receive inputs and transmit outputs of the information processing apparatus 104 via a plurality of data ports. The I/O interface 202 may include various input and output data ports for different I/O devices. Examples of such I/O devices may include, but are not limited to, a touch screen, a keyboard, a mouse, a joystick, a projector, an audio output, a microphone, an image-capture device, a liquid crystal display (LCD) screen, and/or a speaker.
The processing circuitry 108 includes a data collection engine 206, an image processing engine 208, an image reconstruction engine 210, a plaque segmentation engine 212, a feature extraction engine 214, an active learning engine 216, and an output engine 218. These components are interconnected via a second data bus 220.
The data collection engine 206 may be configured to acquire IVUS images from the user device 102 via the network interface 200. The data collection engine 206 may handle various data formats and ensure proper transmission and storage of the received images. In some aspects of the present disclosure, the data collection engine 206 may also be configured to preprocess the acquired IVUS images, including operations such as noise reduction, contrast enhancement, and normalization.
The image processing engine 208 may be responsible for further preprocessing of the acquired IVUS images. The image processing engine 208 may apply advanced image processing techniques to enhance the quality of the images and prepare them for subsequent analysis. The image processing engine 208 may employ various algorithms for speckle noise reduction, edge enhancement, and image segmentation preprocessing.
The image reconstruction engine 210 may be part of the dual decoder architecture of the machine learning model. The image reconstruction engine 210 may be configured to reconstruct the input IVUS image from the latent representation produced by the encoder. The ability to reconstruct the original image may help ensure that the latent representation captures meaningful features of the IVUS images. In some aspects of the present disclosure, the image reconstruction engine 210 may employ advanced deep learning architectures such as convolutional neural networks (CNNs) or generative adversarial networks (GANs) for image reconstruction.
The plaque segmentation engine 212 may be another part of the dual decoder architecture. The plaque segmentation engine 212 may be configured to generate pixel-wise segmentation predictions for the IVUS images. It may use the latent representations produced by the encoder to identify and delineate plaque regions within the images. The plaque segmentation engine 212 may employ various segmentation techniques, including but not limited to, U-Net architectures, fully convolutional networks, or mask R-CNN.
The feature extraction engine 214 may work in conjunction with the encoder to produce latent representations of the IVUS images. The feature extraction engine 214 may be responsible for identifying and extracting relevant features from the images that are useful for both reconstruction and segmentation tasks. The feature extraction engine 214 may employ various techniques such as principal component analysis, autoencoders, or pretrained CNN models for feature extraction.
The active learning engine 216 may be configured to select informative unlabeled samples based on the latent representations using the discriminator. The discriminator may be configured to differentiate between labeled and unlabeled samples using the probability distribution of the samples. The active learning engine 216 may also manage the process of extracting labels for the selected samples and incorporating them into the training data. It may employ various sampling strategies, such as uncertainty sampling, diversity sampling, or expected model change, to select the most informative samples for labeling.
The output engine 218 may be responsible for generating the final segmentation results and evaluating the performance of the retrained machine learning model. The output engine 218 may produce visual representations of the segmented plaque regions and calculate various performance metrics such as Dice coefficient, Jaccard index, or mean intersection over union (IoU).
The processing circuitry 108 is connected to a database 110 via a first data bus 204. The database 110 may store data used by the processing circuitry 108 during information processing, such as IVUS images, training data, model parameters, and segmentation results.
The information processing apparatus 104 provides several advantages. The modular architecture allows for easy integration of new components or upgrading existing ones. The dual decoder architecture, combining image reconstruction and plaque segmentation, enables more robust feature extraction and enhances overall segmentation performance. The active learning framework strategically selects the most informative samples for annotation, optimizing the use of available data and accelerating the learning process.
In operation, the information processing apparatus 104 acquires IVUS images, preprocesses them, initializes and trains the machine learning model, performs active learning to improve the model, and generates plaque segmentation masks. The iterative nature of the process allows for continuous improvement of the model's performance over time, adapting to new data distributions and refining the accuracy in clinical settings.
FIG. 3 illustrates a flowchart of a method 300 for plaque segmentation in intravascular ultrasound (IVUS) images. The method 300 begins with step 302, where IVUS images are acquired. In step 304, the acquired IVUS images are preprocessed. Step 306 involves initializing a machine learning model that includes an encoder, dual decoders, and a discriminator.
At step 302, the system 100 may acquire IVUS images. The data collection engine 206 may be configured to receive IVUS images from the user device 102 via the network interface 200. The IVUS images may be captured using an intravascular ultrasound catheter inserted into a patient's blood vessel. In some aspects of the present disclosure, the system 100 may receive a series of IVUS images representing different cross-sections of the blood vessel.
At step 304, the system 100 may preprocess the acquired IVUS images. The image processing engine 208 may be responsible for the preprocessing step. Preprocessing may include noise reduction, contrast enhancement, and normalization. Specifically, the preprocessing may involve applying speckle noise reduction techniques to improve image quality. The image processing engine 208 may further enhance the contrast of the noise-reduced IVUS images to better distinguish between different tissue types. Finally, the engine may normalize the contrast-enhanced IVUS images to ensure consistency across the dataset. The preprocessing techniques applied to the IVUS images may include various noise reduction and contrast enhancement algorithms. The specific algorithms used may be selected based on the characteristics of the input images and the requirements of the subsequent processing steps.
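The preprocessing chain of step 304 can be sketched as follows. This is a minimal illustration under assumed choices (a 3x3 median filter standing in for speckle noise reduction, a linear contrast stretch, and min-max normalization); the disclosure does not prescribe these particular algorithms, and the function names are hypothetical.

```python
def median_filter3(img):
    """3x3 median filter: a simple stand-in for speckle noise reduction.
    `img` is a grayscale image as a list of lists of intensities in [0, 255]."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[j][i] for j in (y - 1, y, y + 1)
                            for i in (x - 1, x, x + 1))
            out[y][x] = window[4]  # median of the 9 neighborhood values
    return out

def stretch_contrast(img):
    """Linear contrast stretch to the full 0-255 range."""
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    scale = 255.0 / (hi - lo) if hi > lo else 1.0
    return [[(p - lo) * scale for p in row] for row in img]

def normalize(img):
    """Scale intensities to [0, 1] for consistency across the dataset."""
    return [[p / 255.0 for p in row] for row in img]

def preprocess(img):
    # Noise reduction -> contrast enhancement -> normalization, per step 304.
    return normalize(stretch_contrast(median_filter3(img)))
```

A real implementation would typically operate on array data (e.g. NumPy) and may use more sophisticated speckle filters, but the order of operations mirrors the step described above.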
At step 306, the system 100 may initialize a machine learning model including an encoder, dual decoders, and a discriminator. The encoder may be configured to extract meaningful features from the input IVUS images and produce latent representations. The dual decoders may comprise a reconstruction decoder and a segmentation decoder. The reconstruction decoder may be configured to reconstruct the input IVUS image from the latent representation, while the segmentation decoder may be configured to generate pixel-wise segmentation predictions. The discriminator may be used in the active learning process to select informative unlabeled samples.
The specific architectures of the encoder, dual decoders, and discriminator may vary in different implementations of the present disclosure. Various known or later developed neural network architectures may be employed without departing from the scope of the disclosure.
At step 308, the system 100 may train the machine learning model on labeled data. The training process may involve feeding labeled IVUS images through the encoder and dual decoders, comparing the outputs with the ground truth, and adjusting the model parameters to minimize the reconstruction and segmentation losses. In some aspects of the present disclosure, the training may use a combination of supervised and unsupervised learning techniques to leverage both labeled and unlabeled data effectively.
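One way to express the combined objective of step 308 is a weighted sum of a reconstruction loss (for the reconstruction decoder) and a pixel-wise segmentation loss (for the segmentation decoder). The sketch below uses mean squared error and binary cross-entropy with a weighting factor `lam`; these specific choices and the function names are assumptions for illustration, not fixed by the disclosure.

```python
import math

def mse_loss(recon, target):
    """Mean squared error between reconstructed and input pixel values."""
    return sum((r - t) ** 2 for r, t in zip(recon, target)) / len(recon)

def bce_loss(pred, mask, eps=1e-7):
    """Pixel-wise binary cross-entropy against the ground-truth plaque mask."""
    total = 0.0
    for p, m in zip(pred, mask):
        p = min(max(p, eps), 1 - eps)  # clamp for numerical stability
        total += -(m * math.log(p) + (1 - m) * math.log(1 - p))
    return total / len(pred)

def combined_loss(recon, image, seg_pred, seg_mask, lam=0.5):
    """Weighted sum of reconstruction and segmentation losses."""
    return lam * mse_loss(recon, image) + (1 - lam) * bce_loss(seg_pred, seg_mask)
```

During training, gradients of this combined loss would be backpropagated through both decoders and the shared encoder, encouraging the latent representation to serve both tasks.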
At step 310, the system 100 may encode unlabeled IVUS images using the encoder to produce latent representations. The feature extraction engine 214 may work in conjunction with the encoder to identify and extract relevant features from the unlabeled images. The latent representations may capture one or more characteristics of the IVUS images that are useful for both reconstruction and segmentation tasks.
At step 312, the system 100 may select informative unlabeled samples based on the latent representations using the discriminator. The active learning engine 216 may be responsible for this step and may employ one or more sampling strategies to identify the most informative unlabeled samples. These strategies may include, but are not limited to, uncertainty sampling, where samples with the highest uncertainty in the model's predictions are selected; diversity sampling, where samples that are most different from the currently labeled data are chosen; expected model change; or other suitable strategies developed in the field of active learning.
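The discriminator-driven selection of step 312 can be reduced to a simple ranking rule. In the sketch below, `disc_scores` holds, for each unlabeled sample, the discriminator's probability that the sample resembles the labeled pool; samples the discriminator judges least similar to the labeled data are treated as the most informative. This particular rule and the budget parameter `k` are assumptions for illustration.

```python
def select_informative(disc_scores, k):
    """Return indices of the k unlabeled samples whose discriminator score
    (probability of resembling the labeled pool) is lowest, i.e. the samples
    least represented in the current training data."""
    ranked = sorted(range(len(disc_scores)), key=lambda i: disc_scores[i])
    return ranked[:k]
```

For example, with scores `[0.9, 0.1, 0.5, 0.05]` and a budget of 2, the samples at indices 3 and 1 would be sent for annotation.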
At step 314, the system 100 may extract labels for the selected unlabeled samples. This step may involve presenting the selected samples to expert annotators who can provide accurate labels for the plaque regions. In some aspects of the present disclosure, the system 100 may employ semi-automated labeling techniques to assist the annotators and speed up the labeling process.
At step 316, the system 100 may add the newly labeled samples to the training data. This step expands the labeled dataset with the most informative samples, which can potentially improve the model's performance significantly.
At step 318, the system 100 may retrain the machine learning model using the updated training data. This retraining process allows the model to incorporate the newly acquired knowledge from the additional labeled samples. The retraining may involve fine-tuning the existing model parameters rather than training from scratch, which can be more efficient.
At step 320, the system 100 may generate plaque segmentation masks for test IVUS images using the retrained machine learning model. The plaque segmentation engine 212 may be responsible for this step. The engine may use the latent representations produced by the encoder to identify and delineate plaque regions within the test images. The system's ability to handle different types of plaques may vary depending on the specific implementation and training data used. The performance may differ for various plaque types such as calcified, fibrous, or lipid-rich plaques.
At step 322, the system 100 may evaluate segmentation performance of the retrained machine learning model. The output engine 218 may be responsible for this evaluation and may calculate various performance metrics to assess the accuracy of the plaque segmentation, including but not limited to the Dice coefficient, Jaccard index, mean intersection over union (IoU), or other suitable metrics developed for image segmentation evaluation. The choice of metrics may depend on the specific clinical requirements and the nature of the segmentation task.
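The two overlap metrics named above have standard definitions on binary masks. The following sketch computes them for masks represented as flat lists of 0/1 values:

```python
def dice(pred, truth):
    """Dice coefficient: 2|A∩B| / (|A|+|B|) for binary masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

def iou(pred, truth):
    """Intersection over union (Jaccard index): |A∩B| / |A∪B|."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0
```

For a predicted mask `[1, 1, 0, 0]` against ground truth `[1, 0, 1, 0]`, the Dice coefficient is 0.5 and the IoU is 1/3, illustrating that Dice is always at least as large as IoU on the same pair of masks.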
After evaluation, the method 300 reaches a decision point at step 324, where it checks if the desired performance or budget has been reached. When the criteria are not met, the process loops back to step 310, allowing for continuous improvement through additional iterations of active learning and model retraining. When the criteria are met, the method 300 proceeds to step 328, marking the end of the process.
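By way of a non-limiting illustration, the control flow of steps 310 through 324 may be sketched as the loop below. The batch size, the label oracle, and the evaluation stub are assumptions standing in for the disclosed selection, annotation, retraining, and evaluation steps.

```python
# Sketch of the stopping logic at step 324: iterate select -> label ->
# retrain -> evaluate until a target score or an annotation budget is hit.
# The sample selection, labels, and score below are toy stand-ins.

def active_learning_loop(unlabeled, budget, target, batch=2):
    labeled, score, rounds = [], 0.0, 0
    while score < target and len(labeled) < budget and unlabeled:
        selected = unlabeled[:batch]            # step 312: pick informative samples
        unlabeled = unlabeled[batch:]
        labeled += [(s, f"label_{s}") for s in selected]   # steps 314/316: annotate
        score = min(1.0, 0.2 * len(labeled))    # steps 318-322: retrain + evaluate (stub)
        rounds += 1
    return score, len(labeled), rounds

score, n_labeled, rounds = active_learning_loop(list(range(10)), budget=6, target=0.9)
print(score, n_labeled, rounds)
```

The loop exits as soon as either criterion at step 324 is satisfied, which bounds the annotation cost regardless of how slowly the score improves.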
The method 300 provides several advantages. It combines the power of deep learning with active learning, allowing for efficient use of limited labeled data. The iterative nature of the method 300 enables continuous improvement of the model's performance over time. Furthermore, the active learning approach ensures that the most informative samples are labeled, potentially leading to better segmentation results with fewer labeled examples compared to traditional supervised learning approaches.
The plaque segmentation system 100 described herein may have various potential clinical applications in cardiovascular disease diagnosis and treatment planning. The specific applications may evolve as the technology advances and as new clinical needs are identified.
The system 100 may be extended or improved in various ways in future implementations, which may include integration with other imaging modalities or adaptation to different vascular regions. Such extensions or improvements are considered to be within the scope of the present disclosure.
Aspects of the present disclosure are discussed here with reference to flowchart illustrations and block diagrams that depict methods, systems, and apparatus in accordance with various aspects of the present disclosure. Each block within these flowcharts and diagrams, as well as combinations of these blocks, can be implemented by computer-readable program instructions. The various logical blocks, modules, circuits, and algorithm steps described in connection with the disclosed aspects may be implemented through electronic hardware, software, or a combination of both. To emphasize the interchangeability of hardware and software, the various components, blocks, modules, circuits, and steps are described generally in terms of their functionality. The decision to implement such functionality in hardware or software depends on the specific application and the design constraints imposed on the overall system. A person having ordinary skill in the art can implement the described functionality in different ways depending on the particular application, without deviating from the scope of the present disclosure.
Thus, the system 100, the information processing apparatus 104, and the method 300 provide several technical advantages. The use of active learning significantly reduces the amount of manually annotated data required for training, addressing the challenge of limited labeled datasets in medical imaging. The dual-branch decoder architecture, combining image reconstruction and plaque segmentation, enables more robust feature extraction and enhances overall segmentation performance. The active learning framework strategically selects the most informative samples for annotation, optimizing the use of available data and accelerating the learning process. By iteratively incorporating diverse and informative samples into the training set, the system 100 improves its ability to generalize across different patient populations and imaging conditions. The continuous retraining process allows the model to adapt to new data distributions and refine its performance over time, ensuring ongoing improvement and relevance in clinical settings. The automated segmentation process reduces the time and effort required for manual analysis, potentially improving clinical efficiency and decision-making in cardiovascular interventions.
The flowcharts and block diagrams presented in the figures depict the architecture, functionality, and operation of potential implementations of systems, methods, and apparatus according to different aspects of the present disclosure. Each block in the flowcharts or diagrams may represent an engine, segment, or portion of instructions comprising one or more executable instructions to perform the specified logical function(s). In some alternative implementations, the order of functions within the blocks may differ from what is depicted. For instance, two blocks shown in sequence may be executed concurrently or in reverse order, depending on the required functionality. Each block, and combinations of blocks, can also be implemented using special-purpose hardware-based systems that perform the specified functions or tasks, or through a combination of specialized hardware and software instructions.
Although the preferred aspects have been detailed here, it should be apparent to those skilled in the relevant field that various modifications, additions, and substitutions can be made without departing from the scope of the disclosure. These variations are thus considered to be within the scope of the disclosure as defined in the following claims.
Features or functionalities described in certain example aspects may be combined and re-combined in or with other example aspects. Additionally, different aspects and elements of the disclosed example aspects may be similarly combined and re-combined. Further, some example aspects, individually or collectively, may form components of a larger system where other processes may take precedence or modify their application. Moreover, certain steps may be required before, after, or concurrently with the example aspects disclosed herein. It should be noted that any and all methods and processes disclosed herein can be performed in whole or in part by one or more entities or actors in any manner.
Although terms like "first," "second," etc., are used to describe various elements, components, regions, layers, and sections, these terms should not necessarily be interpreted as limiting. They are used solely to distinguish one element, component, region, layer, or section from another. For example, a "first" element discussed here could be referred to as a "second" element without departing from the teachings of the present disclosure.
The terminology used here is intended to describe specific example aspects and should not be considered as limiting the disclosure. The singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises," "includes," "comprising," and "including," as used herein, indicate the presence of stated features, steps, elements, or components, but do not exclude the presence or addition of other features, steps, elements, or components.
As used herein, the term "or" is intended to be inclusive, meaning that "X employs A or B" would be satisfied by X employing A, B, or both A and B. Unless specified otherwise or clearly understood from the context, this inclusive meaning applies to the term "or."
Unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the relevant art. Terms should be interpreted consistently with their common usage in the context of the relevant art and should not be construed in an idealized or overly formal sense unless expressly defined here.
The terms "about" and "substantially," as used herein, refer to a variation of plus or minus 10% from the nominal value, and any stated measure is understood to include this variation.
In cases where other disclosures are incorporated by reference and there is a conflict with the present disclosure, the present disclosure takes precedence to the extent of the conflict, or to provide a broader disclosure or definition of terms. If two disclosures conflict, the later-dated disclosure will take precedence.
The use of examples or exemplary language (such as "for example") is intended to illustrate aspects of the invention and should not be seen as limiting the scope unless otherwise claimed. No language in the specification should be interpreted as implying that any non-claimed element is essential to the practice of the invention.
While many alterations and modifications of the present invention will likely become apparent to those skilled in the art after reading this description, the specific aspects shown and described by way of illustration are not intended to be limiting in any way.
A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the scope of the disclosure. Accordingly, other implementations are within the scope of the following claims.

CLAIMS:

1. An information processing apparatus (104) for plaque segmentation in intravascular ultrasound (IVUS) images, comprising:
processing circuitry (108) configured to:
acquire IVUS images;
preprocess the acquired IVUS images;
initialize a machine learning model including an encoder, dual decoders, and a discriminator;
train the machine learning model on labeled data;
encode unlabeled IVUS images using the encoder to produce latent representations;
select informative unlabeled samples based on the latent representations using the discriminator;
extract labels for the selected unlabeled samples;
add the newly labeled samples to the training data;
retrain the machine learning model using the updated training data;
generate plaque segmentation masks for test IVUS images using the retrained machine learning model; and
evaluate segmentation performance of the retrained machine learning model.

2. The information processing apparatus (104) as claimed in claim 1, wherein the dual decoders comprise a reconstruction decoder configured to reconstruct the input IVUS image from the latent representation, and a segmentation decoder configured to generate pixel-wise segmentation predictions.

3. The information processing apparatus (104) as claimed in claim 1, wherein the processing circuitry (108) is further configured to iteratively improve model performance by incorporating newly labeled data and refining the training process until a desired performance or budget is reached.

4. A system (100) for plaque segmentation in intravascular ultrasound (IVUS) images, comprising:
a user device (102) configured to capture IVUS images;
a communication network (106); and
an information processing apparatus (104) connected to the user device (102) via the communication network (106), the information processing apparatus (104) comprising processing circuitry (108) configured to:
acquire IVUS images (302) from the user device (102);
preprocess the acquired IVUS images;
initialize a machine learning model including an encoder, dual decoders, and a discriminator;
train the machine learning model on labeled data;
encode unlabeled IVUS images using the encoder to produce latent representations;
select informative unlabeled samples based on the latent representations using the discriminator;
extract labels for the selected unlabeled samples;
add the newly labeled samples to the training data;
retrain the machine learning model using the updated training data;
generate plaque segmentation masks for test IVUS images using the retrained machine learning model; and
evaluate segmentation performance of the retrained machine learning model.

5. The system (100) as claimed in claim 4, wherein the dual decoders comprise a reconstruction decoder configured to reconstruct the input IVUS image from the latent representation, and a segmentation decoder configured to generate pixel-wise segmentation predictions.

6. A method for plaque segmentation in intravascular ultrasound (IVUS) images, comprising:
acquiring IVUS images (302);
preprocessing the acquired IVUS images (304);
initializing a machine learning model (306) including an encoder, dual decoders, and a discriminator;
training the machine learning model on labeled data (308);
encoding unlabeled IVUS images using the encoder to produce latent representations (310);
selecting informative unlabeled samples based on the latent representations using the discriminator (312);
extracting labels for the selected unlabeled samples (314);
adding the newly labeled samples to the training data (316);
retraining the machine learning model using the updated training data (318);
generating plaque segmentation masks for test IVUS images using the retrained machine learning model (320); and
evaluating segmentation performance of the retrained machine learning model (322).

7. The method as claimed in claim 6, wherein the dual decoders comprise:
a reconstruction decoder configured to reconstruct the input IVUS image from the latent representation; and
a segmentation decoder configured to generate pixel-wise segmentation predictions,
wherein the method further comprises iteratively improving model performance by incorporating newly labeled data and refining the training process until a desired performance or budget is reached.

Documents

Application Documents

# Name Date
1 202411065852-STATEMENT OF UNDERTAKING (FORM 3) [30-08-2024(online)].pdf 2024-08-30
2 202411065852-PROVISIONAL SPECIFICATION [30-08-2024(online)].pdf 2024-08-30
3 202411065852-FORM 1 [30-08-2024(online)].pdf 2024-08-30
4 202411065852-DRAWINGS [30-08-2024(online)].pdf 2024-08-30
5 202411065852-DECLARATION OF INVENTORSHIP (FORM 5) [30-08-2024(online)].pdf 2024-08-30
6 202411065852-Proof of Right [03-10-2024(online)].pdf 2024-10-03
7 202411065852-FORM-26 [16-10-2024(online)].pdf 2024-10-16
8 202411065852-FORM-5 [30-12-2024(online)].pdf 2024-12-30
9 202411065852-DRAWING [30-12-2024(online)].pdf 2024-12-30
10 202411065852-COMPLETE SPECIFICATION [30-12-2024(online)].pdf 2024-12-30
11 202411065852-FORM-26 [08-04-2025(online)].pdf 2025-04-08
12 202411065852-FORM-9 [18-06-2025(online)].pdf 2025-06-18
13 202411065852-FORM 18 [18-06-2025(online)].pdf 2025-06-18