
System And Method For Crowd Density Estimation Using Deep Learning For Metro Rail Platform

Abstract: The present disclosure relates to a system (100) for crowd density estimation for a metro platform. The system includes a series of sensors (102) accommodated across the metro platform to capture a set of frames depicting various activities of passengers, pertaining to moving, waiting, boarding trains, and any combination thereof, and a processor (108) operatively coupled to the series of sensors, the processor configured to: process the captured set of frames from the series of sensors; execute, by an image stitching unit (112), a panoramic view of the captured set of frames; provide, by a crowd density estimation unit (114), crowd count information from the panoramic view in the form of both a numerical count and a density map; and estimate, by a crowd count unit (116), crowd count and density with respect to a spatial distribution on the metro platform.


Patent Information

Application #
Filing Date
12 January 2024
Publication Number
29/2025
Publication Type
INA
Invention Field
ELECTRONICS
Status
Parent Application

Applicants

Bharat Electronics Limited
Corporate Office, Outer Ring Road, Nagavara, Bangalore - 560045, Karnataka, India.

Inventors

1. SHRIVASTAVA, Kavita
Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore - 560013, Karnataka, India.
2. JETTIPALLI, Jyotheswar
Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore - 560013, Karnataka, India.
3. MITTAL, Virendra Kumar
Central Research Laboratory, Bharat Electronics Limited, Jalahalli P.O., Bangalore - 560013, Karnataka, India.

Specification

Description:
TECHNICAL FIELD
[0001] The present disclosure relates, in general, to crowd density estimation, and more specifically, relates to a system and method for crowd density estimation using deep learning for metro rail platforms.

BACKGROUND
[0002] Metro platforms often face challenges in managing crowd density, which can lead to safety concerns and operational inefficiencies. Existing methods for crowd density estimation do not address the unique dynamics of metro platforms. There is a need for a specialized system that can provide accurate real-time crowd density information.
[0003] The prior art discloses different methods and systems for crowd estimation on subway platforms and in other scenarios. An example of such a system is recited in US patent US8442807B2, which discloses systems, methods, and computer programs for estimating crowd size at a location. Another example is recited in patent US9183512B2, which discloses systems and methods for detecting an anomaly in crowd behavior. Yet another example is recited in patent CN105447458B, which relates to a large-scale crowd video analytics system and method; the system includes a crowd density estimation module, a crowd tracking module, a crowd state analysis module, and an event determination module.
[0004] Therefore, it is desired to overcome the drawbacks, shortcomings, and limitations associated with existing solutions, and develop a system that enables operators to make immediate adjustments to passenger flow and implement safety measures in real-time.

OBJECTS OF THE PRESENT DISCLOSURE
[0005] An object of the present disclosure is to provide a system and method for crowd density estimation using deep learning for metro rail platforms.
[0006] Another object of the present disclosure is to provide a system that addresses the unique characteristics and challenges of metro platforms, ensuring accurate estimations.
[0007] Another object of the present disclosure is to provide a system that enables operators to make immediate adjustments to passenger flow and implement safety measures in real time.
[0008] Another object of the present disclosure is to provide a system that enhances passenger safety by reducing the risk of overcrowding-related accidents. It creates a safer environment for passengers by providing timely crowd density information and enabling proactive safety measures.
[0009] Yet another object of the present disclosure is to provide a system that optimizes operational efficiency on metro platforms. It assists in scheduling trains, allocating resources, and managing platform operations more effectively. This optimization helps ensure a smooth and streamlined experience for both passengers and operators.

SUMMARY
[0010] The present disclosure relates, in general, to crowd density estimation, and more specifically, to a system and method for crowd density estimation using deep learning for metro rail platforms. The main objective of the present disclosure is to overcome the drawbacks, limitations, and shortcomings of existing systems and solutions by providing a crowd density estimation system and method specially designed for metro/railway platform scenarios. The system utilizes a combination of imaging sensors, computer vision techniques, and deep learning algorithms to accurately estimate the crowd density/count in real time. The disclosed system and method aid in optimizing passenger flow, ensuring safety, and enhancing overall operational efficiency within metro/railway platforms.
[0011] The present disclosure relates to a system for crowd density estimation for the metro platform. The system includes a series of sensors accommodated across the metro platform to capture a set of frames of the metro platform, the set of frames pertaining to passengers moving, waiting, boarding trains, and any combination thereof, and a processor operatively coupled to the series of sensors, the processor configured to: process the captured set of frames from the series of sensors; execute, by an image stitching unit, a panoramic view of the captured set of frames; provide, by a crowd density estimation unit, crowd count information from the panoramic view in the form of both a numerical count and a density map; and estimate, by a crowd count unit, crowd count and density with respect to a spatial distribution on the metro platform, allowing operators to dynamically adjust passenger flow and implement real-time safety measures based on the spatial distribution.
[0012] In an aspect, the captured set of frames from the series of sensors is transmitted to the processor via an Ethernet interface. In another aspect, the image stitching unit, the crowd density estimation unit, and the crowd count unit are integrated and processed in the processor, wherein the processor is a general-purpose graphics processing unit (GPGPU).
[0013] In another aspect, the generated panoramic view of the captured set of frames undergoes refinement using a seamless blending module.
[0014] In another aspect, the image stitching unit is configured to receive the captured set of frames from the series of sensors; perform feature detection and matching on the captured set of frames; use the detected features to estimate a homography between the set of frames; and utilize the estimated homography values for image warping and panorama generation of the captured set of frames.
[0015] In another aspect, the crowd density estimation unit is configured to pre-process and resize the input stitched image of the captured set of frames, extract features from the pre-processed and resized image, receive the extracted features and generate the density map, process the density map and provide final crowd count based on the processed density map.
[0016] In another aspect, the crowd count unit employs a dilated convolutional neural network to increase the receptive field and replaces pooling operations for improved crowd counting accuracy. In another aspect, the crowd count unit combines density estimation and convolutional neural network approaches, facilitating accurate crowd counting in both high-density and sparse crowd scenarios. In another aspect, the processor captures a complete panoramic view of the metro platform and performs crowd counting, providing a distribution of the crowd with respect to space on the metro platform through the density map.
[0017] Various objects, features, aspects, and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.

BRIEF DESCRIPTION OF THE DRAWINGS
[0018] The following drawings form part of the present specification and are included to further illustrate aspects of the present disclosure. The disclosure may be better understood by reference to the drawings in combination with the detailed description of the specific embodiments presented herein.
[0019] FIG. 1A illustrates a pictorial view of metro platform with an installed camera, in accordance with an embodiment of the present disclosure.
[0020] FIG. 1B illustrates a pictorial view of network of video camera with fixed field of view (FOV), in accordance with an embodiment of the present disclosure.
[0021] FIG. 1C illustrates a block diagram of sensors and GPGPU interface, in accordance with an embodiment of the present disclosure.
[0022] FIG. 2 illustrates a flow chart of image stitching method, in accordance with an embodiment of the present disclosure.
[0023] FIG. 3 illustrates a flow chart of crowd density estimation method, in accordance with an embodiment of the present disclosure.
[0024] FIG. 4 illustrates an exemplary view of method for crowd density estimation for metro platform, in accordance with an embodiment of the present disclosure.

DETAILED DESCRIPTION
[0025] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. If the specification states a component or feature “may”, “can”, “could”, or “might” be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.
[0026] As used in the description herein and throughout the claims that follow, the meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.
[0027] The present disclosure relates to a system and method of deep learning-based crowd density estimation for metro platform scenarios. It utilizes a sequence of imaging sensors, computer vision, and deep learning techniques to accurately estimate crowd density in real time. A network of video sensors (CCD) is strategically placed across the metro platform to capture frames; the captured frames from all the sensors are transmitted over Ethernet to the monitoring center, where a computing device, a GPGPU, is placed. The received images are pre-processed on the computing device and stitched to obtain a panoramic view of the platform. The stitched image is given to the density estimation block, which provides the estimated crowd density with an approximate count.
[0028] The present invention discloses a system and method for crowd density estimation based on a deep learning approach. This method and system include image processing elements such as image stitching, and crowd density estimation based on deep learning.
[0029] According to one embodiment of the invention, the image stitching module forms a panoramic view of the images captured from the network of video sensors. This module uses a blending approach to generate the panoramic view from the captured input images. According to another embodiment, the crowd density estimation module provides a crowd count, or people headcount, which can be used in surveillance applications. This embodiment predicts the crowd count as well as the distribution of the crowd in an image or video captured by the surveillance camera. The module recognizes highly dense crowds within the scene and predicts the head count. The current approach is implemented based on high-quality density estimation and counting. A Congested Scene Recognition network (CSRNet) is used as the backbone network with a dilation rate of 2. The CSRNet network is modified by adjusting training parameters to achieve a high-quality density map for both dense and sparse crowds.
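The benefit of the dilation rate of 2 mentioned above can be illustrated with a short receptive-field calculation. This is an illustrative sketch only, not code from the disclosure; the stack of three 3x3 layers is an assumed configuration chosen for demonstration.

```python
def receptive_field(layers):
    """Receptive field (in pixels) of a stack of convolution layers,
    each given as (kernel_size, dilation); stride is assumed to be 1
    throughout, as when dilated convolutions replace pooling."""
    rf = 1
    for k, d in layers:
        effective_k = d * (k - 1) + 1  # dilation spreads the kernel taps apart
        rf += effective_k - 1
    return rf

# three plain 3x3 convolutions vs. three 3x3 convolutions with dilation 2
plain = receptive_field([(3, 1)] * 3)    # -> 7
dilated = receptive_field([(3, 2)] * 3)  # -> 13
```

With dilation 2, the same three layers nearly double their receptive field at no extra parameter cost, which is the reason dilated convolutions can stand in for pooling while preserving density-map resolution.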
[0030] The present disclosure relates to a system for crowd density estimation for the metro platform. The system includes a series of sensors accommodated across the metro platform to capture a set of frames of the metro platform, the set of frames pertaining to passengers moving, waiting, boarding trains, and any combination thereof, and a processor operatively coupled to the series of sensors, the processor configured to: process the captured set of frames from the series of sensors; execute, by an image stitching unit, a panoramic view of the captured set of frames; provide, by a crowd density estimation unit, crowd count information from the panoramic view in the form of both a numerical count and a density map; and estimate, by a crowd count unit, crowd count and density with respect to a spatial distribution on the metro platform.
[0031] In an aspect, the captured set of frames from the series of sensors is transmitted to the processor via an Ethernet interface. In another aspect, the image stitching unit, the crowd density estimation unit, and the crowd count unit are integrated and processed in the processor, wherein the processor is a general-purpose graphics processing unit (GPGPU).
[0032] In another aspect, the generated panoramic view of the captured set of frames undergoes refinement using a seamless blending module.
[0033] In another aspect, the image stitching unit is configured to receive the captured set of frames from the series of sensors; perform feature detection and matching on the captured set of frames; use the detected features to estimate a homography between the set of frames; and utilize the estimated homography values for image warping and panorama generation of the captured set of frames.
[0034] In another aspect, the crowd density estimation unit is configured to pre-process and resize the input stitched image of the captured set of frames, extract features from the pre-processed and resized image, receive the extracted features and generate the density map, process the density map and provide final crowd count based on the processed density map.
[0035] In another aspect, the crowd count unit employs a dilated convolutional neural network to increase the receptive field and replace pooling operations for improved crowd counting accuracy. In another aspect, the crowd count unit combines density estimation and convolutional neural network approaches, facilitating accurate crowd counting in both high-density and sparse crowd scenarios. In another aspect, the processor captures a complete panoramic view of the metro platform and performs crowd counting, providing a distribution of the crowd with respect to space on the metro platform through the density map. The present disclosure is described in enabling detail in the following examples, which may represent more than one embodiment of the present disclosure.
[0036] The advantages achieved by the system of the present disclosure can be clear from the embodiments provided herein. The system is designed specifically for metro platforms, addressing their unique characteristics and challenges to ensure accurate estimations. The system empowers operators with real-time capabilities, enabling them to make immediate adjustments to passenger flow and implement safety measures swiftly. By providing timely crowd density information, the system significantly reduces the risk of overcrowding-related accidents, fostering a safer environment for passengers and facilitating proactive safety measures. Moreover, the present disclosure optimizes operational efficiency on metro platforms by assisting in tasks such as scheduling trains, resource allocation, and effective platform management. This optimization contributes to a smooth and streamlined experience for both passengers and operators, making the system a valuable solution for enhancing safety and efficiency in metro platform environments. The description of terms and features related to the present disclosure shall be clear from the embodiments that are illustrated and described; however, the invention is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions, and equivalents of the embodiments are possible within the scope of the present disclosure. Additionally, the invention can include other embodiments that are within the scope of the claims but are not described in detail with respect to the following description.
[0037] FIG. 1A illustrates a pictorial view of metro platform with an installed camera, in accordance with an embodiment of the present disclosure.
[0038] FIG. 1A depicts the pictorial view of the metro rail platform with a camera mounted on a post. Referring to FIG. 1A, the system 100 includes a network of imaging sensors 102 mounted on a post. The network of imaging sensors 102 is placed strategically for maximum coverage of the platform. FIG. 1B illustrates a pictorial view of a network of video cameras with a fixed field of view (FOV), in accordance with an embodiment of the present disclosure. The series of sensors 102-1 to 102-N (collectively referred to as the series of sensors 102, herein) is placed in such a way that it covers the top view of a metro rail platform 104 (also referred to as metro platform 104, herein). Frames captured from the imaging sensors 102 are processed in the GPGPU to generate a seamless panoramic image 106.
[0039] FIG. 1C illustrates a block diagram of the sensors and GPGPU interface, in accordance with an embodiment of the present disclosure. The series of sensors 102 (also interchangeably referred to as the network of imaging sensors 102, herein) is connected to the processor 108 via the Ethernet interface 110. The processor 108 can include an image stitching unit 112, a crowd density estimation unit 114, and a crowd count unit 116, which process the captured frames in the processor 108.
[0040] The system 100 includes the series of sensors 102 accommodated across the metro platform to capture the set of frames of the metro platform, the set of frames depicting various activities of passengers, pertaining to moving, waiting, boarding trains, and any combination thereof. The processor 108, operatively coupled to the series of sensors 102, is configured to: process the captured set of frames from the series of sensors 102; execute a panoramic view of the captured set of frames by the image stitching unit 112; provide crowd count information in the form of both a numerical count and a density map by the crowd density estimation unit 114; and estimate crowd count and density with respect to a spatial distribution on the metro platform by the crowd count unit 116. Utilizing the crowd count unit 116, the system estimates crowd count and density across the metro platform, allowing operators to dynamically adjust passenger flow and implement real-time safety measures based on the spatial distribution.
[0041] The captured set of frames from the series of sensors 102 is transmitted to the processor 108 via the Ethernet interface 110. The image stitching unit 112, the crowd density estimation unit 114, and the crowd count unit 116 are integrated and processed in the processor 108, where the processor is a general-purpose graphics processing unit (GPGPU). The generated panoramic view of the captured set of frames undergoes refinement using a seamless blending module.
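As an illustration of the seamless blending step, the sketch below feathers two horizontally overlapping grayscale strips with a linear weight ramp so no hard seam remains. The function name, the fixed-overlap assumption, and the grayscale restriction are ours, not from the disclosure; a minimal sketch, not the implemented blending module.

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Join two horizontally adjacent grayscale images whose last/first
    `overlap` columns show the same scene, feathering the seam with a
    linear weight ramp that fades the left image out and the right in."""
    alpha = np.linspace(1.0, 0.0, overlap)  # weight of the left image per column
    seam = left[:, -overlap:] * alpha + right[:, :overlap] * (1.0 - alpha)
    # non-overlapping parts are kept as-is; only the seam is blended
    return np.concatenate([left[:, :-overlap], seam, right[:, overlap:]], axis=1)
```

For example, blending a 4x6 strip of value 10 with a 4x6 strip of value 20 at overlap 2 produces a 4x10 panorama whose seam columns ramp from 10 to 20.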
[0042] In an embodiment, the image stitching unit 112 is configured to receive the captured set of frames from the series of sensors 102; perform feature detection and matching on the captured set of frames; use the detected features to estimate a homography between the set of frames; and utilize the estimated homography values for image warping and panorama generation of the captured set of frames.
[0043] In another embodiment, the crowd density estimation unit 114 is configured to pre-process and resize the input stitched image of the captured set of frames, extract features from the pre-processed and resized image, receive the extracted features and generate the density map, process the density map and provide final crowd count based on the processed density map.
[0044] In an exemplary embodiment, the crowd count unit 116 employs a dilated convolutional neural network to increase the receptive field and replace pooling operations for improved crowd counting accuracy. In another exemplary embodiment, the crowd count unit 116 combines density estimation and convolutional neural network approaches, facilitating accurate crowd counting in both high-density and sparse crowd scenarios.
[0045] The processor 108 captures a complete panoramic view of the metro platform and performs crowd counting, providing a distribution of the crowd with respect to spatial distribution, i.e., space on the metro platform, through the density map.
[0046] In an implementation, consider a busy metro station during rush hours with people waiting for trains on the platform. The system 100 is deployed in this metro station. The series of sensors 102 are strategically placed across the metro platform to capture a continuous set of frames. These frames capture the dynamic scenes of passengers moving, waiting, and boarding the trains. The captured frames are sent to the processor 108 via an Ethernet interface 110. The processor 108, i.e., GPGPU, processes these frames to execute a panoramic view using the integrated image stitching unit 112. The integrated crowd density estimation unit 114 receives the panoramic view and analyzes it to provide crowd count information. The crowd count unit 116 estimates crowd count and density concerning the spatial distribution on the metro platform.
[0047] Further, the image stitching unit 112 performs feature detection and matching on the captured frames. The detected features contribute to estimating the homography between frames, and this homography is used for image warping and panorama generation. Moreover, the crowd density estimation unit 114 pre-processes and resizes the stitched image, extracting features using a fully convolutional neural network with a Visual Geometry Group (VGG-16) backbone. The density map generation network processes the extracted features to produce a density map, and the final crowd count is obtained from the processed density map. Besides, the crowd count unit 116 employs a dilated convolutional neural network for an increased receptive field and improved counting accuracy. It combines density estimation and convolutional neural network approaches to handle both high-density and sparse crowd scenarios effectively. Crowd counting is performed, and the density map provides a detailed distribution of the crowd with respect to the space on the metro platform.
[0048] Thus, the present invention overcomes the drawbacks, shortcomings, and limitations associated with existing solutions, and provides a system designed specifically for metro platforms, addressing their unique characteristics and challenges to ensure accurate estimations. The system empowers operators with real-time capabilities, enabling them to make immediate adjustments to passenger flow and implement safety measures swiftly. By providing timely crowd density information, the system significantly reduces the risk of overcrowding-related accidents, fostering a safer environment for passengers and facilitating proactive safety measures. Moreover, the present disclosure optimizes operational efficiency on metro platforms by assisting in tasks such as scheduling trains, resource allocation, and effective platform management. This optimization contributes to a smooth and streamlined experience for both passengers and operators, making the system a valuable solution for enhancing safety and efficiency in metro platform environments.
[0049] FIG. 2 illustrates a flow chart of the image stitching method, in accordance with an embodiment of the present disclosure.
[0050] The method for panoramic image generation includes: at block 202, input frames are received from the series of imaging sensors; at block 204, the received frames undergo feature detection and matching, where features from two frames are utilized.
[0051] At block 206, the detected features contribute to the estimation of homography between frames. At block 208, using the estimated homography values, image warping and panorama generation processes are performed. This stitching process is iteratively executed for all frames captured from various imaging sensors. At block 210, the generated panoramic image undergoes refinement using a seamless blending module and at block 212, the output image is generated.
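The homography estimation at block 206 can be sketched with the classical direct linear transform (DLT): from at least four matched point pairs, a 3x3 homography is recovered as the null vector of a linear system. This is a standard-technique illustration under our own assumptions (no outlier rejection such as RANSAC, which a real stitching pipeline would add); it is not code from the disclosure.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src via the direct
    linear transform (DLT), given >= 4 matched point pairs (x, y)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # each correspondence contributes two linear constraints on H
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # solution is the right singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalise so H[2, 2] == 1

def warp_point(H, p):
    """Apply the homography to a single point (x, y) in homogeneous form."""
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return x / w, y / w
```

Once H is known for each adjacent frame pair, block 208's warping maps every pixel of one frame into the coordinate system of its neighbour before blending.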
[0052] FIG. 3 illustrates a flow chart of the crowd density estimation method, in accordance with an embodiment of the present disclosure.
[0053] The method for crowd density and count estimation includes: at block 302, the final generated panoramic image is subjected to crowd density and crowd count estimation processing.
[0054] At block 304, the input stitched image is pre-processed and resized. At block 306, features are extracted using a fully convolutional neural network with VGG-16 as the backbone layer. At block 308, extracted features are input into a density map generation network, incorporating a convolutional network with dilated convolution layers.
[0055] At block 310, the resulting density map undergoes post-processing to obtain the final density map and crowd count at block 312. These method steps outline the processes involved in the generation of panoramic images and the estimation of crowd density.
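The density-map representation behind blocks 308-312 can be illustrated with a common construction (not spelled out in the disclosure): a normalised Gaussian is placed at each annotated head position, so the map's integral equals the crowd count. The head positions, map size, and sigma below are illustrative assumptions.

```python
import numpy as np

def density_map(heads, shape, sigma=2.0):
    """Build a crowd density map by placing a normalised Gaussian at each
    annotated head position (row, col); the map integrates to the count."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dmap = np.zeros(shape, dtype=float)
    for (cy, cx) in heads:
        g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
        dmap += g / g.sum()  # each person contributes exactly 1 to the integral
    return dmap

# the crowd count is recovered as the integral (sum) of the density map
heads = [(10, 10), (20, 30), (40, 40)]
dmap = density_map(heads, (64, 64))
count = dmap.sum()  # -> 3.0
```

This is why the flow chart can report both a numerical count and a spatial distribution from the same output: summing the map gives the count, while the map itself shows where on the platform the crowd is concentrated.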
[0056] FIG. 4 illustrates an exemplary view of the method for crowd density estimation for metro platforms, in accordance with an embodiment of the present disclosure.
[0057] The method 400 includes, at block 402, the series of sensors accommodated across the metro platform capturing the set of frames of the metro platform, the set of frames depicting various activities of passengers, pertaining to moving, waiting, boarding trains, and any combination thereof. At block 404, the processor, operatively coupled to the series of sensors, processes the captured set of frames from the series of sensors.
[0058] At block 406, execute a panoramic view of the captured set of frames by the image stitching unit. At block 408, provide crowd count information in the form of both numerical count and a density map by the crowd density estimation unit. At block 410, estimate crowd count and density with respect to a spatial distribution on the metro platform by the crowd count unit. The method allows operators to dynamically adjust passenger flow and implement real-time safety measures.
[0059] It will be apparent to those skilled in the art that the system 100 of the disclosure may be provided using some or all of the mentioned features and components without departing from the scope of the present disclosure. While various embodiments of the present disclosure have been illustrated and described herein, it will be clear that the disclosure is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions, and equivalents will be apparent to those skilled in the art, without departing from the spirit and scope of the disclosure, as described in the claims.

ADVANTAGES OF THE PRESENT INVENTION
[0060] The present invention provides a system that addresses the unique characteristics and challenges of metro platforms, ensuring accurate estimations.
[0061] The present invention provides a system that enables operators to make immediate adjustments to passenger flow and implement safety measures in real time.
[0062] The present invention provides a system that enhances passenger safety by reducing the risk of overcrowding-related accidents. It creates a safer environment for passengers by providing timely crowd density information and enabling proactive safety measures.
[0063] The present invention provides a system that optimizes operational efficiency on metro platforms. It assists in scheduling trains, allocating resources, and managing platform operations more effectively. This optimization helps ensure a smooth and streamlined experience for both passengers and operators.
Claims:
1. A system (100) for crowd density estimation for a metro platform, the system comprising:
a series of sensors (102) accommodated across the metro platform (104) to capture a set of frames depicting various activities of passengers, pertaining to moving, waiting, boarding trains, and any combination thereof; and
a processor (108) operatively coupled to the series of sensors, the processor configured to:
process the captured set of frames from the series of sensors;
execute, by an image stitching unit (112), a panoramic view of the captured set of frames;
provide, by a crowd density estimation unit (114), crowd count information from the panoramic view in a form of both numerical count and a density map; and
estimate, by a crowd count unit (116), crowd count and density with respect to a spatial distribution on the metro platform, allowing operators to dynamically adjust passenger flow and implement real-time safety measures.
2. The system as claimed in claim 1, wherein the captured set of frames from the series of sensors (102) is transmitted to the processor via an Ethernet interface (110).
3. The system as claimed in claim 1, wherein the image stitching unit (112), the crowd density estimation unit (114) and the crowd count unit (116) are integrated and processed in the processor, wherein the processor is a general-purpose graphics processing unit (GPGPU).
4. The system as claimed in claim 1, wherein the generated panoramic view of the captured set of frames undergoes refinement using a seamless blending module.
5. The system as claimed in claim 1, wherein the image stitching unit (112) is configured to:
receive the captured set of frames from the series of sensors (102);
perform, from the captured set of frames, feature detection and matching;
contribute the detected features to estimate homography between the set of frames; and
utilize estimated homography values for image warping and panorama generation of the captured set of frames.
6. The system as claimed in claim 1, wherein the crowd density estimation unit (114) is configured to:
pre-process and resize an input stitched image of the captured set of frames;
extract features from the pre-processed and resized image;
receive the extracted features and generate the density map;
process the density map; and
provide final crowd count based on the processed density map.
7. The system as claimed in claim 1, wherein the crowd count unit (116) employs a dilated convolutional neural network to increase the receptive field and replaces pooling operations for improved crowd counting accuracy.
8. The system as claimed in claim 1, wherein the crowd count unit (116) combines density estimation and convolutional neural network approaches, facilitating accurate crowd counting in both high-density and sparse crowd scenarios.
9. The system as claimed in claim 1, wherein the processor (108) captures a complete panoramic view of the metro platform and performs crowd counting, providing a distribution of the crowd with respect to the spatial distribution on the metro platform through the density map and minimizing the inference time of a deep learning engine.
10. A method (400) for crowd density estimation on a metro platform, the method comprising:
capturing (402) a set of frames of the metro platform using a series of sensors accommodated across the metro platform, the set of frames depicting various activities of passengers, pertaining to moving, waiting, boarding trains, and any combination thereof;
processing (404), at a processor, the captured set of frames from the series of sensors;
executing (406), at an image stitching unit, a panoramic view of the captured set of frames;
providing (408), at a crowd density estimation unit, crowd count information in a form of both numerical count and a density map; and
estimating (410), by a crowd count unit, crowd count and density with respect to a spatial distribution on the metro platform, allowing operators to dynamically adjust passenger flow and implement real-time safety measures.

Documents

Application Documents

# Name Date
1 202441002519-STATEMENT OF UNDERTAKING (FORM 3) [12-01-2024(online)].pdf 2024-01-12
2 202441002519-POWER OF AUTHORITY [12-01-2024(online)].pdf 2024-01-12
3 202441002519-FORM 1 [12-01-2024(online)].pdf 2024-01-12
4 202441002519-DRAWINGS [12-01-2024(online)].pdf 2024-01-12
5 202441002519-DECLARATION OF INVENTORSHIP (FORM 5) [12-01-2024(online)].pdf 2024-01-12
6 202441002519-COMPLETE SPECIFICATION [12-01-2024(online)].pdf 2024-01-12
7 202441002519-Proof of Right [09-02-2024(online)].pdf 2024-02-09
8 202441002519-RELEVANT DOCUMENTS [04-10-2024(online)].pdf 2024-10-04
9 202441002519-POA [04-10-2024(online)].pdf 2024-10-04
10 202441002519-FORM 13 [04-10-2024(online)].pdf 2024-10-04
11 202441002519-Response to office action [01-11-2024(online)].pdf 2024-11-01