
System And Method For Advanced Signal Filtering

Abstract: The present disclosure describes a signal processing system (100) for filtering a multi-dimensional signal generated from an input source (102). The system (100) comprises a signal acquisition unit (104) configured to sense at least one physical parameter of the multi-dimensional signal, and at least one analog-to-digital converter (106) communicably coupled to the signal acquisition unit (104) and configured to: receive the at least one sensed physical parameter and generate sampled discrete multi-dimensional data for the at least one received physical parameter. Further, the system (100) comprises a control unit (108) communicably coupled to the at least one analog-to-digital converter (106), wherein the control unit (108) is configured to perform an adaptive distance-weighted filtering on the sampled discrete multi-dimensional data to generate at least one filter value. Furthermore, the system (100) comprises an output interface (110) communicably coupled to the control unit (108). FIG. 1


Patent Information

Application #
Filing Date
18 June 2025
Publication Number
27/2025
Publication Type
INA
Invention Field
PHYSICS
Status
Parent Application

Applicants

Matter Motor Works Private Limited
301, PARISHRAM BUILDING, 5B RASHMI SOC., NR. MITHAKHALI SIX ROADS, NAVRANGPURA AHMEDABAD, GUJARAT, INDIA - 380010

Inventors

1. KUMAR PRASAD TELIKEPALLI
301, PARISHRAM BUILDING, 5B RASHMI SOC., NR. MITHAKHALI SIX ROADS, NAVRANGPURA AHMEDABAD, GUJARAT, INDIA - 380010
2. KALP BHATT
301, PARISHRAM BUILDING, 5B RASHMI SOC., NR. MITHAKHALI SIX ROADS, NAVRANGPURA AHMEDABAD, GUJARAT, INDIA - 380010
3. AKASH MODI
301, PARISHRAM BUILDING, 5B RASHMI SOC., NR. MITHAKHALI SIX ROADS, NAVRANGPURA AHMEDABAD, GUJARAT, INDIA - 380010
4. HARESH PATEL
301, PARISHRAM BUILDING, 5B RASHMI SOC., NR. MITHAKHALI SIX ROADS, NAVRANGPURA AHMEDABAD, GUJARAT, INDIA - 380010

Specification

Description: SYSTEM AND METHOD FOR ADVANCED SIGNAL FILTERING
TECHNICAL FIELD
Generally, the present invention relates to the field of signal processing systems. Particularly, the present disclosure relates to a system and method for filtering a multi-dimensional signal.
BACKGROUND
In modern electrical and electronic systems involving motor drives, power converters, and renewable energy sources, the need for accurate and reliable signal processing of multi-dimensional inputs has become critical. The multi-dimensional signals generated from complex energy sources, such as, but not limited to, a three-phase AC system or advanced inverter outputs, are subject to inherent noise, harmonics, and phase imbalances. These distortions introduce inaccuracies in measurement and control operations, affecting overall system performance and efficiency.
Conventional signal filtering systems generally rely on fixed-size and fixed-shape convolutional or averaging filters, such as, but not limited to, mean filters, Gaussian filters, or median filters. The filters compute a uniform or static weighted average across a neighbourhood of samples, assuming homogeneity in noise characteristics and signal features. In some enhanced forms, distance-based filters such as bilateral filters use a combination of spatial proximity and signal intensity similarity for weight computation. Further, the above-mentioned systems depend on manually defined parameters, such as kernel size, decay functions, and thresholds, which remain static regardless of the underlying signal’s context. As a result, uniform filters apply the same degree of smoothing across the signal domain, regardless of whether the region exhibits high variability, sharp edges, or flat smoothness.

However, there are underlying problems associated with the above-mentioned mechanisms for filtering a multi-dimensional signal generated from an input source. For instance, static or non-contextual filtering techniques encounter critical limitations in processing multi-dimensional signals with dynamic characteristics. Specifically, the fixed kernel filters tend to over-smooth sharp transitions or edges in the signal, leading to loss of important features, while failing to suppress high-frequency noise in fluctuating regions. Further, the lack of adaptability in weight computation results in suboptimal filtering in complex contexts, such as varying entropy, gradient orientation, or feature density. Moreover, conventional methods do not incorporate contextual relevance into the filtering weights, reducing the effectiveness in real-world applications. Therefore, the above-mentioned challenges underscore the need for an adaptive, context-aware filtering framework that integrates proximity-based distance metrics with relevance modulation to achieve more accurate and robust signal enhancement.
Therefore, there exists a need for a mechanism for filtering a multi-dimensional signal generated from an input source that is efficient, accurate, and overcomes one or more problems as mentioned above.
SUMMARY
An object of the present disclosure is to provide a signal processing system for filtering a multi-dimensional signal generated from an input source.
Another object of the present disclosure is to provide a method for filtering a multi-dimensional signal generated from an input source.
Yet another object of the present disclosure is to provide an adaptive, context-aware signal processing system that performs distance-weighted filtering on multi-dimensional data.
In accordance with an aspect of the present disclosure, there is provided a signal processing system for filtering a multi-dimensional signal generated from an input source, the system comprises:
- a signal acquisition unit configured to sense at least one physical parameter of the multi-dimensional signal;
- at least one analog-to-digital converter communicably coupled to the signal acquisition unit and configured to:
- receive the at least one sensed physical parameter; and
- generate a sampled discrete multi-dimensional data for the at least one received physical parameter;
- a control unit communicably coupled to the at least one analog-to-digital converter, wherein the control unit is configured to perform an adaptive distance-weighted filtering on the sampled discrete multi-dimensional data to generate at least one filter value; and
- an output interface communicably coupled to the control unit.
The system for filtering a multi-dimensional signal generated from an input source, as described in the present disclosure, is advantageous in terms of enhanced accuracy in signal processing by implementing a filtering technique that selects representative signal values based on statistical characteristics of sampled data. Further, the invention provides an adaptive signal processing system designed to filter multi-dimensional signals originating from various input sources by leveraging both spatial and contextual characteristics of the data. The system dynamically selects a sampling window based on local entropy and computes distances between the central and neighbouring data points. Filtering weights are assigned using an exponential decay function modulated by contextual features through an adaptive relevance factor. The final output is a context-sensitive filtered signal that achieves enhanced noise reduction while preserving important structural features of the original signal.
In accordance with another aspect of the present disclosure, there is provided a method for filtering a multi-dimensional signal generated from an input source, the method comprising:
- sensing at least one physical parameter of the multi-dimensional signal, via a signal acquisition unit;
- generating a sampled discrete multi-dimensional data for the at least one received physical parameter, via at least one analog-to-digital converter;
- preprocessing the sampled discrete multi-dimensional data by performing amplitude normalization through min-max scaling, via a signal conditioning module;
- computing a distance between a current data point and the surrounding data points within a dynamically selected sampling window, via a distance computation module; and
- assigning a filtering weight to each data point in the sampling window based on an exponential decay function of the computed distance, via a weight aggregation module.
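The method steps above can be sketched end-to-end for a one-dimensional sampling window. This is a minimal illustration under stated assumptions (single-channel data, an exponential decay with a hypothetical `sigma` parameter), not the claimed implementation:

```python
import numpy as np

def min_max_scale(window):
    """Amplitude normalization through min-max scaling (signal conditioning step)."""
    lo, hi = window.min(), window.max()
    if hi == lo:
        return np.zeros_like(window, dtype=float)   # degenerate flat window
    return (window - lo) / (hi - lo)

def filter_center(window, sigma=1.0):
    """Distance computation plus exponential-decay weighting for the window's centre point."""
    scaled = min_max_scale(np.asarray(window, dtype=float))
    center = scaled[len(scaled) // 2]
    dist = np.abs(scaled - center)        # distance of each point to the central data point
    w = np.exp(-dist / sigma)             # exponential decay of the computed distance
    w /= w.sum()                          # normalize so the weights sum to one
    return float(np.dot(w, scaled))       # weighted aggregation -> filter value
```

On a symmetric ramp the filter reproduces the centre value, since the decay weights are symmetric about it.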
Additional aspects, advantages, features, and objects of the present disclosure would be made apparent from the drawings and the detailed description of the illustrative embodiments constructed in conjunction with the appended claims that follow.
It will be appreciated that features of the present disclosure are susceptible to being combined in various combinations without departing from the scope of the present disclosure as defined by the appended claims.
BRIEF DESCRIPTION OF DRAWINGS
The summary above, as well as the following detailed description of illustrative embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the present disclosure, exemplary constructions of the disclosure are shown in the drawings. However, the present disclosure is not limited to specific methods and instrumentalities disclosed herein. Moreover, those skilled in the art will understand that the drawings are not to scale. Wherever possible, like elements have been indicated by identical numbers.
Embodiments of the present disclosure will now be described, by way of example only, with reference to the following diagrams wherein:
Figures 1 and 2 illustrate block diagrams of a signal processing system for filtering a multi-dimensional signal generated from an input source, in accordance with different embodiments of the present disclosure.
Figure 3 illustrates a flow chart for filtering a multi-dimensional signal generated from an input source, in accordance with another embodiment of the present disclosure.
In the accompanying drawings, an underlined number is employed to represent an item over which the underlined number is positioned or an item to which the underlined number is adjacent. A non-underlined number relates to an item identified by a line linking the non-underlined number to the item. When a number is non-underlined and accompanied by an associated arrow, the non-underlined number is used to identify a general item at which the arrow is pointing.
DETAILED DESCRIPTION
The following detailed description illustrates embodiments of the present disclosure and ways in which they can be implemented. Although some modes of carrying out the present disclosure have been disclosed, those skilled in the art would recognize that other embodiments for carrying out or practicing the present disclosure are also possible.
As used herein, the terms “signal processing” and “processing” are used interchangeably and refer to a hardware-implemented or software-controlled architecture designed to acquire, digitize, preprocess, and selectively enhance or suppress components of a multi-dimensional signal originating from one or more physical parameters. Specifically, the system comprises essential elements including a signal acquisition unit, analog-to-digital conversion hardware, and a control unit equipped with modules for signal conditioning, distance computation, and weighted aggregation. The system processes signals by transforming continuous physical phenomena, such as, but not limited to, voltage, current, or vibration, into discrete numerical data and performing adaptive filtering based on spatial or temporal relationships among sampled values. Further, the control unit operates on a sampling window defined around each data point, with signal conditioning removing outliers and normalizing dynamic ranges, distance computation quantifying proximity between neighbouring data vectors, and aggregation applying a weighted filter emphasizing relevant structural information while suppressing noise. Furthermore, signal processing systems are classified based on signal dimensionality, domain of operation, and filtering strategy. The dimensional classifications include one-dimensional systems for time-series data (ECG, vibration), two-dimensional systems for spatially structured data (images), and higher-dimensional systems for multisensory fusion or spatiotemporal signals. The filtering strategies encompass linear weighted filters, non-linear distance-based filters, and hybrid models incorporating data-driven relevance modulation. Furthermore, the technique of adaptive filtering involves computing a context-aware sampling window whose size, shape, and orientation vary according to local signal characteristics such as variance, entropy, or gradient magnitude.
A proximity score is calculated between the central sample and neighbours, typically using Euclidean or Mahalanobis distance, followed by weight computation via exponential or monotonic functions. Consequently, the final signal values are generated using a normalized weighted sum, optionally adjusted using a learned or adaptive relevance factor derived from prior signal behavior or statistical modeling.
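As an illustration of the entropy-driven window selection just described, the sketch below shrinks the sampling window where local entropy is high (busy signal) and widens it where the signal is flat. The bin count, size bounds, and linear mapping are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def local_entropy(window, bins=8):
    """Shannon entropy of a sample window, used as a local-context measure."""
    hist, _ = np.histogram(window, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                              # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

def adaptive_window_size(window, min_size=3, max_size=9, bins=8):
    """Map local entropy onto a window size: small in busy regions, large in flat ones."""
    h = local_entropy(window, bins)
    h_max = np.log2(bins)                     # maximum possible entropy for `bins` bins
    frac = 1.0 - h / h_max                    # 1.0 for flat regions, 0.0 for maximally busy
    size = int(round(min_size + frac * (max_size - min_size)))
    return size | 1                           # force an odd size so a centre point exists
```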
As used herein, the terms “multi-dimensional signal” and “signal” are used interchangeably and refer to a structured set of data points generated by sampling one or more physical parameters across multiple axes, channels, or modalities, resulting in data vectors with two or more independent components. The signals arise from physical systems where measurements vary not only over time but also across spatial coordinates, sensor arrays, or signal sources. Specifically, the multi-dimensional signal includes sampled outputs derived from parameters such as, but not limited to, current, voltage, vibration, or position, where each sample may consist of multiple correlated values. Further, the values are acquired concurrently or sequentially and stored in structured sampling windows for further processing. The multi-dimensional nature allows the system to analyze relationships between components within a signal vector, enabling context-sensitive operations such as direction-aware filtering and structure-preserving enhancement. Furthermore, types of multi-dimensional signals include time-series vector signals (three-axis accelerometer data), spatial signals (2D image frames), and fused sensor outputs combining inputs from heterogeneous sources. The technique of processing involves sampling using analog-to-digital converters to obtain discrete representations, followed by conditioning to normalize amplitudes and remove statistical outliers. Furthermore, the control unit processes the vectors within adaptive windows, where local statistical measures such as variance or entropy determine the shape, size, and orientation of the window. Specifically, each signal vector in the window is compared to a center point using distance functions to compute proximity scores.
The scores are transformed into weights using monotonic or exponential functions, and the filtered signal value is computed using a weighted sum, optionally modulated by context-aware relevance factors to enhance feature preservation in high-dimensional signal spaces.
As used herein, the terms “input source” and “source” are used interchangeably and refer to a physical or electronic origin that generates one or more measurable parameters from which multi-dimensional signal data are derived. Specifically, the input source includes electrical, mechanical, thermal, or environmental systems that emit continuously varying quantities such as voltage, current, acceleration, displacement, or temperature. The signal acquisition unit interfaces directly with the input source to sense the physical parameters in real-time and convert them into electrical signals suitable for digitization. Further, the input sources may be passive or active in nature and operate under predefined operating conditions relevant to the monitored system, such as a rotating machine, a power converter, or a structural element under stress. The fidelity and resolution of the signal generated by the input source directly influence the accuracy and performance of the adaptive filtering process. Furthermore, the types of input sources include single-parameter sources such as strain gauges, thermocouples, or voltage taps, and multi-parameter sources such as Inertial Measurement Units (IMUs), combined voltage-current probes, or integrated sensor arrays. The input sources may operate over different domains, including time, frequency, or space, and may be static or dynamic depending on the physical system. The way of integration involves establishing a signal pathway between the input source and the signal acquisition unit, followed by real-time conversion of analog signals to digital form using analog-to-digital converters. The resulting discrete data is organized into multi-dimensional sampling windows, processed through signal conditioning, and subjected to distance-weighted adaptive filtering. 
Statistical attributes of the signal, as acquired from the input source, influence subsequent steps in window selection and proximity-based weight computation, ensuring that context-aware filtered outputs maintain high fidelity to the underlying physical behavior of the input system.
As used herein, the terms “signal acquisition unit” and “acquisition unit” are used interchangeably and refer to a hardware interface configured to sense one or more physical parameters from an input source and convert the parameters into analog electrical signals suitable for further digitization and processing. Specifically, the signal acquisition unit functions as the initial interface in the signal processing chain, responsible for accurately capturing multi-dimensional physical phenomena such as, but not limited to, voltage, current, displacement, force, temperature, or acceleration. The acquisition unit includes sensor interfaces, signal transmission lines, and analog front-end circuits designed to maintain signal integrity, minimize noise, and preserve temporal and spatial alignment across multiple sensing channels. Further, the output of the signal acquisition unit feeds directly into one or more analog-to-digital converters, ensuring that high-fidelity continuous-time signals are available for discrete-time analysis and adaptive filtering. Furthermore, the types of signal acquisition units include single-channel sensing modules, multi-channel synchronized sensor arrays, differential signal capture interfaces, and integrated sensor packages with built-in signal conditioning. The units are selected based on the nature of the input source and the dimensionality of the signals to be processed. The technique of operation involves continuous or periodic sampling of the physical parameter using transducers, followed by low-noise amplification, impedance matching, and signal buffering. The conditioned analog signal is routed to the analog-to-digital converter to generate discrete sampled data.
As used herein, the terms “analog-to-digital converter”, “ADC”, and “converter” are used interchangeably and refer to an electronic component that transforms continuous analog electrical signals into discrete digital representations. The ADC performs sampling of the analog input at specified intervals and assigns quantized values corresponding to signal amplitude at each sampling point. The core functional blocks include a sample-and-hold circuit, a quantizer, and encoding logic. The ADC types applicable to signal processing systems include successive approximation register (SAR), sigma-delta, flash, and pipeline converters. Furthermore, the selection of ADC architecture depends on resolution, sampling speed, power efficiency, and noise tolerance requirements for the target signal. The procedure of operation involves receiving analog signals from a sensing unit and periodically converting them into binary-coded digital values for subsequent digital processing. A sampling frequency is chosen based on the Nyquist criterion and signal bandwidth. The output from the ADC comprises a stream of numerical values corresponding to amplitude levels of the input signal over time. Therefore, the precision of the ADC directly affects the fidelity of the signal in the digital domain and influences downstream control and decision-making processes.
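The quantization step of an ADC can be sketched as a truncating quantizer; the 12-bit resolution and 3.3 V reference below are illustrative assumptions. The sampling rate itself would be chosen per the Nyquist criterion (at least twice the signal bandwidth), which this snippet does not model:

```python
def quantize(sample_volts, v_ref=3.3, bits=12):
    """Map an analog voltage in [0, v_ref) to a binary-coded digital value."""
    levels = 1 << bits                           # 2**bits quantization levels
    code = int(sample_volts / v_ref * levels)    # truncating quantizer
    return max(0, min(levels - 1, code))         # clamp to the representable range
```

The resolution (`bits`) bounds the quantization error and hence the fidelity of the signal in the digital domain, as noted above.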
As used herein, the terms “physical parameter” and “parameter” are used interchangeably and refer to a measurable attribute or quantity originating from a physical process or system, which varies in response to operational, environmental, or mechanical conditions. Specifically, the physical parameter includes, but is not limited to, electrical, mechanical, or thermodynamic quantities such as voltage, current, displacement, velocity, acceleration, pressure, temperature, or vibration. The parameters serve as the primary source of information for characterizing the state or behavior of the system under observation. Further, each physical parameter is transduced into an analog signal by a corresponding sensor element and transmitted to the signal acquisition unit for further processing. The selection of relevant physical parameters depends on the nature of the monitored system and the desired resolution or sensitivity of the adaptive filtering operation. Furthermore, the types of physical parameters include scalar parameters such as temperature or pressure, vector parameters such as multi-axis acceleration or velocity, and time-varying waveforms such as alternating current or dynamic load profiles. The technique of processing begins with continuous sensing of the selected parameter using calibrated transducers or probes, followed by signal transmission to the acquisition unit. The analog signals are digitized to form sampled multi-dimensional data, which are grouped into sampling windows for preprocessing and filtering. Consequently, the local variations in the physical parameter, such as sudden changes in amplitude or statistical irregularities, influence the adaptive window selection and the proximity-based filtering strategy. The system maintains high fidelity in filtered outputs by adjusting filtering weights and window configurations in accordance with the spatial and temporal properties of the sensed physical parameters.
As used herein, the terms “control unit” and “controller unit” are used interchangeably and refer to a digital processing module configured to interpret, process, and manage input signals for driving and regulating downstream components within a signal processing system. The control unit comprises processing logic, memory elements, and firmware or algorithms for decision-making and command generation. The common types include microcontrollers, Digital Signal Processors (DSPs), Field-Programmable Gate Arrays (FPGAs), and Application-Specific Integrated Circuits (ASICs). The control unit operates as a centralized logic system that coordinates data flow and signal transformation across interconnected modules, including filtering, coordinate transformation, and gate driving circuits. The technique of operation involves receiving digitized input from an analog-to-digital converter, executing signal conditioning procedures such as synchronization and transformation into rotating reference frames, and applying filtering algorithms to isolate or enhance specific signal features.
As used herein, the term “discrete multi-dimensional data” refers to a structured set of digital samples obtained by quantizing continuous analog signals sensed from one or more physical parameters across multiple axes, channels, or sensor modalities. Specifically, the discrete multi-dimensional data represents sampled values that preserve temporal, spatial, or feature-wise relationships among components, enabling comprehensive signal analysis. Further, each data point is a vector comprising two or more correlated elements, such as simultaneous readings from a tri-axial accelerometer, voltage and current from a power system, or pixel intensities from adjacent regions of an image. The generation of discrete multi-dimensional data involves time-synchronized sampling using analog-to-digital converters, ensuring consistent data alignment across dimensions for robust filtering and feature extraction within adaptive signal processing workflows. The types of discrete multi-dimensional data include time-series vectors from sensor arrays, spatially arranged grids from imaging systems, and multi-channel recordings from hybrid signal sources. The way of construction involves segmenting the digitized signal stream into overlapping or non-overlapping windows, each containing a set of data vectors representing the local signal context. Furthermore, each window is subjected to signal conditioning, followed by distance computation between the central vector and neighbouring vectors within the window. The computed distances are transformed into weights using predefined or adaptive functions, and the final filtered output is derived through weighted aggregation. The structural properties inherent in the discrete multi-dimensional data, such as local variance, entropy, or directional gradients, inform dynamic window configuration and proximity-based relevance adjustment, enabling noise-suppressed yet detail-preserving signal enhancement.
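The segmentation of a digitized stream into overlapping sampling windows, as described above, can be sketched as follows (stride-1 overlapping windows are one illustrative choice; non-overlapping segmentation is equally possible):

```python
import numpy as np

def sliding_windows(stream, size=3):
    """Segment a stream of data vectors (shape (n,) or (n, d)) into
    overlapping windows of `size` consecutive samples, stride 1."""
    stream = np.asarray(stream, dtype=float)
    return [stream[i:i + size] for i in range(len(stream) - size + 1)]
```

Each returned window carries the local signal context (centre vector plus neighbours) that the conditioning, distance, and aggregation modules then operate on.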
As used herein, the term “adaptive distance-weighted filtering” refers to a signal enhancement procedure that computes a filtered value for a target data point by aggregating surrounding values within a dynamically defined neighbourhood, where each neighbouring value contributes proportionally to its similarity or proximity to the target. Specifically, the adaptive distance-weighted filtering operates on discrete multi-dimensional data sampled from physical parameters, using a distance metric such as Euclidean or Mahalanobis distance to quantify similarity between vectors. The filtering weight assigned to each neighbouring point decreases monotonically with increasing distance, ensuring that closer or more similar points exert a greater influence on the final output. Further, the filtering process dynamically adapts to signal characteristics by adjusting window parameters such as size, shape, or orientation based on local statistical features, including variance, entropy, or gradient direction, preserving structural detail while suppressing noise. The types of adaptive distance-weighted filtering include isotropic filtering with circular or square windows for uniform noise profiles, anisotropic filtering with orientation-aligned windows for preserving directional features, and relevance-modulated filtering where computed weights are further scaled by learned or adaptive relevance scores. The way of execution involves constructing a local sampling window around each target point, computing distance values between the target and each neighbour, applying a decay function (Gaussian or exponential) to obtain weights, and performing a weighted sum of all values within the window. The aggregation process is normalized to maintain output amplitude consistency. Relevance factors may be derived from historical data statistics, spatial derivatives, or application-specific learning modules to refine the filtering response.
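The execution steps just described (window construction, distance computation, decay weighting, normalized aggregation, optional relevance modulation) can be sketched for one window of d-dimensional vectors. The Gaussian decay and the `sigma` and `relevance` parameters are illustrative assumptions:

```python
import numpy as np

def adaptive_filter(window, sigma=1.0, relevance=None):
    """Adaptive distance-weighted filtering of one window of data vectors.

    `window` is an (n, d) array of n neighbouring d-dimensional samples;
    the centre row is the target point. `relevance`, if given, is a length-n
    vector of per-neighbour relevance factors that further scales the weights.
    """
    window = np.atleast_2d(np.asarray(window, dtype=float))
    center = window[len(window) // 2]
    dist = np.linalg.norm(window - center, axis=1)   # Euclidean proximity scores
    w = np.exp(-(dist ** 2) / (2 * sigma ** 2))      # Gaussian decay of distance
    if relevance is not None:
        w = w * np.asarray(relevance, dtype=float)   # relevance-modulated weights
    w /= w.sum()                                     # normalize for amplitude consistency
    return w @ window                                # weighted aggregation -> filter value
```

A constant window passes through unchanged, while isolated spikes among otherwise similar neighbours are pulled toward the local consensus.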
As used herein, the term “filter value” refers to a processed output derived from a local neighbourhood of discrete multi-dimensional data, computed to suppress noise and enhance relevant signal characteristics. Specifically, the filter value is generated by applying an adaptive distance-weighted aggregation mechanism, where neighbouring data vectors within a dynamically configured window are assigned weights based on their proximity to a central reference vector. The filter value represents a context-sensitive replacement or modification of the original center point, preserving structural fidelity in low-noise regions and performing aggressive smoothing in high-variance or noisy zones. The computation integrates contributions from all valid neighbours, scaled by decay-based distance functions and optionally modulated by relevance factors, to produce an output that reflects localized signal behavior with minimized distortion. The types of filter values include scalar outputs derived from one-dimensional projections of vector data, vector outputs preserving all original dimensions of the input data point, and composite outputs incorporating multi-scale or multi-feature aggregation. The way of generation involves initializing a sampling window around the center data point, computing pairwise distances to all neighbouring points within the window using a selected metric, transforming the distances into weights using exponential or monotonic decay functions, and performing a weighted sum over all neighbourhood vectors. The sum is normalized to account for the cumulative weight contribution, ensuring amplitude consistency. Further, the optional relevance modulation adjusts the weight contribution based on signal-specific features such as edge strength, local coherence, or entropy gradients.
The resulting filter value serves as the output of the adaptive filtering operation, feeding subsequent analysis or control systems with enhanced, noise-reduced signal representations.
As used herein, the terms “output interface” and “end interface” are used interchangeably and refer to the end component or load that receives the processed or controlled electrical signal for performing a functional operation. The output interface represents the terminal stage of a signal processing system, where electrical energy is converted into mechanical, thermal, or another form of usable energy. The types of output interface include electric motors, actuators, lighting systems, inverters, resistive loads, and power conversion modules. The selection of the output interface depends on application-specific requirements such as power rating, voltage class, and operational dynamics. The procedure of operation involves receiving controlled output signals from a gate driver or control unit, which regulate the magnitude, frequency, and phase of the electrical input supplied to the output interface. The output interface responds to the signals to perform designated output actions, such as rotation in motors or heating in resistive elements. The signal conditioning and regulation by upstream modules ensure the output interface operates with optimal performance, minimal losses, and high reliability under varying load conditions.
As used herein, the term “signal conditioning module” refers to a pre-processing unit configured to prepare raw sampled data for further computational analysis by standardizing signal characteristics and eliminating undesired variations. Specifically, the signal conditioning module operates on discrete multi-dimensional data generated by the analog-to-digital converter, performing operations such as amplitude normalization, statistical outlier suppression, baseline correction, and signal scaling. The module ensures consistency and reliability of the data by bringing each component of the multi-dimensional signal into a common dynamic range, correcting any sensor-induced offsets, and removing transient anomalies. Further, the preprocessing enhances the stability and accuracy of subsequent adaptive filtering, especially in environments with fluctuating or mixed signal sources. The types of signal conditioning include static normalization using predefined bounds, dynamic normalization based on local mean or standard deviation, and threshold-based suppression of outliers exceeding statistical limits. The method of operation begins by segmenting the multi-dimensional signal into sampling windows, within which statistical metrics such as mean, variance, or interquartile range are computed. Further, the amplitude normalization is performed by scaling each vector component to fit within a fixed operational range, while outlier suppression replaces or de-weights signal points that deviate significantly from expected values. The conditioned data is then passed to the distance computation and weight aggregation modules, enabling structurally robust filtering. The signal conditioning module plays a critical role in maintaining data integrity, reducing signal distortion, and supporting precise neighborhood comparisons in the adaptive distance-weighted filtering process.
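The conditioning operations above (statistical outlier suppression followed by min-max amplitude normalization) can be sketched per window. The z-score threshold and the choice to replace outliers with the local mean are illustrative assumptions:

```python
import numpy as np

def condition(window, z_max=3.0):
    """Z-score outlier suppression followed by min-max scaling for one window."""
    x = np.asarray(window, dtype=float)
    mu, sd = x.mean(), x.std()
    if sd > 0:
        z = (x - mu) / sd
        x = np.where(np.abs(z) > z_max, mu, x)   # replace outliers with the local mean
    lo, hi = x.min(), x.max()
    if hi == lo:
        return np.zeros_like(x)                  # flat window: nothing to scale
    return (x - lo) / (hi - lo)                  # min-max scale into [0, 1]
```

Bringing every component into a common dynamic range in this way keeps the subsequent distance computations comparable across channels.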
As used herein, the term “distance computation module” refers to a processing component designed to quantify the similarity or dissimilarity between data points within a multi-dimensional sampling window by calculating numerical measures of separation. Specifically, the distance computation module calculates distances between a reference data vector and neighboring vectors using predefined mathematical metrics such as, but not limited to, Euclidean distance, Manhattan distance, or Mahalanobis distance. Further, the calculated distances serve as proximity scores that guide the weighting of neighboring points in the adaptive filtering process, enabling context-aware signal smoothing while preserving structural details. The module supports dynamic adjustment of distance metrics and parameters to accommodate varying signal characteristics and dimensionalities. The types of distance metrics include standard Euclidean distance, which measures straight-line separation in multi-dimensional space; Manhattan distance, which sums absolute differences along each dimension; and Mahalanobis distance, which accounts for data covariance and correlation among dimensions. The technique of operation involves extracting the central reference vector from the sampling window and iteratively computing distances to all neighboring vectors within the window boundaries. The computed distances are forwarded to the weight aggregation module, where they influence the calculation of filtering weights according to monotonic decay functions. Further, the parameter tuning and metric selection depend on signal modality and noise characteristics, ensuring accurate representation of local similarity. The distance computation module thereby facilitates adaptive, localized filtering by providing precise quantification of data point proximity within the multi-dimensional signal space.
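By way of illustration only, the Euclidean and Manhattan metrics named above may be sketched in Python as follows (the Mahalanobis variant is omitted because it additionally requires a covariance estimate; the function names are illustrative and form no part of the claimed system):

```python
import math

def euclidean(a, b):
    # Straight-line separation: square root of the sum of squared
    # per-dimension differences between the two vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    # Sum of absolute differences along each dimension.
    return sum(abs(x - y) for x, y in zip(a, b))

ref = (1.0, 2.0)   # central reference vector of the sampling window
nbr = (4.0, 6.0)   # one neighbouring vector
print(euclidean(ref, nbr))  # 5.0
print(manhattan(ref, nbr))  # 7.0
```

Either value may then serve as the proximity score forwarded to the weight aggregation module.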
As used herein, the term “weight aggregation module” refers to a processing component responsible for combining multiple data points within a sampling window by assigning and applying context-sensitive weights to each point, thereby generating a single filtered output value. Specifically, the weight aggregation module receives computed distance metrics from the distance computation module and converts these distances into normalized weights using exponential decay or other monotonic functions. The weights represent the relative influence of each neighboring data point based on proximity and similarity to the reference point. The module performs a weighted sum of the neighboring signal values, incorporating optional modulation by adaptive relevance factors that reflect local signal features, to produce an enhanced signal output that preserves important structures while reducing noise. The types of weight aggregation methods include Gaussian weighting functions that emphasize close neighbors, inverse distance weighting that linearly scales influence with proximity, and adaptive weighting schemes modulated by learned or statistical relevance parameters. The technique involves receiving distance values for all points within the sampling window, transforming these distances into preliminary weights using the chosen decay function, and normalizing the weights to ensure their sum equals unity. The normalized weights are multiplied by the corresponding neighboring data values and summed to generate the filtered output. Further, the optional incorporation of adaptive relevance factors adjusts the weights based on local variance, edge strength, or entropy, enabling context-sensitive filtering. The weight aggregation module thus plays a critical role in balancing noise suppression with signal detail preservation by effectively leveraging spatial correlations in the multi-dimensional data.
As used herein, the term “min-max scaling” refers to a data normalization technique that transforms numerical values within a dataset to a predefined range, typically between zero and one, by linearly rescaling original data points based on the minimum and maximum values in the dataset. Specifically, the min-max scaling is applied to multi-dimensional signal data during signal conditioning to standardize amplitude levels across different dimensions and sampling windows. The normalization ensures uniformity in data representation, facilitating accurate distance computations and consistent weighting in adaptive filtering processes. Further, the scaling preserves the relative distribution of data points while removing amplitude disparities that may arise due to sensor variations or environmental factors. The types of min-max scaling include global scaling, where the minimum and maximum values are computed over the entire dataset, and local scaling, where scaling parameters are derived within localized sampling windows or segments. The way of execution involves identifying the minimum and maximum values for each dimension within the target data segment, subtracting the minimum value from each data point, and dividing the result by the range (maximum minus minimum). The scaled data points thus fall within the target interval, allowing direct comparison across different dimensions and samples. In cases with data containing outliers, robust variants of min-max scaling exclude extreme values to prevent distortion of the scaling range. The application of min-max scaling enhances the stability and performance of downstream modules such as distance computation and weight aggregation in the adaptive filtering system.
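For illustration only, the min-max formula described above — subtract the segment minimum, then divide by the range — may be sketched in Python as follows (the function name and the degenerate-range handling are illustrative assumptions, not part of the claimed system):

```python
def min_max_scale(values, lo=0.0, hi=1.0):
    # Linearly map each value into [lo, hi] using the segment's extrema.
    v_min, v_max = min(values), max(values)
    if v_max == v_min:
        # Degenerate segment (constant signal): map every point to the
        # lower bound rather than dividing by zero.
        return [lo for _ in values]
    scale = (hi - lo) / (v_max - v_min)
    return [lo + (v - v_min) * scale for v in values]

print(min_max_scale([2.0, 4.0, 6.0]))  # [0.0, 0.5, 1.0]
```

Global scaling would compute `v_min`/`v_max` once over the whole dataset, while local scaling applies the same formula per sampling window.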
As used herein, the terms “sampling window” and “sampling frame” are used interchangeably and refer to a contiguous subset of discrete multi-dimensional data points selected from the overall signal for localized analysis and processing. Specifically, the sampling window defines the neighborhood around a target data point within which distance computations, weight assignments, and adaptive filtering are performed. The window’s shape, size, and orientation adapt dynamically based on local signal characteristics such as variance, entropy, or directional gradients to optimize the balance between noise suppression and detail preservation. Further, the sampling window facilitates context-sensitive filtering by providing a focused data segment that captures relevant spatial or temporal relationships inherent in the multi-dimensional signal. The types of sampling windows include fixed-shape windows such as rectangular, circular, or elliptical regions, and adaptive windows whose dimensions and geometry vary in response to signal statistics or structural features. The technique of operation involves selecting a central reference data point and defining a surrounding neighborhood by including all data points that fall within the configured window parameters. Further, the statistical measures calculated over the sampling window inform dynamic adjustments, enabling enlargement in homogeneous regions and contraction in noisy or high-variation areas. The windowed data subset undergoes preprocessing, distance calculation, and weighted aggregation to generate filtered outputs. The adaptive sampling window thus serves as a critical mechanism to localize signal processing operations, enhancing filtering accuracy and computational efficiency in the multi-dimensional signal processing system.
As used herein, the term “entropy” refers to a quantitative measure of randomness, disorder, or unpredictability within a dataset or signal segment, reflecting the complexity or information content present in the data. Specifically, the entropy evaluates the variability and structural complexity within the multi-dimensional signal data localized inside a sampling window. The higher entropy values indicate regions with increased signal variation or noise, while lower entropy values correspond to more homogeneous or smooth signal areas. Further, the entropy serves as a critical parameter for guiding adaptive filtering by informing adjustments to the sampling window size, shape, or filtering weights, thereby enabling selective preservation of important signal details and effective noise reduction. The types of entropy measures include Shannon entropy, which calculates the expected information content based on probability distributions of signal values; Renyi entropy, providing a generalized form with adjustable sensitivity to distribution tails; and sample entropy, estimating the regularity and self-similarity within time-series data. The technique of computation involves constructing a probability distribution of signal intensities or patterns within the sampling window, calculating the logarithmic sum of weighted probabilities according to the chosen entropy formula, and producing a scalar entropy value representing local signal complexity. Further, the entropy values directly influence adaptive filtering parameters, supporting dynamic tuning of the processing window and weight modulation to maintain signal fidelity across varying noise and structural conditions in the multi-dimensional signal processing framework.
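As a non-limiting sketch, the Shannon entropy computation described above — build an empirical probability distribution over the window, then take the negative weighted log-sum — may be written in Python as follows (assumes the window values are already quantized into discrete bins; names are illustrative):

```python
import math
from collections import Counter

def shannon_entropy(window):
    # Empirical probability of each discrete value in the window,
    # then H = -sum(p * log2(p)) in bits.
    counts = Counter(window)
    n = len(window)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Homogeneous window: zero entropy; four equiprobable values: 2 bits.
print(shannon_entropy([1, 1, 1, 1]))  # 0.0
print(shannon_entropy([1, 2, 3, 4]))  # 2.0
```

The resulting scalar can then drive the window-size adaptation: low values flag smooth regions, high values flag complex or noisy regions.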
As used herein, the term “current data point” refers to the specific discrete multi-dimensional signal vector within a sampling window that serves as the central reference for filtering and analysis operations. Specifically, the current data point represents the target position around which neighbourhood data points are evaluated to compute distances, assign weights, and generate a filtered output value. The current data point acts as the anchor for adaptive filtering, where local signal characteristics and proximity relationships are assessed relative to this reference. Further, accurate identification and processing of the current data point ensure effective noise suppression while preserving important signal features. The types of current data points include single-sample vectors representing instantaneous measurements, aggregated vectors obtained through temporal or spatial averaging, and representative points derived from clustering or segmentation of the multi-dimensional signal. The technique of operation involves selecting the central data vector within the dynamically defined sampling window for each filtering iteration, extracting multi-dimensional components, and calculating relative distances to neighbouring vectors. The current data point undergoes conditioning, and its relationship to neighbours informs the computation of filtering weights and final output generation. Further, the proper handling of the current data point is fundamental to the adaptive distance-weighted filtering mechanism, enabling localized signal enhancement within the multi-dimensional processing system.
As used herein, the term “surrounding data points” refers to the set of discrete multi-dimensional signal vectors located within the defined sampling window around a central reference vector, known as the current data point. Specifically, the surrounding data points provide contextual information necessary for evaluating local signal variations and structural patterns. The neighbouring points serve as candidates for distance measurement and weight assignment, contributing collectively to the generation of a filtered output that enhances signal fidelity and suppresses noise. Further, the precise inclusion and characterization of surrounding data points enable adaptive filtering that respects spatial or temporal correlations inherent in the multi-dimensional signal. The types of surrounding data points include immediate adjacent neighbours in spatial grids, temporally proximate samples in time-series data, and statistically similar points identified by clustering within the sampling window. The method of operation involves defining the sampling window parameters, shape, size, and orientation, and extracting all data points that fall within the neighbourhood relative to the current data point. Further, the distance metrics are computed between the current data point and each surrounding data point to quantify similarity. The distances influence weight calculation, determining each neighbour’s contribution to the filtered output through weighted aggregation. Furthermore, the proper selection and processing of surrounding data points facilitate adaptive and context-sensitive filtering that dynamically responds to local signal characteristics within the multi-dimensional data processing framework.
As used herein, the terms “exponential decay function” and “decay function” are used interchangeably and refer to a mathematical function whose output decreases progressively, at a rate proportional to its current value, as the input variable increases. Specifically, the exponential decay function translates computed distances between data points into weighting factors, assigning higher weights to points closer to the reference and progressively lower weights to more distant points. The decay behavior ensures that neighbouring data points exert influence on the filtered output in a manner that reflects their spatial or feature proximity, thereby enhancing noise suppression while preserving local signal structure. The types of exponential decay functions include standard exponential decay with fixed decay rate parameters, adaptive exponential decay where decay constants are tuned based on signal statistics or learning algorithms, and truncated exponential decay functions that impose minimum thresholds to limit weight reduction. Further, the technique of application involves receiving distance values between the current data point and surrounding neighbours, substituting these distances into the exponential decay formula to compute preliminary weights, and normalizing the weights to ensure their sum equals unity. The parameters controlling decay steepness influence the extent of local smoothing versus detail preservation. The exponential decay function thus plays a critical role in the weight aggregation module by providing a mathematically tractable and context-sensitive mechanism for weighting multi-dimensional signal data points during adaptive filtering.
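Purely as an illustrative sketch, the distance-to-weight transformation with unity normalization described above may be expressed in Python as follows (the decay-rate parameter `beta` is an illustrative name for the steepness constant):

```python
import math

def decay_weights(distances, beta=1.0):
    # Raw weight falls off exponentially with distance;
    # beta controls the steepness of the decay.
    raw = [math.exp(-beta * d) for d in distances]
    total = sum(raw)
    # Normalize so the weights sum to unity.
    return [w / total for w in raw]

w = decay_weights([0.0, 1.0, 2.0])
print(w)        # closest point receives the largest weight
print(sum(w))   # unity, up to floating-point rounding
```

A truncated variant would additionally clamp each raw weight to a minimum threshold before normalization.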
As used herein, the term “monotonic function” refers to a mathematical function that preserves a consistent order in its output relative to its input, either never decreasing or never increasing throughout its domain. Specifically, the monotonic functions transform computed proximity scores or distances into filtering weights, ensuring that closer or more similar data points receive equal or higher weights compared to those further away or less similar. The property guarantees an ordered and predictable relationship between data point similarity and assigned weight, facilitating stable and interpretable adaptive filtering that emphasizes relevant neighbours while attenuating distant or dissimilar points. The types of monotonic functions include strictly increasing functions with the output strictly rising with increasing input; non-decreasing functions, which allow plateaus but do not decrease; strictly decreasing functions, with output strictly falling as input increases; and non-increasing functions, which permit constant values but no increases. The technique of operation involves applying a monotonic transformation to proximity scores, such as distance metrics, to generate normalized weights. Typical examples include exponential decay functions, linear inverses, and sigmoid-based mappings, each maintaining the monotonic property to ensure consistent weighting behavior. The monotonic function’s selection and parameterization directly impact the balance between noise suppression and detail preservation in the adaptive distance-weighted filtering system.
As used herein, the term “adaptive relevance factor” refers to a dynamic scalar or vector parameter that modulates the influence of individual data points during filtering based on contextual signal characteristics and learned or statistical criteria. Specifically, the adaptive relevance factor adjusts filtering weights assigned to surrounding data points by incorporating additional information such as, but not limited to, local signal texture, structural features, or temporal consistency. The modulation enables selective enhancement or attenuation of contributions from neighbouring points, improving the filtering system’s ability to preserve significant signal details while effectively suppressing noise or irrelevant variations. The adaptive relevance factor evolves in response to changes in signal conditions, enabling context-sensitive filtering that adapts to diverse multi-dimensional signal environments. The types of adaptive relevance factors include static learned parameters derived from offline training on representative datasets, dynamically computed factors based on local statistical metrics (variance, entropy), and hybrid factors combining both learned and real-time adaptive components. Further, the technique of determination involves analyzing local signal properties within the sampling window, evaluating feature relevance or confidence measures, and computing scaling values applied multiplicatively to base filtering weights.
As used herein, the term “contextual features” refers to measurable characteristics or attributes extracted from localized regions of a multi-dimensional signal that describe underlying structural, statistical, or spatial properties relevant to adaptive processing. Specifically, the contextual features quantify aspects such as local variance, entropy, directional gradients, texture patterns, or signal smoothness within the sampling window. Further, the features inform filtering decisions by capturing the signal environment surrounding each data point, enabling the filtering system to adapt its parameters dynamically to preserve important details while attenuating noise and irrelevant variations. The types of contextual features include statistical measures such as mean, variance, skewness, and entropy; geometric or spatial descriptors such as gradient magnitude, edge orientation, and curvature; and texture-based metrics derived from co-occurrence matrices or frequency-domain analyses. The technique of extraction involves selecting a localized sampling window around the current data point, computing relevant statistical or geometric metrics over the data points contained therein, and aggregating these measurements into feature vectors. The contextual feature vectors influence adaptive filtering modules by guiding window shape and size adjustments, weight computations, and relevance factor modulation, thereby optimizing filter performance for diverse signal conditions within the multi-dimensional signal processing framework.
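As a non-limiting one-dimensional sketch, a minimal contextual feature vector of the kind described above — mean, sample variance, and a simple gradient-based roughness descriptor — may be extracted as follows (the feature names and the choice of descriptors are illustrative assumptions):

```python
from statistics import mean, variance

def contextual_features(window):
    # Minimal feature vector for a 1-D sampling window: mean and sample
    # variance (statistical measures) plus the mean absolute gradient
    # (a simple local-roughness / edge-strength descriptor).
    grads = [abs(b - a) for a, b in zip(window, window[1:])]
    return {
        "mean": mean(window),
        "variance": variance(window),
        "mean_abs_gradient": mean(grads),
    }

print(contextual_features([1.0, 2.0, 4.0, 2.0]))
```

Such a feature vector would then guide window-size adjustment and relevance-factor modulation in the adaptive filtering modules.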
In accordance with an aspect of the present disclosure, there is provided a signal processing system for filtering a multi-dimensional signal generated from an input source, the system comprising:
- a signal acquisition unit configured to sense at least one physical parameter of the multi-dimensional signal;
- at least one analog-to-digital converter communicably coupled to the signal acquisition unit and configured to:
- receive the at least one sensed physical parameter; and
- generate a sampled discrete multi-dimensional data for the at least one received physical parameter;
- a control unit communicably coupled to the at least one analog-to-digital converter, wherein the control unit is configured to perform an adaptive distance-weighted filtering on the sampled discrete multi-dimensional data to generate at least one filter value; and
- an output interface communicably coupled to the control unit.
Referring to figure 1, in accordance with an embodiment, there is described a signal processing system 100 for filtering a multi-dimensional signal generated from an input source 102. The system 100 comprises a signal acquisition unit 104 configured to sense at least one physical parameter of the multi-dimensional signal, at least one analog-to-digital converter 106 communicably coupled to the signal acquisition unit 104 and configured to: receive the at least one sensed physical parameter and generate a sampled discrete multi-dimensional data for the at least one received physical parameter. Further, the system 100 comprises a control unit 108 communicably coupled to the at least one analog-to-digital converter 106, wherein the control unit 108 is configured to perform an adaptive distance-weighted filtering on the sampled discrete multi-dimensional data to generate at least one filter value. Furthermore, the system 100 comprises an output interface 110 communicably coupled to the control unit 108.
The signal processing system 100 operates by acquiring multi-dimensional signal data from an input source 102 through a signal acquisition unit 104 that senses one or more physical parameters representative of the underlying signal environment. Subsequently, the sensed physical parameters undergo analog-to-digital conversion via at least one analog-to-digital converter 106, generating discrete sampled multi-dimensional data that preserves spatial, temporal, or feature-specific information in digital form. Further, the control unit 108 receives the sampled data and implements an adaptive distance-weighted filtering method designed to reduce noise and distortions while retaining critical signal details by leveraging proximity-based weighting of neighboring data points within dynamically defined sampling windows. Specifically, the adaptive distance-weighted filtering technique utilizes computed distances between the current data point and surrounding sampled points to assign filtering weights, where weights decrease exponentially with increasing distance, thereby emphasizing closer neighbors in the filtering operation. Furthermore, the control unit 108 dynamically adjusts window parameters such as size, shape, and orientation based on local signal characteristics, including statistical measures such as, but not limited to, variance and entropy, enabling selective smoothing in homogeneous regions and preservation of edges or textures in complex regions. Further, the weight aggregation integrates the adaptive weights with an optional relevance factor to generate a filtered signal value that balances noise suppression and detail retention, producing a refined output signal for further processing or analysis. Beneficially, the enhanced signal fidelity is achieved through context-sensitive noise reduction that adapts to the local signal environment, minimizing information loss commonly associated with fixed-parameter filters.
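The overall filtering loop may be illustrated, under simplifying assumptions, by the following minimal one-dimensional Python sketch: each sample is filtered as a normalized, exponentially decayed weighted sum of its neighbours, with distance taken as amplitude difference. A fixed window is used here for brevity; the claimed system adapts window size, shape, and orientation dynamically, and all names and parameter values are illustrative.

```python
import math

def adaptive_filter(signal, half_window=2, beta=2.0):
    # For each sample, weight the neighbours inside the window by an
    # exponential decay of their amplitude distance to the centre,
    # then emit the normalized weighted sum.
    out = []
    n = len(signal)
    for i, center in enumerate(signal):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        neighbours = signal[lo:hi]
        weights = [math.exp(-beta * abs(v - center)) for v in neighbours]
        total = sum(weights)
        out.append(sum(w * v for w, v in zip(weights, neighbours)) / total)
    return out

noisy = [0.0, 0.1, 5.0, 0.2, 0.1]  # isolated spike at index 2
print(adaptive_filter(noisy))
```

Because dissimilar neighbours receive vanishing weights, the spike at index 2 is largely preserved rather than smeared into its neighbours, illustrating the detail-preservation behaviour described above.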
Advantages of the signal processing system 100 include improved preservation of subtle structural details and edges in multi-dimensional signals, increased robustness to varying noise levels, and flexibility to process diverse signal types and modalities. The system’s 100 adaptive nature ensures optimal filtering performance across heterogeneous data conditions, making it suitable for applications requiring precise signal interpretation, such as image processing, sensor data analysis, and multi-modal signal integration.
Referring to figure 2, in accordance with an embodiment, there is described a signal processing system 100 for filtering a multi-dimensional signal generated from an input source 102. The system 100 comprises a signal acquisition unit 104 configured to sense at least one physical parameter of the multi-dimensional signal, at least one analog-to-digital converter 106 communicably coupled to the signal acquisition unit 104 and configured to: receive the at least one sensed physical parameter and generate a sampled discrete multi-dimensional data for the at least one received physical parameter. Further, the system 100 comprises a control unit 108 communicably coupled to the at least one analog-to-digital converter 106, wherein the control unit 108 is configured to perform an adaptive distance-weighted filtering on the sampled discrete multi-dimensional data to generate at least one filter value. Furthermore, the system 100 comprises an output interface 110 communicably coupled to the control unit 108. The control unit 108 integrates a signal conditioning module 112, a distance computation module 114, and a weight aggregation module 116 to perform the adaptive distance-weighted filtering on the sampled discrete multi-dimensional data. Specifically, the signal conditioning module 112 preprocesses the raw sampled data by normalizing amplitudes within a predefined dynamic range and suppressing outliers based on statistical thresholds, thereby enhancing data quality and consistency prior to filtering. Further, the distance computation module 114 calculates proximity metrics between the current data point and surrounding neighbors within the sampling window using Euclidean or other suitable distance measures, quantifying local similarity and spatial relationships.
Subsequently, the weight aggregation module 116 assigns filtering weights based on these computed distances, typically applying an exponential decay function to prioritize nearer neighbors, and combines the weighted signal values to produce a refined filter output. The filtering begins with the signal conditioning module 112, which scales the multi-dimensional data to reduce amplitude variations and remove anomalous points, preventing distortion during filtering. Subsequently, the distance computation module 114 processes the conditioned data to generate distance scores reflecting the relative closeness of each neighboring point to the current data point. The scores serve as inputs to the weight aggregation module 116, where monotonic functions convert distances into normalized weights that influence the filtering strength of each neighbor. Further, the weight aggregation incorporates adaptive relevance factors that modulate weights based on contextual signal features, ensuring that filtering adapts dynamically to local signal complexity. The weighted sum of neighboring data points yields the filtered signal value that enhances signal integrity and suppresses noise. The modular control unit configuration, as mentioned above, provides precise and context-aware filtering that improves signal-to-noise ratio while preserving critical details in multi-dimensional data. Advantages of the modular system include robustness against varying noise profiles and signal heterogeneity, improved edge and texture preservation due to adaptive weight modulation, and scalability to diverse signal types through flexible module parameterization. The compartmentalized design enables efficient data processing pipelines, facilitating real-time or near-real-time filtering applications in complex signal environments such as imaging, sensor fusion, and multidimensional data analytics.
In an embodiment, the signal conditioning module 112 is configured to preprocess the sampled discrete multi-dimensional data by performing amplitude normalization through min-max scaling. The signal conditioning module 112 preprocesses the sampled discrete multi-dimensional data by implementing amplitude normalization using min-max scaling. The min-max scaling method transforms the amplitude values of the sampled data into a defined dynamic range, typically mapping the minimum observed value to a lower bound of 0 and the maximum observed value to an upper bound of 1. The linear scaling ensures uniform distribution of data amplitudes across the target range, effectively standardizing the signal magnitudes regardless of the original amplitude variations. Further, the scaling operation involves computing the minimum and maximum amplitude values within the sampling window and applying the min-max normalization formula to each data point. The technique of min-max scaling within the signal conditioning module 112 is initiated by extracting amplitude extrema from the sampled discrete multi-dimensional data. Furthermore, each data point’s amplitude is adjusted according to the formula: normalized value = (original value − minimum value) ÷ (maximum value − minimum value). The transformation preserves the relative relationships between data points while bounding values within the predefined scale. Subsequently, the filtering operations leverage the normalized data, ensuring consistent and stable weight computations in downstream modules. By removing amplitude scale discrepancies, the signal conditioning module 112 mitigates the risk of biasing distance computations and weight assignments during adaptive filtering. The amplitude normalization through min-max scaling produces enhanced filter stability and improved noise rejection by maintaining a consistent amplitude scale across the input data.
Advantages of min-max scaling include increased robustness of the distance computation module and weight aggregation module against outliers and amplitude variability, as well as improved accuracy in proximity assessment within the adaptive filtering framework. The normalization facilitates reliable adaptive weight modulation and contributes to preserving critical signal features while suppressing noise, thereby improving overall filtering performance for multi-dimensional signal processing applications.
In an embodiment, the distance computation module 114 is configured to receive the pre-processed sampled data and dynamically select a sampling window with a set of surrounding data points based on an entropy of the pre-processed sampled data. The distance computation module 114 receives the pre-processed sampled data from the signal conditioning module 112 and dynamically selects a sampling window comprising a set of surrounding data points based on the entropy calculated over the local region of the pre-processed data. Further, the entropy serves as a quantitative measure of signal complexity or randomness within the neighbourhood of the current data point. Specifically, by analyzing entropy values, the module determines the appropriate size and shape of the sampling window to balance noise reduction and detail preservation. The regions exhibiting low entropy correspond to smooth or homogeneous signal areas, where larger windows are selected to maximize noise averaging, while high entropy regions correspond to complex or highly variable areas, where smaller windows are chosen to preserve signal features. The technique employed by the distance computation module 114 initiates with calculating the entropy metric over localized neighbourhoods within the pre-processed sampled data. Further, the entropy calculation involves evaluating the probability distribution of amplitude values or patterns within the candidate window region and computing the Shannon entropy to characterize signal disorder. The distance computation module 114 further applies predefined entropy thresholds or adaptive criteria to dynamically adjust the sampling window dimensions around the current data point. The selected window defines the set of surrounding data points for which distance metrics, such as Euclidean distances, are computed relative to the current point. The dynamic window selection enables context-sensitive distance measurement tailored to local signal characteristics. 
The entropy-based dynamic window selection improves filtering accuracy by adapting neighbourhood size to local signal complexity, reducing smoothing in regions containing important features, and enhancing noise suppression in uniform areas. Advantages of entropy-based dynamic window selection include enhanced edge and texture preservation due to minimized over-smoothing in high-entropy zones and improved noise rejection in low-entropy zones by leveraging larger contextual information. The above-mentioned approach optimizes the trade-off between noise reduction and detail retention, resulting in improved overall performance of the adaptive distance-weighted filtering process for multi-dimensional signal processing applications.
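For illustration only, the entropy-driven window selection described above may be sketched in one dimension as follows: a candidate neighbourhood is probed, its Shannon entropy computed, and the window half-width chosen against a threshold (the threshold value and half-width choices are illustrative assumptions, not claimed parameters):

```python
import math
from collections import Counter

def local_entropy(window):
    # Shannon entropy (bits) of the discrete values in the window.
    counts = Counter(window)
    n = len(window)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def select_half_window(signal, i, h_small=1, h_large=3, threshold=1.0):
    # Probe the largest candidate neighbourhood around index i:
    # high entropy -> small window (preserve features),
    # low entropy  -> large window (maximize noise averaging).
    lo, hi = max(0, i - h_large), min(len(signal), i + h_large + 1)
    return h_small if local_entropy(signal[lo:hi]) > threshold else h_large

smooth = [1, 1, 1, 1, 1, 1, 1]
busy = [1, 3, 2, 5, 4, 7, 6]
print(select_half_window(smooth, 3))  # 3 (large window in homogeneous region)
print(select_half_window(busy, 3))    # 1 (small window in complex region)
```

The selected half-width then defines the set of surrounding data points over which distances are computed.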
In an embodiment, the distance computation module 114 is configured to compute a distance between a current data point and the surrounding data points within the dynamically selected sampling window. The distance computation module 114 calculates the distance between a current data point and surrounding data points within the dynamically selected sampling window. The distance metric quantifies the similarity or dissimilarity between the multi-dimensional signal values of the current point and each neighbouring point. Specifically, the Euclidean distance is employed to measure the spatial and amplitude differences across all signal dimensions, ensuring an accurate representation of proximity in the multi-dimensional signal space. The calculated distances serve as a foundation for subsequent weight assignment in the adaptive filtering process. The technique involves obtaining the multi-dimensional vector representing the current data point and comparing it to the corresponding vectors of each surrounding data point within the selected window. The distance for each pair is computed using the Euclidean formula, which involves calculating the square root of the sum of squared differences across all dimensions. Alternative distance metrics, such as Mahalanobis or Manhattan distances, may be implemented depending on signal characteristics and application requirements. The distance values are normalized or scaled as necessary to maintain consistency across varying signal ranges and facilitate effective weighting. Beneficially, the precise distance computation improves the filtering accuracy by enabling context-aware weight assignments that prioritize closer or more similar neighbouring points. Advantages of the precise distance computation include enhanced noise suppression by reducing the influence of distant or dissimilar points and improved preservation of signal details by emphasizing locally relevant data. 
The above-mentioned approach supports adaptive filtering mechanisms that adjust dynamically to local signal structure, leading to superior performance in multi-dimensional signal processing tasks requiring robust noise reduction while maintaining critical features.
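By way of illustration, the Euclidean distance computation and normalization described above may be sketched as follows. This is a minimal Python sketch under stated assumptions: the function name is hypothetical, and max-scaling of the distances to [0, 1] is one illustrative choice of normalization.

```python
import numpy as np

def neighbour_distances(center_vec, neighbour_vecs):
    """Euclidean distance from the current data point to each neighbour
    across all signal dimensions, scaled to [0, 1] so that subsequent
    weighting behaves consistently across varying signal ranges."""
    diffs = np.asarray(neighbour_vecs, dtype=float) - np.asarray(center_vec, dtype=float)
    d = np.sqrt(np.sum(diffs ** 2, axis=1))   # per-neighbour Euclidean norm
    dmax = d.max()
    return d / dmax if dmax > 0 else d        # normalize by the largest distance
```

A Manhattan or Mahalanobis variant would replace only the norm computation, leaving the normalization step unchanged.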
In an embodiment, the weight aggregation module 116 is configured to assign a filtering weight to each data point in the sampling window based on an exponential decay function of the distance computed between the current data point and each surrounding data point. The exponential decay function transforms the distance metric into a weight that decreases monotonically as the distance increases, thereby emphasizing closer points while diminishing the influence of more distant points. The weighting scheme enhances the contribution of spatially or signal-wise proximate neighbours in the filtering process, improving the relevance and accuracy of the aggregated output. The technique involves applying the exponential decay function to each computed distance value. After the weights are computed, normalization is performed to ensure that the sum of all weights within the sampling window equals unity, facilitating unbiased aggregation of filtered values. The use of the exponential decay weighting function improves noise suppression by effectively reducing the impact of distant or dissimilar points that are less representative of the local signal context. Advantages of the exponential decay weighting function include enhanced preservation of fine details and edges due to stronger weighting of nearby points, improved robustness to outliers by attenuating their influence, and flexible adaptability to varying signal smoothness through tuning of the decay parameter. The weighting mechanism supports precise and context-aware adaptive filtering, yielding superior signal quality in multi-dimensional data processing applications.
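By way of illustration, the exponential decay weighting with unit-sum normalization described above may be sketched as follows (a minimal Python sketch; the function name and the default `decay` value are illustrative assumptions, with `decay` corresponding to the tunable decay parameter mentioned above).

```python
import numpy as np

def exp_decay_weights(distances, decay=1.0):
    """Map each computed distance to a weight via exp(-decay * d): weights
    decrease monotonically with distance. The weights are then normalised
    so that they sum to unity within the sampling window."""
    w = np.exp(-decay * np.asarray(distances, dtype=float))
    return w / w.sum()
```

A larger `decay` concentrates influence on the nearest points; a smaller one smooths more aggressively.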
In an embodiment, the weight aggregation module 116 is configured to generate a filtered signal value for the center point by performing a weighted sum of neighbouring signal values within the sampling window, wherein each weight is computed as a monotonic function of the corresponding proximity. Each neighbouring point contributes to the filtered value in proportion to its assigned weight, which reflects its degree of similarity or proximity to the center point. The weights are computed using a monotonic function of the corresponding proximity measure, ensuring that weights decrease consistently as distance or dissimilarity increases. The approach integrates local signal information while emphasizing the points most relevant to the center point. The technique involves calculating a weight for each neighbouring data point by applying a monotonic function to the proximity metric, such as a distance or similarity score. Common monotonic functions include, but are not limited to, exponential decay, inverse distance, and Gaussian kernels. After the weights are computed, normalization is performed so that the weights sum to one, maintaining the scale of the filtered value. The weighted sum operation then aggregates the neighbouring signal values using these normalized weights, resulting in a smoothed signal value at the center point that balances noise reduction and feature preservation. Generating the filtered signal value through monotonic weighting and weighted summation improves the accuracy and quality of signal filtering by adaptively emphasizing spatially or contextually relevant neighbours.
Advantages of filtered signal value include enhanced detail preservation due to selective influence of proximate data points, improved noise rejection through reduced impact of distant or dissimilar points, and flexibility in tuning the monotonic function to optimize filtering performance for varying signal characteristics. The weighted aggregation method supports adaptive and context-sensitive multi-dimensional signal processing with superior noise resilience and feature retention.
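By way of illustration, the weighted-sum aggregation with a tunable monotonic kernel described above may be sketched as follows (a minimal Python sketch; the function name is hypothetical, and exponential decay is used as the default kernel, with inverse distance or a Gaussian kernel as drop-in alternatives).

```python
import numpy as np

def filtered_value(neighbours, proximities, kernel=None):
    """Weighted sum of neighbouring signal values for the center point.
    `kernel` may be any monotonically decreasing function of the proximity
    metric; exponential decay is the default here."""
    if kernel is None:
        kernel = lambda d: np.exp(-d)
    w = kernel(np.asarray(proximities, dtype=float))
    w = w / w.sum()   # normalise so the weights sum to one
    return float(np.dot(w, np.asarray(neighbours, dtype=float)))
```

Because the weights are normalized before the dot product, the filtered value stays on the same scale as the neighbouring signal values.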
In an embodiment, the weight aggregation module 116 is configured to modulate the computed weight for each neighbouring data point by an adaptive relevance factor derived from contextual features of the multi-dimensional signal. The adaptive relevance factor dynamically adjusts the influence of each neighbouring point based on local signal characteristics, such as, but not limited to, texture, edge presence, or variance, thereby enhancing the filtering process’s sensitivity to the signal’s structural context. The modulation refines the weighting scheme to prioritize the more relevant data points, improving overall filtering accuracy and preserving critical signal features. The technique involves extracting contextual features from the multi-dimensional signal within the sampling window, which serve as indicators of local signal behaviour. The adaptive relevance factor is computed from these features through predefined functions or learned models, reflecting the importance of each neighbouring data point relative to the center point. The initially computed weight, typically derived from a distance-based monotonic function, is multiplied by the adaptive relevance factor to yield a context-aware modulated weight. Normalization then ensures that the sum of the modulated weights equals unity before the weighted aggregation for filtering is performed. Modulating the weights by an adaptive relevance factor leads to enhanced noise suppression and improved detail preservation by incorporating the local signal context into the weighting process.
Advantages of modulating weights by an adaptive relevance factor include increased robustness to signal variability, better edge and texture retention in multi-dimensional signals, and improved adaptability to diverse signal conditions without manual parameter tuning. The above-mentioned approach provides a refined and context-sensitive adaptive filtering mechanism that achieves superior performance in complex multi-dimensional signal processing environments.
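By way of illustration, the relevance-factor modulation described above may be sketched as follows. This is a minimal Python sketch in which the relevance factor is one illustrative choice, deviation from the local mean; the disclosure contemplates other contextual features (texture, edges) or learned models in its place, and the function name is hypothetical.

```python
import numpy as np

def modulate_weights(weights, window):
    """Modulate distance-based weights by an adaptive relevance factor.
    Illustrative factor: neighbours whose amplitude is close to the local
    mean are treated as more relevant, attenuating outliers."""
    window = np.asarray(window, dtype=float)
    dev = np.abs(window - window.mean())
    relevance = 1.0 / (1.0 + dev)            # in (0, 1], larger for typical points
    w = np.asarray(weights, dtype=float) * relevance
    return w / w.sum()                        # renormalise to unit sum
```

Starting from equal weights, an outlying neighbour ends up with the smallest modulated weight, which is the attenuation effect described above.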
In accordance with a second aspect, there is described a method for filtering a multi-dimensional signal generated from an input source, the method comprises:
- sensing at least one physical parameter of the multi-dimensional signal, via a signal acquisition unit;
- generating a sampled discrete multi-dimensional data for the at least one received physical parameter, via at least one analog-to-digital converter;
- preprocessing the sampled discrete multi-dimensional data by performing amplitude normalization through min-max scaling, via a signal conditioning module;
- computing a distance between a current data point and the surrounding data points within a dynamically selected sampling window, via a distance computation module; and
- assigning a filtering weight to each data point in the sampling window based on an exponential decay function of the computed distance, via a weight aggregation module.
Figure 3 describes a method 200 for filtering a multi-dimensional signal generated from an input source 102. The method 200 starts at a step 202. At the step 202, the method 200 comprises sensing at least one physical parameter of the multi-dimensional signal, via a signal acquisition unit 104. At a step 204, the method 200 comprises generating a sampled discrete multi-dimensional data for the at least one received physical parameter, via at least one analog-to-digital converter 106. At a step 206, the method 200 comprises preprocessing the sampled discrete multi-dimensional data by performing amplitude normalization through min-max scaling, via a signal conditioning module 112. At a step 208, the method 200 comprises computing a distance between a current data point and the surrounding data points within a dynamically selected sampling window, via a distance computation module 114. At a step 210, the method 200 comprises assigning a filtering weight to each data point in the sampling window based on an exponential decay function of the computed distance, via a weight aggregation module 116.
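By way of illustration, steps 206 through 210 of the method 200 may be combined into the following end-to-end sketch. This is a minimal 1-D Python sketch under stated assumptions: the sampled data of steps 202-204 is taken as given, a fixed (rather than entropy-selected) window is used for brevity, distance reduces to absolute amplitude difference in one dimension, and the function name and `decay` default are illustrative.

```python
import numpy as np

def adaptive_filter_1d(signal, half_window=2, decay=2.0):
    """Sketch of steps 206-210 on a 1-D sampled signal: min-max amplitude
    normalization, windowing, distance computation, exponential-decay
    weighting with unit-sum normalization, and weighted aggregation."""
    x = np.asarray(signal, dtype=float)
    rng = x.max() - x.min()
    x = (x - x.min()) / rng if rng > 0 else np.zeros_like(x)  # min-max scaling
    out = np.empty_like(x)
    for i in range(len(x)):
        lo, hi = max(0, i - half_window), min(len(x), i + half_window + 1)
        window = x[lo:hi]
        d = np.abs(window - x[i])     # distance in the amplitude dimension
        w = np.exp(-decay * d)        # exponential decay weighting
        w /= w.sum()                  # weights sum to one
        out[i] = np.dot(w, window)    # weighted aggregation
    return out
```

Because the input is min-max scaled, every filtered value remains within [0, 1].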
In an embodiment, the method 200 comprises receiving the at least one sensed physical parameter, via at least one analog-to-digital converter 106.
In an embodiment, the method 200 comprises dynamically selecting a sampling window with a set of surrounding data points based on an entropy of the preprocessed sampled data via the distance computation module 114.
In an embodiment, the method 200 comprises generating a filtered signal value for the center point by performing a weighted sum of neighbouring signal values, via the weight aggregation module 116.
In an embodiment, the method 200 comprises modulating the computed weight for each neighbouring data point by an adaptive relevance factor derived from contextual features of the multi-dimensional signal, via the weight aggregation module 116.
In an embodiment, the method 200 comprises receiving the at least one sensed physical parameter, via at least one analog-to-digital converter 106. Further, the method 200 comprises dynamically selecting a sampling window with a set of surrounding data points based on an entropy of the preprocessed sampled data via the distance computation module 114. Furthermore, the method 200 comprises generating a filtered signal value for the center point by performing a weighted sum of neighbouring signal values, via the weight aggregation module 116. Furthermore, the method 200 comprises modulating the computed weight for each neighbouring data point by an adaptive relevance factor derived from contextual features of the multi-dimensional signal, via the weight aggregation module 116.
In an embodiment, the method 200 comprises sensing at least one physical parameter of the multi-dimensional signal, via a signal acquisition unit 104. Furthermore, the method 200 comprises receiving the at least one sensed physical parameter, via at least one analog-to-digital converter 106. Furthermore, the method 200 comprises generating a sampled discrete multi-dimensional data for the at least one received physical parameter, via at least one analog-to-digital converter 106. Furthermore, the method 200 comprises preprocessing the sampled discrete multi-dimensional data by performing amplitude normalization through min-max scaling, via a signal conditioning module 112. Furthermore, the method 200 comprises computing a distance between a current data point and the surrounding data points within a dynamically selected sampling window, via a distance computation module 114. Furthermore, the method 200 comprises assigning a filtering weight to each data point in the sampling window based on an exponential decay function of the computed distance, via a weight aggregation module 116. Furthermore, the method 200 comprises generating a filtered signal value for the center point by performing a weighted sum of neighbouring signal values, via the weight aggregation module 116. Furthermore, the method 200 comprises modulating the computed weight for each neighbouring data point by an adaptive relevance factor derived from contextual features of the multi-dimensional signal, via the weight aggregation module 116.
Based on the above-mentioned embodiments, the present disclosure provides significant advantages of providing a signal processing system capable of accurately filtering and conditioning electrical signals derived from an input source using coordinated dimensional alignment, transformation, and advanced filtering techniques.
It would be appreciated that all the explanations and embodiments of the system 100 also apply mutatis mutandis to the method 200.
In the description of the present invention, it is also to be noted that, unless otherwise explicitly specified or limited, the terms “disposed,” “mounted,” and “connected” are to be construed broadly, and may for example denote a fixed connection, a detachable connection, or an integral connection, whether mechanical or electrical. The connection may be direct, indirect through intervening media, or internal between two elements. The specific meaning of the above terms in the present invention can be understood in specific cases by those skilled in the art.
Modifications to embodiments and combinations of different embodiments of the present disclosure described in the foregoing are possible without departing from the scope of the present disclosure as defined by the accompanying claims. Expressions such as “including”, “comprising”, “incorporating”, “have”, and “is” used to describe and claim the present disclosure are intended to be construed in a non-exclusive manner, namely allowing for items, components or elements not explicitly described also to be present. Reference to the singular is also to be construed to relate to the plural where appropriate.
Although embodiments have been described with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure. More particularly, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the present disclosure, the drawings, and the appended claims. In addition to variations and modifications in the component parts and/or arrangements, alternative uses will also be apparent to those skilled in the art.
Claims:
WE CLAIM:
1. A signal processing system (100) for filtering a multi-dimensional signal generated from an input source (102), the system (100) comprises:
- a signal acquisition unit (104) configured to sense at least one physical parameter of the multi-dimensional signal;
- at least one analog-to-digital converter (106) communicably coupled to the signal acquisition unit (104) and configured to:
- receive the at least one sensed physical parameter; and
- generate a sampled discrete multi-dimensional data for the at least one received physical parameter;
- a control unit (108) communicably coupled to the at least one analog-to-digital converter (106), wherein the control unit (108) is configured to perform an adaptive distance-weighted filtering on the sampled discrete multi-dimensional data to generate at least one filter value; and
- an output interface (110) communicably coupled to the control unit (108).

2. The system (100) as claimed in claim 1, wherein the control unit (108) comprises a signal conditioning module (112), a distance computation module (114), and a weight aggregation module (116).

3. The system (100) as claimed in claim 2, wherein the signal conditioning module (112) is configured to preprocess the sampled discrete multi-dimensional data by performing amplitude normalization through min-max scaling.

4. The system (100) as claimed in claim 2, wherein the distance computation module (114) is configured to receive the preprocessed sampled data and dynamically select a sampling window with a set of surrounding data points based on an entropy of the preprocessed sampled data.

5. The system (100) as claimed in claim 2, wherein the distance computation module (114) is configured to compute a distance between a current data point and the surrounding data points within the dynamically selected sampling window.

6. The system (100) as claimed in claim 2, wherein the weight aggregation module (116) is configured to assign a filtering weight to each data point in the sampling window based on an exponential decay function of the computed distance.

7. The system (100) as claimed in claim 2, wherein the weight aggregation module (116) is configured to generate a filtered signal value for the center point by performing a weighted sum of neighbouring signal values, wherein each weight is computed as a monotonic function of corresponding proximity.

8. The system (100) as claimed in claim 2, wherein the weight aggregation module (116) is configured to modulate the computed weight for each neighbouring data point by an adaptive relevance factor derived from contextual features of the multi-dimensional signal.

9. A method (200) for filtering a multi-dimensional signal generated from an input source (102), the method (200) comprising:
- sensing at least one physical parameter of the multi-dimensional signal, via a signal acquisition unit (104);
- generating a sampled discrete multi-dimensional data for the at least one received physical parameter, via at least one analog-to-digital converter (106);
- preprocessing the sampled discrete multi-dimensional data by performing amplitude normalization through min-max scaling, via a signal conditioning module (112);
- computing a distance between a current data point and the surrounding data points within a dynamically selected sampling window, via a distance computation module (114); and
- assigning a filtering weight to each data point in the sampling window based on an exponential decay function of the computed distance, via a weight aggregation module (116).

Documents

Application Documents

# Name Date
1 202521058401-STATEMENT OF UNDERTAKING (FORM 3) [18-06-2025(online)].pdf 2025-06-18
2 202521058401-POWER OF AUTHORITY [18-06-2025(online)].pdf 2025-06-18
3 202521058401-FORM-9 [18-06-2025(online)].pdf 2025-06-18
4 202521058401-FORM FOR STARTUP [18-06-2025(online)].pdf 2025-06-18
5 202521058401-FORM FOR SMALL ENTITY(FORM-28) [18-06-2025(online)].pdf 2025-06-18
6 202521058401-FORM 1 [18-06-2025(online)].pdf 2025-06-18
7 202521058401-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [18-06-2025(online)].pdf 2025-06-18
8 202521058401-EVIDENCE FOR REGISTRATION UNDER SSI [18-06-2025(online)].pdf 2025-06-18
9 202521058401-DRAWINGS [18-06-2025(online)].pdf 2025-06-18
10 202521058401-DECLARATION OF INVENTORSHIP (FORM 5) [18-06-2025(online)].pdf 2025-06-18
11 202521058401-COMPLETE SPECIFICATION [18-06-2025(online)].pdf 2025-06-18
12 Abstract.jpg 2025-07-02