Abstract: A computing system (100) and method (400) for performing classification of data instances are disclosed. The computing system (100) includes a processor (104) operatively coupled to a memory (106). The processor (104) may receive an input vector (302) derived from a data instance, assign to each element of the input vector (302) a neuron type selected from a plurality of heterogeneous neuron types (304) arranged in a randomized spatial configuration within an input layer of a chaotic neural architecture (300), and generate, for each assigned neuron type, a chaotic trajectory (306) based on the corresponding input element. The processor (104) may extract a feature representation (308), including at least one of: firing time, firing rate, entropy value, or energy value, from the chaotic trajectories (306), and determine a classification output (310) by comparing the feature representation (308) with reference vectors using a cosine similarity measure.
Description
TECHNICAL FIELD
[0001] The present disclosure relates to the field of machine learning and artificial intelligence. In particular, the present disclosure relates to a computing system and a method for performing classification of data instances using a chaotic neural architecture that integrates randomized heterogeneous chaotic neurons and chaos-based feature extraction.
BACKGROUND
[0002] In machine learning and artificial intelligence systems, classification of data instances is a fundamental task employed across numerous domains, including computer vision, signal processing, natural language processing, and decision support systems. Neural networks are widely used in such classification pipelines due to their ability to model complex, non-linear relationships within data.
[0003] Conventional neural networks typically rely on fixed activation functions and deterministic feature extraction pipelines. However, such static architectures often exhibit limited flexibility in adapting to diverse or noisy data distributions, particularly in scenarios with constrained labeled data. Moreover, conventional neuron models lack inherent dynamic variability, which can limit the richness of the extracted feature representations and affect the overall robustness of the classification process.
[0004] In addition, the deterministic nature of standard neural processing can reduce sensitivity to subtle variations in input data that may be critical for certain classification tasks. As a result, existing approaches may require extensive model retraining or large-scale datasets to achieve acceptable accuracy and generalization across heterogeneous data sources.
[0005] While Artificial Neural Networks (ANNs) were originally inspired by biological neurons, most modern ANN architectures, including deep neural networks (DNNs), fail to replicate key properties of the human brain, such as chaotic firing patterns, neuron heterogeneity, and spatial randomness in connectivity. These properties are essential for the rich dynamic behavior observed in natural neural systems. Consequently, there is a compelling need for architectures that better emulate such biological realism to improve learning robustness and adaptability in artificial systems.
[0006] To address at least these limitations, there is a need for an improved classification architecture capable of generating rich and adaptive feature representations from input data, enhancing classification robustness without requiring extensive training data or highly engineered preprocessing pipelines.
OBJECTS OF THE PRESENT DISCLOSURE
[0007] A general object of the present disclosure is to provide a computing system and method for performing classification of data instances using a chaotic neural architecture.
[0008] Another object of the present disclosure is to enable classification of data instances based on feature representations derived from chaotic trajectories generated by heterogeneous neuron types.
[0009] Another object of the present disclosure is to provide a randomized heterogeneous chaotic neuron layer in which multiple types of chaotic neurons are assigned to input elements in a randomized spatial configuration.
[0010] Another object of the present disclosure is to enable generation of rich and dynamic feature representations from chaotic trajectories, including features such as firing time, firing rate, entropy value, and energy value.
[0011] Another object of the present disclosure is to determine classification outputs by comparing chaos-based feature representations with reference vectors associated with known classes using similarity-based measures.
[0012] Another object of the present disclosure is to improve classification robustness and flexibility in machine learning applications, particularly under conditions of limited labeled data or noisy input data.
SUMMARY
[0013] Aspects of the present disclosure generally relate to the field of machine learning and artificial intelligence. In particular, the present disclosure relates to a computing system and a method for performing classification of data instances using a chaotic neural architecture that integrates randomized heterogeneous chaotic neurons and chaos-based feature extraction.
[0014] An aspect of the present disclosure pertains to a computing system for performing classification of a data instance. The computing system includes a memory and a processor operatively coupled to the memory. The memory includes instructions that, when executed by the processor, cause the processor to receive an input vector derived from a data instance to be classified. Herein, the input vector includes a plurality of elements. The processor is also configured to assign, to each element of the input vector, a neuron type selected from a plurality of heterogeneous neuron types. Herein, the plurality of heterogeneous neuron types are arranged in a randomized spatial configuration within an input layer of a chaotic neural architecture. The processor is also configured to generate, for each assigned neuron type, a chaotic trajectory based on the corresponding element of the input vector, thereby forming a set of chaotic trajectories corresponding to the plurality of elements. The processor is also configured to extract a feature representation based on the set of chaotic trajectories. The processor is also configured to determine a classification output for the data instance based on the feature representation.
[0015] In one embodiment, the plurality of heterogeneous neuron types include at least one of: a logistic map neuron and a generalized Lüroth series (GLS) map neuron.
[0016] In one embodiment, the processor is configured to select the logistic map neuron and the generalized Lüroth series map neuron in a predetermined proportion.
[0017] In one embodiment, the predetermined proportion corresponds to one of: 25% logistic map neurons and 75% generalized Lüroth series map neurons; 50% logistic map neurons and 50% GLS map neurons; and 75% logistic map neurons and 25% GLS map neurons.
[0018] In one embodiment, the chaotic trajectory for each assigned neuron type is generated by iteratively updating an initial neural activity value until a proximity condition with the corresponding element of the input vector is satisfied.
[0019] In one embodiment, the feature representation includes at least one of: a firing time, a firing rate, an entropy value, or an energy value derived from the chaotic trajectories.
[0020] In one embodiment, the processor is configured to determine the classification output by comparing the feature representation with one or more reference vectors associated with known classes of the data instance.
[0021] In one embodiment, the processor is configured to perform the comparison using a cosine similarity measure.
[0022] In one embodiment, the data instance includes at least one of: a time-series signal, a grayscale image, a color image, and a tabular data record.
[0023] Another aspect of the present disclosure pertains to a method for performing classification of a data instance. The method includes receiving, by a processor, an input vector derived from a data instance to be classified. Herein, the input vector includes a plurality of elements. The method also includes assigning, by the processor, to each element of the input vector, a neuron type selected from a plurality of heterogeneous neuron types. Herein, the plurality of heterogeneous neuron types are arranged in a randomized spatial configuration within an input layer of a chaotic neural architecture. The method also includes generating, by the processor, for each assigned neuron type, a chaotic trajectory based on the corresponding element of the input vector, thereby forming a set of chaotic trajectories corresponding to the plurality of elements. The method also includes extracting, by the processor, a feature representation based on the set of chaotic trajectories. The method also includes determining, by the processor, a classification output for the data instance based on the feature representation.
[0024] Various objects, features, aspects, and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.
BRIEF DESCRIPTION OF THE DRAWINGS
[0025] The accompanying drawings, which are incorporated herein and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems, in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes the disclosure of electrical components, electronic components, or circuitry commonly used to implement such components.
[0026] FIG. 1 illustrates an exemplary schematic diagram of a computing system (100) for performing classification of a data instance using a chaotic neural architecture, in accordance with an embodiment of the present disclosure.
[0027] FIG. 2 illustrates an exemplary block diagram of a server system (200) for performing classification of data instances using a chaotic neural architecture, in accordance with an embodiment of the present disclosure.
[0028] FIG. 3 illustrates an exemplary block diagram of a chaotic neural architecture (300) for performing classification of data instances, in accordance with an embodiment of the present disclosure.
[0029] FIG. 4 illustrates an exemplary flow diagram of a method (400) for performing classification of data instances using a chaotic neural architecture, in accordance with an embodiment of the present disclosure.
[0030] FIG. 5 illustrates an experimental result for classification performance in terms of Macro F1-score of the RHNL architecture using different classifiers with the ChaosFEXRH25L75G configuration, in accordance with an embodiment of the present disclosure.
[0031] FIG. 6 illustrates an experimental result for classification performance in terms of Macro F1-score of the RHNL architecture using different classifiers with the ChaosFEXRH50L50G configuration, in accordance with an embodiment of the present disclosure.
[0032] FIG. 7 illustrates an experimental result for classification performance in terms of Macro F1-score of the RHNL architecture using different classifiers with the ChaosFEXRH75L25G configuration, in accordance with an embodiment of the present disclosure.
[0033] FIG. 8 illustrates a cosine similarity classifier performance of the RHNL architecture in a low training sample regime using ChaosFEXRH25L75G configuration compared to traditional standalone classifiers, in accordance with an embodiment of the present disclosure.
[0034] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0035] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0036] The ensuing description provides exemplary embodiments only and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0037] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. If the specification states a component or feature “may”, “can”, “could”, or “might” be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.
[0038] As used in the description herein and throughout the claims that follow, the meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.
[0039] As used herein, the term “chaotic neural architecture” refers broadly to any neural network configuration in which one or more processing units simulate or exhibit chaotic behavior based on deterministic nonlinear dynamics. The term “randomized heterogeneous chaotic neural architecture (RHNL)” refers to a specific embodiment of such an architecture, wherein multiple types of chaotic neurons (e.g., logistic map neurons and generalized Lüroth series neurons) are employed and arranged in a randomized spatial configuration within the network. Unless explicitly stated otherwise, references to a “chaotic neural architecture” are intended to encompass such randomized heterogeneous implementations, including but not limited to the RHNL configurations described herein, without limiting the scope of the present disclosure to any particular neuron type or arrangement.
[0040] Existing neural network architectures often require extensive labeled data and large computational resources to achieve robust classification accuracy across diverse data domains. Conventional neuron models lack inherent dynamic variability and nonlinear complexity, which can limit the ability of conventional neuron models to extract discriminative features, particularly in low-data regimes or when faced with noisy, heterogeneous inputs. Furthermore, standard feature extraction methods are typically deterministic and linear, reducing adaptability and sensitivity to complex data structures. There is a need for a neural architecture that can leverage nonlinear, dynamic behavior to enhance feature extraction and classification performance in a computationally efficient manner.
[0041] The present disclosure provides a computing system and method for performing classification of a data instance using a chaotic neural architecture. The computing system includes a heterogeneous chaotic neuron layer, wherein each element of an input vector is assigned to a neuron selected from a plurality of heterogeneous chaotic neuron types. A heterogeneous chaotic neuron refers to a chaotic neuron selected from a plurality of distinct chaotic neuron types, wherein each neuron type exhibits unique nonlinear dynamic behavior during state evolution. In the present disclosure, the heterogeneous chaotic neuron types include logistic map neurons and generalized Lüroth series (GLS) map neurons, each contributing different trajectory characteristics to the feature extraction process. The plurality of distinct chaotic neuron types are arranged in a randomized spatial configuration within an input layer of the chaotic neural architecture. Each assigned neuron type independently generates a chaotic trajectory through iterative nonlinear updates based on its corresponding element of the input vector. The computing system extracts chaos-based feature representations, including firing time, firing rate, entropy value, and energy value, from the resulting chaotic trajectories. A classification output is determined by comparing the extracted feature representation with reference class vectors, using a similarity-based measure. Experimental evaluations on benchmark datasets, as described in the present disclosure, indicate that the proposed solution provides improved robustness in feature extraction and enhanced classification accuracy, particularly in scenarios involving limited or noisy training data. The chaotic neural architecture introduces nonlinear dynamic diversity that enhances feature discrimination compared to conventional static neural models, as reflected in comparative performance trends.
[0042] The present disclosure achieves a technical effect by integrating nonlinear chaotic dynamics into the feature extraction and classification of the data instance, providing enhanced sensitivity to variability and patterns present in input data. The randomized heterogeneous chaotic neuron layer introduces architectural variability and dynamic state evolution that cannot be replicated by linear or conventional static neural models. By generating feature representations derived from chaotic trajectories, the proposed computing system produces transformed representations that improve discriminability between classes of the data instance, which results in demonstrable improvements in classification robustness under low-data and high-noise conditions. The architecture operates through an iterative physical computation process over neuron states, realized in hardware or software, and is not a mere abstract algorithm. It provides a concrete technical contribution by enabling chaos-based feature extraction that materially affects the manner in which classification is performed by the proposed computing system.
[0043] The present disclosure is industrially applicable across a wide range of domains requiring efficient and robust classification of data instances such as time-series signals, grayscale images, color images, and tabular data records. The chaotic neural architecture disclosed herein may be implemented using general-purpose computing hardware or specialized neural processing units, and is compatible with existing machine learning techniques. The present disclosure is particularly suited for industrial applications such as image-based quality inspection, time-series anomaly detection, biometric authentication, speech recognition, medical diagnostics, and document classification, where the ability to extract rich, nonlinear features from limited or noisy data is highly valuable. The present disclosure enables deployment in practical products and systems that perform real-time or batch classification tasks, contributing to improved system accuracy and adaptability under varying operational conditions.
[0044] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosures as defined by the appended claims.
[0045] Embodiments explained herein relate to the field of machine learning and artificial intelligence. In particular, the present disclosure relates to a computing system and a method for performing classification of data instances using a chaotic neural architecture that integrates randomized heterogeneous chaotic neurons and chaos-based feature extraction.
[0046] The various embodiments throughout the disclosure will be explained in more detail with reference to FIGs. 1-8.
[0047] Referring to FIG. 1, an exemplary block diagram of a computing system (100) for performing classification of a data instance is illustrated, in accordance with one or more embodiments of the present disclosure. The computing system (100) may include various hardware components operatively coupled through a system bus (102), and may be configured to execute a sequence of operations including neuron assignment, chaotic trajectory generation, feature extraction, and class prediction, as enabled by instruction logic stored in system memory.
[0048] The computing system (100) may include a processor (104), a main memory (106), a read-only memory (108), a mass storage device (110), an I/O interface (116), a network interface controller (114), and an external storage device (112). These components may communicate via the system bus (102), which may include data, address, and control lines facilitating synchronous or asynchronous data transfer across the system.
[0049] The processor (104) may include one or more processing cores or execution units, and may be configured to execute software instructions stored in the main memory (106). In particular, the processor (104) may be configured to receive an input vector derived from a data instance, assign chaotic neurons from a heterogeneous set arranged in a randomized spatial configuration, simulate chaotic trajectories until a defined proximity condition is met, extract feature representations from those trajectories, and determine a classification output based on feature similarity or learned mappings.
[0050] The main memory (106) may store executable program instructions and working data, including variables related to neural activity values, neuron type assignments, trajectory states, and feature vectors. The read-only memory (108) may store system-level boot code or baseline routines associated with hardware configuration and diagnostics. The mass storage device (110) may include persistent data structures such as, but not limited to, trained neural configuration files, symbolic mapping dictionaries, reference feature vectors, or labelled training datasets.
[0051] The external storage device (112) may be operatively connected to the computing system (100) and may store extended classification datasets, neuron parameter libraries, or archived inference results. In some embodiments, the external storage device (112) may be used to load or update proportion parameters controlling the randomized neuron layout in the input layer. The network interface controller (114) may facilitate communication with remote computing environments, enabling synchronization of model weights or distributed execution of classification pipelines. The I/O interface (116) may support peripheral connectivity for human-in-the-loop validation, real-time signal acquisition, or system configuration.
[0052] During execution, the processor (104) may be configured to coordinate with the main memory (106) and/or the mass storage device (110) to carry out a series of operations associated with classification using a chaotic neural architecture. The series of operations may include invoking randomization logic to assign logistic map neurons and generalized Lüroth series (GLS) neurons across the input layer, generating chaotic output traces from each neuron based on input values, computing associated feature metrics such as, but not limited to, firing time and entropy, and generating a classification result using cosine similarity or an external classifier stored in the main memory (106) or read-only memory (108). Collectively, the components illustrated in FIG. 1 may enable instruction-driven implementation of a randomized heterogeneous chaotic neural architecture (as shown in FIG. 3 and described later), and may facilitate classification of time-series, image, or tabular data instances using biologically inspired computational dynamics as described in accordance with the present disclosure.
[0053] Referring to FIG. 2, an exemplary block diagram of a server system (200) for performing classification of data instances using a randomized heterogeneous chaotic neural architecture is illustrated, in accordance with one or more embodiments of the present disclosure. The server system (200) may include hardware modules and specialized processing subsystems that enable scalable, instruction-driven execution of neuron assignment, chaotic trajectory simulation, feature extraction, and classification operations across different data modalities.
[0054] The server system (200) may include one or more processors (202), a memory (204), one or more interfaces (206), a processing engine (208), a database (210), a machine-learning accelerator (212), and one or more administrative interfaces (214). These components may be operatively coupled within the server system (200) and may cooperate to execute classification workflows as described in relation to a framework of the random heterogeneous neurochaos learning (RHNL) model.
[0055] The processor(s) (202) may include single-core or multi-core processing units configured to execute instruction sequences associated with chaotic neural operations. In some embodiments, the processor(s) (202) may be capable of performing some or all of the functions attributed to the processor (104) of the computing system (100) described in FIG. 1. For example, the processor(s) (202) may assign chaotic neurons to input elements, simulate chaotic trajectories, evaluate proximity conditions, and trigger feature extraction pipelines.
[0056] The memory (204) may include volatile and/or non-volatile memory elements for storing executable instructions, neuron parameters, feature extraction rules, and intermediate classification outputs. In some embodiments, the memory (204) may be configured to support or mirror at least a portion of the functionality of the main memory (106) described in FIG. 1, including temporary storage of chaotic states and randomized neuron layouts.
[0057] The interface(s) (206) may include data exchange ports or control-level interfaces configured to receive input vectors, transmit classification outputs, or communicate with peripheral computing nodes or sensor platforms. The processing engine (208) may serve as a dedicated logic block or abstraction layer for executing dataflow sequences aligned with the RHNL pipeline, including trajectory computation, symbolic transformation, and comparison against reference vectors.
[0058] The database (210) may store structured or unstructured data related to training datasets, pre-computed neuron configurations, statistical feature norms, or classification metadata. In some embodiments, the database (210) may also serve as a repository for adaptive thresholds, proportion control parameters, or validation datasets used to evaluate the performance of heterogeneous neuron layouts.
[0059] The machine-learning accelerator (212) may include dedicated hardware or firmware-embedded modules configured to enhance performance during classification decision-making. For instance, the machine-learning accelerator (212) may facilitate rapid evaluation of cosine similarity, support vector machine (SVM) kernels, or ensemble-based prediction logic integrated into the RHNL workflow.
[0060] The administrative interface(s) (214) may enable operational oversight and system management, including model configuration, threshold tuning, neuron proportion selection, or audit logging. The administrative interface(s) (214) may be accessible to system administrators or machine learning (ML) engineers responsible for real-time deployment, retraining, or adaptation of RHNL classifiers.
[0061] Collectively, the components of the server system (200) shown in FIG. 2 may operate as a cohesive execution environment for the deployment and operation of a randomized heterogeneous chaotic neural architecture, wherein the processor(s) (202) and memory (204) may respectively replicate or extend the functionalities of the processor (104) and memory (106) illustrated in FIG. 1.
[0062] Referring to FIG. 3, an exemplary block diagram of a chaotic neural architecture (300) is illustrated, in accordance with one or more embodiments of the present disclosure. The chaotic neural architecture (300) may be instantiated within the computing system (100) of FIG. 1 or the server system (200) of FIG. 2. The chaotic neural architecture (300) may be implemented as a set of software-implemented functional modules executed by the processor (104) of the computing system (100), based on instructions stored in the memory (106). The chaotic neural architecture (300) may execute a randomized heterogeneous neurochaos-learning (RHNL) workflow to classify the data instance.
[0063] The chaotic neural architecture (300) may include, in functional sequence, an input vector module (302), a heterogeneous chaotic neuron layer (304), a chaotic trajectory-generation module (306), a chaos-based feature-extraction module (308), and a classification module (310). These components may be logically or functionally coupled and may be configured to be executed by the processor (104) in sequence, thereby transforming an input data instance into a classification output. At least a portion of the transformation may be performed using nonlinear chaotic dynamics governed by discrete-time iterative maps. It should be noted that while the elements (302) through (310) are referred to as "modules" for consistency of description, they may also represent distinct processing stages or functional steps within the overall flow of the chaotic neural architecture (300), and need not correspond to discrete hardware or software modules unless explicitly specified.
[0064] The input vector module (302) may receive an input vector whose elements may be scaled to lie in the unit interval and may correspond to features extracted from, for example, a time-series signal, a grayscale or colour image, or a tabular data record, but not limited thereto. Each element may be forwarded to a corresponding chaotic neuron resident in the heterogeneous chaotic neuron layer (304). The heterogeneous chaotic neuron layer (304) may hold chaotic neurons placed in a randomized spatial configuration. Each neuron may be instantiated either as a logistic-map neuron (L) or as a generalised Lüroth-series (GLS) neuron (G), the overall mix being governed by a proportion policy such as, but not limited to, 25 % L / 75 % G, 50 % L / 50 % G or 75 % L / 25 % G. A logistic-map neuron may iterate as:
x_{t+1} = r x_t (1 − x_t), (1)
[0065] Herein, r is a bifurcation parameter and t is the iteration (time) step. It is widely recognized that the logistic map displays chaotic behavior for values of r greater than approximately 3.56995. The logistic map displays an infinite number of periodic orbits for each integer value of the period (only for specific values of r), indicating intricate dynamics. For numerous values of r, the logistic map exhibits high sensitivity to initial conditions (the butterfly effect), a characteristic hallmark of chaos. The degree of sensitivity to initial values can be quantified through the Lyapunov exponent: a Lyapunov exponent greater than zero indicates chaotic behavior, whereas a Lyapunov exponent equal to zero indicates periodic or quasi-periodic behavior. For the first-order difference equation:
x_{t+1} = G(x_t), (2)
[0066] Further, the Lyapunov exponent is defined as:
λ = lim_{N→∞} (1/N) Σ_{t=0}^{N−1} ln |G′(x_t)|, (3)
where G(·) is assumed to be differentiable. The initial value x_0 is randomly chosen (from a uniform distribution) to lie between 0.0 and 1.0, and x_0 → x_1 → x_2 → … is the trajectory.
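By way of illustration only, the following sketch (in Python, with parameter values chosen as assumptions rather than taken from the present disclosure) iterates the logistic map of Eq. (1) and estimates the Lyapunov exponent of Eq. (3), using G′(x) = r(1 − 2x):

```python
import numpy as np

def logistic_map(x, r=4.0):
    """One iteration of the logistic map of Eq. (1): x -> r * x * (1 - x)."""
    return r * x * (1.0 - x)

def lyapunov_logistic(x0, r=4.0, n_iter=10_000):
    """Estimate the Lyapunov exponent of Eq. (3) as the running average of
    ln|G'(x_t)|, where G'(x) = r * (1 - 2x) for the logistic map."""
    x, total = x0, 0.0
    for _ in range(n_iter):
        total += np.log(abs(r * (1.0 - 2.0 * x)))
        x = logistic_map(x, r)
    return total / n_iter

x0 = np.random.uniform(0.0, 1.0)  # initial value drawn uniformly, as described
print(lyapunov_logistic(x0))      # approx. ln 2 ≈ 0.693 for r = 4 (chaotic)
```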
[0067] Further, a GLS neuron may iterate a skew-tent map. Among the GLS maps, skew-tent/tent maps and skew-binary/binary maps are commonly used. The skew-tent map may be mathematically defined as:
T(x) = x/b for 0 ≤ x < b, and T(x) = (1 − x)/(1 − b) for b ≤ x < 1, where b ∈ (0, 1) is the skew parameter. (4)
[0068] The logistic map and the skew-tent map both ensure a positive Lyapunov exponent and hence chaotic behaviour.
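A minimal sketch of the skew-tent map of Eq. (4), assuming the skew parameter corresponds to the threshold b appearing in the hyperparameter tables herein, may read:

```python
def skew_tent_map(x, b=0.499):
    """One iteration of the skew-tent (GLS) map of Eq. (4):
    T(x) = x / b for 0 <= x < b, and T(x) = (1 - x) / (1 - b) otherwise."""
    return x / b if x < b else (1.0 - x) / (1.0 - b)
```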
[0069] The chaotic trajectory-generation module (306) may simulate each neuron’s state evolution, beginning from an initial neural activity value q (selected during cross-validation) and continuing to iterate the chosen map until a proximity condition
|x_t − s_i| < ε, (5)
is satisfied, where s_i denotes the neuron’s assigned input element and ε denotes a tuned noise-intensity threshold. The resulting sequence x_0 → x_1 → … → x_M may be stored as a chaotic trajectory. Because the Lyapunov exponent is positive for both logistic maps and GLS maps, each neuron’s trajectory exhibits sensitive dependence on initial conditions, ensuring that each generated trajectory exhibits chaotic dynamics.
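For illustration, a hedged sketch of the trajectory generation described above may be written as follows, where q, s, and epsilon mirror the initial neural activity, input element, and noise-intensity threshold of Eq. (5); max_iter is an assumed safeguard not recited in the present disclosure:

```python
def chaotic_trajectory(s, neuron_map, q=0.34, epsilon=0.01, max_iter=100_000):
    """Iterate a chaotic map from activity q until the proximity condition
    |x_t - s| < epsilon of Eq. (5) is met; return the visited states."""
    x, trajectory = q, [q]
    while abs(x - s) >= epsilon and len(trajectory) <= max_iter:
        x = neuron_map(x)
        trajectory.append(x)
    return trajectory  # firing time M = len(trajectory) - 1 iterations

traj = chaotic_trajectory(0.71, lambda x: 4.0 * x * (1.0 - x))  # logistic neuron
```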
[0070] The chaos-based feature-extraction module (308) may process the set of generated chaotic trajectories to compute a multi-dimensional feature representation. For each chaotic trajectory, the chaos-based feature-extraction module (308) may extract:
[0071] (a) A firing time M, which is the iteration index at which the proximity condition |x_t − s_i| < ε first occurs.
[0072] (b) A firing rate, which is determined as the fraction of time during which the chaotic neural trajectory is greater than the discrimination threshold b.
[0073] (c) A Shannon entropy H, which is computed from a binary symbolic sequence S = (S_1, S_2, …, S_M) of the trace/trajectory, where
S_j = 0 if x_j < b, and S_j = 1 if x_j ≥ b, (6)
for j = 1 to M (the firing time). The Shannon first-order entropy for S is determined as:
H(S) = −Σ_{k=1}^{2} p_k log_2(p_k), (7)
with p_k denoting the probability of symbol k.
[0074] (d) An energy E = Σ_{j=1}^{M} x_j^2. (8)
[0075] The feature vector dimensionality scales linearly with the number of input elements. The chaos-based feature-extraction module (308) may concatenate the extracted features into a feature vector that captures each neuron’s chaotic dynamics in a high-dimensional, chaos-sensitive space.
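The four trajectory features of Eqs. (5)-(8) may be sketched as follows; the default value of b is an assumption, and the firing-rate and entropy definitions follow the descriptions given above:

```python
import numpy as np

def chaosfex_features(trajectory, b=0.499):
    """Compute [firing time, firing rate, entropy, energy] for one trajectory."""
    traj = np.asarray(trajectory)
    firing_time = len(traj) - 1                  # iterations until Eq. (5) fires
    firing_rate = float(np.mean(traj > b))       # fraction of time above b
    symbols = (traj >= b).astype(int)            # binary sequence of Eq. (6)
    p1 = symbols.mean()
    probs = np.array([1.0 - p1, p1])
    probs = probs[probs > 0.0]                   # guard against log2(0)
    entropy = float(-np.sum(probs * np.log2(probs)))  # Eq. (7)
    energy = float(np.sum(traj ** 2))            # Eq. (8)
    return np.array([firing_time, firing_rate, entropy, energy])
```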
[0076] The classification output module (310) (which may also be referred to as the classification module (310)) may receive the concatenated feature vector and determine a class label for the original data instance. In one embodiment, the classification module (310) may compute cosine similarity between the extracted feature vector and stored reference class vectors, selecting the class associated with the highest similarity score. In another embodiment, the classification output module (310) may invoke a learned classifier, such as, but not limited to, a support vector machine, decision tree, or ensemble model, trained on chaos-derived features. Accordingly, the sequence of modules (302) through (310) may cooperate to convert a received data instance into a class prediction, leveraging randomized heterogeneous chaotic neuron dynamics, trajectory-driven feature extraction, and similarity-based decision logic, thereby enabling robust classification even under the low-training-sample regimes described elsewhere in the present disclosure.
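One possible realization of the cosine-similarity decision rule of the classification module (310) is sketched below, assuming each reference vector is the mean ChaosFEX vector of a class's training samples (an assumption; the present disclosure does not limit how reference vectors are formed):

```python
import numpy as np

def fit_reference_vectors(features, labels):
    """Mean ChaosFEX feature vector per class (one possible reference choice)."""
    classes = np.unique(labels)
    refs = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, refs

def predict_cosine(f, classes, refs):
    """Assign the class whose reference vector has the highest cosine similarity."""
    sims = refs @ f / (np.linalg.norm(refs, axis=1) * np.linalg.norm(f) + 1e-12)
    return classes[int(np.argmax(sims))]
```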
[0077] In one embodiment, the computing system (100) for performing classification of a data instance may include a processor (104) operatively coupled to a memory (106), as illustrated in FIG. 1. The memory (106) may include instructions that, when executed by the processor (104), may cause the processor (104) to implement a sequence of operations aligned with a chaotic neural architecture for classification.
[0078] The processor (104) may be configured to receive an input vector (302) derived from a data instance to be classified, wherein the input vector (302) may include a plurality of elements. The input vector (302) may include multiple numerical values representing scaled features extracted from a data instance such as, but not limited to, a time-series signal, an image, or a tabular data record. Each element of the input vector (302) may be provided for processing in the subsequent stages of the chaotic neural architecture (300).
[0079] The processor (104) may be configured to assign, to each element of the input vector (302), a neuron type selected from a plurality of heterogeneous neuron types (304). The plurality of heterogeneous neuron types may include logistic map neurons (L) and generalized Lüroth series (GLS) map neurons (G). The plurality of heterogeneous neuron types may be arranged in a randomized spatial configuration within an input layer of the chaotic neural architecture (300), such that the position and type of each neuron in the layer may be determined based on a randomized assignment process following a pre-defined proportion policy.
[0080] The processor (104) may also be configured to generate, for each assigned neuron type, a chaotic trajectory (306) (which may also be referred to as the chaotic trajectory-generation module (306)) based on the corresponding element of the input vector (302). Each neuron type may simulate its internal state evolution by iteratively applying a nonlinear chaotic function, such as, but not limited to, a logistic map or a GLS map, using the assigned input element as a reference. The processor (104) may be configured to continue the simulation for each neuron type until a predefined proximity condition is satisfied for that neuron’s output relative to its input element, thereby forming a set of chaotic trajectories (306).
[0081] The processor (104) may also be configured to extract a feature representation (308) based on the set of chaotic trajectories (306). For each neuron type, the processor (104) may be configured to compute a plurality of features derived from its respective chaotic trajectory, including at least one of: firing time, firing rate, entropy value, or energy value. The extracted features corresponding to all neurons in the input layer may be concatenated to form a feature representation vector (308) corresponding to the original input vector (302).
[0082] The processor (104) may be configured to determine a classification output (310) for the data instance based on the extracted feature representation (308). Determining the classification output (310) may involve comparing the feature representation (308) to one or more reference class vectors stored in the memory (106) or applying a learned classification model such as, but not limited to, a support vector machine or decision tree. The classification output (310) may include a class label representing the computing system’s determination of the category to which the data instance belongs.
[0083] In an illustrative scenario, the computing system (100) may receive a feature vector derived from a banknote image, process the feature vector through the heterogeneous chaotic neuron layer (304), extract chaos-based features from the resulting trajectories, and classify the banknote as genuine or counterfeit based on the computed feature representation. The coordinated operation of the processor (104) and memory (106) may thereby enable classification of data instances using nonlinear chaotic dynamics and chaos-derived features.
[0084] In one embodiment, the plurality of heterogeneous neuron types (304) may include at least one of: a logistic map neuron and a generalized Lüroth series (GLS) map neuron. The logistic map neuron may be configured to update its internal state using a logistic map function. The generalized Lüroth series (GLS) map neuron may be configured to update its internal state using a piecewise linear mapping function.
[0085] In one embodiment, the processor (104) may be configured to select the logistic map neurons (L) and the generalized Lüroth series (GLS) map neurons (G) within the plurality of heterogeneous neuron types (304) in a predetermined proportion. The processor (104) may apply the predetermined proportion during the assignment of neurons to the elements of the input vector (302), thereby determining the ratio of logistic map neurons to GLS map neurons within the heterogeneous chaotic neuron layer (304).
[0086] In one embodiment, the predetermined proportion applied by the processor (104) may correspond to one of the following configurations: 25% logistic map neurons and 75% generalized Lüroth series (GLS) map neurons; 50% logistic map neurons and 50% GLS map neurons; or 75% logistic map neurons and 25% GLS map neurons, but not limited thereto. The processor (104) may be configured to implement the selected proportion when performing the random assignment of neuron types within the heterogeneous chaotic neuron layer (304).
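The randomized assignment under a proportion policy may be sketched as follows; assign_neuron_types is a hypothetical helper name, and the seed parameter is an assumption added for reproducibility:

```python
import numpy as np

def assign_neuron_types(n_elements, logistic_fraction=0.25, seed=None):
    """Randomly place logistic ('L') and GLS ('G') neurons across the input
    layer so that approximately logistic_fraction of them are logistic."""
    rng = np.random.default_rng(seed)
    n_logistic = round(logistic_fraction * n_elements)
    layout = np.array(["L"] * n_logistic + ["G"] * (n_elements - n_logistic))
    rng.shuffle(layout)  # randomized spatial configuration
    return layout        # e.g., array(['G', 'L', 'G', 'G', ...])
```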
[0087] In one embodiment, the chaotic trajectory (306) for each assigned neuron within the heterogeneous chaotic neuron layer (304) may be generated by iteratively updating an initial neural activity value. The iterative update may be performed by applying the respective chaotic mapping function associated with the assigned neuron type. The processor (104) may continue the iterative updates for each neuron until a proximity condition is satisfied with respect to the corresponding element of the input vector (302). The proximity condition may be defined based on the absolute difference between the current activity value of the neuron type and the assigned input element, such that the iteration may terminate when the difference falls below a predefined threshold.
[0088] In one embodiment, the feature representation extracted from the set of chaotic trajectories (306) may include at least one of: a firing time, a firing rate, an entropy value, or an energy value. For each assigned neuron within the heterogeneous chaotic neuron layer (304), the processor (104) may compute the corresponding features based on the neuron’s chaotic trajectory generated during the iterative update process. The firing time may indicate the number of iterations required to satisfy the proximity condition. The firing rate may represent the frequency of state transitions within the trajectory. The entropy value may quantify the level of unpredictability in the trajectory. The energy value may represent the cumulative magnitude of the trajectory states over the iteration sequence.
[0089] In one embodiment, the processor (104) may be configured to determine the classification output (310) by comparing the feature representation (308) with one or more reference vectors associated with known classes. The reference vectors may be stored in the memory (106) and may represent characteristic feature patterns corresponding to pre-defined classification categories. The processor (104) may be configured to evaluate the similarity between the extracted feature representation (308) and the stored reference vectors in order to identify the closest matching class.
[0090] The randomized heterogeneous neurochaos learning (RHNL) architecture implemented within the chaotic neural architecture (300) may reflect a closer approximation to actual brain function than traditional artificial neural networks. In biological neural systems, individual neurons may vary in type and may exhibit chaotic response behavior under external stimuli. Furthermore, the spatial arrangement and synaptic interconnectivity of biological neurons are not fixed or uniform, but instead demonstrate inherent randomness and heterogeneity. To functionally mirror the structural and dynamic variability of biological neural networks, the chaotic neural architecture (300) may include a heterogeneous chaotic neuron layer (304) that incorporates multiple types of chaotic neurons, such as logistic map neurons and generalized Lüroth series (GLS) map neurons. The processor (104) may assign each neuron type to the corresponding element of an input vector (302) based on a predefined proportion policy, wherein the neuron types are arranged in a randomized spatial configuration within the heterogeneous chaotic neuron layer (304). The random spatial allocation of chaotic neurons within the chaotic neural architecture (300) may serve as a functional parallel to the disordered yet purposeful organization observed in biological brains. By generating neuron-specific chaotic trajectories (306) and extracting dynamic feature representations (308) from those trajectories, the chaotic neural architecture (300) may support improved classification performance, particularly in learning environments characterized by limited training data or high input uncertainty.
It will be appreciated that in FIG. 3, elements labelled as (302) through (310) are illustrated for clarity as distinct modules in the exemplary block diagram of the chaotic neural architecture. However, these elements may be implemented as hardware components, software components, or a combination of hardware and software components. In some embodiments, one or more of these elements may be implemented as functional steps performed by the processor (104) executing instructions stored in a memory (106). In other embodiments, one or more of these elements may be realized as dedicated hardware circuits, configurable logic, or integrated system-on-chip components. The depiction of elements (302) to (310) in FIG. 3 is therefore not intended to imply any particular limitation with respect to the manner of their implementation.
[0091] In one embodiment, the processor (104) may be further configured to perform the comparison between the feature representation (308) and the one or more reference vectors using a cosine similarity measure, but not limited thereto. The cosine similarity measure may be computed based on the angle between the feature representation vector and a reference vector in the feature space, and the processor (104) may assign the class label associated with the reference vector yielding the highest cosine similarity.
[0092] In one embodiment, the data instance provided to the computing system (100) may include at least one of: a time-series signal, a grayscale image, a colour image, or a tabular data record, but not limited thereto. The processor (104) may be configured to receive an input vector (302) derived from the selected type of data instance. The input vector (302) may represent numerical features extracted from the data instance through appropriate pre-processing, scaling, or encoding methods prior to being processed by the heterogeneous chaotic neuron layer (304) as part of the classification workflow.
[0093] Referring now to FIG. 4, an exemplary flow diagram of a method (400) for performing classification of a data instance using a chaotic neural architecture (300) is illustrated, in accordance with one or more embodiments of the present disclosure. In one embodiment, the method (400) may be implemented by the computing system (100) or the server system (200).
[0094] At step (402), the method (400) may include receiving, by a processor (104), an input vector (302) derived from a data instance to be classified. The input vector (302) may include a plurality of elements.
[0095] At step (404), the method (400) may include assigning, by the processor (104), to each element of the input vector (302), a neuron type selected from a plurality of heterogeneous neuron types (304). The plurality of heterogeneous neuron types (304) may be arranged in a randomized spatial configuration within an input layer of a chaotic neural architecture (300), and may include at least one of: a logistic map neuron or a generalized Lüroth series (GLS) map neuron.
[0096] At step (406), the method (400) may include generating, by the processor (104), for each assigned neuron type, a chaotic trajectory (306) based on the corresponding element of the input vector (302), thereby forming a set of chaotic trajectories corresponding to the plurality of elements. Each chaotic trajectory (306) may be generated by iteratively updating an initial neural activity value until a proximity condition is satisfied with respect to the assigned input element.
[0097] At step (408), the method (400) may include extracting, by the processor (104), a feature representation (308) based on the set of chaotic trajectories (306). The feature representation (308) may include at least one of: a firing time, a firing rate, an entropy value, or an energy value computed from the respective chaotic trajectories.
[0098] At step (410), the method (400) may include determining, by the processor (104), a classification output (310) for the data instance based on the feature representation (308). Determining the classification output (310) may include comparing the feature representation (308) with one or more reference vectors associated with known classes, optionally using a cosine similarity measure or a learned classification model.
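Purely for illustration, steps (402) through (410) may be composed from the hypothetical helpers sketched earlier (assign_neuron_types, chaotic_trajectory, chaosfex_features, predict_cosine, logistic_map, and skew_tent_map, all assumed to be in scope); every parameter value is an assumption:

```python
import numpy as np

def classify(input_vector, classes, reference_vectors, seed=0):
    layout = assign_neuron_types(len(input_vector), 0.25, seed)  # step 404
    maps = {"L": logistic_map, "G": skew_tent_map}
    features = np.concatenate([                                  # steps 406-408
        chaosfex_features(chaotic_trajectory(s, maps[t]))
        for s, t in zip(input_vector, layout)
    ])
    return predict_cosine(features, classes, reference_vectors)  # step 410
```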
[0099] It should be appreciated that the steps of the method (400) described herein are not limited to the specific sequence or structure outlined above. The method (400) may be implemented in various other ways, and the steps may be reordered, combined, omitted, or modified without departing from the scope or spirit of the present disclosure. The examples provided are for illustrative purposes only and are not intended to limit the present disclosure to the specific embodiments described. Those skilled in the art will recognize that various modifications and adaptations may be made to the method (400) based on implementation-specific requirements.
[00100] To evaluate the performance of the computing system (100) for performing classification of data instances using the chaotic neural architecture (300), a series of experiments is conducted on multiple publicly available benchmark datasets. Prior to processing, the data samples are normalized to lie within the range [0, 1]. Class labels are assigned numerical values, starting from 0. The normalized input vectors are then provided to the heterogeneous chaotic neuron layer (304) within the chaotic neural architecture (300), implemented by the computing system (100), for further processing and classification. Further information about the datasets utilized is presented in Table 1.
Dataset Num. of Classes Samples per class (Training) Samples per class (Testing)
Iris 3 (40, 41, 39) (10, 9, 11)
Ionosphere 2 (98, 182) (28, 43)
Wine 3 (45, 57, 40) (14, 14, 8)
Bank Note Authentication 2 (614, 483) (148, 127)
Haberman’s Survival 2 (181, 63) (44, 18)
Breast Cancer Wisconsin 2 (367, 193) (91, 48)
Statlog (Heart) 2 (117, 99) (33, 21)
Seeds 3 (59, 56, 53) (59, 56, 53)
FSDD 10 (40, 35, 44, 42, 38, 34, 37, 44, 33, 37) (10, 15, 6, 8, 8, 7, 13, 6, 10, 13)
Table 1
[00101] Referring to Table 1, the Iris dataset includes 150 instances distributed across three classes: Setosa, Versicolour, and Virginica. The classification features include sepal length, sepal width, petal length, and petal width. The Ionosphere dataset includes two classes labeled as Good and Bad, based on radar signal reflections. The dataset includes 126 instances labeled as Good and 225 instances labeled as Bad, with each instance including 34 attributes. The Wine dataset includes 178 instances categorized into three classes labeled as 1, 2, and 3, with classification based on chemical constituents of the samples. The Banknote Authentication dataset includes two classes, Genuine and Forgery, determined based on features such as variance, skewness, kurtosis, and entropy extracted from images of banknotes processed via wavelet transformation. The dataset includes 1372 instances, with 762 classified as Genuine and 610 as Forgery.
[00102] Further, the Haberman’s Survival dataset includes patient data from individuals who underwent breast cancer surgery, with Class 1 representing patients who survived five or more years, and Class 2 representing those who did not. The Breast Cancer Wisconsin dataset includes nine parameters per instance, with data categorized as Malignant or Benign; the dataset includes 699 instances, with 241 classified as Malignant and 458 as Benign. The Statlog (Heart) dataset includes two classes: Class 1 represents patients with heart disease, and Class 2 represents patients without heart disease. The Seeds dataset includes 210 data instances distributed across three classes of wheat varieties: Kama, Rosa, and Canadian. Each instance includes seven features representing physical characteristics of the wheat kernels.
[00103] In addition, a time-series dataset, the Free Spoken Digit Dataset (FSDD), is used in the analysis. The dataset includes recordings of six speakers reciting digits from 0 to 9, with 50 recordings per digit per speaker. Recordings are preprocessed using the Fast Fourier Transform (FFT). For evaluation, a subset of 480 instances corresponding to one speaker is utilized. Further, a dataset of approximately 85 satellite images, including images of debris scars and urban settlements from five Asian countries (India, Nepal, Japan, Taiwan, and China), is used to assess the classification performance of the chaotic neural architecture (300) in image-based tasks. Additionally, 100 MRI brain images obtained from an online repository are used to evaluate performance on medical image classification tasks.
[00104] Across all evaluations, the chaotic neural architecture (300) is instantiated using a heterogeneous chaotic neuron layer (304) configured in proportions of neuron types, including architectures with 25% logistic map neurons and 75% generalized Lüroth series (GLS) map neurons. Such a configuration may also be referred to, for convenience of reference, as ChaosFEXRH25L75G. The features obtained from the chaotic trajectories generated by the input layer neurons of the chaotic neural architecture (300), including homogeneous Neurochaos Learning (NL), heterogeneous Neurochaos Learning (HNL), or Random Heterogeneous Neurochaos Learning (RHNL) architectures, may collectively be referred to as neurochaos features (ChaosFEX).
[00105] RHNL supports the use of traditional machine learning (ML) classifiers. The neurochaos features (ChaosFEX) can be fed to one of the many widely available machine learning classifiers to perform classification, such as but not limited to, Support Vector Machine (SVM), AdaBoost (AB), Decision Tree (DT), Gaussian Naive Bayes (GNB), k-NN, and Random Forests (RF). Whenever a traditional ML classifier is used on the neurochaos features, the hyperparameters that are already tuned for the various RHNL architectures (ChaosFEX) are maintained and only the ML hyperparameters are further tuned to reduce the computational burden.
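As a minimal usage sketch, with randomly generated placeholder features standing in for actual ChaosFEX vectors and default scikit-learn settings, a traditional classifier may be applied as follows:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X_train = rng.random((100, 16))    # placeholder ChaosFEX feature vectors
y_train = rng.integers(0, 2, 100)  # placeholder binary class labels
X_test = rng.random((30, 16))
y_test = rng.integers(0, 2, 30)

clf = SVC().fit(X_train, y_train)  # default hyperparameters, tuned separately
y_pred = clf.predict(X_test)
print(f1_score(y_test, y_pred, average="macro"))
```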
[00106] In all experiments, unless otherwise specified, hyperparameters of the baseline machine learning classifiers are maintained at default values as provided by the scikit-learn library. Tuned hyperparameters for the Random Heterogeneous Neurochaos Learning (RHNL) architecture in combination with various classifiers are selected through cross-validation and are detailed as follows.
[00107] Referring now to Table 2, a summarized view of all the different neurochaos learning (NL) architectures with the corresponding notations used during the evaluation is given, including homogeneous NL, heterogeneous NL (HNL), and random heterogeneous NL (RHNL), as well as combinations with ML classifiers. In particular, for RHNL architectures, the configurations include: ChaosFEXRH25L75G, referring to an architecture with 25% logistic map neurons and 75% generalized Lüroth series (GLS) map neurons, ChaosFEXRH50L50G, referring to an architecture with 50% logistic map neurons and 50% GLS map neurons, and ChaosFEXRH75L25G, referring to an architecture with 75% logistic map neurons and 25% GLS map neurons. Herein, ChaosNet refers to the architecture based on the 1D generalized Lüroth series (GLS) map.
No. | Architecture | Type of Neurons | Notation | Classifiers
1 | ChaosNet | Homogeneous, GLS | ChaosFEX | Cosine similarity
2 | ChaosNet | Homogeneous, Logistic | ChaosFEXlogistic | Cosine similarity
3 | NL | Homogeneous, GLS | ChaosFEX+ML | SVM, AB, DT, kNN, GNB, RF
4 | NL | Homogeneous, Logistic | ChaosFEXlogistic+ML | SVM, AB, DT, kNN, GNB, RF
5 | HNL: ChaosNet | Heterogeneous; GLS, Logistic in odd-even structure | ChaosFEXHetero | Cosine similarity
6 | HNL | Heterogeneous; GLS, Logistic in odd-even structure | ChaosFEXHetero+ML | SVM, AB, DT, kNN, GNB, RF
7 | RHNL: ChaosNet | Heterogeneous and random; GLS, Logistic in randomized locations | ChaosFEXRH25L75G, ChaosFEXRH50L50G, ChaosFEXRH75L25G | Cosine similarity
8 | RHNL | Heterogeneous and random; GLS, Logistic in randomized locations | ChaosFEXRH25L75G+ML, ChaosFEXRH50L50G+ML, ChaosFEXRH75L25G+ML | SVM, AB, DT, kNN, GNB, RF
Table 2
[00108] The performance of the ChaosFEXRH architectures is analysed for various datasets using the Macro F1-score (a function of both macro Recall and macro Precision). A True Positive (TP) signifies a positive target value correctly identified as Positive. A True Negative (TN) denotes a negative target value correctly classified as Negative. A False Positive (FP) accounts for instances in which a negative target value is inaccurately classified as Positive. A False Negative (FN) accounts for instances in which a positive target value is erroneously classified as Negative. Mathematically, the corresponding metrics are described as:
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{9}$$

$$\text{Precision} = \frac{TP}{TP + FP} \tag{10}$$

$$\text{Recall} = \frac{TP}{TP + FN} \tag{11}$$

$$\text{F1-score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \tag{12}$$
[00109] The Macro F1-score is computed as the average of all F1-scores (for the m classes), given by:
$$\text{Macro F1} = \frac{1}{m} \sum_{i=1}^{m} \text{F1}_i \tag{13}$$

where $\text{F1}_i$ denotes the F1-score of the $i$-th class.
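As a brief sanity check of equations (12) and (13), the following sketch computes the macro F1-score both by averaging the per-class F1-scores and via scikit-learn's built-in macro average; the toy labels are invented for illustration.

```python
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([0, 0, 1, 1, 2, 2])   # hypothetical ground-truth labels
y_pred = np.array([0, 1, 1, 1, 2, 0])   # hypothetical predictions

per_class = f1_score(y_true, y_pred, average=None)   # one F1 per class
print("macro F1 (manual):", per_class.mean())        # equation (13)
print("macro F1 (sklearn):", f1_score(y_true, y_pred, average="macro"))
```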
[00110] Tuning of the three hyperparameters, namely the initial neural activity q, the threshold b, and the noise intensity ε, is performed across various datasets for the three proposed RHNL architectures, namely ChaosFEXRH25L75G, ChaosFEXRH50L50G, and ChaosFEXRH75L25G. The tuned hyperparameter values are given below in Tables 3, 4, and 5 for ChaosFEXRH25L75G, ChaosFEXRH50L50G, and ChaosFEXRH75L25G, respectively; a non-limiting sketch of the cross-validated search is provided after Table 5.
| Dataset | q | b | ε |
|---|---|---|---|
| Iris | 0.062 | 0.185 | 0.298 |
| Ionosphere | 0.01 | 0.409 | 0.051 |
| Wine | 0.46 | 0.469 | 0.141 |
| Bank Note Authentication | 0.36 | 0.419 | 0.121 |
| Haberman's Survival | 0.05 | 0.269 | 0.031 |
| Breast Cancer Wisconsin | 0.17 | 0.46 | 0.05 |
| Statlog (Heart) | 0.47 | 0.489 | 0.03 |
| Seeds | 0.05 | 0.189 | 0.161 |

Table 3
| Dataset | q | b | ε |
|---|---|---|---|
| Iris | 0.05 | 0.359 | 0.221 |
| Ionosphere | 0.099 | 0.479 | 0.061 |
| Wine | 0.46 | 0.469 | 0.131 |
| Bank Note Authentication | 0.09 | 0.289 | 0.041 |
| Haberman's Survival | 0.14 | 0.489 | 0.021 |
| Breast Cancer Wisconsin | 0.069 | 0.139 | 0.041 |
| Statlog (Heart) | 0.18 | 0.169 | 0.011 |
| Seeds | 0.050 | 0.139 | 0.151 |

Table 4
| Dataset | q | b | ε |
|---|---|---|---|
| Iris | 0.15 | 0.299 | 0.231 |
| Ionosphere | 0.02 | 0.219 | 0.809 |
| Wine | 0.47 | 0.479 | 0.131 |
| Bank Note Authentication | 0.01 | 0.259 | 0.071 |
| Haberman's Survival | 0.23 | 0.1 | 0.011 |
| Breast Cancer Wisconsin | 0.14 | 0.489 | 0.021 |
| Statlog (Heart) | 0.13 | 0.1 | 0.051 |
| Seeds | 0.05 | 0.189 | 0.151 |

Table 5
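As referenced above, a non-limiting sketch of the cross-validated hyperparameter search is given below. The names tune_rhnl, featurize, and classify are hypothetical placeholders: featurize(X, q, b, eps) is assumed to return the ChaosFEX features for all rows of X, and classify(F_tr, y_tr, F_te) to return predicted labels for the held-out fold.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score

def tune_rhnl(X, y, q_grid, b_grid, eps_grid, featurize, classify, k=5):
    """Select the (q, b, eps) triple maximizing the mean macro F1-score
    over k stratified cross-validation folds."""
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    best, best_score = None, -1.0
    for q in q_grid:
        for b in b_grid:
            for eps in eps_grid:
                F = featurize(X, q, b, eps)          # features for this triple
                scores = [f1_score(y[te], classify(F[tr], y[tr], F[te]),
                                   average="macro")
                          for tr, te in skf.split(F, y)]
                if np.mean(scores) > best_score:
                    best, best_score = (q, b, eps), float(np.mean(scores))
    return best, best_score
```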
[00111] Macro F1 scores obtained for various datasets with ChaosFEXRH25L75G and ChaosFEXRH25L75G+SVM are given below in Table 6.
| Dataset | ChaosFEXRH25L75G | ChaosFEXRH25L75G+SVM |
|---|---|---|
| Iris | 1 | 1 |
| Ionosphere | 0.6 | 0.88 |
| Wine | 0.6 | 0.94 |
| Bank Note Authentication | 0.75 | 0.9 |
| Haberman's Survival | 0.73 | 0.56 |
| Breast Cancer Wisconsin | 0.85 | 0.98 |
| Statlog (Heart) | 0.77 | 0.84 |
| Seeds | 0.81 | 0.84 |

Table 6
[00112] Referring to Table 7, the macro F1 score obtained with ChaosFEXRH50L50G and ChaosFEXRH50L50G+SVM is shown.
| Dataset | ChaosFEXRH50L50G | ChaosFEXRH50L50G+SVM |
|---|---|---|
| Iris | 1 | 1 |
| Ionosphere | 0.58 | 0.9 |
| Wine | 0.59 | 0.94 |
| Bank Note Authentication | 0.59 | 0.72 |
| Haberman's Survival | 0.68 | 0.47 |
| Breast Cancer Wisconsin | 0.77 | 0.92 |
| Statlog (Heart) | 0.78 | 0.79 |
| Seeds | 0.72 | 0.81 |

Table 7
[00113] Further, Table 8 gives the macro F1 score obtained with ChaosFEXRH75L25G and ChaosFEXRH75L25G+SVM.

| Dataset | ChaosFEXRH75L25G | ChaosFEXRH75L25G+SVM |
|---|---|---|
| Iris | 1 | 0.97 |
| Ionosphere | 0.71 | 0.94 |
| Wine | 0.63 | 0.97 |
| Bank Note Authentication | 0.65 | 0.84 |
| Haberman's Survival | 0.6 | 0.51 |
| Breast Cancer Wisconsin | 0.79 | 0.94 |
| Statlog (Heart) | 0.65 | 0.85 |
| Seeds | 0.78 | 0.86 |

Table 8
[00114] Random Heterogeneous Neurochaos Learning architectures that combine ChaosFEX (i.e., the chaos-derived feature vector) with other ML classifiers, such as AdaBoost (AB), Decision Tree (DT), k-NN, Gaussian Naive Bayes (GNB), and Random Forest (RF), are implemented, and the results indicate that the randomness and heterogeneity introduced in the NL architectures yield superior performance when compared with homogeneous or fixed heterogeneous structures.
[00115] The macro F1 scores obtained for the ChaosFEXRH25L75G+AdaBoost, ChaosFEXRH50L50G+AdaBoost, and ChaosFEXRH75L25G+AdaBoost structures are given in Table 9. An accuracy of 100% is obtained for the Wine dataset with the ChaosFEXRH50L50G+AdaBoost architecture. A macro F1-score of 0.99 is achieved for the Bank Note Authentication dataset with both ChaosFEXRH50L50G+AdaBoost and ChaosFEXRH75L25G+AdaBoost, and a macro F1-score of 0.99 is likewise achieved for the Breast Cancer Wisconsin dataset when ChaosFEXRH75L25G+AdaBoost is implemented.
| Dataset | ChaosFEXRH25L75G+AB | ChaosFEXRH50L50G+AB | ChaosFEXRH75L25G+AB |
|---|---|---|---|
| Iris | 1 | 1 | 0.967 |
| Ionosphere | 0.97 | 0.97 | 0.97 |
| Wine | 0.97 | 1 | 0.944 |
| Bank Note Authentication | 0.93 | 0.99 | 0.989 |
| Haberman's Survival | 0.5 | 0.56 | 0.66 |
| Breast Cancer Wisconsin | 0.98 | 0.98 | 0.99 |
| Statlog (Heart) | 0.81 | 0.85 | 0.88 |
| Seeds | 0.86 | 0.77 | 0.73 |

Table 9
[00116] The macro F1 scores obtained for ChaosFEXRH25L75G+Decision Trees, ChaosFEXRH50L50G+Decision Trees, and ChaosFEXRH75L25G+Decision Trees can be found in Table 10. A high macro F1 score of 0.98 is achieved for the Breast Cancer Wisconsin dataset with both ChaosFEXRH25L75G+Decision Trees and ChaosFEXRH75L25G+Decision Trees.
| Dataset | ChaosFEXRH25L75G+DT | ChaosFEXRH50L50G+DT | ChaosFEXRH75L25G+DT |
|---|---|---|---|
| Iris | 1 | 0.97 | 0.97 |
| Ionosphere | 0.92 | 0.91 | 0.97 |
| Wine | 0.95 | 0.94 | 0.95 |
| Bank Note Authentication | 0.95 | 0.9 | 0.89 |
| Haberman's Survival | 0.6 | 0.65 | 0.63 |
| Breast Cancer Wisconsin | 0.98 | 0.97 | 0.98 |
| Statlog (Heart) | 0.92 | 0.84 | 0.86 |
| Seeds | 0.81 | 0.81 | 0.76 |

Table 10
[00117] The macro F1 scores obtained for ChaosFEXRH25L75G+kNN, ChaosFEXRH50L50G+kNN, and ChaosFEXRH75L25G+kNN are shown in Table 11, given below.
| Dataset | ChaosFEXRH25L75G+kNN | ChaosFEXRH50L50G+kNN | ChaosFEXRH75L25G+kNN |
|---|---|---|---|
| Iris | 1 | 1 | 1 |
| Ionosphere | 0.74 | 0.85 | 0.8 |
| Wine | 0.66 | 0.72 | 0.77 |
| Bank Note Authentication | 0.93 | 0.83 | 0.89 |
| Haberman's Survival | 0.64 | 0.61 | 0.61 |
| Breast Cancer Wisconsin | 0.98 | 0.93 | 0.94 |
| Statlog (Heart) | 0.6 | 0.81 | 0.78 |
| Seeds | 0.76 | 0.7 | 0.79 |

Table 11
[00118] The macro F1 scores obtained for ChaosFEXRH25L75G+GNB, ChaosFEXRH50L50G+GNB, and ChaosFEXRH75L25G+GNB are shown in Table 12.
| Dataset | ChaosFEXRH25L75G+GNB | ChaosFEXRH50L50G+GNB | ChaosFEXRH75L25G+GNB |
|---|---|---|---|
| Iris | 1 | 1 | 0.97 |
| Ionosphere | 0.83 | 0.83 | 0.91 |
| Wine | 0.94 | 0.94 | 0.94 |
| Bank Note Authentication | 0.73 | 0.67 | 0.7 |
| Haberman's Survival | 0.62 | 0.61 | 0.52 |
| Breast Cancer Wisconsin | 0.94 | 0.89 | 0.91 |
| Statlog (Heart) | 0.77 | 0.81 | 0.74 |
| Seeds | 0.72 | 0.63 | 0.7 |

Table 12
[00119] The macro F1 scores obtained for ChaosFEXRH25L75G+Random Forest (RF), ChaosFEXRH50L50G+Random Forest, and ChaosFEXRH75L25G+Random Forest can be found in Table 13. High performance is obtained for the Breast Cancer Wisconsin dataset using ChaosFEXRH25L75G+Random Forest, with a macro F1 score of 0.98.
| Dataset | ChaosFEXRH25L75G+RF | ChaosFEXRH50L50G+RF | ChaosFEXRH75L25G+RF |
|---|---|---|---|
| Iris | 1 | 1 | 0.97 |
| Ionosphere | 0.96 | 0.93 | 0.97 |
| Wine | 0.97 | 0.97 | 0.97 |
| Bank Note Authentication | 0.93 | 0.92 | 0.94 |
| Haberman's Survival | 0.66 | 0.57 | 0.59 |
| Breast Cancer Wisconsin | 0.98 | 0.99 | 0.97 |
| Statlog (Heart) | 0.86 | 0.87 | 0.71 |
| Seeds | 0.83 | 0.76 | 0.78 |

Table 13
[00120] When compared with traditional architectures that are either homogeneous NL or heterogeneous NL with a fixed (odd-even) structure, RHNL is found to yield comparable or superior classification performance.
[00121] The performance of the proposed randomized heterogeneous neurochaos-learning (RHNL) architecture is further evaluated on a time-series dataset, namely the Free Spoken Digit Dataset (FSDD). The FSDD includes recordings of six speakers reciting the digits 0 to 9, with 50 recordings per digit per speaker, preprocessed using the fast Fourier transform (FFT). For the analysis, the dataset instances corresponding to one speaker are utilized.
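A minimal sketch of such an FFT preprocessing step is given below, under the assumption that each recording is reduced to the magnitudes of its leading FFT coefficients normalised to [0, 1] so they can serve as stimuli for the chaotic input layer; the function name fft_features, the coefficient count n_coeffs, and the example filename are illustrative assumptions only.

```python
import numpy as np

def fft_features(waveform, n_coeffs=64):
    """Magnitudes of the first n_coeffs FFT coefficients of a recording,
    min-max normalised to [0, 1] (assumed preprocessing)."""
    spectrum = np.abs(np.fft.rfft(waveform))[:n_coeffs]
    lo, hi = spectrum.min(), spectrum.max()
    return (spectrum - lo) / (hi - lo + 1e-12)

# e.g. rate, waveform = scipy.io.wavfile.read("0_jackson_0.wav")
# stimuli = fft_features(waveform)
```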
[00122] The RHNL architectures are configured with tuned hyperparameters. The classification performance, measured in terms of the macro F1-score, is obtained across various configurations of the RHNL architectures using different classifier combinations. The results are presented in FIGS. 5-7 for ChaosFEXRH25L75G (500), ChaosFEXRH50L50G (600), and ChaosFEXRH75L25G (700), respectively, illustrating the effectiveness of the RHNL approach for time-series classification tasks.
[00123] FIG. 8 illustrates a cosine similarity classifier performance (800) of the RHNL architecture in a low training sample regime using the ChaosFEXRH25L75G configuration, compared to traditional standalone classifiers, in accordance with an embodiment of the present disclosure. The performance of the randomized heterogeneous neurochaos-learning (RHNL) architecture is evaluated in a low training sample regime, where Neurochaos Learning architectures exhibit strong performance. The RHNL configuration used for the analysis includes a heterogeneous chaotic neuron layer (304) with 25% logistic map neurons and 75% generalized Lüroth series (GLS) map neurons, corresponding to the ChaosFEXRH25L75G configuration. The evaluation is conducted on the MRI brain tumor dataset, consisting of 100 MRI brain images obtained from a public Kaggle repository. For each data instance, 12 features are generated and used for classification. The training process is conducted using a progressively increasing number of samples per class, starting with one sample per class and increasing up to 15 samples per class, with the remaining data used for testing. For each training sample size, 10 independent random trials are performed, and the average F1-score across these trials is reported. The classification performance of the ChaosFEXRH25L75G architecture, utilizing a cosine similarity classifier, is compared against several traditional standalone classifiers, including Decision Tree (DT), Random Forest (RF), AdaBoost (AB), Support Vector Machine (SVM), k-Nearest Neighbors (k-NN), and Gaussian Naive Bayes (GNB).
[00124] Further, the results illustrated in FIG. 8 demonstrate that the ChaosFEXRH25L75G architecture consistently maintains high classification performance under low training sample conditions when compared to the baseline classifiers. For a training set with one sample per class, ChaosFEXRH25L75G gives high performance, whereas Decision Tree and k-NN give low performance. For training sets with 4 and 5 samples per class, SVM outperforms ChaosFEXRH25L75G, and k-NN gives high performance with 6 samples per class. However, from 8 samples per class onwards, ChaosFEXRH25L75G again performs well and continues to outperform the other classifiers, which shows that the RHNL architecture is able to learn with very few training samples per class, making it highly desirable in practical applications where training data are scarce.
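A hedged sketch of this low-training-sample evaluation protocol is given below. The names low_sample_curve and classify are hypothetical: classify(F_tr, y_tr, F_te) is assumed to return predicted labels for the test portion (for example, via the cosine similarity classifier), and F is assumed to hold precomputed ChaosFEX features.

```python
import numpy as np
from sklearn.metrics import f1_score

def low_sample_curve(F, y, classify, max_per_class=15, trials=10, seed=0):
    """Average macro F1 over random trials as the number of training
    samples per class grows from 1 to max_per_class; the remaining data
    are used for testing, mirroring the protocol of FIG. 8."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    curve = []
    for n in range(1, max_per_class + 1):
        scores = []
        for _ in range(trials):
            tr = np.concatenate(
                [rng.choice(np.flatnonzero(y == c), n, replace=False)
                 for c in classes])
            te = np.setdiff1d(np.arange(len(y)), tr)
            y_hat = classify(F[tr], y[tr], F[te])
            scores.append(f1_score(y[te], y_hat, average="macro"))
        curve.append(float(np.mean(scores)))
    return curve
```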
[00125] The present disclosure demonstrates that the incorporation of both heterogeneity and randomness into the chaotic neural architecture (300) provides superior classification performance across a diverse range of datasets. The RHNL architecture introduces randomized spatial configurations of heterogeneous chaotic neurons within the input layer, providing a richer dynamic space for feature extraction compared to previously reported homogeneous neurochaos-learning (NL) or fixed-structure heterogeneous NL architectures. Experimental results confirm that RHNL achieves comparable or superior macro F1 scores relative to prior approaches across nearly all evaluated classification tasks. For example, RHNL achieved macro F1 scores of 1.0 on the Wine dataset, 0.99 on the Bank Note Authentication and Breast Cancer Wisconsin datasets, and 0.98 on the Free Spoken Digit Dataset (FSDD), with the ChaosFEXRH75L25G architecture combined with a Support Vector Machine (SVM) classifier. Notably, this outperformed the best reported homogeneous NL performance in the literature. RHNL further demonstrated strong performance on the debris-urban image dataset, achieving a macro F1 score of 0.94 across multiple classifiers, and on the brain tumor image dataset, achieving a macro F1 score of 0.881. RHNL is also observed to outperform standalone traditional machine-learning classifiers on nearly all evaluated datasets, with the exception of the Seeds dataset. In the low training sample regime, RHNL exhibits consistent superiority over conventional classifiers, demonstrating the architecture's ability to learn effectively from limited training data. The disclosed RHNL architecture is well suited to a wide variety of data classification tasks across domains, including future extensions to large-scale image datasets.
[00126] It will be appreciated that one or more additional components may be incorporated, modified, or omitted in the implementation of the present disclosure without departing from the scope as defined by the appended claims. The described embodiments are merely illustrative, and variations in design, structure, or material selection may be made to suit specific applications. Any such modifications, equivalents, or substitutions are intended to be within the scope and spirit of the present disclosure as defined by the claims.
[00127] While the foregoing describes various embodiments of the present disclosure, other and further embodiments of the present disclosure may be devised without departing from the basic scope thereof. The scope of the present disclosure is determined by the claims that follow. The present disclosure is not limited to the described embodiments, versions, or examples, which are included to enable a person having ordinary skill in the art to make and use the present disclosure when combined with information and knowledge available to the person having ordinary skill in the art.
ADVANTAGES OF THE PRESENT DISCLOSURE
[00128] The present disclosure provides a computing system and method for performing classification of data instances using a chaotic neural architecture that leverages nonlinear chaotic dynamics.
[00129] The present disclosure provides a randomized heterogeneous chaotic neuron layer that enables assignment of multiple types of chaotic neurons to input elements in a spatially randomized configuration, enhancing architectural diversity.
[00130] The present disclosure provides for the generation of chaotic trajectories through iterative nonlinear updates, enabling extraction of dynamic and information-rich feature representations from input data.
[00131] The present disclosure provides chaos-based feature extraction techniques that compute firing time, firing rate, entropy value, and energy value from neuron trajectories, yielding transformed feature spaces with improved class separability.
[00132] The present disclosure provides a mechanism for determining classification outputs by comparing chaos-based feature representations with reference class vectors using similarity-based measures such as cosine similarity.
[00133] The present disclosure provides improved classification robustness and adaptability across diverse data types, including time-series signals, grayscale images, color images, and tabular data records, particularly under low-data or noisy data conditions.
[00134] The present disclosure provides a computing architecture that integrates nonlinear chaotic processing into standard classification pipelines, enabling practical implementation on general-purpose or specialized computing hardware for a variety of machine learning applications.
Claims:
1. A computing system (100) for performing classification of a data instance, the computing system (100) comprising:
a memory (106); and
a processor (104) operatively coupled to the memory (106), the memory (106) including instructions that, when executed by the processor (104), cause the processor (104) to:
receive an input vector (302) derived from a data instance to be classified, wherein the input vector (302) comprises a plurality of elements;
assign, to each element of the input vector (302), a neuron type selected from a plurality of heterogeneous neuron types (304), wherein the plurality of heterogeneous neuron types (304) are arranged in a randomized spatial configuration within an input layer of a chaotic neural architecture (300);
generate, for each assigned neuron type, a chaotic trajectory (306) based on the corresponding element of the input vector (302), thereby forming a set of chaotic trajectories (306) corresponding to the plurality of elements;
extract a feature representation (308) based on the set of chaotic trajectories (306); and
determine a classification output (310) for the data instance based on the feature representation (308).
2. The computing system (100) as claimed in claim 1, wherein the plurality of heterogeneous neuron types (304) include at least one of: a logistic map neuron and a generalized Lüroth series (GLS) map neuron.
3. The computing system (100) as claimed in claim 2, wherein the processor (104) is configured to select the logistic map neuron and the generalized Lüroth series (GLS) map neuron in a predetermined proportion.
4. The computing system (100) as claimed in claim 3, wherein the predetermined proportion corresponds to one of:
25% logistic map neurons and 75% generalized Lüroth series (GLS) map neurons;
50% logistic map neurons and 50% GLS map neurons; and
75% logistic map neurons and 25% GLS map neurons.
5. The computing system (100) as claimed in claim 1, wherein the chaotic trajectory (306) for each assigned neuron type is generated by iteratively updating an initial neural activity value until a proximity condition with the corresponding element of the input vector (302) is satisfied.
6. The computing system (100) as claimed in claim 1, wherein the feature representation (308) includes at least one of: a firing time, a firing rate, an entropy value, or an energy value derived from the chaotic trajectories (306).
7. The computing system (100) as claimed in claim 1, wherein the processor (104) is configured to determine the classification output (310) by comparing the feature representation (308) with one or more reference vectors stored in the memory (106) and associated with known classes of the data instance.
8. The computing system (100) as claimed in claim 7, wherein the processor (104) is configured to perform the comparison using a cosine similarity measure.
9. The computing system (100) as claimed in claim 7, wherein the data instance includes at least one of: a time-series signal, a grayscale image, a color image, and a tabular data record.
10. A method (400) for performing classification of a data instance, the method (400) comprising:
receiving (402), by a processor (104), an input vector (302) derived from a data instance to be classified, wherein the input vector (302) comprises a plurality of elements;
assigning (404), by the processor (104), to each element of the input vector (302), a neuron type selected from a plurality of heterogeneous neuron types (304), wherein the plurality of heterogeneous neuron types (304) are arranged in a randomized spatial configuration within an input layer of a chaotic neural architecture (300);
generating (406), by the processor (104), for each assigned neuron type, a chaotic trajectory (306) based on the corresponding element of the input vector (302), thereby forming a set of chaotic trajectories (306) corresponding to the plurality of elements;
extracting (408), by the processor (104), a feature representation (308) based on the set of chaotic trajectories (306); and
determining (410), by the processor (104), a classification output (310) for the data instance based on the feature representation (308).