
An Artificial Intelligence (AI) System For Processing Of An Input And A Method Thereof

Abstract: The present disclosure proposes an Artificial Intelligence (AI) system (100) for processing of an input and a method (200) thereof. The AI system (100) comprises an input interface (12), an output interface (20), a defense module (16), an AI module (18) and at least a cache processing unit (14). The defense module (16) is adapted to receive input from at least one user and identify an attack vector amongst said input. The cache processing unit (14) is configured to store characteristics of the received inputs, which include at least identification information of the attack vector, and restricts the AI module (18) from processing the input on recognition of an attack vector in the input. The AI module (18) processes input data in dependence of communication received from the cache processing unit (14) and the defense module (16). Figure 1.


Patent Information

Application #
Filing Date
27 June 2022
Publication Number
52/2023
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

Bosch Global Software Technologies Private Limited
123, Industrial Layout, Hosur Road, Koramangala, Bangalore – 560095, Karnataka, India
Robert Bosch GmbH
Feuerbach, Stuttgart, Germany

Inventors

1. Manojkumar Somabhai Parmar
#202, Nisarg Appartment, Nr.L.G.Corner, Maninangar, Ahmedabad, Gujarat- 38008, India

Specification

Description: Complete Specification
The following specification describes and ascertains the nature of this invention and the manner in which it is to be performed

Field of the invention
[0001] The present disclosure relates to the field of artificial intelligence. In particular, the present disclosure proposes an Artificial Intelligence (AI) system for processing of an input and a corresponding method thereof.

Background of the invention
[0002] With the advent of data science, data processing and decision-making systems are implemented using artificial intelligence modules. The artificial intelligence modules use different techniques like machine learning, neural networks, deep learning etc. Most AI-based systems receive large amounts of data and process the data to train AI models. Trained AI models generate output based on the use cases requested by the user. Typically, AI systems are used in the fields of computer vision, speech recognition, natural language processing, audio recognition, healthcare, autonomous driving, manufacturing, robotics etc., where they process data to generate the required output based on certain rules/intelligence acquired through training.

[0003] To process the inputs and give a desired output, the AI systems use various models/algorithms which are trained using training data. Once the AI system is trained, it uses the models to analyze real-time data and generate an appropriate result. The models may be fine-tuned in real time based on the results. The models in the AI systems form the core of the system. A lot of effort, resources (tangible and intangible) and knowledge go into developing these models.

[0004] Currently, all AI models are deployed in a pipeline/serial fashion for embedded/IoT devices. They take input, process it using the AI model and provide output. The models/pipeline always assume that the data is new and needs to be processed, which results in repetitive calculations even if the input is the same. The situation is further complicated when the AI model also has a defense model running simultaneously to check whether the input is adversarial or an attack vector. The assumption also results in increased energy consumption without providing any benefit, and sometimes slows down the overall system. The nature of IoT and embedded devices is such that signals are generally not fast-changing, considering their low sampling frequency. Therefore, the input remains stable for a given time slice, within which there is no need to calculate the output of compute-heavy AI models.

Brief description of the accompanying drawings
[0005] An embodiment of the invention is described with reference to the following accompanying drawings:
[0006] Figure 1 depicts an AI system (100) for processing of an input;
[0007] Figure 2 illustrates method steps (200) of processing an input in an Artificial Intelligence (AI) system (100).

Detailed description of the drawings

[0008] Figure 1 depicts an Artificial Intelligence (AI) system (100) for processing of an input. The artificial intelligence system (100) comprises an input interface (12), an output interface (20), a defense module (16), an AI module (18) and at least a cache processing unit (14).

[0009] The input interface (12) receives input from at least one user. The input interface (12) is a hardware interface wherein a user can enter a query for the AI module (18) to process and generate an output. The output interface (20) is audio or visual display hardware that sends an output or a signal to said at least one user.

[0010] A module with respect to this disclosure can be either logic circuitry or a software program that responds to and processes logical instructions to produce a meaningful result. A module is implemented as any or a combination of: one or more microchips or integrated circuits interconnected using a parent board, hardwired logic, software stored by a memory device and executed by a microprocessor, microcontrollers, firmware, an application specific integrated circuit (ASIC), and/or a field programmable gate array (FPGA). Various modules can either be software embedded in a single chip or a combination of software and hardware where each module and its functionality is executed by separate independent chips connected to each other to function as the system (100). For example, a neural network (an embodiment of the AI module (18)) mentioned hereinafter can be software residing in the system (100) or the cloud, or embodied within an electronic chip. Such neural network chips are specialized silicon chips which incorporate AI technology and are used for machine learning.

[0011] The defense module (16) is adapted to receive input from said at least one user and identify an attack vector amongst said input. An attack vector is an adversarial input aimed at compromising the function of the AI module (18) or stealing the algorithm or the logic beneath the AI module (18). A model stealing attack uses a kind of attack vector that can make a digital twin/replica/copy of an AI module (18).

[0012] The defense module (16) uses numerous techniques to identify an attack vector amongst the inputs. These techniques use statistical and mathematical analysis along with analysis of the features in the artificial intelligence domain. In some techniques, the defense module runs a second model which is a variation of the model run by the AI module; for example, the second model has different network parameters from the model run by the AI module. In some embodiments, the defense module also comprises a comparator and more than one AI model being executed. The defense module is designed and constructed for statistical, mathematical, artificial intelligence and machine learning domain analysis of the input to identify an attack vector. The identification information of the attack vector is stored by the defense module (16) in a memory of the cache processing unit (14). The defense module (16) restricts the AI module (18) from processing the input on identification of an attack vector in the input.
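The second-model embodiment described above can be sketched as follows. This is an illustrative example only, not part of the specification: the two models, the divergence measure and the threshold are all assumptions, standing in for whatever models the AI module and defense module actually run.

```python
# Illustrative sketch: a defense check that runs a second, differently
# parameterized variant of the primary model and flags the input as a
# possible attack vector when the two outputs diverge.

def primary_model(x):
    # stand-in for the model run by the AI module
    return [2.0 * v for v in x]

def variant_model(x):
    # variant with perturbed network parameters, per the embodiment above
    return [2.1 * v for v in x]

def is_attack_vector(x, divergence_threshold=0.5):
    """Compare the two model outputs; a large divergence hints at an
    adversarial input probing the decision boundary."""
    a, b = primary_model(x), variant_model(x)
    divergence = sum(abs(p - q) for p, q in zip(a, b))
    return divergence > divergence_threshold
```

The comparator here is simply the summed absolute difference; an actual defense module would choose a divergence measure suited to its model outputs.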

[0013] The cache processing unit (14) is again a hardware module, or a combined software and hardware module, configured to store characteristics of the received inputs. The characteristics of the received inputs include at least identification information of the attack vector. The cache processing unit (14) restricts the AI module (18) from processing the input on recognition of an attack vector in the input.

[0014] The AI module (18) processes said input data and generates the output in dependence of communication received from the cache processing unit (14) and the defense module (16). An AI module (18) with reference to this disclosure can be explained as a component which runs a model. A model can be defined as a reference or inference set of data which uses different forms of correlation matrices. Using these models and the data from these models, correlations can be established between different types of data to arrive at some logical understanding of the data. A person skilled in the art would be aware of the different types of AI models such as linear regression, naïve Bayes classifier, support vector machine, neural networks and the like. It must be understood that this disclosure is not specific to the type of model being executed in the AI module (18) and can be applied to any AI module (18) irrespective of the AI model being executed. A person skilled in the art will also appreciate that the AI module (18) may be implemented as a set of software instructions, a combination of software and hardware, or any combination of the same.

[0015] It should be understood at the outset that, although exemplary embodiments are illustrated in the figures and described below, the present disclosure should in no way be limited to the exemplary implementations and techniques illustrated in the drawings and described below. As explained above, these various modules can either be software embedded in a single chip or a combination of software and hardware where each module and its functionality is executed by separate independent chips connected to each other to function as the system (100).

[0016] Figure 2 illustrates method steps of processing an input in an Artificial Intelligence (AI) system (100). The AI system (100) and its components have been elucidated above in accordance with figure 1.

[0017] Method step 201 comprises transmitting the input received to the cache processing unit (14). Method step 202 comprises recognizing the input by comparing characteristics of the input with characteristics of previous inputs stored in a memory of the cache processing unit (14).
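Method steps 201 and 202 could be sketched as below. This is an illustrative assumption, not the specification's implementation: it reduces an input to a hash fingerprint as the "characteristic" used for recognition; any other characteristic extraction would fit the same structure.

```python
import hashlib

def fingerprint(raw_input: bytes) -> str:
    """One simple choice of input characteristic: a content hash."""
    return hashlib.sha256(raw_input).hexdigest()

class CacheMemory:
    """Sketch of the cache processing unit's memory (steps 201-202)."""
    def __init__(self):
        self._store = {}  # fingerprint -> stored characteristics/record

    def recognize(self, raw_input: bytes):
        """Step 202: compare against characteristics of previous inputs."""
        return self._store.get(fingerprint(raw_input))

    def remember(self, raw_input: bytes, record):
        self._store[fingerprint(raw_input)] = record
```

A hash only recognizes exactly repeated inputs; for slowly drifting sensor signals a quantized or feature-based characteristic may be preferable.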

[0018] Method step 203 comprises identifying an attack vector in the input by means of the defense module (16) in dependence of recognition. The techniques for identification of an attack vector have been discussed in brief in the above paragraphs. In an alternate embodiment of the present disclosure, the attack vector identification information is further used to calculate an information gain. If the information gain exceeds a pre-defined threshold, the user is blocked and a notification is sent to the owner of the AI system (100). If the information gain is below the pre-defined threshold, although an attack vector was detected, the output generated by the AI module (18) may be modified and sent to the output interface (20).
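The alternate embodiment's threshold logic might look like the following sketch. The per-query gain values, the threshold, and the action names are illustrative assumptions; the specification does not define how information gain is computed.

```python
# Hypothetical sketch: accumulate an "information gain" score per user and
# decide between blocking the user and returning a modified output.

def handle_attack(user_gains, user_id, query_gain, threshold=1.0):
    """Add this query's information gain to the user's running total and
    return the action the system should take."""
    user_gains[user_id] = user_gains.get(user_id, 0.0) + query_gain
    if user_gains[user_id] > threshold:
        # attacker has learned too much: block and notify the system owner
        return "block_and_notify_owner"
    # attack detected but gain still low: serve a modified output instead
    return "send_modified_output"
```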

[0019] In addition, the user profile may be used to determine whether the user is a habitual attacker, a one-time attacker or only an incidental attacker. Depending upon the user profile, the steps for unlocking of the system (100) may be determined. A first-time attacker may be locked out temporarily, while stricter locking steps may be suggested for a habitual attacker.
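One way to express this profile-dependent escalation is a simple tiered policy. The tier boundaries and policy names below are assumptions for illustration; the specification only distinguishes first-time from habitual attackers.

```python
# Illustrative escalation policy keyed on the user's attack history.

def lockout_policy(attack_count: int) -> str:
    """Map a user's recorded number of attacks to a lockout severity."""
    if attack_count <= 1:
        return "temporary_lockout"  # first-time attacker
    if attack_count <= 3:
        return "extended_lockout"   # repeated attacker (assumed middle tier)
    return "strict_lockout"         # habitual attacker
```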

[0020] Method step 204 comprises storing the characteristics of the attack vector in the memory of the cache processing unit (14). The characteristics of the attack vector include not only the attack vector identification but also the metadata associated with the attack such as the user profile, frequency of attack, time stamps, severity of attack and the like.
[0021] Method step 205 comprises processing the input data by means of the AI module (18) to generate the output in dependence of communication received from the cache processing unit (14) and the defense module (16). The processing of input data by means of the AI module (18) is restricted on recognition of an attack vector.

[0022] The core of the invention is based on the idea that IoT and embedded device signals are not fast-changing and the devices perform repetitive calculations. Hence a caching mechanism with a memory of the characteristics of the cached data is implemented to store a fixed (small) number of input and output pairs. The caching ensures that whenever the same inputs are detected consecutively, the output is provided without involving the heavy calculation of AI models. The caching results in increased throughput, reduced computing cycles and reduced energy consumption for edge devices powered by AI algorithms.
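A fixed-size input/output cache of this kind can be sketched with a least-recently-used eviction rule. The LRU policy and the capacity value are assumptions; the specification leaves the update mechanism open.

```python
from collections import OrderedDict

class InputOutputCache:
    """Small, fixed-size store of input/output pairs so repeated inputs
    can skip the compute-heavy AI model (illustrative sketch)."""
    def __init__(self, capacity=16):
        self.capacity = capacity
        self._pairs = OrderedDict()

    def get(self, key):
        if key in self._pairs:
            self._pairs.move_to_end(key)  # recently used entries stay longer
            return self._pairs[key]
        return None

    def put(self, key, output):
        self._pairs[key] = output
        self._pairs.move_to_end(key)
        if len(self._pairs) > self.capacity:
            self._pairs.popitem(last=False)  # evict least recently used
```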

[0023] In one embodiment of the present system (100) as depicted in figure 1, a parallel caching mechanism is implemented. Here, identifying an attack vector comprises parallel processing by the cache processing unit (14) and the defense module (16). Any time a new input comes, in parallel to processing it, a search is initiated in the memory of the cache processing unit (14) based on the characteristics of the input. If the search is successful, the output is provided and processing by the AI model is stopped. This ensures that the overall performance of the model is not impacted, as the time taken to process any input is always less than or equal to the time taken when the input is encountered for the first time. However, it has only a partial impact on computing and energy efficiency, since the AI model may already have begun processing before a cache hit stops it.
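The parallel embodiment could be sketched as below. This is an assumption-laden illustration: `cache` is a plain dictionary, `model_fn` stands in for the AI module, and the stop signal is a simple event flag; a real model would check the flag between layers to abort early.

```python
import threading

def process_parallel(x, cache, model_fn):
    """Parallel embodiment: the cache search and the model run start
    together; a cache hit signals the model path to stop."""
    result = {}
    hit = threading.Event()

    def search_cache():
        if x in cache:
            result["cached"] = cache[x]
            hit.set()  # tell the model path to stop

    def run_model():
        # a real model would poll `hit` between layers to abort early
        if not hit.is_set():
            result["computed"] = model_fn(x)

    threads = [threading.Thread(target=search_cache),
               threading.Thread(target=run_model)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    out = result.get("cached", result.get("computed"))
    cache[x] = out  # remember the pair for next time
    return out
```

On a hit the cached output wins regardless of whether the model path also ran, which is why latency never exceeds the first-time case while the compute saving is only partial.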

[0024] In an alternate embodiment of the present system (100) in accordance with figure 1, another caching mechanism is implemented. Here, identifying an attack vector comprises sequential processing of the input by the cache processing unit (14) and the defense module (16). Any time a new input comes, it first goes to the cache processing unit (14), which initiates a search in the database for a matching input. If the search is successful, the output is provided and the AI model is never invoked for processing the data. If the search is unsuccessful, the AI model and correspondingly the defense model are invoked, the output is provided, and the memory is updated with the new input and output pair. This ensures that maximum compute and energy efficiency is achieved, as duplicate processing is completely avoided.
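The sequential embodiment reduces to a check-then-compute flow, sketched below. `defense_check` and `model_fn` are hypothetical stand-ins, and caching a rejection marker for detected attacks is an assumption on top of the described flow.

```python
def process_sequential(x, cache, defense_check, model_fn):
    """Sequential embodiment: cache first; defense and model only on a miss."""
    if x in cache:
        return cache[x]  # hit: model and defense are never invoked
    if defense_check(x):
        cache[x] = "attack_rejected"  # restrict the AI module (assumption)
        return "attack_rejected"
    out = model_fn(x)
    cache[x] = out  # update memory with the new input/output pair
    return out
```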

[0025] In both cases, the overall performance (throughput, energy) is heavily dependent on the size of the memory of the cache processing unit (14) and the mechanism to update it to get an optimal hit ratio in caching. The size of the memory is minimal in embedded and IoT devices. Therefore, depending on the rate of change of input, the available computing power of the device and the compute needs of the AI model, the memory size of the cache processing unit (14) is decided, and a trade-off is made between the performance of the system (100) and the capacity of the cache processing unit (14). The characteristics of an attack are stored in the cache processing unit (14) for a particular duration depending upon a frequency of the input: the more often the same input appears, the longer it stays in the memory of the cache processing unit (14). Alternate mechanisms can be used to update the memory to improve performance, as they are dependent on the use case and input data stream.
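Frequency-dependent retention might be sketched as a time-to-live that grows with an entry's hit count. The base TTL and the linear scaling are illustrative assumptions; the specification only requires that more frequent inputs stay cached longer.

```python
import time

class FrequencyAwareCache:
    """Entries whose input recurs more often are retained longer (sketch)."""
    def __init__(self, base_ttl=10.0):
        self.base_ttl = base_ttl
        self._entries = {}  # key -> (output, hit_count, last_seen)

    def put(self, key, output, now=None):
        now = time.monotonic() if now is None else now
        _, count, _ = self._entries.get(key, (None, 0, now))
        self._entries[key] = (output, count + 1, now)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        entry = self._entries.get(key)
        if entry is None:
            return None
        output, count, last_seen = entry
        ttl = self.base_ttl * count  # more occurrences -> longer retention
        if now - last_seen > ttl:
            del self._entries[key]   # expired: evict the entry
            return None
        return output
```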

[0026] A person skilled in the art will appreciate that while these method steps describe only a series of steps to accomplish the objectives, these methodologies may be implemented with modification and adaptations to the system (100) disclosed herein.

[0027] This idea, to develop an Artificial Intelligence (AI) system (100) for processing of an input and a method thereof, is applicable when AI models do not involve any history/memory element or active learning mechanism (learning on the fly from incoming data). The AI model caching technique is inspired by conventional processor architecture with its caching mechanism. Within processor architecture, cache prefetching is a technique used by computer processors to boost execution performance by fetching instructions or data from their original storage in slower memory into a faster local memory before they are actually needed. This improves the performance of a processor by partially eliminating its wait time due to memory latency.

[0028] It must be understood that the embodiments explained in the above detailed description are only illustrative and do not limit the scope of this invention. Any modification and adaption of the Artificial Intelligence (AI) system (100) for processing of an input and the method thereof are envisaged and form a part of this invention. The scope of this invention is limited only by the claims.
Claims: We Claim:

1. An Artificial Intelligence (AI) system (100) for processing of an input, said artificial intelligence system (100) comprising: an input interface (12) to receive input from at least one user; an output interface (20) to send an output to said at least one user; characterized in that the AI system (100) comprises:
a defense module (16) adapted to receive input from said at least one user, the defense module (16) adapted to identify an attack vector amongst said input, said identification information of the attack vector is stored in a memory of a cache processing unit (14);
the cache processing unit (14) configured to store characteristics of the received inputs;
an AI module (18) to process said input data and generate the output in dependence of communication received from the cache processing unit (14) and the defense module (16).

2. An Artificial Intelligence (AI) system (100) for processing of an input as claimed in claim 1, wherein characteristics of the received inputs include at least an identification information of the attack vector.

3. An Artificial Intelligence (AI) system (100) for processing of an input as claimed in claim 1, wherein the cache processing unit (14) restricts the AI module (18) from processing the input on recognition of an attack vector in the input.

4. An Artificial Intelligence (AI) system (100) for processing of an input as claimed in claim 1, wherein the defense module (16) restricts the AI module (18) from processing the input on identification of an attack vector in the input.

5. A method of processing an input in an Artificial Intelligence (AI) system (100), the system (100) comprising: an input interface (12) to receive input from at least one user; an output interface (20) to send an output to said at least one user; a defense module (16) adapted to receive input from said at least one user; a cache processing unit (14) in communication with the output interface (20) and an AI module (18), the method comprising:
transmitting the input received to the cache processing unit (14);
recognizing the input by comparing characteristics of the input with characteristics of previous inputs stored in a memory of the cache processing unit (14);
identifying an attack vector in the input by means of the defense module (16) in dependence of recognition information received from the cache processing unit (14);
storing the characteristics of the attack vector in the memory of the cache processing unit (14);
processing the input data by means of the AI module (18) to generate the output in dependence of communication received from the cache processing unit (14) and the defense module (16).

6. The method of processing an input in an Artificial Intelligence (AI) system (100) as claimed in claim 5, wherein processing the input data by means of the AI module (18) is restricted on recognition of an attack vector.

7. The method of processing an input in an Artificial Intelligence (AI) system (100) as claimed in claim 5, wherein identifying an attack vector comprises a parallel processing by the cache processing unit (14) and the defense module (16).

8. The method of processing an input in an Artificial Intelligence (AI) system (100) as claimed in claim 5, wherein identifying an attack vector comprises a sequential processing of the input by the cache processing unit (14) and the defense module (16).

9. The method of processing an input in an Artificial Intelligence (AI) system (100) as claimed in claim 5, wherein the characteristics of an attack vector are stored in the cache processing unit (14) for a particular duration depending upon a frequency of the input.

Documents

Application Documents

# Name Date
1 202241036684-POWER OF AUTHORITY [27-06-2022(online)].pdf 2022-06-27
2 202241036684-FORM 1 [27-06-2022(online)].pdf 2022-06-27
3 202241036684-DRAWINGS [27-06-2022(online)].pdf 2022-06-27
4 202241036684-DECLARATION OF INVENTORSHIP (FORM 5) [27-06-2022(online)].pdf 2022-06-27
5 202241036684-COMPLETE SPECIFICATION [27-06-2022(online)].pdf 2022-06-27
6 202241036684-Power of Attorney [28-06-2023(online)].pdf 2023-06-28
7 202241036684-Covering Letter [28-06-2023(online)].pdf 2023-06-28
8 202241036684-FORM 18 [25-08-2023(online)].pdf 2023-08-25
9 202241036684-FER.pdf 2025-05-01
10 202241036684-OTHERS [31-10-2025(online)].pdf 2025-10-31
11 202241036684-FER_SER_REPLY [31-10-2025(online)].pdf 2025-10-31
12 202241036684-CORRESPONDENCE [31-10-2025(online)].pdf 2025-10-31
13 202241036684-CLAIMS [31-10-2025(online)].pdf 2025-10-31
14 202241036684-RELEVANT DOCUMENTS [05-11-2025(online)].pdf 2025-11-05
15 202241036684-RELEVANT DOCUMENTS [05-11-2025(online)]-1.pdf 2025-11-05
16 202241036684-PETITION UNDER RULE 137 [05-11-2025(online)].pdf 2025-11-05
17 202241036684-PETITION UNDER RULE 137 [05-11-2025(online)]-1.pdf 2025-11-05
18 202241036684-FORM 13 [05-11-2025(online)].pdf 2025-11-05

Search Strategy

1 SearchStrategyMatrixE_19-03-2024.pdf