
"Crypto System Identification Using Neural Networks"

Abstract: The present invention discloses a method for identifying and classifying cryptographic method used for encrypting a plain text from encrypted message, said method comprising the steps of: converting the encrypted message into bit pattern that comprises principal components and a cross-correlated series; and analyzing the bit pattern obtained in step (a) using an artificial neural network so as to identify and classify the cryptographic method used for encrypting the plain text.


Patent Information

Application #
Filing Date
22 December 2005
Publication Number
40/2009
Publication Type
INA
Invention Field
ELECTRONICS
Status
Email
ipo@knspartners.com
Parent Application
Patent Number
Legal Status
Grant Date
2020-10-12
Renewal Date

Applicants

INDIAN INSTITUTE OF TECHNOLOGY
HAUZ KHAS, NEW DELHI-110 016, INDIA.
SYSTEMS & ANALYSIS GROUP (SAG)
DEFENCE RESEARCH & DEVELOPMENT ORGANIZATION, METCALFE HOUSE COMPLEX, NEW DELHI-110054, INDIA.

Inventors

1. CHANDRA
MATHEMATICS DEPARTMENT, INDIAN INSTITUTE OF TECHNOLOGY, DELHI, HAUZ KHAS, NEW DELHI-110016, INDIA.

Specification

FIELD OF THE INVENTION:
The present invention relates to a method and a system for cryptographic method identification. Particularly, the present invention relates to a method and a system to identify a cryptographic method using Artificial Neural Networks. More particularly, the present invention relates to a method and a system to provide interclass and intra class classification of encrypted messages with a high degree of accuracy.
BACKGROUND AND PRIOR ART DESCRIPTION:
The background section is primarily divided into two parts, the first part describing the various cryptographic methods that are available and the second part describing the various artificial neural networks that are available.
Cryptographic methods:
As a person skilled in the art would be aware, the cryptographic methods are categorized into two basic classes namely, (a) Block Cipher cryptographic method and (b) Stream Cipher cryptographic method.
Block Cipher cryptographic method:
A block cipher is a type of symmetric key encryption algorithm that transforms a fixed length block of plaintext (unencrypted text) data into a block of cipher text (encrypted text) data of the same length. This transformation takes place under the action of a user provided secret key. Applying the reverse transformation to the cipher text block using the same secret key performs decryption. Since different plaintext blocks are mapped to different cipher text blocks (to allow unique decryption), a block cipher effectively provides a permutation (one to one reversible correspondence) of the set of all possible messages. The permutation effected during any particular encryption is of course secret, since it is a function of the secret key.
Block ciphers encrypt a plaintext block by a process that has several rounds. In each round, the same transformation (also known as the round function) is applied to the data using a subkey. The set of subkeys is usually derived from the user-provided secret key by a special function. The set of subkeys is called the key schedule. The number of
rounds in the cipher depends on the desired security level and the consequent tradeoff with performance. In most cases, an increased number of rounds will improve the security offered by a block cipher, but for some ciphers the number of rounds required to achieve adequate security will be too large for the cipher to be practical or desirable. The Applicants here below describe a few block cipher systems, i.e. Enhanced RC6 (Ragab et al, 2001) and SERPENT (Biham et al, 1998), that are commonly used for encrypting plain text into cipher text for the purpose of cryptographic method identification.
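By way of illustration only, the round-iteration structure described above can be sketched as follows; the block size, number of rounds, key schedule and round function used here are toy stand-ins chosen for this sketch and do not correspond to any cipher discussed in this specification.

from typing import List

BLOCK_BITS = 64                     # toy block size (assumption for illustration)
NUM_ROUNDS = 16                     # toy number of rounds
MASK = (1 << BLOCK_BITS) - 1

def key_schedule(secret_key: int) -> List[int]:
    # Derive one subkey per round from the user-provided secret key (toy derivation).
    return [(secret_key * (i + 1) + i) & MASK for i in range(NUM_ROUNDS)]

def round_function(block: int, subkey: int) -> int:
    # Placeholder round transformation: mix in the subkey, then rotate left by 3 bits.
    mixed = (block ^ subkey) & MASK
    return ((mixed << 3) | (mixed >> (BLOCK_BITS - 3))) & MASK

def encrypt_block(plaintext_block: int, secret_key: int) -> int:
    # The same round function is applied once per subkey in the key schedule.
    state = plaintext_block & MASK
    for subkey in key_schedule(secret_key):
        state = round_function(state, subkey)
    return state

print(hex(encrypt_block(0x0123456789ABCDEF, secret_key=0xDEADBEEF)))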
Enhanced RC6
Enhanced RC6 is a block cipher with a block size of 256 bits. It uses an expanded key table S that is derived from the user-supplied secret key. The size of the S table depends on the number of rounds: t = 4r + 8 words. Choosing a larger number of rounds r presumably provides an increased level of security. It uses a variable length cryptographic key, which could be a maximum of 255 characters.
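For example, with this relation, choosing r = 20 rounds would give an expanded key table of t = 4 × 20 + 8 = 88 words (the value r = 20 is chosen here purely for illustration).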
Serpent
Serpent (Biham et al, 1998) is a 32-round SP-network operating on four 32-bit words, thus giving a block size of 128 bits. Serpent encrypts a 128-bit plaintext P to a 128-bit
cipher text C in 32 rounds under the control of 33 128-bit subkeys K0, ..., K32.
The cipher itself consists of
an initial permutation IP;
32 rounds, each consisting of a key mixing operation, a pass through S-boxes, and (in all but the last round) a linear transformation. In the last round, this linear transformation is replaced by an additional key mixing operation;
a final permutation FP.
This gives the encrypted text. The reverse implementation of the algorithm with the cipher text as input and the same key would result in the original plain text message.
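The high-level flow just described (an initial permutation, 32 rounds of key mixing, S-box substitution and a linear transformation, with an extra key mixing replacing the linear transformation in the last round, then a final permutation) can be set out schematically as below. The permutation, S-box and linear transformation functions are identity placeholders, so this is only a structural outline and not an implementation of Serpent itself.

# Structural outline only: the component functions are identity placeholders,
# not Serpent's actual IP, S-boxes, linear transformation or FP.
def initial_permutation(x): return x
def s_box_layer(x, round_index): return x
def linear_transformation(x): return x
def final_permutation(x): return x

def serpent_outline(plaintext_128: int, subkeys: list) -> int:
    # subkeys holds the 33 round subkeys K0..K32.
    assert len(subkeys) == 33
    state = initial_permutation(plaintext_128)
    for i in range(32):
        state = state ^ subkeys[i]              # key mixing
        state = s_box_layer(state, i)           # pass through S-boxes
        if i < 31:
            state = linear_transformation(state)
        else:
            state = state ^ subkeys[32]         # last round: additional key mixing instead
    return final_permutation(state)

print(serpent_outline(0x1234, subkeys=[0] * 33))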
Stream Cipher Cryptographic method:
A stream cipher is a type of symmetric encryption algorithm. Stream ciphers can be designed to be exceptionally fast, much faster than any block cipher. While block ciphers
operate on large blocks of data, stream ciphers typically operate on smaller units of plaintext, usually bits. The encryption of any particular plaintext with a block cipher will result in the same cipher text when the same key is used. With a stream cipher, the transformation of these smaller plaintext units will vary depending on when they are encountered during the encryption process. A stream cipher generates what is called a keystream (a sequence of bits used as a key). Encryption is accomplished by combining the keystream with the plaintext, usually with the bitwise XOR operation. The generation of the keystream can be independent of the plaintext and cipher text, yielding what is termed a synchronous stream cipher, or it can depend on the data and its encryption, in which case the stream cipher is said to be self-synchronizing. Most stream cipher designs are for synchronous stream ciphers. Interest in stream cipher design is most commonly attributed to the appealing theoretical properties of the one-time pad. A one-time pad, sometimes called the Vernam Cipher, uses a string of bits that is generated completely at random. The keystream is of the same length as the plaintext message, and the random string is combined using the bitwise XOR with the plaintext to produce the cipher text. Since the entire keystream is random, even an opponent with infinite computational resources can only guess the plaintext if he or she sees the cipher text. Such a cipher is said to offer perfect secrecy, and the analysis of the one-time pad is seen as one of the cornerstones of modern cryptography. Stream ciphers were developed as an approximation to the action of the one-time pad. The Applicants here below describe a few stream cipher cryptographic methods, i.e. LILI-128 (Dawson et al, 2000) and RABBIT (Boesgaard et al, 2003), that are commonly used for encrypting plain text into cipher text for the purpose of cryptographic method identification.
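As a minimal illustration of the keystream principle described above, encryption and decryption are the same XOR operation; the keystream generator used below is a seeded pseudo-random stand-in chosen only for this sketch and is not cryptographically secure, nor is it any cipher discussed in this specification.

# Minimal illustration of stream-cipher encryption: XOR the plaintext with a keystream.
import random

def keystream(seed: int, length: int) -> bytes:
    # Insecure stand-in keystream generator, for illustration only.
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(length))

def xor_with_keystream(data: bytes, seed: int) -> bytes:
    ks = keystream(seed, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

message = b"attack at dawn"
cipher_text = xor_with_keystream(message, seed=1234)      # encryption
recovered = xor_with_keystream(cipher_text, seed=1234)    # decryption (same operation)
assert recovered == message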
LILI-128
The LILI-128 keystream generator is a LFSR based synchronous stream cipher with a 128 bit key. The LILI-128 keystream generator is a simple and fast keystream generator that uses two binary LFSRs and two functions to generate a pseudorandom binary keystream sequence. The structure of the LILI keystream generators can be grouped into two subsystems based on the functions they perform: clock control and data generation. The LFSR for the clock-control subsystem is regularly clocked; a shift occurs in the
register when this clock input changes state from one to zero. The output of this subsystem is an integer sequence which controls the clocking of the LFSR within the data-generation subsystem.
The state of the LILI-128 is defined to be the contents of the two LFSRs. The functions fc and fd are evaluated on the current state data, and the feedback bits are calculated. Then the LFSRs are clocked and the keystream bit is output. At initialisation, the 128 bit key is used directly to form the initial values of the two shift registers, from left to right, the first 39 bits in LFSRc then the remaining 89 bits in LFSRd. The output of the data-generation subsystem is the keystream, which is XORed with the plaintext message to obtain the cipher text. On XORing the keystream with the cipher text the original plain text can be retrieved.
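For reference, the behaviour of a binary LFSR of the kind used in LILI-128 can be sketched as below; the register length and feedback taps here are arbitrary illustrative choices, not the actual LILI-128 polynomials or clocking scheme.

# Generic Fibonacci LFSR sketch (illustrative taps, not the LILI-128 polynomials).

def lfsr_bits(state_bits, taps, count):
    # state_bits: list of 0/1 values giving the initial register contents.
    # taps: stage indices XORed together to form the feedback bit.
    state = list(state_bits)
    out = []
    for _ in range(count):
        out.append(state[-1])                       # output the last stage
        feedback = 0
        for t in taps:
            feedback ^= state[t]
        state = [feedback] + state[:-1]             # shift, insert feedback bit
    return out

# Example: a 5-stage register clocked 10 times.
print(lfsr_bits([1, 0, 0, 1, 1], taps=[0, 2], count=10))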
Rabbit
The Rabbit algorithm (Boesgaard et al, 2003) takes a 128-bit secret key and a plaintext message as input and generates for each iteration an output block of 128 pseudo-random bits from a combination of the internal state bits. Encryption/decryption is done by XOR'ing the pseudo-random data with the plaintext/ciphertext. The size of the internal state is 513 bits divided between eight 32-bit state variables, eight 32-bit counters and one counter carry bit. The eight state variables are updated by eight coupled non-linear functions. The counters ensure a lower bound on the period length for the state variables.
Cryptographic method identification is one of the challenging tasks in Crypt analysis. Statistical Techniques have been widely used by researchers for cryptanalysis, however significant results have not been achieved.
In recent years neural computing has emerged as a practical technology, with successful applications in many fields. The majority of these applications are concerned with problems in pattern recognition. Historically, many concepts in neural computing have been inspired by studies of biological networks. Neural networks are modelled on the way that humans might approach pattern recognition tasks. Neural networks are general function
approximators and they can therefore be trained to compute any desired function including decision-making functions. The advantage of a neural network is that it is a powerful data-modeling tool, which has the ability to recognize patterns even if there is no functional relationship between input and output. Neural Networks have been extensively used in various fields like Process Modeling and Control, Machine Diagnostics, Portfolio Management, Target Recognition, Voice Recognition, Intelligent Searching, and Fraud Detection. However, very little work has been carried out on cryptographic method identification (Cryptanalysis) using Neural Networks.
Large-scale use of information technology has made it immensely important to secure data from unauthorized access. Various cryptographic algorithms have been suggested in the literature. Extensive work has been done by cryptanalysts for finding out the weaknesses in these algorithms, and thus motivating research for better and stronger cryptographic algorithms.
With advancement in computing technology and mathematical sciences, cryptographic algorithms have to be made more sophisticated so that the computational complexity and the time required to break into the cipher systems is too large. Until now various techniques have been suggested by various cryptanalysts however it is seen that machine learning techniques have not been used in cryptanalysis successfully.
Machine learning techniques are capable of capturing the underlying variation in the data and making effective use of it for classification and prediction. Neural Networks have found applications in various fields of science and are an effective machine-learning tool. A Neural Network is a powerful data-modeling tool, which can identify methods or systems from the data even if the functional relationship between the data and the type of cryptographic method is not known. Due to this basic characteristic of Neural Networks, they can be effectively employed to identify a cryptographic method using encrypted messages alone.
An Artificial Neural Network (ANN) is an information-processing paradigm that is inspired
by the way biological nervous systems (such as the brain) process information. The key element of this paradigm is the novel structure of the information processing system. Artificial Neural Networks, like humans, learn by example. Neural Networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or statistical techniques.
The most commonly used neural network model is the Multilayered Feed Forward Neural Network using the Backpropagation algorithm for training (Rumelhart et al, 1986). A number of variations of Backpropagation have been suggested, such as adding a Momentum factor to the previous weight change and Resilient Backpropagation (Riedmiller et al, 1993). Conjugate Gradient Algorithms like the Polak-Ribiere Update (Fletcher et al, 1964, Hagan et al, 1996), Powell-Beale Restarts (Powell et al, 1977) and the scaled conjugate gradient algorithm (Moller et al, 1993) have been developed to improve the efficiency of the Backpropagation algorithm by updating the weights in the conjugate direction of the gradient. Some of the commonly known algorithms used for the training of ANN are described in brief in the following paragraphs:
Backpropagation Algorithm
The standard Backpropagation algorithm (Rumelhart et al, 1986) can train any network as long as its weights, net input and transfer functions have derivative functions. Here the weights are adjusted according to gradient descent.
(Equation removed)
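In its commonly cited form (reproduced here only for reference, and not necessarily identical to the equation omitted above), the gradient-descent weight update is

\Delta w_{ij} = -\eta \, \frac{\partial E}{\partial w_{ij}}

where E is the error (performance) function, w_{ij} is the weight being adjusted and \eta is the learning rate.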

The problem with the standard gradient descent method is that it at times gets trapped into local minima, and hence variations were suggested.
Gradient Descent Algorithm with Momentum
Here the weights are adjusted according to gradient descent. However some weightage is
given to the previous weight change also.
(Equation removed)

where α is the smoothing factor for applying the momentum and η is the learning rate.
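A standard statement of this rule, using the symbols just defined (and not necessarily identical to the equation omitted above), is

\Delta w_{ij}(t) = -\eta \, \frac{\partial E}{\partial w_{ij}} + \alpha \, \Delta w_{ij}(t-1).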
Resilient Back Propagation
The purpose of the resilient back propagation (Rprop) training algorithm (Riedmiller et al, 1993) is to eliminate the harmful effects of the magnitudes of the partial derivatives. Only the sign of the derivative is used to determine the direction of the weight update; the magnitude of the derivative has no effect on the weight update. The size of the weight change is determined by a separate update value. The update value for each weight and bias is increased by a factor deltinc whenever the derivative of the performance function with respect to that weight has the same sign for two successive iterations. The update value is decreased by a factor deltdec whenever the derivative with respect to that weight changes sign from the previous iteration. If the derivative is zero, then the update value remains the same. Whenever the weights are oscillating, the weight change will be reduced. If the weight continues to change in the same direction for several iterations, then the magnitude of the weight change will be increased.
(Equation removed)

Every time the partial derivative of the corresponding weight w_ij changes its sign, indicating that the last update was too big and the algorithm has jumped over a local minimum, the update value Δ_ij is decreased by a factor η−. If the derivative retains its sign, the update value is slightly increased in order to accelerate convergence in shallow regions.
However, there is one exception: if the partial derivative changes sign, i.e. the previous step was too large and the minimum was missed, the previous weight change is reverted.
(Equation removed)
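A compact sketch of the sign-based update-value rule described above is given below. The increase and decrease factors and the step-size bounds are illustrative constants, and the handling of a sign change (simply skipping the weight update rather than reverting the previous change) is a simplification of the rule in the text.

# Sketch of the Rprop update-value rule described above (simplified; constants are illustrative).
DELT_INC, DELT_DEC = 1.2, 0.5          # assumed typical increase/decrease factors
DELTA_MIN, DELTA_MAX = 1e-6, 50.0      # assumed bounds on the update value

def rprop_step(weight, grad, prev_grad, delta):
    # Update one weight using only the sign of the current and previous gradient.
    sign_change = grad * prev_grad
    if sign_change > 0:                 # same sign: grow the update value
        delta = min(delta * DELT_INC, DELTA_MAX)
    elif sign_change < 0:               # sign flipped: shrink the update value
        delta = max(delta * DELT_DEC, DELTA_MIN)
        grad = 0.0                      # simplification: skip this update (original Rprop reverts the previous change)
    # Move against the sign of the gradient by the current update value.
    if grad > 0:
        weight -= delta
    elif grad < 0:
        weight += delta
    return weight, delta, grad          # returned grad becomes prev_grad next iteration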

Conjugate Gradient Algorithms
In the conjugate gradient algorithms a search is performed along conjugate directions, which produces generally faster convergence than Gradient descent directions. There are different variations of Conjugate Gradient algorithms which are listed below.
In Polak-Ribiere Update (Fletcher et al, 1964, Hagan et al, 1996) the search direction at
each iteration is determined by
(Equation removed)

For the Polak-Ribiere update, the constant β_k is computed by
(Equation removed)

This is the inner product of the previous change in the gradient with the current gradient divided by the norm squared of the previous gradient.
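Written out from that description, the Polak-Ribiere constant is

\beta_k = \frac{\Delta g_{k-1}^{T} g_k}{\| g_{k-1} \|^{2}}, \qquad \Delta g_{k-1} = g_k - g_{k-1},

where g_k denotes the gradient at iteration k; the search direction then takes the usual conjugate gradient form p_k = -g_k + \beta_k \, p_{k-1}. These are the standard formulations and are given here only for reference; the omitted equations may differ in notation.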
In the Powell-Beale Restarts algorithm (Powell et al, 1977) restart will be carried out if there is very little orthogonality left between the current gradient and the previous gradient. This is tested with the following inequality.
(Equation removed)
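The restart test commonly associated with Powell-Beale restarts (given here only for reference, since the inequality itself is omitted above) is

\left| g_{k-1}^{T} g_k \right| \ge 0.2 \, \| g_k \|^{2}.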

If this condition is satisfied, the search direction is reset to the negative of the gradient. Each of the conjugate gradient algorithms discussed so far requires a line search at each iteration. This line search is computationally expensive, since it requires that the network response to all training inputs be computed several times for each search. The scaled conjugate gradient algorithm (SCG), developed by Moller (1993), was designed to avoid the time-consuming line search. This algorithm combines the model-trust region approach (used in the Levenberg-Marquardt algorithm (More, 1997)) with the conjugate gradient approach.
Although various techniques have been used for cryptanalysis, Artificial Neural Networks have not been explored till now. Statistical methods have been tried in the past for identifying cipher systems, but accuracy beyond 60% could not be achieved.
OBJECTS OF THE PRESENT APPLICATION
The object of the present application is to provide a system and a method that can be employed for inter class and intra class identification of cryptographic method. Another object of the present application is to provide a system and a method to identify and classify the information coming from Block cipher system and Stream cipher system into their respective classes.
SUMMARY OF THE INVENTION
The present invention discloses a method and system for interclass and intra class classification of the encrypted message using a neural network with a high degree of accuracy.
STATEMENT OF THE INVENTION
Accordingly, the present invention relates to a method for identifying and classifying cryptographic method used for encrypting a plain text from encrypted message, said method comprising the steps of:
(a) converting the encrypted message into bit pattern that comprises principal components and a cross-correlated series; and
(b) analyzing the bit pattern obtained in step (a) using an artificial neural network so as to identify and classify the cryptographic method used for encrypting the plain text.
The present invention also provides an apparatus for identifying and classifying cryptographic method used for encrypting a plain text from encrypted message, said apparatus comprising:
a converter for converting the encrypted message into bit pattern that comprises principal components and a cross-correlated series; and
an artificial neural network operatively coupled to the output of the converter for receiving the bit pattern and identifying and classifying the cryptographic method used for encrypting the plain text from the received bit pattern.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
In the drawings accompanying the specification,
Figure 1 illustrates the block diagram of the system for identifying and classifying the
information coming from various cryptographic methods.
Figure 2 illustrates the block flow chart of the method to be adopted for identifying and
classifying the information coming from various cryptographic methods.
Figure 3 illustrates area plot for enhanced RC6.
Figure 4 illustrates area plot for Serpent.
Figure 5 illustrates area plot for LILI-128
Figure 6 illustrates area plot for Rabbit.
Figure 7 illustrates bar chart of classification accuracies for various Neural Network
circuits for different datasets.
Figure 8 illustrates time taken for convergence for various Neural Network circuits for
different datasets.
Figure 9a illustrates bar chart of individual classification accuracies for Enhanced RC6.
Figure 9b illustrates bar chart of individual classification accuracies for Serpent.
Figure 10 illustrates bar chart of classification accuracies for various Neural Network
circuits for different datasets.
Figure 11 illustrates time taken for convergence for various Neural Network circuits for
different datasets.
Figure 12a illustrates bar chart of individual classification accuracies for LILI.
Figure 12b illustrates bar chart of individual classification accuracies for Rabbit.
Figure 13a illustrates the bar chart of overall testing accuracy for RC6-LILI.
Figure 13b illustrates the bar chart of overall testing accuracy for RC6-Rabbit.
Figure 13c illustrates the bar chart of overall accuracy for Serpent-LILI.
Figure 13d illustrates the bar chart of overall accuracy for Serpent- Rabbit.
Figure 14 illustrates the bar chart of classification accuracies between Block and Stream
ciphers for various Neural Network circuits.
Figure 15 illustrates time taken for convergence for various Neural Network circuits for
different datasets.
Figure 16a illustrates the bar chart of individual classification accuracies for block
ciphers.
Figure 16b illustrates the bar chart of individual classification accuracies for stream
ciphers.
Figure 17 illustrates the bar chart of classification accuracies for various Neural Network
circuits.
Figure 18 illustrates the time taken for convergence for various Neural network circuits
for different datasets.
DETAILED DESCRIPTION OF THE PRESENT INVENTION
Accordingly, the present invention provides a method for identifying and classifying cryptographic method used for encrypting a plain text from encrypted message, said method comprising the steps of:
(a) converting the encrypted message into bit pattern that comprises principal components and a cross-correlated series; and
(b) analyzing the bit pattern obtained in step (a) using an artificial neural network so as to identify and classify the cryptographic method used for encrypting the plain text.
In an embodiment of the present invention, in step (a), the principal component comprises of reduced features and the cross correlation series comprises of cross correlation values between two consecutive values.
In another embodiment of the present invention, in step (a), the encrypted message comprises block cipher encrypted message and stream cipher encrypted message.
In still another embodiment of the present invention the block cipher encrypted message comprises enhanced RC6 encrypted message and serpent encrypted message.
In yet another embodiment of the present invention the stream cipher encrypted message comprise LILI-128 encrypted message and Rabbit encrypted message.
In a further embodiment of the present invention the artificial neural network is a multilayered feed forward neural network employing backpropagation for training.
In a further more embodiment of the present invention the artificial neural network is a multilayered feed forward neural network employing backpropagation or gradient descent with momentum or resilient back propagation or conjugate gradient for training.
In one more embodiment of the present invention the step of converting the encrypted message into bit pattern comprises the steps of:
(a) converting the encrypted message into its corresponding ASCII patterns, wherein each ASCII pattern includes a predetermined number of bits;
(b) summing a predetermined number of ASCII patterns thereby obtaining a summed up pattern;
(c) performing principal component analysis on the summed up pattern of step (b) to obtain principal components of the summed up pattern;
(d) finding first order difference between each pair of consecutive features of the summed up patterns to obtain a first-order differenced series;
(e) applying e^(-x) transformation on the first order differenced series to obtain a transformed first order differenced series;
(f) finding cross-correlation between consecutive features of the transformed first order differenced series thereby obtaining a cross correlated series; and
(g) combining the principal components of the summed up pattern obtained in step (c) with the cross correlated series obtained in step (f) to form the bit pattern that includes principal components of the summed up pattern and the cross-correlated series.
In another embodiment of the present invention, each of the ASCII pattern comprises of 32 bits, and each bit pattern comprises 46 features.
In still another embodiment of the present invention, in step (b), each of the summed up patterns thus obtained is obtained by summing up an equal number of ASCII patterns.
The present invention also provides an apparatus for identifying and classifying cryptographic method used for encrypting a plain text from encrypted message, said apparatus comprising:
(a) a converter for converting the encrypted message into bit pattern that comprises principal components and a cross-correlated series; and
(b) an artificial neural network operatively coupled to the output of the converter for receiving the bit pattern and identifying and classifying the cryptographic method used for encrypting the plain text from the received bit pattern.
In another embodiment of the present invention the converter for converting the encrypted message into bit pattern comprises:
(a) a digital to ASCII converter for converting the encrypted message into its corresponding ASCII patterns, wherein each ASCII pattern includes a predetermined number of bits;
(b) an adder being operatively coupled to the output of the ASCII converter for receiving the ASCII values from the digital to ASCII converter and calculating summation value of a predetermined number of patterns thereby obtaining a summed up pattern;
(c) a subtractor operatively coupled to a first output of the adder for calculating a first order difference between each pair of consecutive features of the summed up pattern thereby obtaining a first order differenced series;
(d) a transformation circuit being operatively coupled to an output of the subtractor for receiving the first-order differenced series and obtaining a transformed first order differenced series;
(e) a cross-correlator being operatively coupled to an output of the transformation circuit for receiving the transformed first order differenced series and obtaining a cross correlated series;
(f) a principal component analyzer coupled to a second output of the adder for calculating principal components of the summed up pattern; and
(g) a combiner receiving the outputs of the cross correlator and the principal component analyzer as its first and second inputs respectively, combining the principal components of the summed up pattern with the cross correlated series and providing a bit pattern that includes principal components of the summed up pattern and the cross-correlated series.
In still another embodiment of the present invention the artificial neural network is a multilayered feed forward neural network employing backpropagation for training.
In yet another embodiment of the present invention the artificial neural network is a multilayered feed forward neural network employing backpropagation or gradient descent with momentum or resilient back propagation or conjugate gradient for training.
Figure 1 illustrates a block diagram of the system or apparatus for identifying and classifying a cryptographic method used for encrypting a plain text from an encrypted message. As can be noticed from figure 1, the apparatus according to the present invention comprises a converter block for converting the encrypted message into bit pattern that comprises principal components and a cross-correlated series. The output of the converter block is operatively coupled to an artificial neural network circuit (ANN circuit). The ANN circuit receives the bit pattern and identifies and classifies the cryptographic method used for encrypting the plain text.
Figure 1 also illustrates different components of the converter block. The converter comprises a digital to ASCII converter for converting the encrypted message into its corresponding ASCII patterns. The digital to ASCII converter receives a plurality of digital bits and converts them into one or more ASCII based patterns. The output of the digital to ASCII converter is coupled to an adder for calculating the summation of a predetermined number of patterns. The adder thus outputs a summed up pattern. The summed up pattern from the adder is provided as input to a subtractor and a principal component analyzer. The subtractor generates a first order differenced series from the summed up pattern by calculating a first order difference between each pair of consecutive features of the summed up pattern. The first order differenced series is then received by a transformation circuit which is operatively coupled with the output of the subtractor. The transformation circuit transforms the first order differenced series to provide a transformed first order differenced series. A cross-correlator is operatively coupled to an output of the transformation circuit and generates a cross correlated series using the transformed first order differenced series received from the transformation circuit. The
principal component analyzer, on the other hand, calculates the principal components of the summed up patterns. The outputs of the cross-correlator and the principal component analyzer are coupled to a combiner which combines the principal components of the summed up pattern with the cross correlated series and generates the bit pattern comprising principal components of the summed up pattern and the cross correlated series. The ANN circuit receives the bit patterns and identifies and classifies the cryptographic method.
Figure 2 shows a flow chart of the method of identifying and classifying the cryptographic method used for encrypting a plain text from encrypted message. As can be noticed from the flow chart (figure 2) the encrypted message is converted into bit pattern which comprises principal components and cross-correlated series and then the said pattern is analyzed using the artificial neural network to identify and classify the cryptographic method used for encrypting the plain text.
For converting the encrypted message into bit pattern, the encrypted message is first converted into its corresponding ASCII patterns which include a predetermined number of bits. Thereafter, a predetermined number of ASCII patterns is summed up to obtain a summed up pattern. The principal components of the summed up pattern are obtained by performing principal component analysis. Simultaneously, in parallel, the summed up patterns are also processed for finding the first order difference between each pair of consecutive features of the summed up patterns. The first order difference between all pairs is obtained in the form of a first order differenced series. The first order differenced series thus obtained is then transformed by applying the e^(-x) transformation. The features of the transformed first order differenced series are then cross correlated to obtain the cross correlated series. The principal components of the summed up patterns are then combined with the cross correlated series to form a bit pattern that includes principal components of the summed up pattern and the cross correlated series.
The bit pattern comprising the principal components of the summed up pattern and the cross correlated series is analyzed using an ANN circuit so as to identify and classify the cryptographic method used for encrypting the plain text.
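As a rough sketch of how such an ANN classifier could be set up in practice, a multilayered feed-forward classifier can be trained on the 46-feature bit patterns as below. MLPClassifier from scikit-learn is only a stand-in for the Neural Network circuits described in the specification (it does not offer resilient propagation), and the hidden-layer size, class labels and placeholder data are illustrative assumptions.

# Illustrative training of a feed-forward classifier on 46-feature bit patterns.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# X: one row of 46 features per pattern; y: integer label of the cryptographic method.
X = np.random.rand(1000, 46)                 # placeholder data standing in for real patterns
y = np.random.randint(0, 4, size=1000)       # e.g. 0=Enhanced RC6, 1=Serpent, 2=LILI-128, 3=Rabbit

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))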
In the following paragraphs, the Applicants have described the invention purely by way of illustration and nothing contained in the following paragraphs should be construed to limit the scope of the invention.
Implementation Details
In this section, details of formation of datasets and data transformation techniques have been discussed.
Description of Datasets
Different Datasets were generated using each of the cryptographic methods mentioned
above in the following manner
Different 12.8 K Byte plaintext messages were encrypted using 1000 randomly generated keys.
Different sets of keys and plaintext messages were used for each cryptographic method.
The keys used were generated using a random number generator (a 60 bit LFSR with a primitive polynomial of order 60).
The size of each dataset is about 12.8 Million Cipher Characters, of which 10.24 Million Cipher Characters were used for training and the remainder was used for testing, for each cryptographic method.
The total size of the Training Dataset is 40.96 Million Cipher Characters and the total size of the Testing Dataset is 10.24 Million Cipher Characters.
Data Transformation Technique
Large collection of cipher texts was generated from different sets of plaintext messages (12800 Bytes) using different encryption algorithms. Different sets of plaintext messages and keys were used for each algorithm. For cryptographic method identification, the ASCII values of the cipher texts were considered. A group of 32 ASCII values were considered as one input pattern. The message of 12800 Bytes will result in 400 patterns
each containing 32 ASCII values of the corresponding Cipher characters. Various Neural Networks were trained using this data but the testing accuracy was low. This is due to the fact that the Cipher text characters are a pseudorandom sequence generated by various Cryptographic algorithms. Various standard data transformation techniques like 1/x, x, log x were applied on the cipher text data for improving the testing accuracy but the results were not encouraging.
Summation of 200 patterns was taken and then a first order difference of the features in each of these summed up patterns was taken to make the series stationary. The resultant patterns were then fed to the Neural Network. The area plots depicting the summation of 50 patterns for four cryptographic methods viz. Enhanced RC6, Serpent, LILI-128, and Rabbit are shown in Figures 3, 4, 5 & 6 respectively. It was found that the classification accuracy improved.
Classification on the first order-differenced series was carried out on different neural network architectures and neural network circuits. It was observed that there was some improvement in the results. To give greater importance to the smaller values of the first order difference, the data was further transformed using the transformation e^(-(First Order Difference)). Then the 30 Cross Correlation values between features of the transformed data of each cryptographic method, along with 16 reduced features (obtained using Principal Component Analysis) of the summed up patterns, were fed to various Neural Network circuits. The results of the various neural network circuits are discussed in the next section.
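One possible reading of the transformation just described is sketched in code below. The grouping into 32-value patterns, the 200-pattern sums, the first-order differencing, the e^(-x) transform and the 16 principal components follow the text; the way the 30 cross-correlation values are derived from consecutive features (lag-1 products), the added scaling of the differences and the stand-in input data are assumptions for this sketch and may differ from the authors' exact procedure.

import numpy as np
from sklearn.decomposition import PCA

def build_features(cipher_bytes: bytes) -> np.ndarray:
    # Group the cipher text into patterns of 32 ASCII values each.
    vals = np.frombuffer(cipher_bytes, dtype=np.uint8).astype(float)
    patterns = vals[: len(vals) // 32 * 32].reshape(-1, 32)

    # Sum each group of 200 consecutive patterns into one summed-up pattern.
    n_groups = len(patterns) // 200
    summed = patterns[: n_groups * 200].reshape(n_groups, 200, 32).sum(axis=1)

    # 16 reduced features via principal component analysis of the summed-up patterns.
    pca_feats = PCA(n_components=16).fit_transform(summed)

    # First-order difference of consecutive features, then the e^(-x) transform.
    diffed = np.diff(summed, axis=1)                       # 31 values per summed-up pattern
    diffed = diffed / (np.abs(diffed).max() + 1e-12)       # scaling added here for numerical stability (assumption)
    transformed = np.exp(-diffed)                          # emphasises the smaller differences

    # 30 "cross-correlation" values between consecutive transformed features
    # (interpreted here as lag-1 products; the exact definition may differ).
    cross_corr = transformed[:, :-1] * transformed[:, 1:]

    return np.hstack([pca_feats, cross_corr])              # 46 features per summed-up pattern

# Random stand-in cipher text: 1,280,000 bytes -> 40,000 patterns -> 200 summed-up patterns.
features = build_features(np.random.bytes(1280000))
print(features.shape)                                      # (200, 46)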
Results
This section deals with inter class and intra class classification of cryptographic methods. Plain Text messages comprising alphanumeric characters along with some special characters were encrypted using various block ciphers and stream ciphers. For each cryptographic method different sets of plain text and pseudorandom keys were used. The Cipher Texts generated were grouped into chunks of 32 cipher characters (ASCII representations). Each pattern is denoted by 32 characters (features). The data was transformed using the data transformation technique described in the earlier section. The
resultant dataset had 46 features (16 reduced features and 30 Cross Correlation Values between the features). These 46 features were fed to various Neural Network circuits. The classification accuracies of five different datasets were observed for various neural networks. The datasets consisted of ASCII representations of Cipher Text of different 12.8 K Bytes of plaintext messages encrypted using different cryptographic methods with 1000 randomly generated Keys. Following section shows the inter class classification between two Block Ciphers (Enhanced RC6 and SERPENT).
Classification between Block Ciphers (Enhanced RC6 and SERPENT)
Figure 7 shows the bar chart depicting the comparative accuracies for each of the datasets for various Neural Network circuits. It is observed that a classification accuracy of more than 90% is achieved for all the Neural Network circuits for each of the datasets. Figure 8 shows the time taken for convergence by various Neural Network circuits when trained using different datasets. The Resilient Propagation Neural Network took the least time to converge compared to other neural network circuits. The classification accuracy using the Resilient Propagation Neural Network is also more than 90% for all the datasets. In order to study which of the two block ciphers was better classified, the individual classification accuracies were studied. Figures 9a and 9b depict the bar charts comparing the individual accuracy for Enhanced RC6 and SERPENT respectively. The Block Cipher SERPENT was classified with better accuracy as compared to Enhanced RC6.
Classification between Stream Ciphers (LILI- RABBIT)
Figure 10 shows the bar chart depicting the comparative accuracies for each of the
datasets for various Neural Network circuits. Each of the Neural Network circuits
achieved a classification accuracy of more than 88% for all the datasets.
Figure 11 show the time taken for convergence by various Neural Network circuits when
trained using different datasets. Resilient Propagation Neural Network took the least time
to converge with more than 90% classification accuracy. The gradient descent neural
network circuits took more time to converge compared to other circuits.
The Classification accuracy of each of the two stream ciphers was studied to find which
of them was better classified. Figures 12a and 12b depict the bar charts comparing the individual accuracy for LILI-128 and RABBIT respectively. The Stream Cipher RABBIT was classified with better accuracy as compared to LILI-128.
Classification Accuracy between a Block and a Stream Cipher
Figures 13 (a), (b), (c) and (d) show the bar charts comparing the classification accuracies for various neural network circuits for RC6 Vs. LILI, RC6 Vs. Rabbit, Serpent Vs. LILI and Serpent Vs. Rabbit respectively. It is observed that the classification accuracy for Serpent Vs. Rabbit is 80% to 95%. Also, the classification accuracy between RC6 and Rabbit is better compared to the other combinations tested.
Classification Accuracy between Block (RC6, Serpent) Ciphers and Stream (LILI, Rabbit) Ciphers
Here patterns from the two Block Ciphers, Enhanced RC6 and Serpent, were mixed together and patterns from the two Stream Ciphers, LILI-128 and Rabbit, were mixed together. The Neural Network was then trained to classify between Block Ciphers and Stream Ciphers. The aim was to identify the inter class (macro level) classification accuracy of the Neural Network. The bar chart showing the comparison of classification accuracy for various Neural Network circuits is shown in Figure 14. It is observed that the Resilient Propagation neural network has achieved a classification accuracy of about 80% to 90%.
Figure 15 shows the line chart for the same. It is observed that the time taken by the resilient propagation neural network and the gradient descent neural network circuits is almost the same; however, the conjugate gradient neural network circuits have taken more time to converge.
Figures 16 (a) and (b) show the bar charts of individual accuracy for Block and Stream Ciphers respectively. It is observed that for Datasets 0, 3 and 4 Stream Ciphers are better classified than Block Ciphers, but in Datasets 1 and 2 Block Ciphers have been better classified.
Intra Class Classification Accuracy between RC6, Serpent, LILI, Rabbit
Intra class (micro level) classification accuracy for different block and stream ciphers
was studied for various neural network circuits. Figure 17 shows the Bar chart depicting
the comparison of classification accuracy for various Neural Network circuits. Resilient
Propagation Neural Network has given the best classification accuracy (81% to 90%)
compared to other neural network circuits.
Line chart depicting the comparison of time taken for convergence by different Neural
Network circuits is shown in Figure 18. Resilient Propagation Neural Network has taken
the least time to converge compared to other Neural Network circuits.
In the present invention Neural Networks have been successfully used for identifying and classifying cryptographic methods. The distinction of cipher text into Block and Stream ciphers, as well as the distinction of algorithms within Block ciphers and Stream Ciphers, has been achieved with more than 80% accuracy by various Neural Network circuits. The Resilient Back Propagation Neural Network has performed better than all the other Neural Network circuits and has also taken less time to converge, with a classification accuracy of more than 88%. The advantage of cryptographic method identification using an artificial neural network is that it will enable cryptanalysts to find out algorithms that are vulnerable to attack by neural networks. Companies that are involved in the testing of data security systems would benefit from this technique for evaluating algorithms used in data security.

CLAIMS:
1. A method for identifying and classifying cryptographic method used for
encrypting a plain text from encrypted message, said method comprising the steps
of:
(a) converting the encrypted message into bit pattern that comprises principal components and a cross-correlated series;
(b) analyzing the bit pattern obtained in step (a) using an artificial neural network so as to identify and classify the cryptographic method used for encrypting the plain text.

2. The method as claimed in claim 1, wherein in step (a), the principal component comprises of reduced features and the cross correlation series comprises of cross correlation values between two consecutive values.
3. The method as claimed in claim 1, wherein in step (a), the encrypted message comprises block cipher encrypted message and stream cipher encrypted message.
4. The method as claimed in claim 3, wherein the block cipher encrypted message comprises enhanced RC6 encrypted message and serpent encrypted message.
5. The method as claimed in claim 3, wherein the stream cipher encrypted message comprise LILI-128 encrypted message and Rabbit encrypted message.
6. The method as claimed in claim 1, wherein the artificial neural network is a multilayered feed forward neural network employing backpropagation for training.
7. The method as claimed in claim 6, wherein the artificial neural network is a multilayered feed forward neural network employing backpropagation or gradient descent with momentum or resilient back propagation or conjugate gradient for training.
8. The method as claimed in claim 1, wherein the step of converting the encrypted
message into bit pattern comprises the steps of:
(a) converting the encrypted message into its corresponding ASCII patterns, wherein each ASCII pattern includes a predetermined number of bits;
(b) summing a predetermined number of ASCII patterns thereby obtaining a summed up pattern;
(c) performing principal component analysis on the summed up pattern of step (b) to obtain principal components of the summed up pattern;
(d) finding first order difference between each pair of consecutive features of the summed up patterns to obtain a first-order differenced series;
(e) applying e^(-x) transformation on the first order differenced series to obtain a transformed first order differenced series;
(f) finding cross-correlation between consecutive features of the transformed first order differenced series thereby obtaining a cross correlated series; and
(g) combining the principal components of the summed up pattern obtained in step (c) with the cross correlated series obtained in step (f) to form the bit pattern that includes principal components of the summed up pattern and the cross-correlated series.

9. The method as claimed in claim 8, wherein each of the ASCII pattern comprises of 32 bits, and each bit pattern comprises 46 features.
10. The method as claimed in claim 8, wherein in step (b), each of the summed up patterns thus obtained is obtained by summing up an equal number of ASCII patterns.
11. An apparatus for identifying and classifying cryptographic method used for encrypting a plain text from encrypted message, said apparatus comprising:
(a) a converter for converting the encrypted message into bit pattern that comprises principal components and a cross-correlated series; and
(b) an artificial neural network operatively coupled to the output of the converter for receiving the bit pattern and identifying and classifying the cryptographic method used for encrypting the plain text from the received bit pattern.
12. The apparatus as claimed in claim 11, wherein the converter for converting the encrypted message into bit pattern comprises:
(a) a digital to ASCII converter for converting the encrypted message into its corresponding ASCII patterns, wherein each ASCII pattern includes a predetermined number of bits;
(b) an adder being operatively coupled to the output of the ASCII converter for receiving the ASCII values from the digital to ASCII converter and calculating summation value of a predetermined number of patterns thereby obtaining a summed up pattern;
(c) a subtractor operatively coupled to a first output of the adder for calculating a first order difference between each pair of consecutive features of the summed up pattern thereby obtaining a first order differenced series;
(d) a transformation circuit being operatively coupled to an output of the subtractor for receiving the first-order differenced series and obtaining a transformed first order differenced series;
(e) a cross-correlator being operatively coupled to an output of the transformation circuit for receiving the transformed first order differenced series and obtaining a cross correlated series;
(f) a principal component analyzer coupled to a second output of the adder for calculating principal components of the summed up pattern; and
(g) a combiner receiving the outputs of the cross correlator and the principal component analyzer as its first and second inputs respectively, combining the principal components of the summed up pattern with the cross correlated series and providing a bit pattern that includes principal components of the summed up pattern and the cross-correlated series.
13. The apparatus as claimed in claim 11, wherein the artificial neural network is a multilayered feed forward neural network employing backpropagation for training.
14. The apparatus as claimed in claim 13, wherein the artificial neural network is a multilayered feed forward neural network employing backpropagation or gradient descent with momentum or resilient back propagation or conjugate gradient for training.
15. A method and apparatus for identifying and classifying cryptographic method used for encrypting a plain text from encrypted message substantially as herein described with reference to the accompanying drawings.

Documents

Application Documents

# Name Date
1 3439-DEL-2005-Form-18-(18-12-2009).pdf 2009-12-18
2 3439-DEL-2005-Correspondence-Others-(18-12-2009).pdf 2009-12-18
3 3439-del-2005-form-5.pdf 2011-08-21
4 3439-del-2005-form-3.pdf 2011-08-21
5 3439-del-2005-form-26.pdf 2011-08-21
6 3439-del-2005-form-2.pdf 2011-08-21
7 3439-del-2005-form-1.pdf 2011-08-21
8 3439-del-2005-drawings.pdf 2011-08-21
9 3439-del-2005-description (provisional).pdf 2011-08-21
10 3439-del-2005-description (complete).pdf 2011-08-21
11 3439-del-2005-correspondence-others.pdf 2011-08-21
12 3439-del-2005-claims.pdf 2011-08-21
13 3439-del-2005-abstract.pdf 2011-08-21
14 FORM 13.pdf 2014-04-21
15 Form 1.pdf 2014-04-21
16 3439-DEL-2005_EXAMREPORT.pdf 2016-06-30
17 Petition Under Rule 137 [09-03-2017(online)].pdf 2017-03-09
18 Other Document [09-03-2017(online)].pdf 2017-03-09
19 Examination Report Reply Recieved [09-03-2017(online)].pdf 2017-03-09
20 Description(Complete) [09-03-2017(online)].pdf_310.pdf 2017-03-09
21 Description(Complete) [09-03-2017(online)].pdf 2017-03-09
22 Claims [09-03-2017(online)].pdf 2017-03-09
23 Abstract [09-03-2017(online)].pdf 2017-03-09
24 3439-DEL-2005-OTHERS-160317.pdf 2017-03-21
25 3439-DEL-2005-Correspondence-160317.pdf 2017-03-21
26 3439-DEL-2005-Correspondence to notify the Controller [21-09-2020(online)].pdf 2020-09-21
27 3439-DEL-2005-FORM-26 [23-09-2020(online)].pdf 2020-09-23
28 3439-DEL-2005-Written submissions and relevant documents [07-10-2020(online)].pdf 2020-10-07
29 3439-DEL-2005-PatentCertificate12-10-2020.pdf 2020-10-12
30 3439-DEL-2005-IntimationOfGrant12-10-2020.pdf 2020-10-12
31 3439-DEL-2005-US(14)-HearingNotice-(HearingDate-24-09-2020).pdf 2021-10-03

ERegister / Renewals

3rd: 12 Jan 2021 (From 22/12/2007 to 22/12/2008)
4th: 12 Jan 2021 (From 22/12/2008 to 22/12/2009)
5th: 12 Jan 2021 (From 22/12/2009 to 22/12/2010)
6th: 12 Jan 2021 (From 22/12/2010 to 22/12/2011)
7th: 12 Jan 2021 (From 22/12/2011 to 22/12/2012)
8th: 12 Jan 2021 (From 22/12/2012 to 22/12/2013)
9th: 12 Jan 2021 (From 22/12/2013 to 22/12/2014)
10th: 12 Jan 2021 (From 22/12/2014 to 22/12/2015)
11th: 12 Jan 2021 (From 22/12/2015 to 22/12/2016)
12th: 12 Jan 2021 (From 22/12/2016 to 22/12/2017)
13th: 12 Jan 2021 (From 22/12/2017 to 22/12/2018)
14th: 12 Jan 2021 (From 22/12/2018 to 22/12/2019)
15th: 12 Jan 2021 (From 22/12/2019 to 22/12/2020)
16th: 12 Jan 2021 (From 22/12/2020 to 22/12/2021)