Specification
Description: A METHOD TO CLASSIFY COCONUT TREES BASED ON MORPHOLOGICAL PARAMETERS
CROSS-REFERENCES TO RELATED APPLICATION
None.
FIELD OF THE INVENTION
The disclosure generally relates to the classification of tall trees, and more particularly to the classification of trees based on their morphological parameters using tree images.
DESCRIPTION OF THE RELATED ART
Machine learning is being used widely in agriculture for improving harvest quality, detecting and targeting weeds, minimizing losses, and the like. Precision agriculture is used to analyze efficiency, quality, and productivity in farming and to provide a decision support system for entire farm management. In agricultural applications, machine learning algorithms are used for the classification of different types of fruits, plants, vegetables, etc. Machine learning algorithms are also widely used to classify crops, plants and trees, and to identify diseases.
Chinese patent application 112991312A introduces a pear variety seedling identification method leveraging AI deep learning. Utilizing an RGB camera sensor for real-time image acquisition, the method employs an HSV threshold segmentation approach to effectively distinguish target plants from interfering backgrounds. An LSTM network is used for training and characterization. It mainly focuses on avoiding a complex image-processing step. Chinese patent application 115482420A discloses a multi-feature collaborative high-resolution image cultivated land crop type classification method. This approach involves the extraction of morphological, attribute, and texture features from remote sensing images and the subsequent construction of a comprehensive classification model incorporating a variety of modules. In “Plant Recognition Using Morphological Feature Extraction and Transfer Learning over SVM and AdaBoost”, Mahajan et al. propose a plant species recognition model based on morphological features extracted from leaf images using a support vector machine (SVM) with an adaptive boosting technique. Islam et al., in “Automatic Plant Detection Using HOG and LBP Features With SVM”, discuss plant species identification based on leaf images using a combination of HOG and LBP features with SVM. The methods discussed here primarily focus on crop type classification or plant species identification, mainly based on plant leaves or seedlings.
Until now, there has been limited research on the classification of coconut trees. Coconut palms are grown on more than 12.5 million hectares across more than 85 countries. The production is around 67.7 million nuts, and the productivity is more than 500 nuts per hectare. Farm productivity can increase the competitiveness of the coconut and also the income of coconut farmers. In the paper “Coconut Disease Prediction System Using Image Processing and Deep Learning Techniques”, Nesaranjan et al. discuss the monitoring of coconut leaves for detection of pest diseases and nutrient deficiencies in coconut trees utilizing SVM and CNN.
Improved varieties of the coconut palm that have the capacity for high productivity, resistance to infectious diseases and adaptation to climatic conditions and growing environments can help achieve higher productivity. Climatic conditions such as wind, sunlight and rain can affect the height, inclination and orientation of coconut trees.
There is a requirement for facilitating identification of trees based on their morphological parameters such as height, inclination and orientation, which would give better insights into the type and productivity of the tree and also serve as important parameters to be considered while designing robotic harvesters for harvesting fruits from the trees.
SUMMARY OF THE INVENTION
The present subject matter relates to classification of coconut trees.
In one embodiment of the subject matter, a method for automated classification of coconut trees based on morphological parameters is disclosed. The method comprises obtaining images of multiple coconut trees, wherein the images are taken at a constant distance from the trees, pre-processing the obtained images to remove noise from the images, and extracting features from the pre-processed images with feature extraction models. Feature extraction includes performing texture classification on the images at a first feature extraction layer, performing object detection on the texture-classified images at a second feature extraction layer, and performing dimensionality reduction on the object-detected images at a third feature extraction layer. The method further includes appending the features extracted with the feature extraction models from the images, extracting deep features from the appended features to provide an analogy to the morphological features of the trees, generating corresponding target classes for tree classification, training the classification model, and providing the extracted deep features to the classification model for classification of the morphological features, wherein the classification is based on predefined indexing values of the morphological features.
In various embodiments, the morphological features of the tree include the height, inclination and orientation of the tree.
In various embodiments, the obtained images include a plurality of images of a single tree taken at various orientations.
In various embodiments, the pre-processing of the captured images comprises removing background objects from the images, augmenting the images using position and color augmentation, and resizing the images to a predefined form.
In various embodiments, augmenting the images comprises translating and flipping the images with position augmentation and enhancing brightness, contrast and sharpness of the images with color augmentation.
In various embodiments, the texture classification performed at first feature extraction layer uses Local Binary Pattern (LBP) for assigning labels to image pixels based on a threshold value within the neighborhood of each pixel for an image.
In various embodiments, the object detection performed at second feature extraction layer uses Histogram of Oriented Gradients (HOG) for providing pixel orientation in conjunction with gradient information to extract unique features.
In various embodiments, the dimensionality reduction at the third extraction layer uses Principal Component Analysis (PCA) for dimensionality reduction of a feature array. The feature array is standardized so that every value is on the same scale.
In various embodiments, the deep feature extraction from the appended feature array uses an Inception Net model. In various embodiments, the classification model used is a Support Vector Machine and utilizes a Pareto distribution to determine the values of C and gamma.
These and other aspects are described herein.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention has other advantages and features, which will be more readily apparent from the following detailed description of the invention and the appended claims, when taken in conjunction with the accompanying drawings, in which:
FIG. 1A illustrates the method for classification of coconut trees based on morphological features.
FIG. 1B illustrates the feature extraction of pre-processed images with feature extraction models.
FIG. 1C illustrates the pre-processing of images obtained.
FIG. 2 illustrates the calculation of coconut tree height.
FIG. 3A illustrates the variation in the weighted term and bias term as the number of samples increases, for the mathematical modelling of the Hyper Tuning SVM.
FIG. 3B illustrates the variation in the value of a (the Lagrange multiplier) as the number of samples increases, for the mathematical modelling of the Hyper Tuning SVM.
FIG. 4A illustrates Receiver Operating Characteristics of Hyper Tuning SVM classifier.
FIG. 4B illustrates the Multi-class Receiver Operating Characteristics curve of classification model.
FIG. 4C illustrates the Hyper Parameter Plot of the classification model.
FIG. 4D illustrates Precision, Recall, and F1-score plot for different C values with gamma = 0.001.
FIG. 4E illustrates CPU usage and RAM usage plot for different C values with gamma = 0.001 for the modified SVM classifier.
FIG. 4F illustrates Precision, Recall, and F1-score plot for different gamma values with C = 100 for the modified SVM classifier.
FIG. 4G illustrates CPU usage and RAM usage plots for different gamma values with C = 100 for the modified SVM classifier.
FIG. 4H illustrates Precision, Recall, and F1-score plot for different kernels for the modified SVM classifier.
FIG. 4I illustrates CPU usage and RAM usage for different kernels for the modified SVM classifier.
DETAILED DESCRIPTION OF THE EMBODIMENTS
While the invention has been disclosed with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from its scope.
Throughout the specification and claims, the following terms take the meanings explicitly associated herein unless the context clearly dictates otherwise. The meaning of “a”, “an”, and “the” include plural references. The meaning of “in” includes “in” and “on.” Referring to the drawings, like numbers indicate like parts throughout the views. Additionally, a reference to the singular includes a reference to the plural unless otherwise stated or inconsistent with the disclosure herein.
The present subject matter in various embodiments describes a method for coconut tree classification based on morphological parameters such as height, inclination and orientation. The method includes feature extraction from pre-processed input images at three layers, mainly for performing texture classification, object detection and dimensionality reduction. The extracted features are appended and given as input to a deep feature extraction model. This helps in obtaining better and more accurate features for image classification. Classification of images with a hyper-tuned SVM facilitates enhanced performance in terms of computational time.
In various embodiments, as shown in FIG. 1A, the method 100 for classification of coconut trees based on morphological parameters begins with obtaining images of multiple coconut trees 102. The images of the coconut trees are acquired by taking photographs manually, at a constant distance from the trees. FIG. 2 shows the approach used to calculate the height manually. The height of the tree is calculated using the height of the person taking the photograph and the inclination of his neck when he sees the treetop.
In some embodiments, to measure the inclination of the tree, an inclinometer may be used for all the trees. Inclination is measured at a distance from the bottom of the coconut tree. Orientation may also be measured manually. A deviation or curvature in the tree trunk may be construed as a measure of the tree’s orientation. In some embodiments, images of a single tree may be taken at different orientations.
In various embodiments, the obtained images are pre-processed to remove noise from the images 104. The pre-processing of the obtained images is performed in three steps as shown in FIG. 1C. The first step 120 involves removing the background objects. Background removal is performed by eliminating all unnecessary objects from the obtained image, reducing noise and focusing on the essential features of the object being analyzed. During the background removal task, all the external noise that dominates the coconut trees is removed. The second step 122 involves position augmentation and color augmentation of the cleaned images. Translation and flipping methods may be used under position augmentation, while brightness, contrast and sharpness methods may be utilized under color augmentation. During translation, the object within the image is shifted in several directions: up, down, left, and right; flipping augmentation involves mirror flipping objects in an image in the left and right directions. Translation and flipping are applied to create a dataset of multiple images for testing and training. The third step 124 of pre-processing involves resizing the images into a predefined shape, which may be an optimal size of an image to perform feature extraction. Pre-processing helps to standardize and enhance the quality of images before feature extraction. The data set obtained after pre-processing includes all the variations of the coconut trees in terms of height, inclination, and orientation.
The pre-processed images are converted to grey scale to reduce the dimensionality of the data, which simplifies the feature extraction process and reduces computational complexity. Where color images are used, it is essential for feature extraction that the images be converted to RGB format.
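By way of illustration only, a minimal sketch of this pre-processing pipeline is given below using the Pillow library; the enhancement factors, the cyclic-shift stand-in for translation, and the 299x299 target size (the 299x299x3 shape appears later in Example 1) are assumptions for illustration, not limitations of the method.

```python
# A minimal pre-processing sketch, assuming Pillow; enhancement factors,
# offsets and target size are illustrative assumptions.
from PIL import Image, ImageChops, ImageEnhance, ImageOps

def preprocess_variants(path):
    img = Image.open(path)
    # Position augmentation: mirror flip and a simple cyclic shift
    # standing in for translation.
    flipped = ImageOps.mirror(img)
    shifted = ImageChops.offset(img, 20, 0)
    # Color augmentation: brightness, contrast and sharpness enhancement.
    bright = ImageEnhance.Brightness(img).enhance(1.2)
    contrast = ImageEnhance.Contrast(img).enhance(1.2)
    sharp = ImageEnhance.Sharpness(img).enhance(1.5)
    # Resize every variant to the predefined shape, then convert to
    # grey scale to reduce dimensionality before feature extraction.
    variants = [img, flipped, shifted, bright, contrast, sharp]
    return [v.resize((299, 299)).convert("L") for v in variants]
```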
In various embodiments, the pre-processed images are given as input to feature extraction models. In various embodiments, a modified Inception Net is used for deep feature extraction, wherein the Inception Net is a pre-trained Convolutional Neural Network (CNN) model composed of multiple layers. Three layers of feature extraction models, namely Local Binary Pattern (LBP), Histogram of Oriented Gradients (HOG), and Principal Component Analysis (PCA), are utilized prior to the Inception Net to increase the performance of the model. The deep features extracted from the images are provided to a hyper-tuned SVM for classification of the images.
The pre-processed images are passed through three layers for feature extraction, as shown in FIG. 1B. The first layer 126 is LBP, which is mainly used for texture classification. LBP labels the pixels of an image by a threshold value of the neighborhood of each pixel. The second layer 128 includes HOG, which is used as a feature descriptor for object detection and for feature extraction from image data. HOG breaks the entire image into smaller regions and calculates the gradient and orientation of each region. After calculating the gradient and orientation of all the regions, a histogram is created for each region using the gradient and orientation pixel values obtained. When LBP is combined with HOG, the classification performance improves significantly because HOG gives the orientation of each pixel along with the gradient, thereby extracting unique features. The third layer 130 includes PCA, the last feature extraction layer, which is used to reduce the dimensionality of the feature array without losing any details of the feature array. A feature array is an array that holds the values of all the features extracted from a single data point. Each element in the array corresponds to a specific feature, and its value represents the specific characteristic of that feature for that data point.
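As a hedged sketch, the first two layers can be realized with scikit-image as below; the LBP neighborhood (P=8, R=1) and the HOG orientation/cell/block sizes are assumed values chosen for illustration, not values taken from the disclosure.

```python
# Sketch of the LBP (layer 1) and HOG (layer 2) feature extraction,
# assuming scikit-image; parameter values are illustrative assumptions.
import numpy as np
from skimage.feature import local_binary_pattern, hog

def texture_and_shape_features(grey_image):
    grey = np.asarray(grey_image)  # expects a 2-D grey-scale array
    # Layer 1: LBP labels each pixel by thresholding its neighborhood;
    # 'uniform' with P=8 yields codes in 0..9, summarized as a histogram.
    lbp = local_binary_pattern(grey, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # Layer 2: HOG computes the gradient and orientation per region and
    # builds a histogram for each region.
    hog_features = hog(grey, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2))
    # The per-layer features are appended into one feature array.
    return np.concatenate([lbp_hist, hog_features])
```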
In various embodiments, before providing the feature array to PCA, the feature array is standardized to ensure that every value is on the same scale. Standardization of a feature array in deep learning is a data pre-processing technique that transforms the features to have a mean of 0 and a standard deviation of 1. It puts all the features on a common scale, making them comparable and ensuring they contribute equally to the learning process. Features often have different ranges and units: for example, one feature may measure height in centimeters (0-200) while another measures weight in kilograms (50-150). Without standardization, features with larger ranges dominate those with smaller ranges, negatively impacting model performance. It is important to consider whether standardization is appropriate for a specific problem and dataset, as there might be cases where other scaling techniques are more suitable.
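A minimal sketch of this standardization step followed by the PCA layer, assuming scikit-learn; the number of retained components is an illustrative choice and must not exceed the smaller of the sample and feature counts.

```python
# Standardize the feature array, then reduce its dimensionality with PCA.
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

def reduce_dimensions(feature_array, n_components=64):
    # feature_array: 2-D array of shape (n_samples, n_features).
    # Transform features to zero mean and unit standard deviation so
    # every value is on the same scale before PCA.
    standardized = StandardScaler().fit_transform(feature_array)
    # PCA then reduces the dimensionality of the feature array.
    return PCA(n_components=n_components).fit_transform(standardized)
```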
In various embodiments, the three features extracted from the images are appended to each other. The appended features are provided as input to the Inception Net model to extract deep features 110. A deep feature is a node’s or layer’s response, inside a hierarchical model, to an input pertinent to the model’s final output. Depending on how early in the model the response is activated, one feature is considered “deeper” than another. These features have an analogy to the tree properties, as the deep features are the output generated in response to the input tree features initially generated from the tree dataset. The Inception Net generates a deep features file and a corresponding target class file as output. Both deep features and target classes are given as input to the SVM for classification.
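The sketch below illustrates deep feature extraction with a stock pre-trained InceptionV3 from tf.keras; the disclosure’s modified Inception Net is not reproduced here, so the stock backbone (whose average-pooled output is also 2048-dimensional, matching the feature size reported in Example 1) stands in as an assumption.

```python
# Deep feature extraction sketch with a pre-trained InceptionV3 backbone.
import numpy as np
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input

# include_top=False with average pooling yields one 2048-dimensional
# deep feature vector per image.
backbone = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def deep_features(images):
    # images: array-like of shape (n_images, 299, 299, 3).
    batch = preprocess_input(np.asarray(images, dtype="float32"))
    return backbone.predict(batch)  # shape: (n_images, 2048)
```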
In various embodiments, during the training session, the extracted features are split into a training set and a testing set in a 70:30 ratio. The training set is sent to the hyper-parameter-tuned SVM model and trained for classification 114. The kernel used is the Radial Basis Function (RBF). Hyper-parameter tuning selects the optimal hyper parameters for a machine learning algorithm to improve its performance on a given data set. The Pareto probability distribution is employed to determine the optimal values of C and gamma for the given data set. The GridSearchCV technique is used for hyper-parameter tuning in the SVM. The classification of coconut trees using the Modified Inception Net-SVM algorithm divides the dataset into three different classes based on predefined indexing values of the morphological features 116. In various embodiments, the morphological parameters for classification of coconut trees are the height, inclination and orientation of the tree.
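For illustration, a sketch of the 70:30 split and grid search over C and gamma with an RBF-kernel SVM follows, assuming scikit-learn; the candidate grids shown are assumptions, since the disclosure derives the candidate values from a Pareto distribution.

```python
# Hyper-parameter-tuned SVM training sketch using GridSearchCV.
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

def train_classifier(deep_feats, target_classes):
    # 70:30 split of the extracted deep features, as in the disclosure.
    X_tr, X_te, y_tr, y_te = train_test_split(
        deep_feats, target_classes, test_size=0.3, random_state=42)
    # Illustrative candidate grids for C and gamma.
    grid = {"C": [1, 10, 100], "gamma": [0.0001, 0.001, 0.01]}
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
    search.fit(X_tr, y_tr)
    return search.best_estimator_, search.best_estimator_.score(X_te, y_te)
```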
The invention has multiple advantages as further set forth herein. A computerized coconut tree detection system can help dendrologists and laypersons in identifying coconut trees based on three morphological parameters: height, inclination, and orientation. These three parameters help in determining the health and the nature of growth of coconut trees, which influences the design and use of robots for harvesting coconuts. Deep learning is a powerful tool for feature extraction, as it is better at extracting deeper details (features) in an image. The Modified Inception Net based Hyper Tuning Support Vector Machine classification method classifies coconut trees based on the three morphological parameters of height, inclination and orientation. The MIN-SVM classification has the potential to boost farm production, the competitiveness of the coconut, and the revenue of coconut producers. Improved coconut palm cultivars with characteristics such as high productivity, resistance to infectious diseases, and climatic and environmental adaptability may help with higher productivity. The Modified Inception Net-SVM achieved an accuracy of 95.35 percent.
While the invention has been disclosed with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from its scope, which should be as delineated in the claims appended herewith.
EXAMPLES
EXAMPLE 1
IMPLEMENTATION OF MODIFIED INCEPTION NET-SVM MODEL
Data collection and Creation:
To implement the proposed MIN-SVM model, 17000 images with several variations in height, inclination, and orientation are created using three thousand raw images (without augmentation) collected from one hundred and forty-two coconut trees. An iPhone XR device with an f/1.8 aperture and a resolution of 12 MP is used to acquire the coconut tree images. All the images are captured between 4:00 pm and 6:00 pm. Some of the images of the coconut trees are captured on beachland and some on farmland. The height of the trees varied between 3.5 m and 15 m, and the inclination varied between 90 deg and 120 deg. All the images were collected between January and March 2022 in Vallikavu village, Kollam district, Kerala state, India.
Data collection of height of coconut trees:
FIG. 2 shows the approach used to calculate the height manually. The height of each tree is measured at a constant distance of 2.4 m from the tree, as shown in FIG. 2. Here h1 is 1.8 m, which is the height of the person. The same person collected the data from all the trees. h2 is the distance from the tree to the person, which is 2.4 m. A constant distance h2 is used to calculate the height for all the trees. The total height of the tree is the sum of h1 and h3. h3 is calculated from the inclination (θ) of the person’s neck when he sees the treetop: with the inclination (θ) and h2, h3 is found using the trigonometric relation h3 = h2 · tan(θ).
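A short worked sketch of this computation: with h1 and h2 fixed as above, h3 follows from the neck inclination by h3 = h2 · tan(θ).

```python
# Worked sketch of the manual height computation from FIG. 2;
# h1, h2 and theta are the measured quantities described above.
import math

def tree_height(theta_deg, h1=1.8, h2=2.4):
    # h3 is recovered trigonometrically from the neck inclination theta
    # and the fixed distance h2: h3 = h2 * tan(theta).
    h3 = h2 * math.tan(math.radians(theta_deg))
    return h1 + h3  # total tree height = h1 + h3

# Example: a 70 degree neck inclination gives about
# 1.8 + 2.4 * tan(70 deg) ≈ 8.4 m.
```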
Data collection of inclination and orientation of coconut trees:
To measure the inclination of the tree, an inclinometer is used for all the trees. Inclination is measured at 1.4 m from the bottom of the coconut tree. An inbuilt Android app named ‘Measure’ is used to cross-check the obtained results; the ‘Measure’ application gave readings 95% similar to the manually calculated readings. Orientation has been measured manually: if there is some twist or turn in the tree from the ground to the top, then the tree has orientation.
Image pre-processing:
Three thousand raw images (without augmentation) collected from one hundred and forty-two coconut trees were pre-processed, resulting in a total of 17000 images. From each tree, 22 images were taken at different orientations, giving a total of 3124 images from the 142 trees. The original images were pre-processed using three distinct methods, namely background removal, augmentation, and resizing. FIG. 1C shows the steps involved in image pre-processing. The initial step is data acquisition. In the second step, the background removal method is performed to remove all the unnecessary plants, trees, and grass from the image. In the next step, two position and three color augmentation techniques were used. Translation and flipping methods were used under position augmentation, while brightness, contrast, and sharpness methods were utilized under color augmentation.
In translation, the object in an image is shifted in several directions: up, down, left, and right. Flipping augmentation involves mirror flipping objects in an image in the left and right directions. These techniques helped in creating a dataset of seventeen thousand images for testing and training. In the final step, all images were resized into a (299x299x3) shape, which is an optimal size of an image to perform feature extraction using the proposed MIN-SVM model.
The data set is created to accommodate all the variations in the coconut trees in terms of height, inclination, and orientation. Before application of the augmentation methods, the data set consists of 3124 images. As increasing the dataset improves the learning rate of any deep learning method, various augmentation methods such as flipping, sharpness and brightness adjustment were applied to make the proposed method more robust to variations in the input. These pre-processing methods are required only during training; at testing time, the raw images are the inputs to the proposed method.
The parameters including height and inclination are measured, and the presence of orientation in each tree is also checked. All 22 images of each tree are labelled purely based on these parameters. The index value determines the classes and is composed of the indices of height, inclination, and orientation. This indexing is based on the variation of coconut trees in a well-maintained farm. From the data of one hundred trees, the normalized value of the number of trees belonging to each category is given as the index number.
Determining indexing value for height, inclination and orientation:
For coconut trees less than 5 meters in height, the indexing value is 0.06. The indexing value of coconut trees with 5 to 15 meters height is 0.8, and 0.14 for coconut trees taller than 15 meters. For coconut trees with an inclination of 0 to 10 degrees, the indexing value is 0.6; for 10 to 30 degrees it is 0.36; and for more than 30 degrees it is 0.04. If there is orientation in a tree, the indexing value is 0.08; if there is no orientation, it is 0.92. Tables 1, 2 and 3 list the indexing values of height, inclination, and orientation. After acquiring the indexing values of height, inclination, and orientation, the final indexing value of a coconut tree is found by adding the three. After obtaining the final indexing values of all trees, the trees are divided into three different classes. Class A ranges from 0.18 to 0.91, class B from 0.92 to 1.64, and class C from 1.65 to 2.32. A sketch of this indexing logic is given after Table 3.
Table 1: Indexing Values of Height

Height (meters)      Indexing value
Less than 5          0.06
5 - 15               0.8
Greater than 15      0.14
Table 2: Indexing Values of Inclination

Inclination (degrees)      Indexing value
0 - 10                     0.6
10 - 30                    0.36
Greater than 30            0.04
Table 3: Indexing Values of Orientation

Orientation      Indexing value
Yes              0.08
No               0.92
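For illustration, the indexing and class assignment described above can be sketched as follows; the handling of boundary values (inclusive thresholds) is an assumption where the text leaves it open.

```python
# Sketch of the indexing logic from Tables 1-3 and the class ranges above.
def final_index(height_m, inclination_deg, has_orientation):
    h = 0.06 if height_m < 5 else (0.8 if height_m <= 15 else 0.14)
    i = 0.6 if inclination_deg <= 10 else (0.36 if inclination_deg <= 30 else 0.04)
    o = 0.08 if has_orientation else 0.92
    # Final indexing value is the sum of the three component indices.
    return h + i + o

def tree_class(index):
    if index <= 0.91:
        return "A"   # 0.18 - 0.91
    if index <= 1.64:
        return "B"   # 0.92 - 1.64
    return "C"       # 1.65 - 2.32
```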
After processing through the feature extraction models, the Modified Inception Net generated a 2048-dimensional deep feature file and corresponding target class files as output. Both deep features and target classes are given as input to the SVM, with the Radial Basis Function (RBF) as kernel, for classification.
Mathematical modelling of principal component analysis (PCA):
The input to PCA is the pre-processed image. All the features extracted from an image are appended to a common variable: a common variable ‘F’ holds all the features appended from the three distinct feature extractions. C is the original input image, also known as the red, green, blue (RGB) image. The RGB values of each grid are extracted individually, multiplied by a generalized scaling factor and added together to form a grey scale image. Standardization of the grey scale image data should be done to ensure that all data points come under the same scale. Equation (1) is used for standardization in our case.
$$S = \left( GL - \frac{1}{N}\sum_{i=1}^{N} GL_i \right) \Bigg/ \left[ \frac{1}{N}\sum_{i=1}^{N} \left( GL_i - \frac{1}{N}\sum_{j=1}^{N} GL_j \right)^{2} \right]^{1/2} \qquad (1)$$

wherein N is the total number of columns present, and GL is the grey level image.
The eigen values are sorted in descending order, and J stores the first ‘n’ values of the sorted data. Equation (2) gives the principal component values by multiplying the eigen vector ($J_k$) related to each eigen value with the standardized data ($S$).
$$\forall\, k = 1{:}n:\qquad PC_k = J_k^{T} \cdot S \qquad (2)$$

wherein n is the number of principal component values required.
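A numpy sketch of equations (1) and (2) as reconstructed above: standardize the grey-level data, eigen-decompose its covariance, sort the eigenvalues in descending order, and project onto the leading n eigenvectors.

```python
# Manual PCA sketch following equations (1) and (2).
import numpy as np

def pca_manual(GL, n):
    # Equation (1): standardize each column to zero mean, unit variance.
    S = (GL - GL.mean(axis=0)) / GL.std(axis=0)
    # Eigen decomposition of the covariance of the standardized data.
    eigvals, eigvecs = np.linalg.eigh(np.cov(S, rowvar=False))
    # Sort eigenvalues in descending order; J keeps the top-n eigenvectors.
    order = np.argsort(eigvals)[::-1]
    J = eigvecs[:, order[:n]]
    # Equation (2): principal components are the projections S @ J.
    return S @ J
```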
Mathematical modelling of Hyper Tuning Support Vector Machine:
The features extracted using the modified Inception Net are taken as the feature array used to find the hyper plane equations.
$$\forall\, k = 1{:}n:\qquad H_k = \left\{\; w_k^{T} x_i + b_k = +1,\;\; w_k^{T} x_i + b_k = -1 \;\right\}, \qquad (x_i, y_i) \in D \qquad (3)$$

wherein $w_k$ is the weighted term, $x_i$ the features of an image, $b_k$ the bias term, $y_i$ the target class, n the number of hyper planes required, and D the 2-D array containing $x_i$ and $y_i$.
Equation (3) contains ‘n’ sets of hyper plane equations. The number of hyper planes required is based on user input. Each set has two equations: one equation plots the positive values of the hyper plane data on one side of the hyper plane, and the second plots on the other side. Based on the number of classes, $H_k$ projects a hyper plane which divides two classes. In our case, the dataset contains three different classes, so ‘n’ is set to three and three sets of hyper plane equations are calculated. Each set of hyper plane equations gives one hyper plane, so the whole dataset is divided into three classes by three different hyper planes.
$$M = \frac{y_i\left(w_k^{T} x_i + b_k\right)}{\lVert w_k \rVert}, \qquad \forall\, k = 1{:}n,\;\; \forall\, i = 1{:}m \qquad (4)$$

Equation (4) is used to calculate the marginal distance, i.e., the distance between the hyper plane and one of the nearest support vectors.
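Equation (4) translates directly into code; a minimal numpy rendering follows.

```python
# Marginal distance of a sample x_i to hyper plane k, per equation (4).
import numpy as np

def marginal_distance(w_k, b_k, x_i, y_i):
    # M = y_i * (w_k^T x_i + b_k) / ||w_k||
    return y_i * (np.dot(w_k, x_i) + b_k) / np.linalg.norm(w_k)
```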
$$\text{minmax}(w_k, b_k, a_i) = \frac{1}{2}\, w_k^{T} w_k \;-\; \sum_{i=1}^{m} a_i\, y_i\left(w_k^{T} x_i + b_k\right) \;+\; \sum_{i=1}^{m} a_i, \qquad \forall\, k = 1{:}n \qquad (5)$$

$$\frac{\partial(\text{minmax})}{\partial w_k} = \sum_{i=1}^{m} a_i\, x_i\, y_i \qquad (6)$$

$$\frac{\partial(\text{minmax})}{\partial b_k} = \sum_{i=1}^{m} a_i\, y_i \qquad (7)$$

where k is the number of principal component values required and m is the number of samples.
Equation (5) is the minmax equation; here $(w_k, b_k)$ are minimized while maximizing $a_i$. From this equation, the equations for both the weighted term ($w_k$) and the bias term ($b_k$) can be obtained. To get the equation for the weighted term, minmax is differentiated with respect to $w_k$ as shown in equation (6); similarly for $b_k$, as shown in equation (7). To find the a, w (weight term) and b (bias term) equations, exponential approximation is used. For this approach, a, w, b values are collected with Z as 142 and 100 samples separately. For 142 samples, a = 121699445, w(x, y) = (4.085, 0.051) and b = -6.6380 were obtained. For 100 samples, a = 102756447, w(x, y) = (-0.0022, 0.00025) and b = -0.99 were obtained.
$$a(Z) = A\, e^{-\lambda Z} \qquad (8)$$

where Z is the number of samples.
Equation (8) shows the exponential approximation used to find a, with A and λ as constants. To find A and λ, the natural logarithm is applied on both sides,
$$18.6 = \ln A - 142\,\lambda \qquad (9)$$

$$18.4 = \ln A - 100\,\lambda \qquad (10)$$

$$a(Z) = 62409971.79\, e^{0.00476\, Z} \qquad (11)$$
Solving equations (9) and (10), the exponential equation for a is obtained. To obtain a for any number of samples, Z is substituted in equation (11).
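The two-point fit behind equations (9)-(11) can be sketched as follows; note that the constants in equation (11) come from logarithms rounded to 18.6 and 18.4, so a full-precision fit yields slightly different values.

```python
# Two-point exponential fit: given a(Z) at two sample counts, solve
# ln a = ln A - lambda*Z for A and lambda, per equations (9)-(11).
import math

def fit_exponential(z1, a1, z2, a2):
    lam = (math.log(a2) - math.log(a1)) / (z1 - z2)
    A = a1 * math.exp(lam * z1)  # model: a(Z) = A * exp(-lam * Z)
    return A, lam

# With (142, 121699445) and (100, 102756447) this gives lam ≈ -0.0040;
# with the text's rounded logs (18.6, 18.4) it gives lam = -0.00476 and
# A ≈ 6.24e7, matching equation (11).
```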
$$w_x(Z) = B\, e^{-\beta Z} \qquad (12)$$
Equation (12) gives the exponential approximation for $w_x$. Here B and β are constants, from which the $w_x$ equation is calculated. Applying the natural logarithm on both sides,
$$1.40 = \ln B - 142\,\beta \qquad (13)$$

$$(-6.11 + 3.14\,i) = \ln B - 100\,\beta \qquad (14)$$
Solving equations (13) and (14), the exponential equation for $w_x$ is obtained. The imaginary term $3.14\,i$ (that is, $i\pi$) appears because $w_x$ is negative for 100 samples, and $\ln(-x) = \ln(x) + i\pi$.
$$w_x(Z) = \left(3.3 \times 10^{-7} + 6.3 \times 10^{-7}\, i\right) e^{\left(0.17 + 0.074\, i\right) Z} \qquad (15)$$
$$w_y(Z) = C\, e^{-\epsilon Z} \qquad (16)$$

$$w_y(Z) = \left(2.02 \times 10^{-9}\right) e^{0.12\, Z} \qquad (17)$$
Following the same method used for $w_x$, the constants C and ε are calculated for equation (16). Equation (17) is the exponential approximation of $w_y$.
$$b(Z) = D \cdot \left(\tfrac{1}{2}\right)^{\chi Z} \qquad (18)$$
To find the equation for the bias term, the exponential approximation mentioned in equation (18) has been used. Here D and χ are the constants that need to be found to frame the equation for the bias term. To find D and χ from equation (18), the natural logarithm is applied on both sides (the coefficients 98.4 and 69.3 below are $142\ln 2$ and $100\ln 2$ respectively),
$$(1.89 + 3.14\,i) = \ln D - 98.4\,\chi \qquad (19)$$

$$(-0.01 + 3.14\,i) = \ln D - 69.3\,\chi \qquad (20)$$
Solving equations (19) and (20), the exponential equation for b is

$$b(Z) = -3981.25 \cdot \left(\tfrac{1}{2}\right)^{0.065\, Z} \qquad (21)$$
After obtaining the equations for a, w (weight term), and b (bias term), these equations are used to find $(a_i, w_k, b_k)$. For each number of sample points, different values of a, w, and b are obtained.
FIG. 3A shows the plot of the weighted term and bias term against the number of sample points. The $w_x$ value increases with the number of samples given as input; as $w_x$ increases, the graph shows a decrease in the b values. $w_y$ is a very small value, near zero, for all the samples taken. The number of sample points is shown on the X-axis; $w_x$, $w_y$, and b were calculated using equations (15), (17), and (21) for different numbers of samples and plotted on the Y-axis. FIG. 3B shows the graph of a against different sample points. For a high value of a, the misclassification is less; in FIG. 3B, 130 samples have high a (alpha) values and hence less misclassification. a is calculated using equation (11).
Mathematical modelling of Pareto Probability Distribution:
C and gamma are represented as probability distributions, and accuracy, i.e., f(x), as a probability distribution function. Based on C, gamma and accuracy, different probability distribution functions are calculated, and the Residual Sum of Squares (RSS), Loc (mean) and Scale (standard deviation) scores are found for all the distributions. Based on the least value of RSS, the Pareto distribution is selected as the best fit for our data over the remaining probability distributions. x is an array which contains all the extracted features, and y is the target class. Apart from the size of the dataset, the C and gamma parameters influence the accuracy.
The Pareto survival functions for C (denoted $x_1$) and gamma (denoted $x_2$) are:

$$\bar{F}_1(x_1) = \Pr(X > x_1) = \begin{cases} \left(\dfrac{x_{1m}}{x_1}\right)^{\alpha}, & x_1 \ge x_{1m} \\ 1, & x_1 < x_{1m} \end{cases} \qquad (22)$$

$$\bar{F}_2(x_2) = \Pr(X > x_2) = \begin{cases} \left(\dfrac{x_{2m}}{x_2}\right)^{\alpha}, & x_2 \ge x_{2m} \\ 1, & x_2 < x_{2m} \end{cases} \qquad (23)$$

wherein $x_{1m}$ and $x_{2m}$ are the minimum values of C and gamma respectively, and α is the Pareto shape parameter.
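A hedged sketch of fitting a Pareto distribution to hyper-parameter samples with scipy.stats (the text reports Loc and Scale scores from such fits); the sample values below are illustrative, not taken from the disclosure.

```python
# Fit a Pareto distribution to candidate C values and evaluate its
# survival function Pr(X > x) = (x_m / x)^alpha for x >= x_m.
import numpy as np
from scipy import stats

c_samples = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 200.0])  # illustrative
# fit returns the shape (alpha), loc (mean shift) and scale parameters.
alpha, loc, scale = stats.pareto.fit(c_samples)
print(stats.pareto.sf(10.0, alpha, loc=loc, scale=scale))
```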