Abstract: METHOD AND SYSTEM FOR PRODUCT RECOMMENDATION USING FACIAL AND VOICE RECOGNITION The present invention discloses a system (100) for product recommendation based on facial recognition and voice recognition of a user (1), the system comprising: a facial recognition module (200) comprising an image preprocessing module (b) configured to extract a facial texture and classify a facial expression of the user into a category selected from neutral, happy, sad, and confused; a voice recognition module (400) comprising a voice user interface (402) configured to access a voice command of the user, and a speech recognition module (404) configured to categorize and filter products based on the voice command; and a product recommendation module configured to categorize products present in a shopping search engine based on the voice command, the category of the facial expression of the user, and a race of the user, and to recommend products having a likelihood greater than a predetermined threshold. FIG. 1
Claims: I/We claim:
1. A system (100) for product recommendation based on facial recognition and voice recognition of a user (1), the system comprising:
a facial recognition module (200) comprising:
an image preprocessing module (b) configured to extract a facial texture, and classify a facial expression of the user into a category selected from neutral, happy, sad, and confused;
a voice recognition module (400) comprising:
a voice user interface (402) configured to access a voice command of the user; and
a speech recognition module (404) configured to categorize and filter products based on the voice command; and
a product recommendation module configured to categorize products present in a shopping search engine based on the voice command, the category of the facial expression of the user, and a race of the user; and to recommend products having a likelihood greater than a predetermined threshold.
2. The system of claim 1, further comprising a product confirmation module configured to receive a confirmation from the user to add the recommended products to a shopping cart, wherein the confirmation from the user comprises one or more of a blinking of the eyes of the user and a “yes” voice command from the user.
3. The system of claim 1, wherein a camera of the system is focused on an eyelid of the user, a region around the eyelid is tracked with a normal range and an adjusted range, and a movement of the eye is matched with a pre-defined command stored in a database of the system, to determine a confirmation from the user to add the product to the shopping cart.
4. The system of claim 1, wherein the category is selected based on a value of one or more parameters consisting of an eyebrow raise distance, an upper eyelid-eyebrow distance, an inter-eyebrow distance, an upper eyelid-lower eyelid distance, a top lip thickness, a bottom lip thickness, a mouth width, and one or more forehead lines.
5. The system of claim 1, wherein the facial recognition module deploys deep convolutional neural networks to perform facial recognition and to determine an age, race, and gender of the user.
6. The system of claim 1, wherein a deep neural network is applied when the user concentrates on a product: the camera lens is adjusted to 65 pixels for the eye region and 35 pixels for the frontal cortex region, and once the pupil dilates and the frontal cortex landmark is adjusted, automatic zooming is performed.
Dated this 14th day of August 2020.
Saravanan Gopalan
Applicant Agent (INPA – 3249)
Mission Legal Advocates
Description: FORM 2
THE PATENTS ACT 1970
(39 of 1970)
&
The Patents Rules, 2003
COMPLETE SPECIFICATION
(Section 10 and Rule 13)
METHOD AND SYSTEM FOR PRODUCT RECOMMENDATION USING FACIAL AND VOICE RECOGNITION
Applicant:
K. RAMAKRISHNAN COLLEGE OF ENGINEERING
NH-45, SAMAYAPURAM, TRICHY,
TAMILNADU, INDIA- 621112
The following specification particularly describes the invention and the manner in which it is to be performed.
METHOD AND SYSTEM FOR PRODUCT RECOMMENDATION USING FACIAL AND VOICE RECOGNITION
TECHNICAL FIELD
[0001] The present invention relates generally to a method and system for recommending products for online purchasing. More particularly, the present invention relates to a method and system for product recommendation using facial and voice recognition.
BACKGROUND OF THE INVENTION
[0002] Currently, product recommendation is done through cookies and cached browsing details, which creates a bad impression among customers and makes them feel insecure. Many technology companies are working on facial expression detection, but it has not yet been deployed in a way that is easy for users to handle.
[0003] Some prior art techniques involve product recommendation but fail to support illiterate people in online purchasing. For example, patent application no. JP2004347942A, titled “COMMODITY RECOMMENDATION SYSTEM BY FACE RECOGNITION”, discloses a system in which an imaging means (1) picks up an image of the face of a prospective purchaser of a commodity and a face recognition means (2) identifies the purchaser; a commodity recommendation means (5) then recommends and guides the prospective purchaser to his favourite commodity based on the purchaser's commodity purchasing information recorded on a storing means (3). The drawback of this prior art is that recommendation is possible only when a past purchase history of the user is present; it fails to aid a new user who is a novice in online shopping.
[0004] Another prior art, US Patent No. US9367858B2, titled “Method and apparatus for providing a purchase history”, discloses a method and apparatus for providing the purchase history of a first user to a second (different) user. During operation, a server is provided with an image and determines an identification of a first person within the image. Items that exist within the image and that were purchased by the first person are also determined by the server. The server then provides information on the purchased items to a second user. The drawback of this prior art is similar to that of the earlier one.
[0005] There is a need for a method and system that can use face recognition and voice recognition to recommend appropriate product to a user involved in online purchasing.
[0006] Hence, in order to overcome the aforementioned problems, an alternate method and system for product recommendation based on facial and voice recognition is disclosed.
[0007] The abovementioned shortcomings, disadvantages and problems are addressed herein, which will be understood by reading and studying the following specification.
OBJECT OF THE INVENTION
[0008] The primary object of the present invention is to enable illiterate people to purchase products through online shopping. In this system, the captured facial data is used only at the time of shopping; after the purchase, previously fetched inputs such as the facial image and voice tone are cleared. This practice enhances the security of the user's identity.
[0009] These and other objects and advantages of the present invention will become readily apparent from the following detailed description taken in conjunction with the accompanying drawings.
SUMMARY OF THE INVENTION
[0010] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
[0011] The various embodiments of the present invention disclose a system (100) for product recommendation based on facial recognition and voice recognition of a user (1), the system comprising: a facial recognition module (200) comprising an image preprocessing module (b) configured to extract a facial texture and classify a facial expression of the user into a category selected from neutral, happy, sad, and confused; a voice recognition module (400) comprising a voice user interface (402) configured to access a voice command of the user, and a speech recognition module (404) configured to categorize and filter products based on the voice command; and a product recommendation module configured to categorize products present in a shopping search engine based on the voice command, the category of the facial expression of the user, and a race of the user, and to recommend products having a likelihood greater than a predetermined threshold.
[0012] According to an embodiment of the present invention, the system further comprises a product confirmation module configured to receive a confirmation from the user to add the recommended products to a shopping cart, wherein the confirmation from the user comprises one or more of a blinking of the eyes of the user and a “yes” voice command from the user.
[0013] According to an embodiment of the present invention, a camera of the system is focused on an eyelid of the user, a region around the eyelid is tracked with a normal range and an adjusted range, and a movement of the eye is matched with a pre-defined command stored in a database of the system, to determine a confirmation from the user to add the product to the shopping cart.
[0014] According to an embodiment of the present invention, the category is selected based on a value of one or more parameters consisting of an eyebrow raise distance, an upper eyelid-eyebrow distance, an inter-eyebrow distance, an upper eyelid-lower eyelid distance, a top lip thickness, a bottom lip thickness, a mouth width, and one or more forehead lines.
[0015] According to an embodiment of the present invention, the facial recognition module deploys deep convolutional neural networks to perform facial recognition and to determine an age, race, and gender of the user.
[0016] According to an embodiment of the present invention, a deep neural network is applied when the user concentrates on a product: the camera lens is adjusted to 65 pixels for the eye region and 35 pixels for the frontal cortex region, and once the pupil dilates and the frontal cortex landmark is adjusted, automatic zooming is performed.
[0017] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating the preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] The other objects, features and advantages will occur to those skilled in the art from the following description of the preferred embodiment and the accompanying drawings in which:
[0019] FIG. 1 illustrates a block diagram of a system for product recommendation based on facial and voice recognition, according to a non-limiting exemplary embodiment of the present invention;
[0020] FIG. 2 illustrates a block diagram of a module for facial recognition used within the system of FIG. 1, according to a non-limiting exemplary embodiment of the present invention;
[0021] FIG. 3 is a flow diagram for training the facial recognition module of FIG. 2, according to a non-limiting exemplary embodiment of the present invention;
[0022] FIG. 4 illustrates a block diagram of a module for voice recognition used within the system of FIG. 1, according to a non-limiting exemplary embodiment of the present invention;
[0023] FIG. 5 illustrates a flow diagram illustrating product recommendation within the system of FIG. 1, according to a non-limiting exemplary embodiment of the present invention;
[0024] FIG. 6 illustrates a flow diagram of a module for capturing the facial recognition within the system of FIG. 1, according to a non-limiting exemplary embodiment of the present invention; and
[0025] FIG. 7 illustrates a flow diagram of a module for confirming a product within the system of FIG. 1, according to a non-limiting exemplary embodiment of the present invention.
[0026] Although the specific features of the present invention are shown in some drawings and not in others, this is done for convenience only, as each feature may be combined with any or all of the other features in accordance with the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[0027] In the following detailed description, reference is made to the accompanying drawings that form a part hereof, and in which specific embodiments that may be practiced are shown by way of illustration. These embodiments are described in sufficient detail to enable those skilled in the art to practice the embodiments, and it is to be understood that other changes may be made without departing from the scope of the embodiments. The following detailed description is therefore not to be taken in a limiting sense. The various embodiments of the present invention provide a system for product recommendation based on facial and voice recognition.
[0028] The various embodiments of the present invention disclose a system (100) for product recommendation based on facial recognition and voice recognition of a user (1), the system comprising: a facial recognition module (200) comprising an image preprocessing module (b) configured to extract a facial texture and classify a facial expression of the user into a category selected from neutral, happy, sad, and confused; a voice recognition module (400) comprising a voice user interface (402) configured to access a voice command of the user, and a speech recognition module (404) configured to categorize and filter products based on the voice command; and a product recommendation module configured to categorize products present in a shopping search engine based on the voice command, the category of the facial expression of the user, and a race of the user, and to recommend products having a likelihood greater than a predetermined threshold.
[0029] According to an embodiment of the present invention, the system further comprises a product confirmation module configured to receive a confirmation from the user to add the recommended products to a shopping cart, wherein the confirmation from the user comprises one or more of a blinking of the eyes of the user and a “yes” voice command from the user.
[0030] According to an embodiment of the present invention, a camera of the system is focused on an eyelid of the user, a region around the eyelid is tracked with a normal range and an adjusted range, and a movement of the eye is matched with a pre-defined command stored in a database of the system, to determine a confirmation from the user to add the product to the shopping cart.
[0031] According to an embodiment of the present invention, the category is selected based on a value of one or more parameters consisting of an eyebrow raise distance, an upper eyelid-eyebrow distance, an inter-eyebrow distance, an upper eyelid-lower eyelid distance, a top lip thickness, a bottom lip thickness, a mouth width, and one or more forehead lines.
[0032] According to an embodiment of the present invention, the facial recognition module deploys deep convolutional neural networks to perform facial recognition and to determine an age, race, and gender of the user.
[0033] According to an embodiment of the present invention, a deep neural network is applied when the user concentrates on a product: the camera lens is adjusted to 65 pixels for the eye region and 35 pixels for the frontal cortex region, and once the pupil dilates and the frontal cortex landmark is adjusted, automatic zooming is performed.
[0034] FIG. 1 illustrates a block diagram 100 of a system for product recommendation based on facial and voice recognition, according to a non-limiting exemplary embodiment of the present invention.
[0035] AGE RECOGNITION: This stage proceeds from the earlier face detection stage by extracting facial textures from the detected face. The texture analysis (PCA, ICA variants, or LBP), geometric feature extraction, pre-processing, and eye-distance-based normalization and cropping employed for this purpose are the same operations described in detail below with reference to FIG. 2, and are therefore not repeated here. The extracted features are trained with datasets; an efficient and effective representation (minimizing within-class variations while maximizing between-class variations in a low-dimensional feature space, and easily extracted from the raw face image) provides robustness during recognition. Voice access is handled by a speech recognition process (i.e., voice to text); this method is used for fetching products through voice commands.
Using a voice user interface, an ASR (Automatic Speech Recognition) system performs the essential pre-processing and feature extraction, and finally a hidden Markov model is used to obtain the desired result. There are three different techniques for voice command recognition, namely the acoustic-phonetic approach, the pattern recognition approach, and the knowledge-based approach. With the help of the facial expression, products the user is likely to prefer are determined, and ethnicity and age recognition help to recommend products matching the user's taste. A deep neural network is applied when the user concentrates on a product: the camera lens is adjusted to 65 pixels for the eye region and 35 pixels for the frontal cortex region; once the pupil dilates and the frontal cortex landmark is adjusted, automatic zooming is performed. Finally, to select a product, the customer blinks once and confirms with a “yes” command, after which the product is directly added to the cart.
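As an illustration of the voice-to-text step described above, the following is a minimal sketch. The use of the open-source SpeechRecognition package (and its Google web back end) is an assumption for the sketch; the specification does not name a particular ASR library.

```python
# Minimal voice-to-text sketch for fetching products by voice command.
# Assumes the third-party SpeechRecognition package (pip install SpeechRecognition);
# the specification itself does not prescribe a particular ASR implementation.
import speech_recognition as sr

def voice_command_to_text() -> str:
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # noise is ignored, as in the description
        audio = recognizer.listen(source)
    # Google's web ASR is used here purely as an example back end.
    return recognizer.recognize_google(audio)

if __name__ == "__main__":
    query = voice_command_to_text()
    print("Search query:", query)  # e.g. forwarded to the shopping search engine
```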
[0036] FIG. 2 illustrates a block diagram 200 of a module for facial recognition used within the system of FIG. 1, according to a non-limiting exemplary embodiment of the present invention. FACIAL EXPRESSION RECOGNITION: In this module, the facial image is captured with specified landmarks using FER (Facial Expression Recognition) technology. Facial expression recognition approaches can be categorized into those using appearance features and those using geometric features, although some hybrid approaches are also possible, and each face model is classified with a distinct approach. In FIG. 2(a), the face images are processed by an image filter or filter banks applied to specific regions of the face to extract changes in facial appearance. Specific facial regions are convolved with Gabor wavelet filters, and the filter responses extracted at manually selected points form vectors that are used for the subsequent facial expression classification. Furthermore, principal component analysis (PCA), variants of independent component analysis (ICA), or local binary patterns (LBP) are used for analysing facial texture. A geometric feature extraction system represents facial geometry with attributes such as shapes, distances, angles, or the coordinates of fiducial points (i.e., manually selected points), which form a feature vector; this produces the highest recognition rates. From the temporal perspective, facial expression recognition techniques are represented by the dynamic (image sequencing) approach. The novelty lies in a Hierarchical Augmented Naïve Bayes (HAN) classifier, which incorporates the dependencies between features in hierarchical form. This approach benefits from the proposed pruning technique, which substantially reduces the size of the neural network while improving the generalization capability and the recognition rate. For lower computation time, Local Binary Patterns (LBP) are used as a faster feature extractor. Additionally, facial feature detection is done with colour image sequences initialized without manual input. Facial data can be acquired from a database or a live video stream, in 2D, 3D, or dynamic mode; the most popular type of picture is the 2D greyscale facial image, followed by pre-processing operations (noise removal, light compensation, detection, normalization, tracking, etc.).
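As an illustration of the LBP texture analysis mentioned above, here is a minimal sketch using scikit-image; the library choice and the histogram-based feature vector are assumptions, since the specification names LBP but no implementation.

```python
# Minimal sketch of Local Binary Pattern (LBP) texture extraction for a face image,
# using scikit-image as an illustrative choice.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_face: np.ndarray, points: int = 8, radius: float = 1.0) -> np.ndarray:
    """Return a normalized histogram of uniform LBP codes as a texture feature vector."""
    codes = local_binary_pattern(gray_face, points, radius, method="uniform")
    n_bins = points + 2  # uniform patterns yield P + 2 distinct codes
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Example with a synthetic 200x110 greyscale face crop (size taken from the description).
face = np.random.randint(0, 256, (200, 110)).astype(np.uint8)
print(lbp_histogram(face))
```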
[0037] A normalization process was employed by Ma et al., in which the centres of the eyes and mouth are taken as the reference points. A fixed distance d between the centres of the eyes represents the first normalization criterion; a second pre-processing step is further formulated and applied, which refers to the face dimensions: the width of the selected face is roughly 2d and the height is roughly 3d. The surface feature analysis is based on triangle meshes of faces, created by a 3dMD static digitizer, which uses the principle of light pattern projection. For age and race detection, Haar-like feature detection and skin colour detection have been proposed. This operation is based on the manually identified positions of the eyes, followed by a rotation of the image to horizontally align the face according to the eyes; the final 65 × 45 pixel facial image is obtained by cropping and downsampling operations. The images were cropped from the original Cohn-Kanade (C-K) database using the positions of the two eyes and resized to 200 × 110 pixels. The height of the image is 2.9d, with the level of the eyes located 2d from the bottom boundary, where d represents the distance between the eyes. The fixed distance of 65 pixels between the eyes represents a normalization criterion; the final cropped face has a width of three times this distance and a height of four times this distance. The extracted features are trained with datasets. An efficient (minimizing within-class variations of expressions while maximizing between-class variations, low-dimensional feature space, etc.) and effective (easily extracted from the raw face image) representation of the facial images provides robustness during the recognition process.
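As one concrete reading of the eye-distance normalization above (the 65-pixel criterion with a crop width of three and height of four times that distance), the following sketch uses OpenCV. The placement of the eyes within the crop is an assumption; the specification does not fix it.

```python
# Illustrative eye-based normalization: rotate so the eyes are horizontal, scale so
# the inter-eye distance is 65 pixels, then crop a face window around the eye midpoint.
# The 65 px distance and 3x/4x crop proportions come from the description; the exact
# vertical offset below is an assumption for the sketch.
import cv2
import numpy as np

def normalize_face(img: np.ndarray, left_eye, right_eye, d: int = 65) -> np.ndarray:
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))      # rotation to align the eyes
    scale = d / max(np.hypot(rx - lx, ry - ly), 1e-6)     # scale eye distance to d px
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    M = cv2.getRotationMatrix2D(center, angle, scale)
    aligned = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))
    w, h = 3 * d, 4 * d                                   # width 3d, height 4d crop
    x0 = int(center[0] - w / 2)
    y0 = int(center[1] - h / 3)                           # eyes in the upper third (assumed)
    return aligned[max(y0, 0):y0 + h, max(x0, 0):x0 + w]
```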
AVERAGE PERCENT DEVIATION FROM NEUTRAL VALUE

| Real-valued parameter | NEUTRAL | LIKE | DISLIKE | CONFUSED |
|---|---|---|---|---|
| Eyebrow raise distance | 0.00 | 1.57 | 5.29 | 49.67 |
| Upper eyelid-eyebrow distance | 0.00 | 11.93 | 15.03 | 106.12 |
| Inter-eyebrow distance | 0.00 | -0.53 | -6.43 | 5.62 |
| Upper eyelid-lower eyelid distance | 0.00 | -20.10 | -13.04 | -34.60 |
| Top lip thickness | 0.00 | -11.06 | -23.31 | -49.99 |
| Bottom lip thickness | 0.00 | -10.90 | -8.80 | -51.77 |
| Mouth width | 0.00 | 33.30 | -4.04 | -8.19 |
| Forehead lines | 0.00 | 0.00 | 14.52 | 69.74 |
FACE EXPRESSION MATRIX (FACE EXPRESSION SYSTEM)

| | NEUTRAL (N) | LIKE (L) | DISLIKE (DL) | CONFUSE (C) | LC | DLN | TOTAL (T) |
|---|---|---|---|---|---|---|---|
| N | 62 | - | - | - | - | 1 | 63 |
| L | - | 54 | - | - | 8 | - | 62 |
| DL | - | - | 34 | - | - | 3 | 37 |
| C | - | - | - | 2 | 6 | - | 8 |
| T | 62 | 54 | 34 | 2 | 14 | 4 | 170 |
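The tables above suggest classifying an expression by comparing the measured parameter deviations against the tabulated per-category averages. The specification does not state the exact decision rule; the following is a minimal nearest-prototype sketch under that assumption, using the tabulated values directly.

```python
# Illustrative nearest-prototype classifier over the tabulated average percent
# deviations. The mapping from measured deviations to a category is an assumption;
# the specification only lists the parameters and their average deviations.
import numpy as np

# Parameter order: eyebrow raise, upper eyelid-eyebrow, inter-eyebrow,
# upper-lower eyelid, top lip, bottom lip, mouth width, forehead lines.
PROTOTYPES = {
    "NEUTRAL":  np.zeros(8),
    "LIKE":     np.array([1.57, 11.93, -0.53, -20.10, -11.06, -10.90, 33.30, 0.00]),
    "DISLIKE":  np.array([5.29, 15.03, -6.43, -13.04, -23.31, -8.80, -4.04, 14.52]),
    "CONFUSED": np.array([49.67, 106.12, 5.62, -34.60, -49.99, -51.77, -8.19, 69.74]),
}

def classify_expression(deviations: np.ndarray) -> str:
    """Pick the category whose average deviation vector is closest (Euclidean)."""
    return min(PROTOTYPES, key=lambda k: np.linalg.norm(deviations - PROTOTYPES[k]))

print(classify_expression(np.array([45, 100, 5, -30, -45, -50, -8, 65])))  # -> CONFUSED
```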
[0038] FIG. 3 is a flow diagram 300 for training the facial recognition module of FIG. 2, according to a non-limiting exemplary embodiment of the present invention. Face age description based on deep learning: since deep CNNs have proved to be rich in scene and target representation ability, they offer intelligent and automatic performance compared with traditional manual feature extraction methods. A face age descriptor is learned, and age estimation is then carried out using a divide-and-rule strategy. The AgeNet approach uses both regression and classification to construct an age-estimation deep CNN; in order to reduce the complexity of the network and the training time, only the regression-based age-estimation deep CNN is used here. The deep network model is based on GoogLeNet, designed by Google, which improves performance under the same computing cost and is a very effective network design. In addition, there are many public large-scale face identity databases; therefore the large-scale face database CASIA-WebFace is used to pretrain the network, and face age libraries are used to fine-tune the model generated by the pretraining stage to produce a robust age-estimation deep network.
All training and test face images are normalized to 270 × 270 pixels. In the training stage, the data is augmented: images of 230 × 230 pixels are randomly cut out from the 270 × 270 images, thereby enlarging the dataset. In the test stage, for a 270 × 270 input image, five 230 × 230 images are cut out according to the four corners and the centre point of the image; all the cropped images are then horizontally flipped to generate the corresponding mirrored images, for a total of 10 images, which are fed into the deep network. Finally, the output of the last layer of GoogLeNet is used as the face age descriptor, and the features of the 10 images are concatenated to form the final face feature.
In order to extract a discriminative, compact feature subset, each age value is regarded as a class and a factor analysis model (FAM) is used for dimensionality reduction of the original features. In the FAM, it is desirable to find the optimal projection matrix that minimizes the variation in style between homogeneous (same-age) samples and maximizes the variation in content between classes (different ages), measured by the Euclidean distance between two classes.
Divide-and-rule age estimation function learning: after obtaining the feature vector of the face image, the age estimation function can be learned and the corresponding age estimator trained. When comparing two face ages, it is easy for humans to identify which is older, but accurately guessing the age of a face is not so easy. When inferring a person's exact age, we may compare the input face with the faces of many people whose ages are known, resulting in a series of comparisons, and then estimate the person's age by integrating the results. This process involves numerous pairwise preferences, each obtained by comparing the input face to a face in the dataset. A two-class classifier can be trained that only answers the question “is this face older than the current data point?”; because it only needs to give a yes-or-no answer, the complexity of the problem is reduced. Using k-means clustering with regression and classification, noisy data is removed and the data is stored in clusters.
Such data is processed by a machine learning scoring module, which predicts the range of the customer's age as the output.
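A minimal sketch of the ten-crop descriptor extraction described above follows, using torchvision. The untrained stock GoogLeNet here stands in for the CASIA-WebFace-pretrained, fine-tuned network the description assumes; the 270/230 sizes come from the text.

```python
# Illustrative ten-crop test-time pipeline for the age descriptor: five 230x230
# crops (four corners + centre) of a 270x270 face plus their horizontal flips,
# with the features of the 10 crops concatenated into the final face feature.
import torch
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize((270, 270)),
    transforms.TenCrop(230),  # 5 crops + their horizontal flips = 10 images
    transforms.Lambda(lambda crops: torch.stack([transforms.ToTensor()(c) for c in crops])),
])

net = models.googlenet(weights=None, aux_logits=False, init_weights=True)
net.fc = torch.nn.Identity()  # take the last pooled layer's output as the descriptor
net.eval()

def age_descriptor(pil_image) -> torch.Tensor:
    crops = preprocess(pil_image)   # (10, 3, 230, 230)
    with torch.no_grad():
        feats = net(crops)          # (10, 1024) per-crop descriptors
    return feats.flatten()          # concatenated 10-crop face feature
```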
[0039] FACE RECOGNITION: Humans are able to identify a face by different cues, including its identity and demographic characteristics such as race. Demographic features of the face are involved in human face identity recognition; face identification involves the fusiform face area (FFA), which is known to be important for face recognition. There are different categories of race, such as Asian, American, European, and African. The system is focused on images with weakly aligned face data. The other-race effect for face recognition has been established in numerous human memory studies and in meta-analyses of these studies; in fact, the other-race effect in humans can be measured in infants as a decrease in their ability to detect differences.
The first step of pre-processing is face region extraction: the input face image is converted to a grey image and stored in a database for processing. The input image may be a scanned image or a real-time input image. An enhancing stage then occurs; the proposed system allows colour images of any size and format. The enhancing stage includes noise filtering, greyscale conversion, and histogram equalization. Histogram equalization maps the input image's intensity values to a uniform distribution; by this equalization, the local contrast of objects in the image is increased, especially when the usable data of the image is represented by close contrast values. Through this adjustment, the intensity is better distributed in the histogram, allowing areas of lower local contrast to gain higher contrast without affecting the global contrast.
Using the PCA technique, images are processed in the linear domain. A facial image can be represented as a two-dimensional N × N array of intensity values. PCA finds an M-dimensional subspace whose basis vectors correspond to the maximum-variance directions in the original image space. This subspace is normally of much lower dimension (M << N²); the new basis vectors define a subspace of face images called face space. All images of known faces are projected onto the face space to find the sets of weights that describe the contribution of each basis vector. By comparing the set of weights for an unknown face to the sets of weights of known faces, the face can be identified. Face images form a cluster in image space, and PCA gives a suitable representation.
K-nearest neighbor classification: one of the most popular non-parametric techniques is nearest neighbor classification (NNC), whose asymptotic (infinite sample size) error is less than twice the Bayes error. The KNN (k-th nearest neighbor) classifier is easy to compute and very efficient, requires little memory storage, and has good discriminative power. KNN is also very robust to image distortions (e.g., rotation, illumination). The Euclidean distance determines whether the input face is near a known face. This method can be used to detect race even against a cluttered background.
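A minimal eigenfaces-style sketch of the PCA-plus-KNN pipeline above follows, using scikit-learn. The data is synthetic; a real system would use labeled face crops, and the component count is an assumption.

```python
# PCA projects faces into a low-dimensional "face space" and a k-nearest-neighbor
# classifier matches the projected weights by Euclidean distance, as described.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
N = 64                                   # N x N greyscale faces, flattened to N*N vectors
X_train = rng.random((100, N * N))       # 100 known faces (synthetic stand-ins)
y_train = rng.integers(0, 4, 100)        # 4 illustrative classes (e.g. race categories)

pca = PCA(n_components=20)               # M-dimensional face space, M << N^2
W_train = pca.fit_transform(X_train)     # sets of weights for the known faces

knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean")
knn.fit(W_train, y_train)

probe = rng.random((1, N * N))           # unknown input face
print(knn.predict(pca.transform(probe))) # nearest known faces decide the class
```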
[0040] FIG. 4 illustrates a block diagram 400 of a module for voice recognition used within the system of FIG. 1, according to a non-limiting exemplary embodiment of the present invention. VOICE ACCESS: The Voice User Interface (VUI) accesses the user's voice to process the request; choosing an appropriate dialog strategy is the first step. A menu hierarchy serves as the general organizational scheme for many audio patterns. Speech is used as a one-dimensional channel in the proposed system: the eye is active whereas the ear is passive, i.e., the ear cannot browse a set of recordings the way the eye can scan a screen of text; it has to wait until the information is available. Noise is ignored while converting speech into a semantic structure, as in, for example, Google voice search.
[0041] Speech Capturing Module: It consists of a microphone, which converts sound waves into electrical signals, and an analog-to-digital converter, which samples and digitizes the analog signals to obtain discrete data that the computer can understand. The data is sent to a digital signal processor, which processes the raw speech signal (frequency-domain conversion, retaining only the required information, etc.). The processed signal is forwarded to pre-processed signal storage, where the pre-processed speech is stored in memory for the further task of speech recognition. Pre-defined speech patterns are already given as input for matching against new input speech signals. Such signals are trained with an artificial neural network to handle different vocabularies; using this method, products are filtered.
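A minimal sketch of this front end follows. MFCC features with nearest-template matching are an assumption for the sketch (the description only says features are extracted and matched against stored patterns), and librosa is one possible library choice.

```python
# Illustrative front end for the speech capturing module: take a sampled waveform,
# extract MFCC features, and match against stored pre-defined command patterns by
# distance.
import numpy as np
import librosa

def mfcc_signature(waveform: np.ndarray, sr: int = 16000) -> np.ndarray:
    """Mean MFCC vector as a compact signature of an utterance."""
    mfcc = librosa.feature.mfcc(y=waveform, sr=sr, n_mfcc=13)  # (13, frames)
    return mfcc.mean(axis=1)

def match_command(waveform: np.ndarray, templates: dict) -> str:
    """Return the name of the stored command pattern closest to the input."""
    sig = mfcc_signature(waveform)
    return min(templates, key=lambda name: np.linalg.norm(sig - templates[name]))

# Usage: templates would be built from recorded command utterances, e.g.
# templates = {"yes": mfcc_signature(yes_wave), "search shoes": mfcc_signature(...)}
```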
[0042] PRODUCT CATEGORIZATION: After processing the facial data and voice commands, products are categorized in the shopping search engine.
[0043] FIG. 5 illustrates a flow diagram 500 illustrating product recommendation within the system of FIG. 1, according to a non-limiting exemplary embodiment of the present invention. PRODUCT RECOMMENDATION: Using age and ethnicity detection, the system identifies the user's taste while processing the data sets. For example, if a person allows access to the camera, the system scans the user's image, analyses the gaze of the user's face, and identifies the user's race and age. With this information, if the user is identified as Indian, and people in India tend to have a distinctive taste for products of antique, traditional styles, such products are added to the stack; fetching similar products is automated using an SVM technique. If the facial expression does not indicate satisfaction, the system iteratively recommends products the user is likely to prefer.
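A minimal sketch of such an SVM step follows. The feature scheme and labels are assumptions (the description names SVM but not a representation), and the threshold echoes the “predetermined threshold” of claim 1.

```python
# Illustrative SVM step for fetching similar products: a classifier trained on
# product feature vectors labeled by taste predicts which candidate products match
# the user's inferred profile. Features and labels here are synthetic stand-ins.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
product_features = rng.random((200, 8))            # e.g. style/price/category encodings
liked = (product_features[:, 0] > 0.5).astype(int) # stand-in "matches user taste" label

svm = SVC(kernel="rbf", probability=True)
svm.fit(product_features, liked)

candidates = rng.random((10, 8))
scores = svm.predict_proba(candidates)[:, 1]       # likelihood of matching the taste
threshold = 0.6                                    # the claimed predetermined threshold
recommended = np.flatnonzero(scores > threshold)
print("Recommend candidate products:", recommended)
```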
[0044] FIG. 6 illustrates a flow diagram 600 of a module for capturing the facial recognition within the system of FIG. 1, according to a non-limiting exemplary embodiment of the present invention. If the user's face appears reactionless and concentrated on a product, the module observes the frontal cortex landmarks with the normalization technique in order to identify pupil dilation, and sends a command to a camera incorporating a lens with optical zoom, which moves the lens elements inside the lens housing for an enlarged view and image. As the lens moves, the magnification of the picture changes, capturing high-quality photos of the primary object from a closer perspective; since the lens moves to adjust the magnification, image quality is maintained. Furthermore, a 65-pixel window identifies the eye region and focuses on the pupil region; if this region deviates from the normal range, the product image is upscaled for the customer's inspection.
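One plausible trigger for this automatic-zoom behaviour is sketched below. The dark-pixel proxy for pupil size and the thresholds are assumptions; the specification only says dilation beyond the normal range triggers upscaling.

```python
# Illustrative trigger for automatic zoom: estimate the pupil area in a 65-pixel
# eye crop by dark-pixel thresholding and, when it exceeds the user's calibrated
# baseline by a margin, signal the UI to upscale the product image.
import numpy as np

def pupil_area(eye_crop_gray: np.ndarray, dark_threshold: int = 40) -> int:
    """Count near-black pixels in the eye region as a crude pupil-size proxy."""
    return int(np.count_nonzero(eye_crop_gray < dark_threshold))

def should_zoom(eye_crop_gray: np.ndarray, baseline_area: float, ratio: float = 1.3) -> bool:
    """True when the pupil appears dilated beyond the user's normal range."""
    return pupil_area(eye_crop_gray) > ratio * baseline_area

# Usage: baseline_area is calibrated while the face is neutral; per frame, a
# 65x65-pixel eye crop is passed in, and a True result upscales the product view.
```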
[0045] FIG. 7 illustrates a flow diagram 700 of a module for confirming a product within the system of FIG. 1, according to a non-limiting exemplary embodiment of the present invention. Once the user likes a product, in order to confirm it the user blinks once and then says the command “yes”. The eyelid region is tracked with a normal range and an adjusted range; when this input matches the pre-defined stored command, the product is automatically added to the cart.
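A minimal sketch combining the two confirmation signals follows. The eye-aspect-ratio heuristic over six eyelid landmarks is an assumption (a common blink-detection technique, not named in the specification).

```python
# Illustrative confirmation check: a single blink detected from eyelid landmarks
# via the eye-aspect-ratio (EAR) heuristic, combined with a recognized "yes" command.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmarks around one eye, as in the common 68-point convention."""
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def confirm_add_to_cart(ear_sequence, spoken_text: str, blink_ear: float = 0.2) -> bool:
    """Add to cart when exactly one blink (EAR dips below threshold once) meets a 'yes'."""
    below = [False] + [ear < blink_ear for ear in ear_sequence]
    blinks = sum(1 for prev, cur in zip(below, below[1:]) if cur and not prev)
    return blinks == 1 and spoken_text.strip().lower() == "yes"
```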
[0046] This invention accesses the voice and camera for processing the facial image and product categorization (FIG. 1). The facial expression (FIG. 1, i) and the demographic information (age, FIG. 1, ii; facial landmarks, FIG. 1, iii; and ethnicity, FIG. 1, iv) are processed with the data sets. This data is used for product recommendation (FIG. 5); for deeper inspection of product images, the frontal cortex and pupil dilation monitor (FIG. 6) analyses the user's image and upscales the product via automatic zooming. Once a product appears to be liked, similar models are fetched according to the user's preference. If the user wants to select a product, the user blinks once and says “yes” for further confirmation; those products are added to the cart automatically.
[0047] It is noted that the above-described examples of the present invention are for the purpose of illustration only. Although the present invention has been described in conjunction with a specific example thereof, numerous modifications may be possible without materially departing from the teachings and advantages of the subject matter described herein. Other substitutions, modifications and changes may be made without departing from the spirit of the present solution. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive.
[0048] Although the embodiments herein are described with various specific embodiments, it will be obvious for a person skilled in the art to practice the embodiments herein with modifications.
ADVANTAGES OF THE INVENTION
[0049] The present invention provides the following advantages over the prior art.
[0050] This invention has advantages over prior art models in that it can add products to the cart through facial and voice cues, and can help illiterate users with online purchasing.
[0051] Another advantage is that, by using the disclosed apparatus, product recommendation is more accurate than in earlier systems. This invention is not obvious and has advantages over the prior art.
[0052] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such as specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. Further, it is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modifications. However, all such modifications are deemed to be within the scope of the claims.