
Interactive System And Method To Assist A User To Perform An Activity Through Feedbacks

Abstract: An interactive system (10) to assist a user to perform an activity through feedbacks is disclosed. The interactive system includes an internet of things based feedback device (20). The interactive system includes a processing subsystem (70) including an activity monitoring module (100) to identify a first pattern corresponding to movements of the activity of the user. The processing subsystem includes an impression tracking module (110) to identify a second pattern corresponding to the movements of the activity of the user. The processing subsystem includes a mapping module (120) to transmit feed signals to the internet of things based feedback device to project a virtual model of the user in a mirror (60) to provide visual feedbacks to the user. The processing subsystem includes a feedback module (130) to generate feedbacks to the user to correct a posture of the user performing the activity. FIG. 1


Patent Information

Application #
Filing Date
29 October 2021
Publication Number
18/2023
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

WELLNESYS TECHNOLOGIES PRIVATE LIMITED
101, SKYLINE AMOGHA, 3RD LANE, 7TH CROSS, TEACHERS COLONY 1ST STAGE, KUMARASWAMY LAYOUT, BANGALORE, 560078, KARNATAKA, INDIA

Inventors

1. MURALIDHAR SOMISETTY
J-462, BRIGADE MEDOWS PLUMERIA, OPPOSITE TO ANJANEYA TEMPLE, UDAYAPURA POST, KANAKAPURA ROAD, SAALUHUNASE VILLAGE, BANGALORE, 560082, KARNATAKA, INDIA
2. SANKAR DASIGA
NO. 92, FIRST BLOCK, THIRD MAIN ROAD, R T NAGAR, BANGALORE, 560032, KARNATAKA, INDIA

Specification

EARLIEST PRIORITY DATE:
This Application claims priority from a Provisional patent application filed in India having Patent Application No. 202141049652, filed on October 29, 2021, and titled “A SYSTEM FOR AN INTERACTIVE AND IMMERSIVE DIGITAL LIFESTYLE”.
FIELD OF INVENTION
[0001] Embodiments of the present disclosure relate to the field of computing, calculating, counting and more particularly to an interactive system and method to assist a user to perform an activity through feedbacks.
BACKGROUND
[0002] Human activities may be referred to as activities performed for recreation and living. The activities may include sports, games, therapy, exercise, martial arts, dance and the like. Performing the activities in a wrong way may cause physical injuries to the person performing them. Also, lack of guidance may affect the learning pace and performance of the person.
[0003] Performing the activities by following a pre-recorded video session is a one-way learning approach in which the person may end up doing the activities incorrectly due to the absence of any feedbacks. Further, finding an instructor is another tedious task since the availability of the instructor may be subject to various factors such as geographical location, economic considerations, the expertise of the instructor and the like. Existing systems fail to provide feedbacks to the person performing the activity to correct the mistakes committed by the person.
[0004] Hence, there is a need for an interactive system to assist a user to perform an activity through feedbacks to address the aforementioned issue(s).
BRIEF DESCRIPTION
[0005] In accordance with an embodiment of the present disclosure, an interactive system to assist a user to perform an activity through feedbacks is provided. The interactive system includes an internet of things based feedback device located in proximity of a user and operatively coupled to an integrated database. The internet of things based feedback device includes a display unit adapted to display a plurality of information to guide the user to perform an activity. The internet of things based feedback device also includes a plurality of sensors positioned adjacently to the display unit. The plurality of sensors are adapted to capture one or more movements of the user upon performing the activity. The plurality of sensors includes at least one of an image sensor, a position sensor, and a pressure sensor. The internet of things based feedback device further includes a mirror positioned adjacently to the plurality of sensors. The mirror is adapted to provide one or more visual feedbacks to the user while performing the activity. The interactive system further includes a processing subsystem operatively coupled to the internet of things based feedback device. The processing subsystem is hosted on a server and configured to execute on a network to control bidirectional communications among a plurality of modules. The processing subsystem includes an activity monitoring module operatively coupled to the integrated database. The activity monitoring module is configured to detect one or more regions corresponding to one or more body parts of the user from one or more images received from the image sensor. The activity monitoring module is also configured to generate bounding boxes and segmentation masks for each of the one or more regions detected on the one or more images using a feature pyramid network technique. The activity monitoring module is further configured to identify a first pattern corresponding to one or more movements of the activity of the user based on the bounding boxes and segmentation masks.
[0006] The processing subsystem also includes an impression tracking module operatively coupled to the activity monitoring module. The impression tracking module is configured to analyze one or more depth data, one or more positional data and one or more pressure data corresponding to the one or more movements of the activity of the user captured by the image sensor, the position sensor and the pressure sensor respectively. The impression tracking module is also configured to identify a second pattern corresponding to the one or more movements of the activity of the user based on the analysis. The processing subsystem also includes a mapping module operatively coupled to the impression tracking module. The mapping module is configured to generate a virtual model of the user in a virtual space by mapping the first pattern and the second pattern. The one or more spatial dimensions of the virtual model in the virtual space are proportional to the one or more spatial dimensions of the user in a real space. The mapping module is also configured to transmit one or more feed signals to the internet of things based feedback device to project the virtual model of the user in the mirror to provide the one or more visual feedbacks to the user. The processing subsystem further includes a feedback module operatively coupled to the mapping module. The feedback module is configured to compare the one or more spatial dimensions of the virtual model with the one or more corresponding spatial dimensions of a prestored model. The feedback module is also configured to identify one or more anomalies to generate a matching score upon comparing. The feedback module is further configured to generate one or more feedbacks to the user based on the one or more anomalies identified to correct a posture of the user performing the activity when the matching score generated is below a predefined threshold, thereby assisting the user in performing the activity via the one or more feedbacks.
[0007] In accordance with another embodiment of the present disclosure, a method to assist a user to perform an activity through feedbacks is provided. The method includes displaying, by a display unit of an internet of things based feedback device, a plurality of information to guide the user to perform an activity. The method also includes capturing, by a plurality of sensors of the internet of things based feedback device, one or more movements of the user upon performing the activity. The plurality of sensors includes at least one of an image sensor, a position sensor, and a pressure sensor. The method further includes providing, by a mirror of the internet of things based feedback device, one or more visual feedbacks to the user while performing the activity. The method also includes detecting, by an activity monitoring module coupled to an integrated database, one or more regions corresponding to one or more body parts of the user from one or more images received from the image sensor. The method also includes generating, by the activity monitoring module, bounding boxes and segmentation masks for each of the one or more regions detected on the one or more images using a feature pyramid network technique. The method also includes identifying, by the activity monitoring module, a first pattern corresponding to one or more movements of the activity of the user based on the bounding boxes and segmentation masks. The method also includes analyzing, by an impression tracking module, one or more depth data, one or more positional data and one or more pressure data corresponding to the one or more movements of the activity of the user captured by the image sensor, the position sensor and the pressure sensor respectively.
[0008] The method further includes identifying, by the impression tracking module, a second pattern corresponding to the one or more movements of the activity of the user based on the analysis. The method further includes generating, by a mapping module, a virtual model of the user in a virtual space by mapping the first pattern and the second pattern. The one or more spatial dimensions of the virtual model in the virtual space are proportional to the one or more spatial dimensions of the user in a real space. The method also includes transmitting, by the mapping module, one or more feed signals to the internet of things based feedback device to project the virtual model of the user in the mirror to provide the one or more visual feedbacks to the user. The method also includes comparing, by a feedback module, the one or more spatial dimensions of the virtual model with the one or more corresponding spatial dimensions of a prestored model. The method also includes identifying, by the feedback module, one or more anomalies to generate a matching score upon comparing. The method further includes generating, by the feedback module, one or more feedbacks to the user based on the one or more anomalies identified to correct a posture of the user performing the activity when the matching score generated is below a predefined threshold, thereby assisting the user in performing the activity via the one or more feedbacks.
[0009] To further clarify the advantages and features of the present disclosure, a more particular description of the disclosure will follow by reference to specific embodiments thereof, which are illustrated in the appended figures. It is to be appreciated that these figures depict only typical embodiments of the disclosure and are therefore not to be considered limiting in scope. The disclosure will be described and explained with additional specificity and detail with the appended figures.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] The disclosure will be described and explained with additional specificity and detail with the accompanying figures in which:
[0011] FIG. 1 is a block diagram representation of an interactive system to assist a user to perform an activity through feedbacks in accordance with an embodiment of the present disclosure;
[0012] FIG. 2 is a block diagram representation of one embodiment of the interactive system of FIG. 1, in accordance with an embodiment of the present disclosure;
[0013] FIG. 3 is a schematic representation of an exemplary embodiment of the interactive system of FIG. 1, in accordance with an embodiment of the present disclosure;
[0014] FIG. 4 is a block diagram of a computer or a server in accordance with an embodiment of the present disclosure;
[0015] FIG. 5a is a flow chart representing the steps involved in a method to assist a user to perform an activity through feedbacks in accordance with an embodiment of the present disclosure;
[0016] FIG. 5b is a flow chart representing the continued steps involved in a method of FIG. 5a, in accordance with an embodiment of the present disclosure; and
[0017] FIG. 5c is a flow chart representing the continued steps involved in a method of FIG. 5b, in accordance with an embodiment of the present disclosure.
[0018] Further, those skilled in the art will appreciate that elements in the figures are illustrated for simplicity and may not have necessarily been drawn to scale. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the figures by conventional symbols, and the figures may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the figures with details that will be readily apparent to those skilled in the art having the benefit of the description herein.
DETAILED DESCRIPTION
[0019] For the purpose of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiment illustrated in the figures and specific language will be used to describe them. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended. Such alterations and further modifications in the illustrated system, and such further applications of the principles of the disclosure as would normally occur to those skilled in the art are to be construed as being within the scope of the present disclosure.
[0020] The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such a process or method. Similarly, one or more devices or sub-systems or elements or structures or components preceded by "comprises... a" does not, without more constraints, preclude the existence of other devices, sub-systems, elements, structures, components, additional devices, additional sub-systems, additional elements, additional structures, or additional components. Appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but not necessarily do, all refer to the same embodiment.
[0021] Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this disclosure belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.
[0022] In the following specification and the claims, reference will be made to a number of terms, which shall be defined to have the following meanings. The singular forms “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise.
[0023] Embodiments of the present disclosure relate to an interactive system and method to assist a user to perform an activity through feedbacks. The interactive system includes an internet of things based feedback device located in proximity of a user and operatively coupled to an integrated database. The internet of things based feedback device includes a display unit adapted to display a plurality of information to guide the user to perform an activity. The internet of things based feedback device also includes a plurality of sensors positioned adjacently to the display unit. The plurality of sensors are adapted to capture one or more movements of the user upon performing the activity. The plurality of sensors includes at least one of an image sensor, a position sensor, and a pressure sensor. The internet of things based feedback device further includes a mirror positioned adjacently to the plurality of sensors. The mirror is adapted to provide one or more visual feedbacks to the user while performing the activity. The interactive system further includes a processing subsystem operatively coupled to the internet of things based feedback device. The processing subsystem is hosted on a server and configured to execute on a network to control bidirectional communications among a plurality of modules. The processing subsystem includes an activity monitoring module operatively coupled to the integrated database. The activity monitoring module is configured to detect one or more regions corresponding to one or more body parts of the user from one or more images received from the image sensor. The activity monitoring module is also configured to generate bounding boxes and segmentation masks for each of the one or more regions detected on the one or more images using a feature pyramid network technique. The activity monitoring module is further configured to identify a first pattern corresponding to one or more movements of the activity of the user based on the bounding boxes and segmentation masks.
[0024] The processing subsystem also includes an impression tracking module operatively coupled to the activity monitoring module. The impression tracking module is configured to analyze one or more depth data, one or more positional data and one or more pressure data corresponding to the one or more movements of the activity of the user captured by the image sensor, the position sensor and the pressure sensor respectively. The impression tracking module is also configured to identify a second pattern corresponding to the one or more movements of the activity of the user based on the analysis. The processing subsystem also includes a mapping module operatively coupled to the impression tracking module. The mapping module is configured to generate a virtual model of the user in a virtual space by mapping the first pattern and the second pattern. The one or more spatial dimensions of the virtual model in the virtual space are proportional to the one or more spatial dimensions of the user in a real space. The mapping module is also configured to transmit one or more feed signals to the internet of things based feedback device to project the virtual model of the user in the mirror to provide the one or more visual feedbacks to the user. The processing subsystem further includes a feedback module operatively coupled to the mapping module. The feedback module is configured to compare the one or more spatial dimensions of the virtual model with the one or more corresponding spatial dimensions of a prestored model. The feedback module is also configured to identify one or more anomalies to generate a matching score upon comparing. The feedback module is further configured to generate one or more feedbacks to the user based on the one or more anomalies identified to correct a posture of the user performing the activity when the matching score generated is below a predefined threshold, thereby assisting the user in performing the activity via the one or more feedbacks.
[0025] FIG. 1 is a block diagram representation of an interactive system (10) to assist a user to perform an activity through feedbacks in accordance with an embodiment of the present disclosure. The interactive system (10) includes an internet of things based feedback device (20) located in proximity of a user (not shown in FIG. 1) and operatively coupled to an integrated database (30). In a specific embodiment, the integrated database (30) may include, but not limited to, a SQL based database, non-SQL based database, object-oriented database, hierarchical database, columnar database and the like. The internet of things based feedback device (20) includes a display unit (40) adapted to display information to guide the user to perform an activity. In some embodiments, the activity may include, but not limited to, yoga, dance, physical exercise, meditation, sport related sequence such as “pull shot” of the cricket game and the like.
[0026] Further, in one embodiment, the information may include, but not limited to, a recorded multimedia illustrating the activity, a streamed multimedia illustrating the activity, a live multimedia session and the like. In one embodiment, the display unit (40) may include, but not limited to, a liquid crystal display, a light emitting diode display, a plasma display and the like. The internet of things based feedback device (20) also includes a plurality of sensors (50) positioned adjacently to the display unit (40). The plurality of sensors (50) are adapted to capture one or more movements of the user upon performing the activity. The plurality of sensors (50) includes at least one of an image sensor (not shown in FIG. 1), a position sensor (not shown in FIG. 1), and a pressure sensor (not shown in FIG. 1).
[0027] Furthermore, in one embodiment, the image sensor may include a camera adapted to capture one or more images in wavelength range corresponding to a color including at least one of red, green and blue. In some embodiments, the position sensor may include, at least one of a time of flight sensor, an infrared sensor, a proximity sensor and the camera adapted to sense one or more depth data. In such an embodiment, the position sensor and the pressure sensor may be mounted on a physical mat (FIG. 2, (140)) adapted to support the user when the user is performing the activity. In a specific embodiment, the plurality of sensors (50) may be mounted on a wearable device (not shown in FIG. 1) worn by the user. In such an embodiment, the wearable device may include one or more biometric sensors adapted to sense one or more vitals of the user. In such an embodiment, the internet of things based feedback device (20) may be configured to receive a performance report from the wearable device regarding the performance of the user while performing the activity.
[0028] Moreover, in one embodiment, the internet of things based feedback device (20) may provide the performance report received to one or more persons. In one embodiment, the one or more persons may include, a trainer, a physician, a coach, a physiotherapist and the like. In a specific embodiment, the internet of things based feedback device (20) may be configured to establish a communication channel with the wearable device when the wearable device is located in a predefined range from the internet of things based feedback device (20). In some embodiments, the one or more vitals may include, but not limited to, heartbeat, blood pressure, blood oxygen level and the like. In some embodiments, the internet of things based feedback device (20) may include a microphone (not shown in FIG. 1) adapted to receive one or more voice commands from the user.
[0029] Additionally, in some embodiments, the internet of things based feedback device (20) may include a speaker (not shown in FIG. 1) adapted to provide one or more audio feedbacks to the user. In a specific embodiment, the internet of things based feedback device (20) may include a light sensor (not shown in FIG. 1) adapted to adjust brightness of the display unit (40) upon sensing one or more ambient light levels. In one embodiment, the internet of things based feedback device (20) may be enclosed by an enclosure (not shown in FIG. 1) to protect the internet of things based feedback device (20) from an external environment. In some embodiments, the internet of things based feedback device (20) may be mounted on a stand. In such an embodiment, it would be convenient for the user to move around the internet of things based feedback device (20) to a desired position within an area of interest.
[0030] Also, the internet of things based feedback device (20) further includes a mirror (60) positioned adjacently to the plurality of sensors (50). The mirror (60) is adapted to provide one or more visual feedbacks to the user while performing the activity. In one embodiment, the mirror (60) may include a two-way mirror (60). The interactive system (10) further includes a processing subsystem (70) operatively coupled to the internet of things based feedback device (20). The processing subsystem (70) is hosted on a server (80). In one embodiment, the server (80) may be a cloud-based server. In another embodiment, the server (80) may be a local server. The processing subsystem (70) is configured to execute on a network (90) to control bidirectional communications among a plurality of modules.
[0031] Furthermore, in one embodiment, the network (90) may include one or more terrestrial and/or satellite networks interconnected to communicatively connect a user device to web server engine and a web crawler. In one example, the network (90) may be a private or public local area network (LAN) or wide area network (WAN), such as the internet. Further, in another embodiment, the network (90) may include both wired and wireless communications according to one or more standards and/or via one or more transport mediums. In one example, the network (90) may include wireless communications according to one of the 802.11 or Bluetooth specification sets, LoRa (Long Range Radio) or another standard or proprietary wireless communication protocol. In yet another embodiment, the network (90) may also include communications over a terrestrial cellular network, including, a GSM (global system for mobile communications), CDMA (code division multiple access), and/or EDGE (enhanced data for global evolution) network.
[0032] Moreover, the processing subsystem (70) includes an activity monitoring module (100) operatively coupled to the integrated database (30). The activity monitoring module (100) is configured to detect one or more regions corresponding to one or more body parts of the user from one or more images received from the image sensor. In one embodiment, the one or more regions may be associated with one or more body joints of the user. The activity monitoring module (100) is also configured to generate bounding boxes and segmentation masks for each of the one or more regions detected on the one or more images using a feature pyramid network technique. In one embodiment, the bounding boxes may include, a surrounding sphere (SS), an axis-aligned bounding box (AABB), an oriented bounding box (OBB), a fixed-direction hull (FDH), and a convex hull (CH).
[0033] Additionally, in some embodiments, the segmentation masks may include, but not limited to, threshold based segmentation, edge based segmentation, region-based segmentation, clustering based segmentation, artificial neural network based segmentation and the like. The activity monitoring module (100) is further configured to identify a first pattern corresponding to one or more movements of the activity of the user based on the bounding boxes and segmentation masks. In one embodiment, the first pattern may include a body silhouette of the user. In some embodiments, the activity monitoring module (100) may notify the user regarding one or more ambient conditions capable of affecting the identification of the first pattern. In such an embodiment, the one or more ambient conditions may include, but not limited to, ambient light levels, temperature levels, humidity and the like.
[0034] Also, for example, consider a scenario in which a user X may be practicing karate in front of the internet of things based feedback device (20). The activity monitoring module (100) may detect one or more regions corresponding to hands, legs and head of the user X from the one or more images received from the image sensor. The activity monitoring module (100) may be able to deduce the first pattern corresponding to a body posture of the user X based on the bounding boxes and segmentation masks generated for each of the hands, legs and head of the user X. The first pattern may be a skeleton based model of the user X. As used herein, the skeleton based model may include interconnected nodes that may resemble body joints of a human being.
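The skeleton based first pattern described in the karate scenario above may be sketched, in a purely illustrative and non-limiting manner, as follows. The joint names, coordinates, and the axis-aligned bounding-box helper are assumptions introduced for illustration only and do not represent the claimed implementation:

```python
# Illustrative sketch of a first pattern: per-region bounding boxes plus a
# skeleton of interconnected nodes resembling body joints. All coordinates
# and joint names are hypothetical.

def bounding_box(points):
    """Axis-aligned bounding box (x_min, y_min, x_max, y_max) around points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

# Hypothetical body-joint detections (pixel coordinates) for one frame.
joints = {
    "head":       (120, 40),
    "left_hand":  (60, 150),
    "right_hand": (180, 150),
    "left_leg":   (95, 300),
    "right_leg":  (145, 300),
}

# Skeleton edges connecting the interconnected nodes (body joints).
edges = [("head", "left_hand"), ("head", "right_hand"),
         ("head", "left_leg"), ("head", "right_leg")]

first_pattern = {
    "boxes": {name: bounding_box([pt]) for name, pt in joints.items()},
    "skeleton": [(joints[a], joints[b]) for a, b in edges],
}

print(bounding_box(joints.values()))  # overall body region
```

In a real system the joints would come from the region detections and segmentation masks rather than being hard-coded.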
[0035] Further, the processing subsystem (70) also includes an impression tracking module (110) operatively coupled to the activity monitoring module (100). The impression tracking module (110) is configured to analyze one or more depth data, one or more positional data and one or more pressure data corresponding to the one or more movements of the activity of the user captured by the image sensor, the position sensor and the pressure sensor respectively. The impression tracking module (110) is also configured to identify a second pattern corresponding to the one or more movements of the activity of the user based on the analysis. In continuation with the ongoing example, consider a scenario in which the user X may be performing karate on the physical mat (FIG. 2, (140)).
[0036] Furthermore, the left leg and left hand of the user X are in a forward position compared to the right leg and right hand of the user X. The impression tracking module (110) may receive relative distance between the left hand, the right hand, the left leg and the right leg of the user X from the one or more depth data and the one or more positional data provided by the image sensor and the position sensor. Further, the impression tracking module (110) may receive information regarding an amount of pressure exerted by the user X on the physical mat (140) from the one or more pressure data provided by the pressure sensor. The pressure sensor and the position sensor may be mounted in the physical mat (140). The impression tracking module (110) may be configured to identify the second pattern corresponding to movements of the hands, the legs and the head of the user X based on the one or more pressure data, the one or more depth data, and the one or more positional data. The second pattern may be in a time synchronized form including the one or more pressure data, the one or more depth data, and the one or more positional data with respect to the one or more movements of the activity of the user X.
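The time synchronized form of the second pattern may be sketched, again only as an illustrative assumption, by aligning the three sensor streams to a common clock. The timestamps, sample values, and the nearest-sample alignment rule are hypothetical:

```python
# Illustrative sketch of fusing depth, position, and pressure samples into a
# time-synchronized second pattern. All sample values are hypothetical.

def nearest(samples, t):
    """Return the (timestamp, value) sample closest in time to t."""
    return min(samples, key=lambda s: abs(s[0] - t))

depth    = [(0.00, 1.8), (0.10, 1.7), (0.20, 1.6)]   # (time s, metres)
position = [(0.02, (0.4, 0.9)), (0.12, (0.5, 0.9))]  # (time s, (x, y))
pressure = [(0.05, 310), (0.15, 340)]                # (time s, mat reading)

# Align every stream to a common clock to form the second pattern.
clock = [0.0, 0.1, 0.2]
second_pattern = [
    {"t": t,
     "depth": nearest(depth, t)[1],
     "position": nearest(position, t)[1],
     "pressure": nearest(pressure, t)[1]}
    for t in clock
]
```

More sophisticated systems might interpolate between samples rather than snapping to the nearest one; nearest-sample alignment is used here only to keep the sketch short.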
[0037] Moreover, the processing subsystem (70) also includes a mapping module (120) operatively coupled to the impression tracking module (110). The mapping module (120) is configured to generate a virtual model of the user in a virtual space by mapping the first pattern and the second pattern. In one embodiment, the virtual model may be a multidimensional model including a three dimensional model, a two dimensional model and the like. The one or more spatial dimensions of the virtual model in the virtual space are proportional to the one or more spatial dimensions of the user in a real space. The mapping module (120) is also configured to transmit one or more feed signals to the internet of things based feedback device (20) to project the virtual model of the user in the mirror (60) to provide the one or more visual feedbacks to the user.
[0038] Additionally, in continuation with the ongoing example, the mapping module (120) may be configured to generate a three dimensional model of the user X in the virtual space based on the first pattern and the second pattern. Dimensions of the virtual space and the three dimensional model of the user X may be proportional to the dimensions of the user X and a physical environment surrounding the user X. The physical environment may include, an open space, a room, a wall and the like. In one embodiment, the mapping module (120) may be configured to assign different weights to the first pattern and the second pattern based on one or more ambient conditions.
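The weighted combination of the two patterns and the proportional scaling into virtual space may be sketched as follows. The weights, scale factor, and joint estimates are assumed values for illustration, not part of the claimed method:

```python
# Illustrative sketch: build a proportionally scaled virtual model as a
# weighted combination of the first (image-derived) and second
# (sensor-derived) patterns. All numbers are hypothetical.

def fuse(p1, p2, w1, w2):
    """Weighted average of two (x, y) joint estimates."""
    return ((w1 * p1[0] + w2 * p2[0]) / (w1 + w2),
            (w1 * p1[1] + w2 * p2[1]) / (w1 + w2))

SCALE = 0.01  # real-space units -> virtual-space units (kept proportional)

first_pattern  = {"left_hand": (60.0, 150.0)}   # from boxes/masks
second_pattern = {"left_hand": (62.0, 148.0)}   # from depth/position data

# In poor ambient light the image-derived pattern might receive less weight.
w_image, w_sensor = 0.4, 0.6

virtual_model = {
    joint: tuple(c * SCALE for c in fuse(first_pattern[joint],
                                         second_pattern[joint],
                                         w_image, w_sensor))
    for joint in first_pattern
}
```

The per-condition weighting mirrors the embodiment in which different weights are assigned to the two patterns based on ambient conditions.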
[0039] The processing subsystem (70) further includes a feedback module (130) operatively coupled to the mapping module (120). The feedback module (130) is configured to compare the one or more spatial dimensions of the virtual model with the one or more corresponding spatial dimensions of a prestored model. In one embodiment, the prestored model may include the virtual model mapped to one or more human postures while performing the corresponding activity. The feedback module (130) is also configured to identify one or more anomalies to generate a matching score upon comparing. The feedback module (130) is further configured to generate one or more feedbacks to the user based on the one or more anomalies identified to correct a posture of the user performing the activity when the matching score generated is below a predefined threshold, thereby assisting the user in performing the activity via the one or more feedbacks.
[0040] Furthermore, in continuation with the ongoing example, the feedback module (130) may compare the one or more spatial dimensions of the three dimensional model of the user X with the prestored model. In this scenario, the prestored model may be the virtual model mapped to one or more karate postures. The feedback module (130) may generate the matching score upon comparing the posture of the user X with the one or more karate postures of the prestored model. Further, the feedback module (130) may generate the one or more feedbacks to the user X to correct the posture of the user X when the matching score is below the predefined threshold.
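The compare-identify-score sequence performed by the feedback module (130) can be sketched as below. The joint-coordinate representation, the mean-distance score on a 0-1 scale, and the per-joint tolerance are illustrative assumptions, not the claimed method.

```python
import math

def matching_score(model_joints, reference_joints):
    """Return a score in [0, 1]; 1.0 means the posture matches the
    prestored reference exactly. Joints are (x, y) pairs."""
    distances = [math.dist(m, r) for m, r in zip(model_joints, reference_joints)]
    mean_error = sum(distances) / len(distances)
    return max(0.0, 1.0 - mean_error)

def find_anomalies(model_joints, reference_joints, tolerance=0.15):
    """Indices of joints deviating from the reference by more than
    the tolerance; these drive the corrective feedbacks."""
    return [
        i for i, (m, r) in enumerate(zip(model_joints, reference_joints))
        if math.dist(m, r) > tolerance
    ]
```

With such a score, feedback is triggered only when the score falls below the predefined threshold, exactly as paragraphs [0039]-[0040] describe for the karate example.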
[0041] FIG. 2 is a block diagram representation of one embodiment of the interactive system (10) of FIG. 1, in accordance with an embodiment of the present disclosure. The interactive system (10) of FIG. 1 includes the activity monitoring module (100), the impression tracking module (110), the mapping module (120) and the feedback module (130). In one embodiment, the interactive system (10) of FIG. 1 may include the processing subsystem (70) including a voice assistant module (150) configured to enable the user to interact with the internet of things based feedback device (20) through one or more voice commands of the user. In one embodiment, the voice assistant module (150) may utilize a natural language processing technique to process the one or more voice commands of the user X. In continuation with the ongoing example, the voice assistant module (150) may enable the user X to communicate with the internet of things based feedback device (20) through the one or more voice commands. The user X may direct the internet of things based feedback device (20) to replay the plurality of information being displayed by the display unit (40) through the one or more voice commands.
[0042] Further, the processing subsystem (70) may include a communication module (160) configured to provide information regarding one or more trainers through a graphical user interface displayed on the display unit (40) upon receiving one or more inputs from the user. In such an embodiment, the one or more inputs may include at least one of a touch command, a gesture command, and a voice command. The communication module (160) may also be configured to establish a communication channel between the one or more trainers and the user to initiate an interaction between the one or more trainers and the user upon selection of the one or more trainers by the user. In one embodiment, the communication channel may support half duplex, full duplex, and unidirectional communication. In continuation with the ongoing example, the communication module (160) may provide information regarding one or more karate trainers when the user X requests the information through the graphical user interface displayed on the display unit (40). The communication module (160) may establish the communication channel between the user X and a karate trainer Y selected by the user X to enable the user X to interact with the karate trainer Y.
[0043] FIG. 3 is a schematic representation of an exemplary embodiment (200) of the interactive system (10) of FIG. 1 in accordance with an embodiment of the present disclosure. Consider a scenario in which a user A (210) may be intending to practice zumba dance. The user A (210) may select the zumba dance through the user interface displayed on the display unit (40) associated with the internet of things based feedback device (20). In response to the selection made by the user A (210), the internet of things based feedback device (20) may display a tutorial of the zumba dance on the display unit (40). The display unit (40) may be a touch sensitive display. The user A (210) may perform the zumba dance according to the tutorial by standing on the physical mat (140) laid in front of the internet of things based feedback device (20). The user A (210) may see himself or herself in the mirror (60) associated with the internet of things based feedback device (20). The activity monitoring module (100) may identify the first pattern corresponding to the one or more movements of the activity of the user A (210) based on the one or more images received from the image sensor associated with the internet of things based feedback device (20).
[0044] Further, the impression tracking module (110) may be configured to analyze the one or more depth data, the one or more positional data and the one or more pressure data corresponding to the one or more movements of the activity of the user captured by the image sensor, the position sensor and the pressure sensor respectively to identify the second pattern corresponding to the one or more movements of the activity of the user A (210). The mapping module (120) may generate the three dimensional model of the user A (210) by mapping the first pattern and the second pattern. The mapping module (120) may transmit the one or more feed signals to the internet of things based feedback device (20) to project the virtual model of the user A (210) in the mirror (60) to provide one or more visual feedbacks to the user A (210) regarding the activity being performed by the user A (210).
[0045] Further, the feedback module (130) may be configured to identify the one or more anomalies to generate the matching score. The one or more anomalies may be identified by comparing the way in which the user A (210) is performing the zumba dance with the prestored model representing an ideal way of performing the zumba dance. The feedback module (130) may generate the one or more feedbacks to the user A (210) to correct the posture of the user A (210) while performing the zumba dance when the matching score generated is below the predefined threshold. Consider a scenario in which the user A (210) may like to practice the zumba dance again. The voice assistant module (150) may be configured to enable the user A (210) to direct the internet of things based feedback device (20) to replay the tutorial of the zumba dance through the one or more voice commands. The communication module (160) may be configured to provide information regarding a trainer B to the user A (210) upon receiving one or more inputs from the user A (210) through the graphical user interface displayed on the display unit (40). The communication module (160) may establish the communication channel between the user A (210) and the trainer B for initiating a training session for the user A (210) by the trainer B.
[0046] FIG. 4 is a block diagram of a computer or a server (80) in accordance with an embodiment of the present disclosure. The server (80) includes processor(s) (220), and memory (230) operatively coupled to the bus (240). The processor(s) (220), as used herein, includes any type of computational circuit, such as, but not limited to, a microprocessor, a microcontroller, a complex instruction set computing microprocessor, a reduced instruction set computing microprocessor, a very long instruction word microprocessor, an explicitly parallel instruction computing microprocessor, a digital signal processor, or any other type of processing circuit, or a combination thereof.
[0047] The memory (230) includes several subsystems stored in the form of an executable program which instructs the processor to perform the method steps illustrated in FIG. 1. The memory (230) is substantially similar to the system of FIG. 1. The memory (230) has the following subsystems: the processing subsystem (70) including the activity monitoring module (100), the impression tracking module (110), the mapping module (120), the feedback module (130), the voice assistant module (150) and the communication module (160). The plurality of modules of the processing subsystem (70) performs the functions as stated in FIG. 1 and FIG. 2. The bus (240) as used herein refers to the internal memory channels or computer network that is used to connect computer components and transfer data between them. The bus (240) includes a serial bus or a parallel bus, wherein the serial bus transmits data in bit-serial format and the parallel bus transmits data across multiple wires. The bus (240) as used herein may include, but is not limited to, a system bus, an internal bus, an external bus, an expansion bus, a frontside bus, a backside bus, and the like.
[0048] The processing subsystem (70) includes an activity monitoring module (100) operatively coupled to the integrated database (30). The activity monitoring module (100) is configured to detect one or more regions corresponding to one or more body parts of the user from one or more images received from the image sensor. The activity monitoring module (100) is also configured to generate bounding boxes and segmentation masks for each of the one or more regions detected on the one or more images using a feature pyramid network technique. The activity monitoring module (100) is further configured to identify a first pattern corresponding to one or more movements of the activity of the user based on the bounding boxes and segmentation masks.
[0049] The processing subsystem (70) also includes an impression tracking module (110) operatively coupled to the activity monitoring module (100). The impression tracking module (110) is configured to analyze one or more depth data, one or more positional data and one or more pressure data corresponding to the one or more movements of the activity of the user captured by the image sensor, the position sensor and the pressure sensor respectively. The impression tracking module (110) is also configured to identify a second pattern corresponding to the one or more movements of the activity of the user based on the analysis. The processing subsystem (70) also includes a mapping module (120) operatively coupled to the impression tracking module (110). The mapping module (120) is configured to generate a virtual model of the user in a virtual space by mapping the first pattern and the second pattern. The one or more spatial dimensions of the virtual model in the virtual space are proportional to the one or more spatial dimensions of the user in a real space. The mapping module (120) is also configured to transmit one or more feed signals to the internet of things based feedback device (20) to project the virtual model of the user in the mirror (60) to provide the one or more visual feedbacks to the user. The processing subsystem (70) further includes a feedback module (130) operatively coupled to the mapping module (120). The feedback module (130) is configured to compare the one or more spatial dimensions of the virtual model with the one or more corresponding spatial dimensions of a prestored model.
[0050] The feedback module (130) is also configured to identify one or more anomalies to generate a matching score upon comparing. The feedback module (130) is further configured to generate one or more feedbacks to the user based on the one or more anomalies identified to correct a posture of the user performing the activity when the matching score generated is below a predefined threshold, thereby assisting the user for performing the activity via one or more feedbacks. The processing subsystem (70) also includes a voice assistant module (150) configured to enable the user to interact with the internet of things based feedback device (20) through one or more voice commands of the user. The processing subsystem (70) further includes a communication module (160) configured to provide information regarding one or more trainers through a graphical user interface displayed on the display unit (40) upon receiving one or more inputs from the user. The one or more inputs comprises at least one of a touch command, a gesture command, and a voice command. The communication module (160) is also configured to establish a communication channel between the one or more trainers and the user to initiate an interaction between the one or more trainers and the user upon selection of the one or more trainers by the user.
[0051] Computer memory elements may include any suitable memory device(s) for storing data and executable program, such as read only memory, random access memory, erasable programmable read only memory, electrically erasable programmable read only memory, hard drive, removable media drive for handling memory cards and the like. Embodiments of the present subject matter may be implemented in conjunction with program modules, including functions, procedures, data structures, and application programs, for performing tasks, or defining abstract data types or low-level hardware contexts. Executable program stored on any of the above-mentioned storage media may be executable by the processor(s) (220).
[0052] FIG. 5a, FIG. 5b and FIG. 5c are flow chart representations of the steps involved in a method (300) to assist a user to perform an activity through feedbacks in accordance with an embodiment of the present disclosure. The method (300) includes displaying a plurality of information to guide the user to perform an activity in step 310. In one embodiment, displaying a plurality of information to guide the user to perform an activity includes displaying a plurality of information to guide the user to perform an activity by a display unit of an internet of things based feedback device. In some embodiments, the activity may include, but not limited to, yoga, dance, physical exercise, meditation and the like. In one embodiment, the plurality of information may include, but not limited to, a recorded multimedia illustrating the activity, a streamed multimedia illustrating the activity, a live multimedia session and the like. In one embodiment, the display unit may include, but not limited to, a liquid crystal display, a light emitting diode display, a plasma display and the like.
[0053] The method (300) also includes capturing one or more movements of the user upon performing the activity in step 320. In one embodiment, capturing the one or more movements of the user upon performing the activity includes capturing the one or more movements of the user upon performing the activity by a plurality of sensors of the internet of things based feedback device. The plurality of sensors comprises at least one of an image sensor, a position sensor, and a pressure sensor. In one embodiment, the image sensor may include a camera adapted to capture the one or more images in a wavelength range corresponding to a color including at least one of red, green and blue. In some embodiments, the position sensor may include at least one of a time of flight sensor, an infrared sensor, a proximity sensor and a camera adapted to sense the one or more depth data. In such an embodiment, the position sensor and the pressure sensor may be mounted on a physical mat adapted to support the user when the user is performing the activity. The physical mat may be laid on a ground surface upon which the user performs the activity.
[0054] Further, in a specific embodiment, the plurality of sensors may be mounted on a wearable device. In such an embodiment, the wearable device may include one or more biometric sensors adapted to sense one or more vitals of the user. In such an embodiment, the internet of things based feedback device may be configured to receive a performance report from the wearable device regarding the performance of the user while performing the activity and make the performance report available to one or more persons. In one embodiment, the one or more persons may include a trainer, a physician, a coach, a physiotherapist and the like. In a specific embodiment, the internet of things based feedback device may be configured to establish a communication channel with the wearable device when the wearable device is located in a predefined range from the internet of things based feedback device.
[0055] Furthermore, in some embodiments, the one or more vitals may include, but not limited to, heartbeat, blood pressure, blood oxygen level and the like. In some embodiments, the internet of things based feedback device may include a microphone adapted to receive one or more voice commands from the user. In some embodiments, the internet of things based feedback device may include a speaker adapted to provide one or more audio feedbacks to the user. In a specific embodiment, the internet of things based feedback device may include a light sensor adapted to adjust brightness of the display unit upon sensing one or more ambient light levels. In one embodiment, the internet of things based feedback device may be enclosed by an enclosure to protect the internet of things based feedback device from an external environment. In some embodiments, the internet of things based feedback device may be mounted on a stand.
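The brightness behaviour of the light sensor mentioned above can be illustrated with a toy sketch. The linear mapping, the lux scale and the 0.2 floor are purely illustrative assumptions; the specification does not define a particular brightness curve.

```python
# Hypothetical sketch: display brightness tracks the sensed ambient light.
def adjust_brightness(ambient_lux, max_lux=500.0):
    """Return a display brightness fraction in [0.2, 1.0].

    Below-floor values are clamped so the display never goes fully dark;
    levels at or above max_lux map to full brightness.
    """
    level = min(ambient_lux, max_lux) / max_lux
    return max(0.2, level)
```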
[0056] The method (300) also includes providing one or more visual feedbacks to the user while performing the activity in step 330. In one embodiment, providing one or more visual feedbacks to the user while performing the activity includes providing one or more visual feedbacks to the user while performing the activity by a mirror of the internet of things based feedback device. In one embodiment, the mirror may include a two-way mirror.
[0057] The method (300) also includes detecting one or more regions corresponding to one or more body parts of the user from one or more images received from the image sensor in step 340. In one embodiment, detecting one or more regions corresponding to one or more body parts of the user from one or more images received from the image sensor includes detecting one or more regions corresponding to one or more body parts of the user from one or more images received from the image sensor by an activity monitoring module coupled to an integrated database. In a specific embodiment, the integrated database may include, but not limited to, a SQL based database, non-SQL based database, object-oriented database, hierarchical database, columnar database and the like. In one embodiment, the one or more regions may include regions around one or more body joints of the user.
[0058] The method (300) also includes generating bounding boxes and segmentation masks for each of the one or more regions detected on the one or more images using a feature pyramid network technique in step 350. In one embodiment, generating bounding boxes and segmentation masks for each of the one or more regions detected on the one or more images using a feature pyramid network technique includes generating bounding boxes and segmentation masks for each of the one or more regions detected on the one or more images using a feature pyramid network technique by the activity monitoring module. In one embodiment, the bounding boxes may include a surrounding sphere (SS), an axis-aligned bounding box (AABB), an oriented bounding box (OBB), a fixed-direction hull (FDH), and a convex hull (CH). In some embodiments, the segmentation masks may be generated using techniques including, but not limited to, threshold based segmentation, edge based segmentation, region-based segmentation, clustering based segmentation and artificial neural network based segmentation.
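The relationship between a segmentation mask and its axis-aligned bounding box (AABB) in step 350 can be sketched minimally as below. The feature pyramid network that would actually produce the mask is out of scope here; the hand-made binary mask and the function name are illustrative assumptions.

```python
# Hypothetical sketch: derive the AABB of one detected body region from
# its binary segmentation mask (a list of rows of 0/1 values).
def mask_to_aabb(mask):
    """Return (row_min, col_min, row_max, col_max) enclosing all 1-pixels."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    return (rows[0], cols[0], rows[-1], cols[-1])
```

In the actual system such boxes would be produced per body region and fed onward to first-pattern identification in step 360.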
[0059] The method (300) also includes identifying a first pattern corresponding to one or more movements of the activity of the user based on bounding boxes and segmentation masks in step 360. In one embodiment, identifying a first pattern corresponding to one or more movements of the activity of the user based on bounding boxes and segmentation masks includes identifying a first pattern corresponding to one or more movements of the activity of the user based on bounding boxes and segmentation masks by the activity monitoring module. In one embodiment, the first pattern may include a body silhouette of the user.
[0060] The method (300) also includes analyzing one or more depth data, one or more positional data and one or more pressure data corresponding to the one or more movements of the activity of the user captured by the image sensor, the position sensor and the pressure sensor respectively in step 370. In one embodiment, analyzing one or more depth data, one or more positional data and one or more pressure data corresponding to the one or more movements of the activity of the user captured by the image sensor, the position sensor and the pressure sensor respectively includes analyzing one or more depth data, one or more positional data and one or more pressure data corresponding to the one or more movements of the activity of the user captured by the image sensor, the position sensor and the pressure sensor respectively by an impression tracking module.
[0061] The method (300) also includes identifying a second pattern corresponding to the one or more movements of the activity of the user based on the analysis in step 380. In one embodiment, identifying a second pattern corresponding to the one or more movements of the activity of the user based on the analysis includes identifying a second pattern corresponding to the one or more movements of the activity of the user based on the analysis by the impression tracking module.
[0062] The method (300) also includes generating a virtual model of the user in a virtual space by mapping the first pattern and the second pattern in step 390. In one embodiment, generating a virtual model of the user in a virtual space by mapping the first pattern and the second pattern includes generating a virtual model of the user in a virtual space by mapping the first pattern and the second pattern by a mapping module. The one or more spatial dimensions of the virtual model in the virtual space are proportional to the one or more spatial dimensions of the user in a real space. In one embodiment, the virtual model may be a multidimensional model including a three dimensional model, a two dimensional model and the like. In one embodiment, the mapping module may be configured to assign different weights to the first pattern and the second pattern based on one or more ambient conditions.
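The proportionality constraint in step 390 amounts to a uniform scaling from real space into virtual space, which can be sketched as follows. The point representation and the scale value are illustrative assumptions.

```python
# Hypothetical sketch of the real-space to virtual-space mapping: every
# coordinate is multiplied by one common scale factor, so the virtual
# model keeps the user's body proportions.
def to_virtual_space(real_points, scale=0.5):
    """Map real-space (x, y, z) points into the virtual space uniformly."""
    return [(x * scale, y * scale, z * scale) for x, y, z in real_points]
```

Because a single factor is applied to every axis, any ratio between two of the user's dimensions (for example height to shoulder width) is unchanged in the virtual model.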
[0063] The method (300) also includes transmitting one or more feed signals to the internet of things based feedback device to project the virtual model of the user in the mirror to provide the one or more visual feedbacks to the user in step 400. In one embodiment, transmitting one or more feed signals to the internet of things based feedback device to project the virtual model of the user in the mirror to provide the one or more visual feedbacks to the user includes transmitting one or more feed signals to the internet of things based feedback device to project the virtual model of the user in the mirror to provide the one or more visual feedbacks to the user by the mapping module.
[0064] The method (300) also includes comparing the one or more spatial dimensions of the virtual model with the one or more corresponding spatial dimensions of a prestored model in step 410. In one embodiment, comparing the one or more spatial dimensions of the virtual model with the one or more corresponding spatial dimensions of a prestored model includes comparing the one or more spatial dimensions of the virtual model with the one or more corresponding spatial dimensions of a prestored model by a feedback module. In one embodiment, the prestored model may include the virtual model mapped to one or more human postures while performing the corresponding activity.
[0065] The method (300) also includes identifying one or more anomalies to generate a matching score upon comparing in step 420. In one embodiment, identifying one or more anomalies to generate a matching score upon comparing includes identifying one or more anomalies to generate a matching score upon comparing by the feedback module.
[0066] The method (300) further includes generating one or more feedbacks to the user based on the one or more anomalies identified to correct a posture of the user performing the activity when the matching score generated is below a predefined threshold in step 430. In one embodiment, generating one or more feedbacks to the user based on the one or more anomalies identified to correct a posture of the user performing the activity when the matching score generated is below a predefined threshold includes generating one or more feedbacks to the user based on the one or more anomalies identified to correct a posture of the user performing the activity when the matching score generated is below a predefined threshold by the feedback module.
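The threshold-gated feedback of step 430 can be sketched as below. The joint-name table, message wording and 0.8 threshold are illustrative assumptions; the specification only requires that feedbacks be generated when the matching score is below a predefined threshold.

```python
# Hypothetical sketch of step 430: corrective messages are produced only
# when the matching score falls below the predefined threshold.
JOINT_NAMES = {0: "left elbow", 1: "right knee"}

def generate_feedback(score, anomalies, threshold=0.8):
    """Return a list of corrective messages, one per anomalous joint;
    an empty list means the posture is acceptable."""
    if score >= threshold:
        return []
    return [f"Adjust your {JOINT_NAMES.get(i, f'joint {i}')}" for i in anomalies]
```

A real system would render these messages visually in the mirror or speak them through the device's speaker, per the embodiments described earlier.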
[0067] In one embodiment, the one or more feedbacks may be generated in real-time, offline mode or based on historical data available on the cloud. The system and method disclosed herein may also be implemented to enhance the digital lifestyle of the user. For instance, a user profile may be stored in the system. Subsequently, a customized interface or an option for interaction may be provided based on the user profile such as, browsing of digital content, viewing of emails, scheduling of meetings and interactive virtual meetings.
[0068] Various embodiments of the interactive system and method to assist a user to perform an activity through feedbacks described above enable various advantages. The mapping module is capable of generating the virtual model of the user and projecting the same on the mirror by sending one or more feed signals to the internet of things based feedback device for providing the one or more visual feedbacks to the user. The feedback module is capable of providing the one or more feedbacks to the user to correct the posture of the user while performing the activity. Provision of the plurality of sensors aids the impression tracking module and the activity monitoring module to identify the first pattern and the second pattern, thereby estimating the one or more physical activities of the user X accurately.
[0069] Additionally, the possibility of mounting the plurality of sensors on the physical mat (140) and the wearable device provides reliability to the interactive system along with ensuring freedom of movement to the user. Provision of the voice assistant module enables the user to communicate with the internet of things based feedback device, thereby enhancing operability of the interactive system. Further, the communication module is capable of establishing the communication channel between the user and the one or more trainers to enable the user X to get trained by the one or more trainers selected by the user X. Provision of the light sensor associated with the display unit may adjust brightness of the display unit according to the ambient light conditions, thereby enhancing viewability of the display unit.
[0070] Also, the system is more reliable and more efficient because the one or more devices registered on the integrated platform always remain connected and reachable to the user. Further, from a technical effect point of view, the implementation time required to perform the method steps included in the present disclosure by the one or more processors of the system is minimal, whereby the system maintains minimal operational latency and requires minimal processing resources.
[0071] It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the disclosure and are not intended to be restrictive thereof. While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended.
[0072] The figures and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, the order of processes described herein may be changed and is not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all the acts need to be necessarily performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples.

CLAIMS:

1. An interactive system (10) to assist a user for performing an activity through one or more feedbacks comprising:
an internet of things based feedback device (20) located in proximity of the user and operatively coupled to an integrated database (30), wherein the internet of things based feedback device (20) comprises:
a display unit (40) adapted to display information to guide the user to perform an activity;
a plurality of sensors (50) positioned adjacent to the display unit (40), wherein the plurality of sensors (50) are adapted to capture one or more movements of the user upon performing the activity, wherein the plurality of sensors (50) comprises at least one of an image sensor, a position sensor, and a pressure sensor;
a mirror (60) positioned adjacently to the plurality of sensors (50), wherein the mirror (60) is adapted to provide one or more visual feedbacks to the user while performing the activity;
a processing subsystem (70) operatively coupled to the internet of things based feedback device (20), wherein the processing subsystem (70) is hosted on a server (80) and configured to execute on a network (90) to control bidirectional communications among a plurality of modules comprising:
an activity monitoring module (100) operatively coupled to the integrated database (30), wherein the activity monitoring module (100) is configured to:
detect one or more regions corresponding to one or more body parts of the user from one or more images received from the image sensor;
generate bounding boxes and segmentation masks for each of the one or more regions detected on the one or more images using a feature pyramid network technique;
identify a first pattern corresponding to one or more movements of the activity of the user based on bounding boxes and segmentation masks;
an impression tracking module (110) operatively coupled to the activity monitoring module (100), wherein the impression tracking module (110) is configured to:
analyze one or more depth data, one or more positional data and one or more pressure data corresponding to the one or more movements of the activity of the user captured by the image sensor, the position sensor and the pressure sensor respectively;
identify a second pattern corresponding to the one or more movements of the activity of the user based on the analysis;
a mapping module (120) operatively coupled to the impression tracking module (110), wherein the mapping module (120) is configured to:
generate a virtual model of the user in a virtual space by mapping the first pattern and the second pattern, wherein the one or more spatial dimensions of the virtual model in the virtual space are proportional to the one or more spatial dimensions of the user in a real space;
transmit one or more feed signals to the internet of things based feedback device (20) to project the virtual model of the user in the mirror (60) to provide the one or more visual feedbacks to the user;
a feedback module (130) operatively coupled to the mapping module (120) wherein the feedback module (130) is configured to:
compare the one or more spatial dimensions of the virtual model with the one or more corresponding spatial dimensions of a prestored model;
identify one or more anomalies to generate a matching score upon comparing; and
generate one or more feedbacks to the user based on the one or more anomalies identified to correct a posture of the user performing the activity when the matching score generated is below a predefined threshold, thereby assisting the user for performing the activity via the one or more feedbacks.
2. The interactive system (10) as claimed in claim 1, wherein the image sensor comprises a camera adapted to capture the one or more images in a wavelength range corresponding to a color comprising at least one of red, green and blue.
3. The interactive system (10) as claimed in claim 1, wherein the position sensor comprises at least one of a time of flight sensor, an infrared sensor, a proximity sensor, and a camera adapted to sense the one or more depth data.
4. The interactive system (10) as claimed in claim 1, wherein the position sensor and the pressure sensor are mounted on a physical mat (140) adapted to support the user when the user is performing the activity.
5. The interactive system (10) as claimed in claim 1, wherein the plurality of sensors (50) are mounted on a wearable device worn by the user, wherein the wearable device comprises one or more biometric sensors adapted to sense one or more vitals of the user.
6. The interactive system (10) as claimed in claim 1, wherein the mirror (60) comprises a two-way mirror.
7. The interactive system (10) as claimed in claim 1, wherein the internet of things based feedback device (20) comprises:
a microphone adapted to receive one or more voice commands from the user;
a speaker adapted to provide one or more audio feedbacks to the user corresponding to the one or more feedbacks generated by the feedback module (130); and
a light sensor adapted to adjust brightness of the display unit (40) upon sensing one or more ambient light levels.
8. The interactive system (10) as claimed in claim 1, wherein the processing subsystem (70) comprises a voice assistant module (150) configured to enable the user to interact with the internet of things based feedback device (20) through one or more voice commands of the user.
9. The interactive system (10) as claimed in claim 1, wherein the processing subsystem (70) comprises a communication module (160) configured to:
provide information regarding one or more trainers through a graphical user interface displayed on the display unit (40) upon receiving one or more inputs from the user, wherein the one or more inputs comprises at least one of a touch command, a gesture command, and a voice command; and
establish a communication channel between the one or more trainers and the user to initiate an interaction between the one or more trainers and the user upon selection of the one or more trainers by the user.
10. A method (300) comprising:
displaying, by a display unit of an internet of things based feedback device, a plurality of information to guide a user to perform an activity; (310)
capturing, by a plurality of sensors of the internet of things based feedback device, one or more movements of the user upon performing the activity, wherein the plurality of sensors comprises at least one of an image sensor, a position sensor, and a pressure sensor; (320)
providing, by a mirror of the internet of things based feedback device, one or more visual feedbacks to the user while performing the activity; (330)
detecting, by an activity monitoring module coupled to an integrated database, one or more regions corresponding to one or more body parts of the user from one or more images received from the image sensor; (340)
generating, by the activity monitoring module, bounding boxes and segmentation masks for each of the one or more regions detected on the one or more images using a feature pyramid network technique; (350)
identifying, by the activity monitoring module, a first pattern corresponding to one or more movements of the activity of the user based on bounding boxes and segmentation masks; (360)
analyzing, by an impression tracking module, one or more depth data, one or more positional data and one or more pressure data corresponding to the one or more movements of the activity of the user captured by the image sensor, the position sensor and the pressure sensor respectively; (370)
identifying, by the impression tracking module, a second pattern corresponding to the one or more movements of the activity of the user based on the analysis; (380)
generating, by a mapping module, a virtual model of the user in a virtual space by mapping the first pattern and the second pattern, wherein the one or more spatial dimensions of the virtual model in the virtual space are proportional to the one or more spatial dimensions of the user in a real space; (390)
transmitting, by the mapping module, one or more feed signals to the internet of things based feedback device to project the virtual model of the user in the mirror to provide the one or more visual feedbacks to the user; (400)
comparing, by a feedback module, the one or more spatial dimensions of the virtual model with the one or more corresponding spatial dimensions of a prestored model; (410)
identifying, by the feedback module, one or more anomalies to generate a matching score upon comparing; (420) and
generating, by the feedback module, one or more feedbacks to the user based on the one or more anomalies identified to correct a posture of the user performing the activity when the matching score generated is below a predefined threshold, thereby assisting the user in performing the activity via the one or more feedbacks. (430)
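Step (390) above requires the virtual model's spatial dimensions to be proportional to the user's real-space dimensions. A minimal sketch of that mapping follows; the function name, the pattern representation as per-body-part dimension estimates, the averaging fusion of the two patterns, and the scale factor are all assumptions for illustration, not the filed implementation.

```python
# Illustrative sketch of step (390): fuse the image-derived first pattern
# with the sensor-derived second pattern, then scale each dimension so the
# virtual model remains proportional to the user's real-space dimensions.
# All names, the fusion rule, and `scale` are assumptions, not from the claims.

def build_virtual_model(first_pattern, second_pattern, scale=0.5):
    """Map two patterns of per-part dimension estimates into a virtual model
    whose spatial dimensions are `scale` times the real-space dimensions."""
    model = {}
    for part, image_estimate in first_pattern.items():
        # Fall back to the image estimate when a part has no sensor reading.
        sensor_estimate = second_pattern.get(part, image_estimate)
        fused = (image_estimate + sensor_estimate) / 2.0  # simple fusion
        model[part] = fused * scale  # proportionality to real space
    return model
```

Because every dimension is multiplied by the same factor, ratios between body parts in the virtual model match those of the user, which is what the proportionality limitation in steps (390) and claim 1 describes.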

Dated this 28th day of October 2022

Signature

Jinsu Abraham
Patent Agent (IN/PA-3267)
Agent for the Applicant

Documents

Application Documents

# Name Date
1 202141049652-STATEMENT OF UNDERTAKING (FORM 3) [29-10-2021(online)].pdf 2021-10-29
2 202141049652-PROVISIONAL SPECIFICATION [29-10-2021(online)].pdf 2021-10-29
3 202141049652-POWER OF AUTHORITY [29-10-2021(online)].pdf 2021-10-29
4 202141049652-FORM FOR STARTUP [29-10-2021(online)].pdf 2021-10-29
5 202141049652-FORM FOR SMALL ENTITY(FORM-28) [29-10-2021(online)].pdf 2021-10-29
6 202141049652-FORM 1 [29-10-2021(online)].pdf 2021-10-29
7 202141049652-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [29-10-2021(online)].pdf 2021-10-29
8 202141049652-EVIDENCE FOR REGISTRATION UNDER SSI [29-10-2021(online)].pdf 2021-10-29
9 202141049652-DRAWINGS [29-10-2021(online)].pdf 2021-10-29
10 202141049652-DRAWING [28-10-2022(online)].pdf 2022-10-28
11 202141049652-CORRESPONDENCE-OTHERS [28-10-2022(online)].pdf 2022-10-28
12 202141049652-COMPLETE SPECIFICATION [28-10-2022(online)].pdf 2022-10-28
13 202141049652-STARTUP [29-10-2025(online)].pdf 2025-10-29
14 202141049652-FORM28 [29-10-2025(online)].pdf 2025-10-29
15 202141049652-FORM-26 [29-10-2025(online)].pdf 2025-10-29
16 202141049652-FORM 18A [29-10-2025(online)].pdf 2025-10-29