Abstract: METHODS AND SYSTEMS FOR REAL TIME VIDEO DRIVEN HUMAN 3-D POSTURE ESTIMATION
The disclosure relates generally to methods and systems for real time video driven human 3-dimensional (3-D) posture estimation during physical activities. Conventional techniques that do not exploit temporal information do not give a smooth transition of postures over time, while the techniques that do exploit the temporal information suffer from higher time requirements due to two-stage computations. The present disclosure solves these technical problems with methods and systems for real time video driven human 3-D posture estimation during physical activities. The present invention discloses a smart-phone camera based automatic posture monitoring system designed with an auto-encoder based architecture. The disclosed auto-encoder based cross-modal method uses monocular video (2-D image sequences) from a single low-end mobile device (for example, a smart-phone camera) for estimating human 3-D posture in real time (∼5 fps) with high accuracy (less than 1 cm error per joint location). [To be published with FIG. 3]
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION (See Section 10 and Rule 13)
Title of invention:
METHODS AND SYSTEMS FOR REAL TIME VIDEO DRIVEN HUMAN 3-D
POSTURE ESTIMATION
Applicant
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India
Preamble to the description:
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
[001] The disclosure herein generally relates to the field of posture estimation, and more specifically to methods and systems for real time video driven human 3-dimensional (3-D) posture estimation during physical activities.
BACKGROUND
[002] The health benefits that a subject, i.e., a human, derives from proper physical activities such as exercises, for example, gym workouts and yoga, are enormous. While doing such exercises, maintaining a proper body posture is indispensable. However, neglecting body posture can result in various disorders such as back pain, spinal dysfunction, and joint or muscle degeneration. Researchers have found that chronic poor posture is a leading cause of musculoskeletal disorder and spinal injury. Generally, a trainer monitors the body postures of the exercising trainees. With the advancement of Machine Learning (ML) and Deep Learning (DL) technology, especially Generative Artificial Intelligence (GenAI), 3-dimensional (3-D) body posture monitoring is automated using personal mobile devices such as smart phones.
[003] Estimating 3-D posture from 2-dimensional (2-D) images is not straightforward. Conventional Computer Vision (CV) based 3-D posture monitoring techniques require a depth sensor along with an RGB camera, while stereovision-based techniques require multiple cameras that can also estimate the depth. Recent works have focused on recovering the entire 3-D shape including postures in parametric form, such as skinned multi-person linear (SMPL) models, which utilize heat maps or spectral domains. However, these parametric forms are less accurate for posture estimation compared to skeleton-based approaches and, moreover, require more memory and processing power due to their larger parameter size. There are techniques that estimate the 3-D posture of the human from a static image; since these techniques do not exploit temporal information, they do not give a smooth transition of postures over time. Furthermore, the techniques that exploit the temporal information suffer from higher time requirements due to two-stage computations.
SUMMARY
[004] Embodiments of the present disclosure present technological
improvements as solutions to one or more of the above-mentioned technical
problems recognized by the inventors in conventional systems.
5 [005] In an aspect, a processor-implemented method for real time video
driven human 3-dimensional (3-D) posture estimation is provided. The method includes the steps of: receiving one or more training datasets each comprising a plurality of training videos, wherein each of the plurality of training videos comprises one or more training video clips, wherein each of the one or more training
10 video clips comprises a plurality of training video frames and a 3-D posture
annotated to each of the plurality of training video frames; training a neural network model comprising an autoencoder network and a second encoder, with the one or more training datasets, to obtain a trained neural network model, wherein the autoencoder network comprises a first encoder and a first decoder, and wherein the
15 trained neural network model comprises a trained autoencoder network and a
trained second encoder, and the autoencoder network and the second encoder are trained sequentially, and wherein training the autoencoder network with the one or more training datasets comprises: (a) passing 3-D postures annotated to the plurality of training video frames present in each training video clip at a time, of the one or
20 more training video clips present in each of the one or more training datasets, to the
first encoder, to obtain a latent space vector associated to the 3-D postures associated to each training video clip; (b) passing the latent space vector associated to the 3-D postures associated to each training video clip, to the first decoder, to obtain reconstructed 3-D postures of the associated training video clip; (c)
25 calculating a value of a loss function of the autoencoder network, using (i) the 3-D
postures of each training video clip and (ii) the reconstructed 3-D postures of the associated training video clip, wherein the loss function of the autoencoder network is a summation of an autoencoder reconstruction loss and a bone length consistency loss; (d) updating one or more autoencoder network parameters of the autoencoder
30 network based on the value of the loss function of the autoencoder network; and (e)
repeating the steps (a) through (d) until the value of the loss function of the
autoencoder network is less than a first predefined threshold value, to obtain a
trained autoencoder network; and training the second encoder with the one or more
training datasets comprises: (f) passing the 3-D postures annotated to each training
video clip at a time, of the one or more training video clips present in each of the
5 one or more training datasets, to the first encoder of the trained autoencoder
network, to obtain a first latent space vector of the associated training video clip; (g) passing the plurality of training video frames present in the associated training video clip, to the second encoder, to obtain a second latent space vector of the associated training video clip; (h) calculating a value of the loss function of the
10 second encoder, using (i) the first latent space vector of the associated training video
clip and (ii) the second latent space vector of the associated training video clip, wherein the loss function of the second encoder is an encoder reconstruction loss; (i) updating one or more second encoder network parameters of the second encoder based on the value of the loss function of the second encoder; and (j) repeating the
15 steps (f) through (i) until the value of the loss function of the second encoder is less
than a second predefined threshold value, to obtain a trained second encoder; receiving in real-time a test video of the human while performing a physical activity, through an acquisition device; dividing the test video into one or more test video clips based on presence of the human in the associated one or more test video
20 clips, using a human detection technique; passing each of the one or more test video
clips to the trained second encoder of the trained neural network model, to obtain a latent space vector for each of the one or more test video clips; and passing the latent space vector of each of the one or more test video clips, to the first decoder of the trained autoencoder network of the trained neural network model, to estimate
25 the 3-D posture of the human present in each of the one or more test video clips.
[006] In another aspect, a system for real time video driven human 3-dimensional (3-D) posture estimation is provided. The system includes: a memory storing instructions; one or more Input/Output (I/O) interfaces; and one or more hardware processors coupled to the memory via the one or more I/O interfaces,
30 wherein the one or more hardware processors are configured by the instructions to:
receive one or more training datasets each comprising a plurality of training videos,
wherein each of the plurality of training videos comprises one or more training
video clips, wherein each of the one or more training video clips comprises a
plurality of training video frames and a 3-D posture annotated to each of the
plurality of training video frames; train a neural network model comprising an
5 autoencoder network and a second encoder, with the one or more training datasets,
to obtain a trained neural network model, wherein the autoencoder network comprises a first encoder and a first decoder, and wherein the trained neural network model comprises a trained autoencoder network and a trained second encoder, and the autoencoder network and the second encoder are trained
10 sequentially, and wherein training the autoencoder network with the one or more
training datasets comprises: (a) passing 3-D postures annotated to the plurality of training video frames present in each training video clip at a time, of the one or more training video clips present in each of the one or more training datasets, to the first encoder, to obtain a latent space vector associated to the 3-D postures
15 associated to each training video clip; (b) passing the latent space vector associated
to the 3-D postures associated to each training video clip, to the first decoder, to obtain reconstructed 3-D postures of the associated training video clip; (c) calculating a value of a loss function of the autoencoder network, using (i) the 3-D postures of each training video clip and (ii) the reconstructed 3-D postures of the
20 associated training video clip, wherein the loss function of the autoencoder network
is a summation of an autoencoder reconstruction loss and a bone length consistency loss; (d) updating one or more autoencoder network parameters of the autoencoder network based on the value of the loss function of the autoencoder network; and (e) repeating the steps (a) through (d) until the value of the loss function of the
25 autoencoder network is less than a first predefined threshold value, to obtain a
trained autoencoder network; and training the second encoder with the one or more training datasets comprises: (f) passing the 3-D postures annotated to each training video clip at a time, of the one or more training video clips present in each of the one or more training datasets, to the first encoder of the trained autoencoder
30 network, to obtain a first latent space vector of the associated training video clip;
(g) passing the plurality of training video frames present in the associated training
video clip, to the second encoder, to obtain a second latent space vector of the
associated training video clip; (h) calculating a value of the loss function of the
second encoder, using (i) the first latent space vector of the associated training video
clip and (ii) the second latent space vector of the associated training video clip,
5 wherein the loss function of the second encoder is an encoder reconstruction loss;
(i) updating one or more second encoder network parameters of the second encoder based on the value of the loss function of the second encoder; and (j) repeating the steps (f) through (i) until the value of the loss function of the second encoder is less than a second predefined threshold value, to obtain a trained second encoder;
10 receive in real-time a test video of the human while performing a physical activity,
through an acquisition device; divide the test video into one or more test video clips based on presence of the human in the associated one or more test video clips, using a human detection technique; pass each of the one or more test video clips to the trained second encoder of the trained neural network model, to obtain a latent space
15 vector for each of the one or more test video clips; and pass the latent space vector
of each of the one or more test video clips, to the first decoder of the trained autoencoder network of the trained neural network model, to estimate the 3-D posture of the human present in each of the one or more test video clips.
[007] In yet another aspect, there is provided a computer program product
20 comprising a non-transitory computer readable medium having a computer
readable program embodied therein, wherein the computer readable program, when executed on a computing device, causes the computing device to: receive one or more training datasets each comprising a plurality of training videos, wherein each of the plurality of training videos comprises one or more training video clips,
25 wherein each of the one or more training video clips comprises a plurality of
training video frames and a 3-D posture annotated to each of the plurality of training video frames; train a neural network model comprising an autoencoder network and a second encoder, with the one or more training datasets, to obtain a trained neural network model, wherein the autoencoder network comprises a first encoder and a
30 first decoder, and wherein the trained neural network model comprises a trained
autoencoder network and a trained second encoder, and the autoencoder network
and the second encoder are trained sequentially, and wherein training the
autoencoder network with the one or more training datasets comprises: (a) passing
3-D postures annotated to the plurality of training video frames present in each
training video clip at a time, of the one or more training video clips present in each
5 of the one or more training datasets, to the first encoder, to obtain a latent space
vector associated to the 3-D postures associated to each training video clip; (b) passing the latent space vector associated to the 3-D postures associated to each training video clip, to the first decoder, to obtain reconstructed 3-D postures of the associated training video clip; (c) calculating a value of a loss function of the
10 autoencoder network, using (i) the 3-D postures of each training video clip and (ii)
the reconstructed 3-D postures of the associated training video clip, wherein the loss function of the autoencoder network is a summation of an autoencoder reconstruction loss and a bone length consistency loss; (d) updating one or more autoencoder network parameters of the autoencoder network based on the value of
15 the loss function of the autoencoder network; and (e) repeating the steps (a) through
(d) until the value of the loss function of the autoencoder network is less than a first predefined threshold value, to obtain a trained autoencoder network; and training the second encoder with the one or more training datasets comprises: (f) passing the 3-D postures annotated to each training video clip at a time, of the one or more
20 training video clips present in each of the one or more training datasets, to the first
encoder of the trained autoencoder network, to obtain a first latent space vector of the associated training video clip; (g) passing the plurality of training video frames present in the associated training video clip, to the second encoder, to obtain a second latent space vector of the associated training video clip; (h) calculating a
25 value of the loss function of the second encoder, using (i) the first latent space
vector of the associated training video clip and (ii) the second latent space vector of the associated training video clip, wherein the loss function of the second encoder is an encoder reconstruction loss; (i) updating one or more second encoder network parameters of the second encoder based on the value of the loss function of the
30 second encoder; and (j) repeating the steps (f) through (i) until the value of the loss
function of the second encoder is less than a second predefined threshold value, to
obtain a trained second encoder; receive in real-time a test video of the human while
performing a physical activity, through an acquisition device; divide the test video
into one or more test video clips based on presence of the human in the associated
one or more test video clips, using a human detection technique; pass each of the
5 one or more test video clips to the trained second encoder of the trained neural
network model, to obtain a latent space vector for each of the one or more test video clips; and pass the latent space vector of each of the one or more test video clips, to the first decoder of the trained autoencoder network of the trained neural network model, to estimate the 3-D posture of the human present in each of the one or more
10 test video clips.
[008] In an embodiment, each of (i) the first encoder of the autoencoder network, (ii) the first decoder of the autoencoder network, and (iii) the second encoder, comprises four residual network (ResNet) style blocks surrounded by one or more associated skip connections.
15 [009] In an embodiment, each block of the four ResNet style blocks
present in the first encoder of the autoencoder network and the second encoder, comprises two convolution layers each followed by a batch normalization layer, a rectified linear unit (ReLU) activation function layer, and a dropout layer.
[010] In an embodiment, each block of the four ResNet style blocks
20 present in the first decoder comprises two deconvolution layers each followed by a
batch normalization layer, a rectified linear unit (ReLU) activation function layer, and a dropout layer.
[011] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not
25 restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[012] The accompanying drawings, which are incorporated in and
constitute a part of this disclosure, illustrate exemplary embodiments and, together
30 with the description, serve to explain the disclosed principles:
[013] FIG. 1 is an exemplary block diagram of a system for real time video driven human 3-dimensional (3-D) posture estimation, in accordance with some embodiments of the present disclosure.
[014] FIGS. 2A-2C illustrate exemplary flow diagrams of a processor-
5 implemented method for real time video driven human 3-dimensional (3-D) posture
estimation, using the system of FIG. 1, in accordance with some embodiments of
the present disclosure.
[015] FIG. 3 shows an architecture of the neural network model, in
accordance with some embodiments of the present disclosure.
10 [016] FIG. 4 shows an architecture of one ResNet type block with a skip
connection, in accordance with some embodiments of the present disclosure.
[017] FIG. 5 shows an exemplary setup for online posture analysis using the mobile device, in accordance with some embodiments of the present disclosure.
[018] FIG. 6 shows the predicted 3-D postures of the human at several
15 difficult conditions, by the trained neural network model of the present disclosure.
[019] FIG. 7 shows a performance comparison of the present disclosure with that of MediaPipe in human posture estimation.
DETAILED DESCRIPTION OF EMBODIMENTS
20 [020] Exemplary embodiments are described with reference to the
accompanying drawings. In the figures, the left-most digit(s) of a reference number
identifies the figure in which the reference number first appears. Wherever
convenient, the same reference numbers are used throughout the drawings to refer
to the same or like parts. While examples and features of disclosed principles are
25 described herein, modifications, adaptations, and other implementations are
possible without departing from the scope of the disclosed embodiments.
[021] Chronic poor posture during physical activities such as exercises is
a leading cause of musculoskeletal disorder and spinal injury. Generally, a trainer
monitors the body postures of the exercising trainees. With the advancement of
30 Machine Learning (ML) and Deep Learning (DL) technology, especially
Generative Artificial Intelligence (GenAI), 3-dimensional (3-D) body posture
monitoring is automated using personal mobile devices such as smart phones. The
automation should make proper posture monitoring easily reachable to normal
people who otherwise may not be able to afford a trainer. A good and realistic
automated system should have the following characteristics: (i) non-interference with the user(s)' normal activities, (ii) minimal requirements in terms of sensors and other resources, thus being cost effective, (iii) error within a practically tolerable range, and (iv) real time estimation. For estimating 3-D human posture, most conventional techniques need either a 3-D depth sensor, which is expensive, or multiple cameras for stereovision. Hence, a low-cost system for automated monitoring and analysis of posture is the need of the day.
[022] The present disclosure solves the technical problems in the art with the methods and systems for real time video driven human 3-D posture estimation during physical activities. The present invention discloses a smart-phone camera based automatic posture monitoring system designed with an auto-encoder based
15 architecture. The disclosed auto-encoder based cross-modal method uses
monocular video (2-D image sequences) from a single low-end mobile device (for example, smart-phone camera) for estimating human 3-D posture in real time (∼ 5 fps) with high accuracy (less than 1 cm error per joint location).
[023] Referring now to the drawings, and more particularly to FIG. 1
20 through FIG. 7, where similar reference characters denote corresponding features
consistently throughout the figures, there are shown preferred embodiments, and these embodiments are described in the context of the following exemplary systems and/or methods.
[024] FIG. 1 is an exemplary block diagram of a system 100 for real time
25 video driven human 3-dimensional (3-D) posture estimation, in accordance with
some embodiments of the present disclosure. In an embodiment, the system 100 includes or is otherwise in communication with one or more hardware processors 104, communication interface device(s) or input/output (I/O) interface(s) 106, and one or more data storage devices or memory 102 operatively coupled to the one or
30 more hardware processors 104. The one or more hardware processors 104, the
memory 102, and the I/O interface(s) 106 may be coupled to a system bus 108 or a similar mechanism.
[025] The I/O interface(s) 106 may include a variety of software and
hardware interfaces, for example, a web interface, a graphical user interface (GUI),
5 and the like. The I/O interface(s) 106 may include a variety of software and
hardware interfaces, for example, interfaces for peripheral device(s), such as a keyboard, a mouse, an external memory, a plurality of sensor devices, a printer and the like. Further, the I/O interface(s) 106 may enable system 100 to communicate with other devices, such as web servers and external databases.
10 [026] The I/O interface(s) 106 can facilitate multiple communications
within a wide variety of networks and protocol types, including wired networks, for example, local area network (LAN), cable, etc., and wireless networks, such as Wireless LAN (WLAN), cellular, or satellite. For the purpose, the I/O interface(s) 106 may include one or more ports for connecting a number of computing systems
15 with one another or to another server computer. Further, the I/O interface(s) 106
may include one or more ports for connecting a number of devices to one another or to another server.
[027] The one or more hardware processors 104 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal
20 processors, central processing units, state machines, logic circuitries, and/or any
devices that manipulate signals based on operational instructions. Among other capabilities, the one or more hardware processors 104 are configured to fetch and execute computer-readable instructions stored in the memory 102. In the context of the present disclosure, the expressions ‘processors’ and ‘hardware processors’ may
25 be used interchangeably. In an embodiment, the system 100 can be implemented in
a variety of computing systems, such as laptop computers, portable computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud and the like.
[028] The memory 102 may include any computer-readable medium
30 known in the art including, for example, volatile memory, such as static random-
access memory (SRAM) and dynamic random access memory (DRAM), and/or
non-volatile memory, such as read only memory (ROM), erasable programmable
ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an
embodiment, the memory 102 includes a plurality of modules 102a and a repository
102b for storing data processed, received, and generated by one or more of the
5 plurality of modules 102a. The plurality of modules 102a may include routines,
programs, objects, components, data structures, and so on, which perform particular tasks or implement particular abstract data types.
[029] The plurality of modules 102a may include programs or computer-readable instructions or coded instructions that supplement applications or
10 functions performed by the system 100. The plurality of modules 102a may also be
used as, signal processor(s), state machine(s), logic circuitries, and/or any other device or component that manipulates signals based on operational instructions. Further, the plurality of modules 102a can be used by hardware, by computer-readable instructions executed by the one or more hardware processors 104, or by
15 a combination thereof. In an embodiment, the plurality of modules 102a can include
various sub-modules (not shown in FIG. 1). Further, the memory 102 may include information pertaining to input(s)/output(s) of each step performed by the processor(s) 104 of the system 100 and methods of the present disclosure.
[030] The repository 102b may include a database or a data engine.
20 Further, the repository 102b amongst other things, may serve as a database or
includes a plurality of databases for storing the data that is processed, received, or generated as a result of the execution of the plurality of modules 102a. Although the repository 102b is shown internal to the system 100, it will be noted that, in alternate embodiments, the repository 102b can also be implemented external to the
25 system 100, where the repository 102b may be stored within an external database
(not shown in FIG. 1) communicatively coupled to the system 100. The data contained within such external databases may be periodically updated. For example, data may be added into the external database and/or existing data may be modified and/or non-useful data may be deleted from the external database. In one
30 example, the data may be stored in an external system, such as a Lightweight
Directory Access Protocol (LDAP) directory and a Relational Database
Management System (RDBMS). In another embodiment, the data stored in the repository 102b may be distributed between the system 100 and the external database.
[031] Referring to FIGS. 2A-2C, components and functionalities of the
5 system 100 are described in accordance with an example embodiment of the present
disclosure. For example, FIGS. 2A-2C illustrate exemplary flow diagrams of a processor-implemented method 200 for real time video driven human 3-dimensional (3-D) posture estimation, using the system 100 of FIG. 1, in accordance with some embodiments of the present disclosure. Although steps of
10 the method 200 including process steps, method steps, techniques or the like may
be described in a sequential order, such processes, methods, and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be
15 performed in any practical order. Further, some steps may be performed
simultaneously, or some steps may be performed alone or independently.
[032] At step 202 of method 200, the one or more hardware processors 104 of the system 100 are configured to receive one or more training datasets. Each training dataset of the one or more training datasets includes a plurality of training
20 videos. Each training video of the plurality of training videos includes one or more
training video clips. Further each of the one or more training video clips includes a plurality of training video frames and a 3-D posture annotated to each of the plurality of training video frames.
[033] In an embodiment, the number of the one or more training video
25 clips present in each training video may be fixed or may vary based on the length
of each training video. However, the number of the plurality of training video frames present in each training video clip is fixed. Each training video frame is a 2-dimensional image and represents the presence of the human performing the physical activity in a certain position and a corresponding 3-D posture is annotated
30 to each training video frame.
[034] There is a need to map a training image/training video frame displaying a human to the corresponding 3-D posture(s) of the human. To do so, a mathematical representation of human 3-D posture is required. A widely accepted way is a skeleton graph G consisting of a certain number of body joint locations J = {j_i, i = 1, …, n} and bones B = {b_k, k = 1, …, m} connecting the joints. Rather than mapping the images directly to the 3-D joint locations, it is easier to map the images to a latent space. This space acts like an interpreter between the image space and the 3-D posture space. This latent space feature (i) will be independent of any information present in the image (such as gender/identity/emotion of the human) other than the 3-D human posture, thus not contaminating the result with unnecessary information, (ii) will learn the relationship of the 3-D posture (alternatively called pose) at one instance in time to the 3-D postures at preceding and following time instances, and (iii) will be concise.
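By way of illustration only, the following minimal sketch (in Python, with an assumed, hypothetical joint ordering and bone list rather than the exact 17-joint convention of the disclosure) shows one convenient way to represent such a skeleton graph: joints as an n × 3 coordinate array and bones as joint-index pairs from which bone lengths can be derived.

```python
import numpy as np

# A 3-D posture as a skeleton graph G = (J, B): n joint locations and m bones.
# The joint ordering and bone list below are illustrative assumptions, not the
# exact 17-joint convention used by the disclosed system.
N_JOINTS = 17

# Joints J: an (n x 3) array of 3-D coordinates, one row per joint (placeholder values).
joints = np.zeros((N_JOINTS, 3), dtype=np.float32)

# Bones B: pairs of joint indices connecting the joints (subset shown).
bones = [(0, 1), (1, 2), (2, 3),   # e.g., pelvis -> right hip -> knee -> ankle
         (0, 4), (4, 5), (5, 6)]   # e.g., pelvis -> left hip -> knee -> ankle

def bone_lengths(joints: np.ndarray, bones) -> np.ndarray:
    """Length of each bone, used later by the bone length consistency loss."""
    return np.array([np.linalg.norm(joints[a] - joints[b]) for a, b in bones])
```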
[035] At step 204 of method 200, the one or more hardware processors
15 104 of the system 100 are configured to train a neural network model with the one or more training datasets received at step 202 of the method 200, to obtain a trained neural network model. The neural network model of the present disclosure includes an autoencoder (AE) network and a second encoder (E). The autoencoder (AE) network includes a first encoder and a first decoder.
20 [036] FIG. 3 shows an architecture of the neural network model, in
accordance with some embodiments of the present disclosure. The input to the AE network is a sequence of 3-D postures. The objective of the AE network is to learn a latent representation of each 3-D posture. The second encoder (E) maps a given video clip, i.e., an image sequence (the plurality of training video frames in each training video clip) displaying a human, to the corresponding latent feature vector (learnt by the AE network). Before training the AE network and the E, an Off-the-Shelf (OS) routine, Detectron, is utilized for detecting the human in the training video frames (images) of the training video clip. Once this routine detects a human, it sends the cropped human image to the E. Also, the corresponding 3-D postures are sent to the AE network for the training.
[037] Each of (i) the first encoder of the autoencoder (AE) network, (ii)
the first decoder of the autoencoder (AE) network, and (iii) the second encoder (E),
comprises four residual network (ResNet) style blocks surrounded by one or more
associated skip connections. FIG. 4 shows an architecture of one ResNet type block
5 with a skip connection, in accordance with some embodiments of the present
disclosure. Each block of the four ResNet style blocks present in the first encoder of the autoencoder (AE) network and the second encoder, includes two convolution layers each followed by a batch normalization layer, a rectified linear unit (ReLU) activation function layer, and a dropout layer. Similarly, each block of the four
10 ResNet style blocks present in the first decoder of the autoencoder (AE) network
comprises two deconvolution layers each followed by the batch normalization layer, the rectified linear unit (ReLU) activation function layer, and the dropout layer.
[038] As described above, the input to the AE network is the sequence of 3-D postures. The concatenated 3-D joint locations for n = 17 joints form 3 × 17 = 51 channels, and there are N = 243 input features (from N frames/postures) in the temporal dimension for each channel. Temporal 1-D convolution is applied with kernel size K, dilation factor D, and C output channels. Then B = 4 ResNet style blocks are applied, surrounded by the skip connections.
[039] In the ResNet style block, a dropout of 0.25 is applied. The kernel size K and the dilation factor D of the first convolution are 3 and K^B, respectively. For the second convolution of each block, K = 1 and D = 1. This strategy has several advantages. The first advantage is that the 1-D convolutional kernel with dilation factor D will have D - 1 zeros between two consecutive kernel elements. This sparse structure increases the receptive field size, which in turn ensures processing more information in a smaller number of operations. This makes the neural network model of the present disclosure faster.
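A minimal sketch of one such ResNet style block is given below. PyTorch is assumed purely for illustration (the disclosure does not prescribe a framework), and the channel width and the dilation value used in the example are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ResNetStyleBlock(nn.Module):
    """One ResNet style block as described above: two temporal 1-D convolutions,
    each followed by batch normalization, ReLU and dropout, with a skip connection.
    The channel width and the dilation value are illustrative assumptions."""

    def __init__(self, channels: int = 1024, kernel_size: int = 3,
                 dilation: int = 3, dropout: float = 0.25):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=1, dilation=1)
        self.bn2 = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()
        self.drop = nn.Dropout(dropout)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The first (dilated) convolution shrinks the temporal axis, so the skip
        # connection keeps only the matching central part of the input sequence.
        res = x
        y = self.drop(self.relu(self.bn1(self.conv1(x))))
        y = self.drop(self.relu(self.bn2(self.conv2(y))))
        trim = (res.shape[-1] - y.shape[-1]) // 2
        return res[..., trim:res.shape[-1] - trim] + y  # skip connection

# Example: the 51 input channels (17 joints x 3 coordinates) over N = 243 frames
# are first projected to C channels by an input convolution (not shown here).
x = torch.randn(1, 1024, 243)
print(ResNetStyleBlock()(x).shape)
```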
[040] The second advantage is that the 1-D convolution over 3-D coordinates drastically brings down the number of network parameters and thus the required resources. For example, if a spectral map of size, for example, 150 × 150 is considered, then the number of input nodes becomes 22,500 per map, whereas in our case the size is 17 × 3 × 243 = 12,393 for 17 × 3 = 51 input maps. The third advantage is the 1-D convolution over the temporal domain. Thus, the depth of the network is always fixed and not dependent on the size of the input video sequence. LSTMs and RNNs, which are generally used for processing sequences, are dependent on the size of the sequence and therefore suffer from vanishing gradients, whereas the method of the present disclosure does not.
[041] The input to the second encoder (E) is the plurality of training video frames present in each of the one or more training video clips of each of the plurality
10 of training videos in each of the one or more training datasets.
[042] The autoencoder (AE) network and the second encoder (E) are trained sequentially, i.e., the autoencoder (AE) network is trained first followed by the training of the second encoder (E). For training the autoencoder (AE) network, a reconstruction loss along with bone length consistency loss are employed. For
15 training the second encoder (E), only the reconstruction loss is employed.
[043] The training of the autoencoder (AE) network with the one or more training datasets is further explained through steps 204a1 through 204a5. At step 204a1, the 3-D postures annotated to the plurality of training video frames present in each training video clip are passed at a time (at each iteration, the number of the 3-D postures being the batch size), to the first encoder, to obtain a latent space vector associated to the 3-D postures of the corresponding training video clip. At step 204a2, the latent space vector obtained at step 204a1, associated to the 3-D postures of the corresponding training video clip, is passed to the first decoder to obtain the reconstructed 3-D postures of the associated training video clip.
[044] At step 204a3, a value of a loss function of the autoencoder (AE) network, is calculated using (i) the 3-D postures of each training video clip passed at step 204a1 and (ii) the reconstructed 3-D postures of the associated training video clip obtained at step 204a2. The loss function of the autoencoder (AE) network is a
30 summation of an autoencoder reconstruction loss and a bone length consistency
loss. The loss function (L_AE) of the autoencoder (AE) network is mathematically represented as equation 1:

L_AE = 1/(N × n) Σ_{f=1..N} Σ_{i=1..n} ||x_{f,i} − D(AE_L(x))_{f,i}||² + 1/(N × m) Σ_{f=1..N} Σ_{k=1..m} (b_{f,k}(x) − b_{f,k}(D(AE_L(x))))²    (1)
[045] The first part of equation 1 represents the autoencoder reconstruction loss and the second part of equation 1 represents the bone length consistency loss. In equation 1, N is the number of 3-D postures (images) annotated to the plurality of training video frames considered for each training video clip, n is the number of joints of the 3-D postures (of the human in each 3-D posture), m is the number of bones of the 3-D postures (of the human in each 3-D posture), x_{f,i} is the i-th joint location in the f-th annotated 3-D posture, b_{f,k}(·) denotes the length of the k-th bone in the f-th posture of its argument, D denotes the first decoder, and AE_L(x) is the latent space vector obtained at step 204a1 when the 3-D posture sequence (corresponding to the training video clip) is forwarded through the first encoder of the autoencoder (AE) network.
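The sketch below illustrates the loss of equation 1 as the summation of the reconstruction term and the bone length consistency term. PyTorch and the per-term normalization shown here are assumptions, and the bone list is a hypothetical convention.

```python
import torch

def autoencoder_loss(gt_poses, recon_poses, bones):
    """Loss of the AE network: reconstruction loss + bone length consistency loss.
    gt_poses, recon_poses: tensors of shape (N, n, 3) -- N postures, n joints.
    bones: list of (parent, child) joint-index pairs (illustrative convention)."""
    # Autoencoder reconstruction loss: squared joint position error, averaged
    # over postures and joints.
    recon_loss = torch.mean(torch.sum((gt_poses - recon_poses) ** 2, dim=-1))

    # Bone length consistency loss: bone lengths of the reconstruction should
    # match those of the annotated postures.
    a = torch.tensor([p for p, _ in bones])
    b = torch.tensor([c for _, c in bones])
    gt_len = torch.norm(gt_poses[:, a] - gt_poses[:, b], dim=-1)        # (N, m)
    rec_len = torch.norm(recon_poses[:, a] - recon_poses[:, b], dim=-1)
    bone_loss = torch.mean((gt_len - rec_len) ** 2)

    return recon_loss + bone_loss  # summation, as stated for equation (1)

# Example with random data: N = 243 postures, n = 17 joints.
gt = torch.randn(243, 17, 3)
rec = gt + 0.01 * torch.randn_like(gt)
print(autoencoder_loss(gt, rec, bones=[(0, 1), (1, 2), (2, 3)]))
```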
[046] At step 204a4, one or more autoencoder network parameters of the
15 autoencoder (AE) network are updated based on the value of the loss function of the autoencoder (AE) network obtained at step 204a3 through back propagation.
[047] At step 204a5, the steps (204a1) through (204a4) are repeated until the value of the loss function of the autoencoder (AE) network obtained at the current iteration is less than a first predefined threshold value, to obtain a trained
20 autoencoder network.
[048] The training of the second encoder (E) with the one or more training datasets is further explained through steps 204b1 through 204b5. At step 204b1, the 3-D postures annotated to each training video clip are passed at a time (at each iteration and the number of the 3-D postures being the batch size), to the first
25 encoder of the trained autoencoder network obtained at step 204a5, to obtain a first latent space vector of the associated training video clip. At step 204b2, the plurality of training video frames present in the associated training video clip (which is passed at step 204b1), are passed to the second encoder (E), to obtain a second latent space vector of the associated training video clip.
[049] At step 204b3, a value of the loss function of the second encoder (E) is calculated using (i) the first latent space vector of the associated training video clip obtained at step 204b1 and (ii) the second latent space vector of the associated training video clip obtained at step 204b2. The loss function of the second encoder (E) is an encoder reconstruction loss. The loss function (L_E) of the second encoder (E) is mathematically represented as equation 2:

L_E = 1/ℓ ||AE_L(x) − E(I)||²    (2)

[050] In equation 2, N is the number of 3-D postures (images) annotated to the plurality of training video frames considered for each training video clip passed at step 204b1, ℓ is the size of the second latent space vector obtained at step 204b2, AE_L(x) is the first latent space vector obtained at step 204b1 when the 3-D posture sequence (corresponding to the training video clip) is forwarded through the first encoder of the trained autoencoder (AE) network, and E(I) represents the output of the second encoder (E) for the central training video frame I (the video frame (image) at the central position) of the corresponding training video clip.
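The cross-modal training of the second encoder against the already-trained first encoder can be sketched as follows; PyTorch, the placeholder encoder modules, the latent size, and the tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Placeholder modules standing in for the trained first encoder (operating on
# 3-D posture sequences) and the second encoder (operating on image frames).
first_encoder = nn.Linear(243 * 17 * 3, 512)           # frozen after AE training
second_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(512))

def encoder_loss(posture_seq, central_frame):
    """Encoder reconstruction loss L_E: the second encoder's latent vector for the
    central frame should match the first encoder's latent vector for the clip."""
    with torch.no_grad():                               # first encoder is already trained
        target = first_encoder(posture_seq.flatten(1))  # first latent space vector
    pred = second_encoder(central_frame)                # second latent space vector
    l = pred.shape[-1]                                  # size of the latent vector
    return torch.sum((target - pred) ** 2) / l

# Example: one clip of 243 annotated postures and its central RGB frame.
poses = torch.randn(1, 243, 17, 3)
frame = torch.randn(1, 3, 224, 224)
loss = encoder_loss(poses, frame)
loss.backward()                                         # gradients flow only into the second encoder
```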
[051] At step 204b4, one or more second encoder network parameters of the second encoder (E) are updated based on the value of the loss function of the second encoder (E) obtained at step 204b3 through the back propagation.
[052] At step 204b5, the steps (204b1) through (204b4) are repeated until the value of the loss function of the second encoder (E) obtained at the current iteration is less than a second predefined threshold value, to obtain a trained second encoder. Once the autoencoder (AE) network and the second encoder (E) are trained, the trained neural network model comprising the trained autoencoder network and the trained second encoder is obtained.
[053] Once the trained neural network model is obtained, a test video is passed through the human detector to the trained second encoder. The output of the trained second encoder (the latent space vector) is passed as an input to the first decoder of the trained autoencoder network. The first decoder of the trained
30 autoencoder network predicts the 3-D posture corresponding to the human in the
input test video frame (image) of the test video. The evaluation path is shown in FIG. 3 for predicting the 3-D posture. The details are further explained through steps 206 to 212 of the method 200.
[054] At step 206 of method 200, the one or more hardware processors
5 104 of the system 100 are configured to receive a test video of the human whose
posture is to be monitored while performing the physical activity. The test video may be received through the vision acquisition device installed in the mobile device such as the smart phone. The test video is the monocular video or a sequence of RGB frames without depth information.
10 [055] At step 208 of method 200, the one or more hardware processors
104 of the system 100 are configured to divide the test video received at step 206 of the method 200 into one or more test video clips based on presence of the human in the associated one or more test video clips. A human detection technique uses the human detector such as the Off-the-Shelf (OS) routine Detectron to detect the
15 human present in each video frame of the test video. Each of the one or more test
video clips contains a plurality of such test video frames. The number of the plurality of the test video frames is same as that of the number of the plurality of training video frames present in each training video clip used during the training.
[056] At step 210 of method 200, the one or more hardware processors
20 104 of the system 100 are configured to pass each of the one or more test video
clips to the trained second encoder of the trained neural network model, to obtain the latent space vector for each of the one or more test video clips. The latent space vector is obtained for each test video clip.
[057] At step 212 of method 200, the one or more hardware processors
25 104 of the system 100 are configured to pass the latent space vector of each of the
one or more test video clips obtained at step 210 of the method 200, to the first decoder of the trained autoencoder network of the trained neural network model, to estimate the 3-D posture of the human present in each of the one or more test video clips. The 3-D posture of the human is estimated for each test video clip based on
the central video frame. The 3-D posture contains the 3-D coordinates of each of the 17 joints, the way the neural network model is trained at step 204 of the method 200. The present disclosure eliminates the need of both depth sensors and stereovision-based specialized cameras for estimating the 3-D posture of the human. The methods and systems of the present disclosure accurately estimate the 3-D posture from a single monocular video from a low-resolution camera, e.g., a smart-phone camera. The present disclosure utilizes 1-D dilation-based convolution directly on the temporal sequence of 3-D pose coordinates instead of the heat map or spectral domain. This reduces the parameter size of our model and makes it less resource consuming. Further, the present disclosure exploits the temporal information and brings down the time consumption by employing a single stage Deep Learning (DL) based network that predicts the 3-D posture from the image sequence directly.
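The inference path of steps 206 to 212 can be summarized by the sketch below, in which detect_and_crop_human, trained_second_encoder, and trained_first_decoder are hypothetical callables standing in for the human detector, the trained second encoder, and the first decoder of the trained autoencoder network.

```python
import numpy as np

CLIP_LEN = 243  # same number of frames per clip as used during training

def estimate_postures(test_video_frames, detect_and_crop_human,
                      trained_second_encoder, trained_first_decoder):
    """Estimate one 3-D posture (17 x 3 joint coordinates) per test video clip.
    The three callables are placeholders for the components of the trained model."""
    # Keep only frames in which a human is detected, cropped around the human.
    crops = [detect_and_crop_human(f) for f in test_video_frames]
    crops = [c for c in crops if c is not None]

    postures = []
    # Divide the test video into clips of CLIP_LEN frames.
    for start in range(0, len(crops) - CLIP_LEN + 1, CLIP_LEN):
        clip = crops[start:start + CLIP_LEN]
        latent = trained_second_encoder(clip)      # latent space vector of the clip
        posture = trained_first_decoder(latent)    # 3-D posture for the central frame
        postures.append(np.asarray(posture).reshape(17, 3))
    return postures
```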
[058] FIG. 5 shows an exemplary setup for online posture analysis using the mobile device, in accordance with some embodiments of the present disclosure. As shown in FIG. 5, a monocular/RGB camera present in the mobile device such
15 as a smart phone is fixed before the human in the video mode. The human or the
user can perform physical activities. The obtained monocular video or the sequence of RGB frames from the monocular/RGB camera stream gets channeled to the cloud and is processed by the methods configured in the systems such as the mobile device. The posture analytics output gets displayed on the screen. The first part of
20 the process is detecting the posture itself. The second part is to detect if the posture
is correct or not so that necessary precautions can be taken to correct the postures and avoid body disorders such as back pain, spinal dysfunction, joint or muscle degeneration. Example scenario:
25 [059] Dataset Description: The neural network model of the present
disclosure is trained on videos from eight subjects performing 47 exercises, being captured at 50 fps from four different views using two different camera models, one assuming image distortion and the other ignoring it. For testing, two different types of data were considered having: (i) online yoga videos and (ii) an in-house dataset
30 captured in-the-wild where six volunteers have performed two exercises each, three
takes per exercise and videos captured from two different views (frontal and side)
per capture. This resulted in a total of 72 videos. The videos are of variable length.
The in-house data collection protocol was cleared by the internal Ethics Committee.
The Helsinki Human Research guidelines were followed for the data capture. For
training the neural network model, the videos are divided into clips of N = 243 video
5 frames with random sampling.
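As an illustration of this clip preparation step, the sketch below randomly samples fixed-length clips of N = 243 frames from a training video; the exact sampling strategy is an assumption, as the disclosure only states that the videos are divided into clips with random sampling.

```python
import numpy as np

def sample_training_clips(video_len: int, n_clips: int, clip_len: int = 243):
    """Randomly sample start indices of fixed-length training clips from a video.
    The sampling scheme shown here is an illustrative assumption."""
    starts = np.random.randint(0, max(1, video_len - clip_len + 1), size=n_clips)
    return [(int(s), int(s) + clip_len) for s in starts]

# Example: three random 243-frame clips from a 5000-frame training video.
print(sample_training_clips(video_len=5000, n_clips=3))
```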
[060] A two-step evaluation of the present disclosure was performed. The first step evaluates the proposed neural network model architecture in predicting the 3-D posture (pose) given the video. The second step evaluates the utilization of these 3-D postures for analyzing if proper posture for a given exercise is
10 maintained.
[061] Results on 3-D posture estimation from videos: The methods and systems of the present disclosure were evaluated, and a comparative study was conducted with the state-of-the-art (SOA) techniques both qualitatively and quantitatively.
15 [062] Comparative study on qualitative results: The trained neural network
model of the present disclosure was tested on several difficult conditions, such as (i) low light, (ii) low resolution, (iii) self-occlusion, and (iv) highly flexible human body posture such as in yoga. FIG. 6 shows the predicted 3-D postures of the human at several difficult conditions, by the trained neural network model of the present
disclosure. As shown in FIG. 6, the present disclosure produces good results in all these difficult and practical conditions. In FIG. 6, the first and second rows show a highly flexible body, low resolution (240p), and a front view. The third and fourth rows show a bad posture (bent back, the angle at the knee being less than 60 degrees in the second and third columns) and self-occlusion (the left body part being occluded by the right). The fifth and sixth rows show a low light condition, a side view, and a good posture for squat (straight back, angle at the knee greater than 60 degrees). The first row of each block shows the input image with the joint locations projected onto the image, and the second row shows the 3-D joint locations/skeletons as estimated by the trained neural network of the present disclosure.
30 [063] Further, the performance of the trained neural network of the present
disclosure was compared with the newest and most relevant SOA work called
MediaPipe. The methods and systems of the present disclosure have several practical advantages over MediaPipe. They are:
(i) The 33 joint locations that MediaPipe trains on do not include any joint on the body torso. The present disclosure considers three joints on the body torso. Thus, the present disclosure is able to capture the straightness vs. bend of the body torso, which MediaPipe cannot. FIG. 7 shows a performance comparison of the present disclosure with that of MediaPipe. This straightness of the body torso is an important posture requirement for most exercises. In FIG. 7, (a) is the output of MediaPipe, which contains no joint on the back and hence cannot detect the bent back, a bad posture for the exercise deadlift; (b) is the output of the present disclosure; and (c) is the skeletal output of the present disclosure, which contains three joints on the back and thus detects the bend, displayed as a curved line on the back.
(ii) MediaPipe processes one image at a time. It does not exploit the temporal relationship of the 3-D body postures. As a result, MediaPipe cannot produce a smooth transition of skeletal postures over the temporal axis. The present disclosure explicitly exploits the temporal relationship via 1-D convolution over the time axis. Thus, the present disclosure produces a realistic and smooth transition. To quantify this smoothness, the Mean Per Joint Velocity Error (MPJVE), corresponding to the first derivative of the 3-D posture sequence, was measured (a sketch of this computation is given below). The MPJVE of the present disclosure is 2.8 vs. 11.6 for MediaPipe.
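A minimal sketch of the MPJVE computation referred to above is given here; treating MPJVE as the position-error metric applied to the first temporal derivative is an assumption consistent with the description.

```python
import numpy as np

def mpjve(pred_seq: np.ndarray, gt_seq: np.ndarray) -> float:
    """Mean Per Joint Velocity Error: the per-joint error of the first temporal
    derivative of the posture sequences. Inputs have shape (F, n, 3)."""
    pred_vel = np.diff(pred_seq, axis=0)   # first derivative over time
    gt_vel = np.diff(gt_seq, axis=0)
    return float(np.mean(np.linalg.norm(pred_vel - gt_vel, axis=-1)))

# Example with random sequences of F = 243 frames and n = 17 joints.
print(mpjve(np.random.rand(243, 17, 3), np.random.rand(243, 17, 3)))
```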
25 [064] Comparative Study on Quantitative Results: The Mean Per Joint
Position Error (MPJPE) defined by equation 3 was measured and compared with
the other SOA techniques such as (A) exploiting temporal information from 3-D
posture estimation, (B) Human posture estimation using MediaPipe pose based on
humanoid model, and (C) 2-D/3-D pose estimation and action recognition using
30 multitask deep learning.
MPJPE = 1/(F × n) Σ_{i=1..F} Σ_{j=1..n} ||x_{i,j} − M(I_i)_j||    (3)
[065] In equation 3, F is the number of frames, x_{i,j} is the annotated 3-D location of the j-th joint for the i-th image, and M(I_i)_j represents the j-th joint of the 3-D posture output of the proposed model for the i-th image I_i. Table 1 shows the comparison results of the present disclosure over the SOA techniques A, B and C. As shown in Table 1, the present disclosure results in less error as compared to the other SOA techniques. In addition, the present disclosure is a one-step method in the inference case, whereas B consists of two steps, image to 2-D posture detection and 2-D to 3-D posture prediction. Thus, our method is less time-consuming during inference (∼2 - 3 fps for B vs. ∼5 fps for the present disclosure). In Table 1, Sq: Squat, DL: Deadlift, FV: Front View, SV: Side View.
Method             | Sq FV | Sq SV | DL FV | DL SV | Avg
A                  | NA    | NA    | NA    | NA    | 58.3
B                  | 46    | 47.5  | 46.2  | 47.5  | 46.8
C                  | NA    | NA    | NA    | NA    | 53.2
Present disclosure | 45    | 44    | 45.8  | 46.1  | 45.2
Table 1
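For completeness, a minimal sketch of the MPJPE metric of equation 3 is given below, computed over F frames and n joints.

```python
import numpy as np

def mpjpe(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean Per Joint Position Error (equation 3): average Euclidean distance
    between predicted and annotated joint locations over F frames and n joints.
    pred, gt: arrays of shape (F, n, 3)."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

# Example: F = 100 frames, n = 17 joints.
pred = np.random.rand(100, 17, 3)
gt = pred + 0.01 * np.random.randn(100, 17, 3)
print(mpjpe(pred, gt))
```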
[066] Results on Posture Analysis: The efficacy of the present disclosure was evaluated for detecting whether the user is maintaining the proper posture while doing a particular exercise. For this, two exercises were considered: (i) squat (done by people in general for toning leg and glute muscles) and (ii) deadlift (done by advanced gym trainees, involving lifting heavy weights for bodybuilding). Exercises follow a fixed sequence of body postures. For obtaining these body posture rules, a certified gym trainer was consulted. For squat, the rules are: 1. the angle at the knee should be more than 60 degrees, and 2. the back should be straight (no bend). The body posture rules for deadlift are: 1. the weight should not be far away from the body, 2. the back should be straight, and 3. the upper body should not go backward. These conditions are easily detected from the 3-D joint locations of the body; a sketch of such a rule check is given below. In FIG. 6, the third vs. fifth rows show violation vs. maintenance of the two rules for squat. FIG. 7 shows that a bent back is detected by the present disclosure for deadlift.
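The following sketch illustrates how such posture rules can be checked from the estimated 3-D joint locations. The joint indices, the back-straightness tolerance, and the specific angle computations are illustrative assumptions rather than the exact rules of the disclosed system.

```python
import numpy as np

# Illustrative joint indices into the 17-joint skeleton (assumed convention).
HIP, KNEE, ANKLE = 1, 2, 3
PELVIS, SPINE, NECK = 0, 7, 8

def angle_at(a, b, c):
    """Angle (in degrees) at joint b formed by joints a-b-c."""
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

def check_squat_rules(pose: np.ndarray):
    """pose: (17, 3) array of estimated 3-D joint locations."""
    knee_angle = angle_at(pose[HIP], pose[KNEE], pose[ANKLE])
    back_bend = angle_at(pose[PELVIS], pose[SPINE], pose[NECK])
    return {
        "knee_angle_ok": knee_angle > 60.0,   # rule 1: angle at the knee > 60 degrees
        "back_straight": back_bend > 160.0,   # rule 2: back nearly straight (assumed tolerance)
    }

print(check_squat_rules(np.random.rand(17, 3)))
```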
[067] The written description describes the subject matter herein to enable
any person skilled in the art to make and use the embodiments. The scope of the
5 subject matter embodiments is defined by the claims and may include other
modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
10 [068] The embodiments of the present disclosure herein address
unresolved problems of real time video driven human 3-D posture estimation during physical activities. The present invention discloses a smart-phone camera based automatic posture monitoring system designed with an auto-encoder based architecture. The disclosed auto-encoder based cross-modal method uses
15 monocular video (2-D image sequences) from a single low-end mobile device (for
example, smart-phone camera) for estimating human 3-D posture in real time (∼ 5 fps) with high accuracy (less than 1 cm error per joint location).
[069] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message
20 therein; such computer-readable storage means contain program-code means for
implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g., any kind of computer like a server or a personal computer, or the like, or any combination
25 thereof. The device may also include means which could be e.g., hardware means
like e.g., an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both
30 hardware means, and software means. The method embodiments described herein
could be implemented in hardware and software. The device may also include
software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs.
[070] The embodiments herein can comprise hardware and software
elements. The embodiments that are implemented in software include but are not
5 limited to, firmware, resident software, microcode, etc. The functions performed by
various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in
10 connection with the instruction execution system, apparatus, or device.
[071] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation.
15 Further, the boundaries of the functional building blocks have been arbitrarily
defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons
20 skilled in the relevant art(s) based on the teachings contained herein. Such
alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such
25 item or items or meant to be limited to only the listed item or items. It must also be
noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[072] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[073] It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.
We Claim:
1. A processor-implemented method (200), comprising the steps of:
    receiving, via one or more hardware processors, one or more training datasets each comprising a plurality of training videos, wherein each of the plurality of training videos comprises one or more training video clips, wherein each of the one or more training video clips comprises a plurality of training video frames and a 3-Dimensional (3-D) posture annotated to each of the plurality of training video frames (202); and
    training, via the one or more hardware processors, a neural network model comprising an autoencoder network and a second encoder, with the one or more training datasets, to obtain a trained neural network model, wherein the autoencoder network comprises a first encoder and a first decoder, and wherein the trained neural network model comprises a trained autoencoder network and a trained second encoder, and the autoencoder network and the second encoder are trained sequentially (204), and wherein:
    training the autoencoder network with the one or more training datasets comprises:
        (a) passing 3-D postures annotated to the plurality of training video frames present in each training video clip at a time, of the one or more training video clips present in each of the one or more training datasets, to the first encoder, to obtain a latent space vector associated to the 3-D postures associated to each training video clip (204a1);
        (b) passing the latent space vector associated to the 3-D postures associated to each training video clip, to the first decoder, to obtain reconstructed 3-D postures of the associated training video clip (204a2);
        (c) calculating a value of a loss function of the autoencoder network, using (i) the 3-D postures of each training video clip and (ii) the reconstructed 3-D postures of the associated training video clip, wherein the loss function of the autoencoder network is a summation of an autoencoder reconstruction loss and a bone length consistency loss (204a3);
        (d) updating one or more autoencoder network parameters of the autoencoder network based on the value of the loss function of the autoencoder network (204a4); and
        (e) repeating the steps (a) through (d) until the value of the loss function of the autoencoder network is less than a first predefined threshold value, to obtain a trained autoencoder network (204a5); and
    training the second encoder with the one or more training datasets comprises:
        (f) passing the 3-D postures annotated to each training video clip at a time, of the one or more training video clips present in each of the one or more training datasets, to the first encoder of the trained autoencoder network, to obtain a first latent space vector of the associated training video clip (204b1);
        (g) passing the plurality of training video frames present in the associated training video clip, to the second encoder, to obtain a second latent space vector of the associated training video clip (204b2);
        (h) calculating a value of the loss function of the second encoder, using (i) the first latent space vector of the associated training video clip and (ii) the second latent space vector of the associated training video clip, wherein the loss function of the second encoder is an encoder reconstruction loss (204b3);
        (i) updating one or more second encoder network parameters of the second encoder based on the value of the loss function of the second encoder (204b4); and
        (j) repeating the steps (f) through (i) until the value of the loss function of the second encoder is less than a second predefined threshold value, to obtain a trained second encoder (204b5).
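[Editorial illustration] For readers who want a concrete picture of the two-stage training of claim 1, the following PyTorch-style sketch is illustrative only and is not the claimed implementation. The network objects, tensor shapes (e.g., clips of T frames with J annotated joints), optimizer settings, and the `bone_pairs` list are assumptions introduced solely for illustration.

```python
# Minimal sketch, assuming: postures of shape (batch, T, J, 3),
# raw video frames of shape (batch, T, C, H, W), and a data loader
# yielding (postures, frames) per training video clip.
import torch
import torch.nn.functional as F


def bone_length_consistency_loss(pred, target, bone_pairs):
    """Penalize differences between predicted and annotated bone lengths.
    `bone_pairs` is an assumed list of (parent_joint, child_joint) indices."""
    loss = 0.0
    for a, b in bone_pairs:
        pred_len = (pred[..., a, :] - pred[..., b, :]).norm(dim=-1)
        true_len = (target[..., a, :] - target[..., b, :]).norm(dim=-1)
        loss = loss + F.mse_loss(pred_len, true_len)
    return loss / len(bone_pairs)


def train_autoencoder(first_encoder, first_decoder, loader, bone_pairs, threshold, lr=1e-3):
    """Steps (a)-(e): train the posture autoencoder until its loss
    falls below the first predefined threshold value."""
    params = list(first_encoder.parameters()) + list(first_decoder.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    loss_value = float("inf")
    while loss_value > threshold:
        for postures, _frames in loader:                  # one training video clip at a time
            z = first_encoder(postures)                   # (a) latent space vector
            recon = first_decoder(z)                      # (b) reconstructed 3-D postures
            loss = (F.mse_loss(recon, postures)           # (c) reconstruction loss
                    + bone_length_consistency_loss(recon, postures, bone_pairs))
            opt.zero_grad()
            loss.backward()
            opt.step()                                    # (d) update autoencoder parameters
            loss_value = loss.item()                      # (e) repeat until below threshold


def train_second_encoder(first_encoder, second_encoder, loader, threshold, lr=1e-3):
    """Steps (f)-(j): train the second encoder to reproduce, from raw frames,
    the latent vector the trained first encoder produces from 3-D postures."""
    opt = torch.optim.Adam(second_encoder.parameters(), lr=lr)
    loss_value = float("inf")
    while loss_value > threshold:
        for postures, frames in loader:
            with torch.no_grad():
                z_posture = first_encoder(postures)       # (f) first latent space vector
            z_frames = second_encoder(frames)             # (g) second latent space vector
            loss = F.mse_loss(z_frames, z_posture)        # (h) encoder reconstruction loss
            opt.zero_grad()
            loss.backward()
            opt.step()                                    # (i) update second encoder parameters
            loss_value = loss.item()                      # (j) repeat until below threshold
```

As recited in the claim, the two loops run sequentially: the autoencoder is trained first, and its first encoder is then held fixed while the second encoder learns to match its latent space vectors.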
2. The method as claimed in claim 1, comprising:
receiving in real-time, via the one or more hardware processors, a
test video of the human while performing a physical activity, through an acquisition device (206);
dividing, via the one or more hardware processors, the test video into one or more test video clips based on presence of the human in the associated one or more test video clips, using a human detection technique (208);
passing, via the one or more hardware processors, each of the one or more test video clips to the trained second encoder of the trained neural network model, to obtain a latent space vector for each of the one or more test video clips (210); and
passing, via the one or more hardware processors, the latent space vector of each of the one or more test video clips, to the first decoder of the trained autoencoder network of the trained neural network model, to estimate the 3-D posture of the human present in each of the one or more test video clips (212).
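[Editorial illustration] A minimal sketch of the test-time path of claim 2, under the same assumptions as the sketch above; `detect_human_clips` is a hypothetical placeholder for whatever human detection technique divides the test video into clips in which a person is present.

```python
# Minimal inference sketch (claim 2); illustrative only, not the claimed implementation.
import torch


@torch.no_grad()
def estimate_postures(test_video, second_encoder, first_decoder, detect_human_clips):
    """test_video: tensor of frames (N, C, H, W) received from the acquisition device.
    detect_human_clips: assumed helper returning clips of shape (T, C, H, W)."""
    postures = []
    for clip in detect_human_clips(test_video):   # divide the video into test video clips
        z = second_encoder(clip.unsqueeze(0))     # latent space vector from raw frames
        postures.append(first_decoder(z))         # decode to estimated 3-D joint positions
    return postures
```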
3. The method as claimed in claim 1, wherein each of (i) the first encoder of
the autoencoder network, (ii) the first decoder of the autoencoder network,
and (iii) the second encoder, comprises four residual network (ResNet) style
blocks surrounded by one or more associated skip connections.
4. The method as claimed in claim 3, wherein each of the four ResNet style
blocks present in the first encoder of the autoencoder network and the
second encoder, comprises two convolution layers each followed by a batch
normalization layer, a rectified linear unit (ReLU) activation function layer,
and a dropout layer.
5. The method as claimed in claim 3, wherein each of the four ResNet style
blocks present in the first decoder comprises two deconvolution layers each
followed by a batch normalization layer, a rectified linear unit (ReLU)
activation function layer, and a dropout layer.
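[Editorial illustration] Claims 3 to 5 describe the building blocks structurally; the sketch below shows one plausible reading in PyTorch. The kernel size, channel count, dropout rate, and the use of 1-D (temporal) convolutions are assumptions, since the claims do not fix these choices.

```python
# One possible reading of the ResNet-style blocks of claims 3-5 (assumptions noted above).
import torch.nn as nn


class EncoderResBlock(nn.Module):
    """Two convolution layers, each followed by batch normalization, ReLU,
    and dropout, wrapped in a skip connection (claims 3 and 4)."""
    def __init__(self, channels, kernel_size=3, dropout=0.1):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels), nn.ReLU(), nn.Dropout(dropout),
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels), nn.ReLU(), nn.Dropout(dropout),
        )

    def forward(self, x):
        return x + self.body(x)          # residual (skip) connection


class DecoderResBlock(nn.Module):
    """Same structure with deconvolution (transposed convolution) layers (claim 5)."""
    def __init__(self, channels, kernel_size=3, dropout=0.1):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.ConvTranspose1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels), nn.ReLU(), nn.Dropout(dropout),
            nn.ConvTranspose1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels), nn.ReLU(), nn.Dropout(dropout),
        )

    def forward(self, x):
        return x + self.body(x)


# Per claim 3, each encoder or decoder would then stack four such blocks, e.g.:
# encoder_trunk = nn.Sequential(*[EncoderResBlock(256) for _ in range(4)])
```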
6. A system (100) comprising:
    a memory (102) storing instructions;
    one or more input/output (I/O) interfaces (106);
    one or more hardware processors (104) coupled to the memory (102) via the one or more I/O interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to:
    receive one or more training datasets each comprising a plurality of training videos, wherein each of the plurality of training videos comprises one or more training video clips, wherein each of the one or more training video clips comprises a plurality of training video frames and a 3-Dimensional (3-D) posture annotated to each of the plurality of training video frames; and
    train a neural network model comprising an autoencoder network and a second encoder, with the one or more training datasets, to obtain a trained neural network model, wherein the autoencoder network comprises a first encoder and a first decoder, and wherein the trained neural network model comprises a trained autoencoder network and a trained second encoder, and the autoencoder network and the second encoder are trained sequentially, and wherein:
    training the autoencoder network with the one or more training datasets comprises:
        (a) passing 3-D postures annotated to the plurality of training video frames present in each training video clip at a time, of the one or more training video clips present in each of the one or more training datasets, to the first encoder, to obtain a latent space vector associated to the 3-D postures associated to each training video clip;
        (b) passing the latent space vector associated to the 3-D postures associated to each training video clip, to the first decoder, to obtain reconstructed 3-D postures of the associated training video clip;
        (c) calculating a value of a loss function of the autoencoder network, using (i) the 3-D postures of each training video clip and (ii) the reconstructed 3-D postures of the associated training video clip, wherein the loss function of the autoencoder network is a summation of an autoencoder reconstruction loss and a bone length consistency loss;
        (d) updating one or more autoencoder network parameters of the autoencoder network based on the value of the loss function of the autoencoder network; and
        (e) repeating the steps (a) through (d) until the value of the loss function of the autoencoder network is less than a first predefined threshold value, to obtain a trained autoencoder network; and
    training the second encoder with the one or more training datasets comprises:
        (f) passing the 3-D postures annotated to each training video clip at a time, of the one or more training video clips present in each of the one or more training datasets, to the first encoder of the trained autoencoder network, to obtain a first latent space vector of the associated training video clip;
        (g) passing the plurality of training video frames present in the associated training video clip, to the second encoder, to obtain a second latent space vector of the associated training video clip;
        (h) calculating a value of the loss function of the second encoder, using (i) the first latent space vector of the associated training video clip and (ii) the second latent space vector of the associated training video clip, wherein the loss function of the second encoder is an encoder reconstruction loss;
        (i) updating one or more second encoder network parameters of the second encoder based on the value of the loss function of the second encoder; and
        (j) repeating the steps (f) through (i) until the value of the loss function of the second encoder is less than a second predefined threshold value, to obtain a trained second encoder.
7. The system (100) as claimed in claim 6, wherein the one or more hardware processors (104) are configured to:
    receive in real-time a test video of the human while performing a physical activity, through an acquisition device;
    divide the test video into one or more test video clips based on presence of the human in the associated one or more test video clips, using a human detection technique;
    pass each of the one or more test video clips to the trained second encoder of the trained neural network model, to obtain a latent space vector for each of the one or more test video clips; and
    pass the latent space vector of each of the one or more test video clips, to the first decoder of the trained autoencoder network of the trained neural network model, to estimate the 3-D posture of the human present in each of the one or more test video clips.
8. The system (100) as claimed in claim 6, wherein each of (i) the first encoder
of the autoencoder network, (ii) the first decoder of the autoencoder
network, and (iii) the second encoder, comprises four residual network
(ResNet) style blocks surrounded by one or more associated skip
connections.
9. The system (100) as claimed in claim 6, wherein each of the four ResNet
style blocks present in the first encoder of the autoencoder network and the second encoder, comprises two convolution layers each followed by a batch normalization layer, a rectified linear unit (ReLU) activation function layer, and a dropout layer.
10. The system (100) as claimed in claim 6, wherein each of the four ResNet style blocks present in the first decoder comprises two deconvolution layers each followed by a batch normalization layer, a rectified linear unit (ReLU) activation function layer, and a dropout layer.