Abstract: The present application provides a system and method for estimating at least one upper body pose of at least one individual in a single image. The proposed system and method for estimating the upper body pose of the individual in the single image: do not require extensive training data for estimating human upper body poses in the single image; require only gradient information to localize the torso and the limbs; do not rely on figure-ground segmentation information; and are independent of clothing or skin color constancy related assumptions for estimating human upper body poses in the single image.
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
A SYSTEM AND METHOD FOR ESTIMATING HUMAN UPPER BODY
POSE FROM SINGLE IMAGE
Applicant:
Tata Consultancy Services Limited, a company incorporated in India under The Companies Act, 1956
Having address:
Nirmal Building, 9th Floor,
Nariman Point, Mumbai 400021,
Maharashtra, India
The following specification particularly describes the invention and the manner in which it is to be performed.
FIELD OF THE INVENTION
The present application relates to a system and method for image information processing. More particularly, the application relates to system and method for estimating at least one upper body pose of at least one individual in a single image.
BACKGROUND OF THE INVENTION
Human upper body pose estimation plays a key role in applications related to human-computer interaction. In order to estimate human upper body poses in an image, existing approaches have employed color constancy assumptions and limb detectors, along with body-part-connectivity-constrained maximization for pose estimation. These approaches depend heavily on clothing color constancy, hand skin exposure, and seeded figure-ground segmentation, and are far from real-time execution. Also, constructing limb detectors requires extensive training. Further, existing approaches have used edges to determine the torso boundary and to validate whether a limb is present, i.e. as a cue in limb detection and pose estimation. Limb detectors have also been used for estimating the limb pose, which requires training data.
Thus, in light of the above-mentioned background art, it is evident that there is a need for a system and method for estimating at least one upper body pose of at least one individual in a single image which:
• do not require extensive training data for estimating human upper body poses in an image;
• require only gradient information to localize torso boundaries and limbs in a single image;
• do not rely on figure-ground segmentation information; and
• are independent of clothing or skin color constancy related assumptions for estimating human upper body poses in an image.
OBJECTS OF THE INVENTION
The principal object is to provide a system and method for estimating at least one upper body pose of at least one individual in a single image.
Another significant object is to provide a system and method which do not require extensive training data for estimating human upper body poses in a single image.
Still another object is to provide a system and method which require only gradient information to localize the torso boundaries and the limbs in a single image.
Further another object is to provide a system and method which do not rely on figure-ground segmentation information.
Yet another object is to provide a system and method which are independent of clothing or skin color constancy related assumptions for estimating human upper body poses in a single image.
SUMMARY OF THE INVENTION
Before the present systems and methods are described, it is to be understood that this application is not limited to the particular systems and methodologies described, as there can be multiple possible embodiments which are not expressly illustrated in the present disclosure. It is also to be understood that the terminology used in the description is for the purpose of describing the particular versions or embodiments only, and is not intended to limit the scope of the present application.
The present application provides a system and method for estimating at least one upper body pose of at least one individual in a single image.
In one aspect, a method for estimating at least one upper body pose of at least one individual in a single image is provided, comprising various machine-implemented steps.
In another aspect, a single image associated with at least one individual is acquired by employing at least one sensor, wherein the said image, captured in an indoor or outdoor environment, is associated with at least one individual. The said individual is a human. In one exemplary embodiment, the sensor can be any one of a color camera or at least one interference sensor. The said image can be any one of a color image, a black and white image, or a monochrome image. In a preferred embodiment, the said sensor is a color camera and the image is a color image.
Upon acquiring the single image, a processor configured with the sensor analyzes the captured image in real time to estimate the at least one upper body pose of the at least one individual for applications related to human-computer interaction.
In one exemplary embodiment, upon acquiring the image, a head region of the said individual is located using a frontal face detector. In a preferred embodiment, the frontal face detector is adapted to locate the head region of the said individual using a plurality of Haar features.
Upon locating the head region, a face width of the individual is determined based on the located head region.
In one exemplary embodiment, dimensions of the body parts of the individual are calculated based on the determined face width using anthropometrical ratio data.
In another embodiment, at least one image co-ordinate of at least one joint between at least two body parts of the individual is determined. In one exemplary embodiment, the said at least one image co-ordinate of at least one joint between at least two body parts of the individual is determined using anthropometrical ratio data.
In another embodiment, at least one angle between at least two body parts of the individual is estimated. In one exemplary embodiment, the said at least one angle between at least two body parts is estimated using stochastic search iterations through Orientation Similarity Maximization along the outline of the 2D human body model placed on the image.
Upon estimating the angle, at least one body part of the individual is localized based on the calculated dimensions, the determined at least one image co-ordinate, and the estimated at least one angle.
Next, the gradient magnitude and unsigned orientation are computed from the at least one input image.
In another embodiment, random joint angles are generated and the orientation similarity along the outlines of the at least one body part is computed.
Finally, the pose of the individual is estimated by maximizing the said orientation similarity measure in a stochastic search framework.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing summary, as well as the following detailed description of preferred embodiments, is better understood when read in conjunction with the appended drawings. Example embodiments are shown in the drawings; however, the application is not limited to the specific system and method disclosed in the drawings.
Figure 1 is a flowchart for the orientation similarity maximization according to one exemplary embodiment of the invention.
Figures 2a and 2b show the human upper body model as rectangles with circular regions at the joints to generate the effect of a smooth body contour, according to one exemplary embodiment of the invention.
Figure 3a shows rectangles modeling the right upper arm placed in different orientations, and the direction that maximizes the orientation similarity measure, according to one exemplary embodiment of the invention.
Figure 3b shows the minimum/average/maximum joint angle estimation error for varying sizes of the neighborhood radius according to one exemplary embodiment of the invention.
Figure 4 shows experimental results performed on a set of human upper body images with varying levels of background clutter according to one exemplary embodiment of the invention.
DETAILED DESCRIPTION OF THE INVENTION
Some embodiments, illustrating their features, will now be discussed in detail. The words "comprising," "having," "containing," and "including," and other forms thereof, are intended to be equivalent in meaning and be open-ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural references unless the context clearly dictates otherwise. Although any methods and systems similar or equivalent to those described herein can be used in the practice or testing of embodiments, the preferred methods and systems are now described. The disclosed embodiments are merely exemplary.
The present application provides a system and method for estimating at least one upper body pose of at least one individual in a single image.
Figure 1 is a flowchart 100 for the orientation similarity maximization according to one exemplary embodiment of the invention. A system for estimating at least one upper body pose of at least one individual in a single image comprises a sensor and a processor. Initially, a single image associated with at least one individual is acquired by employing at least one sensor, wherein the image, captured in an indoor or outdoor environment, is associated with at least one individual. The said individual is a human. In one exemplary embodiment, the sensor can be any one of a color camera or at least one interference sensor. The said image can be any one of a color image, a black and white image, or a monochrome image. In a preferred embodiment, the said sensor is a color camera and the image is a color image.
Upon acquiring the single image, a processor configured with the sensor analyzes the captured image in real time to estimate the at least one upper body pose of the at least one individual for applications related to human-computer interaction.
In one aspect, a method for estimating at least one upper body pose of at least one individual in a single image is provided, comprising various machine-implemented steps.
In one aspect, initially, a single image associated with at least one individual is acquired by employing at least one sensor, wherein the image, captured in an indoor or outdoor environment, is associated with the individual. The said individual is a human. In one exemplary embodiment, the sensor can be any one of a color camera or at least one interference sensor. The said image can be any one of a color image, a black and white image, or a monochrome image. In a preferred embodiment, the said sensor is a color camera and the image is a color image.
Upon acquiring the single image, the head region of the said individual is located using a frontal face detector. In a preferred embodiment, the frontal face detector is adapted to locate the head region of the said individual using a plurality of Haar features.
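By way of illustration, this face-localization step can be realized with a standard Haar-cascade frontal face detector. The sketch below uses OpenCV; the library and its bundled cascade file are assumptions made for illustration, not components mandated by this specification.

```python
# Minimal sketch of the head-region localization step using a Haar-cascade
# frontal face detector. OpenCV and its bundled cascade file are assumed
# here for illustration; the specification does not mandate a library.
import cv2

def locate_head_region(image_bgr):
    """Return (x, y, w, h) of the first detected frontal face, or None."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return tuple(faces[0]) if len(faces) else None
```

The width of the returned rectangle serves as the face width used in the subsequent anthropometric scaling step.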
Upon locating the head region, the face width of the individual is determined based on the located head region.
In one exemplary embodiment, dimensions of the body parts of the individual are calculated based on the determined face width using anthropometrical ratio data.
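A minimal sketch of this scaling step follows. The specification does not reproduce the anthropometrical ratio table, so the numeric ratios below are hypothetical placeholders chosen only to make the computation concrete.

```python
# Hypothetical anthropometric ratios expressed as multiples of the face
# width; the actual ratio data used by the method is not reproduced in
# this specification.
ANTHROPOMETRIC_RATIOS = {
    "torso_width": 2.0,       # placeholder value
    "torso_length": 3.0,      # placeholder value
    "upper_arm_length": 1.8,  # placeholder value
    "lower_arm_length": 1.6,  # placeholder value
}

def body_part_dimensions(face_width_px):
    """Scale each body-part dimension by the detected face width (pixels)."""
    return {part: ratio * face_width_px
            for part, ratio in ANTHROPOMETRIC_RATIOS.items()}
```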
In another embodiment, at least one image co-ordinate of at least one joint between at least two body parts of the individual is determined. In one exemplary embodiment, the said at least one image co-ordinate of at least one joint between at least two body parts of the individual is determined using anthropometrical ratio data. Pictorial structures are used for representing a 2D human body model (body part dimensions and joint angles shown in Figure 2). In an exemplary embodiment, the human body is assumed to be near vertical for the particular application, and hence the joint angle between the torso and the vertical axis of the head (denoted $\theta_1$ here) is assumed to lie in a narrow interval about the vertical. The joint angles $\theta_2$ and $\theta_3$ made by the left and right upper arms with the torso axis are assumed to vary within a fixed interval. Further, considering the possibilities of roll in the upper arms, the angles $\theta_4$ and $\theta_5$ between the lower and upper arms at the left and right elbows are likewise assumed to lie within a fixed interval. The image co-ordinates of the joints, viz. $j_1$ (between head and torso), $j_2$ (between torso and left upper arm), $j_3$ (between torso and right upper arm), $j_4$ (left elbow) and $j_5$ (right elbow), can be obtained in terms of the body part dimensions and the joint angles using forward kinematics computations.
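The forward kinematics computation can be sketched as below for the left-arm chain (the right arm is symmetric). The angle conventions and the shoulder-offset geometry are assumptions made for illustration; the specification fixes only that the joint co-ordinates follow from the body part dimensions and the joint angles.

```python
import math

def left_arm_joints(j1, dims, theta1, theta2, theta4):
    """Forward kinematics for j2 (left shoulder), j4 (left elbow) and the
    left wrist, given the head-torso joint j1 = (x, y), the body-part
    dimensions and the joint angles. Image co-ordinates (y grows downward)
    and the angle conventions here are illustrative assumptions.
    """
    # Left shoulder: half a torso width perpendicular to the torso axis,
    # which makes angle theta1 with the vertical image axis.
    j2 = (j1[0] - 0.5 * dims["torso_width"] * math.cos(theta1),
          j1[1] + 0.5 * dims["torso_width"] * math.sin(theta1))
    # Left elbow: one upper-arm length along the upper-arm direction.
    a_upper = theta1 + theta2
    j4 = (j2[0] + dims["upper_arm_length"] * math.sin(a_upper),
          j2[1] + dims["upper_arm_length"] * math.cos(a_upper))
    # Left wrist: one lower-arm length further, bent by the elbow angle.
    a_lower = a_upper + theta4
    wrist = (j4[0] + dims["lower_arm_length"] * math.sin(a_lower),
             j4[1] + dims["lower_arm_length"] * math.cos(a_lower))
    return j2, j4, wrist
```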
In another embodiment, at least one angle between at least two body parts of the individual is estimated. In one exemplary embodiment, the said at least one angle between at least two body parts is estimated using stochastic search iterations through Orientation Similarity Maximization along the outline of the 2D human body model placed on the image.
Upon estimating the angle, at least one body part of the individual is localized based on the calculated dimensions, the determined at least one image co-ordinate, and the estimated at least one angle. In an exemplary embodiment, a certain body part (e.g. the right upper arm) is localized as follows. First, the joint co-ordinate of the base of the right upper arm is fixed. For different values of the joint angle, the extent of alignment of the image edges with the outlines of the rectangle representing the right upper arm is computed through an "orientation similarity measure". The final orientation of the right upper arm is obtained at the angle maximizing this measure, as shown in Figure 3a.
In one embodiment, the gradient magnitude and unsigned orientation are computed from the input image. Next, random joint angles are generated and the orientation similarity along the outlines of the at least one body part is computed. Finally, the pose of the individual is estimated by maximizing the said orientation similarity measure in a stochastic search framework.
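A minimal sketch of the gradient computation is given below. Sobel derivatives are assumed here; the specification does not name a particular derivative operator.

```python
import cv2
import numpy as np

def gradient_magnitude_and_orientation(gray):
    """Per-pixel gradient magnitude G and unsigned orientation phi in [0, pi).

    Sobel derivatives are an assumption; the specification does not fix
    the derivative operator.
    """
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    # Fold the signed angle in (-pi, pi] into the unsigned range [0, pi).
    orientation = np.arctan2(gy, gx) % np.pi
    return magnitude, orientation
```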
Figure 2a is described as follows. Let $G(x, y)$ and $\phi(x, y)$ be the respective gradient magnitude and unsigned orientation computed from the image at pixel position $(x, y)$. In the case of unsigned directions, a line at an angle $-\alpha$ with the reference axis is considered to be equivalent to the line making an angle $\pi - \alpha$ with respect to the same reference axis. The unsigned orientations are thus considered to lie in the interval $[0, \pi)$. Let $\psi(x, y)$ be the unsigned orientation of a body part outline at the position $(x, y)$. The orientation similarity measure is defined as $s(x, y) = 1 - |\phi(x, y) - \psi(x, y)|/\pi$, $\phi$ and $\psi$ being unsigned orientations ($s(x, y) \in [0, 1]$). However, computation of orientation similarity on a single pixel has two major disadvantages: first, the concerned image pixel may have a weak gradient magnitude, indicating lesser importance of the gradient direction; and second, the computation might be susceptible to noise if computed only on a single pixel. Thus, a magnitude and position weighted similarity measure is computed over a circular neighborhood $N_r(x, y)$ of radius $r$ around the pixel position $(x, y)$, defined as

$$S(x, y) = \frac{\sum_{(u, v) \in N_r(x, y)} w(u, v)\, G(u, v)\, s(u, v)}{\sum_{(u, v) \in N_r(x, y)} w(u, v)\, G(u, v)},$$

where $w(u, v)$ is the position weighing function. Let $C$ be the set of contour pixels of some body part (e.g. the edges of the arms or the torso). The net orientation similarity measure over the contour is defined as $\mathrm{OSM}(C) = \frac{1}{|C|} \sum_{(x, y) \in C} S(x, y)$, where $|C|$ is the number of pixels in the contour $C$.
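A sketch of the net orientation similarity measure as reconstructed above follows. A uniform position weight and a square window approximating the circular neighborhood are assumptions; the exact weighing function $w$ is not reproduced in this specification.

```python
import numpy as np

def osm_over_contour(mag, phi, contour, psi, r=5):
    """Net orientation similarity measure over one body-part contour.

    mag, phi : gradient magnitude and unsigned-orientation images
    contour  : list of (x, y) outline pixel positions of the body part
    psi      : unsigned outline orientation at each contour pixel
    r        : neighborhood radius (r = 5 in the reported experiments)
    A uniform position weight over a square window is assumed here.
    """
    h, w = mag.shape
    total = 0.0
    for (x, y), outline_angle in zip(contour, psi):
        y0, y1 = max(y - r, 0), min(y + r + 1, h)
        x0, x1 = max(x - r, 0), min(x + r + 1, w)
        g = mag[y0:y1, x0:x1]
        # Per-pixel similarity s = 1 - |phi - psi| / pi.
        s = 1.0 - np.abs(phi[y0:y1, x0:x1] - outline_angle) / np.pi
        denom = g.sum()
        total += (g * s).sum() / denom if denom > 0 else 0.0
    return total / len(contour)
```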
The process of maximizing the OSM through stochastic search iterations is described below. In one embodiment, the head region is located first using a Haar feature based frontal face detector. This provides the face width, from which the body part dimensions are computed using the anthropometric ratio data. This also provides the image co-ordinates $j_1$ of the head-torso joint. Then, 3 stochastic search iterations are performed with a population size of 5 angles to estimate the head-torso joint angle $\theta_1$. Localizing the torso provides the joint co-ordinates $j_2$ and $j_3$. To localize the left upper/lower arms, 3 stochastic search iterations are executed with a population size of 10 two-angle tuples. A similar procedure is adopted for localizing the right upper/lower arm. Thus, a total of 15 + 30 + 30 = 75 OSM computations per image are required, executing at an average of 11.33 frames per second.
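One plausible reading of the 3-iteration, fixed-population stochastic search is sketched below: each iteration samples candidate angles uniformly, keeps the best-scoring one, and narrows the interval around it. The narrowing schedule is an assumption; the specification fixes only the iteration and population counts.

```python
import numpy as np

def stochastic_search(score_fn, low, high, population=5, iterations=3, rng=None):
    """Maximize score_fn (e.g. the OSM of a limb at a candidate angle)
    over the joint-angle interval [low, high] by random sampling.
    The interval-shrinking schedule below is an illustrative assumption.
    """
    rng = rng or np.random.default_rng()
    best_angle, best_score = None, -np.inf
    for _ in range(iterations):
        candidates = rng.uniform(low, high, size=population)
        scores = [score_fn(a) for a in candidates]
        i = int(np.argmax(scores))
        if scores[i] > best_score:
            best_angle, best_score = candidates[i], scores[i]
        span = (high - low) / 4.0  # shrink the search window around the best
        low, high = best_angle - span, best_angle + span
    return best_angle
```

For the arms, the same scheme can be applied with two-angle tuples (upper- and lower-arm angles) and a population of 10, matching the counts described above.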
BEST METHOD
The present application is described in the example given below which is provided only to illustrate the application and therefore should not be construed to limit the scope of the application.
The above proposed method is performed on a set of (unrelated) single-person (upper body) images downloaded from the web and an image sequence recorded in laboratory settings with varying levels of background clutter. A set of 20 images from this data set is ground-truthed through manual measurement of the joint angles. The accuracy of pose estimation directly depends on the neighborhood radius $r$ in the computation of the OSM. For a certain value of $r$, the average joint angle estimation error over 5 joint angles and 20 images is computed. Figure 3b shows the maximum, average and minimum estimation errors for varying values of $r$. Significant changes in the error are not observed for $r > 5$. However, higher values of $r$ lead to a larger number of computations and hence $r = 5$ is fixed for the experiments. The results of human upper body pose estimation on 12 images from the data set are shown in Figure 4.
The methodology and techniques described with respect to the exemplary embodiments can be performed using a machine or other computing device within which a set of instructions, when executed, may cause the machine to perform any one or more of the methodologies discussed above. In some embodiments, the machine operates as a standalone device. In some embodiments, the machine may be connected (e.g., using a network) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client user machine in a server-client user network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may comprise a server computer, a client user computer, a personal computer (PC), a tablet PC, a laptop computer, a desktop computer, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
The machine may include a processor (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory and a static memory, which communicate with each other via a bus. The machine may further include a video display unit (e.g., a liquid crystal display (LCD), a flat panel, a solid state display, or a cathode ray tube (CRT)). The machine may
include an input device (e.g., a keyboard) or touch-sensitive screen, a cursor control device (e.g., a mouse), a disk drive unit, a signal generation device (e.g., a speaker or remote control) and a network interface device.
The disk drive unit may include a machine-readable medium on which is stored one or more sets of instructions (e.g., software) embodying any one or more of the methodologies or functions described herein, including those methods illustrated above. The instructions may also reside, completely or at least partially, within the main memory, the static memory, and/or within the processor during execution thereof by the machine. The main memory and the processor also may constitute machine-readable media.
Dedicated hardware implementations including, but not limited to, application specific integrated circuits, programmable logic arrays and other hardware devices can likewise be constructed to implement the methods described herein. Applications that may include the apparatus and systems of various embodiments broadly include a variety of electronic and computer systems. Some embodiments implement functions in two or more specific interconnected hardware modules or devices with related control and data signals communicated between and through the modules, or as portions of an application-specific integrated circuit. Thus, the example system is applicable to software, firmware, and hardware implementations.
In accordance with various embodiments of the present disclosure, the methods described herein are intended for operation as software programs running on a computer processor. Furthermore, software implementations, including but not limited to distributed processing or component/object distributed processing, parallel processing, or virtual machine processing, can also be constructed to implement the methods described herein.
The present disclosure contemplates a machine readable medium containing instructions, or that which receives and executes instructions from a propagated signal so that a device connected to a network environment can send or receive voice, video or data, and to communicate over the network using the instructions. The instructions may further be transmitted or received over a network via the network interface device.
While the machine-readable medium can be a single medium, the term "machine-readable medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term "machine-readable medium" shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure.
The term "machine-readable medium" shall accordingly be taken to include, but not be limited to: tangible media; solid-state memories such as a memory card or other package that houses one or more read-only (non-volatile) memories, random access memories, or other re-writable (volatile) memories; magneto-optical or optical medium such as a disk or tape; non-transitory mediums or other self-contained information archive or set of archives is considered a distribution medium equivalent to a tangible storage medium. Accordingly, the disclosure is considered to include any one or more of a machine-readable medium or a distribution medium, as listed herein and
including art-recognized equivalents and successor media, in which the software implementations herein are stored.
The illustrations of arrangements described herein are intended to provide a general understanding of the structure of various embodiments, and they are not intended to serve as a complete description of all the elements and features of apparatus and systems that might make use of the structures described herein. Many other arrangements will be apparent to those of skill in the art upon reviewing the above description. Other arrangements may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. Figures are also merely representational and may not be drawn to scale. Certain proportions thereof may be exaggerated, while others may be minimized. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
The preceding description has been presented with reference to various embodiments. Persons skilled in the art and technology to which this application pertains will appreciate that alterations and changes in the described structures and methods of operation can be practiced without meaningfully departing from the principle and scope.
ADVANTAGES
The above proposed system and method can be used in applications related to human-computer interaction and augmented reality, such as, but not limited to, avatar-based chatting applications, virtual meeting rooms, and tele-presence of the speaker/audience in large conferences, in order to estimate the pose of the individual.
WE CLAIM:
1. A method for estimating at least one upper body pose of at least one individual in a single image, the said method comprising the machine-implemented steps of:
a) locating the head region of the said individual using a frontal face detector;
b) determining the face width of the individual based on the located head region;
c) calculating dimensions of the body parts of the individual based on the determined face width using anthropometrical ratio data;
d) determining at least one image co-ordinate of at least one joint between at least two body parts of the individual;
e) estimating at least one angle between at least two body parts of the individual;
f) localizing at least one body part of the individual based on the calculated dimensions, the determined at least one image co-ordinate, and the estimated at least one angle;
g) computing gradient magnitude and unsigned orientation from the input image;
h) generating random joint angles and computing orientation similarity along the outlines of the at least one body part; and
i) maximizing the said orientation similarity measure in a stochastic search framework to estimate the pose of the individual.
2. The method of claim 1, wherein the frontal face detector is adapted to locate the head region of the said individual using a plurality of Haar features.
3. The method of claim 1, wherein the said at least one angle between at least two body parts is estimated using stochastic search iterations through Orientation Similarity Maximization along the outline of the 2D human body model placed on the image.
4. The method of claim 1, wherein the said at least one image co-ordinate of at least one joint between at least two body parts of the individual is determined using anthropometrical ratio data.
5. The method of claim 1, wherein the image comprises one of a color image, a black and white image, or a monochrome image.
6. A system for estimating at least one upper body pose of at least one individual in a single image, the system comprising:
a sensor for capturing at least one real-time image associated with at least one individual; and
a processor adapted to the sensor for analyzing the captured image in real-time for estimating the at least one upper body pose of the said individual.
7. The system of claim 6, wherein the sensor comprises a color camera or at least one interference sensor.