
Dynamic Parallel Program Development System And The Method Thereof

Abstract: The present invention describes a dynamic parallel program development system and the method thereof. In one embodiment, the system includes a means for receiving a program source code (partial or complete); a means for analyzing the program source code during write time at the end of each logical program segment or at predefined check points to determine data dependency and execution time of each program segment; means for generating program segments based on the analysis of the program source code; means for dynamically segregating each of the program segments to one of the cores in a multi-core system for parallel execution; and means for displaying results associated with the one or more program segments.


Patent Information

Application #
Filing Date
07 May 2010
Publication Number
20/2012
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

KPIT CUMMINS INFOSYSTEMS LTD
35 & 36, RAJIV GANDHI INFOTECH PARK, PHASE I, MIDC, HINJEWADI, PUNE-411 057, INDIA

Inventors

1. DR. VAIDYA VINAY GOVIND
108, PRATHAMESH PARK, BALEWADI ROAD, BANER, PUNE - 411045
2. MR. SAH SUDHAKAR
C/O LAKSHMI SAH, NEW KISHORE GANJ ROAD NO. 6, HARMOO ROAD, RANCHI JHARKHAND - 834001

Specification

FORM-2
THE PATENTS ACT, 1970
(39 OF 1970)
COMPLETE SPECIFICATION
(See Section 10)
TITLE OF INVENTION
"Dynamic Parallel Program Development System and the Method thereof"

(a) KPIT Cummins Infosystems Limited

(b) a company registered under the Companies Act, 1956 and

(c) having its office at 35 & 36 Rajiv Gandhi Infotech Park, Phase 1, MIDC, Hinjewadi, Pune 411057, India

The following specification particularly describes the nature of the invention

FIELD OF THE INVENTION
The present invention generally relates to parallel programming. Particularly, the present invention relates to a dynamic parallel program development system and method thereof.
BACKGROUND OF THE INVENTION
In recent years, parallel computing and the parallel programming paradigm have gained high momentum. With the development of parallel computing and the need for high computing resources, the concept of the multi-core processor has been introduced. Multi-core processors include two or more processing elements, or cores, built into the same chip or die. These processors are the answer to the constant demand for increased processing capability while keeping power consumption and heat dissipation to a minimum. These processors are available today even for general-purpose computing, as opposed to special segments of applications. Hence, it is important to utilize the available processing power to make applications execute faster. The multi-core revolution has definitely taken the market by storm; however, it may pose newer problems to the software community.
Programmers need to exploit the computing resources provided by multi-core processors. For maximum and efficient utilization of multi-core processing power, programmers need to adopt the concept of parallel programming. Programmers and software architects are expected to think in parallel, as opposed to the traditional sequential thinking. So, there is an unavoidable need for multi-threaded programs which can best utilize the capabilities of multi-core processors. Programmers today generally do not exploit the full potential of parallel programming concepts. In order to develop a program which can execute concurrently, programmers need to gain specific skills.
The prior art literature describes the parallelization of pre-written sequential codes and the execution of such parallel programs on multi-core processors. The method of converting a sequential program into a parallel program is described by the prior art; however, it is capable of converting only an already written sequential program. In general, the prior art focuses only on sequential code which is already written. Moreover, programmers who are used to writing sequential code need input and assistance for writing parallel code.
SUMMARY OF THE INVENTION
The primary object of the present invention is to provide a dynamic parallel program development system and method thereof. In one aspect, the present invention provides a dynamic parallel program development system including means for receiving a program created by a programmer, and means for analyzing the program during write time at the end of each logical program segment or at any pre-defined checkpoint to determine data dependency and approximate execution time of each program segment. The system also includes means for generating one or more program segments based on the analysis of the program, and means for dynamically segregating each of the program segments to one of cores in a multi-core system for parallel execution. Moreover, the system includes means for displaying results or intermediate information associated with the program segments from the cores.
Accordingly, the present invention relates to the system, wherein the means for receiving the program and the means for displaying the results or intermediate information include an N-depth window, where the N-depth window comprises a window for the sequential program and multiple windows for parallel code segments, each of the parallel code windows corresponding to one core.
In another aspect, the present invention provides a method for generating a dynamic parallel program during write time of a program, for a computing platform consisting of multiple processors which share main memory and/or cache memory, for the purpose of increasing the execution efficiency of the multiple processors while reducing the amount of data transfer from the main memory to the multiple processors.
Accordingly, the present invention relates to a method including the steps of analyzing code during write time of a program to determine data dependency of the code, and estimating an approximate time for executing the code. The method also includes segregating a portion of the code to multiple processors as per the data dependency of the code, until the completion of the whole program, by analyzing the program at the end of each logical segment or at pre-defined checkpoints, reanalyzing the whole program for obtaining accurate execution and dependency information, and re-arranging a portion of the program for the multiple processors.
Accordingly, the present invention relates to a method wherein the program analysis is repeated after a predefined number of lines of source code, a predefined time interval and/or the end of a logical segment, and the code is segregated to the multiple processors. Further, the segregation of code is rearranged based on newly added code lines, deleted code and/or modified code.
In yet another aspect, the present invention provides optional tips to a programmer, such as avoiding the use of a particular variable or set of variables if possible so as to increase the possibility of parallelism, or suggests that the programmer shuffle the code segments between one or more of the multiple processors.
These and other objects, features and advantages of the present invention will become more apparent from the ensuing detailed description of the invention taken in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF DRAWINGS
Example embodiments are illustrated by way of example and not limitation in the accompanying figures, in which like references indicate similar elements and in which:

Figure 1 is a block diagram of a dynamic parallel program development system, in accordance with the present invention.
Figure 2 is a flowchart illustrating an exemplary method for dynamic parallel programming, in accordance with the present invention.
Figure 3 is a schematic visual representation of an input module interface, in accordance with the present invention.
Figure 4 is a schematic block diagram illustrating segregation of code in a distributed architecture, in accordance with the present invention.
Figure 5 is a schematic block diagram illustrating segregation of code in a single processor having multiple cores, in accordance with the present invention.
Other features of the present embodiments will be apparent from the accompanying drawings and from the detailed description that follows.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The primary object of the present invention is to provide a dynamic parallel program development system and method thereof. The preferred embodiments of the present invention are now explained with reference to the accompanying drawings. It should be understood, however, that the disclosed embodiments are merely exemplary of the invention, which may be embodied in various forms. The following description and figures are not to be construed as limiting the invention; numerous specific details are described to provide a thorough understanding of the present invention, as the basis for the claims and as a basis for teaching one skilled in the art how to make and/or use the invention. However, in certain instances, well-known or conventional details are not described so as not to unnecessarily obscure the present invention.

The terms "program source code", "program" and "code" are used interchangeably throughout the document.
In one embodiment, the dynamic parallel program development system enables programmers to dynamically write a parallel program. Further, the present invention provides a method to schedule program segments of code to multiple processors during write time analysis. The program is made up of one or more lines of code. The multi-core processor system is taken as an example to describe the present invention; however, the present invention can be extended to a system having a distributed and/or parallel computing architecture. The multi-core or multi-processor system is made up of one or more cores or processors, which share main memory and/or cache memory. Further, it is appreciated that the present invention is applicable to a single processor having multiple cores.
Figure 1 is a block diagram of a dynamic parallel program development system, in accordance with the present invention. Particularly, the dynamic parallel program development system (100) includes an input module (102), an analyzer module (104) and an output module (106). In one embodiment, the input module (102) includes options to a programmer for typing a program (108), making incremental changes (110) and optional user accept/reject of automatic changes (112). Further, the analyzer module (104) includes an incremental dependency analyzer (114), an incremental profiler (116), an incremental code restructurer (118), an incremental analyzer (120), an incremental core assignment modifier (122) and a concurrent debugging information unit (124). Furthermore, the output module (106) includes a data flow unit (126), a control flow unit (128), an additional parallelization tips unit (130), an intermediate parallel code unit (132) and an intermediate parallel code schedule unit (134). The method performed by the system (100) is illustrated in Figure 2.
Figure 2 is a flow chart (200) illustrating an exemplary method of dynamic parallel programming, in accordance with the present invention. At step 202, a programmer starts writing sequential code through the input module or interface (102). Further, as soon as the programmer starts writing the sequential code, the sequential code is analyzed depending on a trigger that is generated either at the end of each logical segment of code, after a pre-determined time interval has elapsed, and/or after a pre-defined number of lines of code has been written, as shown in step 204. In one exemplary embodiment, the end of the logical segment of code could be the end of a control block, a module or function, a function call, a loop, or another suitable logical end of the program based on write time profiling. Further, after each logical segment, there are three possible operations performed by any programmer: addition of new code, deletion of any code, or modification of existing code.
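The trigger conditions of step 204 can be sketched as a small stateful check. The following is an illustrative assumption, not the specification's implementation: the class name `AnalysisTrigger` and the closing-brace test standing in for "end of a logical segment" are both hypothetical.

```python
import time

class AnalysisTrigger:
    """Fire a write-time re-analysis when any condition of step 204 is
    met: end of a logical segment, N new lines, or T seconds elapsed."""
    def __init__(self, max_lines=20, max_seconds=30.0):
        self.max_lines, self.max_seconds = max_lines, max_seconds
        self.lines, self.last = 0, time.monotonic()

    def on_line(self, line: str) -> bool:
        self.lines += 1
        # A closing brace stands in for "end of a logical segment"
        # (end of a control block, function, or loop).
        logical_end = line.rstrip().endswith("}")
        fire = (logical_end
                or self.lines >= self.max_lines
                or time.monotonic() - self.last >= self.max_seconds)
        if fire:  # reset the counters once a re-analysis has been triggered
            self.lines, self.last = 0, time.monotonic()
        return fire
```

Each editor keystroke or committed line would feed `on_line`, and a `True` result would kick off the incremental analysis described below.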
At step 206, the program segment is analyzed based on the data dependency of the sequential code. The method enables dynamic analysis of the sequential code during code writing, as it takes care not only of the code being added but also of the deleted code and the modified code through incremental dynamic data dependency analysis. In one embodiment, the incremental dynamic data dependency analysis consists of incremental and dynamic call graph generation, incremental and dynamic side effect analysis, incremental and dynamic approximate profiling, incremental and dynamic alias analysis, incremental and dynamic data flow analysis, and incremental and dynamic control flow analysis. The incremental and dynamic analysis is the data dependency analysis used for write time parallelization. Thus, the program is reanalyzed after the above mentioned trigger is generated, so as to shuffle the code segments among the available cores/processors to increase the possibility of parallelism for the program to be written further.
In one embodiment, analysis of the program is achieved by determining data dependency using static analysis of the sequential code. The static analysis finds the usage of variables throughout the program to identify any modification to a variable, and how other code segments can be executed without affecting a particular variable's value at some point, while keeping the functionality the same.
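The variable-usage check described above can be sketched, for Python source, with the standard `ast` module; the helper names `reads_writes` and `depends_on` are illustrative assumptions, and a real write-time analyzer would also handle aliasing and side effects as noted earlier.

```python
import ast

def reads_writes(segment: str):
    """Collect the variables a code segment reads and writes,
    using Python's ast module as a stand-in for write-time analysis."""
    tree = ast.parse(segment)
    reads, writes = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                writes.add(node.id)
            else:
                reads.add(node.id)
    return reads, writes

def depends_on(later: str, earlier: str) -> bool:
    """A later segment depends on an earlier one if it reads
    (or overwrites) a variable the earlier segment writes."""
    later_reads, later_writes = reads_writes(later)
    _, earlier_writes = reads_writes(earlier)
    return bool(earlier_writes & (later_reads | later_writes))
```

Two segments for which `depends_on` is false can, under this simplified model, be placed on different cores.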

Further, at step 208, a check for data dependency is performed. If there is any data dependency in the code, the code is marked as sequential code and is allocated to a first core, as shown in step 210. Further, if there is any data dependency between the code and a code segment in the first core, the particular core is found where the variable used in the current code is used most frequently, as shown in steps 212 and 214. In other words, a decision about sending code to different cores is made by considering approximate profiling to determine whether all the cores are busy for almost the same period of time. This enables scheduling segments of code on separate cores. In one exemplary embodiment, approximate profiling is done at the time of writing the program by considering the number of arithmetic and logical operations, loop depth and iterations, array size, etc., to decide parallelization dynamically.
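The core-selection heuristic of steps 212 and 214 might be sketched as follows. Scoring a core by how often the new segment's variables already appear there, and falling back to the least-loaded core, are assumptions for illustration; the specification does not fix the exact scoring rule.

```python
from collections import Counter

def pick_core(segment_vars, core_assignments):
    """Sketch of steps 212/214: score each core by how often the new
    segment's variables already appear in the segments placed on it;
    fall back to the least-loaded core when nothing is shared.
    core_assignments maps core id -> list of variable-name lists."""
    scores = Counter()
    for core, segments in core_assignments.items():
        for vars_used in segments:
            scores[core] += len(set(vars_used) & set(segment_vars))
    if not scores or max(scores.values()) == 0:
        # No shared variables: balance load across cores instead.
        return min(core_assignments, key=lambda c: len(core_assignments[c]))
    return max(scores, key=scores.get)
```

Placing a segment next to the segments that touch the same variables keeps communicating code together, which is the stated goal of step 214.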
Similarly, the sequential code written by the programmer is converted to a parallel program dynamically, and the sequential code is segregated to the multi-core system at write time. The same process is repeated for any number of additional lines of code or any modification of the written code, as shown in step 216. Further, if there is no modification to the code or the program has ended, the code is re-analyzed and the whole program is analyzed again to obtain accurate execution information, as shown in step 218. Further, as in step 220, the concurrent code segments are re-arranged on different cores based on the final code analysis and final profiling.
Alternatively, the present invention provides a manual option for the programmer to analyze parallelization. Particularly, the present invention provides the additional option of manually enabling the parallelization analysis by clicking on a manual analysis button provided in the user interface, or by using a menu option or a combination of key shortcuts. This facilitates modifying the analysis checkpoint as and when required by the programmer.
In one embodiment, the write time program profiling is described, wherein the profiling method employed in the present invention profiles the code after the logical end of the code by static analysis of the following:
► Counting the number of cycles for arithmetic operations
► Counting the loop depth and number of iterations
► Counting memory operations and approximating the number of cycles required for the same
► Computing the approximate number of cycles for branch statements, which take maximum time
► Computing the approximate number of cycles required for looping and/or recursive statements
► Computing the approximate number of cycles for a function based on the above information and the profiling of functions called inside
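A write-time profiler of the kind listed above can be approximated by walking the syntax tree and weighting each operation. The cycle weights in `WEIGHTS` and the default trip count of 10 below are purely illustrative assumptions; real profiling would use target-specific numbers.

```python
import ast

# Illustrative per-operation cycle weights; a real write-time profiler
# would use processor-specific numbers.
WEIGHTS = {"binop": 1, "compare": 1, "call": 10, "subscript": 2}

def approx_cycles(code: str) -> int:
    """Statically estimate cycles for a code segment, multiplying loop
    bodies by their literal trip count when `range(N)` makes it known."""
    def visit(node, factor=1):
        total = 0
        if isinstance(node, ast.BinOp):
            total += WEIGHTS["binop"] * factor
        elif isinstance(node, ast.Compare):
            total += WEIGHTS["compare"] * factor
        elif isinstance(node, ast.Call):
            total += WEIGHTS["call"] * factor
        elif isinstance(node, ast.Subscript):
            total += WEIGHTS["subscript"] * factor
        if isinstance(node, ast.For):
            trips = 10  # assumed default when the trip count is unknown
            it = node.iter
            if (isinstance(it, ast.Call) and isinstance(it.func, ast.Name)
                    and it.func.id == "range" and it.args
                    and isinstance(it.args[-1], ast.Constant)):
                trips = it.args[-1].value
            total += visit(it, factor)              # iterator built once
            for stmt in node.body:                  # body repeats
                total += visit(stmt, factor * trips)
        else:
            for child in ast.iter_child_nodes(node):
                total += visit(child, factor)
        return total
    return visit(ast.parse(code))
```

Segments with the largest estimates would, per the description that follows, be the first candidates for division among cores.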
Further, after the next trigger, the program again tries to modify the profiling information based on added, deleted or modified code. It is noted that there are many other factors considered for computing approximate program profiling at the time of program writing, which is useful in sending code segments to different cores. Further, the part of the code, within the selection of code, which takes more time is considered first for division among the cores. In one embodiment, the approximate call graph creation tracks all functions being declared, defined and/or called to show the complete and incomplete sections of the function call graph. Further, the program analysis information is stored in the form of internal data structures and used when the programmer writes further parts of the program.
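The approximate call graph just described can also be sketched with the `ast` module. Reporting calls to not-yet-defined functions marks the "incomplete" sections of the graph; the helper name `call_graph` is an assumption, not from the specification.

```python
import ast

def call_graph(code: str):
    """Build an approximate call graph from (possibly partial) source:
    map each defined function to the names it calls, and report calls
    to functions with no definition yet, i.e. incomplete sections."""
    tree = ast.parse(code)
    defined, graph = set(), {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            defined.add(node.name)
            graph[node.name] = {
                n.func.id for n in ast.walk(node)
                if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
            }
    missing = set().union(*graph.values()) - defined if graph else set()
    return graph, missing
```

As the programmer fills in the missing definitions, re-running `call_graph` after each trigger keeps the graph current, which matches the incremental update described above.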
In one example implementation, the output generated is represented in the form of a user interface with multiple windows representing processors/cores. Additionally, the output unit uses a highlighting or coloring scheme to represent the separate logical segments and/or blocks of code that are mapped to individual cores/processors, so that within the sequential code the blocks assigned to different cores can be distinguished by color. Additionally, at any point of time, the described method maintains two versions of the code: first, the actual code, which is completely sequential; and second, the parallel segments of code for each core.
In general, the method described in the present invention estimates approximate program profiling time, for code segments or the whole program, at the time of program writing. This profiling is called incremental profiling, as the profiling data keeps changing based on newly added code segments. Further, the present invention enables programmers to look at the content of the program for each core at the time of writing, which gives them an option to move code segments between cores. Furthermore, the present invention enables the programmer to transfer segments of code from one core to another core of a multi-core system to increase parallelism in a program.
Figure 3 is a schematic visual representation of an input module interface, in accordance with the present invention. Particularly, the input module interface includes a window (300) for both sequential code and parallel code. The sequential code window provides an interface for the programmer to write sequential code (302) and the multiple parallel code windows are used to view contents of multiple selected cores (304a, 304b, 304c and 304d) as shown in Figure 3. Further, the programmer has an option to select any number of cores.
In one example implementation, the number of cores selected by the programmer is four (304a, 304b, 304c and 304d), as shown in Figure 3. The four windows (304a, 304b, 304c and 304d) that represent the segments of code are divided into four different parts while the programmer is writing the program. The code written by the programmer is sent to the main window intended for writing the sequential program. First, module m1 is written, which uses var1 and var2. m1 is kept in core (304a) and displayed in its corresponding window.
Further, the code m1 is analyzed to identify the variables used inside it. This analysis is used further while writing the next modules. Upon completion of writing of code m2, the program identifies that code m1 and code m2 are completely independent. At this point of time, the system asks the programmer to shift the code m2 to core (304b). Upon shifting the code m2, the code is displayed in the corresponding window. If the programmer does not desire to shift, then the code m2 is retained in the core (304a). Similarly, the next set of codes is moved to the core (304c) and the core (304d) accordingly. Further, once writing of the program is finished, the whole program is reanalyzed to find out the time of execution of the various modules or codes. Based on this information, the code segments are again swapped among the various cores if required.
In one embodiment, the present invention provides a user interface through a number of windows, visible to the programmer, called an N-depth window, for example a quad depth window. Further, the cores are named in such a way that it looks as if each core has four sub-cores; however, all the cores are independent, as depicted in Table 2, wherein C represents a core.

Actual Core number   New convention   Further levels
C1                   C1               C1.1
C2                   C1               C1.2
C3                   C1               C1.3
C4                   C1               C1.4
C5                   C2               C2.1
C6                   C2               C2.2
C7                   C2               C2.3
C8                   C2               C2.4
C9                   C3               C3.1
C10                  C3               C3.2
C11                  C3               C3.3
C12                  C3               C3.4
C13                  C4               C4.1
C14                  C4               C4.2
C15                  C4               C4.3
C16                  C4               C4.4
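The renaming in the table is mechanical and can be computed for any flat core number. This small sketch assumes groups of four, as in the quad depth example; the function name `quad_name` is illustrative.

```python
def quad_name(core: int, group_size: int = 4) -> str:
    """Map a flat 1-based core number to the N-depth convention of the
    table above: cores are grouped in fours, so core 6 becomes C2.2."""
    group = (core - 1) // group_size + 1
    member = (core - 1) % group_size + 1
    return f"C{group}.{member}"
```

The same formula extends to deeper nesting or larger groups simply by changing `group_size`, matching the remark below that the quad depth concept scales with the number of cores.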
In one exemplary embodiment, the first four cores are shown in one screen, as in Figure 3, enabling the programmer to view the contents of the cores (304b), (304c) and (304d). Also, the content of each core can be viewed by the programmer. Further, the naming groups of the cores can be changed by the programmer, and the naming convention also guides the programmer to keep code using similar data structures logically near the same quad of cores. It is noted that the quad depth window concept can be extended with a further increase in the number of cores. Further, the programmer can view any number of cores at a time; four cores is one such example, and the scope of the present invention varies from 1 to N windows, where N is a positive integer which represents the number of cores.
In another embodiment, the present invention is applicable for segregation of pre-written or completed code to available cores, wherein the code is statically analyzed and segregated to the available cores.
Figure 4 is a schematic block diagram illustrating segregation of code in a distributed architecture, in accordance with the present invention. In one exemplary embodiment, the distributed architecture can include one or more processors (processor 1, processor 2, processor 3 and processor 4) having multiple cores such as C1, C2, C3 and C4. In one embodiment, the code (segment 1 to segment 12) is segregated in such a way that the communication time for code execution is minimized and maximum performance is achieved from the cluster of such processors as shown in Figure 4.
Figure 5 is a schematic block diagram illustrating segregation of code in a single processor having multiple cores, in accordance with the present invention. In one embodiment, dynamic multi-threaded application generation is described. Particularly, the code is segregated in such a way that the same code is executed on multiple cores on multiple threads (thread 1 to thread n, as shown in Figure 5). This is advantageous for loop-level parallelization, wherein the loop takes the maximum amount of time to execute but the iterations are independent. Further, the method identifies such loops and tries to segregate individual iterations in such a way as to minimize the execution time while keeping the inter-processor communication time as low as possible.
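Segregating independent loop iterations across threads, as described above, can be sketched with chunked thread pools. The name `parallel_loop` is illustrative, and the contiguous-chunk split is just one of several possible iteration assignments; it is chosen here because it keeps each thread's work local.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_loop(body, n_iterations: int, n_threads: int = 4):
    """Run independent loop iterations across threads, splitting the
    iteration space into contiguous chunks (one chunk per thread)."""
    def run_chunk(start, stop):
        return [body(i) for i in range(start, stop)]
    chunk = -(-n_iterations // n_threads)  # ceiling division
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        futures = [pool.submit(run_chunk, s, min(s + chunk, n_iterations))
                   for s in range(0, n_iterations, chunk)]
        results = []
        for f in futures:  # collect chunks in order to preserve ordering
            results.extend(f.result())
    return results
```

Because the chunks are collected in submission order, the combined result matches the sequential loop, which is the correctness condition for this kind of transformation.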
In summary, dynamic parallel programming is achieved through data dependency checks at the end of each logical segment of code and highlighting of segments of code using said data. Further, an N-window schema for the N cores/processors of a multi-core/multi-processor system is provided. Alternatively, it provides a user interface to assign or map an already mapped code segment to another core of a multi-core system. This is supported by providing online assistance to give more information to the user about parallelization possibilities, hence increasing the chances of code parallelization.
Further, the system described in the present invention provides decision-taking capability in case the programmer is not an expert in parallel programming. Further, the approximate time of execution of the code is based on write time approximate profiling, which is in turn based on the approximate number of cycles consumed to execute the code. Furthermore, the code segments are segregated onto multiple cores based on the availability of the cores. In addition, the present invention provides for rearranging the cores and code segments to enable more parallelization for the code to be written next.
The final profiling and analysis of the code is performed to finalize the segments of code to be executed by each core. Further, the segments of code are scheduled to multiple cores such that maximal efficiency can be achieved from each core. Also, the present invention is applicable to single processor architecture. The program analysis block of the parallel program development system is capable of converting the code to a multithreaded program in addition to converting the program to multiple tasks. The program can be converted to a multithreaded application, wherein the same block of code can be executed on different cores of a single processor (multithreading). Primarily, the system can convert sequential code being developed to parallel code which consists of different blocks of code that can be executed concurrently.
Alternatively, the system provides tips to the programmer, such as avoiding the use of a particular variable or set of variables if possible so as to increase the parallelism possibility, or can suggest that the user shuffle the program segments amongst the multiple available cores. Further, in case of absence of any input from the programmer, it will wait for a predefined unit of time and then take a decision on its own, thereby functioning automatically.
It is advantageous that the present invention enables a programmer to write a parallel program with ease. The described system has the ability to convert the sequential code being developed into parallel code without any user intervention, even when the programmer is completely unaware of parallel programming concepts. Even though the described system is automated, optionally, the present system can assist programmers who have parallel programming knowledge in increasing the parallelism of the code being developed. The system as described makes parallel program development very easy for the programmer, as it can analyze and remember the usage of variables easily.
The present invention applies methods of data dependency checking at the time of writing the code, which include write time call graph creation, write time side effect analysis, write time alias analysis and write time inter procedural analysis. Further, the information is updated at each analysis point based on newly added, deleted or modified code. Also, the present invention enables the ability to type code in one window and have it split into parallel code. Different cores of a multi-processor system are mapped to multiple windows, and code is automatically transferred from one window to another. This method can be used for parallelization with write time profiling and code screening for data dependency.
In general, the present invention describes a method to provide parallelism by detecting and analyzing at the time of writing the program and without any need of programmer intervention. The method is used for, but not limited to, multi-core processors and can be extended to distributed and/or parallel computing architectures. It is evident that the present invention and its advantages are not limited to the above described embodiments only. Minor modifications, substitutions and equivalents will be apparent to those skilled in the art without departing from the spirit and scope of the present invention as described in the claims. Accordingly, the specification and figures are to be regarded as illustrative examples of the invention, rather than in a restrictive sense.

We Claim:
1. A dynamic intelligent parallel program development system comprising:
a means for receiving program source code as input;
a means for analyzing the program source code during write time to determine optimized allocation of the program source code to a plurality of cores of a multi-core processor;
a means for optimally generating one or more program segments based on the analyzed program source code;
a means for dynamically segregating the generated program segments and allocating the generated program segments to the plurality of cores in a multi-core system for parallel execution;
a means for executing individual program segments in each of the selected cores; and
a means for displaying results or intermediate information of execution of the program segments that are executed in each of the selected cores.
2. The system of claim 1, wherein the program source code is a sequentially written program source code or a partial or complete program source code.
3. The system of claim 1, wherein the means for receiving the program source code and the means for displaying the results or intermediate information comprise an N-depth window, wherein the N-depth window further comprises at least one window for displaying the sequential program source code and multiple windows for displaying parallel program segments, where each of the multiple windows associated with the program segments corresponds to a single core.
4. The system of claim 3, wherein the window for displaying the sequential program source code is configured for the programmer to write program source code, and the multiple windows associated with the parallel program segments are configured to display the content of the code segregated onto the multiple cores.

5. The system of claim 1, wherein the system can be implemented in a single processor system with multiple threads, as the analysis performed by the system can convert the sequential program source code into a multithreaded program and multiple concurrent tasks.
6. The system of claim 1, wherein the optimized allocation of the program source code to the plurality of cores of the multi-core processor is performed dynamically at a plurality of intervals comprising completion of entering each logical program segment, any pre-defined check points, and manual invocation of the allocation by the programmer.
7. A method for generating a dynamic parallel program during write time of a program, for a computing platform configured of multiple processors which share main memory or cache memory, the method comprising the steps of:
analyzing code during write time of a program to determine data dependency of the code, and approximate execution time of the code;
segregating the portion of the code to multiple cores as per data dependency of the code until the completion of the whole program at the end of each logical program segment or at any pre-defined check point;
analyzing the whole program repeatedly for obtaining accurate execution and dependency information; and
re-arranging the portion of the program segments for the multiple cores.
8. The method of claim 7, wherein the program analysis is repeated after a predefined number of lines of the code, predefined time interval and/or logical end of code segment.
9. The method of claim 7, further comprising providing tips to the programmer such as avoiding use of a particular variable or set of variables if possible so as to increase the parallelism possibility or suggesting the programmer to shuffle the code segments amongst multiple available cores.

10. The method of claim 7, further comprising facilitating the programmer to segregate segments of the code from one core to another core in a multi-core system.
11. The method of claim 10, further comprising automatically validating movement of code from one core to another core, automatically rearranging the segregation of code based on data dependency analysis, and providing a manual option for the programmer for analyzing at any check point defined by the programmer through a manual analysis button provided on a user interface.
12. The method of claim 10, wherein the segregation of code is re-arranged based on at least one of a newly added code line, deleted code, and modified code.
13. The method of claim 7, wherein data dependency analysis at the time of writing the code comprises write time call graph creation, write time side effect analysis, write time alias analysis, and write time inter-procedural analysis.
14. An article comprising a computer readable storage medium having instructions that, when executed by a computing platform, perform a method comprising the steps of:
analyzing code during write time of a program, analyzing data dependency of the code, and analyzing an approximate execution time of the code;
segregating a portion of the code to multiple cores as per the data dependency of the code until the completion of the whole program at the end of each logical program segment or at any pre-defined check point;
repeatedly analyzing the whole program for a predetermined number of times for obtaining accurate execution and dependency information; and
re-arranging the portion of the program for the multiple cores.

15. A dynamic parallel program development system and the method thereof substantially as described herein with reference to the accompanying drawings.
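The method of claims 7 and 14 (write-time data dependency analysis, approximate execution-time estimation, and segregation of program segments to multiple cores) can be illustrated with a minimal sketch. This is not the patented implementation; the segment names, the toy reads/writes dependency model, and the greedy least-loaded-core placement are hypothetical simplifications chosen for illustration only.

```python
# Illustrative sketch of write-time dependency analysis and assignment of
# program segments to cores, assuming a toy model in which each logical
# program segment declares the variables it reads and writes and carries
# an approximate execution-time estimate.
from dataclasses import dataclass, field

@dataclass
class Segment:
    name: str
    reads: set            # variables the segment reads
    writes: set           # variables the segment writes
    est_time: float       # approximate execution time (as in claim 7)
    deps: list = field(default_factory=list)  # names of prerequisite segments

def build_dependencies(segments):
    """Record a dependency whenever a later segment touches a variable an
    earlier segment writes, or writes a variable an earlier segment reads
    (flow, output, and anti dependences)."""
    for i, later in enumerate(segments):
        for earlier in segments[:i]:
            if (earlier.writes & (later.reads | later.writes)) or \
               (earlier.reads & later.writes):
                later.deps.append(earlier.name)
    return segments

def assign_to_cores(segments, n_cores):
    """Greedy placement: an independent segment goes to the currently
    least-loaded core; a dependent segment is co-located with its first
    prerequisite so shared data stays on one core."""
    load = [0.0] * n_cores
    placement = {}
    for seg in segments:
        if seg.deps:
            core = placement[seg.deps[0]]
        else:
            core = min(range(n_cores), key=load.__getitem__)
        placement[seg.name] = core
        load[core] += seg.est_time
    return placement, load
```

Re-running `build_dependencies` and `assign_to_cores` after each newly entered segment (or at a check point) mirrors the repeated write-time re-analysis and re-arrangement recited in claims 7, 8, and 12.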

Documents

Orders

Section Controller Decision Date
section-15 santosh mehtry 2017-06-23

Application Documents

# Name Date
1 1444-MUM-2010-Abstract-071215.pdf 2018-08-10
2 1444-mum-2010-abstract.pdf 2018-08-10
3 1444-MUM-2010-Amended Pages Of Specification-071215.pdf 2018-08-10
4 1444-MUM-2010-CERTIFICATE OF INCORPORATION(17-1-2014).pdf 2018-08-10
5 1444-MUM-2010-Claims-071215.pdf 2018-08-10
6 1444-mum-2010-claims.pdf 2018-08-10
7 1444-MUM-2010-CORRESPONDENCE(IPO)-(20-11-2010).pdf 2010-11-20
8 1444-MUM-2010-CORRESPONDENCE(IPO)-(DECISION)-(23-6-2017).pdf 2018-08-10
9 1444-MUM-2010-CORRESPONDENCE(IPO)-(FER)-(18-12-2014).pdf 2014-12-18
10 1444-MUM-2010-Correspondence-180915.pdf 2018-08-10
11 1444-mum-2010-description(complete).pdf 2018-08-10
12 Description(Complete) [20-09-2016(online)].pdf 2016-09-20
13 1444-MUM-2010-Drawing-071215.pdf 2018-08-10
14 1444-MUM-2010-Examination Report Reply Recieved-071215.pdf 2018-08-10
15 1444-MUM-2010_EXAMREPORT.pdf 2018-08-10
16 1444-MUM-2010-Form 1-071215.pdf 2018-08-10
17 1444-mum-2010-form 1.pdf 2018-08-10
18 1444-MUM-2010-FORM 13(17-1-2014).pdf 2018-08-10
19 Form 13 [20-09-2016(online)].pdf 2016-09-20
20 1444-MUM-2010-FORM 18.pdf 2018-08-10
21 1444-MUM-2010-FORM 18(22-06-2011).pdf 2011-06-22
22 1444-mum-2010-form 2.pdf 2018-08-10
23 1444-mum-2010-form 2(title page).pdf 2018-08-10
24 1444-MUM-2010-Form 2(Title Page)-071215.pdf 2018-08-10
25 1444-mum-2010-form 5.pdf 2018-08-10
26 1444-MUM-2010-Form 5-071215.pdf 2018-08-10
27 1444-MUM-2010-FORM 9(22-06-2011).pdf 2011-06-22
28 1444-MUM-2010-MARKED COPY-071215.pdf 2018-08-10
29 Other Document [20-09-2016(online)].pdf 2016-09-20
30 1444-MUM-2010-original under rule 6(1 A)Correspondence-271216.pdf 2018-08-10
31 1444-MUM-2010-original under rule 6(1 A) Power of Attorney-271216.pdf 2018-08-10
32 1444-MUM-2010-Power of Attorney-071215.pdf 2018-08-10
33 1444-MUM-2010-Power of Attorney-180915.pdf 2018-08-10