Abstract: The present disclosure relates to a system (102) and method (600) for determining a posture of a robotic manipulator (200). The system (102) receives a target posture of the robotic manipulator (200) using a User Interface (UI) (108), segments the robotic manipulator (200) into modules upon receiving the target posture and divides the target posture into posture components corresponding to each of the segmented modules. Further, the system (102) transforms the posture components into coordinates upon the division of the target posture, predicts joint angles for each of the modules using a machine learning model based on the transformation and determines a configuration for the robotic manipulator (200) based on the predicted joint angles for orienting the robotic manipulator (200) in the target posture. The system (102) provides an efficient inverse kinematics (IK) computation for the robotic manipulator (200) with multiple degrees of freedom (DoF).
Description:
TECHNICAL FIELD
[0001] The present disclosure generally relates to robotic systems, and more particularly relates to a system and method for determining a posture of a robotic manipulator, thereby enabling efficient and accurate inverse kinematics (IK) computation for the robotic manipulator with multiple degrees of freedom (DoF).
BACKGROUND
[0002] In robotic systems, determining the posture of a manipulator involves computing the joint angles required to achieve a desired position and orientation. The process of computing joint angles to obtain a desired position and orientation is known as inverse kinematics (IK).
[0003] Existing methods for solving IK include numerical approaches such as Jacobian-based algorithms, optimization techniques, and data-driven machine learning models. While numerical methods are effective for simple robotic structures, they often suffer from inefficiencies, slow convergence, and instability when applied to robots with high degrees of freedom (DoF).
[0004] Although some existing methods utilize machine learning for IK solutions, they often require large datasets and high computational power, making the methods memory-intensive and difficult to scale for real-time applications. Additionally, many traditional approaches treat the robotic manipulator as a monolithic structure, leading to increased complexity in training and evaluation.
[0005] Therefore, there is a need to address at least the above-mentioned drawbacks and any other shortcomings, or at the very least, provide a valuable alternative to the existing methods and systems.
OBJECTS OF THE PRESENT DISCLOSURE
[0006] An object of the present disclosure relates to a system and method for determining a posture of a robotic manipulator, thereby enabling efficient and accurate computation of joint angles using modular inverse kinematics.
[0007] Another object of the present disclosure is to provide a modular decomposition technique to segment the robotic manipulator into independent modules, thereby simplifying kinematic calculations.
[0008] Another object of the present disclosure is to develop individual machine learning models for each module of the robotic manipulator, thereby improving computational efficiency and allowing parallelized training for different modules.
[0009] Another object of the present disclosure is to provide an integrated rule-based coordinate frame assignment for each module, thereby ensuring optimal coordinate transformation for improved posture determination.
[0010] Yet another object of the present disclosure is to provide a data generation method, thereby reducing redundancy and optimizing memory efficiency in training data.
SUMMARY
[0011] Aspects of the present disclosure generally relate to robotic systems, and more particularly relates to a system and method for determining a posture of a robotic manipulator, thereby enabling efficient and accurate inverse kinematics (IK) computation for the robotic manipulator with multiple degrees of freedom (DoF).
[0012] An aspect of the present disclosure relates to the method for determining the posture of the robotic manipulator. The method may include receiving, by one or more processors associated with a system, a target posture of the robotic manipulator using a User Interface (UI) associated with the system and segmenting, by the one or more processors, the robotic manipulator into one or more modules upon receiving the target posture. Further, the method may include dividing, by the one or more processors, the target posture into one or more posture components corresponding to each of the segmented one or more modules, transforming, by the one or more processors, the one or more posture components into coordinates upon the division of the target posture and predicting, by the one or more processors, joint angles for each of the one or more modules using a machine learning model associated with the system based on the transformation of the one or more posture components into the coordinates. Further, the method may include determining, by the one or more processors, a configuration for the robotic manipulator based on the predicted joint angles for orienting the robotic manipulator in the target posture.
[0013] In an embodiment, the one or more modules may include any one or a combination of a base module, an arm module, and a wrist module.
[0014] In an embodiment, the one or more posture components may include any one or a combination of a rotation of the base module, an extension length of the arm module and an angle of the wrist module.
[0015] In an embodiment, the segmenting, by the one or more processors, the robotic manipulator into the one or more modules upon receiving the target posture may include identifying, by the one or more processors, one or more attributes associated with each of the one or more modules based on the target posture and segmenting, by the one or more processors, the robotic manipulator into the one or more modules based on the identified attributes.
[0016] In an embodiment, the one or more attributes may include any one or a combination of types of joints, degrees of freedom (DoF), and kinematic dependencies.
[0017] In an embodiment, transforming, by the one or more processors, the one or more posture components into the coordinates upon the division of the target posture, may include assigning, by the one or more processors, the coordinates corresponding to each of the segmented one or more modules and mapping, by the one or more processors, the one or more posture components to the assigned coordinates to transform the one or more posture components into the corresponding coordinates for each of the one or more modules.
[0018] In an embodiment, predicting, by the one or more processors, the joint angles for each of the one or more modules using the machine learning model, may include correlating, by the one or more processors, the coordinates with pre-stored joint angles in a database associated with the system, generating, by the one or more processors, a plurality of joint angles for each of the one or more modules based on the correlation and assigning, by the one or more processors, a value to each of the generated plurality of joint angles. Further, the method may include determining, by the one or more processors, that the value falls within predefined limits and determining, by the one or more processors, the joint angles for each of the one or more modules based on the determination that the value falls within the predefined limits.
[0019] In an embodiment, the method may include generating, by the one or more processors, data corresponding to the one or more posture components for each of the one or more modules and training, by the one or more processors, the machine learning model for each of the one or more modules using the generated data to predict the joint angles.
[0020] In an embodiment, the method may include combining, by the one or more processors, the joint angles of each of the one or more modules to determine the configuration for the robotic manipulator.
[0021] Another embodiment of the present disclosure may include a system for determining a posture of a robotic manipulator. The system may include one or more processors and a memory. The one or more processors may receive a target posture of the robotic manipulator using a User Interface (UI) associated with the system, segment the robotic manipulator into one or more modules upon receiving the target posture and divide the target posture into one or more posture components corresponding to each of the segmented one or more modules. Further, the one or more processors may transform the one or more posture components into coordinates upon the division of the target posture, predict joint angles for each of the one or more modules using a machine learning model associated with the system based on the transformation of the one or more posture components into the coordinates and determine a configuration for the robotic manipulator based on the predicted joint angles for orienting the robotic manipulator in the target posture.
[0022] Various objects, features, aspects, and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.
BRIEF DESCRIPTION OF THE DRAWINGS
[0023] The accompanying drawings are included to provide a further understanding of the present disclosure and are incorporated in and constitute a part of this specification. The drawings illustrate example embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
[0024] FIG. 1 illustrates an exemplary block diagram of a proposed system for determining a posture of a robotic manipulator, in accordance with an embodiment of the present disclosure.
[0025] FIG. 2 illustrates an exemplary perspective view of the robotic manipulator with segmented modules, in accordance with an embodiment of the present disclosure.
[0026] FIG. 3 illustrates an exemplary view of coordinates corresponding to each of the segmented modules, in accordance with an embodiment of the present disclosure.
[0027] FIG. 4 illustrates a graphical representation of training data coverage for the robotic manipulator, in accordance with an embodiment of the present disclosure.
[0028] FIG. 5 illustrates an example flow chart for a training phase and an evaluation phase for determining the posture of the robotic manipulator, in accordance with an embodiment of the present disclosure.
[0029] FIG. 6 illustrates an exemplary flow diagram of the method for determining the posture of the robotic manipulator, in accordance with an embodiment of the present disclosure.
[0030] FIG. 7 illustrates a block diagram of an example computer system in which or with which embodiments of the present disclosure may be implemented.
DETAILED DESCRIPTION
[0031] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are described in sufficient detail to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
[0032] Embodiments explained herein relate to robotic systems, and more particularly relates to a system and method for determining a posture of a robotic manipulator, thereby enabling efficient and accurate inverse kinematics (IK) computation for the robotic manipulator with multiple degrees of freedom (DoF).
[0033] An embodiment of the present disclosure relates to the method for determining the posture of the robotic manipulator. The method may include receiving, by one or more processors associated with a system, a target posture of the robotic manipulator using a User Interface (UI) associated with the system and segmenting, by the one or more processors, the robotic manipulator into one or more modules upon receiving the target posture. Further, the method may include dividing, by the one or more processors, the target posture into one or more posture components corresponding to each of the segmented one or more modules, transforming, by the one or more processors, the one or more posture components into coordinates upon the division of the target posture and predicting, by the one or more processors, joint angles for each of the one or more modules using a machine learning model associated with the system based on the transformation of the one or more posture components into the coordinates. Further, the method may include determining, by the one or more processors, a configuration for the robotic manipulator based on the predicted joint angles for orienting the robotic manipulator in the target posture.
[0034] Various embodiments of the present disclosure will be explained in detail with reference to FIGs. 1 to 7.
[0035] FIG. 1 illustrates an exemplary block diagram 100 of a proposed system 102 for determining a posture of a robotic manipulator, in accordance with an embodiment of the present disclosure.
[0036] FIG. 2 illustrates an exemplary perspective view of the robotic manipulator 200 with segmented modules, in accordance with an embodiment of the present disclosure.
[0037] Referring to FIGs. 1 and 2, the system 102 for determining the posture of the robotic manipulator 200 (as shown in FIG. 2) may include one or more processors 104, a memory 106, and a user interface (UI) 108. The one or more processors 104 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that manipulate data based on operational instructions. Among other capabilities, the one or more processor(s) 104 may be configured to fetch and execute computer-readable instructions stored in the memory 106 of the system 102. The memory 106 may store one or more computer-readable instructions or routines, which may be fetched and executed to predict the posture of the robotic manipulator 200. The memory 106 may include any non-transitory storage device including, for example, volatile memory such as Random-Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like.
[0038] In an embodiment, the UI 108 may comprise a variety of interfaces, for example, interfaces for data input and output devices (referred to as I/O devices), storage devices, and the like. The UI 108 may facilitate communication of the system 102 with various devices coupled to it. The UI 108 may also provide a communication pathway for one or more components of the system 102. Examples of such components include, but are not limited to, processing engine(s) 110 and a database 128. The database 128 may include data that is either stored or generated as a result of functionalities implemented by any of the components of the processing engine(s) 110.
[0039] In an embodiment, the processing engine(s) 110 may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) 110. In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) 110 may be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processing engine(s) 110 may comprise a processing resource (for example, the one or more processors 104) to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) 110. In such examples, the system 102 may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system 102 and the processing resource. In other examples, the processing engine(s) 110 may be implemented by electronic circuitry.
[0040] Further, the processing engine(s) 110 may include a receiving module 112, a segmentation module 114, a dividing module 116, a transformation module 118, a prediction module 120, a determination module 122, a machine learning (ML) module 124 and other module(s) 126. The other module(s) 126 may implement functionalities that supplement applications/functions performed by the processing engine(s) 110.
[0041] In an embodiment, the receiving module 112 may be configured to receive a target posture of the robotic manipulator 200 using the UI 108. The target posture may include, for example, a position of the robotic manipulator 200, an orientation of an end effector, and the like.
[0042] In an embodiment, the segmentation module 114 may segment the robotic manipulator 200 into modules upon receiving the target posture. The modules may include any one or a combination of: a base module 202, an arm module 204, and a wrist module 206. In an embodiment, the segmentation module 114 may identify attributes associated with each of the modules based on the target posture. The attributes may include any one or a combination of: types of joints, degrees of freedom (DoF), and kinematic dependencies. Further, the segmentation module 114 may segment the robotic manipulator 200 into the modules based on the identified attributes.
[0043] Further, the modules may include joints 208-1 to 208-6 (collectively referred to as joints 208, hereinafter). The base module 202 may include a base rotation joint 208-1, the arm module 204 may include a shoulder joint 208-2 and an elbow joint 208-3, and the wrist module 206 may include three wrist joints 208-4, 208-5, and 208-6. The wrist module 206 may control the orientation of the end effector. Further, the robotic manipulator 200 may have six degrees of freedom (DoF), where the base module 202 may have one DoF, the arm module 204 may have two DoF, and the wrist module 206 may have three DoF.
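By way of a non-limiting illustration, the segmentation of paragraphs [0042] and [0043] may be sketched in Python as follows. The dictionary layout, the names MODULES and segment_manipulator, and the attribute-based grouping heuristic are assumptions made for this sketch, not the disclosed implementation of the segmentation module 114.

```python
# Minimal sketch (assumed data model): describe each segmented module of the
# 6-DoF manipulator 200 by its joints, DoF, and joint types, and group a flat
# joint list into base/arm/wrist modules by a simple attribute heuristic.
MODULES = {
    "base":  {"joints": ["208-1"],                   "dof": 1, "joint_types": ["R"]},
    "arm":   {"joints": ["208-2", "208-3"],          "dof": 2, "joint_types": ["R", "R"]},
    "wrist": {"joints": ["208-4", "208-5", "208-6"], "dof": 3, "joint_types": ["R", "R", "R"]},
}

def segment_manipulator(joints):
    """Assumed heuristic: the first rotary joint forms the base module, the
    next two (parallel-axis) joints form the arm module, and the remaining
    (orthogonal-axis) joints form the wrist module."""
    return {"base": joints[:1], "arm": joints[1:3], "wrist": joints[3:]}
```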
[0044] In an embodiment, the dividing module 116 may divide the target posture into posture components corresponding to each of the segmented modules. The posture components may include any one or a combination of: a rotation of the base module 202, an extension length of the arm module 204, and an angle of the wrist module 206. The posture components may be specific to each of the modules. For example, the posture component for the base module 202 may define the rotational angle of the base, the posture component for the arm module 204 may be the extension length of the arm, and the posture component for the wrist module 206 may be the angular orientation, together ensuring precise positioning of the robotic manipulator 200 and precise orientation of the end effector.
[0045] FIG. 3 illustrates an exemplary view 300 of coordinates corresponding to each of the segmented modules, in accordance with an embodiment of the present disclosure.
[0046] Referring to FIG. 3, in an embodiment, the transformation module 118 may transform the posture components into the coordinates upon the division of the target posture. The coordinates may be polar coordinates, planar polar coordinates, and quaternions. The polar coordinates may be assigned to the base module 202, the planar polar coordinates may be assigned to the arm module 204, and the quaternions may be assigned to the wrist module 206. In an embodiment, a movement of the robotic manipulator 200 may be represented in these coordinates to determine joint angles for positioning a wrist center point 306. A base link 302 may operate using the polar coordinates (e.g., an azimuth angle (φ)), rotating around a vertical axis. Arm links 304-1 and 304-2 may move using the planar polar coordinates (e.g., a radius (r) and an elevation angle (θ)). The wrist center point 306 may rotate using the quaternions.
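For illustration only, the transformation of paragraph [0046] may be sketched as below. The wrist offset value, the z-axis approach convention, and the function name pose_to_module_coordinates are assumptions for this sketch; the disclosure does not fix these details.

```python
import numpy as np

def pose_to_module_coordinates(position, quaternion, wrist_offset=0.1):
    """Map a target end-effector pose onto the three coordinate systems of
    FIG. 3: polar azimuth for the base link 302, planar polar (r, theta) for
    the arm links 304, and a unit quaternion for the wrist center point 306.
    The wrist_offset and the z-axis approach direction are assumed."""
    q = np.asarray(quaternion, dtype=float)
    q = q / np.linalg.norm(q)                      # normalise the orientation
    w, x, y, z = q
    # Approach axis (rotated z-axis) from the quaternion, used to back off
    # from the tool tip to the wrist center point (standard decoupling step).
    approach = np.array([2.0 * (x * z + w * y),
                         2.0 * (y * z - w * x),
                         1.0 - 2.0 * (x * x + y * y)])
    wcp = np.asarray(position, dtype=float) - wrist_offset * approach
    phi = np.arctan2(wcp[1], wcp[0])               # base: polar azimuth
    horiz = np.hypot(wcp[0], wcp[1])
    r = np.hypot(horiz, wcp[2])                    # arm: planar radius
    theta = np.arctan2(wcp[2], horiz)              # arm: elevation angle
    return {"base": (phi,), "arm": (r, theta), "wrist": tuple(q)}
```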
[0047] In an embodiment, the transformation module 118 may assign the coordinates corresponding to each of the segmented modules. The coordinates for each module may be assigned using a rule-based coordinate frame assignment technique. The rule-based coordinate frame assignment technique may use factors such as DoF, types of joints, axis configuration, workspace shape, redundancy, and the like. Further, the transformation module 118 may map the posture components to the assigned coordinates to transform the posture components into the corresponding coordinates for each of the modules. Table 1 depicts the coordinate assignment for each module of the robotic manipulator 200; an illustrative sketch of such a rule set follows Table 1.
Table 1: Coordinate Assignment for the robotic manipulator 200
| Use Case | Module | DoF | Joint Types | Axes Configuration | Assigned Coordinate System | Reasoning |
|---|---|---|---|---|---|---|
| 6-DoF Industrial Robot | Base | 1 | R | Rotation about base axis | Polar coordinates (φ) | Rotation about the base axis can be represented with a single angular parameter. |
| 6-DoF Industrial Robot | Arm | 2 | R + R | Parallel axes (θ = 0°) | Planar polar coordinates (r, θ) | Shoulder and elbow form a planar workspace with parallel rotation axes. |
| 6-DoF Industrial Robot | Wrist | 3 | R + R + R | Orthogonal axes | Quaternions (q₀, q₁, q₂, q₃) | Suitable for full 3D orientation control without singularities. |
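As a minimal, assumed encoding of the rules summarized in Table 1 (the function name and the rule granularity are illustrative, not the disclosed rule set):

```python
def assign_coordinate_system(dof, joint_types, axes):
    """Pick a coordinate frame for a module from its DoF, joint types, and
    axis configuration; a sketch of three rules mirroring Table 1."""
    if dof == 1 and joint_types == ["R"]:
        return "polar (phi)"                  # one rotation, one angular parameter
    if dof == 2 and joint_types == ["R", "R"] and axes == "parallel":
        return "planar polar (r, theta)"      # planar workspace of shoulder + elbow
    if dof == 3 and joint_types == ["R", "R", "R"] and axes == "orthogonal":
        return "quaternion (q0, q1, q2, q3)"  # singularity-free 3D orientation
    raise ValueError("no assignment rule for this module configuration")

# Reproduces the three rows of Table 1:
assert assign_coordinate_system(1, ["R"], "single") == "polar (phi)"
assert assign_coordinate_system(2, ["R", "R"], "parallel") == "planar polar (r, theta)"
assert assign_coordinate_system(3, ["R", "R", "R"], "orthogonal") == "quaternion (q0, q1, q2, q3)"
```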
[0048] In an embodiment, the ML module 124 may be configured with an ML model. The ML model may predict the joint angles using regression models, such as Linear Regression, Polynomial Regression, Decision Tree Regressor, Random Forest Regressor, Support Vector Regressor, and Neural Networks. In an embodiment, the ML module 124 may generate data corresponding to the posture components for each of the modules and train the machine learning model for each of the modules using the generated data to predict the joint angles. The ML model may be developed using previously collected data that maps different posture components to the corresponding joint angles.
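A minimal training sketch, assuming scikit-learn and a small neural network regressor per module; the disclosure names several regression families without fixing a library or architecture, and the dataset layout below is an assumption.

```python
from sklearn.neural_network import MLPRegressor

def train_module_models(datasets):
    """datasets maps a module name to (X, y), where X holds that module's
    coordinates (phi for the base, (r, theta) for the arm, a quaternion for
    the wrist) and y holds the corresponding joint angles. Training one
    independent model per module keeps each model small and allows the
    modules to be trained in parallel."""
    models = {}
    for module, (X, y) in datasets.items():
        model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
        models[module] = model.fit(X, y)
    return models
```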
[0049] In an embodiment, the training process may involve feeding the ML model with numerous examples of how different modules of the robotic manipulator 200 behave under various conditions. The training data generation process may involve position coordinates for both the base module 202 and the arm module 204, with defined ranges for azimuth, radial distance, and elevation angles. For instance, if the robotic manipulator 200 needs to reach a specific position, the ML model learns from past data how the base, arm, and wrist should be oriented to achieve that posture. By leveraging the pre-trained model, the processor 104 may determine the joint angles required for the target posture without having to perform complex inverse kinematics calculations.
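By way of illustration, module-specific training data may be produced by sampling joint angles within assumed limits and recording the resulting coordinates through a forward model, so the learned map runs from coordinates back to angles. The link lengths, joint limits, and function names below are assumptions for the arm module 204 only; the base and wrist modules would be handled analogously.

```python
import numpy as np

L1, L2 = 0.5, 0.4  # assumed lengths of the two arm links 304-1 and 304-2 (m)

def forward_arm(q2, q3):
    """Planar forward kinematics for the 2-DoF arm module (assumed geometry):
    returns the planar polar coordinates (r, theta) of the wrist center."""
    x = L1 * np.cos(q2) + L2 * np.cos(q2 + q3)
    z = L1 * np.sin(q2) + L2 * np.sin(q2 + q3)
    return np.hypot(x, z), np.arctan2(z, x)

def generate_arm_data(n=5000, seed=0):
    """Sample joint angles within assumed limits, push them through the
    forward model, and keep (coordinates -> joint angles) training pairs."""
    rng = np.random.default_rng(seed)
    q2 = rng.uniform(-np.pi / 2, np.pi / 2, n)   # shoulder joint 208-2
    q3 = rng.uniform(-np.pi / 2, np.pi / 2, n)   # elbow joint 208-3
    r, theta = forward_arm(q2, q3)
    X = np.column_stack([r, theta])
    y = np.column_stack([q2, q3])
    return X, y
```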
[0050] FIG. 4 illustrates a graphical representation 400 of training data coverage for the robotic manipulator 200, in accordance with an embodiment of the present disclosure.
[0051] Referring to FIG. 4, a training data coverage visualization for the robotic manipulator 200 across different positions and orientations in a three-dimensional space is shown. A dotted trajectory may indicate a path covered by the arm module 204 and the wrist module 206 of the robotic manipulator 200, demonstrating the range of motion captured during training. The training data may be generated by varying parameters such as azimuth (horizontal rotation), radial distance (extension from the base), and elevation (vertical angle) while maintaining consistent joint angle constraints. By covering a broad range of positions, the training dataset may ensure that the robotic arm can learn and generalize movements effectively, improving accuracy in tasks like object manipulation, pick-and-place operations, and motion planning.
[0052] In an embodiment, developing inverse kinematics models for each module of the robotic manipulator 200 may involve creating separate models for the base module 202, the arm module 204, and the wrist module 206, each with specific input and output dimensions, with the wrist module 206 being the most complex due to its orientation handling. The IK equations of the proposed model are:
"Base Module Joint Angle " (θ_1 ) = f^(-1) (φ_WCP )
"Arm Module Joint Angles " (θ_2,θ_3 )=f^(-1) (r_WCP,θ_WCP )
"Wrist Module Joint Angles " (θ_4,θ_5,θ_6 )=f^(-1) (q_1,q_2,q_3,q_4 )
[0053] In an embodiment, the prediction module 120 may predict joint angles for each of the modules using the ML model associated with the system 102 based on the transformation of the posture components into the coordinates. In an embodiment, the prediction module 120 may correlate the coordinates with pre-stored joint angles in the database 128 associated with the system 102. The pre-stored joint angles may include a collection of previously computed joint angles.
[0054] Further, the prediction module 120 may generate a plurality of joint angles for each of the modules based on the correlation. Further, the prediction module 120 may assign a value to each of the generated plurality of joint angles. The prediction module 120 may determine that the value falls within predefined limits and determine the joint angles for each of the modules based on the determination that the value falls within the predefined limits. For example, if the robotic manipulator 200 requires an extension of the arm module 204, the prediction module 120 may generate multiple possible joint angles, discard the joint angles that exceed the range of motion of the arm, and select the optimal joint angle that allows the arm to extend efficiently.
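A hedged sketch of the candidate-and-filter step of paragraph [0054] follows. The disclosed correlation against pre-stored joint angles in the database 128 is replaced here, for brevity, by a simple perturbation of the model output; the limits, noise level, and figure of merit are assumptions.

```python
import numpy as np

def predict_module_angles(model, coords, limits, n_candidates=8, noise=0.05):
    """Predict a module's joint angles, generate several candidates, discard
    candidates outside the predefined limits, and keep the best admissible
    candidate. Candidates here come from perturbing the model output; the
    correlation with database 128 could equally supply them."""
    base = model.predict(np.atleast_2d(coords))[0]
    rng = np.random.default_rng(0)
    candidates = base + noise * rng.standard_normal((n_candidates, np.size(base)))
    lows = np.array([lo for lo, _ in limits])
    highs = np.array([hi for _, hi in limits])
    admissible = [c for c in candidates if np.all((c >= lows) & (c <= highs))]
    if not admissible:
        raise ValueError("no candidate joint angles within the predefined limits")
    # Assumed figure of merit: closeness to the raw model prediction.
    return min(admissible, key=lambda c: float(np.linalg.norm(c - base)))
```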
[0055] In an embodiment, the determination module 122 may determine a configuration for the robotic manipulator 200 based on the predicted joint angles for orienting the robotic manipulator 200 in the target posture. In particular, the determination module 122 may combine the joint angles of all the segmented modules into a unified configuration that allows the manipulator to achieve the target posture. The determined configuration may be used to actuate the robotic manipulator 200, enabling it to physically move and achieve the desired posture.
[0056] FIG. 5 illustrates an example flow chart 500 for a training phase 500-A and an evaluation phase 500-B for determining the posture of the robotic manipulator 200, in accordance with an embodiment of the present disclosure.
[0057] Referring to FIG. 5, determining the posture of the robotic manipulator 200 may include the training phase 500-A and the evaluation phase 500-B. The training phase 500-A may include, at step 502, decomposing the robotic manipulator 200 into separate modules. At step 504, coordinates may be assigned to each module. At step 506, module-specific training data may be generated. At step 508, modular neural networks (ML model) may be trained with module-specific data to predict the joint angles. Further, the evaluation phase 500-B may include, at step 510, determining the target pose of the robotic manipulator 200. At step 512, the module-specific target poses may be calculated. At step 514, the target pose may be converted to the corresponding coordinates. At step 516, the modular neural networks may be evaluated, ensuring that the robotic manipulator 200 moves smoothly and accurately to the desired position.
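Chaining the earlier sketches, the evaluation phase 500-B may be approximated end to end as follows; pose_to_module_coordinates, train_module_models, and predict_module_angles are the assumed helpers introduced above, not disclosed components, and models is assumed to contain one trained model per module.

```python
def evaluate_target_pose(position, quaternion, models, limits):
    """Steps 510-516 in miniature: transform the target pose into module
    coordinates, query each per-module model, and merge the results into
    one joint configuration (theta1..theta6) in base -> arm -> wrist order."""
    coords = pose_to_module_coordinates(position, quaternion)
    per_module = {
        module: predict_module_angles(model, coords[module], limits[module])
        for module, model in models.items()
    }
    return [float(a) for m in ("base", "arm", "wrist") for a in per_module[m]]
```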
[0058] In an embodiment, the experimentation conducted demonstrates the effectiveness of the proposed approach in improving the accuracy and efficiency of posture determination for robotic manipulators. By decomposing the robotic manipulator 200 into separate modules and assigning module-specific coordinates, the system 102 may reduce computational complexity and enhance adaptability. The use of modular neural networks trained on module-specific data may improve the precision of joint angle predictions. During evaluation, the system 102 successfully determined target poses and converted them into appropriate coordinates, leading to smooth and accurate movements of the robotic manipulator 200.
[0059] FIG. 6 illustrates an exemplary flow diagram of the method 600 for determining the posture of the robotic manipulator 200, in accordance with an embodiment of the present disclosure.
[0060] Referring to FIG. 6, at block 602, the method 600 may include receiving the target posture of the robotic manipulator 200 using the User Interface (UI) 108 associated with the system 102.
[0061] At block 604, the method 600 may include segmenting the robotic manipulator 200 into the modules upon receiving the target posture.
[0062] At block 606, the method 600 may include dividing the target posture into the posture components corresponding to each of the segmented modules.
[0063] At block 608, the method 600 may include transforming the posture components into coordinates upon the division of the target posture.
[0064] At block 610, the method 600 may include predicting the joint angles for each of the modules using the machine learning model associated with the system 102 based on the transformation of the posture components into the coordinates.
[0065] At block 612, the method 600 may include determining the configuration for the robotic manipulator 200 based on the predicted joint angles for orienting the robotic manipulator 200 in the target posture.
[0066] Thus, the present disclosure proposes a system (e.g., 102 as represented in FIG. 1) and a method (e.g., 600 as represented in FIG. 6) for determining the posture of the robotic manipulator (e.g., 200 as shown in FIG. 2) using inverse kinematics. By incorporating machine learning models for prediction of the joint angles, the system 102 and the method 600 aim to provide a robust and effective solution for posture estimation and joint angle determination in robotic systems.
[0067] FIG. 7 illustrates a block diagram of an example computer system 700 in which or with which embodiments of the present disclosure may be implemented.
[0068] As shown in FIG. 7, the computer system 700 may include an external storage device 710, a bus 720, a main memory 730, a read-only memory 740, a mass storage device 750, communication port(s) 760, and a processor 770. A person skilled in the art will appreciate that the computer system 700 may include more than one processor and communication ports. The processor 770 may include various modules associated with embodiments of the present disclosure. The communication port(s) 760 may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fibre, a serial port, a parallel port, or other existing or future ports. The communication port(s) 760 may be chosen depending on a network, such as a Local Area Network (LAN), a Wide Area Network (WAN), or any network to which the computer system 700 connects. The main memory 730 may be random access memory (RAM), or any other dynamic storage device commonly known in the art. The read-only memory 740 may be any static storage device(s) including, but not limited to, Programmable Read-Only Memory (PROM) chips for storing static information, e.g., start-up or basic input/output system (BIOS) instructions for the processor 770. The mass storage device 750 may be any current or future mass storage solution, which may be used to store information and/or instructions.
[0069] The bus 720 communicatively couples the processor 770 with the other memory, storage, and communication blocks. The bus 720 can be, e.g., a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), universal serial bus (USB), or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor 770 to the computer system 700.
[0070] Optionally, operator and administrative interfaces, e.g. a display, keyboard, and a cursor control device, may also be coupled to the bus 720 to support direct operator interaction with the computer system 700. Other operator and administrative interfaces may be provided through network connections connected through the communication port(s) 760. In no way should the aforementioned exemplary computer system 700 limit the scope of the present disclosure.
[0071] While the foregoing describes various embodiments of the present disclosure, other and further embodiments of the present disclosure may be devised without departing from the basic scope thereof. The scope of the present disclosure is determined by the claims that follow. The present disclosure is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the present disclosure when combined with information and knowledge available to the person having ordinary skill in the art.
ADVANTAGES OF THE PRESENT DISCLOSURE
[0072] The present disclosure enables improved computational efficiency by reducing the complexity of inverse kinematics calculations.
[0073] The present disclosure provides enhanced accuracy and precision by using machine learning for joint angle prediction.
[0074] The present disclosure provides flexibility and adaptability by allowing integration with various robotic configurations.
[0075] The present disclosure enables scalability for complex systems by effectively handling high-degree-of-freedom robotic manipulators.
[0076] The present disclosure reduces redundancy in data processing by optimizing memory usage and eliminating redundant computations.
Claims:
1. A method (600) for determining a posture of a robotic manipulator (200), the method (600) comprising:
receiving (602), by one or more processors (104) associated with a system (102), a target posture of the robotic manipulator (200) using a User Interface (UI) (108) associated with the system (102);
segmenting (604), by the one or more processors (104), the robotic manipulator (200) into one or more modules upon receiving the target posture;
dividing (606), by the one or more processors (104), the target posture into one or more posture components corresponding to each of the segmented one or more modules;
transforming (608), by the one or more processors (104), the one or more posture components into coordinates upon the division of the target posture;
predicting (610), by the one or more processors (104), joint angles for each of the one or more modules using a machine learning model associated with the system (102) based on the transformation of the one or more posture components into the coordinates; and
determining (612), by the one or more processors (104), a configuration for the robotic manipulator (200) based on the predicted joint angles for orienting the robotic manipulator (200) in the target posture.
2. The method (600) as claimed in claim 1, wherein the one or more modules comprises any one or a combination of: a base module (202), an arm module (204), and a wrist module (206).
3. The method (600) as claimed in claim 2, wherein the one or more posture components comprises any one or a combination of: a rotation of the base module (202), an extension length of the arm module (204) and an angle of the wrist module (206).
4. The method (600) as claimed in claim 1, wherein the segmenting, by the one or more processors (104), the robotic manipulator (200) into the one or more modules upon receiving the target posture comprises:
identifying, by the one or more processors (104), one or more attributes associated with each of the one or more modules based on the target posture; and
segmenting, by the one or more processors (104), the robotic manipulator (200) into the one or more modules based on the identified attributes.
5. The method (600) as claimed in claim 4, wherein the one or more attributes comprises any one or a combination of: types of joints, degrees of freedom (DoF), and kinematic dependencies.
6. The method (600) as claimed in claim 1, wherein transforming, by the one or more processors (104), the one or more posture components into the coordinates upon the division of the target posture, comprises:
assigning, by the one or more processors (104), the coordinates corresponding to each of the segmented one or more modules; and
mapping, by the one or more processors (104), the one or more posture components to the assigned coordinates to transform the one or more posture components into the corresponding coordinates for each of the one or more modules.
7. The method (600) as claimed in claim 1, wherein predicting, by the one or more processors (104), the joint angles for each of the one or more modules using the machine learning model, comprises:
correlating, by the one or more processors (104), the coordinates with pre-stored joint angles in a database (128) associated with the system (102);
generating, by the one or more processors (104), a plurality of joint angles for each of the one or more modules based on the correlation;
assigning, by the one or more processors (104), a value to each of the generated plurality of joint angles;
determining, by the one or more processors (104), that the value falls within predefined limits; and
determining, by the one or more processors (104), the joint angles for each of the one or more modules based on the determination that the value falls within predefined limits.
8. The method (600) as claimed in claim 1, comprises:
generating, by the one or more processors (104), data corresponding to the one or more posture components for each of the one or more modules; and
training, by the one or more processors (104), the machine learning model for each of the one or more modules using the generated data to predict the joint angles.
9. The method (600) as claimed in claim 1, comprises combining, by the one or more processors (104), the joint angles of each of the one or more modules to determine the configuration for the robotic manipulator (200).
10. A system (102) for determining a posture of a robotic manipulator (200), the system (102) comprising:
one or more processors (104); and
a memory (106) operatively coupled with the one or more processors (104), wherein the memory (106) comprises one or more instructions which, when executed, cause the one or more processors (104) to:
receive a target posture of the robotic manipulator (200) using a User Interface (UI) (108) associated with the system (102);
segment the robotic manipulator (200) into one or more modules upon receiving the target posture;
divide the target posture into one or more posture components corresponding to each of the segmented one or more modules;
transform the one or more posture components into coordinates upon the division of the target posture;
predict joint angles for each of the one or more modules using a machine learning model associated with the system (102) based on the transformation of the one or more posture components into the coordinates; and
determine a configuration for the robotic manipulator (200) based on the predicted joint angles for orienting the robotic manipulator (200) in the target posture.