
Method And System For Simplified Image Generation And Loading On Multi Core And Multi Processor Architectures

Abstract: The present invention discloses a method and system for simplified image generation and loading on multi-core and multi-processor architectures. A method for generating and storing images within a multi-core processor architecture is disclosed that achieves significant reductions in image size while simplifying the storage process in flash memory. By consolidating the common data of multi-core Digital Signal Processors (DSPs) into a single entity, this method diminishes the overall size of the image, thereby accelerating system boot times. Further enhancements in boot speed are realized through the employment of high-speed interconnects, such as Peripheral Component Interconnect Express (PCIe) and Gigabit Ethernet, for the rapid distribution of image data across processor cores. A tool has been devised to facilitate the targeted update of boot images for individual cores, complemented by a novel data structure that catalogues segment identifiers and names for efficient modifications within the flash sub-area. This innovation not only streamlines the update process but also optimizes backup operations and reduces the maintenance workload for each processor core.


Patent Information

Application #: 202441025826
Filing Date: 29 March 2024
Publication Number: 40/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Parent Application:

Applicants

BHARAT ELECTRONICS LIMITED
Outer Ring Road, Nagavara, Bangalore – 560045, Karnataka, India

Inventors

1. SUJA SUSAN GEORGE
Embedded Systems / Product Development and Innovation Centre, Bharat Electronics Limited, Jalahalli P.O., Bangalore – 560013, Karnataka, India
2. SIVANANTHAM SUBRAMONIAM
Embedded Systems / Product Development and Innovation Centre, Bharat Electronics Limited, Jalahalli P.O., Bangalore – 560013, Karnataka, India
3. VIKRAM RAJAN
Embedded Systems / Product Development and Innovation Centre, Bharat Electronics Limited, Jalahalli P.O., Bangalore – 560013, Karnataka, India

Specification

Description:

FORM 2

THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(SEE SECTION 10, RULE 13)

METHOD AND SYSTEM FOR SIMPLIFIED IMAGE GENERATION AND LOADING ON MULTI-CORE AND MULTI-PROCESSOR ARCHITECTURES

BHARAT ELECTRONICS LIMITED

WITH ADDRESS:
OUTER RING ROAD, NAGAVARA, BANGALORE 560045, INDIA

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
TECHNICAL FIELD
[0001] The present invention relates generally to the technical field of computer processing architectures, specifically focusing on the optimization of image generation and loading processes within systems equipped with multi-core and multi-processor configurations. This invention addresses advancements in the efficiency and speed of handling graphical data and images in complex computing environments, facilitating improved performance in applications requiring rapid image processing and rendering across diverse computing platforms.

BACKGROUND
[0002] The Keystone-2 architecture based 66AK2H12 processors (referred to as K2 hereafter) and the Keystone-1 architecture based TMS320C6678 processors (referred to as K1 hereafter) are embedded high-performance multicore Digital Signal Processor (DSP) chips from Texas Instruments (TI). The K2 processor contains four ARM Cortex-A15 cores and eight TMS320C66x DSP core subsystems (C66x CorePacs), whereas the K1 processor has eight C66x cores (cores 0 to 7). Each core has an independent arithmetic unit and its own internal L1 SRAM and L2 SRAM, and a multicore shared SRAM region is shared between the cores of the processor. In addition to the internal memory, DDR3 memory is provided to the processors as external common memory. Each DSP core is loaded with software independently, so it is necessary to keep track of the data loaded into each core and the code running in its internal memories. Code Composer Studio (CCS), an Integrated Development Environment (IDE) from TI, compiles application programs into executable files that contain information such as a symbol list, code segments and data segments used for program loading and debugging. When an executable is used only for program loading, the large amount of redundant debug information it contains must be removed. TI provides a tool called Hex6x which converts an executable into a binary table (.btbl) file. In the current design, eight executable files are generated for each processor and converted into eight .btbl files, which are then combined into a single large .btbl file.
[0003] Embedded products widely employ NAND or NOR flash as the non-volatile memory device. The flash generally stores the boot-loader image, operating systems, file systems such as YAFFS and ext4, and application firmware images. Data on the flash is updated at fixed addresses. A multi-core embedded platform with an asymmetric processing structure loads a different operating system on each core of the processor. Multiple operating systems can generally share the same flash device for data storage, with a separate section of flash memory allocated to each core of the processor. In a multicore environment, one of the cores usually acts as the master core. The master allocates separate partitions for boot images and application storage, and each partition is identified by a partition name and a core identification number. An arbitrator program running on the master sequences the booting and application read/write operations of the processor cores. The partitions are maintained on the master by the Memory Technology Devices (MTD) subsystem of the Linux operating system.
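For illustration, the following is a minimal sketch of how such per-core partitions could be declared on the Linux master core using the kernel's MTD partition interface; the partition names, offsets and sizes are assumptions and are not taken from this specification.

/*
 * Illustrative only: a static Linux MTD partition layout on the master core,
 * giving each DSP core its own boot-image sub-area in flash. Names and sizes
 * are hypothetical.
 */
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>

#define CORE_IMAGE_SIZE (4 * 1024 * 1024)           /* assumed 4 MiB per core */

static struct mtd_partition dsp_flash_parts[] = {
    { .name = "bootloader", .offset = 0,                  .size = 1 * 1024 * 1024 },
    { .name = "dsp-common", .offset = MTDPART_OFS_APPEND, .size = 8 * 1024 * 1024 },
    { .name = "dsp-core0",  .offset = MTDPART_OFS_APPEND, .size = CORE_IMAGE_SIZE },
    { .name = "dsp-core1",  .offset = MTDPART_OFS_APPEND, .size = CORE_IMAGE_SIZE },
    /* ... one partition per remaining core ... */
    { .name = "filesystem", .offset = MTDPART_OFS_APPEND, .size = MTDPART_SIZ_FULL },
};

Each such partition then appears to user space as a character device (e.g. /dev/mtdN), which is how a single core's sub-area can be erased and rewritten without touching the others.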
[0004] The main difficulties to be resolved are as follows. When compiling a multi-core program, a different .cmd (linker command) file must be maintained for each core. Because the executables of the cores occupy unshared, potentially conflicting addresses, there is a possibility of loading the wrong file onto a core (0 to 7). This greatly complicates the compilation of multicore applications and their development, and can lead to catastrophic errors. When multiple cores use identical source files, the result is a substantial amount of duplicate code and data. For example, one large shared array placed in DDR3 will be initialized eight times to the same memory space, which significantly increases DSP boot time. In some cases, if the application loads code and data over slow interfaces such as SPI or I2C, the load time increases drastically, on the order of seconds, and may exceed the allowed limit. This also has an impact on power consumption.
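As a minimal illustration of this duplication, the following TI C6000-style declaration places a large table in external DDR3; the section name and table size are hypothetical. If the same executable is loaded separately for all eight cores, the same DDR3 contents are transferred eight times.

/* Hypothetical example of a shared table in external DDR3 memory.
 * The section name ".ddr3_shared" and the table size are assumptions. */
#pragma DATA_SECTION(g_shared_table, ".ddr3_shared")
float g_shared_table[2 * 1024 * 1024];   /* 8 MB that every core's image carries */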
[0005] The arbitration mechanism in the master must synchronize boot-image loading and application storage, as well as the image transfer mechanism between master and slave. In addition to the above-mentioned tool chain, TI provides an additional tool chain called Multicore Application Deployment (MAD) for multicore application development. This tool cannot handle configurations with more than eight cores, as required in the current design.
[0006] US20080065856A1 discloses a multiprocessor system that includes a plurality of microprocessors configured to operate on a plurality of operating systems, respectively, and a memory section configured to have a plurality of memory spaces respectively allocated to the plurality of microprocessors. Each of the plurality of microprocessors may include a translation look-aside buffer (TLB) and a page table register. The TLB stores a copy of at least a part of the data of one of the plurality of memory spaces corresponding to the microprocessor, and the copy includes a relation between each virtual address of a virtual address space and the corresponding physical address of a physical address space serving as the memory space. The page table register refers to the TLB in response to an execution virtual address, generated based on an application program to be executed by the microprocessor, to determine an execution physical address corresponding to the execution virtual address. The microprocessor accesses the memory space based on the execution physical address.
[0007] US6327648B1 discloses a multi-DSP system that allows a main DSP to operate concurrently with an auxiliary DSP for implementing a filter algorithm. The main DSP and the auxiliary DSP have separate program memories but share the same data memory. The auxiliary DSP program memory is mapped into the main DSP program memory, allowing the main DSP to download filter processing instructions from its program memory into the auxiliary DSP program memory. The auxiliary DSP fetches these instructions from its program memory and executes them. The auxiliary DSP is prevented from accessing the shared data memory when it is occupied by the main DSP. An arbitration mechanism gives the auxiliary DSP access to the data memory only when the main DSP is not using it.
[0008] CN107343057A discloses a C6678 Ethernet loading method with a flexible and changeable IP address, in the technical field of information processing. In the method, a host computer extracts the MAC address of the C6678 from the BOOT packet sent by the C6678 by capturing network packets, then simulates and generates an ARP response packet that contains the IP address to be assigned to the C6678, establishes a dynamic correspondence between the configured IP address and the MAC address of the C6678, and finally notifies the corresponding C6678 of its IP address in a broadcast manner, so that the image file to be loaded can be sent to the C6678 over the network using the UDP protocol. Because the ARP pool is configured dynamically and each IP entry has an ageing time, a valid ARP address resolution can be produced promptly after a board is replaced or an IP address is modified, solving the prior-art problem in which IP address conflicts cause loading to fail.
[0009] Therefore, there exists a need for an improved method and system for image generation and loading in multi-core and multi-processor architectures. Such a system would ideally minimize redundancy, enhance efficiency, and reduce boot and operational times by optimizing the use of available computing resources. Additionally, an effective solution would provide a streamlined approach to resource management and allocation, simplifying system design and enabling more scalable, flexible computing environments.

SUMMARY
[0010] This summary is provided to introduce concepts related to the field of computer engineering and, more specifically, to methods and systems for optimizing image generation and loading processes within computing environments equipped with multi-core and multi-processor architectures. This invention addresses the technical challenges associated with managing and processing graphical data and software across complex computing platforms, aiming to improve efficiency, reduce processing time, and minimize redundant data handling during system initialization and operational phases.
[0011] In an embodiment of the present invention, a method for simplifying image generation and loading on a custom multi-core computing platform is disclosed. The method includes modifying multicore link loading tool chains to enhance efficiency in the image generation and software loading process by eliminating redundant data loading during both initialization and image loading phases. The method further includes associating each image file specific to a core within said multi-core computing platform with a unique identifier and name. These images are stored in a manner that allows for their retrieval through a Memory Technology Devices (MTD) subroutine of an operating system executed on a designated master core of the platform. Further, the method includes facilitating rapid updates to the stored images in response to application changes by controlling the structured storage and retrieval mechanism.
[0012] In accordance with one embodiment of the present invention, a system for simplifying image generation and loading on a custom multi-core computing platform is disclosed. The system includes a plurality of cores integrated within said multi-core computing platform. A modified multicore link loading toolchain is configured to eliminate redundant data loading during both initialization and image loading phases across said plurality of cores. A storage mechanism is configured to store image files corresponding to each of the plurality of cores, wherein each stored image file is associated with a unique identifier and name. A Memory Technology Devices (MTD) subroutine is executable on an operating system running on a designated master core within the multi-core computing platform, wherein the MTD subroutine is configured to facilitate access to the stored image files, thereby enabling rapid updates to the image files in response to application changes.

BRIEF DESCRIPTION OF ACCOMPANYING DRAWINGS
[0013] The detailed description is described with reference to the accompanying figures.
[0014] Figure 1 illustrates a multi-core, multi-processor computing platform on which the present method is implemented, in accordance with an exemplary embodiment of the present invention.
[0015] Figure 2 illustrates how the different .btbl files of the individual cores are combined into a single large .btbl file when different .btbl files are to be loaded, in accordance with an exemplary embodiment of the present invention.
[0016] Figure 3 illustrates .btbl parsing from the combined .btbl image by the .btbl manager for loading the respective cores, in accordance with an exemplary embodiment of the present invention.
[0017] Figure 4 illustrates how a common image is loaded into the multi-core processors, in accordance with an exemplary embodiment of the present invention.
[0018] Figure 5 illustrates an exemplary flowchart for boot image generation, in accordance with an exemplary embodiment of the present invention.
[0019] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative methods embodying the principles of the present invention. Similarly, it will be appreciated that any flow charts, flow diagrams, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.

DETAILED DESCRIPTION
[0020] The various embodiments of the present disclosure describe methods and systems for optimizing image generation and loading processes within computing environments equipped with multi-core and multi-processor architectures. This invention addresses the technical challenges associated with managing and processing graphical data and software across complex computing platforms, aiming to improve efficiency, reduce processing time, and minimize redundant data handling during system initialization and operational phases.
[0021] In the following description, for purpose of explanation, specific details are set forth in order to provide an understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these details. One skilled in the art will recognize that embodiments of the present invention, some of which are described below, may be incorporated into a number of systems.
[0022] However, the systems and methods are not limited to the specific embodiments described herein. Further, structures and devices shown in the figures are illustrative of exemplary embodiments of the present invention and are meant to avoid obscuring of the present invention.
[0023] Furthermore, connections between components and/or modules within the figures are not intended to be limited to direct connections. Rather, these components and modules may be modified, re-formatted or otherwise changed by intermediary components and modules.
[0024] The appearances of the phrase “in an embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
[0025] It should be noted that the description merely illustrates the principles of the present invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described herein, embody the principles of the present invention. Furthermore, all examples recited herein are principally intended expressly to be only for explanatory purposes to help the reader in understanding the principles of the invention and the concepts contributed by the inventor to furthering the art and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass equivalents thereof.
[0026] In an embodiment of the present invention, a method for simplifying image generation and loading on a custom multi-core computing platform is disclosed. The method includes modifying multicore link loading tool chains to enhance efficiency in the image generation and software loading process by eliminating redundant data loading during both initialization and image loading phases. The method further includes associating each image file specific to a core within said multi-core computing platform with a unique identifier and name. These images are stored in a manner that allows for their retrieval through a Memory Technology Devices (MTD) subroutine of an operating system executed on a designated master core of the platform. Further, the method includes facilitating rapid updates to the stored images in response to application changes by controlling the structured storage and retrieval mechanism.
[0027] In accordance with one embodiment of the present invention, a system for simplifying image generation and loading on a custom multi-core computing platform is disclosed. The system includes a plurality of cores integrated within said multi-core computing platform. A modified multicore link loading toolchain is configured to eliminate redundant data loading during both initialization and image loading phases across said plurality of cores. A storage mechanism is configured to store image files corresponding to each of the plurality of cores, wherein each stored image file is associated with a unique identifier and name. A Memory Technology Devices (MTD) subroutine is executable on an operating system running on a designated master core within the multi-core computing platform, wherein the MTD subroutine is configured to facilitate access to the stored image files, thereby enabling rapid updates to the image files in response to application changes.
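A minimal sketch of the kind of catalogue entry such a storage mechanism implies is given below, pairing each stored image with its identifier and name so that the MTD layer on the master core can locate the corresponding flash sub-area; all field names and widths are assumptions, not the specification's definition.

#include <stdint.h>

/* Hypothetical catalogue entry for one per-core boot image held in flash.
 * The identifier/name pairing mirrors the description above; field widths
 * and the 32-character name limit are illustrative assumptions. */
struct core_image_entry {
    uint8_t  core_id;        /* DSP core that owns this image                 */
    uint16_t segment_id;     /* unique identifier of the stored image/segment */
    char     name[32];       /* human-readable name, e.g. "dsp-core3"         */
    uint32_t flash_offset;   /* byte offset of the sub-area within flash      */
    uint32_t length;         /* stored image length in bytes                  */
    uint32_t crc32;          /* integrity check over the stored image         */
};

/* One entry per C66x core of a single processor. */
static struct core_image_entry image_catalogue[8];

Keeping the identifier and the name together means an update request can be resolved to a single flash sub-area without scanning the whole combined image.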
[0028] In another embodiment of the present invention, the disclosed method encompasses an operation wherein image loading is executed through a high-speed interconnect fabric. This fabric interlinks the computing elements within a custom multi-core computing platform. The essence of the high-speed interconnect fabric lies in its advanced configuration, which is meticulously engineered to support the expedited transfer of image data among the cores of the platform. By leveraging such a fabric, the method ensures that image data is disseminated rapidly and efficiently across the computing elements. This configuration significantly reduces the latency traditionally associated with image loading processes, thereby enhancing the overall performance and responsiveness of the computing platform. Through this innovative approach, the method addresses and mitigates the bottlenecks encountered in conventional image processing and loading strategies, paving the way for more efficient and effective utilization of multi-core computing resources.
[0029] In another embodiment, the method is operationalized on an embedded platform, which is characterized by the inclusion of multiple multi-core processors. These processors are interlinked through an array of serial interfaces, a design choice that facilitates seamless data exchange across the computing landscape of the platform. Integral to this architecture is a sophisticated system management logic, which has been meticulously developed to ensure efficient allocation and management of shared resources among the various multi-core processors.
[0030] This system management logic plays a pivotal role in orchestrating the operations of the embedded platform, particularly in environments where resource contention can significantly impact performance. By enabling a coordinated approach to resource allocation, the logic ensures that each processor and its constituent cores can access necessary resources in a timely and efficient manner, thereby maximizing operational efficiency and minimizing potential bottlenecks. The inclusion of various serial interfaces enhances the platform's flexibility, allowing for a wide range of data exchange protocols to be supported, which further contributes to the robustness and adaptability of the method in handling diverse computing tasks. Through this comprehensive design, the method leverages the full potential of the embedded platform's multi-core architecture, ensuring optimal performance and resource utilization across all computing elements.
[0031] In another embodiment of the present invention, the disclosed system incorporates a programmable logic controller (PLC), which is endowed with a sophisticated arbitration mechanism. This mechanism is meticulously engineered to oversee and arbitrate the allocation of and access to shared resources among the multitude of cores resident within the custom multi-core computing platform. The primary function of this arbitration mechanism is to ensure a fair and equitable distribution of resources across the cores, effectively preempting and mitigating scenarios of resource contention that could otherwise lead to performance degradation or operational inefficiencies.
[0032] The programmable nature of the PLC allows for dynamic adaptation and customization of the arbitration logic to suit specific operational requirements and scenarios, enhancing the system's versatility and its ability to handle diverse computational workloads. By managing access to shared resources, such as memory, input/output interfaces, and computational units, the arbitration mechanism plays a critical role in optimizing the overall performance of the multi-core computing platform. This arrangement ensures that all cores have timely and appropriate access to the resources necessary for their operation, thereby facilitating smooth and concurrent execution of tasks. The introduction of such an arbitration mechanism within the PLC framework exemplifies an advanced approach to managing the complexities inherent in multi-core processing environments, where efficient resource management is paramount to achieving high levels of system performance and reliability.
[0033] In another embodiment, the present invention aims to streamline the compilation process for applications designed to operate on multicore architectures. This is achieved through the deployment of a bespoke tool that is grounded in the C programming language. Central to this innovation is the utilization of a singular executable file that is shared across all cores within the multicore environment. The inventive method meticulously balances the reduction in loading times with the imperative to keep the control within a predefined threshold. This strategic balance is crucial for ensuring that the expedited loading process does not detrimentally impact the systematic functionality of the multicore system. By adopting this novel methodology, the invention effectively addresses and mitigates the traditionally time-intensive nature of loading programs onto multi-core DSPs, thereby enhancing operational efficiency without compromising system integrity or performance.
[0034] Figure 1 illustrates a multi-core, multi-processor computing platform on which the present method is implemented, in accordance with an exemplary embodiment of the present invention. The invention introduces a novel system and method for streamlining the process of image generation and software loading within multicore computing environments. Specifically, the method alters the multicore link loading tool chains of the Texas Instruments (TI) C6678 platform, with the primary objective of eliminating the redundant data loading that typically occurs during system initialization and the image loading phase. The core of the invention lies in its handling of shared and non-shared regions. In scenarios where multiple cores are designated to use a single image file (executable), the method preserves the data within the shared region of this file. Concurrently, it addresses the redundancy in the non-shared regions, which are traditionally replicated across multiple cores. By consolidating these replicated non-shared regions into a unified global address space, the method significantly reduces the data footprint. This consolidation culminates in the generation of a single binary table (.btbl) file, which serves as the comprehensive image for all involved cores.
[0035] For instances where different cores are tasked with executing distinct image files, the method employs a different strategy to streamline data handling. While it maintains a single copy of the shared region to avoid unnecessary duplication, it merges the contents of the non-shared regions into a single .btbl file. This is achieved through the utilization of a custom loader tool specifically designed for this purpose. The loading process for the image onto the cores follows a meticulously crafted sequence, ensuring efficient and streamlined execution.
[0036] Figure 2 illustrates how the different .btbl files of the individual cores are combined into a single large .btbl file when different .btbl files are to be loaded, in accordance with an exemplary embodiment of the present invention. The custom tool merges the redundant information of the shared and non-shared sections of code and data. As shown in Figure 2, the executables for the common section are combined using the custom tool to generate multiple .btbl files, which are further combined to create a single large .btbl file. When a data segment is replicated, the number of copies required of the identical executable file is one less than the total number of cores. After the duplication is complete, the data-section addresses in the non-shared portion are revised to the corresponding global address of each core, forming the multicore loading list files.
[0037] Figure 3 illustrates .btbl parsing from the combined .btbl image by the .btbl manager for loading the respective cores, and Figure 4 illustrates how a common image is loaded into the multi-core processors, in accordance with an exemplary embodiment of the present invention. The generation of .btbl files for the individual cores is depicted for the non-common sections. As shown in the table below, the address ranges marked as shared hold the multicore shared space for the data segment and are converted into a single .btbl file. While the multiple .btbl files are being combined, this shared data segment is retained once, whereas the non-shared sections of each core's .btbl loading list file, such as the data segments in core 0's SRAM address space (64 kB of L1 SRAM and 512 kB of L2 SRAM), are kept per core. The address of the non-shared data segment is replicated seven times, as shown in the table, covering:
the L1P SRAM, L1D SRAM and L2 SRAM of core 1;
the L1P SRAM, L1D SRAM and L2 SRAM of core 2;
the L1P SRAM, L1D SRAM and L2 SRAM of core 3;
the L1P SRAM, L1D SRAM and L2 SRAM of core 4;
the L1P SRAM, L1D SRAM and L2 SRAM of core 5;
the L1P SRAM, L1D SRAM and L2 SRAM of core 6; and
the L1P SRAM, L1D SRAM and L2 SRAM of core 7.
Start address   End address   Purpose                              Shared/Unshared
00800000        0087FFFF      Home address, L2 SRAM                Unshared
00E00000        00E07FFF      Home address, L1P SRAM               Unshared
00F00000        00F07FFF      Home address, L1D SRAM               Unshared
0C000000        0C3FFFFF      4 MB multicore shared memory         Shared
10800000        1087FFFF      Global address, core 0 L2 SRAM       Unshared
10E00000        10E07FFF      Global address, core 0 L1P SRAM      Unshared
10F00000        10F07FFF      Global address, core 0 L1D SRAM      Unshared
11800000        1187FFFF      Global address, core 1 L2 SRAM       Unshared
11E00000        11E07FFF      Global address, core 1 L1P SRAM      Unshared
11F00000        11F07FFF      Global address, core 1 L1D SRAM      Unshared
12800000        1287FFFF      Global address, core 2 L2 SRAM       Unshared
12E00000        12E07FFF      Global address, core 2 L1P SRAM      Unshared
12F00000        12F07FFF      Global address, core 2 L1D SRAM      Unshared
13800000        1387FFFF      Global address, core 3 L2 SRAM       Unshared
13E00000        13E07FFF      Global address, core 3 L1P SRAM      Unshared
13F00000        13F07FFF      Global address, core 3 L1D SRAM      Unshared
14800000        1487FFFF      Global address, core 4 L2 SRAM       Unshared
14E00000        14E07FFF      Global address, core 4 L1P SRAM      Unshared
14F00000        14F07FFF      Global address, core 4 L1D SRAM      Unshared
15800000        1587FFFF      Global address, core 5 L2 SRAM       Unshared
15E00000        15E07FFF      Global address, core 5 L1P SRAM      Unshared
15F00000        15F07FFF      Global address, core 5 L1D SRAM      Unshared
16800000        1687FFFF      Global address, core 6 L2 SRAM       Unshared
16E00000        16E07FFF      Global address, core 6 L1P SRAM      Unshared
16F00000        16F07FFF      Global address, core 6 L1D SRAM      Unshared
17800000        1787FFFF      Global address, core 7 L2 SRAM       Unshared
17E00000        17E07FFF      Global address, core 7 L1P SRAM      Unshared
17F00000        17F07FFF      Global address, core 7 L1D SRAM      Unshared
80000000        FFFFFFFF      DDR3                                 Shared

[0038] The global address ranges to be modified are: 11800000 - 11F07FFF; 12800000 - 12F07FFF; 13800000 - 13F07FFF; 14800000 - 14F07FFF; 15800000 - 15F07FFF; 16800000 - 16F07FFF; 17800000 - 17F07FFF.
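The address revision can be expressed compactly: for the C6678 memory map tabulated above, the global alias of a core-local SRAM address is obtained by adding 0x10000000 plus 0x01000000 per core. The sketch below assumes this arithmetic is all that is needed for the listed ranges; the function name is illustrative.

#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define C6678_GLOBAL_BASE  0x10000000u   /* core 0 global alias base              */
#define C6678_CORE_STRIDE  0x01000000u   /* 16 MB between successive core aliases */

/* Convert a core-local ("home") L1/L2 SRAM address to the global alias of the
 * given core, matching the table above (e.g. 00800000 of core 1 -> 11800000). */
static uint32_t local_to_global(uint32_t local_addr, unsigned core_id)
{
    assert(core_id < 8);
    assert(local_addr >= 0x00800000u && local_addr <= 0x00FFFFFFu);
    return local_addr + C6678_GLOBAL_BASE + core_id * C6678_CORE_STRIDE;
}

int main(void)
{
    printf("core 1 L2 SRAM:  0x%08" PRIX32 "\n", local_to_global(0x00800000u, 1)); /* 0x11800000 */
    printf("core 7 L1D SRAM: 0x%08" PRIX32 "\n", local_to_global(0x00F00000u, 7)); /* 0x17F00000 */
    return 0;
}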
[0039] Figure 5 illustrates an exemplary flowchart for boot image generation, in accordance with an exemplary embodiment of the present invention. It shows the flow of a preferred embodiment of the simplified image generation and loading method for C6678 multi-core DSP software according to the present invention.
[0040] At step 502, the executable files to be loaded into each core of the multi-core DSP are examined to identify their common section and individual sections.
[0041] At step 504, the identical executable content to be loaded into the shared memory is recorded in the list files. When a data segment is replicated, the number of copies required of the identical executable file is one less than the total number of cores. After the duplication is complete, the data-section addresses in the non-shared portion are revised to the corresponding global address of each core, forming the multicore loading list files.
[0042] At step 506, the individual sections of each core generated in the previous step are added to the list file, and a total loading list file is synthesized using the merge-btbl function. In the present embodiment, the number of files loaded into each core is smaller than with the traditional approach.
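A condensed sketch of this merge step follows, assuming each parsed executable yields a list of segments tagged as shared or unshared; the btbl_segment layout, the emit callback and the merge_btbl name are illustrative, not the actual interface of the custom tool.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical segment descriptor obtained from parsing one executable. */
struct btbl_segment {
    uint32_t       load_addr;   /* core-local load address                           */
    uint32_t       length;
    const uint8_t *data;
    bool           shared;      /* true for segments common to all cores (DDR3 etc.) */
};

/* Caller-supplied sink that appends one record to the combined loading list file. */
typedef void (*emit_fn)(uint32_t addr, const uint8_t *data, uint32_t len);

/* Keep a single copy of every shared segment; for unshared segments, keep the
 * original home-address copy and add (ncores - 1) copies rebased to the global
 * alias of each remaining core, mirroring steps 504 and 506 above. */
static void merge_btbl(const struct btbl_segment *segs, size_t nsegs,
                       unsigned ncores, emit_fn emit)
{
    for (size_t i = 0; i < nsegs; i++) {
        emit(segs[i].load_addr, segs[i].data, segs[i].length);
        if (segs[i].shared)
            continue;                                   /* one copy is enough      */
        for (unsigned core = 1; core < ncores; core++)  /* ncores - 1 extra copies */
            emit(segs[i].load_addr + 0x10000000u + core * 0x01000000u,
                 segs[i].data, segs[i].length);
    }
}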
[0043] The present invention can be suitably applied to embedded information-processing systems that use Keystone processors, such as systems for communication, software-defined radio, and artificial intelligence. It is to be noted that the above embodiments are merely illustrative of the technical solutions of the present invention, rather than limitations on it.
[0044] In an exemplary implementation, the present invention introduces several key advantages to the realm of multicore computing environments, fundamentally transforming the efficiency and efficacy of software image generation and loading processes. The main benefits derived from this innovation are as follows:
[0045] Efficiency in Software Loading: At the core of this invention is a custom tool, meticulously developed in C programming language, designed to expedite the generation and loading of multicore software images. By eliminating redundant data from the loading process, this tool significantly simplifies the transfer process. This optimization not only shortens the software booting time but also ensures uninterrupted system functionality, thereby enhancing overall operational efficiency.
[0046] Cross-Compilation Capability: The inventive approach includes the development of a custom tool that possesses the ability to cross-compile multiple executables directly on the master processor. This eliminates the need for a dedicated host machine traditionally required for such operations, thereby streamlining the development and deployment process. This cross-compilation feature represents a significant leap in operational flexibility and efficiency, enabling developers to work more seamlessly across various computing environments.
[0047] Optimized Image Maintenance: Another cornerstone of this invention is the utilization of a sub-region list file, a strategic enhancement designed to reduce the backup time and alleviate the image maintenance workload associated with each processor core. This innovation not only simplifies the management of core-specific software images but also significantly reduces the resource investment required for their upkeep. By optimizing the way in which core images are maintained, this invention offers a practical solution to one of the more tedious aspects of multicore processor management.
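To illustrate the reduced maintenance workload, a hedged user-space sketch of updating only one core's sub-area through the Linux MTD character device is shown below; the /dev/mtdN partition ordering and the 128 KiB erase-block rounding are deployment assumptions, not facts from the specification.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <mtd/mtd-user.h>     /* MEMERASE, struct erase_info_user */

/* Rewrite only the flash sub-area holding one core's boot image, leaving all
 * other partitions untouched. The partition numbering (core 0 -> /dev/mtd2,
 * matching the earlier partition sketch) and the erase-block size are
 * assumptions for this sketch. */
static int update_core_image(int core_id, const void *image, size_t len)
{
    char dev[32];
    snprintf(dev, sizeof dev, "/dev/mtd%d", 2 + core_id);

    int fd = open(dev, O_RDWR);
    if (fd < 0)
        return -1;

    struct erase_info_user ei;
    ei.start  = 0;
    ei.length = ((uint32_t)len + 0x1FFFFu) & ~0x1FFFFu;  /* round up to 128 KiB blocks */
    if (ioctl(fd, MEMERASE, &ei) < 0 ||                  /* erase only this sub-area   */
        write(fd, image, len) != (ssize_t)len) {
        close(fd);
        return -1;
    }
    return close(fd);
}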
[0048] Collectively, these benefits underscore the invention's potential to revolutionize the process of image generation and software loading in multicore computing environments. By addressing key challenges such as redundancy, cross-compatibility, and maintenance workload, the invention paves the way for more efficient, flexible, and user-friendly multicore computing solutions. This advancement not only contributes to the acceleration of computing processes but also enhances the scalability and adaptability of multicore systems in response to evolving technological demands.
[0049] The foregoing description of the invention has been set forth merely to illustrate the invention and is not intended to be limiting. Since modifications of the disclosed embodiments incorporating the spirit and substance of the invention may occur to persons skilled in the art, the invention should be construed to include everything within its scope.
Claims:
1. A method for simplifying image generation and loading on a custom multi-core computing platform, said method comprising:
modifying multicore link loading tool chains to enhance efficiency in the image generation and software loading process by eliminating redundant data loading during both initialization and image loading phases;
associating each image file specific to a core within said multi-core computing platform with a unique identifier and name, wherein said images are stored in a manner that allows for their retrieval through a Memory Technology Devices (MTD) subroutine of an operating system executed on a designated master core of the platform; and
facilitating rapid updates to the stored images in response to application changes by controlling the structured storage and retrieval mechanism.

2. The method as claimed in claim 1, further comprising performing image loading via a high-speed interconnect fabric among the computing elements within the custom multi-core computing platform, wherein said high-speed interconnect fabric is configured to facilitate the rapid transfer of image data between cores.

3. The method as claimed in claim 1, wherein said method is implemented on an embedded platform comprising multiple multi-core processors, said processors being interconnected through various serial interfaces to enable data exchange, and further including a system management logic designed for the efficient handling of shared resources among said multiple multi-core processors.

4. A system for simplifying image generation and loading on a custom multi-core computing platform, said system comprising:
a plurality of cores integrated within said multi-core computing platform;
a modified multicore link loading toolchain configured to eliminate redundant data loading during both initialization and image loading phases across said plurality of cores;
a storage mechanism configured to store image files corresponding to each of the plurality of cores, wherein each stored image file is associated with a unique identifier and name; and
a Memory Technology Devices (MTD) subroutine executable on an operating system running on a designated master core within the multi-core computing platform, wherein the MTD subroutine is configured to facilitate access to the stored image files, thereby enabling rapid updates to the image files in response to application changes.

5. The system as claimed in claim 4, said system further comprising:
a high-speed interconnect fabric operationally coupling the computing elements within the multi-core computing platform, wherein the high-speed interconnect fabric is configured to expedite the transfer of image data among the computing elements.

6. The system as claimed in claim 4, wherein the system is implemented on an embedded platform that includes multiple multi-core processors, these processors being interconnected through a plurality of serial interfaces to facilitate data exchange, and incorporates a system management logic specifically configured for the efficient management and allocation of shared resources across the multiple multi-core processors.

7. The system as claimed in claim 4, further comprising a programmable logic controller (PLC) equipped with an arbitration mechanism, wherein said arbitration mechanism is configured to manage and arbitrate access to shared resources among the multiple cores within the custom multi-core computing platform, ensuring equitable resource distribution and preventing resource contention.

Documents

Application Documents

# Name Date
1 202441025826-STATEMENT OF UNDERTAKING (FORM 3) [29-03-2024(online)].pdf 2024-03-29
2 202441025826-PROOF OF RIGHT [29-03-2024(online)].pdf 2024-03-29
3 202441025826-FORM 1 [29-03-2024(online)].pdf 2024-03-29
4 202441025826-FIGURE OF ABSTRACT [29-03-2024(online)].pdf 2024-03-29
5 202441025826-DRAWINGS [29-03-2024(online)].pdf 2024-03-29
6 202441025826-DECLARATION OF INVENTORSHIP (FORM 5) [29-03-2024(online)].pdf 2024-03-29
7 202441025826-COMPLETE SPECIFICATION [29-03-2024(online)].pdf 2024-03-29
8 202441025826-FORM-26 [07-06-2024(online)].pdf 2024-06-07
9 202441025826-POA [05-11-2024(online)].pdf 2024-11-05
10 202441025826-FORM 13 [05-11-2024(online)].pdf 2024-11-05
11 202441025826-AMENDED DOCUMENTS [05-11-2024(online)].pdf 2024-11-05