Abstract: SYSTEM AND METHOD FOR EDGE LEVEL TRAINING OF A MODEL. The present disclosure relates to a method for edge level training of a model by one or more processors (202). The method includes adding one or more edge servers (104) to a cluster of centralized servers. Further, the method includes receiving, at the one or more edge servers (104), data from one or more edge devices (102). Further, the method includes training, at the one or more edge servers (104), a model with the data. Further, the method includes deploying the trained model onto at least one of, the one or more edge devices (102) or the one or more edge servers (104). Further, the method includes synchronizing the trained model with the cluster of centralized servers. Ref. FIG. 5
DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
SYSTEM AND METHOD FOR EDGE LEVEL TRAINING OF A MODEL
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
FIELD OF THE INVENTION
[0001] The present invention relates to artificial intelligence (AI)/machine learning (ML) models, and more particularly to the training of AI/ML models on an edge server.
BACKGROUND OF THE INVENTION
[0002] Machine learning and artificial intelligence have seen rapid advancements, enabling various applications, such as natural language processing, image recognition, and predictive analytics. These applications often require the use of sophisticated machine learning models, such as deep neural networks. Training the machine learning models typically occurs in centralized data centers with powerful hardware, owing to their computational intensity and memory requirements.
[0003] Further, the advent of fifth generation (5G) networks and the ongoing evolution of network technologies have ushered in a new era of connectivity, enabling a vast array of applications such as augmented reality, autonomous vehicles, and industrial automation. These applications often rely on machine learning models to deliver low-latency, high-quality services.
[0004] However, machine learning model training methods are typically performed in centralized data centers. While centralized training offers high performance, it also poses several challenges, including latency, privacy concerns, and bandwidth consumption. Transmitting data to a remote server for training can introduce delays, compromise user privacy, and consume significant network resources.
[0005] Further, centralized training in the centralized data centers faces challenges in meeting the stringent latency requirements of telecom applications. Transmitting data between edge devices and remote data centers can introduce undesirable delays. Additionally, privacy and security concerns arise when sensitive data is transmitted over external networks for model training.
[0006] Hence, there exists a need for an improved method and system that enables training of AI/ML models with reduced latency and optimized resource utilization.
SUMMARY OF THE INVENTION
[0007] One or more embodiments of the present disclosure provide a system and a method for edge level training of a model.
[0008] In one aspect of the present invention, the method for edge level training of the model is provided. The method includes adding, by the one or more processors, one or more edge servers to a cluster of centralized servers. Further, the method includes receiving, by the one or more processors, at the one or more edge servers, data from one or more edge devices. Further, the method includes training, by the one or more processors, at the one or more edge servers, a model with the data. Further, the method includes deploying, by the one or more processors, the trained model onto at least one of, the one or more edge devices or the one or more edge servers. Further, the method includes synchronizing, by the one or more processors, the trained model with the cluster of centralized servers.
[0009] In an embodiment, the one or more edge servers are added to the cluster of centralized servers based on at least one of, receiving an input from a user pertaining to the one or more edge servers, or a selection, by the one or more processors, of the one or more edge servers based on at least one of, a type of data required to train the model.
[0010] In an embodiment, adding, one or more edge servers to a cluster of centralized servers, includes the step of: implementing, by the one or more processors, security measures between the one or more edge servers and the cluster of the centralized servers. The security measures include at least one of, encryption, authentication and an access control mechanism for secure communications between the one or more edge servers and the cluster of centralized servers.
[0011] In an embodiment, receiving, at the one or more edge servers, the data from one or more edge devices, includes the step of: establishing, by the one or more processors, a communication channel between the one or more edge servers and the one or more edge devices.
[0012] In an embodiment, receiving, at the one or more edge servers, the data from the one or more edge devices, further includes the step of: preprocessing, by the one or more processors, the received data for training the model.
[0013] In an embodiment, deploying, the trained model onto at least one of, the one or more edge devices or the one or more edge servers, includes the steps of: compressing, by the one or more processors, the trained model, and deploying, by the one or more processors, the compressed trained model onto at least one of, the one or more edge devices or the one or more edge servers.
[0014] In an embodiment, synchronizing, the trained model with the cluster of centralized servers, includes the step of updating, by the one or more processors, the trained model with updated data which is retrieved from the cluster of the centralized servers.
[0015] In one aspect of the present invention, the system for edge level training of a model is disclosed. The system includes an adding unit, a transceiver, a training unit, a deploying unit and a synchronizing unit. The adding unit is configured to add one or more edge servers to a cluster of centralized servers. The transceiver is configured to receive, at the one or more edge servers, data from one or more edge devices. The training unit is configured to train, at the one or more edge servers, a model with the data. The deploying unit is configured to deploy the trained model onto at least one of, the one or more edge devices or the one or more edge servers. The synchronizing unit is configured to synchronize the trained model with the cluster of centralized servers.
[0016] In one aspect of the present invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed. The computer-readable instructions cause the processor to add one or more edge servers to a cluster of centralized servers. Further, the processor receives, at the one or more edge servers, data from one or more edge devices. Further, the processor trains, at the one or more edge servers, a model with the data. Further, the processor deploys the trained model onto at least one of, the one or more edge devices or the one or more edge servers. Further, the processor synchronizes the trained model with the cluster of centralized servers.
[0017] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0019] FIG. 1 is an exemplary block diagram of an environment for edge level training of a model, according to various embodiments of the present disclosure.
[0020] FIG. 2 is a block diagram of a system of FIG. 1, according to various embodiments of the present disclosure.
[0021] FIG. 3 is an example schematic representation of the system of FIG. 1 in which the operations of various entities are explained, according to various embodiments of the present disclosure.
[0022] FIG. 4 illustrates a system architecture for edge level training of a model, in accordance with some embodiments.
[0023] FIG. 5 is an exemplary flow diagram illustrating a method for the edge level training of a model, according to various embodiments of the present disclosure.
[0024] FIG. 6 is a flow diagram illustrating an internal call flow for edge level training of a model, in accordance with some embodiments.
[0025] Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not have necessarily been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
[0026] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0027] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0028] Various modifications to the embodiments will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed herein below, is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0029] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0030] Before discussing example embodiments in more detail, it is to be noted that the drawings are to be regarded as schematic representations, and elements are not necessarily shown to scale. Rather, the various elements are represented such that their function and general purpose become apparent to a person skilled in the art. Any connection or coupling between functional blocks, devices, components, or other physical or functional units shown in the drawings or described herein may also be implemented by an indirect connection or coupling. A coupling between components may also be established over a wireless connection. Functional blocks may be implemented in hardware, firmware, software, or a combination thereof.
[0031] Further, the flowcharts provided herein describe the operations as sequential processes. Many of the operations may be performed in parallel, concurrently, or simultaneously. In addition, the order of operations may be re-arranged. The processes may be terminated when their operations are completed, but may also have additional steps not included in the figures. It should be noted that, in some alternative implementations, the functions/acts/steps noted may occur out of the order noted in the figures. For example, two operations shown in succession may, in fact, be executed substantially concurrently, or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
[0032] Further, although the terms first, second, etc. may be used herein to describe various elements, components, regions, layers and/or sections, it should be understood that these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are used only to distinguish one element, component, region, layer, or section from another element, component, region, layer, or section. Thus, a first element, component, region, layer, or section discussed below could be termed a second element, component, region, layer, or section without departing from the scope of the example embodiments.
[0033] Spatial and functional relationships between elements (for example, between modules) are described using various terms, including “connected,” “engaged,” “interfaced,” and “coupled.” Unless explicitly described as being “direct,” when a relationship between first and second elements is described in the description below, that relationship encompasses a direct relationship where no other intervening elements are present between the first and second elements, and also an indirect relationship where one or more intervening elements are present (either spatially or functionally) between the first and second elements. In contrast, when an element is referred to as being "directly” connected, engaged, interfaced, or coupled to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., "between," versus "directly between," "adjacent," versus "directly adjacent," etc.).
[0034] The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
[0035] As used herein, the singular forms “a,” “an,” and “the,” are intended to include the plural forms as well, unless the context clearly indicates otherwise. As used herein, the terms “and/or” and “at least one of” include any and all combinations of one or more of the associated listed items. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
[0036] Unless specifically stated otherwise, or as is apparent from the description, terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device/hardware, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
[0037] Any systems and methods similar or equivalent to those described herein can be used for training machine learning models directly on edge devices or at the edge of a network. In accordance with an exemplary embodiment, an edge server may be added to an existing cluster of servers, and machine learning models may further be trained at the edge server or within user devices, Network Function (NF) clusters, or customer servers.
[0038] Various embodiments of the invention provide a system and a method for training an AI/ML model at an edge server. The system and method as disclosed enable adding an edge server to a cluster of servers. In accordance with an exemplary embodiment, the method includes selecting and setting up a server to act as an edge server. The edge server may be configured to receive model updates from a plurality of edge devices. These updates may be aggregated at the edge server, which may also perform model training or fine-tuning.
[0039] Further in an aspect of the present invention, the method includes implementing security measures to ensure the integrity and confidentiality of data and model updates. The method includes encrypting, authenticating, and providing access control mechanisms to secure communication between edge devices and the edge server.
[0040] Data from the edge devices may be received and sent for model training in the cluster, which now includes the selected edge server. Further, the method includes compressing the trained model. The trained model may be compressed using techniques like model quantization in order to reduce the size of the models. Thereafter, the model is deployed locally on individual edge devices or edge servers. These local models can be periodically synchronized with a central server or with each other.
[0041] Further, edge level training ensures data privacy and integrity by enabling the use of one’s own servers as resources for model training. End point servers of a network node can also be used as edge servers, which is useful in sending network performance data with lower latency for analysis. The present disclosure enables shorter training time and optimized bandwidth usage. Further, edge level predictions are faster, resulting in quicker issue detection than previously possible. In an aspect, the present disclosure is configured to perform edge level training where user servers or NF cluster servers can also be used to train the model, thus leading to low latency and optimized bandwidth usage.
[0042] FIG. 1 illustrates an exemplary block diagram of an environment (100) for edge level training of a model, according to various embodiments of the present disclosure. The environment (100) comprises a plurality of user equipments (UEs) or edge devices (102-1, 102-2, ..., 102-n). At least one UE or edge device (102-n) from the plurality of UEs or edge devices (102-1, 102-2, ..., 102-n) is configured to connect to a system (108) via a communication network (106). Hereafter, the plurality of UEs, one or more UEs, or edge devices are labelled 102.
[0043] In accordance with yet another aspect of the exemplary embodiment, the plurality of UEs (or edge devices) (102) may be a wireless device or a communication device that may be a part of the system (108). The wireless device or the UE or edge devices (102) may include, but are not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch, a computer device, and so on), a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication or Voice Over Internet Protocol (VoIP) capabilities. In an embodiment, the UEs or edge devices (102) may include, but are not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices such as smartphones, virtual reality (VR) devices, augmented reality (AR) devices, laptops, general-purpose computers, desktops, personal digital assistants, tablet computers, mainframe computers, or any other computing device, where the computing device may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from a user such as a touch pad, a touch enabled screen, an electronic pen, and the like. It may be appreciated that the UEs or edge devices (102) are not restricted to the mentioned devices and various other devices may be used. A person skilled in the art will appreciate that the plurality of UEs or edge devices (102) may include a fixed landline, and a landline with an assigned extension within the communication network (106).
[0044] The communication network (106), may use one or more communication interfaces/protocols such as, for example, Voice Over Internet Protocol (VoIP), 802.11 (Wi-Fi), 802.15 (including Bluetooth™), 802.16 (Wi-Max), 802.22, Cellular standards such as Code Division Multiple Access (CDMA), CDMA2000, Wideband CDMA (WCDMA), Radio Frequency Identification (e.g., RFID), Infrared, laser, Near Field Magnetics, etc.
[0045] The communication network (106) includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The communication network (106) may include, but is not limited to, a Third Generation (3G) network, a Fourth Generation (4G) network, a Fifth Generation (5G) network, a Sixth Generation (6G) network, a New Radio (NR) network, a Narrow Band Internet of Things (NB-IoT) network, an Open Radio Access Network (O-RAN), and the like.
[0046] The communication network (106) may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The communication network (106) may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a VOIP or some combination thereof.
[0047] One or more network elements can be, for example, but not limited to a base station that is located in the fixed or stationary part of the communication network (106). The base station may correspond to a remote radio head, a transmission point, an access point or access node, a macro cell, a small cell, a micro cell, a femto cell, a metro cell. The base station enables transmission of radio signals to the UE or edge devices (102) or a mobile transceiver. Such a radio signal may comply with radio signals as, for example, standardized by a 3rd Generation Partnership Project (3GPP) or, generally, in line with one or more of the above listed systems. Thus, a base station may correspond to a NodeB, an eNodeB, a Base Transceiver Station (BTS), an access point, a remote radio head, a transmission point, which may be further divided into a remote unit and a central unit. The 3GPP specifications cover cellular telecommunications technologies, including radio access, core network, and service capabilities, which provide a complete system description for mobile telecommunications.
[0048] The system (108) is communicatively coupled to a server (104) via the communication network (106). The server (104) can be, for example, but not limited to a standalone server, a server blade, a server rack, an application server, a bank of servers, a business telephony application server (BTAS), a server farm, a cloud server, an edge server, home server, a virtualized server, one or more processors executing code to function as a server, or the like. In an implementation, the server (104) may operate at various entities or a single entity (include, but is not limited to, a vendor side, a service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, a defense facility side, or any other facility) that provides service.
[0049] The environment (100) further includes the system (108) communicably coupled to the server (e.g., remote server or the like) (104) and each UE of the plurality of UEs or edge devices (102) via the communication network (106). The remote server (104) is configured to execute the requests in the communication network (106).
[0050] The system (108) is adapted to be embedded within the remote server (104) or is embedded as an individual entity. The system (108) is designed to provide a centralized and unified view of data and facilitate efficient business operations. The system (108) is authorized to update/create/delete one or more parameters of the requests for the edge level training, which is reflected in real time independent of the complexity of the network.
[0051] In another embodiment, the system (108) may include an enterprise provisioning server (for example), which may connect with the remote server (104). The enterprise provisioning server provides flexibility for enterprise, e-commerce, and finance entities to update/create/delete information related to the requests for the edge level training in real time as per their business needs. A user with administrator rights can access and retrieve the requests for the edge level training and perform real-time analysis in the system (108).
[0052] The system (108) may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a business telephony application server (BTAS), a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an implementation, the system (108) may operate at various entities or a single entity (for example, including, but not limited to, a vendor side, a service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, an e-commerce side, a finance side, a defense facility side, or any other facility) that provides a service.
[0053] However, for the purpose of description, the system (108) is described as an integral part of the remote server (104), without deviating from the scope of the present disclosure. Operational and construction features of the system (108) will be explained in detail with respect to the following figures.
[0054] FIG. 2 illustrates a block diagram of the system (108) provided for edge level training of a model, according to one or more embodiments of the present invention. As per the illustrated embodiment, the system (108) includes the one or more processors (202), the memory (204), a user interface (206), a display (208), an input device (210), and the database (214). The one or more processors (202), hereinafter referred to as the processor (202), may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. As per the illustrated embodiment, the system (108) includes one processor. However, it is to be noted that the system (108) may include multiple processors as per the requirement and without deviating from the scope of the present disclosure.
[0055] Information related to the edge level training may be provided or stored in the memory (204) of the system (108). Among other capabilities, the processor (202) is configured to fetch and execute computer-readable instructions stored in the memory (204). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0056] The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as Random-Access Memory (RAM), or non-volatile memory such as Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory, and the like. In an embodiment, the system (108) may include an interface(s). The interface(s) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as input/output (I/O) devices, storage devices, and the like. The interface(s) may facilitate communication for the system. The interface(s) may also provide a communication pathway for one or more components of the system. Examples of such components include, but are not limited to, processing unit/engine(s) and the database (214). The processing unit/engine(s) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s).
[0057] The information related to the edge level training may be rendered on the user interface (206). The user interface (206) may include functionality similar to at least a portion of the functionality implemented by one or more computer system interfaces such as those described herein and/or generally known to one having ordinary skill in the art. The user interface (206) may be rendered on the display (208), implemented using Liquid Crystal Display (LCD) display technology, Organic Light-Emitting Diode (OLED) display technology, and/or other types of conventional display technology. The display (208) may be integrated within the system (108) or connected externally. Further, the input device(s) (210) may include, but are not limited to, a keyboard, buttons, scroll wheels, cursors, touchscreen sensors, audio command interfaces, a magnetic strip reader, an optical scanner, etc.
[0058] The database (214) may be communicably connected to the processor (202) and the memory (204). The database (214) may be configured to store and retrieve the request pertaining to features, or services or workflow of the system (108), access rights, attributes, approved list, and authentication data provided by an administrator. In another embodiment, the database (214) may be outside the system (108) and communicated through a wired medium and a wireless medium.
[0059] Further, the processor (202), in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (202) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory (204) may store instructions that, when executed by the processing resource, implement the processor (202). In such examples, the system (108) may comprise the memory (204) storing the instructions and the processing resource to execute the instructions, or the memory (204) may be separate but accessible to the system (108) and the processing resource. In other examples, the processor (202) may be implemented by an electronic circuitry.
[0060] In order for the system (108) to perform edge level training of a model, the processor (202) includes an adding unit (216), a transceiver (218), a pre-processing unit (220), a training unit (222), a deploying unit (224), and a synchronizing unit (226). The adding unit (216), the transceiver (218), the pre-processing unit (220), the training unit (222), the deploying unit (224), and the synchronizing unit (226) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processor (202) may comprise a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the memory (204) may store instructions that, when executed by the processing resource, implement the processor (202). In such examples, the system (108) may comprise the memory (204) storing the instructions and the processing resource to execute the instructions, or the memory (204) may be separate but accessible to the system (108) and the processing resource. In other examples, the processor (202) may be implemented by electronic circuitry.
[0061] In order for the system (108) to perform edge level training, the adding unit (216), the transceiver (218), the pre-processing unit (220), the training unit (222), the deploying unit (224), and the synchronizing unit (226) are communicably coupled to each other. In an embodiment, the adding unit (216) adds one or more edge servers (104) to a cluster of centralized servers (or server bundle (404) (as shown in FIG. 4)). The one or more servers (104) can be edge servers. The one or more edge servers (104) are added to the cluster of centralized servers based on at least one of, receiving an input from a user pertaining to the one or more edge servers (104). Consider an example: a cloud storage service operates a cluster of centralized servers to manage user data. To enhance performance and accommodate growing user demands, the cloud storage service needs to add more servers to its cluster. Further, the administrator logs into a cloud storage management dashboard. Upon logging in, the system (108) displays current server usage metrics, indicating the storage capacity and response times of the servers. Further, the administrator selects an option to "Add Servers" in the cluster. Upon selection, a dialog box appears, allowing the administrator to specify the number of servers to add, the server type (e.g., standard, high-performance), and geographical locations for optimal load distribution.
[0062] The adding unit (216) processes the user’s input and validates it against the service’s capacity planning rules. Further, the adding unit (216) checks the existing cluster's health, performance metrics, and overall load. Based on the input, the adding unit (216) initiates a provisioning process to deploy the specified number of new servers and allocate the resources (e.g., CPU, memory or the like). After deployment and allocating the resources, the administrator receives a confirmation message that the new servers have been successfully added to the cluster. The system (108) begins monitoring performance metrics in real-time to ensure the cluster operates optimally with the resources.
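By way of non-limiting illustration, the following Python sketch shows how an adding unit such as the adding unit (216) might validate an administrator's request and provision new servers, consistent with the flow described above. Every name in the sketch (AddServerRequest, the cluster object and its health_report, provision, and monitor methods, and max_expansion) is a hypothetical stand-in, not a disclosed interface.

```python
# Hedged sketch only: the cluster object and its methods are hypothetical
# stand-ins for the adding unit (216) and the cluster management plane.
from dataclasses import dataclass

@dataclass
class AddServerRequest:
    count: int        # number of edge servers to add
    server_type: str  # e.g., "standard" or "high-performance"
    region: str       # geographical location for load distribution

def add_edge_servers(cluster, request: AddServerRequest):
    """Validate an administrator's request, then provision new edge servers."""
    # Validate the input against capacity-planning rules.
    if request.count < 1 or request.count > cluster.max_expansion:
        raise ValueError("requested server count violates the capacity plan")
    # Check the existing cluster's health and load before expanding.
    if not cluster.health_report().ok:
        raise RuntimeError("cluster unhealthy; resolve issues before adding servers")
    # Provision the servers and allocate resources (CPU, memory, and the like).
    new_servers = cluster.provision(
        request.count, server_type=request.server_type, region=request.region
    )
    cluster.monitor(new_servers)  # begin real-time performance monitoring
    return new_servers
```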
[0063] In an embodiment, the adding unit (216) selects the one or more edge servers (104) based on at least one of, a type of data required to train the model. In an embodiment, the adding unit (216) adds the one or more edge servers to the cluster of centralized servers by implementing security measures between the one or more edge servers and the cluster of centralized servers. The security measures include at least one of, encrypting, authenticating and providing access control mechanisms to secure communications between the one or more edge servers (104) and the cluster of centralized servers.
[0064] In another example, a machine learning platform uses a cluster of centralized servers to train various models for different types of data (e.g., images, text, and structured data). To improve efficiency and performance, the machine learning platform needs to add new servers tailored for specific data types while ensuring secure communication. A data scientist uploads a new dataset to the machine learning platform, specifying that it consists of high-resolution images for a computer vision model. The adding unit (216) analyzes the dataset's characteristics and determines a need for additional GPU-enabled servers optimized for image processing. Based on the identified data type, the adding unit (216) selects the appropriate number of high-performance GPU servers from the cloud infrastructure. The selection criteria include server specifications (e.g., GPU type, memory, and storage) that are best suited for handling large-scale image data. Before integrating the new edge servers into the cluster, the adding unit (216) initiates a series of security protocols. The adding unit (216) sets up secure communication channels using encryption protocols (e.g., Transport Layer Security (TLS)) to protect data in transit between the new servers and the existing cluster. The adding unit (216) implements strong authentication mechanisms, such as OAuth tokens or Secure Socket Shell (SSH) key pairs, ensuring that only authorized servers can communicate with the cluster. Role-based access control (RBAC) is configured to define which users and processes have permission to access the new servers and what actions they can perform.
[0065] Once the security measures are in place, the adding unit (216) provisions and configures the new edge servers within the cluster. The adding unit (216) connects the GPU servers to the existing architecture, ensuring that they can receive data securely from the central storage and send back processed results. After the servers are added, the system runs validation tests to ensure that the new setup is functioning correctly and securely. The adding unit (216) continuously monitors the performance and security metrics of both the new servers and the centralized cluster, generating alerts for any unusual activity. The data scientist can now initiate training processes on the new edge servers, with confidence that both the data and communication between servers are secure.
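A minimal sketch of the secure-registration step described above, assuming a hypothetical HTTPS endpoint on the cluster: the request is authenticated with an OAuth bearer token, the cluster's certificate is verified over TLS, and a client certificate provides mutual authentication. The endpoint URL, certificate paths, and payload fields are assumptions for illustration; only the standard `requests` library API is used.

```python
# Illustrative only: URL, certificate paths, and payload are assumptions.
import requests

CLUSTER_URL = "https://cluster.example.com/api/v1/edge-servers"  # hypothetical

def register_edge_server(server_id: str, oauth_token: str) -> dict:
    """Register an edge server with the centralized cluster over mutual TLS."""
    response = requests.post(
        CLUSTER_URL,
        json={"server_id": server_id, "capabilities": ["gpu", "training"]},
        headers={"Authorization": f"Bearer {oauth_token}"},  # OAuth authentication
        verify="/etc/pki/cluster-ca.pem",                 # verify cluster certificate (TLS)
        cert=("/etc/pki/edge.crt", "/etc/pki/edge.key"),  # client certificate (mutual TLS)
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```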
[0066] The transceiver (218) receives historic data from one or more edge devices (102) at the one or more edge servers (104). In an embodiment, the transceiver (218) receives the historic data from the one or more edge devices (102) at the one or more edge servers (104) by establishing a communication channel between the one or more edge servers and the one or more edge devices (102). The pre-processing unit (220) preprocesses the received historic data. The training unit (222) trains a model with the historic data at the one or more edge servers (104).
[0067] In an example of a smart manufacturing environment, the edge devices (102) (such as sensors on machinery, production line cameras, and quality control devices) collect real-time data on machine performance, product quality, and operational conditions. This historic data is essential for training machine learning models to predict equipment failures and optimize production processes. Further, the transceiver (218), located in the centralized server infrastructure, establishes secure communication channels with each edge device (102) using protocols like Message Queuing Telemetry Transport (MQTT) for lightweight messaging and Hypertext Transfer Protocol Secure (HTTPS) for secure data transfer. The communication channels ensure low-latency data transmission and secure encryption to protect sensitive operational data. Once the communication channels are established, the edge devices (102) begin transmitting their historic data to the transceiver (218). The transceiver (218) receives the historic data, which may include various formats such as time-series data from sensors and image data from cameras. It processes the data, ensuring integrity and completeness (e.g., checking for missing timestamps or corrupt images). After validation, the transceiver (218) forwards the processed data to the centralized servers, where it is stored in the database (214) optimized for machine learning applications. The data is organized for easy retrieval, with timestamps, a sensor identifier (ID), and quality labels for images. With the historic data now stored in the centralized servers, the data scientists can initiate the training of machine learning models. For example, they may train a predictive maintenance model using vibration data to identify patterns indicative of equipment failure, or a computer vision model to classify product quality based on the images received. Once trained, the models are evaluated for accuracy and performance. Successful models can then be deployed back to the edge devices (102), where they can make real-time predictions (e.g., alerting operators of potential equipment malfunctions or identifying defective products on the assembly line). The edge devices (102) can continuously send new data, and the transceiver (218) can facilitate updates to the models as more data becomes available, creating a feedback loop that improves model accuracy over time.
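As a hedged example of the ingestion path just described, the sketch below subscribes to machine telemetry over MQTT with TLS enabled and applies a basic integrity check before a record is stored for training. The broker address, topic layout, payload schema, and the store_for_training helper are hypothetical; the calls follow the paho-mqtt 1.x callback API.

```python
import json
import paho.mqtt.client as mqtt  # third-party: pip install paho-mqtt

def store_for_training(record: dict) -> None:
    # Hypothetical persistence helper: forward to the edge server's data store.
    print("stored:", record["sensor_id"], record["timestamp"])

def on_connect(client, userdata, flags, rc):
    # Subscribe to telemetry from every machine once the channel is up.
    client.subscribe("factory/+/telemetry")

def on_message(client, userdata, msg):
    record = json.loads(msg.payload)
    # Integrity/completeness check (e.g., missing timestamps) before storage.
    if "timestamp" not in record or "sensor_id" not in record:
        return  # drop incomplete records
    store_for_training(record)

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.tls_set()        # encrypt the channel with TLS
client.on_connect = on_connect
client.on_message = on_message
client.connect("edge-broker.local", 8883)  # hypothetical broker, TLS port
client.loop_forever()
```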
[0068] The deploying unit (224) deploys the trained model onto at least one of, the one or more edge devices (102) or the one or more edge servers (104). In an embodiment, the deploying unit (224) deploys the trained model onto at least one of, the one or more edge devices (102) or the one or more edge servers (104) by compressing the trained model and deploying the compressed trained model onto at least one of, the one or more edge devices (102) or the one or more edge servers (104).
[0069] In an example, in a smart home system, various edge devices (102) (like smart thermostats, smart plugs, and energy monitors) collect data on energy usage patterns. The machine learning models are trained to optimize energy consumption based on this data, predicting peak usage times and suggesting energy-saving actions. The data from the edge devices (102), such as energy consumption logs and environmental factors (temperature, humidity), is collected and sent to the centralized server. The data scientists train a machine learning model to predict energy consumption patterns and recommend optimal settings for each device. Once the model achieves satisfactory performance metrics (e.g., accuracy, recall), the deploying unit (224) initiates a compression process to reduce the model's size. Techniques such as quantization, pruning, or knowledge distillation are applied to create a compressed version of the trained model, ensuring it retains performance while fitting within the resource constraints of the edge devices (102). The deploying unit (224) decides to deploy the compressed model to both the edge devices (102) and the centralized server. The compressed model is sent from the centralized server to the edge devices (102) via secure communication protocols (e.g., MQTT or HTTPS). The deploying unit (224) ensures that the model is packaged in a compatible format for each edge device (102), taking into account different hardware capabilities and software environments. Each edge device (102) receives the compressed model and unpacks it. The deployment process includes a verification and an installation. The verification ensures that the model's integrity is intact after transmission. The installation integrates the model into the device's local environment. Once deployed, the edge devices (102) begin using the model to make real-time predictions. For example, a smart thermostat predicts when to adjust heating or cooling to optimize energy consumption based on user habits and environmental data. The users receive notifications about recommended actions (e.g., adjusting settings during peak hours) through their smart home app. The edge devices (102) continue to collect new energy usage data, which is sent back to the centralized server for ongoing analysis.
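As one concrete, non-limiting instance of the compression step, the PyTorch sketch below applies dynamic quantization, one of the techniques named above, to a stand-in model, converting the Linear-layer weights to 8-bit integers before the model artifact is handed to deployment. The model architecture and file name are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Stand-in for the trained energy-consumption model described above.
model = nn.Sequential(nn.Linear(24, 64), nn.ReLU(), nn.Linear(64, 1))

# Dynamic quantization rewrites the Linear layers to use 8-bit integer
# weights, shrinking the model for resource-constrained edge devices (102).
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Persist the compressed model for deployment to edge devices/servers.
torch.save(quantized.state_dict(), "energy_model_int8.pt")
```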
[0070] The synchronizing unit (226) synchronizes the trained model with the cluster of centralized servers. In an embodiment, the synchronizing unit (226) synchronizes the trained model with the cluster of centralized servers by updating the trained model with updated historic data which is retrieved from the cluster of centralized servers. In other words, as the model gathers more real-world feedback, the data scientists can periodically retrain and update the model, repeating the compression and deployment process to enhance accuracy over time. An example of edge level training is explained in FIG. 4 to FIG. 6.
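One way the synchronizing unit's pull-based refresh might look, sketched with only the Python standard library; the version endpoint, artifact URL, and file names are assumptions rather than a disclosed protocol.

```python
# Hedged sketch: endpoint layout and file names are hypothetical.
import json
import urllib.request

CLUSTER = "https://cluster.example.com/models/energy"  # hypothetical URL
LOCAL_VERSION_FILE = "model_version.json"

def synchronize_model() -> bool:
    """Pull a newer trained model from the centralized cluster, if one exists."""
    with urllib.request.urlopen(f"{CLUSTER}/latest-version") as resp:
        remote = json.load(resp)
    try:
        with open(LOCAL_VERSION_FILE) as f:
            local = json.load(f)
    except FileNotFoundError:
        local = {"version": 0}  # no local model yet
    if remote["version"] > local["version"]:
        # Download the updated (compressed) model artifact and record its version.
        urllib.request.urlretrieve(f"{CLUSTER}/artifact", "model_int8.pt")
        with open(LOCAL_VERSION_FILE, "w") as f:
            json.dump(remote, f)
        return True   # model refreshed
    return False      # already up to date
```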
[0071] FIG. 3 is an example schematic representation of the system (300) of FIG. 1 in which the operations of various entities are explained, according to various embodiments of the present disclosure. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE or the edge device (102-1) and the system (108) for the purpose of description and illustration and should nowhere be construed as limiting the scope of the present disclosure.
[0072] As mentioned earlier, the first UE or the edge device (102-1) includes one or more primary processors (305) communicably coupled to the one or more processors (202) of the system (108). The one or more primary processors (305) are coupled with a memory (310) storing instructions which are executed by the one or more primary processors (305). Execution of the stored instructions by the one or more primary processors (305) enables the UE or the edge device (102-1) to perform its operations. The execution of the stored instructions by the one or more primary processors (305) further enables the UE or the edge device (102-1) to execute the requests in the communication network (106).
[0073] As mentioned earlier, the one or more processors (202) are configured to transmit a response content related to the edge level training to the UE or the edge device (102-1). More specifically, the one or more processors (202) of the system (108) are configured to transmit the response content to at least one of the UE or the edge device (102-1). A kernel (315) is a core component serving as the primary interface between the hardware components of the UE or the edge device (102-1) and the system (108). The kernel (315) is configured to provide the plurality of response contents hosted on the system (108) with access to resources available in the communication network (106). The resources include at least one of a Central Processing Unit (CPU) and memory components such as Random Access Memory (RAM) and Read Only Memory (ROM).
[0074] As per the illustrated embodiment, the system (108) includes the one or more processors (202), the memory (204), the input/output interface unit (206), the display (208), and the input device (210). The operations and functions of the one or more processors (202), the memory (204), the input/output interface unit (206), the display (208), and the input device (210) are already explained in FIG. 2 and, for the sake of brevity, are not repeated here. Further, the processor (202) includes the adding unit (216), the transceiver (218), the pre-processing unit (220), the training unit (222), the deploying unit (224), and the synchronizing unit (226). The operations and functions of the adding unit (216), the transceiver (218), the pre-processing unit (220), the training unit (222), the deploying unit (224), and the synchronizing unit (226) are already explained in FIG. 2 and, for the sake of brevity, are likewise not repeated here.
[0075] FIG. 4 illustrates a system architecture (400) for edge level training of the model, in accordance with some embodiments. The system architecture (400) includes a server bundle (404) (e.g., a cluster of centralized servers or the like). The cluster of centralized servers may comprise an AI/ML module. Further, the cluster of centralized servers may be communicably connected to the edge server (104). The edge server (104) may be added to the cluster of centralized servers. Further, the data for training the AI/ML model deployed in the cluster of centralized servers may be provided in the edge server (104).
[0076] In an aspect of the present invention, the pre-processing unit (220) may be configured to pre-process data stored in the edge server (104). Once the edge server (104) is added to the cluster of centralized servers, the data from the edge devices (102) is received and sent for model training in the cluster, which now includes the selected edge server (104). Further, the training unit (222) may be configured to train the AI/ML model in the edge server (104).
[0077] Storage may be limited in the edge server (104), and a model compression unit (406) may be configured to compress the trained model so that it can be stored in the limited storage of the edge server (104). The trained model may be compressed using techniques like model quantization in order to reduce the size of the model. Further, in an aspect, the model may be deployed locally on individual edge devices (102). These local models can be periodically synchronized with the central server or with each other.
[0078] FIG. 5 is an exemplary flow diagram (500) illustrating the method for edge level training of the model, according to various embodiments of the present disclosure.
[0079] At 502, the method includes adding the one or more edge servers (104) to the cluster of centralized servers. In an embodiment, the method allows the adding unit (216) to add the one or more edge servers (104) to the cluster of centralized servers.
[0080] At 504, the method includes receiving, at the one or more edge servers (104), the historic data from one or more edge devices (102). In an embodiment, the method allows the transceiver (218) to receive, at the one or more edge servers (104), the historic data from one or more edge devices (102).
[0081] At 506, the method includes training, at the one or more edge servers (104), the model with the historic data. In an embodiment, the method allows the training unit (222) to train, at the one or more edge servers (104), the model with the historic data.
[0082] At 508, the method includes deploying the trained model onto at least one of, the one or more edge devices (102) or the one or more edge servers (104). In an embodiment, the method allows the deploying unit (224) to deploy the trained model onto at least one of, the one or more edge devices (102) or the one or more edge servers (104).
[0083] At 510, the method includes synchronizing the trained model with the cluster of centralized servers. In an embodiment, the method allows the synchronizing unit (226) to synchronize the trained model with the cluster of centralized servers.
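Tying steps 502 through 510 together, the sketch below expresses the method of FIG. 5 as a single orchestration function. Every object and method here (cluster, edge_server, preprocess, compress, deploy, synchronize) is a hypothetical, duck-typed stand-in for the units of FIG. 2, not a disclosed API.

```python
def edge_level_training(cluster, edge_server, edge_devices):
    """Orchestrate steps 502-510 of FIG. 5 over duck-typed stand-in objects."""
    cluster.add(edge_server)                                 # 502: add edge server
    data = edge_server.receive(edge_devices)                 # 504: receive historic data
    model = edge_server.train(edge_server.preprocess(data))  # 506: train at the edge
    compressed = edge_server.compress(model)                 # e.g., quantization
    for target in (edge_server, *edge_devices):
        target.deploy(compressed)                            # 508: deploy trained model
    cluster.synchronize(compressed)                          # 510: synchronize with cluster
    return compressed
```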
[0084] FIG. 6 is a flow diagram (600) illustrating an internal call flow for the edge level training of the model, in accordance with some embodiments. In an embodiment, the machine learning models are trained directly on the edge devices. At 602, the method includes selecting the at least one edge server (104) to be added to the cluster of centralized servers (or server bundle (404)). Further, the AI/ML model may be deployed within the cluster of centralized servers. In an aspect of the present invention, the data for training the AI/ML model is provided within the cluster of centralized servers.
[0085] Further, upon selecting the edge server (104), the edge server (104) may be set up and added to the cluster of centralized servers (or server bundle (404)). At 604, the method includes adding the security and authentication protocol to the edge server (104), once the edge server (104) is added to the cluster of centralized servers.
[0086] At 606, the method includes pre-processing the data stored in the edge server (104). Once the edge server (104) is added to the cluster of centralized servers, the data from the edge devices (102) is received and sent for the model training in the cluster, which now includes the selected edge server (104).
[0087] Further, at 608, the method includes training the AI/ML model in the edge server (104). Further, storage may be limited in the edge server (104) or the edge devices (102). At 610, the method includes compressing the trained model so that it can be stored in the limited storage of the edge server (104) and/or the edge devices (102). The trained model may be compressed using techniques like model quantization in order to reduce the size of the model.
[0088] At 612, the method includes deploying the trained model at the edge server (104), and the edge devices (102). In an aspect, the trained model may be deployed locally on individual edge devices (102) or the edge servers (104). These local models can be periodically synchronized with the central server or with each other.
[0089] Below is the technical advancement of the present invention:
[0090] Based on the proposed method, edge-level training enables training the machine learning models directly on the edge devices (102) or at the edge of the network (106). The edge server (104) is configured to have the capability to receive model updates from a plurality of edge devices (102). Further, the method implements security measures to ensure the integrity and confidentiality of data and model updates. The system implements security measures such as encryption, authentication, and an access control mechanism to secure communications between the edge devices (102) and the edge server (104). The data from the edge devices (102) may be received and sent for training the model provided in the server cluster, which includes the server selected as the edge server. The trained model is compressed using techniques like model quantization. The model is deployed locally on individual edge devices (102) or the edge servers (104). Further, edge level training ensures data privacy and integrity by enabling the use of one’s own servers as resources for model training. End point servers of a network node can also be used as edge servers, which is useful in sending network performance data with lower latency for analysis. The present disclosure enables shorter training time and optimized bandwidth usage. Further, edge level prediction is also faster, leading to quicker issue detection. In an aspect, the present disclosure performs edge level training where user servers or NF cluster servers can also be used to train the model, thus leading to low latency and optimized bandwidth usage.
[0091] The proposed method can be implemented in autonomous vehicles, healthcare devices, and industrial IoT systems.
[0092] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIGS. 1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0094] The present invention offers multiple advantages over the prior art, and the above listed are a few examples to emphasize some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[0095] Environment – 100
[0096] Edge devices or UEs – 102, 102-1 to 102-n
[0097] Edge server – 104
[0098] Communication network – 106
[0099] System – 108
[00100] Processor – 202
[00101] Memory – 204
[00102] User interface – 206
[00103] Display – 208
[00104] Input device – 210
[00105] Database – 214
[00106] Adding unit – 216
[00107] Transceiver – 218
[00108] Pre-processing unit – 220
[00109] Training unit – 222
[00110] Deploying unit – 224
[00111] Synchronizing unit – 226
[00112] System – 300
[00113] Primary processors – 305
[00114] Memory – 310
[00115] Kernel – 315
[00116] System architecture – 400
[00117] Cluster – 402
[00118] Server bundle – 404
[00119] Model compression unit – 406
CLAIMS:
We Claim:
1. A method for edge level training of a model, the method comprising the steps of:
adding, by the one or more processors (202), one or more edge servers (104) to a cluster of centralized servers;
receiving, by the one or more processors (202), at the one or more edge servers (104), data from one or more edge devices (102);
training, by the one or more processors (202), at the one or more edge servers (104), a model with the data;
deploying, by the one or more processors (202), the trained model onto at least one of, the one or more edge devices (102) or the one or more edge servers (104); and
synchronizing, by the one or more processors (202), the trained model with the cluster of centralized servers.
2. The method as claimed in claim 1, wherein the one or more edge servers (104) are added to the cluster of centralized servers based on at least one of, receiving an input from a user pertaining to the one or more edge servers (104), or the one or more processors (202) selecting the one or more edge servers (104) based on at least a type of data required to train the model.
3. The method as claimed in claim 1, wherein the step of, adding, one or more edge servers (104) to a cluster of centralized servers, includes the step of:
implementing, by the one or more processors (202), security measures between the one or more edge servers (104) and the cluster of centralized servers, wherein the security measures include at least one of, encryption, authentication, and an access control mechanism for secure communication between the one or more edge servers (104) and the cluster of centralized servers.
4. The method as claimed in claim 1, wherein the step of, receiving, at the one or more edge servers (104), the data from the one or more edge devices (102), includes the step of:
establishing, by the one or more processors (202), a communication channel between the one or more edge servers (104) and the one or more edge devices (102).
5. The method as claimed in claim 1, wherein the step of, receiving, at the one or more edge servers (104), the data from the one or more edge devices (102), further includes the step of:
preprocessing, by the one or more processors (202), the received data for training of the model.
6. The method as claimed in claim 1, wherein the step of, deploying, the trained model onto at least one of, the one or more edge devices (102) or the one or more edge servers (104), includes the steps of:
compressing, by the one or more processors (202), the trained model; and
deploying, by the one or more processors (202), the compressed trained model onto at least one of, the one or more edge devices (102) or the one or more edge servers (104).
7. The method as claimed in claim 1, wherein the step of, synchronizing, the trained model with the cluster of centralized servers, includes the step of:
updating, by the one or more processors (202), the trained model with updated data which is retrieved from the cluster of centralized servers.
8. A system (108) for edge level training of a model, the system (108) comprising:
an adding unit (216), configured to, add, one or more edge servers (104) to a cluster of centralized servers;
a transceiver (218), configured to, receive, at the one or more edge servers (104), data from one or more edge devices (102);
a training unit (222), configured to, train, at the one or more edge servers (104), a model with the data;
a deploying unit (224), configured to, deploy, the trained model onto at least one of, the one or more edge devices (102) or the one or more edge servers (104); and
a synchronizing unit (226), configured to, synchronize, the trained model with the cluster of centralized servers.
9. The system (108) as claimed in claim 8, wherein the one or more edge servers (104) are added to the cluster of centralized servers based on at least one of, receiving an input from a user pertaining to the one or more edge servers (104), or the adding unit (216) selecting the one or more edge servers (104) based on at least a type of data required to train the model.
10. The system (108) as claimed in claim 8, wherein the adding unit (216), adds, the one or more edge servers (104) to the cluster of centralized servers, by:
implementing, security measures between the one or more edge servers (104) and the cluster of centralized servers, wherein the security measures include at least one of, encryption, authentication, and an access control mechanism to secure communications between the one or more edge servers (104) and the cluster of centralized servers.
11. The system (108) as claimed in claim 8, wherein the transceiver (218), receives, at the one or more edge servers (104), data from one or more edge devices (102), by:
establishing, a communication channel between the one or more edge servers (104) and the one or more edge devices (102).
12. The system (108) as claimed in claim 8, wherein a pre-processing unit (220), preprocesses the received data for training the model.
13. The system (108) as claimed in claim 8, wherein the deploying unit (224), deploys, the trained model onto at least one of, the one or more edge devices (102) or the one or more edge servers (104), by:
compressing, the trained model; and
deploying, the compressed trained model onto at least one of, the one or more edge devices (102) or the one or more edge servers (104).
14. The system (108) as claimed in claim 8, wherein the synchronizing unit (226), synchronizes, the trained model with the cluster of centralized servers, by:
updating, the trained model with updated data which is retrieved from the cluster of centralized servers.