
Methods And Systems For In-Line Calibration Of Distributed Edge Infrastructure

Abstract: This disclosure relates generally to methods and systems for in-line calibration of a distributed edge infrastructure. Conventional techniques for calibrating the distributed edge infrastructure are limited and manual. The present disclosure addresses these technical problems by performing a trulastic (true and elastic) calibration process for the distributed edge infrastructure, by checking the latency and time synchronization between the edge devices present in the distributed edge infrastructure. The in-line calibration of the distributed edge infrastructure is performed in two stages. In the first stage, an initial calibration is performed during the deployment of the distributed edge infrastructure. In the second stage, an adaptive calibration is performed during real-time operation. The adaptive calibration is performed seamlessly based on the complexity of the processes or the criticality of the upcoming work to be done by the distributed edge infrastructure, or based on a predefined time interval. [To be published with FIG. 3]


Patent Information

Application #
Filing Date
23 August 2021
Publication Number
08/2023
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
kcopatents@khaitanco.com
Parent Application

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point Mumbai Maharashtra India 400021

Inventors

1. DAS, Rajat Kumar
Tata Consultancy Services Limited Block -1B, Eco Space, Plot No. IIF/12 (Old No. AA-II/BLK 3. I.T) Street 59 M. WIDE (R.O.W.) Road, New Town, Rajarhat, P.S. Rajarhat, Dist - N. 24 Parganas, Kolkata West Bengal India 700160
2. SWAIN, Amit
Tata Consultancy Services Limited Block -1B, Eco Space, Plot No. IIF/12 (Old No. AA-II/BLK 3. I.T) Street 59 M. WIDE (R.O.W.) Road, New Town, Rajarhat, P.S. Rajarhat, Dist - N. 24 Parganas, Kolkata West Bengal India 700160
3. CHAKRAVARTY, Tapas
Tata Consultancy Services Limited Block -1B, Eco Space, Plot No. IIF/12 (Old No. AA-II/BLK 3. I.T) Street 59 M. WIDE (R.O.W.) Road, New Town, Rajarhat, P.S. Rajarhat, Dist - N. 24 Parganas, Kolkata West Bengal India 700160
4. BHAUMIK, Chirabrata
Tata Consultancy Services Limited Block -1B, Eco Space, Plot No. IIF/12 (Old No. AA-II/BLK 3. I.T) Street 59 M. WIDE (R.O.W.) Road, New Town, Rajarhat, P.S. Rajarhat, Dist - N. 24 Parganas, Kolkata West Bengal India 700160

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION (See Section 10 and Rule 13)
Title of invention:
METHODS AND SYSTEMS FOR IN-LINE CALIBRATION OF DISTRIBUTED EDGE INFRASTRUCTURE
Applicant
Tata Consultancy Services Limited A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India
Preamble to the description:
The following specification particularly describes the invention and the
manner in which it is to be performed.

TECHNICAL FIELD
[001] The disclosure herein generally relates to the field of edge computing, and, more particularly, to methods and systems for in-line calibration of a distributed edge infrastructure.
BACKGROUND
[002] Edge computing is a paradigm that brings computation and storage closer to the data generation. Edge computing infrastructure helps in accomplishing multiple tasks including running different signal processing algorithms, real-time response or actuation, data acquisition from multiple edge devices such as sensors, actuators and so on, illuminating the edge devices such as sensors, actuators and so on in different modes of operation on the fly, and also minimizing the usage of network bandwidth compared to other computing infrastructures. To accomplish these multiple tasks in parallel and synchronously, a distributed hardware infrastructure (hereinafter termed ‘distributed edge infrastructure’) is used, where the multiple tasks may be performed using multiple single board computing units, multiple controllers, multiple graphic processing units, and so on. However, the distributed edge infrastructure appears to be a single coherent network.
[003] The distributed edge infrastructures may be used in different applications, for example, in industries or factory floors where continuous monitoring of materials, predictive maintenance of machines, or complete factory infrastructure monitoring is required. To monitor the materials, machines or factory infrastructure, hundreds of sophisticated sensors including radars, acoustic sensors, ultrasound sensors, power meters, drones, cameras, and so on may be attached as edge devices to the distributed edge infrastructure through different interfaces. Therefore, to run the entire distributed edge infrastructure seamlessly and synchronously, the edge devices need to be periodically or adaptively checked, so that the multiple single board computing units, multiple controllers, multiple graphic processing units, and so on, collaborate efficiently. Further, for critical applications such as elderly-care applications or worker safety in hazardous chemical manufacturing industries, it is essential to monitor the performance of the deployed edge devices and to inspect and calibrate the edge devices in an automated way. Conventional techniques for calibrating the distributed edge infrastructures are limited and manual.
SUMMARY
[004] Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems.
[005] In an aspect, a processor-implemented method for in-line calibration of a distributed edge infrastructure is provided. The method includes the steps of: configuring the distributed edge infrastructure for deployment, wherein the distributed edge infrastructure comprises a master controller and one or more slave controllers, and wherein each of: (i) the master controller, and (ii) each of the one or more slave controllers, at least comprises one or more single board computing units and one or more graphic processing units; performing an initial calibration of the distributed edge infrastructure during the deployment, wherein the initial calibration comprises: sending by the master controller, a dummy data packet with a timestamp of the master controller, at each iteration, to each of the one or more slave controllers, for a predefined number of iterations; measuring a latency between the master controller and each of the one or more slave controllers, at each iteration of the predefined number of iterations; calculating statistical parameters for each of the one or more slave controllers, based on the latency measured at each iteration of the predefined number of iterations, using a data distribution function; identifying one or more faulty slave controllers out of the one or more slave controllers, based on the calculated statistical parameters corresponding to each of the one or more slave controllers, to perform corrective measures; and storing the calculated statistical parameters for each of the one or more slave controllers, in the master controller for future use; and performing an adaptive calibration of the distributed edge infrastructure during operation, wherein the adaptive calibration of the distributed edge infrastructure comprises: calculating a weighted factor (X) for each of the one or more slave controllers, by the master controller, using the statistical parameters for each of the one or more slave controllers; and performing

the adaptive calibration for the one or more slave controllers, based on the respective weighted factor (X) for each of the one or more slave controllers.
[006] In another aspect, a system for in-line calibration of a distributed edge infrastructure is provided. The system includes: a memory storing instructions; one or more Input/Output (I/O) interfaces; and one or more hardware processors coupled to the memory via the one or more I/O interfaces, wherein the one or more hardware processors are configured by the instructions to: configure the distributed edge infrastructure for deployment, wherein the distributed edge infrastructure comprises a master controller and one or more slave controllers, and wherein each of: (i) the master controller, and (ii) each of the one or more slave controllers, at least comprises one or more single board computing units and one or more graphic processing units; perform an initial calibration of the distributed edge infrastructure during the deployment, wherein the initial calibration comprises: sending by the master controller, a dummy data packet with a timestamp of the master controller, at each iteration, to each of the one or more slave controllers, for a predefined number of iterations; measuring a latency between the master controller and each of the one or more slave controllers, at each iteration of the predefined number of iterations; calculating statistical parameters for each of the one or more slave controllers, based on the latency measured at each iteration of the predefined number of iterations, using a data distribution function; identifying one or more faulty slave controllers out of the one or more slave controllers, based on the calculated statistical parameters corresponding to each of the one or more slave controllers, to perform corrective measures; and storing the calculated statistical parameters for each of the one or more slave controllers, in the master controller for future use; and perform an adaptive calibration of the distributed edge infrastructure during operation, wherein the adaptive calibration of the distributed edge infrastructure comprises: 
calculating a weighted factor (X) for each of the one or more slave controllers, by the master controller, using the statistical parameters for each of the one or more slave controllers; and performing the adaptive calibration for the one or more slave controllers, based on the respective weighted factor (X) for each of the one or more slave controllers.

[007] In yet another aspect, there is provided a computer program product comprising a non-transitory computer readable medium having a computer readable program embodied therein, wherein the computer readable program, when executed on a computing device, causes the computing device to: configure the distributed edge infrastructure for deployment, wherein the distributed edge infrastructure comprises a master controller and one or more slave controllers, and wherein each of: (i) the master controller, and (ii) each of the one or more slave controllers, at least comprises one or more single board computing units and one or more graphic processing units; perform an initial calibration of the distributed edge infrastructure during the deployment, wherein the initial calibration comprises: sending by the master controller, a dummy data packet with a timestamp of the master controller, at each iteration, to each of the one or more slave controllers, for a predefined number of iterations; measuring a latency between the master controller and each of the one or more slave controllers, at each iteration of the predefined number of iterations; calculating statistical parameters for each of the one or more slave controllers, based on the latency measured at each iteration of the predefined number of iterations, using a data distribution function; identifying one or more faulty slave controllers out of the one or more slave controllers, based on the calculated statistical parameters corresponding to each of the one or more slave controllers, to perform corrective measures; and storing the calculated statistical parameters for each of the one or more slave controllers, in the master controller for future use; and perform an adaptive calibration of the distributed edge infrastructure during operation, wherein the adaptive calibration of the distributed edge infrastructure comprises: calculating a weighted factor (X) for each of the one or more slave controllers, by the master 
controller, using the statistical parameters for each of the one or more slave controllers; and performing the adaptive calibration for the one or more slave controllers, based on the respective weighted factor (X) for each of the one or more slave controllers.
[008] In an embodiment, a reference clock source is connected to the master controller and each of the one or more slave controllers, to generate the timestamp.

[009] In an embodiment, performing the adaptive calibration for the one or more slave controllers, based on the weighted factor (X) for each of the one or more slave controllers, comprises: identifying one or more decisive slave controllers out of the one or more slave controllers having the weighted factor (X) greater than or equal to a predefined weighted factor threshold; sending by the master controller, the dummy data packet with the timestamp of the master controller, at each iteration, to each of the one or more decisive slave controllers, for the predefined number of iterations; measuring the latency between the master controller and each of the one or more decisive slave controllers, at each iteration of the predefined number of iterations; calculating the statistical parameters for each of the one or more decisive slave controllers, based on the latency measured at each iteration of the predefined number of iterations, using the data distribution function; identifying the one or more faulty slave controllers out of the one or more decisive slave controllers, based on the calculated statistical parameters corresponding to each of the one or more decisive slave controllers, to perform the corrective measures; and storing the calculated statistical parameters for each of the one or more decisive slave controllers, in the master controller for future use.
[010] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[011] The accompanying drawings, which are incorporated in and
constitute a part of this disclosure, illustrate exemplary embodiments and, together
with the description, serve to explain the disclosed principles:
[012] FIG. 1 is an exemplary block diagram of a system for in-line
calibration of a distributed edge infrastructure, in accordance with some
embodiments of the present disclosure.
[013] FIG. 2A and FIG. 2B illustrate exemplary flow diagrams of a
processor-implemented method for in-line calibration of the distributed edge
infrastructure, in accordance with some embodiments of the present disclosure.

[014] FIG. 3 is an exemplary block diagram of the distributed edge infrastructure, in accordance with some embodiments of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
[015] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
[016] A distributed edge infrastructure used for different applications needs to be calibrated and monitored for seamless operation in the given application. More specifically, the edge devices present in the distributed edge infrastructure need to be calibrated continuously, and proper actions need to be taken in case of malfunctions. Conventional techniques for calibrating the distributed edge infrastructure are limited and manual.
[017] The present disclosure herein provides methods and systems for in-line calibration of the distributed edge infrastructure, which address the technical problems by performing a trulastic (true and elastic) calibration process for the distributed edge infrastructure. Different inter- and intra-hardware components of each edge device present in the distributed edge infrastructure may have different latencies due to hardware characteristics, mechanical movements, signal disturbance, processing logic design, and so on. These latencies may change over a period of time, for example if any mechanical changes exist. Hence, in the present disclosure, the trulastic calibration process for the distributed edge infrastructure is performed by checking the latency and time synchronization between the edge devices present in the distributed edge infrastructure. More specifically, the trulastic calibration process for the distributed edge infrastructure is essentially performed by checking the time coherence between the edge devices.

[018] The in-line calibration of the distributed edge infrastructure is performed in two stages. In the first stage, an initial calibration is performed during the deployment of the distributed edge infrastructure. In the second stage, an adaptive (continuous) calibration is performed during real-time operation. At both stages, a dummy data packet is used to check the latency and time synchronization between the edge devices present in the distributed edge infrastructure.
[019] Referring now to the drawings, and more particularly to FIG. 1 through FIG. 3, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary systems and/or methods.
[020] FIG. 1 is an exemplary block diagram of a system for in-line calibration of a distributed edge infrastructure, in accordance with some embodiments of the present disclosure. In an embodiment, the system 100 includes or is otherwise in communication with one or more hardware processors 104, communication interface device(s) or input/output (I/O) interface(s) 106, and one or more data storage devices or memory 102 operatively coupled to the one or more hardware processors 104. The one or more hardware processors 104, the memory 102, and the I/O interface(s) 106 may be coupled to a system bus 108 or a similar mechanism.
[021] The I/O interface(s) 106 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface(s) 106 may include a variety of software and hardware interfaces, for example, interfaces for peripheral device(s), such as a keyboard, a mouse, an external memory, a plurality of sensor devices, a printer and the like. Further, the I/O interface(s) 106 may enable the system 100 to communicate with other devices, such as web servers and external databases.
[022] The I/O interface(s) 106 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, local area network (LAN), cable, etc., and wireless networks, such as

Wireless LAN (WLAN), cellular, or satellite. For the purpose, the I/O interface(s) 106 may include one or more ports for connecting a number of computing systems with one another or to another server computer. Further, the I/O interface(s) 106 may include one or more ports for connecting a number of devices to one another or to another server.
[023] The one or more hardware processors 104 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the one or more hardware processors 104 are configured to fetch and execute computer-readable instructions stored in the memory 102. In the context of the present disclosure, the expressions ‘processors’ and ‘hardware processors’ may be used interchangeably. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, portable computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud and the like.
[024] The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, the memory 102 includes a plurality of modules 102a and a repository 102b for storing data processed, received, and generated by one or more of the plurality of modules 102a. The plurality of modules 102a may include routines, programs, objects, components, data structures, and so on, which perform particular tasks or implement particular abstract data types.
[025] The plurality of modules 102a may include programs or computer-readable instructions or coded instructions that supplement applications or functions performed by the system 100. The plurality of modules 102a may also be used as, signal processor(s), state machine(s), logic circuitries, and/or any other device or component that manipulates signals based on operational instructions.

Further, the plurality of modules 102a can be used by hardware, by computer-readable instructions executed by the one or more hardware processors 104, or by a combination thereof. In an embodiment, the plurality of modules 102a can include various sub-modules (not shown in FIG. 1). Further, the memory 102 may include information pertaining to input(s)/output(s) of each step performed by the processor(s) 104 of the system 100 and methods of the present disclosure.
[026] The repository 102b may include a database or a data engine. Further, the repository 102b, amongst other things, may serve as a database or include a plurality of databases for storing the data that is processed, received, or generated as a result of the execution of the plurality of modules 102a. Although the repository 102b is shown internal to the system 100, it will be noted that, in alternate embodiments, the repository 102b can also be implemented external to the system 100, where the repository 102b may be stored within an external database (not shown in FIG. 1) communicatively coupled to the system 100. The data contained within such external database may be periodically updated. For example, data may be added into the external database and/or existing data may be modified and/or non-useful data may be deleted from the external database. In one example, the data may be stored in an external system, such as a Lightweight Directory Access Protocol (LDAP) directory and a Relational Database Management System (RDBMS). In another embodiment, the data stored in the repository 102b may be distributed between the system 100 and the external database.
[027] In an embodiment, the system 100 may be connected to the distributed edge infrastructure externally through a wired connection or a wireless connection. In another embodiment, the system 100 may be part of the distributed edge infrastructure for performing the in-line calibration.
[028] Referring to FIG. 2A and FIG. 2B, components and functionalities of the system 100 are described in accordance with an example embodiment of the present disclosure. For example, FIG. 2A and FIG. 2B illustrate exemplary flow diagrams of a processor-implemented method for in-line calibration of the distributed edge infrastructure, in accordance with some embodiments of the present disclosure. Although steps of the method 200 including process steps,

method steps, techniques or the like may be described in a sequential order, such processes, methods and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any practical order. Further, some steps may be performed simultaneously, or some steps may be performed alone or independently.
[029] At step 202 of the method 200, the one or more hardware processors 104 of the system 100 are configured to configure the distributed edge infrastructure for deployment. The distributed edge infrastructure may be configured for deployment in an industry or factory where continuous monitoring of materials, predictive maintenance of machines, complete factory infrastructure monitoring, and so on, is required. The distributed edge infrastructure follows a master-slave paradigm in the present disclosure. FIG. 3 is an exemplary block diagram of the distributed edge infrastructure, in accordance with some embodiments of the present disclosure. As shown in FIG. 3, the distributed edge infrastructure includes a master controller and one or more slave controllers. The master controller includes at least one or more single board computing units and one or more graphic processing units. Similarly, each of the one or more slave controllers includes at least the one or more single board computing units and the one or more graphic processing units. In an embodiment, the one or more single board computing units may be microprocessors, microcontrollers, single-chip computers, central processing units (CPUs), or a combination thereof.
[030] In an embodiment, each of the one or more slave controllers is an edge device connected with the master controller through different interfaces such as Bluetooth, wireless fidelity (Wi-Fi), universal serial bus (USB), high definition multi-media interface (HDMI), Gigabit Ethernet, and so on. The edge devices include different sensor types including radars, acoustic sensors, ultrasound sensors, power meters, drones, cameras, temperature sensors, pressure sensors, and so on. The different sensor types may depend on the type of application where the distributed edge infrastructure is required. For example, in case of a petrochemical industry, where the temperature and pressure of the chemical are to be monitored, the different sensor types include temperature sensors and pressure sensors. In an embodiment, the one or more slave controllers may be heterogeneous devices, homogeneous devices, or a combination thereof.
[031] At step 204 of the method 200, the one or more hardware processors 104 of the system 100 are configured to perform an initial calibration of the distributed edge infrastructure. The initial calibration of the distributed edge infrastructure is performed during deployment of the distributed edge infrastructure in the industry or factory. Before performing the initial calibration of the distributed edge infrastructure, each of the one or more slave controllers synchronizes itself using a timestamp of the master controller.
[032] The initial calibration of the distributed edge infrastructure is performed as mentioned through the steps 204a through 204e. At step 204a, a dummy data packet with a timestamp of the master controller is sent by the master controller, at each iteration, to each of the one or more slave controllers, for a predefined number of iterations. In an embodiment, the dummy data packet may include a predefined number of bytes of data. For example, the predefined number of bytes of data may be 100 bytes. For example, the predefined number of iterations may be 1000. If the predefined number of iterations is 1000, then 1000 dummy packets are sent to each of the one or more slave controllers, by the master controller, one at a time at each iteration.
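The packet-sending step above can be sketched as follows. This is a minimal illustration, not the disclosed implementation; the function and variable names (make_dummy_packet, send_calibration_packets, DUMMY_PAYLOAD) are assumptions for illustration only.

```python
import time

# Illustrative sketch of step 204a: the master controller sends a
# fixed-size dummy packet, stamped with its own reference clock, to
# each slave controller, once per iteration. All names are assumed.
DUMMY_PAYLOAD = bytes(100)   # predefined number of bytes, e.g. 100
NUM_ITERATIONS = 1000        # predefined number of iterations

def make_dummy_packet(reference_clock=time.time):
    """Build one dummy packet carrying the master's timestamp."""
    return {"timestamp": reference_clock(), "payload": DUMMY_PAYLOAD}

def send_calibration_packets(slaves, send):
    """Send one dummy packet per iteration to every slave controller."""
    for _ in range(NUM_ITERATIONS):
        packet = make_dummy_packet()
        for slave in slaves:
            send(slave, packet)
```

In practice the `send` callable would wrap whichever interface (Wi-Fi, USB, Gigabit Ethernet, and so on) connects the master to a given slave.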
[033] A reference clock source is connected to the master controller and each of the one or more slave controllers, to generate the timestamp. In an embodiment, the reference clock source may be one of: a clock source of the master controller, an internet clock source, a GPS clock source, or a real-time clock (RTC).
[034] At step 204b, a latency between the master controller and each of the one or more slave controllers is measured, at each iteration of the predefined number of iterations, against the corresponding dummy data packet. The latency is a measure of delay, i.e., the time difference between (i) the timestamp when the dummy data packet is sent by the master controller, and (ii) the timestamp when the dummy data packet is received at the input/output (I/O) port of each of the one or more slave controllers. The latency is used to check whether each of the one or more slave controllers is in sync with the master controller or not. At each iteration, the latency is measured for each slave controller against the corresponding dummy data packet. Hence, if the predefined number of iterations is 1000 at step 204a, then 1000 latencies are measured for each slave controller.
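The latency computation of step 204b reduces to a timestamp subtraction per packet. The sketch below assumes illustrative names and a shared reference clock, as the disclosure describes:

```python
def measure_latency(sent_timestamp, received_timestamp):
    """Latency = time the packet arrives at the slave's I/O port
    minus the master's timestamp carried inside the packet."""
    return received_timestamp - sent_timestamp

def collect_latencies(packets, receive_times):
    """One latency per iteration for one slave, each measured against
    the corresponding dummy data packet."""
    return [measure_latency(p["timestamp"], t)
            for p, t in zip(packets, receive_times)]
```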
[035] At step 204c, statistical parameters for each of the one or more slave controllers are calculated using the latency measured at each iteration of the predefined number of iterations, corresponding to each slave controller. The latencies measured for each slave controller at step 204b are transformed into a data distribution using a data distribution function. In an embodiment, the data distribution function may be a normal distribution function. The statistical parameters include mean, standard deviation, variance, mean difference, and so on. The statistical parameters are measured for each slave controller, using the latencies corresponding to the respective slave controller.
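A minimal sketch of step 204c using the Python standard library; the dictionary keys are illustrative, and only the parameters the disclosure explicitly names (mean, standard deviation, variance) are computed:

```python
import statistics

def latency_statistics(latencies):
    """Summarize one slave controller's measured latencies (step 204c)
    as the statistical parameters of their distribution."""
    return {
        "mean": statistics.mean(latencies),
        "stdev": statistics.pstdev(latencies),
        "variance": statistics.pvariance(latencies),
    }
```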
[036] At step 204d, one or more faulty slave controllers out of the one or more slave controllers are identified based on the calculated statistical parameters corresponding to each slave controller, at step 204c. Each slave controller out of the one or more slave controllers may be a faulty slave controller if any of the statistical parameters is greater than or equal to a predefined threshold. In an embodiment, the predefined threshold may be different for each of the statistical parameters. For example, if the mean of one slave controller is greater than or equal to the predefined mean threshold, then such slave controller is considered as a faulty slave controller. The one or more faulty slave controllers are identified to perform corrective measures as part of the initial calibration. The corrective measures may be repairing the faulty slave controller or replacing the faulty slave controller with a new slave controller. The initial calibration is performed only during the deployment of the distributed edge infrastructure. At step 204e, the calculated statistical parameters for each of the one or more slave controllers, at step 204c, are stored in the master controller for future use.
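The fault check of step 204d can be sketched as a per-parameter threshold comparison; the threshold values in the usage below are placeholders, not values from the disclosure:

```python
def find_faulty_slaves(stats_by_slave, thresholds):
    """Step 204d (illustrative sketch): a slave controller is flagged
    faulty if any of its statistical parameters is greater than or
    equal to the predefined threshold for that parameter."""
    return [slave for slave, stats in stats_by_slave.items()
            if any(stats[p] >= thresholds[p] for p in thresholds)]
```

For example, with placeholder thresholds `{"mean": 0.5, "stdev": 0.05}`, a slave whose mean latency reaches 0.5 would be returned for corrective measures.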

[037] At step 206 of the method 200, the one or more hardware processors 104 of the system 100 are configured to perform an adaptive calibration of the distributed edge infrastructure during an operation of the distributed edge infrastructure. The adaptive calibration is performed after the deployment, while the distributed edge infrastructure is operational. Specifically, the adaptive calibration is performed when the complexity of the processes or the criticality of the upcoming work to be done by the distributed edge infrastructure demands it. Further, the adaptive calibration is performed based on the time interval configured by the industry or the factory. For example, the adaptive calibration of the distributed edge infrastructure may be performed every 30 days.
[038] The adaptive calibration of the distributed edge infrastructure is performed as mentioned through the steps 206a and 206b. At step 206a, a weighted factor (X) is calculated for each of the one or more slave controllers, by the master controller. The weighted factor (X) is calculated for each of the one or more slave controllers using the statistical parameters for each of the one or more slave controllers. The statistical parameters for each of the one or more slave controllers are those stored in the master controller for future use. The weighted factor (X) is used to identify the slave controllers that are abnormal, or appear abnormal, over a period of time.
[039] In an embodiment, the statistical parameters for each of the one or more slave controllers may be the statistical parameters calculated during the initial calibration. In another embodiment, they may be the statistical parameters calculated during the adaptive calibration performed in the last iteration. In yet another embodiment, they may be a combination of (i) the statistical parameters calculated during the initial calibration and (ii) the statistical parameters calculated during the adaptive calibration performed in the last iteration. For example, if the adaptive calibration is performed immediately after the initial calibration, then the statistical parameters calculated during the initial calibration are considered.

[040] More specifically, the weighted factor (X) for each of the one or more slave controllers is calculated using an equation 1:

X = T1 + E6 + E3 * |a4*E4 + a5*n*E5|    (1)

wherein T1 is a time elapsed since the last calibration performed on the corresponding slave controller. The last calibration performed may be the initial calibration or the adaptive calibration performed in the last iteration. E6 is a predefined user choice defined for the slave controller by the user of the industry or factory. If the user is interested in the adaptive calibration of the slave controller, then E6 for that slave controller is '1'. If the user is not interested in the adaptive calibration of the slave controller, then E6 for that slave controller is '0'. The default value of E6 is '1', but it may be changed to '0' based on the user choice. E3 is a complexity type of each process of one or more processes to be executed by the corresponding slave controller. If the complexity type is 'low complexity event', then the value of E3 is set to '0'. If the complexity type is 'high complexity or advanced complexity event', then the value of E3 is set to '1'. Further, the value of E3 ranges between 0 and 1, based on the complexity type of each process to be executed by the corresponding slave controller. E4 is a rate of change in the latency calculated using the statistical parameters corresponding to the slave controller. The rate of change in the latency is taken from the statistical parameter having the maximum rate of change. The n is the number of processes running in each slave controller. E5 is a predefined value based on the number of processes running in each slave controller. In an embodiment, the value of E5 is set to '0' for the slave controller if the number of processes (n) running in the corresponding slave controller is less than a predefined processes threshold. Conversely, the value of E5 is set to '1' for the slave controller if the number of processes (n) running in the corresponding slave controller is greater than or equal to the predefined processes threshold. For example, the predefined processes threshold may be '10'. Lastly, a4 and a5 are predefined weights.
In an embodiment, the hypothetical value of a4 ranges between '0.31' and '0.46'. In an embodiment, the hypothetical value of a5 ranges between '0.54' and '0.69'. The hypothetical values of a4 and a5 are predefined based on several experiments conducted during the calibration process considering various hypothetical scenarios.
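Equation (1) can be expressed in code as follows; a minimal sketch in which the input values are illustrative, and the default weights are merely example values taken from within the hypothetical ranges stated above:

```python
def weighted_factor(t1, e6, e3, e4, n, e5, a4=0.4, a5=0.6):
    """Compute the weighted factor X of equation (1):
    X = T1 + E6 + E3 * |a4*E4 + a5*n*E5|.
    The defaults for a4 and a5 fall inside the hypothetical
    ranges 0.31-0.46 and 0.54-0.69 given in the description."""
    return t1 + e6 + e3 * abs(a4 * e4 + a5 * n * e5)

# Illustrative slave controller: 30 days since the last calibration,
# user opted in (E6=1), high-complexity processes (E3=1), latency rate
# of change E4=0.2, and n=12 processes (>= threshold of 10, so E5=1).
x = weighted_factor(t1=30, e6=1, e3=1, e4=0.2, n=12, e5=1)
```

Note that with E3 = 0 (only low-complexity events) the bracketed term vanishes and X reduces to T1 + E6, so such controllers accumulate weight only through elapsed time.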
[041] At step 206b, the adaptive calibration for the one or more slave controllers is performed based on the respective weighted factor (X) calculated at step 206a for each of the one or more slave controllers. Performing the adaptive calibration for the one or more slave controllers is further explained through steps 206b1 through 206b6. At step 206b1, one or more decisive slave controllers, out of the one or more slave controllers, having the weighted factor (X) greater than or equal to a predefined weighted factor threshold, are identified.
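Step 206b1 reduces to a simple threshold filter; a sketch assuming hypothetical controller names, weighted factors, and an illustrative threshold value:

```python
def select_decisive(weighted_factors, threshold):
    """Identify the decisive slave controllers: those whose weighted
    factor (X) is greater than or equal to the predefined weighted
    factor threshold (step 206b1)."""
    return [c for c, x in weighted_factors.items() if x >= threshold]

# Hypothetical weighted factors from step 206a, illustrative threshold.
decisive = select_decisive(
    {"slave-1": 12.4, "slave-2": 38.3, "slave-3": 41.0}, threshold=35.0
)
```

Only the decisive slave controllers proceed through steps 206b2 to 206b6, which keeps the adaptive calibration from re-probing every controller in the infrastructure.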
[042] At step 206b2, the dummy data packet with the timestamp of the master controller is sent by the master controller, at each iteration, to each of the one or more decisive slave controllers, for the predefined number of iterations, in a similar manner as described at step 204a. In an embodiment, the dummy data packet may include a predefined number of bytes of data. In an embodiment, the predefined number of bytes of data may be 100 bytes. For example, the predefined number of iterations may be 1000. If the predefined number of iterations is 1000, then 1000 dummy data packets are sent to each of the one or more decisive slave controllers, by the master controller, one at a time at each iteration.
[043] At step 206b3, the latency between the master controller and each of the one or more decisive slave controllers is measured, at each iteration of the predefined number of iterations, against the corresponding dummy data packet, in a similar manner as described at step 204b. At each iteration, the latency is measured for each decisive slave controller against the corresponding dummy data packet. Hence, if the predefined number of iterations is 1000 at step 206b2, then 1000 latencies are measured for each decisive slave controller.
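The send-and-measure loop of steps 204a/204b (reused at 206b2/206b3) can be sketched as below. The disclosure leaves the master-to-slave transport open, so `send_and_echo` and `fake_slave` are stand-ins, not part of the specification:

```python
import time

def measure_latencies(send_and_echo, iterations=1000, payload_bytes=100):
    """Send a dummy data packet stamped with the master controller's
    timestamp at each iteration, and measure the latency for each
    packet. `iterations` and `payload_bytes` mirror the example
    values (1000 iterations, 100 bytes) in the description."""
    latencies = []
    for _ in range(iterations):
        packet = {"timestamp": time.monotonic(), "payload": bytes(payload_bytes)}
        send_and_echo(packet)  # stands in for the master-to-slave round trip
        latencies.append(time.monotonic() - packet["timestamp"])
    return latencies

def fake_slave(packet):
    # Stand-in slave controller that echoes after a small fixed delay.
    time.sleep(0.0001)

latencies = measure_latencies(fake_slave, iterations=50)
```

In a real deployment the timestamp would come from the shared reference clock source recited in claims 2 and 6, so that the master and slave clocks are comparable.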
[044] At step 206b4, the statistical parameters for each of the one or more decisive slave controllers are calculated, using the latency measured at each iteration of the predefined number of iterations corresponding to each decisive slave controller, in a similar manner as described at step 204c. The statistical parameters are calculated for each decisive slave controller, using the latencies corresponding to the respective decisive slave controller, by using the data distribution function.
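The disclosure calls for statistical parameters derived from a data distribution function without fixing the exact set; the sketch below uses mean, standard deviation, and median as representative choices, not a mandated list:

```python
import statistics

def latency_statistics(latencies):
    """Summarize the per-iteration latencies of one slave controller
    into statistical parameters (steps 204c / 206b4). The parameter
    set here is illustrative; any parameters of the latency
    distribution could serve as the comparison basis."""
    return {
        "mean": statistics.fmean(latencies),
        "std_dev": statistics.stdev(latencies),
        "median": statistics.median(latencies),
    }

# Hypothetical latencies (milliseconds) from one calibration run.
params = latency_statistics([4.2, 4.0, 4.4, 4.1, 4.3])
```

The resulting dictionary is what the master controller stores at steps 204e/206b6 and compares against the per-parameter thresholds at steps 204d/206b5.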
[045] At step 206b5, the one or more faulty slave controllers out of the one or more decisive slave controllers are identified based on the calculated statistical parameters corresponding to each of the one or more decisive slave controllers, in a similar manner as described at step 204d.
[046] Each decisive slave controller out of the one or more decisive slave controllers may be a faulty slave controller if any one of the statistical parameters is greater than or equal to the predefined threshold. In an embodiment, the predefined threshold may be different for each of the statistical parameters. For example, if the mean of one decisive slave controller is greater than or equal to the predefined mean threshold, then such decisive slave controller is considered a faulty slave controller. The one or more faulty slave controllers are identified to perform corrective measures as part of the adaptive calibration. The corrective measures may be repairing the faulty slave controller or replacing the faulty slave controller with a new slave controller. The adaptive calibration is performed after the deployment of the distributed edge infrastructure and during the real-time operation. At step 206b6, the statistical parameters calculated for each of the one or more decisive slave controllers at step 206b4 are stored in the master controller for future use.
[047] In accordance with the present disclosure, the methods and systems for in-line calibration of the distributed edge infrastructure calibrate the distributed edge infrastructure seamlessly based on the complexity of the processes or the criticality of the upcoming work to be done by the distributed edge infrastructure, or based on the predefined time interval configured by the industry or the factory. The in-line calibration of the distributed edge infrastructure identifies the faulty hardware (slave controllers) present in the distributed edge infrastructure for corrective measures. The in-line calibration of the distributed edge infrastructure of the present disclosure is simple, economic, yet effective, as the calibration process is performed based on the latency and time synchronization. The in-line calibration of the distributed edge infrastructure of the present disclosure provides a more accurate idea of how the distributed edge infrastructure is performing in the real world, as most distributed edge infrastructures are used for an extended period of time.
[048] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[049] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g., any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g., hardware means like e.g., an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs.
[050] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer-readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[051] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[052] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD-ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[053] It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.

We Claim:
1. A processor-implemented method (200) for in-line calibration of a
distributed edge infrastructure, the method (200) comprising the steps of:
configuring, via one or more hardware processors, the distributed edge infrastructure for deployment, wherein the distributed edge infrastructure comprises a master controller and one or more slave controllers, and wherein each of: (i) the master controller, and (ii) each of the one or more slave controllers, at least comprises one or more single board computing units and one or more graphic processing units (202);
performing, via the one or more hardware processors, an initial calibration of the distributed edge infrastructure during the deployment (204), wherein the initial calibration comprises:
sending by the master controller, a dummy data packet with a timestamp of the master controller, at each iteration, to each of the one or more slave controllers, for a predefined number of iterations (204a);
measuring a latency between the master controller and each of the one or more slave controllers, at each iteration of the predefined number of iterations (204b);
calculating statistical parameters for each of the one or more slave controllers, based on the latency measured at each iteration of the predefined number of iterations, using a data distribution function (204c);
identifying one or more faulty slave controllers out of the one or more slave controllers, based on the calculated statistical parameters corresponding to each of the one or more slave controllers, to perform corrective measures (204d); and
storing the calculated statistical parameters for each of the one or more slave controllers, in the master controller for future use (204e); and

performing, via the one or more hardware processors, an adaptive calibration of the distributed edge infrastructure during operation (206), wherein the adaptive calibration of the distributed edge infrastructure comprises:
calculating a weighted factor (X) for each of the one or more slave controllers, by the master controller, using the statistical parameters for each of the one or more slave controllers (206a); and
performing the adaptive calibration for the one or more slave controllers, based on the respective weighted factor (X) for each of the one or more slave controllers (206b).
2. The method as claimed in claim 1, further comprising generating the timestamp using a reference clock source connected to the master controller and each of the one or more slave controllers.
3. The method as claimed in claim 1, further comprising calculating the weighted factor (X) for each of the one or more slave controllers using an equation, wherein the equation is:
X = T1 + E6 + E3 * |a4*E4 + a5*n*E5|
wherein T1 is a time elapsed from a last calibration performed to each of the one or more slave controllers, E6 is a predefined user choice, E3 is a complexity type of each process of one or more processes to be executed by each of the one or more slave controllers, E4 is a rate of change in the latency calculated using the statistical parameters corresponding to each of the one or more slave controllers, n is a number of processes running in each of the one or more slave controllers, E5 is a predefined value based on the number of processes (n) running in each of the one or more slave controllers, and a4 and a5 are predefined weights.

4. The method as claimed in claim 1, wherein performing the adaptive
calibration for the one or more slave controllers, based on the weighted
factor (X) for each of the one or more slave controllers, further comprises:
identifying one or more decisive slave controllers out of the one or more slave controllers having the weighted factor (X) greater than or equal to a predefined weighted factor threshold (206b1);
sending by the master controller, the dummy data packet with the timestamp of the master controller, at each iteration, to each of the one or more decisive slave controllers, for the predefined number of iterations (206b2);
measuring the latency between the master controller and each of the one or more decisive slave controllers, at each iteration of the predefined number of iterations (206b3);
calculating the statistical parameters for each of the one or more decisive slave controllers, based on the latency measured at each iteration of the predefined number of iterations, using the data distribution function (206b4);
identifying the one or more faulty slave controllers out of the one or more decisive slave controllers, based on the calculated statistical parameters corresponding to each of the one or more decisive slave controllers, to perform the corrective measures (206b5); and
storing the calculated statistical parameters for each of the one or more decisive slave controllers, in the master controller for future use (206b6).
5. A system (100) for in-line calibration of a distributed edge infrastructure,
the system (100) comprising:
a memory (102) storing instructions;
one or more Input/Output (I/O) interfaces (106); and

one or more hardware processors (104) coupled to the memory (102) via the one or more I/O interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to:
configure the distributed edge infrastructure for deployment, wherein the distributed edge infrastructure comprises a master controller and one or more slave controllers, and wherein each of: (i) the master controller, and (ii) each of the one or more slave controllers, at least comprises one or more single board computing units and one or more graphic processing units;
perform an initial calibration of the distributed edge infrastructure during the deployment, wherein the initial calibration comprises:
sending by the master controller, a dummy data packet with a timestamp of the master controller, at each iteration, to each of the one or more slave controllers, for a predefined number of iterations;
measuring a latency between the master controller and each of the one or more slave controllers, at each iteration of the predefined number of iterations;
calculating statistical parameters for each of the one or more slave controllers, based on the latency measured at each iteration of the predefined number of iterations, using a data distribution function;
identifying one or more faulty slave controllers out of the one or more slave controllers, based on the calculated statistical parameters corresponding to each of the one or more slave controllers, to perform corrective measures; and
storing the calculated statistical parameters for each of the one or more slave controllers, in the master controller for future use; and
perform an adaptive calibration of the distributed edge infrastructure during operation, wherein the adaptive calibration of the distributed edge infrastructure comprises:

calculating a weighted factor (X) for each of the one or more slave controllers, by the master controller, using the statistical parameters for each of the one or more slave controllers; and
performing the adaptive calibration for the one or more slave controllers, based on the respective weighted factor (X) for each of the one or more slave controllers.
6. The system as claimed in claim 5, further comprises a reference clock source connected to the master controller and each of the one or more slave controllers, to generate the timestamp.
7. The system as claimed in claim 5, wherein the one or more hardware processors (104) are configured to calculate the weighted factor (X) for each of the one or more slave controllers using an equation, wherein the equation is:
X = T1 + E6 + E3 * |a4*E4 + a5*n*E5|
wherein T1 is a time elapsed from a last calibration performed to each of the one or more slave controllers, E6 is a predefined user choice, E3 is a complexity type of each process of one or more processes to be executed by each of the one or more slave controllers, E4 is a rate of change in the latency calculated using the statistical parameters corresponding to each of the one or more slave controllers, n is a number of processes running in each of the one or more slave controllers, E5 is a predefined value based on the number of processes (n) running in each of the one or more slave controllers, and a4 and a5 are predefined weights.
8. The system as claimed in claim 5, wherein the one or more hardware
processors (104) are configured to perform the adaptive calibration for the
one or more slave controllers, based on the weighted factor (X) for each of
the one or more slave controllers, by:

identifying one or more decisive slave controllers out of the one or more slave controllers having the weighted factor (X) greater than or equal to a predefined weighted factor threshold;
sending by the master controller, the dummy data packet with the timestamp of the master controller, at each iteration, to each of the one or more decisive slave controllers, for the predefined number of iterations;
measuring the latency between the master controller and each of the one or more decisive slave controllers, at each iteration of the predefined number of iterations;
calculating the statistical parameters for each of the one or more decisive slave controllers, based on the latency measured at each iteration of the predefined number of iterations, using the data distribution function;
identifying the one or more faulty slave controllers out of the one or more decisive slave controllers, based on the calculated statistical parameters corresponding to each of the one or more decisive slave controllers, to perform the corrective measures; and
storing the calculated statistical parameters for each of the one or more decisive slave controllers, in the master controller for future use.

Documents

Application Documents

# Name Date
1 202121038175-STATEMENT OF UNDERTAKING (FORM 3) [23-08-2021(online)].pdf 2021-08-23
2 202121038175-REQUEST FOR EXAMINATION (FORM-18) [23-08-2021(online)].pdf 2021-08-23
3 202121038175-PROOF OF RIGHT [23-08-2021(online)].pdf 2021-08-23
4 202121038175-FORM 18 [23-08-2021(online)].pdf 2021-08-23
5 202121038175-FORM 1 [23-08-2021(online)].pdf 2021-08-23
6 202121038175-FIGURE OF ABSTRACT [23-08-2021(online)].jpg 2021-08-23
7 202121038175-DRAWINGS [23-08-2021(online)].pdf 2021-08-23
8 202121038175-DECLARATION OF INVENTORSHIP (FORM 5) [23-08-2021(online)].pdf 2021-08-23
9 202121038175-COMPLETE SPECIFICATION [23-08-2021(online)].pdf 2021-08-23
10 202121038175-FORM-26 [21-10-2021(online)].pdf 2021-10-21
11 Abstract1.jpg 2022-03-15
12 202121038175-FER.pdf 2024-01-18
13 202121038175-OTHERS [28-05-2024(online)].pdf 2024-05-28
14 202121038175-FORM-26 [28-05-2024(online)].pdf 2024-05-28
15 202121038175-FER_SER_REPLY [28-05-2024(online)].pdf 2024-05-28
16 202121038175-DRAWING [28-05-2024(online)].pdf 2024-05-28
17 202121038175-CLAIMS [28-05-2024(online)].pdf 2024-05-28
18 202121038175-ABSTRACT [28-05-2024(online)].pdf 2024-05-28
19 202121038175-ORIGINAL UR 6(1A) FORM 26-100624.pdf 2024-06-12

Search Strategy

1 202121038175searchE_17-01-2024.pdf