
Plug And Play Integration System And Method Thereof

Abstract: The present disclosure provides a system (108) and a method (400) for plug and play integration of a data source with a network. The method (400) includes receiving (402) data from a new data source and one or more configuration information associated with the new data source via an API. The method (400) includes automatically configuring (404) the new data source based on the one or more received configuration information. The method (400) includes processing (406) the received data corresponding to the configurated new data source. The method (400) includes storing (408) the processed data in a database. Figure 3


Patent Information

Application #: 202321047047
Filing Date: 12 July 2023
Publication Number: 03/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Parent Application:

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. BHATNAGAR, Aayush
Tower-7, 15B, Beverly Park, Sector-14 Koper Khairane, Navi Mumbai - 400701, Maharashtra, India.
2. MURARKA, Ankit
W-16, F-1603, Lodha Amara, Kolshet Road, Thane West - 400607, Maharashtra, India.
3. KOLARIYA, Jugal Kishore
C 302, Mediterranea CHS Ltd, Casa Rio, Palava, Dombivli - 421204, Maharashtra, India.
4. KUMAR, Gaurav
1617, Gali No. 1A, Lajjapuri, Ramleela Ground, Hapur - 245101, Uttar Pradesh, India.
5. SAHU, Kishan
Ajay Villa, Gali No. 2, Ambedkar Colony, Bikaner - 334003, Rajasthan, India.
6. VERMA, Rahul
A-154, Shradha Puri Phase-2, Kanker Khera, Meerut - 250001, Uttar Pradesh, India.
7. MEENA, Sunil
D-29/1, Chitresh Nagar, Borkhera, District - Kota - 324001, Rajasthan, India.
8. GURBANI, Gourav
I-1601, Casa Adriana, Downtown, Palava Phase 2, Dombivli - 421204 Maharashtra, India.
9. CHAUDHARY, Sanjana
Jawaharlal Road, Muzaffarpur - 842001, Bihar, India.
10. GANVEER, Chandra Kumar
Village - Gotulmunda, Post - Narratola, Dist. - Balod - 491228, Chhattisgarh, India.
11. DE, Supriya
G2202, Sheth Avalon, Near Jupiter Hospital Majiwada, Thane West - 400601, Maharashtra, India.
12. KUMAR, Debashish
Bhairaav Goldcrest Residency, E-1304, Sector 11, Ghansoli, Navi Mumbai - 400701, Maharashtra, India.
13. TILALA, Mehul
64/11, Manekshaw Marg, Manekshaw Enclave, Delhi Cantonment, New Delhi - 110010, India.
14. KALIKIVAYI, Srinath
3-61, Kummari Bazar, Madduluru Village, S N Padu Mandal, Prakasam District, Andhra Pradesh - 523225, India.
15. PANDEY, Vitap
D 886, World Bank Barra, Kanpur - 208027, Uttar Pradesh, India.

Specification

FORM 2
THE PATENTS ACT, 1970
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
TITLE OF THE INVENTION
PLUG AND PLAY INTEGRATION SYSTEM AND METHOD THEREOF
APPLICANT
JIO PLATFORMS LIMITED, Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India; Nationality: India
The following specification particularly describes
the invention and the manner in which
it is to be performed

RESERVATION OF RIGHTS
[001] A portion of the disclosure of this patent document contains
material, which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, integrated circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
TECHNICAL FIELD
[002] The present disclosure relates to a field of communications
network, and specifically to a system and a method for plug and play integration of a data source with the communications network.
DEFINITION
[003] As used in the present disclosure, the following terms are generally
intended to have the meaning as set forth below, except to the extent that the context in which they are used indicates otherwise.
[004] The term ‘plug and play integration’ as used herein, refers to a
system design that allows new data sources to be seamlessly integrated into the system with minimal manual intervention and without the need for extensive reconfiguration or code change.
[005] The term ‘data source’ as used herein, refers to any origin from
which data is obtained. This may include, but is not limited to, databases, files, sensors, APIs, and other systems that generate or store data which can be processed by the system.

[006] The term ‘normalizing’ as used herein, refers to a process of
transforming data from various sources into a consistent, standardized format.
[007] The term ‘configuration information’ as used herein, refers to a set
of parameters and metadata provided to the system that describes a format, structure, and handling requirements of the data source. This may include details such as data format, delimiters, field names, data types, and nested elements.
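By way of illustration only, such configuration information may be represented as in the following minimal Python sketch; the key names and values shown are assumptions for illustration and are not prescribed by the present disclosure.

```python
# Illustrative only: a hypothetical configuration-information payload for one
# new data source; the key names below are assumptions, not a prescribed API.
example_configuration = {
    "data_format": "CSV",                  # format of the incoming data
    "delimiter": ",",                      # character separating fields
    "field_names": ["id", "timestamp", "temperature"],
    "data_types": {                        # expected data type of each field
        "id": "integer",
        "timestamp": "date",
        "temperature": "float",
    },
    "nested_elements": None,               # e.g., depth and parent-child details for JSON/XML
}
```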
[008] The term ‘predefined schema’ as used herein, refers to a predefined
structure or model that specifies the organization of data fields and their respective data types. The predefined schema includes information about data types (e.g., integer, string, float), data fields (e.g., name, address), and relationships between data elements. The predefined schema ensures that the data aligns with the expected format.
[009] The term ‘API’ as used herein, refers to an Application
Programming Interface. The API is a set of protocols, routines, and tools for building software and applications which specifies how software components should interact and allows for the integration of different systems by enabling them to communicate with each other.
BACKGROUND
[0010] The following description of related art is intended to provide
background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is to be used only to enhance the reader's understanding of the present disclosure, and not as an admission of prior art.
[0011] In general, data parsing is a process of converting data from one
format to another. Data parsing is widely used for data structuring and is generally done to make existing, often unstructured, unreadable data more

comprehensible. For example, if a user views a HyperText Markup Language (HTML) file that is challenging to read and comprehend, data parsing helps convert the HTML file into a more readable format, such as plain text, which the user can easily understand.
[0012] With hundreds of vendors in a network, writing a custom
adapter for each node is not feasible, especially when data formats and semantics change from vendor to vendor. As such, it is important for a normalization system to self-adapt to the various data formats and semantics of different vendors. In legacy or existing normalization systems, every integration of data parsing requires code-level changes and integration efforts. Further, the existing normalization systems are dependent on writing custom adapters for each vendor, product, or interface.
[0013] There is, therefore, a need in the art to provide a method and a
system that can overcome the shortcomings of the existing prior art.
OBJECTS OF THE PRESENT DISCLOSURE
[0014] It is an object of the present disclosure to provide a system and a
method that includes a normalization layer having a plug and play integration approach for data parsing, which streamlines the process of integrating and extracting data from various sources.
[0015] It is an object of the present disclosure to provide a system and a
method that self-adapts to various data formats and semantics of different vendors, thereby acting as a plug and play module.
[0016] It is an object of the present disclosure to provide a system and a
method that may be connected with any vendor and handle any type of data for ingestion, enrichment, and normalization through one of a user interface (UI) configuration or an application programming interface (API).

[0017] It is an object of the present disclosure to provide a system and a
method that allows for seamless integration and parsing of data from various sources with minimal configurations in the system, eliminating the need for code-level changes.
[0018] It is an object of the present disclosure to provide a system and a
method that supports efficient data indexing, facilitating faster retrieval and querying of processed data stored in a database.
[0019] It is an object of the present disclosure to increase the system
flexibility and make the system more versatile in adjusting to the format of input data.
[0020] It is an object of the present disclosure to reduce manual effort and
configuration time by automating the data integration and the normalization processes.
SUMMARY
[0021] In an exemplary embodiment, a method for integration of a data
source with a network is described. The method includes receiving, by a processing engine, data from a new data source and one or more configuration information associated with the new data source via an application programming interface (API). The method includes automatically configuring, by the processing engine, the new data source based on the one or more received configuration information. The method includes processing, by the processing engine, the received data corresponding to the configurated data source. The method further includes storing, by the processing engine, the processed data in a database.
[0022] In some embodiments, the one or more configuration information
includes at least one of a data format, a delimiter, a field name within the data, a data type for each field, and details of handling a nested element.

[0023] In some embodiments, the automatically configuring includes
analyzing the one or more configuration information to determine the data format and a corresponding data structure of the received data source, defining one or more parsing rules to separate the field name within the data source based on the determined data format and the data structure, mapping the field name to a corresponding data type within a predefined schema, and establishing a connection to process the data based on the one or more defined parsing rules and the mapped data field name.
[0024] In some embodiments, the processing includes parsing the data
corresponding to the configurated data source, normalizing the parsed data to provide a standardized format, and verifying the normalized data with respect to the predefined schema specified in the configuration information.
[0025] In some embodiments, the method further includes indexing the
processed data, wherein the indexing facilitates faster retrieval of the processed data stored in the database.
[0026] In some embodiments, the method further includes providing the
processed data to one or more external devices.
[0027] In another exemplary embodiment, a system for integration of a
data source with a network is described. The system comprises a memory, and a processing engine communicatively coupled with the memory, configured to receive data from a new data source and one or more configuration information associated with the new data source via an application programming interface (API). The processing engine is configured to automatically configure the new data source based on the one or more received configuration information. The processing engine is configured to process the received data corresponding to the configurated data source. The processing engine is configured to store the processed data in a database.

[0028] In some embodiments, the one or more configuration information
includes at least one of a data format, a delimiter, a field name within the data, a data type for each field, and details of handling a nested element.
[0029] In some embodiments, to automatically configure the new data
source, the processing engine is configured to analyze the one or more configuration information to determine the data format and a corresponding data structure of the received data source, define one or more parsing rules to separate the field name within the data source based on the determined data format and the data structure, map the field name to a corresponding data type within a predefined schema, and establish a connection to process the data based on the one or more defined parsing rules and the mapped data field name.
[0030] In some embodiments, to process the data, the processing engine is
configured to parse the data corresponding to the configurated new data source, normalize the parsed data to provide a standardized format, and verify the normalized data with respect to the predefined schema specified in the one or more configuration information.
[0031] In some embodiments, the processing engine is configured to index
the processed data, wherein the indexing facilitates faster retrieval of the processed data stored in the database.
[0032] In some embodiments, the processing engine is configured to
provide the processed data to one or more external devices.
[0033] In another exemplary embodiment, a user equipment (UE) is
described. The UE is communicatively coupled with a network, the coupling comprises steps of receiving, by the network, a connection request from the UE, sending, by the network, an acknowledgment of the connection request to the UE, and transmitting a plurality of signals in response to the connection request, wherein the network is configured for performing a method for integration of a data source.

[0034] The foregoing general description of the illustrative embodiments
and the following detailed description thereof are merely exemplary aspects of the teachings of this disclosure and are not restrictive.
BRIEF DESCRIPTION OF THE DRAWINGS
[0035] The accompanying drawings, which are incorporated herein, and
constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes the disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0036] FIG. 1 illustrates an exemplary network architecture in which or
with which embodiments of the present disclosure may be implemented.
[0037] FIG. 2 illustrates a block diagram of a system configured for plug
and play integration of a data source, in accordance with an embodiment of the present disclosure.
[0038] FIG. 3 illustrates an exemplary process flow of the system for plug
and play integration of the data source, in accordance with embodiments of the present disclosure.
[0039] FIG. 4 illustrates a flow chart of a method for plug and play
integration of the data source, in accordance with an embodiment of the present disclosure.
[0040] FIG. 5 illustrates an exemplary computer system in which or with
which embodiments of the present disclosure may be implemented.

LIST OF REFERENCE NUMERALS
100 – Network architecture
102-1 and 102-2 – Users
104-1 and 104-2 – User Equipments
106 – Network
108 – System
202 – Processor
204 – Memory
206 – Interface(s)
208 – Processing engine
210, 308 – Database
212 – Receiving module
214 – Configuring module
216 – Normalization module
218 – Other engines
300 – Block diagram
306-1 and 306-2 – Normalization modules
500 – Computing system
510 – External Storage Device
520 – Bus
530 – Main Memory
540 – Read Only Memory
550 – Mass Storage Device
560 – Communication Port
570 – Processor
DETAILED DESCRIPTION
[0041] In the following description, for the purposes of explanation,
various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the invention as set forth.
[0042] Specific details are given in the following description to provide a
thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order
not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.

[0043] Also, it is noted that individual embodiments may be described as a
process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0044] The word “exemplary” and/or “demonstrative” is used herein to
mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is
not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
[0045] Reference throughout this specification to “one embodiment” or
“an embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment
is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

[0046] The terminology used herein is for the purpose of describing
particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
[0047] The present disclosure may provide a system and a method that
includes a normalization module with a plug and play integration approach for data parsing. The plug and play integration approach streamlines a process of integrating and extracting data from various data sources. The present disclosure allows for seamless integration and parsing of data from the various data sources with minimal configurations, eliminating a need for code-level changes. Further, the present disclosure may increase the system flexibility and make the system more versatile to adjust to different data formats and data sources.
[0048] The various embodiments of the present disclosure will be
explained in detail with reference to FIGS. 1 to 5.
[0049] FIG. 1 illustrates an exemplary network architecture (100) in which
or with which embodiments of the present disclosure may be implemented.
[0050] Referring to FIG. 1, the network architecture (100) may include
one or more computing devices or user equipments (UEs) (104-1, 104-2…104-N) associated with one or more users (102-1, 102-2…102-N) in an environment. A person of ordinary skill in the art will understand that one or more users (102-1, 102-2…102-N) may be individually referred to as the user (102) and collectively referred to as the users (102). Similarly, a person of ordinary skill in the art will understand that one or more UEs (104-1, 104-2…104-N) may be individually

referred to as the UE (104) and collectively referred to as the UEs (104). A person of ordinary skill in the art will appreciate that the terms “computing device(s)” and “user equipment” may be used interchangeably throughout the disclosure. Although three UEs (104) are depicted in FIG. 1, any number of UEs (104) may be included without departing from the scope of the ongoing description.
[0051] In an embodiment, the UE (104) may include smart devices
operating in a smart environment, for example, an Internet of Things (IoT) system. In such an embodiment, the UE (104) may include, but is not limited to, smart phones, smart watches, smart sensors (e.g., mechanical, thermal, electrical, magnetic, etc.), networked appliances, networked peripheral devices, networked lighting system, communication devices, networked vehicle accessories, networked vehicular devices, smart accessories, tablets, smart television (TV), computers, smart security system, smart home system, other devices for monitoring or interacting with or for the users (102) and/or entities, or any combination thereof. A person of ordinary skill in the art will appreciate that the UE (104) may include, but is not limited to, intelligent, multi-sensing, network-connected devices, that can integrate seamlessly with each other and/or with a central server or a cloud-computing system or any other device that is network-connected.
[0052] In an embodiment, the UE (104) may include, but is not limited to,
a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like. In an embodiment, the user equipment (104) may include, but is not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above

devices such as virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device, wherein the UE (104) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from the user (102) or the entity such as touch pad, touch enabled screen, electronic pen, and the like. A person of ordinary skill in the art will appreciate that the UE (104) may not be restricted to the mentioned devices and various other devices may be used.
[0053] Referring to FIG. 1, the UE (104) may communicate with a system
(108), for example, a plug and play integration system, through a network (106). In particular, the UE (104) may be communicatively coupled with the network (106), the coupling comprises steps of receiving, by the network (106), a connection request from the UE (104), sending, by the network (106), an
acknowledgment of the connection request to the UE (104) and transmitting a plurality of signals in response to the connection request. The plurality of signals is responsible for communicating with the system (108) to enable plug and play integration of a new data source.
[0054] The system (108) may be designed to self-adapt to various data
formats and semantics from different vendors. The system (108) may connect with any vendor and handle any type of data for ingestion, enrichment, and normalization through a User Interface (UI) configuration. The system (108) may require one-time provisioning of the data source details (also referred to as configuration information) through exposed Application Programming Interfaces (APIs). The data source details may include, but are not limited to, a file type, a file format, a delimiter, nested element (NE) details, and a field name. Once these details are entered and configured, the plug and play integration system (108) may process data from the newly configured source. The plug and play integration system (108) simplifies configuration, reduces manual effort, and promotes

interoperability, enabling efficient data integration across diverse systems and formats.
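As a non-limiting sketch of how such one-time provisioning might be exercised by a client, assuming a hypothetical REST endpoint and payload field names that are not specified by the present disclosure:

```python
# Illustrative sketch only: provisioning the data source details once through a
# hypothetical REST endpoint; the path, payload keys, and server are assumptions.
import json
from urllib import request

def provision_data_source(base_url: str, source_details: dict) -> int:
    """POST the one-time data source details (file type, file format, delimiter,
    nested element details, field names) to the integration system."""
    req = request.Request(
        url=f"{base_url}/datasources",          # hypothetical provisioning path
        data=json.dumps(source_details).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:          # HTTP status of the provisioning call
        return resp.status

# Example usage (against a hypothetical deployment):
# status = provision_data_source("http://integration.example", {
#     "file_type": "log", "file_format": "CSV", "delimiter": ",",
#     "nested_element_details": None, "field_names": ["node_id", "metric", "value"],
# })
```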
[0055] In an embodiment, the network (106) may include at least one of a
Fifth Generation (5G) network, 6G network, or the like. The network (106) may enable the UE (104) to communicate with other devices in the network architecture (100) and/or with the system (108). The network (106) may include a wireless card or some other transceiver connection to facilitate this communication. In another embodiment, the network (106) may be implemented as, or include any of a variety of different communication technologies such as a wide area network (WAN), a local area network (LAN), a wireless network, a mobile network, a Virtual Private Network (VPN), the Internet, the Public Switched Telephone Network (PSTN), or the like.
[0056] Although FIG. 1 shows exemplary components of the network
architecture (100), in other embodiments, the network architecture (100) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or alternatively, one or more components of the network architecture (100) may perform functions described as being performed by one or more other components of the network architecture (100).
[0057] FIG. 2 illustrates a block diagram (200) of the plug and play
integration system (108), in accordance with an embodiment of the present disclosure. The plug and play integration system (108) may be configured for providing the plug and play integration of a data source for data parsing.
[0058] In an aspect, the plug and play integration system (108) may
include one or more processor(s) (202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, one or more processor(s) (202)

may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the plug and play integration system (108). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as Random-Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like.
[0059] In an embodiment, the plug and play integration system (108) may
include an interface(s) (206). The interface(s) (206) may include a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) (206) may facilitate communication of the system (108). The interface(s) (206) may also provide a
15 communication pathway for one or more components of the system (108). Examples of such components include, but are not limited to, a processing engine (208) and a database (210).
[0060] The processing engine (208) may be implemented as a combination
of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine (208). In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine (208) may include a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine (208). In such examples, the plug and play integration system (108) may include the machine-readable storage medium storing the instructions and the processing resource to

execute the instructions, or the machine-readable storage medium may be separate but accessible to the plug and play integration system (108) and the processing resource. In other examples, the processing engine (208) may be implemented by an electronic circuitry.
[0061] In an embodiment, the processing engine (208) may include one or
more modules selected from any of a receiving module (212), a configuring module (214), a normalization module (216) and other engine(s) (218) having functions that may include, but are not limited to, testing, storage, and peripheral functions, such as a wireless communication unit for remote operation, display unit for visualization and the like.
[0062] In order to enable the plug and play integration, initially, the
receiving module (212) may receive data from a new data source via an application programming interface (API). The new data source may refer to an origin of the data being integrated for a first time. The new data source may be
any system or entity generating data that a user wants to incorporate into the plug and play integration system (108). Examples of the new data source may include, but are not limited to, databases, public APIs, social media platforms, financial data providers, sensors in IoT devices, logs from network equipment, data feeds from financial markets, transactional records from retail systems, or even structured files from different enterprise applications.
[0063] The data received from the new data source may come in various
formats depending on the data source. Examples of such data may include, but are not limited to, structured data files (e.g., CSV (comma-separated values), JSON (JavaScript Object Notation), or XML (Extensible Markup Language)), unstructured data files (e.g., plain text files or log files), sensor data (e.g., readings from temperature sensors, humidity sensors, motion sensors, and the like), financial data (e.g., real-time or batch feeds of stock prices, trades, or economic indicators), and transactional data (e.g., records of sales, purchases, or user interactions from e-commerce platforms).

[0064] To facilitate the integration of the new data source, the plug and
play integration system (108) requires specific configuration information. Thus, the receiving module (212) may receive one or more configuration information associated with the new data source. This information may be provided by a user through the API or a user interface. The one or more configuration information includes details, for example, but not limited to, a data format, a delimiter, a field name within the data, a data type for each field, and details of handling a nested element.
[0065] The data format specifies the format of the data being received
(e.g., CSV, JSON, XML, HTML, or EXCEL). Understanding the data format is essential for parsing and processing the data correctly. The delimiter configuration indicates a character used to separate fields within the data (e.g., comma for CSV). Common delimiters may include commas, tabs, and semicolons. The field names may include various attributes such as identifiers, timestamps, data values, status indicators, and any other relevant metadata. For instance, field names may include unique IDs, customer names, product codes, transaction dates, and quantities. This information allows the system (108) to map these fields to the appropriate data types and schema. The data type for each field specifies the expected type of data for each field, such as integer, float, string, date, and the like. This information is important for data verification and data processing. Additionally, for network-related data sources, the details of handling the nested element may include specifics about nested elements, such as hierarchical structure, depth of nesting, and relationships between parent and child elements. This configuration information may specify how to traverse and parse each level of the nested structure.
[0066] The configuring module (214) utilizes the one or more configuration
information received by the receiving module (212) to seamlessly integrate and prepare data for processing. In particular, once the configuration information is provided, the configuring module (214) may automatically initiate a configuration process for the new data source. To automatically configure the new data source,

the configuring module (214) may first analyze the one or more received configuration information to understand the specifics of the new data source. This may include determining a data format (e.g., CSV, JSON, XML, or other supported format) and a corresponding data structure associated with that format. The corresponding data structure defines how the data elements are logically arranged within the chosen format. For example, a CSV file typically has a tabular structure with rows and columns, while JSON may represent data in a hierarchical manner.
[0067] Once the data format and the corresponding data structure is
determined, the configuring module (214) may further define one or more parsing rules to separate the field name within the data source. The one or more parsing rules may specify how to separate individual data elements (e.g., field name) within the new data source. The one or more parsing rules may depend on the identified format. For example, CSV format data may use a comma (“,”) as a delimiter to separate data elements within each row, while JSON format data may rely on brackets (“{}”) to define the structure.
[0068] Upon defining the parsing rules, the configuring module (214) may
further map each field name to a corresponding data type within a predefined schema. The predefined schema essentially defines an expected format for data stored within the database. The mapping process ensures that the system understands what kind of data each field represents (e.g., text, number, or date) and may handle it appropriately during processing and storage.
[0069] Finally, the configuring module (214) may establish a connection
to process the data based on the one or more defined parsing rules and the mapped data field name. With the established connection, the system (108) is now prepared to efficiently process the newly integrated data source.
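The configuration steps described above may be sketched, purely for illustration and under the assumption of a small in-process Python implementation with hypothetical names, as follows:

```python
# Illustrative sketch of the configuration steps described above; the class and
# function names are hypothetical and assume a small in-process implementation.
import json
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ConfiguredSource:
    data_format: str
    parse_record: Callable[[str], List[str]]  # parsing rule derived from the format
    schema: dict                              # field name -> expected data type

def configure_source(config: dict) -> ConfiguredSource:
    # Analyze the configuration information to determine format and structure.
    data_format = config["data_format"].upper()
    if data_format == "CSV":
        delimiter = config.get("delimiter", ",")
        def parse_record(line: str) -> List[str]:
            return line.rstrip("\n").split(delimiter)
    elif data_format == "JSON":
        def parse_record(line: str) -> List[str]:
            return [str(value) for value in json.loads(line).values()]
    else:
        raise ValueError(f"unsupported format: {data_format}")
    # Map each field name to its expected data type within the predefined schema.
    schema = dict(config["data_types"])
    # The returned object plays the role of the established "connection".
    return ConfiguredSource(data_format, parse_record, schema)
```

For a CSV source, for example, configure_source({"data_format": "CSV", "delimiter": ",", "data_types": {"id": "integer"}}) would return a parser that splits each record on commas, together with the field-to-type mapping.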
[0070] Once the data source is configured by the configuring module
(214), the normalization module (216) takes over to process the actual data. In particular, the normalization module (216) may process the received data

corresponding to the configurated new data source. In an embodiment, to process the data, the normalization module (216) may first parse the data corresponding to the configurated new data source. In parsing, the normalization module (216) may apply the one or more parsing rules defined by the configuring module (214) to break down the incoming data into its constituent fields. Parsing is essential for extracting meaningful information from a raw data format, whether it be CSV, JSON, XML, or any other format.
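Purely as an illustration of the parsing step, and assuming hypothetical field names and record content, a CSV record may be broken into its constituent fields as follows:

```python
# Illustrative only: applying a CSV parsing rule to break a raw record into its
# constituent fields (the record content and field names are assumptions).
raw_record = "17,2024-06-01T10:15:00,23.4"
field_names = ["id", "timestamp", "temperature"]

values = raw_record.split(",")             # parsing rule for a comma-delimited source
parsed = dict(zip(field_names, values))    # field name -> raw string value
# parsed == {"id": "17", "timestamp": "2024-06-01T10:15:00", "temperature": "23.4"}
```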
[0071] After parsing, the normalization module (216) may proceed to
normalize the data. Normalization may include transforming the parsed data into a standardized format that is consistent across all the data sources. The standardized format makes it easier to integrate, compare, and analyze data from diverse data sources. For example, dates may be converted to a uniform date-time format, or units of measurement may be standardized.
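A minimal sketch of such normalization, assuming hypothetical field names and conversion rules (date strings to ISO 8601, Fahrenheit to Celsius), is given below for illustration only:

```python
# Illustrative sketch of normalization: converting dates to a uniform date-time
# format and standardizing a unit of measurement (assumed rules, not the
# specification's own normalization logic).
from datetime import datetime

def normalize_record(record: dict) -> dict:
    normalized = dict(record)
    # Dates in "DD/MM/YYYY" form are converted to ISO 8601.
    normalized["timestamp"] = datetime.strptime(
        record["timestamp"], "%d/%m/%Y"
    ).strftime("%Y-%m-%dT%H:%M:%S")
    # Temperatures reported in Fahrenheit are standardized to Celsius.
    normalized["temperature_c"] = round((float(record["temperature_f"]) - 32) * 5 / 9, 2)
    del normalized["temperature_f"]
    return normalized

# normalize_record({"timestamp": "01/06/2024", "temperature_f": "74.1"})
# -> {"timestamp": "2024-06-01T00:00:00", "temperature_c": 23.39}
```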
[0072] Once the data is normalized, the normalization module (216) may
verify the normalized data against the predefined schema specified in the one or more configuration information. The predefined schema is a part of the configuration information, specifying how the incoming data may be aligned with the expected data structure. The predefined schema may include the expected data types, data field names, and any specific verification rules. The normalization module (216) may ensure that the data field names match the data types and constraints defined in the schema. For example, a field defined as an integer must not contain alphabetic characters. The verification ensures that the data is aligned with the required format and contains valid, consistent information.
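For illustration, and assuming a simple in-memory schema of Python types rather than the specification's own verification rules, such verification might be sketched as:

```python
# Illustrative sketch of verification against a predefined schema: each field is
# checked against its expected type (the schema layout and field names are assumptions).
SCHEMA = {"id": int, "name": str, "temperature_c": float}

def verify_record(record: dict, schema: dict = SCHEMA) -> None:
    for field, expected_type in schema.items():
        if field not in record:
            raise ValueError(f"missing field: {field}")
        if not isinstance(record[field], expected_type):
            # e.g., a field declared as an integer must not hold alphabetic characters
            raise TypeError(
                f"field '{field}' expected {expected_type.__name__}, "
                f"got {type(record[field]).__name__}"
            )

# verify_record({"id": 17, "name": "sensor-a", "temperature_c": 23.39})   # passes
# verify_record({"id": "ab", "name": "sensor-a", "temperature_c": 23.39}) # raises TypeError
```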
[0073] Once the data is processed, the processed data may be stored in the
database (210). In other words, after the normalization module (216) has completed parsing, normalizing, and verifying the data, the processed data is stored in the database (210). The database (210) may include data (e.g., processed data) that may be either stored or generated as a result of functionalities implemented by any of the components of the processor (202) or the processing

engine (208). In an embodiment, the database (210) may include, but is not limited to, a relational database, a distributed database, a cloud-based database, or the like.
[0074] To enhance the efficiency of data retrieval, the normalization
module (216) may index the processed data. The indexing may facilitate faster retrieval of the processed data stored in the database (210). Indexing involves creating a structured summary of the data, which may be quickly searched to find the required information.
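A minimal illustrative sketch of storing processed data and indexing it for faster retrieval, assuming an SQLite database (the disclosure does not prescribe a particular database), is given below:

```python
# Illustrative sketch only: storing processed records and creating an index so
# that later queries on a frequently searched column are faster (SQLite assumed).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE processed_data (id INTEGER, ts TEXT, temperature_c REAL)")
conn.execute("INSERT INTO processed_data VALUES (17, '2024-06-01T00:00:00', 23.39)")
# A structured summary (index) over the timestamp column speeds up retrieval.
conn.execute("CREATE INDEX idx_processed_ts ON processed_data (ts)")
conn.commit()
rows = conn.execute(
    "SELECT * FROM processed_data WHERE ts >= '2024-06-01'"
).fetchall()
```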
[0075] In some embodiments, the processed data may be provided to one
or more external devices. This enables seamless data sharing and integration with other systems, applications, or users who need access to the processed data. The plug and play integration system (108) supports interfacing with various external devices, allowing them to consume the normalized and indexed data. This may include analytics platforms, reporting tools, or other data-driven applications.
[0076] Although FIG. 2 shows an exemplary block diagram (200) of the
plug and play integration system (108), in other embodiments, the plug and play integration system (108) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 2. Additionally, or alternatively, one or more components of the
plug and play integration system (108) may perform functions described as being performed by one or more other components of the plug and play integration system (108).
[0077] FIG. 3 illustrates an exemplary process flow (300) of the system for
plug and play integration of the data source, in accordance with embodiments of the present disclosure.
[0078] FIG. 3 represents a new source data ingestion module (302)
(analogous to the receiving module 212), a source details configuring module (304) (analogous to the configuring module 214), one or more normalization

modules (306-1, 306-2…306-N), a database (308) (analogous to the database 210), and external devices (310). A person of ordinary skill in the art will understand that one or more normalization modules (306-1, 306-2…306-N) may be collectively referred to as the normalization module (306) (analogous to the normalization module 216).
[0079] The process flow begins with the new source data ingestion module
(302), which is responsible for receiving data from various new data sources. The data may be received from a variety of sources, such as different vendors, each with unique data formats and structures. The new source data ingestion module (302) acts as an entry point for all incoming data, ensuring that the data is correctly received and ready for further processing.
[0080] The source details configuring module (304) is responsible for
setting up and managing configuration details associated with each new data source received. The source details configuring module (304) allows users to specify various configuration details, such as data format, delimiter, field names, data types, and handling of nested elements. In some embodiments, these configuration details may be received by the new source data ingestion module (302) via the API. The source details configuring module (304) ensures that the system understands how to handle and process data from different sources.
[0081] The data is then passed to the normalization module (306) for
processing. The processing of the data is carried out by the one or more normalization modules (306-1, 306-2...306-N) having multiple layers of normalization. These modules collectively form the normalization module (306). Each normalization module is responsible for processing (e.g., parsing, transforming, and verifying) the received data based on the configuration information. The parsed data is then normalized into a standardized format to ensure consistency and compatibility across different data sources.
[0082] In other words, each normalization module may handle a separate
stream or batch of data simultaneously. It may be noted that different

normalization modules may be specialized to handle different types of data or specific normalization tasks. For example, one normalization module may handle text data, another may handle numerical data, and yet another may deal with nested elements. Adding more normalization modules may help to scale the system to handle larger volumes of data. As data volume increases, these additional normalization modules may be deployed to ensure the system continues to perform optimally.
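As a non-limiting sketch of this scaling behaviour, assuming hypothetical specialized normalizers and a simple thread pool used only for illustration:

```python
# Illustrative sketch of scaling out normalization: separate batches are handed
# to specialized normalization workers in parallel (the worker functions and the
# use of a thread pool are assumptions, not the disclosed architecture).
from concurrent.futures import ThreadPoolExecutor

def normalize_text(batch):     # e.g., a module specialized for text data
    return [value.strip().lower() for value in batch]

def normalize_numeric(batch):  # e.g., a module specialized for numerical data
    return [float(value) for value in batch]

batches = [
    (normalize_text, ["  Sensor-A ", "SENSOR-B"]),
    (normalize_numeric, ["23.4", "25.1"]),
]

with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(lambda job: job[0](job[1]), batches))
# results == [["sensor-a", "sensor-b"], [23.4, 25.1]]
```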
[0083] Once the data is normalized, the normalized data is stored in the
database (308). The database (308) serves as a centralized repository for all processed data (e.g., the normalized data), ensuring that it is securely stored and easily retrievable for future use. The database (308) also supports indexing of the processed data, which enhances the speed and efficiency of data retrieval operations.
[0084] The processed data may be accessed by the external devices (310).
The external devices (310) may consume the normalized data for various purposes, including analytics, reporting, and further data processing. This interaction ensures that the data is not only normalized but also accessible to other systems and applications that rely on accurate and consistent data.
[0085] FIG. 4 illustrates a flow chart of a method (400) for integration of
the data source with a network, in accordance with an embodiment of the present disclosure.
[0086] The method (400), at step 402 includes receiving, by a processing
engine, data from a new data source and one or more configuration information associated with the new data source via an API. The one or more configuration information may include at least one of a data format, a delimiter, a field name within the data, a data type for each field, and details of handling a nested element.

[0087] The method (400), at step 404 includes automatically configuring,
by the processing engine, the new data source based on the one or more received configuration information. In an embodiment, the automatically configuring includes analyzing the one or more configuration information to determine the data format and a corresponding data structure of the received new data source, defining one or more parsing rules to separate the field name within the new data source based on the determined data format and the data structure, mapping the field name to a corresponding data type within a predefined schema, and establishing a connection to process the data based on the one or more defined parsing rules and the mapped data field name.
[0088] The method (400), at step 406 includes processing, by the
processing engine, the received data corresponding to the configurated new data source. In an embodiment, the processing includes parsing the data corresponding to the configurated new data source, normalizing the parsed data to provide a standardized format, and verifying the normalized data with respect to the predefined schema specified in the one or more configuration information.
[0089] The method (400), at step 408 includes storing, by the processing
engine, the processed data in a database. In some embodiments, the method (400) includes indexing the processed data. The indexing may facilitate faster retrieval of the processed data stored in the database. In some embodiments, the method (400) includes providing the processed data to one or more external devices. The complete process is already explained in detail in conjunction with FIG. 2 and FIG. 3.
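Purely for illustration, the overall flow of the method (400), namely receiving (402), configuring (404), processing (406), and storing (408), may be sketched as a small self-contained Python pipeline; the function names, record layout, and SQLite store are all assumptions and not the claimed implementation:

```python
# Illustrative, self-contained sketch of the overall flow of method (400):
# receive (402), configure (404), process (406), store (408).
import sqlite3

def run_pipeline(raw_lines, config):
    delimiter = config["delimiter"]                     # 404: configure from the received details
    fields = config["field_names"]
    db = sqlite3.connect(":memory:")                    # 408: target database (assumed SQLite)
    db.execute(f"CREATE TABLE processed ({', '.join(fields)})")
    placeholders = ", ".join("?" for _ in fields)
    for line in raw_lines:                              # 402: data received from the new source
        values = line.rstrip("\n").split(delimiter)     # 406: parse the record into fields
        db.execute(f"INSERT INTO processed VALUES ({placeholders})", values)
    db.commit()
    return db

db = run_pipeline(
    ["17,2024-06-01,23.4", "18,2024-06-02,24.0"],
    {"delimiter": ",", "field_names": ["id", "ts", "temperature"]},
)
print(db.execute("SELECT COUNT(*) FROM processed").fetchone()[0])  # -> 2
```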
[0090] FIG. 5 illustrates an exemplary computer system (500) in which or
with which embodiments of the present disclosure may be implemented.
[0091] As shown in FIG. 5, the computer system (500) may include an
external storage device (510), a bus (520), a main memory (530), a read only memory (540), a mass storage device (550), a communication port (560), and a processor (570). A person skilled in the art will appreciate that the computer

system (500) may include more than one processor (570) and communication ports (560). Processor (570) may include various modules associated with embodiments of the present disclosure.
[0092] In an embodiment, the communication port (560) may be any of an
RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port (560) may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system (500) connects.
[0093] In an embodiment, the main memory (530) may be Random Access
Memory (RAM), or any other dynamic storage device commonly known in the art. Read-only memory (540) may be any static storage device(s), e.g., but not limited to, Programmable Read Only Memory (PROM) chips for storing static information, e.g., start-up or Basic Input/Output System (BIOS) instructions for the processor (570).
[0094] In an embodiment, the mass storage (550) may be any current or
future mass storage solution, which may be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), one or more optical discs, Redundant Array of Independent Disks (RAID) storage, e.g., an array of disks (e.g., SATA arrays).
[0095] In an embodiment, the bus (520) communicatively couples the
processor(s) (570) with the other memory, storage, and communication blocks. The bus (520) may be, e.g., a Peripheral Component Interconnect (PCI)/PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB) or the like, for connecting expansion cards, drives and other

subsystems as well as other buses, such as a front side bus (FSB), which connects the processor (570) to the computer system (500).
[0096] Optionally, operator and administrative interfaces, e.g., a display,
keyboard, joystick, and a cursor control device, may also be coupled to the bus (520) to support direct operator interaction with the computer system (500). Other operator and administrative interfaces may be provided through network connections connected through the communication port (560). Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system (500) limit the scope of the present disclosure.
[0097] While the foregoing describes various embodiments of the present
disclosure, other and further embodiments of the present disclosure may be devised without departing from the basic scope thereof. The scope of the present disclosure is determined by the claims that follow. The present disclosure is not limited to the described embodiments, versions, or examples, which are included to enable a person having ordinary skill in the art to make and use the present disclosure when combined with information and knowledge available to the person having ordinary skill in the art.
[0098] The present disclosure provides technical advancement related to
the field of data integration and normalization. This advancement addresses the limitations of existing solutions by introducing a plug and play integration system for seamless data parsing and processing. The disclosure involves the inventive aspects of the automatic configuration module, and the normalization module, which offer significant improvements in efficiency, flexibility, and interoperability. By implementing a self-adapting configuration mechanism and standardized data normalization techniques, the present disclosure enhances the integration of diverse data sources, resulting in reduced manual effort, faster data processing, and improved data consistency across various systems.

ADVANTAGES OF THE PRESENT DISCLOSURE
[0099] The present disclosure provides a system and a method that
includes a normalization layer having a plug and play integration approach for data parsing, which streamlines the process of integrating and extracting data from various sources.
[00100] The present disclosure provides a system and a method that self-
adapts to various data formats and semantics of different vendors, thereby acting as a plug and play module.
[00101] The present disclosure provides a system and a method that may be
connected with any vendor and handle any type of data for ingestion, enrichment, and normalization through one of the UI or the API.
[00102] The present disclosure provides a system and a method that allows
for seamless integration and parsing of data from various sources with minimal configurations in the system, eliminating the need for code-level changes.
[00103] The present disclosure provides a system and a method that
supports efficient data indexing, facilitating faster retrieval and querying of processed data stored in a database.
[00104] The present disclosure increases the system flexibility and makes
the system more versatile in adjusting to the format of input data.
[00105] The present disclosure reduces manual effort and configuration
time by automating the data integration and the normalization processes.
[00106] The present disclosure simplifies configuration, reduces manual
effort, and promotes interoperability, enabling efficient data integration across diverse systems and formats.

We Claim:
1. A method (400) for integration of a data source with a network, the method
(400) comprising:
receiving (402), by a processing engine (208), data from a new data source and one or more configuration information associated with the new data source via an application programming interface (API);
automatically configuring (404), by the processing engine (208), the new data source based on the one or more received configuration information;
processing (406), by the processing engine (208), the received data corresponding to the configurated new data source; and
storing (408), by the processing engine (208), the processed data in a database.
2. The method (400) as claimed in claim 1, wherein the one or more configuration information comprises at least one of a data format, a delimiter, a field name within the data, a data type for each field, and details of handling a nested element.
3. The method (400) as claimed in claim 1, wherein the automatically configuring comprises:
analyzing the one or more configuration information to determine the data format and a corresponding data structure of the received new data source;
defining one or more parsing rules to separate the field name within the new data source based on the determined data format and the data structure;
mapping the field name to a corresponding data type within a predefined schema; and
establishing a connection to process the data based on the one or more defined parsing rules and the mapped data field name.

4. The method (400) as claimed in claim 1, wherein the processing comprises:
parsing the data corresponding to the configurated new data source;
normalizing the parsed data to provide a standardized format; and
verifying the normalized data with respect to the predefined schema specified in the one or more configuration information.
5. The method (400) as claimed in claim 1, further comprising indexing the processed data, wherein the indexing facilitates faster retrieval of the processed data stored in the database.
6. The method (400) as claimed in claim 1, further comprising providing the processed data to one or more external devices.
7. A system (108) for integration of a data source with a network, the system (108) comprising:
a memory (204); and
a processing engine (208) communicatively coupled with the memory (204), configured to:
receive data from a new data source and one or more configuration information associated with the new data source via an application programming interface (API);
automatically configure the new data source based on the one or more received configuration information;
process the received data corresponding to the configurated new data source; and
store the processed data in a database.
8. The system (108) as claimed in claim 7, wherein the one or more configuration
information comprises at least one of a data format, a delimiter, a field name
within the data, a data type for each field, and details of handling a nested
element.

9. The system (108) as claimed in claim 7, wherein to automatically configure the
new data source, the processing engine (208) is configured to:
analyze the one or more configuration information to determine the data format and a corresponding data structure of the received new data source;
define one or more parsing rules to separate the field name within the new data source based on the determined data format and the data structure;
map the field name to a corresponding data type within a predefined schema; and
establish a connection to process the data based on the one or more defined parsing rules and the mapped data field name.
10. The system (108) as claimed in claim 7, wherein to process the data, the
processing engine (208) is configured to:
parse the data corresponding to the configurated new data source;
normalize the parsed data to provide a standardized format; and
verify the normalized data with respect to the predefined schema specified in the one or more configuration information.
11. The system (108) as claimed in claim 7, wherein the processing engine (208) is configured to index the processed data, wherein the indexing facilitates faster retrieval of the processed data stored in the database.
12. The system (108) as claimed in claim 7, wherein the processing engine (208) is configured to provide the processed data to one or more external devices.
13. A user equipment (UE) (104) communicatively coupled with a network (106), the coupling comprises steps of:
receiving, by the network (106), a connection request from the UE (104);
sending, by the network (106), an acknowledgment of the connection request to the UE (104); and

transmitting a plurality of signals in response to the connection request, wherein based on the connection request an integration of a data source with the network (106) is performed by a method (400) as claimed in claim 1.

Documents

Application Documents

# Name Date
1 202321047047-STATEMENT OF UNDERTAKING (FORM 3) [12-07-2023(online)].pdf 2023-07-12
2 202321047047-PROVISIONAL SPECIFICATION [12-07-2023(online)].pdf 2023-07-12
3 202321047047-FORM 1 [12-07-2023(online)].pdf 2023-07-12
4 202321047047-DRAWINGS [12-07-2023(online)].pdf 2023-07-12
5 202321047047-DECLARATION OF INVENTORSHIP (FORM 5) [12-07-2023(online)].pdf 2023-07-12
6 202321047047-FORM-26 [13-09-2023(online)].pdf 2023-09-13
7 202321047047-FORM-26 [05-03-2024(online)].pdf 2024-03-05
8 202321047047-FORM 13 [08-03-2024(online)].pdf 2024-03-08
9 202321047047-AMENDED DOCUMENTS [08-03-2024(online)].pdf 2024-03-08
10 202321047047-Request Letter-Correspondence [03-06-2024(online)].pdf 2024-06-03
11 202321047047-Power of Attorney [03-06-2024(online)].pdf 2024-06-03
12 202321047047-Covering Letter [03-06-2024(online)].pdf 2024-06-03
13 202321047047-CORRESPONDANCE-WIPO CERTIFICATE-14-06-2024.pdf 2024-06-14
14 202321047047-ENDORSEMENT BY INVENTORS [04-07-2024(online)].pdf 2024-07-04
15 202321047047-DRAWING [04-07-2024(online)].pdf 2024-07-04
16 202321047047-CORRESPONDENCE-OTHERS [04-07-2024(online)].pdf 2024-07-04
17 202321047047-COMPLETE SPECIFICATION [04-07-2024(online)].pdf 2024-07-04
18 Abstract-1.jpg 2024-08-07
19 202321047047-ORIGINAL UR 6(1A) FORM 26-020924.pdf 2024-09-09
20 202321047047-FORM 18 [30-09-2024(online)].pdf 2024-09-30
21 202321047047-FORM 3 [04-11-2024(online)].pdf 2024-11-04