
System And Method For Providing Seamless Integration Of Data Sources In A Communication Network

Abstract: The present disclosure provides a system (200) and a method (500) for providing seamless integration of data sources (252a, 252b) in a communication network (106). The method (500) includes receiving (502), from a user (102), a request associated with at least one data source (252a, 252b) via a user interface (UI) (256). The method (500) includes extracting (504) one or more parameters from the received request. The method (500) includes configuring (506) at least one policy based on the one or more extracted parameters. The method (500) includes fetching (508) at least one data file from the at least one data source (252a, 252b) based on the at least one configured policy. The method (500) includes integrating (510) the at least one fetched data file into an ingestion layer (260) to load the at least one fetched data file into one or more destination systems (254). FIG. 3


Patent Information

Application #
Filing Date
12 July 2023
Publication Number
03/2025
Publication Type
INA
Invention Field
COMMUNICATION
Status
Parent Application

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. BHATNAGAR, Aayush
Tower-7, 15B, Beverly Park, Sector-14 Koper Khairane, Navi Mumbai - 400701, Maharashtra, India.
2. MURARKA, Ankit
W-16, F-1603, Lodha Amara, Kolshet Road, Thane West - 400607, Maharashtra, India.
3. KOLARIYA, Jugal Kishore
C 302, Mediterranea CHS Ltd, Casa Rio, Palava, Dombivli - 421204, Maharashtra, India.
4. KUMAR, Gaurav
1617, Gali No. 1A, Lajjapuri, Ramleela Ground, Hapur - 245101, Uttar Pradesh, India.
5. SAHU, Kishan
Ajay Villa, Gali No. 2, Ambedkar Colony, Bikaner - 334003, Rajasthan, India.
6. VERMA, Rahul
A-154, Shradha Puri Phase-2, Kanker Khera, Meerut - 250001, Uttar Pradesh, India.
7. MEENA, Sunil
D-29/1, Chitresh Nagar, Borkhera, District - Kota - 324001, Rajasthan, India.
8. GURBANI, Gourav
I-1601, Casa Adriana, Downtown, Palava Phase 2, Dombivli - 421204 Maharashtra, India.
9. CHAUDHARY, Sanjana
Jawaharlal Road, Muzaffarpur - 842001, Bihar, India.
10. GANVEER, Chandra Kumar
Village - Gotulmunda, Post - Narratola, Dist. - Balod - 491228, Chhattisgarh, India.
11. DE, Supriya
G2202, Sheth Avalon, Near Jupiter Hospital Majiwada, Thane West - 400601, Maharashtra, India.
12. KUMAR, Debashish
Bhairaav Goldcrest Residency, E-1304, Sector 11, Ghansoli, Navi Mumbai - 400701, Maharashtra, India.
13. TILALA, Mehul
64/11, Manekshaw Marg, Manekshaw Enclave, Delhi Cantonment, New Delhi - 110010, India.
14. KALIKIVAYI, Srinath
3-61, Kummari Bazar, Madduluru Village, S N Padu Mandal, Prakasam District, Andhra Pradesh - 523225, India
15. PANDEY, Vitap
D 886, World Bank Barra, Kanpur - 208027, Uttar Pradesh, India.

Specification

FORM 2
THE PATENTS ACT, 1970
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
SYSTEM AND METHOD FOR PROVIDING SEAMLESS INTEGRATION OF DATA SOURCES IN A COMMUNICATION NETWORK
APPLICANT
JIO PLATFORMS LIMITED
of Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India; Nationality: India
The following specification particularly describes
the invention and the manner in which
it is to be performed

RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material,
which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, Integrated Circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
TECHNICAL FIELD
[0002] The present disclosure generally relates to a field of acquiring data from
different sources for purposes of providing seamless integration of data sources in a communication network. In particular, the present disclosure relates to acquiring data from different data sources with minimal loss of data.
BACKGROUND
[0003] The following description of related art is intended to provide
background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is to be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0004] Wireless communication technology has rapidly evolved over the past
few decades. The first generation of wireless communication technology was analog technology that offered only voice services. Further, when the second-generation (2G) technology was introduced, text messaging and data services became possible. The 3G technology marked the introduction of high-speed internet access, mobile video calling, and location-based services. The fourth-generation (4G) technology revolutionized wireless communication with faster data speeds, improved network coverage, and security. Fifth-generation (5G) and advanced-generation technologies are being deployed, with even faster data speeds, low latency, and the ability to connect multiple devices simultaneously.
[0005] As mobile networks continue to grow, users are increasingly concerned
about the quality and performance of their network connections. The 5G networks are upgrading their software and hardware functionalities to transform the facilities provided to consumers. However, when a new vendor system is onboarded in the network, code-level development and testing are required each time, since every new data source/vendor system has its own way of generating, handling, and storing data. Further, each new vendor has a different operating system where the data is stored.
[0006] There is, therefore, a requirement in the art to provide a system and a
method that can mitigate the problems associated with the prior arts and provide an easier and simpler approach to on-board different and new data sources in the network.
OBJECTS OF THE INVENTION
[0007] An object of the present invention is to provide a system and a method
for providing seamless integration of data sources in a communication network.
[0008] Another object of the present invention is to provide a system and a
method that streamlines the onboarding process of different data sources/vendors
in a communication network seamlessly through a user-friendly user interface (UI).
[0009] Another object of the present invention is to provide an ingestion layer
that may not require any code-level change.
[0010] Another object of the present invention is to provide an ingestion layer
that is capable of acquiring data with minimal data loss.
SUMMARY
[0011] In an exemplary embodiment, the present invention discloses a system
for providing seamless integration of data sources in a communication network. The system includes a receiving unit configured to receive, from a user, a request associated with at least one data source via a user interface (UI). The system includes a processing unit coupled to the receiving unit and configured to extract one or more parameters from the received request, configure at least one policy based on the one or more extracted parameters, and fetch at least one data file from the at least one data source based on the at least one configured policy. The
processing unit is configured to integrate the at least one fetched data file into an
ingestion layer to load the at least one fetched data file into one or more destination
systems.
[0012] In an embodiment, the system further includes a database configured to
store the received request and the one or more extracted parameters.
[0013] In an embodiment, the system is further configured to communicate the
at least one fetched data file to a normalization layer to normalize the fetched data
file based on the at least one configured policy.
[0014] In an embodiment, the system is further configured to communicate the
at least one normalized data file and the at least one configured policy to the one or
more destination systems for analysis and reporting.
[0015] In an embodiment, the one or more extracted parameters include at least
one of a data format, a data pulling protocol, an internet protocol (IP) address, a
type of operating system (OS), and a data pulling interval.
[0016] In an embodiment, the system is further configured to feed a user
credential associated with the at least one data source to the ingestion layer.
[0017] In an embodiment, the system is further configured to fetch the at least
one data file from the at least one data source after a periodic time interval.
[0018] In an exemplary embodiment, the present invention discloses a method
for providing seamless integration of data sources in a communication network. The
method includes receiving, from a user, a request associated with at least one data
source via a user interface (UI). The method includes extracting one or more
parameters from the received request. The method includes configuring at least
one policy based on the one or more extracted parameters. The method includes
fetching at least one data file from the at least one data source based on the at least
one configured policy. The method includes integrating the at least one fetched data file into an ingestion layer to load the at least one fetched data file into one or more
destination systems.
[0019] In an embodiment, the method further includes storing, in a database,
the received request and the one or more extracted parameters.
[0020] In an embodiment, the method further includes communicating the at
least one fetched data file to a normalization layer to normalize the fetched data file
based on the at least one configured policy.
[0021] In an embodiment, the method further includes communicating the at
least one normalized data file and the at least one configured policy to the one or
more destination systems for analysis and reporting.
[0022] In an embodiment, the one or more extracted parameters include at least
one of a data format, a data pulling protocol, an internet protocol (IP) address, a
type of operating system (OS), and a data pulling interval.
[0023] In an embodiment, the method further includes feeding a user credential
associated with the at least one data source to the ingestion layer.
[0024] In an embodiment, the method further includes fetching the at least one
data file from the at least one data source after a periodic time interval.
[0025] In an exemplary embodiment, the present invention discloses a user
equipment (UE) communicatively coupled with a communication network. The
coupling comprises steps of receiving, by the communication network, a connection
request from the UE, sending, by the communication network, an acknowledgment
of the connection request to the UE and transmitting a plurality of signals in
response to the connection request. A seamless integration of data sources in the communication network is performed by a method that includes receiving, from a user, a request associated with at least one data source via a user interface (UI). The method includes extracting one or more parameters from the received request. The method includes configuring at least one policy based on the one or more extracted parameters. The method includes fetching at least one data file from the at least one data source based on the at least one configured policy. The method includes integrating the at least one fetched data file into an ingestion layer to load the at least one fetched data file into one or more destination systems.

[0026] The foregoing general description of the illustrative embodiments and
the following detailed description thereof are merely exemplary aspects of the teachings of this disclosure and are not restrictive.
BRIEF DESCRIPTION OF DRAWINGS
[0027] The accompanying drawings, which are incorporated herein, and
constitute a part of this disclosure, illustrate exemplary embodiments of the
disclosed methods and systems in which like reference numerals refer to the same
parts throughout the different drawings. Components in the drawings are not
necessarily to scale, emphasis instead being placed upon clearly illustrating the
principles of the present disclosure. Some drawings may indicate the components
using block diagrams and may not represent the internal circuitry of each
component. It will be appreciated by those skilled in the art that disclosure of such
drawings includes the disclosure of electrical components, electronic components
or circuitry commonly used to implement such components.
[0028] FIG. 1 illustrates an exemplary network architecture in which or with
which embodiments of the present disclosure may be implemented.
[0029] FIG. 2 illustrates an exemplary block diagram of a system for providing
seamless integration of data sources in a communication network, in accordance
with an embodiment of the present disclosure.
[0030] FIG. 3 illustrates an exemplary system architecture of the system, in
accordance with embodiments of the present disclosure.
[0031] FIG. 4 illustrates an exemplary computer system in which or with which
embodiments of the present disclosure may be implemented.
[0032] FIG. 5 illustrates an exemplary flow diagram for a method for providing
seamless integration of data sources in a communication network, in accordance
with embodiments of the present disclosure.
[0033] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
LIST OF REFERENCE NUMERALS

100 - Network architecture
102-1, 102-2…102-N - A plurality of users
104-1, 104-2…104-N - A plurality of computing devices
106 - Network
112 - Centralized server
200 - System
202 - Receiving unit
204 - Memory
206 - Interfacing unit
208 - Processing unit
210 - Database
252a, 252b - Data source(s)
254 - Destination system(s)
256 - User interface
260 - Ingestion layer
262 - Normalization layer
300 - System architecture
400 - Computer system
410 - External storage device
420 - Bus
430 - Main memory
440 - Read only memory
450 - Mass storage device
460 - Communication port(s)
470 - Processor
500 - Flow diagram
DETAILED DESCRIPTION
[0034] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0035] The ensuing description provides exemplary embodiments only, and is
not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0036] Specific details are given in the following description to provide a
thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0037] Also, it is noted that individual embodiments may be described as a
process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.

[0038] The word “exemplary” and/or “demonstrative” is used herein to mean
serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
[0039] Reference throughout this specification to “one embodiment” or “an
embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0040] The terminology used herein is for the purpose of describing particular
embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
[0041] The 5G networks are upgrading their software and hardware
functionalities to transform the facilities provided to consumers. However, when a new vendor system is onboarded in the network, code-level development and testing may be required each time, since every new data source/vendor system has its own way of generating, handling, and storing data. Further, each new vendor has a different operating system where the data is stored.
[0042] There is, therefore, a requirement in the art to provide a system and a
method that can mitigate the problems associated with the prior arts and provide an easier and simpler approach to onboard different and new vendors in the network. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by disclosing a system and a method that provides seamless integration of data sources in a communication network and streamlines an onboarding process with different vendors (data sources) seamlessly through a user-friendly user interface (UI).
[0043] The various embodiments of the present disclosure will be explained in
detail with reference to FIGS. 1-5.
[0044] FIG. 1 illustrates an exemplary network architecture (100) in which or
with which embodiments of the present disclosure may be implemented. Referring to FIG. 1, the network architecture (100) may include one or more computing devices or user equipment (104-1, 104-2…104-N) associated with one or more users (102-1, 102-2…102-N) in an environment. A person of ordinary skill in the art will understand that one or more users (102-1, 102-2…102-N) may be individually referred to as the user (102) and collectively referred to as the users (102). Similarly, a person of ordinary skill in the art will understand that one or more user equipment (104-1, 104-2…104-N) may be individually referred to as the user equipment (104) and collectively referred to as the user equipment (104). A person of ordinary skill in the art will appreciate that the terms “computing device(s)” and “user equipment” may be used interchangeably throughout the disclosure. Although three user equipment (104) are depicted in FIG. 1, any number of user equipment (104) may be included without departing from the scope of the ongoing description.
[0045] In an embodiment, the user equipment (104) may include, but is not
limited to, a handheld wireless communication device (e.g., a mobile phone, a smartphone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like. In an embodiment, the user equipment (104) may include, but is not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device, wherein the user equipment (104) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from the user (102) or the entity, such as a touchpad, a touch-enabled screen, an electronic pen, and the like. A person of ordinary skill in the art will appreciate that the user equipment (104) may not be restricted to the mentioned devices and various other devices may be used.
[0046] Referring to FIG. 1, the user equipment (104) may communicate with a
system (200) to provide seamless integration of data sources through a communication network (106). In an embodiment, the network (106) may include at least one of a Fifth Generation (5G) network, a 6G network, or the like. The network (106) may enable the user equipment (104) to communicate with other devices in the network architecture (100) and/or with the system (200). The communication network (106) may include a wireless card or some other transceiver connection to facilitate this communication. In another embodiment, the communication network (106) may be implemented as or include any of a variety of different communication technologies such as a wide area network (WAN), a local area network (LAN), a wireless network, a mobile network, a Virtual Private Network (VPN), the Internet, the Public Switched Telephone Network (PSTN), or the like. The UE (104) may be communicatively coupled with the communication network (106). The communicative coupling comprises receiving, from the UE (104), a connection request by the communication network (106), sending an acknowledgment of the connection request to the UE (104), and transmitting a plurality of signals in response to the connection request.
[0047] In another exemplary embodiment, a centralized server (112) may
include or comprise, by way of example but not limitation, one or more of a stand-alone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof.
[0048] Although FIG. 1 shows exemplary components of the network
architecture (100), in other embodiments, the network architecture (100) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or
alternatively, one or more components of the network architecture (100) may
perform functions described as being performed by one or more other components of the network architecture (100).
[0049] FIG. 2 illustrates an exemplary block diagram of the system (200) for
providing seamless integration of data sources in the communication network (106).
The data sources are the origins or locations from which data is collected or retrieved for use in the system (200) for analysis and reporting. In an aspect, the data sources may comprise data associated with multiple vendors that need to be onboarded on the system (200).
[0050] As shown in FIG. 2, the system (200) may include a receiving unit (202), a memory (204), an interfacing unit (206), a processing unit (208), and a database (210). The receiving unit (202) is configured to receive a request from the user (102). The request is associated with at least one data source. In an example, the receiving unit (202) is configured to receive the request from the user (102) via a user interface (UI).
[0051] In an aspect, the system may include an ingestion layer and a normalization layer. The ingestion layer is configured to collect, process, and organize the processed data into a cohesive format for storage and analysis. The ingestion layer encompasses the systematic retrieval of data from diverse data sources, ranging from databases and application programming interfaces (APIs) to streaming platforms and IoT devices. Following collection, the data undergoes transformation, where it is cleansed, standardized, and enriched to ensure consistency and compatibility with downstream processes. Subsequently, the transformed data is loaded into designated data systems, such as data warehouses or lakes, facilitating accessibility and utilization for analytical insights and decision-making.
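For illustration only, the following Python sketch outlines one possible realization of such an ingestion layer, with the collect, transform, and load stages described above; the class name, method names, and file-based destination are assumptions of this sketch and are not prescribed by the present disclosure.

```python
from pathlib import Path
from typing import Iterable


class IngestionLayer:
    """Collect raw files, transform them, and load them into a destination store."""

    def __init__(self, destination_dir: str) -> None:
        self.destination = Path(destination_dir)
        self.destination.mkdir(parents=True, exist_ok=True)

    def collect(self, source_dir: str) -> Iterable[Path]:
        # Systematic retrieval of raw data files from a (file-based) source.
        return sorted(p for p in Path(source_dir).glob("*") if p.is_file())

    def transform(self, raw_file: Path) -> list[str]:
        # Cleanse/standardize: drop blank lines and surrounding whitespace.
        lines = raw_file.read_text(encoding="utf-8", errors="replace").splitlines()
        return [line.strip() for line in lines if line.strip()]

    def load(self, name: str, records: list[str]) -> Path:
        # Load the transformed records into the designated destination store.
        target = self.destination / name
        target.write_text("\n".join(records), encoding="utf-8")
        return target

    def run(self, source_dir: str) -> None:
        for raw_file in self.collect(source_dir):
            self.load(raw_file.name, self.transform(raw_file))
```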
[0052] The normalization layer is configured to systematically organize and standardize the received data to eliminate redundancies, enhance integrity, and streamline accessibility. The normalization layer is configured to employ principles of normalization to minimize duplication and remove inconsistencies, thereby mitigating data anomalies. The normalization layer incorporates data cleaning procedures to rectify errors, reconcile discrepancies, and address missing values, ensuring the integrity and completeness of the dataset. Furthermore, the normalization layer orchestrates the integration of data from heterogeneous sources, harmonizing disparate datasets to establish a unified and coherent representation. Through rigorous validation processes, the normalization layer verifies data accuracy, completeness, and adherence to predefined quality standards and business rules, culminating in a refined dataset ready for analysis and decision-making processes.
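For illustration only, the following Python sketch shows one non-limiting way the normalization steps described above (de-duplication, handling of missing values, and standardization) could be applied to tabular records; the field handling and sentinel value are assumptions of this sketch.

```python
def normalize(records: list[dict], required_fields: tuple[str, ...]) -> list[dict]:
    """De-duplicate, complete, and standardize a batch of tabular records."""
    seen: set[tuple] = set()
    normalized: list[dict] = []
    for record in records:
        # Address missing values so every record carries the required fields.
        cleaned = {field: record.get(field, "UNKNOWN") for field in required_fields}
        # Standardize string values (consistent case and whitespace).
        cleaned = {key: value.strip().lower() if isinstance(value, str) else value
                   for key, value in cleaned.items()}
        # Eliminate redundancy: skip records that duplicate an earlier one.
        key = tuple(cleaned[field] for field in required_fields)
        if key not in seen:
            seen.add(key)
            normalized.append(cleaned)
    return normalized
```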
[0053] The data sources and the UI are coupled to the ingestion layer of the
system. The ingestion layer is responsible for collecting, ingesting, and initially processing raw data from data sources before it is further transformed, stored, or analyzed. In an aspect, the ingestion layer acts as an entry point for the data from the data sources into the system (200). In an aspect, the ingestion layer facilitates a seamless and efficient flow of data from the data sources to downstream processing pipelines. The ingestion layer performs various operations such as data collection, data ingestion, data validation, and data routing. The processing unit (208) is coupled to the receiving unit (202) and is configured to extract one or more parameters from the received request. In an aspect, the one or more extracted parameters include at least one of a data format of the at least one data file obtained from the at least one data source, a data pulling protocol, an internet protocol (IP) address of the at least one data source, a type of operating system (OS) of the at least one data source, and a data pulling interval.
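For illustration only, the following Python sketch shows how the one or more parameters listed above might be extracted and validated from a UI onboarding request; the key names (data_format, pull_protocol, ip_address, os_type, pull_interval_minutes) are hypothetical and are not field names defined by the present disclosure.

```python
# Hypothetical parameter names mirroring the list above; not defined by the patent.
REQUIRED_PARAMETERS = (
    "data_format",
    "pull_protocol",
    "ip_address",
    "os_type",
    "pull_interval_minutes",
)


def extract_parameters(request: dict) -> dict:
    """Extract and validate the onboarding parameters from a UI request payload."""
    missing = [name for name in REQUIRED_PARAMETERS if name not in request]
    if missing:
        raise ValueError(f"onboarding request is missing parameters: {missing}")
    return {name: request[name] for name in REQUIRED_PARAMETERS}


# Example request as it might arrive from the UI (256); values are placeholders.
params = extract_parameters({
    "data_format": ".csv",
    "pull_protocol": "sftp",
    "ip_address": "192.0.2.10",
    "os_type": "linux",
    "pull_interval_minutes": 15,
})
```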
[0054] In an aspect, the data format of the at least one data file obtained from the at least one data source may comprise a comma-separated values (.csv) file extension. In an aspect, the data format of the at least one data file obtained from the at least one data source may comprise an ‘.asn’ file extension. The ‘.asn’ file extension is commonly associated with files in the abstract syntax notation one (ASN.1) format. In an aspect, the data format of the at least one data file obtained from the at least one data source may comprise a ‘.txt’ file extension. Files with the ‘.txt’ extension are plain text files that contain unformatted text data and are created and edited using text editors or word processing software.
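For illustration only, the following Python sketch dispatches on the file extensions mentioned above; CSV and plain-text handling use the standard library, while ASN.1 decoding is left as a stub because it would require the vendor's schema and an external decoder.

```python
import csv
from pathlib import Path


def read_data_file(path: str) -> list:
    """Read a fetched data file according to its extension (.csv, .txt, .asn)."""
    file = Path(path)
    suffix = file.suffix.lower()
    if suffix == ".csv":
        with file.open(newline="", encoding="utf-8") as handle:
            return list(csv.reader(handle))        # rows of comma-separated values
    if suffix == ".txt":
        return file.read_text(encoding="utf-8").splitlines()  # plain text lines
    if suffix == ".asn":
        # ASN.1 payloads are typically binary and schema-dependent; this sketch
        # returns the raw bytes and leaves decoding to a schema-aware decoder.
        return [file.read_bytes()]
    raise ValueError(f"unsupported data format: {suffix}")
```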
[0055] In an aspect, the data-pulling protocol is a secure shell (SSH) protocol. In an aspect, the SSH protocol is a cryptographic network protocol used for secure communication and remote access over unsecured networks. SSH provides a secure channel for accessing and managing remote systems, transferring files securely, and executing commands on remote machines. In an aspect, the data pulling protocol may comprise a secure shell file transfer protocol (SFTP). SFTP is a network protocol used for secure file transfer and management over SSH connections. SFTP provides a secure alternative for transferring files between computers over a network, such as the internet or a local area network. SFTP encrypts both the authentication credentials and the data being transferred, providing confidentiality and integrity of the file transfers.
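For illustration only, the following Python sketch pulls files over SFTP using the third-party paramiko library, which is one common way (not a way mandated by the present disclosure) to implement the SSH/SFTP data pulling described above; the host, credentials, and paths are placeholders.

```python
import paramiko  # third-party SSH/SFTP library; one possible choice


def pull_files_over_sftp(host: str, username: str, password: str,
                         remote_dir: str, local_dir: str) -> list[str]:
    """Copy every file in a remote directory to local_dir over SFTP."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(hostname=host, username=username, password=password)
    fetched: list[str] = []
    try:
        sftp = client.open_sftp()
        for name in sftp.listdir(remote_dir):
            local_path = f"{local_dir}/{name}"
            # Credentials and file contents both travel over the encrypted channel.
            sftp.get(f"{remote_dir}/{name}", local_path)
            fetched.append(local_path)
        sftp.close()
    finally:
        client.close()
    return fetched
```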
[0056] In an aspect, the processing unit (208) is configured to configure at least
one policy based on the one or more extracted parameters.
[0057] In an aspect, the processing unit (208) is configured to fetch at least one
data file from the at least one data source based on the at least one configured policy.
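For illustration only, the following Python sketch assembles a fetch policy from the extracted parameters, reusing the hypothetical parameter names from the earlier sketch; the FetchPolicy structure is an assumption of this sketch rather than a data model defined by the present disclosure.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class FetchPolicy:
    data_format: str            # e.g. ".csv", ".asn", or ".txt"
    pull_protocol: str          # e.g. "ssh" or "sftp"
    ip_address: str             # IP address of the data source
    os_type: str                # operating system of the data source
    pull_interval_minutes: int  # how often to pull data


def configure_policy(params: dict) -> FetchPolicy:
    # Build one policy per onboarded data source from the extracted parameters.
    return FetchPolicy(
        data_format=params["data_format"],
        pull_protocol=params["pull_protocol"].lower(),
        ip_address=params["ip_address"],
        os_type=params["os_type"].lower(),
        pull_interval_minutes=int(params["pull_interval_minutes"]),
    )
```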
[0058] In an aspect, the processing unit (208) is configured to integrate the at least one fetched data file into an ingestion layer to load the at least one fetched data file into one or more destination systems.
[0059] In an aspect, the database is configured to store the received request and
the one or more extracted parameters.
[0060] In an aspect, a user credential associated with the at least one data
source is fed to the ingestion layer. The user credential is used for securely
managing and storing the credentials to facilitate authentication and access to the at least one data source.
[0061] In an aspect, the at least one data file is fetched from the at least one data source after a periodic time interval (periodically). In an aspect, the UI and the ingestion layer are further coupled to the normalization layer. In an aspect, the at least one fetched data file is fed to the normalization layer that normalizes the fetched data file based on the at least one configured policy. In an aspect, the normalization is responsible for ensuring that the one or more extracted parameters and the at least one data file stored within the database are organized efficiently and free from redundancy. The normalization process involves applying a set of rules or normal forms to the database schema, with each normal form addressing a specific type of data redundancy or dependency.
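For illustration only, the following Python sketch shows a simple periodic-fetch loop driven by the configured pulling interval; a production deployment would likely use a scheduler rather than sleep(), and the policy object and fetch callable are assumed to follow the earlier sketches.

```python
import time
from typing import Callable


def run_periodic_fetch(policy, fetch_once: Callable[[], list[str]],
                       cycles: int = 3) -> None:
    """Repeatedly invoke a fetch callable on the interval carried by the policy.

    `policy` is assumed to expose ip_address and pull_interval_minutes as in
    the FetchPolicy sketch above; `fetch_once` returns the local paths fetched.
    """
    for _ in range(cycles):
        files = fetch_once()
        print(f"fetched {len(files)} file(s) from {policy.ip_address}")
        # Wait for the configured pulling interval before the next fetch.
        time.sleep(policy.pull_interval_minutes * 60)
```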
[0062] In an aspect, the at least one normalized data file and the at least one configured policy are communicated to the one or more destination systems for analysis and reporting.
[0063] In an aspect, the processing unit (208) may be implemented as one or
more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions.
Among other capabilities, the processing unit (208) may be configured to fetch and
execute computer-readable instructions stored in a memory (204) of the system (200). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a
network service. The memory (204) may include any non-transitory storage device
including, for example, volatile memory such as Random-Access Memory (RAM),
or non-volatile memory such as Erasable Programmable Read-Only Memory
(EPROM), flash memory, and the like. In an embodiment, the interfacing unit (206)
may include a variety of interfaces, for example, interfaces for data input and output
devices, referred to as I/O devices, storage devices, and the like. The interfacing
unit (206) may also provide a communication pathway for one or more components
of the system (200). Examples of such components include, but are not limited to, the processing unit (208) and the database (210).
[0064] The processing unit (208) may be implemented as a combination of
hardware and programming (for example, programmable instructions) to
implement one or more functionalities of the processing unit (208). In examples
described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing unit (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing
unit (208) may comprise a processing resource (for example, one or more
processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing unit (208). In such examples, the system (200) may include the machine-readable storage medium storing the
instructions and the processing resource to execute the instructions, or the machine-
readable storage medium may be separate but accessible to the system (200) and
the processing resource. In other examples, the processing unit (208) may be
implemented by electronic circuitry.
[0065] FIG. 3 illustrates an exemplary system architecture (300) of the system
(200) for providing seamless integration of data sources, in accordance with
embodiments of the present disclosure.
[0066] The system (200) is configured to receive and/or acquire at least one
data file from the external data sources (252a, 252b). The at least one data file is transmitted to one or more destination systems for further analysis, processing, and reporting. In an aspect, the one or more destination systems may include macro-service engines or other service engines. For example, the at least one data file may include, without limitation, fault management data, performance management data, configuration data, call data records, informatics data, log data, inventory data, etc. The system (200) further includes the ingestion layer (260) that is configured to receive the data from the external data sources (252a, 252b). The ingestion layer (260) may be configured to acquire pertinent data from the external data sources (252a, 252b) based on the request provided to the system (200) by the
user interface (256). In an example, the request may depend on a pre-configuration
of the system (200).
[0067] The system (200) further includes the normalization layer (262) that is
configured to receive the data file (also referred to as data) from the ingestion layer
(260). Particularly, the normalization layer (262) may be configured to collate the data received from the ingestion layer (260), and then transmit the collated data to the destination system (254). Furthermore, the system (200) may be configured to receive the request associated with onboarding the at least one data source from the
user via the UI (256).
[0068] There may be multiple vendors across the globe with different formats
of generating the data and protocols used for transferring the data. Therefore, the system (200) may provide an approach that smoothens an onboarding process with different vendors seamlessly through the user interface (256). All configurations
related to data acquisition (file format, pulling protocol, pulling interval, specific file pulling, etc.) from any vendor may be done on the user interface (256), which will be fed to the ingestion layer (260). As per the provided information, the ingestion layer (260) may begin to fetch the data on the configured interval basis. The system (200) may be protocol agnostic and uses a UI (256) based configuration approach to ensure seamless data pulls from remote servers.
[0069] Further, in order to onboard a new data source/vendor to the system
(200), the data source (252a, 252b) may be configured from the UI (256). The data source (252a, 252b) may further include information such as the format of the data file (.csv, .asn, .txt, etc.), the pull frequency, the protocol (SSH/SFTP), etc. The data source (252a, 252b) may store metadata in the database (210) so that there may not be a requirement to create a new policy repeatedly. The system IP, user credentials, location of data generation, and type of OS may also be fed to the ingestion layer (260) from the UI (256). Once the necessary configurations are done, the system (200) starts fetching the data periodically. The system (200) may ensure that no duplicate data is fetched and that no data is lost.
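For illustration only, the following Python sketch shows one way to avoid re-ingesting duplicate files by keeping a checksum ledger of files already fetched; the JSON ledger file and hashing scheme are assumptions of this sketch and are not prescribed by the present disclosure.

```python
import hashlib
import json
from pathlib import Path

LEDGER = Path("ingested_files.json")  # hypothetical ledger location


def is_new_file(local_path: str) -> bool:
    """Return True and record the file if it has not been ingested before."""
    path = Path(local_path)
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    ledger = json.loads(LEDGER.read_text()) if LEDGER.exists() else {}
    if ledger.get(path.name) == digest:
        return False  # identical file already ingested: skip the duplicate
    ledger[path.name] = digest
    LEDGER.write_text(json.dumps(ledger, indent=2))
    return True
```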
[0070] An advantage of the system (200) is that no code-level changes may be required. The system (200) may not have to go through a software life cycle process (development, testing, integration testing, and then deployment) and no downtime may be required to onboard a new vendor/data source. Thus, the system (200) allows for a fast-track process of integration and saves time.
[0071] FIG. 4 illustrates an exemplary computer system (400) in which or with
which embodiments of the present disclosure may be implemented. The computer system (400) may include an external storage device (410), a bus (420), a main memory (430), a read-only memory (440), a mass storage device (450), a communication port(s) (460), and a processor (470). A person skilled in the art will
appreciate that the computer system (400) may include more than one processor
and communication ports. The processor (470) may include various modules associated with embodiments of the present disclosure. The communication port(s) (460) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial
port, a parallel port, or other existing or future ports. The communication port(s)
(460) may be chosen depending on a network, such as a Local Area Network
(LAN), Wide Area Network (WAN), or any network to which the computer system
(400) connects.
[0072] In an embodiment, the main memory (430) may be Random Access
Memory (RAM), or any other dynamic storage device commonly known in the art.
The read-only memory (440) may be any static storage device(s) e.g., but not limited to, a Programmable Read Only Memory (PROM) chip for storing static information e.g., start-up or basic input/output system (BIOS) instructions for the processor (470). The mass storage device (450) may be any current or future mass
storage solution, which can be used to store information and/or instructions.
Exemplary mass storage solutions include, but are not limited to, Parallel Advanced
Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces).
[0073] In an embodiment, the bus (420) may communicatively couple the
processor(s) (470) with the other memory, storage, and communication blocks. The bus (420) may be, e.g., a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), USB, or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor (470) to the computer system (400).
[0074] In another embodiment, operator, and administrative interfaces, e.g., a
display, keyboard, and cursor control device may also be coupled to the bus (420) to support direct operator interaction with the computer system (400). Other operator and administrative interfaces can be provided through network
connections connected through the communication port(s) (460). Components
described above are meant only to exemplify various possibilities. In no way should
the aforementioned exemplary computer system (400) limit the scope of the present
disclosure.
[0075] FIG. 5 illustrates an exemplary flow diagram for a method (500) for
providing seamless integration of data sources in a communication network, in
accordance with embodiments of the present disclosure.
[0076] At step 502: The method (500) includes receiving, from a user (102), a
request associated with at least one data source (252a, 252b) via a user interface (UI) (256).
[0077] At step 504: The method (500) includes extracting one or more
parameters from the received request.
[0078] At step 506: The method (500) includes configuring at least one policy
based on the one or more extracted parameters.
[0079] At step 508: The method (500) includes fetching at least one data file
from the at least one data source (252a, 252b) based on the at least one configured
policy.
[0080] At step 510: The method (500) includes integrating the at least one
fetched data file into an ingestion layer (260) to load the at least one fetched data file into one or more destination systems (254).
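For illustration only, the following Python sketch chains the helpers assumed in the earlier sketches to mirror steps 504 to 510 of the method (500); it is a non-limiting composition of those hypothetical helpers, not the claimed implementation.

```python
from pathlib import Path


def onboard_and_ingest(ui_request: dict, credentials: dict, remote_dir: str,
                       staging_dir: str, destination_dir: str) -> None:
    params = extract_parameters(ui_request)        # step 504: extract parameters
    policy = configure_policy(params)              # step 506: configure policy
    fetched = pull_files_over_sftp(                # step 508: fetch per policy
        host=policy.ip_address,
        username=credentials["username"],
        password=credentials["password"],
        remote_dir=remote_dir,
        local_dir=staging_dir,
    )
    layer = IngestionLayer(destination_dir)        # step 510: ingest and load
    for path in fetched:
        raw = Path(path)
        layer.load(raw.name, layer.transform(raw))
```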
[0081] In an embodiment, the method (500) further includes storing the
received request and the one or more extracted parameters into a database (210).
[0082] In an embodiment, the method (500) further includes communicating
the at least one fetched data file to a normalization layer (262) to normalize the
fetched data file based on the at least one configured policy.
[0083] In an embodiment, the method (500) further includes communicating
the at least one normalized data file and the at least one configured policy to the one
or more destination systems (254) for analysis and reporting.
[0084] In an embodiment, the one or more extracted parameters include at least
one of a data format, a data pulling protocol, an internet protocol (IP) address, a type of operating system (OS), and a data pulling interval.
[0085] In an embodiment, the method (500) further comprises feeding a user
credential associated with at least one data source (252a, 252b) to the ingestion layer (260).
[0086] In an embodiment, the method (500) further comprises fetching at least
one data file from at least one data source (252a, 252b) after a periodic time interval.
[0087] In an exemplary embodiment, the present disclosure discloses a user
equipment (UE) (104) communicatively coupled with a communication network (106). The coupling comprises steps of receiving, by the communication network (106), a connection request from the UE (104), sending, by the communication network (106), an acknowledgment of the connection request to the UE (104) and
transmitting a plurality of signals in response to the connection request. A seamless
integration of data sources (252a, 252b) in the communication network (106) is performed by a method comprising receiving, from a user (102), a request associated with at least one data source (252a, 252b) via a user interface (UI) (256). The method (500) comprises extracting one or more parameters from the received
request. The method (500) comprises configuring at least one policy based on one
or more extracted parameters. The method (500) comprises fetching at least one
data file from at least one data source (252a, 252b) based on the at least one
configured policy. The method (500) comprises integrating (510) the at least one
fetched data file into an ingestion layer (260) to load the at least one fetched data
file into one or more destination systems (254).
[0088] While considerable emphasis has been placed herein on the preferred embodiments, it will be appreciated that many other embodiments can be made and that many changes can be made to the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
ADVANTAGES OF INVENTION
[0089] The present invention provides a system and a method for
providing seamless integration of data sources in a communication network.
[0090] The present invention provides a system and a method that streamlines the onboarding process of different data sources/vendors in a
communication network seamlessly through a user-friendly user interface (UI).
[0091] The present invention provides an ingestion layer that may not require
any code-level change.
[0092] The present invention provides an ingestion layer that is capable of
acquiring data with minimal data loss.

WE CLAIM:
1. A system (200) for providing seamless integration of data sources (252a,
252b) in a communication network (106), the system (200) comprising:
a receiving unit (202) configured to receive, from a user (102), a request associated with at least one data source (252a, 252b) via a user interface (UI) (256); and
a processing unit (208) coupled to the receiving unit (202) and is configured to:
extract one or more parameters from the received request;
configure at least one policy based on the one or more extracted parameters;
fetch at least one data file from the at least one data source (252a, 252b) based on the at least one configured policy; and
integrate the at least one fetched data file into an ingestion layer (260) to load the at least one fetched data file into one or more destination systems (254).
2. The system (200) as claimed in claim 1, further comprising a database (210) configured to store the received request and the one or more extracted parameters.
3. The system (200) as claimed in claim 1, is further configured to communicate the at least one fetched data file to a normalization layer (262) to normalize the fetched data file based on the at least one configured policy.
4. The system (200) as claimed in claim 3, is further configured to communicate the at least one normalized data file and the at least one configured policy to the one or more destination systems (254) for analysis and reporting.

5. The system (200) as claimed in claim 1, wherein the one or more extracted parameters include at least one of a data format, a data pulling protocol, an internet protocol (IP) address, a type of operating system (OS), and a data pulling interval.
6. The system (200) as claimed in claim 1, is further configured to feed a user credential associated with the at least one data source (252a, 252b) to the ingestion layer (260).
7. The system (200) as claimed in claim 1, is further configured to fetch the at least one data file from the at least one data source (252a, 252b) after a periodic time interval.
8. A method (500) for providing seamless integration of data sources (252a, 252b) in a communication network (106), the method (500) comprising:
receiving (502), from a user (102), a request associated with at least one data source (252a, 252b) via a user interface (UI) (256);
extracting (504) one or more parameters from the received request;
configuring (506) at least one policy based on the one or more extracted parameters;
fetching (508) at least one data file from the at least one data source (252a, 252b) based on the at least one configured policy; and
integrating (510) the at least one fetched data file into an ingestion layer (260) to load the at least one fetched data file into one or more destination systems (254).
9. The method (500) as claimed in claim 8, further comprising storing, in a
database (210), the received request and the one or more extracted
parameters.

10. The method (500) as claimed in claim 8, further comprising communicating the at least one fetched data file to a normalization layer (262) to normalize the fetched data file based on the at least one configured policy.
11. The method (500) as claimed in claim 10, further comprising communicating the at least one normalized data file and the at least one configured policy to the one or more destination systems (254) for analysis and reporting.
12. The method (500) as claimed in claim 8, wherein the one or more extracted parameters include at least one of a data format, a data pulling protocol, an internet protocol (IP) address, a type of operating system (OS), and a data pulling interval.
13. The method (500) as claimed in claim 8, further comprising feeding a user credential associated with the at least one data source (252a, 252b) to the ingestion layer (260).
14. The method (500) as claimed in claim 8, further comprising fetching the at least one data file from the at least one data source (252a, 252b) after a periodic time interval.
15. A user equipment (UE) (104) communicatively coupled with a communication network (106), the coupling comprises steps of:
receiving, by the communication network (106), a connection request from the UE (104);
sending, by the communication network (106), an acknowledgment of the connection request to the UE (104); and
transmitting a plurality of signals in response to the connection request, wherein seamless integration of data sources (252a, 252b) in the communication network (106) is performed by a method as claimed in claim 8.

Documents

Application Documents

# Name Date
1 202321047046-STATEMENT OF UNDERTAKING (FORM 3) [12-07-2023(online)].pdf 2023-07-12
2 202321047046-PROVISIONAL SPECIFICATION [12-07-2023(online)].pdf 2023-07-12
3 202321047046-FORM 1 [12-07-2023(online)].pdf 2023-07-12
4 202321047046-DRAWINGS [12-07-2023(online)].pdf 2023-07-12
5 202321047046-DECLARATION OF INVENTORSHIP (FORM 5) [12-07-2023(online)].pdf 2023-07-12
6 202321047046-FORM-26 [13-09-2023(online)].pdf 2023-09-13
7 202321047046-FORM-26 [05-03-2024(online)].pdf 2024-03-05
8 202321047046-FORM 13 [08-03-2024(online)].pdf 2024-03-08
9 202321047046-AMENDED DOCUMENTS [08-03-2024(online)].pdf 2024-03-08
10 202321047046-Request Letter-Correspondence [03-06-2024(online)].pdf 2024-06-03
11 202321047046-Power of Attorney [03-06-2024(online)].pdf 2024-06-03
12 202321047046-Covering Letter [03-06-2024(online)].pdf 2024-06-03
13 202321047046-ENDORSEMENT BY INVENTORS [13-06-2024(online)].pdf 2024-06-13
14 202321047046-DRAWING [13-06-2024(online)].pdf 2024-06-13
15 202321047046-CORRESPONDENCE-OTHERS [13-06-2024(online)].pdf 2024-06-13
16 202321047046-COMPLETE SPECIFICATION [13-06-2024(online)].pdf 2024-06-13
17 202321047046-CORRESPONDANCE-WIPO CERTIFICATE-14-06-2024.pdf 2024-06-14
18 Abstract1.jpg 2024-07-12
19 202321047046-ORIGINAL UR 6(1A) FORM 26-020924.pdf 2024-09-09
20 202321047046-FORM 18 [30-09-2024(online)].pdf 2024-09-30
21 202321047046-FORM 3 [07-11-2024(online)].pdf 2024-11-07