
Method And System For Selectively Compressing Headers

Abstract: The present disclosure relates to a system (120) and a method (500) for selectively compressing headers. The method (500) includes the step of receiving (505), by one or more processors (205), a plurality of requests from a User Equipment (UE) (110) via a communication protocol. The plurality of requests aids in interaction between one or more network functions (125) and a server (115) to access services and exchange data. The method (500) includes the step of categorizing (510), by the one or more processors (205), the headers of each of the plurality of requests into a compression category and a non-compression category. The method (500) includes the step of indexing (515), by the one or more processors (205), the headers categorized in the compression category into one of static and dynamic tables, and thereby compressing the headers categorized in the compression category. Ref. Fig. 2


Patent Information

Application #
Filing Date
05 July 2023
Publication Number
2/2025
Publication Type
INA
Invention Field
COMMUNICATION
Status
Email
Parent Application
Patent Number
Legal Status
Grant Date
2025-07-22
Renewal Date

Applicants

JIO PLATFORMS LIMITED
OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA

Inventors

1. Aayush Bhatnagar
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad-380006, Gujarat, India
2. Adityakar Jha
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad-380006, Gujarat, India
3. Ajith Reddy
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad-380006, Gujarat, India
4. Depak Kathuria
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad-380006, Gujarat, India
5. Himanshu Chahuhan
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad-380006, Gujarat, India
6. Nitin Verma
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad-380006, Gujarat, India
7. Yog Vashishth
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad-380006, Gujarat, India

Specification

DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR SELECTIVELY COMPRESSING HEADERS
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention generally relates to wireless communication networks, and more particularly relates to a method and system for selectively compressing headers.
BACKGROUND OF THE INVENTION
[0002] In network communications, Hypertext Transfer Protocol version 2 (HTTP2) is widely used to facilitate the exchange of data between clients and servers. The HTTP2 protocol includes a header compression mechanism known as HPACK, which compresses headers to optimize network bandwidth and improve performance. However, the default behavior of HPACK is to compress all headers, making it challenging for trusted packet sniffers or probes to extract specific headers for request-response correlation.
[0003] In large-scale network communication systems with millions of subscribers and a high volume of signaling activities, there is a practical need to identify and correlate the flow of signaling for specific users. Various network functions, such as Session Management Functions (SMF), Short Message Service Functions (SMSF), and Access and Mobility Management Functions (AMF), are involved in processing the signaling, and the information passes through multiple stages of processing and interactions among these functions.
[0004] To address this requirement, probes are utilized to sniff the signaling and correlate it based on subscriber identity, such as mobile numbers or other identifiers. However, due to the default HPACK compression applied to all headers, the subscriber identity and other critical headers are compressed, making it difficult for probes to extract and correlate the necessary information.
[0005] There is, therefore, a need for an improved mechanism that selectively compresses headers in the HTTP2 protocol, allowing trusted packet sniffers and probes to retrieve specific headers without compression for accurate request-response correlation.
SUMMARY OF THE INVENTION
[0006] One or more embodiments of the present disclosure provide a method and a system for selectively compressing headers.
[0007] In one aspect of the present invention, the method for selectively compressing headers is disclosed. The method includes the step of receiving, by one or more processors, a plurality of requests from a User Equipment (UE) via a communication protocol. The plurality of requests aids in interaction between one or more network functions and a server to access services and exchange data. The method includes the step of categorizing, by the one or more processors, the headers of each of the plurality of requests into a compression category and a non-compression category. The method includes the step of indexing, by the one or more processors, the headers categorized in the compression category into one of static and dynamic tables, and thereby compressing the headers categorized in the compression category.
[0008] In one embodiment, the method includes the step of analyzing, by the one or more processors, the headers of each of the plurality of requests to perform correlation tasks based on the headers in the non-compression category of each of the plurality of requests between the one or more network functions and the server upon categorizing the headers and indexing of the headers in the compression category.
[0009] In another embodiment, the correlation tasks include one of identifying specific subscribers and mobile numbers, tracking signal flow, and identifying anomalies.
[0010] In yet another embodiment, the one or more network functions is at least one of a Session Management Function (SMF), an Access and Mobility Management Function (AMF), and Short Message Service Function (SMSF).
[0011] In yet another embodiment, the headers in the compression category are the headers that are essential for network processing and are readable by a probing unit.
[0012] In yet another embodiment, the headers in the non-compression category are the headers that are excluded from request response correlation, wherein the headers in the non-compression category are sent to the server in original format of the headers to enable reading by the probing unit.
[0013] In another aspect of the present invention, the system for selectively compressing headers is disclosed. The system includes a receiving unit configured to receive a plurality of requests from a User Equipment (UE) via a communication protocol. The plurality of requests aids in interaction between one or more network functions and a server to access services and exchange data. The system includes a categorizing unit configured to categorize the headers of each of the plurality of requests into a compression category and a non-compression category. The system includes an indexing unit configured to index the headers categorized in the compression category into one of static and dynamic tables.
[0014] In yet another aspect of the present invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed. The computer-readable instructions, when executed by a processor, configure the processor to receive a plurality of requests from a User Equipment (UE) via a communication protocol. The plurality of requests aids in interaction between one or more network functions and a server to access services and exchange data. The processor is configured to categorize the headers of each of the plurality of requests into a compression category and a non-compression category. The processor is configured to index the headers categorized in the compression category into one of static and dynamic tables.
[0015] In yet another aspect of the present invention, a User Equipment (UE) includes one or more primary processors. The one or more primary processors are communicatively coupled to one or more processors and a memory. The memory stores instructions which, when executed by the one or more primary processors, cause the UE to generate and transmit a plurality of requests via a communication protocol.
[0016] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0018] FIG. 1 is an exemplary block diagram of a communication system for selectively compressing headers, according to one or more embodiments of the present disclosure;
[0019] FIG. 2 is an exemplary block diagram of a system for selectively compressing the headers, according to one or more embodiments of the present disclosure;
[0020] FIG. 3 is a schematic representation of a workflow of the system of FIG. 1, according to one or more embodiments of the present disclosure;
[0021] FIG. 4 is a signal flow diagram illustrating the system for selectively compressing the headers, according to one or more embodiments of the present disclosure; and
[0022] FIG. 5 is a flow diagram illustrating a method for selectively compressing the headers, according to one or more embodiments of the present disclosure.
[0023] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0024] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0025] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0026] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0027] The system and method of the present invention leverage the Hypertext Transfer Protocol Version 2 (HTTP2) protocol, which provides two types of tables, a static table and a dynamic table, for indexing headers. The headers that are intended to be compressed using the Header Compression for HTTP2 (HPACK) mechanism are indexed into either the static or dynamic tables. On the other hand, the headers that are not to be compressed are excluded from the indexing process and are transmitted as string literals. This selective indexing and compression mechanism allows sniffing utilities and a probing unit to read the uncompressed literals and perform the necessary correlation tasks.
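For illustration only, the following Python sketch mirrors the selection just described, assuming the non-compression category is defined by an operator-configured list of header names; the names used (for example, x-subscriber-id) are hypothetical and do not appear in this specification.

```python
# Illustrative sketch only; the header names in NON_COMPRESSION_HEADERS are
# hypothetical placeholders, not a list taken from this specification.
NON_COMPRESSION_HEADERS = {"x-subscriber-id", "x-msisdn"}

def choose_representation(name: str) -> str:
    """Return the representation to use for a header field in a HEADERS block."""
    if name.lower() in NON_COMPRESSION_HEADERS:
        # Excluded from indexing: sent as a string literal so that a trusted
        # probe can read it off the wire without maintaining HPACK table state.
        return "string literal (not indexed)"
    # Eligible for HPACK indexing into the static or dynamic table.
    return "indexed via static/dynamic table"

if __name__ == "__main__":
    for header in (":method", "content-type", "x-subscriber-id"):
        print(header, "->", choose_representation(header))
```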
[0028] FIG. 1 illustrates an exemplary block diagram of a communication system 100 for selectively compressing headers, according to one or more embodiments of the present disclosure. The communication system 100 includes a network 105, a User Equipment (UE) 110, a server 115, a system 120, and one or more Network Functions (NFs) 125. The UE 110 aids a user to interact with the system 120 to generate and transmit a plurality of requests via a communication protocol. The user includes, but is not limited to, a network operator. The terms “user” and “subscriber” are used interchangeably herein, without deviating from the scope of the present disclosure.
[0029] For the purpose of description and explanation, the description will be explained with respect to the UE 110, or to be more specific will be explained with respect to a first UE 110a, a second UE 110b, and a third UE 110c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the UE 110 from the first UE 110a, the second UE 110b, and the third UE 110c is configured to connect to the server 115 via the network 105.
[0030] In an embodiment, each of the first UE 110a, the second UE 110b, and the third UE 110c is one of, but not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more such devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0031] The network 105 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 105 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0032] The network 105 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network 105 may further include a Voice over Internet Protocol (VoIP) network or some combination of the network types listed above.
[0033] The communication system 100 includes the server 115 accessible via the network 105. The server 115 is communicatively coupled to the one or more network functions 125. The server 115 is mainly configured for storing and managing user-related data. The server 115 stores subscriber information, including mobile numbers, identities, and other relevant data required for request-response correlation. The server 115 communicates with the one or more network functions 125 to provide necessary user data during network operations.
[0034] The server 115 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the server 115 may be associated with an entity including, but not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides service.
[0035] The communication system 100 includes the one or more NFs 125. The one or more NFs 125 refer to the tasks or operations performed by the network devices or software components within the network 105 to facilitate communication, data transfer, data security, and data management, and to optimize network performance. Examples of the one or more NFs 125 include routing and switching, network address translation (NAT), firewalling, load balancing, Quality of Service (QoS) management, network monitoring and management, encryption, authentication, and access control. In an embodiment, the one or more NFs 125 is at least one of, but not limited to, a Session Management Function (SMF), an Access and Mobility Management Function (AMF), and a Short Message Service Function (SMSF).
[0036] As mentioned earlier, the UE 110 aids a user to interact with the system 120 to generate and transmit the plurality of requests via the communication protocol. The plurality of requests aids in interaction between the one or more network functions 125 and the server 115 to access services and exchange data. In one embodiment, the plurality of requests corresponds to at least one of, communication, data transfer, data security, and data management.
[0037] The communication protocol facilitates data exchange between the one or more NFs 125 and the server 115. In one embodiment, the communication protocol includes, but is not limited to, the Hypertext Transfer Protocol Version 2 (HTTP2) protocol.
[0038] The server 115 and the one or more NFs 125 are interfaced via the communication protocol. The communication protocol includes a header compression mechanism known as Header Compression for HTTP2 (HPACK). HPACK is a compression mechanism specifically designed for reducing the overhead of transmitting HTTP header fields in HTTP2. HPACK compresses the header fields by using a combination of techniques such as Huffman encoding for string literals and static and dynamic table indexing for commonly used header fields. This compression reduces the size of headers sent over the network 105, leading to optimized network bandwidth and performance.
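As a hedged illustration, the sketch below uses the third-party Python hpack package (an independent RFC 7541 implementation, not part of this specification) to encode a header list in which one header is marked never-indexed so it is emitted as a literal; the x-subscriber-id header and its value are hypothetical, and the exact behaviour of NeverIndexedHeaderTuple is an assumption about that library.

```python
# Assumes the third-party "hpack" package (pip install hpack); its API and the
# NeverIndexedHeaderTuple behaviour are assumptions about that library, not part
# of this specification. The x-subscriber-id header is a hypothetical example.
from hpack import Decoder, Encoder, NeverIndexedHeaderTuple

encoder = Encoder()
headers = [
    (":method", "POST"),                       # matches a static-table entry
    ("content-type", "application/json"),      # eligible for dynamic-table indexing
    NeverIndexedHeaderTuple("x-subscriber-id", "919876543210"),  # literal only
]
# huffman=False keeps the literal name/value as plain octets on the wire,
# so a probe can read them directly from the captured HEADERS frame.
block = encoder.encode(headers, huffman=False)

decoder = Decoder()
print(decoder.decode(block))
```

Decoding recovers the same header list; under these assumptions the never-indexed header remains visible as plain text inside the encoded block.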
[0039] The communication system 100 further includes the system 120 communicably coupled to the server 115 and each of the first UE 110a, the second UE 110b, and the third UE 110c via the network 105. In one or more embodiments, the system 120 is adapted to be embedded within the server 115 or deployed as an individual entity.
[0040] Operational and construction features of the system 120 will be explained in detail with respect to the following figures.
[0041] FIG. 2 illustrates an exemplary block diagram of the system 120 for selectively compressing headers, according to one or more embodiments of the present disclosure. The system 120 includes one or more processors 205, a memory 210, a user interface 215, and a database 240. The one or more processors 205, hereinafter referred to as the processor 205, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. As per the illustrated embodiment, the system 120 includes one processor 205. However, it is to be noted that the system 120 may include multiple processors as per the requirement and without deviating from the scope of the present disclosure.
[0042] The information related to generation of the plurality of requests via the communication protocol is provided or stored in the memory 210. Among other capabilities, the processor 205 is configured to fetch and execute computer-readable instructions stored in the memory 210. The memory 210 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 210 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROMs, FLASH memory, unalterable memory, and the like.
[0043] The user interface 215 includes a variety of interfaces, for example, interfaces for a Graphical User Interface (GUI), a web user interface, a Command Line Interface (CLI), and the like. The user interface 215 facilitates communication of the system 120. In one embodiment, the user interface 215 provides a communication pathway for one or more components of the system 120. Examples of the one or more components include, but are not limited to, the UE 110 and the database 240.
[0044] The database 240 is configured to store the plurality of requests. Further, the database 240 provides structured storage, support for complex queries, and enables efficient data retrieval and analysis. The database 240 is one of, but is not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a No-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of database types are non-limiting and may not be mutually exclusive (e.g., a database can be both commercial and cloud-based, or both relational and open-source).
[0045] Further, the processor 205, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for processor 205 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor 205. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0046] In order for the system 120 to selectively compress the headers, the processor 205 includes a receiving unit 220, a categorizing unit 225, an indexing unit 230, and a probing unit 235 communicably coupled to each other for selectively compressing the headers.
[0047] The receiving unit 220, the categorizing unit 225, the indexing unit 230, and the probing unit 235, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0048] The receiving unit 220 is configured to receive the plurality of requests from the UE 110 via the communication protocol. In one embodiment, the communication protocol includes, but is not limited to, the HTTP2 protocol. In one embodiment, the plurality of requests corresponds to at least one of communication, data transfer, data security, and data management. The plurality of requests aids in interaction between the one or more NFs 125 and the server 115 to access services and exchange data. In an embodiment, the one or more NFs 125 is at least one of the Session Management Function (SMF), the Access and Mobility Management Function (AMF), and the Short Message Service Function (SMSF). The one or more NFs 125 encompasses various network functions and components in the network 105 responsible for handling different tasks and services, including the AMF, the SMF, and the SMSF.
[0049] The AMF is a network function responsible for managing access and mobility-related tasks in the network 105. The AMF plays a crucial role in the network's overall operation by handling functions such as subscriber authentication, mobility management, session establishment, and security procedures. The AMF is typically involved in the initial setup and ongoing management of user sessions, ensuring seamless mobility between different network areas and providing secure access to network resources. The AMF works closely with other network functions to coordinate and maintain efficient communication services for subscribers.
[0050] The SMF is a network function that oversees the management and control of sessions within the network 105. The SMF is primarily responsible for session establishment, modification, and termination, ensuring the reliable and efficient delivery of data between network entities. The SMF handles tasks, such as session routing, quality of service (QoS) management, traffic optimization, and policy enforcement. The SMF acts as a central point for session-related decisions and coordinates with other network functions to maintain the integrity and performance of sessions throughout their lifecycle.
[0051] The SMSF is a network function dedicated to handling Short Message Service (SMS) communications within the network 105. The SMSF enables the exchange of short text messages between mobile subscribers, facilitating communication for various purposes such as personal messaging, notifications, alerts, and information services. The SMSF handles tasks such as message routing, delivery confirmation, message storage, and interaction with external systems, ensuring reliable and timely delivery of SMS messages. The SMSF plays a critical role in supporting messaging services and contributes to the overall communication capabilities provided by the one or more NFs 125.
[0052] The one or more NFs 125, including the AMF, the SMF, and the SMSF, are essential components of the network 105 architecture. The AMF, the SMF, and the SMSF work together with other network functions to enable seamless connectivity, efficient session management, and reliable communication services for subscribers. Each function has its specific responsibilities and contributes to the overall functionality and performance of the network 105.
[0053] Upon receiving the plurality of requests from the UE 110 via the communication protocol, the categorizing unit 225 is configured to categorize the headers of each of the plurality of requests into a compression category and a non-compression category. The headers typically refer to additional information attached to each request in the communication protocol. In an embodiment, the headers include, but not limited to, HTTP headers, Simple Mail Transfer Protocol (SMTP) headers, and Transmission Control Protocol (TCP)/Internet Protocol (IP) headers. The headers perform data handling, data routing and data processing across various protocols. For example, in HTTP2 requests, the HTTP2 headers include at least one of content type, encoding, cookies, and user-agent information that help the server 115 and the one or more NFs 125 to communicate effectively.
[0054] Upon categorizing the headers of each of the plurality of requests into the compression category and the non-compression category, the probing unit 235 is configured to analyze the headers of each of the plurality of requests. The probing unit 235 is an entity within the network 105 architecture responsible for monitoring and capturing network traffic. The network traffic refers to the data packets transmitted between the devices or nodes within the network 105. The network traffic includes various types of information, such as web pages, emails, files, multimedia streams, and any other data exchanged between devices connected to the network 105. The probing unit 235 is designed to analyze and correlate the information of the headers traveling across the network 105. The probing unit 235 acts as a trusted utility and interacts with the selectively compressed headers to perform correlation tasks.
[0055] In one embodiment, on analyzing the headers of each of the plurality of requests in the non-compression category, the probing unit 235 is configured to perform correlation tasks. In one embodiment, the probing unit 235 is implemented as a packet sniffer. Packet sniffers are commonly used for network troubleshooting, monitoring network activity, analyzing network protocols, and detecting malicious activities such as unauthorized access or data breaches. In an embodiment, the correlation tasks include one of identifying specific subscribers and mobile numbers, tracking signal flow, and identifying anomalies.
[0056] The correlation tasks are performed by the probing unit 235 based on the headers in the non-compression category of each of the plurality of requests between the one or more NFs 125 and the server 115. In one or more embodiments, with the accurate correlation provided by the selective HPACK mechanism, the probing unit 235 is configured to enable analysis and troubleshooting of network activities. The probing unit 235 tracks the flow of signaling for a specific subscriber and pinpoints any issues or anomalies in the network 105. This analysis aids in providing efficient services to users and maintaining the integrity of network operations.
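Purely as an illustrative sketch, and assuming the probe has already extracted an uncompressed subscriber-identity literal from captured HEADERS frames (the header name, record fields, and sample values below are hypothetical), the correlation task described above can be pictured as an index that groups sniffed signalling messages by subscriber.

```python
# Hypothetical probe-side correlation index; field names and sample values are
# illustrative assumptions, not taken from this specification.
from collections import defaultdict

class CorrelationIndex:
    """Groups sniffed signalling messages by subscriber identity."""

    def __init__(self):
        self._by_subscriber = defaultdict(list)

    def observe(self, subscriber_id: str, network_function: str, message: str) -> None:
        # Record which NF (e.g. AMF, SMF, SMSF) handled the message.
        self._by_subscriber[subscriber_id].append((network_function, message))

    def trace(self, subscriber_id: str):
        """Return the signalling flow captured for one subscriber, in arrival order."""
        return self._by_subscriber[subscriber_id]

probe = CorrelationIndex()
probe.observe("919876543210", "AMF", "Registration Request")
probe.observe("919876543210", "SMF", "PDU Session Establishment Request")
print(probe.trace("919876543210"))
```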
[0057] In another embodiment, on analyzing the headers of each of the plurality of requests in the compression category, the indexing unit 230 is configured to index the headers categorized in the compression category into one of static and dynamic tables. Commonly used header fields are referenced from the static table or the dynamic table if the headers have been previously encountered and stored. The new headers that are not present in the dynamic table are added to it, and their representations are sent along with the index to their position in the dynamic table. The selective HPACK mechanism utilizes a combination of the static and dynamic tables, which reduces the overhead of header transmission in HTTP2 connections. The selective HPACK mechanism is deployed at the one or more NFs 125 to handle the headers appropriately.
[0058] In the HPACK mechanism, the static table contains commonly used header fields and values that are predefined and known to both the client and the server 115. Entries in the static table are not modified during the HTTP2 connection. The static table remains constant throughout the connection and helps in efficiently compressing commonly used headers without needing to send their full representations in each message. In the HPACK mechanism, the dynamic table is used to store the header fields and values encountered during the HTTP2 connection. Entries in the dynamic table can be added, updated, or removed based on the headers exchanged between the client and the server 115 during the session. When the header field is encountered for a first time in a session, then the encountered field is added to the dynamic table. Subsequent occurrences of the same header field can refer to the entry in the dynamic table by using an index, reducing redundancy and saving bandwidth. Further, the entries in the dynamic table have the associated index that can be used to reference them in subsequent messages, further optimizing header transmission.
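A minimal sketch of the dynamic-table behaviour described in this paragraph follows, assuming the entry-size accounting (name plus value plus 32 octets) and oldest-first eviction defined in RFC 7541; it is illustrative only and not the implementation used by the system 120.

```python
# Minimal illustrative dynamic table; sizing and eviction follow RFC 7541, but
# this is a sketch, not the implementation of the indexing unit 230.
from collections import deque

class DynamicTable:
    def __init__(self, max_size: int = 4096):
        self.max_size = max_size
        self.entries = deque()   # newest entry at the front
        self.size = 0

    @staticmethod
    def _entry_size(name: str, value: str) -> int:
        return len(name) + len(value) + 32

    def add(self, name: str, value: str) -> None:
        self.entries.appendleft((name, value))
        self.size += self._entry_size(name, value)
        while self.size > self.max_size:          # evict the oldest entries first
            old_name, old_value = self.entries.pop()
            self.size -= self._entry_size(old_name, old_value)

    def find(self, name: str, value: str):
        """Return the 1-based dynamic-table position of an exact match, or None."""
        for position, entry in enumerate(self.entries, start=1):
            if entry == (name, value):
                return position
        return None

table = DynamicTable()
table.add("x-request-id", "abc-123")            # first occurrence: added to the table
print(table.find("x-request-id", "abc-123"))    # later occurrences reference this entry
```

In HPACK, dynamic-table positions are addressed after the 61 static-table entries, so the first dynamic entry is reachable at index 62.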
[0059] The headers in the compression category are the headers that are essential for network processing and are readable by the probing unit 235. For the headers that need to be indexed and compressed with the HPACK, the one or more NFs 125 follow the HPACK compression process. An indexed representation defines the header field as a reference to an entry in either the static table or the dynamic table provided by the communication protocol.
[0060] The headers in the non-compression category are the headers that are excluded from request response correlation. The headers in the non-compression category are not compressed with HPACK; the one or more NFs 125 exclude these headers from the compression process. In an embodiment, the headers in the non-compression category are transmitted to the server 115 in the original format of the headers to enable reading by the probing unit 235.
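As an illustration of how an excluded header can stay readable on the wire, the sketch below encodes a header using RFC 7541's "Literal Header Field Never Indexed" representation with uncompressed (non-Huffman) strings. The specification above only requires that such headers be transmitted as string literals; the never-indexed form is one possible wire representation, and the header name and value shown are hypothetical.

```python
def _encode_int(value: int, prefix_bits: int, first_byte: int) -> bytearray:
    """RFC 7541 section 5.1 integer encoding with an N-bit prefix."""
    limit = (1 << prefix_bits) - 1
    if value < limit:
        return bytearray([first_byte | value])
    out = bytearray([first_byte | limit])
    value -= limit
    while value >= 128:
        out.append((value % 128) | 0x80)
        value //= 128
    out.append(value)
    return out

def encode_never_indexed(name: str, value: str) -> bytes:
    """Literal Header Field Never Indexed, new name (first octet 0001 0000),
    with both strings sent as plain octets (Huffman bit = 0) so that a probe
    can read them directly from the captured frame."""
    out = bytearray([0x10])
    for text in (name.lower(), value):
        data = text.encode("ascii")
        out += _encode_int(len(data), 7, 0x00)   # H = 0: plain octets follow
        out += data
    return bytes(out)

# Hypothetical subscriber-identity header; the value is visible in clear text.
print(encode_never_indexed("x-subscriber-id", "919876543210").hex())
```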
[0061] By selectively compressing the headers, the system 120 enables efficient request-response correlation through customized header compression and accurate sniffing capabilities in large-scale network communication systems. The system 120 improves request-response correlation by selectively applying HPACK compression and ensuring the availability of uncompressed headers crucial for correlation. By implementing the system 120, network operators can enhance their analysis and troubleshooting capabilities, optimize network bandwidth usage, and provide efficient services to their subscribers.
[0062] FIG. 3 is a schematic representation of the system 120 in which the operations of various entities are explained, according to one or more embodiments of the present disclosure. FIG. 3 describes the system 120 for selectively compressing the headers. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE 110a for the purpose of description and illustration and should nowhere be construed as limiting the scope of the present disclosure.
[0063] As mentioned earlier in FIG. 1, in an embodiment, the first UE 110a may encompass electronic apparatuses. These devices include, but are not restricted to, personal computers, laptops, tablets, smartphones, or other devices enabled for web connectivity. The scope of the first UE 110a explicitly extends to a broad spectrum of electronic devices capable of executing computing operations and accessing networked resources, thereby providing users with a versatile range of functionalities for both personal and professional applications. This embodiment acknowledges the evolving nature of electronic devices and their integral role in facilitating access to digital services and platforms. In an embodiment, the first UE 110a can be associated with multiple users. Each UE 110 is communicatively coupled with the processor 205 via the network 105.
[0064] The first UE 110a includes one or more primary processors 305 communicably coupled to the one or more processors 205 of the system 120. The one or more primary processors 305 are coupled with a memory 310 storing instructions which are executed by the one or more primary processors 305. Execution of the stored instructions by the one or more primary processors 305 enables the first UE 110a to generate and transmit the plurality of requests via the communication protocol.
[0065] Furthermore, the one or more primary processors 305 within the UE 110 are uniquely configured to execute a series of steps as described herein. This configuration underscores the capability of the processor 205 to selectively compress the headers. The operational synergy between the one or more primary processors 305 and the additional processors, guided by the executable instructions stored in the memory 310, facilitates seamless compression of the headers.
[0066] As mentioned earlier in FIG.2, the system 120 includes the one or more processors 205, the memory 210, the user interface 215, and the database 240. The operations and functions of the one or more processors 205, the memory 210, the user interface 215, and the database 240 are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0067] Further, the processor 205 includes the receiving unit 220, the categorizing unit 225, the indexing unit 230, and the probing unit 235. The operations and functions of the receiving unit 220, the categorizing unit 225, the indexing unit 230, and the probing unit 235 are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 120 in FIG. 3, should be read with the description provided for the system 120 in the FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0068] FIG. 4 is a signal flow diagram illustrating the system for selectively compressing the headers, according to one or more embodiments of the present disclosure.
[0069] At step 402, receiving the plurality of requests from the UE 110 via the communication protocol by the receiving unit 220. In one embodiment, the communication protocol includes, but is not limited to, the HTTP2 protocol. In one embodiment, the plurality of requests corresponds to at least one of communication, data transfer, data security, and data management. The plurality of requests aids in interaction between the one or more NFs 125 and the server 115 to access services and exchange data. In an embodiment, the one or more NFs 125 is at least one of the Session Management Function (SMF), the Access and Mobility Management Function (AMF), and the Short Message Service Function (SMSF).
[0070] At step 404, categorizing the headers of each of the plurality of requests into the compression category and the non-compression category by the categorizing unit 225. The headers typically refer to additional information attached to each request in the communication protocol. In an embodiment, the headers include, but not limited to, HTTP headers, Simple Mail Transfer Protocol (SMTP) headers, and Transmission Control Protocol (TCP)/Internet Protocol (IP) headers. The headers perform data handling, data routing and data processing across various protocols. For example, in HTTP2 requests, the HTTP2 headers include at least one of content type, encoding, cookies, and user-agent information that help the server 115 and the one or more NFs 125 communicate effectively.
[0071] At step 406, analyzing the headers of each of the plurality of requests by the probing unit 235. The probing unit 235 is the entity within the network 105 architecture responsible for monitoring and capturing network traffic. The network traffic refers to the data packets transmitted between the devices or nodes within the network 105. The network traffic includes various types of information, such as web pages, emails, files, multimedia streams, and any other data exchanged between devices connected to the network 105. The probing unit 235 is designed to analyze and correlate the information of the headers traveling across the network 105. The probing unit 235 acts as the trusted utility and interacts with the selectively compressed headers to perform correlation tasks.
[0072] At step 408, on analyzing the headers of each of the plurality of requests in the non-compression category, the probing unit 235 performs the correlation tasks. In one embodiment, the probing unit 235 is implemented as the packet sniffer. Packet sniffers are commonly used for network troubleshooting, monitoring network activity, analyzing network protocols, and detecting malicious activities such as unauthorized access or data breaches. In an embodiment, the correlation tasks include one of identifying specific subscribers and mobile numbers, tracking signal flow, and identifying anomalies.
[0073] The correlation tasks are performed by the probing unit 235 based on the headers in the non-compression category of each of the plurality of requests between the one or more NFs 125 and the server 115. In one or more embodiments, with the accurate correlation provided by the selective HPACK mechanism, the probing unit 235 is configured to enable analysis and troubleshooting of network activities. The probing unit 235 tracks the flow of signaling for a specific subscriber and pinpoints any issues or anomalies in the network 105. This analysis aids in providing efficient services to users and maintaining the integrity of network operations.
[0074] At step 410, indexing the headers categorized in the compression category into one of static and dynamic tables by the indexing unit 230 based on analyzing the headers of each of the plurality of requests in the compression category. Commonly used header fields are referenced from the static table or the dynamic table if the headers have been previously encountered and stored. The new headers that are not present in the dynamic table are added to it, and their representations are sent along with the index to their position in the dynamic table. The selective HPACK mechanism utilizes a combination of the static and dynamic tables, which reduces the overhead of header transmission in HTTP2 connections. The selective HPACK mechanism is deployed at the one or more NFs 125 to handle the headers appropriately.
[0075] The headers in the compression category are the headers that are essential for network processing and are readable by the probing unit 235. For the headers that need to be indexed and compressed with the HPACK, the one or more NFs 125 follow the HPACK compression process. An indexed representation defines the header field as the reference to the entry in either the static table or the dynamic table provided by the communication protocol.
[0076] In the HPACK mechanism, the static table contains commonly used header fields and values that are predefined and known to both the client and the server 115. Entries in the static table are not modified during the HTTP2 connection. The static table remains constant throughout the connection and helps in efficiently compressing commonly used headers without needing to send their full representations in each message. In the HPACK mechanism, the dynamic table is used to store the header fields and values encountered during the HTTP2 connection. Entries in the dynamic table can be added, updated, or removed based on the headers exchanged between the client and the server 115 during the session. When the header field is encountered for the first time in the session, then the encountered field is added to the dynamic table. Subsequent occurrences of the same header field can refer to the entry in the dynamic table by using an index, reducing redundancy and saving bandwidth. Further, the entries in the dynamic table have the associated index that can be used to reference them in subsequent messages, further optimizing header transmission.
[0077] The headers in the non-compression category are the headers that are excluded from request response correlation. The headers in the non-compression category are not compressed with HPACK; the one or more NFs 125 exclude these headers from the compression process. In an embodiment, the headers in the non-compression category are transmitted to the server 115 in the original format of the headers to enable reading by the probing unit 235. Owing to this, the indexing unit 230 is configured to index the headers categorized in the compression category and the non-compression category. The indexing unit 230 is configured to transmit the indexed headers categorized in the compression category and the non-compression category to the categorizing unit 225. The categorizing unit 225 is configured to transmit an acknowledgement of whether the headers are categorized in the compression category or the non-compression category to the receiving unit 220. The receiving unit 220 is configured to transmit the acknowledgement of the headers categorized in the compression category or the non-compression category to the network operators.
[0078] FIG. 5 is a flow diagram illustrating a method 500 for selectively compressing the headers, according to one or more embodiments of the present disclosure.
[0079] At step 505, the method 500 includes the step of receiving the plurality of requests from the UE 110 via the communication protocol by the receiving unit 220. The plurality of requests aids in interaction between the one or more NFs 125 and the server 115 to access services and exchange data. In an embodiment, the one or more NFs 125 is at least one of the Session Management Function (SMF), the Access and Mobility Management Function (AMF), and the Short Message Service Function (SMSF). The one or more NFs 125 includes various network functions with various components in the network 105 responsible for handling different tasks and services. The one or more NFs 125 includes the AMF, the SMF, and the SMSF.
[0080] At step 510, the method 500 includes the step of categorizing the headers of each of the plurality of requests into the compression category and the non-compression category by the categorizing unit 225 based on receiving the plurality of requests from the UE 110 via the communication protocol. Upon categorizing the headers of each of the plurality of requests into the compression category and the non-compression category, the probing unit 235 is configured to analyze the headers of each of the plurality of requests.
[0081] In one embodiment, on analyzing the headers of each of the plurality of requests in the non-compression category, the probing unit 235 is configured to perform the correlation tasks. In an embodiment, the correlation tasks include one of identifying specific subscribers and mobile number, tracking signal flow, and identifying anomalies. The correlation tasks are performed by the probing unit 235 based on the headers in the non-compression category of each of the plurality of requests between the one or more network functions 125 and the server 115.
[0082] At step 515, the method 500 includes the step of indexing, by the indexing unit 230, the headers categorized in the compression category into one of static and dynamic tables based on analyzing the headers of each of the plurality of requests in the compression category, and thereby compressing the headers categorized in the compression category. The selective HPACK mechanism is deployed at the one or more NFs 125 to handle the headers appropriately. The headers in the compression category are the headers that are essential for network processing and are readable by the probing unit 235.
[0083] The headers in the non-compression category are the headers that are excluded from request response correlation. The headers in the non-compression category are not compressed with HPACK; the one or more NFs 125 exclude these headers from the compression process. In an embodiment, the headers in the non-compression category are transmitted to the server 115 in the original format of the headers to enable reading by the probing unit 235.
[0084] The present invention discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by a processor 205. The processor 205 is configured to receive a plurality of requests from a User Equipment (UE) 110 via a communication protocol. The plurality of requests aids in interaction between one or more Network Functions (NFs) 125 and a server 115 to access services and exchange data. The processor 205 is configured to categorize the headers of each of the plurality of requests into the compression category and the non-compression category. The processor 205 is configured to index the headers categorized in the compression category into one of static and dynamic tables.
[0085] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-5) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0086] The present disclosure incorporates technical advancement of indexing the headers. The headers that are intended to be compressed using the HPACK mechanism are indexed into either the static or dynamic tables. On the other hand, the headers that are not to be compressed are excluded from the indexing process and are transmitted as string literals. This selective indexing and compression mechanism allows sniffing utilities and the probing unit to read the uncompressed literals and perform the necessary correlation tasks, which improves request-response correlation by selectively applying HPACK compression and ensuring the availability of uncompressed headers crucial for correlation. By implementing this method, network operators can enhance their analysis and troubleshooting capabilities, optimize network bandwidth usage, and provide efficient services to their subscribers.
[0087] The present disclosure offers the following advantages:
[0088] Enhanced Request-Response Correlation: The selective HPACK mechanism allows trusted packet sniffers and the probing unit to accurately read and correlate specific headers without the interference of HPACK compression. This enhances the analysis and troubleshooting capabilities in large-scale network communication systems with millions of subscribers and a high volume of signaling activities.
[0089] Customizable Compression Behavior: The invention provides flexibility by allowing the customization of the list of headers for which HPACK compression is to be disabled. The network operators can adapt the compression behavior to their specific requirements and optimize the correlation process accordingly.
[0090] Compatibility with Existing Protocols: The invention leverages the HTTP2 protocol, an established and widely used network communication protocol. By integrating the selective HPACK mechanism into the existing protocol, it ensures compatibility with a wide range of network infrastructures and systems.
[0091] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.

REFERENCE NUMERALS
[0092] Communication system – 100;
[0093] Network – 105;
[0094] User Equipment – 110;
[0095] Server – 115;
[0096] System – 120;
[0097] One or more Network Functions – 125;
[0098] One or more processors – 205;
[0099] Memory – 210;
[00100] User Interface – 215;
[00101] Receiving unit – 220;
[00102] Categorizing unit – 225;
[00103] Indexing unit – 230;
[00104] Probing unit – 235;
[00105] Database – 240;
[00106] One or more primary processors – 305;
[00107] Memory of user equipment – 310.
CLAIMS
We Claim:
1. A method (500) of selectively compressing headers, the method (500) comprising the steps of:
receiving (505), by one or more processors (205), a plurality of requests from a User Equipment (UE) (110) via a communication protocol, the plurality of requests aids in interaction between one or more network functions (125) and a server (115) to access services and exchange data;
categorizing (510), by the one or more processors (205), the headers of each of the plurality of requests into a compression category and a non-compression category; and
indexing (515), by the one or more processors (205), the headers categorized in the compression category into one of static and dynamic tables, and thereby compressing the headers categorized in the compression category.

2. The method (500) as claimed in claim 1, wherein upon categorizing the headers and indexing of the headers in the compression category, the method (500) comprises the step of analyzing, by the one or more processors (205), the headers of each of the plurality of requests to perform correlation tasks based on the headers in the non-compression category of each of the plurality of requests between the one or more network functions and the server.

3. The method (500) as claimed in claim 2, wherein the correlation tasks include one of identifying specific subscribers and mobile number, tracking signal flow, and identifying anomalies.

4. The method (500) as claimed in claim 1, wherein the one or more network functions (125) is at least one of a Session Management Function (SMF), an Access and Mobility Management Function (AMF), and Short Message Service Function (SMSF).

5. The method (500) as claimed in claim 1, wherein the headers in the compression category are the headers that are essential for network processing and are readable by a probing unit (235).

6. The method (500) as claimed in claim 1, wherein the headers in the non-compression category are the headers that are excluded from request response correlation, wherein the headers in the non-compression category are sent to the server in original format of the headers to enable reading by the probing unit (235).

7. A system (120) for selectively compressing headers, the system (120) comprises:
a receiving unit (220) configured to receive, a plurality of requests from a User Equipment (UE) (110) via a communication protocol, the plurality of requests aids in interaction between one or more network functions (125) and a server (115) to access services and exchange data;
a categorizing unit (225) configured to categorize, the headers of each of the plurality of requests into a compression category and a non-compression category;
an indexing unit (230) configured to index, the headers categorized in the compression category into one of static and dynamic tables.

8. The system (120) as claimed in claim 7, the system (120) comprising a probing unit (235) configured to analyze, the headers of each of the plurality of requests to perform correlation tasks based on the headers in the non-compression category of each of the plurality of requests between the one or more network functions (125) and the server (115).

9. The system (120) as claimed in claim 8, wherein the correlation tasks include one of identifying specific subscribers and mobile number, tracking signal flow, and identifying anomalies.

10. The system (120) as claimed in claim 7, wherein the one or more network functions (125) is at least one of a Session Management Function (SMF), an Access and Mobility Management Function (AMF), and Short Message Service Function (SMSF).

11. The system (120) as claimed in claim 7, wherein the headers in the compression category are the headers that are essential for network processing and are readable by the probing unit (235).

12. The system (120) as claimed in claim 7, wherein the headers in the non-compression category are the headers that are excluded from request response correlation, wherein the headers in the non-compression category are sent to the server in original format of the headers to enable reading by the probing unit (235).

13. A User Equipment (UE) (110) comprising:
one or more primary processors (305) communicatively coupled to one or more processors (205), the one or more primary processors (305) coupled with a memory (310), wherein said memory (310) stores instructions which when executed by the one or more primary processors (305) causes the UE (110) to:
generate and transmit a plurality of requests via a communication protocol,
wherein the one or more processors (205) is configured to perform the steps as claimed in claim 1.

Documents

Application Documents

# Name Date
1 202321045205-STATEMENT OF UNDERTAKING (FORM 3) [05-07-2023(online)].pdf 2023-07-05
2 202321045205-PROVISIONAL SPECIFICATION [05-07-2023(online)].pdf 2023-07-05
3 202321045205-FORM 1 [05-07-2023(online)].pdf 2023-07-05
4 202321045205-FIGURE OF ABSTRACT [05-07-2023(online)].pdf 2023-07-05
5 202321045205-DRAWINGS [05-07-2023(online)].pdf 2023-07-05
6 202321045205-DECLARATION OF INVENTORSHIP (FORM 5) [05-07-2023(online)].pdf 2023-07-05
7 202321045205-FORM-26 [11-09-2023(online)].pdf 2023-09-11
8 202321045205-Proof of Right [22-12-2023(online)].pdf 2023-12-22
9 202321045205-DRAWING [26-06-2024(online)].pdf 2024-06-26
10 202321045205-COMPLETE SPECIFICATION [26-06-2024(online)].pdf 2024-06-26
11 Abstract1.jpg 2024-09-26
12 202321045205-Power of Attorney [11-11-2024(online)].pdf 2024-11-11
13 202321045205-Form 1 (Submitted on date of filing) [11-11-2024(online)].pdf 2024-11-11
14 202321045205-Covering Letter [11-11-2024(online)].pdf 2024-11-11
15 202321045205-CERTIFIED COPIES TRANSMISSION TO IB [11-11-2024(online)].pdf 2024-11-11
16 202321045205-FORM 3 [27-11-2024(online)].pdf 2024-11-27
17 202321045205-FORM 3 [27-11-2024(online)]-1.pdf 2024-11-27
18 202321045205-FORM-9 [10-01-2025(online)].pdf 2025-01-10
19 202321045205-FORM 18A [13-01-2025(online)].pdf 2025-01-13
20 202321045205-FER.pdf 2025-01-30
21 202321045205-OTHERS [07-03-2025(online)].pdf 2025-03-07
22 202321045205-FER_SER_REPLY [07-03-2025(online)].pdf 2025-03-07
23 202321045205-COMPLETE SPECIFICATION [07-03-2025(online)].pdf 2025-03-07
24 202321045205-US(14)-HearingNotice-(HearingDate-04-07-2025).pdf 2025-06-18
25 202321045205-Correspondence to notify the Controller [19-06-2025(online)].pdf 2025-06-19
26 202321045205-FORM-26 [02-07-2025(online)].pdf 2025-07-02
27 202321045205-Written submissions and relevant documents [18-07-2025(online)].pdf 2025-07-18
28 202321045205-FORM-26 [18-07-2025(online)].pdf 2025-07-18
29 202321045205-PatentCertificate22-07-2025.pdf 2025-07-22
30 202321045205-IntimationOfGrant22-07-2025.pdf 2025-07-22

Search Strategy

1 202321045205_SearchStrategyNew_E_SearchstrategyE_27-01-2025.pdf

ERegister / Renewals

3rd: 17 Oct 2025

From 05/07/2025 - To 05/07/2026