
Method And System For Managing Storage In A Communication Network

Abstract: The present disclosure relates to a system (125) and a method (500) for managing storage in a communication network (105). The system (125) includes a transceiver module (220) to receive an alert pertaining to a corrupt application from a User Equipment (110). The system further includes an identification module (225) to identify a corrupt stack of the corrupt application based on the received alert. The system further includes an extraction module (230) to selectively extract a memory image corresponding to the identified corrupt stack and a format convertor module (235) to convert the extracted memory image into acceptable formats. Thereby, the system (125) manages storage in the communication network (105) in an optimized manner to facilitate debugging the corrupt stack of the corrupt application. The method (500) includes various steps for managing the storage in the communication network (105). Ref. Fig. 2


Patent Information

Application #: 202321044335
Filing Date: 03 July 2023
Publication Number: 04/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email:
Parent Application:

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, India. Ahmedabad Gujarat India 380006

Inventors

1. Aayush Bhatnagar
Tower-7, 15B, Beverly Park, Sector-14 Koper Khairane Navi Mumbai Maharashtra, India 400701
2. Birendra Singh Bisht
B-2101, Yashaskaram CHS, Plot -39, Sector -27 Kharghar Navi Mumbai Maharashtra India 410210
3. Harbinder Pal Singh
Wing B1, Flat No 402, Lakhani Suncoast, Sector 15, CBD Belapur Navi Mumbai Maharashtra India 400614
4. Rohit Soren
Flat-106, HNo-84, Sultanpur New Delhi Delhi India
5. Priyanka Singh
E-802 RiverScape CHS,Casa Rio, Palava City Dombivli East Maharashtra India 421204
6. Pravesh Aggarwal
A-313, Raghubir Nagar - New Delhi New Delhi India 110027
7. Bidhu Sahu
1702, E, RiverScape, CasaRio, Palava City Dombivali East Maharashtra India 421204
8. Suman Naskar
E-801 RiverScape CHS,Casa Rio, Palava City, Dombivli East India Maharashtra 421204
9. Satyajit Kumar
S50/34,Kundu Niwas, DLF Phase 3,Sector 24, , - Gurugram Haryana India 122004
10. Raj Priya Darshi
1102/Fairfield, Bharat Ecovistas, Shilphata Thane Maharashtra India 421204

Specification

DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR MANAGING STORAGE IN A COMMUNICATION NETWORK
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3.PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention generally relates to wireless communication systems, and more particularly relates to managing storage in a communication network during application fault handling.
BACKGROUND OF THE INVENTION
[0002] In existing applications, whenever any disruption occurs in an ongoing network process, or the relevant networking system crashes due to abnormal data, a coding bug, a segmentation fault, or an interruption from hardware, a significant amount of the data received during the downtime is lost. To safeguard against such losses and to prevent future crashes, a fault handling mechanism is employed. Such a mechanism is implemented to diagnose why the crash has happened. In order to successfully diagnose such an anomaly, a core dump of the process is generated.
[0003] The core dump process can be depicted in a flow chart where a user/network operator can identify application programming interfaces (APIs) or segmentation faults in specific code lines. In other words, a core dump is an image of the virtual memory map, and the core dump process includes analyzing the entirety of the virtual memory map. Therefore, the size of the core dump is the same as the total virtual memory taken by the process. Such a core dump is used to diagnose the cause of corruption.
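To illustrate the scale involved, the following minimal Python sketch (an illustration assuming a Linux environment; it is not part of the disclosure) estimates how large a conventional full core dump would be by reading a process's total virtual memory from /proc/<pid>/statm, since a full dump is roughly an image of the whole virtual memory map.

```python
# Illustrative only: estimate the size a conventional full core dump would take
# on Linux by reading the process's total virtual memory from /proc/<pid>/statm.
import os

def estimated_full_dump_bytes(pid="self"):
    with open(f"/proc/{pid}/statm") as f:
        total_pages = int(f.read().split()[0])   # first field: total program size in pages
    return total_pages * os.sysconf("SC_PAGE_SIZE")

if __name__ == "__main__":
    size = estimated_full_dump_bytes()
    print(f"Approximate full core dump size: {size / (1024 ** 2):.1f} MiB")
```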
[0004] However, one of the common issues in fault handling is that when a crash occurs in any application, which could be due to bugs in the code, corrupt data from the network, unhandled network events, and so on, a large amount of data is produced which needs to be analyzed in order to identify the exact section of the application's execution steps/methodology/code. The size of this data is variable, depending upon the extent of the crash that occurred, and can range from a few megabytes (MB) to multiple gigabytes (GB) depending on the application's virtual memory usage. Considering the large scale of activities performed by certain applications, some application processes' virtual memory grows as large as 20-50 GB, for which core dumping takes a significant amount of time, thereby degrading overall system throughput.
[0005] Furthermore, in contemporary systems, the core dumping process takes too long (20-30 seconds), which is a long time to wait before a stand-by process can come into play as the active process. This long duration to generate the core dump also increases the time required to restart the crashed process, as a new process cannot be restarted until critical system resources, such as the IP address/port used previously, have been released completely. Moreover, the analysis of such a large volume of data results in significant storage consumption, causing overall performance degradation, and may sometimes lead to prolonged operational shutdown or downtime.
[0006] Therefore, there is a need for a system and method that can overcome at least one of the above shortcomings, and particularly manage storage during fault handling.
BRIEF SUMMARY OF THE INVENTION
[0007] One or more embodiments of the present disclosure provide a system and method for managing storage in a communication network.
[0008] In one aspect of the present invention, a system for managing storage in a communication network is disclosed. The system includes a transceiver configured to receive an alert from a User Equipment (UE) pertaining to at least one corrupt application. The UE is adapted to host a plurality of applications and each of the plurality of applications includes a plurality of stacks. In one embodiment, the alert corresponding to the at least one corrupt application includes data pertaining to an address of a code section where the corruption has occurred, pertaining to the at least one corrupt stack. The system further includes an identification module and an extraction module communicably coupled to the identification module. The identification module is configured to identify at least one corrupt stack pertaining to the at least one corrupt application based on the received alert from the UE. The extraction module is configured to selectively extract a memory image corresponding to the identified at least one corrupt stack pertaining to the at least one corrupt application. The system further includes a format convertor module configured to convert the extracted memory image into one or more acceptable formats.
[0009] The system is further configured to utilize at least one of a stack back trace and Application Programming Interfaces (APIs) to identify the at least one corrupt stack. The system is further configured to store the memory image corresponding to the identified at least one corrupt stack as a crash dump file. The system is further configured to utilize the crash dump file to initiate a debugging process. The system is further configured to initiate the debugging process by mapping the crash dump file to a file which is in the one or more acceptable formats, in order to rectify the at least one corrupt stack. The system further includes an initiation unit configured to initiate a standby mode in real time. The initiation unit utilizes one or more resources in order to restart the at least one corrupt application during a debugging process of the at least one corrupt stack. The system is further configured to debug the at least one corrupt stack pertaining to the at least one corrupt application, and thereby manage storage.
[0010] In another aspect of the present invention, a method for managing storage in a communication network is disclosed. The method includes the steps of receiving an alert from a User Equipment (UE) pertaining to at least one corrupt application. The UE is adapted to host a plurality of applications and each of the plurality of applications includes a plurality of stacks. In one embodiment, the alert corresponding to the at least one corrupt application, includes data pertaining to an address of a code section where the corruption has occurred pertaining to the at least one corrupt stack. The method includes the step of identifying at least one corrupt stack pertaining to the at least one corrupt application based on the received alert from the UE. The method further includes selectively extracting a memory image corresponding to the identified at least one corrupt stack pertaining to the at least one corrupt application. Thereafter the method includes the step of converting the extracted memory image into one or more acceptable formats.
[0011] The method further includes utilizing at least one of a stack back trace and Application Programming Interfaces (APIs) to identify the at least one corrupt stack. The method includes the step of storing the memory image corresponding to the identified at least one corrupt stack as a crash dump file. The method further includes the step of utilizing the crash dump file to initiate a debugging process by mapping the crash dump file to a file which is in the one or more acceptable formats, in order to rectify the at least one corrupt stack. The method includes the step of initiating in real time a standby mode by providing one or more resources in order to restart the at least one corrupt application during a debugging process of the at least one corrupt stack. The method further includes the step of debugging the at least one corrupt stack pertaining to the at least one corrupt application, and thereby managing storage.
[0012] In another aspect of the invention, a User Equipment (UE) is disclosed. The UE includes one or more primary processors communicatively coupled to one or more processors, the one or more primary processors being coupled with a memory. The memory stores instructions which, when executed by the one or more primary processors, cause the UE to host a plurality of applications, each application including a plurality of stacks, and to transmit an alert pertaining to at least one corrupt application to the one or more processors.
[0013] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0015] FIG. 1 is an exemplary block diagram of an environment for managing storage in a communication network, according to various embodiments of the present invention;
[0016] FIG. 2 is a block diagram of the system for managing storage in the communication network, according to various embodiments of the present system;
[0017] FIG. 3 is a schematic representation of a workflow of the present system of FIG. 1, according to various embodiments of the present system;
[0018] FIG. 4 is a signal flow diagram for managing storage in the communication network, according to one or more embodiments of the present invention; and
[0019] FIG. 5 shows a flow diagram of a method for managing storage, according to various embodiments of the present system.
[0020] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0021] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0022] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0023] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0024] As per various embodiments depicted, the present invention discloses the system and method for managing storage by implementing optimized core dumping to minimize failover time. The failover time is defined as the time required for initiating a backup.
[0025] In various embodiments, the present invention discloses the system and method to perform partial core dump for fault analysis to selectively retrieve data corresponding to the fault. The system and method further utilizes the retrieved data to facilitate ease of debugging the fault. The retrieved data corresponds to an image of a workflow of an application at a specific time.
[0026] Referring to FIG. 1, FIG. 1 illustrates an exemplary block diagram of an environment 100 for managing storage in a communication network 105. The environment 100 includes a user equipment 110. For the purpose of description and explanation, the description will be explained with respect to one or more user equipments (UE) 110, or to be more specific will be explained with respect to a first UE 110a, a second UE 110b, and a third UE 110c, and should nowhere be construed as limiting the scope of the present disclosure.
[0027] In an embodiment, each of the first UE 110a, the second UE 110b, and the third UE 110c is, but is not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more such devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0028] Each of the first UE 110a, the second UE 110b, and the third UE 110c is further configured to host a plurality of applications thereon. Each of the plurality of applications is adapted to include one or more application stacks to aid in performing certain predefined activities of each of the plurality of applications. The predefined activities include, but are not limited to, accessing a server 115, transmitting data packets, and receiving data packets via the communication network 105.
[0029] The server 115 may include by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the entity may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defence facility, or any other facility that provides content.
[0030] The communication network 105 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The communication network 105 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0031] The environment further includes a system 125 communicably coupled to the server 115 and each of the first UE 110a, the second UE 110b, and the third UE 110c via the communication network 105. The system 125 is configured to manage storage in the communication network 105.
[0032] In various embodiments, the system 125 may be generic in nature and may be integrated with any application including a Session Management Function (SMF), an Access and Mobility Management Function (AMF), a Business Telephony Application Server (BTAS), a Converged Telephony Application Server (CTAS), any SIP (Session Initiation Protocol) Application Server which interacts with the core Internet Protocol Multimedia Subsystem (IMS) on the IMS Service Control (ISC) interface as defined by 3GPP to host a wide array of cloud telephony enterprise services, a System Information Block (SIB), and a Mobility Management Entity (MME).
[0033] The system 125 is further configured to employ a Transmission Control Protocol (TCP) connection to identify any connection loss in the communication network 105, thereby improving overall efficiency. The TCP connection is a communications standard enabling applications and the system 125 to exchange information over the communication network 105.
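As a minimal sketch of how such connection-loss detection could be realized (an assumption about one possible mechanism; the socket options below are Linux-specific and are not defined by the disclosure), TCP keepalive can be enabled on the connection so that a lost peer surfaces as a socket error instead of silently hanging:

```python
# Minimal sketch (not from the disclosure): enable TCP keepalive on a Linux socket
# so that a dropped peer is detected by the kernel and connection loss surfaces
# as a socket error.
import socket

def open_monitored_connection(host: str, port: int) -> socket.socket:
    sock = socket.create_connection((host, port), timeout=10)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-specific tuning: probe after 30 s idle, every 10 s, give up after 3 misses.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)
    return sock
```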
[0034] Operational and construction features of the system 125 will be explained in detail with respect to the following figures.
[0035] Referring to FIG. 2, FIG. 2 illustrates a block diagram of the system 125 for managing storage in the communication network 105, according to one or more embodiments of the present invention. The system 125 is adapted to be embedded within the server 115 or to be deployed as an individual entity. However, for the purpose of description, the system 125 is described as an integral part of the server 115, without deviating from the scope of the present disclosure.
[0036] As per the illustrated embodiment, the system 125 includes one or more processors 205, a memory 210, and an input/output (I/O) interface unit 215. The one or more processors 205, hereinafter referred to as the processor 205, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. As per the illustrated embodiment, the system 125 includes one processor 205. However, it is to be noted that the system 125 may include multiple processors as per the requirement and without deviating from the scope of the present disclosure. Among other capabilities, the processor 205 is configured to fetch and execute computer-readable instructions stored in the memory 210. The memory 210 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 210 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
[0037] In an embodiment, the I/O interface unit 215 includes a variety of interfaces, for example, interfaces for data input and output devices, referred to as Input/Output (I/O) devices, storage devices, and the like. The I/O interface unit 215 facilitates communication of the system 125. In one embodiment, the I/O interface unit 215 provides a communication pathway for one or more components of the system 125. Examples of such components include, but are not limited to, the UE 110 and a database 240.
[0038] The database 240 is, but is not limited to, one of a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a No-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of database 240 types are non-limiting and may not be mutually exclusive, e.g., a database can be both commercial and cloud-based, or both relational and open-source.
[0039] Further, the processor 205, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for processor 205 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor 205. In such examples, the system 125 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 125 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0040] In order for the system 125 to manage storage in the communication network 105, the processor 205 includes a transceiver module 220, an identification module 225, an extraction module 230, and a format convertor module 235 communicably coupled to each other.
[0041] The transceiver module 220 of the processor 205 is communicably connected to each of the first UE 110a, the second UE 110b, and the third UE 110c via the communication network 105. Accordingly, the transceiver module 220 is configured to receive at least one input from each of the first UE 110a, the second UE 110b, and the third UE 110c.
[0042] The at least one input corresponds to an alert pertaining to the at least one application of the plurality of applications hosted in one of the first UE 110a, the second UE 110b, and the third UE 110c. In one embodiment, the alert is one of a notification, an alarm, and a trigger. The alert includes data pertaining to, but not limited to, an address of the one or more application stacks of the plurality of applications.
[0043] The transceiver module 220 is configured to receive the alert in case of a disruption in the predefined activities performed by the at least one application of the plurality of applications. In one embodiment, the disruption is caused due to one of a fault and a corruption in at least one of the one or more application stacks of the at least one application. The application of the plurality of applications which experiences the disruption in the predefined activities is categorized as a corrupt application. In particular, the corrupt application refers to an application that exhibits malfunctioning behavior due to damage or alteration in its code, configuration, or data. The corruption refers to the alteration or damage of data, metadata, or the structure of the storage system, leading to data becoming inaccessible, incorrect, or unusable.
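For illustration only, the alert described above could be carried as a small structured payload; the field names in the following Python sketch are hypothetical and are not prescribed by the disclosure.

```python
# Hypothetical shape of the alert described above; field names are illustrative
# and are not defined by the disclosure.
import json

alert = {
    "type": "notification",              # one of: notification, alarm, trigger
    "ue_id": "110a",
    "application": "example-sip-app",    # hypothetical corrupt application name
    "fault_address": "0x7f3a2c41d0b8",   # address of the code section where corruption occurred
    "signal": "SIGSEGV",
}

print(json.dumps(alert, indent=2))
```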
[0044] The processor 205 further includes the identification module 225 in communication with transceiver module 220. More specifically, the identification module 225 is communicably coupled with the transceiver module 220 to identify at least one corrupt stack pertaining to the at least one corrupt application based on the received alert. The corrupt stack refers to issues within the application stack that handles the storage operations, including the file systems, storage protocols, and the networked storage management software.
[0045] In one embodiment, the identification module 225 is configured to determine at least one of the address and a location of the at least one corrupt stack. Accordingly, the identification module 225 is configured to implement identification techniques such as, but not limited to, stack back-trace and utilization of an Application Programming Interface (API), to retrieve information corresponding to the address of the at least one corrupted stack from a memory map available in the database 240. The stack back-trace is a report of the active stack frames at a certain point in time during the execution of a program. The memory map includes a record of workflows of the one or more application stacks replicated onto the database 240. The identification module 225 is configured to utilize the retrieved information from the memory map to identify the at least one corrupt stack.
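The following minimal Python sketch illustrates the idea of a stack back-trace report (using Python-level frames as a stand-in for the native stack frames of the disclosed system; the function names are illustrative): the innermost frame gives the code location that the identification module would look up in the memory map.

```python
# Minimal analogue (assumption: Python-level frames stand in for native stack frames):
# capture a stack back-trace at the point of failure and report the innermost frame,
# i.e. the code location to be looked up in the memory map.
import traceback

def innermost_frame_report() -> str:
    stack = traceback.extract_stack()[:-1]   # drop this helper's own frame
    frame = stack[-1]                        # innermost caller: candidate corrupt location
    return f"{frame.filename}:{frame.lineno} in {frame.name}"

def faulty_handler():
    # On a real fault this report would accompany the alert and be used by the
    # identification module to locate the corrupt stack.
    print("corrupt stack identified at:", innermost_frame_report())

faulty_handler()
```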
[0046] The API is defined as a medium of communication between applications performing a designated process. The identification module 225 is configured to utilize the API which performs intended functionalities, such as tracing the corrupted application stack. The API is operable by providing a set of instructions in suitable formats such as JSON (JavaScript Object Notation), Python, or any other such compatible format.
[0047] The extraction module 230 of the processor 205 is communicably connected to the identification module 225. The extraction module 230 is configured to selectively extract a memory image corresponding to the identified corrupt stack pertaining to the at least one corrupt application. The memory image pertains to at least one of a snapshot indicating one of the address and location of the corrupt stack, and a summarized note containing one of the address and location of the corrupt stack.
[0048] In one embodiment, the extraction module 230 is communicably coupled to the identification module 225 to receive the retrieved information pertaining to one of the address and the location of the identified at least one corrupted stack. Utilizing the retrieved information, the extraction module 230 is further configured to extract the memory image corresponding to the identified at least one corrupt stack pertaining to the at least one corrupt application. Accordingly, the extraction module 230 is configured to perform partial core dumping of the memory map.
[0049] After selectively extracting the memory image, the extraction module 230 is configured to store the extracted memory image as a crash dump file in the database 240. The crash dump file is a file that captures the memory state of the entire system 125 at the moment of its failure. The extraction module 230 is further configured to include one or more sub-modules to extract the memory image corresponding to the identified at least one corrupt stack and store the extracted memory image as the crash dump file.
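A minimal sketch of such selective extraction, assuming a Linux environment (an assumption; the disclosure does not specify the operating system or these file paths): the mapping that contains the fault address is located via /proc/<pid>/maps and only that region is copied from /proc/<pid>/mem into the crash dump file, rather than dumping the entire virtual memory of the process.

```python
# Sketch of selective extraction on Linux (illustrative, not the disclosed implementation):
# locate the mapping containing the fault address and copy only that region into a
# crash dump file. Reading another process's memory requires ptrace permission.
def extract_partial_dump(pid: int, fault_address: int, out_path: str) -> None:
    with open(f"/proc/{pid}/maps") as maps:
        for line in maps:
            start_s, end_s = line.split()[0].split("-")
            start, end = int(start_s, 16), int(end_s, 16)
            if start <= fault_address < end:
                break
        else:
            raise ValueError("fault address not found in any mapping")

    with open(f"/proc/{pid}/mem", "rb") as mem, open(out_path, "wb") as dump:
        mem.seek(start)
        dump.write(mem.read(end - start))   # memory image of the corrupt region only
```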
[0050] Further, owing to the selective extraction of the memory image corresponding to the identified at least one corrupt stack, the system 125 is configured to advantageously reduce the data to be analyzed for debugging. As such, the selective extraction of the memory image minimizes operational time, thereby ensuring that the memory 210 and the processing speed of the processor 205 are not compromised.
[0051] The processor 205 further includes the format convertor module 235. The format convertor module 235 is communicably coupled to the extraction module 230. The format convertor module 235 is configured to receive the extracted memory image corresponding to the identified at least one corrupt stack from the extraction module 230. The format convertor module 235 converts the extracted memory image into one or more acceptable formats to facilitate debugging the at least one corrupt stack pertaining to the at least one corrupt application. The one or more acceptable formats include at least one of, but are not limited to, American Standard Code for Information Interchange (ASCII) text, Bulletin Board Code (BBCode), Creole, Crossmark, Epytext, MakeDoc, Markdown, and various source code formats such as Java language source code and Restructured Text (reST) in Python.
[0052] The format convertor module 235 is further configured to convert the extracted memory image into the one or more acceptable formats. This is achieved by using scripts that translate the information of the crash dump file, including symbols and addresses, into a readable and editable format. By doing so, the system 125 is configured to minimize the time consumed in debugging.
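As an illustration of such a conversion script (a toy sketch, not the disclosed scripts; symbol resolution is omitted), the following Python code renders a raw crash dump file into an ASCII text report with addresses, hexadecimal bytes, and printable characters that can be read and annotated during debugging.

```python
# Toy converter (illustrative): render a raw crash dump file as an ASCII text report
# with addresses, hex bytes, and printable characters.
def dump_to_text(dump_path: str, report_path: str, base_address: int = 0) -> None:
    with open(dump_path, "rb") as f:
        data = f.read()
    with open(report_path, "w") as out:
        for offset in range(0, len(data), 16):
            chunk = data[offset:offset + 16]
            hex_part = " ".join(f"{b:02x}" for b in chunk)
            text_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
            out.write(f"{base_address + offset:016x}  {hex_part:<47}  {text_part}\n")
```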
[0053] Referring to FIG. 3, FIG. 3 describes a preferred embodiment of the system 125. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE 110a for the purpose of description and illustration and should nowhere be construed as limiting the scope of the present disclosure.
[0054] As mentioned earlier, the first UE 110a includes one or more primary processors 305 communicably coupled to the one or more processors 205 of the system 125. The one or more primary processors 305 are coupled with a memory unit 310 storing instructions which are executed by the one or more primary processors 305. Execution of the stored instructions by the one or more primary processors 305 enables the first UE 110a to host a plurality of applications and each of the plurality of applications includes a plurality of stacks. The execution of the stored instructions by the one or more primary processors 305 further enables the first UE 110a to transmit an alert pertaining to at least one corrupt application to the one or more processors 205.
[0055] As mentioned earlier, the processor 205 of the system 125 is configured to receive the alert from the first UE 110a. More specifically, the processor 205 of the system 125 is configured to receive the alert from a kernel 315 of the first UE 110a in response to the disruption in performing the predefined activities by at least one application of the plurality of applications.
[0056] The kernel 315 is a core component serving as the primary interface between hardware components of the first UE 110a and the plurality of applications hosted thereon. The kernel 315 is configured to provide the plurality of applications hosted on the first UE 110a access to resources available in the communication network 105. The resources include one or more of a Central Processing Unit (CPU) and memory components such as Random Access Memory (RAM) and Read Only Memory (ROM).
[0057] In the preferred embodiment, the transceiver module 220 of the processor 205 is communicably connected to the kernel 315 of the first UE 110a. The transceiver module 220 is configured to receive the alert from the kernel 315 corresponding to the disruption in the predefined activities to be performed by the at least one application of the plurality of applications.
[0058] In the preferred embodiment, the identification module 225 of the processor 205 is communicably connected to the transceiver module 220 to receive the alert corresponding to the disruption. Upon receiving the alert, the identification module 225 implements various identification techniques such as, but not limited to, stack back-tracing and utilization of API, to identify the at least one corrupt stack amongst the one or more application stacks of the corrupt application.
[0059] In one embodiment, the system 125 further includes a signal handler module 320 registered to the kernel 315. The signal handler module 320 aids in identifying one of the location and address of the at least one corrupt stack of the corrupt application.
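A minimal sketch of registering such a handler (an assumption, using Python's faulthandler module as a stand-in for the native signal handler module described above): when the kernel delivers a fatal signal such as SIGSEGV, the location of the failing stack is written out for the identification step.

```python
# Minimal sketch: register with the kernel so that on a fatal signal (SIGSEGV,
# SIGABRT, ...) the traceback of the failing stack is written to a log file
# that the identification step can consume. The file must remain open.
import faulthandler

crash_log = open("crash_location.log", "w")
faulthandler.enable(file=crash_log)   # dump tracebacks of all threads on fatal signals
```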
[0060] The processor 205 further includes the extraction module 230 in communication with the identification module 225. Upon identification of the at least one corrupt stack, the extraction module 230 is configured to extract the memory image corresponding to the identified at least one corrupt stack. The extraction module 230 extracts the memory image from the memory map available in the database 240. The extraction module 230 is further configured to store the extracted memory image as the crash dump file.
[0061] The format convertor module 235 is communicably coupled to the extraction module 230. The format convertor module 235 is configured to access the stored crash dump file via the extraction module 230. The format convertor module 235 is configured to convert the stored crash dump file into the acceptable format, which can be read, interpreted, and edited easily for efficient debugging during fault handling.
[0062] FIG. 4 is a signal flow diagram for managing storage in a communication network 105, according to one or more embodiments of the present invention. For the purpose of description, the signal flow diagram is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0063] At step 405, the alert from the UE 110 pertaining to at least one corrupt application is received by the transceiver module 220. The UE 110 is adapted to host the plurality of applications, where each application includes a plurality of stacks. The alert corresponding to the at least one corrupt application includes data pertaining to an address of a code section where the corruption has occurred, pertaining to the at least one corrupt stack.
[0064] At step 410, upon receiving the alert from the UE 110, the at least one corrupt stack pertaining to the at least one corrupt application is identified based on the received alert from the UE 110 by the identification module 225. The at least one corrupt stack is identified utilizing at least one of stack back trace and Application Programming Interfaces (APIs).

[0065] At step 415, upon identifying the at least one corrupt stack, the extraction module 230 is configured to selectively extract the memory image corresponding to the identified at least one corrupt stack pertaining to the at least one corrupt application. The memory image corresponding to the identified at least one corrupt stack is stored as the crash dump file. The crash dump file is utilized to initiate the debugging process by mapping the crash dump file to the file, which is in the one or more acceptable formats, in order to rectify the at least one corrupt stack.
[0066] At step 420, upon extracting the memory image, the format convertor module 235 is configured to convert the extracted memory image into one or more acceptable formats. Further, the at least one corrupt stack pertaining to the at least one corrupt application is debugged to manage storage. In one embodiment, the standby mode is initiated in real time by providing one or more resources in order to restart the at least one corrupt application during a debugging process of the at least one corrupt stack.
[0067] FIG. 5 is a flow chart of the method 500 for managing storage in a communication network 105, according to one or more embodiments of the present invention. The method 500 is adapted to perform a partial core dump for fault analysis to selectively retrieve data corresponding to the fault. More specifically, the method further utilizes the retrieved data to facilitate ease of debugging the fault. The retrieved data corresponds to an image of a workflow of an application at a specific time. For the purpose of description, the method 500 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0068] At step 505, the method 500 includes the step of receiving, by the one or more processors 205, an alert from at least one of first UE 110a, the second UE 110b, and the third UE 110c. The alert is one of an alarm, a notification and a trigger. The alert corresponding to the at least one corrupt application, further includes data pertaining to an address of a corrupt stack where the corruption has occurred pertaining to the at least one corrupt application, from the at least one of the first UE 110a, the second UE 110b, and the third UE 110c.
[0069] At step 510, the method 500 includes the step of identifying, by the one or more processors 205, the at least one corrupt stack pertaining to the at least one corrupt application based on the received alert from the at least one of the first UE 110a, the second UE 110b, and the third UE 110c.
[0070] At step 515, the method 500 includes the step of selectively extracting, by the one or more processors 205, a memory image corresponding to the identified at least one corrupt stack pertaining to the at least one corrupt application. The one or more processors 205 are configured to store the extracted memory image corresponding to the identified at least one corrupt stack, as a crash dump file.
[0071] At step 520, the method 500 includes the step of converting, by the one or more processors 205, the extracted memory image into one or more acceptable formats to facilitate in debugging the at least one corrupt stack pertaining to the at least one corrupt application.
[0072] In order to facilitate the debugging of the at least one corrupt stack, the method 500 is further configured to map the crash dump file to a file. The file is configured to be as per the one or more acceptable formats. In one embodiment, the file is a source code file that is editable via suitable platform or modification tools to rectify the at least one corrupt stack. The crash dump file includes a corruption detail which may include one of an error in application code, an unwanted code segment from communication network 105, and a break in application code due to abrupt hardware issues.
[0073] Further, during debugging, the method 500 includes the step of initiating a standby mode in real time. In this regard, the method 500 provides one or more resources in order to restart the at least one corrupt application during the debugging process of the at least one corrupt stack. Thereby, the operational downtime of the at least one corrupt application is advantageously minimized to a few milliseconds.
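One contributor to restart delay noted in the background is that the crashed process's IP address/port may not yet be released. As a minimal sketch (an assumption about possible restart mechanics, not the disclosed design), the standby process can reclaim the listening port immediately by setting SO_REUSEADDR before binding:

```python
# Illustrative sketch: the standby process reclaims the crashed process's listening
# port by setting SO_REUSEADDR before bind(), so it does not wait for the old
# socket to be fully released, keeping downtime to a minimum.
import socket

def restart_listener(port: int) -> socket.socket:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen()
    return srv
```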
[0074] In a preferred embodiment, the method 500 for managing storage in a communication network 105 is provided. During operation, the one or more processors 205 perform the step of receiving the alert from at least one of the first UE 110a, the second UE 110b, and the third UE 110c, where the alert corresponds to at least one corrupt stack of the corrupt application. The one or more processors 205 further perform the step of identifying the at least one corrupt stack amongst the one or more application stacks of the corrupt application. The one or more processors 205 are further configured to perform the step of extracting a memory image pertaining to the at least one corrupt stack. The memory image of the at least one corrupt stack is obtained from a memory map available in the database 240. The one or more processors 205 further perform the step of converting the extracted memory image into an acceptable format which is readable and editable.
[0075] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by a processor 205. The processor 205 is configured to receive an alert from a UE 110 pertaining to at least one corrupt application hosted by the UE 110 when a crash occurs due to at least one corrupt stack. The processor 205 is further configured to identify the at least one corrupt stack pertaining to the at least one corrupt application based on the received alert from the UE 110. The processor 205 is further configured to selectively extract a memory image corresponding to the identified at least one corrupt stack and, upon extracting the memory image, to convert the extracted memory image into one or more acceptable formats to facilitate debugging the at least one corrupt stack pertaining to the at least one corrupt application.
[0076] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-5) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0077] The present disclosure incorporates the technical advancement of partial core dumping to provide optimized fault analysis, thereby ensuring that regular memory and processing speed are not compromised and minimizing the failover time from a wide range of 20-30 seconds to a few milliseconds by pinpointing the exact location responsible for the crash, so that only the selected crash dump, rather than the entire core dump, needs to be debugged.
[0078] The present disclosure, by incorporating efficient storage management, significantly reduces the time required for crash dumping and improves system throughput, enabling faster recovery and faster identification and resolution of coding bugs or corrupt data handling.
[0079] The present invention offers multiple advantages over the prior art, and the above listed are a few examples to emphasize some of the advantageous features. The listed advantages are to be read in a non-limiting manner.

REFERENCE NUMERALS
[0080] Environment - 100
[0081] Communication Network - 105
[0082] User Equipment - 110
[0083] Primary processors -305
[0084] Memory Unit of User Equipment – 310
[0085] Server - 115
[0086] System - 125
[0087] Memory – 210
[0088] One or more processor -205
[0089] Transceiver Module- 220
[0090] Identification Module - 225
[0091] Extraction Module - 230
[0092] Format Convertor Module - 235
[0093] Database - 240
[0094] Kernel – 315
[0095] Signal handler Module – 320
CLAIMS
We Claim:
1. A method (500) for managing storage in a communication network (105), the method (500) comprises the steps of:
receiving (505), by one or more processors (205), an alert from a User Equipment (UE) (110) pertaining to at least one corrupt application, the UE (110) adapted to host a plurality of applications, each application including a plurality of stacks;
identifying (510), by the one or more processors (205), at least one corrupt stack pertaining to the at least one corrupt application based on the received alert from the UE (110);
selectively extracting (515), by the one or more processors (205), a memory image corresponding to the identified at least one corrupt stack pertaining to the at least one corrupt application; and
converting (520), by the one or more processors (205), the extracted memory image into one or more acceptable formats.

2. The method (500) as claimed in claim 1, wherein the at least one corrupt stack is identified utilizing at least one of, stack back trace and Application Programming Interfaces (APIs).

3. The method (500) as claimed in claim 1, wherein the memory image corresponding to the identified at least one corrupt stack is stored as a crash dump file.

4. The method (500) as claimed in claim 3, wherein the one or more processors (205) are configured to utilize the crash dump file to initiate a debugging process by mapping the crash dump file to a file which is in the one or more acceptable formats, in order to rectify the at least one corrupt stack.

5. The method (500) as claimed in claim 1, wherein the alert corresponding to the at least one corrupt application, includes data pertaining to an address of a code section where the corruption has occurred pertaining to the at least one corrupt stack.

6. The method (500) as claimed in claim 1, wherein the method further comprises the step of:
initiating in real time, by the one or more processors (205), a standby mode by providing one or more resources in order to restart the at least one corrupt application during a debugging process of the at least one corrupt stack.

7. The method (500) as claimed in claim 1, wherein the method further comprises the step of :
debugging, by the one or more processors (205), the at least one corrupt stack pertaining to the at least one corrupt application to manage storage.

8. A User Equipment (UE) (110), comprising:
one or more primary processors (305) coupled with a memory (310), communicatively coupled to one or more processors (205), wherein said memory stores instructions which when executed by the one or more primary processors (305) causes the UE (110) to:
host, a plurality of applications, each application including a plurality of stacks; and
transmit, an alert pertaining to at least one corrupt application to the one or more processors,
wherein one or more processors (205) are further configured to perform the method as claimed in claim 1.

9. A system (125) for managing storage in a communication network (105), the system comprising:
a transceiver module (220) configured to receive, an alert from a User Equipment (UE) (110) pertaining to at least one corrupt application, the UE adapted to host a plurality of applications, each application including a plurality of stacks;
an identification module (225) configured to identify, at least one corrupt stack pertaining to the at least one corrupt application based on the received alert from the UE (110);
an extraction module (230) configured to selectively extract, a memory image corresponding to the identified at least one corrupt stack pertaining to the at least one corrupt application; and
a format convertor module (235) configured to convert, the extracted memory image into one or more acceptable formats.

10. The system (125) as claimed in claim 9, wherein the at least one corrupt stack is identified utilizing at least one of, stack back trace and Application Programming Interfaces (APIs).

11. The system (125) as claimed in claim 9, wherein the memory image corresponding to the identified at least one corrupt stack is stored as a crash dump file.

12. The system (125) as claimed in claim 11, wherein the one or more processors (205) are configured to utilize the crash dump file to initiate a debugging process by mapping the crash dump file to a file which is in the one or more acceptable formats, in order to rectify the at least one corrupt stack.

13. The system (125) as claimed in claim 9, wherein the alert corresponding to the at least one corrupt application, includes data pertaining to an address of a code section where the corruption has occurred pertaining to the at least one corrupt stack.

14. The system (125) as claimed in claim 9, wherein the one or more processors (205) are further configured to:
initiate in real time, a standby mode by providing one or more resources in order to restart the at least one corrupt application during a debugging process of the at least one corrupt stack.

15. The system (125) as claimed in claim 9, wherein the one or more processors (205) are further configured to:
debug the at least one corrupt stack pertaining to the at least one corrupt application to manage storage.

Documents

Application Documents

# Name Date
1 202321044335-STATEMENT OF UNDERTAKING (FORM 3) [03-07-2023(online)].pdf 2023-07-03
2 202321044335-PROVISIONAL SPECIFICATION [03-07-2023(online)].pdf 2023-07-03
3 202321044335-FORM 1 [03-07-2023(online)].pdf 2023-07-03
4 202321044335-FIGURE OF ABSTRACT [03-07-2023(online)].pdf 2023-07-03
5 202321044335-DRAWINGS [03-07-2023(online)].pdf 2023-07-03
6 202321044335-DECLARATION OF INVENTORSHIP (FORM 5) [03-07-2023(online)].pdf 2023-07-03
7 202321044335-FORM-26 [11-09-2023(online)].pdf 2023-09-11
8 202321044335-Proof of Right [22-12-2023(online)].pdf 2023-12-22
9 202321044335-DRAWING [25-06-2024(online)].pdf 2024-06-25
10 202321044335-COMPLETE SPECIFICATION [25-06-2024(online)].pdf 2024-06-25
11 Abstract1.jpg 2024-10-03
12 202321044335-Power of Attorney [11-11-2024(online)].pdf 2024-11-11
13 202321044335-Form 1 (Submitted on date of filing) [11-11-2024(online)].pdf 2024-11-11
14 202321044335-Covering Letter [11-11-2024(online)].pdf 2024-11-11
15 202321044335-CERTIFIED COPIES TRANSMISSION TO IB [11-11-2024(online)].pdf 2024-11-11
16 202321044335-FORM 3 [25-11-2024(online)].pdf 2024-11-25
17 202321044335-FORM 18 [20-03-2025(online)].pdf 2025-03-20