
Method And System For Handling Responses From A User Interaction Management Platform

Abstract: The present disclosure relates to a method for handling a response from a User Interaction Management (UIM) platform hosted in a server (104a), performed by at least one processor (202). The method includes receiving an application programming interface (API) request from a system (108). Further, the method includes identifying a type of a response to be sent based on the API request using a data driven model. Further, the method includes selecting the response as one of a synchronous response and an asynchronous response based on the identified type of the response, wherein the response is provided to the system (108). Ref. FIG. 4


Patent Information

Application #: 202321048717
Filing Date: 19 July 2023
Publication Number: 04/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Parent Application:

Applicants

JIO PLATFORMS LIMITED
OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD - 380006, GUJARAT, INDIA

Inventors

1. Aayush Bhatnagar
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
2. Ankit Murarka
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
3. Rizwan Ahmad
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
4. Kapil Gill
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
5. Rahul Verma
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
6. Arpit Jain
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
7. Shashank Bhushan
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
8. Kamal Malik
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
9. Chaitanya V Mali
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
10. Supriya De
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
11. Kumar Debashish
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
12. Tilala Mehul
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India
13. Kothagundla Vinay Kumar
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad, Gujarat - 380006, India

Specification

FORM 2

THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR HANDLING RESPONSES FROM A USER INTERACTION MANAGEMENT PLATFORM

2. APPLICANT(S)
NAME: JIO PLATFORMS LIMITED
NATIONALITY: INDIAN
ADDRESS: OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention relates to the field of User Interaction Management (UIM) in telecommunications systems, and more particularly to a UIM system that supports synchronous or asynchronous responses for its application programming interface (API).
BACKGROUND OF THE INVENTION
[0002] Telecommunications systems often require efficient and reliable communication between various components, such as user devices, network infrastructure, and service providers. In many cases, these components interact with each other through APIs, which enable the exchange of information and control commands.
[0003] Traditionally, telecommunications systems have predominantly relied on synchronous communication patterns for API interactions. In synchronous communication, a calling entity initiates a request and waits for the corresponding response before proceeding with the next operation. While synchronous communication offers simplicity and immediate feedback, it can also lead to delays and decreased performance, especially in scenarios where the requested operation takes a significant amount of time to complete.
[0004] As the complexity and scale of the telecommunications systems continue to grow, there is an increasing demand for more flexible and efficient communication models. In particular, there is a need for a UIM system that supports both synchronous and asynchronous responses for its API, allowing the calling entity to choose the appropriate communication pattern based on the requirements of the specific operation.
SUMMARY OF THE INVENTION
[0005] One or more embodiments of the present disclosure provide a system and a method for handling a response from a user interaction management (UIM) platform.
[0006] In one aspect of the present invention, a method for handling a response from a UIM platform hosted in a server is provided. The method includes receiving, by at least one processor, an API request from a system. Further, the method includes identifying, by the at least one processor, a type of a response to be sent based on the API request using a data driven model. Further, the method includes selecting, by the at least one processor, the response as one of a synchronous response and an asynchronous response based on the identified type of the response, wherein the response is provided to the system.
[0007] In an embodiment, further, the method includes obtaining, by at least one processor, at least one of a high-level design (HLD) document and a low-level design (LLD) document. Further, the method includes training, by the at least one processor, the data driven model using the HLD document and the LLD document.
[0008] In an embodiment, further, the method includes receiving, by the at least one processor, feedback about the data driven model from a server based on the response. Further, the method includes training, by the at least one processor, the data driven model based on the feedback, wherein the data driven model is deployed in a data driven module.
[0009] In an embodiment, the data driven model comprises a machine learning (ML) model.
[0010] In an embodiment, the HLD document is associated with at least one of an integration of external API, a user device, and an IP pool inventory, and wherein the LLD document is associated with at least one of the integration of the external API, the user device, and the IP pool inventory.
[0011] In an embodiment, a training data for the data driven model is stored in a database.
[0012] In another aspect of the present invention, a system for handling a response from a UIM platform is disclosed. The system includes a plurality of first servers configured to send an application programming interface (API) request from the system to the UIM platform. A server is communicatively coupled to the plurality of first servers via a communication network. The server is configured to host the UIM platform. The server includes a unified inventory management module, a display, and a training module (e.g., an artificial intelligence module or the like). The unified inventory management module is configured to receive the API request from the system via a first interface. Further, the unified inventory management module is configured to identify a type of a response to be sent based on the API request using a trained data driven model. Further, the unified inventory management module is configured to select the response as at least one of a synchronous response and an asynchronous response based on the identification. The display is configured to render the response for viewing, wherein the response is communicated to the system.
[0013] In another aspect of the present invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor, cause the processor to receive an API request from a system, identify a type of a response to be sent based on the API request using a data driven model, and select the response as one of a synchronous response and an asynchronous response based on the identified type of the response, wherein the response is provided to the system.
[0014] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.

BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0016] FIG. 1 is an exemplary block diagram of an environment for handling a response from a User Interaction Management (UIM) platform, according to various embodiments of the present disclosure.
[0017] FIG. 2 is a block diagram of a server of FIG. 1, according to various embodiments of the present disclosure.
[0018] FIG. 3 is an exemplary schematic representation of the system of FIG. 1 in which the operations of various entities are explained, according to various embodiments of the present disclosure.
[0019] FIG. 4 shows a sequence flow diagram illustrating a method for handling a response from the UIM platform, according to various embodiments of the present disclosure.
[0020] Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not have necessarily been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help to improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having benefit of the description herein.
[0021] The foregoing shall be more apparent from the following detailed description of the invention.

DETAILED DESCRIPTION OF THE INVENTION
[0022] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0023] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0024] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0025] Before discussing example embodiments in more detail, it is to be noted that the drawings are to be regarded as being schematic representations and elements that are not necessarily shown to scale. Rather, the various elements are represented such that their function and general purpose becomes apparent to a person skilled in the art. Any connection or coupling between functional blocks, devices, components, or other physical or functional units shown in the drawings or described herein may also be implemented by an indirect connection or coupling. A coupling between components may also be established over a wireless connection. Functional blocks may be implemented in hardware, firmware, software or a combination thereof.
[0026] Further, the flowcharts provided herein describe the operations as sequential processes. Many of the operations may be performed in parallel, concurrently or simultaneously. In addition, the order of operations may be re-arranged. The processes may be terminated when their operations are completed, but may also have additional steps not included in the figures. It should be noted that, in some alternative implementations, the functions/acts/steps noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed substantially concurrently, or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
[0027] Further, although the terms first, second, etc., may be used herein to describe various elements, components, regions, layers and/or sections, it should be understood that these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are used only to distinguish one element, component, region, layer or section from another region, layer, or section. Thus, a first element, component, region, layer, or section discussed below could be termed a second element, component, region, layer, or section without departing from the scope of the example embodiments.
[0028] Spatial and functional relationships between elements (for example, between modules) are described using various terms, including “connected,” “engaged,” “interfaced,” and “coupled.” Unless explicitly described as being “direct,” when a relationship between first and second elements is described in the description below, that relationship encompasses a direct relationship where no other intervening elements are present between the first and second elements, and also an indirect relationship where one or more intervening elements are present (either spatially or functionally) between the first and second elements. In contrast, when an element is referred to as being "directly” connected, engaged, interfaced, or coupled to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., "between," versus "directly between," "adjacent," versus "directly adjacent," etc.).
[0029] The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
[0030] As used herein, the singular forms “a,” “an,” and “the,” are intended to include the plural forms as well, unless the context clearly indicates otherwise. As used herein, the terms “and/or” and “at least one of” include any and all combinations of one or more of the associated listed items. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
[0031] Unless specifically stated otherwise, or as is apparent from the description, terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device/hardware, that manipulates and transforms data represented as physical, electronic quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
[0032] Various embodiments of the invention provide a method for handling a response from a User Interaction Management (UIM) platform hosted in a server. The method includes receiving, by the at least one processor, an API request from a system. Further, the method includes identifying, by the at least one processor, a type of a response to be sent based on the API request using a data driven model. Further, the method includes selecting, by the at least one processor, the response as one of a synchronous response and an asynchronous response based on the identified type of the response, and wherein the response is provided to the system.
[0033] The present invention provides a method and a system configured to enable the UIM with synchronous or asynchronous responses for its Application Programming Interface (API) in a flexible and efficient manner.
[0034] FIG. 1 illustrates an exemplary block diagram of an environment (100) for handling a response from a user interaction management (UIM) platform, according to various embodiments of the present disclosure. The environment (100) comprises a plurality of user equipments (UEs) 102-1, 102-2, …, 102-n. At least one UE (102-n) from the plurality of UEs (102-1, 102-2, …, 102-n) is configured to connect to a system (108) via a communication network (106). Hereafter, the plurality of UEs, or one or more UEs, is labelled 102.
[0035] In accordance with yet another aspect of the exemplary embodiment, the plurality of UEs (102) may be a wireless device or a communication device that may be a part of the system (108). The wireless device or the UE (102) may include, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication or VoIP capabilities. In an embodiment, the UEs may include, but are not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device, wherein the computing device may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input units for receiving input from a user such as a touch pad, a touch enabled screen, an electronic pen and the like. It may be appreciated that the UEs may not be restricted to the mentioned devices and various other devices may be used. A person skilled in the art will appreciate that the plurality of UEs (102) may include a fixed landline, or a landline with an assigned extension, within the communication network (106).
[0036] The plurality of UEs (102) may comprise a memory such as a volatile memory (e.g., RAM), a non-volatile memory (e.g., disk memory, FLASH memory, EPROMs, etc.), an unalterable memory, and/or other types of memory. In one implementation, the memory might be configured or designed to store data. The data may pertain to attributes and access rights specifically defined for the plurality of UEs (102). The UE (102) may be accessed by the user, to receive the requests related to an order determined by the system (108) or the server (104a). The communication network (106), may use one or more communication interfaces/protocols such as, for example, Voice Over Internet Protocol (VoIP), 802.11 (Wi-Fi), 802.15 (including Bluetooth™), 802.16 (Wi-Max), 802.22, Cellular standards such as Code Division Multiple Access (CDMA), CDMA2000, Wideband CDMA (WCDMA), Radio Frequency Identification (e.g., RFID), Infrared, laser, Near Field Magnetics, etc.
[0037] The system (108) is communicatively coupled to a plurality of server(s) (104a-104n) via the communication network (106). The plurality of server(s) (104a-104n) can be, for example, but not limited to, a standalone server, a server blade, a server rack, an application server, a bank of servers, a business telephony application server (BTAS), a server farm, a cloud server, an edge server, a home server, a virtualized server, one or more processors executing code to function as a server, or the like. Hereafter, the operations and functions of the patent application are explained in view of a server (104a) from the plurality of servers (104a-104n). In an implementation, the server (104a) may operate at various entities or a single entity (including, but not limited to, a vendor side, a service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, a defence facility side, or any other facility) that provides service.
[0038] The communication network (106) includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The communication network (106) may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0039] The communication network (106) may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The communication network (106) may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a VOIP or some combination thereof.
[0040] One or more network elements can be, for example, but not limited to a base station that is located in the fixed or stationary part of the communication network (106). The base station may correspond to a remote radio head, a transmission point, an access point or access node, a macro cell, a small cell, a micro cell, a femto cell, a metro cell. The base station enables transmission of radio signals to the UE or mobile transceiver. Such a radio signal may comply with radio signals as, for example, standardized by a 3GPP or, generally, in line with one or more of the above listed systems. Thus, a base station may correspond to a NodeB, an eNodeB, a Base Transceiver Station (BTS), an access point, a remote radio head, a transmission point, which may be further divided into a remote unit and a central unit.
[0041] 3GPP: The term “3GPP” is a 3rd Generation Partnership Project and is a collaborative project between a group of telecommunications associations with the initial goal of developing globally applicable specifications for Third Generation (3G) mobile systems. The 3GPP specifications cover cellular telecommunications technologies, including radio access, core network, and service capabilities, which provide a complete system description for mobile telecommunications. The 3GPP specifications also provide hooks for non-radio access to the core network, and for networking with non-3GPP networks.
[0042] The server (104a) may include one or more processors (202) coupled with a memory (204), wherein the memory (204) may store instructions which, when executed by the one or more processors (202), may cause the server (104a) to execute requests in the communication network (106) or the plurality of servers (104a-104n). An exemplary representation of the server (104a) for such purpose, in accordance with embodiments of the present disclosure, is shown in FIG. 2. In an embodiment, the server (104a) may include the one or more processor(s) (202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in the memory (204) of the server (104a). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service.
[0043] The environment (100) further includes the system (108) communicably coupled to a remote server (not shown) and each UE of the plurality of UEs (102) via the communication network (106). The remote server is configured to execute the requests in the communication network (106).
[0044] The system (108) is adapted to be embedded within the remote server or is embedded as an individual entity. The system (108) is designed to provide a centralized and unified view of data and facilitate efficient business operations. The system (108) is authorized to update/create/delete one or more parameters of the relationship between the requests for a workflow corresponding to an API request, which is reflected in real time independent of the complexity of the network.
[0045] In another embodiment, the system (108) may include an enterprise provisioning server (for example), which may connect with the remote server. The enterprise provisioning server provides flexibility for an enterprise entity, an e-commerce entity, or a finance entity to update/create/delete information related to the requests in real time as per their business needs. A user with administrator rights can access and retrieve the requests for the workflow and perform real-time analysis in the system (108).
[0046] The system (108) may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a business telephony application server (BTAS), a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an implementation, the system (108) may operate at various entities or a single entity (including, but not limited to, a vendor side, a service provider side, a network operator side, a company side, an organization side, a university side, a lab facility side, a business enterprise side, an e-commerce side, a finance side, a defence facility side, or any other facility) that provides service.
[0047] However, for the purpose of description, the system (108) is described as an integral part of the remote server, without deviating from the scope of the present disclosure.
[0048] FIG. 2 illustrates a block diagram of the server (104a) provided for handling a response from the UIM platform, according to one or more embodiments of the present invention. As per the illustrated embodiment, the server (104a) includes the one or more processors (202), the memory (204), an interface (e.g., a user interface or the like) (206), a display (208), an input unit (210), and a database (or centralized database) (214). The one or more processors (202), hereinafter referred to as the processor (202), may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. As per the illustrated embodiment, the server (104a) includes one processor. However, it is to be noted that the server (104a) may include multiple processors as per the requirement and without deviating from the scope of the present disclosure.
[0049] The information related to the request may be provided or stored in the memory (204) of the server (104a). Among other capabilities, the processor (202) is configured to fetch and execute computer-readable instructions stored in the memory (204). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0050] The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as Random-Access Memory (RAM), or non-volatile memory such as Electrically Erasable Programmable Read-only Memory (EPROM), flash memory, and the like. In an embodiment, the server (104a) may include an interface(s). The interface(s) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as input/output (I/O) devices, storage devices, and the like. The interface(s) may facilitate communication for the system (108). The interface(s) may also provide a communication pathway for one or more components of the system (108). Examples of such components include, but are not limited to, processing unit/engine(s) and a database. The processing unit/engine(s) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s).
[0051] The information related to the requests may further be configured to render on the interface (206). The interface (206) may include functionality similar to at least a portion of functionality implemented by one or more computer system interfaces such as those described herein and/or generally known to one having ordinary skill in the art. The interface (206) may be rendered on the display (208), implemented using Liquid Crystal Display (LCD) display technology, Organic Light-Emitting Diode (OLED) display technology, and/or other types of conventional display technology. The display (208) may be integrated within the server (104a) or connected externally. Further the input unit(s) (210) may include, but not limited to, keyboard, buttons, scroll wheels, cursors, touchscreen sensors, audio command interfaces, magnetic strip reader, optical scanner, etc.
[0052] The database (214) may be communicably connected to the processor (202) and the memory (204). The database (214) may be configured to store and retrieve the request pertaining to features, or services or workflow of the server (104a), access rights, attributes, approved list, and authentication data provided by an administrator. Further the server (104a) may allow the system (108) to update/create/delete one or more parameters of their information related to the request, which provides flexibility to roll out multiple variants of the request as per business needs. In another embodiment, the database (214) may be outside the server (104a) and communicated through a wired medium and wireless medium.
[0053] Further, the processor (202), in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (202) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory (204) may store instructions that, when executed by the processing resource, implement the processor (202). In such examples, the server (104a) may comprise the memory (204) storing the instructions and the processing resource to execute the instructions, or the memory (204) may be separate but accessible to the server (104a) and the processing resource. In other examples, the processor (202) may be implemented by an electronic circuitry.
[0054] In order for the server (104a) to handle the response from the UIM platform, the processor (202) includes a unified inventory management module (216), a training module (218) (e.g., an artificial intelligence module or the like) and a data driven module (220). The unified inventory management module (216), the training module (218), and the data driven module (220) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202). In an embodiment, the UIM platform is hosted in the server (104a). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (202) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory (204) may store instructions that, when executed by the processing resource, implement the processor (202). In such examples, the server (104a) may comprise the memory (204) storing the instructions and the processing resource to execute the instructions, or the memory (204) may be separate but accessible to the server (104a) and the processing resource. In other examples, the processor (202) may be implemented by electronic circuitry.
[0055] In order for the server (104a) to handle the response from the UIM platform, the unified inventory management module (216), the training module (218) and the data driven module (220) are communicably coupled to each other. In an example embodiment, the unified inventory management module (216) receives the application programming interface (API) request from the system (108) via the interface (206). In another embodiment, the unified inventory management module (216) receives the API request from an external system (not shown) via the interface (206). The external system can be, for example, but not limited to a legacy system, a partner system, a cloud service system or the like. The external system is operated by a service provider, a third party external partner, and a collaborator. The API request refers to a communication made by the applications to the API to retrieve, update, or manipulate data.
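By way of a non-limiting illustration only, the following Python sketch shows what such an API request from an external system might look like. The endpoint URL, field names, and payload values are hypothetical and are not taken from this specification; the requests library is assumed to be available.

import requests  # assumed third-party HTTP client

# Hypothetical inventory API request sent by an external system to the
# UIM platform; the operation name, description and feature values are
# invented for illustration.
payload = {
    "operation": "allocate_ip_pool",
    "description": "bulk import of the IP pool inventory",
    "features": {
        "operation_complexity": 0.9,
        "needs_external_integration": True,
        "expects_immediate_feedback": False,
    },
}
resp = requests.post("https://uim.example.com/inventory/api", json=payload)
print(resp.status_code, resp.json())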
[0056] Further, the unified inventory management module (216) identifies the type of a response to be sent based on the API request using the trained data driven model (e.g., a machine learning (ML) model, an artificial intelligence (AI) model or the like). In an example, the type of the response can be, for example, but not limited to, a synchronous response and an asynchronous response. The synchronous response refers to the immediate and direct reply from the API to a client (requesting application) after the API request has been processed. In other words, the synchronous response ensures immediate feedback and enables real-time interaction between the client and the server (104a) in the UIM platform to facilitate efficient and responsive operations. The asynchronous response refers to a mechanism where the server (104a) does not immediately send back the response to the client after receiving the request. Instead, the server (104a) acknowledges the receipt of the request and continues to process it independently. The client does not wait for the server (104a) to finish processing but is typically provided with a way to check on the status or retrieve the result of the request at a later time. The asynchronous responses for the API requests enable efficient handling of long-running operations and improve system responsiveness by allowing the clients to continue their operations without blocking while awaiting the completion of the requested task on the server (104a). The ML model can be, for example, but not limited to, a Linear Regression model, a Decision Tree model, a Random Forest model, a Support Vector Machine model or the like.
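The contrast between the two response types described above can be illustrated with the following minimal Python sketch, in which a synchronous handler blocks until the result is ready while an asynchronous handler acknowledges the request and processes it in the background. The helper names (handle_sync, handle_async, poll) and the in-memory job store are hypothetical and are given for illustration only.

import threading
import time
import uuid

JOB_STORE = {}  # job_id -> {"status": ..., "result": ...}

def process(request_payload):
    # Placeholder for the actual inventory operation.
    time.sleep(0.1)  # simulate work
    return {"echo": request_payload}

def handle_sync(request_payload):
    # Synchronous: the caller blocks until the result is ready and
    # receives it in the same exchange.
    return {"status": "completed", "result": process(request_payload)}

def handle_async(request_payload):
    # Asynchronous: acknowledge immediately, process in the background,
    # and let the caller poll later using the returned job id.
    job_id = str(uuid.uuid4())
    JOB_STORE[job_id] = {"status": "accepted", "result": None}

    def worker():
        JOB_STORE[job_id] = {"status": "completed",
                             "result": process(request_payload)}

    threading.Thread(target=worker, daemon=True).start()
    return {"status": "accepted", "job_id": job_id}

def poll(job_id):
    # Status check used by the client at a later time.
    return JOB_STORE.get(job_id, {"status": "unknown"})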
[0057] Further, the unified inventory management module (216) selects the response as at least one of the synchronous response and the asynchronous response based on the identification. The API response is selected as the synchronous response or the asynchronous response depending on the nature of the operation being performed and the expected interaction flow with the user or a client application. The selection is determined based on various parameters. The various parameters can be, for example but not limited to, a user expectation, operation complexity, impact on system performance, integration requirements, or the like. The user expectation means what the user expects in terms of a response time and a feedback for the specific operation. The operation complexity evaluates how complex and resource-intensive the operation is. The integration requirements determine if the operation requires coordination with external systems or APIs that may influence the response type. The display (208) renders the response for viewing, wherein the response is communicated to the system (108).
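As a non-limiting illustration of how such parameters could drive the selection, the following Python sketch scores a request on the parameters named above and picks a response type. The feature names, weights, and threshold are invented for illustration and do not represent the trained data driven model itself.

def select_response_type(request_features):
    # Example request_features:
    # {"expects_immediate_feedback": True, "operation_complexity": 0.8,
    #  "system_load": 0.4, "needs_external_integration": False}
    score = 0.0
    score += 0.5 * request_features.get("operation_complexity", 0.0)
    score += 0.3 * request_features.get("system_load", 0.0)
    score += 0.4 * float(request_features.get("needs_external_integration", False))
    score -= 0.6 * float(request_features.get("expects_immediate_feedback", False))
    return "asynchronous" if score > 0.3 else "synchronous"

# A quick interactive request favours a synchronous response, while a
# complex operation that touches external systems favours an
# asynchronous one.
print(select_response_type({"expects_immediate_feedback": True,
                            "operation_complexity": 0.2}))
print(select_response_type({"operation_complexity": 0.9,
                            "needs_external_integration": True}))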
[0058] Further, the training module (218) obtains at least one of a high-level design (HLD) document and a low-level design (LLD) document. The HLD document serves as a blueprint for stakeholders involved in the development, implementation, and integration of the API request for the UIM platform. The HLD document provides a structured overview of the system architecture, functional requirements, and integration points, ensuring consistency and alignment with organizational goals and technical standards. Further, the LLD document serves as a blueprint for developers implementing the API request, ensuring consistency, reliability, and compatibility with the overall UIM platform. The LLD document bridges the gap between the high-level architectural design and the actual coding and implementation phase.
[0059] Further, the training module (218) trains the data driven model using the HLD document and the LLD document. The HLD document is associated with at least one of an integration of external API, a user device (e.g., UE (102) or the like), and an Internet Protocol (IP) pool inventory. The LLD document is associated with at least one of the integration of the external API, the user device, and the IP pool inventory.
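One possible realization of training a data driven model on such documents is sketched below: labeled excerpts from the HLD/LLD integration documentation are used to fit a small text classifier. The excerpts, labels, and the choice of a scikit-learn pipeline are assumptions made for illustration and are not prescribed by this specification.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented excerpts from hypothetical HLD/LLD documents, each labeled
# with the response type that suits the described operation.
design_doc_excerpts = [
    "IP pool allocation requires coordination with the external inventory API",
    "Bulk device provisioning runs as a long running batch job",
    "Single device lookup returns the record directly to the caller",
    "Status query must give immediate feedback to the operator console",
]
labels = ["asynchronous", "asynchronous", "synchronous", "synchronous"]

# TF-IDF features plus a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(design_doc_excerpts, labels)

# Predict the response type for a new API description.
print(model.predict(["bulk import of the IP pool inventory"])[0])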
[0060] Further, the training module (218) receives feedback from the data driven model based on the response. The feedback indicates a range of responses and notifications that convey the status, success, errors, or other relevant information regarding a requested operation for the API request. The feedback can be, for example, but not limited to, a transactional feedback, a validation error, an error response, a success response or the like. Further, the training module (218) trains the data driven model deployed in the data driven module (220) based on the feedback.
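A minimal sketch of such a feedback loop, continuing the hypothetical model and training set from the previous sketch, is shown below; the feedback records and corrected labels are invented for illustration.

# Feedback records: (API description, observed outcome, corrected label).
feedback_samples = [
    ("bulk import of the IP pool inventory", "timeout_error", "asynchronous"),
    ("single subscriber status query", "success", "synchronous"),
]

# Fold the feedback back into the training set and retrain the model.
for text, outcome, corrected_label in feedback_samples:
    design_doc_excerpts.append(text)
    labels.append(corrected_label)

model.fit(design_doc_excerpts, labels)  # periodic retraining on feedback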
[0061] In an example, the unified inventory management module (216), in accordance with the exemplary embodiment, supports both sync and async HTTP responses, thereby making it more flexible and versatile for integration with external systems. All the inventory APIs in the unified inventory management module (216) have the option of choosing either the sync response or the async response, which allows a wide range of external systems to integrate seamlessly. The decision of which response type to use is made intelligently by the unified inventory management module (216) based on the AI/ML feedback. The AI/ML model is trained using the integration documentation of external systems at both high-level and low-level details.
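At the HTTP level, one conventional way to expose such a choice is to return a 200 response with the result for synchronous handling and a 202 Accepted acknowledgment with a job identifier for asynchronous handling, as in the following sketch. Flask is assumed as the web framework, and select_response_type, handle_sync, handle_async, and poll are the hypothetical helpers from the earlier sketches; none of this is mandated by the specification.

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/inventory/api", methods=["POST"])
def inventory_api():
    payload = request.get_json(force=True)
    response_type = select_response_type(payload.get("features", {}))
    if response_type == "synchronous":
        # Synchronous: process now and return the result with HTTP 200.
        return jsonify(handle_sync(payload)), 200
    # Asynchronous: acknowledge with HTTP 202 and a job id the caller
    # can poll at /inventory/jobs/<job_id>.
    return jsonify(handle_async(payload)), 202

@app.route("/inventory/jobs/<job_id>", methods=["GET"])
def job_status(job_id):
    return jsonify(poll(job_id)), 200

if __name__ == "__main__":
    app.run(port=8080)  # development server for local experimentation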
[0062] FIG. 3 is an example schematic representation of the system (300) of FIG. 1 in which the operations of various entities are explained, according to various embodiments of the present disclosure. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE (102-1), the server (104a) and the system (108) for the purpose of description and illustration and should nowhere be construed as limiting the scope of the present disclosure. The operations and functions of the system (108) are explained in FIG. 1 and, for the sake of brevity, are not repeated here.
[0063] As mentioned earlier, the first UE (102-1) includes one or more primary processors (305) communicably coupled to the one or more processors (202) of the server (104a). The one or more primary processors (305) are coupled with a memory (310) storing instructions which are executed by the one or more primary processors (305). Execution of the stored instructions by the one or more primary processors (305) enables the UE (102-1) to execute the requests in the communication network (106).
[0064] As mentioned earlier, the one or more processors (202) are configured to transmit a response content related to the request to the UE (102-1). More specifically, the one or more processors (202) of the server (104a) are configured to transmit the response content to at least the UE (102-1). A kernel (315) is a core component serving as the primary interface between hardware components of the UE (102-1) and the server (104a). The kernel (315) is configured to provide the plurality of response contents hosted on the server (104a) with access to resources available in the communication network (106). The resources include one or more of a Central Processing Unit (CPU) and memory components such as Random Access Memory (RAM) and Read Only Memory (ROM).
[0065] As per the illustrated embodiment, the server (104a) includes the one or more processors (202), the memory (204), the interface (206), the display (208), and the input unit (210). The operations and functions of the one or more processors (202), the memory (204), the interface (206), the display (208), and the input unit (210) are already explained with reference to FIG. 2 and, for the sake of brevity, are not repeated here. Further, the processor (202) includes the unified inventory management module (216), the training module (218) and the data driven module (220). The operations and functions of the unified inventory management module (216), the training module (218) and the data driven module (220) are already explained with reference to FIG. 2 and, for the sake of brevity, are not repeated here.
[0066] FIG. 4 is a flow chart (400) illustrating a method for handling the response from the UIM platform (hosted in the server (104a)), according to various embodiments of the present disclosure.
[0067] At step 402, the method includes obtaining at least one of the HLD document and the LLD document. The HLD document serves as the blueprint for stakeholders involved in the development, implementation, and integration of the API request for the UIM platform. The HLD document provides the structured overview of the system architecture, functional requirements, and integration points, ensuring consistency and alignment with organizational goals and technical standards. Further, the LLD document serves as the blueprint for developers implementing the API request, ensuring consistency, reliability, and compatibility with the overall UIM platform. The LLD document bridges the gap between the high-level architectural design and the actual coding and implementation phase.
[0068] At step 404, the method includes training a data driven model using the HLD document and the LLD document.
[0069] At step 406, the method includes receiving the API request from the system (108). The API request refers to the communication made by the applications to the API to retrieve, update, or manipulate inventory-related data.
[0070] At step 408, the method includes identifying the type of the response to be sent based on the API request using the data driven model.
[0071] At step 410, the method includes selecting the response as one of the synchronous response and the asynchronous response based on the identified type of the response, where the response is provided to the system (108). In an example, the synchronous response refers to the immediate and direct reply from the API to the client (requesting application) after the API request has been processed. In an example, the asynchronous response refers to the mechanism where the server (104a) does not immediately send back the response to the client after receiving the request. Instead, the server (104a) acknowledges the receipt of the request and continues to process it independently. The client does not wait for the server (104a) to finish processing but is typically provided with a way to check on the status or retrieve the result of the request at a later time. The asynchronous responses for the API requests enable efficient handling of long-running operations and improve system responsiveness by allowing the clients to continue their operations without blocking while awaiting the completion of the requested task on the server (104a).
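The steps above can be tied together in a short Python sketch, using the hypothetical training, classification, and handler helpers from the earlier sketches; it is an illustrative reading of steps 402-410, not a definitive implementation.

def handle_uim_request(api_request, model):
    # Step 408: identify the response type for this request using the
    # trained data driven model.
    description = api_request.get("description", "")
    response_type = model.predict([description])[0]
    # Step 410: select and produce the corresponding response.
    if response_type == "synchronous":
        return handle_sync(api_request)
    return handle_async(api_request)

# Steps 402-404 correspond to building `model` from the HLD/LLD
# excerpts (see the training sketch above); step 406 is the receipt of
# `api_request` over the API interface.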
[0072] Technical advantages of the invention: The method can be used to enable the UIM with synchronous or asynchronous responses for its API in a flexible and efficient manner.
[0073] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIGS. 1-4) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0074] Method steps: A person of ordinary skill in the art will readily ascertain that the illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0075] The present invention offers multiple advantages over the prior art, and the above listed are a few examples to emphasize some of the advantageous features. The listed advantages are to be read in a non-limiting manner.

REFERENCE NUMERALS
[0076] Environment - 100
[0077] UEs – 102, 102-1 to 102-n
[0078] Server – 104a, 104b, …, 104n
[0079] Communication network – 106
[0080] System – 108
[0081] Processor – 202
[0082] Memory – 204
[0083] Interface – 206
[0084] Display – 208
[0085] Input unit – 210
[0086] Database – 214
[0087] Unified inventory management module – 216
[0088] Training module – 218
[0089] Data driven module - 220
[0090] System - 300
[0091] Primary processors – 305
[0092] Memory – 310
[0093] Kernel – 315

CLAIMS
We Claim
1. A method for handling a response from a User Interaction Management (UIM) platform hosted in a server (104a), the method comprising the steps of:
receiving, by at least one processor (202), an application programming interface (API) request from a system (108);
identifying, by the at least one processor (202), a type of a response to be sent based on the API request using a data driven model; and
selecting, by the at least one processor (202), the response as one of a synchronous response and an asynchronous response based on the identified type of the response, and wherein the response is provided to the system (108).

2. The method as claimed in claim 1, wherein the method comprises:
obtaining, by at least one processor (202), at least one of a high-level design (HLD) document and a low-level design (LLD) document; and
training, by the at least one processor (202), the data driven model using the HLD document and the LLD document.

3. The method as claimed in claim 1, wherein the method comprises:
receiving, by the at least one processor (202), feedback about the data driven model from the server (104a) based on the response; and
training, by the at least one processor (202), the data driven model based on the feedback, wherein the data driven model is deployed in a data driven module (220).

4. The method as claimed in claim 1, wherein the data driven model comprises a machine learning (ML) model.

5. The method as claimed in claim 2, wherein the HLD document is associated with at least one of an integration of external application programming interface (API), a user device, and an IP pool inventory, and wherein the LLD document is associated with at least one of the integration of the external API, the user device, and the IP pool inventory.

6. The method as claimed in claim 1, wherein a training data for the data driven model is stored in a database (214).

7. A system (108) for handling a response from a User Interaction Management (UIM) platform, the system (108) comprising:
a plurality of first servers (104b-104n) configured to send an application programming interface (API) request from the system (108) to the UIM platform; and
a server (104a) communicatively coupled to the plurality of first servers (104b-104n) via a communication network (106), and wherein the server (104a) is configured to host the UIM platform, and wherein the server (104a) further comprises:
a unified inventory management module (216) configured to:
receive the API request from the system (108) via an interface (206);
identify a type of a response to be sent based on the API request using a trained data driven model;
select the response as at least one of a synchronous response and an asynchronous response based on the identification; and
a display (208) configured to render the response for viewing, wherein the response is communicated to the system (108).

8. The system (108) as claimed in claim 7, comprising a training module (218) configured to:
obtain at least one of a high-level design (HLD) document and a low-level design (LLD) document; and
train the data driven model using the HLD document and the LLD document.

9. The system (108) as claimed in claim 7, wherein the training module (218) is further configured to:
receive feedback from the data driven model from the server (104a) based on the response; and
train the data driven model deployed in the data driven module (220) based on the feedback.

10. The system (108) as claimed in claim 7, wherein the data driven model comprises a machine learning (ML) model.

11. The system (108) as claimed in claim 7, wherein the system (108) comprises another server.

12. The system (108) as claimed in claim 8, wherein the HLD document is associated with at least one of an integration of external application programming interface (API), a user device, and an IP pool inventory, and wherein the LLD document is associated with at least one of the integration of the external API, the user device, and the IP pool inventory.

13. The system (108) as claimed in claim 7, wherein the server (104a) further comprises a database (214) configured to store the data driven model and the training data for the data driven model.

14. A User Equipment (UE) (102-1), comprising:
one or more primary processors (305) communicatively coupled to one or more processors (202) of a system (108), the one or more primary processors (305) coupled with a memory (310), wherein said memory (310) stores instructions which when executed by the one or more primary processors (305) causes the UE (102-1) to:
transmit an application programming interface (API) request received from a system to the one or more processors (202);
wherein the one or more processors (202) is configured to perform the steps as claimed in claim 1.

Documents

Application Documents

# Name Date
1 202321048717-STATEMENT OF UNDERTAKING (FORM 3) [19-07-2023(online)].pdf 2023-07-19
2 202321048717-PROVISIONAL SPECIFICATION [19-07-2023(online)].pdf 2023-07-19
3 202321048717-FORM 1 [19-07-2023(online)].pdf 2023-07-19
4 202321048717-FIGURE OF ABSTRACT [19-07-2023(online)].pdf 2023-07-19
5 202321048717-DRAWINGS [19-07-2023(online)].pdf 2023-07-19
6 202321048717-DECLARATION OF INVENTORSHIP (FORM 5) [19-07-2023(online)].pdf 2023-07-19
7 202321048717-FORM-26 [03-10-2023(online)].pdf 2023-10-03
8 202321048717-Proof of Right [08-01-2024(online)].pdf 2024-01-08
9 202321048717-DRAWING [18-07-2024(online)].pdf 2024-07-18
10 202321048717-COMPLETE SPECIFICATION [18-07-2024(online)].pdf 2024-07-18
11 Abstract-1.jpg 2024-09-28
12 202321048717-Power of Attorney [25-10-2024(online)].pdf 2024-10-25
13 202321048717-Form 1 (Submitted on date of filing) [25-10-2024(online)].pdf 2024-10-25
14 202321048717-Covering Letter [25-10-2024(online)].pdf 2024-10-25
15 202321048717-CERTIFIED COPIES TRANSMISSION TO IB [25-10-2024(online)].pdf 2024-10-25
16 202321048717-FORM 3 [02-12-2024(online)].pdf 2024-12-02
17 202321048717-FORM 18 [20-03-2025(online)].pdf 2025-03-20