Method And System For Optimising Mapping And Retrieving A Target Audio Based On A Trigger

Abstract: The present disclosure relates to a method and a system for optimising mapping and retrieving a target audio based on a trigger. The method comprises receiving, by a transceiver unit [302] at an enterprise provisioning server (EPS) [132], a service provisioning request, wherein the service provisioning request comprises at least an audio data. Further, the method comprises generating, by a processing unit [304], the target audio data associated with the service provisioning request based on the audio data. Further, the method comprises transmitting, by the transceiver unit [302] from the EPS [132] to a multimedia server, the target audio data. Further, the method comprises generating a target path associated with the target audio data; mapping the target path and the target audio data; identifying the trigger from a user; and retrieving the target audio from the target path based on the identified trigger. [Figure 4]


Patent Information

Application #
Filing Date
25 July 2023
Publication Number
06/2025
Publication Type
INA
Invention Field
ELECTRONICS
Status
Parent Application

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. Birendra Bisht
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
2. Aayush Bhatnagar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
3. Pradeep Kumar Bhatnagar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
4. Harbinder Pal Singh
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
5. Sandeep Gupta
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
6. Nitin Warape
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
7. P R Srikanth Reddy
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
8. Monish Rode
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
9. Somya Mishra
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
10. Smridhi Sharma
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
11. Amrish Bansal
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India

Specification

FORM 2
THE PATENTS ACT, 1970 (39 OF 1970) & THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR OPTIMISING MAPPING AND RETRIEVING A TARGET AUDIO BASED ON A TRIGGER”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.

METHOD AND SYSTEM FOR OPTIMISING MAPPING AND RETRIEVING A TARGET AUDIO BASED ON A TRIGGER
TECHNICAL FIELD
[0001] Embodiments of the present disclosure generally relate to wireless communication systems. More particularly, embodiments of the present disclosure relate to optimising mapping and retrieving a target audio based on a trigger.
BACKGROUND
[0002] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0003] Wireless communication technology has rapidly evolved over the past few decades, with each generation bringing significant improvements and advancements. The first generation of wireless communication technology was based on analog technology and offered only voice services. However, with the advent of the second-generation (2G) technology, digital communication and data services became possible, and text messaging was introduced. The third-generation (3G) technology marked the introduction of high-speed internet access, mobile video calling, and location-based services. The fourth-generation (4G) technology revolutionized wireless communication with faster data speeds, better network coverage, and improved security. Currently, the fifth-generation (5G) technology is being deployed, promising even faster data speeds, low latency, and the ability to connect multiple devices simultaneously. With each generation, wireless communication technology has become more advanced, sophisticated, and capable of delivering more services to its users.
[0004] The prior art in the field of service requests associated with custom multimedia messages for users, or for an enterprise service onboarded using service provisioning requests, that need to be played during calls has shown several shortcomings. These custom multimedia messages (or the multimedia files) may be in the form of audio data that needs to be played when a call is received at a user device. One notable drawback is the cumbersome process involved in saving the multimedia file of a service request on a server. Traditionally, the service request for such multimedia files requires the manual creation of specific paths and the subsequent upload of the files to those designated paths. These paths need to be very specific to identify a particular enterprise service onboarded. This process is time-consuming and prone to errors, as it heavily relies on human intervention. Additionally, the need for specific paths adds complexity to the service request procedure, making it less user-friendly and efficient. Also, the provisioning entity needs to know the network node topology and the circles to which these files are applicable, and appropriate paths need to be present there. Similarly, the provisioning entity is expected to know for which exact service the file is intended and thus must have an understanding of the path. There is also a need for tracking the new nodes added to the network.
[0005] Further, over the period of time, various solutions have been developed to improve the performance of communication devices and to optimize storing and retrieving multimedia files of the audio data (that a user may want to be played) during an ongoing network session. However, there are certain challenges with existing solutions. One problem observed in the prior art regarding the provisioning of custom multimedia messages for users or an enterprise is the laborious and error-prone nature of the process. In traditional methods, the creation of specific paths and the manual uploading of multimedia files to those paths pose significant challenges. This approach not only consumes valuable time but also increases the risk of mistakes and inconsistencies. The reliance on manual intervention makes the provisioning procedure cumbersome and less user-friendly, hindering the overall efficiency and effectiveness of the system. As a result, there is a clear need for an improved system that streamlines the service request process, automates path creation, and simplifies file uploading to enhance overall efficiency and user experience.

[0006] Thus, there exists an imperative need in the art to optimise mapping and retrieving a target audio based on a trigger, which the present disclosure aims to address.
SUMMARY
[0007] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0008] An aspect of the present disclosure may relate to a method for optimising mapping and retrieving a target audio based on a trigger. The target audio may refer to the audio that a user may want to be played when a call is received from another user. The method comprises receiving, by a transceiver unit at an enterprise provisioning server (EPS), a service provisioning request, wherein the service provisioning request comprises at least an audio data. Further, the method comprises generating, by a processing unit, the target audio data associated with the service provisioning request based on the audio data. Further, the method comprises transmitting, by the transceiver unit from the EPS to a multimedia server, the target audio data. Further, the method comprises generating, by the processing unit at the multimedia server, a target path associated with the target audio data. Further, the method comprises mapping, by the processing unit at the multimedia server, the target path and the target audio data. Further, the method comprises identifying, by the processing unit, the trigger from a user. Further, the method comprises retrieving, by the processing unit, the target audio from the target path based on the identified trigger.
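For illustration only, the provisioning and retrieval flow described above can be sketched as follows. All class, method, and field names (EnterpriseProvisioningServer, MultimediaServer, ProvisioningRequest, and so on) are hypothetical assumptions for this sketch, not names taken from the disclosure, and the pass-through generation of target audio stands in for whatever processing an implementation may apply.

```python
from dataclasses import dataclass

@dataclass
class ProvisioningRequest:
    enterprise_id: str
    service_id: str
    audio_data: bytes          # the request carries at least an audio data

class MultimediaServer:
    def __init__(self):
        self.store = {}        # target path -> target audio data

    def generate_target_path(self, request: ProvisioningRequest) -> str:
        # Derive a path from the request identifiers instead of requiring
        # manual path creation (an assumed naming scheme).
        return f"/media/{request.enterprise_id}/{request.service_id}/target.wav"

    def map_audio(self, path: str, audio: bytes) -> None:
        self.store[path] = audio            # mapping = storing audio at the path

    def retrieve(self, path: str) -> bytes:
        return self.store[path]

class EnterpriseProvisioningServer:
    def __init__(self, media_server: MultimediaServer):
        self.media_server = media_server
        self.paths = {}        # service_id -> target path

    def handle_request(self, request: ProvisioningRequest) -> str:
        # Generate the target audio data from the received audio data
        # (a pass-through here; a real system might transcode it).
        target_audio = request.audio_data
        path = self.media_server.generate_target_path(request)
        self.media_server.map_audio(path, target_audio)
        self.paths[request.service_id] = path
        return path

    def on_trigger(self, service_id: str) -> bytes:
        # A trigger (e.g. during an active call session) causes retrieval
        # of the target audio from its target path.
        return self.media_server.retrieve(self.paths[service_id])

server = EnterpriseProvisioningServer(MultimediaServer())
path = server.handle_request(ProvisioningRequest("acme", "ringback", b"audio-bytes"))
audio = server.on_trigger("ringback")
```

The sketch shows only the ordering of the claimed steps; transport between EPS and multimedia server is reduced to a method call.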
[0009] In an exemplary aspect of the present disclosure, the target audio is transmitted by the transceiver unit from the EPS to the multimedia server in a predefined format via a predetermined transmission path.
[0010] In an exemplary aspect of the present disclosure, the trigger from the user is at least a trigger during an active call session associated with the user.
[0011] In an exemplary aspect of the present disclosure, the method further comprises facilitating a playback of the target audio during the active call session.
[0012] In an exemplary aspect of the present disclosure, the mapping, by the processing unit at the multimedia server, of the target path and the target audio data further comprises storing the target audio data at the target path.
[0013] In an exemplary aspect of the present disclosure, the target path is shared with the multimedia server and a Media Resource Function (MRF).
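The disclosure states that the target path is shared with the multimedia server and the MRF, but does not specify how the path is formed. One assumption-laden way to realize this, sketched below purely for illustration, is a deterministic derivation that any node can compute independently from the same identifiers; the function name, the digest scheme, and the inputs (enterprise, service, circle) are hypothetical.

```python
import hashlib

def derive_target_path(enterprise_id: str, service_id: str, circle: str) -> str:
    # A short stable digest gives a unique path per enterprise/service/circle
    # without manual path creation or topology knowledge at the provisioning entity.
    key = f"{enterprise_id}:{service_id}:{circle}".encode()
    digest = hashlib.sha256(key).hexdigest()[:12]
    return f"/media/{circle}/{digest}.wav"

# The multimedia server and the MRF derive the same path from the same inputs,
# so the mapping is consistent on both nodes:
p1 = derive_target_path("acme", "welcome", "mumbai")
p2 = derive_target_path("acme", "welcome", "mumbai")
```

Because the derivation is pure, sharing the path can reduce to sharing the identifiers.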
[0014] Another aspect of the present disclosure may relate to a system for optimising mapping and retrieving a target audio based on a trigger. The target audio may refer to the audio that a user may want to be played when a call is received from another user. The system comprises a transceiver unit, wherein the transceiver unit is configured to receive, at an enterprise provisioning server (EPS), a service provisioning request, wherein the service provisioning request comprises at least an audio data. Further, the system comprises a processing unit connected to the transceiver unit. The processing unit is configured to generate a target audio data associated with the service provisioning request based on the audio data. Further, the transceiver unit is configured to transmit, from the EPS to a multimedia server, the target audio data. Also, the processing unit is further configured to generate, at the multimedia server, a target path associated with the target audio data; map, at the multimedia server, the target path and the target audio data; identify the trigger from a user; and retrieve the target audio from the target path based on the identified trigger from the user.
[0015] Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions for optimising mapping and retrieving a target audio based on a trigger, the instructions including executable code which, when executed by one or more units of a system, causes a transceiver unit of the system to receive, at an enterprise provisioning server (EPS), a service provisioning request, wherein the service provisioning request comprises at least an audio data. Further, the executable code when executed causes a processing unit of the system to generate a target audio data associated with the service provisioning request based on the audio data. Further, the executable code when executed causes the transceiver unit to transmit, from the EPS to a multimedia server, the target audio data. Further, the executable code when executed causes the processing unit to generate, at the multimedia server, a target path associated with the target audio data. Further, the executable code when executed causes the processing unit to map, at the multimedia server, the target path and the target audio data. Further, the executable code when executed causes the processing unit to identify the trigger from a user. Thereafter, the executable code when executed causes the processing unit to retrieve the target audio from the target path based on the identified trigger from the user.
[0016] Yet another aspect of the present disclosure may relate to a user equipment (UE) for optimising mapping and retrieving a target audio based on a trigger. The UE comprises a system, wherein the system further comprises a transceiver unit. The transceiver unit is configured to receive, at an enterprise provisioning server (EPS), a service provisioning request, wherein the service provisioning request comprises at least an audio data. Further, the system of the UE comprises a processing unit connected to the transceiver unit. The processing unit is configured to generate a target audio data associated with the service provisioning request based on the audio data. Further, the transceiver unit is configured to transmit, from the EPS to a multimedia server, the target audio data. Also, the processing unit is further configured to generate, at the multimedia server, a target path associated with the target audio data; map, at the multimedia server, the target path and the target audio data; identify the trigger from a user; and retrieve the target audio from the target path based on the identified trigger from the user.
OBJECTS OF THE INVENTION
[0017] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies, are listed herein below.
[0018] It is an object of the present disclosure to provide a system and a method to optimise the storing of multimedia files at an MRF server, which can be catered to during an ongoing network session.
[0019] It is another object of the present disclosure to provide a solution that generates a target path for multimedia files based on the provisioning request received.
DESCRIPTION OF THE DRAWINGS
[0020] The accompanying drawings, which are incorporated herein and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0021] FIG. 1 illustrates an exemplary block diagram representation of 5th generation core (5GC) network architecture.
[0022] FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
[0023] FIG. 3 illustrates an exemplary block diagram of a system for optimising mapping and retrieving a target audio based on a trigger, in accordance with exemplary implementations of the present disclosure.
[0024] FIG. 4 illustrates a method flow diagram for optimising mapping and retrieving a target audio based on a trigger, in accordance with exemplary implementations of the present disclosure.
[0025] FIG. 5 illustrates an exemplary scenario architecture diagram of a system for optimising mapping and retrieving a target audio associated with a user based on a trigger, in accordance with exemplary embodiments of the present disclosure.
[0026] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0027] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.
[0028] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0029] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0030] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0031] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
[0032] As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0033] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless communication device”, “a mobile communication device”, “a communication device” may be any electrical, electronic and/or computing device or equipment, capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
[0034] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[0035] As used herein, “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also refer to a set of rules or protocols that define communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
[0036] All modules, units, components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[0037] As used herein, the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information or a combination thereof between units/components within the system and/or connected with the system.
[0038] As discussed in the background section, the current known solutions have several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and a system for optimising mapping and retrieving a target audio based on a trigger.
[0039] FIG. 1 illustrates an exemplary block diagram representation of 5th generation core (5GC) network architecture, in accordance with exemplary implementation of the present disclosure. As shown in FIG. 1, the 5GC network architecture [100] includes a user equipment (UE) [102], a radio access network (RAN) [104], an access and mobility management function (AMF) [106], a Session Management Function (SMF) [108], a Service Communication Proxy (SCP) [110], an Authentication Server Function (AUSF) [112], a Network Slice Specific Authentication and Authorization Function (NSSAAF) [114], a Network Slice Selection Function (NSSF) [116], a Network Exposure Function (NEF) [118], a Network Repository Function (NRF) [120], a Policy Control Function (PCF) [122], a Unified Data Management (UDM) [124], an application function (AF) [126], a User Plane Function (UPF) [128], a data network (DN) [130], and an enterprise provisioning server (EPS) [132], wherein all the components are assumed to be connected to each other in a manner as obvious to the person skilled in the art for implementing features of the present disclosure.
[0040] Radio Access Network (RAN) [104] is the part of a mobile telecommunications system that connects user equipment (UE) [102] to the core network (CN) and provides access to different types of networks (e.g., 5G network). It consists of radio base stations and the radio access technologies that enable wireless communication.
[0041] Access and Mobility Management Function (AMF) [106] is a 5G core network function responsible for managing access and mobility aspects, such as UE registration, connection, and reachability. It also handles mobility management procedures like handovers and paging.
[0042] Session Management Function (SMF) [108] is a 5G core network function responsible for managing session-related aspects, such as establishing, modifying, and releasing sessions. It coordinates with the User Plane Function (UPF) for data forwarding and handles IP address allocation and QoS enforcement.
[0043] Service Communication Proxy (SCP) [110] is a network function in the 5G core network that facilitates communication between other network functions by providing a secure and efficient messaging service. It acts as a mediator for service-based interfaces.
[0044] Authentication Server Function (AUSF) [112] is a network function in the 5G core responsible for authenticating UEs during registration and providing security services. It generates and verifies authentication vectors and tokens.
[0045] Network Slice Specific Authentication and Authorization Function (NSSAAF) [114] is a network function that provides authentication and authorization services specific to network slices. It ensures that UEs can access only the slices for which they are authorized.
[0046] Network Slice Selection Function (NSSF) [116] is a network function responsible for selecting the appropriate network slice for a UE based on factors such as subscription, requested services, and network policies.
[0047] Network Exposure Function (NEF) [118] is a network function that exposes capabilities and services of the 5G network to external applications, enabling integration with third-party services and applications.
[0048] Network Repository Function (NRF) [120] is a network function that acts as a central repository for information about available network functions and services. It facilitates the discovery and dynamic registration of network functions.
[0049] Policy Control Function (PCF) [122] is a network function responsible for policy control decisions, such as QoS, charging, and access control, based on subscriber information and network policies.
[0050] Unified Data Management (UDM) [124] is a network function that centralizes the management of subscriber data, including authentication, authorization, and subscription information.
[0051] Application Function (AF) [126] is a network function that represents external applications interfacing with the 5G core network to access network capabilities and services.
[0052] User Plane Function (UPF) [128] is a network function responsible for handling user data traffic, including packet routing, forwarding, and QoS enforcement.
[0053] Data Network (DN) [130] refers to a network that provides data services to user equipment (UE) in a telecommunications system. The data services may include, but are not limited to, Internet services and private data network related services.
[0054] Enterprise Provisioning Server (EPS) [132] refers to a centralized system or server used to manage and automate the provisioning (setup and configuration) of network resources such as user accounts, access privileges, software applications, and network services. According to some implementations of the present disclosure, EPS [132] facilitates provisioning of the audio data that is to be played for the user during a call session.
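As a purely illustrative sketch of how an EPS might accept a service provisioning request carrying audio data, consider the following; the JSON field names and the base64 encoding of the audio payload are assumptions for this example and are not specified by the disclosure.

```python
import base64
import json

def parse_provisioning_request(raw: bytes) -> dict:
    """Decode a hypothetical JSON provisioning request at the EPS."""
    request = json.loads(raw)
    # Per the disclosure, the request comprises at least an audio data.
    if "audio_data" not in request:
        raise ValueError("service provisioning request must include audio data")
    # Assumed wire format: audio bytes carried as base64 text inside JSON.
    request["audio_data"] = base64.b64decode(request["audio_data"])
    return request

# Example request from a provisioning entity (field names are illustrative):
payload = json.dumps({
    "enterprise_id": "acme",
    "audio_data": base64.b64encode(b"\x00\x01").decode(),
}).encode()
parsed = parse_provisioning_request(payload)
```

Validation at the EPS keeps malformed requests from reaching the multimedia server.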
[0055] FIG. 2 illustrates an exemplary block diagram of a computing device [200] upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure. In an implementation, the computing device [200] may also implement a method for optimising mapping and retrieving a target audio based on a trigger utilising the system. In another implementation, the computing device [200] itself implements the method for optimising mapping and retrieving the target audio based on the trigger using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0056] The computing device [200] may include a bus [202] or other communication mechanism for communicating information, and a hardware processor [204] coupled with the bus [202] for processing information. The hardware processor [204] may be, for example, a general purpose microprocessor. The computing device [200] may also include a main memory [206], such as a random access memory (RAM), or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204]. The main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
[0057] A storage device [210], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [202] for storing information and instructions. The computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for displaying information to a computer user. An input device [214], including alphanumeric and other keys, touch screen input means, etc. may be coupled to the bus [202] for communicating information and command selections to the processor [204]. Another type of user input device may be a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212]. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0058] The computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computing device [200] causes or programs the computing device [200] to be a special-purpose machine.
According to one implementation, the techniques herein are performed by the
computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions
contained in the main memory [206] causes the processor [204] to perform the
process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.

[0059] The computing device [200] also may include a communication interface
[218] coupled to the bus [202]. The communication interface [218] provides a two-
way data communication coupling to a network link [220] that is connected to a
local network [222]. For example, the communication interface [218] may be an
integrated services digital network (ISDN) card, cable modem, satellite modem, or
a modem to provide a data communication connection to a corresponding type of
telephone line. As another example, the communication interface [218] may be a
local area network (LAN) card to provide a data communication connection to a
compatible LAN. Wireless links may also be implemented. In any such
implementation, the communication interface [218] sends and receives electrical,
electromagnetic or optical signals that carry digital data streams representing various types of information.
[0060] The computing device [200] can send messages and receive data, including
program code, through the network(s), the network link [220] and the
communication interface [218]. In the Internet example, a server [230] might
transmit a requested code for an application program through the Internet [228], the
ISP [226], the local network [222], the host [224], and the communication interface
[218]. The received code may be executed by the processor [204] as it is received,
and/or stored in the storage device [210], or other non-volatile storage for later
execution.
[0061] Referring to Figure 3, an exemplary block diagram of a system [300] for optimising mapping and retrieving a target audio based on a trigger, is shown, in
accordance with the exemplary implementations of the present disclosure. The
system [300] comprises at least one transceiver unit [302], at least one processing unit [304], and at least one storage unit [306]. Also, all of the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in the figures, all units shown within the system should also be assumed to be connected to each other. Also, in Fig. 3 only a few units are shown; however, the system [300] may comprise multiple such units or the system
[300] may comprise any such numbers of said units, as required to implement the
features of the present disclosure. Further, in an implementation, the system [300]
may be present in a user device to implement the features of the present disclosure.
The system [300] may be a part of the user device, or may be independent of but
in communication with the user device (which may also be referred to herein as a UE). In
another implementation, the system [300] may reside in a server or a network entity. In yet another implementation, the system [300] may reside partly in the server/ network entity and partly in the user device.
[0062] The system [300] is configured for optimising mapping and retrieving the
target audio based on the trigger, with the help of the interconnection between the components/units of the system [300]. Here, the target audio may refer to an audio that a user may want to be played when a call is received at the user device of the user from another user.
[0063] The system comprises at least a transceiver unit [302]. The transceiver unit [302] is configured to receive, at an enterprise provisioning server (EPS) [132], a service provisioning request, wherein the service provisioning request comprises at least an audio data. This service provisioning request may be received from a user.
That is, for example, when a client (that is, a user) is being onboarded, or say, provisioned in the system, the user may need a pre-recorded custom audio message to be played at call time. Thus, for this purpose, the pre-recorded custom audio message needs to be associated with the user information and be played as per the user requirement. For this, the service provisioning request may be raised
initially and received by the transceiver unit [302]. Also, the transceiver unit [302]
may be integrated with the EPS [132]. In an exemplary implementation of the present solution, the audio data may be a pre-recorded custom audio message to be played at call time. In an exemplary implementation of the present solution, when the service provisioning request is received at the EPS [132], the EPS [132] handles
the audio data and sends back an appropriate response. Also, in an exemplary implementation, this audio data required for serving the service provisioning request may be uploaded at a specified path at the EPS [132]. Also, here the EPS
[132] may be a server that facilitates the provisioning of the audio data that is to be
played for the user during a call session. The user device may be connected or in
communication with the EPS [132]. In an exemplary implementation, the EPS [132]
may comprise a user data of one or more users that may specify the customised
audio that needs to be played for each user when a call is received at the user device
of each user. Also, for this purpose, the EPS [132] facilitates the provisioning of this customised audio data for each user in the storage unit [306], and the EPS [132] also handles the path generated for the customised audio that is to be played for each user when the call is received at the user device of each user.
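Purely by way of a non-limiting, hedged illustration, the above handling of a service provisioning request at the EPS [132] may be sketched as follows; the class and member names (ProvisioningRequest, user_id, audio_format, receive_request) are assumptions introduced for clarity and are not part of the present disclosure.

```python
# Illustrative sketch only: an EPS-like object accepting a service
# provisioning request that carries audio data, storing the audio
# against the user's identity, and sending back a response.
from dataclasses import dataclass


@dataclass
class ProvisioningRequest:
    user_id: str        # identity of the user being onboarded (assumed field)
    audio_bytes: bytes  # pre-recorded custom audio message
    audio_format: str   # format in which the audio was uploaded, e.g. "m4a"


class EnterpriseProvisioningServer:
    def __init__(self):
        self.user_audio = {}  # user_id -> (audio_bytes, audio_format)

    def receive_request(self, request: ProvisioningRequest) -> dict:
        # Associate the uploaded audio with the user information and
        # acknowledge the request, as described in the paragraph above.
        self.user_audio[request.user_id] = (request.audio_bytes,
                                            request.audio_format)
        return {"status": "accepted", "user_id": request.user_id}


eps = EnterpriseProvisioningServer()
response = eps.receive_request(ProvisioningRequest("user-x", b"\x00\x01", "m4a"))
```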
[0064] Further, the system comprises at least a processing unit [304] connected to the transceiver unit [302]. The processing unit [304] is configured to generate, a target audio data associated with the service provisioning request based on the audio
data. In an implementation, the target audio data is generated by the processing unit [304] by converting the audio data into a target format that is required by the multimedia server. For example, the audio data may be saved in the storage unit [306] in some format that may or may not be supported by the multimedia server. Thus, the processing unit [304] may convert the audio data into a format supported by the multimedia server. For example, the audio data is present in the storage unit [306] in *.m4a format, and the *.m4a format is not supported by the multimedia server. Also, say for example, the *.mp3 format is supported by the multimedia server. Thus, the processing unit [304] may convert the audio data from the *.m4a format to the *.mp3 format. This audio data converted to the *.mp3 format is the target audio
data. A person skilled in the art would appreciate that the above example is provided
for understanding purposes only and does not limit or restrict the present disclosure in any possible manner. The target audio data is the data that is to be played finally for a user for facilitating the completion of the method as disclosed by the present disclosure.
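The format conversion described above (e.g., from *.m4a to *.mp3) may be sketched, purely for illustration, as follows; the supported-format set is an assumption, and the transcoding itself is stubbed out — a real system would invoke an actual audio transcoder at the marked point.

```python
# Illustrative sketch only: decide whether the audio data already has a
# format supported by the multimedia server and, if not, derive the name
# and format of the target audio data (actual transcoding is stubbed).
SUPPORTED_FORMATS = {"mp3", "wav"}  # formats the multimedia server accepts (assumed)


def to_target_audio(filename: str, audio_format: str) -> tuple:
    """Return (target_filename, target_format) for the given audio data."""
    if audio_format in SUPPORTED_FORMATS:
        return filename, audio_format      # already playable as-is
    target_format = "mp3"                  # e.g. convert *.m4a -> *.mp3
    stem = filename.rsplit(".", 1)[0]
    # A real implementation would transcode the audio here (e.g. via an
    # external tool); this sketch only derives the converted file's name.
    return stem + "." + target_format, target_format


result = to_target_audio("greeting.m4a", "m4a")  # -> ("greeting.mp3", "mp3")
```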

[0065] The transceiver unit [302] is further configured to transmit, from the EPS
[132] to a multimedia server, the target audio data. In an implementation of the
present disclosure, the target audio data may be transmitted by the transceiver unit
[302] from the EPS [132] to the multimedia server in a predefined format via a
predetermined transmission path. In an implementation, this predefined format may
refer to the format of the audio data in which the audio data is saved in the storage unit [306]. Also, in an implementation, the multimedia server is a media synchronisation server, which comprises a media synchronisation (active) server and a media synchronisation (standby) server. As used herein, the “media
synchronization (active) server” may refer to a server that processes and
synchronizes media streams in real-time in order to provide accurate and timely media delivery to users. Further, as used herein, the “media synchronization (standby) server” may refer to a server that acts as a backup server which is configured to assume the role of the active server in case of failure or maintenance
and replicates the active server configuration and state to ensure minimal disruption
and provide timely media delivery to users.
[0066] Also, the processing unit [304] is further configured to generate, at the multimedia server, a target path associated with the target audio data. This target
path may be in the form of a uniform resource locator (URL), etc. and direct to a
location where the audio may be saved. This location may be a location in the storage unit [306] where the audio files may be saved. Thus, when an audio data is saved in the storage unit [306], the path or the locator of the audio data, for pointing to the location of the audio data in the storage unit [306], may be generated by the processing unit [304]. For example, a User X wants to play an audio, i.e., ‘Audio 1’, when a call is received at the user device of the User X. Thus, the Audio 1, that is, the target audio for the User X, may be saved in the storage unit [306] and the path of the location of the storage unit [306] where the Audio 1 is saved (that is, the target path, such as, the URL of the location of Audio 1 in the storage unit [306]) may be generated by the processing unit [304] to point to the location of this target audio
when needed. The target path may be shared with the multimedia server and a
Media Resource Function (MRF). As used herein, the “Media Resource Function
(MRF)” may refer to an entity in the IP Multimedia Subsystem (IMS) that provides
media processing resources for multimedia services. Further, the MRF may be
responsible for processing media streams, such as an audio stream and a video
stream, in real-time, wherein the processing may include a media stream
transcoding (such as codec conversion), a media stream mixing (such as conferencing), a media stream routing, a media stream recording, a media stream playback and any such like processing that may be appreciated by a person skilled in the art to implement the present disclosure.
[0067] Further, the processing unit [304] is configured to map, at the multimedia server, the target path and the target audio data. In an example, the mapping may be performed by tagging the user data with the location path of the audio data that the user wants to play. That is, say for example, a User X wants to play an audio
i.e., ‘Audio 1’, when a call is received at the user device of the User X. Thus, the Audio 1, that is, the target audio for the User X, may be saved in the storage unit [306] and the path of the location of the storage unit [306] where the Audio 1 is saved (that is, the target path, such as, the URL of the location of Audio 1 in the storage unit [306]), may be tagged with the identity of the User X. This identity of the User X may be, for example, a unique username saved for the User X, or a unique number saved for the User X to identify the User X from the database of one or more users. In an example, the mapping may comprise creating and/or maintaining a mapping table in a database in the storage unit [306]. The mapping table may comprise one or more of a user information (comprising a user identity), an information related to
the target audio data (indicating which audio is to be played for which user), and a
target path of the target audio data. Also, after mapping at the multimedia server the target path and the target audio data, the processing unit [304] is further configured to store the target audio data at the target path that may be shared with the multimedia server and the Media Resource Function (MRF) associated with the
network. In other words, the target audio file may be stored at a location
corresponding to the target path.
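The mapping table described above — tagging a user identity with the target path of the audio to be played — may be sketched, in a non-limiting illustration, as follows; the record field names are assumptions introduced for clarity.

```python
# Illustrative sketch only: a mapping table keyed by user identity,
# recording which audio is to be played and where it is stored.
mapping_table = {}  # user identity -> record describing the target audio


def map_target_audio(user_id: str, audio_name: str, target_path: str) -> None:
    mapping_table[user_id] = {
        "audio": audio_name,         # which audio to play for this user
        "target_path": target_path,  # where the target audio data is stored
    }


map_target_audio("user-x", "Audio 1",
                 "https://media.example.net/audio/user-x/audio1.mp3")
```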

[0068] Further, the processing unit [304] is configured to identify the trigger from a user. This trigger may be, for example, a call answered by a user, or an establishment of a call session successfully taking place. In other words, the trigger from the user may be a trigger during an active call session associated with the user.
[0069] Also, the processing unit [304] is configured to facilitate a playback of the target audio during the active call session. The target audio may be played on a user device of a user (e.g., a customer) that calls the user of the present invention (e.g., an enterprise). Further, the processing unit [304] is configured to retrieve the target audio from the target path based on the identified trigger from the user.
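The trigger identification and retrieval described above may be sketched, purely for illustration, as follows; the trigger names and the in-memory stores are assumptions and not part of the present disclosure.

```python
# Illustrative sketch only: when a call-related trigger is identified for
# a user, look up the mapped target path and retrieve the target audio.
CALL_TRIGGERS = {"call_answered", "session_established"}  # assumed trigger names

user_to_path = {"user-x": "https://media.example.net/audio/user-x/audio1.mp3"}
audio_store = {"https://media.example.net/audio/user-x/audio1.mp3": b"ID3..."}


def on_trigger(user_id, event):
    if event not in CALL_TRIGGERS:           # not a trigger acted upon
        return None
    target_path = user_to_path.get(user_id)  # mapped during provisioning
    if target_path is None:
        return None
    return audio_store[target_path]          # retrieved audio, ready for playback
```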
[0070] Referring to Figure 4, an exemplary method flow diagram [400] for
optimising mapping and retrieving a target audio based on a trigger, in accordance
with exemplary implementations of the present disclosure is shown. In an
implementation, the method [400] is performed by the system [300]. Further, in an
implementation, the system [300] may be present in a server device to implement
the features of the present disclosure. Also, as shown in Figure 4, the method [400]
starts at step [402].
[0071] At step 402, the method comprises receiving, by a transceiver unit [302] at an enterprise provisioning server (EPS) [132], a service provisioning request, wherein the service provisioning request comprises at least an audio data. This
service provisioning request may be received from a user. That is, for example, when a client (that is, a user) is being onboarded, or say, provisioned in the system, the user may need a pre-recorded custom audio message to be played at call time. Thus, for this purpose, the pre-recorded custom audio message needs to be associated with the user information and be played as per the user requirement. For this, the service provisioning request may be raised initially and received by the
transceiver unit [302]. In an exemplary implementation of the present solution, the
audio data may be a pre-recorded custom audio message to be played at call time.
In an exemplary implementation of the present solution, when the service
provisioning request is received at the EPS [132], the EPS [132] handles the audio
data and sends back an appropriate response. Also, in an exemplary
implementation, this audio data required for serving the service provisioning request may be uploaded at a specified path at the EPS [132]. Also, here the EPS [132] may be a server that facilitates the provisioning of the audio data that is to be played for the user during a call session. The user device may be connected or in communication with the EPS [132]. In an exemplary implementation, the EPS [132]
may comprise a user data of one or more users that may specify the customised audio that needs to be played for each user when a call is received at the user device of each user. Also, for this purpose, the EPS [132] facilitates the provisioning of this customised audio data for each user in the storage unit [306], and the EPS [132] also handles the path generated for the customised audio that is to be played for each user when the call is received at the user device of each user.
[0072] At step 404, the method comprises generating, by a processing unit [304], the target audio data associated with the service provisioning request based on the audio data. In an implementation, the target audio data is generated by the
processing unit [304] by converting the audio data into a target format that is required
by the multimedia server. For example, the audio data may be saved in the storage unit [306] in some format that may or may not be supported by the multimedia server. Thus, the processing unit [304] may convert the audio data into a format supported by the multimedia server, i.e., the target format. For example, the audio
data is present in the storage unit [306] in *.m4a format, and the *.m4a format is
not supported by the multimedia server. Also, say for example, format *.mp3 is supported by the multimedia server. Thus, the processing unit [304] may convert the audio data from *.m4a format to *.mp3 format (i.e., the target format). This audio data converted to the *.mp3 format is the target audio data. A person skilled
in the art would appreciate that the above example is provided for understanding
purposes only and does not limit or restrict the present disclosure in any possible manner.
[0073] At step 406, the method comprises transmitting, by the transceiver unit
[302] from the EPS [132] to a multimedia server, the target audio data. Here, the
target audio may be transmitted by the transceiver unit [302] from the EPS [132] to the multimedia server in a predefined format via a predetermined transmission path. In an implementation, this predefined format may refer to the format of the audio data in which the audio data is saved in the storage unit [306].
[0074] At step 408, the method comprises generating, by the processing unit [304] at the multimedia server, a target path associated with the target audio data. This target path may be shared with the multimedia server and a Media Resource Function (MRF). Also, this target path may be in the form of a uniform resource
locator (URL), etc. and direct to a location where the audio may be saved. This location may be a location in the storage unit [306] where the audio files may be saved. Thus, when an audio data is saved in the storage unit [306], the path or the locator of the audio data, for pointing to the location of the audio data in the storage unit [306], may be generated by the processing unit [304]. For example, a User X wants to play an audio, i.e., ‘Audio 1’, when a call is received at the user device of the User X. Thus, the Audio 1, that is, the target audio for the User X, may be saved in the storage unit [306] and the path of the location of the storage unit [306] where the Audio 1 is saved (that is, the target path, such as, the URL of the location of Audio 1 in the storage unit [306]) may be generated by the processing unit [304] to point to the location of this target audio when needed.
[0075] At step 410, the method comprises mapping, by the processing unit [304] at
the multimedia server, the target path and the target audio data. In an example, the
mapping may be performed by tagging the user data with the location path of the
audio data that the user wants to play. That is, say for example, a User X wants to play an audio, i.e., ‘Audio 1’, when a call is received at the user device of the User X.
Thus, the Audio 1, that is, the target audio for the User X, may be saved in the
storage unit [306] and the path of the location of the storage unit [306] where the
Audio 1 is saved (that is, the target path, such as, the URL of the location of Audio
1 in the storage unit [306]), may be tagged with the identity of the User X. This
identity of the User X may be, for example, a unique username saved for the User X, or a unique number saved for the User X to identify the User X from the database of one or more users. In an example, the mapping may comprise creating and/or maintaining a mapping table in a database in the storage unit [306]. The mapping table may comprise one or more of a user information (comprising a user identity),
an information related to the target audio data (indicating which audio is to be played
for which user), and a target path of the target audio data. This mapping may further comprise storing the target audio data at the target path that may be shared with the multimedia server and the Media Resource Function (MRF) associated with the network. In other words, the target audio file may be stored at a location
corresponding to the target path.
[0076] At step 412, the method comprises identifying, by the processing unit [304],
the trigger from a user. This trigger may be, for example, a call answered by a user,
or establishment of a call session successfully taking place. In other words, the
trigger from the user may be a trigger during an active call session associated with
the user.
[0077] Also, the method may further comprise facilitating a playback of the target
audio during the active call session. Thus, for this purpose, at step 414, the method
comprises retrieving, by the processing unit [304], the target audio from the target
path based on the identified trigger.
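The steps [402] to [414] of the method [400] may be tied together in a purely illustrative end-to-end sketch, consistent with the examples above; the URL, format choice, and trigger name are assumptions and not part of the claimed method.

```python
# Illustrative sketch only: provision audio, derive the target audio data,
# generate and map a target path, then retrieve the audio on a trigger.
def run_method(user_id, filename, audio_format, audio_bytes):
    # Steps 402/404: receive the request and derive the target audio name
    # (format conversion itself is stubbed; "mp3" is an assumed target).
    if audio_format != "mp3":
        target_name = filename.rsplit(".", 1)[0] + ".mp3"
    else:
        target_name = filename
    # Steps 406/408: "transmit" to the multimedia server and generate a path.
    target_path = "https://media.example.net/audio/" + user_id + "/" + target_name
    # Step 410: map the target path and store the target audio data there.
    store = {target_path: audio_bytes}
    mapping = {user_id: target_path}

    # Steps 412/414: identify the trigger and retrieve the target audio.
    def retrieve(trigger_user, event):
        if event == "call_answered" and trigger_user in mapping:
            return store[mapping[trigger_user]]
        return None

    return retrieve


retrieve = run_method("user-x", "greeting.m4a", "m4a", b"\x00\x01")
```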
[0078] FIG. 5 illustrates an exemplary scenario architecture diagram of a system
[500] for optimising mapping and retrieving a target audio associated with a user
based on a trigger, in accordance with exemplary embodiments of the present
disclosure. Further, the system as shown in Figure 5 comprises various components
of telecom network architecture, such as an Enterprise Provisioning Server (EPS)
[132] unit, involved in implementation of the features of the present disclosure. The
system [500] comprises at least one Media Sync Server (MSS), at least one SAN Storage, and at least one Media Resource Function (MRF). In an implementation, the storage unit [306] may implement, or may be connected to, a SAN storage. Here, SAN refers to a ‘Storage Area Network’, that is, a dedicated network that interconnects and delivers shared pools of storage devices to multiple servers. Further, a person skilled in the art would understand that SAN, as known in the art, is a high-speed network that connects storage devices with
servers and other computing resources. The primary purpose of SAN is to provide
block-level data storage and access across multiple servers, allowing these servers to access shared pools of storage devices (such as disk arrays or tape libraries) as if they were locally attached. SANs may be used in environments where large amounts of data need to be centrally managed, accessed, and shared among multiple
servers without the performance bottlenecks associated with traditional network
storage solutions. Also, the MRF may refer to a component or entity responsible for handling and processing media streams in telecommunications networks. The MRF may be responsible for managing tasks such as encoding, decoding, mixing, and distributing audio, video, or other media types across the network, ensuring
efficient delivery and synchronization of multimedia content during
communication sessions.
[0079] Also, all of the components/ units of the system [500] are assumed to be connected to each other unless otherwise indicated below. Also, in Fig. 5 only a
few units are shown; however, the system [500] may comprise multiple such units
or the system [500] may comprise any such numbers of said units, as required to implement the features of the present disclosure. Further, in an implementation, the system [500] may be present in a user device to implement the features of the present invention. The system [500] may be a part of the user device / or may be
30 independent of but in communication with the user device (may also referred herein
as a UE). In another implementation, the system [500] may reside in a server or a
network entity. In yet another implementation, the system [500] may reside partly
in the server/ network entity and partly in the user device. In an implementation of
the present disclosure, the system [500] as depicted in figure 5, which may be
explained, for clarity, in conjunction with system [300] as depicted in figure 3 and
5 method [400] as depicted in figure 4, is configured to perform as follows:
[0080] In order to optimise mapping and retrieving the target audio associated with
the user based on the trigger, the EPS [132] is configured to receive from a network,
a service provisioning request associated with the user, wherein the service
provisioning request comprises at least a multimedia data associated with the service provisioning request.
[0081] Further, the EPS [132] may generate or rename a multimedia file name and
a target audio data associated with the service provisioning request based on the
audio data. In an implementation of the present solution, the target file name is
generated in a pre-defined format by the processing unit [304] based on the data associated with the service provisioning request.
[0082] Further, the multimedia file may be transmitted to a multimedia server. This
multimedia server may be the media sync server as shown in Figure 5. As used herein, the “media sync server” may refer to an entity that optimizes the mapping and retrieval of target audio content in response to the trigger, such as in an event when a trigger such as a call is received, the media sync server maps the requested audio content to the appropriate media source. Then the media sync server may retrieve the target audio and synchronize it with the trigger in real time, thus enabling a delivery of an accurate and relevant audio content for a particular user.
[0083] A storage unit [306] may be configured to store the multimedia file at the
storage location path, which may direct to the location of the multimedia file in the storage unit [306]. In an implementation, the storage unit [306] may implement, or may be connected to, a SAN Storage. The SAN Storage is also connected with the MRF
server as shown. The SAN Storage may contain the location of the target audio data
being stored. Also, the location of the target audio may be directed by a target path
that may be shared with the multimedia server and the Media Resource Function
(MRF). In other words, the SAN Storage may be shared between the multimedia
server and the Media Resource Function (MRF).
[0084] The present disclosure further discloses a user equipment (UE) for optimising mapping and retrieving a target audio based on a trigger. The user equipment (UE) comprises a system. The system of the UE further comprises at
least one transceiver unit [302] and at least one processing unit [304]. The at least
one transceiver unit [302] and the at least one processing unit [304] of the system of the UE may be connected to each other for implementing the features of the present invention. The transceiver unit [302] of the system of the UE is configured to receive, at an enterprise provisioning server (EPS) [132], a service provisioning
request, wherein the service provisioning request comprises at least an audio data.
Further, the processing unit [304] of the system of the UE is configured to generate, a target audio data associated with the service provisioning request based on the audio data. The transceiver unit [302] of the system of the UE is further configured to transmit, from the EPS [132] to a multimedia server, the target audio data.
Further, the processing unit [304] of the system of the UE is further configured to
generate, at the multimedia server, a target path associated with the target audio data. Further, the processing unit [304] of the system of the UE is configured to map, at the multimedia server, the target path and the target audio data. Further, the processing unit [304] of the system of the UE is configured to identify, the trigger
from a user. Further, the processing unit [304] of the system of the UE is configured
to retrieve, the target audio from the target path based on the identified trigger from the user.
[0085] The present disclosure further discloses a non-transitory computer readable
storage medium storing instructions for optimising mapping and retrieving a target
audio based on a trigger, the instructions include executable code which, when

executed by one or more units of a system, causes: a transceiver unit [302] of the
system to receive, at an enterprise provisioning server (EPS) [132], a service
provisioning request, wherein the service provisioning request comprises at least an
audio data. Further, the executable code when executed causes a processing unit
[304] of the system to generate, a target audio data associated with the service
provisioning request based on the audio data. Further, the executable code when executed causes the transceiver unit [302] to further transmit, from the EPS [132] to a multimedia server, the target audio data. Further, the executable code when executed causes the processing unit [304] to further generate, at the multimedia
server, a target path associated with the target audio data. Further, the executable
code when executed causes the processing unit [304] to map, at the multimedia server, the target path and the target audio data. Further, the executable code when executed causes the processing unit [304] to identify, the trigger from a user. Thereafter, the executable code when executed causes the processing unit [304] to
retrieve the target audio from the target path based on the identified trigger
from the user.
[0086] As is evident from the above, the present disclosure provides a technically advanced solution for optimising mapping and retrieving a target audio based on a
trigger. The present solution optimises storing multimedia files at a Media Resource Function (MRF) server which can be catered during an ongoing network session. Further, the present solution generates a target path for multimedia files based on a provisioning request received, that is used for retrieving the target audio based on the trigger. Thus, the present solution provides a technical effect of ease of managing audio files, i.e., the target audio, at the MRF server and provides a benefit of automatically syncing the target audio with the MRF server. This automatic sync ensures that all the target audio files are always readily available without a requirement of human intervention.
[0087] While considerable emphasis has been placed herein on the disclosed
implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the
principles of the present disclosure. These and other changes in the implementations
of the present disclosure will be apparent to those skilled in the art, whereby it is to
be understood that the foregoing descriptive matter to be implemented is illustrative
and non-limiting.

We Claim:
1. A method for optimising mapping and retrieving a target audio based on a
trigger, the method comprising:
- receiving, by a transceiver unit [302] at an enterprise provisioning server
(EPS) [132], a service provisioning request, wherein the service provisioning request comprises at least an audio data;
- generating, by a processing unit [304], a target audio data associated with
the service provisioning request based on the audio data;
- transmitting, by the transceiver unit [302] from the EPS [132] to a
multimedia server, the target audio data;
- generating, by the processing unit [304] at the multimedia server, a target
path associated with the target audio data;
- mapping, by the processing unit [304] at the multimedia server, the target path and the target audio data;
- identifying, by the processing unit [304], the trigger from a user; and
- retrieving, by the processing unit [304], the target audio from the target path
based on the identified trigger.
2. The method as claimed in claim 1, wherein the target audio is transmitted by
the transceiver unit [302] from the EPS [132] to the multimedia server in a predefined format via a predetermined transmission path.
3. The method as claimed in claim 1, wherein the trigger from the user is at least
a trigger during an active call session associated with the user.
4. The method as claimed in claim 3 further comprising: facilitating a playback of
the target audio during the active call session.

5. The method as claimed in claim 1, wherein mapping, by the processing unit [304] at the multimedia server, the target path and the target audio data further comprises:
- storing the target audio data at the target path.

6. The method as claimed in claim 1, wherein the target path is shared with the multimedia server and a Media Resource Function (MRF).
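Claims 5 and 6 together suggest that the target audio is stored at a path visible to both the multimedia server and the MRF. A minimal sketch, assuming the shared target path is a location on a common filesystem (the directory layout and function names below are hypothetical, not from the specification):

```python
import pathlib
import tempfile

# Stands in for storage shared between the multimedia server and the MRF.
shared_root = pathlib.Path(tempfile.mkdtemp())


def store_at_target_path(audio: bytes, name: str) -> pathlib.Path:
    """Multimedia-server side: store the target audio data at the target path."""
    target_path = shared_root / name
    target_path.write_bytes(audio)
    return target_path


def mrf_playback(target_path: pathlib.Path) -> bytes:
    """MRF side: read the same shared path to play the audio during a call."""
    return target_path.read_bytes()


p = store_at_target_path(b"\x00\x01audio", "target.wav")
assert mrf_playback(p) == b"\x00\x01audio"
```

Sharing a single path, rather than copying the audio to each component, is what lets the retrieval step serve both the multimedia server and the MRF from one stored object.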
7. A system for optimising mapping and retrieving a target audio based on a trigger, the system comprising:
- a transceiver unit [302], wherein the transceiver unit [302] is configured to:
• receive, at an enterprise provisioning server (EPS) [132], a service provisioning request, wherein the service provisioning request comprises at least an audio data; and
- a processing unit [304] connected to the transceiver unit [302], wherein the processing unit [304] is configured to:
• generate a target audio data associated with the service provisioning request based on the audio data;
wherein the transceiver unit [302] is configured to transmit, from the EPS [132] to a multimedia server, the target audio data;
wherein the processing unit [304] is further configured to:
generate, at the multimedia server, a target path associated with the target audio data;
map, at the multimedia server, the target path and the target audio data;
identify the trigger from a user; and
retrieve the target audio from the target path based on the identified trigger from the user.

8. The system as claimed in claim 7, wherein the target audio is transmitted by the transceiver unit [302] from the EPS [132] to the multimedia server in a predefined format via a predetermined transmission path.

9. The system as claimed in claim 7, wherein the trigger from the user is at least a trigger during an active call session associated with the user.

10. The system as claimed in claim 9, wherein the processing unit [304] is further configured to facilitate a playback of the target audio during the active call session.

11. The system as claimed in claim 7, wherein to map, at the multimedia server, the target path and the target audio data, the processing unit [304] is further configured to store the target audio data at the target path.

12. The system as claimed in claim 7, wherein the target path is shared with the multimedia server and a Media Resource Function (MRF).
13. A user equipment (UE) for optimising mapping and retrieving a target audio based on a trigger, the user equipment (UE) comprising a system, wherein the system comprises:
- a transceiver unit [302], wherein the transceiver unit [302] is configured to:
• receive, at an enterprise provisioning server (EPS) [132], a service provisioning request, wherein the service provisioning request comprises at least an audio data; and
- a processing unit [304] connected to the transceiver unit [302], wherein the processing unit [304] is configured to:
• generate a target audio data associated with the service provisioning request based on the audio data;
wherein the transceiver unit [302] is configured to transmit, from the EPS [132] to a multimedia server, the target audio data;
wherein the processing unit [304] is further configured to:
generate, at the multimedia server, a target path associated with the target audio data;
map, at the multimedia server, the target path and the target audio data;
identify the trigger from a user; and
retrieve the target audio from the target path based on the identified trigger from the user.

Documents

Application Documents

# Name Date
1 202321050013-STATEMENT OF UNDERTAKING (FORM 3) [25-07-2023(online)].pdf 2023-07-25
2 202321050013-PROVISIONAL SPECIFICATION [25-07-2023(online)].pdf 2023-07-25
3 202321050013-FORM 1 [25-07-2023(online)].pdf 2023-07-25
4 202321050013-FIGURE OF ABSTRACT [25-07-2023(online)].pdf 2023-07-25
5 202321050013-DRAWINGS [25-07-2023(online)].pdf 2023-07-25
6 202321050013-FORM-26 [21-09-2023(online)].pdf 2023-09-21
7 202321050013-Proof of Right [05-10-2023(online)].pdf 2023-10-05
8 202321050013-ORIGINAL UR 6(1A) FORM 1 & 26)-261023.pdf 2023-11-04
9 202321050013-FORM-5 [24-07-2024(online)].pdf 2024-07-24
10 202321050013-ENDORSEMENT BY INVENTORS [24-07-2024(online)].pdf 2024-07-24
11 202321050013-DRAWING [24-07-2024(online)].pdf 2024-07-24
12 202321050013-CORRESPONDENCE-OTHERS [24-07-2024(online)].pdf 2024-07-24
13 202321050013-COMPLETE SPECIFICATION [24-07-2024(online)].pdf 2024-07-24
14 202321050013-FORM 3 [02-08-2024(online)].pdf 2024-08-02
15 202321050013-Request Letter-Correspondence [20-08-2024(online)].pdf 2024-08-20
16 202321050013-Power of Attorney [20-08-2024(online)].pdf 2024-08-20
17 202321050013-Form 1 (Submitted on date of filing) [20-08-2024(online)].pdf 2024-08-20
18 202321050013-Covering Letter [20-08-2024(online)].pdf 2024-08-20
19 202321050013-CERTIFIED COPIES TRANSMISSION TO IB [20-08-2024(online)].pdf 2024-08-20