Abstract: A system and method to enable interactive voice/multimedia response (IVMR) is provided. The method encompasses receiving, at a telecom application server (TAS) [504] from a mobile terminating (MT) device [512], at least a call disconnect message and an Integrated Services Digital Network User Part (ISUP) code. The TAS [504] thereafter identifies a cause of call rejection corresponding to the received ISUP code. Thereafter, the TAS [504] transmits at least the ISUP code and the identified cause of call rejection to a media resource function (MRF) server [506]. Further, the MRF server [506] generates at least one relevant IVMR based on at least the ISUP code and the identified cause of call rejection transmitted to the MRF server [506]. Also, the MRF server [506] thereafter transmits the at least one generated IVMR to a mobile originating (MO) device [514].
FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
AND
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“SYSTEM AND METHOD TO ENABLE INTERACTIVE MULTIMEDIA
RESPONSE (IMR)”
We, Reliance Jio Infocomm Limited, an Indian National, of 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad-380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.
FIELD OF INVENTION:
The present invention relates generally to the field of wireless communication systems, and more particularly relates to providing an Interactive Voice/Multimedia Response (IVMR).
BACKGROUND OF THE INVENTION:
The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is to be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
Today, a wireless network that is widely deployed to provide various communication services such as voice, video, data, advertisement, content, messaging, broadcasts, etc., usually comprises multiple access networks and supports communications for the multiple users it hosts by sharing the available network resources.
One example of such a network is Evolved Universal Terrestrial Radio Access (E-UTRA) which is a radio access network standard meant to be a replacement of Universal Mobile Telecommunications System (UMTS) and High-Speed Downlink Packet Access/High-Speed Uplink Packet Access (HSDPA/HSUPA) technologies specified in 3GPP releases 5 and beyond. Unlike HSPA, LTE's E-UTRA is a new air interface system, unrelated to and incompatible with W-CDMA. It provides higher data rates, lower latency and is optimized for packet data. The earlier UMTS Terrestrial Radio Access Network (UTRAN) is the radio access network (RAN) defined as a part of the Universal Mobile Telecommunications System (UMTS), a third generation (3G) mobile phone technology supported by the 3rd Generation Partnership Project (3GPP). The UMTS, which is the successor to Global System for Mobile Communications (GSM) technologies, currently supports various air interface standards, such as Wideband-Code Division Multiple Access (W-CDMA), Time Division-Code Division Multiple Access (TD-CDMA), and Time Division-Synchronous Code Division Multiple Access (TD-SCDMA). The UMTS also supports enhanced 3G data communications protocols, such as High-Speed Packet Access (HSPA), which provides higher data transfer speeds and
capacity to associated UMTS networks. Furthermore, as the demand for mobile data and voice access continues to increase, research and development continue to advance the technologies not only to meet the growing demand for access, but also to advance and enhance the user experience with user devices. Evolving from the GSM/EDGE, UMTS/HSPA, CDMA2000/EV-DO and TD-SCDMA radio interfaces, E-UTRA, introduced with 3GPP Release 8, is designed to provide a single evolution path delivering increased data speeds and spectral efficiency and allowing the provision of more functionality.
Furthermore, a ‘smart computing device or user equipment (UE) or user device’ refers to any electrical, electronic, electro-mechanical computing device or equipment, or a combination of one or more of the above devices. Also, a ‘smartphone’ is one type of “smart computing device” that refers to a mobile wireless cellular connectivity device allowing end users to use services on cellular networks, such as, including but not limited to, 2G, 3G, 4G, 5G and/or the like mobile broadband internet connections, with an advanced mobile operating system which combines features of a personal computer operating system with other features useful for mobile or handheld use.
In currently available smart devices/smartphones, when a call is made from one party (say, an originating device) to another party (say, a receiving device), the existing solutions for call rejection with a message are only intended to reject the incoming call: when the call is declined by the other party, an interactive voice response (IVR) such as ‘USER is Busy’ is played and, in parallel, the selected SMS is sent out to notify the user at the originating device. In most cases, the user at the originating device might miss the received ‘call rejection SMS’, as the user might be busy listening to the IVR or might have turned off the SMS notification settings at the originating device; thereby, the reason for which the other party (i.e. the MT device) has declined the call is not known to the user at the originating device. Further, such scenarios might create a poor user experience, and the played IVR may misguide the user as to the reason for which the call is being rejected.
Furthermore, the current solutions provide the user with the following options on an incoming call user interface (UI) to respond to the incoming call:
Option 1: ANSWER
Option 2: REJECT
Option 3: IGNORE (by putting the device on Silent)
Option 4: Reject with SMS
Hence, in the current scenarios, it is not possible for smart mobile communication devices to reject calls intended to be received at the receiving device with at least a relevant audio/multimedia response to the originating device based on a call rejection option available on the user device, and to respectively play back a pre-defined IVR/IVMR in a network. Furthermore, the current solutions also fail to reject the calls intended to be received at the receiving device with at least one of an augmented reality (AR) response, a virtual reality (VR) response and an IoT response. Also, in the present scenarios the smart mobile communication devices are not able to play a call rejection response received at the originating device based on an operating condition of the originating device. Therefore, this leads to a condition where network resources may be consumed but wasted, with a poor user experience.
Another existing solution provides an interactive voicemail (voice message) selection system to refuse an incoming call, where a user indicates a specific voicemail message to be played to a calling party. Further, an indication associated with the specific pre-recorded voicemail message is received from the recipient, identifying, from a plurality of pre-recorded voicemail messages, the specific pre-recorded voicemail message that is to be delivered as a message to the calling party. So, the user has pre-recorded voice messages in a voicemail server, and based on the input from the user the specific associated voicemail is delivered to the calling party as a voice message. Hence, the message delivered to the calling party for call rejection is in the form of voice, and the voice message is pre-recorded and provided by the user itself to the voicemail server.
Currently, there are no solutions available for smart mobile communication devices to reject calls intended to be received onto the terminating device with at least one of a relevant audio/multimedia response, augmented reality (AR) response, virtual reality (VR) response and IoT response to the originating device based on an IVR/IVMR
call rejection option available on the user device and to respectively play back the pre-defined IVR/IVMR in the network. Also, the currently known solutions fail to play a call rejection response received at the originating device based on an operating condition of the originating device.
Therefore, in view of the above-cited and other inherent limitations of the existing solutions, there exists a need in the art to provide a mechanism to respond to an incoming call in such a way that the call-originating smart mobile communication device (i.e. the MO device) directly receives a response in the form of an IVR/IVMR based on a pre-programmed record of response codes mapped to audio and multimedia responses, and also to enable the MO device to play the received call rejection response based at least on the operating condition of the MO device.
SUMMARY OF THE INVENTION
This section is provided to introduce certain objects and aspects of the present invention in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter. In order to overcome at least a few problems associated with the known solutions as provided in the previous section, an object of the present invention is to provide a system and a method to enable interactive voice/multimedia response (IVMR) during a rejection of an incoming call. Another object of the present invention is to provide a method and a system for a smart mobile communications device to reject calls intended to be received at a terminating (MT) device with a relevant audio/multimedia response to an originating (MO) device based on a call rejection option available on the terminating device, and to respectively play back a pre-defined IVR/IVMR in the network. Yet another object of the present invention is to provide a method and a system to intelligently identify and notify the MO party about a situation directly over IVR/IVMR instead of an SMS which might get missed, by introducing separate cause codes in a reason header for each of the pre-defined ‘Rejection IVRs/IVMRs’, whereby the network identifies a cause of the rejection and plays out a relevant IVR/IVMR to the other party (MO device). Yet another object of the present invention is to provide a system and a method for improving the experience of a ‘faster call rejection’ not only for the terminating party but also for the originating
party. Yet another object of the present invention is to provide a system and a method for identifying the pre-defined call rejection causes at the network based on an input from the MT device (the call rejection option); instead of having to send an SMS to the other party regarding the cause, the same can be intimated using IVR/IVMR by mapping the audio/multimedia responses to the call rejection cause codes. Yet another object of the present invention is to provide a system and a method for improving user experience by directly delivering the response in the form of audio/multimedia in an IVR/IVMR instead of messaging. Yet another object of the present invention is to provide a system and a method for call rejection with the pre-defined audio/multimedia response at the network and to have a better user experience, which is currently the need of the hour in the industry. Yet another object of the present invention is to provide a method and a system for improving user experience for network subscriptions in multi-SIM, multi-active wireless devices by providing a mechanism for call rejection with a pre-defined audio/multimedia response. Yet another object of the present invention is to provide a method and a system for enabling an ecosystem that provides a seamless enhancement of sessions in multi-SIM, multi-active wireless devices. Yet another object of the present invention is to provide a system and a method for call rejection with the pre-defined audio/multimedia response in user devices independent of whether the user device is 5G/4G/3G/EV-DO/eHRPD capable.
In order to achieve the aforementioned objectives, the present invention provides a method and system to enable interactive voice/multimedia response (IVMR). An aspect of the present invention relates to a method to enable interactive voice/multimedia response (IVMR). The method comprises receiving, at a Telecom Application Server (TAS) from a mobile terminating (MT) device, at least a call disconnect message and an Integrated Services Digital Network User Part (ISUP) code. The method thereafter encompasses identifying, at the TAS, a cause of call rejection corresponding to the received ISUP code based on a mapping of at least one ISUP code and at least one user-selected Interactive Voice/Media Response (IVMR) for call rejection stored at the TAS. The method further comprises transmitting, by the TAS, at least the ISUP code and the identified cause of call rejection to a media resource function (MRF) server. Thereafter the method comprises generating, at the MRF
server, at least one relevant IVMR based on at least the ISUP code and the identified cause of call rejection transmitted to the MRF server, wherein the at least one relevant IVMR comprises at least a text response data, an audio response data, a video response data, an augmented reality (AR) response data, a virtual reality (VR) response data and an IoT response data. The text response data is sent from a Short Message Service Center (SMSC). The audio response data and the video response data are identified at the MRF server, and the augmented reality (AR) response data, the virtual reality (VR) response data and the IoT response data are extracted from an IVMR server. Further, the method encompasses transmitting, by the MRF server, the at least one generated IVMR to a mobile originating (MO) device.
Another aspect of the present invention relates to a system to enable interactive voice/multimedia response (IVMR). The system comprises a Telecom Application Server (TAS) configured to receive, from a mobile terminating (MT) device, at least a call disconnect message and an ISUP code. The TAS is further configured to identify a cause of call rejection corresponding to the received ISUP code based on a mapping of at least one ISUP code and at least one user-selected Interactive Voice/Media Response (IVMR) for call rejection stored at the TAS. Further, the TAS is configured to transmit at least the ISUP code and the identified cause of call rejection to a media resource function (MRF) server. Further, the MRF server is configured to generate at least one relevant IVMR based on at least the ISUP code and the identified cause of call rejection transmitted to the MRF server, wherein the at least one relevant IVMR comprises at least a text response data, an audio response data, a video response data, an augmented reality (AR) response data, a virtual reality (VR) response data, and an IoT response data. The text response data is sent from a Short Message Service Center (SMSC). The audio response data and the video response data are identified at the MRF server, and the augmented reality (AR) response data, the virtual reality (VR) response data and the IoT response data are extracted from an IVMR server. Thereafter, the MRF server is configured to transmit the at least one generated IVMR to a mobile originating (MO) device.
BRIEF DESCRIPTION OF DRAWINGS
The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
Figure 1 illustrates an exemplary network architecture diagram indicating latching of a user device with radio access technologies (RATs), in accordance with exemplary embodiments of the present invention.
Figure 2 illustrates an exemplary block diagram of Long Term Evolution (LTE) eNodeB, in accordance with exemplary embodiments of the present invention.
Figure 3 illustrates an exemplary block diagram of a user equipment comprising a subscriber identity module (SIM), in accordance with exemplary embodiments of the present invention.
Figure 4 illustrates an exemplary network architecture diagram [400], in accordance with exemplary embodiments of the present invention.
Figure 5 illustrates an exemplary block diagram of a system [500] to enable interactive voice/multimedia response (IVMR), in accordance with exemplary embodiments of the present invention.
Figure 6 illustrates an exemplary block diagram [600] to enable interactive voice/multimedia response (IVMR) in an LTE network, in accordance with exemplary embodiments of the present invention.
Figure 7 illustrates an exemplary method flow diagram [700], depicting a method to enable interactive voice/multimedia response (IVMR), in accordance with exemplary embodiments of the present invention.
Figure 8 illustrates an exemplary diagram of a process flow depicting a method to enable interactive voice/multimedia response (IVMR), in accordance with exemplary embodiments of the present invention.
Figure 9A illustrates an exemplary use case of providing an AR response, in accordance with exemplary embodiments of the present invention.
Figure 9B illustrates an exemplary use case of providing a VR response, in accordance with exemplary embodiments of the present invention.
Figure 10 illustrates an exemplary flow diagram [1000], depicting an instance implementation of a process at a mobile terminating device to enable interactive voice/multimedia response (IVMR), in accordance with exemplary embodiments of the present invention.
Figure 11 illustrates an exemplary flow diagram [1100], depicting an instance implementation of a process at a network entity to enable interactive voice/multimedia response (IVMR), in accordance with exemplary embodiments of the present invention.
Figure 12 illustrates an exemplary sequence diagram, depicting an instance implementation of the process of enabling an interactive voice/multimedia response (IVMR), in accordance with exemplary embodiments of the present invention.
Figure 13 illustrates an exemplary user interface diagram, depicting various exemplary user interfaces at an exemplary MT device, in accordance with exemplary embodiments of the present invention.
The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION OF THE INVENTION
In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of embodiments of the invention. However, it will be apparent that various embodiments may be practiced without these specific details. The figures and description are not intended to be restrictive.
The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the invention as set forth.
Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a sequence diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
Furthermore, embodiments may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks (e.g., a computer-program product) may be stored in a machine-readable medium. A processor(s) may perform the necessary tasks.
The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
As utilized herein, terms “component,” “system,” “platform,” “node,” “layer,” “selector,” “interface,” and the like are intended to refer to a computer-related entity, hardware, software (e.g., in execution), and/or firmware. For example, a component
can be a process running on a processor, a processor, an object, an executable, a program, a storage device, and/or a computer. By way of illustration, an application running on a server and the server can be a component. One or more components can reside within a process and a component can be localized on one computer and/or distributed between two or more computers.
Further, these components can execute from various computer-readable media having various data structures stored thereon. The components may communicate via local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems via the signal). As another example, a component can be an apparatus with specific functionality provided by mechanical parts operated by electric or electronic circuitry which is operated by a software application or a firmware application executed by a processor, wherein the processor can be internal or external to the apparatus and executes at least a part of the software or firmware application. As yet another example, a component can be any apparatus that provides specific functionality through electronic components without mechanical parts, the electronic components can include a processor therein to execute software or firmware that confers at least in part the functionality of the electronic components.
In addition, the disclosed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term “article of manufacture” as used herein is intended to encompass a computer program accessible from any computer-readable device, computer-readable carrier, or computer-readable media. For example, computer-readable media can include, but are not limited to, magnetic storage devices (e.g., hard disk, floppy disk, magnetic strip(s)), optical disks (e.g., compact disk (CD), digital video disc (DVD), Blu-ray Disc™ (BD)), smart card(s), and flash memory device(s) (e.g., card, stick, key drive).
Moreover, terms like “source and/or destination user equipment (UE)”, “mobile station”, “smart computing device”, “user device”, “user equipment”, “device”,
“smart mobile communications device”, “mobile communication device”, “mobile device”, “mobile subscriber station,” “access terminal,” “terminal,” “handset,” “originating device,” “terminating device,” and similar terminology refer to any electrical, electronic, electro-mechanical computing device or equipment, or a combination of one or more of the above devices. Smart computing devices may include, but are not limited to, a mobile phone, smartphone, virtual reality (VR) devices, augmented reality (AR) devices, pager, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device as may be obvious to a person skilled in the art. In general, a smart computing device is a digital, user-configured, computer-networked device that can be operated autonomously. A smart computing device is one of the appropriate systems for storing data and other private/sensitive information. The smart computing device operates at all seven levels of the ISO reference model, but its primary function is related to the application layer along with the network, session and presentation layers. The smart computing device may also have additional features such as a touch screen, an apps ecosystem, physical and biometric security, etc. Further, a ‘smartphone’ is one type of “smart computing device” that refers to a mobile wireless cellular connectivity device allowing end users to use services on cellular networks, such as, including but not limited to, 2G, 3G, 4G, 5G and/or the like mobile broadband internet connections, with an advanced mobile operating system which combines features of a personal computer operating system with other features useful for mobile or handheld use. These smartphones can access the Internet, have a touchscreen user interface, can run third-party apps including the capability of hosting online applications, are music players and camera phones possessing high-speed mobile broadband 4G LTE internet with video calling, hotspot functionality, motion sensors, mobile payment mechanisms and enhanced security features with alarm and alert in emergencies. Mobility devices may include smartphones, wearable devices, smart-watches, smart bands, wearable augmented devices, etc. For the sake of specificity, the term mobility device refers to both feature phones and smartphones in the present disclosure, but this does not limit the scope of the disclosure, which may extend to any mobility device in implementing the technical solutions. The above smart devices, including smartphones as well as feature phones and IoT devices, enable
the communication on the devices. Further, the foregoing terms are utilized interchangeably in the subject specification and related drawings.
Furthermore, the terms “user,” “subscriber,” “customer,” “consumer,” “owner,” and the like are employed interchangeably throughout the subject specification and related drawings, unless context warrants particular distinction(s) among the terms. It should be appreciated that such terms can refer to human entities, or automated components supported through artificial intelligence, e.g., a capacity to make inference based on complex mathematical formulations, that can provide simulated vision, sound recognition, decision making, etc. In addition, the terms “wireless network” and “network” are used interchangeably in the subject application, unless context warrants particular distinction(s) among the terms.
As used herein, a “processor” or “processing unit” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, a low-end microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
The present invention provides a novel method and system to enable interactive voice/multimedia response (IVMR) upon disconnection, at a mobile terminating (MT) device, of an incoming call from a mobile originating (MO) device. More specifically, the present invention provides a solution for rejecting incoming calls with pre-defined multimedia responses such as voice, video, text, augmented reality and virtual reality responses, and IoT responses, based on cause codes (i.e. ISUP codes) mapped to one or more use cases (causes of call rejection), for any mobile communications device. Also, to provide the Interactive Voice/Media Response (IVMR), the present invention encompasses storing, at an Interactive Voice/Media Response (IVMR) server at the network entity, data corresponding to at least one of one or more responses and
input from users of at least one of the MT and the MO devices, wherein the MO and MT devices are configured to manage in a programmed manner, via a processing unit, at least one of an audio, a video, an AR, a VR and an IoT functionality. More specifically, the MO and MT devices are configured to generate one or more holographic animated creatures/characters of the users (for XR: AR/VR purposes), one or more audio messages, one or more video messages and/or one or more IoT response data, and to feed the data corresponding to the same to the IVMR server as a pre-defined IVMR response. The holographic animated creature refers to a representation of the user comprising an animation as per a response code. Further, to enable interactive voice/multimedia response, the present invention encompasses introducing, with a call disconnect message (such as a call rejection response ‘486 – Busy’), separate cause codes (ISUP codes) in a reason header for each of the pre-defined rejection interactive voice/multimedia responses (IVMRs), and accordingly identifying at a network entity the cause of the call rejection at the MT device so as to play out a relevant IVMR/IVR to the MO device. Also, the present invention encompasses playing at the MO device the relevant IVMR/IVR based on at least an operating condition of the user of the MO device and a pre-trained data set. The present invention, therefore, encompasses intelligently identifying at a network entity and notifying at the MO device a situation (i.e. a cause of call rejection at the MT device) directly over an Interactive Voice/Media Response (IVMR) instead of an SMS which might get missed at the MO device; thereby the present invention provides a solution that helps to provide a better experience of a ‘faster call rejection’ not only to the terminating user (i.e. the user at the MT device) but also to the user at the MO device.
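By way of illustration only, and without limiting the foregoing, the following Python sketch shows how the pre-defined IVMR response data described above might be fed to and stored at the IVMR server. The class and method names (IVMRResponse, IVMRServer, store_response, fetch_response) and the storage format are assumptions made for this sketch; the present invention does not prescribe any particular programming interface.

```python
# Hypothetical sketch: feeding pre-defined IVMR response data from a user
# device to the IVMR server. All names and the storage format below are
# assumptions for illustration; they are not a standardized interface.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class IVMRResponse:
    isup_code: int                 # cause code mapped to this response
    text: str = ""                 # text response data (sent via the SMSC)
    audio_uri: str = ""            # audio response data (identified at the MRF)
    video_uri: str = ""            # video response data (identified at the MRF)
    ar_uri: str = ""               # AR response data (stored at the IVMR server)
    vr_uri: str = ""               # VR response data (stored at the IVMR server)
    iot_payload: Dict[str, str] = field(default_factory=dict)  # IoT response data

class IVMRServer:
    """Minimal in-memory stand-in for the IVMR server's response store."""
    def __init__(self) -> None:
        self._store: Dict[int, IVMRResponse] = {}

    def store_response(self, subscriber: str, response: IVMRResponse) -> None:
        # A deployed server would key by subscriber as well; one response per
        # cause code suffices for this sketch.
        self._store[response.isup_code] = response

    def fetch_response(self, isup_code: int) -> IVMRResponse:
        return self._store[isup_code]

# Example: the MT user pre-records a "Driving" response mapped to cause code 155.
server = IVMRServer()
server.store_response("user-mt", IVMRResponse(
    isup_code=155,
    text="Driving. Will call you back.",
    audio_uri="rtsp://mrf.example/driving.amr",
    ar_uri="https://ivmr.example/driving-hologram.glb",
))
```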
Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily carry out the present disclosure.
FIG. 1 illustrates an exemplary network architecture diagram indicating latching of a user device with radio access technologies (RATs), in accordance with exemplary embodiments of the present invention. As shown at [100A] and [100B] in Figure 1, the user equipment (UE) [102] may be latched to a long term evolution (LTE) network [110] or to a legacy (UTRAN/GSM) network [106] via eNodeB [108] and NodeB/BTS [104], respectively. Therefore, Figure 1 indicates the latching of the UE [102] with various radio access technologies (RATs). Further, the wireless communication network(s) with which the UE [102] is connected/latched provide to the latched UEs [102] one or more services (e.g., voice traffic, data traffic, etc.) through base station(s) of the said wireless network(s).
Furthermore, the network architecture diagram as indicated in Figure 1 is an exemplary network architecture diagram, and the wireless communication networks/network entities (i.e. the LTE network and the UTRAN/GSM network) as disclosed in Figure 1 are exemplary; other network entities, such as other legacy networks and/or next-generation 5G New Radio (NR) networks, with which the UE [102] is latched, can also be implemented. Also, in an implementation the user equipment [102] may be one of a single-SIM and a multi-SIM user device. Therefore, in such an implementation, the latching of the single-SIM or the multi-SIM user device will be with a single radio access technology (RAT) or with multiple different radio access technologies (RATs), respectively.
FIG. 2 illustrates an exemplary block diagram of the LTE eNodeB, in accordance with exemplary embodiments of the present invention. As depicted in Figure 2, the LTE eNodeB may include, but is not limited to, a call processing unit, a radio resource management unit, SON functions, E-GTPU, SCTP/UDP/IP protocols, X2-AP and S1-AP interfaces, a MAC layer, an RLC layer, a PDCP layer, an RRC layer, a physical layer (PHY layer), a scheduler and any other such unit obvious to a person skilled in the art.
FIG. 3 illustrates an exemplary block diagram of a user equipment [300] comprising a subscriber identity module (SIM), in accordance with exemplary embodiments of the present invention. The user equipment [300] as indicated in Figure 3 comprises at least one subscriber identity module (SIM)/universal integrated circuit card (UICC) [320]. The user equipment [300] may further comprise a plurality of subsystems [302, 302A, 302B, 302C, 303, 304, 305 and 306], wherein said subsystems may include, but are not limited to, a modem subsystem [302] with a baseband DSP processor [302C] and a plurality of radio interfaces [302A]. The user equipment [300] may further include a cellular radio, i.e. a transmission/reception radio frequency (RF) unit connected to the antenna [107], for receiving and transmitting wireless services such as VoIP and
Internet/Intranet services. Also, the user equipment [300] may comprise an application processor [304], a memory subsystem [305], a power subsystem [306] and an external I/O interfaces subsystem [303]. The present disclosure further encompasses that the subscriber identity module [320] may comprise a processor [320B], an I/O interface [320A], a RAM temporary storage [320C], an EEPROM/Non-Volatile Memory (NVM) [320D] and a SIM file system [320E]. Further, the EEPROM/Non-Volatile Memory (NVM) [320D] may consist of an operating system code, a code of other SIM applications and the Auto International Mobile Subscriber Identity (IMSI) Switch SIM application. The SIM file system [320E] and USIM application may contain elementary files and location parameters such as EFLOCI (Location Information), EFPSLOCI (PS Location Information), EFEPSLOCI (EPS Location Information) and various other application-specific files used by various SIM applications running on the subscriber identity module [320], along with a plurality of context and configuration files of the Auto IMSI Switch SIM application.
FIG. 4 illustrates an exemplary network architecture diagram [400], in accordance with exemplary embodiments of the present invention. As shown in Figure 4, the exemplary network architecture diagram [400] comprises a number of components/units, including but not limited to a Mobility Management Entity (MME) [402], an Evolved Universal Terrestrial Access Network (E-UTRAN) [404], a Home Subscriber Server (HSS) [406], a serving gateway [408], a Packet Data Network (PDN) gateway [410], a Serving GPRS Support Node (SGSN), a UMTS Terrestrial Radio Access Network (UTRAN), a GSM/EDGE Radio Access Network (GERAN), a policy and charging rules function (PCRF), etc. Further, only those units/components of the network architecture diagram [400] that are relevant here are explained. Also, the components as shown in the network architecture diagram [400] are connected with each other over various interfaces; for example, the MME [402] is connected to the E-UTRAN [404] over an S1-MME interface and to the HSS [406] over an S6a interface, etc.
Further, the MME (Mobility Management Entity) [402] deals with the control plane and is configured to handle signaling related to mobility and security for E-UTRAN access. The MME [402] is responsible for tracking and paging of one or more user equipments (UEs) in idle mode. The MME [402] is also a termination point of the Non-Access Stratum (NAS).
The E-UTRAN [404] comprises a set of eNodeBs connected to the Evolved Packet Core (EPC) through an S1 interface. Further, an eNodeB can support Frequency Division Duplex (FDD) mode, Time Division Duplex (TDD) mode or dual-mode operation. Also, the eNodeB is responsible for assigning one or more radio resources to the one or more UEs.
Further, the HSS (Home Subscriber Server) [406] is a database that contains user-related and subscriber-related information. The HSS [406] also provides support functions in mobility management, call and session setup, user authentication and access authorization.
Furthermore, the gateways (i.e. the Serving GW [408] and the PDN GW [410]) deal with the user plane. These gateways are configured to transport IP data traffic between the one or more user equipments (UEs) and one or more external networks. The Serving GW [408] is the point of interconnect between the radio side and the EPC. Further, the Serving GW [408] serves the one or more UEs by routing the incoming and outgoing IP packets. Also, the Serving GW [408] is an anchor point for intra-LTE mobility (i.e. in case of handover between eNodeBs) and for mobility between LTE and other 3GPP accesses. Also, the Serving GW [408] is logically connected to the other gateway, i.e. to the PDN GW [410].
The PDN GW [410] is a point of interconnect between the EPC and the external IP networks. These networks are called PDN (Packet Data Network), hence the name. The PDN GW [410] routes packets to and from one or more Packet Data Networks (PDNs). The PDN GW [410] also performs various functions such as IP address/IP prefix allocation or policy control and charging etc.
Further, Figure 4 also indicates that a user equipment [300] is connected with the E-UTRAN [404]. The user equipment (UE) [300] is latched to the E-UTRAN [404] to avail at least one service from the corresponding network entity. Also, in an implementation the user equipment [300] is one of an MO device and an MT device configured at least to receive and provide, respectively, at least one Interactive
Voice/Media Response (IVMR) in coordination with the corresponding network entity. Also, the network architecture diagram [400] is an exemplary network architecture diagram and the features of the present invention can be implemented on any network obvious to a person skilled in the art.
FIG. 5 illustrates an exemplary block diagram of a system [500] to enable interactive voice/multimedia response (IVMR), in accordance with exemplary embodiments of the present invention. The system [500] comprises at least one telecom application server (TAS) [504], at least one media resource function (MRF) server [506], at least one Short Message Service Center (SMSC) [508], at least one Interactive Voice/Media Response (IVMR) server [510], at least one mobile terminating (MT) device [512] and at least one mobile originating (MO) device [514]. Further, the telecom application server (TAS) [504], the media resource function (MRF) server [506], the Interactive Voice/Media Response (IVMR) server [510] and the Short Message Service Center (SMSC) [508] reside at a network entity [502], wherein in an implementation the network entity [502] is connected to at least one of the mobile terminating (MT) device [512] and the mobile originating (MO) device [514]. Also, all of the components/units of the system [500] are assumed to be connected to each other unless otherwise indicated below. Also, in Fig. 5 only a few units are shown; however, the system [500] may comprise multiple such units, or the system [500] may comprise any such number of said units obvious to a person skilled in the art or as required to implement the features of the present disclosure.
The system [500] is configured to enable interactive voice/multimedia response (IVMR) with the help of the interconnection between its components/units.
The telecom application server (TAS) [504] is configured to receive, from a mobile terminating (MT) device [512], at least a call disconnect message and an Integrated Services Digital Network User Part (ISUP) code. Further, for the TAS [504] to receive from the mobile terminating (MT) device [512] at least the call disconnect message and the ISUP code, the MT device [512] is configured to receive an incoming call request from the MO device [514]. Further, upon receipt of the incoming call request from the MO device [514], an additional option to reject the incoming call with an IVMR is provided at an Incoming Call User Interface (UI) of the MT device [512], along with the existing options of call rejection. Therefore, in an implementation a user at the MT device [512] gets the following options to respond to the incoming call:
Option 1: ANSWER
Option 2: REJECT
Option 3: IGNORE (by putting the device on Silent)
Option 4: Reject with SMS
Option 5: Reject with IVMR
More specifically, the ‘Reject with IVMR’ option is present on the MT device [512] to provide the user with the option of call rejection with an IVMR. For example, the mobile originating device [514] initiates a call, said call terminates on the terminating device [512], and the MT device [512] starts RINGING. Thereafter, the user at the MT device [512] is presented with multiple methods to respond to the incoming call, such as: “ACCEPT”, “DECLINE”, “IGNORE”, “REJECT with SMS OR IVMR”. The Reject with IVMR option is provided to respond to the incoming call in such a way that a cause of rejection is correctly notified at the MO device [514] in the form of an Interactive Voice/Media Response (IVMR), wherein the IVMR response is provided in the form of an audio, a video, a text, an augmented reality (AR) response, a virtual reality (VR) response and/or an IoT response.
Further, upon receipt of the options of call rejection, the MT device [512] is configured to receive a first user selection to reject the incoming call with an IVMR (i.e. with the Reject with IVMR option). Thereafter, the TAS [504] is configured to provide one or more IVMRs for call rejection on a display of the MT device [512], wherein the MT device [512] is further configured to receive a second user selection of an IVMR for call rejection, from the one or more IVMRs for call rejection displayed on the MT device [512], to reject the incoming call. The one or more IVMRs for call rejection are one or more pre-defined cases of “Rejection with IVMR”; for example, an exemplary IVMR for call rejection is “I’m on my way”. Further, considering the above-stated example, if the user selects the “REJECT with SMS OR IVMR” method, then the user at the MT device [512] is further presented with one or more pre-defined cases of call rejection
for SMS as well as IVMR, wherein the user needs to choose one option to reject the incoming call. Further, for the IVMR, the pre-defined cases of call rejection with IVMR (i.e. the one or more IVMRs for call rejection) are mapped to respective ISUP codes to be generated by the MT device [512] and sent along with a call disconnect message to the network.
Further, based on the call rejection, the MT device [512] is thereafter configured to identify the ISUP code corresponding to the user-selected IVMR for call rejection. The MT device [512] is also further configured to generate the call disconnect message (i.e. a call disconnect response ‘486 – Busy Here’), wherein at least the generated call disconnect message and the identified ISUP code are transmitted to the TAS [504] based on the rejection of the incoming call.
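By way of a non-limiting illustration, the following sketch shows how the MT-device side of this step might look: the user-selected IVMR is mapped to its ISUP code (mirroring Table 1 below) and the call disconnect message is generated with the cause carried in a Reason header. The Reason header syntax follows RFC 3326 (protocol value Q.850); the mapping dictionary and the helper name build_call_disconnect are assumptions for this sketch.

```python
# Hedged sketch: building a SIP '486 Busy Here' call disconnect message whose
# Reason header carries the ISUP cause mapped to the user-selected IVMR.
# The Reason header follows RFC 3326; the helper below is hypothetical.
USER_SELECTED_IVMR_TO_ISUP = {
    "I'm on my way.": 150,
    "Will call you later.": 151,
    "In meeting.": 152,
    "Can't talk right now.": 153,
    "Call back in 5 minutes.": 154,
    "Driving.": 155,
}

def build_call_disconnect(selected_ivmr: str) -> str:
    """Return the SIP response lines the MT device sends towards the network."""
    isup_code = USER_SELECTED_IVMR_TO_ISUP[selected_ivmr]
    return (
        "SIP/2.0 486 Busy Here\r\n"
        f"Reason: Q.850;cause={isup_code};text=\"{selected_ivmr}\"\r\n"
    )

print(build_call_disconnect("Driving."))
# SIP/2.0 486 Busy Here
# Reason: Q.850;cause=155;text="Driving."
```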
Thereafter, upon receipt of the call disconnect message and the ISUP code, the TAS [504] is configured to identify a cause of call rejection corresponding to the received ISUP code based on a mapping of at least one ISUP code and at least one user-selected IVMR for call rejection stored at the TAS [504]. The mapping of the at least one ISUP code and the at least one user-selected IVMR for call rejection is based on the at least one ISUP code defined/assigned at the TAS [504] for the at least one user-selected IVMR for call rejection. Furthermore, in an implementation the TAS [504] is configured to define one or more new ISUP codes for the one or more IVMRs for call rejection (designated actions/pre-defined cases of rejection with IVMR). Also, in another implementation the TAS [504] is configured to assign to the one or more IVMRs for call rejection one or more subsets of the existing ISUP code “17 – User Busy” mapped to response code ‘486 – Busy’. Further, in accordance with the implementation of the features of the present invention, some exemplary ISUP codes defined for exemplary IVMRs for call rejection are provided below in Table 1:
Options in “Rejection with IVR” | Pre-defined Cases of “Rejection with IVR” | ISUP code | Subset of ISUP Cause code ‘17’
Option 1 | I’m on my way. | 150 | 17,a
Option 2 | Will call you later. | 151 | 17,b
Option 3 | In meeting. | 152 | 17,c
Option 4 | Can’t talk right now. | 153 | 17,d
Option 5 | Call back in 5 minutes. | 154 | 17,e
Option 6 | Driving. | 155 | 17,f
TABLE 1
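Assuming a simple in-memory mapping, the TAS-side identification step implied by Table 1 may be sketched as follows; a deployed TAS would hold this mapping in its service configuration rather than in a Python dictionary, and the function name identify_cause is illustrative only.

```python
# Sketch of the TAS-side identification step: map a received ISUP code (or a
# subset of cause code 17) to the cause of call rejection, per Table 1 above.
ISUP_TO_CAUSE = {
    150: "I'm on my way.",
    151: "Will call you later.",
    152: "In meeting.",
    153: "Can't talk right now.",
    154: "Call back in 5 minutes.",
    155: "Driving.",
}

# Alternative encoding as subsets of the existing ISUP cause '17 - User Busy'.
SUBSET_TO_CAUSE = {
    ("17", "a"): "I'm on my way.",
    ("17", "f"): "Driving.",
}

def identify_cause(isup_code: int) -> str:
    """Identify the cause of call rejection corresponding to the ISUP code."""
    try:
        return ISUP_TO_CAUSE[isup_code]
    except KeyError:
        # Unmapped codes fall back to the generic busy treatment (assumed).
        return "User Busy"

assert identify_cause(152) == "In meeting."
```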
Further, after identification of the cause of call rejection, the TAS [504] is configured to transmit at least the ISUP code and the identified cause of call rejection to the media resource function (MRF) server [506].
The MRF server [506] is thereafter configured to generate at least one relevant IVMR based on at least the ISUP code and the identified cause of call rejection transmitted to the MRF server [506], wherein the at least one relevant IVMR comprises at least a text response data, an audio response data, a video response data, an augmented reality (AR) response data, a virtual reality (VR) response data and an IoT response data. The text response data is sent from the Short Message Service Center (SMSC) [508] based on a mapping of the at least one ISUP code with at least one text response data. The audio response data and the video response data are identified at the MRF server [506] based on a mapping of the at least one ISUP code with at least one audio response data and at least one video response data, respectively. The augmented reality (AR) response data and the virtual reality (VR) response data are extracted by the MRF server [506] from the IVMR server [510] based on a mapping of the at least one ISUP code with at least one augmented reality (AR) response data and at least one virtual reality (VR) response data, respectively. The IoT response data is extracted by the MRF server [506] from the IVMR server [510] based on a mapping of the at least one ISUP code with at least one IoT response data. Further, the text response data, the audio response data, the video response data, the augmented reality (AR) response data, the virtual reality (VR) response data and the IoT response data are user-configurable. More specifically, the MT device [512] and the MO device [514] are configured to manage in a programmed manner, via a processing unit, at least one of
an audio, a video, an AR, a VR and an IoT functionality based on a pre-trained data set. The pre-trained data set comprises a plurality of data trained on the basis of implementation of artificial intelligence (machine learning) techniques. Furthermore, the pre-trained data set encompasses a plurality of trained data related at least to the audio, the video, the augmented reality (AR), the virtual reality (VR) and the IoT functionality. More specifically, each of the MO device [514] and the MT device [512] is configured to generate, via its respective processing unit, one or more holographic animated creatures/characters of the users (for XR: AR/VR purposes), one or more audio messages, one or more video messages and/or one or more IoT data based on the pre-trained data set, and to feed the data corresponding to the generated one or more holographic animated creatures, one or more audio messages, one or more video messages and/or one or more IoT data to the IVMR server [510] as a pre-defined IVMR response, wherein the holographic animated creature refers to a representation of the user comprising an animation, as per a response (ISUP) code, generated onto a designated user as an IVMR response. Thereafter, the IVMR server [510] is configured to store the received data (the pre-defined IVMR response).
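The MRF-side generation step described above can be pictured as a dispatch over the response types. The following hedged sketch reuses the illustrative IVMRServer from the earlier sketch and stands in a trivial stub for the SMSC; none of these class or method names denote standardized interfaces.

```python
# Hedged sketch of the MRF generation step: assemble the relevant IVMR for a
# given ISUP code. Text response data comes via the SMSC, audio/video response
# data is identified at the MRF itself, and AR/VR/IoT response data is
# extracted from the IVMR server. All classes here are illustrative stand-ins.
from typing import Dict

class SMSCStub:
    def text_for(self, isup_code: int, cause: str) -> str:
        return cause  # the SMSC supplies the text mapped to the ISUP code

class MRFServer:
    def __init__(self, smsc, ivmr_server, av_catalog: Dict[int, Dict[str, str]]):
        self.smsc = smsc                # Short Message Service Center client
        self.ivmr_server = ivmr_server  # IVMR server client (see earlier sketch)
        self.av_catalog = av_catalog    # ISUP code -> audio/video URIs at the MRF

    def generate_ivmr(self, isup_code: int, cause: str) -> Dict[str, object]:
        av = self.av_catalog.get(isup_code, {})
        xr = self.ivmr_server.fetch_response(isup_code)   # AR/VR/IoT data
        return {
            "text": self.smsc.text_for(isup_code, cause), # via SMSC [508]
            "audio": av.get("audio"),                     # identified at the MRF
            "video": av.get("video"),                     # identified at the MRF
            "ar": xr.ar_uri,                              # from IVMR server [510]
            "vr": xr.vr_uri,                              # from IVMR server [510]
            "iot": xr.iot_payload,                        # from IVMR server [510]
        }
```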
Further, after generation of the at least one relevant IVMR, the MRF server [506] is configured to transmit the at least one generated IVMR to the mobile originating (MO) device [514]. The MO device [514] is thereafter configured to play the at least one received IVMR. Also, in order to play the at least one received IVMR, the MO device [514] is further configured to continuously detect an operating condition of a user of the MO device [514]. Further, the at least one received IVMR is played on the MO device [514] based on at least one of the detected operating condition and the pre-trained data set. For instance, in an event an IVMR response comprising an audio, a video, a text, an augmented reality (AR) response, a virtual reality (VR) response and an IoT response is received at the MO device [514], the MO device [514] is configured to evaluate a state of the MO device [514]/user of the MO device [514], based at least on the pre-trained data set, to present the appropriate audio/video/text/augmented reality (AR)/virtual reality (VR)/IoT response at the MO device [514]. Further, some exemplary use cases relating to presentation/playing of the received IVMR(s) at the MO device [514] are given below (see also the sketch following these use cases):
- If the MO device [514] is identified close to the ear of the user, the MO device [514] is configured to play the audio response(s) received for the mapped cause (ISUP) code via the MRF server [506].
- If the MO device [514] is identified in the user’s hand in landscape mode, or a connected headphone is identified, or as per a user preference, the MO device [514] is configured to play the video of the holographic animated creature/character of the user received as the response, with a picture-in-picture preview.
- If the MO device [514] is identified in the user’s hand in portrait mode, or as per the user preference, and/or a connected AR-based device is identified (i.e. Smart AR Glass, etc.), the MO device [514] is configured to play a video of the holographic animated creature/character of the user as an augmented reality response.
- If the MO device [514] is identified to be connected to a VR-based device (i.e. Smart VR Glass, etc.) and/or as per a pre-defined preference, the MO device [514] is configured to play the video of the holographic animated creature/character of the user as a virtual reality response.
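Under the assumption that the MO device exposes its detected operating condition as a simple state structure, the above use cases can be summarized by a decision routine such as the following sketch; DeviceState and its fields are illustrative stand-ins for the device's proximity, orientation and peripheral detection (and the pre-trained data set).

```python
# Sketch of the MO device's playback decision following the use cases above.
# DeviceState and its fields are illustrative assumptions; a real device would
# derive them from proximity/orientation sensors and connected peripherals.
from dataclasses import dataclass

@dataclass
class DeviceState:
    near_ear: bool = False
    in_hand_landscape: bool = False
    in_hand_portrait: bool = False
    headphones_connected: bool = False
    ar_device_connected: bool = False
    vr_device_connected: bool = False

def choose_response_type(state: DeviceState) -> str:
    """Pick which component of the received IVMR to play on the MO device."""
    if state.near_ear:
        return "audio"    # play the audio response for the mapped cause code
    if state.in_hand_landscape or state.headphones_connected:
        return "video"    # holographic character video, picture-in-picture
    if state.in_hand_portrait or state.ar_device_connected:
        return "ar"       # holographic character as an AR response
    if state.vr_device_connected:
        return "vr"       # holographic character as a VR response
    return "text"         # assumed fallback to the text response data

assert choose_response_type(DeviceState(near_ear=True)) == "audio"
assert choose_response_type(DeviceState(vr_device_connected=True)) == "vr"
```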
Also, the MO device [514] is thereafter configured to transmit to the MT device [512] at least one response to the at least one received IVMR. The MO device [514] is configured to revert to the MT device [512] with the response to the at least one received IVMR in any of the forms stated above.
Figure 6 illustrates an exemplary block diagram [600] to enable interactive voice/multimedia response (IVMR) in an exemplary LTE network, in accordance with exemplary embodiments of the present invention. The exemplary block diagram [600] as indicated in Figure 6 shows that an MO UE [514] is connected with an MT UE [512] via an IMS network [602] in an event a call is initiated from the MO UE [514] to the MT UE [512] in an LTE environment. The MO UE [514] and the MT UE [512] are connected to the LTE network via an LTE-Uu interface. Also, the MO UE [514] and the MT UE [512] are connected to a PCSCF [604] of the IMS network [602] over a Gm interface. Further, the IMS network [602] comprises two PCSCFs [604] connected to an SCSCF [606] over an Mw interface. The SCSCF [606] in the IMS network [602] is further connected to a telecom application server (TAS) [608] via an ISC interface. Thereafter, the TAS
[608] is connected to an IVMR server [612], an SMSC [614] and a media server (MRF) [610]. The TAS [608] is connected to the media server (MRF) [610] over an Mr interface.
Further, in order to enable interactive voice/multimedia response (IVMR) in an event an incoming call from the MO device [514] is rejected at the MT device [512] in the LTE environment, the features of the present invention are implemented in a manner similar to that disclosed in the description of Figure 5.
Figure 7 illustrates an exemplary method flow diagram [700], depicting a method to enable interactive voice/multimedia response (IVMR), in accordance with exemplary embodiments of the present invention. As shown in Figure 7, the method [700] begins at step [702].
At step [704], the method comprises receiving, at the telecom application server (TAS) [504] from a mobile terminating (MT) device [512], at least a call disconnect message and an Integrated Services Digital Network User Part (ISUP) code. Further, the step of receiving at least the call disconnect message and the ISUP code at the TAS [504] from the MT device [512] further comprises receiving, at the MT device [512], an incoming call request from an MO device [514]. Further, upon receipt of the incoming call request, the method encompasses providing, at an Incoming Call User Interface (UI) of the MT device [512], an additional option to reject the incoming call with an IVMR along with the existing options of call rejection. Therefore, in an exemplary implementation the method encompasses providing a “Reject with IVMR” option along with an “ACCEPT”, a “DECLINE”, an “IGNORE” and a “REJECT with SMS” option, as a “REJECT with SMS OR IVMR” option. The Reject with IVMR option is provided to respond to the incoming call in such a way that a cause of rejection is correctly notified at the MO device [514] in the form of an Interactive Voice/Media Response (IVMR), wherein the IVMR response is provided in the form of an audio, a video, a text, an augmented reality (AR) response, a virtual reality (VR) response and/or an IoT response.
Further, upon presentation of the options of call rejection, the method encompasses receiving a first user selection to reject the incoming call with an IVMR. The method thereafter comprises providing, by the TAS [504], one or more IVMRs for call rejection on a display of the MT device [512]. The method thereafter comprises receiving, at the MT device [512], a second user selection of an IVMR for call rejection from the one or more IVMRs for call rejection displayed on the MT device [512] to reject the incoming call.
Further, based on the call rejection, the method encompasses identifying, at the MT device [512], the ISUP code corresponding to the user-selected IVMR for call rejection. The method thereafter also comprises generating, at the MT device [512], a call disconnect message, wherein at least the generated call disconnect message and the identified ISUP code are transmitted to the TAS [504] based on the rejection of the incoming call.
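For illustration only, the following minimal Python sketch shows how an MT device might map a user-selected rejection IVMR to its pre-defined ISUP cause code before building the call disconnect message. The pairing of code 155 with “Call You Back Later” follows the example given later in this specification; the remaining option names and code values are hypothetical assumptions, not values defined herein.

```python
# Illustrative MT-side mapping of user-selected rejection IVMRs to
# pre-defined ISUP cause codes. 155 -> "Call You Back Later" follows
# this specification's example; 156 and 157 are hypothetical.
IVMR_OPTIONS = {
    "Call You Back Later": 155,
    "Driving": 156,        # hypothetical code
    "In a Meeting": 157,   # hypothetical code
}

def identify_isup_code(selected_ivmr: str) -> int:
    """Return the pre-defined ISUP code for the IVMR the user selected,
    to be carried in the call disconnect message sent to the TAS."""
    return IVMR_OPTIONS[selected_ivmr]

assert identify_isup_code("Call You Back Later") == 155
```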
The method, thereafter, at step [706] comprises identifying, at the TAS [504], a cause of call rejection corresponding to the received ISUP code based on a mapping of at least one ISUP code and at least one user-selected Interactive Voice/Media Response (IVMR) for call rejection stored at the TAS [504]. The mapping of at least one ISUP code and at least one user-selected IVMR for call rejection is based on the at least one ISUP code defined at the TAS [504] for the at least one user-selected IVMR for call rejection. Furthermore, in an exemplary implementation the method encompasses defining, by the TAS [504], one or more new ISUP codes for the one or more IVMRs for call rejection (designated actions/pre-defined cases of rejection with IVMR). Also, in another exemplary implementation the method encompasses assigning, via the TAS [504], to the one or more IVMRs for call rejection one or more subsets of the existing ISUP code “17 – User Busy” mapped to the response code ‘486 – Busy’.
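As an illustration of this identification step, a minimal sketch follows, assuming a simple in-memory mapping table at the TAS. Code 17 (“User Busy”, mapped to ‘486 – Busy’) is from this specification; code 155 follows its later “Call You Back Later” example; the remaining entry is hypothetical.

```python
# Illustrative TAS-side mapping of received ISUP cause codes to the
# cause of call rejection (i.e. the IVMR the called user selected).
ISUP_CAUSE_TO_REJECTION = {
    17:  "User Busy",              # existing cause, mapped to '486 - Busy'
    155: "Call You Back Later",    # exemplary newly defined cause code
    156: "Driving",                # hypothetical entry
}

def identify_cause_of_rejection(isup_code: int) -> str:
    # Unmapped codes fall back to the generic busy cause.
    return ISUP_CAUSE_TO_REJECTION.get(isup_code, "User Busy")

assert identify_cause_of_rejection(155) == "Call You Back Later"
```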
Further, after identification of the cause of call rejection by the TAS [504], at step [708], the method comprises transmitting, by the TAS [504], at least the ISUP code and the identified cause of call rejection to a media resource function (MRF) server [506].
Next, at step [710], the method comprises generating, at the MRF server [506], at least one relevant IVMR based on at least the ISUP code and the identified cause of call rejection transmitted to the MRF server [506], wherein the at least one relevant IVMR comprises at least a text response data, an audio response data, a video response data, an augmented reality (AR) response data, a virtual reality (VR) response data and an IoT response data. The text response data is sent via a Short Message Service Centre (SMSC) [508] based on a mapping of the at least one ISUP code with at least one text response data. The audio response data and the video response data are identified at the MRF server [506] based on a mapping of the at least one ISUP code with at least one audio response data and at least one video response data, respectively. The augmented reality (AR) response data and the virtual reality (VR) response data are extracted by the MRF server [506] from an IVMR server [510] based on a mapping of the at least one ISUP code with at least one augmented reality (AR) response data and at least one virtual reality (VR) response data, respectively. The IoT response data is extracted by the MRF server [506] from the IVMR server [510] based on a mapping of the at least one ISUP code with at least one IoT response data. Furthermore, the text response data, the audio response data, the video response data, the augmented reality (AR) response data, the virtual reality (VR) response data and the IoT response data are user-configurable.
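A minimal sketch of this generation step at the MRF follows. The `IVMRServerStub` interface (`fetch_xr`, `fetch_iot`) and the mapping-table arguments are assumptions introduced only for illustration of how the per-type sourcing described above might be wired together.

```python
from dataclasses import dataclass
from typing import Optional

class IVMRServerStub:
    """Hypothetical stand-in for the IVMR server [510] interface."""
    def fetch_xr(self, isup_code: int) -> Optional[bytes]:
        return None  # would return AR/VR data mapped to the code
    def fetch_iot(self, isup_code: int) -> Optional[bytes]:
        return None  # would return IoT response data mapped to the code

@dataclass
class IVMRResponse:
    text: Optional[str] = None    # delivered via the SMSC [508]
    audio: Optional[str] = None   # identified at the MRF [506]
    video: Optional[str] = None   # identified at the MRF [506]
    xr: Optional[bytes] = None    # AR/VR data from the IVMR server [510]
    iot: Optional[bytes] = None   # IoT data from the IVMR server [510]

def generate_ivmr(isup_code: int, text_map: dict, media_map: dict,
                  ivmr_server: IVMRServerStub) -> IVMRResponse:
    media = media_map.get(isup_code, {})
    return IVMRResponse(
        text=text_map.get(isup_code),
        audio=media.get("audio"),
        video=media.get("video"),
        xr=ivmr_server.fetch_xr(isup_code),
        iot=ivmr_server.fetch_iot(isup_code),
    )

resp = generate_ivmr(155, {155: "Call You Back Later"}, {}, IVMRServerStub())
assert resp.text == "Call You Back Later"
```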
More specifically, the method encompasses managing, by a processing unit of each of the MT device [512] and the MO device [514], at least one of an audio, a video, an AR, a VR and an IoT functionality based on a pre-trained data set. The pre-trained data set comprises a plurality of data trained on the basis of implementation of known artificial intelligence and machine learning techniques. Furthermore, the pre-trained data set encompasses a plurality of trained data related at least to the audio, the video, the augmented reality (AR), the virtual reality (VR) and the IoT functionality. More specifically, the method encompasses generating, by the processing units of each of the MO device [514] and the MT device [512], one or more holographic animated creatures/characters of the users (for XR: AR/VR purposes), one or more audio messages, one or more video messages and/or one or more IoT data based on the pre-trained data set, to feed data corresponding to the generated one or more holographic animated creatures, the one or more audio messages, the one or more video messages and/or the one or more IoT data to the IVMR server [510] as a pre-defined IVMR response. The holographic animated creature refers to an animated representation of a user, presented to a designated user as an IVMR response as per a response (ISUP) code. Thereafter, the method comprises storing, at the IVMR server [510], the received data (the pre-defined IVMR response).
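The storing step can be pictured with the sketch below, assuming the IVMR server keeps pre-defined responses keyed by user, ISUP cause code and media kind; the key layout, method names and opaque payload are illustrative assumptions only.

```python
# Minimal sketch of the IVMR server's store of pre-defined IVMR
# responses, keyed by (user, ISUP cause code, media kind); the
# holographic character data is an opaque bytes placeholder here.
class IVMRStore:
    def __init__(self):
        self._store = {}

    def save(self, user_id: str, cause_code: int, kind: str, payload: bytes):
        # kind: one of "audio", "video", "ar", "vr", "iot"
        self._store[(user_id, cause_code, kind)] = payload

    def fetch(self, user_id: str, cause_code: int, kind: str):
        return self._store.get((user_id, cause_code, kind))

store = IVMRStore()
store.save("user-a", 155, "ar", b"<holographic character data>")
assert store.fetch("user-a", 155, "ar") is not None
```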
Next, after generation of the at least one relevant IVMR, at step [712], the method comprises transmitting, by the MRF server [506], the at least one generated IVMR to a mobile originating (MO) device [514]. Further, the method comprises playing, at the
MO device [514], the at least one received IVMR. Also, the method thereafter comprises continuously detecting, at the MO device [514], an operating condition of the user of the MO device [514], wherein the at least one received IVMR is played on the MO device [514] based on at least one of the detected operating condition and the pre-trained data set. For instance, in an event an IVMR response comprising an audio, a video, a text, an augmented reality (AR), a virtual reality (VR) or an IoT response is received at the MO device [514], the method in such an instance encompasses evaluating, at the MO device [514], a state of the MO device [514] and/or the user of the MO device [514], based at least on the pre-trained data set, to present the appropriate audio/video/text/augmented reality (AR)/virtual reality (VR) response at the MO device [514].
The method thereafter comprises transmitting, by the MO device [514] to the MT device [512], at least one response to the at least one received IVMR. The method encompasses the MO device [514] reverting to the MT device [512] the response for the at least one received IVMR in the form of audio/video/text/augmented reality (AR)/virtual reality (VR)/IoT data. Thereafter, the method terminates at step [714].
Referring to Figure 8, an exemplary diagram of a process flow depicting a method to enable interactive voice/multimedia response (IVMR), in accordance with exemplary embodiments of the present invention, is shown. At step 1, Figure 8 indicates that at least an AR and/or a VR data is transmitted by a processing unit [804] of the MT device [512] to the network entity [502], wherein said transmitted data is stored at the IVMR server [510] as a pre-defined IVMR and character for the AR and/or VR. Furthermore, Figure 8 depicts only the transmitted AR and VR data; however, in an implementation, data in other formats, such as an audio, a video and the like, may also be transmitted.
Next, at step 2, an outgoing call originated from the MO device [514] is indicated. Thereafter, step 3 indicates an incoming call at the MT device [512]. Further, based on the incoming call, at [802] a pre-defined call reject option is presented at the MT device [512] to reject the call with an IVMR. The pre-defined reject option is provided as a Reject with IVMR option along with the other already existing options to respond to the incoming call.
Next, step 4 indicates a transmission of a call rejection response with a cause value (ISUP) code from the MT device [512] to the TAS server [504] of the network entity [502], wherein the call rejection response with the cause value (ISUP code) is transmitted based on a rejection of the incoming call with the IVMR (the Reject with IVMR option).
Further, at step 5, a transmission of an exact defined IVMR from the MRF server [506] of the network entity [502] to the MO device [514] is indicated, wherein the exact defined IVMR is a cause of call rejection responded with a relevant IVMR based on an identification of the cause of call rejection at the network entity [502]. The cause of call rejection is identified based on a mapping of the ISUP code and a user-selected IVMR for call rejection stored at the TAS [504]. Also, step 6 indicates that a relevant AR and VR data (i.e. the relevant IVMR) is transmitted from the pre-defined IVMR and character for the AR and/or the VR stored at the IVMR server [510]. The relevant AR and VR data is provided based on a mapping of the ISUP code with the pre-defined IVMR and/or character for the AR and/or the VR.
More specifically, the network receives the call disconnect response/message ‘486 – Busy’ along with the ISUP code in a reason header. The ISUP codes are pre-defined against the user-selected options for call rejection on the network entity [502]. Also, the ISUP codes are directly mapped to the respective appropriate IVMR feed on the network entity [502], and accordingly, based on the ISUP code, the reason for the call rejection is identified. Thereafter, the TAS [504] requests the MRF (media gateway) server [506] to play, to the MO device [514], the IVMR response corresponding to the cause of call rejection. The IVMR server [510], which holds at least the augmented reality/virtual reality data (the holographic animated creature/character of the user), is thereafter configured to provide that data for presentation of at least one of a video, an AR or a VR response at the MO device [514].
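For illustration, the sketch below parses such a reason header. It assumes the Q.850-style layout commonly used for SIP Reason headers (a cause value plus optional text), which is one plausible encoding of the ISUP code described here; the header string and cause value are taken from this specification's example.

```python
import re

# Parse a Q.850-style SIP Reason header such as:
#   Q.850;cause=155;text="Call You Back Later"
REASON_RE = re.compile(
    r'Q\.850\s*;\s*cause\s*=\s*(\d+)(?:\s*;\s*text="([^"]*)")?')

def parse_reason_header(header: str):
    """Return (cause_code, text), or None if the header does not match."""
    m = REASON_RE.search(header)
    if not m:
        return None
    return int(m.group(1)), m.group(2) or ""

assert parse_reason_header('Q.850;cause=155;text="Call You Back Later"') \
    == (155, "Call You Back Later")
```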
Thereafter, the AR and VR data corresponding to the exact defined IVMR is played at the MO device [514] based on at least one of an operating condition of the MO device [514] and/or a user of the MO device [514], identified via a processing unit [806] of the MO device [514], and a pre-trained data set. The pre-trained data set comprises a plurality of data trained on the basis of implementation of artificial intelligence/machine learning techniques. Furthermore, the pre-trained data set encompasses a plurality of trained data related at least to an audio, a video, an augmented reality (AR) and a virtual reality (VR) functionality. Further, some of the exemplary use cases relating to presentation/playing of the received IVMR(s) at the MO device [514] are as below, with an illustrative sketch following the list:
- The MO device [514] is configured to play an audio response received (if the MO device [514] is close to the ear or a headphone is connected), or
- The MO device [514] is configured to play a video response received as picture-in-picture (if the MO device [514] is in the user's hand in landscape mode), or
- The MO device [514] is configured to play a video as an augmented reality response received (if connected to an AR based device, i.e. a Smart AR Glass, etc., and/or as per a pre-defined preference), as indicated in Figure 9A, or
- The MO device [514] is configured to play a video as a virtual reality response received (if the MO device [514] is connected to a VR based device, i.e. a Smart VR Glass, etc., and/or as per a pre-defined preference), as indicated in Figure 9B.
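The following sketch condenses the use cases above into a single selection routine. The condition flag names are hypothetical labels for the operating conditions the MO device is described as detecting; they are not names defined by this specification.

```python
# Illustrative selection of a playback modality at the MO device from a
# detected operating condition; the flag names are hypothetical.
def select_modality(condition: dict) -> str:
    if condition.get("vr_glass_connected"):
        return "vr"          # play video as a virtual reality response
    if condition.get("ar_glass_connected"):
        return "ar"          # play video as an augmented reality response
    if condition.get("near_ear") or condition.get("headphone_connected"):
        return "audio"       # play the audio response
    if condition.get("in_hand_landscape"):
        return "video_pip"   # play the video response picture-in-picture
    return "text"            # fall back to the text response

assert select_modality({"in_hand_landscape": True}) == "video_pip"
```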
Figure 9A illustrates an exemplary use case of providing an AR response, in accordance with exemplary embodiments of the present invention. At [902A], a user is indicated, and at [904A], an augmented reality (AR) device is indicated, wherein said AR device is connected to the MO device [514]. Further, at [906A], Figure 9A indicates a live camera feed of the physical world around the user, and at [908A], it indicates a user view via the AR device. Thereafter, at [910A], Figure 9A indicates adding a digital content (holographic animated/virtual) creature/character onto the live camera feed, making that digital content look as if it is part of the physical world around the user, wherein the digital content (holographic animated/virtual) creature/character is added onto the live camera feed to enable interactive voice/multimedia response (IVMR) during rejection of an incoming call at the MT device [512] based on the implementation of the features of the present invention.
Figure 9B illustrates an exemplary use case of providing a VR response, in accordance with exemplary embodiments of the present invention. At [902B], a user is indicated, and at [904B], a virtual reality (VR) device is indicated, wherein said VR device is connected to the MO device [514]. Further, at [906B], Figure 9B indicates a software-generated simulation environment, and at [908B], it indicates a user view via the VR device. Thereafter, at [910B], Figure 9B indicates adding a digital content (holographic animated/virtual) creature/character onto the software-generated simulation environment by placing the user inside an experience in which users are immersed in a virtual world, wherein the digital content (holographic animated/virtual) creature/character is added onto the software-generated simulation environment to enable interactive voice/multimedia response (IVMR) during rejection of an incoming call at the MT device [512] based on the implementation of the features of the present invention.
Referring to Figure 10, an exemplary flow diagram [1000], depicting an instance implementation of a process at a mobile terminating (MT) device [512] to enable interactive voice/multimedia response (IVMR), in accordance with exemplary embodiments of the present invention, is shown. As shown in Fig. 10, the exemplary process starts at step [1002]. At step [1004] the method comprises receiving an incoming call at the MT device [512] from a MO device [514]. Next, at step [1006] the method encompasses identifying a caller ID at the MT device [512] and thereafter presenting at the MT device [512] one or more reply options (i.e. options to respond to the incoming call, such as answer, reject, reject with SMS and reject with IVMR).
Further, at step [1008a] the method comprises presenting at the MT device [512] one or more pre-defined IVMR options for call rejection. Also, at step [1008b] the method comprises presenting at the MT device [512] other options (i.e. answer/reject/reject with message) to respond to the incoming call. Next, at step [1010] the method encompasses selecting, via a user, one of the pre-defined IVMR options from the one or more pre-defined IVMR options.
Next, at step [1012] the method comprises identifying, by the MT device [512], a pre-defined cause code (ISUP code) for the rejection IVMR (i.e. the user-selected IVMR option for call rejection). Next, at step [1014] the method encompasses disconnecting/rejecting the call with the relevant cause code in a call disconnect signalling. Thereafter, the method terminates at step [1016].
Referring to Figure 11, an exemplary flow diagram [1100], depicting an instance implementation of a process at a network entity [502] to enable interactive voice/multimedia response (IVMR), in accordance with exemplary embodiments of the present invention, is shown. As shown in Fig. 11, the exemplary process starts at step [1102].
At step [1104] the method comprises receiving, at a TAS server [504] of the network entity [502], a call disconnect message with a cause (ISUP) code from a MT device [512]. More specifically, the TAS [504] receives the call disconnect message ‘486 – Busy’ along with the ISUP code in a reason header. Next, at step [1106], based on the received ISUP code, the method encompasses identifying, by the TAS [504] at the network entity [502], an actual option selected by a user (i.e. a cause of call rejection) at the MT device [512]. The ISUP codes are pre-defined against the user-selected options for call rejection on the network entity [502]. Also, the ISUP codes are directly mapped to the respective appropriate IVMR feed on the network, and accordingly, based on the ISUP code, the reason for the call rejection is identified.
Further, at step [1108], the method comprises selecting, via a MRF server [506], a pre-defined interactive voice or media response (IVMR) mapped to the cause (ISUP) code. The TAS [504] requests the MRF (media gateway) [506] to play, to the MO device [514], the IVMR response corresponding to the cause of call rejection. Next, at step [1110] the method encompasses transmitting, via the MRF server [506] of the network entity [502], a call disconnect response to the originating device [514], with the appropriate IVMR played and the AR/VR data provided via the IVMR server [510]. The IVMR server [510] provides the augmented reality/virtual reality data (the holographic animated creature/character of the user) for presentation of a video or an AR or a VR response at the MO device [514]. Thereafter, the method terminates at step [1112].
Referring to Figure 12, an exemplary sequence diagram, depicting an instance implementation of the process of enabling an interactive voice/multimedia response (IVMR), in accordance with exemplary embodiments of the present invention, is shown. At [1202] the MO device [514] initiates a call by sending an INVITE
Request with supported parameters. Further, at [1204] the TAS [504] transmits the INVITE Request received from the MO device [514] to the MT device [512]. Thereafter, at [1206] the MT device [512] sends a 183 Session Progress Response with its supported parameters. Next, at [1208], the TAS [504] transmits the 183 Session Progress Response received from the MT device [512] to the MO device [514].
Thereafter, at [1210] the MO device [514] sends a PRACK (Provisional Acknowledgement) towards the MT device [512]. Next, at [1212], the TAS [504] transmits the PRACK received from the MO device [514] to the MT device [512]. Thereafter, at [1214] the MT device [512] sends a 200 OK response towards the MO device [514]. Next, at [1216], the TAS [504] transmits the 200 OK response received from the MT device [512] to the MO device [514]. Thereafter, at [1218] the MT device [512] starts to ring and transmits towards the MO device [514] a 180 Ringing response. Next, at [1220], the TAS [504] transmits the 180 Ringing response received from the MT device [512] to the MO device [514].
Further, at [1222], the MT device [512] rejects the call with an IVMR (for example, an exemplary IVMR may be “Call You Back Later”). Thereafter, based on the call rejection, the MT device [512] sends towards the TAS [504] a “486 Busy Here” response (call disconnect message/response) with a reason (ISUP) code (for example, cause code 155) that is mapped against the above-mentioned IVMR (say, “Call You Back Later”). Next, at [1224], the TAS [504], after receiving the “486 Busy Here” response with the reason header and the cause (ISUP) code, communicates the cause code to the MRF server [506], which is further connected to at least the IVMR server [510] and the SMSC [508]. More specifically, after checking the ISUP code of the call rejection in the call disconnect message, the TAS [504] requests the MRF server [506] to play, to the MO device [514], the IVMR corresponding to the cause of call rejection. The IVMR server [510] thereafter provides at least the pre-stored augmented reality and/or virtual reality data for presentation of a video or an AR or a VR response at the MO device [514].
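A minimal sketch of such a rejection message follows, using the exemplary cause code 155 from this passage mapped to the “Call You Back Later” IVMR; the exact SIP header layout shown is an illustrative assumption, not a normative format prescribed by this specification.

```python
# Build an illustrative '486 Busy Here' rejection carrying the ISUP
# cause in a Reason header (here the exemplary code 155, mapped to the
# "Call You Back Later" IVMR); the header layout is illustrative only.
def build_486_busy_here(cause: int, text: str) -> str:
    return (
        "SIP/2.0 486 Busy Here\r\n"
        f'Reason: Q.850;cause={cause};text="{text}"\r\n'
        "\r\n"
    )

print(build_486_busy_here(155, "Call You Back Later"))
```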
Next, at [1226], the TAS [504] transmits the “486 Busy Here” response to the MO device [514]. Also, the MRF server [506] at [1228] is thereafter configured to serve a relevant IVMR from the IVMR server [510], to be played to the MO device [514], based on the pre-configured mapping of reason codes (ISUP codes) and the IVMRs.
Further, the MO device [514] receives the IVMR response in the form of an audio, a video, a text, an augmented reality (AR), a virtual reality (VR) or an IoT response. Also, the MO device [514] thereafter detects an operating condition of the MO device [514] and/or a user of the MO device [514], wherein the at least one received IVMR is played on the MO device [514] based on at least one of the detected operating condition and a pre-trained data set comprising data trained based on one or more artificial intelligence and/or machine learning techniques.
Figure 13 illustrates an exemplary user interface diagram, depicting various exemplary user interfaces at an exemplary MT device, in accordance with exemplary embodiments of the present invention. At [1302], an exemplary user interface of an incoming call at the MT device [512] indicates that a Reject with SMS or IVMR option [1302 A] is provided along with the other existing options to respond to the incoming call.
Thereafter, at [1304], the exemplary user interface of the incoming call at the MT device [512] indicates a user selection of the Reject with SMS or IVMR option at [1304 A] and a user selection of a driving option at [1304 B]. Further, at [1306] the exemplary user interface of the incoming call at the MT device [512] indicates that an SMS and an IVMR option are provided at [1306 A] to reject the incoming call with one of an SMS or an IVMR. For instance, if the user selects the IVMR option, the user selection of the driving option is provided at the MO device [514] as an IVMR response, which is further played at the MO device [514] based at least on an operating condition of the MO device [514]. More specifically, when the user selects the ‘IVMR’ option to reject the call, the call is terminated and a call disconnect response ‘486 – Busy Here’ with a supposed newly defined ISUP code (say, reason 155) is transmitted to the TAS [504]. Further, after checking the ISUP code of the call rejection in the call disconnect message, the TAS [504] requests the MRF server [506], in co-ordination with the IVMR server [510] and the SMSC [508], to play the relevant IVMR to the MO device [514]. In this way the MO device [514] directly receives the call rejection cause over the IVMR itself, in the form of audio or video/AR/VR/IoT data, based on a user/device state.
Thus, the present invention provides a novel solution for enabling interactive voice/multimedia response (IVMR). Furthermore, by implementing the features of the present invention, with the call rejection response ‘486 – Busy’, separate cause codes in the reason header for each of the pre-defined ‘Rejection IVMRs’ are introduced, and accordingly the network identifies a cause of the rejection and passes on the relevant IVMR response to the MO party. The present invention therefore overcomes the limitations of the existing solutions by intelligently identifying and notifying the MO party about a situation of call rejection directly over an IVMR instead of an SMS, which might be missed. Also, the present invention helps provide a better experience of faster call rejection not only to a called user but also to a calling party.
While considerable emphasis has been placed herein on the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the invention. These and other changes in the preferred embodiments of the invention will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the invention and not as a limitation.
We Claim:
1. A method to enable interactive voice/multimedia response (IVMR), the method comprising:
- receiving, at a telecom application server (TAS) [504] from a mobile terminating (MT) device [512], at least a call disconnect message and an Integrated Services Digital Network User Part (ISUP) code;
- identifying, at the TAS [504], a cause of call rejection corresponding to the received ISUP code based on a mapping of at least one ISUP code and at least one user-selected Interactive Voice/Media Response (IVMR) for call rejection stored at the TAS [504];
- transmitting, by the TAS [504], at least the ISUP code and the identified cause of call rejection to a media resource function (MRF) server [506];
- generating, at the MRF server [506], at least one relevant IVMR based on at least the ISUP code and the identified cause of call rejection transmitted to the MRF server [506], wherein
the at least one relevant IVMR comprises at least a text response data, an audio response data, a video response data, an augmented reality (AR) response data, a virtual reality (VR) response data and an IoT response data,
the text response data is sent via a Short Message Service Centre (SMSC) [508],
the audio response data and the video response data is identified at the MRF server [506], and
the augmented reality (AR) response data, the virtual reality (VR) response data and IoT response data is extracted from an IVMR server [510]; and
- transmitting, by the MRF server [506], the at least one generated IVMR to a mobile originating (MO) device [514].
2. The method as claimed in claim 1, wherein the text response data is sent from the SMSC [508] based on a mapping of the at least one ISUP code with at least one text response data.
3. The method as claimed in claim 1, wherein the audio response data and the video response data is identified at the MRF server [506], based on a mapping of the at least one ISUP code with at least one audio response data and at least one video response data.
4. The method as claimed in claim 1, wherein the augmented reality (AR) response data and the virtual reality (VR) response data is extracted by the MRF server [506] from the IVMR server [510] based on a mapping of the at least one ISUP code with at least one augmented reality (AR) response data and at least one virtual reality (VR) response data.
5. The method as claimed in claim 1, wherein the IoT response data is extracted by the MRF server [506] from the IVMR server [510] based on a mapping of the at least one ISUP code with at least one IoT response data.
6. The method as claimed in claim 1, wherein the method further comprises playing, at the MO device [514], the at least one received IVMR.
7. The method as claimed in claim 6, the method further comprising continuously detecting, at the MO device [514], an operating condition of a user of the MO device [514], wherein the at least one received IVMR is played on the MO device [514] based on at least one of the detected operating condition and a pre-trained data set.
8. The method as claimed in claim 6, the method further comprising transmitting by the MO device [514] to the MT device [512], at least one response to the at least one received IVMR.
9. The method as claimed in claim 1, wherein receiving, at the TAS [504] from the MT device [512], at least a call disconnect message and an ISUP code further comprises:
- receiving, at the MT device [512], an incoming call request from the MO device [514];
- receiving a first user selection to reject the incoming call with an IVMR;
- providing, by the TAS [504], one or more IVMRs for call rejection on a display of the MT device [512];
- receiving, at the MT device [512], a second user selection of an IVMR for call rejection from the one or more IVMRs for call rejection displayed on the MT device [512] to reject the incoming call;
- identifying, at the MT device [512], the ISUP code corresponding to the user-selected IVMR for call rejection; and
- generating, at the MT device [512], a call disconnect message, wherein at least the generated call disconnect message and the identified ISUP code is transmitted to the TAS [504] based on the rejection of the incoming call.
10. The method as claimed in claim 1, wherein the mapping of at least one ISUP code and at least one user-selected IVMR for call rejection is based on the at least one ISUP code defined at the TAS [504] for the at least one user-selected IVMR for call rejection.
11. The method as claimed in claim 1, wherein at least the text response data, the audio response data, the video response data, the Augmented Reality (AR) response data, the Virtual Reality (VR) response data and the IoT response data are user-configurable.
12. A system to enable interactive voice/multimedia response (IVMR), the system comprising:
- a telecom application server (TAS) [504] configured to:
- receive, from a mobile terminating (MT) device [512], at least a call disconnect message and an Integrated Services Digital Network User Part (ISUP) code,
- identify a cause of call rejection corresponding to the received ISUP code based on a mapping of at least one ISUP code and at least one user-selected Interactive Voice/Media Response (IVMR) for call rejection stored at the TAS [504];
- transmit, at least the ISUP code and the identified cause of call rejection to a media resource function (MRF) server [506];
- the MRF server [506], configured to:
- generate at least one relevant IVMR based on at least the ISUP code and the identified cause of call rejection transmitted to the MRF server [506], wherein
- the at least one relevant IVMR comprises at least a text response data, an audio response data, a video response data, an augmented reality (AR) response data, a virtual reality (VR) response data and an IoT response data,
- the text response data is sent via a Short Message Service Centre (SMSC) [508],
- the audio response data and the video response data is identified at the MRF server [506], and
- the augmented reality (AR) response data, the virtual reality (VR) response data and the IoT response data is extracted from an IVMR server [510]; and
- transmit, the at least one generated IVMR to a mobile originating (MO) device [514].
13. The system as claimed in claim 12, wherein the text response data is sent via the SMSC [508] based on a mapping of the at least one ISUP code with at least one text response data.
14. The system as claimed in claim 12, wherein the audio response data and the video response data is identified at the MRF server [506], based on a mapping of the at least one ISUP code with at least one audio response data and at least one video response data.
15. The system as claimed in claim 12, wherein the augmented reality (AR) response data and the virtual reality (VR) response data is extracted by the MRF server [506] from the IVMR server [510] based on a mapping of the at least one ISUP code with at least one augmented reality (AR) response data and at least one virtual reality (VR) response data.
16. The system as claimed in claim 12, wherein the IoT response data is extracted by the MRF server [506] from the IVMR server [510] based on a mapping of the at least one ISUP code with at least one IoT response data.
17. The system as claimed in claim 12, wherein the MO device [514] is further configured to play the at least one received IVMR.
18. The system as claimed in claim 17, wherein the MO device [514] is further configured to continuously detect, an operating condition of a user of the MO device [514], wherein the at least one received IVMR is played on the MO device [514] based on at least one of the detected operating condition and a pre-trained data set.
19. The system as claimed in claim 17, wherein the MO device [514] is further configured to transmit to the MT device [512], at least one response to the at least one received IVMR.
20. The system as claimed in claim 12, wherein to receive at the TAS [504] from a mobile terminating (MT) device [512], at least a call disconnect message and an ISUP code:
- the MT device [512] is further configured to:
- receive an incoming call request from the MO device [514], and
- receive a first user selection to reject the incoming call with an IVMR;
- the TAS [504] is further configured to provide one or more IVMRs for call rejection on a display of the MT device [512];
wherein the MT device [512] is further configured to:
- receive a second user selection of an IVMR for call rejection from the one or more IVMRs for call rejection displayed on the MT device [512] to reject the incoming call,
- identify, the ISUP code corresponding to the user-selected IVMR for call rejection, and
- generate, a call disconnect message, wherein at least the generated call disconnect message and the identified ISUP code is transmitted to the TAS [504] based on the rejection of the incoming call.
21. The system as claimed in claim 12, wherein the mapping of at least one ISUP code and at least one user-selected IVMR for call rejection is based on the at least one ISUP code defined at the TAS [504] for the at least one user-selected IVMR for call rejection.
22. The system as claimed in claim 12, wherein the text response data, the audio response data, the video response data, the augmented Reality (AR) response data, the virtual Reality (VR) response data and the IoT response data are user-configurable.