Abstract: Embodiments provide methods and systems for conducting a remote product demonstration using extended reality (XR) technology and artificial intelligence methods. A method performed by a server system associated with a remote product demonstration application includes receiving a request for a remote product demonstration associated with a product from a customer device associated with a customer. The request includes information of 3D scene data of the product. The method includes identifying a product demonstrator from a plurality of product demonstrators based on product attributes associated with the product, generating a scene of the product on a product demonstrator device associated with the product demonstrator based on the 3D scene data, and determining product features associated with the product based on the 3D scene data and a deep neural network model. The method includes modifying the scene on the product demonstrator device by overlapping the product features on the scene displayed on the product demonstrator device, and initiating a remote extended-reality based conference room call between the product demonstrator and the customer.
DESC:FORM 2
THE PATENTS ACT 1970
(39 of 1970)
&
The Patent Rules 2003
COMPLETE SPECIFICATION
(refer section 10 & rule 13)
TITLE OF THE INVENTION:
EXTENDED-REALITY AND ARTIFICIAL INTELLIGENCE BASED REMOTE PRODUCT DEMONSTRATION METHODS AND SYSTEMS
APPLICANT(S):
Name:
Nationality:
Address:
Triumb Technologies Pvt. Ltd.
Indian
#7,8, 27th Main Road, Second Floor, HSR Layout, Sector One, Bangalore 560102, India
PREAMBLE TO THE DESCRIPTION
The following specification particularly describes the invention and the manner in which it is to be performed.
DESCRIPTION
(See next page)
EXTENDED-REALITY AND ARTIFICIAL INTELLIGENCE BASED REMOTE PRODUCT DEMONSTRATION METHODS AND SYSTEMS
TECHNICAL FIELD
The present disclosure generally relates to remote product demonstration and, more particularly, to methods and systems for conducting a remote product demonstration using extended reality (XR) technology and artificial intelligence methods.
BACKGROUND
A product demonstration (also referred to as a "demo") is an effective way to address product-related concerns specific to a customer. Customers who are visual or hands-on learners often want to see the product in action to fully understand its value, potential, and effectiveness for their usage. This is especially true if product features are one of the key selling points of the product. If customers are provided with an option to perceive the product in their own context or environment by seeing, feeling, hearing, touching, and sometimes smelling it, the product will be more appealing to them than simply reading a product description or listening to the sales pitch of a salesperson.
In the current scenario, products are usually marketed using telesales methods, by performing remote video calls, or by providing in-person demos. Although all these methods may help the customer in getting a brief idea about the product, a detailed explanation of each feature of the product, sufficient for a clear visualization of the product, is not fully achievable through any of these methods.
The situation becomes more complex for products that keep evolving over time, as the sales and marketing teams need to be well-versed with the updated product information. This can also lead to an inferior and incomplete demonstration for the customer in case the sales and marketing teams are not well-versed.
Further, the high costs involved in a product expert or marketing staff visiting the customer to provide an in-person demo make it an unfeasible and non-scalable method. Persuading a customer to visit a physical product showroom delays decision making and prolongs sales cycles.
In view of the above discussion, there is a need for a technical solution for enabling sales and marketing staff to effectively conduct product demonstration sessions remotely or in-person.
SUMMARY
Various embodiments of the present disclosure provide methods and systems for conducting a remote product demonstration using extended reality (XR) technology and artificial intelligence methods.
In an embodiment, a computer-implemented method is disclosed. The method includes receiving, by a server system associated with a remote product demonstration application, a request for remote product demonstration associated with a product from a customer device associated with a customer, the request comprising information of three-dimensional (3D) scene data of the product that is viewed by the customer on the customer device in an extended-reality (XR) environment. Further, the method includes identifying, by the server system, a product demonstrator from a plurality of product demonstrators based, at least in part, on product attributes associated with the product. The method further includes generating, by the server system, a scene of the product on a product demonstrator device associated with the product demonstrator based, at least in part, on the 3D scene data of the product. Thereafter, the method includes determining, by the server system, product features associated with the product based, at least in part, on the 3D scene data of the product and a deep neural network model. Furthermore, the method includes modifying, by the server system, the scene of the product on the product demonstrator device associated with the product demonstrator by overlapping the product features on the scene of the product displayed on the product demonstrator device of the product demonstrator. The method includes initiating, by the server system, a remote XR based conference room call between the product demonstrator and the customer.
BRIEF DESCRIPTION OF THE FIGURES
For a more complete understanding of example embodiments of the present technology, reference is now made to the following descriptions taken in connection with the accompanying drawings in which:
FIG. 1 illustrates an environment related to at least some example embodiments of the present disclosure;
FIG. 2 illustrates a flow diagram of conducting remote demonstration of a product using extended reality (XR) technology, in accordance with an example embodiment;
FIG. 3 is a schematic diagram representation of adaptive deep learning models that are used for accessing optimal marketing/sales data and product feature information corresponding to the product, in accordance with an example embodiment;
FIGS. 4A and 4B, collectively, represent a flow diagram of conducting collaborative remote product demonstrations, in accordance with some embodiments;
FIG. 5 is an example representation of a user interface (UI) in an XR environment depicting product description fields displayed to a user associated with a remote demonstration platform, in accordance with an example embodiment;
FIG. 6A is an example representation of the UI in an XR environment depicting a conference call page displayed to the user of the remote demonstration platform, in accordance with an example embodiment;
FIG. 6B is an example representation of the UI in an XR environment depicting a conference call page displayed to the user of the remote demonstration platform, in accordance with another example embodiment;
FIG. 7 is an example representation of the UI in an XR environment depicting a conference call page displayed to a product demonstrator, in accordance with an example embodiment;
FIG. 8 is a block diagram of a server system, in accordance with an example embodiment;
FIG. 9 is a block diagram representation of a deep reinforcement learning model, in accordance with an example embodiment;
FIG. 10 represents a flow diagram depicting a method for product demonstration, in accordance with an example embodiment; and
FIG. 11 is a block diagram of an electronic device (such as customer device and a product demonstrator device), in accordance with an example embodiment.
The drawings referred to in this description are not to be understood as being drawn to scale except if specifically noted, and such drawings are only exemplary in nature.
DETAILED DESCRIPTION
Various computer implemented methods and systems for conducting a remote demonstration of a product using extended reality (XR) technology and artificial intelligence methods are disclosed.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, to one skilled in the art that the present disclosure can be practiced without these specific details. In other instances, systems and methods are shown in block diagram form only in order to avoid obscuring the present disclosure.
Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not for other embodiments.
Moreover, although the following description contains many specifics for the purposes of illustration, anyone skilled in the art will appreciate that many variations and/or alterations to said details are within the scope of the present disclosure. Similarly, although many of the features of the present disclosure are described in terms of each other, or in conjunction with each other, one skilled in the art will appreciate that many of these features can be provided independently of other features. Accordingly, this description of the present disclosure is set forth without any loss of generality to, and without imposing limitations upon, the present disclosure.
Various example embodiments of the present disclosure are described hereinafter with reference to FIGS. 1 to 11.
FIG. 1 shows an example representation of an environment 100 related to at least some example embodiments of the present disclosure. The example representation of the environment 100 is shown depicting a wireless communication network (e.g., a network 106) that connects entities such as a product shop 108, customers 102a, 102b, 102c, and 110, product demonstrators 114a, 114b, and 114c, and a server system 118. The customers 102a, 102b, 102c, and 110 are depicted to be associated with electronic devices 104a, 104b, 104c, and 112 (hereinafter referred to as 'customer devices 104a, 104b, 104c, and 112'), respectively. The product demonstrators 114a, 114b, and 114c are depicted to be associated with electronic devices 116a, 116b, and 116c (hereinafter referred to as 'product demonstrator devices 116a, 116b, and 116c'), respectively. It should be noted that three product demonstrators are shown for the sake of simplicity to explain the present disclosure, and in the application, there can be many such product demonstrators. It should also be noted that the product demonstrators 114a, 114b, and 114c are collectively referred to as a product demonstrator 114, and the product demonstrator devices 116a, 116b, and 116c are collectively referred to as a product demonstrator device 116. Though the embodiments of the present disclosure primarily describe remote product demonstration, its applicability and extensibility to use cases such as remote product support, remote product training, and skill training may be explored. Further, the present disclosure augments the user experience not just in remote product demonstrations but also in in-person demos by superimposing key features on the physical/virtual product, thereby minimizing the informational gap between the salesperson and the customer.
In an embodiment, the customer device 104a and the product demonstrator device 116 are equipped with a remote demonstration platform 120 that facilitates remote demonstration of one or more products using three-dimensional (3D) or extended reality (XR) technology.
The terms "extended reality", "XR", "augmented reality", or "AR" refer to a technology for displaying one or more two-dimensional or three-dimensional objects on a device, such as a mobile device, a tablet computer, a computer, or a head-mounted display device, so that the one or more two-dimensional or three-dimensional objects appear to exist in the environment of the customer device. The environment of the customer can be captured by a camera of the device and displayed together with the one or more two-dimensional or three-dimensional objects. Alternatively, the customer can see the environment through a transparent display that shows the one or more two-dimensional or three-dimensional objects. Although the term "augmented reality" or "AR" is used in this disclosure, the terms "mixed reality", "MR", "hybrid reality", "meta-reality", or any other terminology can be applicable in a similar manner.
The customer device 104a and the product demonstrator device 116 may be any communication devices having hardware components for enabling User Interfaces (UIs) of the remote demonstration platform 120 to be presented on the customer device 104a and the product demonstrator device 116. The customer devices 104a, 104b, 104c, and 112, and the product demonstrator device 116 may be capable of being connected to a wireless communication network (such as the network 106). Examples of the customer devices 104a, 104b, 104c and 112 and the product demonstrator device 116 may include a mobile phone, a smart telephone, a computer, a laptop, a PDA (Personal Digital Assistant), a Mobile Internet Device (MID), a tablet computer, an Ultra-Mobile personal computer (UMPC), a phablet computer, a handheld personal computer and the like.
In at least one example embodiment, the customer 102a may access the remote demonstration platform 120 for viewing one or more products available in a product portfolio offered by the remote demonstration platform 120. In an embodiment, the product portfolio is maintained in a 3D environment or an extended reality environment. In one example, the customer 102a may be required to wear a smart wearable or a headset depending on a type of the extended reality environment for browsing products available in the product portfolio. The customer 102a may also access the remote demonstration platform 120 for getting more information, such as how to make a purchase, details about some features, color options, or any other details about a product that the customer 102a may be interested in. The remote demonstration platform 120 may enable the customer 102a to get connected with the product demonstrator 114 to get more information about the product. The remote demonstration platform 120 sets up a product demonstration session remotely with one or more product demonstrators such as, a product demonstrator 114. In an embodiment, the remote demonstration platform 120 schedules the product demonstration session as a virtual shop or in an exhibition stall where customers can join or visit the pre-scheduled product demonstration session. In an embodiment, the product demonstrator 114 can be a product expert. In another embodiment, the product demonstrator 114 can be a sales or marketing person. In yet another embodiment, the product demonstrator can be a person from a sales support center or any automated agent such as, a chatbot.
In an embodiment, the product demonstrator 114 may access the remote demonstration platform 120 for answering queries of the customer 102a related to the product, related products or accessories, and the functioning of the products before sales, during sales, or post sales. The remote demonstration platform 120 is configured to receive 3D scene related information associated with the product from the customer device 104a. Based on the 3D scene related information, the remote demonstration platform 120 may enable the product demonstrator 114 to have a 3D view of the product similar to the 3D view of the customer 102a browsing the product on the customer device 104a, so that the product demonstrator 114 gets the same perspective as the customer 102a. In an embodiment, the product demonstrator 114 may view a portion of the product, or the product from the same specific angle and conditions, such as zoom in/zoom out, as those of the customer 102a in the extended reality environment. The remote demonstration platform 120 may also enable the product demonstrator 114 to convert a call with the customer 102a into a remote extended-reality based conference room call. Once the call is converted, the remote demonstration platform 120 may enable the product demonstrator 114 to add the customer 102a in the remote extended-reality based conference room call.
However, the customer 102a may be required to accept a connection request received on the customer device 104a for joining the remote extended-reality based conference room call to become a part of the remote extended-reality based conference room call. In an embodiment, the remote extended-reality based conference room call may create an extended reality environment in which the customer 102a and the product demonstrator 114 may feel like they are sitting in an extended-reality based conference room where the product is being displayed on a projector for explanation purposes. Further, the remote demonstration platform 120 may be configured to synchronize the content displayed on the customer device 104a and the product demonstrator device 116 in the remote extended-reality based conference room call. The synchronization of the content may help the product demonstrator 114 in explaining all the features of the product, in providing a detailed explanation about a part of the product, in providing information about customization options available for the product, or in providing information about different variants available for the product, and may also help in answering queries asked by the customer 102a. In an embodiment, the product demonstrator 114 can provide information about competitive products, and can also launch related product demos, new accessories demos, etc., in the remote extended-reality based conference room call, which may help the customer 102a in making a selection of a product from one or more similar products.
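By way of illustration only, the 3D scene data exchanged between the customer device 104a and the product demonstrator device 116 may be represented as a compact view-state message. The following Python sketch shows one possible representation; the names (SceneState, azimuth_deg, focused_part, the product identifier, etc.) are illustrative assumptions and are not prescribed by the present disclosure:

import json
from dataclasses import dataclass, asdict

@dataclass
class SceneState:
    product_id: str       # product being viewed, e.g., "bt-headset-001" (hypothetical ID)
    azimuth_deg: float    # horizontal viewing angle around the product
    elevation_deg: float  # vertical viewing angle
    zoom: float           # zoom factor applied by the customer
    focused_part: str     # portion of the product tapped, e.g., "mic"

def to_message(state: SceneState) -> str:
    # Serialize the customer's current view for the server system to relay.
    return json.dumps(asdict(state))

def apply_message(message: str) -> SceneState:
    # Rebuild the identical view on the product demonstrator device.
    return SceneState(**json.loads(message))

# Example: the customer inspects the headset mic at 1.5x zoom; the
# demonstrator device reconstructs exactly the same perspective.
msg = to_message(SceneState("bt-headset-001", 35.0, 10.0, 1.5, "mic"))
demonstrator_view = apply_message(msg)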
In an embodiment, the server system 118 provides a software application, herein referred to as the remote demonstration platform 120, in response to a request received from the customer device 104a and the product demonstrator devices 116a to 116c (associated with the product demonstrators 114a to 114c, respectively) via the network 106. Examples of the network 106 include, stand-alone or in combination, a local area network (LAN), a wide area network (WAN), a wireless network, a wired network, or any currently existing or to-be-developed network that can be used for communication. More specifically, an example of the network 106 can be the Internet, which may be a combination of a plurality of networks.
In some embodiments, the remote demonstration platform 120 may be factory-installed on the customer device 104a and the product demonstrator devices 116a to 116c, in which case the customer 102a and the product demonstrators 114a to 114c may not need to specifically request the remote demonstration platform 120 from the server system 118. In an embodiment, the remote demonstration platform 120 may also be accessed through the web via the network 106, i.e., over the Internet.
In an embodiment, the remote demonstration platform 120 is configured to display a plurality of products that are available in the product portfolio in the extended reality environment. In an embodiment, the remote demonstration platform 120 is configured to maintain customer information about each customer (e.g., the customer 102a) of a plurality of customers in a database 122. The customer information may include personal details and location details. The remote demonstration platform 120 is also configured to maintain a customer interaction history for each customer, every time the customer 102a browses the product portfolio displayed by the remote demonstration platform 120 or contacts a product demonstrator (e.g., the product demonstrator 114) while browsing products offline in a product shop or online on some e-commerce website. The customer interaction history may also be maintained in the database 122. The customer interaction history includes details about products viewed by the customer, products (color, variant, functionality, etc.) in which the customer 102a is interested, queries asked by the customer, timing details, etc.
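As a minimal sketch, assuming a simple record-per-product layout (the disclosure lists the fields of the customer interaction history but does not prescribe a concrete schema), the entries maintained in the database 122 may resemble:

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class InteractionRecord:
    customer_id: str
    product_id: str
    view_count: int = 0                              # times the product was viewed
    preferences: dict = field(default_factory=dict)  # e.g., {"color": "red", "variant": "pro"}
    queries: list = field(default_factory=list)      # questions asked by the customer
    last_viewed: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))  # timing details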
Further, the remote demonstration platform 120 is configured to generate additional intelligent information for each product available in the product portfolio based, at least in part, on the 3D scene related information associated with the product received from the customer device 104a and the customer interaction history maintained for a plurality of customers. The additional intelligent information may include, but is not limited to, location-based insights, negotiation patterns, frequently asked questions, etc. The location-based insights may include language preferences, color preferences, features in demand, etc., for the particular location. In some embodiments, the remote demonstration platform 120 is configured to overlap the additional intelligent information of the product over the 3D view of the product that is visible to the product demonstrator 114. The product demonstrator 114 may also enable the customer 102a to see some of the additional intelligent information, if needed.
In at least one example embodiment, the remote demonstration platform 120 is configured to connect the customer 102a with the product demonstrator 114, who may help the customer 102a in making the purchase of the product by providing a better explanation of the product. The remote demonstration platform 120 may generate a connection request and may send the connection request to the product demonstrator 114 for connecting the customer 102a with the product demonstrator 114. The product demonstrator 114 needs to accept the connection request for getting connected with the customer. Further, the remote demonstration platform 120 is configured to provide information about the same product viewed by the customer 102a to the product demonstrator 114. The information may include a product ID, product view details, condition details, product feature details, etc. Additionally, the remote demonstration platform 120 is configured to facilitate the creation of the remote extended-reality based conference room call for connecting the customer 102a with the product demonstrator 114. In an embodiment, the content displayed on a customer device (e.g., the customer device 104a) and a product demonstrator device (e.g., the product demonstrator device 116) in the extended-reality based conference room is synchronized. Therefore, the customer 102a and the product demonstrator 114 may see the same product in the same orientation, which helps in a better explanation of the product.
The remote demonstration platform 120 is also configured to facilitate the addition of one or more members by the customer 102a in the remote extended-reality based conference room call. The members can be friends/relatives/family members/colleagues/social contacts and decision makers of the customer. For adding the one or more members, the remote demonstration platform 120 may create a dynamic temporary link that can be shared by the customer 102a with the members using the UI of the remote demonstration platform 120 or any messaging platform/mailing platform. The members need to click on the shared dynamic temporary link to get added in the extended-reality based conference room call. The added members may also get to see the same 3D view as seen by the others (e.g., the customer and the product demonstrator) in the extended-reality based conference room.
Additionally, the remote demonstration platform 120 is configured to facilitate the addition of one or more members by the product demonstrator 114 in the extended-reality based conference room call in case the product demonstrator 114 needs support in providing a better explanation/demonstration of the product. In an embodiment, the one or more members can be fellow product demonstrators. For adding the one or more members, the product demonstrator may either send invites to the one or more members through the remote demonstration platform 120 or may create a dynamic temporary link that can be shared by the product demonstrator with the members using any messaging platform/mailing platform. The added one or more members may get to see the same view as seen by the others (e.g., the customers and the product demonstrator) in the extended-reality based conference room. It should be noted that the 3D scene may include 2D images, 3D text, videos, voice-overs, annotations, labels, etc., as part of the 3D scene, and not just the 3D product.
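One possible realization of the dynamic temporary link, shown here purely as an assumption since the disclosure does not prescribe a mechanism, is a signed URL that expires after a fixed interval; the host demo.example.com, the secret key, and the HMAC signing scheme are all hypothetical:

import hashlib
import hmac
import time

SECRET_KEY = b"server-side-secret"   # hypothetical secret held by the server system 118

def create_invite_link(room_id: str, ttl_seconds: int = 3600) -> str:
    # Create a link that stops working after ttl_seconds.
    expires = str(int(time.time()) + ttl_seconds)
    payload = f"{room_id}:{expires}"
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"https://demo.example.com/join?room={room_id}&exp={expires}&sig={sig}"

def verify_invite(room_id: str, expires: str, sig: str) -> bool:
    # Reject tampered or expired links before adding a member to the call.
    payload = f"{room_id}:{expires}"
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig) and time.time() < int(expires)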
In one example scenario, as seen in FIG. 1, the customer 102a is interested in buying a Bluetooth headset and wants to know more about its features. So, while viewing a mic present in the Bluetooth headset on the remote demonstration platform 120 using an XR wearable device, the customer 102a may use the option to get connected with the product demonstrator 114a. The product demonstrator 114a, upon accepting the request, may get information about the product that the customer 102a is viewing on the customer device 104a. In one embodiment, the remote demonstration platform 120 is configured to receive, from the customer device 104a, 3D scene related information about the product which the customer 102a is looking at, and to utilize the adaptive deep learning models for extracting information related to marketing, sales, or post-sales support from the database 122 based on the customer's profile data or based on such product demonstration sessions conducted earlier for the same customer or different customers. The database 122 is configured to store all product related insights or features.
The product demonstrator 114a then converts the call into a remote extended-reality (XR) based conference room call for connecting with the customer 102a. The customer 102a may receive a connection invite from the product demonstrator 114a for attending the extended-reality (XR) based conference room call. Once the customer 102a accepts the connection invite, the customer 102a and the product demonstrator 114a will become a part of the extended-reality (XR) based conference room call, and the content that is being displayed on their devices will be synchronized. Since the customer 102a is viewing the mic present in the Bluetooth headset, the product demonstrator 114a will also get to see the mic present in the Bluetooth headset on the product demonstrator device 116a from the same angle at which the customer 102a is viewing it on the customer device 104a. In some scenarios, it should be noted that the product demonstrator 114a is also required to wear an XR wearable device for viewing products. The presentation of the same product, i.e., the Bluetooth headset, in the same view (i.e., angle and conditions) may help the product demonstrator 114a in better understanding the queries of the customer 102a and in providing a better illustration of the features of the Bluetooth headset.
The customer 102a may add the customer 102b and the customer 102c in the extended-reality (XR) based conference room call for taking decisions on the product. The customer 102a may create a dynamic temporary link on the remote demonstration platform 120 and may then share the dynamic temporary link with the customer 102b and the customer 102c. The customer 102b and the customer 102c need to click on the shared dynamic link to get added in the extended-reality (XR) based conference room call. Once the customer 102b and the customer 102c are also added in an extended-reality (XR) based conference room, all the customers present in the room will get the same view of the Bluetooth headset. Now the customers 102a, 102b, and 102c can ask queries about the Bluetooth headset with the product demonstrator 114a.
Since the features of the Bluetooth headset are quite new, the product demonstrator 114a may not be able to answer all the queries asked by the customers 102a, 102b, and 102c. The product demonstrator 114a may then add another product demonstrator 114b, who is a product expert for the Bluetooth headset, in the extended-reality (XR) based conference room call. The product demonstrator 114b will also get the same view of the Bluetooth headset as others in the extended-reality (XR) based conference room call. The product demonstrator 114b may then answer all the queries asked by the customers 102a, 102b, and 102c that helps the customers in better visualization of the Bluetooth headset.
In another example scenario, the customer 110 views a television (TV) in the product shop 108. The customer 110 is interested in getting more information about the features of the TV. A label displayed on the TV indicates that the customer 110 needs to scan an identification code 109 for getting connected with a product demonstrator. The identification code can be a barcode, a QR code, a location-based tag, a Microsoft tag, an NFC tag, or another identification code. The customer 110 scans the identification code 109 provided on the TV using the customer device 112. In an embodiment, scanning the identification code 109 may prompt the customer 110 to download the remote demonstration platform 120 to start communication with the product demonstrator 114c. In another embodiment, scanning the identification code 109 may open a chat interface that can be used by the customer 110 for starting communication with the product demonstrator 114c. In yet another embodiment, scanning the identification code 109 may place a contact number on a phone dialer. The contact number can be dialled by the customer 110 for starting communication with the product demonstrator 114c.
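A minimal dispatch of the scanned identification code 109 to one of the three behaviours described above may look as follows; the payload format and its keys (action, app_id, demonstrator_id, phone) are assumptions made for illustration only:

def handle_scanned_code(payload: dict) -> str:
    # Route the scan result: app download, chat interface, or phone dial.
    action = payload.get("action")
    if action == "app":
        return f"market://details?id={payload['app_id']}"        # prompt app download
    if action == "chat":
        return f"https://demo.example.com/chat/{payload['demonstrator_id']}"
    if action == "dial":
        return f"tel:{payload['phone']}"                         # place number on the dialer
    raise ValueError("unknown identification code payload")

# Example: a QR code on the TV encodes a chat entry point.
print(handle_scanned_code({"action": "chat", "demonstrator_id": "114c"}))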
In some cases, the customer 110 is asked to download the remote product demonstration application. The customer 110 downloads and registers with the remote demonstration platform 120 for getting connected with the product demonstrator 114c. The customer 110 then searches for the same TV that he/she may be interested in from a plurality of products that are available on the remote demonstration platform 120. While the customer 110 is viewing the same TV, a user interface (UI) facilitated by the remote demonstration platform 120 provides an option to the customer 110 to get connected with the product demonstrator 114c. The customer 110 uses the option and gets connected with the product demonstrator 114c, who may answer all his/her queries and may provide a detailed explanation of the features offered by the same TV. In some cases, the customer 110 may be asked to join a web-based call/interaction for getting more information about the product.
In some cases, after scanning the identification code 109, the chat interface is opened for the customer 110, or the customer 110 is enabled to call the product demonstrator 114c; the product demonstrator 114c may then either ask the customer 110 to download the remote demonstration platform 120 for getting more information about the product or schedule a call at a later point of time that is convenient for the customer 110. A process to use the remote demonstration platform 120 is already explained above, so it is not explained herein again for the sake of brevity.
In case of a scheduled call, a dynamic temporary link is created by the product demonstrator 114c using the remote demonstration platform 120, and then the dynamic temporary link, along with login details, is shared with the customer 110 using any messaging/mailing platform or using the remote demonstration platform 120. The customer 110 clicks on the dynamic temporary link at the scheduled time to get connected with the product demonstrator 114c in a remote extended-reality based conference room where the content that is displayed on the customer device 112 and the product demonstrator device 116c is synchronized. The customer 110 can then ask queries related to the features and functionalities of the TV, and the asked queries will then be answered by the product demonstrator 114c.
The remote demonstration platform 120 is a remote product demonstration application residing on the server system 118. In an embodiment, the server system 118 is configured to manage the remote demonstration platform 120 and communicate with devices, such as the customer devices 104a, 104b, 104c, and 112 and the product demonstrator device 116, using the network 106. The server system 118 may be local and can be a physical server present at a geographical location. Alternatively, or additionally, the server system 118 can be a remote server, such as a cloud-based server.
It is noted that the instructions (or the executable code) configuring the remote demonstration platform 120 are stored in a memory of the server system 118, and the instructions are executed by a processor (for example, a single-core or a multi-core processor) included within the server system 118. Accordingly, even though the various functionalities for conducting a remote demonstration of a product using extended reality (XR) technology are explained with reference to or being performed by the remote demonstration platform 120, it is understood that the processor in conjunction with the code in the memory is configured to execute the various tasks as enabled by the instructions of the remote demonstration platform 120.
Referring now to FIG. 2, a flow diagram 200 of conducting a remote demonstration of a product using extended reality (XR) technology is illustrated, in accordance with an example embodiment. The sequence of operations of the flow diagram 200 need not necessarily be executed in the same order as presented. Further, one or more operations may be grouped and performed as a single step, or one operation may have several sub-steps that may be performed in parallel or sequentially.
At 202, a customer (e.g., the customer 102a) sees or browses a product 'X' in an offline manner at a physical store or in an online manner at an e-commerce website or an e-commerce application installed on the customer device 104a.
When the customer 102a is interested in the product 'X' and wants to get more product information/demonstration about the product 'X', at operation 204, the customer 102a sends a request via the server system 118 to a product demonstrator (such as the product demonstrator 114) using a connecting means. The connecting means may include, but is not limited to, a click button on the customer device 104a, scanning of an identification code on the product by the customer device 104a, an eye-gaze scan, hand gestures, location-based triggers, other codes such as NFC, Bluetooth-based triggers, etc.
After receiving the request, at 206, the product demonstrator 114 also receives orientation information of the product 'X' which the customer 102a is looking at, including the specific viewing angle, the zoom-in/zoom-out level, and the specific portion of the product that is being tapped.
At 208, the server system 118 generates the same 3D view in 3D/extended reality (XR) environment for the product demonstrator 114 on the product demonstrator device 116 based on the orientation information. In other words, the server system 118 is configured to generate an environment for the product demonstrator 114 similar to that of the customer 102a so that the interaction between the customer 102a and the product demonstrator 114 can be enhanced.
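One way to regenerate the same 3D view from the received orientation information, sketched here with numpy under assumed conventions (the actual rendering pipeline is not specified by the disclosure), is to place the virtual camera on a sphere around the product:

import numpy as np

def camera_position(azimuth_deg: float, elevation_deg: float,
                    distance: float, zoom: float) -> np.ndarray:
    # Position the camera on a sphere centered at the product origin;
    # zooming in moves the camera proportionally closer.
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    r = distance / zoom
    return np.array([
        r * np.cos(el) * np.cos(az),
        r * np.sin(el),
        r * np.cos(el) * np.sin(az),
    ])

# Feeding identical orientation values on both devices yields the same viewpoint.
print(camera_position(azimuth_deg=35.0, elevation_deg=10.0, distance=2.0, zoom=1.5))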
At 210, the server system 118 utilizes the adaptive deep learning models for getting marketing/sales data and product feature information corresponding to the product, and provides the marketing/sales data and the product feature information to the product demonstrator 114. A detailed explanation of the adaptive deep learning models is provided with reference to FIG. 3.
At 212, the server system 118 establishes a remote extended-reality based conference room call for connecting the customer 102a with the product demonstrator 114. In one example, the customer 102a and the product demonstrator 114 may be required to wear extended reality (XR) wearable devices or headsets to become part of the remote extended-reality based conference room call.
At 214, once the customer 102a and the product demonstrator 114 join the call, the server system 118 performs live synchronization of signaling data associated with the product demonstration in the remote extended-reality based conference room call. The live synchronization of user actions may help the product demonstrator 114 in providing a better explanation of the product and may also help the customer 102a in getting a better understanding of the product. At 216, the product demonstrator 114 may provide a demonstration of the product, information about related accessories of the product, or explanation about any portion of the product to the customer 102a in the remote extended-reality based conference room call.
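The live synchronization of step 214 can be pictured as a fan-out of signaling events to every other participant in the room. The in-memory relay below is only an illustrative sketch of that logic; a deployed system would typically use WebSocket or WebRTC signaling, which the disclosure does not specify, and all class and device names here are hypothetical:

from typing import Callable

class ConferenceRoom:
    def __init__(self) -> None:
        self._handlers: dict[str, Callable[[dict], None]] = {}

    def join(self, device_id: str, on_event: Callable[[dict], None]) -> None:
        self._handlers[device_id] = on_event

    def publish(self, sender_id: str, event: dict) -> None:
        # Relay a signaling event (rotate, zoom, tap) to all other devices
        # so every participant sees the same view of the product.
        for device_id, handler in self._handlers.items():
            if device_id != sender_id:
                handler(event)

room = ConferenceRoom()
room.join("customer-104a", lambda e: print("customer applies", e))
room.join("demonstrator-116a", lambda e: print("demonstrator applies", e))
room.publish("customer-104a", {"type": "rotate", "azimuth_deg": 40.0})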
In one embodiment, the customer 102a may want to add more members for the demonstration of the product. The members can be friends/relatives/family members/colleagues/social contacts and decision makers of the customer 102a.
At 218, the customer 102a may use a feature of adding members (facilitated by the remote demonstration platform 120) in the extended-reality based conference room call to add members in the extended-reality based conference room call. A dynamic temporary link is created that needs to be shared by the customer with the members. The members need to click on the shared dynamic temporary link to get added in the extended-reality based conference room call. The added members then get to see the same view as seen by the others (e.g., the customer and the product demonstrator) in the extended-reality based conference room call. The product demonstrator 114 may then explain the product or give a demo of the product to the added members also while answering questions asked by the customer 102a and the added members.
At 220, at the end of the extended-reality (XR) conference room call, the customer 102a will be asked to provide feedback by choosing whether the demo or product information received during the conference room call is satisfactory or not. If the customer chooses satisfactory, the session is closed; otherwise, if need be, another call session is scheduled where the customer 102a will again be connected with another product demonstrator who may provide a better demonstration of the product. In one embodiment, a database is created based on synchronized remote product demonstrations in extended-reality based conference calls.
FIG. 3 illustrates a schematic diagram representation 300 of adaptive deep learning models that are used for accessing optimal marketing/sales data and product feature information corresponding to the product, in accordance with an embodiment of the present disclosure. In one embodiment, the adaptive deep learning models include, but are not limited to, a fully connected layer-based neural network architecture with reinforcement learning. In one example, the neural network architecture may be a convolutional neural network (CNN) or a recurrent neural network (RNN). The adaptive deep learning models are configured to generate additional product information about the product which the customer is looking at, modify the extended-reality (XR) perspective for the customers, sequence the data experience flow, highlight certain data points, etc. The adaptive deep learning models are configured to continuously learn optimal predictive suggestions corresponding to the product based on customer-product demonstrator interaction data. In one embodiment, the adaptive deep learning models are trained based on customer profile data and such product demonstration sessions conducted earlier for the same customer and different customers.
The server system 118 receives input data for training the adaptive deep learning models (see, 302). The input data may include, but is not limited to, customer profile data, past customer-product demonstrator interactions (i.e., text chat data, voice chat data, and extended-reality (XR) video), marketing data associated with multiple products in a similar product category, etc.
The server system 118 performs a standard normalization process over the text and image data received in the input data (see, 304).
The server system 118 provides the normalized input data to the fully connected layers 306, which convert the normalized input data into a latent space representation. In one example, the fully connected layers are multi-layer perceptrons (MLPs) trained with a backpropagation algorithm.
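A minimal PyTorch sketch of the fully connected layers 306, mapping normalized input features to the latent space representation, is shown below; the layer sizes and dimensions are illustrative assumptions:

import torch
import torch.nn as nn

class LatentEncoder(nn.Module):
    def __init__(self, input_dim: int = 128, latent_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 64),
            nn.ReLU(),
            nn.Linear(64, latent_dim),   # latent space representation
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

encoder = LatentEncoder()                 # weights trained by backpropagation
features = torch.randn(1, 128)            # normalized customer/interaction features
latent = encoder(features)                # passed on to the RL agent 312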
The fully connected layers provide the latent space representation to the reinforcement learning (RL) agent 312, which runs a policy at time T=T0 (see, 308) and determines observations 310. As shown in FIG. 3, the reinforcement learning involves two entities, i.e., an RL agent 312 and an environment 316, that interact with each other. The RL agent 312 is an entity that selects optimal marketing/sales data and product feature information associated with a product as actions 314, and the environment 316 may be set to give feedback in the form of a reward value depending upon the feedback or activities of the customer 102a.
In other words, the RL agent 312 determines optimal marketing/sales data corresponding to the latent space representation. The optimal marketing/sales data includes the product feature information that provides the highest reward value calculated using a reward function 318. This means that the server system 118 shows the best preferential information of the customer for the available product categories to the product demonstrator 114. The RL agent 312 performs an action (i.e., a customer-product demonstrator interaction), and in response to the action, the server system 118 checks the results (see, 320). If the results are optimal, the RL agent 312 is configured to update the policy space or provide predictive suggestions to the product demonstrator 114 (see, 322). The predictive suggestions may include, but are not limited to, provision of more details of the product and/or customer preferences, modification of the extended-reality (XR) scene, involvement of other product demonstrators, highlighting of certain data points/features in the product demonstrations, etc.
If the results are not optimal, the weights and biases of the fully connected layers 306 are changed. The RL agent 312 is configured to generate a predefined trained table with reward and penalty scores for a set of localized input-output. The set of localized input-output may also include all the products and product accessories of competitors.
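The reward-and-penalty table kept by the RL agent 312 can be approximated by a simple tabular update, sketched below; the Q-learning-style update rule, the state encoding, and the candidate actions are assumptions, as the disclosure only states that a trained table of reward and penalty scores is maintained:

import random
from collections import defaultdict

ACTIONS = ["show_faq", "highlight_feature", "show_color_options"]  # hypothetical
q_table: dict = defaultdict(float)       # (state, action) -> learned score
ALPHA, EPSILON = 0.1, 0.2                # learning rate, exploration rate

def select_action(state: str) -> str:
    # Epsilon-greedy policy: mostly exploit the best-scoring suggestion.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def update(state: str, action: str, reward: float) -> None:
    # Reward for positive customer feedback, penalty otherwise.
    q_table[(state, action)] += ALPHA * (reward - q_table[(state, action)])

state = "latent-bucket-7"                # e.g., a quantized latent vector
action = select_action(state)
update(state, action, reward=+1.0)       # customer reacted positively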
In the production phase, the server system 118 receives details about the product which the customer is looking at, the part of the product which the customer is looking at, orientation information (for example, an angle at which the customer is looking at the product), a language selected by the customer on the remote demonstration platform, etc. Based on these details, the server system 118 determines marketing/sales data, frequently asked questions, and product features using the adaptive deep learning models.
FIGS. 4A and 4B, collectively, represent a flow diagram 400 of conducting collaborative remote product demonstrations, in accordance with some embodiments of the present disclosure. The sequence of operations of the flow diagram 400 need not necessarily be executed in the same order as presented. Further, one or more operations may be grouped and performed as a single step, or one operation may have several sub-steps that may be performed in parallel or sequentially.
The operations 402a, 402b, and 402c are performed in an alternative manner. In other words, the operations 402a, 402b, and 402c are referred to as different scenarios/examples of product demonstrations.
In the first exemplary scenario, at the operation 402a, the customer 102a browses products in a product catalog/product portfolio in a mobile application installed on the customer device 104a. The mobile application is associated with the remote demonstration platform. The mobile application continuously tracks all the activities, such as products viewed by the customer 102a, the number of times a product is viewed by the customer 102a, queries asked by the customer 102a, timing details, etc., that are being performed by the customer 102a on the mobile application. The tracking of the activities is performed anonymously, based on the preferences received from the customer 102a, so as to comply with various privacy legislations. The tracked activities of the customer 102a are logged in the server system 118. The server system 118 may maintain a customer purchase history for the customer using the tracked activities of the customer. The customer purchase history may be further used by the server system 118 for generating additional intelligent information for the products available in the product catalog/product portfolio using the adaptive deep learning models.
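As a sketch of the opt-in, pseudonymized logging described above (salting and hashing the customer identifier is one possible approach, not a mechanism stated in the disclosure; the salt and field names are hypothetical):

import hashlib
import time

SALT = b"rotating-server-salt"           # hypothetical; rotated periodically

def log_activity(customer_id: str, opted_in: bool, event: dict):
    # Log only for opted-in customers, under a pseudonymous identifier.
    if not opted_in:
        return None                      # honour the customer's preference
    pseudo_id = hashlib.sha256(SALT + customer_id.encode()).hexdigest()[:16]
    record = {"who": pseudo_id, "when": time.time(), **event}
    return record                        # persisted in the server system 118

print(log_activity("customer-102a", True, {"event": "view", "product": "tv-55"}))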
In the second exemplary scenario, at 402b, a customer (for example, the customer 102b) browses products in a product catalog/product portfolio on an e-commerce web application on a user computer device.
In the third exemplary scenario, at 402c, the customer 110 explores products available in a product catalog or a set of products displayed in the product shop 108.
When the customer (such as the customer 102a, the customer 102b, or the customer 110) has some queries about a particular product, at 404, the customer initiates a communication with a product demonstrator via a connecting means. The queries can be related to a product feature, specification, functioning, a part of the product, price, compatibility, etc. The connecting means may include, but is not limited to, clicking on a particular button on a user interface of the mobile application, scanning, by a mobile device, an identification code displayed along with the product, or setting up a call with a product demonstrator and installing the mobile application on the customer device. In the first exemplary scenario, the customer 102a clicks on the user interface functionality of the mobile application that helps the customer 102a to connect to the product demonstrator 114.
In the second exemplary scenario, the customer 102b scans the identification code displayed along with the product using a mobile device i.e., customer device which leads to downloading of the mobile application on the customer device.
In the third exemplary scenario, the customer 110 calls the product demonstrator by calling a helpline number printed on the product. Then, the product demonstrator requests the customer 110 to download the mobile application on his/her mobile device. After downloading the mobile application, the customer 110 clicks on the user interface functionality of the mobile application that helps the customer 110 to connect to the product demonstrator 114.
At operation 406, the server system 118 establishes a communication between the customer (i.e., customers 102a, 102b and 110) and the product demonstrator 114. The product demonstrator 114 starts communication with the customer 102a through a chat medium or a video medium or an audio medium using a product demonstrator device (e.g., the product demonstrator device 116).
At operation 408, the product demonstrator 114 or the server system 118 checks the availability of the customer. In particular, the product demonstrator 114 checks with the customer regarding the possibility of conducting an extended-reality based remote product demonstration in real-time or immediately. If the customer agrees to the extended-reality based remote product demonstration, operation 410 is performed.
At operation 410, the server system 118 receives, from the customer device, information of the type of product the customer is looking at and orientation information (i.e., the 3D scene) of the product which the customer is looking at.
At operation 412, the server system 118 sets up a similar 3D/extended-reality (XR) environment for the product demonstrator 114 to provide a product demonstration of the product to the customer.
At operation 414, the product demonstrator 114 generates an invite for conducting an extended-reality based conference room call session and then sends the invite including a connection request in form of a dynamic temporary link to the customer on the customer device for joining the remote extended-reality based conference room call.
At operation 416, the product demonstrator 114 initiates the extended-reality based conference room call session on the product demonstrator device 116. At operation 418, the customer joins the extended-reality based conference room call by clicking on the dynamic temporary link.
At operation 420, the product demonstrator 114 provides a remote product demonstration of the product and may also try to answer queries of the customer. The remote demonstration platform 120 may fetch details about the product that the customer is interested in and views on the customer device 104a. The fetched details, along with the additional intelligent product information about the product, are then displayed on the product demonstrator device 116. The additional intelligent product information is determined using the adaptive deep learning models based on customer profile data and such sessions conducted earlier with the same customer or different customers.
At operation 422, the customer 102a as well as the product demonstrator 114 can invite other members or other product demonstrators into the extended-reality based conference room call. Again, a dynamic temporary link may be generated by the server system 118 that can be shared by the customer or the product demonstrator 114 with other people or other product demonstrators for bringing them into the extended-reality based conference room call. In one example, the dynamic temporary link initiates the installation process of the mobile application on the devices of the other people or the other product demonstrators. The other people or other product demonstrators may click on the dynamic temporary link shared with them to join the extended-reality based conference room call.
At operation 424, the customer, the product demonstrators, and the other people may see the product in three dimensions (3D) or in an extended reality environment. The product demonstrators and the other people may choose a location of their choice in the extended-reality based conference room to place the product whose demonstration is to be given in the extended-reality based conference room.
At operation 426, the product demonstrators provide a remote demonstration of the product with live synchronization of actions. The product demonstrators may also demonstrate other related products, accessories associated with the product, product variants, competitive product, customization options available for the product, etc., in the extended-reality based conference room.
At operation 428, the customer activities performed by the customer will be logged in the server system 118 for further analysis. The customer, the product demonstrators, and the other people may log a partial or full recording of the conference room call and may also take screenshots of the conference room call. At operation 430, the customer, the product demonstrators, and the other people can choose to end the call after the product demonstration and query resolution.
FIG. 5 is an example representation of a user interface (UI) 500 in extended-reality (XR) environment depicting product description fields displayed to a user, such as customers 102a and 110 associated with the remote demonstration platform 120, in accordance with an example embodiment of the present disclosure. The UI 500 is presented on a user device, such as the customer devices 104a or 112 shown in FIG. 1. In an embodiment, the user may be required to register first with the remote demonstration platform 120 by providing an e-mail id or a contact number for using the remote demonstration platform 120. The product 502 (i.e., motorbike) is a three-dimensional (3D) physical object.
The product description fields include multiple fields 506, 508, 510, and 512 having product details associated with the product 502. The product description fields are generated by the customer device 104a based on view of the product 502. A connection icon 504 (i.e., connect with marketing support) is also shown so that the customer 102a can click on the connection icon 504 for connecting with a product demonstrator.
FIG. 6A is an example representation of a user interface (UI) 600 in an extended-reality (XR) environment depicting a conference call page displayed to a user, such as the customers 102a and 110 of the remote demonstration platform 120, in accordance with an example embodiment of the present disclosure. The UI 600 represents a holographic image generated from an XR wearable 602.
The conference call page includes a top section 604 and a bottom section 608. The top section 604 includes an add icon 606 that is useful for adding other people in the conference call. The bottom section 608 includes a description of the product along with a product image 610. The product image 610 can be a two-dimensional (2D) image or a three-dimensional (3D) image or XR based image.
FIG. 6B is an example representation of a user interface (UI) 650 in an extended-reality (XR) environment depicting a conference call page 652 displayed to a user, such as the customers 102a and 110 of the remote demonstration platform 120, in accordance with an example embodiment. The UI 650 is projected by a user device, such as the customer devices 104a and 112 shown in FIG. 1, to create the extended-reality (XR) environment.
The conference call page 652 includes a top section 654 and a bottom section 658. The top section 654 includes an add icon 656 that can be clicked by the customer for adding other people in the conference call. The bottom section 658 includes description of the product along with a product image 660. The product image 660 can be a two-dimensional (2D) image or a three-dimensional (3D) image or XR based image.
FIG. 7 is an example representation of a user interface (UI) 700 in an extended-reality (XR) environment depicting a conference call page 702 displayed to a product demonstrator, in accordance with an example embodiment of the present disclosure. The UI 700 is generated by a product demonstrator device (such as, the product demonstrator device 116 shown in FIG. 1) in an extended-reality (XR) environment similar to the extended-reality (XR) environment associated with the customer 102a.
The conference call page 702 includes a top section 704 and a bottom section 708. The top section 704 includes an add icon 706 that can be clicked by the product demonstrator for adding other product demonstrators in the conference call. The bottom section 708 includes a description of the product along with a product image 710 and additional intelligent product information 712. The product image 710 can be a two-dimensional (2D) image or a three-dimensional (3D) image or XR based image. The additional intelligent information 712 includes, but is not limited to, frequently asked questions, color preferences, etc.
FIG. 8 is a block diagram of a server system 800, in accordance with an example embodiment. In some embodiments, the server system 800 is embodied as a cloud-based and/or SaaS-based (software as a service) architecture. The server system 800 is configured to facilitate remote demonstration of a product using extended reality (XR) technology. In an embodiment, the server system 800 includes a computer system 802 and a database 804. The computer system 802 further includes at least one processor 806 for executing instructions, a memory 808, a communication interface 810, and a user interface 816 that communicate with each other via a bus 812. In one embodiment, the server system 800 is similar to the server system 118.
In some embodiments, the database 804 is integrated within computer system 802. For example, the computer system 802 may include one or more hard disk drives as the database 804. A storage interface 814 is any component capable of providing the processor 806 with access to the database 804. The storage interface 814 may include, for example, an Advanced Technology Attachment (ATA) adapter, a Serial ATA (SATA) adapter, a Small Computer System Interface (SCSI) adapter, a RAID controller, a SAN adapter, a network adapter, and/or any component providing the processor 806 with access to the database 804.
In one embodiment, the database 804 is configured to store customer profile data associated with each customer of the one or more customers, such as the customers 102a to 102c. The database 804 is also configured to store location history associated with each customer.
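Purely for illustration, the following sketch shows one way such records could be persisted; the table layout, column names, and the use of SQLite are assumptions and do not reflect the actual design of the database 804.

```python
import sqlite3

# Hypothetical schema for the customer data described above; all table
# and column names are illustrative assumptions, not the actual design.
conn = sqlite3.connect("remote_demo.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS customer_profile (
    customer_id  TEXT PRIMARY KEY,
    name         TEXT,
    preferences  TEXT   -- e.g. JSON blob of product/color preferences
);
CREATE TABLE IF NOT EXISTS location_history (
    customer_id  TEXT REFERENCES customer_profile(customer_id),
    latitude     REAL,
    longitude    REAL,
    recorded_at  TEXT   -- ISO-8601 timestamp
);
""")
conn.commit()
```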
The processor 806 includes suitable logic, circuitry, and/or interfaces to execute operations for receiving a request from a customer (e.g., the customer 102a) for a product demonstration. Examples of the processor 806 include, but are not limited to, an application-specific integrated circuit (ASIC) processor, a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a field-programmable gate array (FPGA), a graphical processing unit (GPU), and the like. The memory 808 includes suitable logic, circuitry, and/or interfaces to store a set of computer-readable instructions for performing operations. Examples of the memory 808 include a random-access memory (RAM), a read-only memory (ROM), a removable storage drive, a hard disk drive (HDD), and the like. It will be apparent to a person skilled in the art that the scope of the disclosure is not limited to realizing the memory 808 as described herein. In another embodiment, the memory 808 may be realized in the form of a database server or a cloud storage, without departing from the scope of the present disclosure.
The processor 806 is operatively coupled to the communication interface 810 such that the processor 806 is capable of communicating with remote devices, such as the customer devices 104a to 104c and the product demonstrator devices 116a to 116c, or communicating with any entity connected to the network 106 (as shown in FIG. 1). Further, the processor 806 is operatively coupled to the user interface 816 for interacting with the customers 102a to 102c and the product demonstrators 114a to 114c.
It is noted that the server system 800 as illustrated and hereinafter described is merely illustrative of an apparatus that could benefit from embodiments of the present disclosure and, therefore, should not be taken to limit the scope of the present disclosure. The server system 800 may include fewer or more components than those depicted in FIG. 8.
In an embodiment, the processor 806 is configured to implement a user interface (UI) for facilitating a product demonstration session based on the request received from the customer device. The request includes orientation information of the product in the customer device. The processor 806 is also configured to facilitate the addition of one or more customers in the UI. The processor 806 is further configured to implement an extended-reality (XR) view on the customer device and the product demonstrator device.
In one embodiment, the processor 806 is configured to determine optimal marketing/sales data and product feature information based on customer profile data and product demonstration sessions conducted earlier for the same customer and for different customers. The determination of the optimal marketing/sales data is further based on the orientation information of the product in the customer device and is performed utilizing adaptive deep learning models. The user interface 816 transmits metadata to the product demonstrator device, where the metadata facilitates generation of a view of the product in the extended-reality environment on the display of the product demonstrator device. The generated view is similar to the view of the customer browsing the product in the customer device. The processor 806 is also configured to establish a remote extended-reality based conference room call, to provide a product demonstration session, for connecting the customer with a product demonstrator.
FIG. 9 is a block diagram representation of a deep reinforcement learning model 900, in accordance with an embodiment of the present disclosure. As shown in FIG. 9, the deep reinforcement learning model involves two entities, i.e., an agent 902 (similar to the RL agent 312) and an environment 904 (similar to the environment 316), that interact with each other. The agent 902 is the entity that determines the optimal marketing/sales data, and the environment 904 may be set to provide feedback of a reward value depending upon the feedback or activities of the customer. The deep reinforcement learning model 900 implements a Markov Decision Process (MDP). The MDP may be represented by a four-tuple (S, A, R, T), where:
1) S is a State Space, which includes a set of environmental states that the agent 902 may perceive.
2) A is an Action Space, which includes a set of actions that the agent 902 may take on each state of the environment 904.
3) R is a reward function, and R(s, a, s') represents the reward that the agent 902 obtains from the environment 904 when action 'a' is performed on state 's' and the state changes to state s'.
4) T is a state transition function, and T(s, a, s') may represent the probability of executing action 'a' in state 's' and moving to state s'.
In the process of interaction between the agent 902 and the environment 904 in the MDP, the agent 902 senses that the environment state at time t is s_t. Based on the environment state s_t, the agent 902 may select an action a_t from the action space A to execute. After the environment 904 receives the action selected by the agent 902, it returns the corresponding reward signal feedback R_{t+1} to the agent 902, transitions to the new environment state s_{t+1}, and waits for the agent 902 to make a new decision. In the process of interacting with the environment 904, the goal of the agent 902 is to find an optimal strategy that obtains the largest long-term cumulative reward in any state 's' and at any time step t.
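The following is a minimal sketch of the agent-environment interaction loop described above; the toy states, actions, and random transition function are placeholder assumptions standing in for the actual marketing/sales environment of the disclosure.

```python
import random

# Minimal sketch of the MDP interaction loop described above.
# States, actions, rewards, and transitions are toy placeholders.
STATES = ["browsing", "interested", "converted"]
ACTIONS = ["show_feature", "offer_discount", "schedule_demo"]

def environment_step(state, action):
    """Return (next_state, reward) -- stands in for T(s, a, s') and R(s, a, s')."""
    next_state = random.choice(STATES)
    reward = 1.0 if next_state == "converted" else 0.0
    return next_state, reward

state = "browsing"
for t in range(10):
    action = random.choice(ACTIONS)                   # agent's decision at time t
    state, reward = environment_step(state, action)   # feedback R_{t+1}, new state s_{t+1}
```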
The total reward is also called the Q-value, denoted using the following equation:
Q(s, a) = r(s, a) + γ · max_{a'} Q(s', a') …. Eqn. (1)
The above equation states that the Q-value yielded from being at state 's' and performing action 'a' is equal to the immediate reward r(s, a) plus the highest Q-value attainable from the next state s', where gamma (γ) is a discount factor that controls the contribution of rewards further in the future. In other words, Q(s, a) is the cumulative reward value of the rewards generated in the subsequent learning optimization when the agent 902 executes the action 'a' in the state 's'.
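As a worked toy evaluation of Eqn. (1), with all numbers invented purely for illustration:

```python
# Worked toy example of Eqn. (1); every number here is invented.
gamma = 0.9                       # discount factor γ
r = 1.0                           # immediate reward r(s, a)
q_next = {"show_feature": 2.0,    # Q(s', a') for each candidate action a'
          "offer_discount": 3.0,
          "schedule_demo": 2.5}
q_s_a = r + gamma * max(q_next.values())   # 1.0 + 0.9 * 3.0 = 3.7
```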
Further, in the deep reinforcement learning model 900, a neural network architecture is utilized to approximate the Q-value function. The state is given as the input, and the Q-values of all possible actions are generated as the output.
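An illustrative sketch of such an approximator follows; the layer sizes, input/output dimensions, and use of PyTorch are assumptions, not the disclosed architecture.

```python
import torch
import torch.nn as nn

# Illustrative Q-value function approximator: the encoded state vector
# goes in, one Q-value per possible action comes out.
STATE_DIM = 32    # e.g. encoded customer profile + session history (assumed)
NUM_ACTIONS = 8   # e.g. candidate marketing/sales actions (assumed)

q_network = nn.Sequential(
    nn.Linear(STATE_DIM, 64),    # input layer -> first hidden layer
    nn.ReLU(),
    nn.Linear(64, 64),           # second hidden layer
    nn.ReLU(),
    nn.Linear(64, NUM_ACTIONS),  # output layer: Q(s, a) for every action a
)

state = torch.randn(1, STATE_DIM)          # placeholder encoded state
q_values = q_network(state)                # Q-values of all possible actions
best_action = int(q_values.argmax(dim=1))  # greedy action selection
```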
Based on the above deep reinforcement learning model 900, the server system 800 provides optimal marketing/sales data to the product demonstrator for each product, to improve the details provided in the product demonstration. The server system 800 then iteratively updates the marketing/sales data by using the deep reinforcement learning model 900 according to the customer's future transaction data, thereby learning the optimal marketing/sales data step by step.
As mentioned above, in reinforcement learning, in the process of interacting with the environment, the goal of the agent 902 is to find optimal marketing/sales data such that the agent 902 receives the maximum long-term cumulative reward in any state 's' and at any time step t. In some example embodiments, the above objective may be achieved using a Q-value function approximation algorithm. In other example embodiments, the foregoing objective may also be implemented using other reinforcement learning algorithms, such as a strategy approximation algorithm, which is not limited herein.
In one embodiment, the deep reinforcement learning model 900 may include one or more neural networks. In one embodiment, the neural network includes an input layer, multiple hidden layers, and an output layer, and is utilized to approximate the Q-value function. The MDP in the deep reinforcement learning model includes a state space S and an action space A, wherein the customer profile data, earlier-conducted product demonstration sessions, and marketing data correspond to the state space S, and the application of one or more products over a product demonstration session corresponds to the action space A.
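A hypothetical encoding of this state space is sketched below; the chosen features, their scaling, and their ordering are illustrative assumptions only.

```python
import numpy as np

# Hypothetical encoding of the state space S described above.
def encode_state(profile, past_sessions, marketing_data):
    return np.concatenate([
        [profile.get("age", 0) / 100.0],              # customer profile data
        [len(past_sessions) / 10.0],                  # earlier demo sessions
        [marketing_data.get("campaign_score", 0.0)],  # marketing data
    ]).astype(np.float32)

state = encode_state({"age": 34}, past_sessions=[{}, {}],
                     marketing_data={"campaign_score": 0.7})
```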
FIG. 10 represents a flow diagram depicting a method 1000 for product demonstration, in accordance with an example embodiment of the present disclosure. The method 1000 depicted in the flow diagram may be executed by a server system associated with a remote product demonstration application (e.g., the server system 118). Operations of the method 1000, and combinations of operations in the flow diagram, may be implemented by, for example, hardware, firmware, a processor, circuitry, and/or a different device associated with the execution of software that includes one or more computer program instructions. The method 1000 starts at operation 1002.
At 1002, the method 1000 includes receiving a request for remote product demonstration associated with a product from a customer device associated with a customer. The request includes information of 3D scene data of the product that is viewed by the customer on the customer device in an extended-reality (XR) environment.
At 1004, the method 1000 includes identifying a product demonstrator from a plurality of product demonstrators based, at least in part, on product attributes associated with the product.
At 1006, the method 1000 includes generating a scene of the product on a product demonstrator device associated with the product demonstrator based, at least in part, on the 3D scene data of the product.
At 1008, the method 1000 includes determining product features associated with the product based, at least in part, on the 3D scene data of the product and a deep neural network model.
At 1010, the method 1000 includes modifying the scene of the product on the product demonstrator device associated with the product demonstrator by overlapping the product features of the product over the scene of the product displayed on the product demonstrator device of the product demonstrator.
At 1012, the method 1000 includes initiating a remote XR-based conference room call between the product demonstrator and the customer.
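The following simplified sketch shows one way operations 1002 to 1012 might be sequenced in software; every function and field name below is a hypothetical stand-in, not the actual server implementation.

```python
# Simplified sketch of operations 1002-1012 of method 1000.

def identify_demonstrator(attrs):           # 1004: match on product attributes
    return {"id": "demo-1", "device": "hmd-1"}

def generate_scene(device, scene_3d):       # 1006: mirror the customer's scene
    return {"device": device, "scene": scene_3d}

def predict_features(scene_3d):             # 1008: deep neural network stand-in
    return ["battery life", "display size"]

def overlay_features(scene, features):      # 1010: overlap features on the scene
    scene["overlays"] = features

def start_xr_conference(demonstrator, customer):  # 1012: XR conference room call
    print(f"XR call: {demonstrator['id']} <-> {customer}")

def handle_demo_request(request):           # 1002: request with 3D scene data
    demonstrator = identify_demonstrator(request["product_attributes"])
    scene = generate_scene(demonstrator["device"], request["scene_3d"])
    overlay_features(scene, predict_features(request["scene_3d"]))
    start_xr_conference(demonstrator, request["customer"])

handle_demo_request({"customer": "customer-102a", "product_attributes": {},
                     "scene_3d": {"mesh": "sofa.glb", "orientation": [0, 90, 0]}})
```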
In an example embodiment, generating the scene of the product on the product demonstrator device includes generating the same 3D scene in a 3D/XR environment for the product demonstrator on the product demonstrator device based on the 3D scene data. Further, the method 1000 includes sending a connection request to the product demonstrator based, at least in part, on the identifying step.
In an embodiment, the deep neural network model is a deep reinforcement learning model trained based, at least in part, on customer profile data, product demonstration sessions conducted earlier for the same customer and for different customers, and the product features of the product.
In an embodiment, the server system 800 is configured to perform live synchronization of the content displayed in the call. Further, the server system 800 is configured to display a plurality of products included in a product portfolio in the remote product demonstration in the extended-reality environment.
In an embodiment, the server system 800 is configured to customize the render of at least a portion of the scene on the customer device based on hand motions of the product demonstrator tracked by a motion measuring device.
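As an illustrative sketch only, tracked hand motion could be converted into a rotation applied to the rendered scene; the mapping and the sensitivity factor are invented assumptions, not the disclosed mechanism.

```python
import numpy as np

# Illustrative only: turn a tracked hand-position delta into a rotation
# of the rendered product. Mapping and scaling are assumptions.
def hand_delta_to_rotation(prev_pos, curr_pos, sensitivity=90.0):
    """Map horizontal/vertical hand movement (metres) to yaw/pitch (degrees)."""
    dx, dy, _ = np.asarray(curr_pos) - np.asarray(prev_pos)
    return {"yaw": dx * sensitivity, "pitch": -dy * sensitivity}

rotation = hand_delta_to_rotation([0.10, 0.20, 0.50], [0.14, 0.18, 0.50])
```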
In an embodiment, the server system 800 is configured to generate the scene of the product on the product demonstrator device associated with the product demonstrator by transmitting metadata to the product demonstrator device, where the metadata facilitates generation of the scene of the product in the XR environment on the product demonstrator device, and the generated scene is similar to the view of the customer browsing the product in the customer device.
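A hypothetical example of such metadata is shown below; every field name is an assumption made for illustration, not the actual payload format.

```python
import json

# Hypothetical metadata payload for mirroring the customer's view of the
# product on the product demonstrator device.
scene_metadata = {
    "product_id": "sofa-123",
    "model_uri": "models/sofa-123.glb",
    "orientation": {"yaw": 45.0, "pitch": 10.0, "roll": 0.0},
    "scale": 1.0,
    "camera_pose": {"position": [0.0, 1.6, 2.0], "look_at": [0.0, 0.5, 0.0]},
}
payload = json.dumps(scene_metadata)   # serialized and sent to the device
```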
In an embodiment, the server system 800 is configured to determine the product features associated with the product further based on customer interaction history information. The customer interaction history information includes information associated with products viewed by the customer, products in which the customer is interested, queries asked by the customer, and the timing of the view of the product.
In an embodiment, the server system 800 is configured to schedule a product demonstration session, where the product demonstration session is scheduled such that multiple customers can join. Further, the server system 800 is configured to invite additional customers and product demonstrators to the same product demonstration session, where different product-related demonstrations can be performed in the same session.
FIG. 11 shows a simplified block diagram of an electronic device 1100 capable of implementing the various embodiments of the present disclosure. The electronic device 1100 may be an example of the customer devices 104a to 104c and the product demonstrator devices 116a to 116c shown in FIG. 1. It should be understood that the electronic device 1100 as illustrated and hereinafter described is merely illustrative of one type of device and should not be taken to limit the scope of the embodiments. As such, it should be appreciated that at least some of the components described below in connection with the electronic device 1100 may be optional, and thus an example embodiment may include more, fewer, or different components than those described in connection with the example embodiment of FIG. 11. As such, among other examples, the electronic device 1100 may be embodied in any of a variety of electronic devices, for example, cellular phones, tablet computers, laptops, mobile computers, personal digital assistants (PDAs), mobile televisions, mobile digital assistants, or any combination of the aforementioned, and other types of communication or multimedia devices.
The illustrated electronic device 1100 includes a controller or a processor 1102 (e.g., a signal processor, microprocessor, ASIC, or other control and processing logic circuitry) for performing such tasks as signal coding, data processing, image processing, input/output processing, power control, and/or other functions. An operating system 1104 controls the allocation and usage of the components of the electronic device 1100 and provides support for one or more programs, such as the remote product demonstration application, that implement one or more of the innovative features described herein. The applications 1106 may include common mobile computing applications (e.g., telephony applications, email applications, calendars, contact managers, web browsers, messaging applications such as USSD messaging, SMS messaging, or SIM Tool Kit (STK) applications) or any other computing application.
The illustrated electronic device 1100 includes one or more memory components, for example, a non-removable memory 1108 and/or a removable memory 1110. The non-removable memory 1108 and/or the removable memory 1110 may be collectively known as a storage device/module in an embodiment. The non-removable memory 1108 can include RAM, ROM, flash memory, a hard disk, or other well-known memory storage technologies. The removable memory 1110 can include flash memory, smart cards, or a Subscriber Identity Module (SIM). The one or more memory components can be used for storing data and/or code for running the operating system 1104. The electronic device 1100 may further include a user identity module (UIM) 1112. The UIM 1112 may be a memory device having a processor built in. The UIM 1112 may include, for example, a subscriber identity module (SIM), a universal integrated circuit card (UICC), a universal subscriber identity module (USIM), a removable user identity module (R-UIM), or any other smart card. The UIM 1112 typically stores information elements related to a mobile subscriber. The UIM 1112 in the form of a SIM card is well known in Global System for Mobile (GSM) communication systems, Code Division Multiple Access (CDMA) systems, or with third-generation (3G) wireless communication protocols such as Universal Mobile Telecommunications System (UMTS), CDMA2000, wideband CDMA (WCDMA), and time division-synchronous CDMA (TD-SCDMA), or with fourth-generation (4G) wireless communication protocols such as LTE (Long-Term Evolution).
The electronic device 1100 can support one or more input devices 1120 and one or more output devices 1130. Examples of the input devices 1120 may include, but are not limited to, a touch screen / a display screen 1122 (e.g., capable of capturing finger tap inputs, finger gesture inputs, multi-finger tap inputs, multi-finger gesture inputs, or keystroke inputs from a virtual keyboard or keypad), a microphone 1124 (e.g., capable of capturing voice input), a camera module 1126 (e.g., capable of capturing still picture images and/or video images) and a physical keyboard 1128. Examples of the output devices 1130 may include, but are not limited to, a speaker 1132 and a display 1134. Other possible output devices can include piezoelectric or other haptic output devices. Some devices can serve more than one input/output function. For example, the touch screen 1122 and the display 1134 can be combined into a single input/output device.
A wireless modem 1140 can be coupled to one or more antennas (not shown in FIG. 11) and can support two-way communications between the processor 1102 and external devices, as is well understood in the art. The wireless modem 1140 is shown generically and can include, for example, a cellular modem 1142 for communicating at long range with the mobile communication network, a Wi-Fi-compatible modem 1144 for communicating at short range with a local wireless data network or router, and/or a Bluetooth-compatible modem 1146 for communicating with an external Bluetooth-equipped device. The wireless modem 1140 is typically configured for communication with one or more cellular networks, such as a GSM network for data and voice communications within a single cellular network, between cellular networks, or between the electronic device 1100 and a public switched telephone network (PSTN).
The electronic device 1100 can further include one or more input/output ports 1150, a power supply 1152, one or more sensors 1154 (for example, an accelerometer, a gyroscope, a compass, a global positioning system sensor for providing location details, or an infrared proximity sensor for detecting the orientation or motion of the electronic device 1100), a transceiver 1156 (for wirelessly transmitting analog or digital signals), and/or a physical connector 1160, which can be a USB port, an IEEE 1394 (FireWire) port, and/or an RS-232 port. The illustrated components are not required or all-inclusive, as any of the components shown can be deleted and other components can be added.
Various embodiments of the present disclosure facilitate improvement in the customer experience by superimposing key features on the virtual product in an extended-reality environment, thereby minimizing the informational gap.
Although the invention has been described with reference to specific exemplary embodiments, it is noted that various modifications and changes may be made to these embodiments without departing from the broad spirit and scope of the invention. For example, the various operations, blocks, etc., described herein may be enabled and operated using hardware circuitry (for example, complementary metal oxide semiconductor (CMOS) based logic circuitry), firmware, software and/or any combination of hardware, firmware, and/or software (for example, embodied in a machine-readable medium). For example, the apparatuses and methods may be embodied using transistors, logic gates, and electrical circuits (for example, application specific integrated circuit (ASIC) circuitry and/or in Digital Signal Processor (DSP) circuitry).
The present disclosure is described above with reference to block diagrams and flowchart illustrations of methods and systems embodying the present disclosure. It will be understood that various blocks of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, may be implemented by a set of computer program instructions. These sets of instructions may be loaded onto a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the set of instructions, when executed on the computer or other programmable data processing apparatus, creates means for implementing the functions specified in the flowchart block or blocks, although other means for implementing the functions, including various combinations of hardware, firmware, and software as described herein, may also be employed.
Various embodiments described above may be implemented in software, hardware, application logic, or a combination of software, hardware, and application logic. The software, application logic, and/or hardware may reside on at least one memory, at least one processor, an apparatus, or a non-transitory computer program product. In an example embodiment, the application logic, software, or an instruction set is maintained on any one of various conventional computer-readable media. In the context of this document, a "computer-readable medium" may be any non-transitory media or means that can contain, store, communicate, propagate, or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer, with one example of a system described and depicted in FIG. 1. A computer-readable medium may include a computer-readable storage medium that may be any media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
The foregoing descriptions of specific embodiments of the present disclosure have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the present disclosure to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the present disclosure and its practical application, to thereby enable others skilled in the art to best utilize the present disclosure and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but such are intended to cover the application or implementation without departing from the spirit or scope of the invention.
CLAIMS
We claim:
1. A computer-implemented method, comprising:
receiving, by a server system associated with a remote product demonstration application, a request for remote product demonstration associated with a product from a customer device associated with a customer, the request comprising information of three-dimensional (3D) scene data of the product that is viewed by the customer on the customer device in an extended-reality (XR) environment;
identifying, by the server system, a product demonstrator from a plurality of product demonstrators based, at least in part, on product attributes associated with the product;
generating, by the server system, a scene of the product on a product demonstrator device associated with the product demonstrator based, at least in part, on the 3D scene data of the product;
determining, by the server system, product features associated with the product based, at least in part, on the 3D scene data of the product and a deep neural network model;
modifying, by the server system, the scene of the product on the product demonstrator device associated with the product demonstrator by overlapping the product features on the scene of the product displayed on the product demonstrator device of the product demonstrator; and
initiating, by the server system, a remote XR based conference room call between the product demonstrator and the customer.
2. The computer-implemented method as claimed in claim 1, wherein generating the scene of the product comprises generating a 3D scene in the XR environment for the product demonstrator on the product demonstrator device based on the 3D scene data received from the customer device.
3. The computer-implemented method as claimed in claim 1, further comprising sending a connection request to the product demonstrator based, at least in part, on the identifying step.
4. The computer-implemented method as claimed in claim 1, wherein the deep neural network model is a deep reinforcement learning model trained based, at least in part, on customer profile data, earlier-conducted product demonstration sessions for the customer and different customers, and the product features of the product.
5. The computer-implemented method as claimed in claim 1, further comprising:
displaying, by the server system, a plurality of products comprised in a product portfolio in the remote product demonstration in an XR environment.
6. The computer-implemented method as claimed in claim 1, further comprising:
customizing a render of at least a portion of the scene on the customer device based on hand motions of the product demonstrator tracked by a motion measuring device.
7. The computer-implemented method as claimed in claim 1, wherein generating the scene of the product on the product demonstrator device associated with the product demonstrator comprises transmitting metadata to the product demonstrator device, wherein the metadata facilitates generation of the scene of the product in an extended-reality environment on the product demonstrator device, and wherein the generated scene is similar to the view of the customer browsing the product in the customer device.
8. The computer-implemented method as claimed in claim 1, further comprising:
synchronizing content displayed on the customer device and the product demonstrator device in the remote extended-reality based conference room call.
9. The computer-implemented method as claimed in claim 1, wherein determining the product features associated with the product is further based on customer interaction history information, and wherein the customer interaction history information comprises information associated with: products viewed by the customer, products in which the customer is interested, queries asked by the customer, and timing of the view of the product.
10. The computer-implemented method as claimed in claim 1, further comprising scheduling a product demonstration session for multiple customers to join.
11. The computer-implemented method as claimed in claim 10, further comprising sending invitation requests to one or more customers and product demonstrators for joining in the product demonstration session.
12. A server system configured to perform the computer-implemented method as claimed in any of the claims 1-11.