
Method And System For Contextual Data Visualization In Mixed Reality And Augmented Reality

Abstract: This disclosure relates generally to visualizing huge volumes of data, which requires advanced data visualization techniques such as Augmented Reality (AR) and Mixed Reality (MR). In certain situations, a user needs to switch from AR to MR and vice versa in order to analyze real time data. However, conventional methods face a challenge in providing data visualization to the user without losing the context of the visualization while switching from AR to MR and vice versa. In the present disclosure, the system acquires data from a plurality of databases and displays the processed data virtually using AR and MR. The switching between AR and MR, and vice versa, is seamless to the user. The switching is performed by retaining the context of the visualization in AR and reproducing the visualization in MR by utilizing a plurality of applications.


Patent Information

Application #: 201821031844
Filing Date: 24 August 2018
Publication Number: 09/2020
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Grant Date: 2024-08-29

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point, Mumbai - 400021, Maharashtra, India

Inventors

1. RAMCHANDANI, Sahil
Tata Consultancy Services Limited, SDF V, Unit 130/131, Santacruz Electronic Export Processing Zone, Andheri (East), Mumbai - 400096, Maharashtra, India
2. KSHIRSAGAR, Mahesh
Tata Consultancy Services Limited, SDF V, Unit 130/131, Santacruz Electronic Export Processing Zone, Andheri (East), Mumbai - 400096, Maharashtra, India
3. THANGADURAI, Mercy Grace
Tata Consultancy Services Limited, Block C, Kings Canyon, ASF Insignia, Gurgaon - Faridabad Road, Gawal Pahari, Gurgaon - 122003, Haryana, India
4. SHARMA, Mohit
Tata Consultancy Services Limited, Block C, Kings Canyon, ASF Insignia, Gurgaon - Faridabad Road, Gawal Pahari, Gurgaon - 122003, Haryana, India
5. HEBBALAGUPPE, Ramya
Tata Consultancy Services Limited, Block C, Kings Canyon, ASF Insignia, Gurgaon - Faridabad Road, Gawal Pahari, Gurgaon - 122003, Haryana, India
6. PERLA, Ramakrishna
Tata Consultancy Services Limited, Deccan Park, Plot No 1, Survey No. 64/2, Software Units Layout, Serilingampally Mandal, Madhapur, Hyderabad - 500034, Telangana, India

Specification

FORM 2

THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003

COMPLETE SPECIFICATION
(See Section 10 and Rule 13)

Title of invention:
METHOD AND SYSTEM FOR CONTEXTUAL DATA VISUALIZATION IN MIXED REALITY AND AUGMENTED REALITY

Applicant:
Tata Consultancy Services Limited
A company incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th Floor,
Nariman Point, Mumbai 400021,
Maharashtra, India

The following specification particularly describes the invention and the manner in which it is to be performed.
CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY
[001] The present application claims priority from Indian provisional patent application no. 201821031844, filed on August 24, 2018.

TECHNICAL FIELD

[002] The disclosure herein relates generally to the field of data processing and, more particularly, to contextual data visualization in Mixed Reality (MR) and Augmented Reality (AR).

BACKGROUND
[003] Data is growing at an exponential rate, and deriving insights from huge volumes of data requires advanced data visualization techniques like Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR). VR provides a computer generated simulation of a three dimensional (3-D) visualization that can be interacted with in a seemingly real or physical way by a user wearing an HMD (Head Mounted Device) along with a mobile device. AR technology superimposes a computer generated image on the user's view of the real world on a mobile device, thus providing a composite view, wherein there is no need to wear an HMD. MR technology likewise superimposes a computer generated image on the user's view of the real world, thus providing a composite view, but is typically viewed using an HMD along with a mobile device. Thus, visualizing these data sets in Mixed Reality (MR) provides an immersive experience to the user in 3D, in the context of the real world, at the right time and right place.
[004] Conventional data visualization methods include graphs, pie charts and data visualization in two dimensions (2D). 2D data visualization is inadequate to analyze and derive meaningful insights from huge datasets. Further, conventional data visualization solutions provide only Mixed Reality (MR) based solutions, which mandate that the user wear a Head Mounted Device (HMD) continuously while he/she is working on the data of interest. In Augmented Reality (AR), by contrast, the virtual objects are available to be viewed on mobile devices without the need of any HMD; however, the user is unable to obtain a completely immersive experience as in MR. Further, wearing an HMD for more than 10 minutes is a challenge for the user since it causes nausea and dizziness. Further, conventional data visualization techniques using MR utilize expensive devices. Further, in VR, the user loses the context of the real world and the scenario in which the data needs to be visualized for insights. Hence, there is a challenge in providing data visualization to the user, without losing the context of the visualization, while switching from AR to MR in a comfortable manner.
SUMMARY
[005] Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a method for contextual data visualization in mixed reality and augmented reality is provided. The method includes receiving a request by a chatbot, wherein the request comprises a query for a data visualization. Further, the method includes analyzing the query to identify a first data visualization mode based on a plurality of visualization parameters associated with the request by utilizing a memory based reasoning technique. Further, the method includes displaying the data visualization in the first data visualization mode in a display dashboard. Further, the method includes extracting a set of session parameters associated with the first data visualization mode, wherein the set of session parameters comprises a plurality of filters applied on the data, an interaction on the data, information associated with the current display dashboard and a dimension associated with the display dashboard. Furthermore, the method includes simultaneously extracting a plurality of metadata associated with the first data visualization mode from at least one of the display dashboard and a visualization server. Finally, the method includes seamlessly displaying the data visualization in a second data visualization mode based on the set of session parameters and the plurality of metadata associated with the first data visualization mode.
[006] In another aspect, a system for contextual data visualization in mixed reality and augmented reality is provided. The system includes a visualization server, wherein the visualization server includes at least one memory comprising programmed instructions, at least one hardware processor operatively coupled to the at least one memory, wherein the at least one hardware processor is capable of executing the programmed instructions stored in the at least one memory, and a data visualization switching unit, wherein the data visualization switching unit is configured to receive a request by a chatbot, wherein the request comprises a query for a data visualization. Further, the data visualization switching unit is configured to analyze the query to identify a first data visualization mode based on a plurality of visualization parameters associated with the request by utilizing a memory based reasoning technique. Further, the data visualization switching unit is configured to display the data visualization in the first data visualization mode in a display dashboard. Further, the data visualization switching unit is configured to extract a set of session parameters associated with the first data visualization mode, wherein the set of session parameters comprises a plurality of filters applied on the data, an interaction on the data, information associated with the current display dashboard and a dimension associated with the display dashboard. Furthermore, the data visualization switching unit is configured to simultaneously extract a plurality of metadata associated with the first data visualization mode from at least one of the display dashboard and a visualization server. Finally, the data visualization switching unit is configured to seamlessly display the data visualization in a second data visualization mode based on the set of session parameters and the plurality of metadata associated with the first data visualization mode.
[007] In yet another aspect, a computer program product comprising a non-transitory computer-readable medium having embodied therein a computer program for contextual data visualization in mixed reality and augmented reality is provided. The computer readable program, when executed on a computing device, causes the computing device to receive a request by a chatbot, wherein the request comprises a query for a data visualization. Further, the computer readable program, when executed on a computing device, causes the computing device to analyze the query to identify a first data visualization mode based on a plurality of visualization parameters associated with the request by utilizing a memory based reasoning technique. Further, the computer readable program, when executed on a computing device, causes the computing device to display the data visualization in the first data visualization mode in a display dashboard. Further, the computer readable program, when executed on a computing device, causes the computing device to extract a set of session parameters associated with the first data visualization mode, wherein the set of session parameters comprises a plurality of filters applied on the data, an interaction on the data, information associated with the current display dashboard and a dimension associated with the display dashboard. Furthermore, the computer readable program, when executed on a computing device, causes the computing device to simultaneously extract a plurality of metadata associated with the first data visualization mode from at least one of the display dashboard and a visualization server. Finally, the computer readable program, when executed on a computing device, causes the computing device to seamlessly display the data visualization in a second data visualization mode based on the set of session parameters and the plurality of metadata associated with the first data visualization mode.
[008] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[009] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
[010] FIG. 1 illustrates an exemplary system for contextual data visualization in both Mixed Reality (MR) and Augmented Reality (AR), according to some embodiments of the present disclosure.
[011] FIG. 2 is a functional block diagram of a visualization server of the system for contextual data visualization in both MR and AR, according to some embodiments of the present disclosure.
[012] FIG. 3 is an exemplary functional block diagram of a mobile device of the system for contextual data visualization in both MR and AR, according to some embodiments of the present disclosure.
[013] FIG. 4 is an exemplary functional block diagram of a middleware integration module of the system for contextual data visualization in both MR and AR, according to some embodiments of the present disclosure.
[014] FIG. 5 illustrates an exemplary layered architecture of various components for the system for data visualization in both Mixed Reality (MR) and Augmented Reality (AR), in accordance with some embodiments of the present disclosure.
[015] FIG. 6 illustrates an exemplary use case for contextual data visualization in MR and AR, in accordance with some embodiments of the present disclosure and
[016] FIG. 7 is an exemplary flow diagram for a processor implemented method for contextual data visualization in MR and AR using the system of FIG. 1, in accordance with some embodiments of the present disclosure.

DETAILED DESCRIPTION OF EMBODIMENTS
[017] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope and spirit being indicated by the following claims.
[018] Glossary – Terms used in the embodiments with explanation.
Voicebot: A chatbot which can interface with the Augmented Reality (AR) / Mixed Reality (MR) solution.
Virtual Reality (VR) Headset: A simple Cardboard-like headset in which a mobile device/smartphone is placed as the primary computing device.
Bluetooth controller: A low-cost Bluetooth controller having a joystick and at least four functional buttons to enable functionalities including toggling ON/OFF a laser pointer, filters, zoom etc.
Unity3D: Unity3D is used for rendering the virtual environment. Here, the designed virtual environment is converted to an APK (Android Package) project through Unity.
Android Studio: Android Studio provides the Software Development Kit (SDK) required by Unity to create Android apps. Unity projects can be exported to different platforms like Android, iOS (iPhone Operating System), WebGL (Web Graphics Library) etc.
Java Development Kit/JRE: Used to create the Android applications for the present disclosure.
Plotter(s): Each plotter makes a GET request to a server and plots a corresponding graph.
AR Camera: The camera attached to a mobile device/cell phone, with which the system tracks the image markers and showcases visualizations on the mobile device or cell phone. When the user moves his/her head, the AR camera keeps track of the image marker's position and lets the system know where to project the visualization.
Google VR scripts: Control the action that needs to be triggered when the reticle pointer selects any point in the 3D model overlaid on an image marker.
Vuforia: A Unity3D plugin to enable the AR and MR modes. The Vuforia engine identifies the image marker in the real world and superimposes the virtual object seen on the mobile device in accordance with the space occupied by the image marker.
Image Marker: An object placed in view of an imaging device, which appears in the image produced, used as a point of reference or a measure. The object is identified initially and registered in Vuforia to act as a target for projecting a 3D visualization in the real world. More than one image marker can be registered and used.
Pointing Gestural Framework: Used to detect and classify the fingertip gestures made in front of the virtual object (an illustrative sketch of this pipeline is given after the glossary). There are three components in this framework:
A. The Faster R-CNN detector: takes an RGB image as input and outputs the hand candidate bounding box.
B. The Fingertip Regressor block: accurately localizes the fingertip (the fingertip is analogous to a pen-tip in a Human Computer Interface).
C. The Bi-LSTM network: classifies the fingertip detections on subsequent frames into different gestures.
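The staged flow above can be made concrete with a short sketch. The following is illustrative only and is not the implementation from this disclosure: the three callables stand in for the trained Faster R-CNN detector, the fingertip regressor and the Bi-LSTM classifier, whose real interfaces are not specified here.

```python
# Hedged sketch of the three-stage pointing-gesture pipeline (assumption:
# the three trained models are provided as callables with these signatures).
from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple

BBox = Tuple[int, int, int, int]   # (x, y, width, height) hand candidate box
Point = Tuple[float, float]        # (x, y) fingertip location in pixels

@dataclass
class GesturePipeline:
    detect_hand: Callable[[bytes], BBox]              # stage A: Faster R-CNN
    locate_fingertip: Callable[[bytes, BBox], Point]  # stage B: fingertip regressor
    classify_track: Callable[[Sequence[Point]], str]  # stage C: Bi-LSTM over frames

    def run(self, frames: List[bytes]) -> str:
        """Track the fingertip across frames, then classify the track."""
        track: List[Point] = []
        for frame in frames:
            box = self.detect_hand(frame)                   # hand candidate per frame
            track.append(self.locate_fingertip(frame, box))
        return self.classify_track(track)                   # e.g. "tap" or "swipe"
```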
[019] Embodiments herein provide a method and system for contextual data visualization in both Mixed Reality (MR) and Augmented Reality (AR). The present disclosure utilizes AR and MR technologies to visualize data in 3D, in stereoscopic view, to increase its lucidity and hence facilitate quick decision making. Here, Vuforia is used to generate 3D data visualizations on any image of interest. Further, the present disclosure can be implemented using an Android or iOS application or the like. Here, the system acquires data from a plurality of databases and displays the processed data virtually using AR and MR. The switching between AR and MR, and vice versa, is performed by retaining the context of the visualization in AR and reproducing the visualization in MR by utilizing a plurality of software like Unity 3D and Vuforia. The switching between AR and MR is seamless to the user, and the user obtains a comfortable view of the processed data. For example, the system can be utilized for monitoring real time data in a plant, such as a steel plant, while maintaining the data at the backend. User interaction is facilitated by generic Bluetooth controllers, eye gaze, touch input and gesture based controls. An implementation of the method and system for contextual data visualization is described further in detail with reference to FIGS. 1 through 7.
[020] Referring now to the drawings, and more particularly to FIG. 1 through 7, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[021] FIG. 1 illustrates an exemplary system 100 for contextual data visualization in both Mixed Reality (MR) and Augmented Reality (AR), according to some embodiments of the present disclosure. The system 100 includes an MR device 102, a mobile device 106, a middleware integration server 108, a visualization server 104, a cloud server 110 and a network 112. In an embodiment, the MR device is a Head Mounted Device (HMD). Here, the visualization server 104 can be one among a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a cloud-based computing device, a router, a network gateway, a sensor gateway, a Wi-Fi access point and the like. In one implementation, the visualization server 104 may be implemented in a cloud-based environment. In another implementation, the visualization server 104 can be implemented in a cloud-edge environment and, in yet another implementation, the visualization server 104 can be implemented in a cloud-fog environment. The MR device 102, the cloud server 110, the mobile device 106, the middleware integration server 108 and the visualization server 104 are communicatively coupled through the network 112.
[022] In an embodiment, the network 112 may be a wireless or a wired network, or a combination thereof. In an example, the network can be implemented as a computer network, as one of the different types of networks, such as a virtual private network (VPN), intranet, local area network (LAN), wide area network (WAN), the internet, and such. The network may either be a dedicated network or a shared network, which represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), and Wireless Application Protocol (WAP), to communicate with each other. Further, the network may include a variety of network devices, including routers, bridges, servers, computing devices and storage devices. The network devices within the network 112 may interact with each other through communication links.
[023] In an embodiment, the cloud server 110 can be a Vuforia server. Further, the cloud server 110 database provides the flexibility to store up to a million image markers and to frequently update image markers. Further, a plurality of images are stored in the cloud server 110, and the plurality of images are accessed from the cloud either by utilizing a web app or by utilizing Application Programming Interfaces (APIs).
[024] In an embodiment, the middleware integration server 108 can be one among a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a cloud-based computing device, a router, a network gateway, a sensor gateway, a Wi-Fi access point and the like. In addition to the default functionalities, the middleware integration server 108 includes a plurality of software to enable seamless transition from AR mode to MR mode and vice versa. The plurality of software includes Unity 3D and Vuforia. Unity 3D is a game engine used to import 3D models and to animate the 3D models. In an embodiment, Vuforia with inbuilt Unity 3D is utilized to provide a platform for enabling AR. Further, Vuforia provides an option to use images, 3D models, cylindrical objects, cuboid objects (like cereal boxes) and a ground plane as the platform/marker to showcase the 3D visualization.
[025] FIG. 2 illustrates a block diagram of the visualization server 104 of the system for contextual data visualization in both MR and AR, according to some embodiments of the present disclosure. The visualization server 104 includes or is otherwise in communication with one or more hardware processors, such as a processor 202, at least one memory such as a memory 204, an I/O interface 222 and a data visualization switching unit 220. In an embodiment, the data visualization switching unit 220 comprises a query analysis module (not shown in FIG. 2), a display module (not shown in FIG. 2), a session parameters extraction module (not shown in FIG. 2) and a metadata extraction module (not shown in FIG. 2). The processor 202, the memory 204, and the I/O interface 222 may be coupled by a system bus such as a system bus 208 or a similar mechanism.
[026] The I/O interface 222 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The interfaces 222 may include a variety of software and hardware interfaces, for example, interfaces for peripheral device(s), such as a keyboard, a mouse, an external memory, the mobile device 106, the MR device 102, a printer and the like. Further, the interfaces 222 may enable the visualization server 104 to communicate with other devices, such as web servers and external databases. The interfaces 222 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, local area network (LAN), cable, etc., and wireless networks, such as Wireless LAN (WLAN), cellular, or satellite. For this purpose, the interfaces 222 may include one or more ports for connecting a number of computing systems with one another or to another server computer. The I/O interface 222 may include one or more ports for connecting a number of devices to one another or to another server.
[027] The hardware processor 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the hardware processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204.
[028] The memory 204 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, the memory 204 includes a plurality of modules 206 and a repository 210 for storing data processed, received, and generated by one or more of the modules 206 and the data visualization switching unit 220. The modules 206 may include routines, programs, objects, components, data structures, and so on, which perform particular tasks or implement particular abstract data types.
[029] The memory 204 also includes module(s) 206 and a data repository 210. The module(s) 206 include programs or coded instructions that supplement applications or functions performed by the system 100 for contextual data visualization. The modules 206, amongst other things, can include routines, programs, objects, components, and data structures, which perform particular tasks or implement particular abstract data types. The modules 206 may also be used as signal processor(s), state machine(s), logic circuitries, and/or any other device or component that manipulates signals based on operational instructions. Further, the modules 206 can be used by hardware, by computer-readable instructions executed by a processing unit, or by a combination thereof. The modules 206 can include various sub-modules (not shown). The modules 206 may include computer-readable instructions that supplement applications or functions performed by the visualization server 104 for contextual data visualization.
[030] The data repository 210 may include received queries 212, a session parameters database 214, a metadata database 216 and other data 218. Further, the other data 218, amongst other things, may serve as a repository for storing data that is processed, received, or generated as a result of the execution of one or more modules in the module(s) 206 and the modules associated with the data visualization switching unit 220.
[031] Although the data repository 210 is shown internal to the visualization server 104, it will be noted that, in alternate embodiments, the data repository 210 can also be implemented external to the visualization server 104, where the data repository 210 may be stored within a database (not shown in FIG. 1) communicatively coupled to the visualization server 104. The data contained within such an external database may be periodically updated. For example, new data may be added into the database (not shown in FIG. 1) and/or existing data may be modified and/or non-useful data may be deleted from the database (not shown in FIG. 1). In one example, the data may be stored in an external system, such as a Lightweight Directory Access Protocol (LDAP) directory or a Relational Database Management System (RDBMS). In another embodiment, the data stored in the data repository 210 may be distributed between the visualization server 104 and the external database (not shown).
[032] FIG. 3 is an exemplary functional block diagram of a mobile device 106 of the system for contextual data visualization in both MR and AR, according to some embodiments of the present disclosure. Now referring to FIG. 3, the mobile device 106 includes a presentation layer 310, a web service broker 312 to provide VPN or Wi-Fi service, a rendering engine 314, a Vuforia engine 318, a security layer 316 to provide security services to the user including user authentication, a device target database 320 for storing image markers for faster recognition and rendering, and a native device features module 322 to support the magnet button present in some VR headsets, enabling the user to tap on the screen. Further, the native device features module 322 is configured to provide a control interface on the mobile screen enabling the user to tap, pinch in, pinch out, select, zoom in and zoom out of 3D visuals. The rendering engine 314 provides a system to understand the 3D objects and charts and project them in the presentation layer of the mobile device interface. The Vuforia engine 318 provides a system to identify the image marker in the real world from the camera feeds of the mobile device and accordingly map the right virtual objects to be superimposed on the real world image on the mobile interface. In an embodiment (during data visualization in MR), the mobile device 106 is placed inside the MR device and connected to the network 112. In another embodiment (during data visualization in AR), the mobile device is standalone and directly connected to the network 112.
[033] FIG. 4 is an exemplary functional block diagram of a middleware integration server of the system for contextual data visualization in both MR and AR, according to some embodiments of the present disclosure. Now referring to FIG. 4, the middleware integration server includes an authentication and authorization module 402, a session management module 404, a set of parsers 406, a business layer 408 and analytical and reporting services 410. The authentication and authorization module 402 identifies whether the user has login rights to visualize the data and whether the user has access rights on the data that has been requested for data visualization. The session management module 404 provides a plurality of session parameters associated with the user's interaction, from logging into the system to interacting with the virtual 3D objects. The plurality of session parameters includes a login credential, the user's requests in natural language, the user's requests in terms of interactions on the data visualization 3D charts, the mode of data visualization that the user is currently in, and the size of the data requested by the user. The set of parsers 406 parse the data in JSON format available from the backend databases. The business layer 408 implements business logic based on the user profile and the user request, and the analytical and reporting services 410 enable the integration of the visualization server with APIs.
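By way of illustration, the session record maintained by the session management module 404 could take a shape like the following; the field names here are assumptions made for the sketch, not taken from the actual implementation.

```python
# Illustrative session record for the session management module 404.
# All key names are hypothetical; only the categories of information
# (credential, NL request, interactions, current mode, data size) come
# from the description above.
import json

session_record = {
    "login_credential": "ops_manager_01",
    "nl_request": "Show me performance parameters of supply chain network",
    "interactions": ["drill_down:region", "filter:product=XYZ"],
    "visualization_mode": "AR",        # mode the user is currently in
    "requested_data_size_mb": 4.2,     # size of the data requested
}

# The middleware exchanges this state with the client as JSON.
payload = json.dumps(session_record)
assert json.loads(payload)["visualization_mode"] == "AR"
```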
[034] FIG. 5 illustrates an exemplary layered architecture of various components of the system for data visualization in both Mixed Reality (MR) and Augmented Reality (AR), in accordance with some embodiments of the present disclosure. Now referring to FIG. 5, multiple views are illustrated, including a business view, a data view, an application view and a technology view. In the business view, the user visualizes 3D data visualization reports/dashboards either on image markers or on a plane surface. The data view represents the type of data required to be integrated for AR and MR applications in data visualizations, and the storage options. The data view includes an image marker database and user data. The image marker database further includes device based data and cloud based data. The user data includes an SQL database, an IoT devices database, and CSV (Comma Separated Values)/JSON (JavaScript Object Notation) files. The SQL database tables include all data needed to plot graphs and visualizations in AR. Here, the SQL data is imported into Unity 3D in the form of JSON files (a sketch of this export step is given below). The applications view represents the modes of interaction for the business users to interact with the reports/dashboards in AR or MR. Here, voice and gesture based applications are integrated to provide an intuitive user experience of interaction while keeping the solution operable with low cost frugal HMDs. The applications view further includes a networking module, a content authorization module, a User Experience (UX) module and a mode of input module. The mode of input module provides the modes of giving input to the system, including touch inputs and Bluetooth/motion controllers. The technology view represents the technologies used to develop this solution of 3D data visualization in AR and MR. Further, the technology view includes a cloud image database management module, the Vuforia engine module and a Unity 3D module. The cloud image database management module further includes a web app and the remaining APIs (Application Programming Interfaces). The Vuforia engine module further includes an image/plane tracking module and a visual rendering on target module. The Unity 3D module further includes a 3D visuals creation module and a scripting module.
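The SQL-to-JSON export mentioned above could look like the following minimal sketch; sqlite3 is used here only as a stand-in backend, and the table and column names are hypothetical.

```python
# Minimal sketch of the SQL-to-JSON export consumed by the Unity 3D plotters.
# Assumptions: sqlite3 as the backend; a hypothetical sensor_readings table.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sensor_readings (machine TEXT, metric TEXT, value REAL)")
conn.executemany(
    "INSERT INTO sensor_readings VALUES (?, ?, ?)",
    [("furnace_1", "temperature_c", 1490.0), ("furnace_1", "pressure_bar", 2.1)],
)

rows = conn.execute("SELECT machine, metric, value FROM sensor_readings").fetchall()
records = [{"machine": m, "metric": k, "value": v} for (m, k, v) in rows]

# Unity-side plotters would read files like this to draw the 3D charts.
with open("sensor_readings.json", "w") as fh:
    json.dump(records, fh, indent=2)
```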
[035] FIG. 6 illustrates an exemplary use case for contextual data visualization in MR and AR, in accordance with some embodiments of the present disclosure. As depicted in FIG. 6, the user is in a remote location and wants to monitor the operations of a plant. Here, at step 602, the user is authenticated by the chatbot. At step 604, the user requests the system/bot to show all available plant analysis reports. At step 606, the system/bot presents the user's options of (a) plant simulation in AR, (b) plant operations dashboard and (c) plant performance dashboard. At step 608, the user requests the system/bot to show the plant simulation in AR, and the plant simulation is shown to the user in AR. In one embodiment, at step 610, the user requests the system to show the simulation in MR. In another embodiment, at step 610, the system automatically decides to switch the simulation from AR to MR based on the plurality of session parameters and a plurality of metadata associated with the current visualization mode. Further, at step 612, the system carries all the context of the user's interactions from AR over to MR. At step 614, the user accesses the dashboard in MR for data visualization and provides gesture based interaction with the data reports.
[036] In an embodiment, since the user has access to a mobile device, the user commands the bot, using the mobile device with voice or text, to open the plant monitoring reports. Initially, the user uses AR technology on the mobile device to view the plant at various levels on the surface in front of the user. Here, the information from sensors is made available in real time to display the current values of interest from the sensors. Further, the user utilizes the available options in AR to monitor the performance of plant machinery (e.g., a furnace or a warehouse) or processes (e.g., a supply chain) using the data visualization charts built in AR. Here, for each chart, the application connects to the dataset in the backend server to fetch and refresh the data in the charts, as sketched below.
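A plotter in the sense of the glossary (one GET request per chart) could be sketched as follows; the endpoint path and response shape are assumptions for illustration, not the actual backend API.

```python
# Hedged sketch of a chart plotter's data refresh: one GET per chart.
# The /charts/<id>/data endpoint is hypothetical.
import json
import urllib.request

def fetch_chart_data(base_url: str, chart_id: str) -> list:
    """GET the latest dataset for one chart from the backend server."""
    url = f"{base_url}/charts/{chart_id}/data"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Each plotter would call fetch_chart_data periodically and redraw its graph
# with the returned points.
```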
[037] In an embodiment, the restrictions of the screen size of the mobile device do not allow the user to view or interact with the dashboards or the 3D visuals of the plant. In such cases, the user switches to the Mixed Reality (MR) mode by inserting the mobile device into the Head Mounted Device (HMD). Here, with a field of vision above 90 degrees, the user is able to visualize the plant and its processes on a much broader scale. Further, the user has his/her hands free to interact with the dashboards/reports for deriving insights. In the MR mode of data visualization, the user can interact with reports/dashboards not only using gesture controls enabled by a custom built gesture recognition engine but also by using voice commands for applying filters and drill down/up functions.
[038] In an embodiment, once the user is back on the move after deriving insights in MR mode using an HMD, the user can continue monitoring other parameters on his/her mobile phone in AR mode. The transition from AR on the mobile to MR with the HMD and back to AR on the mobile is seamless for the user. In the entire journey of transition from AR to MR and back from MR to AR, the user at no point in time loses the context of the real world around him/her.
[039] The data visualization switching unit 220 of the visualization server 104 can be configured to receive a request by a chatbot, wherein the request comprises a query for a data visualization. Here, the query may be a Natural Language Processing (NLP) query. For example, an operations manager of an organization who wants to monitor the supply chain of a product XYZ inputs the request to the bot as "Show me performance parameters of supply chain network for product XYZ".
[040] Further, the data visualization switching unit 220 of the visualization server 104 can be configured to analyze the query to identify a first data visualization mode based on a plurality of visualization parameters associated with the request by utilizing a memory based reasoning technique. The plurality of visualization parameters includes a profile associated with the user, a real time context and a visualization profile. The visualization profile includes a visualization mode, visualization chart objects and visualization graphics associated with the query. Here, the memory based reasoning technique provides a best fit visualization mode based on the user profile and by utilizing the other visualization parameters. The memory based reasoning technique calculates the Euclidean distance between the stored samples of user requests, each associated with a corresponding visualization mode, and the incoming request.
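The nearest-neighbour lookup implied by this Euclidean-distance step can be sketched as follows, assuming each stored user request has already been encoded as a numeric feature vector (the encoding itself is not specified in this disclosure).

```python
# Minimal sketch of the memory based reasoning step: pick the visualization
# mode of the stored sample nearest (in Euclidean distance) to the request.
# The feature encoding of a request is an assumption.
import math
from typing import List, Sequence, Tuple

def pick_mode(request_vec: Sequence[float],
              memory: List[Tuple[Sequence[float], str]]) -> str:
    """Return the mode of the stored sample closest to the request vector."""
    def dist(a: Sequence[float], b: Sequence[float]) -> float:
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, best_mode = min(((dist(request_vec, vec), mode) for vec, mode in memory),
                       key=lambda pair: pair[0])
    return best_mode

memory = [([0.9, 0.1, 0.3], "AR"), ([0.2, 0.8, 0.7], "MR")]
print(pick_mode([0.8, 0.2, 0.4], memory))   # -> AR
```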
[041] Further, the data visualization switching unit 220 of the visualization server 104 can be configured to display the data visualization in the first data visualization mode in a display dashboard. The first and second data visualization modes may each be at least one of the AR based visualization mode or the MR based visualization mode.
[042] Further, the data visualization switching unit 220 of the visualization server 104 can be configured to extract a set of session parameters associated with the first data visualization mode, wherein the set of session parameters comprises a plurality of filters applied on the data, an interaction on the data, information associated with the current display dashboard and a dimension associated with the display dashboard.
[043] Further, the data visualization switching unit 220 of the visualization server 104 can be configured to simultaneously extract a plurality of metadata associated with the first data visualization mode from at least one of the display dashboard and the visualization server. The plurality of metadata includes a model of the mobile device used for viewing the data visualization, an image marker, the number of dashboards required for visualization, the size and granularity of data in the first data visualization mode, the size and granularity of data in a second data visualization mode, a graphical representation in the first data visualization mode and a graphical representation in the second data visualization mode.
[044] In an embodiment, the extraction of the set of session parameters and the extraction of the plurality of metadata are performed by utilizing the middleware integration server (refer to FIG. 4). Here, the parameters are extracted by parsing the information available as JSON files by utilizing the plurality of parsers associated with the middleware integration server, as sketched below.
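For illustration, the parsing step could combine the two JSON payloads into a single context object used for the mode switch; the key names are assumptions.

```python
# Sketch of the middleware parsing step: session parameters and metadata
# arrive as JSON and are merged into one context object (key names assumed).
import json

def parse_context(session_json: str, metadata_json: str) -> dict:
    """Combine session parameters and visualization metadata into one context."""
    return {
        "session": json.loads(session_json),    # filters, interactions, dashboards
        "metadata": json.loads(metadata_json),  # device model, image marker, sizes
    }

ctx = parse_context(
    '{"filters": ["product=XYZ"], "dashboard_count": 3}',
    '{"device_model": "generic-android", "image_marker": "marker_01"}',
)
```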
[045] Further, the data visualization switching unit 220 of the visualization server 104 can be configured to seamlessly display the data visualization in a second data visualization mode based on the set of session parameters and the plurality of metadata associated with the first data visualization mode. The data visualization in AR is performed on the mobile device, and the data visualization in MR is performed by placing the mobile device into an MR device. For example, in an embodiment with major time to be spent with AR and minor time to be spent with MR (for cases where data is to be analyzed for over 15 minutes), the system provides the virtual data visualization charts in AR on the mobile device without using an HMD. In an embodiment, if the user wants to interact with the dashboards more intuitively with gestures and pointers using his/her hands, the user can insert the mobile device inside the VR box (Mixed Reality device) for those brief moments and use the same application in MR. The plurality of software, for example, Unity 3D and Vuforia, available in the middleware integration server 108 (FIG. 1), converts the visualization from AR to MR and vice versa seamlessly by utilizing the set of session parameters and the metadata associated with the corresponding mode of visualization. Unity 3D is a game engine used to import 3D models and to animate the 3D models. In an embodiment, Vuforia with inbuilt Unity 3D is utilized to provide a platform for enabling AR. Further, Vuforia provides an option to use images, 3D models, cylindrical objects, cuboid objects (like cereal boxes) and a ground plane as the platform/marker to showcase the 3D visualization.
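The handoff itself amounts to replaying the captured context in the target mode. The sketch below assumes a hypothetical renderer interface; in the disclosure this role is played by Unity 3D and Vuforia on the device, whose actual APIs are not reproduced here.

```python
# Hedged sketch of the AR-to-MR (or MR-to-AR) handoff: replay the captured
# context in the target mode. `renderer` is a hypothetical interface, not
# the actual Unity 3D / Vuforia API.
def switch_mode(context: dict, target_mode: str, renderer) -> None:
    renderer.set_mode(target_mode)                        # "AR" -> "MR" or back
    for f in context["session"].get("filters", []):
        renderer.apply_filter(f)                          # reapply saved filters
    renderer.load_dashboards(context["session"].get("dashboard_count", 1))
    renderer.anchor_to(context["metadata"].get("image_marker"))
```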
[046] In an embodiment, when the number of dashboards used by the user for visualization exceeds a threshold, such that the user is unable to visualize the detailed view, the system automatically switches from AR to MR and vice versa. In another embodiment, the user decides to switch from AR to MR and vice versa.
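A minimal form of that automatic decision rule is shown below; the threshold value is an assumption.

```python
# Illustrative auto-switch rule; the threshold of 3 dashboards is an assumption.
def should_switch_to_mr(dashboard_count: int, threshold: int = 3) -> bool:
    """Switch to MR when more dashboards are open than fit comfortably in AR."""
    return dashboard_count > threshold
```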
[047] FIG. 7 is an exemplary flow diagram for a processor implemented method for contextual data visualization in MR and AR using the system of FIG. 1, in accordance with some embodiments of the present disclosure. The method 700 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types. The method 700 may also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communication network. The order in which the method 700 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 700, or an alternative method. Furthermore, the method 700 can be implemented in any suitable hardware, software, firmware, or combination thereof.
[048] At 702, the visualization server 104 receives, by the one or more hardware processors, the request by the chatbot, wherein the request includes a query for a data visualization. At 704, the visualization server 104 analyzes, by the one or more hardware processors, the query to identify a first data visualization mode based on a plurality of visualization parameters associated with the request by utilizing a memory based reasoning technique. The plurality of visualization parameters includes the profile associated with the user, the real time context and the visualization profile. The visualization profile includes the visualization mode, the visualization chart objects and the visualization graphics associated with the query. At 706, the visualization server 104 displays, by the one or more hardware processors, the data visualization in the first data visualization mode in a display dashboard. The first and second data visualization modes may each be at least one of the AR based visualization mode or the MR based visualization mode. The data visualization in AR is performed on the mobile device, and the data visualization in MR is performed by placing the mobile device into an MR device. At 708, the visualization server 104 extracts, by the one or more hardware processors, the set of session parameters associated with the first data visualization mode, wherein the set of session parameters includes a plurality of filters applied on the data, an interaction on the data, information associated with the current display dashboard and a dimension associated with the display dashboard. The interaction on the data comprises at least a drill down or drill up. At 710, the visualization server 104 simultaneously extracts, by the one or more hardware processors, the plurality of metadata associated with the first data visualization mode from at least one of the display dashboard and the visualization server. The plurality of metadata comprises a model of the mobile device used for viewing the data visualization, an image marker, the number of dashboards required for visualization, the size and granularity of data in the first data visualization mode, the size and granularity of data in a second data visualization mode, a graphical representation in the first data visualization mode and a graphical representation in the second data visualization mode. At 712, the visualization server 104 seamlessly displays, by the one or more hardware processors, the data visualization in a second data visualization mode based on the set of session parameters and the plurality of metadata associated with the first data visualization mode.
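Stitching the sketches above together, the control flow of steps 702-712 could be expressed as follows. Every helper here is a stub or a hypothetical interface used only to show the sequence of steps; none of it is the actual implementation.

```python
# Control-flow sketch of steps 702-712, reusing pick_mode and switch_mode
# from the earlier sketches. encode() and the two extractors are trivial
# stubs (hypothetical), standing in for components not specified here.
def encode(query: str) -> list:
    # Hypothetical featurization of the NL request.
    return [float(len(query) % 7), float(query.count(" "))]

def extract_session_parameters(renderer) -> dict:
    return {"filters": [], "dashboard_count": 1}       # step 708 (stub)

def extract_metadata(renderer) -> dict:
    return {"device_model": "generic-android"}         # step 710 (stub)

def handle_request(query: str, memory, renderer) -> None:
    vec = encode(query)                                # 702: request via chatbot
    first_mode = pick_mode(vec, memory)                # 704: memory based reasoning
    renderer.set_mode(first_mode)                      # 706: display in first mode
    context = {"session": extract_session_parameters(renderer),
               "metadata": extract_metadata(renderer)}
    second_mode = "MR" if first_mode == "AR" else "AR"
    switch_mode(context, second_mode, renderer)        # 712: seamless redisplay
```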
[049] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[050] The embodiments of the present disclosure herein address the unresolved problem of contextual data visualization in MR and AR. Here, the system provides interactivity features on visualization reports and dashboards, including filters, voice commands for drill down/up, point markers, zoom, line markers, etc., using the concept of mixed reality (augmented reality) in data visualization. Further, the system provides interaction on reports with custom built hand gestures. Furthermore, the system enables collecting data from multiple sensors (IoT devices) and displaying it virtually in real time. The system integrates with a voice bot to access the reports in VR for authentication and authorization, and the reports are stored on the web as well as in the mobile application based on user scenarios.
[051] Further, the system and method for contextual data visualization in MR and AR provide a low cost data visualization solution offering both MR and AR options for data visualization, which can be chosen by the user as and when required. The advantages of the proposed system are described below:
1. User Experience Differentiator – The existing data visualization solutions that provide only MR based solutions require the user to wear the head mounted device continuously while he/she is working on the data of interest. In Augmented Reality, the virtual objects are available to be viewed on mobile devices without the need for any head mounted device, but this may not provide as completely immersive an experience as MR. It is to be noted that, in terms of user experience, no head mounted device can be worn by a user for more than 10 minutes, beyond which it causes nausea and dizziness. Hence, it is important to switch between the MR and AR options based on the need. The system enables the user to seamlessly switch between the two based on the actual priority at the time. This can be explained with the following two scenarios.
a. Major time with AR and minor with MR (for cases where data is to be analyzed for over 15 minutes) – The HoloLens based existing solution cannot be used continuously, as it works only in Mixed Reality, for which the headset needs to be worn, and there are chances the user might trip if it is used while walking during an inspection. However, the present disclosure caters to both AR and MR; hence, the user can still see the virtual data visualization charts in AR on his/her mobile without the use of an HMD. In case, at some stage, the user wants to interact with the dashboards more intuitively with gestures and pointers using his/her hands, he/she can insert the mobile inside the VR box for those brief moments and use the same application in MR mode, thereby providing the advantages of both scenarios. This cannot be achieved in a HoloLens based application of MR.
b. Minor time with AR and major with MR (for cases where data is to be visualized for less than 3 minutes) – Here, the existing HoloLens based MR solution has the limitation that the virtual object cannot be closer than 2 meters, and hence gesture based actions such as pointing to an object become difficult. However, the MR solution proposed by the method is based on the user's mobile camera and has a custom gesture based solution (details mentioned below) with no such limitations. The user can press the buttons available as virtual objects close to his/her eyes. Also, he/she can change the mode to Augmented Reality in this case in the proposed solution.
c. Existing HoloLens based solutions support resolutions only up to 720p, whereas the proposed system and method provide higher resolution, as most current smartphones have 1080p displays, with the higher end of the spectrum ranging to 2160p (4K).
d. The existing HoloLens solution has a narrow Field of View (FOV): the user can only see the visualization in front of his/her eyes, in an area spanning 30-40 degrees.
2. Gesture based control – The gesture based component of the proposed system comes at a negligible cost overhead. The proposed gesture based interaction uses sophisticated computer vision libraries like Theano and TensorFlow to map the real world location of the finger onto the phone screen and facilitate a virtual touch-like input system. Thus, there is an increase in the accuracy of interaction with the reports.
3. Data visualization, as distinct from mere display of information to comprehend the dataset – The Trimble Connect solution does not have a built in library of reports/graphs/charts for data visualization. The proposed system provides a library of charts to visualize the data in backend database systems, with simple charts such as bar charts, summary walls and pie charts, and with advanced visualization charts such as force-directed graphs, geographical maps and collapsible tree maps. In brief, connecting to a sensor to get the data and displaying it is different from connecting to databases and visualizing the dataset in real time. For such scenarios, the proposed solution's interactivity features of applying filters and drilling down and up come into the picture. For enabling these interactions, the system embeds a gesture recognition component, which is inbuilt in HoloLens solutions but not available in low-end frugal devices, as they work using a single camera on the phone.
4. Voice based command – Not only can these virtual reports be controlled with gestures, but they can also be drilled down into and filtered using voice commands, which is different from merely accessing the reports using voice.
5. The proposed method and system enable low-cost frugal devices, using an existing smartphone with a VR Box or Google Cardboard, which are affordable. This has a huge impact on the adoption of the solution, as everyone from the rank and file to senior executives can be enabled with contextual AR/MR reports with this solution.
[052] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.
[053] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[054] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[055] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e. non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[056] It is intended that the disclosure and examples be considered as exemplary only, with a true scope and spirit of disclosed embodiments being indicated by the following claims.
CLAIMS:
1. A processor implemented method, comprising:
receiving, via one or more hardware processors, a request by a chatbot, wherein the request comprises a query for a data visualization;
analyzing, via the one or more hardware processors, the query to identify a first data visualization mode based on a plurality of visualization parameters associated with the request by utilizing a memory based reasoning technique;
displaying the data visualization in the first data visualization mode in a display dashboard;
extracting, via the one or more hardware processors, a set of session parameters associated with the first data visualization mode, wherein the set of session parameters comprises a plurality of filters applied on a data, an interaction on the data, an information associated with the current display dashboard and a dimension associated with the display dashboard;
simultaneously extracting, via the one or more hardware processors, a plurality of metadata associated with the first data visualization mode from at least one of the display dashboard and a visualization server; and
seamlessly displaying, via the one or more hardware processors, the data visualization in a second data visualization mode based on the set of session parameters and the plurality of metadata associated with the first data visualization mode.
2. The processor implemented method of claim 1, wherein the query is a Natural Language Processing (NLP) query.
3. The processor implemented method of claim 1, wherein the interaction on the data comprises at least a drill down or drill up.
4. The processor implemented method of claim 1, wherein the plurality of metadata comprises a model of the mobile device used for viewing the data visualization, an image marker, number of dashboards required for visualization, size and granularity of data in the first data visualization mode, size and granularity of data in a second data visualization mode, a graphical representation in the first data visualization mode and a graphical representation in the second data visualization mode.
5. The processor implemented method of claim 1, wherein the plurality of visualization parameters comprises a profile associated with the user, a real time context and a visualization profile.
6. The processor implemented method of claim 5, wherein the visualization profile comprises a visualization mode, visualization chart objects and visualization graphics associated with the query.
7. The processor implemented method of claim 1, wherein the first and second data visualization modes are each at least one of an Augmented Reality (AR) based visualization mode or a Mixed Reality (MR) based visualization mode.
8. The processor implemented method of claim 1, wherein the data visualization in AR is performed in the mobile device and the data visualization in MR is performed by placing the mobile device into an MR device.

9. A system (100), the system (100) comprising:
a visualization server (104), wherein the visualization server (104) comprising:
at least one memory (204) storing programmed instructions;
one or more hardware processors (202) operatively coupled to the at least one memory, wherein the one or more hardware processors (202) are capable of executing the programmed instructions stored in the at least one memory (204); and
a data visualization switching unit (220), wherein the data visualization switching unit (220) is configured to:
receive, a request by a chatbot, wherein the request comprises a query for a data visualization;
analyze, the query to identify a first data visualization mode based on a plurality of visualization parameters associated with the request by utilizing a memory based reasoning technique;
display, the data visualization in the first data visualization mode in a display dashboard;
extract, a set of session parameters associated with the first data visualization mode, wherein the set of session parameters comprises a plurality of filters applied on a data, an interaction on the data, an information associated with the current display dashboard and a dimension associated with the display dashboard;
simultaneously extract, a plurality of metadata associated with the first data visualization mode from at least one of the display dashboard and a visualization server; and
seamlessly display, the data visualization in a second data visualization mode based on the set of session parameters and the plurality of metadata associated with the first data visualization mode.
10. The system as claimed in claim 9, wherein the query is a Natural Language Processing (NLP) query.
11. The system as claimed in claim 9, wherein the interaction on the data comprises at least a drill down or drill up.
12. The system as claimed in claim 9, wherein the plurality of metadata comprises a model of the mobile device used for viewing the data visualization, an image marker, number of dashboards required for visualization, size and granularity of data in the first data visualization mode, size and granularity of data in a second data visualization mode, a graphical representation in the first data visualization mode and a graphical representation in the second data visualization mode.
13. The system as claimed in claim 9, wherein the plurality of visualization parameters comprises a profile associated with the user, a real time context and a visualization profile.
14. The system as claimed in claim 13, wherein the visualization profile comprises a visualization mode, visualization chart objects and visualization graphics associated with the query.
15. The system as claimed in claim 9, wherein the first and second data visualization modes are each at least one of an Augmented Reality (AR) based visualization mode or a Mixed Reality (MR) based visualization mode.
16. The system as claimed in claim 9, wherein the data visualization in AR is performed in the mobile device and the data visualization in MR is performed by placing the mobile device into an MR device.

Documents

Application Documents

# Name Date
1 201821031844-STATEMENT OF UNDERTAKING (FORM 3) [24-08-2018(online)].pdf 2018-08-24
2 201821031844-PROVISIONAL SPECIFICATION [24-08-2018(online)].pdf 2018-08-24
3 201821031844-FORM 1 [24-08-2018(online)].pdf 2018-08-24
4 201821031844-DRAWINGS [24-08-2018(online)].pdf 2018-08-24
5 201821031844-Proof of Right (MANDATORY) [03-10-2018(online)].pdf 2018-10-03
6 201821031844-FORM-26 [04-10-2018(online)].pdf 2018-10-04
7 201821031844-ORIGINAL UR 6(1A) FORM 1 & FORM 26-091018.pdf 2019-02-15
8 201821031844-FORM 3 [04-07-2019(online)].pdf 2019-07-04
9 201821031844-FORM 18 [04-07-2019(online)].pdf 2019-07-04
10 201821031844-ENDORSEMENT BY INVENTORS [04-07-2019(online)].pdf 2019-07-04
11 201821031844-DRAWING [04-07-2019(online)].pdf 2019-07-04
12 201821031844-COMPLETE SPECIFICATION [04-07-2019(online)].pdf 2019-07-04
13 Abstract1.jpg 2019-09-06
14 201821031844-OTHERS [07-08-2021(online)].pdf 2021-08-07
15 201821031844-FER_SER_REPLY [07-08-2021(online)].pdf 2021-08-07
16 201821031844-COMPLETE SPECIFICATION [07-08-2021(online)].pdf 2021-08-07
17 201821031844-CLAIMS [07-08-2021(online)].pdf 2021-08-07
18 201821031844-FER.pdf 2021-10-18
19 201821031844-US(14)-HearingNotice-(HearingDate-11-06-2024).pdf 2024-05-14
20 201821031844-Correspondence to notify the Controller [06-06-2024(online)].pdf 2024-06-06
21 201821031844-FORM-26 [07-06-2024(online)].pdf 2024-06-07
22 201821031844-FORM-26 [07-06-2024(online)]-1.pdf 2024-06-07
23 201821031844-Written submissions and relevant documents [21-06-2024(online)].pdf 2024-06-21
24 201821031844-PatentCertificate29-08-2024.pdf 2024-08-29
25 201821031844-IntimationOfGrant29-08-2024.pdf 2024-08-29

Search Strategy

1 2021-03-0513-16-54E_05-03-2021.pdf

ERegister / Renewals

3rd: 29 Nov 2024 (from 24/08/2020 to 24/08/2021)
4th: 29 Nov 2024 (from 24/08/2021 to 24/08/2022)
5th: 29 Nov 2024 (from 24/08/2022 to 24/08/2023)
6th: 29 Nov 2024 (from 24/08/2023 to 24/08/2024)
7th: 29 Nov 2024 (from 24/08/2024 to 24/08/2025)
8th: 19 Aug 2025 (from 24/08/2025 to 24/08/2026)