
System And Method For Providing Automated Data Visualization And Modification

Abstract: A system and method for automated data visualization and modification of visualized data is disclosed. The present invention provides for identifying data points and data types associated with the selected data. Further, one or more visual representations for rendering the selected data are evaluated based on the identified data points and data types. Yet further, the selected data is optimally visualized based on an identification of a display device type. The present invention further provides for evaluating a theme of visual representations using real-time lighting information of the real-world environment based on identification of the display device type. The selected data is visualized using the evaluated theme and the evaluated one or more visual representations. Yet further, the present invention provides for identifying user actions and interpreting inputs from the identified user actions to update or modify the visualized data.


Patent Information

Application # 202041028124
Filing Date
02 July 2020
Publication Number
01/2022
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
dev.robinson@amsshardul.com
Parent Application

Applicants

Cognizant Technology Solutions India Pvt. Ltd.
Techno Complex, No. 5/535, Old Mahabalipuram Road, Okkiyam Thoraipakkam, Chennai 600 097, Tamil Nadu India

Inventors

1. Shubam Gupta
H. No. 802-A, Last Morh, Gandhi Nagar, Jammu, Jammu and Kashmir – 180004, India
2. Pooja Gupta
House No. 2736/4A, Street No. 5, Jammu Colony, Ludhiana, Punjab – 141003, India
3. Siddhartha Das
Flat A-402, Sumadhura Silver Ripples, Borewell Road, Nallurhalli, Whitefield, Bangalore, Karnataka – 560066, India
4. Nitesh Awasthi
203, Krishna Apartment, 113/104 Swaroop Nagar, Kanpur, Uttar Pradesh – 208002, India

Specification

We claim:
1. A method for providing automated data visualization and modification of visualized data, wherein the method is implemented by at least one processor (114) executing program instructions stored in a memory (116), the method comprising:
identifying, by the processor (114), one or more data points and one or more data types associated with a selected data using data analysis techniques;
evaluating, by the processor (114), one or more visual representations for rendering the selected data based on the one or more data points and the one or more data types using a mapping table generated based on a predetermined set of mapping criteria; and
visualizing, by the processor (114), the selected data based on the one or more visual representations via a display device (106) based on an identification of a type of the display device (106) using a data rendering technique.
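For illustration only, here is a minimal Python sketch of the pipeline recited in claim 1: profile the data types of a selection, look up a visual representation in a mapping table, and pick a renderer from the display device type. The mapping-table contents, the function names, and the use of pandas are assumptions of this sketch, not the claimed implementation.

```python
# Hypothetical sketch of claim 1; names and mapping criteria are illustrative.
import pandas as pd

# Hypothetical mapping table: data-type profile -> candidate visual representation.
CHART_MAP = {
    ("numeric", "numeric"): "scatter_plot",
    ("categorical", "numeric"): "bar_chart",
    ("datetime", "numeric"): "line_chart",
}

def profile_columns(df: pd.DataFrame) -> tuple:
    """Identify a coarse data type per column (the 'data types' of claim 1)."""
    kinds = []
    for dtype in df.dtypes:
        if pd.api.types.is_datetime64_any_dtype(dtype):
            kinds.append("datetime")
        elif pd.api.types.is_numeric_dtype(dtype):
            kinds.append("numeric")
        else:
            kinds.append("categorical")
    return tuple(kinds)

def choose_representation(df: pd.DataFrame, device_type: str) -> dict:
    """Look up a representation in the mapping table, then pick a renderer per device."""
    representation = CHART_MAP.get(profile_columns(df), "table")
    renderer = "3d_scene" if device_type in {"vr", "ar", "mr"} else "2d_canvas"
    return {"representation": representation, "renderer": renderer}

df = pd.DataFrame({"month": pd.to_datetime(["2020-01-01", "2020-02-01"]),
                   "sales": [10.0, 12.5]})
print(choose_representation(df, device_type="vr"))
# {'representation': 'line_chart', 'renderer': '3d_scene'}
```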
2. The method as claimed in claim 1, wherein identifying the one or more data points and the one or more data types associated with the selected data using the data analysis techniques comprises:
identifying a category of the selected data by analyzing and categorizing the selected data;
extracting one or more features from the selected data based on the identified category, wherein the one or more features are representative of measurements associated with the one or more data points to observe the selected data;
identifying the one or more data types of each of the one or more features using a datatype checker pipeline;
creating data structures by aligning the extracted features in columns; and
identifying the one or more data points associated with the selected data using data analysis techniques on the created data structures, wherein the pattern and correlation between the extracted features and the one or more data types are evaluated.
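A hedged sketch of the "datatype checker pipeline" and the pattern/correlation step of claim 2 follows. The progressive-parse heuristic and all names here are hypothetical; the claim does not specify how the checker is built.

```python
# Hypothetical datatype checker in the spirit of claim 2.
import pandas as pd

def check_datatype(values: pd.Series) -> str:
    """Try progressively stricter parses; fall back to 'text'."""
    if pd.to_numeric(values, errors="coerce").notna().all():
        return "numeric"
    if pd.to_datetime(values, errors="coerce").notna().all():
        return "datetime"
    return "text"

def analyze(features: dict) -> dict:
    """Align extracted features in columns and evaluate pairwise correlation
    between the numeric features (the pattern/correlation step of claim 2)."""
    df = pd.DataFrame(features)                      # columns = extracted features
    types = {name: check_datatype(df[name]) for name in df.columns}
    numeric = df[[n for n, t in types.items() if t == "numeric"]].apply(pd.to_numeric)
    return {"types": types, "correlation": numeric.corr()}

result = analyze({"price": ["10", "20", "30"], "units": ["1", "2", "3"],
                  "label": ["a", "b", "c"]})
print(result["types"])   # {'price': 'numeric', 'units': 'numeric', 'label': 'text'}
```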
3. The method as claimed in claim 1, wherein the visual representations may be selected from graphs, charts, tables, maps, infographics, and dashboards.
4. The method as claimed in claim 1, wherein the type of the display device (106) is identified from virtual reality displays, augmented reality displays, mixed reality displays and smartphones.
5. The method as claimed in claim 1, wherein visualizing the selected data using the evaluated one or more visual representations via the display device (106) comprises:
evaluating one or more 3D objects for visualizing the evaluated one or more visual representations on identification of a virtual reality display as the type of display device;
computing coordinates of the one or more 3D objects based on feature values associated with one or more features of the identified one or more data points;
evaluating a theme of the one or more 3D objects based on real-time lighting information of a real world environment, wherein the theme is representative of the color and texture of the one or more 3D objects; and
rendering the one or more 3D objects in the real world environment based on the computed coordinates and the evaluated theme, wherein the rendered one or more 3D objects are representative of the visualized selected data.
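As a rough illustration of claim 5's coordinate and theme steps, the sketch below lays out one hypothetical 3D bar per feature value and scales its color by an ambient-light estimate. The Bar3D structure, spacing constant, and shading rule are assumptions of this sketch.

```python
# Hypothetical coordinate/theme computation for claim 5.
from dataclasses import dataclass

@dataclass
class Bar3D:
    x: float; y: float; z: float   # world-space position of one 3D bar
    height: float                  # driven by the feature value
    color: tuple                   # from the evaluated theme

def layout_bars(values, spacing=0.3, ambient=1.0):
    """Map each feature value to a 3D object; scale color by the ambient-light
    level so the theme tracks the real-world lighting estimate (claim 5)."""
    peak = max(values)
    bars = []
    for i, v in enumerate(values):
        shade = min(1.0, 0.4 + 0.6 * ambient)        # brighter scenes, brighter bars
        bars.append(Bar3D(x=i * spacing, y=0.0, z=-1.0,
                          height=v / peak, color=(shade, shade * 0.8, 0.2)))
    return bars

for bar in layout_bars([10, 25, 40], ambient=0.7):
    print(bar)
```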
6. The method as claimed in claim 5, wherein estimating the real-time lighting information of the real world environment comprises:
retrieving information associated with the real world environment via one or more optical sensors, wherein one or more images of the real world environment are retrieved via the one or more optical sensors;
extracting a plurality of feature points from the retrieved images; and
estimating the real-time lighting information of the real world environment using the pixel intensity of the plurality of feature points based on a plurality of lighting parameters.
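Claim 6's lighting estimate could, for example, sample pixel intensity at detected feature points. The sketch below uses OpenCV corner detection as a stand-in feature-point extractor and reduces the claimed "plurality of lighting parameters" to a single ambient level, which is a simplification.

```python
# Hypothetical lighting estimate per claim 6: intensity at feature points.
import cv2
import numpy as np

def estimate_ambient(frame_bgr: np.ndarray) -> float:
    """Return a rough ambient-light level in [0, 1] from feature-point intensities."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=100,
                                      qualityLevel=0.01, minDistance=8)
    if corners is None:                        # featureless frame: use global mean
        return float(gray.mean()) / 255.0
    ys = corners[:, 0, 1].astype(int)          # row (y) coordinates of feature points
    xs = corners[:, 0, 0].astype(int)          # column (x) coordinates
    return float(gray[ys, xs].mean()) / 255.0
```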
7. The method as claimed in claim 1, wherein visualizing the selected data based on the evaluated one or more visual representations via the display device using the data rendering technique comprises:
evaluating one or more objects for visualizing the evaluated one or more visual representations on identification of a smartphone as the type of display device;
computing coordinates of the one or more objects based on feature values associated with one or more features of the identified one or more data points; and
rendering the one or more objects on the smartphone using the computed coordinates, wherein the rendered one or more objects are representative of the visualized selected data.
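A possible 2D rendering path for claim 7 is sketched below, with matplotlib standing in for whatever on-device renderer a smartphone deployment would actually use; the object coordinates follow from the feature values via the chart axes.

```python
# Hypothetical 2D rendering path for claim 7 (matplotlib as stand-in renderer).
import matplotlib.pyplot as plt

def render_2d(labels, values, path="chart.png"):
    """Rasterize the evaluated representation on a flat canvas for a smartphone."""
    fig, ax = plt.subplots(figsize=(4, 3))
    ax.bar(labels, values)            # coordinates derive from the feature values
    ax.set_ylabel("value")
    fig.savefig(path, dpi=150)
    plt.close(fig)

render_2d(["Q1", "Q2", "Q3"], [10, 25, 40])
```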
8. The method as claimed in claim 1, wherein the visualized data is analysed and modified based on user actions, wherein the user actions include hand gestures, touch inputs and voice commands, wherein the user actions are customizable.
9. The method as claimed in claim 8, wherein the user actions are identified via a plurality of sensing devices (110a), and the identified user actions are interpreted for analysis and modification of the visualized data using machine learning techniques.
10. The method as claimed in claim 8, wherein identification and interpretation of the hand gestures comprises:
retrieving image frames in real-time via one or more optical sensors;
processing and analyzing the retrieved image frames to detect hand gestures in the frames using a gesture recognition model; and
interpreting inputs from the hand gestures in the frames using the gesture recognition model.
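A skeletal real-time loop for claims 10 and 11 is sketched below. The OpenCV capture calls are real; GestureModel is a hypothetical placeholder for the trained gesture recognition model the claim assumes.

```python
# Hypothetical real-time gesture loop for claims 10-11.
import cv2

GESTURES = {"select", "move", "zoom_in", "zoom_out", "discard", "randomize"}

class GestureModel:                    # placeholder for a trained recognition model
    def predict(self, frame) -> str | None:
        ...                            # e.g., a classifier over hand landmarks

def gesture_events(camera_index: int = 0):
    """Yield interpreted gesture inputs from live image frames."""
    model = GestureModel()
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()     # retrieve an image frame in real time
            if not ok:
                break
            gesture = model.predict(frame)
            if gesture in GESTURES:
                yield gesture          # downstream code updates the visualization
    finally:
        cap.release()
```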
11. The method as claimed in claim 10, wherein the hand gestures include select, move, zoom in, zoom out, discard, and randomize.

12. The method as claimed in claim 8, wherein identification and interpretation of the voice commands comprises:
identifying speech input by processing and analyzing retrieved sound signals;
converting the identified speech input into text using at least one of: a speech recognition machine learning model and a speech-to-text module of the display device; and
interpreting the text using a voice interpretation machine learning model to identify the user's intent.
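For claim 12, assuming speech-to-text has already produced a transcript, intent interpretation might look like the sketch below; the rule-based mapper is a deliberately simple stand-in for the claimed voice interpretation machine learning model.

```python
# Hypothetical intent interpretation for claim 12 (rule-based stand-in).
INTENTS = {
    "zoom in": "zoom_in",
    "zoom out": "zoom_out",
    "select": "select",
    "discard": "discard",
}

def interpret(text: str) -> str | None:
    """Map transcribed speech to a visualization-modification intent."""
    lowered = text.lower()
    for phrase, intent in INTENTS.items():
        if phrase in lowered:
            return intent
    return None

print(interpret("Please zoom in on the chart"))   # zoom_in
```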
13. The method as claimed in claim 8, wherein identification and interpretation of the touch inputs comprises processing and analyzing touch user actions received via touch sensors; and interpreting inputs from the processed and analyzed touch user actions.
14. A system (108) for providing automated data visualization and modification of visualized data, interfacing with a display device (106), the system (108) comprising:
a memory (116) storing program instructions; a processor (114) configured to execute program instructions stored in the memory (116); and a visualization engine (112) executed by the processor (114), and configured to:
identify one or more data points and one or more data types associated with a selected data using data analysis techniques;
evaluate one or more visual representations for rendering the selected data based on the one or more data points and the one or more data types using a mapping table generated based on a predetermined set of mapping criteria; and
visualize the selected data based on the one or more visual representations via the display device (106) based on an identification of a type of the display device (106) using a data rendering technique.
15. The system (108) as claimed in claim 14, wherein the visualization engine (112) comprises a data access unit (118) executed by the processor (114), said data access unit (118) configured to authenticate users to retrieve the selected data from an external data source.
16. The system (108) as claimed in claim 14, wherein the visualization engine (112) comprises a data analysis unit (120) executed by the processor (114), said data analysis unit (120) configured to identify the one or more data points and the one or more data types in the selected data using data analysis techniques by:
identifying a category of the selected data by analyzing and categorizing the selected data;
extracting one or more features from the selected data based on the identified category, wherein the one or more features are representative of measurements associated with the one or more data points to observe the selected data;
identifying the one or more data types of each of the one or more features using a datatype checker pipeline;
creating data structures by aligning the extracted features in columns; and
identifying the one or more data points associated with the selected data using data analysis techniques on the created data structures, wherein the pattern and correlation between the extracted features and the one or more data types are evaluated.
17. The system (108) as claimed in claim 14, wherein the visual representations may be selected from graphs, charts, tables, maps, infographics, and dashboards.
18. The system (108) as claimed in claim 14, wherein the type of the display device (106) is identified from virtual reality displays, augmented reality displays, mixed reality displays and smartphones.
19. The system (108) as claimed in claim 14, wherein the visualization engine (112) comprises a data rendering unit (122) executed by the processor (114), said data rendering unit (122) configured to visualize the selected data using the evaluated one or more visual representations via the display device (106) using the data rendering technique, wherein the data rendering technique comprises:
evaluating one or more 3D objects for visualizing the evaluated one or more visual representations on identification of a virtual reality display as the type of display device;
computing coordinates of the one or more 3D objects based on feature values associated with one or more features of the identified one or more data points;
evaluating a theme of the one or more 3D objects based on real-time lighting information of a real world environment, wherein the theme is representative of color and texture; and
rendering the one or more 3D objects in the real world environment based on the computed coordinates and the evaluated theme, wherein the rendered one or more 3D objects are representative of the visualized selected data.
20. The system (108) as claimed in claim 19, wherein estimating the real-time lighting information of the real world environment comprises:
retrieving information associated with the real world environment via at least one of: a plurality of sensing devices (110a) and one or more optical sensors of the display device, wherein one or more images of the real world environment are retrieved via at least one of: the plurality of sensing devices (110a) and the one or more optical sensors of the display device;
extracting a plurality of feature points from the retrieved images; and
estimating the real-time lighting information of the real world environment using the pixel intensity of the extracted plurality of feature points based on a plurality of lighting parameters.
21. The system (108) as claimed in claim 14, wherein the visualization engine (112) comprises a data rendering unit (122) executed by the processor (114), said data rendering unit (122) configured to visualize the selected data based on the evaluated one or more visual representations via the display device (106) using the data rendering technique, wherein one or more objects for visualizing the evaluated one or more visual representations are evaluated on identification of a smartphone as the type of display device; further wherein coordinates of the one or more objects are computed based on feature values associated with one or more features of the identified one or more data points; and the one or more objects are rendered on the smartphone using the computed coordinates, wherein the rendered one or more objects are representative of the visualized selected data.
22. The system (108) as claimed in claim 14, wherein the visualization engine (112) comprises an action identification unit (124) executed by the processor, said action identification unit (124) configured to analyse and modify the visualized data using user actions, wherein the user actions include hand gestures, touch inputs and voice commands, further wherein the user actions are customizable.
23. The system (108) as claimed in claim 22, wherein the user actions are identified via a plurality of sensing devices (110a), and the identified user actions are interpreted for analysis and modification of the visualized data using machine learning techniques.
24. The system (108) as claimed in claim 22, wherein identification and interpretation of the hand gestures comprises:
retrieving image frames in real-time via at least one of: a plurality of sensing devices (110a) and one or more optical sensors of the display device (106);
processing and analyzing the retrieved image frames to detect hand gestures in the frames using a gesture recognition model; and
interpreting inputs from the hand gestures in the frames using the gesture recognition model.

25. The system (108) as claimed in claim 24, wherein the hand gestures include select, move, zoom in, zoom out, discard, and randomize.
26. The system (108) as claimed in claim 22, wherein identification and interpretation of the voice commands comprises:
identifying speech input by processing and analyzing sound signals retrieved via at least one of: a plurality of sensing devices (110a) and a microphone of the display device (106);
converting the identified speech input into text using at least one of: a speech recognition machine learning model and a speech-to-text module of the display device (106); and
interpreting the text using a voice interpretation machine learning model to identify the user's intent.

Documents

Application Documents

# Name Date
1 202041028124-ABSTRACT [27-06-2022(online)].pdf 2022-06-27
2 202041028124-CLAIMS [27-06-2022(online)].pdf 2022-06-27
3 202041028124-COMPLETE SPECIFICATION [27-06-2022(online)].pdf 2022-06-27
4 202041028124-FER_SER_REPLY [27-06-2022(online)].pdf 2022-06-27
5 202041028124-FORM 3 [27-06-2022(online)].pdf 2022-06-27
6 202041028124-Information under section 8(2) [27-06-2022(online)].pdf 2022-06-27
7 202041028124-FER.pdf 2022-02-02
8 202041028124-FORM 3 [09-11-2020(online)].pdf 2020-11-09
9 202041028124-Covering Letter [04-09-2020(online)].pdf 2020-09-04
10 202041028124-Request Letter-Correspondence [04-09-2020(online)].pdf 2020-09-04
11 202041028124-FORM 18 [06-07-2020(online)].pdf 2020-07-06
12 202041028124-Abstract_02-07-2020.jpg 2020-07-02
13 202041028124-COMPLETE SPECIFICATION [02-07-2020(online)].pdf 2020-07-02
14 202041028124-DRAWINGS [02-07-2020(online)].pdf 2020-07-02
15 202041028124-FORM 1 [02-07-2020(online)].pdf 2020-07-02
16 202041028124-POWER OF AUTHORITY [02-07-2020(online)].pdf 2020-07-02
17 202041028124-PROOF OF RIGHT [02-07-2020(online)].pdf 2020-07-02
18 202041028124-STATEMENT OF UNDERTAKING (FORM 3) [02-07-2020(online)].pdf 2020-07-02

Search Strategy

1 SearchHistory(59)E_01-02-2022.pdf