
A System And Method For Infrastructure Management Powered By Statistical Methods

Abstract: A system and method for infrastructure management powered by statistical methods. This invention relates to network management, and more particularly to Information Technology (IT) infrastructure management. A further object of the invention is to proactively capture user experiences, in the form of text or other feedback, which can be either structured or unstructured, as and when an application is being accessed by the user. The feedback collected from different sources is processed using statistical methods to extract information and provide a user perspective map for every application accessed. FIG. 4


Patent Information

Application #:
Filing Date: 19 May 2009
Publication Number: 15/2012
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Parent Application:

Applicants

Prithvi Information Solutions Limited
10Q3-A1, Cyber Towers, HITEC City, Madhapur, Hyderabad - 500081

Inventors

1. Shilpa Kadam
Sri Sai Towers, Flat 303, Plot 28 & 29, White Fields, Kondapur, Hyderabad
2. Praveen Koduru
5424 Forest Edge Dr, McDonald, PA 15057, USA
3. Dr Dakshina Murthy
H. No 35, Quiet Lands, Gachibowli, Hyderabad - 500 032
4. Ajay Dani
107, Shubham Apartments, Ameerpet, Hyderabad 500016

Specification

FIELD OF INVENTION

[001] This invention relates to network management, and more particularly to Information Technology (IT) infrastructure management.

BACKGROUND OF INVENTION

[002] IT infrastructure management service providers may align their ITSM (IT Service Management) with the ITIL guidance, which enables them to manage all elements of a customer's infrastructure through clearly defined service levels. In general, there are five key areas where the guidelines are best practiced: service strategy, service design, service transition, service operation and continual service improvement. The service provider has to manage several aspects of the network to ensure that issues within the network are resolved precisely and promptly.

[003] Service transition involves managing changes and the delivery of services required by the business. Changes in IT infrastructure should be handled in a controlled way by following standard methods and procedures so that changes are handled promptly. Due care is taken to balance the need for a change against the impact of damage due to the change. ITSM change management is responsible for assessing, approving, implementing, monitoring, reviewing and then closing change requests when changes arise from hardware, software, communication equipment or the support and maintenance of live systems. Changes are collected during beta releases, from customer requests or by conducting a survey of key users. A request for change may be due to inefficient performance of applications, software or hardware, due to failures, or because a new product has become available in that segment. The collected changes are then assessed for impact, cost, benefit and the risk involved.

[004] IT portfolio management is the systematic supervision of larger sets of projects and ongoing IT services, and comprises the application portfolio and the project portfolio.

[005] As businesses steadily evolve, there is a need to organize and reorganize IT services around changing business needs efficiently and cost-effectively. Several key measures have been identified, and data is collected to continuously improve services. In each of the above scenarios the focus is on associating the cost incurred with the ROI to the organization, using a macroscopic perspective of the collected information to take decisions.

[006] The performance of an ill-functioning system can be substantially enhanced by modifying one or more of its internal elements. Thus, a microscopic evaluation is an essential input for taking the right decisions during ITSM. However, so far no systematic framework has been developed for obtaining information about the individual elements.

[007] The experience of users, if captured systematically, can provide detailed information on the functioning of individual elements. The user data can include requests for changes, updates and difficulties encountered. However, one major obstacle is that no process has been defined to collect, analyze and display user preferences systematically.

[008] With the introduction of ITIL, which is heavily dependent on a centralized database, robust methods of implementing change management, configuration management and the like provide a user interface for data management. As there are several data sources from which data is collected, the administrative staff is required to use the user interface to avoid duplication of data records and to choose the most relevant records to be updated into the central database. Some methods allow for the integration of structured and unstructured human activities in the context of delivering one or more services. The systems and methods described improve efficiency and quality of service by increasing overall productivity and providing better accountability of the actual cost of delivery. However, there are no smart applications that can continuously collect and process user preferences instantly and enable the ITSM group to take better decisions.

[009] It has already been proposed that the introduction of ITIL is heavily dependent on a centralized database and on robust methods of implementing change management, configuration management and the like. Such methods provide a user interface for data management. As there are several data sources from which data is collected, the administrative staff is required to use the user interface to avoid duplication of data records and to choose the most relevant records to be updated into the central database. They also allow for the integration of structured and unstructured human activities in the context of delivering one or more services. The systems and methods described improve efficiency and quality of service by increasing overall productivity and providing better accountability of the actual cost of delivery.

[0010] The framework collects system information and displays metrics that propose modifications to improve efficiency. This can be useful for forming an end-to-end solution for better IT maintenance and portfolio spending in an organization. The performance of an enterprise is measured by measuring the performance of its individual elements. The performance characteristics are reliability, serviceability, availability and performance. This information is used in decision-making and planning activities.

OBJECT OF INVENTION

[0011] A further object of the invention is to proactively capture user experiences, in the form of text or other feedback, which can be either structured or unstructured, as and when an application is being accessed by the user. The feedback collected from different sources is processed using statistical methods to extract information and provide a user perspective map for every application accessed.

[0012] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.

STATEMENT OF INVENTION

[0013] The embodiments herein achieve a system for infrastructure management by employing statistical methods. Referring now to the drawings, and more particularly to FIGS. 1 through 6, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments.

BRIEF DESCRIPTION OF FIGURES

[0014] This invention is illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the various figures. The embodiments herein will be better understood from the following description with reference to the drawings, in which:

[0015] FIG. 1 is a system diagram that illustrates a framework to capture user data, according to embodiments as disclosed herein;

[0016] FIG. 2 is a flow chart depicting the automatic trigger, in accordance with the embodiments herein;

[0017] FIG. 3 is a flow chart depicting the flow of information when the server automatically triggers the software for user feedback, in accordance with the embodiments herein;

[0018] FIG. 4 is a flow chart depicting the flow of information when the user opts for a feedback, in accordance with the embodiments herein;

[0019] FIG. 5 illustrates a chirper module, in accordance with the embodiments herein; and

[0020] FIGS. 6a and 6b illustrate a sample administration dashboard page, in accordance with the embodiments herein.

DETAILED DESCRIPTION OF INVENTION

[0021] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.

[0022] A method and system framework powered by statistical techniques that enables organizations to increase productivity is disclosed. The framework has a software component which proactively captures user experiences, in the form of text or other feedback, which can be either structured or unstructured, as and when an application is being accessed by the user. For a few critical users, a speaker phone is connected to their respective systems so as to collect their voice feedback. A remote user will also be able to send his feedback through the components attached to the application at the site, through mobile or speaker phones. The feedback collected from different sources is processed using statistical methods to extract information and provide a user perspective map for every application accessed.

[0023] FIG. 1 is a system diagram that illustrates a framework to capture user data, according to embodiments as disclosed herein. The system framework comprises a plurality of users 101, servers 116, 117 and at least one administrator. The server 116 may comprise Network Components (NC), Application Servers (AS), a Database (DB) and Monitoring Applications (MA). The server 117 may comprise an analysis engine. The framework has a module which proactively captures user experiences, in the form of text or other feedback, which can be either structured or unstructured, as and when an application is being accessed by the user. When the user accesses any application, he can provide his feedback regarding the application. The server 116 may prompt the user for his feedback. The device used by the user may also be used to prompt the user for feedback, using an installed application or an applet (which may be obtained from the server 116). The AS manages all the processing functions of the data input by the user. The DB maintains a record of the user demography (such as functional role, experience and location), feedback-related information input by the user, date/time and so on. The DB also contains various rules for performing statistical analysis on the data and pattern recognition techniques. The rules can be user defined or can be obtained from a predefined database. The MA monitors the activity of different system components and tracks the functioning of the elements of the system. The monitoring tools collect access-related log data and provide a dashboard to display the performance of the system or network. The NC is used and performs operations as per the specification. The server 116 may send the data to the server 117 on receiving the feedback from the user, on receiving a request from the server 117, or at a pre-configured time or at specific intervals. The analysis engine present in the server 117 gathers the data input by the user and analyzes it for content. It is also very important that the analysis engine adapts and learns from the feedback that it receives as misclassification errors from the Administrator (or ITSM group) that interprets the final outputs. This adaptation phase maintains optimal performance. The analysis engine is of two types: a text analysis engine for analyzing text feedback and a voice analysis engine for analyzing voice feedback. The Administrator provides authorization and authenticates different processes. The feedback element maintains a record of the feedback input by the users. The feedback can be in the form of text data or voice data. In another embodiment herein, the analysis engine may be present in the server 116.
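As a purely illustrative sketch (not part of the original disclosure), the feedback record kept by the server 116 and its forwarding to the analysis engine on the server 117 could be modelled roughly as follows in Python; the class and method names (FeedbackRecord, Server116, analyze) are assumptions introduced only for this example.

# Hypothetical sketch of the feedback record and forwarding logic of FIG. 1.
# All names are illustrative assumptions, not part of the original disclosure.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class FeedbackRecord:
    user_id: str
    functional_role: str          # user demography kept in the DB
    experience_years: int
    location: str
    application: str              # application being accessed
    kind: str                     # "text" or "voice"
    payload: bytes                # typed text or recorded audio
    timestamp: datetime = field(default_factory=datetime.utcnow)

class Server116:
    """Holds NC, AS, DB and MA; batches feedback for the analysis engine."""
    def __init__(self, analysis_engine):
        self.analysis_engine = analysis_engine   # assumed to live on server 117
        self.pending: list[FeedbackRecord] = []

    def receive_feedback(self, record: FeedbackRecord) -> None:
        self.pending.append(record)              # DB keeps the record + metadata
        self.forward()                           # or defer to a scheduled interval

    def forward(self) -> None:
        # Server 116 may push on receipt, on request, or at configured times.
        while self.pending:
            self.analysis_engine.analyze(self.pending.pop(0))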

[0024] FIG. 2 is a flow chart depicting the flow of information when the server automatically triggers the software for user feedback, in accordance with the embodiments herein. IT infrastructure management service providers may align their ITSM (IT Service Management) with the ITIL guidance, which enables them to manage all elements of a customer's infrastructure through clearly defined service levels. The user inputs his feedback on different applications or software. The feedback can be in the form of text data or voice data. The input feedback data is sent to the server 116. The server 116 comprises a plug-in that maintains a data warehouse of usage statistics and dynamically created rules. Whenever there is a deviation from the browsing pattern of the user or of an average user, the user is prompted to provide feedback. For instance, if the user spends a lot of time on one page, more than the average that he or she usually spends, the server 116 will pop up a message asking the user to provide feedback. To promote the usage of the software, the server 116 also pops up a prompt based on the last trigger time. The threshold values are derived and computed from the usage statistics. The server 116 stores the user demography, application details and user roles. It also stores usage statistics such as average time spent per page or application, page content, visuals, links, fonts, layouts, action links, look and feel, and lower response time from the server, for standard application analytics plug-ins. In addition, the server 116 derives a usability quotient and ranks the page/application/user/network element. Based on the user roles and usage patterns, user preferences are weighted. This helps the administrators take better decisions. A check is made to see if the user spends more time. In case the user takes more time, the last trigger is checked to see if the elapsed time is more than the predefined threshold value. Further, a check is made to see if the average usage time is more than the threshold value. If the time is more, the analysis engine recognizes that feedback data is available. The nature of the feedback is studied as to whether the feedback is voice data or text data. If the data is voice data, the speech analysis engine is used to extract useful information from the speech content and the speakers to make strategic decisions. There are automatic techniques like Hidden Markov Models, or a combination of standard techniques, to perform speech interpretation. This application will employ methods such as conditional random fields to identify frequently used keywords and spoken phrases, detect the pitch of the voice and the day and time of day, and rate the user expression with a positive/neutral/negative satisfaction quotient. Based on these factors and the user roles, the voice data will be utilized for rating. So, when the user opts to speak, the voice is recorded and mapped with the application for which the voice was recorded. The output displayed to the administrator will be the application and a keyword/phrase map with appropriate weights. On the other hand, if the data is text data, the text data from the user is captured and stored in the data warehouse. Along with the metadata, the analysis engine retrieves the messages to analyze. Multiple filtering and data mining techniques are used to systematically clean the data, extract sentiment keywords and use Natural Language Processing to classify the text based on the application, and then provide the percentage of success in classification using different techniques. Analysis of grammatical sentence structures and phrases with parts of speech, entity tagging and categorization can be accomplished using Natural Language Processing (NLP) techniques. The steps involved are: extract application-specific features, extract sentiment from each phrase, and develop a database of applications, extracted features, identified sentiment phrases and associated patterns. Further, the analysis engine analyzes the data based on the predefined set of rules or the rules provided by the user. The role of the analysis engine is to collect user feedback and process the data using statistical techniques to provide user perceptions to the ITSM group. The structured data is analyzed to compute the quality of each page, the availability of hardware or software and so on, and the data is sent to the ITSM group for display. The various actions in method 200 may also be performed by the analysis engine in the server 117. The various actions in method 200 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 2 may be omitted.
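A minimal sketch of the automatic-trigger check described above might look as follows; the deviation factor, the minimum gap between prompts and the function name are assumptions chosen for illustration, with the actual thresholds being derived from the usage statistics as stated above.

# Illustrative sketch of the automatic feedback trigger of FIG. 2.
# Threshold values and names are assumptions, not the original design.
import time

def should_prompt_for_feedback(time_on_page: float,
                               avg_time_on_page: float,
                               last_trigger_ts: float,
                               deviation_factor: float = 1.5,
                               min_trigger_gap_s: float = 3600.0) -> bool:
    """Prompt when the user deviates from the usual browsing pattern and the
    last prompt is older than a threshold (in the framework, both values are
    derived from the usage-statistics warehouse)."""
    deviates = time_on_page > deviation_factor * avg_time_on_page
    gap_ok = (time.time() - last_trigger_ts) > min_trigger_gap_s
    return deviates and gap_ok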

[0025] FIG. 3 is a flow chart depicting the automatic trigger, in accordance with the embodiments herein. The feedback can be provided in two ways: either the server 116 performs an automatic trigger prompting the user to input his feedback, or the user himself can opt to provide his feedback. In the case of an automatic trigger, the trigger period can be fixed, random or event-driven to proactively collect user feedback. The server 116 prompts a user to provide his feedback regarding software or an application based on built-in rules. The rules can be based on usage pattern, time spent on a page, delayed response time on a page, application usability, availability, ease of access and the like when the user is trying to perform any action. The user enters a feedback message. The user can be given the choice to enter the feedback as a text message or a voice message. The data regarding the feedback details is collected and stored in the data warehouse along with the relevant metadata. The data is then analyzed based on the demographic profile of the user, the application type, the feedback, date/time and so on. The nature of the data is identified, i.e. whether the data is text or voice data. In case the data is text, a Natural Language Processing (NLP) technique processes the data. Multiple filtering and data mining techniques are used to systematically clean the data, extract sentiment keywords and use Natural Language Processing to classify the text based on the application, and then provide the percentage of success in classification using different techniques. Analysis of grammatical sentence structures and phrases with parts of speech, entity tagging and categorization can be accomplished using NLP techniques. The steps involved are: extract application-specific features, extract sentiment from each phrase, and develop a database of applications, extracted features, identified sentiment phrases and associated patterns. In case the data is voice, the speech analysis engine analyzes the data. The speech analysis engine is used to extract useful information from the speech content and the speakers to make strategic decisions. There are automatic techniques like Hidden Markov Models, or a combination of standard techniques, to perform speech interpretation. This application will employ methods such as conditional random fields to identify frequently used keywords and spoken phrases, detect the pitch of the voice and the day and time of day, and rate the user expression with a positive/neutral/negative satisfaction quotient. Based on these factors and the user roles, the voice data will be utilized for rating. So, when the user opts to speak, the voice is recorded and mapped with the application for which the voice was recorded. The output displayed to the administrator will be the application and a keyword/phrase map with appropriate weights. The data is then analyzed. The role of the analysis engine is to collect user feedback and process the data using statistical techniques to provide user perceptions to the ITSM group. It can be deployed at remote sites (e.g. ATM machines, POS terminals etc.). The various actions in method 300 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 3 may be omitted.
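The text-analysis steps listed above (cleaning, sentiment-keyword extraction, classification) could be arranged roughly as in the following sketch; the keyword lists and the simple keyword-count classifier are stand-ins for illustration only and are not the statistical or NLP methods actually claimed.

# Rough sketch of the text-feedback pipeline: clean, extract sentiment
# keywords, classify. Keyword lists and the scoring rule are illustrative only.
import re

POSITIVE = {"fast", "easy", "good", "helpful"}
NEGATIVE = {"slow", "crash", "error", "confusing"}

def clean(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

def sentiment_keywords(tokens: list[str]) -> dict[str, list[str]]:
    return {"positive": [t for t in tokens if t in POSITIVE],
            "negative": [t for t in tokens if t in NEGATIVE]}

def classify(tokens: list[str]) -> str:
    kw = sentiment_keywords(tokens)
    score = len(kw["positive"]) - len(kw["negative"])
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(classify(clean("The report page is slow and keeps throwing an error")))
# -> negative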

[0026] FIG. 4 is a flow chart depicting the flow of information when the user opts to give feedback, in accordance with the embodiments herein. In case the user opts to give his feedback whenever he desires, he can easily provide either text data or voice data. Some of the reasons for the user to do so may be that the user instantly wants to type a feedback message and submit it when he desires, or that the user thinks the accessibility of a specific application could be improved if the page design were changed, and the like. First, the user enters his feedback. The problem for which the feedback is given by the user is studied. The type of feedback is identified, i.e. either text or voice. A reference is made to the data warehouse to store the details regarding the feedback. In case the feedback is in the form of voice, the speech is analyzed. The speech analysis engine is used to extract useful information from the speech content and the speakers to make strategic decisions. There are automatic techniques like Hidden Markov Models, or a combination of standard techniques, to perform speech interpretation. This application will employ methods such as conditional random fields to identify frequently used keywords and spoken phrases, detect the pitch of the voice and the day and time of day, and rate the user expression with a positive/neutral/negative satisfaction quotient. Based on these factors and the user roles, the voice data will be utilized for rating. So, when the user opts to speak, the voice is recorded and mapped with the application for which the voice was recorded. Further, a check is made as to whether the data has been analyzed. The user voice details are analyzed for judging the sentiments of the user. The speech analysis engine processes the voice data using statistical and pattern recognition techniques. The inputs used for analysis are the content, the tone of the voice, the frequency of the terms used, the application and the date/time, so as to provide a positive/neutral/negative sentiment and related phrases along with the user map. If the system is unable to classify the sentiment, a message is sent to the Administrator for review. If the data collected is text, the NLP analysis engine processes the unstructured data using statistical techniques to provide a positive/neutral/negative sentiment and related phrases along with the user map. The information displayed to the ITSM group is in the form of a dashboard presenting the user perspective: user demography, application, date/time, positive/negative/neutral sentiment, rank and so on. The usage statistics are stored in the engine for future reference. Apart from just gathering the information and analyzing it for content, it is also very important that the analysis engine adapts and learns from the feedback that it receives as misclassification errors from the Administrator (or ITSM group) that interprets the final outputs. This adaptation phase maintains optimal performance. The network administrator handles the administrative functions. The details are then sent to the analysis engine, which analyses the data based on the rules formulated. The various actions in method 400 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 4 may be omitted.
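As a hedged illustration of the dashboard row and the adaptation phase described above, the following sketch shows how an Administrator correction might be fed back to the analysis engine; the field names and the record_misclassification/retrain hooks are assumptions, not the claimed implementation.

# Illustrative dashboard row and adaptation loop of FIG. 4.
# Field names and the engine hooks are assumptions, not the claimed design.
from dataclasses import dataclass

@dataclass
class DashboardRow:
    user_demography: str
    application: str
    date_time: str
    sentiment: str        # positive / neutral / negative
    rank: int

def handle_admin_review(row: DashboardRow, corrected: str, engine) -> None:
    """When the Administrator (ITSM group) flags a misclassification, feed
    the correction back so the engine adapts and keeps performance near
    optimal."""
    if corrected != row.sentiment:
        engine.record_misclassification(row, corrected)  # assumed hook
        engine.retrain()                                 # adaptation phase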

[0027] FIG. 5 illustrates a chirper module, in accordance with the embodiments herein. It is a simple and easy-to-use application. When the user receives a prompt, or when the user opts to provide feedback, a pop-up message appears. The prompt may be generated by the device used by the user or by the server 116. There are two options for the user to select: either a text message or a voice message. If the user wants to type a message, a text area is provided; otherwise, the user can use the microphone to provide feedback. The text message or the voice message is stored in the data warehouse and is then passed to the analysis engine along with the metadata for analysis. The text data from the user is captured and stored in the data warehouse. Along with the metadata, the analysis engine retrieves the messages to analyze. Multiple filtering and data mining techniques are used to systematically clean the data, extract sentiment keywords and use Natural Language Processing to classify the text based on the application, and then provide the percentage of success in classification using different techniques. Analysis of grammatical sentence structures and phrases with parts of speech, entity tagging and categorization can be accomplished using Natural Language Processing (NLP) techniques. The steps involved are: extract application-specific features, extract sentiment from each phrase, and develop a database of applications, extracted features, identified sentiment phrases and associated patterns.

[0028] When different techniques provide varying results, the final performance metrics can be evaluated with a confidence level by combining the classification accuracies of each of the methods. Techniques such as precision and recall, weighted voting and stacking can be used to combine the classifier performances and provide a confidence level.
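One simple way to combine several classifiers' outputs into a single label with a confidence level, as suggested above, is accuracy-weighted voting; the sketch below assumes each classifier reports its predicted label together with a validation accuracy, and other schemes such as stacking could be substituted.

# Sketch of accuracy-weighted voting over several sentiment classifiers.
# The combination rule shown is one common option among those named above.
from collections import defaultdict

def weighted_vote(predictions: list[tuple[str, float]]) -> tuple[str, float]:
    """predictions: (label, validation_accuracy) pairs from each classifier.
    Returns the winning label and its share of the total weight as a rough
    confidence level."""
    totals: dict[str, float] = defaultdict(float)
    for label, accuracy in predictions:
        totals[label] += accuracy
    best = max(totals, key=totals.get)
    confidence = totals[best] / sum(totals.values())
    return best, confidence

print(weighted_vote([("negative", 0.82), ("negative", 0.74), ("neutral", 0.65)]))
# -> ('negative', 0.705...)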

[0029] The speech analysis engine is used to extract useful information from the speech content and the speakers to make strategic decisions. There are automatic techniques like Hidden Markov Models, or a combination of standard techniques, to perform speech interpretation. This application will employ methods such as conditional random fields to identify frequently used keywords and spoken phrases, detect the pitch of the voice and the day and time of day, and rate the user expression with a positive/neutral/negative satisfaction quotient. Based on these factors and the user roles, the voice data will be utilized for rating. So, when the user opts to speak, the voice is recorded and mapped with the application for which the voice was recorded. The output displayed to the administrator will be the application and a keyword/phrase map with appropriate weights. The various actions in method 500 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 5 may be omitted.
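The combination of cues mentioned above (keywords, pitch, time of day, user role) into a satisfaction quotient could be sketched as below; the weights and thresholds are invented for illustration, and transcription and pitch detection are assumed to be supplied by standard speech tools.

# Illustrative combination of voice-feedback cues into a satisfaction quotient.
# Weights and thresholds are assumptions; transcription and pitch detection are
# assumed to come from standard speech tools (e.g. HMM-based recognizers).
def rate_voice_feedback(keyword_score: float, pitch_deviation: float,
                        off_hours: bool, role_weight: float) -> str:
    """keyword_score: +1..-1 from transcribed keywords/phrases;
    pitch_deviation: 0..1, departure from the speaker's usual pitch;
    off_hours: feedback left outside working hours; role_weight: per user role."""
    score = role_weight * (keyword_score - 0.5 * pitch_deviation
                           - (0.2 if off_hours else 0.0))
    if score > 0.2:
        return "positive"
    if score < -0.2:
        return "negative"
    return "neutral"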

[0030] FIGS. 6a and 6b illustrate a sample administration dashboard page, in accordance with the embodiments herein. If there are several macroscopic change requests, what additional value will microscopic change requests have on the cost-benefit analysis? For instance, a macroscopic change request may be to improve the processing time of a function in a page. The microscopic view of the same request would be to align the action buttons in a sequence, to move a specific button from one page to the appropriate page, or to note that for each action the application is referring to the DB, and so on. Another use of the microscopic detail is to increase the productivity of users in the working environment. Some of the user inputs, like "the printer option for two-sided printing" or "add reporting features to a software tool rather than buying new software", will be useful. To compare the cost of the macroscopic changes (Mac) with the microscopic changes (Mic) for an application with respect to cost, we compute the returns of making Mic changes. Change Benefit: CB = (Mic - Mac)/Mac

[0031] Application P1 reduces the process time by 30% (compared to the manual process), and for each 5% reduction the company saves $1,000. So, for each time the process is run, P1 saves $6,000. The company runs the process once a day (250 days a year). So, the application currently saves the company $1.5 million. A new system P2 is being considered to replace P1. P2 costs $500,000 and reduces the process time by 60%. So, the information that the decision makers have at this point is that if they invest $500,000, their savings will increase by $1.5 million per year. The system therefore pays for itself within a year and hence appears good to have. However, suppose we have the following data: P1 has two elements that are found to be the biggest efficiency breakers. If those two can be brought to the level of the remaining system, the efficiency in fact goes up to 50%. The management identifies that it can change one of the items by redeveloping the element at a cost of $50,000, and by providing training on the second element it can remove that hurdle at a cost of $10,000. So, with an investment of $60,000, the efficiency can be improved to 50% and hence the savings can be increased by $1,000,000. The decision to take is now whether to spend $60,000 and save $1 million more, or spend $500,000 and save $1.5 million more. So far, decision makers did not have a systematic way of collecting such information, which the current publication solves. Also, this system enables change in a Six Sigma sense, where continuous improvements can be made to the worst-performing elements to constantly enhance efficiency. Similar analyses can be done for portfolio rationalization. The various actions in method 600 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 6 may be omitted.
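The figures in the example above follow from a few lines of arithmetic; the sketch below simply restates the comparison between the $500,000 replacement and the $60,000 microscopic changes using the numbers given in the paragraph.

# Arithmetic behind the P1 vs. P2 comparison in paragraph [0031].
RUNS_PER_YEAR = 250
SAVING_PER_5_PERCENT = 1000          # dollars saved per run per 5% reduction

def yearly_saving(reduction_percent: float) -> float:
    return (reduction_percent / 5) * SAVING_PER_5_PERCENT * RUNS_PER_YEAR

current = yearly_saving(30)                     # P1 today: $1,500,000 per year
replace_gain = yearly_saving(60) - current      # P2: +$1,500,000 for $500,000
micro_gain = yearly_saving(50) - current        # fixes: +$1,000,000 for $60,000

print(replace_gain / 500_000)   # ~3.0  extra savings per dollar of replacement
print(micro_gain / 60_000)      # ~16.7 extra savings per dollar of micro fixes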

[0032] The embodiments disclosed herein can be implemented through at least one software program running on at least one hardware device and performing network management functions to control the network elements. The network elements shown in FIG. 1 include blocks which can be at least one of a hardware device, or a combination of a hardware device and a software module.

[0033] The embodiment disclosed herein describes a method for infrastructure management using statistical techniques. Therefore, it is understood that the scope of the protection is extended to such a program, and in addition to a computer readable means having a message therein, such computer readable storage means contain program code means for the implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The method is implemented in a preferred embodiment through or together with a software program written in, e.g., Very high speed integrated circuit Hardware Description Language (VHDL) or another programming language, or implemented by one or more VHDL or software modules being executed on at least one hardware device. The hardware device can be any kind of portable device that can be programmed. The device may also include means which could be, e.g., hardware means like an ASIC, or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. The method embodiments described herein could be implemented partly in hardware and partly in software. Alternatively, the invention may be implemented on different hardware devices, e.g. using a plurality of CPUs.

[0034] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.

CLAIMS

1. A method for enabling a user to provide feedback on an Information Technology network and analyzing said feedback, said method using a first server and a second server and comprising the steps of

said first server analyzing when said user is to be prompted for feedback;

said user providing feedback to said first server, on being prompted;

said first server storing said feedback in a memory storage area;

said second server accessing said feedback from said memory storage area;

said second server analyzing said feedback;

said second server creating a report based on said feedback; and

said second server providing said report to a second user.

2. The method, as claimed in claim 1, wherein said first server analyzes on a plurality of factors when said user is to be prompted for feedback, wherein said factors comprise

usage pattern of said user;

time spent on a feature by said user;

delay response time on a feature by said user;

application usability; availability of a feature; and ease of access to a feature.

3. The method, as claimed in claim 1, wherein said user is prompted by said first server.

4. The method, as claimed in claim 1, wherein said user is prompted by a device used by said user.

5. The method, as claimed in claim 1, wherein said feedback is provided using at least one of

text data; and

voice data.

6. The method, as claimed in claim 1, wherein said first server stores relevant metadata with said feedback.

7. The method, as claimed in claim 1, wherein said second server analyses said feedback by performing steps of

cleaning said feedback;

extracting sentiment keywords from said feedback; and

classifying said feedback based on said keywords, using natural language processing.

8. A system comprising a first server and a second server for enabling a user to provide feedback on an Information Technology network and analyzing said feedback, said system comprising at least one means adapted for

analyzing when said user is to be prompted for feedback;

receiving said feedback from said user;

storing said feedback in a memory storage area;

accessing said feedback from said memory storage area;

analyzing said feedback;

creating a report based on said feedback; and

providing said report to a second user.

9. The system, as claimed in claim 8, wherein said system is adapted for analyzing on a plurality of factors when said user is to be prompted for feedback, wherein said factors comprise

usage pattern of said user;

time spent on a feature by said user;

delay response time on a feature by said user;

application usability;

availability of a feature; and

ease of access to a feature.

10. The system, as claimed in claim 8, wherein said system is adapted for
receiving said feedback as at least one of

text data; and

voice data.

11. The system, as claimed in claim 8, wherein said system is adapted for storing relevant metadata with said feedback.

12. The system, as claimed in claim 8, wherein said system is adapted for analyzing said feedback by performing steps of

cleaning said feedback;

extracting sentiment keywords from said feedback; and

classifying said feedback based on said keywords, using natural language processing.

13. A method to enable a user to be prompted for information, said method comprising steps of

said user downloading an application from a server;

said application analyzing when said user is to be prompted for feedback;

said application prompting said user for feedback at said analyzed time; and
said application sending said feedback to said user.

14. The method, as claimed in claim 13, wherein said application analyzes on a plurality of factors when said user is to be prompted for feedback, wherein said factors comprise

usage pattern of said user;

time spent on a feature by said user;

delay response time on a feature by said user;

application usability;

availability of a feature; and

ease of access to a feature.

15. The method, as claimed in claim 13, wherein said application receives
said feedback in at least one of

text format; and

voice format.

16. The method, as claimed in claim 13, wherein said application sends relevant metadata with said feedback.

Documents

Application Documents

# Name Date
1 1147-che-2009 power of attorney 19-05-2010.pdf 2010-05-19
2 1147-che-2009 form-2 19-05-2010.pdf 2010-05-19
3 1147-che-2009 claims 19-05-2010.pdf 2010-05-19
4 1147-che-2009 drawings 19-05-2010.pdf 2010-05-19
5 1147-che-2009 description(complete) 19-05-2010.pdf 2010-05-19
6 1147-che-2009 correspondence others 19-05-2010.pdf 2010-05-19
7 1147-che-2009 abstract 19-05-2010.pdf 2010-05-19
8 Form-1.pdf 2011-09-03
9 Form-3.pdf 2011-09-03
10 Form-5.pdf 2011-09-03
11 Drawings.pdf 2011-09-03
12 Power of Authority.pdf 2011-09-03
13 abstract1147-CHE-2009.jpg 2012-03-13