Abstract: A system and method for performance assessment of multiple entities is disclosed. The system comprises a processor configured to obtain performance measurement data corresponding to primary assessment features associated with the multiple entities. The processor is also configured to identify secondary assessment features associated with the primary assessment features from the performance measurement data. Further, the processor is configured to determine measurement values corresponding to the secondary assessment features. In addition, the processor is configured to identify comparable entities for each secondary assessment feature based on the determined measurement values. The processor is also configured to generate a peer group for each primary assessment feature. The peer group comprises the comparable entities identified. Further, the processor is configured to rank the comparable entities in each peer group for each primary assessment feature. Furthermore, the processor is configured to present the ranked comparable entities for each primary assessment feature.
Claims:
1. A system for performance assessment of a plurality of entities, comprising:
a processor;
one or more storage devices for storing instructions that when executed by the processor cause the processor to:
obtain, from one or more data sources, performance measurement data corresponding to one or more primary assessment features associated with the plurality of entities;
identify one or more secondary assessment features associated with the one or more primary assessment features from the performance measurement data based on one or more machine learning models;
determine measurement values corresponding to the one or more secondary assessment features from the performance measurement data for each entity of the plurality of entities based on the one or more machine learning models;
identify comparable entities of the plurality of entities for each secondary assessment feature of the one or more secondary assessment features based on the determined measurement values;
generate a peer group for each primary assessment feature, wherein the peer group comprises the comparable entities identified;
rank the comparable entities in each peer group for each primary assessment feature based on the determined measurement values; and
present the ranked comparable entities for each primary assessment feature.
2. The system as claimed in claim 1, wherein the one or more secondary assessment features are identified by conducting principal component analysis (PCA) on the performance measurement data and the measurement values for each identified secondary feature are determined by applying maximum likelihood estimation (MLE) on the performance measurement data corresponding to the one or more secondary assessment features identified, the one or more machine learning models being configured to conduct the PCA and apply the MLE on the performance measurement data.
3. The system as claimed in claim 2, wherein the peer group generated for each primary assessment feature is based on the one or more secondary assessment features identified corresponding to the one or more primary assessment features respectively and the measurement values determined for each identified secondary feature by applying the MLE.
4. The system as claimed in claim 1, wherein the measurement values are determined by data normalization of the performance measurement data corresponding to one or more primary assessment features associated with the plurality of entities.
5. The system as claimed in claim 1, wherein the comparable entities are identified by assigning weightage to the one or more secondary assessment features, wherein the weightage is assigned by comparing the determined measurement values of the comparable entities.
6. The system as claimed in claim 1, wherein the comparable entities in each peer group are ranked by segregating the comparable entities into a plurality of performance levels, wherein the comparable entities are segregated by comparing the determined measurement values for each secondary assessment feature with predefined ranking criteria corresponding to the plurality of performance levels respectively.
7. The system as claimed in claim 1, wherein the ranked comparable entities are presented by generating a peer comparison matrix for the one or more primary assessment features, wherein the peer comparison matrix comprises the rank and the performance level of the comparable entities respectively in each peer group for each primary assessment feature.
8. The system as claimed in claim 1, wherein the ranked comparable entities corresponding to the one or more primary assessment features are presented graphically.
9. The system as claimed in claim 6, wherein the ranked comparable entities corresponding to the one or more primary assessment features are presented in a visual wagon wheel, wherein the visual wagon wheel displays the one or more primary assessment features, the one or more secondary features, the one or more performance levels, and a visual position of the ranked comparable entities corresponding to each secondary feature in the visual wagon wheel, the visual position being represented by unique visual indicators associated with the ranked comparable entities respectively, and the visual position being determined based on the segregation.
10. A method for performance assessment of a plurality of entities by a computer assessment system, comprising:
obtaining, by a processor of the computer assessment system, performance measurement data corresponding to one or more primary assessment features associated with the plurality of entities;
identifying, by the processor, one or more secondary assessment features associated with the one or more primary assessment features from the performance measurement data based on one or more machine learning models;
determining, by the processor, measurement values corresponding to the one or more secondary assessment features from the performance measurement data for each entity of the plurality of entities based on the one or more machine learning models;
identifying, by the processor, comparable entities of the plurality of entities for each secondary assessment feature of the one or more secondary assessment features based on the determined measurement values;
generating, by the processor, a peer group for each primary assessment feature, wherein the peer group comprises the comparable entities identified;
ranking, by the processor, the comparable entities in each peer group for each primary assessment feature based on the determined measurement values; and
presenting, by the processor, the ranked comparable entities for each primary assessment feature.
11. The method as claimed in claim 10, wherein the determining of the measurement values comprises:
normalizing the performance measurement data corresponding to one or more primary assessment features associated with the plurality of entities.
12. The method as claimed in claim 10, wherein the identifying of the comparable entities comprises:
assigning weightage to the one or more secondary assessment features, wherein the assigning of the weightage comprises comparing the determined measurement values of the plurality of entities.
13. The method as claimed in claim 10, wherein the ranking of the comparable entities in each peer group comprises:
segregating the comparable entities into a plurality of performance levels, wherein the segregation of the comparable entities comprises:
comparing the determined measurement values for each secondary assessment feature with predefined ranking criteria corresponding to the plurality of performance levels respectively.
14. The method as claimed in claim 13, wherein the presenting of the ranked comparable entities in each peer group comprises:
generating a peer comparison matrix for the one or more primary assessment features, wherein the peer comparison matrix comprises the rank and the performance level of the comparable entities respectively in each peer group for each primary assessment feature.
15. The method as claimed in claim 13, wherein the presenting of the ranked comparable entities in each peer group comprises:
presenting a visual wagon wheel based on the peer comparison matrix, the visual wagon wheel displaying the one or more primary assessment features, the one or more secondary features, the one or more performance levels, and a visual position of the ranked comparable entities corresponding to each secondary feature in the wagon wheel, the visual position being represented by unique visual indicators associated with the ranked comparable entities respectively, and the visual position being determined based on the segregation.
Description:
FIELD OF INVENTION
[0001] The present disclosure relates in general to providing business intelligence. More particularly, the present disclosure relates to a system and method for performance assessment of entities.
BACKGROUND OF THE INVENTION
[0002] In an organizational conglomerate, there are several challenges in assessing performance of different organizational entities. Particularly, the different organizational entities may have respective performance evaluation processes and performance metrics. Accordingly, consolidating the performance metrics across the different organizational entities, assessing, and reporting overall performances of the different organizational entities based on such consolidation may be cumbersome and time consuming. Further, comparisons drawn between the different organizational entities based on such consolidation may also be inaccurate since the organizational entities may differ in implementation processes, products/services rendered, and other operational parameters. Accordingly, insights derived from such consolidation may also be inaccurate.
[0003] Hence, there is a need for performance assessment of entities such that drawbacks of existing assessment methods may be overcome.
SUMMARY OF THE INVENTION
[0004] In an aspect of the present disclosure, a system for performance assessment of multiple entities is disclosed. The system comprises a processor and one or more storage devices for storing instructions. The instructions, when executed by the processor, cause the processor to perform one or more functions. The functions comprise obtaining, from different data sources, performance measurement data corresponding to primary assessment features associated with the multiple entities. The functions also comprise identifying secondary assessment features associated with the primary assessment features from the performance measurement data based on machine learning models. Further, the functions comprise determining measurement values corresponding to the secondary assessment features from the performance measurement data for each entity of the multiple entities based on the machine learning models. In addition, the functions comprise identifying comparable entities of the multiple entities for each secondary assessment feature based on the determined measurement values. The functions also comprise generating a peer group for each primary assessment feature, the peer group comprising the comparable entities identified. Further, the functions comprise ranking the comparable entities in each peer group for each primary assessment feature based on the determined measurement values. Furthermore, the functions comprise presenting the ranked comparable entities for each primary assessment feature.
[0005] In another aspect of the present disclosure, a method for performance assessment of multiple entities is disclosed. The method comprises a step of obtaining, from different data sources, performance measurement data corresponding to primary assessment features associated with the multiple entities. The method also comprises a step of identifying secondary assessment features associated with the primary assessment features from the performance measurement data based on machine learning models. Further, the method comprises a step of determining measurement values corresponding to the secondary assessment features from the performance measurement data for each entity of the multiple entities based on the machine learning models. In addition, the method comprises a step of identifying comparable entities of the multiple entities for each secondary assessment feature based on the determined measurement values. The method also comprises a step of generating a peer group for each primary assessment feature, the peer group comprising the comparable entities identified. Further, the method comprises a step of ranking the comparable entities in each peer group for each primary assessment feature based on the determined measurement values. Furthermore, the method comprises a step of presenting the ranked comparable entities for each primary assessment feature.
BRIEF DESCRIPTION OF DRAWINGS
[0006] FIG. 1 is a schematic block diagram of an environment 100 including a system 105 and electronic devices 110-125, in accordance with which various embodiments of the present disclosure may be implemented;
[0007] FIG. 2 is a schematic block diagram of different components in the system 105 of FIG. 1, in accordance with an embodiment of the present disclosure;
[0008] FIG. 3 is a schematic block diagram of a processor 210 in the system 105 of FIG. 2, in accordance with the embodiment of the present disclosure;
[0009] FIGS. 4A-4F illustrate bar graphs of exemplary ranked peer groups for each primary assessment feature associated with multiple entities, in accordance with the embodiment of the present disclosure;
[0010] FIG. 5 is an exemplary illustration of a visual wagon wheel displaying performance of multiple entities corresponding to primary and secondary assessment features, in accordance with the embodiment of the present disclosure; and
[0011] FIG. 6 is a schematic block diagram of a method for performance assessment of entities, in accordance with the embodiment of the present disclosure.
DETAILED DESCRIPTION
[0012] Referring to FIG. 1, a schematic block diagram of an environment 100 including a system 105 and electronic devices 110-125 is disclosed. The system 105 may be configured to be in communication with electronic devices 110-125 via a network 130. In an embodiment, the system 105 may correspond to a central performance assessment system or a server provided for performance assessment of multiple entities. In an embodiment, the electronic devices 110-125 may correspond to electronic devices associated with each entity. In an embodiment, the multiple entities may correspond to different and/or disparate organizations. In an embodiment, the multiple entities may correspond to different and/or disparate organizations of a conglomerate. In an embodiment, the multiple entities may correspond to different individual units or departments within an organization. Examples of the system 105 and/or the electronic devices 110-125 include, but are not limited to, computers, laptops, mobile devices, handheld devices, personal digital assistants (PDAs), tablet personal computers, digital notebooks, automatic teller machines (ATMs), wearables, and similar electronic devices. The network 130 may include communication networks such as, but not limited to, a Local Area Network (LAN), a Wireless Local Area Network (WLAN), a Wide Area Network (WAN), internet, a Small Area Network (SAN), and a close-range or short-range wireless network or a personal area network such as Bluetooth®. In some embodiments, the network 130 may correspond to telecom networks using EDGE/GPRS/3G/4G/5G technologies.
[0013] In an embodiment, performance measurement data corresponding to each entity of the multiple entities respectively may be provided to the system 105 from the electronic devices 110-125 respectively via the network 130. In an embodiment, the performance measurement data may be provided to the system 105 automatically from the electronic devices 110-125 via the network 130. In an embodiment, the performance measurement data may be manually provided from the electronic devices 110-125 via the network 130. In an embodiment, the system 105 may also continuously or periodically monitor the performance measurement data stored in and/or collected by the electronic devices 110-125. In an embodiment, the performance measurement data may be stored in the electronic devices 110-125 via manual input into the electronic devices 110-125 respectively. In some embodiments, the performance measurement data may also be collected by electronic devices 110-125 via multiple other electronic devices (not shown) respectively via the network 130. The other electronic devices may be associated with different users, entities, and/or assessment systems. In an embodiment, the system 105 may retrieve updated performance measurement data from the electronic devices 110-125 in real-time or periodically. In an embodiment, the performance measurement data may be provided to the system 105 directly via manual input into the system 105. In an embodiment, the system 105 and/or the electronic devices 110-125 may include one or more databases or storage units to store the performance measurement data and may collectively be referred to as “data sources”.
[0014] Referring to FIG. 2, a block diagram of the system 105 of FIG. 1 is disclosed. In some embodiments, the system 105 includes a bus 205 or other communication mechanism for communicating information, and a processor 210 coupled with the bus 205 for processing information. The system 105 also includes a memory 215, such as a random-access memory (RAM) or other dynamic storage device, coupled to the bus 205 for storing information and instructions to be executed by the processor 210. The memory 215 can be used for storing temporary variables or other intermediate information during execution of instructions to be executed by the processor 210. The system 105 further includes a read only memory (ROM) 220 or other static storage device coupled to bus 205 for storing static information and instructions for processor 210. A storage unit 225, such as a magnetic disk, optical disk, solid state or semiconductor memory, is provided and coupled to the bus 205. The storage unit 225 may store performance measurement data associated with multiple entities. In an embodiment, the storage unit 225 may also store one or more machine learning models to analyse the performance measurement data. The system 105 can be coupled via the bus 205 to a display 230, such as a cathode ray tube (CRT), liquid crystal display (LCD), Light Emitting Diode (LED), and Organic LED (OLED), for displaying information to the user. An input device 235, including alphanumeric and other keys, is coupled to bus 205 for communicating information and command selections to the processor 210. Another type of user input device is a cursor control 240, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to the processor 210 and for controlling cursor movement on the display 230. The input device 235 can also be included in the display 230, for example a touch screen.
[0015] Various embodiments are related to the use of system 105 for implementing the techniques described herein. In one embodiment, the techniques are performed by the system 105 in response to the processor 210 executing instructions included in the memory 215. Such instructions can be read into the memory 215 from another machine-readable medium, such as the storage unit 225. Execution of the instructions included in the memory 215 causes the processor 210 to perform the process steps described herein.
[0016] The term “machine-readable medium” as used herein refers to any medium that participates in providing data that causes a machine to operate in a specific fashion. In some embodiments implemented using the system 105, various machine-readable media are involved, for example, in providing instructions to the processor 210 for execution. The machine-readable medium can be a storage media. Storage media includes both non-volatile media and volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage unit 225. Volatile media includes dynamic memory, such as the memory 215. All such media must be tangible to enable the instructions carried by the media to be detected by a physical mechanism that reads the instructions into a machine.
[0017] Common forms of machine-readable medium include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper-tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge.
[0018] In another embodiment, the machine-readable medium can be transmission media including coaxial cables, copper wire and fibre optics, including the wires that comprise the bus 205. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. Examples of machine-readable medium may include but are not limited to a carrier wave as described hereinafter or any other medium from which the system 105 can read, for example online software, download links, installation links, and online links. For example, the instructions can initially be carried on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and provide the instructions over a telephone line using a modem. A modem local to the system 105 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on the bus 205. The bus 205 carries the data to the memory 215, from which the processor 210 retrieves and executes the instructions. The instructions received by the memory 215 can optionally be stored on storage unit 225 either before or after execution by the processor 210.
[0019] The system 105 also includes a communication interface 245 coupled to the bus 205. The communication interface 245 provides a two-way data communication coupling to the network 130. For example, the communication interface 245 can be an integrated service digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface 245 can be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links can also be implemented. In any such implementation, the communication interface 245 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
[0020] In some embodiments, the processor 210 of the system 105 may be capable of executing the computer instructions in order to perform one or more functions. In an embodiment, the processor 210 may be configured to execute one or more modules 305-320, as shown in FIG. 3, in order to perform the functions. The processor 210 includes hardware circuitry that facilitates the execution of the modules 305-320. The modules 305-320 may include, but are not limited to, a data consolidation module 305, an evaluation module 310, a peer grouping module 315, and a presentation module 320.
[0021] Referring to FIG. 3, the data consolidation module 305 may be configured to obtain the performance measurement data corresponding to one or more primary assessment features associated with the multiple entities. In an embodiment, the data consolidation module 305 may obtain the performance measurement data from different data sources such as the electronic devices 110-125 via the network 130 and the communication interface 245. The performance measurement data obtained from the electronic devices 110-125 may then be stored in the storage unit 225 and accessed by the data consolidation module 305. In an embodiment, the performance measurement data may be prestored in the storage unit 225 and the data consolidation module 305 may obtain the performance measurement data directly from the storage unit 225. Exemplarily, the performance measurement data may include different metrics, metric identifiers, and metric values associated with the primary assessment features for each entity. In an embodiment, determination of the metrics and the metric values associated with the primary assessment features for each entity may be disparate and/or unique to each entity. In an embodiment, determination of the metrics and the metric values associated with the primary assessment features for each entity may be similar. In an embodiment, the primary assessment features may be predefined and/or common for each entity. In an embodiment, the primary assessment features may be disparate for each entity.
[0022] The evaluation module 310 may be configured to identify one or more secondary assessment features associated with the primary assessment features respectively from the performance measurement data for each entity. In an embodiment, the evaluation module 310 may identify the secondary assessment features by conducting principal component analysis (PCA) on the performance measurement data. The evaluation module 310 may be configured to implement one or more machine learning models in order to conduct the PCA. The PCA is used for transforming the performance measurement data by mapping each data point in the performance measurement data onto principal components while maintaining variations in the performance measurement data. A number of the principal components may be determined based on the mapping. The mapping may, in turn, be based on correlation and/or similarity between the different metrics, metric identifiers, and/or metric values in the performance measurement data. For example, the secondary assessment features may be identified by mapping different metric identifiers onto the principal components or principal metric identifiers such that variation in the performance measurement data is maximized across the principal metric identifiers. The mapped principal components correspond to the secondary assessment features identified. Exemplarily, for ‘n’ metric identifiers in the performance measurement data, the PCA may be conducted to identify ‘m’ principal metric identifiers, such that ‘m’ principal metric identifiers include the performance measurement data associated with the ‘n’ metric identifiers collectively and also maximize the variation between the performance measurement data associated with the ‘n’ metric identifiers. The ‘m’ principal metric identifiers correspond to the secondary assessment features identified. In an embodiment, the secondary assessment features may also be predefined and/or common for each entity. 
In an embodiment, the secondary assessment features may be disparate for each entity.
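The PCA-based identification of secondary assessment features described above can be sketched as follows. This is a minimal illustration only, assuming a NumPy implementation via singular value decomposition; the function name, the variance-retention threshold, and the labelling of components by their dominant metric are illustrative assumptions and not part of the disclosure.

```python
import numpy as np

def identify_secondary_features(data, metric_names, variance_threshold=0.95):
    """Map 'n' metric identifiers (columns of `data`, one row per entity)
    onto 'm' principal components that retain at least `variance_threshold`
    of the variation; the retained components serve as the identified
    secondary assessment features."""
    # Centre each metric so the decomposition captures variation only.
    centred = data - data.mean(axis=0)
    # Singular value decomposition yields the principal directions.
    _, singular_values, vt = np.linalg.svd(centred, full_matrices=False)
    explained = singular_values ** 2 / np.sum(singular_values ** 2)
    # Keep the smallest 'm' components whose cumulative explained
    # variance meets the threshold.
    m = int(np.searchsorted(np.cumsum(explained), variance_threshold)) + 1
    components = vt[:m]
    # Label each component by its most heavily loaded original metric.
    labels = [metric_names[int(np.argmax(np.abs(c)))] for c in components]
    return components, labels
```

In this sketch, the ‘m’ retained components play the role of the identified secondary assessment features, and the threshold controls how much of the variation across the original ‘n’ metric identifiers is preserved.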
[0023] An exemplary list of the primary assessment features associated with an entity or the multiple entities, and the identified secondary assessment features associated with the primary assessment features, is provided in Table 1 below. It may be noted that the exemplary list is provided only as a reference, and the disclosure is in no way limited to the examples provided in Table 1.
Table 1
Primary Assessment Feature | Secondary Assessment Features
Finance | Cash Conversion Ratio, Free Cash Flow (FCF)
Customer | NPS, MS Improvement
Learning and development | Communication Improvement, Engagement Action Plan, Digital Transformation Progress
Operational excellence | Strategic Projects Progress, Key Projects Progress
Safety | Progressive Safety Index, TRIFR
Sustainability | Sustainability Accessibility Index (SI)
[0024] The evaluation module 310 may also be configured to determine measurement values corresponding to the identified secondary assessment features from the performance measurement data for each entity. In an embodiment, the evaluation module 310 may also be configured to determine the measurement values by data normalization of the performance measurement data corresponding to the primary assessment features associated with the multiple entities. In an embodiment, the evaluation module 310 may implement the data normalization such that the primary assessment features and the identified secondary assessment features for the multiple entities may be combined and rendered uniform or made common across the multiple entities. For example, the evaluation module 310 may implement the data normalization such that the primary assessment features and the identified secondary assessment features for the multiple entities may be as shown in Table 1 above. The evaluation module 310 may, accordingly, be configured to determine the measurement values corresponding to the identified secondary assessment features from the performance measurement data for the multiple entities after implementing the data normalization.
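A minimal sketch of the data normalization step described above, assuming simple min-max scaling (the disclosure does not fix a particular normalization scheme, so the choice of min-max scaling is an illustrative assumption):

```python
def normalize_measurements(values):
    """Min-max normalize raw metric values to the range [0, 1] so that
    metrics recorded on disparate scales become comparable across the
    multiple entities."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # Every entity reports the same value; no variation to scale.
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```

For example, applied to the FCF column of Table 2, the entity with the lowest value maps to 0 and the entity with the highest maps to 1, making FCF directly comparable with metrics measured on other scales.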
[0025] In an embodiment, the evaluation module 310 may determine the measurement values for each identified secondary feature by applying maximum likelihood estimation (MLE) on the performance measurement data corresponding to each identified secondary feature. The evaluation module 310 may be configured to implement one or more machine learning models in order to apply the MLE. The MLE is a method that determines values for the parameters of a statistical model such that the determined parameter values best represent the actual data observed or recorded. For example, the MLE is applied to estimate the measurement values corresponding to the identified secondary assessment features from the performance measurement data such that the estimated measurement values are representative of the performance measurement data obtained from each entity and/or the multiple entities. Exemplarily, for ‘i’ identified secondary assessment features and ‘j’ metric values in the performance measurement data, the evaluation module 310 may determine the measurement values ‘value 1-value n’ for the ‘i’ identified secondary assessment features respectively such that the measurement values ‘value 1-value n’ determined are representative of the ‘j’ metric values in the obtained performance measurement data from each entity and/or the multiple entities. In an embodiment, when the MLE is applied, the evaluation module 310 may first identify a number of data points or metric values corresponding to each secondary assessment feature from the performance measurement data. The evaluation module 310 may then consolidate the metric values identified in order to determine the measurement value for each secondary assessment feature. For example, the evaluation module 310 may identify ‘d’ metric values of the ‘j’ metric values in the performance measurement data corresponding to a secondary assessment feature and consolidate the ‘d’ metric values to determine the measurement value of the secondary assessment feature.
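The consolidation of ‘d’ metric values into a single measurement value can be sketched as follows, assuming a Gaussian likelihood model for illustration (the disclosure does not specify the form of the model). Under that assumption, the maximum likelihood estimate of the mean is simply the sample mean, which serves as the consolidated measurement value:

```python
def mle_measurement_value(metric_values):
    """Consolidate the 'd' metric values observed for one secondary
    assessment feature into a single measurement value. Under the
    assumed Gaussian model, the maximum likelihood estimates of the
    mean and variance are the sample mean and the biased sample
    variance; the MLE mean serves as the consolidated value."""
    n = len(metric_values)
    mean = sum(metric_values) / n                                # MLE of the mean
    variance = sum((v - mean) ** 2 for v in metric_values) / n   # MLE of the variance
    return mean, variance
```

The variance estimate is also returned since it indicates how representative the consolidated value is of the underlying ‘d’ data points.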
[0026] Table 2 below provides exemplary measurement values determined for the identified secondary assessment features, each associated with a respective primary assessment feature.
Table 2
Entity | PF: Finance, SF: FCF (Cr) | PF: Customer, SF: NPS (%) | PF: Safety, SF: TRIFR | PF: Sustainability, SF: SI
E1 | 1000 | 85 | 1.1 | 32
E2 | 1100 | 70 | 1.3 | 58
E3 | 80 | 92 | 3.1 | 65
E4 | 880 | 80 | 1.5 | 70
(PF: Primary Assessment Feature, SF: Secondary Assessment Feature, Cr: Crore)
[0027] The peer grouping module 315 may be configured to identify comparable entities of the multiple entities for each secondary assessment feature based on the determined measurement values. The peer grouping module 315 may be configured to implement one or more machine learning models in order to identify the comparable entities. In an embodiment, the peer grouping module 315 may identify the comparable entities based on a relative variation in the determined measurement values between the multiple entities. In an embodiment, the peer grouping module 315 may also define a predefined threshold variation limit for each secondary assessment feature in order to identify the comparable entities. For example, the peer grouping module 315 may identify entities for which the relative variation is below or equal to the predefined threshold variation limit as comparable entities. Similarly, the peer grouping module 315 may identify entities for which the relative variation is above the predefined threshold variation limit as non-comparable entities. In an embodiment, the peer grouping module 315 may also define a predefined minimum or maximum measurement value for each secondary assessment feature in order to identify the comparable entities. For example, the peer grouping module 315 may identify entities with a determined measurement value above the predefined minimum measurement value, or below the predefined maximum measurement value, as comparable entities. In an embodiment, the peer grouping module 315 may also define a predefined measurement range and identify entities with a determined measurement value within the predefined measurement range as comparable entities. The predefined threshold variation limit, the predefined minimum or maximum measurement values, and the predefined measurement range are herein collectively referred to as “predefined comparison criteria”.
[0028] For example, the peer grouping module 315 may define the predefined comparison criteria corresponding to the secondary assessment features associated with each primary assessment feature as shown in Table 3 below.
Table 3
| Predefined Comparison Criteria | PF: Finance, SF: FCF (Cr) | PF: Customer, SF: NPS (%) | PF: Safety, SF: TRIFR | PF: Sustainability, SF: SI |
|---|---|---|---|---|
| Threshold Variation Limit | 300 | - | - | - |
| Minimum Measurement Value | - | 85 | - | - |
| Maximum Measurement Value | - | - | 1.5 | - |
| Measurement Range | - | - | - | 60-70 |
(PF: Primary Assessment Feature, SF: Secondary Assessment Feature, Cr: Crore)
[0029] Table 4 shown below discloses the comparable entities of the entities E1-E4 identified by the peer grouping module 315 in view of the determined measurement values as shown in Table 2 and the predefined comparison criteria as shown in Table 3.
Table 4
| Feature | Predefined Comparison Criteria | Comparable Entities Identified |
|---|---|---|
| PF: Finance, SF: FCF (Cr) | 300 | E1, E2, E4 |
| PF: Customer, SF: NPS (%) | 85 | E1, E3 |
| PF: Safety, SF: TRIFR | 1.5 | E1, E2, E4 |
| PF: Sustainability, SF: SI | 60-70 | E3, E4 |
(PF: Primary Assessment Feature, SF: Secondary Assessment Feature)
[0030] In an embodiment, the peer grouping module 315 may also identify the comparable entities by assigning weightage to the secondary assessment features. The peer grouping module 315 may be configured to implement one or more machine learning models in order to assign weightage to the secondary assessment features. The peer grouping module 315 may assign the weightage by comparing the determined measurement values of the comparable entities. In an embodiment, the peer grouping module 315 may compare the determined measurement values based on the predefined comparison criteria as shown in Table 3. For example, referring to Table 1 and Table 4, the peer grouping module 315 may assign a higher weightage to the secondary assessment feature ‘Free Cash Flow (FCF)’ in comparison with the secondary assessment feature ‘Cash Conversion Ratio’, both secondary assessment features being associated with the primary assessment feature ‘Finance’. The peer grouping module 315 may, accordingly, assign a higher priority to the comparable entities identified corresponding to the secondary assessment feature ‘Free Cash Flow (FCF)’ than the comparable entities identified corresponding to the secondary assessment feature ‘Cash Conversion Ratio’. In an embodiment, the weightage assigned by the peer grouping module 315 may be predefined. In an embodiment, the weightage to be assigned may be automatically determined by the peer grouping module 315 based on one or more factors, such as the number of metric values identified corresponding to each secondary assessment feature when MLE is applied by the evaluation module 310. In an embodiment, the weightage may also be assigned automatically by the peer grouping module 315 based on the determined measurement values of each secondary assessment feature.
For example, the determined measurement values corresponding to a secondary assessment feature ‘SF1’ may not meet the predefined comparison criteria, such that no comparable entities may be identified by the peer grouping module 315. Conversely, the determined measurement values corresponding to a secondary assessment feature ‘SF2’ may meet the predefined comparison criteria, such that comparable entities may be identified by the peer grouping module 315. Accordingly, the peer grouping module 315 may assign a higher weightage to the secondary assessment feature ‘SF2’ in comparison with the secondary assessment feature ‘SF1’ based on the determined measurement values.
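The weightage heuristic described above may be sketched as follows. This is a minimal illustration under the assumption that a secondary assessment feature yielding more comparable entities receives proportionally more weightage; the function name and normalization scheme are hypothetical.

```python
# Hypothetical sketch: weightage assignment based on the comparable
# entities identified for each secondary assessment feature.

def assign_weightage(comparables_per_feature):
    """comparables_per_feature: dict mapping a secondary assessment feature
    name to the list of comparable entities identified for it.
    Returns normalized weightages summing to 1 (or all zero)."""
    # More comparable entities -> more evidence -> higher weightage.
    weights = {f: len(c) for f, c in comparables_per_feature.items()}
    total = sum(weights.values()) or 1
    return {f: w / total for f, w in weights.items()}

# 'SF1' yields no comparable entities; 'SF2' yields two, so it is weighted higher.
print(assign_weightage({"SF1": [], "SF2": ["E1", "E2"]}))
```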
[0031] The peer grouping module 315 may then generate a peer group for each primary assessment feature. The peer grouping module 315 may generate the peer group for each primary assessment feature based on the secondary assessment features identified corresponding to the primary assessment features by conducting the PCA and the measurement values determined for each identified secondary assessment feature by applying the MLE. In an embodiment, the peer group generated by the peer grouping module 315 for each primary assessment feature comprises the comparable entities identified by the peer grouping module 315 for each secondary assessment feature associated with the respective primary assessment feature. For example, in view of the comparable entities identified as shown in Table 4 above, the peer grouping module 315 may generate peer groups as shown in Table 5 below.
Table 5
| Feature | Peer Group |
|---|---|
| PF: Finance | E1, E2, E4 |
| PF: Customer | E1, E3 |
| PF: Safety | E1, E2, E4 |
| PF: Sustainability | E3, E4 |
(PF: Primary Assessment Feature)
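The peer-group generation step may be sketched as below. This is an illustrative reconstruction assuming that the peer group for a primary assessment feature is formed from the comparable entities identified for each of its secondary assessment features (here, their union); the names are hypothetical.

```python
# Hypothetical sketch: generating a peer group per primary assessment
# feature from the comparable entities identified per secondary feature.

def generate_peer_groups(comparables):
    """comparables: dict mapping (primary feature, secondary feature) ->
    list of comparable entity IDs. Returns primary feature -> sorted peer group."""
    groups = {}
    for (pf, _sf), entities in comparables.items():
        groups.setdefault(pf, set()).update(entities)
    return {pf: sorted(group) for pf, group in groups.items()}

# Comparable entities from Table 4, one secondary feature per primary feature.
table4 = {
    ("Finance", "FCF"): ["E1", "E2", "E4"],
    ("Customer", "NPS"): ["E1", "E3"],
    ("Safety", "TRIFR"): ["E1", "E2", "E4"],
    ("Sustainability", "SI"): ["E3", "E4"],
}
print(generate_peer_groups(table4))
```

On the Table 4 data this reproduces the peer groups of Table 5.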
[0032] In an embodiment, the peer group generated by the peer grouping module 315 for a first primary assessment feature may be different from the peer group generated for a second primary assessment feature. For example, referring to Table 5 as shown above, the peer group generated for the primary assessment feature ‘Finance’ is different from the peer group generated for the primary assessment feature ‘Sustainability’. In an embodiment, the peer groups generated by the peer grouping module 315 for different primary assessment features may also be identical to each other. For example, referring to Table 5 as shown above, the peer group generated for the primary assessment feature ‘Finance’ is identical to the peer group generated for the primary assessment feature ‘Safety’.
[0033] The peer grouping module 315 may also be configured to rank the comparable entities in each peer group for each primary assessment feature based on the determined measurement values. In an embodiment, the peer grouping module 315 may rank the comparable entities in an ascending or descending order of the measurement values determined by the evaluation module 310. In an embodiment, the peer grouping module 315 may rank the comparable entities in each peer group by segregating the comparable entities into different performance levels. Examples of the performance levels include, but are not limited to, top quartile, middle quartile, and bottom quartile. The peer grouping module 315 may also define predefined ranking criteria corresponding to the performance levels respectively. The peer grouping module 315 may segregate the comparable entities into the respective performance levels by comparing the determined measurement values for each secondary assessment feature with the predefined ranking criteria defined corresponding to the performance levels respectively. For example, in view of the determined measurement values as shown in Table 2, the peer groups generated as shown in Table 5, and the predefined ranking criteria, the peer grouping module 315 may rank the comparable entities in each peer group as shown in Table 6 below.
Table 6
| Feature | Predefined Ranking Criteria | Top Quartile | Middle Quartile | Bottom Quartile |
|---|---|---|---|---|
| PF: Finance, SF: FCF (Cr) | Top > 1000 Cr; Middle = 1000 Cr; Bottom < 1000 Cr | E2 | E1 | E4 |
| PF: Customer, SF: NPS (%) | Top > 90%; Middle: 88%-90%; Bottom < 88% | E3 | - | E1 |
| PF: Safety, SF: TRIFR | Top > 1.4; Middle: 1.2-1.4; Bottom < 1.2 | E4 | E2 | E1 |
| PF: Sustainability, SF: SI | Top > 65; Middle = 65; Bottom < 65 | E4 | E3 | - |
(PF: Primary Assessment Feature, SF: Secondary Assessment Feature, Cr: Crore)
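The segregation step may be sketched as follows. This is an illustrative reconstruction assuming the predefined ranking criteria take the form of a top threshold and a bottom threshold, as in Table 6; the function name and parameters are hypothetical.

```python
# Hypothetical sketch: segregating a peer group into performance levels
# using predefined ranking criteria (top and bottom thresholds).

def segregate(values, top, bottom):
    """values: dict mapping entity ID -> measurement value for one
    secondary assessment feature. Values above `top` fall in the top
    quartile, below `bottom` in the bottom quartile, else the middle."""
    levels = {"Top Quartile": [], "Middle Quartile": [], "Bottom Quartile": []}
    for entity, v in sorted(values.items()):
        if v > top:
            levels["Top Quartile"].append(entity)
        elif v < bottom:
            levels["Bottom Quartile"].append(entity)
        else:
            levels["Middle Quartile"].append(entity)
    return levels

# Finance peer group (FCF, Cr) with criteria Top > 1000 Cr, Bottom < 1000 Cr.
print(segregate({"E1": 1000, "E2": 1100, "E4": 880}, top=1000, bottom=1000))
```

On the Table 2 values and Table 6 criteria this reproduces the quartiles shown in Table 6 (e.g. E2 top, E1 middle, E4 bottom for ‘Finance’).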
[0034] The presentation module 320 may be configured to present the ranked comparable entities for each primary assessment feature. In an embodiment, the presentation module 320 may present the ranked comparable entities by generating a peer comparison matrix for the primary assessment features. The peer comparison matrix comprises the rank and the performance level of the comparable entities respectively in each peer group for each primary assessment feature as determined by the peer grouping module 315. The peer comparison matrix may also comprise the determined measurement values corresponding to each secondary assessment feature associated with each primary assessment feature across the multiple entities. In an embodiment, the presentation module 320 may present the ranked comparable entities corresponding to the primary assessment features graphically.
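The peer comparison matrix described above may be sketched as a simple mapping from entity to rank, performance level, and measurement value. All field names are assumptions for illustration; the disclosure does not prescribe a data layout.

```python
# Hypothetical sketch: building a peer comparison matrix for one
# primary assessment feature from a ranked peer group.

def peer_comparison_matrix(peer_group, values, levels):
    """peer_group: list of entity IDs ordered best-first; values: entity ->
    determined measurement value; levels: entity -> performance level."""
    return {
        entity: {
            "rank": rank,
            "level": levels[entity],
            "value": values[entity],
        }
        for rank, entity in enumerate(peer_group, start=1)
    }

# Finance peer group ranked per Table 6 (E2 top, E1 middle, E4 bottom).
matrix = peer_comparison_matrix(
    ["E2", "E1", "E4"],
    {"E1": 1000, "E2": 1100, "E4": 880},
    {"E2": "Top Quartile", "E1": "Middle Quartile", "E4": "Bottom Quartile"},
)
print(matrix["E2"])  # {'rank': 1, 'level': 'Top Quartile', 'value': 1100}
```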
[0035] In an example, referring to FIGS. 4A-4F, the presentation module 320 may present bar graphs corresponding to the primary assessment features 405, 410, 415, 420, 425, and 430 respectively associated with the multiple entities 401-403, 407-409. The bar graphs presented by the presentation module 320 may display the rank and performance level of exemplary ranked peer groups generated corresponding to the primary assessment features 405, 410, 415, 420, 425, and 430 respectively. The exemplary peer groups, as shown in FIGS. 4A-4F, corresponding to the primary assessment features 405, 410, 415, 420, 425, and 430 are reproduced in Table 7 below.
Table 7
| Primary Assessment Feature | Secondary Assessment Features | Peer Group |
|---|---|---|
| 405 | 435, 440 | 401-403 |
| 410 | 445, 450 | 407-409 |
| 415 | 455, 460, 465 | 401, 408 |
| 420 | 470, 475 | 402, 403, 409 |
| 425 | 480, 485 | 407, 408, 402 |
| 430 | 490 | 401, 407, 403 |
[0036] In addition, the bar graphs presented by the presentation module 320 may display the determined measurement values corresponding to each secondary assessment feature (as shown in Table 7 above) associated with the primary assessment features 405, 410, 415, 420, 425, and 430 respectively. For example, as shown in FIG. 4A, the presentation module 320 may display the determined measurement values corresponding to the secondary assessment features 435, 440 respectively on a vertical ‘Y’ axis of the bar graph and the exemplary peer group 401-403 generated corresponding to the primary assessment feature 405 on the horizontal ‘X’ axis of the bar graph. The bar graphs for the primary assessment features 405, 410, 415, 420, 425, and 430 may be similarly presented by the presentation module 320 in order to provide a performance assessment of the multiple entities 401-403, 407-409 across the primary assessment features 405, 410, 415, 420, 425, and 430 and the secondary assessment features 435-490 associated with the primary assessment features 405, 410, 415, 420, 425, and 430.
[0037] In another example, referring to FIG. 5, the presentation module 320 may present a visual wagon wheel 500 comprising the ranked comparable entities in each peer group (as shown in Table 7) corresponding to the primary assessment features 405, 410, 415, 420, 425, and 430 as shown in the FIGS. 4A-4F. The visual wagon wheel 500 presented by the presentation module 320 may display the rank and the performance levels 505-515 of exemplary ranked peer groups as determined by the peer grouping module 315 corresponding to the primary assessment features 405, 410, 415, 420, 425, and 430 respectively. The performance levels 505-515 may correspond to ‘Bottom Quartile’, ‘Middle Quartile’, and ‘Top Quartile’ levels respectively. The visual wagon wheel 500 displayed by the presentation module 320 may collectively comprise the primary assessment features 405, 410, 415, 420, 425, and 430, the secondary features 435-490, the performance levels 505-515, and a visual position of the ranked comparable entities corresponding to each secondary feature in the visual wagon wheel 500. The visual position of each entity may be represented by the presentation module 320 using unique visual indicators, such as different shapes or patterns, associated with the ranked comparable entities respectively. For example, the entity 401 may be represented by a visual indicator in the shape of a triangle. Similarly, the entities 402-403, 407-409 may be represented by different shapes such as a circle, square, oval, diamond, and rectangle respectively. The visual position of each entity may be determined based on the segregation and the performance levels 505-515 determined by the peer grouping module 315.
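One plausible way to compute such a visual position, assuming each secondary assessment feature occupies an angular sector of the wheel and each performance level a radial band (bottom quartile innermost, top quartile outermost), is sketched below. This layout scheme and the function name are assumptions for illustration only.

```python
# Hypothetical sketch: mapping (feature sector, performance level) to a
# Cartesian position on a wagon-wheel chart of unit radius.
import math

LEVELS = ("Bottom Quartile", "Middle Quartile", "Top Quartile")

def wheel_position(feature_index, n_features, level):
    """Place an entity marker at the center of its feature's angular
    sector, at a radius determined by its performance level."""
    angle = 2 * math.pi * (feature_index + 0.5) / n_features
    radius = (LEVELS.index(level) + 1) / len(LEVELS)
    return (radius * math.cos(angle), radius * math.sin(angle))

# Entity in the top quartile of the first of six primary assessment features.
x, y = wheel_position(0, 6, "Top Quartile")
```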
[0038] Referring again to FIG. 3, in an embodiment, the presentation module 320 may also generate and present insights and recommendations based on the ranked comparable entities in each peer group. In an embodiment, the presentation module 320 may generate the insights and recommendations based on one or more machine learning models. The insights and recommendations presented by the presentation module 320 may include predefined proactive measures and/or actions to be taken in order to improve the ranking of the entities corresponding to each primary assessment feature.
[0039] In an embodiment, the presentation module 320 may present the bar graphs, as shown in FIGS. 4A-4F, and the visual wagon wheel 500, as shown in FIG. 5, via the display 230 of the system 105. In an embodiment, the presentation module 320 may present the bar graphs as shown in FIGS. 4A-4F and the visual wagon wheel 500 on respective display units (not shown) of the electronic devices 110-125 via the communication interface 245 and the network 130. In an embodiment, the presentation module 320 may present the bar graphs as shown in FIGS. 4A-4F and the visual wagon wheel 500 in response to a user request received via the system 105 and/or the electronic devices 110-125.
Industrial Applicability
[0040] Referring to FIG. 6, a method for performance assessment of multiple entities is disclosed. The method comprises a step 605 of obtaining, from different data sources, for example, the electronic devices 110-120 (as shown in FIG. 1), performance measurement data corresponding to primary assessment features, for example, as shown in Table 1, associated with the multiple entities. The method also comprises a step 610 of identifying secondary assessment features, as shown in Table 1, associated with the primary assessment features from the performance measurement data based on machine learning models. Further, the method comprises a step 615 of determining measurement values, for example, as shown in Table 2, corresponding to the secondary assessment features from the performance measurement data for each entity of the multiple entities based on the machine learning models. The step 615 comprises a step of normalizing the performance measurement data corresponding to one or more primary assessment features associated with the multiple entities. In addition, the method comprises a step 620 of identifying comparable entities of the multiple entities, for example, as shown in Table 4, for each secondary assessment feature based on the determined measurement values. The method also comprises a step 625 of generating a peer group, for example, as shown in Table 5, for each primary assessment feature, the peer group comprising the comparable entities identified. The step 625 also comprises assigning weightage to the one or more secondary assessment features. The step 625 also comprises comparing the determined measurement values of the comparable entities. Further, the method comprises a step 630 of ranking the comparable entities, for example, as shown in Table 6, in each peer group for each primary assessment feature based on the determined measurement values. The step 630 also comprises segregating the comparable entities into different performance levels. 
The segregating the comparable entities may comprise comparing the determined measurement values for each secondary assessment feature with predefined ranking criteria corresponding to the performance levels respectively, for example, as shown in Table 6. Furthermore, the method comprises a step 635 of presenting the ranked comparable entities for each primary assessment feature, for example, as shown in FIGS. 4A-4F and FIG. 5. The step 635 also comprises generating a peer comparison matrix for the primary assessment features. The peer comparison matrix comprises the rank and the performance level of the comparable entities respectively in each peer group for each primary assessment feature. The step 635 also comprises presenting a visual wagon wheel 500 (as shown in FIG. 5) based on the peer comparison matrix. Referring to FIG. 5, the visual wagon wheel 500 may display the primary assessment features 405, 410, 415, 420, 425, and 430, the secondary features 435-490, the performance levels 505-515, and a visual position of the ranked comparable entities corresponding to each secondary feature in the visual wagon wheel 500. The visual position of each entity may be represented by unique visual indicators, such as different shapes or patterns, associated with the ranked comparable entities respectively. The visual position of the entities 402-403, 407-409 may be determined based on the segregation.
[0041] The system 105 and the method 600 of the present disclosure facilitate a unified, automated, and real-time performance assessment of multiple entities corresponding to different primary assessment features associated with the multiple entities. Further, the system 105 and the method 600 of the present disclosure also provide insights and recommendations based on the performance assessment such that proactive measures or actions may be undertaken to improve the performance of the entities corresponding to each primary assessment feature.
[0042] In the preceding specification, the present disclosure and its advantages have been described with reference to specific embodiments. However, it will be apparent to a person of ordinary skill in the art that various modifications and changes can be made, without departing from the scope of the present disclosure, as set forth in the claims below. Accordingly, the specification and figures are to be regarded as illustrative examples of the present disclosure, rather than in a restrictive sense. All such possible modifications are intended to be included within the scope of the present disclosure.
| # | Name | Date |
|---|---|---|
| 1 | 202121039642-STATEMENT OF UNDERTAKING (FORM 3) [01-09-2021(online)].pdf | 2021-09-01 |
| 2 | 202121039642-FORM 1 [01-09-2021(online)].pdf | 2021-09-01 |
| 3 | 202121039642-FIGURE OF ABSTRACT [01-09-2021(online)].pdf | 2021-09-01 |
| 4 | 202121039642-DRAWINGS [01-09-2021(online)].pdf | 2021-09-01 |
| 5 | 202121039642-DECLARATION OF INVENTORSHIP (FORM 5) [01-09-2021(online)].pdf | 2021-09-01 |
| 6 | 202121039642-COMPLETE SPECIFICATION [01-09-2021(online)].pdf | 2021-09-01 |
| 7 | Abstract1.jpg | 2021-11-23 |
| 8 | 202121039642-Proof of Right [01-12-2021(online)].pdf | 2021-12-01 |
| 9 | 202121039642-FORM-26 [01-12-2021(online)].pdf | 2021-12-01 |
| 10 | 202121039642-REQUEST FOR CERTIFIED COPY [24-08-2022(online)].pdf | 2022-08-24 |
| 11 | 202121039642 CORRESPONDANCE CERTIFIED COPY 24-08-2022.pdf | 2022-08-24 |
| 12 | 202121039642-FORM 3 [01-11-2022(online)].pdf | 2022-11-01 |
| 13 | 202121039642-FORM 18 [30-08-2024(online)].pdf | 2024-08-30 |