
Player Retention For Online Games

Abstract: In an online game system, e.g., using mobile phone apps, retention incentives are selectively applied to players based on their respective predicted likelihoods of churning (i.e., ceasing to participate in the game at some time in the near future). Game play data is logged and transferred from mobile phones to a game backend, which generates feature vectors including values for various game-play variables and indications whether or not retention incentives were applied. The feature vectors are used by a churn-prediction engine that generates the predictions that form the basis of retention incentive decisions. A deep-machine-learning engine updates models used by the churn-prediction engine to make its churn predictions based in part on the retention incentives applied and their results. [FIGURE 1]


Patent Information

Application #
201821023349
Filing Date
22 June 2018
Publication Number
52/2019
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
docketing@globalipservices.com
Parent Application

Applicants

ABSENTIA VIRTUAL REALITY PRIVATE LIMITED
B2-B/25, Runwal Nagar CHS, Kolbad, Thane (West)-400601 Maharashtra, India

Inventors

1. SHUBHAM MISHRA
HAL Township, 510, GT Road, Kanpur – 208007, Uttar Pradesh, India
2. HARIKRISHNA VALIYATH
A2, Ratandeep, 141 SV Road, Andheri (West), Mumbai – 400058, Maharashtra, India
3. VRUSHALI PRASADE
A2-6/203, Flower Valley Complex, Off Eastern Express Highway, Khopat, Thane (West) – 400601, Maharashtra, India
4. SOMA HALDER
Tinpukur, Badra PO, Kolkata – 700079, West Bengal, India
5. ASHISH GUPTA
89-A, Hanuman Vatika - 2, Tagore Nagar, Jaipur – 302021, Rajasthan, India

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION (See section 10, rule 13)
“PLAYER RETENTION FOR ONLINE GAMES”
ABSENTIA VIRTUAL REALITY PRIVATE LIMITED, B2-B/25, Runwal Nagar CHS, Kolbad, Thane (West) – 400601, Maharashtra, India, a company incorporated in India
The following specification particularly describes the invention and the manner in which it is to be performed.

PLAYER RETENTION FOR ONLINE GAMES
BACKGROUND
[01] Mobile phones can serve as game consoles for games downloaded, e.g., as apps. In many cases, a game can be downloaded for free or for a low cost. The developer or distributor may then count on advertising and/or in-app purchases for additional revenue. A player that churns, i.e., no longer plays a game, no longer makes in-app purchases and no longer represents value to an advertiser. Accordingly, efforts are made to identify likely churners and to provide incentives for them to continue game participation.
BRIEF DESCRIPTION OF THE DRAWINGS
[02] FIGURE 1 is a schematic diagram of a game system that provides for automated player retention incentives.
[03] FIGURE 2 is a schematic diagram of a feature vector used in the game system of FIG. 1.
[04] FIGURE 3 is a flow chart of a retention-boosting process implementable in the game system of FIG. 1 and other systems.
[05] FIGURE 4 is a flow chart of a game-development process to set up the game system of FIG. 1.

DETAILED DESCRIPTION
[06] The present invention provides for a game system in which retention actions, e.g., retention incentives to prevent player churning, are determined based on predicted likelihoods of churning. The game-play data collected includes retention data indicating whether or not a retention action was taken. The retention data allows a churn-prediction engine to distinguish cases in which a retention action was taken from cases in which one was not, so that the prediction engine can make more accurate predictions in each case. Retention actions and their results are: 1) used by the churn-prediction engine to predict likelihoods of churning; and 2) used by a machine-learning engine to adjust the models used to predict churning. In addition, the invention provides a game-development platform for developing such a game.
[07] In some embodiments, churn-likelihood thresholds are set to divide players into low, medium, and high-risk churners. Retention actions are then taken based on player categorizations as low, medium, or high risk. In other embodiments, churn likelihoods are predicted both assuming a retention action is taken and assuming a retention action is not taken. Retention actions can then be taken based on the marginal benefit of the retention action. Other prediction and retention-action approaches are used in still other embodiments.
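For the first approach, a minimal Python sketch of the risk categorization follows; the threshold values and function name are illustrative assumptions, not taken from the specification:

```python
def classify_churn_risk(churn_probability: float,
                        low_threshold: float = 0.3,
                        high_threshold: float = 0.7) -> str:
    """Map a predicted churn likelihood to a risk category.

    The specification notes that, at least initially, such threshold
    limits are set and optimized manually; the values here are invented.
    """
    if churn_probability < low_threshold:
        return "low"
    if churn_probability < high_threshold:
        return "medium"
    return "high"
```

Retention actions can then be keyed off the returned category, e.g., reserving the costliest incentives for high-risk players.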
[08] The game-play data is used to detect when a player has churned and how likely a player (that has not churned) is to churn. Adding the retention-action data permits a machine-learning engine to assess the degree of success of retention actions. Such assessments can be used to improve the churn-prediction models employed by the prediction engine, to adjust segmentation thresholds used to divide players into separate groups so that models can be customized more effectively for each group, to adjust thresholds used to classify players as low, medium, or high risk churners, and to more effectively determine whether to apply a retention action or, in some cases, which retention action to apply in each case. These and other features and advantages of the invention are apparent from the examples below.
[09] As shown in FIG. 1, a game system 100 includes a game console 102 and a cloud-based back end 104. Game console 102 can be a mobile phone or other device on which a game app has been installed. The game app provides for logging game-play data including session start and end times and, depending on the game, scores and/or values for other game-related parameters.
[10] The game-play data collected is transferred to backend 104 for storage in a data store 106. Data store 106 is also used to store data generated from the raw game data received from console 102. For example, predictions, player-profile data, and device-profile data can be stored in data store 106. Player profile data can include country of residence and other localization information, as well as personal information such as age, gender, etc., if available. Device-profile information can include manufacturer and model of the mobile device, as well as operating-system version, amount of memory installed, price, and language, time-zone and other configuration settings.
[11] Representational filters 108 provide for segmenting, i.e., partitioning, data by player according to player and device profile data to improve prediction performance. Model performance can be improved if different kinds of data are analyzed differently. For example, the behavior of an American user can be entirely different from that of a Canadian user in terms of several variables. Hence, if the data is pre-segmented before the modelling process, and individual models are prepared for each of these segments, the models will give improved performance relative to a single model applicable to all segments. However, in order to prevent high model variance, these segments should, in general, not have class skew or too few data points. The key categories identified for the gaming industry include: 1) country; 2) region (part of a country or a group of countries); 3) mobile-phone brand and platform; and 4) affluence (based on mobile-phone price). In addition, segmentation can facilitate conformance to local variations in law, e.g., laws that may limit which retention-boosting incentives may be offered.
[12] A feature-vector generator 110 generates feature vectors 112, each of which is an ordered set of features (parameter values) to be used for predicting the likelihood of churn for a given player. As shown in FIG. 2, each feature vector 112 includes a data vector 202 and a retention-decision vector 204. To ensure that the major factors influencing churn are taken into account, the data vector 202 includes representatives of the following feature categories: transaction 206, velocity or ratio 208, individual-based cumulative average 210, game-based cumulative average 212, and indicator or category 214. Other embodiments include other categories of features and may not include representatives of all the foregoing feature categories.
[13] Transaction variables 206 include values that are directly captured from player behavior. Examples include player level, start time, and play-session duration. Velocity/ratio variables 208 are formed from one or more transaction variables. Examples include levels cleared per week and average time per session.

[14] Individual-based cumulative average variables 210 compare a player’s recent performance and behavior to their own generic mean values. These include variables like ratio of time spent in last session to average time per session. Game-based cumulative average variables 212 compare the user data to the cumulative average values of other users. An example is weekly time spent by player against average weekly time spent by others in the same country.
[15] Indicator/category variables 214 are binary variables such as “does the player operate on an Android device?” Other player and device profile data can be included in this category. Indicator variables 214 have direct correlation to the visual filters that are created to analyze model results, and thus can be powerful contributors in a machine learning model.
[16] Retention-decision vector 204 can be a vector of 1s and 0s, with each 1 indicating a churn-prevention activity. In an alternative embodiment in which different retention actions may be selected, the retention-decision vector indicates not only whether or not a retention action was taken, but also which retention action was taken.
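As a hedged illustration, the following Python sketch assembles a feature vector 112 from a data vector and a one-hot retention-decision vector; the particular features, action names, and helper function are assumptions for illustration only:

```python
import numpy as np

# Illustrative retention actions; each position in the decision
# vector uniquely represents one action (index 0 = no action).
RETENTION_ACTIONS = ["none", "purchase_credit", "bonus_points", "extra_life"]

def make_feature_vector(data_features, action):
    """Concatenate the data vector (202) with the retention-decision
    vector (204), encoded as 1s and 0s."""
    decision = np.zeros(len(RETENTION_ACTIONS))
    decision[RETENTION_ACTIONS.index(action)] = 1.0
    return np.concatenate([np.asarray(data_features, dtype=float), decision])

# Example data vector: one representative each from the transaction,
# velocity/ratio, individual-average, game-based-average, and
# indicator categories.
vector = make_feature_vector([12.0, 3.5, 1.2, 0.8, 1.0], "bonus_points")
```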
[17] Feature vectors 112 (FIG. 1) are transmitted from feature vector generator 110 to a churn prediction engine 114, which outputs churn predictions, e.g., data representing likelihoods of churn for respective players. Prediction engine 114 includes a churn-prediction-model set of one or more churn-prediction models. Each segment separated by representational filters 108 may have its own model. In an alternative embodiment, one complex model accommodates the entire population of players.
[18] Churn-prediction engine 114 embodies a definition of “churn” in the form of a user performing a certain activity in the game fewer than n (the activity count) times in a given churn period t. Both the activity count and the churn period need to be optimized for best results. The churn period can progressively move forward as the model continues to learn from new incoming data.
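A minimal sketch of this churn definition in Python; the timestamp representation and function name are assumptions:

```python
from datetime import datetime, timedelta

def has_churned(activity_times, period_end: datetime, n: int, t_days: int) -> bool:
    """Label a player as churned when a given in-game activity occurs
    fewer than n times in the churn period t (here, the t_days ending
    at period_end). Both n and t must be tuned, and the window can
    move forward as new data arrives."""
    window_start = period_end - timedelta(days=t_days)
    count = sum(window_start <= ts <= period_end for ts in activity_times)
    return count < n
```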
[19] Using the feature vectors, predictions are made of a user's degree of risk of churning, in terms of probability. These churn risks are classified into low, mid, and high risk categories. The threshold probability limits for this categorization need to be set and optimized manually. The feature vector which generates the lowest risk is chosen for performing the retention-boosting activity and is processed for modeling.
[20] One further step here can minimize the costs of the retention boosting as well. Given a vector of n scores, where n is the number of retention vectors and each score stands for the probability of churning for a particular retention decision (including the decision of no action), the scores can be multiplied element-wise by a retention-cost vector, which is a vector of the costs of the retention decisions. This yields the expected cost of each retention decision for that user. Any kind of business logic can then be included to combine the probabilities of churning and the retention costs when making a final choice of retention decision.
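A minimal numerical sketch of this step; the probabilities and costs are invented, and the element-wise product is one reading of the multiplication described above, since it yields one expected cost per decision:

```python
import numpy as np

# Index 0 is the "no action" decision; values are illustrative.
churn_probability = np.array([0.80, 0.35, 0.45, 0.50])
retention_cost    = np.array([0.00, 1.50, 0.75, 1.00])

# Expected cost of each retention decision for this user.
expected_cost = churn_probability * retention_cost

# Example business logic: cheapest action whose predicted churn
# probability is acceptably low.
acceptable = churn_probability < 0.5
best = int(np.argmin(np.where(acceptable, expected_cost, np.inf)))
```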
[21] Churn-prediction engine 114 outputs a churn-prediction set of one or more predictions 116 per feature vector 112. In one scenario, a single prediction represents the likelihood a player will churn if no incentive to remain is provided. In another scenario, an additional prediction indicates the likelihood that a player will churn if an incentive to remain is provided; in this case, predictions for cases with and without retention incentives can be compared to evaluate the marginal utility of a retention incentive. In yet another scenario, separate predictions of likelihoods of churn are provided for respective incentives, in case more than one incentive type is available.
[22] Predictions 116 are transmitted to retention-action generator 118, which determines, based on received predictions, when retention actions 120 should be taken and, if there is a choice of retention actions, which retention actions should be taken. Examples of retention actions can include credits for in-app purchases and game-specific enhancements to enhance a player’s status in the game or to make the game more interesting, e.g., bonus points, extra lives, and new characters.
[23] In one configuration, the illustrated game system 100 classifies players as high, medium, and low risk churners; in this configuration, retention-action generator 118 makes retention decisions based on a player’s classification as a low, medium or high risk churner. In an alternative configuration, retention action generator 118 may condition a retention action on the likelihood being above some threshold or being within some range (e.g., above which the retention action would likely be ineffective).
[24] In configurations in which churn predictions are provided for both a case in which retention action is not taken and a case in which a retention action is taken, a retention-action decision can be based on a comparison of the predicted likelihoods. In the event predictions are available for each of a selection of retention actions, the most effective retention action can be selected. In the event that costs are associated with each of the retention actions, the most cost-effective retention action can be selected.
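A hedged sketch of this comparison-based decision; the minimum-benefit threshold and the dictionary structure are assumptions:

```python
from typing import Optional

def choose_retention_action(p_no_action: float,
                            p_with_action: dict,
                            min_benefit: float = 0.05) -> Optional[str]:
    """Return the retention action predicted to yield the greatest
    reduction in churn likelihood relative to taking no action, or
    None when no action is predicted to help enough."""
    best_action, best_benefit = None, min_benefit
    for action, p in p_with_action.items():
        if p_no_action - p > best_benefit:
            best_action, best_benefit = action, p_no_action - p
    return best_action

choice = choose_retention_action(0.7, {"purchase_credit": 0.4, "extra_life": 0.6})
```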
[25] Retention actions 120 can be applied automatically at game console 102. For example, a new level of play may be installed automatically. In some cases, an automatically applied offer, e.g., of a credit toward an in-app purchase, may or may not be accepted by the player.
[26] Data collected from console 102 can indicate whether or not a retention action was taken. Of course, the game-play data indicates whether or not a player has churned, so in the event a retention action is taken, the same data indicates whether or not the retention action was successful. In some embodiments, additional information can be collected to identify which retention action was taken and, in the event an offer was involved, whether or not the player accepted the offer. This additional information can be used to enhance churn-prediction accuracy.
[27] Logged play data, including retention actions 120 and their results, predictions 116, and feature vectors 112, are provided, via data store 106 and, in some embodiments, via representational filters 108, to machine-learning engine 122, which makes adjustments to the prediction models based on the collected data to minimize the error between predictions and their results. Different adjustments can be made to different models for segmented player data.
[28] Machine learning engine 122 can be a deep machine learning engine to achieve better results, i.e., more accurate churn predictions. Deep machine learning engines can make it difficult for humans to evaluate what is going on. However, some visibility is provided by feature/goal correlator 124, which can provide human-readable correlations between features and predictions, and between retention actions and churn reduction.

[29] The retention classes can be introduced with randomization, or with some inherent business logic, by analyzing the data vector. A sufficient number of non-skewed data points is required for the machine-learning algorithm to perform effectively. Each class is uniquely represented by a position in the retention vector. Direct correlations of each of the variables with the goal metric (churn categorization) can be studied. The ability to analyze the effect of individual variables can help in comprehending the model better. This is especially useful in determining new retention decisions. Variance and collinearity methods can also be used for feature selection.
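As a hedged illustration of studying these correlations, a small pandas sketch follows; the column names and values are invented:

```python
import pandas as pd

# One row per player: engineered features plus the goal metric
# (1 = churned, 0 = retained).
df = pd.DataFrame({
    "sessions_per_week": [5, 1, 7, 0, 3],
    "is_android":        [1, 0, 1, 1, 0],
    "churned":           [0, 1, 0, 1, 0],
})

features = df.drop(columns="churned")
correlations = features.corrwith(df["churned"])  # per-variable correlation with the goal
variances = features.var()                       # low variance suggests a weak feature
collinearity = features.corr()                   # highly correlated pairs are candidates for removal
```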
[30] The invention describes a retention-booster technique which facilitates identification of likely churners and enables businesses to act both instantaneously and effectively. The retention process consists of processing of raw data and feature engineering, followed by ensemble learning using deep-learning algorithms, to classify churners into low, medium, or high risk categories. These predicted churners can be directly aligned with appropriate retention decisions as defined by the responsible business. A retention decision corresponds to one or more game-owner-initiated notifications, discounts, reminders, or other points of interaction. As comprehensibility of the model is a critical challenge in making retention decisions, this model pre-incorporates the effect of retention decisions in its score itself. The model is aimed at achieving higher prediction accuracy without compromising the comprehensibility of the model.
[31] The end goal of the retention process can be, for example: 1) to increase or maximize revenue; 2) to reduce the number of users leaving the game; or 3) to prevent reduction of game time by players. This churn-prediction model uses a continuous learning algorithm and does not have a fixed time period assigned as a learning period. While the model may initially respond randomly (and may require some manual input), it becomes more and more confident as it accommodates new data points over time.
[32] A process 300 implementable in game system 100 and other systems is flow-charted in FIG. 3. At 310, a game is developed. As part of game development, feature engineering defines attributes to serve as independent variables to be input to a churn-prediction model. In the illustrated embodiment, the selected attributes include representatives from the following five categories: transaction, velocity or ratio, individual average, game-based average, and indicator or category. In addition, an attribute indicates whether or not a retention incentive was applied. In other embodiments, fewer than all of these categories are used and/or additional categories are represented.
[33] In addition, at 310, representational filters can be set up to segment players so that the machine-learning engine can tweak respective churn-prediction models separately. In some embodiments, the machine-learning engine can adjust the representational filters to optimize the segmentation. For example, the mobile-phone price thresholds between segments can be adjusted to achieve better results.
[34] An iterative retention loop is performed at 320. At 321, retention actions are determined. For example, a determination may be made to offer a player a credit on an in-app purchase, or opportunities for extended play or a higher level of play, e.g., to make player churn less likely. Alternatively, a determination may be made not to take any affirmative retention action, e.g., because the player is not likely to churn or because the player is probably going to churn even given the retention action. At 322, the determined retention action is taken at the game console. Both the determination and any subsequent implementation can be automated (performed without human intervention other than via the mobile phone itself).
[35] At 323, game-play data is collected, e.g., logged, on game consoles, e.g., mobile phones. The data can include game-play session start times, end times (or play durations), scores, and other game-specific data. In addition, data indicating implementation and consumption of retention actions is collected. For example, the offer of a credit for in-app purchase and a player’s exercise of that credit can be logged. The collection can occur initially on the mobile device, followed by transfer to a game back end.
[36] At 324, representational filters segment the collected data according to player and device profiles. Segmentation can be based on several attributes, including player country of residence, player intra-country or inter-country region of residence, mobile-device model and/or price, and mobile-device platform, e.g., Android (provided by Alphabet, Inc.) or iOS (provided by Apple, Inc.).
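A minimal sketch of such a representational filter; the segment boundaries, in particular the phone-price bands standing in for affluence, are assumptions:

```python
def segment_key(profile: dict) -> tuple:
    """Assign a player to a segment by country, platform, and an
    affluence band inferred from mobile-phone price; each segment
    can then be routed to its own churn-prediction model."""
    price = profile["device_price_usd"]
    affluence = "high" if price >= 400 else "mid" if price >= 150 else "low"
    return (profile["country"], profile["platform"], affluence)

key = segment_key({"country": "IN", "platform": "Android", "device_price_usd": 220})
```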
[37] At 325, feature vectors are generated. Each feature vector is an ordered set of attributes, e.g., parameter values, to be used as independent variables to a churn-prediction model. In the illustrated embodiment, the attributes include representatives of the attribute categories established at 310.
[38] At 326, a churn-prediction model is used to generate predictions based on feature vectors. In a first configuration, predictions are made for a likelihood of churn for each player assuming no retention action is taken. In a second configuration, predictions are also provided assuming a retention action is taken. In variations in which more than one retention action is available, predictions can be made for each selectable retention action.
[39] The predictions generated at 326 are used at 321 for retention action determinations. Depending on configuration settings, affirmative decisions (a retention action is to be taken) can be based on a churn likelihood above a certain threshold, or on a predicted reduction in churn likelihood associated with a retention action, or other bases. If more than one retention action is available, the one that is predicted to achieve the greatest reduction in churn likelihood can be selected. Alternatively, the most cost-effective retention action may be selected based, in part, on costs associated with respective retention actions.
[40] At 330, attributes can be correlated with predictions or results to determine their contributions to results and predictions. These correlations can be provided in human-readable form to provide insights into the operation of the churn-prediction model. At 340, a machine-learning engine determines whether or not adjustments need to be made to the churn-prediction model based on a comparison of predictions and results. Adjustments are made to minimize the error (difference) between predictions and the corresponding results. The adjusting causes at least some predictions made after the adjusting to differ from what they would have been had the adjusting not been performed.
[41] Game-development process 310 is flow-charted in greater detail in FIG. 4. At 411, goals are defined. These goals are used by the machine-learning engine to guide adjustments to the churn-prediction models used by the churn-prediction engine. As mentioned above, example goals include: 1) to increase or maximize revenue; 2) to reduce the number of users leaving the game; or 3) to prevent reduction of game time by players.
[42] At 412, representational filters are designed to segment players into groups, with each group being handled separately by the machine-learning engine to achieve greater prediction accuracy. Thus, different instances of a churn-prediction model can be adjusted separately to achieve greater prediction accuracy. In addition, separating the players into groups allows different retention incentives to be used, e.g., as may be appropriate for players from different countries. For example, the currencies in which in-app purchase credits can be expressed can be matched to the local currency.
[43] At 413, feature engineering involves selecting the variables to be represented in the feature vectors. One or more representatives can be selected from each of the pre-determined categories to ensure a set of variables sufficient for predicting churn accurately and for accurately adjusting the churn-prediction models. The pre-determined categories can include transaction, velocity, indicator, individual-average, and game-average attributes.
[44] At 414, modeling and training are performed. When the actual result of whether the user has churned or not is acquired, the set of feature vectors is consumed by the deep machine-learning model. The deep machine-learning models used in this process are a Convolutional Neural Network (CNN) or a Recurrent Neural Network (RNN). One can be preferred over the other after observing their accuracies for a certain period of time.
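A minimal PyTorch sketch of the RNN option, consuming a sequence of per-session feature vectors per player; the dimensions, architecture, and loss are illustrative assumptions, not the specification's actual model:

```python
import torch
import torch.nn as nn

class ChurnRNN(nn.Module):
    """Maps a sequence of feature vectors to a churn probability."""
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        _, h = self.rnn(x)                      # h: (1, batch, hidden)
        return torch.sigmoid(self.head(h[-1]))  # (batch, 1) churn probabilities

model = ChurnRNN(n_features=9)
batch = torch.randn(4, 10, 9)    # 4 players, 10 sessions, 9 features each
p_churn = model(batch)
labels = torch.tensor([[1.], [0.], [1.], [0.]])  # 1 = churned
loss = nn.functional.binary_cross_entropy(p_churn, labels)
loss.backward()                  # continuous learning from new data points
```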
[45] Using the feature vectors, predictions are made of a user's degree of risk of churning, in terms of probability. These churn risks are classified into low, mid, and high risk categories. At least initially, the boundary probability limits for this categorization need to be set and optimized manually. In an embodiment, these boundary limits may be adjusted by the machine-learning engine. The feature vector which generates the lowest risk is chosen for performing the retention-boosting activity and is processed for modeling.
[46] Costs of retention boosting can be minimized. Given a vector of n scores, where n is the number of retention vectors and each score stands for the probability of churning for a particular retention decision (including the decision of no action), the scores can be multiplied element-wise by a retention-cost vector, which is a vector of the costs of the retention decisions. This gives the expected cost of each retention decision for that user. Business logic can be built in to combine the probabilities of churning and the retention costs when making a final choice of retention decision.
[47] At 415, model-performance evaluation is performed. An evaluation time period is defined initially. The performance of the model can be analyzed over the following timeframes: lifetime performance and performance in the current period. The improvement in model performance over time can be evaluated as performance in the current period versus lifetime performance, and as performance in the current period versus performance in the previous period.
[48] Individual retention action types can be evaluated by the following formula. [FORMULA]
[49] The overall performance of a model can be evaluated by the following formula, where (users)i is the number of users on whom the retention technique i has been applied. [FORMULA]
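The formula image is not reproduced in the available text. Given that (users)i weights each retention technique i, one plausible reconstruction (an assumption, not the original formula) is a user-weighted average of per-technique performance:

```latex
\text{overall performance} =
  \frac{\sum_i (\text{users})_i \,(\text{performance})_i}
       {\sum_i (\text{users})_i}
```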
[50] For certain businesses, the following metric of cost effectiveness can also be analyzed. [FORMULA]
[51] Herein, related art labeled “prior art”, if any, is admitted prior art; related art not labeled “prior art”, if any, is not admitted prior art. The illustrated embodiments, as well as variations thereupon and modifications thereto, are within the scope of the present invention, which is defined in the following claims.

I/We Claim:
1. A process comprising:
iteratively performing an automated action sequence including:
determining whether or not to provide retention incentives to respective players to delay or forgo churning based at least in part on respective predictions of likelihoods of churning for the respective players, the determining defining a first group of players to whom respective incentives are to be provided, the determining defining a second group of players to whom respective retention incentives are not to be provided,
providing the retention incentives to players of the first group and not providing the retention incentives to players of the second group,
collecting game-play data for each of the players of the first group and each of the players of the second group, the game-play data indicating, for each player of the first group, whether or not the player churned, and
using a set of one or more churn-prediction models, making predictions of likelihoods of churning for respective players of the first group and the second group based at least in part on game-play data collected from the respective players; and using a machine-learning engine, adjusting at least one of the churn-prediction models based in part on the predictions and in part based on the indications of whether or not players of the first group have churned, the adjusting causing at least some predictions made after the adjusting to differ from what they would have been had the adjusting not been performed.

2. The process of Claim 1 wherein the iteratively performing includes generating feature vectors from the game-play data and using the feature vectors to make the predictions of likelihoods of churning for respective players.
3. The process of Claim 2 wherein the feature vectors include values for representatives of each of the following categories of variables: transaction variables, ratio variables, individual-based cumulative average variables, game-based cumulative average variables, and indicator variables.
4. The process of Claim 3 wherein the machine-learning engine is a deep-machine learning engine.
5. The process of Claim 1 wherein the iteratively performing further includes dividing players into segments according to prices of their mobile phones and other parameters, each of the segments corresponding to a respective churn-prediction model, and adjusting the respective churn-prediction models separately.
6. A system comprising non-transitory media encoded with code that, when executed by a processor, implements a process including:
iteratively performing an automated action sequence including:
determining whether or not to provide retention incentives to respective players to delay or forgo churning based at least in part on respective predictions of likelihoods of churning for the respective players, the determining defining a first group of players to whom respective incentives are to be provided, the determining defining a second group of players to whom respective retention incentives are not to be provided,
providing the retention incentives to players of the first group and not providing the retention incentives to players of the second group,
collecting game-play data for each of the players of the first group and each of the players of the second group, the game-play data indicating, for each player of the first group, whether or not the player churned, and
using a set of one or more churn-prediction models, making predictions of likelihoods of churning for respective players of the first group and the second group based at least in part on game-play data collected from the respective players; and using a machine-learning engine, adjusting at least one of the churn-prediction models based in part on the predictions and in part based on the indications of whether or not players of the first group have churned, the adjusting causing at least some predictions made after the adjusting to differ from what they would have been had the adjusting not been performed.
7. The system of Claim 6 wherein the iteratively performing includes generating feature vectors from the game-play data and using the feature vectors to make the predictions of likelihoods of churning for respective players.
8. The system of Claim 7 wherein the feature vectors include values for representatives of each of the following categories of variables: transaction variables, ratio variables, individual-based cumulative average variables, game-based cumulative average variables, and indicator variables.

9. The system of Claim 8 wherein the machine-learning engine is a deep-machine learning engine.
10. The system of Claim 6 wherein the iteratively performing further includes dividing players into segments according to prices of their mobile phones and other parameters, each of the segments corresponding to a respective churn-prediction model, and adjusting the respective churn-prediction models separately.

Documents

Application Documents

# Name Date
1 201821023349-STATEMENT OF UNDERTAKING (FORM 3) [22-06-2018(online)].pdf 2018-06-22
2 201821023349-POWER OF AUTHORITY [22-06-2018(online)].pdf 2018-06-22
3 201821023349-FORM 1 [22-06-2018(online)].pdf 2018-06-22
4 201821023349-DRAWINGS [22-06-2018(online)].pdf 2018-06-22
5 201821023349-DECLARATION OF INVENTORSHIP (FORM 5) [22-06-2018(online)].pdf 2018-06-22
6 201821023349-COMPLETE SPECIFICATION [22-06-2018(online)].pdf 2018-06-22
7 Abstract1.jpg 2018-08-11
8 201821023349-Proof of Right (MANDATORY) [20-12-2018(online)].pdf 2018-12-20
9 201821023349-FORM-26 [20-12-2018(online)].pdf 2018-12-20
10 201821023349-ORIGINAL UR 6(1A) FORM 1 & FORM 26-271218.pdf 2019-04-16