Appendix: Embedding Domain Knowledge for Estimating Customer Lifetime Value

How we designed an interpretable neural network to predict Customer Lifetime Value (Appendix)

This is an appendix to the blog post Embedding Domain Knowledge for Estimating Customer Lifetime Value. We describe some alternatives we considered for solving the proposed problem but did not end up implementing.

First, let’s assume we have pre-trained models for estimating the probabilities of the targets $yAlive_N$ and $yTaker$.

Estimating Lifetime Value using an optimization function

With a model estimating the client’s propensity to accept the offer (yTaker), we can estimate CLTV with a simple calculation:

Business Rules only approach

     \begin{eqnarray*} \operatorname*{argmax}_{X \in \text{Offer}} & \Big( & Propensity(User, X) \times PriceDest(X) \times 24 \, + \\ & & (1 - Propensity(User, X)) \times PriceOrigin(User, X) \times FP \Big) \end{eqnarray*}

The first term of the equation is the expected revenue in case the client accepts the offer: the fidelization period (FP) is renewed to 24 months, paid at the destination offer’s price. The second term is the expected revenue in case the client does not accept the offer, assuming no new offer is made in the remaining months; in that case, the client remains at the origin price for the remaining FP months.
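As a concrete illustration, here is a minimal Python sketch of how this argmax could be computed. The names (propensity, price_dest, price_origin, fp_months) are illustrative placeholders for the pre-trained propensity model and the pricing lookups described above, not the actual implementation.

```python
RENEWED_FP_MONTHS = 24  # fidelization period renewed if the client accepts the offer


def expected_value(user, offer, propensity, price_dest, price_origin, fp_months):
    """Expected revenue of presenting `offer` to `user` (business-rules-only rule)."""
    p_take = propensity(user, offer)  # P(client accepts the offer)
    accept_revenue = price_dest(offer) * RENEWED_FP_MONTHS
    reject_revenue = price_origin(user, offer) * fp_months
    return p_take * accept_revenue + (1 - p_take) * reject_revenue


def best_offer(user, offers, propensity, price_dest, price_origin, fp_months):
    """argmax over the offer catalogue of the expected revenue."""
    return max(
        offers,
        key=lambda offer: expected_value(
            user, offer, propensity, price_dest, price_origin, fp_months
        ),
    )
```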

Business Rules + Propensity + Churn Model approach

Let’s now assume we have two models:

  • Propensity Model: we can calculate the probability of y_taker_N (i.e., of the client accepting the offer)
  • Churn Model: we can predict the number of remaining months until the client churns

And that we also have some business rules embedded:

  • Survival Buyers: we can calculate global survival curves for the complete customer base (Buyers), restricted to clients who accept any new offer. These give us the average number of months until the client leaves the company, given that they accept an offer.

We can then create a slightly more complex optimization function.

     \begin{eqnarray*} \operatorname*{argmax}_{X \in \text{Offer}} & \Big( & PriceDest(X) \times Buyers(FP) \times Propensity(User, X) \, + \\ & & (1 - Propensity(User, X)) \times PriceOrigin(User, X) \times Churn(User) \Big) \end{eqnarray*}
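A corresponding sketch of this rule, under the same illustrative naming assumptions, with buyers_survival_months standing in for the Buyers survival curve and churn_months for the churn model, could look like this:

```python
def expected_value_with_churn(
    user, offer, propensity, price_dest, price_origin,
    buyers_survival_months, churn_months, fp_months,
):
    """Expected revenue combining the propensity model, the buyers' survival
    curve (business rule) and the churn model."""
    p_take = propensity(user, offer)
    # If the client accepts: destination price over the average survival of buyers.
    accept_revenue = price_dest(offer) * buyers_survival_months(fp_months)
    # If the client rejects: origin price until the predicted churn month.
    reject_revenue = price_origin(user, offer) * churn_months(user)
    return p_take * accept_revenue + (1 - p_take) * reject_revenue
```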

Single-Task Machine Learning 

Although this solution can be computed quickly when pre-trained models are available for the churn and taker tasks (which is useful for quick proofs of concept and baseline performance), it does not use much of the knowledge that can be extracted from customer interactions.

A possible way to use this information is to include the probabilities of accepting the offer and of churning as features, as follows:

CLTV :: Propensity x OriginOffer x DestinationOffer x ChurnProbability
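As a rough sketch of this setup, assuming the pre-trained propensity and churn models expose scikit-learn-style predict_proba methods and using illustrative column names (not the actual feature set), the input table for such a CLTV regressor could be assembled like this:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor


def build_cltv_features(customers, propensity_model, churn_model):
    """Stack the outputs of the pre-trained models with the offer prices
    as input features of a CLTV regressor."""
    return pd.DataFrame({
        "propensity": propensity_model.predict_proba(customers)[:, 1],
        "origin_offer_price": customers["origin_offer_price"],
        "destination_offer_price": customers["destination_offer_price"],
        "churn_probability": churn_model.predict_proba(customers)[:, 1],
    })


# Usage sketch (training data and models are assumed to exist):
# X = build_cltv_features(train_customers, propensity_model, churn_model)
# cltv_regressor = GradientBoostingRegressor().fit(X, train_cltv)
```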

However, this would require maintaining three models in production and constantly assessing their quality: a regression model for estimating customer lifetime value, a propensity model, and a churn model. Also, if we wanted a multiple-output approach, we would need as many pre-trained models as there are outputs.
