Personalization @Intuit Part 3 — Platform


In Part 2 we looked at how Personalization is a message or content that is relevant to the individual user, built on top of Globalization and interacts closely with Experimentation. In Part 3, we will look at how we have built out a Personalization platform that scalably solves for the scenarios we covered in Part 1.

There are five foundational blocks to Personalization: Globalization, Experimentation, ML, Profile, and Tracking & Instrumentation. Globalization and Experimentation are covered in detail in prior blogs; ML and Profile are covered below, while Tracking will be covered in a subsequent blog.

ML Platform

ML is a critical component of our personalization strategy.

Background — An ML algorithm uses example (training) data to create a generalized solution (a model) that addresses the business problem to be solved. After you create a model, you can use it to answer the same business question for a new set of data; this is also referred to as obtaining inferences. A typical ML model cycle is shown below.

  1. Build — this is the phase where our data scientists discover, explore, and prepare the relevant data in our Data Lake. They also narrow down the right algorithm for a given problem. The algorithm depends on a number of features, the relevant attributes or factors needed to develop and train the models. Feature engineering is the process of using domain knowledge to extract these features.
  2. Train — after deciding on the approach, we teach the model how to make predictions by training it. This involves providing the ML algorithm with training data to learn from; the features are selected from this training data. This phase can optionally also involve hyperparameter optimization. Unlike features, hyperparameters are decided before fitting the model because they cannot be learned from the data.
  3. Deploy — after the model is trained, it is deployed for inference. Model performance is evaluated using different techniques.
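To make the distinction between learned parameters and hyperparameters concrete, here is a toy sketch of the train phase: a one-feature linear model fit with gradient descent. The learning rate and epoch count are hyperparameters, fixed before fitting, while the weight is learned from the data. This is a hypothetical illustration, not Intuit's actual training stack.

```python
def train(xs, ys, learning_rate=0.05, epochs=200):
    """Learn the weight w for the model y ~= w * x.

    learning_rate and epochs are hyperparameters: chosen before
    fitting, not learned from the data. w is a model parameter:
    learned from the training data.
    """
    w = 0.0
    for _ in range(epochs):
        # Gradient of mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= learning_rate * grad
    return w

# Training data drawn from y = 2x; the learned weight converges to ~2.0.
w = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(round(w, 2))  # → 2.0
```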

The model evolution is a continuous cycle. After deploying a model, you monitor the inferences, collect performance data, and evaluate the model to identify drift. You then increase the accuracy of your inferences by updating your training data to include the newly collected performance data and retraining the model with the new dataset. As more and more example data becomes available, you continue retraining your model to increase accuracy. A core part of the lifecycle is the ability to manage all the artifacts associated with a model: the source code, environment metadata, feature sets, training sets, and trained models. This allows us to tie the various components together into a seamless platform and support end-to-end automation of the model lifecycle.
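The monitor-and-retrain step of this cycle can be sketched as a simple drift check: if accuracy measured on newly collected performance data falls too far below the accuracy at deploy time, the model is flagged for retraining on an updated dataset. The function name and threshold below are illustrative assumptions, not details of the Intuit platform.

```python
def needs_retraining(deploy_accuracy, live_accuracy, max_drift=0.05):
    """Flag a deployed model when live accuracy drifts beyond tolerance.

    deploy_accuracy: accuracy measured on the holdout set at deploy time.
    live_accuracy:   accuracy measured on newly collected performance data.
    max_drift:       illustrative tolerance before retraining is triggered.
    """
    return (deploy_accuracy - live_accuracy) > max_drift

print(needs_retraining(0.92, 0.90))  # → False, within tolerance
print(needs_retraining(0.92, 0.80))  # → True, drift exceeds threshold
```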

A model can predict an outcome during a user interaction in one of the following ways.

  1. A combination of features along with their respective coefficients are evaluated online for a given context to produce a score.
  2. A combination of features along with their respective coefficients are evaluated offline to produce a score.
  3. The algorithm (code) is evaluated online for a given context. It can optionally look up a combination of features and/or scores.
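Option 1 above can be sketched as an online evaluation that combines feature values looked up for the current context with the model's coefficients to produce a score. The feature names and weights here are hypothetical.

```python
def score(features, coefficients, intercept=0.0):
    """Linear model score: intercept plus the dot product of
    context feature values and their respective coefficients."""
    return intercept + sum(
        coefficients.get(name, 0.0) * value
        for name, value in features.items()
    )

# Hypothetical feature values for the current user context, and the
# coefficients the training phase produced for them.
context_features = {"days_since_signup": 12, "invoices_created": 3}
model_coefficients = {"days_since_signup": -0.01, "invoices_created": 0.25}

print(round(score(context_features, model_coefficients), 2))  # → 0.63
```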

User Profile

A user profile needs to encapsulate a view of the customer with all profile attributes, such as behavioral, social, mobile, demographic, transactional, contextual, and location data, with the ability to aggregate and cross-link 1st-party (mobile, web), 2nd-party, and 3rd-party data sets seamlessly across distinct data sources. The data model needs to be flexible enough that new attributes can be added dynamically.

The profile service can personalize on either a strong identity, such as a logged-in user, phone, or email, or a weak identity, such as cookies, mobile identifiers, or social identifiers. The profile backend needs to support highly parallelized ingestion of new identities and profile maps with built-in data security, privacy, and compliance. Any personalization technique has serious privacy consequences and needs to be carefully evaluated against the website's data collection, data usage, and sharing policies; it needs to be either disclosed in the terms and conditions or driven by explicit user consent. The profile service needs to support both deterministic segments and predictive scores.
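The shape of such a record can be sketched as below: a profile keyed by both strong and weak identities, with a free-form attribute map so new attributes, whether deterministic segments or predictive scores, can be added dynamically without a schema change. The field names and values are illustrative, not Intuit's actual schema.

```python
# Illustrative profile record: strong and weak identities plus an
# extensible attribute map (hypothetical names and values).
profile = {
    "strong_ids": {"user_id": "u-123", "email": "owner@example.com"},
    "weak_ids": {"cookie": "c-9f2", "mobile_id": "m-77a"},
    "attributes": {},  # behavioral, demographic, contextual, ... added over time
}

def set_attribute(profile, name, value):
    """Add or update a profile attribute without changing the schema."""
    profile["attributes"][name] = value

set_attribute(profile, "segment", "new_subscriber")  # deterministic segment
set_attribute(profile, "churn_score", 0.18)          # predictive score
print(sorted(profile["attributes"]))  # → ['churn_score', 'segment']
```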

Personalization Service

With all the concepts defined, a simplified architecture for serving Personalization content is shown below.

  1. Users interact with QuickBooks products across different touch-points. As part of displaying the user experience, the product makes a call to the Personalization Service, passing the context — identity, session, and scope — and which model to use to personalize.
  2. Personalization Service calls the Globalization Service to check what product capabilities are available for a given locale (country, region, language).
  3. This is followed by a call to the Experimentation Service to check for eligibility, i.e. whether there are active experiments running for a particular scope (component, page, product flow). If so, the experiments triumph over any personalization, as we cannot experiment and personalize at the same time.
  4. This is followed by a process of model evaluation. The model could be i) a set of features with associated coefficients sitting behind the Feature Service, ii) a set of predictive scores sitting behind the Profile Service, iii) an algorithm deployed behind the Model Inference Service, or a combination of i) and ii) passed as an input to iii).
  5. Each of steps 2 through 4 above writes the appropriate server-side instrumentation through the Tracking Service, which is used for further optimization.
  6. The Personalization context and data are returned to the product applications to display the experience.
  7. Finally, there is client-side instrumentation through the Tracking Service that helps close the loop on the user interaction.
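The steps above can be sketched as a single orchestration function with stub services. All function and field names are hypothetical; the real services are networked, and tracking events are written at each step.

```python
def personalize(context, get_capabilities, get_experiment, evaluate_model, track):
    """Hypothetical sketch of the serving flow in steps 1-7 above."""
    capabilities = get_capabilities(context["locale"])        # step 2: Globalization
    track("globalization", context)                           # step 5: server-side tracking
    experiment = get_experiment(context["scope"])             # step 3: Experimentation
    track("experimentation", context)
    if experiment is not None:
        # Active experiments triumph over personalization.
        return {"source": "experiment", "content": experiment}
    content = evaluate_model(context, capabilities)           # step 4: model evaluation
    track("model_evaluation", context)
    return {"source": "personalization", "content": content}  # step 6: return to product

# Stub services for illustration; no experiment is active in this scope.
result = personalize(
    {"locale": "en-US", "scope": "homepage", "user_id": "u-123"},
    get_capabilities=lambda locale: {"payroll": True},
    get_experiment=lambda scope: None,
    evaluate_model=lambda ctx, caps: "show_payroll_offer",
    track=lambda event, ctx: None,
)
print(result["source"])  # → personalization
```

With an active experiment, `get_experiment` would return a variant and the function would short-circuit before model evaluation, matching the precedence rule in step 3.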

Conclusion

To conclude, at QuickBooks we leverage Personalization to create user delight. There are five essential components to a successful Personalization strategy: Globalization, Experimentation, ML, Profile, and Tracking.

Interested in working on these areas to power prosperity for Small Businesses? We are hiring…


Personalization @Intuit Part 3 — Platform was originally published in QuickBooks Engineering on Medium.