
Lead scoring



When a lead is created or updated, the lead details are sent to Vymo's Machine Learning (ML) model. The ML model is trained to automatically qualify and classify leads as Hot, Warm, or Cold. These quality labels are assigned through scoring.

Figure: Lead prioritization

Figure: Filters based on lead prioritization

Scoring gauges the details of a lead and their correlation with historical high-conversion states. Each lead is assigned a numeric score from 0 to 100, and this score determines whether the lead is Hot, Warm, or Cold.

Figure: Lead prioritization
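As a minimal illustration of how a numeric score maps to a quality label, the Python sketch below classifies a score into Hot, Warm, or Cold bands. The cutoff values are hypothetical, chosen only for this example; the actual ranges are derived through K-means clustering (see the questions and answers at the end of this page).

```python
# A minimal sketch of mapping a 0-100 lead score to a quality label.
# The cutoffs below are hypothetical, for illustration only; the real
# Hot/Warm/Cold ranges are derived with K-means clustering.

HYPOTHETICAL_CUTOFFS = {"hot": 75, "warm": 40}  # assumed values, not Vymo's

def classify_lead(score: float) -> str:
    """Return a quality label for a lead score in [0, 100]."""
    if not 0 <= score <= 100:
        raise ValueError("lead scores range from 0 to 100")
    if score >= HYPOTHETICAL_CUTOFFS["hot"]:
        return "Hot"
    if score >= HYPOTHETICAL_CUTOFFS["warm"]:
        return "Warm"
    return "Cold"

print(classify_lead(82))  # Hot
print(classify_lead(55))  # Warm
print(classify_lead(12))  # Cold
```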

Factors that affect the lead scoring process

  • When a lead is created, its details or attributes play a crucial role in lead scoring.
  • Scores are recomputed when a lead's state updates, and the data captured under each state affects the score.
  • The machine learning algorithm focuses on lead attributes or fields such as Product or Age, which have a strong correlation with lead conversion, rather than Name, Phone, or Email, which have none (a small sketch of this correlation check follows this list).
  • Periodic model accuracy checks, which can trigger model retraining, also affect the lead scoring process.
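To illustrate the correlation point above, the sketch below measures how strongly each attribute relates to conversion on a toy dataset. The data and column names are invented for this example; Vymo's actual feature selection is internal to its pipeline.

```python
# Toy demonstration of attribute-to-conversion correlation. The data and
# column names are invented; Vymo's real feature selection is internal.
import pandas as pd

leads = pd.DataFrame({
    "age":       [25, 40, 33, 52, 29, 47],
    "product":   [0, 1, 1, 0, 1, 0],    # label-encoded product category
    "phone_len": [7, 3, 9, 1, 5, 8],    # stand-in for an uninformative field
    "converted": [0, 1, 1, 0, 1, 0],    # 1 = lead converted
})

# Pearson correlation of each attribute with the conversion outcome:
# informative fields (product, age) show clear correlation, while the
# phone-derived field sits near zero and adds no signal.
correlations = leads.drop(columns="converted").corrwith(leads["converted"])
print(correlations.sort_values(ascending=False))
```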

Note

Data that doesn't enter the Vymo gateway has no significance in the lead scoring process.

Lead score calculation process

The lead scoring process includes data cleaning, normalization, standardization, handling of unseen values and null values, feature engineering, feature elimination, model inference, storing the predictions, surfacing the predictions, model training, and storing the ML model artifacts. This pipeline is triggered whenever a lead is created or updated. A simplified sketch of the scoring stages follows the flowchart below.

Figure: Flowchart describing the ML training pipeline
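The sketch below walks through the scoring stages in simplified Python. Every function is a stand-in written for this page, the stage names follow the list above, and the toy linear model with its weights and defaults is an assumption, not Vymo's implementation.

```python
# High-level sketch of the scoring stages named above. Every function is
# a simplified stand-in, not Vymo's implementation; all names, defaults,
# and the toy linear "model" are assumptions for illustration only.

def clean(lead: dict) -> dict:
    # Data cleaning: drop null fields, strip stray whitespace.
    return {k: v.strip() if isinstance(v, str) else v
            for k, v in lead.items() if v is not None}

def normalize(lead: dict) -> dict:
    # Normalization/standardization: scale age into [0, 1].
    if "age" in lead:
        lead["age"] = min(max(lead["age"], 0), 100) / 100
    return lead

def impute(lead: dict) -> dict:
    # Unseen and null values: fall back to assumed defaults.
    lead.setdefault("product", "unknown")
    lead.setdefault("age", 0.35)
    return lead

def engineer_features(lead: dict) -> list:
    # Feature engineering: encode the product, keep the scaled age.
    product_codes = {"loan": 1.0, "insurance": 0.5, "unknown": 0.0}
    return [product_codes.get(lead["product"], 0.0), lead["age"]]

def infer(features: list) -> float:
    # Model inference: a toy linear model producing a 0-100 score.
    weights = [60.0, 40.0]  # assumed weights
    return sum(w * f for w, f in zip(weights, features))

lead = {"id": "L-1", "product": "loan", "age": 32, "name": None}
score = infer(engineer_features(impute(normalize(clean(lead)))))
print(f"lead {lead['id']} scored {score:.1f}")  # the stored/surfaced prediction
```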

Eligibility criteria for lead scoring

For module
A module's eligibility for lead scoring depends on three metrics: Accuracy, F1 score, and AUC. A module must score above 85 on all three metrics to be eligible for lead scoring.

For company
A company's eligibility for lead scoring is determined by its data.

Note

Accuracy, F1 score, and AUC are common metrics used to gauge a model's performance in machine learning.
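As a rough illustration of this gate, the sketch below computes the three metrics with scikit-learn and applies the 85 threshold (expressed here as 0.85). The labels and predicted probabilities are invented; only the metric names and the threshold come from this page.

```python
# Sketch of the eligibility gate: a module qualifies only when Accuracy,
# F1 score, and AUC all exceed 85 (written as 0.85 here). The labels and
# probabilities are made up for illustration.
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]                         # actual conversions
y_prob = [0.9, 0.2, 0.8, 0.7, 0.3, 0.95, 0.1, 0.4, 0.85, 0.15]  # model outputs
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]                 # thresholded labels

metrics = {
    "accuracy": accuracy_score(y_true, y_pred),
    "f1":       f1_score(y_true, y_pred),
    "auc":      roc_auc_score(y_true, y_prob),
}
eligible = all(value > 0.85 for value in metrics.values())
print(metrics, "-> eligible for lead scoring:", eligible)
```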

Retraining the ML model

The performance of a trained ML model deployed in production can fluctuate (improve or degrade) as new data arrives. Performance is gauged using Accuracy, F1 score, and AUC. Some reasons for fluctuation in model performance are:

  1. The client configures a new campaign, or a lead attribute is added or deleted.
  2. The data distribution or the workflows change because of seasonal variations.
  3. New business models change the company's user targets, which affects model performance.

Because of these changes, tracking them, monitoring the ML model's performance in production, and retraining the model when needed are essential. A sketch of such a retraining check follows.
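This is a minimal sketch of a retraining check, assuming the same 85 bar as the eligibility criteria above; the metric values and the threshold placement are illustrative, not Vymo's monitoring pipeline.

```python
# Sketch of a retraining check: recompute the production model's metrics
# on newly labeled leads and flag it when any metric slips below the bar.
# The threshold and metric values below are illustrative assumptions.

RETRAIN_THRESHOLD = 0.85  # assumed to match the eligibility bar above

def needs_retraining(current_metrics: dict) -> bool:
    """Flag the model when any monitored metric drops below the bar."""
    return any(value < RETRAIN_THRESHOLD for value in current_metrics.values())

latest = {"accuracy": 0.88, "f1": 0.82, "auc": 0.91}  # made-up monitoring values
if needs_retraining(latest):
    print("Model performance degraded; schedule retraining.")
```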

Some questions and answers
How often is the model trained?
Monthly
How do you keep track of model prediction accuracy?
Through the MySQL table
How can you prove that Hot leads are indeed converting more than Warm or Cold leads?
Through the monitoring pipeline.
How are hot, warm, and cold ranges determined?
Through K-means clustering (see the sketch after this list).
How quickly are the scores computed?
From 30 minutes to an hour. The lead scores are not live scores.
Are these customizable?
Yes, through self-serve configuration, but it isn't recommended.
For companies that are ineligible, is there any way to manually score leads?
Yes, through the API.
When do you identify that the model needs to be retrained?
If the model performs poorly while scoring new data and doesn't meet the expected accuracy rate, it needs to be retrained.
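As a rough illustration of the K-means answer above, the sketch below clusters a handful of invented scores into three groups and maps the ordered clusters to Cold, Warm, and Hot.

```python
# Sketch of deriving the Hot/Warm/Cold ranges with K-means. The scores
# are invented; clustering one-dimensional scores with k=3 yields three
# groups whose ordered centers map to the three labels.
import numpy as np
from sklearn.cluster import KMeans

scores = np.array([5, 12, 18, 35, 42, 47, 55, 78, 84, 91]).reshape(-1, 1)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scores)

# Sort cluster indices by center so the lowest-scoring cluster gets "Cold".
order = np.argsort(kmeans.cluster_centers_.ravel())
names = {cluster: label for cluster, label in zip(order, ["Cold", "Warm", "Hot"])}
for score, cluster in zip(scores.ravel(), kmeans.labels_):
    print(score, "->", names[cluster])
```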
