Technical User Reference (Optimization - Win Rate)
The project contains a Model Class definition, the logics that implement the different calculations and dashboards, an internal library logic, and the code to deploy the accelerator on Platform Manager.
Win Rate Model Class
The Win Rate Model Class organizes a list of logics to create the model architecture. It is a JSON file that references these logics and is rendered as a UI in the Pricefx platform, organized in two steps:
Definition - to define the user inputs of the model: data source and mapping, model configuration.
Results - to assess the results of the simulation with various dashboards.
There are two types of logics: calculation, which writes tables in the model, and evaluation, whose purpose is only to display some results. The standard Model Class definition is documented in Model Class (MC).
All the logics of the Win Rate Model Class follow a standard naming convention: first the WR_ prefix, then the step name, then Calc or Eval depending on the formula nature, then the tab it refers to. For example, WR_1_Def_Eval_Definition is the evaluation logic of the Definition step, for the Definition tab.
Library
The logic is WR_Lib.
Aim of the logic
This logic contains all the names and the labels of the important variables (ParametersUtils and LabelsUtils). This way it is easy to adapt to the end-user vocabulary from one single place.
Most of the functions are in GeneralUtils.
The queries and functions specific to the evaluation of the EBM model are in EvaluationUtils.
Common reasons to modify the logic
Change the names and the labels to adapt to the end-user vocabulary.
The EvaluationUtils could be changed if there are new ways to deal with the EBM model. It would certainly require a data scientist.
Definition Step
This step displays two tabs: Definition and Model Configuration.
Definition Tab
The logics are WR_1_Def_Eval_Definition and WR_1_Def_Eval_Definition_Configurator.
Aim of the logic
The configurator sets the user inputs to define the source, the mapping, and the filters of the input data. The evaluation logic calls the configurator and displays the data in and out of the scope.
Outputs of the evaluation
The inputs of this step are accessible from the other logics.
Common reasons to modify the logic
Change the tables that are displayed or add another visualisation.
Model Configuration Tab
The logic is WR_1_Def_Eval_Model_Configurator.
Aim of the logic
It defines the inputs of the modelling itself.
Outputs of the evaluation
The inputs of this step are accessible from the other logics.
Common reasons to modify the logic
Change the default values of the modelling, add or remove some modelling parameters. In this case, the Python script parameters and the machine learning script itself must be adapted (Technical User Reference (Optimization - Win Rate)).
Results Step
This step runs a training calculation and displays four tabs: Metrics, Feature Impact, Pairwise Impact and Evaluation.
Calculation: Training
The logic is WR_2_Res_Calc_Training.
Aim of the logic
The logic first materializes three tables describing the in-scope data, runs outlier cleaning, and then prepares and runs a Python script that trains an EBM (Explainable Boosting Machine) to predict the expected win rate from the model features.
Outputs of the calculation
CreateQuoteTable: creates a table called Quote Data, which contains raw quote data
CleanOutliersFromQuoteTable: creates a table called Cleaned Quote Data which is cleaned from outliers and is the source of data for model training.
CreateWinLossOverTimeTable: aggregates the quote data by week and calculates the sums of wins and losses for each week in the training data. It is stored in a table called Win Loss Over Time.
CreateProductDetailsTable: creates a table called Product Details. If the price index is set in the model, this table is used to get the reference price of each product.
StoreCategoricalValues: outputs the list of values of each categorical feature. It is used to avoid querying a table for this information in the different Results tabs.
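The precise outlier-cleaning criterion lives in the logic itself and may differ; as a rough illustration only, a simple IQR-based filter of the kind typically used for this step could look like the following sketch (clean_outliers is a hypothetical helper, not part of the accelerator):

```python
def clean_outliers(values, k=1.5):
    """Keep values inside [Q1 - k*IQR, Q3 + k*IQR].

    Illustrative rule only; the accelerator's real criterion is
    implemented in WR_2_Res_Calc_Training.
    """
    ordered = sorted(values)

    def quantile(q):
        # Linear interpolation between the two closest ranks.
        pos = q * (len(ordered) - 1)
        lo, hi = int(pos), min(int(pos) + 1, len(ordered) - 1)
        return ordered[lo] + (pos - lo) * (ordered[hi] - ordered[lo])

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    return [v for v in values if q1 - k * iqr <= v <= q3 + k * iqr]
```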
The element triggering the Python script returns the list of the parameters sent to the Python Engine. It is mostly a developer feature, useful to verify that the model parameters are the expected ones.
The Python job itself (a calculation called training:py-train in the job tracker) outputs the intercept of the model, the list of features, and the list of pairs of features. It also pushes many tables into the model: two tables for each feature, with standardized names feature_score_<featureName> and feature_density_<featureName>, and two tables for each pair of features: pairwise_impact_<featureName1>_<featureName2> and heatmap_<featureName1>_<featureName2>. For features with sigmoid fitting enabled (markup, discount rate, price index), it creates fit parameter tables (fit_<featureName>_params) containing the sigmoid parameters (L, x0, k, b) and corresponding function tables (fit_<featureName>_function) with the fitted curve points. For the price index specifically, the impact values are first adjusted by high price adjustments before the sigmoid fitting is performed.
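The exact sigmoid parameterization is defined in the Python script; assuming the conventional four-parameter logistic interpretation of (L, x0, k, b), the curve points stored in a fit_<featureName>_function table could be generated as in this sketch:

```python
import math

def fitted_sigmoid(x, L, x0, k, b):
    """Four-parameter logistic curve.

    Assumed reading of the fit_<featureName>_params parameters
    (L, x0, k, b); verify against the actual Python script.
    """
    return L / (1.0 + math.exp(-k * (x - x0))) + b
```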
The Python script partially relies on hidden functions that are in the Python Engine itself.
Common reasons to modify the logic
The Python script can be improved. Don’t forget to pass the right parameters, and to push the results either into the calculation outputs or into some model tables.
Other tables could be preprocessed, or the existing ones changed.
Metrics Tab
The logic is WR_2_Res_Eval_Metrics.
Aim of the logic
Display some global information about the model and its results.
Outputs of the evaluation
There are no outputs, only a dashboard.
Common reasons to modify the logic
It is a set of standard portlets, that can be modified.
Feature Impact Tab
The logics are WR_2_Res_Eval_FeatureImpact and WR_2_Res_Eval_FeatureImpact_Configurator.
Aim of the logic
It displays the detailed model results for a given feature of the model.
Outputs of the evaluation
The charts based on the “feature tables”.
Common reasons to modify the logic
Adapt the charts, add other information about each feature.
Pairwise Impact Tab
The logics are WR_2_Res_Eval_PairwiseImpact and WR_2_Res_Eval_PairwiseImpact_Configurator.
Aim of the logic
It displays the detailed model results for a given pair of features of the model.
Outputs of the evaluation
The chart based on the “pairwise tables”.
Common reasons to modify the logic
Adapt the charts, add other information about each pair of features.
Evaluation Tab
The logic is WR_2_Res_Eval_Metrics.
Aim of the logic
Mock the model evaluation and provide more detailed information about the evaluation for a given set of inputs.
Outputs of the evaluation
The portlets are based on data calculated with the lib element EvaluationUtils. It is important to keep it this way, to ensure that the model evaluation and the evaluation dashboard provide the same outputs for the same inputs.
Common reasons to modify the logic
In general, if something needs to change (beyond pure display purposes), it is better to change the lib element EvaluationUtils, for the reason explained in the previous box.
Model Evaluation
The model class has an evaluation called win_rate_score. It can be called from anywhere in the solution (refer to Business User Reference (Optimization - Win Rate) | Model Evaluation). It is mostly based on the lib element EvaluationUtils. If you want to modify a calculation, it is important to do it in this lib, so that the modification is shared with the Evaluation tab.
It is also possible to add some elements, or make some existing elements visible, to get their values in code outside of a given model.
It is also possible, as in any Model Class, to add your own model evaluation, to access any calculation based on a model from any other piece of code, outside of the model.
Invoking win_rate_score evaluation
Evaluation can be invoked via the model API as follows:
api.model("My Win Rate Model").evaluate("win_rate_score", [ items : [ /* list of input items */ ], evaluationMode: 'OVERALL_ONLY' ])
items: a list of input items, each as a Map
Each item must contain a key and all required model inputs.
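As an illustration, the items list could look like the sketch below; productGroup and discountRate are hypothetical feature names, and the real keys must match the model inputs configured in the Definition step:

```groovy
// Hypothetical input items; keys other than "key" depend on the model's configured inputs
def items = [
  [key: "quote-1-line-1", productGroup: "Pumps", discountRate: 0.12],
  [key: "quote-1-line-2", productGroup: "Valves", discountRate: 0.05]
]
```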
evaluationMode: controls what is calculated and returned. Always choose the most appropriate evaluationMode based on how much detail and which outputs you need. Supported values are:
1. OVERALL_ONLY - Only calculate the overall win rate over all input items.
2. OVERALL_AND_PER_ITEM - Calculate the overall win rate and per‑item win rates at the input prices.
3. OPTIMIZE_PRICES_PER_ITEM - Includes (2) plus optimized prices for each item.
4. EVALUATE_AT_OPTIMIZED_PRICES - Includes (3) plus win rates evaluated at the optimized prices.
Result structure
The evaluation returns a Map with:
OverallWinRateAtInputPrice: combined win rate calculated at the input prices over all provided items.
BaseWinRate: a number between 0 and 1 representing the model’s baseline probability of winning before considering any of the input item’s features
ItemResults: A list of per‑item results, each as a Map. Depending on evaluation mode, the whole list or some of the item fields may be null. These are the possible item fields:
key – The item’s key, as provided in the input.
winRateAtInputPrice – Predicted win rate at the item’s input price.
optimizedPrice – Optimized price for the item.
winRateAtOptimizedPrice – Predicted win rate at the optimized price.
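Putting these fields together, a result returned with evaluationMode 'EVALUATE_AT_OPTIMIZED_PRICES' could have the following shape (a sketch only; all numeric values are illustrative):

```groovy
// Hypothetical shape of the returned Map (values are made up for illustration)
[
  OverallWinRateAtInputPrice: 0.42,
  BaseWinRate               : 0.35,
  ItemResults               : [
    [
      key                    : "quote-1-line-1",
      winRateAtInputPrice    : 0.40,
      optimizedPrice         : 103.5,
      winRateAtOptimizedPrice: 0.47
    ]
  ]
]
```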