Iterative calibration.

Published 6th August 2018
Written by James Allen (Senior Data Scientist, Sandtable)

Situation

When developing a simulation of households choosing which supermarkets to visit, we had some parameters – such as the relative importance of price and quality – that couldn’t be directly measured. Instead, we had to choose their values by matching the outputs of the simulation to reference data. But with a simulation run-time of 20 minutes and multiple parameters to optimise simultaneously, it wasn’t possible to find a solution on a single computer.

Action

We set up an iterative optimisation procedure in Sandman, which ran the model in parallel across many different parameter values, then compared the results to reference data to identify the most promising regions of parameter space. By focusing the search in this way, we could optimise the model in 10-15 iterations, achieving in a few hours what would have taken all week without Sandman.
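
The shape of that procedure can be sketched as follows. This is a minimal, illustrative Python example, not the actual Sandman API: the simulate function, the reference values, the loss function, and the shrink factor are all hypothetical stand-ins. Each iteration samples parameter sets within the current bounds, evaluates them in parallel, and then shrinks the search box around the best candidate found so far.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

# Hypothetical stand-in for the 20-minute supermarket-choice simulation:
# takes a parameter vector (e.g. weights on price and quality) and returns
# summary outputs that can be compared against reference data.
def simulate(params):
    price_w, quality_w = params
    return np.array([0.6 * price_w + 0.4 * quality_w,
                     0.2 * price_w + 0.8 * quality_w])

REFERENCE = np.array([0.55, 0.65])  # illustrative reference data

def loss(params):
    """Squared error between simulated outputs and the reference data."""
    return float(np.sum((simulate(params) - REFERENCE) ** 2))

def calibrate(bounds, n_samples=32, n_iterations=12, shrink=0.5, seed=0):
    """Sample parameter sets, evaluate them in parallel, and shrink the
    search box around the best candidate after each iteration."""
    rng = np.random.default_rng(seed)
    lo = np.array(bounds[0], dtype=float)
    hi = np.array(bounds[1], dtype=float)
    best_params, best_loss = None, np.inf

    for _ in range(n_iterations):
        samples = rng.uniform(lo, hi, size=(n_samples, len(lo)))
        with ProcessPoolExecutor() as pool:  # parallel model runs
            losses = list(pool.map(loss, samples))
        i = int(np.argmin(losses))
        if losses[i] < best_loss:
            best_loss, best_params = losses[i], samples[i]
        # Recentre a smaller search box on the best parameters found so far.
        half_width = (hi - lo) * shrink / 2
        lo = np.maximum(bounds[0], best_params - half_width)
        hi = np.minimum(bounds[1], best_params + half_width)

    return best_params, best_loss

if __name__ == "__main__":
    params, err = calibrate(bounds=([0.0, 0.0], [1.0, 1.0]))
    print("best parameters:", params, "loss:", err)
```

In our case the parallel map was handled by Sandman across many machines rather than by local processes, and each evaluation was a 20-minute model run rather than an instant toy function – which is why narrowing the search to 10-15 iterations mattered so much.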

Benefit

Speeding up the optimisation process made a fundamental difference to how we could develop the model – instead of optimising once at the end of development, we could optimise the model multiple times as we went along. This meant we could learn from early versions and incorporate that understanding back into the model. Ultimately, that produced a model that better matched real-world behaviour.
