Verification and validation of an insurance market model.
13th November 2018
Written by Juan Sabuco (Postdoctoral Research Officer, Mathematical Institute & Institute for New Economic Thinking, University of Oxford)
Verification and validation are two very important stages in the development of an agent-based model. During the verification stage, the modeler checks that the agent-based model does what it was planned to do, i.e. that the implementation is faithful to the modeler's intentions. During the validation stage, the modeler attempts to quantify how well the model captures the phenomena being studied. Both stages usually require a large amount of computational resources in order to explore as many of the model's possible outcomes as possible. For that reason, a cloud computing platform that can be easily scaled is an essential requirement.
Sandtable has provided us with access to their Sandman platform. This has allowed us to run large ensembles of simulations of our insurance model in parallel, which has dramatically accelerated the verification and validation process. Sandtable has also provided extensive support at all stages of engagement with the platform.
We have been able to run our insurance model in parallel on large virtual clusters with only a moderate adjustment of our existing code. The Sandman platform is very intuitive, and we have integrated it easily into our workflow. This has allowed us to carry out the verification and validation process in a way that would have otherwise been infeasible due to time constraints.
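The core pattern here, running many independent realisations of a stochastic model in parallel and collecting a summary statistic from each, can be sketched in plain Python. This is a toy illustration only: the model and function names are invented stand-ins, not the Sandman SDK or the actual insurance model.

```python
from multiprocessing import Pool
import random

def run_model(seed):
    """One realisation of a toy stochastic model (a stand-in for a
    real ABM): returns a single summary statistic."""
    rng = random.Random(seed)
    # Toy dynamics: the cumulative effect of 100 random shocks.
    return sum(rng.gauss(0, 1) for _ in range(100))

def run_ensemble(seeds):
    """Run many independent realisations in parallel, one per seed."""
    with Pool() as pool:
        return pool.map(run_model, seeds)

if __name__ == "__main__":
    results = run_ensemble(range(8))
    print(len(results))  # one summary statistic per realisation
```

Because each realisation is seeded independently, the ensemble is embarrassingly parallel, which is what makes scaling it out to a large cluster straightforward.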
Multi-objective parameter fitting.
16th August 2018
Written by Jen Badham (Research Fellow, Centre for Public Health, Queen's University, Belfast)
We developed a prototype agent-based model to compare communication options that encourage people to adopt protective behaviour (such as increased hand washing) during an influenza epidemic. As part of that model, we had complex interactions between epidemic risk, personal characteristics, behaviour of others, and a (simulated) person’s own behaviour. Because of this complexity, statistical fitting of model parameters to the available empirical data was not feasible. Furthermore, the large number of parameters to be adjusted made brute force exploration impractical.
Sandtable helped us to efficiently sample the parameter space and identify sets of parameter values that gave the best model fit. Furthermore, we were able to impose several different conditions for fitting and assess how adjusting parameter values to improve the fit under one condition impacted on other aspects of model fitness.
We were able to objectively determine the best parameter values and focus on the key trade-offs between constraints. The rigorous calibration process helped us to identify areas where further modelling work is required. Details of the project and the calibration process are available here.
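One way to frame this kind of multi-objective fitting is to score each candidate parameter set against several fitness conditions and keep only the non-dominated (Pareto-optimal) sets, which makes the trade-offs between conditions explicit. The sketch below uses invented error functions, not the project's real model or data.

```python
import random

def errors(params):
    """Hypothetical fit errors under two conditions (lower is better).
    In a real project these would compare model output with empirical
    data; here they are simple stand-in functions of the parameters."""
    a, b = params
    fit_condition_1 = (a - 0.3) ** 2 + 0.1 * b
    fit_condition_2 = (b - 0.7) ** 2 + 0.1 * a
    return (fit_condition_1, fit_condition_2)

def dominates(e1, e2):
    """e1 dominates e2 if it is no worse on every objective and
    strictly better on at least one."""
    return (all(x <= y for x, y in zip(e1, e2))
            and any(x < y for x, y in zip(e1, e2)))

def pareto_front(samples):
    """Keep the parameter sets whose error vector is not dominated."""
    scored = [(p, errors(p)) for p in samples]
    return [p for p, e in scored
            if not any(dominates(e2, e) for _, e2 in scored)]

rng = random.Random(42)
samples = [(rng.random(), rng.random()) for _ in range(200)]
front = pareto_front(samples)
```

Each point on the resulting front represents a different compromise between the fitting conditions, so improving one objective necessarily worsens another.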
Calibration of Economic ABMs.
15th August 2018
Written by Donovan Platt (PhD student, Mathematical Institute & Institute for New Economic Thinking, University of Oxford)
The calibration of economic agent-based models is a difficult problem requiring access to extensive computational resources that are able to facilitate large-scale parallel computing. While there are many cloud computing platforms available that are able to provide access to the required resources at a reasonable cost, they are often not entirely intuitive to use, particularly when building large clusters. We therefore sought a platform that allowed us to enjoy the benefits of cloud computing without the traditionally steep learning curve.
Sandtable provided us with access to the platform via the Sandman Python SDK, which allowed us to run a large set of realisations of our existing models in parallel, as required. They also provided extensive support at all stages of engagement with the platform.
We were able to run our existing models in parallel on large virtual clusters with little difficulty and only a moderate adjustment of our existing code. This allowed us to perform calibration experiments that would have otherwise been infeasible due to time constraints and the steep learning curve associated with other platforms.
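A typical calibration experiment of this kind searches the parameter space for values that minimise the distance between simulated and empirical summary statistics, averaging over several stochastic realisations per candidate. The sketch below uses a toy AR(1)-style process and a grid search, purely as an illustration of the pattern rather than the actual models or methods used.

```python
import random
from statistics import mean

def simulate(theta, seed, n=200):
    """Toy stochastic process: x_t = theta * x_{t-1} + noise."""
    rng = random.Random(seed)
    x, path = 0.0, []
    for _ in range(n):
        x = theta * x + rng.gauss(0, 1)
        path.append(x)
    return path

def distance(theta, target_var, n_reps=5):
    """Average squared distance between a simulated moment (here the
    sample variance) and its empirical target, over several seeds."""
    diffs = []
    for seed in range(n_reps):
        path = simulate(theta, seed)
        m = mean(path)
        var = mean((x - m) ** 2 for x in path)
        diffs.append((var - target_var) ** 2)
    return mean(diffs)

# Grid search: the stationary variance of this process is 1/(1 - theta^2),
# so a target of 4/3 corresponds roughly to theta = 0.5.
grid = [i / 20 for i in range(19)]  # 0.0, 0.05, ..., 0.9
best = min(grid, key=lambda t: distance(t, target_var=4 / 3))
```

Every distance evaluation is independent of the others, so the grid (or a smarter sampling scheme) is exactly the kind of workload that parallelises well on a large virtual cluster.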
Sensitivity analysis of a consumer behaviour model.
14th August 2018
Written by Annie Hou (Senior Data Scientist, Sandtable)
We were tasked with building a model of consumer behaviour in the automotive industry. The model was complex, with many parameters, so to better understand the model dynamics we wanted to conduct a sensitivity analysis of a number of the parameters over a wide range of values. Furthermore, because the model was stochastic, we needed to run a number of independent simulations per parameterisation in order to characterise its behaviour.
Using Sandman, we were able to easily set up and run the sensitivity analysis. It allowed us to efficiently and reliably run thousands of parameter combinations across large amounts of cloud resources on demand.
The parameter sweeps allowed us to understand the model behaviour better and to continue to develop and improve it quickly. We were able to run further sensitivity analyses as the model developed. In the end, we were able to deliver a better model and insights to the client.
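The shape of such a sweep is simple: for every combination of parameter values, run several independent replications and summarise the outputs. The sketch below uses an invented two-parameter toy model (the parameter names and dynamics are illustrative, not the client model).

```python
import itertools
import random
from statistics import mean, stdev

def model(price_sensitivity, ad_spend, seed):
    """Toy stochastic stand-in for a consumer model: the output
    (e.g. market share) depends on two parameters plus noise."""
    rng = random.Random(seed)
    return ad_spend / (1 + price_sensitivity) + rng.gauss(0, 0.05)

def sweep(param_grid, n_reps=10):
    """For every parameter combination, run several independent
    replications and summarise the outputs with mean and spread."""
    results = {}
    for ps, ad in itertools.product(*param_grid):
        outputs = [model(ps, ad, seed) for seed in range(n_reps)]
        results[(ps, ad)] = (mean(outputs), stdev(outputs))
    return results

grid = ([0.5, 1.0, 2.0], [0.2, 0.4, 0.8])
summary = sweep(grid)  # 3 x 3 = 9 parameterisations, 10 runs each
```

Reporting both the mean and the spread per parameterisation is what separates a sensitivity analysis of a stochastic model from a single deterministic sweep: a parameter matters only if its effect is large relative to the run-to-run noise.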