Appearance at InfiniteConf 2017: "Simulation in a Big Data World"

I'll be giving a talk at InfiniteConf 2017 on July 6th, speaking in particular about the technological overlap between simulation and "Big Data", using QuantLib and financial modelling as an example.

Why is this of interest?

In 2006, when I first began working on scaling out the valuation of financial derivatives, the systems of choice were commercial "grid computation engines" specialised for scaling out fine-grained simulations. In fact, the system we used was spun out of an investment bank IT department in London.

In 2012, we realised that "big-data" technologies might have potential in this area.

Today, in 2017, "big-data" technologies are the first choice for scale-out: Map-Reduce, Spark, in-memory databases such as Redis, streaming frameworks such as Apache Kafka, and many, many other technologies. We're now able to burst out to 10,000 cloud-based cores in 2-5 seconds and reassemble the results of the computations 15 seconds later, all using standard big-data technologies.
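To make the scatter/gather pattern concrete, here is a minimal sketch of the kind of thing I mean, using PySpark: a Monte Carlo valuation fanned out across a cluster and the partial results reassembled on the driver. This is an illustration, not the system described in the talk; the contract data, task counts, and function names are all made up for the example.

```python
# A minimal scatter/gather sketch: price a European call by Monte Carlo,
# with the simulation work fanned out across Spark executors and the
# partial payoff sums reassembled on the driver. All parameters illustrative.
import math
import random

from pyspark.sql import SparkSession

S0, K, r, sigma, T = 100.0, 105.0, 0.01, 0.2, 1.0  # illustrative market/contract data
n_tasks, paths_per_task = 1000, 10_000              # fan-out granularity

def simulate_task(seed):
    """Run one task's worth of GBM terminal values; return its payoff sum."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(paths_per_task):
        z = rng.gauss(0.0, 1.0)
        s_t = S0 * math.exp((r - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
        total += max(s_t - K, 0.0)
    return total

spark = SparkSession.builder.appName("mc-burst").getOrCreate()
# Scatter: one seed per task, each task runs independently on an executor.
payoff_sum = (spark.sparkContext
              .parallelize(range(n_tasks), n_tasks)
              .map(simulate_task)
              .sum())  # Gather: reassemble the partial sums on the driver.
price = math.exp(-r * T) * payoff_sum / (n_tasks * paths_per_task)
print(f"MC estimate of call price: {price:.4f}")
spark.stop()
```

Nothing here is specific to finance: the framework only sees independent tasks and an associative reduction, which is exactly why general-purpose big-data stacks handle this workload so well.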

So, in a little over 10 years, there has been an almost complete takeover by big-data technologies! This is not because these types of financial modelling applications have become more "big-data" -- they have not evolved very much at all. Rather, it seems to be simply the case that the technology being built for the more demanding, and maybe larger, "big-data" market is carrying financial simulation along with it.

Perhaps there is an aspect of worse-is-better to this. But it is happening, and in my talk I'll explain more about the why and how of it! Hope to see you there.