Verne Global

Finance | HPC

4 May 2017

Taking the risk out of risk infrastructure

Written by Stef Weegels

Based in London, Stef is Verne Global's Director of Sales and heads up the company's work within Financial Services and Capital Markets.

Over the last couple of weeks I’ve had many discussions with my colleagues in the banking sector, and it’s clear that regulations are continuously driving the need for firms to perform more comprehensive risk controls.

Bank stress testing has become more onerous since the US Federal Reserve introduced the Comprehensive Capital Analysis and Review (CCAR). In Europe, MiFID II and MiFIR will increase the number of risk controls that firms across the industry will need to run. And in 2019, the Fundamental Review of the Trading Book (FRTB) will have a profound impact on how banks must calculate and report their market risk.

All of these regulatory initiatives will mean that firms must not only process far larger data sets, but also perform many more computations (particularly for FRTB, where industry estimates suggest that around 12,000 calculations per trade will be needed for market risk, compared to the current range of 250 to 500).


With many risk calculation workloads being parallel in nature, dedicated high-performance computing (HPC) resources such as Graphical Processing Units (GPUs), which allow parallelised processing tasks to run faster and more efficiently, are increasingly being deployed.
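To illustrate why these workloads parallelise so well, here is a minimal sketch of a Monte Carlo value-at-risk calculation split across worker processes. The model, parameters and function names are my own illustrative assumptions (a single position with normally distributed daily returns), not any firm's actual risk engine; in production the same chunking pattern would typically be mapped onto GPU kernels rather than CPU processes.

```python
import numpy as np
from multiprocessing import Pool


def simulate_pnl(args):
    """Simulate one independent chunk of P&L paths.

    Illustrative model (assumed, not a real risk model): one position
    worth `value`, with normally distributed daily returns.
    """
    n_paths, seed = args
    rng = np.random.default_rng(seed)
    value, mu, sigma = 1_000_000.0, 0.0002, 0.01  # hypothetical parameters
    returns = rng.normal(mu, sigma, n_paths)
    return value * returns


def monte_carlo_var(n_paths=1_000_000, n_workers=4, confidence=0.99):
    """Run the simulation chunks in parallel and read VaR off the tail.

    Each chunk needs no data from the others, which is what makes risk
    simulations 'embarrassingly parallel' and a natural fit for HPC.
    """
    chunk = n_paths // n_workers
    with Pool(n_workers) as pool:
        results = pool.map(simulate_pnl, [(chunk, s) for s in range(n_workers)])
    pnl = np.concatenate(results)
    # VaR is the loss at the chosen percentile of the P&L distribution
    return -np.percentile(pnl, 100 * (1 - confidence))


if __name__ == "__main__":
    print(f"99% 1-day VaR: {monte_carlo_var():,.0f}")
```

Because the chunks are independent, doubling the worker count roughly halves the wall-clock time; the same property is what lets banks spread FRTB-scale calculation volumes across large compute farms.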

But as those same colleagues are always telling me, implementing and managing HPC environments can be both challenging and expensive, which means that firms need to act wisely when it comes to deciding where to put their HPC infrastructure.

Although many market participants currently run their trading systems in expensive co-located data centres – which makes sense, as they want to be as close as possible to the exchanges’ matching engines in order to minimise latency – it makes no sense at all to run large, power-hungry compute farms in those co-located environments, particularly if the HPC resources are being used for latency-tolerant applications such as risk reporting, data analytics, or research and testing of algorithms and trading strategies.


This is why I am glad to see firms increasingly looking towards Iceland as a prime location for their dedicated HPC infrastructure. According to Citihub Consulting, Iceland may be as much as 50% cheaper over a seven-year term when compared with London and popular North American locations such as New Jersey and Chicago.

This is because, due to its abundance of hydroelectric and geothermal power, Iceland is one of the few locations where power supply vastly exceeds power consumption.

Power consumption is an important factor for any HPC workload. Having access to a fully deterministic, predictable power supply – where costs can be fixed in multi-year contracts and the power is 100% green – can have a significant impact on a firm’s bottom line in the new regulatory environment.

Later this month, the Realization Group will be publishing another of their acclaimed Financial Market Insights papers, focusing on this very subject – how HPC infrastructure can boost risk management within the financial services sector. I’m delighted to be contributing to that paper and showcasing how Iceland, with its advantageous, green power profile, is perfectly placed to assist.
