Research

Computational Fluid Dynamics

Code Generation

Traditional numerical model development involves writing computer code by hand to implement the underlying discretisation and solution algorithms. As high-performance computing hardware continues to evolve, maintaining such codes becomes increasingly difficult, often requiring rewrites to support new hardware and run simulations on it as efficiently as possible. This places a considerable burden on the numerical modeller, who must be an expert not only in their field of study but also in numerical methods and parallel computing.

A significant part of my research has focussed on the use of code generation techniques, whereby the numerical model's underlying code is produced automatically from a high-level problem specification. This code can in turn be targeted towards a range of hardware architectures such as multi-core CPUs, GPUs and Intel Xeon Phi processors. The layers of abstraction introduced mean that the model equations and the discretisation procedures need only be written once, at the high level.
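As a minimal sketch of the idea (using SymPy purely for illustration, not the actual Firedrake or OpenSBLI machinery), a discretised term can be written once symbolically and the equivalent low-level C expression emitted automatically:

    import sympy as sp

    # Symbolic grid function u, grid index i and grid spacing dx.
    u = sp.IndexedBase("u")
    i = sp.Idx("i")
    dx = sp.Symbol("dx")

    # High-level specification: a second-order central difference
    # approximation to d2u/dx2, written once, independently of the
    # hardware that will eventually execute it.
    stencil = (u[i + 1] - 2*u[i] + u[i - 1]) / dx**2

    # Emit the equivalent C expression automatically, e.g.
    # (-2*u[i] + u[i + 1] + u[i - 1])/pow(dx, 2)
    print(sp.ccode(stencil))

A real framework also generates the surrounding loop nests, parallelisation and memory management for each target backend, which is precisely the boilerplate that is costly to write and maintain by hand.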

With funding from the UK National Supercomputing Service (ARCHER), I led the development of the Firedrake-Fluids project, which uses Firedrake to automatically discretise and solve the shallow water equations with the finite element method. During the European Commission-funded ExaFLOW project, I also played a key role in the development of OpenSBLI, a framework for the automated derivation of finite difference-based models which uses source-to-source translation, with the help of the OPS library, to tailor the generated code towards different hardware backends.
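To illustrate what such a high-level specification looks like, the sketch below is based on Firedrake's standard Helmholtz demonstration problem (not the Firedrake-Fluids shallow water model itself); the weak form is written once in near-mathematical notation, and Firedrake generates and compiles the low-level finite element kernels behind the scenes:

    from firedrake import *

    mesh = UnitSquareMesh(16, 16)
    V = FunctionSpace(mesh, "CG", 1)  # piecewise linear continuous elements

    u = TrialFunction(V)
    v = TestFunction(V)

    # A manufactured right-hand side for the Helmholtz problem.
    x, y = SpatialCoordinate(mesh)
    f = Function(V)
    f.interpolate((1 + 8*pi*pi)*cos(2*pi*x)*cos(2*pi*y))

    # Weak form: find u in V such that
    # (grad(u), grad(v)) + (u, v) = (f, v) for all v in V.
    a = (inner(grad(u), grad(v)) + inner(u, v)) * dx
    L = inner(f, v) * dx

    u_h = Function(V)
    solve(a == L, u_h, solver_parameters={"ksp_type": "cg", "pc_type": "none"})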

Figure: Simulation of the Taylor-Green vortex problem demonstrating vortex stretching and transition to turbulence, performed using OpenSBLI on an NVIDIA Tesla K40 Graphics Processing Unit. Approximately 1,500 lines of C code (which solve the compressible Navier-Stokes equations using a fourth-order finite difference scheme) were generated from a high-level problem description comprising approximately 100 lines of Python code. For more information on the setup and results, see the paper by Jacobs et al. (2017).
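To give a flavour of the kind of kernel that ends up in the generated C code, the sketch below (a hypothetical NumPy transcription for illustration, not actual OpenSBLI output) applies the standard fourth-order central difference for a first derivative on a periodic 1D grid:

    import numpy as np

    def ddx_fourth_order(u, dx):
        """Fourth-order central difference for du/dx on a periodic 1D grid.

        np.roll(u, 1)[i] == u[i-1] and np.roll(u, -1)[i] == u[i+1], so this
        evaluates (u[i-2] - 8*u[i-1] + 8*u[i+1] - u[i+2]) / (12*dx) at every i.
        """
        return (np.roll(u, 2) - 8*np.roll(u, 1)
                + 8*np.roll(u, -1) - np.roll(u, -2)) / (12.0*dx)

    # Quick check against an analytical derivative: d/dx sin(x) = cos(x).
    x = np.linspace(0.0, 2.0*np.pi, 64, endpoint=False)
    error = np.max(np.abs(ddx_fourth_order(np.sin(x), x[1] - x[0]) - np.cos(x)))
    print(error)  # small, and shrinks by ~16x each time the resolution doubles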

Further details can be found in the following papers:

  • C. T. Jacobs, S. P. Jammy, N. D. Sandham (2017). OpenSBLI: A framework for the automated derivation and parallel execution of finite difference solvers on a range of computer architectures. Journal of Computational Science, 18:12-23, DOI: 10.1016/j.jocs.2016.11.001.

  • S. P. Jammy, C. T. Jacobs, N. D. Sandham (2019). Performance evaluation of explicit finite difference algorithms with varying amounts of computational and memory intensity. Journal of Computational Science, 36, DOI: 10.1016/j.jocs.2016.10.015.

  • C. T. Jacobs, M. D. Piggott (2015). Firedrake-Fluids v0.1: numerical modelling of shallow water flows using an automated solution framework. Geoscientific Model Development, 8(3):533-547, DOI: 10.5194/gmd-8-533-2015.

  • C. T. Jacobs, M. D. Piggott, S. C. Kramer, S. W. Funke (2016). On the validity of tidal turbine array configurations obtained from steady-state adjoint optimisation. In Proceedings of the VII European Congress on Computational Methods in Applied Sciences and Engineering, held in Crete, Greece on 5-10 June 2016, volume 4, pages 8247-8261, DOI: 10.7712/100016.2410.4610.