September 2013 summaries

Massively parallel simulation of rogue wave occurrence in optical communication systems, Rudolf Roemer (University of Warwick)

The overall scientific aim of this project was to generate preliminary results in aid of a full EPSRC grant application, using a previously developed massively parallel simulation code (see our previous HECToR class 2 project e236) for next-generation fibre-optic-based communication systems. The results obtained show that our code, which includes third-order dispersion in the non-linear Schrödinger equation, is indeed sufficient to model rogue waves of the required exceptionally high peak powers. The statistics of rogue-wave occurrence obtained are more than five orders of magnitude better than any published result to date. We have also studied in detail the parameter dependence, i.e. the conditions for inelastic multiple-soliton collisions, such as the influence of the strength of the third-order dispersion and the initial optical power. Thanks to the computing power, and hence accuracy, available on HECToR and ARCHER, the data show a tantalizingly revealing picture of rogue-wave generation: in the presence of third-order dispersion, soliton-soliton scattering becomes inelastic, leading to energy transfer to the initially slightly more energetic soliton. This gives rise to a cascade mechanism in which repeated collisions amplify the strongest solitons and hence, eventually, produce rogue waves. For example, our calculations show that starting from 10 W of continuous-wave input power, rogue waves of more than 1000 W can emerge after only a few hundred metres of fibre propagation. In all these simulations we used realistic fibre parameters corresponding to experimental setups, with excellent agreement obtained. The underlying microscopic mechanism leading to this rogue-wave behaviour could also be understood: it is rooted in the interplay of the third-order dispersion with the modulation instability in such non-linear fibres. Our new understanding also suggests a way of suppressing rogue waves, at least in principle.
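The kind of propagation described above is commonly integrated with a split-step Fourier scheme for the non-linear Schrödinger equation including a third-order-dispersion term. The following is a minimal sketch, not the project's production code; the parameter values and the two-soliton initial condition are illustrative assumptions:

```python
import numpy as np

# Minimal split-step Fourier sketch of the non-linear Schroedinger
# equation with third-order dispersion (TOD).  All parameter values
# here are illustrative, not those used in the project.
def split_step_nlse(A0, dz, nz, dt, beta2=-1.0, beta3=0.1, gamma=1.0):
    """Propagate the envelope A0 over nz steps of length dz."""
    n = A0.size
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)
    # Linear (dispersive) propagator, including the beta3 TOD term.
    D = np.exp(1j * (beta2 / 2.0 * omega**2 + beta3 / 6.0 * omega**3) * dz)
    A = A0.astype(complex)
    for _ in range(nz):
        A = np.fft.ifft(D * np.fft.fft(A))           # linear step
        A *= np.exp(1j * gamma * np.abs(A)**2 * dz)  # nonlinear step
    return A

# Two nearby sech pulses of slightly different amplitude; with beta3 != 0
# their collisions become inelastic, feeding the stronger pulse.
t = np.linspace(-40, 40, 2048)
A0 = 1.0 / np.cosh(t + 8) + 1.1 / np.cosh(t - 8)
A = split_step_nlse(A0, dz=0.01, nz=500, dt=t[1] - t[0])
```

With beta3 = 0 the soliton collisions are elastic; a non-zero beta3 makes them inelastic, which is the energy-transfer mechanism described above. Both sub-steps are pure phase multiplications, so the scheme conserves the total pulse energy exactly.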

We are now in the process of writing up our results for publication, where full details will be provided. We intend to submit at least two such papers: one covering the technical aspects of the code and the second the scientific results obtained. Furthermore, we successfully obtained a computing allocation for runs on the Hartree Centre's Blue Joule capability machine and were able to extend the scaling runs (from e236) up to 100k cores, again showing fully linear scaling. This and the two publications shall form the basis for our planned EPSRC responsive-mode application. In addition, the project has led to the formation of a new collaboration with Prof. Hanik's group at the TU München in Germany, opening up the possibility of also applying for EU funding. Dr. Eberhard (CI) has already been invited and spent three weeks in München this summer as a visiting fellow.

Large scale simulations of bio-photonic structures using parallel and explicit MOT-TDVIE solver, Nicolae Panoiu (University College London)

Introduction and Motivation

Optical tomography is one of the most widely used methods for biological tissue characterization, mostly due to its non-invasive nature, versatility and high sensitivity. In this context, the availability of computational methods for modelling the interaction between ultra-short optical pulses and biological cells is of critical importance, as they provide valuable information about the structure of the biological tissue without risking cell damage. The computational study of these real-life, electrically large bio-photonic structures requires the description of physical systems characterised by millions of degrees of freedom. Therefore, using a high-performance computational platform was a prerequisite for the successful completion of this project.

Objectives:

The main goal of the proposed research was to numerically investigate electromagnetic wave propagation and scattering phenomena in electrically very large bio- and nano-photonic structures. Thus, we aimed to characterise the electric field and optical energy density distributions produced by ultra-short pulsed Gaussian beams (PGBs) upon their interaction with ensembles of biological cells. These are key quantities used to assess the level of tissue damage produced by an optical pulse. In particular, our study aimed to provide an in-depth understanding of the energy dissipation phenomena occurring in such bio-photonic systems and their connection to related processes, including two-photon absorption imaging and confocal and non-linear microscopy.

Methodology and Outcomes:

We considered ultra-short, spatio-temporally modulated PGBs at near-infrared frequencies probing inhomogeneous biological cells (organelles), and computed with extremely high spatial and temporal resolution the electric field distribution and the energy dissipated in the system. To perform this computational analysis, we used a recently developed method for solving the 3D Maxwell equations, which relies on an explicit marching-on-in-time (MOT) solution of the time-domain volume integral equation (TDVIE). In parallelizing the MOT-TDVIE solver, we applied a new unstructured data distribution and only point-to-point inter-process communication. This highly efficient parallelization scheme produced excellent scaling trends when tested on HECToR [1]. Moreover, for cases where only small memory segments per process were needed, the parallel MOT-TDVIE solver showed super-linear scaling with the number of processes [1]. Thanks to this excellent scalability, our simulations were completed at a saving of about 75%, which explains the slight under-usage of the originally awarded computing resources.
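For illustration, an explicit MOT recursion has the generic structure sketched below: the unknowns at each time step follow from already-computed history via discretised retarded interactions, so no linear solve is needed. This is a schematic stand-in, not the actual solver; the interaction matrices Z[k], the diagonal self-term, and the toy data are all assumptions made for the sketch:

```python
import numpy as np

# Schematic explicit marching-on-in-time (MOT) recursion.
# Z[k] are hypothetical discretised interaction matrices (k = delay in
# time steps) and V[n] the incident-field excitation at step n.
def mot_march(Z, V, n_steps):
    n_unknowns = V[0].size
    J = np.zeros((n_steps, n_unknowns))
    for n in range(n_steps):
        rhs = V[n].copy()
        # Retarded interactions: sum contributions of all past steps.
        for k in range(1, min(n + 1, len(Z))):
            rhs -= Z[k] @ J[n - k]
        # Explicit scheme: the self-term Z[0] is assumed diagonal,
        # so "inverting" it is a cheap element-wise division.
        J[n] = rhs / np.diag(Z[0])
    return J

# Toy data with the assumed diagonal self-term.
rng = np.random.default_rng(0)
N, M = 8, 4
Z = [np.eye(N)] + [0.05 * rng.standard_normal((N, N)) for _ in range(M - 1)]
V = [rng.standard_normal(N) for _ in range(20)]
J = mot_march(Z, V, 20)
```

Because each step only reads previously computed history, the per-step work is a set of matrix-vector products, which is what makes a point-to-point parallel distribution of the unknowns effective.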

The results of this work will be presented at the UK’s premier conference in optics and photonics [1], and a paper to be submitted to a leading bio-optics journal is in preparation [2]. Equally importantly, the results regarding the solver's scalability have been included in an EPSRC Fellowship application submitted by Dr Al-Jarro. We consider that these results significantly strengthened the application and were instrumental in helping Dr Al-Jarro assess the resources needed to complete the work proposed in his Fellowship application. It should also be mentioned that, as a first-time user of HECToR, Dr Al-Jarro has benefited from many valuable training opportunities.

  1. A. Al-Jarro and N. C. Panoiu, “3D simulation of pulsed Gaussian beam probing of biological cells using an explicit solution of the time domain volume integral equation,” Photon14, London (2014).
  2. A. Al-Jarro and N. C. Panoiu, “Probing biological cells with pulsed Gaussian beams: a time domain analysis based on the explicit and massively parallel MOT-TDVIE method,” Biomed. Opt. Express (in preparation).

Adaptation and Resilience In Energy Systems (ARIES), Gareth Harrison (University of Edinburgh)

The HECToR HPC facility was used to progress research of value to the Adaptation and Resilience in Energy Systems (ARIES) consortium. ARIES aims to develop robust approaches to projecting future climate impacts on UK energy systems. This requires spatially and temporally credible models of future UK renewable energy resources suitable for application to power engineering models, with a particular emphasis on bridging the spatial gap between large-scale climate models and the high-resolution data required for engineering analyses. The supercomputing resources were applied in order to meet the following objectives:

  1. Development of a robust dataset of the UK's recent wave climate at high spatial and temporal resolution suitable for application to long term wave energy assessment.
  2. Development of a suite of alternative wave climate futures at the same resolution representing potential conditions in the 2020s, 2050s and 2080s.
  3. Evaluation of the model uncertainty within the Weather Research and Forecasting (WRF) model.
  4. Development of an extended period of high resolution wind speed data across the UK to supplement existing datasets in order to enhance understanding of recent wind climate.
  5. Facilitation of less computationally-intense methods of simulating high resolution wind climate.

In the main, the work achieved many of its objectives despite the challenges posed by extremely high usage of the HECToR facility prior to its switch-off. Wave modelling was prioritised, and the bulk of the resource used relates to this.

The wave modelling was carried out using the very latest version of WAVEWATCH III. An unplanned activity was to estimate the model uncertainty and optimal parameter settings, and this has led to an unexpected new collaboration with the NCEP Marine Modeling and Analysis Branch, who write the software. As planned, a series of long-term wave hindcasts for the North Atlantic and UK were then generated using multiple reanalysis datasets, including one more than 100 years long. These clearly demonstrate substantial changes in the wave climate over the last century. The final part of the wave modelling created a series of projections of future wave climate out to 2100 using data from several well-known climate models; the outputs are in the process of being analysed.

The focus on the wave modelling meant that the intended work on wind modelling was heavily curtailed. The formal evaluation of uncertainty in the WRF numerical weather prediction model did not go ahead, but a modest amount of new high-resolution wind time-series data was created. This has been used in evaluating a series of methods for downscaling wind climate from large-scale models. It is hoped that evaluation of WRF model uncertainty will feature in a future ARCHER application.

Access to HECToR has made possible four papers, which will gratefully acknowledge its use.

Understanding the mechanism of proton pumping in cbb3-type oxidase through computer simulations, Andrei Pisliakov (University of Dundee)

Cytochrome c oxidases (CcOs) are proton pumps that form the last enzymatic complexes of most aerobic respiratory chains. They reduce O2 to water and couple this reaction to proton translocation across the membrane. Thus, chemical energy is conserved in the form of a proton gradient (or, more precisely, an electrochemical potential), which is later used for the synthesis of ATP. The cbb3-type CcOs are found exclusively in proteobacteria, including some clinically relevant human pathogens. They are structurally and functionally the most distant from the mitochondrial (aa3-type) enzymes, which for several decades have been the most extensively studied CcOs. The functioning mechanism of cbb3 oxidase, in particular the mechanism of proton transfer and pumping, is currently unknown. The first important step towards explaining the action of this enzyme is identifying potential proton transfer pathways, which are needed for the catalytic reaction and proton pumping. Here, we have addressed this question through large-scale computer simulations.

Based on the first solved X-ray structure of cbb3 oxidase, we performed fully atomistic molecular dynamics (MD) simulations of the protein within a lipid bilayer membrane and explicit solvent. The total size of the all-atom simulation system was close to 200,000 atoms. All simulations were performed using NAMD, a highly parallel MD package. The combined simulation time was ~500 ns.

We have fully characterized the water distribution and transiently forming H-bonded networks inside the protein, and thus identified pathways that can be utilized for efficient proton transport. Building on these results, we have suggested a mechanism for how the “chemical” and “pumped” protons can be differentiated by cbb3 oxidase. Another important finding is a proton “exit” pathway that connects the active-site region to the periplasmic side of the membrane. In addition, the results shed new light on the evolution of respiratory enzymes and, together with recent structural data, led to a new hypothesis, which contrasts with previous, bioinformatics-based scenarios. The results of our studies are now being prepared for publication.

The results obtained, in particular the characterization of the proton pathways, provide a basis for further simulations, e.g. explicit proton transfer calculations and in silico mutants. The project has also enabled us to establish new collaborations with experimental groups (at Stockholm University and Frankfurt University) that focus on the mechanism of cbb3 oxidase. Based on the results of the computational studies, we have suggested a list of specific mutations that would have an effect on the proton transfer processes in cbb3. These predictions are now being verified in mutagenesis experiments.

Dragred, Pierre Ricco (University of Sheffield)

One of the major challenges for transportation is to reduce energy consumption. Within this context, the objective is to reduce skin-friction drag in turbulent flows, a problem intrinsically linked to the modelling of turbulence. Indeed, the friction-drag contribution to the total drag of commercial aircraft, underwater vehicles and pipelines can reach up to 50%, 90% and 100%, respectively.

The objective of the project was to implement and test various control strategies in a channel-flow geometry using the Direct Numerical Simulation (DNS) technique. Within this method, all the scales constituting turbulence can be accessed; however, this comes at the expense of computationally intensive simulations, which is one of the reasons why parallel computation is key in this area. The code used for the project, Incompact3D, was previously the subject of three completed HECToR dCSE projects; however, it had never been tested in the context of flow control in a channel-flow geometry. Most of the other codes used in this area are based on spectral methods, whereas this code is based on high-order (6th-order) compact finite differences. It is particularly efficient, as a pencil decomposition is used for the parallel strategy. Preliminary simulations of the channel-flow geometry without control had already been run on a smaller scale, but the flow-control problems required a larger and more efficient supercomputing facility such as HECToR. The time requested for this project was therefore needed to assess the feasibility, with this particular code, of control through wall-forcing methods, and to benchmark these control strategies in order to pave the way for future flow-control problems.

The first control strategy is based on wall oscillation, using periodic forcing in the spanwise direction. It has been successfully implemented and benchmarked against the reference paper (Quadrio, Maurizio, and Pierre Ricco. "Critical assessment of turbulent drag reduction through spanwise wall oscillations." Journal of Fluid Mechanics 521 (2004): 251-271). The code was used with a large box size and tested at various resolutions. Running it on the HECToR platform significantly reduced the simulation runtime compared to the reference simulation times.
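For reference, this strategy prescribes a sinusoidal spanwise velocity at the walls. The sketch below shows the boundary condition only; the amplitude and period are illustrative placeholders, not the values of the benchmark study:

```python
import numpy as np

# Spanwise wall-oscillation forcing imposed as a boundary condition:
# W_wall(t) = W_m * sin(2*pi*t/T).  W_m and T below are illustrative
# placeholders, not the parameters of the benchmark simulations.
def spanwise_wall_velocity(t, W_m=4.5, T=100.0):
    """Spanwise velocity of the oscillating walls at time t."""
    return W_m * np.sin(2.0 * np.pi * t / T)

# At each time step the DNS would impose this value on the spanwise
# velocity of both channel walls:
t = np.linspace(0.0, 200.0, 5)
w_wall = spanwise_wall_velocity(t)
```

In the DNS itself this value simply replaces the no-slip condition on the spanwise velocity component at both walls at every time step.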

The next control strategy models a particular property called hydrophobicity. This control is of interest from a more fundamental viewpoint, as its application domain is restricted to micro-channels. The property manifests itself as a finite slip at the boundaries. The reference study (Min, Taegee, and John Kim. "Effects of hydrophobic surface on skin-friction drag." Physics of Fluids 16.7 (2004): L55-L58) was based on three different configurations, namely control based on pure streamwise slip, pure spanwise slip, and the combination of the two. This part was also successfully implemented in the code and benchmarked against the reference paper through the computation of statistics and flow visualisations. To test its validity, schemes of different order were tested, leading to the same results. This study is now being used for a new implementation in which the slip length is no longer constant but a function of the shear, which is of particular interest for the mathematical stability analysis of the problem. As the slip length is taken to be a linear function of the shear, the problem becomes nonlinear in the slip velocity.

The final implementation focused on a new type of actuator based on rotating discs (Ricco, Pierre, and Stanislav Hahn. "Turbulent drag reduction through rotating discs." Journal of Fluid Mechanics 722 (2013): 267-290). The implementation has been carried out, and the results are very well reproduced at various resolutions. For this implementation, a diagnostics tool providing all the statistics and flow visualisations was also implemented, parallelised and tested.

The implementation of the discs at the channel walls has also enabled this technique to be combined with other control techniques, such as opposition control. This is part of ongoing work on ARCHER.
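The shear-dependent slip condition mentioned above can be sketched as a fixed-point iteration: with a slip length linear in the wall shear, the slip velocity satisfies a nonlinear equation at each time step. Everything below, including the coefficients and the toy shear model, is an illustrative assumption, not the project's implementation:

```python
# Sketch of a shear-dependent slip boundary condition: the slip velocity
# obeys u_s = l(s) * s with slip length l(s) = l0 + l1*s, where s = du/dy
# at the wall.  Because l depends on s, and s in turn depends on the
# near-wall solution, u_s is found by fixed-point iteration.
# The values of l0, l1 and the shear model are illustrative assumptions.
def slip_velocity(shear_of_slip, l0=0.02, l1=0.005, tol=1e-10, max_iter=100):
    u_s = 0.0
    for _ in range(max_iter):
        s = shear_of_slip(u_s)          # wall shear for the current slip guess
        u_new = (l0 + l1 * s) * s       # nonlinear slip law
        if abs(u_new - u_s) < tol:
            return u_new
        u_s = u_new
    return u_s

# Toy model: the wall shear decreases linearly as the wall slips more.
u_s = slip_velocity(lambda u: 10.0 - 2.0 * u)
```

In a DNS, the same iteration would couple the wall boundary condition to the discrete wall-normal derivative of the velocity field rather than to a toy shear model.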

It has been shown that the high-order compact finite-difference code Incompact3D can be used efficiently for flow-control strategies, and its efficiency and flexibility make it suitable for future flow-control problems. With the computing time obtained through the RAP call, the work carried out has advanced the project to a new stage and prepared the transition to the ARCHER platform.

During this work, a two-day MPI course provided by HECToR and EPCC was attended, and all of the above work was presented at the Incompact3D user group meeting at Imperial College London.

A novel relaxation method for obtaining force-free magnetic fields: application for access to the GPGPU testbed resource, David Pontin (University Of Dundee)

Our objective was to develop a novel numerical method for obtaining so-called Beltrami or "force-free" fields. These fields satisfy curl(B)=alpha*B, where B is a divergence-free field (e.g. a magnetic field or vorticity field) and alpha is a scalar field (i.e. a function of the spatial coordinates). These solutions represent equilibria of magnetic fields in plasmas, or stationary solutions of the vorticity equation in hydrodynamics, and are the minimum-energy states under ideal deformations. Determining the existence and structure of such equilibria is crucial in a wide range of applications, from magnetically confined fusion plasmas through laboratory plasmas to astrophysical plasmas, as well as fluid bodies in hydrodynamics.
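A simple numerical diagnostic of how close a discrete field is to force-free is to compute curl(B) with finite differences and compare it with alpha*B. The sketch below uses a standard constant-alpha (linear force-free) test field, not one of the project's braided configurations, and plain second-order differences rather than the mimetic operators described below:

```python
import numpy as np

def curl(Bx, By, Bz, dx):
    """Finite-difference curl of a 3D vector field on a uniform grid."""
    dBx = np.gradient(Bx, dx, edge_order=2)  # derivatives along (x, y, z)
    dBy = np.gradient(By, dx, edge_order=2)
    dBz = np.gradient(Bz, dx, edge_order=2)
    return (dBz[1] - dBy[2], dBx[2] - dBz[0], dBy[0] - dBx[1])

n, alpha = 64, 1.0
dx = 2 * np.pi / n
z = np.arange(n) * dx
_, _, Z = np.meshgrid(z, z, z, indexing="ij")
# Linear force-free test field: curl(B) = alpha*B holds exactly here.
Bx, By, Bz = np.sin(alpha * Z), np.cos(alpha * Z), np.zeros_like(Z)
Jx, Jy, Jz = curl(Bx, By, Bz, dx)
# Residual of the force-free condition, limited only by truncation error.
residual = np.max(np.abs(Jx - alpha * Bx))
```

For a relaxed braid configuration the same residual measures how far the obtained state is from the expected equilibrium.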

To study the field's evolution towards this equilibrium, we developed a new Lagrangian method with a moving mesh in which the grid nodes follow the plasma motion. To improve upon previous numerical schemes, we make use of so-called mimetic differential operators: integral operators that mimic differential operators such as the curl or gradient. Our simulation code is developed in CUDA and can run on any modern NVIDIA graphics card.

We simulated the relaxation of test configurations of magnetic braids and tested to what degree the expected force-free state was reached. Compared to previous numerical methods and codes, we are able to obtain equilibrium states that are closer to the expected equilibrium by several orders of magnitude. Further, the combination of the numerically efficient mimetic operators and the computing power of modern GPUs yields a speed-up of between 80 and 150 times compared to previous codes.

These results have been submitted to refereed journals and will soon be published. Further investigations on magnetic field dynamics with our numerical code are in preparation and will be published in the near future. The GPU test bed helped us to improve the efficiency of our numerical code during its development and gave us considerable computation power for performing larger test simulations.

Quantitative simulation of rapid alloy solidification using phase-field simulations, Peter Jimack (University of Leeds)

This project involved the development of a state-of-the-art phase-field simulation tool to model the rapid solidification of metallic alloys and the resulting physical features, such as the formation of dendrites.

Our software incorporates advanced numerical methods (such as adaptivity in space and time, implicit time-stepping for stiff systems of differential equations, and an optimal multigrid solver) to allow fast and accurate three-dimensional simulations in parallel. As a result, we are now able to produce thermally consistent, three-dimensional simulations of the rapid solidification of metallic alloys for the very first time. Our results clearly show the quantitative difference between 2D and 3D simulations, and the importance of including a thermal equation within the model for important parameter ranges.
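To illustrate why implicit time-stepping matters here, consider a toy one-dimensional Allen-Cahn-type phase-field equation (a stand-in, not the project's coupled thermal-solute model): an explicit update is limited to time steps scaling with dx^2, which is precisely the stiffness that implicit time-stepping and multigrid overcome. All parameters below are illustrative:

```python
import numpy as np

def allen_cahn_step(phi, dt, dx, eps=0.05):
    """One explicit Euler step of phi_t = eps^2*phi_xx + phi - phi^3."""
    # Periodic second-difference Laplacian.
    lap = (np.roll(phi, 1) + np.roll(phi, -1) - 2 * phi) / dx**2
    return phi + dt * (eps**2 * lap + phi - phi**3)

n = 256
x = np.linspace(-1, 1, n, endpoint=False)
dx = x[1] - x[0]
# A "slab" of phi = +1 in |x| < 0.5 with diffuse interfaces at +/-0.5,
# chosen to be compatible with the periodic boundary conditions.
phi = np.tanh((0.5 - np.abs(x)) / 0.05)
# Explicit stability restricts dt to a fraction of dx^2/eps^2.
dt = 0.2 * dx**2 / 0.05**2
for _ in range(1000):
    phi = allen_cahn_step(phi, dt, dx)
```

Halving dx forces dt down by a factor of four for this explicit scheme, whereas an implicit scheme with a multigrid solver, as used in the project, avoids this restriction.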

The outputs from this project will appear in two main publications, the first of which has been submitted and the second of which is in preparation.

  • Bollada, P.C., Goodyer, C.E., Jimack, P.K., Mullis, A.M. and Yang, F.W., “Three-dimensional, thermal-solute phase-field simulation of binary alloy solidification”, Journal of Computational Physics, submitted August 2014. This paper describes the computational techniques that we have developed to allow the first ever simulation of a coupled-thermal phase-field model for alloy solidification in 3D. The results demonstrate that we are able to obtain results on sufficiently fine meshes to show mesh-independence of quantitative features such as dendrite tip radius and velocity. Furthermore, we show the optimality of our numerical methods, i.e. solution times which grow in proportion to the number of degrees of freedom required by the adaptive simulation. Significant speed-up is illustrated for problems of a fixed size on up to 1024 cores.
  • Bollada, P.C., Goodyer, C.E., Jimack, P.K. and Mullis, A.M., “Quantitative phase-field modelling of alloy solidification in three-dimensions”, in preparation (planned submission September 2014). This paper shows a range of quantitative results from our new software tool. Parameters studied include the under-cooling of the liquid melt (the level of which controls the rate of solidification) and the Lewis number (the ratio of thermal to solutal diffusivity). Comparisons are made against quantitative predictions from 2D simulations (up until now these were the state-of-the-art) which clearly illustrate the importance of full 3D resolution.

The main training opportunities provided by the project have been for Dr Peter Bollada, the PDRA who undertook most of the runs on HECToR. He has a mathematical background, and so his skill set has been significantly broadened by the opportunity to work on this code on HECToR.