Surface boundary conditions on DL_MESO_DPD multi-GPUs

This module implements solid-surface boundary conditions in the multi-GPU version of DL_MESO_DPD.

Purpose of Module

The single-GPU version already contains the wall-surface boundary conditions. The following module implements them in the multi-GPU version.

Real cases often involve complex geometries and require solid walls as boundary conditions. A typical example is the flow in microchannels, used for instance in the production of graphene. The interaction between fluid and surface creates a unique velocity profile that has a strong impact on the fluid dynamics, especially for non-Newtonian fluids (i.e. fluids whose shear stress is a non-linear function of the velocity gradient), such as shampoo and other body-care products.
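The distinction between Newtonian and non-Newtonian behaviour mentioned above can be sketched numerically. The following is an illustrative example (not part of DL_MESO_DPD, with made-up parameter values) contrasting a Newtonian fluid with a power-law shear-thinning fluid:

```python
# Illustrative sketch: shear stress as a function of shear rate for a
# Newtonian fluid versus a power-law (shear-thinning) fluid such as shampoo.
# All parameter values below are hypothetical, chosen only for illustration.

def newtonian_stress(shear_rate, mu=1.0):
    """Newtonian: shear stress is linear in the velocity gradient, tau = mu * gamma_dot."""
    return mu * shear_rate

def power_law_stress(shear_rate, k=1.0, n=0.5):
    """Power-law fluid: tau = k * gamma_dot**n; n < 1 gives shear-thinning behaviour."""
    return k * shear_rate ** n

# Doubling the shear rate doubles the Newtonian stress, but increases the
# power-law stress by only 2**n, i.e. the apparent viscosity drops with shear.
for gamma_dot in (0.25, 1.0, 4.0):
    print(gamma_dot, newtonian_stress(gamma_dot), power_law_stress(gamma_dot))
```

This non-linearity is what makes the near-wall velocity profile of such fluids deviate from the Newtonian one.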

This module will allow users to study such phenomena while reducing computational cost and time, and to scale up to larger systems.

Background Information

This module is part of the DL_MESO_DPD code. Full support and documentation are available at:

To download the DL_MESO_DPD code you need to register at . Please contact Dr. Michael Seaton at Daresbury Laboratory (STFC) for further details.

Building and Testing

To compile and run the code you need the CUDA toolkit (>= 8.0) installed and a CUDA-enabled GPU device (see ). For the MPI library, OpenMPI 3.1.0 has been used.

The DL_MESO code is developed under git version control. Currently, the multi-GPU version is in a branch named multi_GPU_version. After downloading the code, check out the GPU branch and move to the DPD/gpu_version/bin folder. Modify the Makefile to use the correct GPU architecture (sm_XX) and check that the CPP flags are supported (i.e.: -DAWARE_MPI for CUDA-aware MPI support, -DOPENMPI for the OpenMPI library, -DMVAPICH for the MVAPICH library and -DHWLOC for hwloc support). Make sure nvcc is installed (or the CUDA toolkit module is loaded). Then compile using make all. In short:

git clone
cd dl_meso
git checkout multi_GPU_version
cd ./DPD/gpu_version/bin
# Modify the Makefile according to your device and libraries
make all
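The Makefile edits mentioned above might look like the following hypothetical fragment; the actual variable names in the DL_MESO Makefile may differ, and sm_70 is only an example architecture:

```make
# Hypothetical Makefile fragment illustrating the settings described above.
ARCH      = sm_70                          # match your GPU, e.g. sm_60, sm_70, sm_80
CPPFLAGS  = -DOPENMPI -DAWARE_MPI -DHWLOC  # keep only the flags your MPI stack supports
NVCC      = nvcc
NVCCFLAGS = -arch=$(ARCH) $(CPPFLAGS)
```

Use -DMVAPICH instead of -DOPENMPI if your system provides MVAPICH, and drop -DAWARE_MPI if your MPI library is not CUDA-aware.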

To run the test case, copy the FIELD and CONTROL files from the “../tests/Poiseuille” directory and run mpirun -np 8 ./dpd_gpu.exe on a job partition with one GPU available per MPI process. The test case simulates, on 8 GPUs, the Poiseuille flow obtained between two parallel plane surfaces. Since the flow is laminar, the solution should match the analytic parabolic velocity profile. Compare the OUTPUT and export files to verify your results. Do not worry about the problem with total_nbeads warning message.
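The analytic profile against which the simulated flow can be compared is the classic parabolic Poiseuille solution between two parallel plates. A minimal sketch, with illustrative (hypothetical) values for the pressure gradient G, viscosity mu and channel width h:

```python
# Analytic laminar (Poiseuille) velocity profile between two parallel plates:
#   u(y) = G / (2 * mu) * y * (h - y),  for 0 <= y <= h,
# where G is the driving pressure gradient (or body force per unit volume),
# mu the dynamic viscosity and h the channel width. Values are illustrative.

def poiseuille_u(y, h=1.0, G=1.0, mu=1.0):
    """Parabolic velocity profile: zero at both walls, maximum at the centre."""
    return G / (2.0 * mu) * y * (h - y)

# No-slip at the walls (y = 0 and y = h), peak velocity G*h**2/(8*mu) at y = h/2:
print(poiseuille_u(0.0), poiseuille_u(1.0), poiseuille_u(0.5))  # 0.0 0.0 0.125
```

Sampling the exported velocity field along the wall-normal direction and comparing it bin-by-bin with this function is a simple way to validate the run.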

Source Code

This module has been merged into the DL_MESO code. It is composed of the following commits (you need to be registered as a collaborator):