Molflow roadmap for 2020

Wednesday, November 20, 2019 - 11:18


2019 saw four important changes concerning the future of Molflow:

  • At the beginning of the year, cross-platform support was added. For this, the separation of the GUI (molflow.exe) and the workers (molflowSub.exe) had to be abandoned in favor of a single multithreaded binary (Molflow versions 2.7 and later)
  • In late April, the development of Molflow was put on hold, except for minor bugfixes
  • In June, a PhD student in computer science joined the Molflow team
  • In November, the development resumed, with a new build (2.7.9) released for the first time in six months

As of the end of 2019, our plan is to...

  • Create a unified build workflow for all platforms - this is nearly done. With CMake, whether a developer is on Windows, Linux or macOS, it is possible to pull the latest changes from the Git repository and compile the project with a single command. This is an investment in future development, making the deployment of future versions much quicker.
  • Separate the GUI and the workers once again - for this, most changes of Molflow 2.7+ will be ported back to the 2.6 codebase. Instead of Windows-specific inter-process communication, we will use OpenMPI, an open-source implementation of the industry-standard Message Passing Interface (MPI) widely used on computing clusters. This separation, with a stand-alone process doing the actual calculation, opens the way to both cluster and GPU calculations (a minimal sketch of such a front-end/worker split follows this list).
  • Experiment with GPU acceleration - with a stand-alone physics kernel, we can experiment with speeding up Molflow using the CUDA framework of modern NVIDIA GPUs. The latest generation has a built-in ray tracing engine, which could be exploited for additional speedup (a second sketch after this list illustrates the idea).
  • Enable cluster computing - again, with the physics kernel separated, one can defer the calculation to a remote computer, or even to the nodes of a high-performance cluster, such as the HTCondor-based cluster at CERN.
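
To illustrate the planned separation, below is a minimal sketch - not Molflow code, all names are illustrative - of how a front-end rank could broadcast a workload to worker processes over MPI and collect their hit counts with a reduction:

    // Minimal MPI sketch: rank 0 acts as the front-end, every rank simulates
    // a share of the test particles and reports its hit count back.
    // Illustrative only; SimulateParticles() stands in for the real physics kernel.
    #include <mpi.h>
    #include <cstdio>
    #include <cstdlib>

    // Hypothetical stand-in for the separated physics kernel: "simulates"
    // count test particles and returns how many of them hit the target facet.
    long long SimulateParticles(long long count, int seed)
    {
        std::srand(seed);
        long long hits = 0;
        for (long long i = 0; i < count; ++i)
            if (std::rand() % 2 == 0) hits++;        // placeholder for real ray tracing
        return hits;
    }

    int main(int argc, char** argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        long long totalParticles = 10000000;         // decided by the front-end (rank 0)
        MPI_Bcast(&totalParticles, 1, MPI_LONG_LONG, 0, MPI_COMM_WORLD);

        // Every rank (including rank 0, for simplicity) simulates its share.
        long long myHits = SimulateParticles(totalParticles / size, rank);

        // The front-end gathers the partial results.
        long long totalHits = 0;
        MPI_Reduce(&myHits, &totalHits, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("Hit ratio: %g\n", (double)totalHits / (double)totalParticles);

        MPI_Finalize();
        return 0;
    }

Built with mpicxx and launched with mpirun, the same binary would run on a laptop or on cluster nodes, which is exactly the flexibility the separation is meant to provide.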
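
The GPU experiment follows the same logic: test particles are independent, so each one can be mapped to a GPU thread. The CUDA sketch below is again purely illustrative - the coin flip stands in for tracing a molecule through the geometry, which is where the hardware ray tracing engine (exposed through frameworks such as OptiX) could eventually help:

    // Minimal CUDA sketch: one thread per test particle, one shared hit counter.
    // Illustrative only; not Molflow code.
    #include <cstdio>
    #include <cuda_runtime.h>
    #include <curand_kernel.h>

    __global__ void TraceParticles(unsigned long long nParticles,
                                   unsigned long long* hitCounter,
                                   unsigned long long seed)
    {
        unsigned long long id = blockIdx.x * (unsigned long long)blockDim.x + threadIdx.x;
        if (id >= nParticles) return;

        curandState state;
        curand_init(seed, id, 0, &state);            // independent random stream per thread

        // Placeholder for tracing one molecule until absorption:
        if (curand_uniform(&state) < 0.5f)
            atomicAdd(hitCounter, 1ULL);
    }

    int main()
    {
        const unsigned long long nParticles = 1ULL << 24;
        unsigned long long* dHits = nullptr;
        cudaMalloc(&dHits, sizeof(unsigned long long));
        cudaMemset(dHits, 0, sizeof(unsigned long long));

        const int threads = 256;
        const int blocks = (int)((nParticles + threads - 1) / threads);
        TraceParticles<<<blocks, threads>>>(nParticles, dHits, 1234ULL);

        unsigned long long hits = 0;
        cudaMemcpy(&hits, dHits, sizeof(hits), cudaMemcpyDeviceToHost);
        std::printf("Hit ratio: %g\n", (double)hits / (double)nParticles);

        cudaFree(dHits);
        return 0;
    }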

Additional plans for the longer term:

  • Command-line/script interface: this would allow calling Molflow and Synrad with additional arguments to load a file, run the simulation and write the results automatically. It is necessary for cluster computing, and it would also enable unit testing.
  • Iterative simulations: these would allow modeling saturation processes, where the sticking factor depends on the absorbed molecule dose. This is a difficult problem, since the pressure depends on the sticking, but the absorbed molecule dose (and thus the sticking) depends on the pressure. The solution is to calculate in (logarithmically increasing) time steps that are short enough to assume a constant sticking. After each step, the accumulated molecule dose is calculated and the sticking of the surfaces is updated before proceeding to the next step (a sketch of this stepping scheme is shown below).
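
As an illustration of this stepping scheme, here is a rough sketch with hypothetical names (StickingVsDose and RunSteadyStateStep are stand-ins for a user-supplied saturation curve and for the Monte Carlo kernel, respectively):

    // Sketch of an iterative simulation with dose-dependent sticking.
    // Hypothetical stand-ins; not Molflow code.
    #include <cstdio>
    #include <cmath>
    #include <vector>

    // Hypothetical saturation curve: sticking decreases as the absorbed dose grows.
    double StickingVsDose(double dose)
    {
        const double s0 = 0.1, doseCapacity = 1e18;   // illustrative numbers
        return s0 * std::exp(-dose / doseCapacity);
    }

    // Hypothetical quasi-steady-state run: returns the impingement rate
    // (molecules per second) on each surface for the current sticking factors.
    // A real implementation would run the Monte Carlo kernel; a constant rate
    // stands in for it here.
    std::vector<double> RunSteadyStateStep(const std::vector<double>& sticking)
    {
        return std::vector<double>(sticking.size(), 1e15);
    }

    int main()
    {
        const int    nSurfaces = 10;
        const double tStart = 1e-3, tEnd = 1e5;       // simulated time range [s]
        const int    nSteps = 50;                     // logarithmically spaced steps

        std::vector<double> dose(nSurfaces, 0.0);
        std::vector<double> sticking(nSurfaces, StickingVsDose(0.0));

        const double growth = std::pow(tEnd / tStart, 1.0 / nSteps);
        double t = tStart;

        for (int step = 0; step < nSteps; ++step) {
            const double dt = t * (growth - 1.0);     // step length grows with t

            // Sticking is assumed constant within one (short) step.
            const std::vector<double> rate = RunSteadyStateStep(sticking);

            // Accumulate the dose absorbed during this step, then update sticking.
            for (int i = 0; i < nSurfaces; ++i) {
                dose[i]    += rate[i] * sticking[i] * dt;
                sticking[i] = StickingVsDose(dose[i]);
            }
            t *= growth;
        }

        std::printf("Final sticking of surface 0: %g\n", sticking[0]);
        return 0;
    }

The key point is that each quasi-steady-state run is only valid as long as the sticking factors do not change significantly, which is why the steps start very short and only gradually become longer.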