A series of lectures and interviews, in collaboration with CECAM, dedicated to some of the pioneering contributions in the field of molecular and materials simulations.
View all the CECAM-MARVEL lectures at https://learn.materialscloud.org/sections/AhZ1ry/
Collections allow you to bring together multiple datasets and their associated files in an almost unlimited number of ways. Share a collection to a Space to work with your team. Permissions will be applied according to the Space's settings.
Density-functional theory has become a very popular and very powerful approach to the first-principles calculation of the properties of molecules and materials. In these three talks, Nicola Marzari provides a gentle introduction 1) to the fundamentals of density-functional theory, 2) to the calculations that can be done with modern, open-source codes such as Quantum ESPRESSO, and 3) to its capabilities and limits. A typical target audience would be scientists (e.g. experimental colleagues) who want to learn what is and is not possible with this kind of calculation, and what it is and is not good for.
The second talk is complemented by a simple tutorial that can be done on any desktop or personal computer, independently of the operating system used (e.g. Windows, Mac, Linux), thanks to the Quantum Mobile virtual machine (for this tutorial we use release 20.03.1). All the tutorial material is available on [GitHub](https://github.com/materialscloud-org/learn-fireside).
The evolutionary pressure on electronic structure software development is greatly increasing, due to the emergence of new paradigms, new kinds of users, new processes, and new tools. The complexity of electronic structure software is consequently also increasing, requiring a greater code-maintenance effort. Developers of large electronic structure codes are trying to reduce some of this complexity by moving standardized algorithms into separate libraries [BigDFT-PSolver, ELPA, ELSI, LibXC, LibGridXC, etc.]. This paradigm shift requires library developers to have a hybrid profile, in which scientific and computational skill sets are equally important. These topics have been extensively and publicly discussed among developers of various projects including ABINIT, ASE, ATK, BigDFT, CASTEP, FHI-aims, GPAW, Octopus, Quantum ESPRESSO, SIESTA, and SPR-KKR.
High-quality standardized libraries are not only a highly challenging effort resting in the hands of library developers; they also give codes a standard way to access commonly used algorithms. Integrating these libraries, however, requires a significant initial effort that is often sacrificed in favour of new developments, which frequently never even reach the mainstream branch of the code. Additionally, there are multiple challenges in adopting new libraries, rooted in issues that are all code-dependent: installation, data structures, physical units, and parallelism. On the other hand, adopting common libraries ensures the immediate propagation of improvements within each library's field of research and keeps codes up to date with much less effort [LibXC]. Indeed, well-established libraries can have a huge impact on multiple scientific communities at once [PETSc].
In the electronic structure community, two issues are emerging. First, libraries are being developed [esl, esl-gitlab], but they require an ongoing commitment from the community to share the maintenance and development effort. Secondly, existing codes will only benefit from these libraries if they adopt them. Both issues are mainly governed by the exposure of the libraries and the availability of library core developers, who are typically researchers under pressure from publication deliverables and fund-raising burdens, and thus unable to commit a large fraction of their time to software development.
An effort to allow code developers to make use of, and contribute to, shared components is needed. This requires efficient coordination of several elements:
- A common and consistent code development infrastructure/education in terms of compilation, installation, testing and documentation.
- Guidance on using and integrating already published libraries into existing projects.
- Creating long-lasting synergies between developers to reach a “critical mass” of component contributors.
- Relevant quality metrics (“TRLs” and “SRLs”), to provide businesses with useful information.
This is what the Electronic Structure Library (ESL) [esl, esl-gitlab] has been doing since 2014, with a wiki, a data-exchange standard, refactoring code of global interest into integrated modules, and regularly organizing workshops, within a wider movement led by the European eXtreme Data and Computing Initiative [exdci].
References
[BigDFT-PSolver] http://bigdft.org/Wiki/index.php?title=The_Solver_Package
[ELPA] https://gitlab.mpcdf.mpg.de/elpa/elpa
[ELSI] http://elsi-interchange.org
[LibXC] http://www.tddft.org/programs/libxc/
[LibGridXC] https://launchpad.net/libgridxc
[PETSc] https://www.mcs.anl.gov/petsc/
[esl] http://esl.cecam.org/
[esl-gitlab] http://gitlab.e-cam2020.eu/esl
[exdci] https://exdci.eu/newsroom/press-releases/exdci-towards-common-hpc-strategy-europe
Quantum molecular dynamics simulations are pivotal to understanding and predicting the microscopic details of molecules, and strongly rely on a combined theoretical and computational effort. When considering molecular systems, the complexity of the underlying equations is such that approximations have to be devised, and the resulting theories need to be translated into algorithms and computer programs for numerical simulations. In the last decades, the joint effort of theoretical physicists and quantum chemists around the challenges of quantum dynamics has made it possible to investigate the quantum dynamics of complex molecular systems, with applications in energy conversion, energy storage, organic electronics, light-emitting devices, biofluorescent molecules, and photocatalysis, to name a few.
Two different strategies have been successfully applied to perform quantum molecular dynamics: wavepacket propagation and trajectory-based methods. The first family of methods includes all quantum nuclear effects, but its computational cost hampers the simulation of systems with more than a moderate number (about 10-12) of degrees of freedom. The multi-configuration time-dependent Hartree (MCTDH) method constitutes one of the most successful developments in this field and is often considered the gold standard for quantum dynamics [1]. Other strategies for wavepacket propagation try to optimize the “space” where the wavefunction information is computed, for example by replacing Cartesian grids with Smolyak grids [2]. The second family of methods introduces the idea of trajectories as a way to approximate the nuclear subsystem, either classically or semiclassically, and is exemplified by the trajectory surface hopping and Ehrenfest schemes [3], or by the more accurate coupled-trajectory mixed quantum-classical (CT-MQC) [4] and quantum-classical Liouville equation (QCLE) [5] approaches.
From a computational perspective, both families of methods require extensive electronic structure calculations, as the nuclei move under the effect of the electronic subsystem, either “statically” occupying its ground state or “dynamically” switching between excited states. Solving the quantum nuclear dynamics equations also becomes very expensive in itself in the case of wavepacket propagation methods. Contrary to other, more consolidated areas of modeling, quantum dynamics simulations do not benefit from established community packages, and most progress is based on in-house codes that are difficult to maintain and limited in optimization and portability. One of the core actions of E-CAM has been to seed a change in this situation, by promoting systematic software development, providing a repository to host and share code, and fostering collaborations on adding functionalities and improving the performance of common software scaffolds for wavepacket (Quantics) and trajectory-based (PaPIM) packages. Collaborations on the development of other codes have also been initiated. This workshop aims at continuing and extending these activities based on input from the community.
Last year we had a workshop that looked at some possibilities for High Throughput Computing (you can find all the details here). We would like to invite you to a follow-up workshop this year from July 1-5, again hosted in Turin.
The workshop will be 3.5 days long, with the first 1.5 days covering three Python libraries related to Dask:
Dask: https://docs.dask.org/en/latest/
Dask_jobqueue: https://dask-jobqueue.readthedocs.io/en/latest/
jobqueue_features: https://github.com/E-CAM/jobqueue_features
The last library has been developed by E-CAM since last year's workshop. It allows the user to create tasks that call out to MPI programs and to easily configure those tasks to run on different types of resources (CPU/GPU/KNL).
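To give a flavour of the programming model, here is a minimal sketch using the plain dask_jobqueue and dask.distributed APIs (not jobqueue_features itself); the queue name, resource settings, and the `run_simulation` function are placeholder assumptions, not part of any of the libraries above.

```python
# Minimal sketch, assuming a Slurm system with a queue called "batch";
# run_simulation is an illustrative stand-in for a real task.
from dask.distributed import Client
from dask_jobqueue import SLURMCluster

def run_simulation(seed):
    # Stand-in for a task that might, e.g., wrap a call to an external MPI
    # program via subprocess; here it just returns a number.
    return seed ** 2

if __name__ == "__main__":
    cluster = SLURMCluster(queue="batch", cores=4, memory="8GB",
                           walltime="01:00:00")
    cluster.scale(jobs=10)            # ask for 10 Slurm jobs' worth of workers
    client = Client(cluster)

    futures = client.map(run_simulation, range(100))   # one task per input
    results = client.gather(futures)
    print(sum(results))
```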
The final 2 days will be a hackathon where you can work on your own use case with technical assistance.
Classical molecular dynamics (MD) is a broad field, with many domains of expertise. Those specialist domains include topics like transition path sampling (which harvests many examples of a process in order to study it at a statistical level [1]), metadynamics (which runs a trajectory with modified dynamics that enhance sampling, and from which free energy profiles can be constructed [2]), as well as various topics focused on the underlying dynamics, either by providing better representations of the interactions between atoms (e.g., force fields [3] or neural network potentials [4]) or by changing the way the dynamics are performed (e.g., integrators [5]).
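As a concrete illustration of the metadynamics idea mentioned above, the following is a minimal, self-contained NumPy sketch (illustrative only, not PLUMED input and not tied to any package discussed here): Gaussian hills are deposited along a one-dimensional collective variable during overdamped Langevin dynamics in a double well, and the accumulated bias gives a rough estimate of the free energy profile.

```python
# 1D metadynamics sketch in plain NumPy; all parameters are toy values.
import numpy as np

rng = np.random.default_rng(0)

def force(s):                            # force of the double-well potential (s^2 - 1)^2
    return -4.0 * s * (s**2 - 1.0)

centres, height, sigma = [], 0.05, 0.1   # deposited Gaussian hills

def bias_force(s):
    if not centres:
        return 0.0
    d = s - np.asarray(centres)
    return np.sum(height * d / sigma**2 * np.exp(-d**2 / (2.0 * sigma**2)))

s, dt, kT, gamma = -1.0, 1e-3, 0.2, 1.0
for step in range(50_000):
    noise = np.sqrt(2.0 * kT * dt / gamma) * rng.standard_normal()
    s += dt / gamma * (force(s) + bias_force(s)) + noise
    if step % 250 == 0:
        centres.append(s)                # deposit a new hill at the current s

grid = np.linspace(-2.0, 2.0, 200)
bias = np.array([np.sum(height * np.exp(-(g - np.asarray(centres))**2 /
                                        (2.0 * sigma**2))) for g in grid])
free_energy_estimate = -(bias - bias.max())    # F(s) ~ -V_bias(s) + const
# crude barrier estimate F(0) - F(-1), meaningful only where hills were deposited
i0, i1 = np.argmin(np.abs(grid)), np.argmin(np.abs(grid + 1.0))
print("estimated barrier:", free_energy_estimate[i0] - free_energy_estimate[i1])
```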
Frequently, experts in one domain are not experienced with the software of other domains. This workshop aims to combine both depth, by extending domain-specific software, and breadth, by providing participants an opportunity to learn about software from other domains. As an extended software development workshop (ESDW), a key component of the workshop will be the development of modules that extend existing software packages. Ideally, some modules may connect multiple domain-specific packages.
Topics at this workshop will include using and extending modern MD software in the domains of:
* advanced path sampling methods (and the software package OpenPathSampling)
* metadynamics and the calculation of collective variables (and the software package PLUMED)
* machine learning for molecular dynamics simulations (including local structure recognition and representation of potential energy surfaces)
In addition, this workshop will feature an emphasis on performance testing and benchmarking of software, with a particular focus on high-performance computing. This subject is relevant to all specialist domains.
By combining introductions to software from different specialist fields with an opportunity to extend domain-specific software, this workshop is intended to provide opportunities for cross-pollination between domains that often develop independently.
References
[1] P.G. Bolhuis and C. Dellago. Trajectory-Based Rare Event Simulations. Reviews in Computational Chemistry, 27, 111 (2010).
[2] A. Laio and F.L. Gervasio. Rep. Prog. Phys. 71, 126601 (2008).
[3] J.A. Maier, C. Martinez, K. Kasavajhala, L. Wickstrom, K.E. Hauser, and C. Simmerling. J. Chem. Theory Comput. 11, 3696 (2015).
[4] T. Morawietz, A. Singraber, C. Dellago, and J. Behler. Proc. Natl. Acad. Sci. USA 113, 8368 (2016).
[5] B. Leimkuhler and C. Matthews. Appl. Math. Res. Express 2013, 34 (2013).
The evolutionary pressure on electronic structure software development is greatly increasing, due to the emergence of new paradigms, new kinds of users, new processes, and new tools. The large, feature-rich codes that were once developed within one field are now undergoing heavy restructuring to reach much broader communities, including companies and non-scientific users[1]. More and more use cases and workflows are performed by highly automated frameworks instead of humans: high-throughput calculations and computational materials design[2], large data repositories[3], and multiscale/multi-paradigm modeling[4], for instance. At the same time, High-Performance Computing Centers are paving the way to exascale, with a cascade of effects on how to operate, from computer architectures[5] to application design[6]. The disruptive paradigm of quantum computing is also putting a big question mark over the relevance of all the ongoing efforts[7].
All these trends are highly challenging for the electronic structure community. Computer architectures have become rapidly moving targets, forcing a global paradigm shift[8]. As a result, long-ignored yet well-established good software practices, summarised in the Agile Manifesto[9] nearly 20 years ago, are now being adopted at an accelerating pace by more and more software projects[10]. With time, this kind of migration is becoming a question of survival, the key to a successful transformation being to enable and preserve enhanced collaboration between the increasing number of disciplines involved. Significant integration efforts from code developers are also necessary, since both hardware and software paradigms have to change at once[11].
Two major issues also come from the community itself. Hybrid developer profiles, with people fluent in both computational and scientific matters, are still difficult to find and retain. In the long run, the numerous ongoing training initiatives will gradually improve the situation, while in the short run the issue is becoming more salient and painful, because the context evolves faster than ever. Good practices have usually been the first element sacrificed in the "publish or perish" race. New features have usually been bound to the duration of a post-doc contract and left undocumented and poorly tested, favoring the unsustainable "reinventing the wheel" syndrome.
Addressing these issues requires coordinated efforts at multiple levels:
- from a methodological perspective, mainly through the creation of open standards and the use of co-design, both for programming and for data[12];
- regarding documentation, with a significant leap in content policies, helped by tools like Doxygen and Sphinx, as well as publication platforms like ReadTheDocs[13];
- for testing, by introducing test-driven development concepts and systematically publishing test suites together with software[14] (a minimal test-first sketch is given after this list);
- considering deployment, by creating synergies with popular software distribution systems[15];
- socially, by disseminating the relevant knowledge and training the community, through the release of demonstrators and giving all stakeholders the opportunity to meet regularly[16].
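As referenced in the testing item above, here is a minimal test-first sketch with pytest; the function `lattice_volume` is purely illustrative and not taken from any ESL library, the point being that the tests are written first and shipped with the code.

```python
# Test-first sketch: the tests below define the expected behaviour of a small,
# hypothetical utility before (and alongside) its implementation.
import numpy as np
import pytest

def lattice_volume(cell):
    """Volume of a periodic cell given its 3x3 matrix of lattice vectors."""
    return abs(np.linalg.det(np.asarray(cell, dtype=float)))

def test_cubic_cell_volume():
    assert lattice_volume(np.eye(3) * 2.0) == pytest.approx(8.0)

def test_degenerate_cell_has_zero_volume():
    cell = [[1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
    assert lattice_volume(cell) == pytest.approx(0.0)
```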
This is what the Electronic Structure Library (ESL)[17] has been doing since 2014, with a wiki, a data-exchange standard, refactoring code of global interest into integrated modules, and regularly organising workshops, within a wider movement led by the European eXtreme Data and Computing Initiative (EXDCI)[18].
Since 2014, the Electronic Structure Library has been steadily growing and developing to cover most fundamental tasks required by electronic structure codes. In February 2018 an extended software development workshop will be held at CECAM-HQ with the purpose of building demonstrator codes providing powerful, non-trivial examples of how the ESL libraries can be used. These demonstrators will also provide a platform to test the performance and usability of the libraries in an environment as close as possible to real-life situations. This marks a milestone and enables the next step in the ESL development: going from a collection of libraries with a clear set of features and stable interfaces to a bundle of highly efficient, scalable and integrated implementations of those libraries.
Many libraries developed within the ESL perform low-level tasks or very specific steps of more complex algorithms and are not capable, by themselves, of reaching exascale performance. Nevertheless, if they are to be used as efficient components of exascale codes, they must provide some level of parallelism and be as efficient as possible on a wide variety of architectures. During this workshop, we propose to perform advanced performance and scalability profiling of the ESL libraries. With that knowledge in hand, it will be possible to select and implement the best strategies for parallelizing and optimizing the libraries. Assistance from HPC experts will be essential and is a unique opportunity to foster collaborations with other Centres of Excellence, like PoP (https://pop-coe.eu/) and MaX (http://www.max-centre.eu/).
Based on the successful experience of the previous ESL workshops, we propose to divide the workshop into two parts. The first two days will be dedicated to initial discussions between the participants and other invited stakeholders, and to presentations on state-of-the-art methodological and software developments, performance analysis, and scalability of applications. The remainder of the workshop will consist of a 12-day coding effort by a smaller team of experienced developers. Both the discussion and the software development will take advantage of the ESL infrastructure (wiki, GitLab, etc.) that was set up during the previous ESL workshops.
[1] See http://www.nanogune.eu/es/projects/spanish-initiative-electronic-simulations-thousands-atoms-codigo-abierto-con-garantia-y and
[2] See http://pymatgen.org/ and http://www.aiida.net/ for example.
[3] http://nomad-repository.eu/
[4] https://abidev2017.abinit.org/images/talks/abidev2017_Ghosez.pdf
[5] http://www.deep-project.eu/
[6] https://code.grnet.gr/projects/prace-npt/wiki/StarSs
[7] https://www.newscientist.com/article/2138373-google-on-track-for-quantum-computer-breakthrough-by-end-of-2017/
[8] https://arxiv.org/pdf/1405.4464.pdf (sustainable software engineering)
[9] http://agilemanifesto.org/
[10] Several long-running projects routinely use modern bug trackers and continuous integration, e.g.: http://gitlab.abinit.org/, https://gitlab.com/octopus-code/octopus, http://qe-forge.org/, https://launchpad.net/siesta
[11] Transition of HPC Towards Exascale Computing, Volume 24 of Advances in Parallel Computing, E.H. D'Hollander, IOS Press, 2013, ISBN: 9781614993247
[12] See https://en.wikipedia.org/wiki/Open_standard and https://en.wikipedia.org/wiki/Participatory_design
[13] See http://www.doxygen.org/, http://www.sphinx-doc.org/, and http://readthedocs.org/
[14] See https://en.wikipedia.org/wiki/Test-driven_development and http://agiledata.org/essays/tdd.html
[15] See e.g. http://www.etp4hpc.eu/en/esds.html
[16] See e.g. https://easybuilders.github.io/easybuild/, https://github.com/LLNL/spack, https://github.com/snapcore/snapcraft, and https://www.macports.org/ports.php?by=category&substr=science
[17] http://esl.cecam.org/
[18] https://exdci.eu/newsroom/press-releases/exdci-towards-common-hpc-strategy-europe
Most modern parallelized (classical) particle simulation programs are based on a spatial decomposition method as the underlying parallel algorithm. In this case, different processors administer different spatial regions of the simulation domain and keep track of those particles that are located in their respective regions. Processors exchange information (i) to compute interactions between particles located on different processors, and (ii) to exchange particles that have moved to a region administered by a different processor. This implies that the workload of a given processor is largely determined by its number of particles or, more precisely, by the number of interactions that have to be evaluated within its spatial region.
Certain systems of high physical and practical interest (e.g. condensing fluids) dynamically develop into a state where the distribution of particles becomes spatially inhomogeneous. Unless special care is taken, this results in a substantially inhomogeneous distribution of the processors' workload. Since the work usually has to be synchronized between the processors, the runtime is determined by the slowest processor (i.e. the one with the highest workload). In the extreme case, this means that a large fraction of the processors is idle during these waiting times. The problem becomes particularly severe when aiming at strong scaling, where the number of processors is increased at constant problem size: every processor administers a smaller and smaller region, and the inhomogeneities therefore become more and more pronounced. This eventually saturates the scalability of a given problem, possibly already at a processor count small enough that communication overhead is still negligible.
The solution to this problem is the inclusion of dynamic load balancing techniques. These methods redistribute the workload among the processors by lowering the load of the busiest cores and raising the load of the most idle ones. Fortunately, several successful techniques are already known to put this strategy into practice (see references). Nevertheless, dynamic load balancing that is both efficient and widely applicable implies highly non-trivial coding work. It has therefore not yet been implemented in a number of important codes of interest to the E-CAM community, e.g. DL_Meso, DL_Poly, Espresso, Espresso++, to name a few. Other codes (e.g. LAMMPS) have implemented somewhat simpler schemes, which however might turn out to lack sufficient flexibility to accommodate all important cases. The present proposal therefore suggests organizing an Extended Software Development Workshop (ESDW) together with E-CAM, where developers of CECAM community codes are invited together with E-CAM postdocs to work on the implementation of load balancing strategies. The goal of this activity is to increase the scalability of these applications to a larger number of cores on HPC systems, for spatially inhomogeneous systems, and thus to reduce the time-to-solution of the applications.
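As a purely illustrative sketch in Python (not drawn from any of the codes mentioned above), the following shows the basic idea behind diffusive load balancing in one dimension: per-domain particle counts are measured, and each interior domain boundary is shifted slightly towards its more heavily loaded neighbour until the counts even out.

```python
# Toy 1D diffusive load balancing; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_ranks, box = 8, 100.0
# an inhomogeneous ("condensed") particle distribution along one axis
positions = rng.normal(loc=30.0, scale=8.0, size=20_000) % box
bounds = np.linspace(0.0, box, n_ranks + 1)        # initially equal slabs

def loads(bounds, positions):
    counts, _ = np.histogram(positions, bins=bounds)
    return counts

for sweep in range(200):                           # balancing iterations
    counts = loads(bounds, positions)
    for i in range(1, n_ranks):                    # interior boundaries only
        imbalance = counts[i] - counts[i - 1]      # >0: right neighbour heavier
        width = bounds[i + 1] - bounds[i - 1]
        step = 0.05 * width * imbalance / max(counts[i] + counts[i - 1], 1)
        # shift towards the heavier side, never crossing neighbouring boundaries
        bounds[i] = np.clip(bounds[i] + step,
                            bounds[i - 1] + 1e-6, bounds[i + 1] - 1e-6)
        counts = loads(bounds, positions)

counts = loads(bounds, positions)
print("load imbalance (max/mean):", counts.max() / counts.mean())
```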
High throughput computing (HTC) is a computing paradigm focused on the execution of many loosely coupled tasks. It is a useful and general approach to parallelizing (nearly) embarrassingly parallel problems. Distributed computing middleware, such as Dask.distributed or COMP Superscalar (COMPSs), can include tools to facilitate HTC, although there may be challenges extending such approaches to the exascale.
Across scientific fields, HTC is becoming a necessary approach in order to fully utilize next-generation computer hardware. As an example, consider molecular dynamics: Excellent work over the years has developed software that can simulate a single trajectory very efficiently using massive parallelization. Unfortunately, for a fixed number of atoms, the extent of possible parallelization is limited. However, many methods, including semiclassical approaches to quantum dynamics and some approaches to rare events, require running thousands of independent molecular dynamics trajectories. Intelligent HTC, which can treat each trajectory as a task and manage data dependencies between tasks, provides a way to run these simulations on hardware up to the exascale, thus opening the possibility of studying previously intractable systems.
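As a hedged sketch of what "each trajectory is a task" can look like in practice, the following uses plain dask.delayed and dask.distributed; `run_trajectory` and `analyse` are illustrative stand-ins, not functions of any particular MD package.

```python
# Task graph: many independent trajectory tasks feeding one analysis task.
import dask
from dask.distributed import Client

@dask.delayed
def run_trajectory(seed):
    # stand-in for one independent MD trajectory returning some observable
    return {"seed": seed, "observable": 0.1 * seed}

@dask.delayed
def analyse(results):
    # this task depends on all trajectories having completed
    return sum(r["observable"] for r in results) / len(results)

if __name__ == "__main__":
    client = Client()                       # local scheduler; swap in a cluster
    trajectories = [run_trajectory(i) for i in range(1000)]
    average = analyse(trajectories)         # dependency: analyse <- trajectories
    print(average.compute())
```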
In practice, many scientific programmers are not aware of the range of middleware available to facilitate parallel programming. When HTC-like approaches are implemented as part of a scientific software project, they are often implemented manually, through custom scripts to manage SSH, or by running separate jobs and manually collating the results. Using the intelligent high-level approaches enabled by distributed computing middleware will simplify and speed up development.
Furthermore, middleware frameworks can meet the needs of many different computing infrastructures. For example, in addition to working within a single job on a cluster, Dask.distributed and COMPSs include support for working through a cluster's queueing system or working on a distributed grid. Moreover, architecting a software package such that it can take advantage of one HTC library will make it easy to use other HTC middleware. Having all of these possibilities immediately available will enable developers to quickly create software that can meet the needs of many users.
This E-CAM Extended Software Development Workshop (ESDW) will focus on intelligent HTC as a technique that crosses many domains within the molecular simulation community in general and the E-CAM community in particular. Teaching developers how to incorporate middleware for HTC matches E-CAM's goal of training scientific developers on the use of more sophisticated software development tools and techniques.
June 18, 2018 to June 29, 2018
Location: CECAM-FR-MOSER Maison de la Simulation
Quantum molecular dynamics simulations describe the behavior of matter at the microscopic scale and require the combined effort of theory and computation to achieve an accurate and detailed understanding of the motion of electrons and nuclei in molecular systems. Theory provides the fundamental laws governing the dynamics of quantum systems, i.e., the time-dependent Schroedinger equation or the Liouville-von Neumann equation, whereas numerical techniques offer practical ways of solving those equations for applications. For decades now, theoretical physicists and quantum chemists have been involved in the development of approximations, algorithms, and computer software that together have enabled, for example, the investigation of photo-activated processes, like exciton transfer in photovoltaic compounds, or of nonequilibrium phenomena, such as current-driven Joule heating in molecular electronics. The critical challenge ahead is to beat the exponential growth of the numerical cost with the number of degrees of freedom of the studied problem. In this respect, a synergy between theoreticians and computer scientists is becoming more and more beneficial as high-performance computing (HPC) facilities are nowadays widely accessible; it will lead to an optimal exploitation of the available computational power and to the study of molecular systems of increasing complexity.
From a theoretical perspective, the two main classes of approaches to solving the quantum molecular dynamical problem are wavepacket propagation schemes and trajectory-based (or trajectory-driven) methods. The difference between the two categories lies in the way the nuclear degrees of freedom are treated: either fully quantum mechanically or within the (semi)classical approximation. In the first case, basis-function contraction techniques have to be introduced to represent the nuclear wavefunction as soon as the problem exceeds 5 or 6 dimensions. Probably the most successful efforts in this direction have been oriented towards the development of the multi-configuration time-dependent Hartree (MCTDH) method [1]. Other strategies are also continuously proposed, focusing for instance on the identification of procedures to optimize the “space” where the wavefunction information is computed, e.g., by replacing Cartesian grids with Smolyak grids [2], and thus effectively reducing the computational cost of the calculation. In the second case, the nuclear subsystem is approximated classically, or semiclassically. Although leading to a loss of some information, this approximation offers the opportunity to access much larger systems for longer time-scales. Various examples of trajectory-based approaches can be mentioned, ranging from the simplest, yet very effective, trajectory surface hopping and Ehrenfest schemes [3], to the more involved but also more accurate coupled-trajectory mixed quantum-classical (CTMQC) [4] and quantum-classical Liouville equation (QCLE) [5]. At the interface between wavepacket and trajectory schemes, Gaussian-MCTDH [6], variational multi-configuration Gaussian (vMCG) [7], and multiple spawning [8] exploit the support of trajectories to propagate (Gaussian) wavepackets, thus recovering some of the information lost with a purely classical treatment. In the case of trajectory-based techniques, the literature provides a significant number of propositions that aim at recovering some of the quantum-mechanical features of the dynamics via appropriately choosing the initial conditions based on the sampling of a Wigner distribution [9].
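As a small illustration of the last point, the sketch below samples initial conditions from the ground-state Wigner distribution of a single harmonic degree of freedom; atomic units with hbar = 1 are assumed, and the mass and frequency are placeholder values rather than parameters of any real system.

```python
# Wigner sampling for one harmonic mode in its ground state (hbar = 1):
# W(x, p) is Gaussian with sigma_x = sqrt(1/(2 m omega)), sigma_p = sqrt(m omega / 2).
import numpy as np

def sample_wigner_ground_state(m, omega, n_traj, rng=None):
    rng = rng or np.random.default_rng()
    sigma_x = np.sqrt(1.0 / (2.0 * m * omega))   # position width of W(x, p)
    sigma_p = np.sqrt(m * omega / 2.0)           # momentum width of W(x, p)
    return rng.normal(0.0, sigma_x, n_traj), rng.normal(0.0, sigma_p, n_traj)

# each (x0, p0) pair is the initial condition of one (semi)classical trajectory
x0, p0 = sample_wigner_ground_state(m=1.0, omega=0.01, n_traj=500)
print(x0.std(), p0.std())   # should be close to sigma_x and sigma_p
```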
From the computational point of view, a large part of the calculation effort is spent evaluating electronic properties. In fact, the nuclei move under the effect of the electronic subsystem, either “statically” occupying its ground state or “dynamically” switching between excited states. The nuclear dynamics part of a calculation also becomes a very costly computational task in itself in the case of wavepacket propagation methods. Therefore, algorithms for molecular dynamics simulations are not only required to realistically reproduce the behavior of quantum systems in general cases, but also have to scale efficiently on parallelized HPC architectures.
The extended software development workshop (ESDW) planned for 2018 has three main objectives: (i) build upon the results of ESDW7 of July 2017 to enrich the library of software for trajectory-based propagation schemes; (ii) extend the capabilities of the existing modules by including new functionalities, thus giving access to a broader class of problems that can be tackled; (iii) construct links among the existing and the new modules to transversally connect methods for quantum molecular dynamics, types of modules (HPC/Interface/Functionality), and E-CAM work-packages (WP2 on electronic structure).
The central projects of the proposed ESDW, which are related to the modules that will be provided for the E-CAM library, are:
1. Extension of the ModLib library of model Hamiltonians, especially including high-dimensional models, which are used to test and compare existing propagation schemes, but also to benchmark new methods. The library consists of a set of subroutines that can be included in different codes to generate diabatic/adiabatic potential energy surfaces and, where needed, the diabatic and nonadiabatic couplings required by both quantum wavepacket methods and trajectory-based methods (an illustrative sketch of such a model is given after this list).
2. Use of machine-learning techniques to construct analytical forms of potential energy surfaces based on information collected along on-the-fly calculations. The Quantics software [10] provides the platform for performing direct-dynamics propagation employing electronic-structure properties determined at several different levels of theory (HF, DFT, or CASSCF, for example). The sampled nuclear configuration space is employed to build a “library” of potentials that will be used to generate the potential energy surfaces.
3. Development of an interface for CTMQC. Based on the CTMQC module proposed during the ESDW7, the interface will allow the evolution of the coupled trajectories according to the CTMQC equations based on electronic-structure information calculated from quantum-chemistry packages, developing a connection between the E-CAM WP2 and WP3. Potentially, CTMQC can be adapted to the Quantics code, since the latter has already been interfaced with several electronic-structure packages. Optimal scaling on HPC architectures is fundamental for maximizing efficiency.
4. Extension of the QCLE module developed during the ESDW7 to high dimensions and general potentials. Two central issues need to be addressed to reach this goal: (i) the use of HPC infrastructures to efficiently parallelize the multi-trajectory implementation, and (ii) the investigation of the stochastic sampling scheme associated with the electronic part of the time evolution. Progress in these areas will aid greatly in the development of this quantum dynamics simulation tool that could be used by the broader community.
5. Development of a module to sample initial conditions for trajectory-based procedures. Based on the PaPIM module proposed during the ESDW7, sampling of initial conditions from a Wigner distribution will be adapted to excited-state problems, overcoming the usual approximation of a molecule pictured as a set of uncoupled harmonic oscillators. Also, an adequate sampling of the ground vibrational nuclear wavefunction would ensure calculations of accurate photoabsorption cross-sections. This topic connects various modules of the E-CAM WP3 since it can be employed for CTMQC, QCLE, and for the surface-hopping functionality (SHZagreb developed during the ESDW7) of Quantics.
6. Optimization of some of the modules for HPC facilities, adopting hybrid OpenMP-MPI parallelization approaches. The main goal here is to be able to exploit different architectures by adapting different kinds of calculations, e.g., classical evolution of trajectories vs. electronic-structure calculations, to the architecture of the computing nodes.
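As referenced in item 1, here is an illustrative two-state diabatic model in Python (a generic single avoided crossing, not an actual ModLib entry): two crossing diabatic potentials, a constant coupling, and the adiabatic surfaces obtained by diagonalising the 2x2 electronic Hamiltonian at each nuclear position.

```python
# Toy two-state avoided-crossing model; all parameter values are illustrative.
import numpy as np

def diabatic_hamiltonian(x, a=0.01, b=1.6, c=0.005):
    v11 = a * np.tanh(b * x)           # two crossing diabats
    v22 = -v11
    v12 = c * np.ones_like(x)          # constant diabatic coupling
    return v11, v22, v12

def adiabatic_surfaces(x):
    v11, v22, v12 = diabatic_hamiltonian(x)
    mean, half_gap = 0.5 * (v11 + v22), 0.5 * (v11 - v22)
    root = np.sqrt(half_gap**2 + v12**2)
    return mean - root, mean + root    # lower and upper adiabatic surfaces

x = np.linspace(-10.0, 10.0, 401)
lower, upper = adiabatic_surfaces(x)
print("minimum adiabatic gap:", (upper - lower).min())   # about 2*c at the crossing
```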
The format and organization described here focus specifically on the production of new modules. Parallel or additional activities, e.g. a transversal workshop on optimization of I/O with electronic structure codes and possible exploitation of advanced hardware infrastructures (e.g. the booster cluster in Juelich), will also be considered based on input from the community.