An introduction to the properties that can be calculated with DFT, their accuracy, and the practical or conceptual limitations of such calculations. Suitable for everyone who wants to learn what can, or cannot, be done with DFT. It will also answer the perennial question “Why is DFT like Tinder?”.
An introduction to calculations using the total-energy, planewave, pseudopotential method. Suitable for everyone who wants to learn how to perform a DFT calculation. A self-learning handout and a virtual machine with pre-installed open-source quantum-simulation codes are also available - we'll use Quantum ESPRESSO. (Note: annotated slides are not available.)
An introduction to electronic-structure methods and in particular density-functional theory. Suitable for everyone who wants to learn what DFT is.
Organisers:
- Alan O'Cais (Forschungszentrum Juelich GmbH)
- David Swenson (École Normale Supérieure de Lyon)
High-throughput (task-based) computing is a flexible approach to parallelization. It involves splitting a problem into loosely-coupled tasks. A scheduler then orchestrates the parallel execution of those tasks, allowing programs to adaptively scale their resource usage. Individual tasks may themselves be parallelized using MPI or OpenMP, and the high-throughput approach can therefore enable new levels of scalability.
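As a minimal illustration of this task-based model, the sketch below uses only the Python standard library (no HPC scheduler): a problem is split into independent tasks, handed to a pool, and results are collected as tasks complete. The `simulate` function is a hypothetical stand-in for a real per-task workload.

```python
# Task-based computing sketch: loosely-coupled tasks dispatched to a
# worker pool, gathered as they finish. Standard library only.
from concurrent.futures import ThreadPoolExecutor, as_completed

def simulate(task_id):
    # Hypothetical stand-in for an expensive, independent computation.
    return task_id * task_id

def run_tasks(n_tasks, max_workers=4):
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # Submit all tasks; the pool schedules them across workers.
        futures = {pool.submit(simulate, i): i for i in range(n_tasks)}
        # Collect results in completion order, then restore task order.
        for fut in as_completed(futures):
            results[futures[fut]] = fut.result()
    return [results[i] for i in range(n_tasks)]

print(run_tasks(8))  # -> [0, 1, 4, 9, 16, 25, 36, 49]
```

Real high-throughput frameworks add what this sketch lacks: adaptive scaling of the worker pool and distribution of tasks across many nodes.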
Dask is a powerful Python tool for task-based computing. The Dask library was originally developed to provide parallel and out-of-core versions of common data analysis routines from data analysis packages such as NumPy and Pandas. However, the flexibility and usefulness of the underlying scheduler has led to extensions that enable users to write custom task-based algorithms, and to execute those algorithms on high-performance computing (HPC) resources.
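A minimal sketch of a custom task graph with Dask's `delayed` interface (assuming the `dask` package is installed; the `inc`/`total` functions are invented for the example):

```python
# Build a small task graph lazily with dask.delayed, then let Dask's
# scheduler execute it. Requires: pip install dask
from dask import delayed

@delayed
def inc(x):
    return x + 1

@delayed
def total(xs):
    return sum(xs)

# Nothing runs until .compute(); until then we only assemble the graph.
result = total([inc(i) for i in range(4)]).compute()
print(result)  # 1 + 2 + 3 + 4 = 10
```

The same graph can be executed unchanged on a laptop's thread pool or on a distributed cluster, which is what makes the scheduler useful for HPC.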
This workshop will be a series of virtual seminars/tutorials on tools in the Dask HPC ecosystem. The event will run online via Zoom for registered participants ("participate" tab) and it will be live streamed via YouTube at https://youtube.com/playlist?list=PLmhmpa4C4MzZ2_AUSg7Wod62uVwZdw4Rl.
Programme:
21 January 2021, 3pm CET (2pm UTC):
Dask - a flexible library for parallel computing in Python (YouTube link: https://youtu.be/Tl8rO-baKuY )
4 February 2021, 3pm CET (2pm UTC):
Dask-Jobqueue - a library that integrates Dask with standard HPC queuing systems, such as SLURM or PBS (YouTube link: https://youtu.be/iNxhHXzmJ1w )
11 February 2021, 3pm CET (2pm UTC):
Jobqueue-Features - a library that enables functionality aimed at enhancing scalability (YouTube link: https://youtu.be/FpMua8iJeTk )
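To give a flavour of the Dask-Jobqueue library covered in the second seminar, here is a configuration sketch for a SLURM system (it cannot run without a live SLURM cluster and an installed `dask-jobqueue`; the queue name, core count, memory, and walltime are placeholder values):

```python
# Sketch: connect Dask to a SLURM queue via Dask-Jobqueue.
# Requires: pip install dask distributed dask-jobqueue, plus SLURM access.
from dask.distributed import Client
from dask_jobqueue import SLURMCluster

cluster = SLURMCluster(
    queue="batch",        # placeholder partition name
    cores=8,              # cores per SLURM job
    memory="16GB",        # memory per SLURM job
    walltime="01:00:00",
)
cluster.scale(jobs=2)     # submit two worker jobs to the queue
client = Client(cluster)
# ... submit Dask work through `client` as usual ...
```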
The scalability of parallel applications depends on a number of factors, including efficient communication, an equal distribution of work, and an efficient data layout. For methods based on domain decomposition, as is standard in, e.g., molecular dynamics, dissipative particle dynamics, or particle-in-cell methods, load imbalance is to be expected when particles are not distributed homogeneously, when interaction calculations have differing costs, or when heterogeneous architectures are used, to name a few cases. In these scenarios the code has to redistribute work among processes according to a work-sharing protocol, or dynamically adjust the computational domains, in order to balance the workload.
The seminar will provide an overview of the motivation for, ideas behind, and implementations of various methods, at the level of tensor-product decomposition, staggered grids, non-homogeneous mesh decomposition, and a recently developed phase-field approach. An implementation of several of these methods in the load-balancing library ALL, developed in the Centre of Excellence E-CAM, is presented. A use case is shown for the Materials Point Method (MPM), an Euler-Lagrange method for materials simulations at the macroscopic level that solves continuum materials equations.
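As a toy illustration of the domain-adjustment idea (this is not the ALL library's API; every name here is invented for the sketch): for particles in a 1D box, slab boundaries can be shifted so that each process owns roughly the same number of particles, even when the distribution is inhomogeneous.

```python
# Toy 1D load balancing by boundary shifting: move slab boundaries so
# each of n_ranks processes owns an equal share of the particles.
# Particles are positions in [0, 1); illustrative only.

def balance_boundaries(positions, n_ranks):
    """Return slab boundaries giving each rank an equal particle count."""
    pts = sorted(positions)
    n = len(pts)
    bounds = [0.0]
    for r in range(1, n_ranks):
        # Cut at the position of the first particle assigned to rank r.
        bounds.append(pts[(r * n) // n_ranks])
    bounds.append(1.0)
    return bounds

def counts_per_rank(positions, bounds):
    """Count how many particles fall into each slab [bounds[r], bounds[r+1])."""
    n_ranks = len(bounds) - 1
    counts = [0] * n_ranks
    for p in positions:
        for r in range(n_ranks):
            if bounds[r] <= p < bounds[r + 1]:
                counts[r] += 1
                break
    return counts
```

With a uniform decomposition, all particles clustered near one end would land on a single rank; after `balance_boundaries` each rank holds an equal count. Production libraries such as ALL do the analogous adjustment in 3D and iteratively, based on measured work rather than raw particle counts.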
The seminar is organised in three main parts:
- Overview of Load Balancing
- The ALL Load Balancing Library
- Balancing the Materials Point Method with ALL
What are “simulations” in advanced research? Is High Performance Computing the Holy Grail of scientific simulations? Let’s find out together through this unique comic story and book.
This workshop is an introduction to using high-performance computing systems effectively. We obviously can’t cover every case or give an exhaustive course on parallel programming in just two days’ teaching time. Instead, this workshop is intended to give students a good introduction and overview of the tools available and how to use them effectively.
By the end of this workshop, students will know how to:
Use the UNIX shell (also known as terminal or command line) to operate a computer, connect to a cluster, and write simple shell scripts.
Submit and manage jobs on a cluster using a scheduler, transfer files, and use software through environment modules.
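As a taste of the scheduler workflow taught here, below is a minimal batch-script sketch assuming a SLURM scheduler (the partition name, module name, and resource values are placeholders; your cluster's documentation has the real ones):

```shell
#!/bin/bash
#SBATCH --job-name=hello        # job name shown by squeue
#SBATCH --partition=short       # placeholder partition name
#SBATCH --ntasks=1              # a single task
#SBATCH --time=00:05:00         # five-minute walltime limit

module load python              # environment modules, as covered above
echo "Hello from $(hostname)"
```

Submit it with `sbatch hello.sh`, check its state with `squeue -u $USER`, and find its output in `slurm-<jobid>.out` in the submission directory.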
The tutorial will cover what we outlined in the tutorial proposal we made for ISC'20. Our proposal was accepted, but since ISC'20 was transformed into an online conference without tutorials, the tutorial has been postponed until ISC'21. However, we did not want to let this opportunity go to waste...
If you are interested in learning more about the basics of EasyBuild, and if you are not afraid to get your hands dirty by following along with the hands-on exercises, please try it out!
This is a collection of (hopefully) useful information to help with the transition to online training. Guides to help with this are being rapidly created as the COVID-19 crisis evolves; we try to keep the information here moderated to avoid overwhelming people.
If you know of something that could be of value in this list, please email Alan O'Cais (a.ocais@fz-juelich.de)