
Thomas Lippert to propose modular way of thinking to optimize both supercomputing technology and HPC policy

At the ISC'18 Conference in Frankfurt, Germany, we had the opportunity to talk with Thomas Lippert, who has recently become Chair of PRACE. He is also Director of the Jülich Supercomputing Centre, where the first module of a new HPC system has been installed. The system was procured in the framework of the Gauss Centre for Supercomputing in Germany. The machine, called JUWELS, offers about 10 Petaflops of CPU performance and 2 Petaflops of GPU performance. This first module is part of a modular supercomputer.

Does this mean that the modular supercomputer will have different kinds of hardware in it?

It means that the modular supercomputer consists of entities that are supercomputers in themselves, called modules: say, one system composed mainly of CPUs and another system that concentrates on the use of GPUs. These modules are not coupled merely by a standard connection or through storage technology; they are brought together directly on a joint network, perhaps with a bridge between two different network technologies, or on the same network type. This tight connection between the modules is essential. A code that consists of a workflow can then run primarily on the cluster part and later on the second part, the second module still to come, which will be highly scalable.

You can also think of codes whose parts have different scalability properties, so that the code can be divided along these lines into parts that run on the cluster and parts that run on the booster, the other module. In this way one can simplify the technology of the scaling module and concentrate a lot of technology in the cluster module. This will enable us to run the highly scalable parts more cheaply than on a full-scale cluster system.
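To make the idea of splitting one code across modules more concrete, here is a minimal, purely illustrative sketch in Python with mpi4py. The rank split, the tags and the work steps are invented for illustration; a real modular system such as JURECA or JUWELS uses its own launch and partitioning mechanisms, so this is a conceptual sketch rather than Jülich's actual implementation.

# Illustrative sketch only (assumed setup): one MPI job is divided into a
# "cluster" group for the less scalable workflow stages and a "booster"
# group for the highly scalable kernel. Run with at least two MPI ranks,
# e.g. mpiexec -n 4 python modular_sketch.py (file name is hypothetical).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Assumption for this sketch: the first half of the ranks play the role of
# the cluster module, the second half the role of the booster module.
is_booster = rank >= size // 2
module = comm.Split(color=1 if is_booster else 0, key=rank)

if not is_booster:
    # Cluster part: setup, I/O and irregular pre-processing.
    data = np.arange(1000, dtype='d') * (module.Get_rank() + 1)
    if module.Get_rank() == 0:
        # The cluster leader hands the prepared data to the booster leader.
        comm.Send(data, dest=size // 2, tag=0)
else:
    # Booster part: the highly scalable compute kernel.
    data = np.empty(1000, dtype='d')
    if module.Get_rank() == 0:
        comm.Recv(data, source=0, tag=0)
    module.Bcast(data, root=0)          # distribute within the booster module
    local_result = np.sum(data)         # stand-in for the scalable kernel
    total = module.reduce(local_result, op=MPI.SUM, root=0)
    if module.Get_rank() == 0:
        print("booster result:", total)

The point the sketch tries to capture is the one made above: the workflow stages with poor scalability and the highly scalable kernel live in one program, but on different groups of resources that can be engineered differently.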

The system is now up and running, at least the new part?

The first module is up and running. The second module will come next year. But we already have a system in the TOP500 that is built this way: the cluster-booster system called JURECA.

It is good that you mention the TOP500, because Erich Strohmaier showed in his presentation a comparison of HPC centres in which the Jülich Supercomputing Centre was at rank 8. He compared 11 centres over the years, and the only European centre that was consistently among them was the Jülich Supercomputing Centre.

In some sense this is by chance, because the way Jülich does computing is never to try to be number 1 or to be very far up in the TOP500 list. Jülich has consistently been in the TOP500 over the years because it has two or three machines on the list. Jülich tries to concentrate its budget on optimizing the computing for its portfolio. That is why the Jülich Centre appears in this list of 11 centres.

The point was to show that centres are more than just one or two machines.

Exactly. Of course, with the experience that Jülich usually had one system which was more or less a cluster and another which was more or less a highly scalable system, like the BlueGene, we found out that it would be better to combine both technologies in one computer. This is what we need in the future, where we will have more heterogeneous types of computing, more workflow-type computing, and different concurrencies within one highly scalable code. These different concurrencies might fit better to different technologies.

Can we switch to PRACE, of which you are now the Chair? What are the main topics for PRACE in the future?

Let's look back a little at PRACE. PRACE has developed very much into a high-quality provider, high quality in the sense of being the science provider of supercomputing. The organisation has qualified itself as THE provision entity for supercomputing cycles. Given the future developments of supercomputing in EuroHPC, one must see how we can use those virtues of PRACE, which are not only the cycles but also things like training people, improving codes, and giving support through the so-called High Level Support Teams (HLSTs). We can use all those virtues further for the upcoming pre-exascale and exascale technologies to be provided by EuroHPC. This would maximize the engagement of all parties, including the users, the provisioners through PRACE, and of course the funders and builders of the technology. I want to bring PRACE together with EuroHPC so that both activities profit and benefit to the maximum extent, for the benefit of science, the users and industry.