This news blog provides news about the e-IRG and related e-Infrastructure topics.


Square Kilometre Array data crunching will require compute power beyond Moore's Law

During the 4th National eScience Symposium, held in the Amsterdam ArenA last October, we were able to talk with Chris Broekema, a researcher in high-performance computing at ASTRON, the Netherlands Institute for Radio Astronomy, located in Dwingeloo. Dwingeloo is also the location of the very first dedicated radio telescope in the Netherlands, the Dwingeloo telescope, which was built in the 1950s and stands in the Dwingelderveld National Park. Probably the best-known radio telescope in the Netherlands is the Westerbork radio telescope, a fourteen-dish array located next to the Westerbork remembrance park. Most recently, in the 2000s, ASTRON built the Lofar radio telescope, a distributed sensor array of about 40,000 fairly cheap and fairly small antennas spread over fields across the north of the Netherlands and much of Western Europe. There are currently about a dozen international stations in Germany, Poland, Sweden, the UK and France. Next year a station will be built in Ireland, so the array keeps growing.

We noted that Lofar is built from a lot of fairly cheap sensors, but that it is really the computing that makes it a high-quality telescope.

Chris Broekema confirmed that Lofar is often described as a software telescope: a lot of data is generated by the sensors in the field, and making sense of that avalanche of data is essentially done by computers in a central processing facility in Groningen, hosted by the University of Groningen.

We remarked that the next step will be the Square Kilometre Array (SKA), an international endeavour about which a major conference was recently held in South Africa. We asked Chris Broekema whether he could tell us something about the SKA.

He explained that the Square Kilometre Array is a very large telescope that will be built from 2018 onwards, with operational capability expected around 2023, so quite a few years from now. It will be built at two locations. In the Western Australian desert, a low-frequency component will be built that is very similar to the Lofar array: roughly 130,000 small, cheap antennas in 512 stations of 256 antennas each, spread over about 65 kilometres. In South Africa, in the Karoo desert, an array of 133 fairly small dishes with a 15-metre diameter will be built - for comparison, the Westerbork array has 25-metre dishes. The MeerKAT array of 64 dishes, which the South African team is currently building and commissioning, will be integrated into it, for a total of 197 dishes spread over about 150 kilometres. Both are quite large arrays, several times larger than existing telescopes, and both are located in very remote, radio-quiet desert areas, several hundred kilometres from the nearest population centre. These are very quiet places for radio science, but they obviously also bring a lot of challenges in getting the infrastructure there and the data from there to a computing facility.
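
As a quick sanity check of the receiver counts just quoted, the arithmetic can be written out in a few lines of Python (a minimal sketch using only the figures from the interview):

```python
# Sanity check of the receiver counts quoted above.

ska_low_stations = 512        # stations in the Western Australian desert
antennas_per_station = 256    # cheap antennas per station
print(f"SKA-Low antennas: {ska_low_stations * antennas_per_station:,}")  # 131,072, i.e. about 130,000

ska_mid_new_dishes = 133      # new 15-metre dishes in the Karoo desert
meerkat_dishes = 64           # existing MeerKAT dishes to be integrated
print(f"SKA-Mid dishes: {ska_mid_new_dishes + meerkat_dishes}")          # 197
```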

The computing facilities that will handle those data streams will be located in Perth, Australia, and Cape Town, South Africa, in both cases about 700 kilometres away from the telescope sites, and the data rates over that distance are quite large. Each of the telescopes will generate in the order of 3 Terabytes per second. For comparison, the Amsterdam Internet Exchange only breached the 1 Terabit/second mark in 2016; the nature of the data flows is obviously completely different, but each telescope is in the same league as the Amsterdam Internet Exchange today. The data rates, while challenging, are not the numbers that scare Chris Broekema, however. The compute requirements for those telescopes are quite large. Current modelling puts the computation requirement in the order of 10 to 17 Petaflops, and these are Petaflops of required useful work: computational density or efficiency is not taken into account there, and the algorithms that dominate that number have a very low computational efficiency. Assuming a computational efficiency of about 15 percent, the system that actually needs to be built is roughly ten times the size just mentioned, which puts the whole endeavour firmly in the exascale range.
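
The relation between the modelled requirement, the low efficiency of the dominant algorithms and the size of the machine that actually has to be built can be sketched as a back-of-the-envelope calculation. The figures below are the ones quoted in the interview; the 15 percent efficiency is an assumption taken from the text, not a design number:

```python
# Back-of-the-envelope: how low computational efficiency inflates the peak
# performance the SKA science data processors must offer.

sustained_pflops = (10, 17)   # modelled computation requirement, in Petaflops
efficiency = 0.15             # assumed efficiency of the dominant algorithms

for s in sustained_pflops:
    peak = s / efficiency     # peak capability the machine must actually deliver
    print(f"{s} Pflops of useful work -> ~{peak:.0f} Pflops peak at {efficiency:.0%} efficiency")
# Roughly a factor of seven to ten, which is why the design heads toward the exascale range.
```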

Each of these telescopes will have a science data processor, one in Australia and one in South Africa. Such systems will obviously cost a lot of money, which for a science instrument is a challenge in itself, but the team is also bound by very strict energy and operational budgets. In the end it is probably not the science requirements that will limit the scope of the system but those other budgets: the capital, operational and energy budgets that the team has available. The challenge is therefore to design and build a system that maximizes the science output per euro, per FTE, per kilowatt or per Joule. Somewhere on that trade-off curve the team hopefully will find the optimal solution.

We wanted to know whether the budget for the basic design of the telescope - the sensors, the dishes and the computation - is fixed from the start, so that the team simply has to make the best possible use of it.

Chris Broekema explained that the antenna design is highly optimized for the science cases: a lot of design effort is currently being spent on optimizing the frequency response of the various elements to maximize their scientific output. The high-performance or general-purpose computing, on the other hand, does not really care what kind of hardware it runs on, which gives the team the opportunity to co-design the system together with the software development. The team knows fairly certainly that the current systems operational in Lofar and other radio telescopes across the world will not scale to the sizes needed for the SKA, which means that a lot of the software needs to be redeveloped. Doing that in close collaboration with the hardware design will hopefully bring the efficiency number up by a decent factor. This is quite a challenge though. The roll-out of the SKA system itself will start in 2018, but the computational requirements of a radio telescope scale superlinearly with the number of receivers, so they will grow as receivers are added. In the first couple of years, the computational requirements for early science and commissioning are very limited indeed - a few laptops or a single rack of compute would do. The full compute system will only be needed at the end of the roll-out period, by 2023, so the system that the team is designing needs to take the technology of that era, the 2023 time frame, into account.
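
One way to see why the requirements scale superlinearly is that an interferometer correlates every pair of receivers, so much of the compute grows with the number of baselines, roughly the square of the number of stations. The sketch below illustrates this; the station counts are arbitrary examples, not the actual SKA roll-out schedule:

```python
# Illustration of superlinear scaling: the correlator works on pairs of
# stations (baselines), so the work grows roughly as N^2 rather than N.

def baselines(n_stations: int) -> int:
    """Number of station pairs an interferometer has to correlate."""
    return n_stations * (n_stations - 1) // 2

for n in (16, 64, 256, 512):                      # example station counts only
    print(f"{n:4d} stations -> {baselines(n):7d} baselines")
# Doubling the number of stations roughly quadruples the correlation work,
# which is why early commissioning needs so little compute compared to the full array.
```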

We remarked that most people predict Moore's Law will end once CMOS scaling is no longer possible, and that many new computing architectures and technologies are now being proposed, so people do not yet know what to use.

Chris Broekema said that he does not know exactly what to expect, although he is confident he will be able to buy a system fit for purpose in that time frame. He expects current technologies to still be around in 2023, with CMOS scaling ending more or less at that time, so the team will most likely be able to take advantage of the last gasp of Moore's Law and CMOS scaling. What he cannot assume is that waiting beyond 2023 would keep delivering more compute for the same amount of money; he is not convinced the team will be able to count on that. This is especially relevant for the second phase of the SKA project, which is several orders of magnitude bigger than the first phase and will require even more compute power. If the team cannot rely on what has been called Moore's Law scaling in that time frame, something else needs to happen, and what that is Chris Broekema does not know yet. This will be beyond 2023. While there are no firm plans in place yet, the team expects to start designing the second phase of the SKA once the first phase is operational, with roll-out around 2030.

We wanted to know more about co-design and how it works in the SKA case. Are the systems designed in collaboration with companies?

Chris Broekema explained that there are a number of components and that the team distinguishes between internal and external co-design. The team has a fairly comprehensive parametric model of what the system needs to do, showing the various accesses, flops and I/O rates, and some fairly detailed design equations are available. Those are an input to the hardware modelling that the team is starting now, using a tool called "ExaBounds" that was developed in collaboration with IBM as part of the DOME project, a public-private collaboration between ASTRON and IBM. Other tools may also be suitable, such as "Aspen", developed by one of the American national laboratories, which might be a very good candidate as well. The intention is to model hardware that is not readily available yet: take the parameters that define a current piece of hardware and modify them to reflect how the team expects hardware development to go. One interesting development of the last couple of years is that the amount of memory bandwidth per flop is steadily decreasing. The team can model that and see how the predicted efficiency of the systems is affected. Those kinds of things give the team an indication of how the system will perform on future hardware.
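
The effect of a shrinking memory bandwidth per flop can be illustrated with a simple roofline-style estimate: given an algorithm's arithmetic intensity (flops per byte moved), the attainable performance is capped by memory bandwidth. The sketch below only illustrates that idea; the hardware numbers are made-up placeholders and not output of ExaBounds or Aspen:

```python
# Roofline-style sketch: attainable performance of a memory-bound kernel
# on hypothetical future nodes where peak flops grow faster than bandwidth.

def attainable_tflops(peak_tflops: float, bandwidth_tbs: float, intensity: float) -> float:
    """Attainable Tflop/s = min(peak, memory bandwidth [TB/s] * flops per byte)."""
    return min(peak_tflops, bandwidth_tbs * intensity)

kernel_intensity = 1.0   # flops per byte; imaging kernels tend to be low

for peak, bw in [(10.0, 1.0), (40.0, 2.0), (100.0, 3.0)]:   # placeholder hardware generations
    perf = attainable_tflops(peak, bw, kernel_intensity)
    print(f"peak {peak:5.0f} Tflop/s, bandwidth {bw:.1f} TB/s -> "
          f"attainable {perf:5.1f} Tflop/s ({perf / peak:.0%} efficiency)")
# As peak grows faster than bandwidth, the predicted efficiency of memory-bound
# kernels drops - exactly the trend the parametric model is meant to expose.
```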

The other side of the coin is a workload characterisation framework that takes real code and real data and monitors how they run on existing hardware. The team counts the number of flops and the I/Os to memory and measures the energy consumption of the systems, and in that way validates the parametric model mentioned before, checking that the modelled performance is roughly similar to the measured performance. Finally, in collaboration with industry, the team takes hardware that is not readily available yet - early engineering samples - and runs the same workload characterisation framework on this cutting-edge hardware to assess the differences between what is available now and what will be on the market soon, so that the team can hopefully extrapolate to what will be available in a few years. It is a multi-pronged approach to this particular topic.
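
In spirit, that validation step comes down to running real code, measuring what actually happened and comparing it with what the model predicted. The snippet below is a hypothetical stand-in, not the actual framework: the kernel, the flop count and the modelled figure are all illustrative:

```python
import time
import numpy as np

# Hypothetical stand-in for workload characterisation: run a real kernel,
# derive the achieved flop rate and compare it with a modelled prediction.

n = 2048
a, b = np.random.rand(n, n), np.random.rand(n, n)

t0 = time.perf_counter()
c = a @ b                              # the "real code" under test
elapsed = time.perf_counter() - t0

flops = 2 * n ** 3                     # multiply-adds in a dense matrix product
measured_gflops = flops / elapsed / 1e9
modelled_gflops = 50.0                 # placeholder prediction from a parametric model

print(f"measured {measured_gflops:.1f} Gflop/s vs modelled {modelled_gflops:.1f} Gflop/s")
print(f"model deviates by {abs(measured_gflops - modelled_gflops) / modelled_gflops:.0%}")
```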

How much data do all these sensors and dishes create, and how much does the team need to store, we asked.

The data rates from the antennas are staggering, Chris Broekema answered: Terabytes per second. This is something the team does not have to transport, though, because the data are reduced quite quickly on site; the amount that actually has to be transported is more or less manageable. At the Science Data Processors in Perth and Cape Town the input amounts to 3 Terabytes/second, which is reduced to 10-100 Gigabits/second for export. 100 Gigabits/second translates to about 1 Petabyte per day, which is still a quite staggering amount of data to preserve and archive. These data streams are exported to regional science centres: the team expects each region with radio-astronomy expertise to perform the science on the SKA data. There is no interaction between the astronomer and the data in the Science Data Processor itself; the data are exported, and interaction with the data takes place in the regional science centres. The team expects one to be hosted by each of the host countries, Australia and South Africa; it is very likely that there will be one in China and one in Europe - the Netherlands is a prime candidate - and most likely one in North America as well. These will be the hubs of expertise for SKA data: not just doing the science with the data, but also retrieving it and handling the massive volumes the SKA will produce, including the reprocessing and final processing that still needs to be done, since the SKA system itself will only produce intermediate products that are turned into end products at the regional science centres.
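
The "about 1 Petabyte per day" figure follows directly from the quoted export rate; a quick check:

```python
# 100 Gigabits per second sustained for one day, converted to bytes.
gbit_per_s = 100
bytes_per_s = gbit_per_s * 1e9 / 8      # bits -> bytes
per_day = bytes_per_s * 86_400          # seconds in a day
print(f"{per_day / 1e15:.2f} PB/day")   # ~1.08 PB/day, about a Petabyte a day
```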
