Condensing the cloud: running CIEL on many-core

If you have a question about this talk, please contact Eiko Yoneki.

Distributed execution engines have revolutionized data processing by making parallel programming simple. Systems such as MapReduce, Dryad and Hadoop can achieve massive throughput when running on thousands of commodity servers, yet generally require the programmer to provide only sequential code. These systems were designed to scale out across many worker machines, each of which had at most a handful of processors; as a result, intra-server parallelism is either left entirely to the developer or managed centrally. In this talk, I will compare the performance of our recently-developed CIEL distributed execution engine on three quite different 48-core platforms: an AMD "Magny-Cours" ccNUMA server, an experimental Intel Single-Chip Cloud Computer, and an Amazon EC2 cluster of 48 uniprocessor VMs. Our results show that task creation and coordination overheads become a problem when using fine-grained tasks, and we present a few simple improvements that mitigate these issues. We also outline a set of further challenges and opportunities.
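The granularity effect described above can be illustrated with a back-of-envelope model (a hypothetical sketch, not CIEL's actual scheduler): if a fixed per-task overhead is paid for every task created and coordinated, splitting the same total work into many fine-grained tasks can dominate the useful work.

```python
import math

# Toy model (not CIEL itself): makespan when a fixed amount of work is
# split into n_tasks, each paying a constant scheduling overhead, and
# executed in waves across n_workers. All numbers are illustrative.
def makespan(total_work_s, n_tasks, n_workers, per_task_overhead_s):
    work_per_task = total_work_s / n_tasks
    waves = math.ceil(n_tasks / n_workers)  # scheduling rounds needed
    return waves * (per_task_overhead_s + work_per_task)

# Same 480s of work on 48 workers, with 100ms of overhead per task:
coarse = makespan(480, n_tasks=48, n_workers=48, per_task_overhead_s=0.1)
fine = makespan(480, n_tasks=4800, n_workers=48, per_task_overhead_s=0.1)
print(coarse, fine)  # coarse ≈ 10.1s; fine ≈ 20.0s, overhead now half the runtime
```

Under this model, once per-task overhead approaches per-task work, adding more tasks stops helping and starts hurting, which is consistent with the overhead problem the talk reports for fine-grained tasks.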

This is a practice talk for the SFMA 2011 workshop at EuroSys.



This talk is part of the Computer Laboratory Systems Research Group Seminar series.

