
Multi-Target Data-Parallel Programming with Accelerator for GPUs, Multicore Processors and FPGAs


If you have a question about this talk, please contact Stephen Clark.

In this presentation I will motivate the need for data-parallel descriptions that can be automatically re-targeted to execute on wildly different kinds of hardware, including vector instructions on multicore processors, GPUs (graphics cards) and FPGAs (special circuits that can be quickly reconfigured to implement new functionality). Specifically, I shall talk about the Accelerator project at Microsoft, which has produced a library of data-parallel operations and memory transformations that aim to be high level enough to permit civilian programmers to express their algorithms in a manner which can be automatically compiled to parallel implementations on GPUs, SSE3 vector instructions on multiple cores, and Xilinx FPGA circuits. As the processing capabilities on our desktops, on our devices and in the cloud become more heterogeneous, we will increasingly need programming models that allow us to compile to multiple targets from a single description. Although this seems very hard to do in the general case, we believe there is a good chance of solving this problem for a class of data-parallel algorithms.
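The single-description, multiple-targets idea described above can be sketched in miniature: a program builds an unevaluated tree of data-parallel operations once, and interchangeable back ends then evaluate that one description for different targets. This is an illustrative stand-in, not Accelerator's actual C# API — the names `ParallelArray`, `eval_python` and `eval_numpy` are invented here, and NumPy merely plays the role of a vectorised target (SSE/GPU/FPGA in the real system).

```python
# A minimal sketch (assumed names, not Accelerator's real API) of
# retargetable data parallelism: build one operation tree, then
# evaluate it with different back ends.

import numpy as np

class ParallelArray:
    """An unevaluated elementwise expression over arrays."""
    def __init__(self, op, *args):
        self.op, self.args = op, args

    def __add__(self, other): return ParallelArray("add", self, other)
    def __mul__(self, other): return ParallelArray("mul", self, other)

def source(data):
    """Wrap concrete input data as a leaf of the expression tree."""
    return ParallelArray("source", data)

def eval_python(expr):
    """Naive target: elementwise loops in pure Python."""
    if expr.op == "source":
        return list(expr.args[0])
    a, b = (eval_python(arg) for arg in expr.args)
    f = {"add": lambda x, y: x + y, "mul": lambda x, y: x * y}[expr.op]
    return [f(x, y) for x, y in zip(a, b)]

def eval_numpy(expr):
    """Vectorised target: whole-array operations (stand-in for SSE/GPU)."""
    if expr.op == "source":
        return np.asarray(expr.args[0], dtype=float)
    a, b = (eval_numpy(arg) for arg in expr.args)
    return {"add": np.add, "mul": np.multiply}[expr.op](a, b)

# One data-parallel description...
x = source([1.0, 2.0, 3.0])
y = source([4.0, 5.0, 6.0])
prog = x * y + y

# ...evaluated on two different targets, giving the same result.
print(eval_python(prog))   # [8.0, 15.0, 24.0]
print(eval_numpy(prog))    # [ 8. 15. 24.]
```

The design point mirrors the talk's claim: because the operations are restricted to a data-parallel subset (here just elementwise add and multiply), each back end is free to choose a very different execution strategy for the same program.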

Satnam Singh’s research interests include finding novel ways to program and use reconfigurable chips called FPGAs, and parallel functional programming. Satnam Singh completed his PhD at the University of Glasgow in 1991, where he devised a new way to program and analyze digital circuits described in a special functional programming language. He then went on to be an academic at the same university and led several research projects that explored novel ways to exploit FPGA technology for applications such as software radio, image processing, graphics, and high-resolution digital printing. In 1998 he moved to San Jose, California to join Xilinx’s research lab, where he developed a language called Lava in conjunction with Chalmers University, which allows circuits to be laid out on chips to give high performance and better utilization of silicon resources. In 2004 he joined Microsoft in Redmond, Washington, where he worked on a variety of techniques for producing concurrent and parallel programs, and in particular explored join patterns and software transactional memory. In 2006 he moved to Microsoft’s research laboratory in Cambridge, where he works on reconfigurable computing and parallel functional programming. He is a fellow of the IET, a visiting professor at Imperial College, and a visiting lecturer at Chalmers in Gothenburg, Sweden.

This talk is part of the Computer Laboratory Wednesday Seminars series.


© 2006-2019 Talks.cam, University of Cambridge.