
Turbocharging Rack-Scale In-Memory Computing with Scale-Out NUMA


If you have a question about this talk, please contact lecturescam.

Please be aware that this event may be recorded. Microsoft will own the copyright of any recording and reserves the right to distribute it as required.

Web-scale online services mandate fast access to massive quantities of data. In practice, this is accomplished by sharding datasets across a pool of servers within a datacenter and keeping each shard in the servers' main memory to avoid long-latency disk I/O. Accesses to non-local shards take place over the datacenter network, incurring communication delays 20-1000x greater than accesses to local memory. In this talk, I will introduce Scale-Out NUMA, a rack-scale architecture with an RDMA-inspired programming model that eliminates the chief latency overheads of existing networking technologies and reduces remote memory access latency to a small factor of local DRAM access. I will overview the key features of Scale-Out NUMA and describe how it bridges the semantic gap between software and hardware through integrated support for atomic object reads.

This talk is part of the Microsoft Research Cambridge, public talks series.


© 2006-2022 Talks.cam, University of Cambridge.