
Why Scheduling


If you have a question about this talk, please contact Minor Gordon.

Research designs for high-performance web servers have long been defined by the strategy they employ to handle many thousands of concurrent requests. A number of efficient designs have emerged in the last decade, with the most prominent of them (Flash and SEDA) occupying the middle ground between the extremes of purely thread-based (Apache) and purely event-based (Zeus) concurrency.
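The two extremes mentioned above can be caricatured in a few lines. This is a hypothetical sketch, not code from the talk: `handle` stands in for request parsing and response generation, and the "requests" are a simulated stream rather than real sockets.

```python
import threading

def handle(request):
    # Stand-in for parsing, file lookup, and response generation.
    return "response:" + request

# Thread-per-request (Apache-style): one thread per connection, each of
# which may block on I/O; the OS scheduler multiplexes them.
def thread_per_request(requests):
    results, lock = [], threading.Lock()
    def worker(req):
        resp = handle(req)
        with lock:
            results.append(resp)
    threads = [threading.Thread(target=worker, args=(r,)) for r in requests]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(results)

# Event-driven (Zeus-style): a single loop dispatches ready events in turn;
# handlers must never block, so all I/O has to be non-blocking.
def event_loop(requests):
    ready = list(requests)  # pretend every request is already "ready"
    results = [handle(req) for req in ready]
    return sorted(results)

reqs = ["GET /%d" % i for i in range(4)]
assert thread_per_request(reqs) == event_loop(reqs)
```

Hybrid designs such as SEDA sit between these: stages connected by event queues, each stage backed by a small thread pool.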

What is not well understood is how the various concurrency strategies scale beyond uniprocessors. Multicore and multiprocessor environments induce new sources of latency such as remote cache misses, with slower clock speeds making disk reads even more expensive. Fortunately, with intelligent disk scheduling and a large RAM a web server can significantly reduce the impact of disk I/O on server performance under a typical static file workload. However, once the server is working primarily from memory the main source of latency becomes the memory hierarchy, particularly L2 data cache misses.
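One common way to keep a static-file server working from memory, as described above, is an in-memory file cache with LRU eviction: once the hot set fits in RAM, only the first touch of each file reaches the disk. This is a minimal illustrative sketch, not the speaker's design; the names and the simulated disk are assumptions.

```python
from collections import OrderedDict

class FileCache:
    """Toy LRU cache mapping file paths to their contents."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # path -> bytes, least recent first
        self.hits = self.misses = 0

    def get(self, path, read_from_disk):
        if path in self.entries:
            self.entries.move_to_end(path)  # mark as most recently used
            self.hits += 1
            return self.entries[path]
        self.misses += 1
        data = read_from_disk(path)  # the expensive path we want to avoid
        self.entries[path] = data
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        return data

# Simulated disk: a static-file workload with a small hot set.
disk = {"/f%d" % i: bytes(64) for i in range(8)}
cache = FileCache(capacity=4)
for _ in range(10):
    for path in ("/f0", "/f1", "/f2"):
        cache.get(path, disk.__getitem__)
assert cache.misses == 3  # only the first touch of each hot file misses
```

Once the cache absorbs the workload like this, disk latency stops dominating, and the remaining stalls come from the memory hierarchy itself.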

I am currently investigating the effects of different concurrency strategies on server software efficiency. In this talk I'll present the application server I've been working on for the past year, explain why I think SPECweb2005 is all but useless, and sketch my plans for a thesis evaluation.

This talk is part of the Computer Laboratory Opera Group Seminars series.

