
LEO: Scheduling Sensor Inference Algorithms across Heterogeneous Mobile Processors and Network Resources


If you have a question about this talk, please contact Liang Wang.

Mobile apps that use sensors to monitor user behavior often employ resource-heavy inference algorithms, making computational offloading a common practice. However, existing schedulers/offloaders typically emphasize one primary offloading aspect without fully exploring complementary goals (e.g., heterogeneous resource management with only partial visibility into the underlying algorithms, or concurrent sensor app execution on a single resource) and, as a result, may overlook performance benefits pertinent to sensor processing. We bring together key ideas scattered across existing offloading solutions to build LEO – a scheduler designed to maximize performance for the unique workload of continuous and intermittent mobile sensor apps without changing their inference accuracy. LEO uses domain-specific signal processing knowledge to smartly distribute sensor processing tasks across the broad range of heterogeneous computational resources of high-end phones (CPU, co-processor, GPU and the cloud). To exploit short-lived but substantial optimization opportunities, and to remain responsive to the needs of near real-time apps such as voice-based natural user interfaces, LEO runs as a service on a low-power co-processor unit (LPU) and performs frequent, joint schedule optimization for concurrent pipelines. Depending on the workload and network conditions, LEO is between 1.6 and 3 times more energy efficient than conventional cloud offloading with CPU-bound sensor sampling. In addition, even if a general-purpose scheduler is optimized directly to leverage an LPU, we find LEO still uses only a fraction (< 1/7) of the energy overhead for scheduling and is up to 19% more energy efficient for medium to heavy workloads.
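To make the scheduling idea concrete, here is a minimal sketch (not from the talk) of assigning concurrent sensor-pipeline tasks to heterogeneous resources by estimated energy cost under per-task latency budgets. All names, resource models, and cost numbers are illustrative assumptions, and the per-task greedy choice is a deliberate simplification of the joint optimization LEO actually performs.

```python
# Hypothetical sketch: assign each sensor-pipeline task to the most
# energy-efficient resource that still meets its latency budget.
# Resource speeds and energy costs below are made-up illustrative values.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    work: float          # abstract compute units
    deadline_ms: float   # latency budget for this pipeline stage

# Per-resource model: (speed in units/ms, energy in mJ per unit of work).
RESOURCES = {
    "cpu":   (1.0, 5.0),
    "lpu":   (0.3, 1.0),   # slow but very energy-efficient co-processor
    "gpu":   (4.0, 3.0),
    "cloud": (8.0, 4.0),   # radio energy folded into the per-unit cost
}

def schedule(tasks):
    """Pick, per task, the cheapest resource that meets its deadline."""
    plan = {}
    for t in tasks:
        feasible = [
            (energy * t.work, res)
            for res, (speed, energy) in RESOURCES.items()
            if t.work / speed <= t.deadline_ms
        ]
        if not feasible:
            raise ValueError(f"no resource meets deadline for {t.name}")
        _, best = min(feasible)
        plan[t.name] = best
    return plan

tasks = [
    Task("keyword-spotting", work=2.0, deadline_ms=5.0),    # near real-time
    Task("audio-features",   work=1.0, deadline_ms=100.0),  # delay-tolerant
]
print(schedule(tasks))
# → {'keyword-spotting': 'gpu', 'audio-features': 'lpu'}
```

Under these toy numbers, the tight-deadline task cannot run on the LPU and lands on the GPU, while the delay-tolerant task is pushed to the cheap LPU, mirroring the kind of trade-off a joint scheduler exploits.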

This talk is part of the Computer Laboratory Systems Research Group Seminar series.


