
1) Mining Users' Significant Driving Routes with Low-power Sensors 2) DSP.Ear: Leveraging Co-Processor Support for Continuous Audio Sensing on Smartphones


If you have a question about this talk, please contact Eiko Yoneki.

Two practice talks for SenSys 2014.

1) While there is significant work on sensing and recognizing places that are significant to users, little attention has been given to users' significant routes. Recognizing these routine journeys can open doors for novel applications, such as personalized travel alerts, and can enhance the user's travel experience. However, the high energy consumption of traditional location sensing technologies, such as GPS or WiFi-based localization, is a barrier to passive and ubiquitous route sensing through smartphones.

In this paper, we present a passive route sensing framework that continuously monitors a vehicle user solely through a phone's gyroscope and accelerometer. This approach differentiates and recognizes the routes taken by the user by applying time warping to the angular speeds experienced by the phone while in transit, and is robust to phone orientation and placement within the vehicle, small detours, and traffic conditions. We compare the route learning and recognition capabilities of this approach with GPS trajectory analysis and show that it achieves similar performance. Moreover, with an embedded co-processor, common to most new-generation phones, it achieves energy savings of an order of magnitude over the GPS sensor.
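The core idea of matching two drives by warping their angular-speed traces can be illustrated with classic dynamic time warping. The sketch below is not the paper's implementation; the sequences, the absolute-difference cost, and the `threshold` parameter are illustrative assumptions.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences of
    angular speeds (e.g. gyroscope readings along a drive)."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = best alignment cost between a[:i] and b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # step both
    return cost[n][m]

def matches_route(trace, template, threshold):
    """A new drive matches a learned route if its warped distance to the
    route's template trace falls below a (hypothetical) threshold."""
    return dtw_distance(trace, template) < threshold
```

Because the alignment may stretch or compress either sequence, the same sequence of turns is recognized even when traffic slows parts of the drive: for example, `dtw_distance([1, 2, 2, 3], [1, 2, 3])` is `0.0`.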

2) The rapidly growing adoption of sensor-enabled smartphones has greatly fueled the proliferation of applications that use phone sensors to monitor user behavior. A central sensor among these is the microphone, which enables, for instance, the detection of valence in speech or the identification of speakers. Deploying several of these applications on a mobile device to continuously monitor the audio environment allows for the acquisition of a diverse range of sound-related contextual inferences. However, the cumulative processing burden critically impacts the phone battery.

To address this problem, we propose DSP.Ear—an integrated sensing system that takes advantage of the latest low-power DSP co-processor technology in commodity mobile devices to enable the continuous and simultaneous operation of multiple established algorithms that perform complex audio inferences. The system extracts emotions from voice, estimates the number of people in a room, identifies the speakers, and detects commonly found ambient sounds, while critically incurring little overhead to the device battery. This is achieved through a series of pipeline optimizations that allow the computation to remain largely on the DSP. Through detailed evaluation of our prototype implementation we show that, by exploiting a smartphone's co-processor, DSP.Ear achieves a 3 to 7 times increase in the battery lifetime compared to a solution that uses only the phone's main processor. In addition, DSP.Ear is 2 to 3 times more power efficient than a naive DSP solution without optimizations. We further analyze a large-scale dataset from 1320 Android users to show that in about 80-90% of the daily usage instances DSP.Ear is able to sustain a full day of operation (even in the presence of other smartphone workloads) with a single battery charge.
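A back-of-the-envelope calculation shows why offloading continuous sensing to a low-power co-processor translates into multi-fold lifetime gains: with a fixed battery capacity, lifetime scales inversely with average power draw. All figures below are hypothetical illustrations, not numbers from the paper.

```python
def battery_hours(capacity_mwh, avg_power_mw):
    """Battery lifetime in hours under a constant average power draw."""
    return capacity_mwh / avg_power_mw

# Hypothetical figures for illustration only: a ~10 Wh phone battery,
# continuous audio sensing drawing 500 mW on the main processor
# versus 100 mW on a low-power DSP co-processor.
cpu_life = battery_hours(10_000, 500)  # 20.0 hours
dsp_life = battery_hours(10_000, 100)  # 100.0 hours
gain = dsp_life / cpu_life             # 5.0x lifetime increase
```

Under these assumed draws the lifetime gain is 5x; the actual improvement depends on the real CPU and DSP power profiles and on how much of the pipeline can stay on the co-processor.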

This talk is part of the Computer Laboratory Systems Research Group Seminar series.


