
Coconut: Optimizing computations for machine learning


If you have a question about this talk, please contact Zoubin Ghahramani.

Matrix-vector notation is the predominant idiom in which machine learning formulae are expressed; some models, such as Gaussian processes [5], would be extremely difficult to describe without it. Turning a matrix expression into a computer program is not always easy, however. Although good implementations of primitive matrix operations are available [2], as are packages like MATLAB [6] that provide a high-level interface to these primitives, two important tasks must still be carried out manually: (i) computing derivatives of matrix functions, and (ii) turning a matrix expression into an efficient computer program. The lack of tools for these tasks can and does harm research: even for the relatively simple example of fitting a linear regression model with gradient methods, the number of types and combinations of basis functions a researcher can experiment with is limited by the need to differentiate the objective function and write code by hand for each version. We have addressed these issues by combining a symbolic matrix algebra engine with a superoptimizing compiler: an interesting learning problem in itself. We call our system Coconut.
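Coconut itself is not shown in this abstract, but the linear-regression example can be made concrete. The sketch below (an illustration of the problem, not Coconut's output) shows the manual workflow the talk argues against: the gradient of the least-squares objective, Φᵀ(Φw − y), is derived by hand and would have to be re-derived and re-implemented for every change of objective. The basis functions `poly_basis` and `rbf_basis`, and all parameter choices, are hypothetical examples.

```python
import numpy as np

# Hypothetical basis expansions a researcher might want to swap between.
def poly_basis(x, degree=3):
    # Columns [1, x, x^2, ..., x^degree].
    return np.vander(x, degree + 1, increasing=True)

def rbf_basis(x, centers=np.linspace(0.0, 1.0, 5), width=0.2):
    # Gaussian radial basis functions centred on a fixed grid.
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

def objective(w, Phi, y):
    # Least-squares objective 0.5 * ||Phi w - y||^2.
    r = Phi @ w - y
    return 0.5 * r @ r

def gradient(w, Phi, y):
    # Hand-derived gradient: d/dw 0.5 ||Phi w - y||^2 = Phi^T (Phi w - y).
    # This derivation is the step a symbolic engine would automate.
    return Phi.T @ (Phi @ w - y)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=50)
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(50)

results = {}
for basis in (poly_basis, rbf_basis):
    Phi = basis(x)
    w = np.zeros(Phi.shape[1])
    # Stable step size from the largest singular value of Phi.
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2
    for _ in range(5000):
        w -= step * gradient(w, Phi, y)
    results[basis.__name__] = objective(w, Phi, y)
    print(basis.__name__, results[basis.__name__])
```

Because the objective is quadratic here, the hand derivation is easy; for richer models the derivative, and an efficient program to evaluate it, are exactly what a system combining symbolic matrix algebra with a superoptimizing compiler would generate.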

This talk is part of the Machine Learning @ CUED series.

