

## Learning of Milky Way Model Parameters Using Matrix-variate Data in a New Gaussian Process-based Method

- Dr Dalia Chakrabarty (University of Warwick)
- Thursday 15 November 2012, 11:30-12:30
- Engineering Department, CBL Room BE-438.
If you have a question about this talk, please contact Zoubin Ghahramani.

In this talk I will discuss a new Bayesian non-parametric method for predicting the value of the model parameter vector that supports real observed data, where the measured information takes the form of a matrix. This information is expressed as an unknown, matrix-variate function of the model parameter vector, and this unknown function is modelled using a high-dimensional Gaussian Process. The model is trained on a training data set generated (via simulations) at a chosen design set. In our treatment, the matrix of information is vectorised, so that the function is modelled as a vector-variate Gaussian Process; the resulting likelihood is matrix-normal, with mean and covariance matrices determined by the structure of the Gaussian Process in question. To learn selected process parameters (such as the smoothness parameters) from the data, in addition to the unknown model parameter vector that supports the real data, we write their joint posterior probability given both the training and the observed data. Inference is performed using transformation-based MCMC.

An application of this method is made to learn feature parameters of the Milky Way, using measured and simulated data on the velocity vectors of stars in the vicinity of the Sun. Learning of the Galactic parameters from the real data is shown to produce a result similar to that of a comparator method, which requires a much larger data set to accomplish the estimation.

This talk is part of the Machine Learning @ CUED series.

## This talk is included in these lists:

- Seminar
- All Talks (aka the CURE list)
- Biology
- CBL important
- Cambridge Centre for Data-Driven Discovery (C2D3)
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Cambridge Neuroscience Seminars
- Cambridge University Engineering Department Talks
- Centre for Smart Infrastructure & Construction
- Chris Davis' list
- Creating transparent intact animal organs for high-resolution 3D deep-tissue imaging
- Engineering Department, CBL Room BE-438
- Featured lists
- Guy Emerson's list
- Inference Group Summary
- Information Engineering Division seminar list
- Interested Talks
- Joint Machine Learning Seminars
- Life Science
- Life Sciences
- ML
- Machine Learning @ CUED
- Machine Learning Summary
- Neuroscience
- Neuroscience Seminars
- Required lists for MLG
- School of Technology
- Simon Baker's List
- Stem Cells & Regenerative Medicine
- Trust & Technology Initiative - interesting events
- bld31
- dh539
- ndk22's list
- rp587
- yk373's list
Note that ex-directory lists are not shown.
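The pipeline the abstract describes (a simulator evaluated at a design set, a Gaussian Process emulator of the vectorised matrix output, and MCMC over the model parameter given observed data) can be illustrated with a deliberately simplified sketch. Everything below is hypothetical: a toy scalar parameter, a made-up `simulate` function standing in for the Galactic simulations, an independent-output GP in place of the full matrix-normal likelihood, and plain random-walk Metropolis rather than the transformation-based MCMC used in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "simulator": maps a scalar model parameter theta to a 2x3 matrix of
# observables (a stand-in for simulated stellar-velocity summaries).
def simulate(theta):
    base = np.outer([1.0, 2.0], [np.sin(theta), np.cos(theta), theta])
    return base + 0.01 * rng.standard_normal((2, 3))

# Design set: parameter values at which the training data are generated.
design = np.linspace(0.5, 2.5, 15)
Y = np.stack([simulate(t).ravel() for t in design])  # row = vectorised matrix

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel; ell plays the role of a smoothness parameter."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def gp_predict(theta, ell=0.5, jitter=1e-6):
    """Vector-variate GP prediction: one GP per output component, all sharing
    a common kernel over the parameter axis (a simplification of the
    matrix-normal structure in the talk)."""
    K = rbf(design, design, ell) + jitter * np.eye(len(design))
    k = rbf(np.array([theta]), design, ell)          # 1 x n
    mean = (k @ np.linalg.solve(K, Y)).ravel()       # predictive mean, length d
    var = 1.0 - (k @ np.linalg.solve(K, k.T)).item() # shared predictive variance
    return mean, max(var, jitter)

def log_likelihood(theta, y_obs, noise=0.05):
    # Isotropic Gaussian likelihood around the GP predictive mean.
    mean, var = gp_predict(theta)
    s2 = var + noise**2
    r = y_obs - mean
    return -0.5 * (r @ r) / s2 - 0.5 * len(r) * np.log(2 * np.pi * s2)

# "Observed" matrix generated at a hidden true parameter value.
theta_true = 1.7
y_obs = simulate(theta_true).ravel()

# Random-walk Metropolis over theta (a plain stand-in for
# transformation-based MCMC).
theta, samples = 1.0, []
ll = log_likelihood(theta, y_obs)
for _ in range(4000):
    prop = theta + 0.1 * rng.standard_normal()
    ll_prop = log_likelihood(prop, y_obs)
    if np.log(rng.uniform()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    samples.append(theta)

posterior_mean = np.mean(samples[1000:])
```

Because the GP interpolates the training runs, the posterior over `theta` concentrates near the value that generated the observations; in the talk this role is played by the Galactic feature parameters, with the smoothness parameters learned jointly rather than fixed as here.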