(FPT Preview) A Scalable FPGA Architecture for Non-linear SVM Training

If you have a question about this talk, please contact Dr George A Constantinides.

Support Vector Machines (SVMs) are a popular supervised learning method, providing state-of-the-art accuracy in various classification tasks. However, SVM training is time-consuming for large-scale problems. This work proposes a scalable FPGA architecture that targets a geometric approach to SVM training based on Gilbert's algorithm using kernel functions. The architecture is partitioned into floating-point and fixed-point domains in order to exploit the FPGA's available resources efficiently for the acceleration of non-linear SVM training. Implementation results show a speed-up of up to three orders of magnitude for the most computationally expensive part of the algorithm, compared with a software implementation.
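For readers unfamiliar with the geometric view, the sketch below illustrates one common kernelised form of Gilbert's algorithm, in which the SVM solution is sought as the minimum-norm point of the convex hull of the class-signed feature vectors y_i*phi(x_i). This is a minimal illustrative sketch, not the architecture presented in the talk: the RBF kernel, function names, parameters and stopping rule are assumptions chosen for clarity. Each iteration reduces to kernel inner products against all training samples (the line computing Q @ alpha), which is the kind of dense kernel computation that dominates runtime and that a hardware accelerator would target.

    import numpy as np

    def rbf_kernel(X, gamma=1.0):
        # Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)
        sq = np.sum(X ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
        return np.exp(-gamma * d2)

    def gilbert_svm_train(X, y, gamma=1.0, max_iters=1000, tol=1e-6):
        # Kernelised Gilbert's algorithm: seek the minimum-norm point w of the
        # convex hull of {y_i * phi(x_i)}; alpha holds the convex weights of w.
        n = X.shape[0]
        Q = np.outer(y, y) * rbf_kernel(X, gamma)    # Q[i, j] = y_i y_j K(x_i, x_j)
        alpha = np.zeros(n)
        alpha[0] = 1.0                               # start at an arbitrary hull vertex
        for _ in range(max_iters):
            g = Q @ alpha                            # g[j] = <w, y_j phi(x_j)>: dominant cost per iteration
            j = int(np.argmin(g))                    # hull vertex furthest along -w
            ww = float(alpha @ g)                    # ||w||^2
            if ww - g[j] <= tol:                     # Gilbert's stopping criterion
                break
            denom = ww - 2.0 * g[j] + Q[j, j]        # ||w - z_j||^2
            lam = min(1.0, (ww - g[j]) / denom)      # exact line search on the segment [w, z_j]
            alpha *= (1.0 - lam)                     # w <- (1 - lam) * w + lam * z_j
            alpha[j] += lam
        return alpha

Because every quantity in the loop is expressed through the kernel matrix Q, the update itself is a handful of scalar operations, while the Q @ alpha product scales with the training-set size; this split is what makes a floating-point/fixed-point partition of the datapath attractive in principle.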

This talk is part of the CAS FPGA Talks series.
