Parallel Support Vector Machines on a Hadoop Framework

Authors

  • Neha Jadhav
  • Ms. Ch. Bhavani

Keywords:

SVM Parameters, MapReduce, Hadoop, Parallel SVM

Abstract

The term "big data" refers to datasets too large to be processed with conventional computing 
techniques. Hadoop allows applications to be stored and run on clusters of commodity hardware, 
and MapReduce, its distributed programming model, breaks large datasets into smaller chunks that 
can be processed in parallel. The Support Vector Machine (SVM) is a well-known and powerful 
classifier in machine learning, but its high computational cost makes it unsuitable for large 
datasets. This research demonstrates a MapReduce-based SVM for large datasets, and its 
performance is analysed under varying penalty and kernel parameters.

Published

21-02-2021

How to Cite

Parallel Support Vector Machines on a Hadoop Framework. (2021). Indo-American Journal of Pharma and Bio Sciences, 19(1), 6-17. https://iajpb.org/index.php/iajpb/article/view/77