Research Article

Algorithm Selection based on Landmarking Meta-feature

by Ashvini Balte, Nitin Pise, Ranjana Agrawal
Communications on Applied Electronics
Foundation of Computer Science (FCS), NY, USA
Volume 2 - Number 6
Year of Publication: 2015
DOI: 10.5120/cae2015651784

Ashvini Balte, Nitin Pise, Ranjana Agrawal. Algorithm Selection based on Landmarking Meta-feature. Communications on Applied Electronics 2, 6 (August 2015), 23-27. DOI=10.5120/cae2015651784

@article{10.5120/cae2015651784,
author = {Ashvini Balte and Nitin Pise and Ranjana Agrawal},
title = {Algorithm Selection based on Landmarking Meta-feature},
journal = {Communications on Applied Electronics},
issue_date = {August 2015},
volume = {2},
number = {6},
month = {August},
year = {2015},
issn = {2394-4714},
pages = {23-27},
numpages = {5},
url = {https://www.caeaccess.org/archives/volume2/number6/402-2015651784/},
doi = {10.5120/cae2015651784},
publisher = {Foundation of Computer Science (FCS), NY, USA},
address = {New York, USA}
}
%0 Journal Article
%A Ashvini Balte
%A Nitin Pise
%A Ranjana Agrawal
%T Algorithm Selection based on Landmarking Meta-feature
%J Communications on Applied Electronics
%@ 2394-4714
%V 2
%N 6
%P 23-27
%D 2015
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Knowledge discovery is a core task in data mining, and many classification algorithms are available for it, each distinguished from the others by its performance. The No Free Lunch theorem [1] states that no single algorithm can be predicted to perform best on all kinds of datasets; an algorithm's performance varies with the characteristics of the dataset. A non-expert cannot easily tell which classifier will be best for his or her dataset. Meta-learning is a machine learning technique that supports non-expert users in selecting a classifier. In meta-learning, dataset characteristics are known as meta-features, and the prediction of a well-suited classifier is based on them. In this paper, the first experiment predicts a classifier from landmarking meta-features using a k-NN approach. The second experiment extends the first: the win/draw/loss records of the candidate classifiers are calculated using a recommendation method, and the best classifier is recommended on that basis; here the simple linear regression values of the classifiers are taken into consideration. In both experiments the performance measure is the accuracy of the classifier.
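As a rough illustration of the pipeline the abstract describes, the Python sketch below computes landmarking meta-features as the cross-validated accuracies of simple learners, finds the k nearest datasets in a knowledge base using the Manhattan distance [15], and tallies win/draw/loss among the candidate classifiers on those neighbouring datasets. All names, the choice of landmarkers, and the knowledge-base layout are illustrative assumptions, not the paper's implementation.

# A minimal sketch (not the authors' code) of landmarking-based classifier
# recommendation. Assumes scikit-learn; the knowledge base and all names
# here are illustrative.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier


def landmarking_meta_features(X, y):
    """Meta-features = cross-validated accuracies of cheap 'landmarker' learners."""
    landmarkers = [
        DecisionTreeClassifier(max_depth=1),  # decision stump
        GaussianNB(),                         # naive Bayes
        KNeighborsClassifier(n_neighbors=1),  # 1-NN
    ]
    return np.array([cross_val_score(m, X, y, cv=5).mean() for m in landmarkers])


def recommend(meta_new, knowledge_base, k=3):
    """Recommend a classifier for a new dataset.

    knowledge_base: list of (meta_vector, {classifier_name: accuracy}) pairs,
    one entry per previously evaluated dataset; every entry is assumed to
    record the same set of candidate classifiers.
    """
    # k nearest datasets by Manhattan (L1) distance in meta-feature space [15].
    dists = [np.abs(meta_new - m).sum() for m, _ in knowledge_base]
    neighbours = [knowledge_base[i][1] for i in np.argsort(dists)[:k]]

    # Win/draw/loss tally: a classifier scores a point on each neighbouring
    # dataset where it beats another candidate, half a point for a draw.
    names = list(neighbours[0])
    score = {c: 0.0 for c in names}
    for accs in neighbours:
        for a in names:
            for b in names:
                if a != b:
                    if accs[a] > accs[b]:
                        score[a] += 1.0
                    elif accs[a] == accs[b]:
                        score[a] += 0.5
    return max(score, key=score.get)

In this reading, the knowledge base plays the role of the paper's stored evaluations of candidate classifiers on previously seen datasets; only the distance computation and the win/draw/loss tally are shown.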

References
  1. Igel, C. and Toussaint, M. 2005. A No-Free-Lunch Theorem for Non-uniform Distributions of Target Functions. Journal of Mathematical Modelling and Algorithms, 3(4), 313-322.
  2. Fürnkranz, J. and Petrak, J. 2001. An Evaluation of Landmarking Variants. In: Working Notes of the ECML/PKDD 2001 Workshop on Integrating Aspects of Data Mining, Decision Support and Meta-Learning (IDDM-2001), Freiburg, Germany, 57-68.
  3. Brazdil, P., Carrier, C. G., Soares, C. and Vilalta, R. 2008. Metalearning: Applications to Data Mining. Springer Science & Business Media.
  4. Vanwinckelen, G. and Blockeel, H. 2014. A Meta-learning System for Multi-instance Classification. In: Proceedings of the ECML-14 Workshop on Learning over Multiple Contexts, 1-14.
  5. Van Rijn, J. N., Holmes, G., Pfahringer, B. and Vanschoren, J. 2014. Algorithm Selection on Data Streams. In: Discovery Science, Springer International Publishing, 325-336.
  6. Vilalta, R. and Drissi, Y. 2002. A Characterization of Difficult Problems in Classification. In: Proceedings of the 6th European Conference on Principles and Practice of Knowledge Discovery in Databases, Helsinki, Finland.
  7. Pfahringer, B., Bensusan, H. and Giraud-Carrier, C. 2000. Tell Me Who Can Learn You and I Can Tell You Who You Are: Landmarking Various Learning Algorithms. In: Proceedings of the 17th International Conference on Machine Learning, 743-750.
  8. Vanschoren, J. 2010. Understanding Machine Learning Performance with Experiment Databases. Ph.D. thesis, Arenberg Doctoral School of Science, Engineering & Technology, Katholieke Universiteit Leuven.
  9. Reif, M., Shafait, F., Goldstein, M., Breuel, T. and Dengel, A. 2014. Automatic Classifier Selection for Non-experts. Pattern Analysis and Applications, 17(1), 83-96.
  10. Peng, Y., Flach, P., Soares, C. and Brazdil, P. 2002. Improved Dataset Characterization for Meta-learning. In: S. Lange, K. Satoh, C. Smith (eds.) Discovery Science, Lecture Notes in Computer Science, 193-208.
  11. Balte, A., Pise, N. and Kulkarni, P. 2014. Meta-Learning with Landmarking: A Survey. International Journal of Computer Applications, 105(8), 47-51.
  12. Vanschoren, J. and Blockeel, H. 2006. Towards Understanding Learning Behavior. In: Proceedings of the Annual Machine Learning Conference of Belgium and The Netherlands (Benelearn), 89-96.
  13. Abdelmessih, S. D., Shafait, F., Reif, M. and Goldstein, M. 2010. Landmarking for Meta-learning Using RapidMiner. RapidMiner Community Meeting and Conference.
  14. Song, Q., Wang, G. and Wang, C. 2012. Automatic Recommendation of Classification Algorithms Based on Data Set Characteristics. Pattern Recognition, 45(7), 2672-2689.
  15. Black, P. E. 2006. Manhattan Distance. In: Dictionary of Algorithms and Data Structures [online], Vreda Pieterse and Paul E. Black, eds. 31 May 2006. Available from: http://www.nist.gov/dads/HTML/manhattanDistance.htm
  16. Asuncion, A. and Newman, D. 2007. UCI Machine Learning Repository. http://www.ics.uci.edu/~mlearn/MLRepository.html. University of California, Irvine, School of Information and Computer Sciences.
  17. Kohavi, R., Becker, B. and Sommerfield, D. 1997. Improving Simple Bayes.
  18. Aha, D. and Kibler, D. 1991. Instance-based Learning Algorithms. Machine Learning, 6, 37-66.
  19. Quinlan, J. R. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers.
  20. Friedman, J., Hastie, T. and Tibshirani, R. 1998. Additive Logistic Regression: A Statistical View of Boosting. Stanford University.
  21. Frank, E. and Witten, I. H. 1998. Generating Accurate Rule Sets Without Global Optimization. In: Fifteenth International Conference on Machine Learning, 144-151.
  22. Breiman, L. 2001. Random Forests. Machine Learning, 45(1), 5-32.
  23. Breiman, L. 1996. Bagging Predictors. Machine Learning, 24(2), 123-140.
  24. Platt, J. 1998. Fast Training of Support Vector Machines Using Sequential Minimal Optimization. In: B. Schoelkopf, C. Burges and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning, 3.
  25. Kenney, J. F. and Keeping, E. S. 1962. Linear Regression and Correlation. Ch. 15 in Mathematics of Statistics, Pt. 1, 3rd ed. Princeton, NJ: Van Nostrand, 252-285.
Index Terms

Computer Science
Information Sciences

Keywords

Landmarking meta-feature, No Free Lunch Theorem, Knowledge Base, Accuracy, k-NN, Recommendation