Article type: Research Article
Authors: Brown, Laura E. [a, *] | Tsamardinos, Ioannis [b, c] | Hardin, Douglas P. [d, e, 1]
Affiliations: [a] Department of Computer Science, Michigan Technological University, Houghton, MI, USA | [b] Department of Computer Science, University of Crete, Iraklio, Crete, Greece | [c] Institute of Computer Science, Foundation for Research and Technology, Hellas, Greece | [d] Department of Mathematics, Vanderbilt University, Nashville, TN, USA | [e] Department of Biomedical Informatics, Vanderbilt University, Nashville, TN, USA
Correspondence: [*] Corresponding author: Laura E. Brown, Department of Computer Science, Michigan Technological University, Houghton, MI 49931, USA. Tel.: +1 906 487 3472; Fax: +1 906 487 2283; E-mail: [email protected]
Note: [1] The research of this author was supported, in part, by the U.S. National Science Foundation under grants DMS-0808093 and DMS-0934630.
Abstract: Polynomial Support Vector Machine (SVM) models of degree d are linear functions in a feature space of monomials of degree at most d. The model, however, is stored as a set of support vectors and Lagrange multipliers, a representation unsuitable for human understanding. An efficient heuristic method is presented for searching the feature space of a polynomial SVM model for the features with the largest absolute weights. The time complexity of this method is Θ(dms^2 + sdp), where m is the number of variables, d the degree of the kernel, s the number of support vectors, and p the number of features the algorithm is allowed to search. In contrast, the brute-force approach of constructing all weights and then selecting the largest has complexity Θ(sdm^d + d^d). The method is shown to effectively identify the top-weighted features on several simulated data sets, where the true weight vector is known. The method is also run on several high-dimensional, real-world data sets, where the returned features can be used to build classifiers with classification performance similar to that of models built with all variables or with subsets returned by variable selection methods. This algorithm provides a new ability to understand, conceptualize, visualize, and communicate polynomial SVM models, and has implications for feature construction, dimensionality reduction, and variable selection.
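The weight construction discussed in the abstract can be sketched as follows. This is a minimal brute-force version (the expensive baseline the abstract contrasts against, not the paper's heuristic), assuming the standard inhomogeneous polynomial kernel K(x, z) = (x·z + 1)^d; the function and variable names are illustrative:

```python
import itertools
import math

import numpy as np

def poly_feature_weights(sv, beta, degree):
    """Brute-force construction of the explicit weight of every monomial
    feature of the kernel K(x, z) = (x.z + 1)**degree.

    sv    : (s, m) array of support vectors
    beta  : (s,) array of dual coefficients (alpha_i * y_i)
    Returns a dict mapping exponent tuples to feature weights."""
    m = sv.shape[1]
    weights = {}
    # Enumerate all exponent vectors with total degree <= degree.
    for expo in itertools.product(range(degree + 1), repeat=m):
        k = sum(expo)
        if k > degree:
            continue
        # Multinomial coefficient of this monomial in the kernel expansion.
        coef = math.factorial(degree)
        for e in expo:
            coef //= math.factorial(e)
        coef //= math.factorial(degree - k)
        # Feature value phi_expo(x_i) for each support vector, then the
        # weight w_expo = sum_i beta_i * phi_expo(x_i).
        phi = math.sqrt(coef) * np.prod(sv ** np.array(expo), axis=1)
        weights[expo] = float(beta @ phi)
    return weights
```

As a sanity check, summing w_α·φ_α(z) over all features reproduces the kernel-form decision value Σ_i β_i·K(x_i, z); the enumeration cost grows like m^d, which is the reason a heuristic search is needed in high dimensions.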
Keywords: Support Vector Machines, classification, variable selection
DOI: 10.3233/IDA-2012-0539
Journal: Intelligent Data Analysis, vol. 16, no. 4, pp. 551-579, 2012
Publisher: IOS Press