oldnewbird's personal blog http://blog.sciencenet.cn/u/oldnewbird


libsvm Usage (based on the libsvm README)

Viewed 1780 times | 2019-10-23 09:39 | Personal category: Machine Learning | System category: Research notes | Tags: libsvm, readme

`svm-train' Usage

=================


Usage: svm-train [options] training_set_file [model_file]

options:

-s svm_type : set type of SVM (default 0)

    0 -- C-SVC (multi-class classification)
    1 -- nu-SVC (multi-class classification)
    2 -- one-class SVM
    3 -- epsilon-SVR (regression)
    4 -- nu-SVR (regression)

-t kernel_type : set type of kernel function (default 2)

    0 -- linear: u'*v
    1 -- polynomial: (gamma*u'*v + coef0)^degree
    2 -- radial basis function: exp(-gamma*|u-v|^2)
    3 -- sigmoid: tanh(gamma*u'*v + coef0)
    4 -- precomputed kernel (kernel values in training_set_file)

-d degree : set degree in kernel function (default 3)

-g gamma : set gamma in kernel function (default 1/num_features)

-r coef0 : set coef0 in kernel function (default 0)

-c cost : set the parameter C of C-SVC, epsilon-SVR, and nu-SVR (default 1)

-n nu : set the parameter nu of nu-SVC, one-class SVM, and nu-SVR (default 0.5)

-p epsilon : set the epsilon in loss function of epsilon-SVR (default 0.1)

-m cachesize : set cache memory size in MB (default 100)

-e epsilon : set tolerance of termination criterion (default 0.001)

-h shrinking : whether to use the shrinking heuristics, 0 or 1 (default 1)

-b probability_estimates : whether to train an SVC or SVR model for probability estimates, 0 or 1 (default 0)

-wi weight : set the parameter C of class i to weight*C, for C-SVC (default 1)

-v n: n-fold cross validation mode

-q : quiet mode (no outputs)

Here num_features (used in the default of the -g option) means the number of attributes in the input data.


Option -v randomly splits the data into n parts and calculates cross validation accuracy/mean squared error on them.


See libsvm FAQ for the meaning of outputs.
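As an illustrative sketch (train_file and model_file are placeholder names, not files from this post), an RBF-kernel C-SVC with cost 10 and gamma 0.5 could be trained, and the same parameter pair evaluated by 5-fold cross validation, like this:

> svm-train -s 0 -t 2 -c 10 -g 0.5 train_file model_file
> svm-train -s 0 -t 2 -c 10 -g 0.5 -v 5 train_file

In -v mode svm-train only reports the cross-validation result and does not save a model file.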


`svm-predict' Usage

===================


Usage: svm-predict [options] test_file model_file output_file

options:

-b probability_estimates: whether to predict probability estimates, 0 or 1 (default 0); for one-class SVM only 0 is supported


model_file is the model file generated by svm-train.

test_file is the test data you want to predict.

svm-predict will write the predicted results to output_file.
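For instance (placeholder file names again), predictions for a test set, and optionally probability estimates if the model was trained with -b 1, could be produced with:

> svm-predict test_file model_file output_file
> svm-predict -b 1 test_file model_file output_file

The -b 1 form only works if the model itself was trained with -b 1.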


`svm-scale' Usage

=================


Usage: svm-scale [options] data_filename

options:

-l lower : x scaling lower limit (default -1)

-u upper : x scaling upper limit (default +1)

-y y_lower y_upper : y scaling limits (default: no y scaling)

-s save_filename : save scaling parameters to save_filename

-r restore_filename : restore scaling parameters from restore_filename


See the 'Examples' section of the libsvm README for more examples; a short sketch follows below.
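A typical pattern (sketch; file names are placeholders) is to scale the training data to [-1,+1], save the scaling parameters, and then apply the same parameters to the test data:

> svm-scale -l -1 -u 1 -s range train_file > train_file.scale
> svm-scale -r range test_file > test_file.scale

svm-scale writes the scaled data to standard output, hence the redirection; the file 'range' stores the per-feature scaling parameters so that training and test data are scaled consistently.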


Tips on Practical Use

=====================


* Scale your data. For example, scale each attribute to [0,1] or [-1,+1].

* For C-SVC, consider using the model selection tool in the tools directory.

* nu in nu-SVC/one-class-SVM/nu-SVR approximates the fraction of training errors and support vectors.

* If data for classification are unbalanced (e.g. many positive and few negative), try different penalty parameters C with -wi (see the sketch after this list).

* Specify larger cache size (i.e., larger -m) for huge problems.
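
For the unbalanced-data tip above, a sketch (placeholder file name; classes assumed to be labeled +1 and -1) that penalizes errors on the rare -1 class five times more heavily:

> svm-train -c 10 -w1 1 -w-1 5 train_file

This sets the effective C to 10*1 = 10 for class +1 and 10*5 = 50 for class -1.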




https://blog.sciencenet.cn/blog-3421825-1203101.html
