In the previous lecture, we reached the following conclusion:
$E_{out}\approx E_{in}$ is possible if $m_H(N)$ breaks somewhere and $N$ is large enough.
At the same time, we obtained a bound on $m_H(N)$, from which an error bound follows (both are restated below).
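Restating those two bounds explicitly (this restatement is mine, in the notation of the course, since the original figures are not reproduced here): if $m_H(N)$ has a break point $k$, then

$$m_H(N) \le \sum_{i=0}^{k-1} \binom{N}{i},$$

which is polynomial in $N$ of degree $k-1$, and the VC (Vapnik-Chervonenkis) bound states

$$P\Big[\exists h \in H \text{ s.t. } |E_{in}(h) - E_{out}(h)| > \epsilon\Big] \le 4\, m_H(2N)\, \exp\!\left(-\tfrac{1}{8}\epsilon^2 N\right).$$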
Thus we make the following assumptions and hope to obtain the corresponding result: if $m_H(N)$ breaks at some point $k$ (a good $H$) and $N$ is large enough (good data), then with high probability $E_{out}(g) \approx E_{in}(g)$; if, in addition, the algorithm $A$ picks a $g$ with small $E_{in}(g)$ (a good $A$), then with high probability learning is possible.
1. VC Dimension
The VC dimension of $H$, denoted $d_{VC}(H)$, is the largest $N$ for which $m_H(N) = 2^N$,
i.e., the maximum number of inputs that $H$ can shatter.
$d_{VC}$ = "minimum k" $-1$.
$d_{VC} \approx$ number of free parameters (but this does not always hold!).
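As a quick sanity check of these statements (this example is mine, using the standard "positive rays" hypothesis set rather than anything in the original post): for $H = \{h_a : h_a(x) = \mathrm{sign}(x-a)\}$ on $\mathbb{R}$,

$$m_H(N) = N + 1, \qquad m_H(1) = 2 = 2^1, \qquad m_H(2) = 3 < 2^2,$$

so the minimum break point is $k = 2$, hence $d_{VC} = k - 1 = 1$, which coincides with the single free parameter $a$.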
2. VC Dimension and Learning
finite $d_{VC} \Rightarrow g$ "will" generalize ($E_{out}(g)\approx E_{in}(g)$)
regardless of learning algorithm $A$;
regardless of input distribution $P$;
regardless of target function $f$.
3. For d-dimensional perceptrons: $d_{VC} = d+1$.
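A small numerical sketch (mine, not part of the original notes; it assumes NumPy is available) illustrating the $d_{VC} \ge d+1$ half of this claim: pick $d+1$ points whose augmented matrix (with the bias coordinate $x_0 = 1$) is invertible; then every one of the $2^{d+1}$ labelings can be realized by some perceptron.

```python
# Sketch: brute-force check that a d-D perceptron (with bias) shatters
# a particular set of d+1 points.
import itertools
import numpy as np

def can_shatter(X_aug):
    """Return True if sign(X_aug @ w) can realize every labeling of the rows."""
    n = X_aug.shape[0]
    for labels in itertools.product([-1.0, 1.0], repeat=n):
        y = np.array(labels)
        # Solve for weights reproducing y exactly (possible when X_aug is invertible).
        w, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
        if not np.array_equal(np.sign(X_aug @ w), y):
            return False
    return True

d = 3
X = np.vstack([np.zeros(d), np.eye(d)])        # the origin plus the d unit vectors
X_aug = np.hstack([np.ones((d + 1, 1)), X])    # prepend the bias coordinate x0 = 1
print(can_shatter(X_aug))                      # True: these d+1 points are shattered
```

The other half, $d_{VC} \le d+1$, follows because any $d+2$ augmented points in $\mathbb{R}^{d+1}$ are linearly dependent, which forces at least one labeling to be unrealizable.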
4. VC Bound Rephrase: Penalty for Model Complexity
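Concretely, the rephrased bound (this restatement is mine, following the usual form in the course): with probability at least $1-\delta$,

$$E_{out}(g) \;\le\; E_{in}(g) + \underbrace{\sqrt{\frac{8}{N}\ln\!\left(\frac{4(2N)^{d_{VC}}}{\delta}\right)}}_{\Omega(N,\,H,\,\delta)},$$

where $\Omega(N, H, \delta)$ is the penalty for model complexity.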
The VC message: a larger $d_{VC}$ (a more powerful $H$) tends to give a smaller $E_{in}$ but a larger penalty $\Omega$, and vice versa, so a powerful $H$ is not always good; the best model matches $d_{VC}$ to the data.
5. On the Hoeffding Inequality, the Union Bound, and the VC Bound
In his course "Learning from Data", Professor Yaser Abu-Mostafa gives an illustrative picture comparing the three bounds above [1].
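Since the illustration itself is not reproduced here, the three bounds being compared are restated (in their standard forms; this summary is mine):

Hoeffding (one fixed $h$): $P\big[|E_{in}(h) - E_{out}(h)| > \epsilon\big] \le 2\exp(-2\epsilon^2 N)$.

Union bound (finite $H$ with $M$ hypotheses): $P\big[\exists h \in H:\ |E_{in}(h) - E_{out}(h)| > \epsilon\big] \le 2M\exp(-2\epsilon^2 N)$.

VC bound (infinite $H$ with a break point): $P\big[\exists h \in H:\ |E_{in}(h) - E_{out}(h)| > \epsilon\big] \le 4\, m_H(2N)\exp(-\tfrac{1}{8}\epsilon^2 N)$.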
[1] Yaser Abu-Mostafa. Learning From Data - Online Course (MOOC). http://work.caltech.edu/telecourse.html. Slides: http://www.amlbook.com/slides/iTunesU_Lecture06_April_19.pdf