- Main characteristics: A generative model models the joint distribution P(x, y) (equivalently, the class-conditional distribution P(x|y) together with the prior P(y)). It describes, from a statistical point of view, how the data of each class is distributed, and so reflects the similarity of samples within the same class. It is concerned only with the in-class distribution itself (i.e., the probability mass in the region associated with that class), not with where the decision boundary lies. A minimal generative classifier is sketched after this list.
- Advantages:
  - Carries more information than a discriminative model and is more flexible for single-class problems.
  - The model can be learned incrementally.
  - Can handle incomplete data (missing data).
  - Modular construction of composed solutions to complex problems.
  - Prior knowledge can be easily taken into account.
  - Robust to partial occlusion and viewpoint changes.
  - Can tolerate significant intra-class variation of object appearance.
- Disadvantages:
  - Tends to produce a significant number of false positives, particularly for object classes with high visual similarity, such as horses and cows.
  - Learning and inference are comparatively complex.
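To make "model the distribution of each class" concrete, here is a minimal sketch of a generative classifier: a Gaussian Naive Bayes that fits a per-class mean and variance for P(x|y) plus a class prior P(y), then classifies with Bayes' rule. The class structure, names, and toy data below are illustrative assumptions, not from the original notes.

```python
import numpy as np

class GaussianNaiveBayes:
    """Generative classifier: model P(x|y) as independent Gaussians plus a prior P(y)."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.theta_ = {}   # per-class (mean, variance)
        self.prior_ = {}   # per-class P(y)
        for c in self.classes_:
            Xc = X[y == c]
            self.theta_[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9)
            self.prior_[c] = len(Xc) / len(X)
        return self

    def _log_joint(self, X, c):
        mean, var = self.theta_[c]
        # log P(x|y=c) under independent Gaussians, plus log P(y=c)
        log_lik = -0.5 * (np.log(2 * np.pi * var) + (X - mean) ** 2 / var).sum(axis=1)
        return log_lik + np.log(self.prior_[c])

    def predict(self, X):
        # Bayes' rule: argmax_c P(x|y=c) P(y=c)
        joint = np.column_stack([self._log_joint(X, c) for c in self.classes_])
        return self.classes_[joint.argmax(axis=1)]

# Toy usage with synthetic two-class data (illustrative only)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(GaussianNaiveBayes().fit(X, y).predict(X[:5]))
```

Note how the decision boundary is never represented explicitly: each class is modeled on its own, and the boundary only emerges implicitly when the per-class joints are compared.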
- Common examples: Gaussians, Naive Bayes, mixtures of multinomials, mixtures of Gaussians, mixtures of experts, HMMs, sigmoidal belief networks, Bayesian networks, Markov random fields.
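Since mixtures of Gaussians appear in the list above, the sketch below shows how such a generative model is typically fit with EM. Assumptions made for brevity: one-dimensional data, exactly two components, and a fixed number of iterations.

```python
import numpy as np

def fit_gmm_1d(x, n_iter=50, seed=0):
    """EM for a two-component 1-D Gaussian mixture (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    # Initialize mixture weights, means, and variances
    w = np.array([0.5, 0.5])
    mu = rng.choice(x, size=2, replace=False)
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] = P(component k | x_i)
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = w * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Toy usage: data drawn from two well-separated Gaussians
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])
print(fit_gmm_1d(x))
```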
- Main applications:
  - NLP: traditional rule-based or Boolean logic systems (Dialog and Lexis-Nexis) are giving way to statistical approaches (Markov models and stochastic context-free grammars).
  - Medical diagnosis: the QMR knowledge base, initially a heuristic expert system for reasoning about diseases and symptoms, has been augmented with a decision-theoretic formulation.
  - Genomics and bioinformatics: sequences represented as generative HMMs (a likelihood sketch follows this list).
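For the bioinformatics item, the core generative computation is the sequence likelihood P(observations) under an HMM, computed with the forward algorithm. In the sketch below, the transition matrix A, emission matrix B, initial distribution pi, and the observation sequence are made-up toy values.

```python
import numpy as np

def hmm_forward_likelihood(obs, A, B, pi):
    """Forward algorithm: P(obs) under an HMM with transitions A, emissions B, initial pi."""
    alpha = pi * B[:, obs[0]]            # alpha_1(i) = pi_i * b_i(o_1)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # alpha_t(j) = sum_i alpha_{t-1}(i) * a_ij * b_j(o_t)
    return alpha.sum()                   # P(obs) = sum_i alpha_T(i)

# Toy two-state HMM over a 4-symbol alphabet (e.g., DNA bases A, C, G, T -> 0..3)
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.4, 0.4, 0.1, 0.1],
              [0.1, 0.1, 0.4, 0.4]])
pi = np.array([0.5, 0.5])
print(hmm_forward_likelihood([0, 1, 3, 2, 2], A, B, pi))
```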
【Relationship between the two】 A generative model can be turned into a discriminative model (by applying Bayes' rule to obtain the posterior), but a discriminative model cannot recover the generative model. Can the performance of SVMs be combined elegantly with flexible Bayesian statistics? Maximum Entropy Discrimination marries both methods: it solves over a distribution of parameters (a distribution over solutions).
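As a worked illustration of "a generative model yields a discriminative model": applying Bayes' rule to Gaussian class-conditionals with a shared covariance Σ gives a posterior of exactly the logistic-regression form. This is a standard result, stated here as a sketch rather than a claim about the original notes:

```latex
% Generative model: p(x \mid y=k) = \mathcal{N}(x;\mu_k,\Sigma), priors \pi_k = P(y=k), k \in \{0,1\}.
% Bayes' rule gives the discriminative posterior:
P(y=1 \mid x)
  = \frac{\pi_1\,\mathcal{N}(x;\mu_1,\Sigma)}
         {\pi_0\,\mathcal{N}(x;\mu_0,\Sigma) + \pi_1\,\mathcal{N}(x;\mu_1,\Sigma)}
  = \sigma\!\bigl(w^\top x + b\bigr),
\qquad
w = \Sigma^{-1}(\mu_1 - \mu_0),
\qquad
b = -\tfrac{1}{2}\mu_1^\top \Sigma^{-1}\mu_1
    + \tfrac{1}{2}\mu_0^\top \Sigma^{-1}\mu_0
    + \log\frac{\pi_1}{\pi_0}.
```

Here σ is the logistic sigmoid. The forward direction works because the generative model supplies every term of Bayes' rule; the reverse does not, since a discriminative posterior P(y|x) says nothing about the data distribution P(x).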