Foreword

Preface

The framework of learning
    Introduction
    A formal setting
    Hypothesis spaces and target functions
    Sample, approximation, and generalization errors
    The bias-variance problem
    The remainder of this book
    References and additional remarks

Basic hypothesis spaces
    First examples of hypothesis space
    Reminders I
    Hypothesis spaces associated with Sobolev spaces
    Reproducing Kernel Hilbert Spaces
    Some Mercer kernels
    Hypothesis spaces associated with an RKHS
    Reminders II
    On the computation of empirical target functions
    References and additional remarks

Estimating the sample error
    Exponential inequalities in probability
    Uniform estimates on the defect
    Estimating the sample error
    Convex hypothesis spaces
    References and additional remarks

Polynomial decay of the approximation error
    Reminders III
    Operators defined by a kernel
    Mercer's theorem
    RKHSs revisited
    Characterizing the approximation error in RKHSs
    An example
    References and additional remarks

Estimating covering numbers
    Reminders IV
    Covering numbers for Sobolev smooth kernels
    Covering numbers for analytic kernels
    Lower bounds for covering numbers
    On the smoothness of box spline kernels
    References and additional remarks

Logarithmic decay of the approximation error
    Polynomial decay of the approximation error for C∞ kernels
    Measuring the regularity of the kernel
    Estimating the approximation error in RKHSs
    Proof of Theorem 6.1
    References and additional remarks

On the bias-variance problem
    A useful lemma
    Proof of Theorem 7.1
    A concrete example of bias-variance
    References and additional remarks

Least squares regularization
    Bounds for the regularized error
    On the existence of target functions
    A first estimate for the excess generalization error
    Proof of Theorem 8.1
    Reminders V
    Compactness and regularization
    References and additional remarks

Support vector machines for classification
    Binary classifiers
    Regularized classifiers
    Optimal hyperplanes: the separable case
    Support vector machines
    Optimal hyperplanes: the nonseparable case
    Error analysis for separable measures
    Weakly separable measures
    References and additional remarks

General regularized classifiers
    Bounding the misclassification error in terms of the generalization error
    Projection and error decomposition
    Bounds for the regularized error D(γ, π) of f_γ
    Bounds for the sample error term involving f_γ
    Bounds for the sample error term involving f^π_{z,γ}
    Stronger error bounds
    Improving learning rates by imposing noise conditions
    References and additional remarks

References

Index