
Learning Theory: An Approximation Theory Viewpoint

ISBN-10: 052186559X

ISBN-13: 9780521865593

Edition: 2007

Authors: Felipe Cucker, Ding Xuan Zhou

List price: $90.00


The goal of learning theory is to approximate a function from sample values. To attain this goal, learning theory draws on a variety of subjects, notably statistics, approximation theory, and algorithmics. Ideas from all these areas have blended to form a subject whose many successful applications have triggered its rapid growth over the last two decades. This is the first book to give a general overview of the theoretical foundations of the subject with an emphasis on approximation theory, while still providing a balanced treatment. It is based on courses taught by the authors and is reasonably self-contained, so it will appeal to a broad spectrum of researchers in learning theory and adjacent fields. It will also serve as an introduction for graduate students and others entering the field who wish to see how the problems raised in learning theory relate to other disciplines.
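To make the blurb's central problem concrete, here is a minimal sketch (not taken from the book) of approximating a function from noisy sample values by least squares regularization in a reproducing kernel Hilbert space, using a Gaussian (Mercer) kernel; the sample size, target function, bandwidth, and regularization parameter are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 30                                    # number of samples (illustrative)
x = np.sort(rng.uniform(-1, 1, m))        # sample points in [-1, 1]
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(m)  # noisy sample values

sigma, gamma = 0.3, 1e-3                  # kernel width and regularization (assumed)

def K(a, b):
    """Gaussian Mercer kernel matrix between point sets a and b."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))

# Regularized empirical target function f_z = sum_i c_i K(x_i, .),
# whose coefficients solve the linear system (K + gamma*m*I) c = y.
c = np.linalg.solve(K(x, x) + gamma * m * np.eye(m), y)

t = np.linspace(-1, 1, 200)
f_z = K(t, x) @ c                         # learned function on a grid
err = np.max(np.abs(f_z - np.sin(np.pi * t)))
print(f"max deviation from the target on [-1, 1]: {err:.3f}")
```

The ingredients of this sketch — Mercer kernels, RKHS hypothesis spaces, sample and approximation errors, and least squares regularization — are exactly the topics treated in the chapters listed below.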

Book details

Copyright year: 2007
Publisher: Cambridge University Press
Publication date: 3/29/2007
Binding: Hardcover
Pages: 238
Size: 6.25" wide x 9.00" long x 0.75" tall
Weight: 0.990 lbs
Language: English

Ding Xuan Zhou is an Associate Professor in the Department of Mathematics at the City University of Hong Kong.

Table of contents

The framework of learning
A formal setting
Hypothesis spaces and target functions
Sample, approximation, and generalization errors
The bias-variance problem
The remainder of this book
References and additional remarks
Basic hypothesis spaces
First examples of hypothesis spaces
Reminders I
Hypothesis spaces associated with Sobolev spaces
Reproducing Kernel Hilbert Spaces
Some Mercer kernels
Hypothesis spaces associated with an RKHS
Reminders II
On the computation of empirical target functions
References and additional remarks
Estimating the sample error
Exponential inequalities in probability
Uniform estimates on the defect
Estimating the sample error
Convex hypothesis spaces
References and additional remarks
Polynomial decay of the approximation error
Reminders III
Operators defined by a kernel
Mercer's theorem
RKHSs revisited
Characterizing the approximation error in RKHSs
An example
References and additional remarks
Estimating covering numbers
Reminders IV
Covering numbers for Sobolev smooth kernels
Covering numbers for analytic kernels
Lower bounds for covering numbers
On the smoothness of box spline kernels
References and additional remarks
Logarithmic decay of the approximation error
Polynomial decay of the approximation error for C∞ kernels
Measuring the regularity of the kernel
Estimating the approximation error in RKHSs
Proof of Theorem 6.1
References and additional remarks
On the bias-variance problem
A useful lemma
Proof of Theorem 7.1
A concrete example of bias-variance
References and additional remarks
Least squares regularization
Bounds for the regularized error
On the existence of target functions
A first estimate for the excess generalization error
Proof of Theorem 8.1
Reminders V
Compactness and regularization
References and additional remarks
Support vector machines for classification
Binary classifiers
Regularized classifiers
Optimal hyperplanes: the separable case
Support vector machines
Optimal hyperplanes: the nonseparable case
Error analysis for separable measures
Weakly separable measures
References and additional remarks
General regularized classifiers
Bounding the misclassification error in terms of the generalization error
Projection and error decomposition
Bounds for the regularized error D(γ, φ) of f_γ
Bounds for the sample error term involving f_γ
Bounds for the sample error term involving f^π_{z,γ}
Stronger error bounds
Improving learning rates by imposing noise conditions
References and additional remarks