Introduction to Machine Learning

ISBN-10: 026201243X

ISBN-13: 9780262012430

Edition: 2nd (2010)

Author: Ethem Alpaydin

List price: $60.00

Description:

The goal of machine learning is to program computers to use example data or past experience to solve a given problem. Many successful applications of machine learning exist already, including systems that analyze past sales data to predict customer behavior, optimize robot behavior so that a task can be completed using minimum resources, and extract knowledge from bioinformatics data.

Introduction to Machine Learning is a comprehensive textbook on the subject, covering a broad array of topics not usually included in introductory machine learning texts. To present a unified treatment of machine learning problems and solutions, it discusses methods from many fields, including statistics, pattern recognition, neural networks, artificial intelligence, signal processing, control, and data mining. All learning algorithms are explained so that the student can easily move from the equations in the book to a computer program. The text covers such topics as supervised learning, Bayesian decision theory, parametric methods, multivariate methods, multilayer perceptrons, local models, hidden Markov models, assessing and comparing classification algorithms, and reinforcement learning.

New to the second edition are chapters on kernel machines, graphical models, and Bayesian estimation; expanded coverage of statistical tests in a chapter on design and analysis of machine learning experiments; case studies available on the Web (with downloadable results for instructors); and many additional exercises. All chapters have been revised and updated.

Introduction to Machine Learning can be used by advanced undergraduates and graduate students who have completed courses in computer programming, probability, calculus, and linear algebra. It will also be of interest to engineers in the field who are concerned with the application of machine learning methods. The book is part of the Adaptive Computation and Machine Learning series.

Book details

List price: $60.00
Edition: 2nd
Copyright year: 2010
Publisher: MIT Press
Publication date: 12/4/2009
Binding: Hardcover
Pages: 584
Size: 8.00" wide x 9.00" long x 1.25" tall
Weight: 1.210
Language: English

Ethem Alpaydin is a Professor in the Department of Computer Engineering at Boğaziçi University, Istanbul.

Table of contents

Series Foreword
Figures
Tables
Preface
Acknowledgments
Notes for the Second Edition
Notations
Introduction
What Is Machine Learning?
Examples of Machine Learning Applications
Learning Associations
Classification
Regression
Unsupervised Learning
Reinforcement Learning
Notes
Relevant Resources
Exercises
References
Supervised Learning
Learning a Class from Examples
Vapnik-Chervonenkis (VC) Dimension
Probably Approximately Correct (PAC) Learning
Noise
Learning Multiple Classes
Regression
Model Selection and Generalization
Dimensions of a Supervised Machine Learning Algorithm
Notes
Exercises
References
Bayesian Decision Theory
Introduction
Classification
Losses and Risks
Discriminant Functions
Utility Theory
Association Rules
Notes
Exercises
References
Parametric Methods
Introduction
Maximum Likelihood Estimation
Bernoulli Density
Multinomial Density
Gaussian (Normal) Density
Evaluating an Estimator: Bias and Variance
The Bayes' Estimator
Parametric Classification
Regression
Tuning Model Complexity: Bias/Variance Dilemma
Model Selection Procedures
Notes
Exercises
References
Multivariate Methods
Multivariate Data
Parameter Estimation
Estimation of Missing Values
Multivariate Normal Distribution
Multivariate Classification
Tuning Complexity
Discrete Features
Multivariate Regression
Notes
Exercises
References
Dimensionality Reduction
Introduction
Subset Selection
Principal Components Analysis
Factor Analysis
Multidimensional Scaling
Linear Discriminant Analysis
Isomap
Locally Linear Embedding
Notes
Exercises
References
Clustering
Introduction
Mixture Densities
k-Means Clustering
Expectation-Maximization Algorithm
Mixtures of Latent Variable Models
Supervised Learning after Clustering
Hierarchical Clustering
Choosing the Number of Clusters
Notes
Exercises
References
Nonparametric Methods
Introduction
Nonparametric Density Estimation
Histogram Estimator
Kernel Estimator
k-Nearest Neighbor Estimator
Generalization to Multivariate Data
Nonparametric Classification
Condensed Nearest Neighbor
Nonparametric Regression: Smoothing Models
Running Mean Smoother
Kernel Smoother
Running Line Smoother
How to Choose the Smoothing Parameter
Notes
Exercises
References
Decision Trees
Introduction
Univariate Trees
Classification Trees
Regression Trees
Pruning
Rule Extraction from Trees
Learning Rules from Data
Multivariate Trees
Notes
Exercises
References
Linear Discrimination
Introduction
Generalizing the Linear Model
Geometry of the Linear Discriminant
Two Classes
Multiple Classes
Pairwise Separation
Parametric Discrimination Revisited
Gradient Descent
Logistic Discrimination
Two Classes
Multiple Classes
Discrimination by Regression
Notes
Exercises
References
Multilayer Perceptrons
Introduction
Understanding the Brain
Neural Networks as a Paradigm for Parallel Processing
The Perceptron
Training a Perceptron
Learning Boolean Functions
Multilayer Perceptrons
MLP as a Universal Approximator
Backpropagation Algorithm
Nonlinear Regression
Two-Class Discrimination
Multiclass Discrimination
Multiple Hidden Layers
Training Procedures
Improving Convergence
Overtraining
Structuring the Network
Hints
Tuning the Network Size
Bayesian View of Learning
Dimensionality Reduction
Learning Time
Time Delay Neural Networks
Recurrent Networks
Notes
Exercises
References
Local Models
Introduction
Competitive Learning
Online k-Means
Adaptive Resonance Theory
Self-Organizing Maps
Radial Basis Functions
Incorporating Rule-Based Knowledge
Normalized Basis Functions
Competitive Basis Functions
Learning Vector Quantization
Mixture of Experts
Cooperative Experts
Competitive Experts
Hierarchical Mixture of Experts
Notes
Exercises
References
Kernel Machines
Introduction
Optimal Separating Hyperplane
The Nonseparable Case: Soft Margin Hyperplane
ν-SVM
Kernel Trick
Vectorial Kernels
Defining Kernels
Multiple Kernel Learning
Multiclass Kernel Machines
Kernel Machines for Regression
One-Class Kernel Machines
Kernel Dimensionality Reduction
Notes
Exercises
References
Bayesian Estimation
Introduction
Estimating the Parameter of a Distribution
Discrete Variables
Continuous Variables
Bayesian Estimation of the Parameters of a Function
Regression
The Use of Basis/Kernel Functions
Bayesian Classification
Gaussian Processes
Notes
Exercises
References
Hidden Markov Models
Introduction
Discrete Markov Processes
Hidden Markov Models
Three Basic Problems of HMMs
Evaluation Problem
Finding the State Sequence
Learning Model Parameters
Continuous Observations
The HMM with Input
Model Selection in HMM
Notes
Exercises
References
Graphical Models
Introduction
Canonical Cases for Conditional Independence
Example Graphical Models
Naive Bayes' Classifier
Hidden Markov Model
Linear Regression
d-Separation
Belief Propagation
Chains
Trees
Polytrees
Junction Trees
Undirected Graphs: Markov Random Fields
Learning the Structure of a Graphical Model
Influence Diagrams
Notes
Exercises
References
Combining Multiple Learners
Rationale
Generating Diverse Learners
Model Combination Schemes
Voting
Error-Correcting Output Codes
Bagging
Boosting
Mixture of Experts Revisited
Stacked Generalization
Fine-Tuning an Ensemble
Cascading
Notes
Exercises
References
Reinforcement Learning
Introduction
Single State Case: K-Armed Bandit
Elements of Reinforcement Learning
Model-Based Learning
Value Iteration
Policy Iteration
Temporal Difference Learning
Exploration Strategies
Deterministic Rewards and Actions
Nondeterministic Rewards and Actions
Eligibility Traces
Generalization
Partially Observable States
The Setting
Example: The Tiger Problem
Notes
Exercises
References
Design and Analysis of Machine Learning Experiments
Introduction
Factors, Response, and Strategy of Experimentation
Response Surface Design
Randomization, Replication, and Blocking
Guidelines for Machine Learning Experiments
Cross-Validation and Resampling Methods
K-Fold Cross-Validation
5×2 Cross-Validation
Bootstrapping
Measuring Classifier Performance
Interval Estimation
Hypothesis Testing
Assessing a Classification Algorithm's Performance
Binomial Test
Approximate Normal Test
t Test
Comparing Two Classification Algorithms
McNemar's Test
K-Fold Cross-Validated Paired t Test
5×2 cv Paired t Test
5×2 cv Paired F Test
Comparing Multiple Algorithms: Analysis of Variance
Comparison over Multiple Datasets
Comparing Two Algorithms
Multiple Algorithms
Notes
Exercises
References
Probability
Elements of Probability
Axioms of Probability
Conditional Probability
Random Variables
Probability Distribution and Density Functions
Joint Distribution and Density Functions
Conditional Distributions
Bayes' Rule
Expectation
Variance
Weak Law of Large Numbers
Special Random Variables
Bernoulli Distribution
Binomial Distribution
Multinomial Distribution
Uniform Distribution
Normal (Gaussian) Distribution
Chi-Square Distribution
t Distribution
F Distribution
References
Index