
Neural Networks and Learning Machines

ISBN-10: 0131471392

ISBN-13: 9780131471399

Edition: 3rd (2009)

Authors: Simon Haykin

List price: $253.32


Fluid and authoritative, this well-organized book represents the first comprehensive treatment of neural networks from an engineering perspective, providing extensive, state-of-the-art coverage that will expose readers to the myriad facets of neural networks and help them appreciate the technology's origin, capabilities, and potential applications. Examines all the important aspects of this emerging technology, covering the learning process, back-propagation, radial-basis functions, recurrent networks, self-organizing systems, modular networks, temporal processing, neurodynamics, and VLSI implementation. Integrates computer experiments throughout to demonstrate how neural networks are designed…
Book details

List price: $253.32
Edition: 3rd
Copyright year: 2009
Publisher: Prentice Hall PTR
Publication date: 11/18/2008
Binding: Hardcover
Pages: 936
Size: 7.00" wide x 9.50" long x 1.50" tall
Weight: 3.520 lbs

What is a Neural Network?
The Human Brain
Models of a Neuron
Neural Networks Viewed As Directed Graphs
Network Architectures
Knowledge Representation
Learning Processes
Learning Tasks
Concluding Remarks
Notes and References
Rosenblatt's Perceptron
The Perceptron Convergence Theorem
Relation Between the Perceptron and Bayes Classifier for a Gaussian Environment
Computer Experiment: Pattern Classification
The Batch Perceptron Algorithm
Summary and Discussion
Notes and References
Model Building through Regression
Linear Regression Model: Preliminary Considerations
Maximum a Posteriori Estimation of the Parameter Vector
Relationship Between Regularized Least-Squares Estimation and MAP Estimation
Computer Experiment: Pattern Classification
The Minimum-Description-Length Principle
Finite Sample-Size Considerations
The Instrumental-Variables Method
Summary and Discussion
Notes and References
The Least-Mean-Square Algorithm
Filtering Structure of the LMS Algorithm
Unconstrained Optimization: A Review
The Wiener Filter
The Least-Mean-Square Algorithm
Markov Model Portraying the Deviation of the LMS Algorithm from the Wiener Filter
The Langevin Equation: Characterization of Brownian Motion
Kushner's Direct-Averaging Method
Statistical LMS Learning Theory for Small Learning-Rate Parameter
Computer Experiment I: Linear Prediction
Computer Experiment II: Pattern Classification
Virtues and Limitations of the LMS Algorithm
Learning-Rate Annealing Schedules
Summary and Discussion
Notes and References
Multilayer Perceptrons
Some Preliminaries
Batch Learning and On-Line Learning
The Back-Propagation Algorithm
XOR Problem
Heuristics for Making the Back-Propagation Algorithm Perform Better
Computer Experiment: Pattern Classification
Back Propagation and Differentiation
The Hessian and Its Role in On-Line Learning
Optimal Annealing and Adaptive Control of the Learning Rate
Approximations of Functions
Complexity Regularization and Network Pruning
Virtues and Limitations of Back-Propagation Learning
Supervised Learning Viewed as an Optimization Problem
Convolutional Networks
Nonlinear Filtering
Small-Scale Versus Large-Scale Learning Problems
Summary and Discussion
Notes and References
Kernel Methods and Radial-Basis Function Networks
Cover's Theorem on the Separability of Patterns
The Interpolation Problem
Radial-Basis-Function Networks
K-Means Clustering
Recursive Least-Squares Estimation of the Weight Vector
Hybrid Learning Procedure for RBF Networks
Computer Experiment: Pattern Classification
Interpretations of the Gaussian Hidden Units
Kernel Regression and Its Relation to RBF Networks
Summary and Discussion
Notes and References
Support Vector Machines
Optimal Hyperplane for Linearly Separable Patterns
Optimal Hyperplane for Nonseparable Patterns
The Support Vector Machine Viewed as a Kernel Machine
Design of Support Vector Machines
XOR Problem
Computer Experiment: Pattern Classification
Regression: Robustness Considerations
Optimal Solution of the Linear Regression Problem
The Representer Theorem and Related Issues
Summary and Discussion
Notes and References
Regularization Theory
Hadamard's Conditions for Well-Posedness
Tikhonov's Regularization Theory
Regularization Networks
Generalized Radial-Basis-Function Networks
The Regularized Least-Squares Estimator: Revisited
Additional Notes of Interest on Regularization
Estimation of the Regularization Parameter
Semisupervised Learning
Manifold Regularization: Preliminary Considerations
Differentiable Manifolds
Generalized Regularization Theory
Spectral Graph Theory
Generalized Representer Theorem
Laplacian Regularized Least-Squares Algorithm
Experiments on Pattern Classification Using Semisupervised Learning
Summary and Discussion
Notes and References
Principal-Components Analysis
Principles of Self-Organization
Self-Organized Feature Analysis
Principal-Components Analysis: Perturbation Theory
Hebbian-Based Maximum Eigenfilter
Hebbian-Based Principal-Components Analysis
Case Study: Image Coding
Kernel Principal-Components Analysis
Basic Issues Involved in the Coding of Natural Images
Kernel Hebbian Algorithm
Summary and Discussion
Notes and References
Self-Organizing Maps
Two Basic Feature-Mapping Models
Self-Organizing Map
Properties of the Feature Map
Computer Experiments I: Disentangling Lattice Dynamics Using SOM
Contextual Maps
Hierarchical Vector Quantization
Kernel Self-Organizing Map
Computer Experiment II: Disentangling Lattice Dynamics Using Kernel SOM
Relationship Between Kernel SOM and Kullback-Leibler Divergence
Summary and Discussion
Notes and References
Information-Theoretic Learning Models
Maximum-Entropy Principle
Mutual Information
Kullback-Leibler Divergence
Mutual Information as an Objective Function to be Optimized
Maximum Mutual Information Principle
Infomax and Redundancy Reduction
Spatially Coherent Features
Spatially Incoherent Features
Independent-Components Analysis
Sparse Coding of Natural Images and Comparison with ICA Coding
Natural-Gradient Learning for Independent-Components Analysis
Maximum-Likelihood Estimation for Independent-Components Analysis
Maximum-Entropy Learning for Blind Source Separation
Maximization of Negentropy for Independent-Components Analysis
Coherent Independent-Components Analysis
Rate Distortion Theory and Information Bottleneck
Optimal Manifold Representation of Data
Computer Experiment: Pattern Classification
Summary and Discussion
Notes and References
Stochastic Methods Rooted in Statistical Mechanics
Statistical Mechanics
Markov Chains
Metropolis Algorithm
Simulated Annealing
Gibbs Sampling
Boltzmann Machine
Logistic Belief Nets
Deep Belief Nets
Deterministic Annealing
Analogy of Deterministic Annealing with Expectation-Maximization Algorithm
Summary and Discussion
Notes and References
Dynamic Programming
Markov Decision Process
Bellman's Optimality Criterion
Policy Iteration
Value Iteration
Approximate Dynamic Programming: Direct Methods
Temporal-Difference Learning
Approximate Dynamic Programming: Indirect Methods
Least-Squares Policy Evaluation
Approximate Policy Iteration
Summary and Discussion
Notes and References
Dynamic Systems
Stability of Equilibrium States
Neurodynamic Models
Manipulation of Attractors as a Recurrent Network Paradigm
Hopfield Model
The Cohen-Grossberg Theorem
Brain-State-In-A-Box Model
Strange Attractors and Chaos
Dynamic Reconstruction of a Chaotic Process
Summary and Discussion
Notes and References
Bayesian Filtering for State Estimation of Dynamic Systems
State-Space Models
Kalman Filters
The Divergence Phenomenon and Square-Root Filtering
The Extended Kalman Filter
The Bayesian Filter
Cubature Kalman Filter: Building on the Kalman Filter
Particle Filters
Computer Experiment: Comparative Evaluation of Extended Kalman and Particle Filters
Kalman Filtering in Modeling of Brain Functions
Summary and Discussion
Notes and References
Dynamically Driven Recurrent Networks
Recurrent Network Architectures
Universal Approximation Theorem
Controllability and Observability
Computational Power of Recurrent Networks
Learning Algorithms
Back Propagation Through Time
Real-Time Recurrent Learning
Vanishing Gradients in Recurrent Networks
Supervised Training Framework for Recurrent Networks Using Nonlinear Sequential State Estimators
Computer Experiment: Dynamic Reconstruction of Mackey-Glass Attractor
Adaptivity Considerations
Case Study: Model Reference Applied to Neurocontrol
Summary and Discussion
Notes and References