Preface

Introduction to Adaptive Filtering
    Introduction
    Adaptive Signal Processing
    Introduction to Adaptive Algorithms
    Applications

Fundamentals of Adaptive Filtering
    Introduction
    Signal Representation
        Deterministic Signals
        Random Signals
        Ergodicity
    The Correlation Matrix
    Wiener Filter
    Linearly-Constrained Wiener Filter
        The Generalized Sidelobe Canceller
    Mean-Square Error Surface
    Bias and Consistency
    Newton Algorithm
    Steepest-Descent Algorithm
    Applications Revisited
        System Identification
        Signal Enhancement
        Signal Prediction
        Channel Equalization
        Digital Communication System
    Concluding Remarks

The Least-Mean-Square (LMS) Algorithm
    Introduction
    The LMS Algorithm
    Some Properties of the LMS Algorithm
        Gradient Behavior
        Convergence Behavior of the Coefficient Vector
        Coefficient-Error-Vector Covariance Matrix
        Behavior of the Error Signal
        Minimum Mean-Square Error
        Excess Mean-Square Error and Misadjustment
        Transient Behavior
    LMS Algorithm Behavior in Nonstationary Environments
    Examples
        Analytical Examples
        System Identification Simulations
        Channel Equalization Simulations
        Fast Adaptation Simulations
        The Linearly-Constrained LMS Algorithm
    Concluding Remarks

LMS-Based Algorithms
    Introduction
    Quantized-Error Algorithms
        Sign-Error Algorithm
        Dual-Sign Algorithm
        Power-of-Two Error Algorithm
        Sign-Data Algorithm
    The LMS-Newton Algorithm
    The Normalized LMS Algorithm
    The Transform-Domain LMS Algorithm
    The Affine Projection Algorithm
    Simulation Examples
        Signal Enhancement Simulation
        Signal Prediction Simulation
    Concluding Remarks

Conventional RLS Adaptive Filter
    Introduction
    The Recursive Least-Squares Algorithm
    Properties of the Least-Squares Solution
        Orthogonality Principle
        Relation Between Least-Squares and Wiener Solutions
        Influence of the Deterministic Autocorrelation Initialization
        Steady-State Behavior of the Coefficient Vector
        Coefficient-Error-Vector Covariance Matrix
        Behavior of the Error Signal
        Excess Mean-Square Error and Misadjustment
    Behavior in Nonstationary Environments
    Simulation Examples
    Concluding Remarks

Adaptive Lattice-Based RLS Algorithms
    Introduction
    Recursive Least-Squares Prediction
        Forward Prediction Problem
        Backward Prediction Problem
    Order-Updating Equations
        A New Parameter δ(k, i)
        Order Updating of ξ^d_{b_min}(k, i) and w_b(k, i)
        Order Updating of ξ^d_{f_min}(k, i) and w_f(k, i)
        Order Updating of Prediction Errors
    Time-Updating Equations
        Time Updating for Prediction Coefficients
        Time Updating for δ(k, i)
        Order Updating for γ(k, i)
    Joint-Process Estimation
    Time Recursions of the Least-Squares Error
    Normalized Lattice RLS Algorithm
        Basic Order Recursions
        Feedforward Filtering
    Error-Feedback Lattice RLS Algorithm
        Recursive Formulas for the Reflection Coefficients
    Lattice RLS Algorithm Based on A Priori Errors
    Quantization Effects
    Concluding Remarks

Fast Transversal RLS Algorithms
    Introduction
    Recursive Least-Squares Prediction
        Forward Prediction Relations
        Backward Prediction Relations
    Joint-Process Estimation
    Stabilized Fast Transversal RLS Algorithm
    Concluding Remarks

QR-Decomposition-Based RLS Filters
    Introduction
    Triangularization Using QR-Decomposition
        Initialization Process
        Input Data Matrix Triangularization
        QR-Decomposition RLS Algorithm
    Systolic Array Implementation
    Some Implementation Issues
    Fast QR-RLS Algorithm
        Backward Prediction Problem
        Forward Prediction Problem
    Conclusions and Further Reading

Adaptive IIR Filters
    Introduction
    Output-Error IIR Filters
    General Derivative Implementation
    Adaptive Algorithms
        Recursive Least-Squares Algorithm
        The Gauss-Newton Algorithm
        Gradient-Based Algorithm
    Alternative Adaptive Filter Structures
        Cascade Form
        Lattice Structure
        Parallel Form
        Frequency-Domain Parallel Structure
    Mean-Square Error Surface
        Influence of the Filter Structure on MSE Surface
    Alternative Error Formulations
        Equation Error Formulation
        The Steiglitz-McBride Method
    Conclusion

Nonlinear Adaptive Filtering
    Introduction
    The Volterra Series Algorithm
        LMS Volterra Filter
        RLS Volterra Filter
    Adaptive Bilinear Filters
    Multilayer Perceptron Algorithm
    Radial Basis Function Algorithm
    Conclusion

Subband Adaptive Filters
    Introduction
    Multirate Systems
        Decimation and Interpolation
    Filter Banks
        Two-Band Perfect Reconstruction Filter Banks
        Analysis of Two-Band Filter Banks
        Analysis of M-Band Filter Banks
        Hierarchical M-Band Filter Banks
        Cosine-Modulated Filter Banks
        Block Representation
    Subband Adaptive Filters
        Subband Identification
        Two-Band Identification
        Closed-Loop Structure
    Cross-Filters Elimination
        Fractional Delays
    Delayless Subband Adaptive Filtering
        Computational Complexity
    Frequency-Domain Adaptive Filtering
    Conclusion

Quantization Effects in the LMS and RLS Algorithms
    Quantization Effects in the LMS Algorithm
        Error Description
        Error Models for Fixed-Point Arithmetic
        Coefficient-Error-Vector Covariance Matrix
        Algorithm Stop
        Mean-Square Error
        Floating-Point Arithmetic Implementation
        Floating-Point Quantization Errors in the LMS Algorithm
    Quantization Effects in the RLS Algorithm
        Error Description
        Error Models for Fixed-Point Arithmetic
        Coefficient-Error-Vector Covariance Matrix
        Algorithm Stop
        Mean-Square Error
        Fixed-Point Implementation Issues
        Floating-Point Arithmetic Implementation
        Floating-Point Quantization Errors in the RLS Algorithm