
Mathematical Methods and Algorithms for Signal Processing


ISBN-10: 0201361868

ISBN-13: 9780201361865

Edition: 2000

Authors: Todd Moon, Wynn Stirling


Description:

Intended for senior/graduate-level signal processing courses, the book is also suitable for a course in advanced signal processing or for self-study. Mathematical Methods and Algorithms for Signal Processing tackles the challenge of providing students and practitioners with the broad mathematical tools employed in modern signal processing. Building from an assumed background in signals and stochastic processes, the book provides a solid foundation in analysis, linear algebra, optimization, and statistical signal processing.

Book details

List price: $233.32
Copyright year: 2000
Publisher: Pearson Education
Publication date: 8/4/1999
Binding: Paperback
Pages: 984
Size: 8.27" wide x 10.31" long x 1.57" tall
Weight: 4.488 lbs

Table of contents

Introduction and Foundations
Introduction and Foundations
What is signal processing?
Mathematical topics embraced by signal processing
Mathematical models
Models for linear systems and signals
Adaptive filtering
Gaussian random variables and random processes
Markov and Hidden Markov Models
Some aspects of proofs
An application: LFSRs and Massey's algorithm
Exercises
References
Vector Spaces and Linear Algebra
Signal Spaces
Metric spaces
Vector spaces
Norms and normed vector spaces
Inner products and inner-product spaces
Induced norms
The Cauchy-Schwarz inequality
Direction of vectors: Orthogonality
Weighted inner products
Hilbert and Banach spaces
Orthogonal subspaces
Linear transformations: Range and nullspace
Inner-sum and direct-sum spaces
Projections and orthogonal projections
The projection theorem
Orthogonalization of vectors
Some final technicalities for infinite dimensional spaces
Exercises
References
Representation and Approximation in Vector Spaces
The approximation problem in Hilbert space
The orthogonality principle
Error minimization via gradients
Matrix representations of least-squares problems
Minimum error in Hilbert-space approximations
Applications of the orthogonality theorem
Approximation by continuous polynomials
Approximation by discrete polynomials
Linear regression
Least-squares filtering
Minimum mean-square estimation
Minimum mean-squared error (MMSE) filtering
Comparison of least squares and minimum mean squares
Frequency-domain optimal filtering
A dual approximation problem
Minimum-norm solution of underdetermined equations
Iterative Reweighted LS (IRLS) for L_p optimization
Signal transformation and generalized Fourier series
Sets of complete orthogonal functions
Signals as points: Digital communications
Exercises
References
Linear Operators and Matrix Inverses
Linear operators
Operator norms
Adjoint operators and transposes
Geometry of linear equations
Four fundamental subspaces of a linear operator
Some properties of matrix inverses
Some results on matrix rank
Another look at least squares
Pseudoinverses
Matrix condition number
Inverse of a small-rank adjustment
Inverse of a block (partitioned) matrix
Exercises
References
Some Important Matrix Factorizations
The LU factorization
The Cholesky factorization
Unitary matrices and the QR factorization
Exercises
References
Eigenvalues and Eigenvectors
Eigenvalues and linear systems
Linear dependence of eigenvectors
Diagonalization of a matrix
Geometry of invariant subspaces
Geometry of quadratic forms and the minimax principle
Extremal quadratic forms subject to linear constraints
The Gershgorin circle theorem
Application of eigendecomposition methods
Karhunen-Loève low-rank approximations and principal methods
Eigenfilters
Signal subspace techniques
Generalized eigenvalues
Characteristic and minimal polynomials
Moving the eigenvalues around: Introduction to linear control
Noiseless constrained channel capacity
Computation of eigenvalues and eigenvectors
Exercises
References
The Singular Value Decomposition
Theory of the SVD
Matrix structure from the SVD
Pseudoinverses and the SVD
Numerically sensitive problems
Rank-reducing approximations: Effective rank
Applications of the SVD
System identification using the SVD
Total least-squares problems
Partial total least squares
Rotation of subspaces
Computation of the SVD
Exercises
References
Some Special Matrices and Their Applications
Modal matrices and parameter estimation
Permutation matrices
Toeplitz matrices and some applications
Vandermonde matrices
Circulant matrices
Triangular matrices
Properties preserved in matrix products
Exercises
References
Kronecker Products and the Vec Operator
The Kronecker product and Kronecker sum
Some applications of Kronecker products
The vec operator
Exercises
References
Detection, Estimation, and Optimal Filtering
Introduction to Detection and Estimation, and Mathematical Notation
Detection and estimation theory
Some notational conventions
Conditional expectation
Transformations of random variables
Sufficient statistics
Exponential families
Exercises
References
Detection Theory
Introduction to hypothesis testing
Neyman-Pearson theory
Neyman-Pearson testing with composite binary hypotheses
Bayes decision theory
Some M-ary problems
Maximum-likelihood detection
Approximations to detection performance: The union bound
Invariant tests
Detection in continuous time
Minimax Bayes decisions
Exercises
References
Estimation Theory
The maximum-likelihood principle
ML estimates and sufficiency
Estimation quality
Applications of ML estimation
Bayes estimation theory
Bayes risk
Recursive estimation
Exercises
References
The Kalman Filter
The state-space signal model
Kalman filter I: The Bayes approach
Kalman filter II: The innovations approach
Numerical considerations: Square-root filters
Application in continuous-time systems
Extensions of Kalman filtering to nonlinear systems
Smoothing
Another approach: H∞ smoothing
Exercises
References
Iterative and Recursive Methods in Signal Processing
Basic Concepts and Methods of Iterative Algorithms
Definitions and qualitative properties of iterated functions
Contraction mappings
Rates of convergence for iterative algorithms
Newton's method
Steepest descent
Some Applications of Basic Iterative Methods
LMS adaptive filtering
Neural networks
Blind source separation
Exercises
References
Iteration by Composition of Mappings
Introduction
Alternating projections
Composite mappings
Closed mappings and the global convergence theorem
The composite mapping algorithm
Projection on convex sets
Exercises
References
Other Iterative Algorithms
Clustering
Iterative methods for computing inverses of matrices
Algebraic reconstruction techniques (ART)
Conjugate-direction methods
Conjugate-gradient method
Nonquadratic problems
Exercises
References
The EM Algorithm in Signal Processing
An introductory example
General statement of the EM algorithm
Convergence of the EM algorithm
Example applications of the EM algorithm
Introductory example, revisited
Emission computed tomography (ECT) image reconstruction
Active noise cancellation (ANC)
Hidden Markov models
Spread-spectrum, multiuser communication
Summary
Exercises
References
Methods of Optimization
Theory of Constrained Optimization
Basic definitions
Generalization of the chain rule to composite functions
Definitions for constrained optimization
Equality constraints: Lagrange multipliers
Second-order conditions
Interpretation of the Lagrange multipliers
Complex constraints
Duality in optimization
Inequality constraints: Kuhn-Tucker conditions
Exercises
References
Shortest-Path Algorithms and Dynamic Programming
Definitions for graphs
Dynamic programming
The Viterbi algorithm
Code for the Viterbi algorithm
Maximum-likelihood sequence estimation
HMM likelihood analysis and HMM training
Alternatives to shortest-path algorithms
Exercises
References
Linear Programming
Introduction to linear programming
Putting a problem into standard form
Simple examples of linear programming
Computation of the linear programming solution
Dual problems
Karmarkar's algorithm for LP
Examples and applications of linear programming
Linear-phase FIR filter design
Linear optimal control
Exercises
References
Basic Concepts and Definitions
Set theory and notation
Mappings and functions
Convex functions
O and o notation
Continuity
Differentiation
Basic constrained optimization
The Hölder and Minkowski inequalities
Exercises
References
Completing the Square
The scalar case
The matrix case
Exercises
Basic Matrix Concepts
Notational conventions
Matrix identity and inverse
Transpose and trace
Block (partitioned) matrices
Determinants
Exercises
References
Random Processes
Definitions of means and correlations
Stationarity
Power spectral-density functions
Linear systems with stochastic inputs
References
Derivatives and Gradients
Derivatives of vectors and scalars with respect to a real vector
Derivatives of real-valued functions of real matrices
Derivatives of matrices with respect to scalars, and vice versa
The transformation principle
Derivatives of products of matrices
Derivatives of powers of a matrix
Derivatives involving the trace
Modifications for derivatives of complex vectors and matrices
Exercises
References
Conditional Expectations of Multinomial and Poisson r.v.s
Multinomial distributions
Poisson random variables
Exercises
Bibliography
Index