Principles of Digital Communication

ISBN-10: 0521879078

ISBN-13: 9780521879071

Edition: 2008

Author: Robert G. Gallager

Description:

The renowned communications theorist Robert Gallager brings his lucid writing style to the study of the fundamental system aspects of digital communication in a one-semester course for graduate students. With the clarity and insight that have characterized his teaching and earlier textbooks, he develops a simple framework and then combines it with careful proofs to help the reader understand modern systems and simplified models in an intuitive yet precise way. A strong narrative and links between theory and practice reinforce this concise, practical presentation. The book begins with data compression for arbitrary sources; Gallager then describes how to modulate the resulting binary data…

Book details

List price: $115.00
Copyright year: 2008
Publisher: Cambridge University Press
Publication date: 2/28/2008
Binding: Hardcover
Pages: 422
Size: 7.13" wide x 9.88" long x 0.94" tall
Weight: 2.2 lbs
Language: English

Table of contents

Preface
Acknowledgements
Introduction to digital communication
Standardized interfaces and layering
Communication sources
Source coding
Communication channels
Channel encoding (modulation)
Error correction
Digital interface
Network aspects of the digital interface
Supplementary reading
Coding for discrete sources
Introduction
Fixed-length codes for discrete sources
Variable-length codes for discrete sources
Unique decodability
Prefix-free codes for discrete sources
The Kraft inequality for prefix-free codes
Probability models for discrete sources
Discrete memoryless sources
Minimum L for prefix-free codes
Lagrange multiplier solution for the minimum L
Entropy bounds on L
Huffman's algorithm for optimal source codes
Entropy and fixed-to-variable-length codes
Fixed-to-variable-length codes
The AEP and the source coding theorems
The weak law of large numbers
The asymptotic equipartition property
Source coding theorems
The entropy bound for general classes of codes
Markov sources
Coding for Markov sources
Conditional entropy
Lempel-Ziv universal data compression
The LZ77 algorithm
Why LZ77 works
Discussion
Summary of discrete source coding
Exercises
Quantization
Introduction to quantization
Scalar quantization
Choice of intervals for given representation points
Choice of representation points for given intervals
The Lloyd-Max algorithm
Vector quantization
Entropy-coded quantization
High-rate entropy-coded quantization
Differential entropy
Performance of uniform high-rate scalar quantizers
High-rate two-dimensional quantizers
Summary of quantization
Appendixes
Nonuniform scalar quantizers
Nonuniform 2D quantizers
Exercises
Source and channel waveforms
Introduction
Analog sources
Communication channels
Fourier series
Finite-energy waveforms
L₂ functions and Lebesgue integration over [-T/2, T/2]
Lebesgue measure for a union of intervals
Measure for more general sets
Measurable functions and integration over [-T/2, T/2]
Measurability of functions defined by other functions
L₁ and L₂ functions over [-T/2, T/2]
Fourier series for L₂ waveforms
The T-spaced truncated sinusoid expansion
Fourier transforms and L₂ waveforms
Measure and integration over R
Fourier transforms of L₂ functions
The DTFT and the sampling theorem
The discrete-time Fourier transform
The sampling theorem
Source coding using sampled waveforms
The sampling theorem for [Δ - W, Δ + W]
Aliasing and the sinc-weighted sinusoid expansion
The T-spaced sinc-weighted sinusoid expansion
Degrees of freedom
Aliasing - a time-domain approach
Aliasing - a frequency-domain approach
Summary
Appendix: Supplementary material and proofs
Countable sets
Finite unions of intervals over [-T/2, T/2]
Countable unions and outer measure over [-T/2, T/2]
Arbitrary measurable sets over [-T/2, T/2]
Exercises
Vector spaces and signal space
Axioms and basic properties of vector spaces
Finite-dimensional vector spaces
Inner product spaces
The inner product spaces Rⁿ and Cⁿ
One-dimensional projections
The inner product space of L₂ functions
Subspaces of inner product spaces
Orthonormal bases and the projection theorem
Finite-dimensional projections
Corollaries of the projection theorem
Gram-Schmidt orthonormalization
Orthonormal expansions in L₂
Summary
Appendix: Supplementary material and proofs
The Plancherel theorem
The sampling and aliasing theorems
Prolate spheroidal waveforms
Exercises
Channels, modulation, and demodulation
Introduction
Pulse amplitude modulation (PAM)
Signal constellations
Channel imperfections: a preliminary view
Choice of the modulation pulse
PAM demodulation
The Nyquist criterion
Band-edge symmetry
Choosing {p(t - kT); k ∈ Z} as an orthonormal set
Relation between PAM and analog source coding
Modulation: baseband to passband and back
Double-sideband amplitude modulation
Quadrature amplitude modulation (QAM)
QAM signal set
QAM baseband modulation and demodulation
QAM: baseband to passband and back
Implementation of QAM
Signal space and degrees of freedom
Distance and orthogonality
Carrier and phase recovery in QAM systems
Tracking phase in the presence of noise
Large phase errors
Summary of modulation and demodulation
Exercises
Random processes and noise
Introduction
Random processes
Examples of random processes
The mean and covariance of a random process
Additive noise channels
Gaussian random variables, vectors, and processes
The covariance matrix of a jointly Gaussian random vector
The probability density of a jointly Gaussian random vector
Special case of a 2D zero-mean Gaussian random vector
Z = AW, where A is orthogonal
Probability density for Gaussian vectors in terms of principal axes
Fourier transforms for joint densities
Linear functionals and filters for random processes
Gaussian processes defined over orthonormal expansions
Linear filtering of Gaussian processes
Covariance for linear functionals and filters
Stationarity and related concepts
Wide-sense stationary (WSS) random processes
Effectively stationary and effectively WSS random processes
Linear functionals for effectively WSS random processes
Linear filters for effectively WSS random processes
Stationarity in the frequency domain
White Gaussian noise
The sinc expansion as an approximation to WGN
Poisson process noise
Adding noise to modulated communication
Complex Gaussian random variables and vectors
Signal-to-noise ratio
Summary of random processes
Appendix: Supplementary topics
Properties of covariance matrices
The Fourier series expansion of a truncated random process
Uncorrelated coefficients in a Fourier series
The Karhunen-Loève expansion
Exercises
Detection, coding, and decoding
Introduction
Binary detection
Binary signals in white Gaussian noise
Detection for PAM antipodal signals
Detection for binary nonantipodal signals
Detection for binary real vectors in WGN
Detection for binary complex vectors in WGN
Detection of binary antipodal waveforms in WGN
M-ary detection and sequence detection
M-ary detection
Successive transmissions of QAM signals in WGN
Detection with arbitrary modulation schemes
Orthogonal signal sets and simple channel coding
Simplex signal sets
Biorthogonal signal sets
Error probability for orthogonal signal sets
Block coding
Binary orthogonal codes and Hadamard matrices
Reed-Muller codes
Noisy-channel coding theorem
Discrete memoryless channels
Capacity
Converse to the noisy-channel coding theorem
Noisy-channel coding theorem, forward part
The noisy-channel coding theorem for WGN
Convolutional codes
Decoding of convolutional codes
The Viterbi algorithm
Summary of detection, coding, and decoding
Appendix: Neyman-Pearson threshold tests
Exercises
Wireless digital communication
Introduction
Physical modeling for wireless channels
Free-space, fixed transmitting and receiving antennas
Free-space, moving antenna
Moving antenna, reflecting wall
Reflection from a ground plane
Shadowing
Moving antenna, multiple reflectors
Input/output models of wireless channels
The system function and impulse response for LTV systems
Doppler spread and coherence time
Delay spread and coherence frequency
Baseband system functions and impulse responses
A discrete-time baseband model
Statistical channel models
Passband and baseband noise
Data detection
Binary detection in flat Rayleigh fading
Noncoherent detection with known channel magnitude
Noncoherent detection in flat Rician fading
Channel measurement
The use of probing signals to estimate the channel
Rake receivers
Diversity
CDMA: the IS95 standard
Voice compression
Channel coding and decoding
Viterbi decoding for fading channels
Modulation and demodulation
Multiaccess interference in IS95
Summary of wireless communication
Appendix: Error probability for noncoherent detection
Exercises
References
Index