
Engineering Electromagnetics by William H. Hayt


Book Name: Engineering Electromagnetics
Author: William H. Hayt
Pages: 614
Edition: Eighth  




Click here to download

Digital Signal Processing A Computer-Based Approach by Sanjit K. Mitra


Book Name: Digital Signal Processing A Computer-Based Approach
Author: Sanjit K. Mitra
Pages: 879



Click here to download

Digital Signal Processing – Principles, Algorithms & Applications by John G. Proakis & Dimitris G. Manolakis



Book Name: Digital Signal Processing – Principles, Algorithms & Applications
Author: John G. Proakis & Dimitris G. Manolakis
Pages: 1103
Edition: Fourth

Click here to download
Contents
Preface
1 Introduction 
1.1 Signals, Systems, and Signal Processing
1.1.1 Basic Elements of a Digital Signal Processing System
1.1.2 Advantages of Digital over Analog Signal Processing
1.2 Classification of Signals
1.2.1 Multichannel and Multidimensional Signals
1.2.2 Continuous-Time Versus Discrete-Time Signals
1.2.3 Continuous-Valued Versus Discrete-Valued Signals
1.2.4 Deterministic Versus Random Signals
1.3 The Concept of Frequency in Continuous-Time and Discrete-Time Signals
1.3.1 Continuous-Time Sinusoidal Signals
1.3.2 Discrete-Time Sinusoidal Signals
1.3.3 Harmonically Related Complex Exponentials
1.4 Analog-to-Digital and Digital-to-Analog Conversion
1.4.1 Sampling of Analog Signals
1.4.2 The Sampling Theorem
1.4.3 Quantization of Continuous-Amplitude Signals
1.4.4 Quantization of Sinusoidal Signals
1.4.5 Coding of Quantized Samples
1.4.6 Digital-to-Analog Conversion
1.4.7 Analysis of Digital Signals and Systems Versus Discrete-Time Signals
and Systems
1.5 Summary and References
Problems
2 Discrete-Time Signals and Systems
2.1 Discrete-Time Signals
2.1.1 Some Elementary Discrete-Time Signals
2.1.2 Classification of Discrete-Time Signals
2.1.3 Simple Manipulations of Discrete-Time Signals
2.2 Discrete-Time Systems
2.2.1 Input-Output Description of Systems
2.2.2 Block Diagram Representation of Discrete-Time Systems
2.2.3 Classification of Discrete-Time Systems
2.2.4 Interconnection of Discrete-Time Systems
2.3 Analysis of Discrete-Time Linear Time-Invariant Systems
2.3.1 Techniques for the Analysis of Linear Systems
2.3.2 Resolution of a Discrete-Time Signal into Impulses
2.3.3 Response of LTI Systems to Arbitrary Inputs: The Convolution Sum
2.3.4 Properties of Convolution and the Interconnection of LTI Systems
2.3.5 Causal Linear Time-Invariant Systems
2.3.6 Stability of Linear Time-Invariant Systems
2.3.7 Systems with Finite-Duration and Infinite-Duration Impulse
Response
2.4 Discrete-Time Systems Described by Difference Equations
2.4.1 Recursive and Nonrecursive Discrete-Time Systems
2.4.2 Linear Time-Invariant Systems Characterized by
Constant-Coefficient Difference Equations
2.4.3 Solution of Linear Constant-Coefficient Difference Equations
2.4.4 The Impulse Response of a Linear Time-Invariant Recursive System
2.5 Implementation of Discrete-Time Systems
2.5.1 Structures for the Realization of Linear Time-Invariant Systems
2.5.2 Recursive and Nonrecursive Realizations of FIR Systems
2.6 Correlation of Discrete-Time Signals
2.6.1 Crosscorrelation and Autocorrelation Sequences
2.6.2 Properties of the Autocorrelation and Crosscorrelation Sequences
2.6.3 Correlation of Periodic Sequences
2.6.4 Input-Output Correlation Sequences
2.7 Summary and References
Problems
3 The z-Transform and Its Application to the Analysis of LTI Systems
3.1 The z-Transform
3.1.1 The Direct z-Transform
3.1.2 The Inverse z-Transform
3.2 Properties of the z-Transform
3.3 Rational z-Transforms
3.3.1 Poles and Zeros
3.3.2 Pole Location and Time-Domain Behavior for Causal Signals
3.3.3 The System Function of a Linear Time-Invariant System
3.4 Inversion of the z-Transform
3.4.1 The Inverse z-Transform by Contour Integration
3.4.2 The Inverse z-Transform by Power Series Expansion
3.4.3 The Inverse z-Transform by Partial-Fraction Expansion
3.4.4 Decomposition of Rational z-Transforms
3.5 Analysis of Linear Time-Invariant Systems in the z-Domain
3.5.1 Response of Systems with Rational System Functions
3.5.2 Transient and Steady-State Responses
3.5.3 Causality and Stability
3.5.4 Pole-Zero Cancellations
3.5.5 Multiple-Order Poles and Stability
3.5.6 Stability of Second-Order Systems
3.6 The One-sided z-Transform
3.6.1 Definition and Properties
3.6.2 Solution of Difference Equations
3.6.3 Response of Pole-Zero Systems with Nonzero Initial Conditions
3.7 Summary and References
Problems
4 Frequency Analysis of Signals
4.1 Frequency Analysis of Continuous-Time Signals
4.1.1 The Fourier Series for Continuous-Time Periodic Signals
4.1.2 Power Density Spectrum of Periodic Signals
4.1.3 The Fourier Transform for Continuous-Time Aperiodic Signals
4.1.4 Energy Density Spectrum of Aperiodic Signals
4.2 Frequency Analysis of Discrete-Time Signals
4.2.1 The Fourier Series for Discrete-Time Periodic Signals
4.2.2 Power Density Spectrum of Periodic Signals
4.2.3 The Fourier Transform of Discrete-Time Aperiodic Signals
4.2.4 Convergence of the Fourier Transform
4.2.5 Energy Density Spectrum of Aperiodic Signals
4.2.6 Relationship of the Fourier Transform to the z-Transform
4.2.7 The Cepstrum
4.2.8 The Fourier Transform of Signals with Poles on the Unit Circle
4.2.9 Frequency-Domain Classification of Signals: The Concept of
Bandwidth
4.2.10 The Frequency Ranges of Some Natural Signals
4.3 Frequency-Domain and Time-Domain Signal Properties
4.4 Properties of the Fourier Transform for Discrete-Time Signals
4.4.1 Symmetry Properties of the Fourier Transform
4.4.2 Fourier Transform Theorems and Properties
4.5 Summary and References
Problems
5 Frequency-Domain Analysis of LTI Systems 
5.1 Frequency-Domain Characteristics of Linear Time-Invariant Systems
5.1.1 Response to Complex Exponential and Sinusoidal Signals: The
Frequency Response Function
5.1.2 Steady-State and Transient Response to Sinusoidal Input Signals
5.1.3 Steady-State Response to Periodic Input Signals
5.1.4 Response to Aperiodic Input Signals
5.2 Frequency Response of LTI Systems
5.2.1 Frequency Response of a System with a Rational System Function
5.2.2 Computation of the Frequency Response Function
5.3 Correlation Functions and Spectra at the Output of LTI Systems
5.3.1 Input-Output Correlation Functions and Spectra
5.3.2 Correlation Functions and Power Spectra for Random Input Signals
5.4 Linear Time-Invariant Systems as Frequency-Selective Filters
5.4.1 Ideal Filter Characteristics
5.4.2 Lowpass, Highpass, and Bandpass Filters
5.4.3 Digital Resonators
5.4.4 Notch Filters
5.4.5 Comb Filters
5.4.6 All-Pass Filters
5.4.7 Digital Sinusoidal Oscillators
5.5 Inverse Systems and Deconvolution
5.5.1 Invertibility of Linear Time-Invariant Systems
5.5.2 Minimum-Phase, Maximum-Phase, and Mixed-Phase Systems
5.5.3 System Identification and Deconvolution
5.5.4 Homomorphic Deconvolution
5.6 Summary and References
Problems
6 Sampling and Reconstruction of Signals
6.1 Ideal Sampling and Reconstruction of Continuous-Time Signals
6.2 Discrete-Time Processing of Continuous-Time Signals
6.3 Analog-to-Digital and Digital-to-Analog Converters
6.3.1 Analog-to-Digital Converters
6.3.2 Quantization and Coding
6.3.3 Analysis of Quantization Errors
6.3.4 Digital-to-Analog Converters
6.4 Sampling and Reconstruction of Continuous-Time Bandpass Signals
6.4.1 Uniform or First-Order Sampling
6.4.2 Interleaved or Nonuniform Second-Order Sampling
6.4.3 Bandpass Signal Representations
6.4.4 Sampling Using Bandpass Signal Representations
6.5 Sampling of Discrete-Time Signals
6.5.1 Sampling and Interpolation of Discrete-Time Signals
6.5.2 Representation and Sampling of Bandpass Discrete-Time Signals
6.6 Oversampling A/D and D/A Converters
6.6.1 Oversampling A/D Converters
6.6.2 Oversampling D/A Converters
6.7 Summary and References
Problems
7 The Discrete Fourier Transform: Its Properties and Applications 
7.1 Frequency-Domain Sampling: The Discrete Fourier Transform
7.1.1 Frequency-Domain Sampling and Reconstruction of Discrete-Time
Signals
7.1.2 The Discrete Fourier Transform (DFT)
7.1.3 The DFT as a Linear Transformation
7.1.4 Relationship of the DFT to Other Transforms
7.2 Properties of the DFT
7.2.1 Periodicity, Linearity, and Symmetry Properties
7.2.2 Multiplication of Two DFTs and Circular Convolution
7.2.3 Additional DFT Properties
7.3 Linear Filtering Methods Based on the DFT
7.3.1 Use of the DFT in Linear Filtering
7.3.2 Filtering of Long Data Sequences
7.4 Frequency Analysis of Signals Using the DFT
7.5 The Discrete Cosine Transform
7.5.1 Forward DCT
7.5.2 Inverse DCT
7.5.3 DCT as an Orthogonal Transform
7.6 Summary and References
Problems
8 Efficient Computation of the DFT: Fast Fourier Transform
Algorithms
8.1 Efficient Computation of the DFT: FFT Algorithms
8.1.1 Direct Computation of the DFT
8.1.2 Divide-and-Conquer Approach to Computation of the DFT
8.1.3 Radix-2 FFT Algorithms
8.1.4 Radix-4 FFT Algorithms
8.1.5 Split-Radix FFT Algorithms
8.1.6 Implementation of FFT Algorithms
8.2 Applications of FFT Algorithms
8.2.1 Efficient Computation of the DFT of Two Real Sequences
8.2.2 Efficient Computation of the DFT of a 2N -Point Real Sequence
8.2.3 Use of the FFT Algorithm in Linear Filtering and Correlation
8.3 A Linear Filtering Approach to Computation of the DFT
8.3.1 The Goertzel Algorithm
8.3.2 The Chirp-z Transform Algorithm
8.4 Quantization Effects in the Computation of the DFT
8.4.1 Quantization Errors in the Direct Computation of the DFT
8.4.2 Quantization Errors in FFT Algorithms
8.5 Summary and References
Problems
9 Implementation of Discrete-Time Systems
9.1 Structures for the Realization of Discrete-Time Systems
9.2 Structures for FIR Systems
9.2.1 Direct-Form Structure
9.2.2 Cascade-Form Structures
9.2.3 Frequency-Sampling Structures
9.2.4 Lattice Structure
9.3 Structures for IIR Systems
9.3.1 Direct-Form Structures
9.3.2 Signal Flow Graphs and Transposed Structures
9.3.3 Cascade-Form Structures
9.3.4 Parallel-Form Structures
9.3.5 Lattice and Lattice-Ladder Structures for IIR Systems
9.4 Representation of Numbers
9.4.1 Fixed-Point Representation of Numbers
9.4.2 Binary Floating-Point Representation of Numbers
9.4.3 Errors Resulting from Rounding and Truncation
9.5 Quantization of Filter Coefficients
9.5.1 Analysis of Sensitivity to Quantization of Filter Coefficients
9.5.2 Quantization of Coefficients in FIR Filters
9.6 Round-Off Effects in Digital Filters
9.6.1 Limit-Cycle Oscillations in Recursive Systems
9.6.2 Scaling to Prevent Overflow
9.6.3 Statistical Characterization of Quantization Effects in Fixed-Point
Realizations of Digital Filters
9.7 Summary and References
Problems
10 Design of Digital Filters
10.1 General Considerations
10.1.1 Causality and Its Implications
10.1.2 Characteristics of Practical Frequency-Selective Filters
10.2 Design of FIR Filters
10.2.1 Symmetric and Antisymmetric FIR Filters
10.2.2 Design of Linear-Phase FIR Filters Using Windows
10.2.3 Design of Linear-Phase FIR Filters by the Frequency-Sampling
Method
10.2.4 Design of Optimum Equiripple Linear-Phase FIR Filters
10.2.5 Design of FIR Differentiators
10.2.6 Design of Hilbert Transformers
10.2.7 Comparison of Design Methods for Linear-Phase FIR Filters
10.3 Design of IIR Filters From Analog Filters
10.3.1 IIR Filter Design by Approximation of Derivatives
10.3.2 IIR Filter Design by Impulse Invariance
10.3.3 IIR Filter Design by the Bilinear Transformation
10.3.4 Characteristics of Commonly Used Analog Filters
10.3.5 Some Examples of Digital Filter Designs Based on the Bilinear
Transformation
10.4 Frequency Transformations
10.4.1 Frequency Transformations in the Analog Domain
10.4.2 Frequency Transformations in the Digital Domain
10.5 Summary and References
Problems
11 Multirate Digital Signal Processing 
11.1 Introduction
11.2 Decimation by a Factor D
11.3 Interpolation by a Factor I
11.4 Sampling Rate Conversion by a Rational Factor I/D
11.5 Implementation of Sampling Rate Conversion
11.5.1 Polyphase Filter Structures
11.5.2 Interchange of Filters and Downsamplers/Upsamplers
11.5.3 Sampling Rate Conversion with Cascaded Integrator Comb Filters
11.5.4 Polyphase Structures for Decimation and Interpolation Filters
11.5.5 Structures for Rational Sampling Rate Conversion
11.6 Multistage Implementation of Sampling Rate Conversion
11.7 Sampling Rate Conversion of Bandpass Signals
11.8 Sampling Rate Conversion by an Arbitrary Factor
11.8.1 Arbitrary Resampling with Polyphase Interpolators
11.8.2 Arbitrary Resampling with Farrow Filter Structures
11.9 Applications of Multirate Signal Processing
11.9.1 Design of Phase Shifters
11.9.2 Interfacing of Digital Systems with Different Sampling Rates
11.9.3 Implementation of Narrowband Lowpass Filters
11.9.4 Subband Coding of Speech Signals
11.10 Digital Filter Banks
11.10.1 Polyphase Structures of Uniform Filter Banks
11.10.2 Transmultiplexers
11.11 Two-Channel Quadrature Mirror Filter Bank
11.11.1 Elimination of Aliasing
11.11.2 Condition for Perfect Reconstruction
11.11.3 Polyphase Form of the QMF Bank
11.11.4 Linear Phase FIR QMF Bank
11.11.5 IIR QMF Bank
11.11.6 Perfect Reconstruction Two-Channel FIR QMF Bank
11.11.7 Two-Channel QMF Banks in Subband Coding
11.12 M-Channel QMF Bank
11.12.1 Alias-Free and Perfect Reconstruction Condition
11.12.2 Polyphase Form of the M -Channel QMF Bank
11.13 Summary and References
Problems
12 Linear Prediction and Optimum Linear Filters
12.1 Random Signals, Correlation Functions, and Power Spectra
12.1.1 Random Processes
12.1.2 Stationary Random Processes
12.1.3 Statistical (Ensemble) Averages
12.1.4 Statistical Averages for Joint Random Processes
12.1.5 Power Density Spectrum
12.1.6 Discrete-Time Random Signals
12.1.7 Time Averages for a Discrete-Time Random Process
12.1.8 Mean-Ergodic Process
12.1.9 Correlation-Ergodic Processes
12.2 Innovations Representation of a Stationary Random Process
12.2.1 Rational Power Spectra
12.2.2 Relationships Between the Filter Parameters and the
Autocorrelation Sequence
12.3 Forward and Backward Linear Prediction
12.3.1 Forward Linear Prediction
12.3.2 Backward Linear Prediction
12.3.3 The Optimum Reflection Coefficients for the Lattice Forward and
Backward Predictors
12.3.4 Relationship of an AR Process to Linear Prediction
12.4 Solution of the Normal Equations
12.4.1 The Levinson-Durbin Algorithm
12.4.2 The Schur Algorithm
12.5 Properties of the Linear Prediction-Error Filters
12.6 AR Lattice and ARMA Lattice-Ladder Filters
12.6.1 AR Lattice Structure
12.6.2 ARMA Processes and Lattice-Ladder Filters
12.7 Wiener Filters for Filtering and Prediction
12.7.1 FIR Wiener Filter
12.7.2 Orthogonality Principle in Linear Mean-Square Estimation
12.7.3 IIR Wiener Filter
12.7.4 Noncausal Wiener Filter
12.8 Summary and References
Problems
13 Adaptive Filters
13.1 Applications of Adaptive Filters
13.1.1 System Identification or System Modeling
13.1.2 Adaptive Channel Equalization
13.1.3 Echo Cancellation in Data Transmission over Telephone Channels
13.1.4 Suppression of Narrowband Interference in a Wideband Signal
13.1.5 Adaptive Line Enhancer
13.1.6 Adaptive Noise Cancelling
13.1.7 Linear Predictive Coding of Speech Signals
13.1.8 Adaptive Arrays
13.2 Adaptive Direct-Form FIR Filters-The LMS Algorithm
13.2.1 Minimum Mean-Square-Error Criterion
13.2.2 The LMS Algorithm
13.2.3 Related Stochastic Gradient Algorithms
13.2.4 Properties of the LMS Algorithm
13.3 Adaptive Direct-Form Filters-RLS Algorithms
13.3.1 RLS Algorithm
13.3.2 The LDU Factorization and Square-Root Algorithms
13.3.3 Fast RLS Algorithms
13.3.4 Properties of the Direct-Form RLS Algorithms
13.4 Adaptive Lattice-Ladder Filters
13.4.1 Recursive Least-Squares Lattice-Ladder Algorithms
13.4.2 Other Lattice Algorithms
13.4.3 Properties of Lattice-Ladder Algorithms
13.5 Summary and References
Problems
14 Power Spectrum Estimation
14.1 Estimation of Spectra from Finite-Duration Observations of Signals
14.1.1 Computation of the Energy Density Spectrum
14.1.2 Estimation of the Autocorrelation and Power Spectrum of Random
Signals: The Periodogram
14.1.3 The Use of the DFT in Power Spectrum Estimation
14.2 Nonparametric Methods for Power Spectrum Estimation
14.2.1 The Bartlett Method: Averaging Periodograms
14.2.2 The Welch Method: Averaging Modified Periodograms
14.2.3 The Blackman and Tukey Method: Smoothing the Periodogram
14.2.4 Performance Characteristics of Nonparametric Power Spectrum
Estimators
14.2.5 Computational Requirements of Nonparametric Power Spectrum
Estimates
14.3 Parametric Methods for Power Spectrum Estimation
14.3.1 Relationships Between the Autocorrelation and the Model
Parameters
14.3.2 The Yule-Walker Method for the AR Model Parameters
14.3.3 The Burg Method for the AR Model Parameters
14.3.4 Unconstrained Least-Squares Method for the AR Model
Parameters
14.3.5 Sequential Estimation Methods for the AR Model Parameters
14.3.6 Selection of AR Model Order
14.3.7 MA Model for Power Spectrum Estimation
14.3.8 ARMA Model for Power Spectrum Estimation
14.3.9 Some Experimental Results
14.4 Filter Bank Methods
14.4.1 Filter Bank Realization of the Periodogram
14.4.2 Minimum Variance Spectral Estimates
14.5 Eigenanalysis Algorithms for Spectrum Estimation
14.5.1 Pisarenko Harmonic Decomposition Method
14.5.2 Eigen-decomposition of the Autocorrelation Matrix for Sinusoids in
White Noise
14.5.3 MUSIC Algorithm
14.5.4 ESPRIT Algorithm
14.5.5 Order Selection Criteria
14.5.6 Experimental Results
14.6 Summary and References
Problems
A Random Number Generators
B Tables of Transition Coefficients for the Design of Linear-Phase
FIR Filters
References and Bibliography
Answers to Selected Problems
Index


Digital Signal Processing A Practical Approach by E.C. Ifeachor, B.W. Jervis


Book Name: Digital Signal Processing A Practical Approach
Author: E.C. Ifeachor, B.W. Jervis
Pages: 862

Click here to download

Digital Signal Processing SIGNALS SYSTEMS AND FILTERS by Andreas Antoniou


Book Name: Digital Signal Processing SIGNALS SYSTEMS AND FILTERS
Author: Andreas Antoniou
Pages: 991

Click here to download



Preface
Chapter 1. Introduction to Digital Signal Processing  
1.1 Introduction
1.2 Signals
1.3 Frequency-Domain Representation
1.4 Notation
1.5 Signal Processing
1.6 Analog Filters
1.7 Applications of Analog Filters
1.8 Digital Filters
1.9 Two DSP Applications
1.9.1 Processing of EKG signals
1.9.2 Processing of Stock-Exchange Data
References
Chapter 2. The Fourier Series and Fourier Transform
2.1 Introduction
2.2 Fourier Series
2.2.1 Definition
2.2.2 Particular Forms
2.2.3 Theorems and Properties
2.3 Fourier Transform
2.3.1 Derivation
2.3.2 Particular Forms
2.3.3 Theorems and Properties
References
Problems
Chapter 3. The z Transform 
3.1 Introduction
3.2 Definition of z Transform
3.3 Convergence Properties
3.4 The z Transform as a Laurent Series
3.5 Inverse z Transform
3.6 Theorems and Properties
3.7 Elementary Discrete-Time Signals
3.8 z-Transform Inversion Techniques
3.8.1 Use of Binomial Series
3.8.2 Use of Convolution Theorem
3.8.3 Use of Long Division
3.8.4 Use of Initial-Value Theorem
3.8.5 Use of Partial Fractions
3.9 Spectral Representation of Discrete-Time Signals
3.9.1 Frequency Spectrum
3.9.2 Periodicity of Frequency Spectrum
3.9.3 Interrelations
References
Problems
Chapter 4. Discrete-Time Systems 
4.1 Introduction
4.2 Basic System Properties
4.2.1 Linearity
4.2.2 Time Invariance
4.2.3 Causality
4.3 Characterization of Discrete-Time Systems
4.3.1 Nonrecursive Systems
4.3.2 Recursive Systems
4.4 Discrete-Time System Networks
4.4.1 Network Analysis
4.4.2 Implementation of Discrete-Time Systems
4.4.3 Signal Flow-Graph Analysis
4.5 Introduction to Time-Domain Analysis
4.6 Convolution Summation
4.6.1 Graphical Interpretation
4.6.2 Alternative Classification
4.7 Stability
4.8 State-Space Representation
4.8.1 Computability
4.8.2 Characterization
4.8.3 Time-Domain Analysis
4.8.4 Applications of State-Space Method
References
Problems
Chapter 5. The Application of the z Transform 
5.1 Introduction
5.2 The Discrete-Time Transfer Function
5.2.1 Derivation of H(z) from Difference Equation
5.2.2 Derivation of H(z) from System Network
5.2.3 Derivation of H(z) from State-Space Characterization
5.3 Stability
5.3.1 Constraint on Poles
5.3.2 Constraint on Eigenvalues
5.3.3 Stability Criteria
5.3.4 Test for Common Factors
5.3.5 Schur-Cohn Stability Criterion
5.3.6 Schur-Cohn-Fujiwara Stability Criterion
5.3.7 Jury-Marden Stability Criterion
5.3.8 Lyapunov Stability Criterion
5.4 Time-Domain Analysis
5.5 Frequency-Domain Analysis
5.5.1 Steady-State Sinusoidal Response
5.5.2 Evaluation of Frequency Response
5.5.3 Periodicity of Frequency Response
5.5.4 Aliasing
5.5.5 Frequency Response of Digital Filters
5.6 Transfer Functions for Digital Filters
5.6.1 First-Order Transfer Functions
5.6.2 Second-Order Transfer Functions
5.6.3 Higher-Order Transfer Functions
5.7 Amplitude and Delay Distortion
References
Problems
Chapter 6. The Sampling Process 
6.1 Introduction
6.2 Fourier Transform Revisited
6.2.1 Impulse Functions
6.2.2 Periodic Signals
6.2.3 Unit-Step Function
6.2.4 Generalized Functions
6.3 Interrelation Between the Fourier Series and the Fourier Transform
6.4 Poisson’s Summation Formula
6.5 Impulse-Modulated Signals
6.5.1 Interrelation Between the Fourier and z Transforms
6.5.2 Spectral Relationship Between Discrete- and Continuous-Time Signals
6.6 The Sampling Theorem
6.7 Aliasing
6.8 Graphical Representation of Interrelations
6.9 Processing of Continuous-Time Signals Using Digital Filters
6.10 Practical A/D and D/A Converters
References
Problems
Chapter 7. The Discrete Fourier Transform 
7.1 Introduction
7.2 Definition
7.3 Inverse DFT
7.4 Properties
7.4.1 Linearity
7.4.2 Periodicity
7.4.3 Symmetry
7.5 Interrelation Between the DFT and the z Transform
7.5.1 Frequency-Domain Sampling Theorem
7.5.2 Time-Domain Aliasing
7.6 Interrelation Between the DFT and the CFT
7.6.1 Time-Domain Aliasing
7.7 Interrelation Between the DFT and the Fourier Series
7.8 Window Technique
7.8.1 Continuous-Time Windows
7.8.2 Discrete-Time Windows
7.8.3 Periodic Discrete-Time Windows
7.8.4 Application of Window Technique
7.9 Simplified Notation
7.10 Periodic Convolutions
7.10.1 Time-Domain Periodic Convolution
7.10.2 Frequency-Domain Periodic Convolution
7.11 Fast Fourier-Transform Algorithms
7.11.1 Decimation-in-Time Algorithm
7.11.2 Decimation-in-Frequency Algorithm
7.11.3 Inverse DFT
7.12 Application of the FFT Approach to Signal Processing
7.12.1 Overlap-and-Add Method
7.12.2 Overlap-and-Save Method
References
Problems
Chapter 8. Realization of Digital Filters 
8.1 Introduction
8.2 Realization
8.2.1 Direct Realization
8.2.2 Direct Canonic Realization
8.2.3 State-Space Realization
8.2.4 Lattice Realization
8.2.5 Cascade Realization
8.2.6 Parallel Realization
8.2.7 Transposition
8.3 Implementation
8.3.1 Design Considerations
8.3.2 Systolic Implementations
References
Problems
Chapter 9. Design of Nonrecursive (FIR) Filters 
9.1 Introduction
9.2 Properties of Constant-Delay Nonrecursive Filters
9.2.1 Impulse Response Symmetries
9.2.2 Frequency Response
9.2.3 Location of Zeros
9.3 Design Using the Fourier Series
9.4 Use of Window Functions
9.4.1 Rectangular Window
9.4.2 von Hann and Hamming Windows
9.4.3 Blackman Window
9.4.4 Dolph-Chebyshev Window
9.4.5 Kaiser Window
9.4.6 Prescribed Filter Specifications
9.4.7 Other Windows
9.5 Design Based on Numerical-Analysis Formulas
References
Problems
Chapter 10. Approximations for Analog Filters 
10.1 Introduction
10.2 Basic Concepts
10.2.1 Characterization
10.2.2 Laplace Transform
10.2.3 The Transfer Function
10.2.4 Time-Domain Response
10.2.5 Frequency-Domain Analysis
10.2.6 Ideal and Practical Filters
10.2.7 Realizability Constraints
10.3 Butterworth Approximation
10.3.1 Derivation
10.3.2 Normalized Transfer Function
10.3.3 Minimum Filter Order
10.4 Chebyshev Approximation
10.4.1 Derivation
10.4.2 Zeros of Loss Function
10.4.3 Normalized Transfer Function
10.4.4 Minimum Filter Order
10.5 Inverse-Chebyshev Approximation
10.5.1 Normalized Transfer Function
10.5.2 Minimum Filter Order
10.6 Elliptic Approximation
10.6.1 Fifth-Order Approximation
10.6.2 Nth-Order Approximation (n Odd)
10.6.3 Zeros and Poles of L(−s²)
10.6.4 Nth-Order Approximation (n Even)
10.6.5 Specification Constraint
10.6.6 Normalized Transfer Function
10.7 Bessel-Thomson Approximation
10.8 Transformations
10.8.1 Lowpass-to-Lowpass Transformation
10.8.2 Lowpass-to-Bandpass Transformation
References
Problems
Chapter 11. Design of Recursive (IIR) Filters 
11.1 Introduction
11.2 Realizability Constraints
11.3 Invariant Impulse-Response Method
11.4 Modified Invariant Impulse-Response Method
11.5 Matched-z Transformation Method
11.6 Bilinear-Transformation Method
11.6.1 Derivation
11.6.2 Mapping Properties of Bilinear Transformation
11.6.3 The Warping Effect
11.7 Digital-Filter Transformations
11.7.1 General Transformation
11.7.2 Lowpass-to-Lowpass Transformation
11.7.3 Lowpass-to-Bandstop Transformation
11.7.4 Application
11.8 Comparison Between Recursive and Nonrecursive Designs
References
Problems
Chapter 12. Recursive (IIR) Filters Satisfying Prescribed Specifications
12.1 Introduction
12.2 Design Procedure
12.3 Design Formulas
12.3.1 Lowpass and Highpass Filters
12.3.2 Bandpass and Bandstop Filters
12.3.3 Butterworth Filters
12.3.4 Chebyshev Filters
12.3.5 Inverse-Chebyshev Filters
12.3.6 Elliptic Filters
12.4 Design Using the Formulas and Tables
12.5 Constant Group Delay
12.5.1 Delay Equalization
12.5.2 Zero-Phase Filters
12.6 Amplitude Equalization
References
Problems
Chapter 13. Random Signals 
13.1 Introduction
13.2 Random Variables
13.2.1 Probability-Distribution Function
13.2.2 Probability-Density Function
13.2.3 Uniform Probability Density
13.2.4 Gaussian Probability Density
13.2.5 Joint Distributions
13.2.6 Mean Values and Moments
13.3 Random Processes
13.3.1 Notation
13.4 First- and Second-Order Statistics
13.5 Moments and Autocorrelation
13.6 Stationary Processes
13.7 Frequency-Domain Representation
13.8 Discrete-Time Random Processes
13.9 Filtering of Discrete-Time Random Signals
References
Problems
Chapter 14. Effects of Finite Word Length in Digital Filters
14.1 Introduction
14.2 Number Representation
14.2.1 Binary System
14.2.2 Fixed-Point Arithmetic
14.2.3 Floating-Point Arithmetic
14.2.4 Number Quantization
14.3 Coefficient Quantization
14.4 Low-Sensitivity Structures
14.4.1 Case I
14.4.2 Case II
14.5 Product Quantization
14.6 Signal Scaling
14.6.1 Method A
14.6.2 Method B
14.6.3 Types of Scaling
14.6.4 Application of Scaling
14.7 Minimization of Output Roundoff Noise
14.8 Application of Error-Spectrum Shaping
14.9 Limit-Cycle Oscillations
14.9.1 Quantization Limit Cycles
14.9.2 Overflow Limit Cycles
14.9.3 Elimination of Quantization Limit Cycles
14.9.4 Elimination of Overflow Limit Cycles
References
Problems
Chapter 15. Design of Nonrecursive Filters Using Optimization Methods
15.1 Introduction
15.2 Problem Formulation
15.2.1 Lowpass and Highpass Filters
15.2.2 Bandpass and Bandstop Filters
15.2.3 Alternation Theorem
15.3 Remez Exchange Algorithm
15.3.1 Initialization of Extremals
15.3.2 Location of Maxima of the Error Function
15.3.3 Computation of |E(ω)| and Pc(ω)
15.3.4 Rejection of Superfluous Potential Extremals
15.3.5 Computation of Impulse Response
15.4 Improved Search Methods
15.4.1 Selective Step-by-Step Search
15.4.2 Cubic Interpolation
15.4.3 Quadratic Interpolation
15.4.4 Improved Formulation
15.5 Efficient Remez Exchange Algorithm
15.6 Gradient Information
15.6.1 Property 1
15.6.2 Property 2
15.6.3 Property 3
15.6.4 Property 4
15.6.5 Property 5
15.7 Prescribed Specifications
15.8 Generalization
15.8.1 Antisymmetrical Impulse Response and Odd Filter Length
15.8.2 Even Filter Length
15.9 Digital Differentiators
15.9.1 Problem Formulation
15.9.2 First Derivative
15.9.3 Prescribed Specifications
15.10 Arbitrary Amplitude Responses
15.11 Multiband Filters
References
Additional References
Problems
Chapter 16. Design of Recursive Filters Using Optimization Methods
16.1 Introduction
16.2 Problem Formulation
16.3 Newton’s Method
16.4 Quasi-Newton Algorithms
16.4.1 Basic Quasi-Newton Algorithm
16.4.2 Updating Formulas for Matrix Sk+1
16.4.3 Inexact Line Searches
16.4.4 Practical Quasi-Newton Algorithm
16.5 Minimax Algorithms
16.6 Improved Minimax Algorithms
16.7 Design of Recursive Filters
16.7.1 Objective Function
16.7.2 Gradient Information
16.7.3 Stability
16.7.4 Minimum Filter Order
16.7.5 Use of Weighting
16.8 Design of Recursive Delay Equalizers
References
Additional References
Problems
Chapter 17. Wave Digital Filters 
17.1 Introduction
17.2 Sensitivity Considerations
17.3 Wave Network Characterization
17.4 Element Realizations
17.4.1 Impedances
17.4.2 Voltage Sources
17.4.3 Series Wire Interconnection
17.4.4 Parallel Wire Interconnection
17.4.5 2-Port Adaptors
17.4.6 Transformers
17.4.7 Unit Elements
17.4.8 Circulators
17.4.9 Resonant Circuits
17.4.10 Realizability Constraint
17.5 Lattice Wave Digital Filters
17.5.1 Analysis
17.5.2 Alternative Lattice Configuration
17.5.3 Digital Realization
17.6 Ladder Wave Digital Filters
17.7 Filters Satisfying Prescribed Specifications
17.8 Frequency-Domain Analysis
17.9 Scaling
17.10 Elimination of Limit-Cycle Oscillations
17.11 Related Synthesis Methods
17.12 A Cascade Synthesis Based on the Wave Characterization
17.12.1 Generalized-Immittance Converters
17.12.2 Analog G-CGIC Configuration
17.12.3 Digital G-CGIC Configuration
17.12.4 Cascade Synthesis
17.12.5 Signal Scaling
17.12.6 Output Noise
17.13 Choice of Structure
References
Problems
Chapter 18. Digital Signal Processing Applications 
18.1 Introduction
18.2 Sampling-Frequency Conversion
18.2.1 Decimators
18.2.2 Interpolators
18.2.3 Sampling Frequency Conversion by a Noninteger Factor
18.2.4 Design Considerations
18.3 Quadrature-Mirror-Image Filter Banks
18.3.1 Operation
18.3.2 Elimination of Aliasing Errors
18.3.3 Design Considerations
18.3.4 Perfect Reconstruction
18.4 Hilbert Transformers
18.4.1 Design of Hilbert Transformers
18.4.2 Single-Sideband Modulation
18.4.3 Sampling of Bandpassed Signals
18.5 Adaptive Digital Filters
18.5.1 Wiener Filters
18.5.2 Newton Algorithm
18.5.3 Steepest-Descent Algorithm
18.5.4 Least-Mean-Square Algorithm
18.5.5 Recursive Filters
18.5.6 Applications
18.6 Two-Dimensional Digital Filters
18.6.1 Two-Dimensional Convolution
18.6.2 Two-Dimensional z Transform
18.6.3 Two-Dimensional Transfer Function
18.6.4 Stability
18.6.5 Frequency-Domain Analysis
18.6.6 Types of 2-D Filters
18.6.7 Approximations
18.6.8 Applications
References
Additional References
Problems
Appendix A. Complex Analysis
A.1 Introduction
A.2 Complex Numbers
A.2.1 Complex Arithmetic
A.2.2 De Moivre’s Theorem
A.2.3 Euler’s Formula
A.2.4 Exponential Form
A.2.5 Vector Representation
A.2.6 Spherical Representation
A.3 Functions of a Complex Variable
A.3.1 Polynomials
A.3.2 Inverse Algebraic Functions
A.3.3 Trigonometric Functions and Their Inverses
A.3.4 Hyperbolic Functions and Their Inverses
A.3.5 Multi-Valued Functions
A.3.6 Periodic Function
A.3.7 Rational Algebraic Functions
A.4 Basic Principles of Complex Analysis
A.4.1 Limit
A.4.2 Differentiability
A.4.3 Analyticity
A.4.4 Zeros
A.4.5 Singularities
A.4.6 Zero-Pole Plots
A.5 Series
A.6 Laurent Theorem
A.7 Residue Theorem
A.8 Analytic Continuation
A.9 Conformal Transformations
References
Appendix B. Elliptic Functions
B.1 Introduction
B.2 Elliptic Integral of the First Kind
B.3 Elliptic Functions
B.4 Imaginary Argument
B.5 Formulas
B.6 Periodicity
B.7 Transformation
B.8 Series Representation
References
Index 

Discrete-Time Signal Processing by Alan V. Oppenheim, Ronald W. Schafer


Book Name: Discrete-Time Signal Processing
Author: Alan V. Oppenheim, Ronald W. Schafer
Edition: Third
Pages: 1137

Click here to download

Content

Preface
The Companion Website
The Cover
Acknowledgments
1 Introduction
2 Discrete-Time Signals and Systems
2.0 Introduction
2.1 Discrete-Time Signals
2.2 Discrete-Time Systems
2.2.1 Memoryless Systems
2.2.2 Linear Systems
2.2.3 Time-Invariant Systems
2.2.4 Causality
2.2.5 Stability
2.3 LTI Systems
2.4 Properties of Linear Time-Invariant Systems
2.5 Linear Constant-Coefficient Difference Equations
2.6 Frequency-Domain Representation of Discrete-Time Signals and Systems
2.6.1 Eigenfunctions for Linear Time-Invariant Systems
2.6.2 Suddenly Applied Complex Exponential Inputs
2.7 Representation of Sequences by Fourier Transforms
2.8 Symmetry Properties of the Fourier Transform
2.9 Fourier Transform Theorems
2.9.1 Linearity of the Fourier Transform
2.9.2 Time Shifting and Frequency Shifting Theorem
2.9.3 Time Reversal Theorem
2.9.4 Differentiation in Frequency Theorem
2.9.6 The Convolution Theorem
2.9.7 The Modulation or Windowing Theorem
2.10 Discrete-Time Random Signals
2.11 Summary
Problems
3 The z-Transform 
3.0 Introduction
3.1 z-Transform
3.2 Properties of the ROC for the z-Transform
3.3 The Inverse z-Transform
3.3.1 Inspection Method
3.3.2 Partial Fraction Expansion
3.3.3 Power Series Expansion
3.4 z-Transform Properties
3.4.1 Linearity
3.4.2 Time Shifting
3.4.3 Multiplication by an Exponential Sequence
3.4.4 Differentiation of X(z)
3.4.5 Conjugation of a Complex Sequence
3.4.6 Time Reversal
3.4.7 Convolution of Sequences
3.4.8 Summary of Some z-Transform Properties
3.5 z-Transforms and LTI Systems
3.6 The Unilateral z-Transform
3.7 Summary
Problems
4 Sampling of Continuous-Time Signals
4.0 Introduction
4.1 Periodic Sampling
4.2 Frequency-Domain Representation of Sampling
4.3 Reconstruction of a Bandlimited Signal from Its Samples
4.4 Discrete-Time Processing of Continuous-Time Signals
4.4.1 Discrete-Time LTI Processing of Continuous-Time Signals
4.4.2 Impulse Invariance
4.5 Continuous-Time Processing of Discrete-Time Signals
4.6 Changing the Sampling Rate Using Discrete-Time Processing
4.6.1 Sampling Rate Reduction by an Integer Factor
4.6.2 Increasing the Sampling Rate by an Integer Factor
4.6.3 Simple and Practical Interpolation Filters
4.6.4 Changing the Sampling Rate by a Noninteger Factor
4.7 Multirate Signal Processing
4.7.1 Interchange of Filtering with Compressor/Expander
4.7.2 Multistage Decimation and Interpolation
4.7.3 Polyphase Decompositions
4.7.4 Polyphase Implementation of Decimation Filters
4.7.5 Polyphase Implementation of Interpolation Filters
4.7.6 Multirate Filter Banks
4.8 Digital Processing of Analog Signals
4.8.1 Prefiltering to Avoid Aliasing
4.8.2 A/D Conversion
4.8.3 Analysis of Quantization Errors
4.8.4 D/A Conversion
4.9 Oversampling and Noise Shaping in A/D and D/A Conversion
4.9.1 Oversampled A/D Conversion with Direct Quantization
4.9.2 Oversampled A/D Conversion with Noise Shaping
4.9.3 Oversampling and Noise Shaping in D/A Conversion
4.10 Summary
Problems
5 Transform Analysis of Linear Time-Invariant Systems 
5.0 Introduction
5.1 The Frequency Response of LTI Systems
5.1.1 Frequency Response Phase and Group Delay
5.1.2 Illustration of Effects of Group Delay and Attenuation
5.2 System Functions—Linear Constant-Coefficient Difference Equations
5.2.1 Stability and Causality
5.2.2 Inverse Systems
5.2.3 Impulse Response for Rational System Functions
5.3 Frequency Response for Rational System Functions
5.3.1 Frequency Response of 1st-Order Systems
5.3.2 Examples with Multiple Poles and Zeros
5.4 Relationship between Magnitude and Phase
5.5 All-Pass Systems
5.6 Minimum-Phase Systems
5.6.1 Minimum-Phase and All-Pass Decomposition
5.6.2 Frequency-Response Compensation of Non-Minimum-Phase
Systems
5.6.3 Properties of Minimum-Phase Systems
5.7 Linear Systems with Generalized Linear Phase
5.7.1 Systems with Linear Phase
5.7.2 Generalized Linear Phase
5.7.3 Causal Generalized Linear-Phase Systems
5.7.4 Relation of FIR Linear-Phase Systems to Minimum-Phase Systems
5.8 Summary
Problems

6 Structures for Discrete-Time Systems
6.0 Introduction
6.1 Block Diagram Representation of Linear Constant-Coefficient
Difference Equations
6.2 Signal Flow Graph Representation
6.3 Basic Structures for IIR Systems
6.3.1 Direct Forms
6.3.2 Cascade Form
6.3.3 Parallel Form
6.3.4 Feedback in IIR Systems
6.4 Transposed Forms
6.5 Basic Network Structures for FIR Systems
6.5.1 Direct Form
6.5.2 Cascade Form
6.5.3 Structures for Linear-Phase FIR Systems
6.6 Lattice Filters
6.6.1 FIR Lattice Filters
6.6.2 All-Pole Lattice Structure
6.6.3 Generalization of Lattice Systems
6.7 Overview of Finite-Precision Numerical Effects
6.7.1 Number Representations
6.7.2 Quantization in Implementing Systems
6.8 The Effects of Coefficient Quantization
6.8.1 Effects of Coefficient Quantization in IIR Systems
6.8.2 Example of Coefficient Quantization in an Elliptic Filter
6.8.3 Poles of Quantized 2nd-Order Sections
6.8.4 Effects of Coefficient Quantization in FIR Systems
6.8.5 Example of Quantization of an Optimum FIR Filter
6.8.6 Maintaining Linear Phase
6.9 Effects of Round-off Noise in Digital Filters
6.9.1 Analysis of the Direct Form IIR Structures
6.9.2 Scaling in Fixed-Point Implementations of IIR Systems
6.9.3 Example of Analysis of a Cascade IIR Structure
6.9.4 Analysis of Direct-Form FIR Systems
6.9.5 Floating-Point Realizations of Discrete-Time Systems
6.10 Zero-Input Limit Cycles in Fixed-Point Realizations of IIR
Digital Filters
6.10.1 Limit Cycles Owing to Round-off and Truncation
6.10.2 Limit Cycles Owing to Overflow
6.10.3 Avoiding Limit Cycles
6.11 Summary
Problems
7 Filter Design Techniques 
7.0 Introduction
7.1 Filter Specifications
7.2 Design of Discrete-Time IIR Filters from Continuous-Time Filters
7.2.1 Filter Design by Impulse Invariance
7.2.2 Bilinear Transformation
7.3 Discrete-Time Butterworth, Chebyshev and Elliptic Filters
7.3.1 Examples of IIR Filter Design
7.4 Frequency Transformations of Lowpass IIR Filters
7.5 Design of FIR Filters by Windowing
7.5.1 Properties of Commonly Used Windows
7.5.2 Incorporation of Generalized Linear Phase
7.5.3 The Kaiser Window Filter Design Method
7.6 Examples of FIR Filter Design by the Kaiser Window Method
7.6.1 Lowpass Filter
7.6.2 Highpass Filter
7.6.3 Discrete-Time Differentiators
7.7 Optimum Approximations of FIR Filters
7.7.1 Optimal Type I Lowpass Filters
7.7.2 Optimal Type II Lowpass Filters
7.7.3 The Parks–McClellan Algorithm
7.7.4 Characteristics of Optimum FIR Filters
7.8 Examples of FIR Equiripple Approximation
7.8.1 Lowpass Filter
7.8.2 Compensation for Zero-Order Hold
7.8.3 Bandpass Filter
7.9 Comments on IIR and FIR Discrete-Time Filters
7.10 Design of an Upsampling Filter
7.11 Summary
Problems
8 The Discrete Fourier Transform 
8.0 Introduction
8.1 Representation of Periodic Sequences: The Discrete Fourier Series
8.2 Properties of the DFS
8.2.1 Linearity
8.2.2 Shift of a Sequence
8.2.3 Duality
8.2.4 Symmetry Properties
8.2.5 Periodic Convolution
8.2.6 Summary of Properties of the DFS Representation of Periodic
Sequences
8.3 The Fourier Transform of Periodic Signals
8.4 Sampling the Fourier Transform
8.5 Fourier Representation of Finite-Duration Sequences
8.6 Properties of the DFT
8.6.1 Linearity
8.6.2 Circular Shift of a Sequence
8.6.3 Duality
8.6.4 Symmetry Properties
8.6.5 Circular Convolution
8.6.6 Summary of Properties of the DFT
8.7 Linear Convolution Using the DFT
8.7.1 Linear Convolution of Two Finite-Length Sequences
8.7.2 Circular Convolution as Linear Convolution with Aliasing
8.7.3 Implementing Linear Time-Invariant Systems Using the DFT
8.8 The Discrete Cosine Transform (DCT)
8.8.1 Definitions of the DCT
8.8.2 Definition of the DCT-1 and DCT-2
8.8.3 Relationship between the DFT and the DCT-1
8.8.4 Relationship between the DFT and the DCT-2
8.8.5 Energy Compaction Property of the DCT-2
8.8.6 Applications of the DCT
8.9 Summary
Problems
9 Computation of the Discrete Fourier Transform 
9.0 Introduction
9.1 Direct Computation of the Discrete Fourier Transform
9.1.1 Direct Evaluation of the Definition of the DFT
9.1.2 The Goertzel Algorithm
9.1.3 Exploiting both Symmetry and Periodicity
9.2 Decimation-in-Time FFT Algorithms
9.2.1 Generalization and Programming the FFT
9.2.2 In-Place Computations
9.2.3 Alternative Forms
9.3 Decimation-in-Frequency FFT Algorithms
9.3.1 In-Place Computation
9.3.2 Alternative Forms
9.4 Practical Considerations
9.4.1 Indexing
9.4.2 Coefficients
9.5 More General FFT Algorithms
9.5.1 Algorithms for Composite Values of N
9.5.2 Optimized FFT Algorithms
9.6 Implementation of the DFT Using Convolution
9.6.1 Overview of the Winograd Fourier Transform Algorithm
9.6.2 The Chirp Transform Algorithm
9.7 Effects of Finite Register Length
9.8 Summary
Problems
10 Fourier Analysis of Signals Using the Discrete Fourier Transform
10.0 Introduction
10.1 Fourier Analysis of Signals Using the DFT
10.2 DFT Analysis of Sinusoidal Signals
10.2.1 The Effect of Windowing
10.2.2 Properties of the Windows
10.2.3 The Effect of Spectral Sampling
10.3 The Time-Dependent Fourier Transform
10.3.1 Invertibility of X[n, λ)
10.3.2 Filter Bank Interpretation of X[n, λ)
10.3.3 The Effect of the Window
10.3.4 Sampling in Time and Frequency
10.3.5 The Overlap–Add Method of Reconstruction
10.3.6 Signal Processing Based on the Time-Dependent Fourier
Transform
10.3.7 Filter Bank Interpretation of the Time-Dependent Fourier
Transform
10.4 Examples of Fourier Analysis of Nonstationary Signals
10.4.1 Time-Dependent Fourier Analysis of Speech Signals
10.4.2 Time-Dependent Fourier Analysis of Radar Signals
10.5 Fourier Analysis of Stationary Random Signals: the Periodogram
10.5.1 The Periodogram
10.5.2 Properties of the Periodogram
10.5.3 Periodogram Averaging
10.5.4 Computation of Average Periodograms Using the DFT
10.5.5 An Example of Periodogram Analysis
10.6 Spectrum Analysis of Random Signals
10.6.1 Computing Correlation and Power Spectrum Estimates Using
the DFT
10.6.2 Estimating the Power Spectrum of Quantization Noise
10.6.3 Estimating the Power Spectrum of Speech
10.7 Summary
Problems
11 Parametric Signal Modeling
11.0 Introduction
11.1 All-Pole Modeling of Signals
11.1.1 Least-Squares Approximation
11.1.2 Least-Squares Inverse Model
11.1.3 Linear Prediction Formulation of All-Pole Modeling
11.2 Deterministic and Random Signal Models
11.2.1 All-Pole Modeling of Finite-Energy Deterministic Signals
11.2.2 Modeling of Random Signals
11.2.3 Minimum Mean-Squared Error
11.2.4 Autocorrelation Matching Property
11.2.5 Determination of the Gain Parameter G
11.3 Estimation of the Correlation Functions
11.3.1 The Autocorrelation Method
11.3.2 The Covariance Method
11.3.3 Comparison of Methods
11.4 Model Order
11.5 All-Pole Spectrum Analysis
11.5.1 All-Pole Analysis of Speech Signals
11.5.2 Pole Locations
11.5.3 All-Pole Modeling of Sinusoidal Signals
11.6 Solution of the Autocorrelation Normal Equations
11.6.1 The Levinson–Durbin Recursion
11.6.2 Derivation of the Levinson–Durbin Algorithm
11.7 Lattice Filters
11.7.1 Prediction Error Lattice Network
11.7.2 All-Pole Model Lattice Network
11.7.3 Direct Computation of the k-Parameters
11.8 Summary
Problems
12 Discrete Hilbert Transforms
12.0 Introduction
12.1 Real- and Imaginary-Part Sufficiency of the Fourier Transform
12.2 Sufficiency Theorems for Finite-Length Sequences
12.3 Relationships Between Magnitude and Phase
12.4 Hilbert Transform Relations for Complex Sequences
12.4.1 Design of Hilbert Transformers
12.4.2 Representation of Bandpass Signals
12.4.3 Bandpass Sampling
12.5 Summary
Problems
13 Cepstrum Analysis and Homomorphic Deconvolution 
13.0 Introduction
13.1 Definition of the Cepstrum
13.2 Definition of the Complex Cepstrum
13.3 Properties of the Complex Logarithm
13.4 Alternative Expressions for the Complex Cepstrum
13.5 Properties of the Complex Cepstrum
13.5.1 Exponential Sequences
13.5.2 Minimum-Phase and Maximum-Phase Sequences
13.5.3 Relationship Between the Real Cepstrum and the Complex
Cepstrum
13.6 Computation of the Complex Cepstrum
13.6.1 Phase Unwrapping
13.6.2 Computation of the Complex Cepstrum Using the Logarithmic
Derivative
13.6.3 Minimum-Phase Realizations for Minimum-Phase Sequences
13.6.4 Recursive Computation of the Complex Cepstrum for Minimum- and Maximum-Phase Sequences
13.6.5 The Use of Exponential Weighting
13.7 Computation of the Complex Cepstrum Using Polynomial Roots
13.8 Deconvolution Using the Complex Cepstrum
13.8.1 Minimum-Phase/Allpass Homomorphic Deconvolution
13.8.2 Minimum-Phase/Maximum-Phase Homomorphic
Deconvolution
13.9 The Complex Cepstrum for a Simple Multipath Model
13.9.1 Computation of the Complex Cepstrum by z-Transform
Analysis
13.9.2 Computation of the Cepstrum Using the DFT
13.9.3 Homomorphic Deconvolution for the Multipath Model
13.9.4 Minimum-Phase Decomposition
13.9.5 Generalizations
13.10 Applications to Speech Processing
13.10.1 The Speech Model
13.10.2 Example of Homomorphic Deconvolution of Speech
13.10.3 Estimating the Parameters of the Speech Model
13.10.4 Applications
13.11 Summary
Problems
A Random Signals
B Continuous-Time Filters
C Answers to Selected Basic Problems
Bibliography

Index

PRINCIPLES OF DIGITAL SIGNAL PROCESSING

OBJECTIVES:

  •  To learn discrete Fourier transform and its properties
  •  To know the characteristics of IIR and FIR filters and to learn the design of infinite and finite impulse response filters for filtering out undesired signals
  •  To understand Finite word length effects
  •  To study the concept of Multirate and adaptive filters


UNIT I DISCRETE FOURIER TRANSFORM 
Discrete Signals and Systems – A Review – Introduction to DFT – Properties of DFT – Circular Convolution – Filtering methods based on DFT – FFT Algorithms – Decimation-in-time Algorithms, Decimation-in-frequency Algorithms – Use of FFT in Linear Filtering.
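As a quick illustration of the circular-convolution property named in this unit, here is a minimal NumPy sketch (illustrative only; the sequences and lengths are assumptions, not part of the syllabus): multiplying two N-point DFTs and taking the inverse DFT yields the circular convolution of the sequences, which is the basis of FFT-based linear filtering.

import numpy as np

# Circular convolution of two length-N sequences computed via the DFT
# (multiplication in the frequency domain <-> circular convolution in time).
def circular_convolve(x, h):
    N = max(len(x), len(h))
    X = np.fft.fft(x, N)                 # N-point DFT of x
    H = np.fft.fft(h, N)                 # N-point DFT of h
    return np.real(np.fft.ifft(X * H))   # inverse DFT of the product

x = np.array([1.0, 2.0, 3.0, 4.0])       # assumed test sequences
h = np.array([1.0, 1.0, 0.0, 0.0])
print(circular_convolve(x, h))           # prints [5. 3. 5. 7.]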
UNIT II IIR FILTER DESIGN 
Structures of IIR – Analog filter design – Discrete time IIR filter from analog filter – IIR filter design by Impulse Invariance, Bilinear transformation, Approximation of derivatives – (LPF, HPF, BPF, BRF) filter design using frequency translation.
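To make the bilinear-transformation design route concrete, the following is a minimal SciPy sketch (illustrative only; the sampling rate, cutoff, and filter order are assumed values): an analog Butterworth lowpass prototype is designed and then mapped to a digital IIR filter, with the cutoff pre-warped so the digital response matches the desired frequency.

import numpy as np
from scipy import signal

fs = 8000.0                                   # sampling rate in Hz (assumed)
fc = 1000.0                                   # desired cutoff in Hz (assumed)
wc = 2 * fs * np.tan(np.pi * fc / fs)         # pre-warped analog cutoff (rad/s)
b_a, a_a = signal.butter(2, wc, btype='low', analog=True)   # analog prototype
b_d, a_d = signal.bilinear(b_a, a_a, fs=fs)   # bilinear transformation to z-domain
print("digital IIR coefficients:", b_d, a_d)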
UNIT III FIR FILTER DESIGN 
Structures of FIR – Linear phase FIR filter – Fourier Series – Filter design using windowing techniques (Rectangular Window, Hamming Window, Hanning Window), Frequency sampling techniques – Finite word length effects in digital filters: Errors, Limit Cycle, Noise Power Spectrum.
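The window method of FIR design listed here can be sketched in a few lines of NumPy (illustrative only; the filter length and cutoff are assumed values): the ideal lowpass impulse response is truncated to N taps and shaped with a Hamming window, giving a linear-phase FIR filter.

import numpy as np

N = 31                                        # filter length, odd (assumed)
wc = 0.3 * np.pi                              # cutoff in rad/sample (assumed)
n = np.arange(N) - (N - 1) / 2                # indices centred on the group delay
h_ideal = (wc / np.pi) * np.sinc(wc * n / np.pi)   # ideal lowpass impulse response
h = h_ideal * np.hamming(N)                   # apply the Hamming window
print(np.round(h, 4))                         # windowed FIR coefficients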
UNIT IV FINITE WORDLENGTH EFFECTS
Fixed point and floating point number representations – ADC – Quantization – Truncation and Rounding errors – Quantization noise – Coefficient quantization error – Product quantization error – Overflow error – Roundoff noise power – Limit cycle oscillations due to product round off and overflow errors – Principle of scaling
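The quantization-noise part of this unit can be checked numerically with a minimal NumPy sketch (illustrative only; the word length and test signal are assumed): a sinusoid is rounded to B-bit precision and the measured noise power is compared with the usual Δ²/12 model.

import numpy as np

B = 8                                     # word length in bits (assumed)
n = np.arange(10000)
x = np.sin(2 * np.pi * 0.01 * n)          # test signal in [-1, 1)
step = 2.0 ** (-(B - 1))                  # quantization step for B-bit signed data
xq = np.round(x / step) * step            # rounding quantizer
noise = xq - x
print("measured noise power:", np.mean(noise ** 2))
print("model   noise power :", step ** 2 / 12)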
UNIT V DSP APPLICATIONS 
 Multirate signal processing: Decimation, Interpolation, Sampling rate conversion by a rational factor – Adaptive Filters: Introduction, Applications of adaptive filtering to equalization.
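For the multirate part of this unit, sampling rate conversion by a rational factor can be demonstrated with a minimal SciPy sketch (illustrative only; the rates, tone frequency, and up/down factors are assumed): the signal is upsampled by 3, filtered, and downsampled by 2 using a polyphase implementation.

import numpy as np
from scipy import signal

fs_in = 8000.0                                # input sampling rate in Hz (assumed)
t = np.arange(0, 0.01, 1 / fs_in)
x = np.sin(2 * np.pi * 440.0 * t)             # 440 Hz test tone (assumed)
y = signal.resample_poly(x, up=3, down=2)     # rate conversion by 3/2 (to 12 kHz)
print(len(x), "->", len(y))                   # 80 -> 120 samples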
TOTAL (L:45+T:15): 60 PERIODS
OUTCOMES: Upon completion of the course, students will be able to

  • apply DFT for the analysis of digital signals & systems
  • design IIR and FIR filters
  • characterize finite word length effects on filters
  • design multirate filters
  • apply Adaptive Filters to equalization

TEXT BOOK:
1. John G. Proakis & Dimitris G. Manolakis, “Digital Signal Processing – Principles, Algorithms & Applications”, Fourth Edition, Pearson Education / Prentice Hall, 2007.

REFERENCES:
1. Emmanuel C. Ifeachor & Barrie W. Jervis, “Digital Signal Processing”, Second Edition, Pearson Education / Prentice Hall, 2002.
2. Sanjit K. Mitra, “Digital Signal Processing – A Computer Based Approach”, Tata Mc Graw Hill, 2007.
3. A. V. Oppenheim, R. W. Schafer and J. R. Buck, “Discrete-Time Signal Processing”, 8th Indian Reprint, Pearson, 2004.
4. Andreas Antoniou, “Digital Signal Processing”, Tata Mc Graw Hill, 2006.

An Introduction to Analog and Digital Communications by Simon S. Haykin, Michael Moher


Book Name: An Introduction to Analog and Digital Communications
Author: Simon S. Haykin, Michael Moher 
Edition: Second
Pages: 537

Click here to download

Content

1 Introduction 
1.1 Historical Background
1.2 Applications
1.3 Primary Resources and Operational Requirements
1.4 Underpinning Theories of Communication Systems
1.5 Concluding Remarks
2 Fourier Representation of Signals and Systems 
2.1 The Fourier Transform
2.2 Properties of the Fourier Transform
2.3 The Inverse Relationship Between Time and Frequency
2.4 Dirac Delta Function
2.5 Fourier Transforms of Periodic Signals
2.6 Transmission of Signals Through Linear Systems: Convolution
Revisited
2.7 Ideal Low-pass Filters
2.8 Correlation and Spectral Density: Energy Signals
2.9 Power Spectral Density
2.10 Numerical Computation of the Fourier Transform
2.11 Theme Example: Twisted Pairs for Telephony
2.12 Summary and Discussion
Additional Problems
Advanced Problems
3 Amplitude Modulation 
3.1 Amplitude Modulation
3.2 Virtues, Limitations, and Modifications of Amplitude Modulation
3.3 Double Sideband-Suppressed Carrier Modulation
3.4 Costas Receiver
3.5 Quadrature-Carrier Multiplexing
3.6 Single-Sideband Modulation
3.7 Vestigial Sideband Modulation
3.8 Baseband Representation of Modulated Waves and Band-Pass
Filters
3.9 Theme Examples
3.10 Summary and Discussion
Additional Problems
Advanced Problems
4 Angle Modulation 
4.1 Basic Definitions
4.2 Properties of Angle-Modulated Waves
4.3 Relationship between PM and FM Waves
4.4 Narrow-Band Frequency Modulation
4.5 Wide-Band Frequency Modulation
4.6 Transmission Bandwidth of FM Waves
4.7 Generation of FM Waves
4.8 Demodulation of FM Signals
4.9 Theme Example: FM Stereo Multiplexing
4.10 Summary and Discussion
Additional Problems
Advanced Problems
5 Pulse Modulation:Transition from Analog to Digital
Communications 
5.1 Sampling Process
5.2 Pulse-Amplitude Modulation
5.3 Pulse-Position Modulation
5.4 Completing the Transition from Analog to Digital
5.5 Quantization Process
5.6 Pulse-Code Modulation
5.7 Delta Modulation
5.8 Differential Pulse-Code Modulation
5.9 Line Codes
5.10 Theme Examples
5.11 Summary and Discussion
Additional Problems
Advanced Problems
6 Baseband Data Transmission 
6.1 Baseband Transmission of Digital Data
6.2 The Intersymbol Interference Problem
6.3 The Nyquist Channel
6.4 Raised-Cosine Pulse Spectrum
6.5 Baseband Transmission of M-ary Data
6.6 The Eye Pattern
6.7 Computer Experiment: Eye Diagrams for Binary and Quaternary
Systems
6.8 Theme Example: Equalization
6.9 Summary and Discussion
Additional Problems
Advanced Problems
7 Digital Band-Pass Modulation Techniques 
7.1 Some Preliminaries
7.2 Binary Amplitude-Shift Keying
7.3 Phase-Shift Keying
7.4 Frequency-Shift Keying
7.5 Summary of Three Binary Signaling Schemes
7.6 Noncoherent Digital Modulation Schemes
7.7 M-ary Digital Modulation Schemes
7.8 Mapping of Digitally Modulated Waveforms onto Constellations
of Signal Points
7.9 Theme Examples
7.10 Summary and Discussion
Additional Problems
Advanced Problems
Computer Experiments
8 Random Signals and Noise 
8.1 Probability and Random Variables
8.2 Expectation
8.3 Transformation of Random Variables
8.4 Gaussian Random Variables
8.5 The Central Limit Theorem
8.6 Random Processes
8.7 Correlation of Random Processes
8.8 Spectra of Random Signals
8.9 Gaussian Processes
8.10 White Noise
8.11 Narrowband Noise
8.12 Summary and Discussion
Additional Problems
Advanced Problems
Computer Experiments
9 Noise in Analog Communications
9.1 Noise in Communication Systems
9.2 Signal-to-Noise Ratios
9.3 Band-Pass Receiver Structures
9.4 Noise in Linear Receivers Using Coherent Detection
9.5 Noise in AM Receivers Using Envelope Detection
9.6 Noise in SSB Receivers
9.7 Detection of Frequency Modulation (FM)
9.8 FM Pre-emphasis and De-emphasis
9.9 Summary and Discussion
Additional Problems
Advanced Problems
Computer Experiments
10 Noise in Digital Communications 
10.1 Bit Error Rate
10.2 Detection of a Single Pulse in Noise
10.3 Optimum Detection of Binary PAM in Noise
10.4 Optimum Detection of BPSK
10.5 Detection of QPSK and QAM in Noise
10.6 Optimum Detection of Binary FSK
10.7 Differential Detection in Noise
10.8 Summary of Digital Performance
10.9 Error Detection and Correction
10.10 Summary and Discussion
Additional Problems
Advanced Problems
Computer Experiments
11 System and Noise Calculations 
11.1 Electrical Noise
11.2 Noise Figure
11.3 Equivalent Noise Temperature
11.4 Cascade Connection of Two-Port Networks
11.5 Free-Space Link Calculations
11.6 Terrestrial Mobile Radio
11.7 Summary and Discussion
Additional Problems
Advanced Problems
APPENDIX 1 POWER RATIOS AND DECIBEL
APPENDIX 2 FOURIER SERIES
APPENDIX 3 BESSEL FUNCTIONS
APPENDIX 4 THE Q-FUNCTION AND ITS RELATIONSHIP TO THE ERROR FUNCTION
APPENDIX 5 SCHWARZ’S INEQUALITY
APPENDIX 6 MATHEMATICAL TABLES
APPENDIX 7 MATLAB SCRIPTS FOR COMPUTER EXPERIMENTS TO PROBLEMS IN CHAPTERS 7-10
APPENDIX 8 ANSWERS TO DRILL PROBLEMS
GLOSSARY
BIBLIOGRAPHY

INDEX 

Digital Communications Fundamentals and Applications by Bernard Sklar



Book Name: Digital Communications Fundamentals and Applications
Author: Bernard Sklar
Edition: Second
Pages: 1032

Click here to download


Contents
PREFACE
1 SIGNALS AND SPECTRA
1.1 Digital Communication Signal Processing,
1.1.1 Why Digital?,
1.1.2 Typical Block Diagram and Transformations,
1.1.3 Basic Digital Communication Nomenclature,
1.1.4 Digital versus Analog Performance Criteria,
1.2 Classification of Signals,
1.2.1 Deterministic and Random Signals,
1.2.2 Periodic and Nonperiodic Signals,
1.2.3 Analog and Discrete Signals,
1.2.4 Energy and Power Signals,
1.2.5 The Unit Impulse Function,
1.3 Spectral Density,
1.3.1 Energy Spectral Density,
1.3.2 Power Spectral Density,
1.4 Autocorrelation,
1.4.1 Autocorrelation of an Energy Signal,
1.4.2 Autocorrelation of a Periodic (Power) Signal,
1.5 Random Signals,
1.5.1 Random Variables,
1.5.2 Random Processes,
1.5.3 Time Averaging and Ergodicity,
1.5.4 Power Spectral Density of a Random Process,
1.5.5 Noise in Communication Systems,
1.6 Signal Transmission through Linear Systems,
1.6.1 Impulse Response,
1.6.2 Frequency Transfer Function,
1.6.3 Distortionless Transmission,
1.6.4 Signals, Circuits, and Spectra,
1.7 Bandwidth of Digital Data,
1.7.1 Baseband versus Bandpass,
1.7.2 The Bandwidth Dilemma,
1.8 Conclusion,
2 FORMATTING AND BASEBAND MODULATION
2.1 Baseband Systems,
2.2 Formatting Textual Data (Character Coding),
2.3 Messages, Characters, and Symbols,
2.3.1 Example of Messages, Characters, and Symbols,
2.4 Formatting Analog Information,
2.4.1 The Sampling Theorem,
2.4.2 Aliasing,
2.4.3 Why Oversample?
2.4.4 Signal Interface for a Digital System,
2.5 Sources of Corruption,
2.5.1 Sampling and Quantizing Effects,
2.5.2 Channel Effects,
2.5.3 Signal-to-Noise Ratio for Quantized Pulses,
2.6 Pulse Code Modulation,
2.7 Uniform and Nonuniform Quantization,
2.7.1 Statistics of Speech Amplitudes,
2.7.2 Nonuniform Quantization,
2.7.3 Companding Characteristics,
2.8 Baseband Modulation,
2.8.1 Waveform Representation of Binary Digits,
2.8.2 PCM Waveform Types,
2.8.3 Spectral Attributes of PCM Waveforms,
2.8.4 Bits per PCM Word and Bits per Symbol,
2.8.5 M-ary Pulse Modulation Waveforms,
2.9 Correlative Coding,
2.9.1 Duobinary Signaling,
2.9.2 Duobinary Decoding,
2.9.3 Precoding,
2.9.4 Duobinary Equivalent Transfer Function,
2.9.5 Comparison of Binary with Duobinary Signaling,
2.9.6 Polybinary Signaling,
2.10 Conclusion,
3 BASEBAND DEMODULATION/DETECTION
3.1 Signals and Noise,
3.1.1 Error-Performance Degradation in Communication Systems,
3.1.2 Demodulation and Detection,
3.1.3 A Vectorial View of Signals and Noise,
3.1.4 The Basic SNR Parameter for Digital Communication Systems,
3.1.5 Why Eb/N0 Is a Natural Figure of Merit,
3.2 Detection of Binary Signals in Gaussian Noise,
3.2.1 Maximum Likelihood Receiver Structure,
3.2.2 The Matched Filter,
3.2.3 Correlation Realization of the Matched Filter,
3.2.4 Optimizing Error Performance,
3.2.5 Error Probability Performance of Binary Signaling,
3.3 Intersymbol Interference,
3.3.1 Pulse Shaping to Reduce ISI,
3.3.2 Two Types of Error-Performance Degradation,
3.3.3 Demodulation/Detection of Shaped Pulses,
3.4 Equalization,
3.4.1 Channel Characterization,
3.4.2 Eye Pattern
3.4.3 Equalizer Filter Types,
3.4.4 Preset and Adaptive Equalization,
3.4.5 Filter Update Rate,
3.5 Conclusion,
4 BANDPASS MODULATION AND DEMODULATION/DETECTION
4.1 Why Modulate?
4.2 Digital Bandpass Modulation Techniques,
4.2.1 Phasor Representation of a Sinusoid,
4.2.2 Phase Shift Keying,
4.2.3 Frequency Shift Keying,
4.2.4 Amplitude Shift Keying,
4.2.5 Amplitude Phase Keying,
4.2.6 Waveform Amplitude Coefficient,
4.3 Detection of Signals in Gaussian Noise,
4.3.1 Decision Regions,
4.3.2 Correlation Receiver,
4.4 Coherent Detection,
4.4.1 Coherent Detection of PSK,
4.4.2 Sampled Matched Filter,
4.4.3 Coherent Detection of Multiple Phase Shift Keying,
4.4.4 Coherent Detection of FSK,
4.5 Noncoherent Detection,
4.5.1 Detection of Differential PSK,
4.5.2 Binary Differential PSK Example,
4.5.3 Noncoherent Detection of FSK,
4.5.4 Required Tone Spacing for Noncoherent Orthogonal FSK,
4.6 Complex Envelope,
4.6.1 Quadrature Implementation of a Modulator,
4.6.2 D8PSK Modulator Example,
4.6.3 D8PSK Demodulator Example,
4.7 Error Performance for Binary Systems,
4.7.1 Probability of Bit Error for Coherently Detected BPSK,
4.7.2 Probability of Bit Error for Coherently Detected
Differentially Encoded Binary PSK,
4.7.3 Probability of Bit Error for Coherently Detected
Binary Orthogonal FSK,
4.7.4 Probability of Bit Error for Noncoherently Detected
Binary Orthogonal FSK,
4.7.5 Probability of Bit Error for Binary DPSK,
4.7.6 Comparison of Bit Error Performance for Various
Modulation Types,
4.8 M-ary Signaling and Performance,
4.8.1 Ideal Probability of Bit Error Performance,
4.8.2 M-ary Signaling,
4.8.3 Vectorial View of MPSK Signaling,
4.8.4 BPSK and QPSK Have the Same Bit Error Probability,
4.8.5 Vectorial View of MFSK Signaling,
4.9 Symbol Error Performance for M-ary Systems (M > 2),
4.9.1 Probability of Symbol Error for MPSK,
4.9.2 Probability of Symbol Error for MFSK,
4.9.3 Bit Error Probability versus Symbol Error Probability
for Orthogonal Signals,
4.9.4 Bit Error Probability versus Symbol Error Probability
for Multiple Phase Signaling,
4.9.5 Effects of Intersymbol Interference,
4.10 Conclusion,
5 COMMUNICATIONS LINK ANALYSIS 
5.1 What the System Link Budget Tells the System Engineer,
5.2 The Channel,
5.2.1 The Concept of Free Space,
5.2.2 Error-Performance Degradation,
5.2.3 Sources of Signal Loss and Noise,
5.3 Received Signal Power and Noise Power,
5.3.1 The Range Equation,
5.3.2 Received Signal Power as a Function of Frequency,
5.3.3 Path Loss is Frequency Dependent,
5.3.4 Thermal Noise Power,
5.4 Link Budget Analysis,
5.4.1 Two Eb/N0 Values of Interest,
5.4.2 Link Budgets are Typically Calculated in Decibels,
5.4.3 How Much Link Margin is Enough?
5.4.4 Link Availability,
5.5 Noise Figure, Noise Temperature, and System Temperature,
5.5.1 Noise Figure,
5.5.2 Noise Temperature,
5.5.3 Line Loss,
5.5.4 Composite Noise Figure and Composite Noise Temperature,
5.5.5 System Effective Temperature,
5.5.6 Sky Noise Temperature,
5.6 Sample Link Analysis,
5.6.1 Link Budget Details,
5.6.2 Receiver Figure of Merit,
5.6.3 Received Isotropic Power,
5.7 Satellite Repeaters,
5.7.1 Nonregenerative Repeaters,
5.7.2 Nonlinear Repeater Amplifiers,
5.8 System Trade-Offs,
5.9 Conclusion,
6 CHANNEL CODING: PART 1 
6.1 Waveform Coding and Structured Sequences,
6.1.1 Antipodal and Orthogonal Signals,
6.1.2 M-ary Signaling,
6.1.3 Waveform Coding,
6.1.4 Waveform-Coding System Example,
6.2 Types of Error Control,
6.2.1 Terminal Connectivity,
6.2.2 Automatic Repeat Request,
6.3 Structured Sequences,
6.3.1 Channel Models,
6.3.2 Code Rate and Redundancy,
6.3.3 Parity Check Codes,
6.3.4 Why Use Error-Correction Coding?
6.4 Linear Block Codes,
6.4.1 Vector Spaces,
6.4.2 Vector Subspaces,
6.4.3 A (6, 3) Linear Block Code Example,
6.4.4 Generator Matrix,
6.4.5 Systematic Linear Block Codes,
6.4.6 Parity-Check Matrix,
6.4.7 Syndrome Testing,
6.4.8 Error Correction,
6.4.9 Decoder Implementation,
6.5 Error-Detecting and Correcting Capability,
6.5.1 Weight and Distance of Binary Vectors,
6.5.2 Minimum Distance of a Linear Code,
6.5.3 Error Detection and Correction,
6.5.4 Visualization of a 6-Tuple Space,
6.5.5 Erasure Correction,
6.6 Usefulness of the Standard Array,
6.6.1 Estimating Code Capability,
6.6.2 An (n, k) Example,
6.6.3 Designing the (8, 2) Code,
6.6.4 Error Detection versus Error Correction Trade-Offs,
6.6.5 The Standard Array Provides Insight,
6.7 Cyclic Codes,
6.7.1 Algebraic Structure of Cyclic Codes,
6.7.2 Binary Cyclic Code Properties,
6.7.3 Encoding in Systematic Form,
6.7.4 Circuit for Dividing Polynomials,
6.7.5 Systematic Encoding with an (n - k)-Stage Shift Register,
6.7.6 Error Detection with an (n - k)-Stage Shift Register,
6.8 Well-Known Block Codes,
6.8.1 Hamming Codes,
6.8.2 Extended Golay Code,
6.8.3 BCH Codes,
6.9 Conclusion,
7 CHANNEL CODING: PART 2 
7.1 Convolutional Encoding,
7.2 Convolutional Encoder Representation,
7.2.1 Connection Representation,
7.2.2 State Representation and the State Diagram,
7.2.3 The Tree Diagram,
7.2.4 The Trellis Diagram,
7.3 Formulation of the Convolutional Decoding Problem,
7.3.1 Maximum Likelihood Decoding,
7.3.2 Channel Models: Hard versus Soft Decisions,
7.3.3 The Viterbi Convolutional Decoding Algorithm,
7.3.4 An Example of Viterbi Convolutional Decoding,
7.3.5 Decoder Implementation,
7.3.6 Path Memory and Synchronization,
7.4 Properties of Convolutional Codes,
7.4.1 Distance Properties of Convolutional Codes,
7.4.2 Systematic and Nonsystematic Convolutional Codes,
7.4.3 Catastrophic Error Propagation in Convolutional Codes,
7.4.4 Performance Bounds for Convolutional Codes,
7.4.5 Coding Gain,
7.4.6 Best Known Convolutional Codes,
7.4.7 Convolutional Code Rate Trade-Off,
7.4.8 Soft-Decision Viterbi Decoding,
7.5 Other Convolutional Decoding Algorithms,
7.5.1 Sequential Decoding,
7.5.2 Comparisons and Limitations of Viterbi and Sequential Decoding,
7.5.3 Feedback Decoding,
7.6 Conclusion,
8 CHANNEL CODING: PART 3 
8.1 Reed-Solomon Codes,
8.1.1 Reed-Solomon Error Probability,
8.1.2 Why R-S Codes Perform Well Against Burst Noise,
8.1.3 R-S Performance as a Function of Size,
Redundancy, and Code Rate,
8.1.4 Finite Fields
8.1.5 Reed-Solomon Encoding,
8.1.6 Reed-Solomon Decoding,
8.2 Interleaving and Concatenated Codes,
8.2.1 Block Interleaving,
8.2.2 Convolutional Interleaving,
8.2.3 Concatenated Codes,
8.3 Coding and Interleaving Applied to the Compact Disc
Digital Audio System,
8.3.1 CIRC Encoding,
8.3.2 CIRC Decoding,
8.3.3 Interpolation and Muting,
8.4 Turbo Codes,
8.4.1 Turbo Code Concepts,
8.4.2 Log-Likelihood Algebra,
8.4.3 Product Code Example,
8.4.4 Encoding with Recursive Systematic Codes,
8.4.5 A Feedback Decoder,
8.4.6 The MAP Decoding Algorithm,
8.4.7 MAP Decoding Example,
8.5 Conclusion,
Appendix 8A The Sum of Log-Likelihood Ratios,
9 MODULATION AND CODING TRADE-OFFS 
9.1 Goals of the Communications System Designer,
9.2 Error Probability Plane,
9.3 Nyquist Minimum Bandwidth,
9.4 Shannon-Hartley Capacity Theorem,
9.4.1 Shannon Limit,
9.4.2 Entropy,
9.4.3 Equivocation and Effective Transmission Rate,
9.5 Bandwidth Efficiency Plane,
9.5.1 Bandwidth Efficiency of MPSK and MFSK Modulation,
9.5.2 Analogies Between Bandwidth-Efficiency
and Error Probability Planes,
9.6 Modulation and Coding Trade-Offs,
9.7 Defining, Designing, and Evaluating Digital
Communication Systems,
9.7.1 M-ary Signaling,
9.7.2 Bandwidth-Limited Systems,
9.7.3 Power-Limited Systems,
9.7.4 Requirements for MPSK and MFSK Signaling,
9.7.5 Bandwidth-Limited Uncoded System Example,
9.7.6 Power-Limited Uncoded System Example,
9.7.7 Bandwidth-Limited and Power-Limited
Coded System Example,
9.8 Bandwidth-Efficient Modulation,
9.8.1 QPSK and Offset QPSK Signaling,
9.8.2 Minimum Shift Keying,
9.8.3 Quadrature Amplitude Modulation,
9.9 Modulation and Coding for Bandlimited Channels,
9.9.1 Commercial Telephone Modems,
9.9.2 Signal Constellation Boundaries
9.9.3 Higher Dimensional Signal Constellations,
9.9.4 Higher-Density Lattice Structures,
9.9.5 Combined Gain: N-Sphere Mapping and Dense Lattice,
9.10 Trellis-Coded Modulation,
9.10.1 The Idea Behind Trellis-Coded Modulation (TCM),
9.10.2 TCM Encoding,
9.10.3 TCM Decoding,
9.10.4 Other Trellis Codes,
9.10.5 Trellis-Coded Modulation Example,
9.10.6 Multi-Dimensional Trellis-Coded Modulation,
9.11 Conclusion,
10 SYNCHRONIZATION 
10.1 Introduction,
10.1.1 Synchronization Defined,
10.1.2 Costs versus Benefits,
10.1.3 Approach and Assumptions,
10.2 Receiver Synchronization,
10.2.1 Frequency and Phase Synchronization,
10.2.2 Symbol Synchronization—Discrete Symbol Modulations,
10.2.3 Synchronization with Continuous-Phase Modulations (CPM),
10.2.4 Frame Synchronization,
10.3 Network Synchronization,
10.3.1 Open-Loop Transmitter Synchronization,
10.3.2 Closed-Loop Transmitter Synchronization,
10.4 Conclusion,
11 MULTIPLEXING AND MULTIPLE ACCESS
11.1 Allocation of the Communications Resource,
11.1.1 Frequency-Division Multiplexing/Multiple Access,
11.1.2 Time-Division Multiplexing/Multiple Access,
11.1.3 Communications Resource Channelization,
11.1.4 Performance Comparison of FDMA and TDMA,
11.1.5 Code-Division Multiple Access,
11.1.6 Space-Division and Polarization-Division Multiple Access,
11.2 Multiple Access Communications System and Architecture,
11.2.1 Multiple Access Information Flow,
11.2.2 Demand Assignment Multiple Access,
11.3 Access Algorithms,
11.3.1 ALOHA
11.3.2 Slotted ALOHA,
11.3.3 Reservation-ALOHA,
11.3.4 Performance Comparison of S-ALOHA and R-ALOHA,
11.3.5 Polling Techniques,
11.4 Multiple Access Techniques Employed with INTELSAT,
11.4.1 Preassigned FDM/FM/FDMA or MCPC Operation,
11.4.2 MCPC Modes of Accessing an INTELSAT Satellite,
11.4.3 SPADE Operation,
11.4.4 TDMA in INTELSAT,
11.4.5 Satellite-Switched TDMA in INTELSAT,
11.5 Multiple Access Techniques for Local Area Networks,
11.5.1 Carrier-Sense Multiple Access Networks,
11.5.2 Token-Ring Networks,
11.5.3 Performance Comparison of CSMA/CD and Token-Ring Networks,
11.6 Conclusion,
12 SPREAD-SPECTRUM TECHNIQUES 
12.1 Spread-Spectrum Overview,
12.1.1 The Beneficial Attributes of Spread-Spectrum Systems,
12.1.2 A Catalog of Spreading Techniques,
12.1.3 Model for Direct-Sequence Spread-Spectrum
Interference Rejection,
12.1.4 Historical Background,
12.2 Pseudonoise Sequences,
12.2.1 Randomness Properties,
12.2.2 Shift Register Sequences,
12.2.3 PN Autocorrelation Function,
12.3 Direct-Sequence Spread-Spectrum Systems,
12.3.1 Example of Direct Sequencing,
12.3.2 Processing Gain and Performance,
12.4 Frequency Hopping Systems,
12.4.1 Frequency Hopping Example,
12.4.2 Robustness,
12.4.3 Frequency Hopping with Diversity,
12.4.4 Fast Hopping versus Slow Hopping,
12.4.5 FFH/MFSK Demodulator,
12.4.6 Processing Gain,
12.5 Synchronization,
12.5.1 Acquisition,
12.5.2 Tracking,
12.6 Jamming Considerations,
12.6.1 The Jamming Game,
12.6.2 Broadband Noise Jamming,
12.6.3 Partial-Band Noise Jamming,
12.6.4 Multiple-Tone Jamming,
12.6.5 Pulse Jamming,
12.6.6 Repeat-Back Jamming,
12.6.7 BLADES System,
12.7 Commercial Applications,
12.7.1 Code-Division Multiple Access,
12.7.2 Multipath Channels,
12.7.3 The FCC Part 15 Rules for Spread-Spectrum Systems,
12.7.4 Direct Sequence versus Frequency Hopping,
12.8 Cellular Systems,
12.8.1 Direct Sequence CDMA,
12.8.2 Analog FM versus TDMA versus CDMA,
12.8.3 Interference-Limited versus Dimension-Limited Systems,
12.8.4 IS-95 CDMA Digital Cellular System,
12.9 Conclusion,
13 SOURCE CODING 
13.1 Sources,
13.1.1 Discrete Sources,
13.1.2 Waveform Sources,
13.2 Amplitude Quantizing,
13.2.1 Quantizing Noise,
13.2.2 Uniform Quantizing,
13.2.3 Saturation,
13.2.4 Dithering,
13.2.5 Nonuniform Quantizing,
13.3 Differential Pulse-Code Modulation,
13.3.1 One-Tap Prediction,
13.3.2 N-Tap Prediction,
13.3.3 Delta Modulation,
13.3.4 Sigma-Delta Modulation,
13.3.5 Sigma-Delta A-to-D Converter (ADC),
13.3.6 Sigma-Delta D-to-A Converter (DAC),
13.4 Adaptive Prediction,
13.4.1 Forward Prediction,
13.4.2 Synthesis/Analysis Coding,
13.5 Block Coding,
13.5.1 Vector Quantizing,
13.6 Transform Coding,
13.6.1 Quantization for Transform Coding,
13.6.2 Subband Coding,
13.7 Source Coding for Digital Data,
13.7.1 Properties of Codes,
13.7.2 Huffman Codes,
13.7.3 Run-Length Codes,
13.8 Examples of Source Coding,
13.8.1 Audio Compression,
13.8.2 Image Compression,
13.9 Conclusion,
14 ENCRYPTION AND DECRYPTION 
14.1 Models, Goals, and Early Cipher Systems,
14.1.1 A Model of the Encryption and Decryption Process,
14.1.2 System Goals,
14.1.3 Classic Threats,
14.1.4 Classic Ciphers,
14.2 The Secrecy of a Cipher System,
14.2.1 Perfect Secrecy,
14.2.2 Entropy and Equivocation,
14.2.3 Rate of a Language and Redundancy,
14.2.4 Unicity Distance and Ideal Secrecy,
14.3 Practical Security,
14.3.1 Confusion and Diffusion,
14.3.2 Substitution,
14.3.3 Permutation,
14.3.4 Product Cipher Systems,
14.3.5 The Data Encryption Standard,
14.4 Stream Encryption,
14.4.1 Example of Key Generation Using a Linear
Feedback Shift Register,
14.4.2 Vulnerabilities of Linear Feedback Shift Registers,
14.4.3 Synchronous and Self-Synchronous Stream
Encryption Systems,
14.5 Public Key Cryptosystems,
14.5.1 Signature Authentication using a Public Key Cryptosystem,
14.5.2 A Trapdoor One-Way Function,
14.5.3 The Rivest-Shamir-Adelman Scheme,
14.5.4 The Knapsack Problem,
14.5.5 A Public Key Cryptosystem based on a Trapdoor Knapsack,
14.6 Pretty Good Privacy,
14.6.1 Triple-DES, CAST, and IDEA,
14.6.2 Diffie-Hellman (Elgamal Variation) and RSA,
14.6.3 PGP Message Encryption,
14.6.4 PGP Authentication and Signature,
14.7 Conclusion,
15 FADING CHANNELS 
15.1 The Challenge of Communicating over Fading Channels,
15.2 Characterizing Mobile-Radio Propagation,
15.2.1 Large-Scale Fading,
15.2.2 Small-Scale Fading,
15.3 Signal Time-Spreading,
15.3.1 Signal Time-Spreading Viewed in the Time-Delay Domain,
15.3.2 Signal Time-Spreading Viewed in the Frequency Domain,
15.3.3 Examples of Flat Fading and Frequency-Selective Fading,
15.4 Time Variance of the Channel Caused by Motion,
15.4.1 Time Variance Viewed in the Time Domain,
15.4.2 Time Variance Viewed in the Doppler-Shift Domain,
15.4.3 Performance over a Slow- and Flat-Fading Rayleigh Channel,
15.5 Mitigating the Degradation Effects of Fading,
15.5.1 Mitigation to Combat Frequency-Selective Distortion,
15.5.2 Mitigation to Combat Fast-Fading Distortion,
15.5.3 Mitigation to Combat Loss in SNR,
15.5.4 Diversity Techniques,
15.5.5 Modulation Types for Fading Channels,
15.5.6 The Role of an Interleaver,
15.6 Summary of the Key Parameters Characterizing Fading Channels,
15.6.1 Fast-Fading Distortion: Case 1,
15.6.2 Frequency-Selective Fading Distortion: Case 2,
15.6.3 Fast-Fading and Frequency-Selective Fading Distortion: Case 3,
15.7 Applications: Mitigating the Effects of Frequency-Selective Fading,
15.7.1 The Viterbi Equalizer as Applied to GSM,
15.7.2 The Rake Receiver as Applied to Direct-Sequence
Spread-Spectrum (DS/SS) Systems,
15.8 Conclusion,
A A REVIEW OF FOURIER TECHNIQUES
A.1 Signals, Spectra, and Linear Systems,
A.2 Fourier Techniques for Linear System Analysis,
A.2.1 Fourier Series Transform,
A.2.2 Spectrum of a Pulse Train,
A.2.3 Fourier Integral Transform,
A.3 Fourier Transform Properties,
A.3.1 Time Shifting Property,
A.3.2 Frequency Shifting Property,
A.4 Useful Functions,
A.4.1 Unit Impulse Function,
A.4.2 Spectrum of a Sinusoid
A.5 Convolution,
A.5.1 Graphical Example of Convolution,
A.5.2 Time Convolution Property,
A.5.3 Frequency Convolution Property,
A.5.4 Convolution of a Function with a Unit Impulse,
A.5.5 Demodulation Application of Convolution,
A.6 Tables of Fourier Transforms and Operations,
B FUNDAMENTALS OF STATISTICAL DECISION THEORY 
B.1 Bayes' Theorem,
B.1.1 Discrete Form of Bayes' Theorem,
B.1.2 Mixed Form of Bayes' Theorem,
B.2 Decision Theory,
B.2.1 Components of the Decision Theory Problem,
B.2.2 The Likelihood Ratio Test and the Maximum
A Posteriori Criterion,
B.2.3 The Maximum Likelihood Criterion,
B.3 Signal Detection Example,
B.3.1 The Maximum Likelihood Binary Decision,
B.3.2 Probability of Bit Error,
C RESPONSE OF A CORRELATOR TO WHITE NOISE 
D OFTEN-USED IDENTITIES 
E s-DOMAIN, z-DOMAIN AND DIGITAL FILTERING 
E.1 The Laplace Transform,
E.1.1 Standard Laplace Transforms,
E.1.2 Laplace Transform Properties,
E.1.3 Using the Laplace Transform,
E.1.4 Transfer Function,
E.1.5 RC Circuit Low Pass Filtering,
E.1.6 Poles and Zeroes,
E.1.7 Linear System Stability,
E.2 The z-Transform,
E.2.1 Calculating the z-Transform,
E.2.2 The Inverse z-Transform,
E.3 Digital Filtering,
E.3.1 Digital Filter Transfer Function,
E.3.2 Single Pole Filter Stability,
E.3.3 General Digital Filter Stability,
E.3.4 z-Plane Pole-Zero Diagram and the Unit Circle,
E.3.5 Discrete Fourier Transform of Digital Filter Impulse Response,
E.4 Finite Impulse Response Filter Design,
E.4.1 FIR Filter Design,
E.4.2 The FIR Differentiator,
E.5 Infinite Impulse Response Filter Design,
E.5.1 Backward Difference Operator,
E.5.2 IIR Filter Design using the Bilinear Transform,
E.5.3 The IIR Integrator,
F LIST OF SYMBOLS
INDEX