Introduction and examples
  Introduction
  Why Bayes?
  Estimating the probability of a rare event
  Building a predictive model
  Where we are going
  Discussion and further references

Belief, probability and exchangeability
  Belief functions and probabilities
  Events, partitions and Bayes' rule
  Independence
  Random variables
  Discrete random variables
  Continuous random variables
  Descriptions of distributions
  Joint distributions
  Independent random variables
  Exchangeability
  de Finetti's theorem
  Discussion and further references

One-parameter models
  The binomial model
  Inference for exchangeable binary data
  Confidence regions
  The Poisson model
  Posterior inference
  Example: Birth rates
  Exponential families and conjugate priors
  Discussion and further references

Monte Carlo approximation
  The Monte Carlo method
  Posterior inference for arbitrary functions
  Sampling from predictive distributions
  Posterior predictive model checking
  Discussion and further references

The normal model
  The normal model
  Inference for the mean, conditional on the variance
  Joint inference for the mean and variance
  Bias, variance and mean squared error
  Prior specification based on expectations
  The normal model for non-normal data
  Discussion and further references

Posterior approximation with the Gibbs sampler
  A semiconjugate prior distribution
  Discrete approximations
  Sampling from the conditional distributions
  Gibbs sampling
  General properties of the Gibbs sampler
  Introduction to MCMC diagnostics
  Discussion and further references

The multivariate normal model
  The multivariate normal density
  A semiconjugate prior distribution for the mean
  The inverse-Wishart distribution
  Gibbs sampling of the mean and covariance
  Missing data and imputation
  Discussion and further references

Group comparisons and hierarchical modeling
  Comparing two groups
  Comparing multiple groups
  Exchangeability and hierarchical models
  The hierarchical normal model
  Posterior inference
  Example: Math scores in U.S. public schools
  Prior distributions and posterior approximation
  Posterior summaries and shrinkage
  Hierarchical modeling of means and variances
  Analysis of math score data
  Discussion and further references

Linear regression
  The linear regression model
  Least squares estimation for the oxygen uptake data
  Bayesian estimation for a regression model
  A semiconjugate prior distribution
  Default and weakly informative prior distributions
  Model selection
  Bayesian model comparison
  Gibbs sampling and model averaging
  Discussion and further references

Nonconjugate priors and Metropolis-Hastings algorithms
  Generalized linear models
  The Metropolis algorithm
  The Metropolis algorithm for Poisson regression
  Metropolis, Metropolis-Hastings and Gibbs
  The Metropolis-Hastings algorithm
  Why does the Metropolis-Hastings algorithm work?
  Combining the Metropolis and Gibbs algorithms
  A regression model with correlated errors
  Analysis of the ice core data
  Discussion and further references

Linear and generalized linear mixed effects models
  A hierarchical regression model
  Full conditional distributions
  Posterior analysis of the math score data
  Generalized linear mixed effects models
  A Metropolis-Gibbs algorithm for posterior approximation
  Analysis of tumor location data
  Discussion and further references

Latent variable methods for ordinal data
  Ordered probit regression and the rank likelihood
  Probit regression
  Transformation models and the rank likelihood
  The Gaussian copula model
  Rank likelihood for copula estimation
  Discussion and further references

Exercises

Common distributions

References

Index