
Scientific Reasoning: The Bayesian Approach


ISBN-10: 081269578X

ISBN-13: 9780812695786

Edition: 3rd, 2005

Authors: Colin Howson, Peter Urbach

Book details

List price: $40.00
Edition: 3rd
Copyright year: 2005
Publisher: Open Court Publishing Company
Publication date: 3/16/2006
Binding: Paperback
Pages: 470
Size: 5.75" wide x 8.75" long x 0.75" tall
Weight: 1.254 lbs
Language: English

Colin Howson is Professor of Philosophy at the University of Toronto and Emeritus Professor, London School of Economics and Political Science. He is the author of Hume's Problem: Induction and the Justification of Belief (2000), Logic with Trees (1997) and, with Peter Urbach, Scientific Reasoning: The Bayesian Approach (3rd edition, 2006).

Preface to the Second Edition
Bayesian Principles
Introduction
The Problem of Induction
Popper's Attempt to Solve the Problem of Induction
Scientific Method in Practice
Probabilistic Induction: The Bayesian Approach
The Objectivity Ideal
The Plan of the Book
Exercises
The Probability Calculus
Introduction
Some Logical Preliminaries
The Probability Calculus
The Axioms
Two Different Interpretations of the Axioms
Useful Theorems of the Calculus
Random Variables
Kolmogorov's Axioms
Propositions
Infinitary Operations
Countable Additivity
Exercises
Distributions and Densities
Distributions
Probability Densities
Expected Values
The Mean and Standard Deviation
Probabilistic Independence
Conditional Distributions
The Bivariate Normal
The Binomial Distribution
The Weak Law of Large Numbers
Exercises
The Classical and Logical Theories
Introduction
The Classical Theory
The Principle of Indifference
The Rule of Succession
The Principle of Indifference and the Paradoxes
Carnap's Logical Probability Measures
Carnap's c† and c*
The Dependence on A Priori Assumptions
Exercises
Subjective Probability
Degrees of Belief and the Probability Calculus
Betting Quotients and Degrees of Belief
Why Should Degrees of Belief Obey the Probability Calculus?
The Ramsey-de Finetti Theorem
Conditional Betting-Quotients
Fair Odds and Zero Expectations
Fairness and Consistency
Upper and Lower Probabilities
Other Arguments for the Probability Calculus
The Standard Dutch Book Argument
Scoring Rules
Using a Standard
The Cox-Good-Lucas Argument
Introducing Utilities
Conclusion
Exercises
Updating Belief
Bayesian Conditionalisation
Jeffrey Conditionalisation
Generalising Jeffrey's Rule to Partitions
Dutch Books Again
The Principle of Minimum Information
Conclusion
Exercises
Bayesian Induction: Deterministic Theories
Bayesian Versus Non-Bayesian Approaches
The Bayesian Notion of Confirmation
The Application of Bayes's Theorem
Falsifying Hypotheses
Checking a Consequence
The Probability of the Evidence
The Ravens Paradox
The Design of Experiments
The Duhem Problem
The Problem
Lakatos and Kuhn on the Duhem Problem
The Duhem Problem Solved by Bayesian Means
Good Data, Bad Data, and Data Too Good to Be True
Ad Hoc Hypotheses
Some Examples of Ad Hoc Hypotheses
A Standard Account of Adhocness
Popper's Defence of the Adhocness Criterion
Why the Standard Account Must Be Wrong
The Bayesian View of Ad Hoc Theories
The Notion of Independent Evidence
Infinitely Many Theories Compatible with the Data
The Problem
The Bayesian Approach to the Problem
Conclusion
Exercises
Classical Inference in Statistics
Fisher's Theory
Falsificationism in Statistics
Fisher's Theory
Has Fisher's Theory a Rational Foundation?
Which Test-Statistic?
The Chi-Square Test
Sufficient Statistics
Conclusion
The Neyman-Pearson Theory of Significance Tests
An Outline of the Theory
How the Neyman-Pearson Theory Improves on Fisher's
The Choice of Critical Region
The Choice of Test-Statistic and the Use of Sufficient Statistics
Some Problems for the Neyman-Pearson Theory
What Does It Mean to Accept and Reject a Hypothesis?
The Neyman-Pearson Theory as an Account of Inductive Support
A Well-Supported Hypothesis Rejected in a Significance Test
A Subjective Element in Neyman-Pearson Testing: The Choice of Null Hypothesis
A Further Subjective Element: Determining the Outcome Space
Justifying the Stopping Rule
Testing Composite Hypotheses
Conclusion
Exercises
The Classical Theory of Estimation
Introduction
Point Estimation
Sufficient Estimators
Unbiased Estimators
Consistent Estimators
Efficient Estimators
Interval Estimation
Confidence Intervals
The Categorical-Assertion Interpretation of Confidence Intervals
The Subjective-Confidence Interpretation of Confidence Intervals
The Stopping Rule Problem, Again
Prior Knowledge
The Multiplicity of Competing Intervals
Principles of Sampling
Random Sampling
Judgment Sampling
Objections to Judgment Sampling
Some Advantages of Judgment Sampling
Conclusion
Exercises
Statistical Inference in Practice
Causal Hypotheses: Clinical and Agricultural Trials
Introduction: The Problem
Control and Randomization
Significance-Test Justifications for Randomization
The Problem of the Reference Population
Fisher's Argument
Some Difficulties with Fisher's Argument
A Plausible Defence
Why the Plausible Defence Doesn't Work
The Eliminative-Induction Defence of Randomization
Sequential Clinical Trials
Practical and Ethical Considerations
Conclusion
Regression Analysis
Introduction
Simple Linear Regression
The Method of Least Squares
Why Least Squares?
Intuition as a Justification
The Gauss-Markov Justification
The Maximum-Likelihood Justification
Summary
Prediction
Prediction Intervals
Prediction by Confidence Intervals
Making a Further Prediction
Examining the Form of a Regression
Prior Knowledge
Data Analysis
Inspecting Scatter Plots
Outliers
Influential Points
Conclusion
Exercises
The Bayesian Approach to Statistical Inference
Objective Probability
Introduction
Von Mises's Frequency Theory
Relative Frequencies in Collectives
Probabilities in Collectives
Independence in Derived Collectives
Summary of the Main Features of Von Mises's Theory
The Empirical Adequacy of Von Mises's Theory
The Fast-Convergence Argument
The Laws of Large Numbers Argument
The Limits-Occur-Elsewhere-in-Science Argument
Preliminary Conclusion
Popper's Propensity Theory, and Single-Case Probabilities
Popper's Propensity Theory
Jacta Alea Est
The Theory of Objective Chance
A Bayesian Reconstruction of Von Mises's Theory
Are Objective Probabilities Redundant?
Exchangeability and the Existence of Objective Probability
Conclusion
Exercises
Bayesian Induction: Statistical Hypotheses
The Prior Distribution and the Question of Subjectivity
Estimating the Mean of a Normal Population
Estimating a Binomial Proportion
Credible Intervals and Confidence Intervals
The Principle of Stable Estimation
Describing the Evidence
Sufficient Statistics
Methods of Sampling
Testing Causal Hypotheses
A Bayesian Analysis of Clinical Trials
Clinical Trials without Randomization
Conclusion
Exercises
Finale
The Objections to the Subjective Bayesian Theory
Introduction
The Bayesian Theory Is Prejudiced in Favour of Weak Hypotheses
The Prior Probability of Universal Hypotheses Must Be Zero
Probabilistic Induction Is Impossible
The Principal Principle Is Inconsistent (Miller's Paradox)
The Paradox of Ideal Evidence
Hypotheses Cannot Be Supported by Evidence Already Known
Evidence Doesn't Confirm Theories Constructed to Explain It
The Principle of Explanatory Surplus
Prediction Scores Higher Than Accommodation
The Problem of Subjectivism
Entropy, Symmetry, and Objectivity
Simplicity
People Are Not Bayesians
The Dempster-Shafer Theory
Belief Functions
What Are Belief Functions?
Representing Ignorance
Evaluating Probabilities with Imprecise Information
Are We Calibrated?
Reliable Inductive Methods
Finale
Exercises
Bibliography
Index