Program Evaluation Methods and Case Studies

ISBN-10: 0132275600

ISBN-13: 9780132275606

Edition: 7th 2007 (Revised)

Authors: Emil J. Posavac, Raymond G. Carey

List price: $123.20

Description:

Comprehensive yet accessible, this book provides a practical introduction to the skills, attitudes, and methods required to assess the worth and value of human services offered in public and private organizations in a wide range of fields. Readers are introduced to the need for such activities, the methods for carrying out evaluations, and the essential steps in organizing findings into reports. The book focuses on smaller projects carried out by an internal evaluator (i.e., on the work of people who are closely associated with the service to be evaluated), and is designed to help program planners, developers, and evaluators to work with program staff members who might be threatened by…

Book details

List price: $123.20
Edition: 7th
Copyright year: 2007
Publisher: Prentice Hall PTR
Publication date: 6/26/2006
Binding: Hardcover
Pages: 320
Size: 6.34" wide x 9.06" long x 0.91" tall
Weight: 1.298 lbs
Language: English

Emil J. Posavac (Ph.D., University of Illinois, Champaign) is Professor Emeritus of Psychology at Loyola University of Chicago, where he served as director of the Applied Social Psychology Graduate Program and chairman of the Psychology Department. He has consulted with a number of public and private organizations. He has published over sixty papers and chapters, edited or co-edited six volumes on program evaluation and applied social psychology, and written numerous evaluation reports for health care and educational institutions. He has written a textbook (with Eugene B. Zechmeister) on statistical analysis based on emerging orientations that emphasize a more complete understanding and…

Table of contents

Preface
Program Evaluation: An Overview
Evaluation Tasks That Need to Be Done
Verify That Resources Would Be Devoted to Meeting Unmet Needs
Verify That Implemented Programs Do Provide Services
Examine the Outcomes
Determine Which Programs Produce the Most Favorable Outcomes
Select the Programs That Offer the Most Needed Types of Services
Provide Information to Maintain and Improve Quality
Watch for Unplanned Side Effects
Common Types of Program Evaluations
Assess Needs of the Program Participants
Examine the Process of Meeting the Needs
Measure the Outcomes of the Program
Integrate the Needs, Costs, and Outcomes
Activities Often Confused with Program Evaluation
Different Types of Evaluations for Different Kinds of Programs
Organizations Needing Program Evaluations
Time Frames of Needs
Extensiveness of Programs
Purpose of Program Evaluation
The Roles of Evaluators
A Variety of Work Settings
Comparison of Internal and External Evaluators
Evaluation and Service
Evaluation and Related Activities of Organizations
Summary and Preview
Study Questions
Additional Resource
Planning an Evaluation
An Overview of Evaluation Models
The Traditional Model
Social Science Research Model
Industrial Inspection Model
Black Box Evaluation
Objectives-Based Evaluation
Goal-Free Evaluation
Fiscal Evaluation
Accountability Model
Expert Opinion Model
Naturalistic or Qualitative Model
Success Case Method
Empowerment Evaluation
Theory-Driven Evaluation
An Improvement-Focused Approach
Steps in Preparing to Conduct an Evaluation
Identify the Program and Its Stakeholders
Become Familiar with Information Needs
Plan the Evaluation
Dysfunctional Attitudes Toward Program Evaluation
Assume That the Program Is Perfect
Fear That the Evaluation Will Offend the Staff
Fear That the Evaluation Will Inhibit Innovation
Fear That the Program Will Be Terminated
Fear That Information Will Be Misused
Fear That Qualitative Understanding May Be Supplanted
Fear That Evaluation Drains Program Resources
Fear of Losing Control of the Program
Fear That Evaluation Has Little Impact
The Effect of These Attitudes
Summary and Preview
Study Questions
Additional Resource
Selecting Criteria and Setting Standards
Useful Criteria and Standards
Criteria That Reflect a Program's Purposes
Criteria That the Staff Can Influence
Criteria That Can Be Measured Reliably and Validly
Criteria That Stakeholders Participate in Selecting
Developing Goals and Objectives
How Much Agreement on Goals Is Needed?
Different Types of Goals
Goals That Apply to All Programs
Evaluation Criteria and Evaluation Questions
Does the Program or Plan Match the Values of the Stakeholders?
Does the Program or Plan Match the Needs of the People to Be Served?
Does the Program as Implemented Fulfill the Plans?
Do the Outcomes Achieved Match the Goals?
Using Program Theory
Is the Program Accepted?
Are the Resources Devoted to the Program Being Expended Appropriately?
Some Practical Limitations in Selecting Evaluation Criteria
Evaluation Budget
Time Available for the Project
Criteria That Are Credible to the Stakeholders
Summary and Preview
Study Questions
Additional Resource
Developing Measures
Sources of Data for Evaluation
Intended Beneficiaries of the Program
Providers of Services
Observers
Which Sources Should Be Used?
Good Assessment Procedures
Use Multiple Variables
Use Nonreactive Measures
Use Variables Relevant to Information Needs
Using Multiple Measures in an Evaluation of a Summer Community Program for Youth
Use Valid Measures
Use Reliable Measures
Use Measures That Can Detect Change
Use Cost-Effective Measures
Types of Measures of Evaluation Criteria
Written Surveys and Interviews with Program Participants
Checklists, Tests, and Records
Preparing Special Surveys
Format of a Survey
Preparing Survey Items
Instructions and Pretests
Summary and Preview
Study Questions
Additional Resource
Ethics in Program Evaluation
Standards for the Practice of Evaluation
Ethical Issues Involved in the Treatment of People
Compensating for Ineffective, Novel Treatments
Obtaining Informed Consent
Maintaining Confidentiality
Role Conflicts Facing Evaluators
Recognizing the Needs of Different Stakeholders
Program Managers Are Concerned with Efficiency
Staff Members Seek Assistance in Service Delivery
Clients Want Effective and Appropriate Services
Community Members Want Cost-Effective Programs
The Validity of Evaluations
Valid Measurement Instruments
Skilled Data Collectors
Appropriate Research Design
Adequate Descriptions of Program and Procedures
Avoiding Possible Negative Side Effects of Evaluation Procedures
Can Someone Be Hurt by Inaccurate Findings?
Consider Statistical Type II Errors
Pay Attention to Unplanned Effects
Analyze Implicit Values Held by the Evaluator
Institutional Review Boards and Program Evaluation
Ethical Problems Evaluators Report
Summary and Preview
Study Questions
Additional Resource
The Assessment of Need
Definitions of Need
Sources of Information for the Assessment of Need
Describing the Current Situation
Social Indicators of Need
Community Surveys of Need
Services Already Available
Key Informants
Focus Groups and Open Forums
Inadequate Assessment of Need
Failing to Examine Need
Failing to Examine the Context of Need
Failing to Relate Need to Implementation Plans
Failing to Deal with Ignorance of Need
Using Needs Assessments in Program Planning
Summary and Preview
Study Questions
Additional Resource
Monitoring the Operation of Programs
Monitoring Programs as a Means of Evaluating Programs
What to Summarize with Information Systems
Relevant Information
Actual State of Program
Program Participants
Providers of Services
Program Records and Information Systems
Problems with Agency Records
Increasing the Usefulness of Records
How Records Can Be Used to Monitor Programs
Reporting Information Separately for Each Therapist
Developing Information Systems for Agencies
Threatening Uses of Information Systems
Avoiding Common Problems in Implementing an Information System
Guard Against the Misuse of the Information
Avoid Setting Arbitrary Standards
Avoid Serving the Needs of Only One Group
Avoid Duplicating Records
Avoid Adding to the Work of the Staff
Avoid a Focus on Technology
Summary and Preview
Study Questions
Additional Resource
Qualitative Evaluation Methods
Evaluation Settings Best Served by Qualitative Evaluations
Admission to Graduate Studies
Dissatisfaction with a Library Collection
Evaluating a Political Campaign
Gathering Qualitative Information
The Central Importance of the Observer
Observational Methods
Using Qualitative Methods in an Evaluation of a University Library
Interviewing to Obtain Qualitative Information
Carrying Out Naturalistic Evaluations
Making Unrestricted Observations
Integrating Impressions
Sharing Interpretations
Preparing Reports
Are Qualitative Evaluations Subjective?
Coordinating Qualitative and Quantitative Methods
The Substance of the Evaluation
Getting Insights from the Most Successful Participants
Changing Emphases as Understanding Expands
The Evaluation Questions
Cost of Evaluation
Philosophical Assumptions
Summary and Preview
Study Questions
Additional Resource
Single-Group, Nonexperimental Outcome Evaluations
Single-Group Evaluation Designs
Observe Only After the Program
Observe Before and After the Program
Uses of Single-Group, Descriptive Designs
Did the Participants Meet a Criterion?
Did the Participants Improve?
A Pretest-Posttest Design to Evaluate a Peer-Based Program to Prevent Skin Cancer
Did the Participants Improve Enough?
Relating Change to Service Intensity and Participant Characteristics
Threats to Internal Validity
Actual but Nonprogram-Related Changes in the Participants
Apparent Changes Dependent on Who Was Observed
Changes Related to Methods of Obtaining Observations
Effects of Interactions of These Threats
Internal Validity Threats Are Double-Edged Swords
Construct Validity in Pretest-Posttest Designs
Overinterpreting the Results of Single-Group Designs
Usefulness of Single-Group Designs as Initial Approaches to Program Evaluation
Assessing the Usefulness of Further Evaluations
Correlating Improvement with Other Variables
Preparing the Facility for Further Evaluation
Summary and Preview
Study Questions
Additional Resource
Quasi-Experimental Approaches to Outcome Evaluation
Making Numerous Observations
Time-Series Designs
Patterns of Outcomes Over Time Periods
Analysis of Time-Series Designs
Observing Other Groups
Nonequivalent Control Group Designs
Problems in Selecting Comparison Groups
Nonequivalent Control Groups Used to Evaluate an Employee Incentive Plan
Regression-Discontinuity Design
Observing Other Dependent Variables
Combining Designs to Increase Internal Validity
Time-Series and Nonequivalent Control Groups
Selective Control Design
Summary and Preview
Study Questions
Additional Resource
Using Experiments to Evaluate Programs
Experiments in Program Evaluation
Benefits of Experiments
Experimental Designs
Objections to Experimentation
Don't Experiment on Me!
We Already Know What Is Best
I Know What Is Best for My Client
Experiments Are Just Too Much Trouble
The Most Desirable Times to Conduct Experiments
When a New Program Is Introduced
When Stakes Are High
When There Is Controversy About Program Effectiveness
When Policy Change Is Desired
Teaching Doctors Communication Skills: An Evaluation with Random Assignment and Pretests
When Demand Is High
Getting the Most out of an Experimental Design
Take Precautions Before Data Collection
Keep Track of Randomization While the Experiment Is in Progress
Analyze the Data Reflectively
Summary and Preview
Study Questions
Additional Resource
Analyses of Costs and Outcomes
Cost Analyses and Budgets
Types of Costs
An Example Budget
The Necessity of Examining Costs
Comparing Outcomes to Costs
The Essence of Cost-Benefit Analysis
The Essence of Cost-Effectiveness Analysis
When Outcomes Cannot Be Put into the Same Units
Some Details of Cost Analyses
Units of Analysis Must Reflect the Purpose of the Program
Future Costs and Benefits Are Estimated
Who Pays the Costs and Who Reaps the Benefits?
The Value of Providing Smoking Cessation Clinics for Employees on Company Time
Using Cost-Benefit and Cost-Effectiveness Analyses
Major Criticisms of Cost Analyses
The Worth of Psychological Benefits Is Hard to Estimate
Placing a Value on Lives Seems Wrong
Cost-Benefit and Cost-Effectiveness Analyses Require Many Assumptions
Summary and Preview
Study Questions
Additional Resource
Evaluation Reports: Interpreting and Communicating Findings
Developing a Communication Plan
Explore Stakeholder Information Needs
Plan Reporting Meetings
Set a Communication Schedule
Personal Presentations of Findings
Need for Personal Presentations
Content of Personal Presentations
Audience for the Personal Presentations
Distributing Drafts of Reports
Content of Formal Written Evaluation Reports
Remember the Purposes of the Formal Report
Provide an Outline and a Summary
Describe the Context of the Evaluation
Describe the Program Participants
Justify the Criteria Selected
Describe the Data-Gathering Procedures
Provide the Findings
Develop Recommendations
Formal Reports Should Look Attractive
Provide Progress Reports and Press Releases
Summary and Preview
Study Questions
Additional Resources
How to Encourage Utilization
Obstacles to Effective Utilization
Constraints on Managers
Value Conflicts Among Stakeholders
Misapplied Methodology
Evaluating a Program at Arm's Length
Dealing with Mixed Findings
Don't Abdicate Your Responsibility
Don't Take the Easy Way Out
Show How to Use the Evaluation to Improve the Program
Using Evaluations When an Innovative Program Seems No Better than Other Treatments
When Can Evaluators Be Sure Groups Do Not Differ?
Are Evaluations Valuable Even When No Advantage for the Innovation Is Found?
Evaluations of the Outcomes of Boot Camp Prisons: The Value of Finding No Differences Between Program and Comparison Groups
Developing a Learning Culture
Work with Stakeholders
Adopt Developmental Interpretations
Frame Findings in Terms of Improvements
Treat Findings as Tentative Indicators, Not Final Answers
Recognize Service Providers' Needs
Keep Evaluation Findings on the Agency's Agenda
The Evaluation Attitude
Summary and Possible Trends for Program Evaluation
Study Questions
Additional Resource
Illustrative Program Evaluation Report
References
Name Index
Subject Index