Statistical and Methodological Myths and Urban Legends: Doctrine, Verity and Fable in the Organizational and Social Sciences

ISBN-10: 0805862382
ISBN-13: 9780805862386
Edition: 2009

Book details

List price: $59.95
Copyright year: 2009
Publisher: Taylor & Francis Group
Publication date: 10/3/2008
Binding: Paperback
Pages: 412
Size: 6.00" wide x 9.00" long x 1.00" tall
Weight: 1.298 lbs.
Language: English

The objective of this book is to provide an up-to-date review of commonly undertaken methodological and statistical practices that are sustained, in part, upon sound rationale and justification and, in part, upon unfounded lore. The practices themselves are not necessarily intrinsically faulty. Rather, it is often the reasoning or rationalization used to justify those practices that is questionable. In this book, a group of scholars examines statistical and methodological myths and urban legends and suggests what the state of practice should be. This book meets an important need and will be of interest to researchers, students, and scholars in the organizational and social sciences. (From the book jacket.)

Charles E. Lance is a Professor of Industrial and Organizational Psychology at The University of Georgia. His work in the areas of performance measurement, assessment center validity, research methods, and structural equation modeling has appeared in such journals as Psychological Methods, Organizational Research Methods (ORM), Journal of Applied Psychology, Organizational Behavior and Human Decision Processes, Journal of Management, and Multivariate Behavioral Research. His 2000 ORM article with Bob Vandenberg on measurement invariance is the most often cited article in ORM's history and won the 2005 Research Methods Division's Robert McDonald Advancement of Organizational Research Methodology Award. His 2006 ORM article on the origin and evolution of four statistical cutoff criteria won the Research Methods Division of the Academy of Management's Best Paper of the Year Award. His 2008 article "Why Assessment Centers (ACs) Do Not Work the Way They're Supposed to" was one of the two inaugural focal articles in Industrial and Organizational Psychology: An Exchange of Perspectives on Science and Practice. Dr. Lance is also co-editor of Performance Measurement: Current Perspectives and Future Challenges (with Wink Bennett and Dave Woehr). He is a Fellow of the Society for Industrial and Organizational Psychology (SIOP) and the American Psychological Association, a former President of the Atlanta Society for Applied Psychology, a member of the Society for Organizational Behavior, and a licensed psychologist in the State of Georgia. He is currently an Associate Editor of ORM and serves on the editorial boards of Personnel Psychology, Human Performance, and Group & Organization Management.

Robert J. Vandenberg is a Professor of Management in the Terry College of Business at the University of Georgia, Athens, GA (USA). Bob's primary substantive research focuses on organizational commitment and high-involvement work processes. His methodological research stream includes measurement invariance, latent growth modeling, and multilevel structural equation modeling. Bob's articles on these topics have appeared in the Journal of Applied Psychology, Journal of Management, Journal of Organizational Behavior, Human Resource Management, Organization Science, Group and Organization Management, Journal of Managerial Psychology, Organizational Behavior and Human Decision Processes, and Organizational Research Methods. Since 1999, both his substantive and methodological work have been integral to three funded grants totaling $4 million from the Centers for Disease Control and the National Institute for Occupational Safety and Health. Bob's measurement invariance article co-authored with Chuck Lance received the 2005 Robert McDonald Award for the Best Published Article to Advance Research Methods, given by the Research Methods Division of the Academy of Management. He has served on the editorial boards of the British Journal of Management, Journal of Applied Psychology, Journal of Management, Organizational Behavior and Human Decision Processes, and Organizational Research Methods. He is currently the editor of Organizational Research Methods and past division chair of the Research Methods Division of the Academy of Management. In addition, he is a Fellow of the American Psychological Association, the Society for Industrial and Organizational Psychology, and the Southern Management Association. He is also a fellow in the Center for the Advancement of Research Methods and Analysis at Virginia Commonwealth University, where he conducts annual short courses on advanced structural equation modeling techniques.

Preface
About the Editors
Acknowledgments
Introduction
Statistical Issues
Missing Data Techniques and Low Response Rates: The Role of Systematic Nonresponse Parameters
Organization of the Chapter
Levels, Problems, and Mechanisms of Missing Data
Three Levels of Missing Data
Two Problems Caused by Missing Data (External Validity and Statistical Power)
Missingness Mechanisms (MCAR, MAR, and MNAR)
Missing Data Treatments
A Fundamental Principle of Missing Data Analysis
Missing Data Techniques (Listwise and Pairwise Deletion, ML, and MI)
Systematic Nonresponse Parameters (d_miss and f²_miss)
Theory of Survey Nonresponse
Missing Data Legends
"Low Response Rates Invalidate Results"
"When in Doubt, Use Listwise or Pairwise Deletion"
Applications
Longitudinal Modeling
Within-Group Agreement Estimation
Meta-analysis
Social Network Analysis
Moderated Regression
Conclusions
Future Research on d_miss and f²_miss
Missing Data Techniques
References
Appendix
Derivation of Response Rate Bias for the Correlation (Used to Generate Figure 1.1c)
The Partial Revival of a Dead Horse? Comparing Classical Test Theory and Item Response Theory
Basic Statement of the Two Theories
Classical Test Theory
Item Response Theory
Criticisms and Limitations of CTT
Lack of Population Invariance
Person and Item Parameters on Different Scales
Correlations Between Item Parameters
Reliability as a Monolithic Concept
Criticisms and Limitations of IRT
Large Sample Sizes
Strong Assumptions
Complicated Programs
Times to Use CTT
Small Sample Sizes
Multidimensional Data?
CTT Supports Other Methodologies
Times to Use IRT
Focus on Particular Range of Construct
Conduct Goodness-of-Fit Studies
IRT Supports Many Psychometric Tools
Conclusions
References
Four Common Misconceptions in Exploratory Factor Analysis
The Choice Between Component and Common Factor Analysis Is Inconsequential
The Component Versus Common Factor Debate: Methodological Arguments
The Component Versus Common Factor Debate: Philosophical Arguments
Differences in Results from Component and Common Factor Analysis
Orthogonal Rotation Results in Better Simple Structure Than Oblique Rotation
Oblique or Orthogonal Rotation?
Do Orthogonal Rotations Result in Better Simple Structure?
The Minimum Sample Size Needed for Factor Analysis Is... (Insert Your Favorite Guideline)
New Sample Size Guidelines
The "Eigenvalues Greater Than One" Rule Is the Best Way of Choosing the Number of Factors
Discussion
References
Dr. StrangeLOVE, or: How I Learned to Stop Worrying and Love Omitted Variables
Theoretical and Mathematical Definition of the Omitted Variables Problem
Violated Assumptions
More Complex Models
Path Coefficient Bias Versus Significance Testing
Minimizing the Risk of LOVE
Experimental Control
More Inclusive Models
Use Previous Research to Justify Assumptions
Consideration of Research Purpose
References
The Truth(s) on Testing for Mediation in the Social and Organizational Sciences
Baron and Kenny's (1986) Four-Step Test of Mediation
Condition/Step 1
Condition/Step 2
Condition/Step 3
Condition/Step 4
The Urban Legend: Baron and Kenny's Four-Step Test Is an Optimal and Sufficient Test for Mediation Hypotheses
The Kernel of Truth About the Urban Legends
Debunking the Legends
A Test of a Mediation Hypothesis Should Consist of the Four Steps Articulated by Baron and Kenny (1986)
Baron and Kenny's (1986) Four-Step Procedure Is the Optimal Test of Mediation Hypotheses
Fulfilling the Conditions Articulated in the Baron and Kenny (1986) Four-Step Test Is Sufficient for Drawing Conclusions About Mediated Relationships
Suggestions for Testing Mediation Hypotheses
Structural Equation Modeling (SEM) as an Analytic Framework
Summary of Tests of Mediation
A Heuristic Framework for Classifying Mediation Models
Summary
Conclusion
Author Note
References
Seven Deadly Myths of Testing Moderation in Organizational Research
The Seven Myths
Product Terms Create Multicollinearity Problems
Coefficients on First-Order Terms Are Meaningless
Measurement Error Poses Little Concern When First-Order Terms Are Reliable
Product Terms Should Be Tested Hierarchically
Curvilinearity Can Be Disregarded When Testing Moderation
Product Terms Can Be Treated as Causal Variables
Testing Moderation in Structural Equation Modeling Is Impractical
Myths Beyond Moderation
Conclusion
References
Alternative Model Specifications in Structural Equation Modeling: Facts, Fictions, and Truth
The Core of the Issue
AMS Strategies
Equivalent Models
Nested Models
Nonnested Alternative Models
Summary
AMS in Practice
Summary
References
On the Practice of Allowing Correlated Residuals Among Indicators in Structural Equation Models
Unraveling the Urban Legend
Extent of the Problem
Origins
A Brief Review of Structural Equation Modeling
Indicator Residuals
Model Fit
An Example
Why Correlated IRs Improve Fit
Problems With Correlated Residuals
Recommendations
Summary and Conclusions
References
Methodological Issues
Qualitative Research: The Redheaded Stepchild in Organizational and Social Science Research?
Definitional Issues
Philosophical Differences in Qualitative and Quantitative Research
Quantitative and Qualitative Conceptualizations of Validity
Caveats and Assumptions
Beliefs Associated With Qualitative Research
Qualitative Research Does Not Utilize the Scientific Method
Qualitative Research Lacks Methodological Rigor
Qualitative Research Contributes Little to the Advancement of Knowledge
Evaluating the Beliefs Associated With Qualitative Research
Qualitative Research Does Not Utilize the Scientific Method
Qualitative Research Is Methodologically Weak
Qualitative Research Has Weak Internal Validity
Qualitative Research Has Weak Construct Validity
Qualitative Research Has Weak External Validity
Qualitative Research Contributes Little to the Advancement of Knowledge
The Future of Qualitative Research in the Social and Organizational Sciences
Concluding Thoughts
Author Note
References
Do Samples Really Matter That Much?
Kernel of Truth
Background
History of the Concern
The Research Base
Why Do Samples Seem to Matter So Much?
People Confuse Random Sampling With Random Assignment
People Focus on the Wrong Things
People Rely on Superficial Similarities
Concluding Thoughts
Author Note
References
Sample Size Rules of Thumb: Evaluating Three Common Practices
Determine Whether Sample Size Is Appropriate by Conducting a Power Analysis Using Cohen's Definitions of Small, Medium, and Large Effect Size
Discussion
Increase the A Priori Type I Error Rate to .10 Because of Your Small Sample Size
Discussion
Sample Size Should Include at Least 5 Observations per Estimated Parameter in Covariance Structure Analyses
Discussion
Discussion
Author Note
References
When Small Effect Sizes Tell a Big Story, and When Large Effect Sizes Don't
Effect Size Defined
The Urban Legend
The Kernel of Truth
Quine and Ontological Relativism
Contextualization
Inauspicious Designs
Phenomena With Obscured Consequences
Phenomena That Challenge Fundamental Assumptions
The Flip Side: Trivial "Large" Effects
Conclusion
References
So Why Ask Me? Are Self-Report Data Really That Bad?
The Urban Legend of Self-Report Data and Its Historical Roots
Construct Validity of Self-Report Data
Interpreting the Correlations in Self-Report Data
Social Desirability Responding in Self-Report Data
Value of Data Collected From Non-Self-Report Measures
Conclusion and Moving Forward
References
If It Ain't Trait It Must Be Method: (Mis)application of the Multitrait-Multimethod Design in Organizational Research
Background
Literature Review
Range of Traits Studied
Range of Methods Studied
Not All "Measurement Methods" Are Created Equal
The Case of Multisource Performance Appraisal
The Case of AC Construct Validity
Other Cases
So, Are Any "Method" Facets Really Method Facets?
Discriminating Method From Substance, or "If It Looks Like a Method and Quacks Like a Method..."
References
Chopped Liver? OK. Chopped Data? Not OK
Urban Legends Regarding Chopped Data
Urban Legends Associated With the Occurrence of Chopped Data
Urban Legends Associated With Chopped Data Techniques
Urban Legends Associated With Chopped Data Justifications
Literature Review
Chopped Data Through the Years
Prevalence of Chopped Data
The Occurrence of Chopped Data Over Time
Chopped Data Across Disciplines
Types of Chopped Data Approaches
Evaluating Justifications for Using Chopped Data
Insufficient or Faulty Justifications (Myths)
Legitimate Justifications (Truths)
Advantages of, Disadvantages of, and Recommendations for Using Chopped Data
(Perceived) Advantages of Chopping Data
Disadvantages of Chopping Data
Recommendations When Faced With Chopping Data
Conclusion
References
Subject Index
Author Index
