Someone sent me a question a month ago that I never got around to responding to. She asks:
What would you consider a Cronbach alpha of .60 to be in terms of “label” (i.e., fair, poor, etc.)?
In general, I don't think much of Cronbach's Alpha. Everyone runs Cronbach's Alpha on their data because it is an easy thing to do and it shows that they are sincere in trying to assess the validity and reliability of their instrument. It doesn't matter that in most cases, Cronbach's Alpha does not directly address the major concerns about your data. You have to show that you are doing SOMETHING.
Cronbach's Alpha is a measure of how well each individual item in a scale correlates with the sum of the remaining items. It measures consistency among individual items in a scale. Streiner and Norman offer this advice on Cronbach's Alpha:
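If you want to see what the calculation actually involves, here is a minimal sketch in Python using only the standard library. The formula is the usual one, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); the data matrix is made up purely for illustration.

```python
# Cronbach's alpha from raw item responses (rows = respondents, columns = items).
# Uses only the standard library; the data below are hypothetical.

def variance(xs):
    # Sample variance (n - 1 denominator), as used in the usual alpha formula.
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

def cronbach_alpha(data):
    k = len(data[0])                      # number of items
    items = list(zip(*data))              # column-wise item scores
    sum_item_var = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in data])
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Five respondents answering three Likert-type items (hypothetical data).
data = [
    [4, 5, 4],
    [3, 3, 2],
    [5, 4, 5],
    [2, 2, 3],
    [4, 4, 4],
]
print(round(cronbach_alpha(data), 3))  # 0.897
```

Most statistics packages will compute this for you, but seeing the formula spelled out makes it clear that alpha compares the spread of the individual items to the spread of the total score.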
It is nearly impossible these days to see a scale development paper that has not used alpha, and the implication is usually made that the higher the coefficient, the better. However, there are problems in uncritically accepting high values of alpha (or KR-20), and especially in interpreting them as reflecting simply internal consistency. The first problem is that alpha is dependent not only on the magnitude of the correlations among items, but also on the number of items in the scale. A scale can be made to look more 'homogenous' simply by doubling the number of items, even though the average correlation remains the same. This leads directly to the second problem. If we have two scales which each measure a distinct construct, and combine them to form one long scale, alpha would probably be high, although the merged scale is obviously tapping two different attributes. Third, if alpha is too high, then it may suggest a high level of item redundancy; that is, a number of items asking the same question in slightly different ways. -- pages 64-65, Health Measurement Scales: A Practical Guide to Their Development and Use. Streiner DL, Norman GR (1989) New York: Oxford University Press, Inc.
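Their first point is easy to demonstrate with a few lines of Python. For a scale where every pair of items has the same average correlation r, the standardized alpha works out to k*r / (1 + (k-1)*r), a Spearman-Brown style formula. Holding the average correlation fixed at a modest 0.3 and doubling the number of items pushes alpha up substantially:

```python
# Streiner and Norman's first point in numbers: standardized alpha depends on
# the number of items k as well as the average inter-item correlation r.
# Here r is held fixed at 0.3 (an arbitrary, modest value) while k doubles.

def standardized_alpha(k, r):
    return k * r / (1 + (k - 1) * r)

for k in (5, 10, 20):
    print(k, round(standardized_alpha(k, 0.3), 3))
```

With r = 0.3 throughout, alpha climbs from about 0.68 at five items to about 0.81 at ten and about 0.90 at twenty, even though the items are no more strongly related to one another.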
After all these thoughtful warnings, they say that alpha should be above 0.7, but not much higher than 0.9. They cite a classic text, Nunnally 1978, though the third edition of the book, published in 1994, [BookFinder4U link] might be better. I do not have this book, but I've seen Nunnally cited a lot (Google lists over two thousand matches for the phrase "Nunnally 1978").
Oops! I just realized that I am citing the second edition of Streiner and Norman, when a third edition, published in 2003 [BookFinder4U link], might be better.
G. David Garson offers another opinion about what corresponds to a good value for Cronbach's Alpha:
The widely-accepted social science cut-off is that alpha should be .70 or higher for a set of items to be considered a scale, but some use .75 or .80 while others are as lenient as .60. That .70 is as low as one may wish to go is reflected in the fact that when alpha is .70, the standard error of measurement will be over half (0.55) a standard deviation. -- www2.chass.ncsu.edu/garson/pa765/standard.htm
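Garson's "over half a standard deviation" figure comes from the standard relationship between reliability and the standard error of measurement, SEM = SD * sqrt(1 - alpha). A quick check of the arithmetic:

```python
import math

# Standard error of measurement, expressed in standard deviation units:
# SEM / SD = sqrt(1 - alpha). At alpha = 0.70 this is already over half
# a standard deviation, which is Garson's point.

def sem_in_sd_units(alpha):
    return math.sqrt(1 - alpha)

print(round(sem_in_sd_units(0.70), 2))  # 0.55
```

So even at the conventional 0.70 cutoff, an individual score carries a measurement error of roughly 0.55 standard deviations.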
Wikipedia says that
As a rule of thumb, a proposed psychometric instrument should only be used if an α value of 0.8 or higher is obtained on a substantial sample. However the standard of reliability required varies between fields of psychology: cognitive tests (tests of intelligence or achievement) tend to be more reliable than tests of attitudes or personality. There is also variation within fields: it is easier to construct a reliable test of a specific attitude than of a general one, for example. -- en.wikipedia.org/wiki/Cronbach%27s_alpha
A smattering of other web pages seems to claim that a value as low as 0.6 might be okay for an exploratory study.
I've started a web page about how to assess reliability and validity, but it is still very preliminary.