P.Mean >> Category >> Survey design (created 2007-09-12). 

These pages discuss how to design a questionnaire or survey. Also see Category: Qualitative data. Articles are arranged by date with the most recent entries at the top. You can find outside resources at the bottom of this page.


10. The Monthly Mean: I only got a 20% response rate, but at least my confidence interval was narrow (April 2010)

9. P.Mean: An example of a bad survey (created 2010-06-11). I was asked to fill out an Internet survey to define my "consulting needs." That's a rather strange invitation, and sounds almost like a cheap way to develop business leads. But it was a request through LinkedIn, so I thought it was worth filling out. I want to try to build my contacts at LinkedIn, and filling out a short survey seemed like a small price to pay to get a potential lead for my own consulting business. When I went to the webpage with the actual survey, though, I was shocked and disappointed with what I found.


8. P.Mean: The perils of self-evaluation (created 2009-06-30). A survey by New Scientist magazine examined a phenomenon called "citation amnesia." This is the tendency of researchers to overlook previously published work in the bibliography of their articles. Most of the respondents felt that citation amnesia was a problem. "Indeed, the vast majority of the survey's roughly 550 respondents -- 85% -- said that citation amnesia in the life sciences literature is an already-serious or potentially serious problem. A full 72% of respondents said their own work had been regularly or frequently ignored in the citations list of subsequent publications. Respondents' explanations of the causes range from maliciousness to laziness." There are several problems with this survey, though.

7. P.Mean: Five points or seven points on a survey scale (created 2009-03-12). I am creating a survey and wanted to know if anybody can suggest a scale: both the wording and whether to use 5 or 7 points.


6. P.Mean: How to design a new survey (created 2008-10-28). Someone wrote in with a question about how to design a survey. There are entire books devoted to the subject. I couldn't do the subject justice in a single email, but here's what I sent.

5. P.Mean: Processing skip fields in SPSS (created 2008-09-25). How do I program skips in SPSS so that data are not entered for irrelevant questions?
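The article above answers this with SPSS itself, but the underlying idea is language-independent: after a skip pattern, answers to bypassed questions should be set to missing rather than left as stray entries. As a rough sketch of that idea outside SPSS (the variable names q1 and q2 are hypothetical, not from the article), here is a minimal pandas example:

```python
# Hypothetical sketch of enforcing a skip pattern after data entry.
# Suppose q1 asks "Do you smoke?" (1 = yes, 0 = no) and q2 ("cigarettes
# per day") is only relevant when q1 == 1. Rows where q1 == 0 should
# have q2 treated as missing, much as an SPSS command like
# "IF (q1 = 0) q2 = $SYSMIS." would do.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "q1": [1, 0, 1, 0],
    "q2": [10, 3, 20, 7],  # the 3 and 7 were entered in error
})

# Blank out answers to questions the skip pattern should have bypassed.
df.loc[df["q1"] == 0, "q2"] = np.nan
```

After this step, only the two relevant q2 values remain, and any later analysis will correctly treat the skipped questions as missing.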

Outside resources:

Webpage: American Association for Public Opinion Research. AAPOR Committee and Task Force Reports. Excerpt: "Founded in 1947, the American Association for Public Opinion Research is the leading association of public opinion and survey research professionals. The AAPOR community includes producers and users of survey data from a variety of disciplines. Our members span a range of interests including election polling, market research, statistics, research methodology, health related data collection and education." [Accessed on August 14, 2011]. http://www.aapor.org/Reports1/4212.htm.

Journal article: Travis D Leleu, Isabel G Jacobson, Cynthia A Leardmann, Besa Smith, Peter W Foltz, Paul J Amoroso, Marcia A Derr, Margaret Ak Ryan, Tyler C Smith, et al. Application of Latent Semantic Analysis for Open-Ended Responses in a Large, Epidemiologic Study. BMC Medical Research Methodology. 2011;11(1):136. ABSTRACT: "BACKGROUND: The Millennium Cohort Study is a longitudinal cohort study designed in the late 1990s to evaluate how military service may affect long-term health. The purpose of this investigation was to examine characteristics of Millennium Cohort Study participants who responded to the open-ended question, and to identify and investigate the most commonly reported areas of concern. METHODS: Participants who responded during the 2001-2003 and 2004-2006 questionnaire cycles were included in this study (n = 108,129). To perform these analyses, Latent Semantic Analysis (LSA) was applied to a broad open-ended question asking the participant if there were any additional health concerns. Multivariable logistic regression was performed to examine the adjusted odds of responding to the open-text field, and cluster analysis was executed to understand the major areas of concern for participants providing open-ended responses. RESULTS: Participants who provided information in the open-ended text field (n = 27,916), had significantly lower self-reported general health compared with those who did not provide information in the open-ended text field. The bulk of responses concerned a finite number of topics, most notably illness/injury, exposure, and exercise. CONCLUSION: These findings suggest generalized topic areas, as well as identify subgroups who are more likely to provide additional information in their response that may add insight into future epidemiologic and military research." [Accessed on October 11, 2011].

Newspaper article: Mark Blumenthal. The Secret Lives of Pollsters. The New York Times. 2008. Excerpt: "Many pollsters fail to disclose basic facts about their methods. Very few, for instance, describe how they determine likely voters. Did they select voters based on their self-reported history of voting, their knowledge of voting procedures, their professed intent to vote or interest in the campaign? Did they use actual voting history gleaned from official lists of registered voters?" [Accessed on August 14, 2011]. http://www.nytimes.com/2008/02/07/opinion/07blumenthal.html.

A Brief Guide to Questionnaire Development. Robert Frary, Virginia Tech. Excerpt: Most people have responded to so many questionnaires in their lives that they have little concern when it becomes necessary to construct one of their own. Unfortunately the results are often unsatisfactory. These problems are sufficiently prevalent that numerous books and journal articles have been written addressing them (e.g., see Dillman, 1978). Also, various educational and proprietary organizations regularly offer workshops in questionnaire development. Therefore, the brief exposition that follows is intended only to identify some of the more prevalent problems in questionnaire development and to suggest ways of avoiding them. This paper does not cover the development of inventories designed to measure psychological constructs, which would require a deeper discussion of psychometric theory than is feasible here. Instead, the focus will be on questionnaires designed to collect factual information and opinions. This website was last verified on 2008-01-14. URL: www.testscoring.vt.edu/questionaire_dev.html

Comparative response to a survey executed by post, email, & web form. Gi Woong Yun, Craig W Trumbo. JCMC 2000: 6(1); [Full text]. Description: This article studies a data collection approach that used postal mail, e-mail, and a web-based form. Each method tended to solicit a different group of respondents. The authors conclude that using multiple methods to collect data will provide a more representative sample.

John F. Hall. Journeys in Survey Research - Home. Excerpt: "Welcome to this new resource for researchers, students and others doing, or learning about, survey research and the analysis of survey data. You will find here a wealth of materials drawn from my 45 years of doing and teaching survey research." [Accessed July 9, 2010]. Available at: http://surveyresearch.weebly.com/.

EDF 5841 Methods of Educational Research. Guide 5: A Survey Research Timetable. Susan Carol Losh, Florida State University, October 15, 2001. Description: This webpage outlines the steps you need to follow in a survey research study, with special emphasis on pilot testing. URL: edf5481-01.fa01.fsu.edu/Guide5.html

EDF 5841 Methods of Educational Research. Guide 6: Focus Group Basics. Susan Carol Losh, Florida State University, September 25, 2001. Description: This webpage outlines the general setup of a focus group and explains what type of information a focus group is likely to provide. URL: edf5481-01.fa01.fsu.edu/Guide6.html

Streiner DL, Norman GR. Health measurement scales. 4th ed. New York: Oxford University Press; 2008.

Peter B. Gilkey. Questionaire. Excerpt: "You are no doubt aware that the number of questionnaires circulated is rapidly increasing, whereas the length of the working day has at best remained constant. In order to resolve the problem presented by this trend, I find it necessary to restrict my replies to questionnaires to those questioners who first establish their bona fide by completing the following questionnaire. Please fill it out and return it to me electronically. This will help me compile a profile of people who compile profiles." [Accessed May 1, 2010]. Available at: http://www.uoregon.edu/~gilkey/dirhumor/questionaire.html.

Journal article: Maria Prior, Jemaima Che Hamzah, Jillian Francis, Craig Ramsay, Mayret Castillo, Susan Campbell, Augusto Azuara-Blanco, Jennifer Burr. Pre-validation methods for developing a patient reported outcome instrument. BMC Medical Research Methodology. 2011;11(1):112. Abstract: "BACKGROUND: Measures that reflect patients' assessment of their health are of increasing importance as outcome measures in randomised controlled trials. The methodological approach used in the pre-validation development of new instruments (item generation, item reduction and question formatting) should be robust and transparent. The totality of the content of existing PRO instruments for a specific condition provides a valuable resource (pool of items) that can be utilised to develop new instruments. Such 'top down' approaches are common, but the explicit pre-validation methods are often poorly reported. This paper presents a systematic and generalisable 5-step pre-validation PRO instrument methodology. METHODS: The method is illustrated using the example of the Aberdeen Glaucoma Questionnaire (AGQ). The five steps are: 1) Generation of a pool of items; 2) Item de-duplication (three phases); 3) Item reduction (two phases); 4) Assessment of the remaining items' content coverage against a pre-existing theoretical framework appropriate to the objectives of the instrument and the target population (e.g. ICF); and 5) qualitative exploration of the target populations' views of the new instrument and the items it contains. RESULTS: The AGQ 'item pool' contained 725 items. Three de-duplication phases resulted in reduction of 91, 225 and 48 items respectively. The item reduction phases discarded 70 items and 208 items respectively. The draft AGQ contained 83 items with good content coverage. The qualitative exploration ('think aloud' study) resulted in removal of a further 15 items and refinement to the wording of others. The resultant draft AGQ contained 68 items. CONCLUSIONS: This study presents a novel methodology for developing a PRO instrument, based on three sources: literature reporting what is important to patient; theoretically coherent framework; and patients' experience of completing the instrument. By systematically accounting for all items dropped after the item generation phase, our method ensures that the AGQ is developed in a transparent, replicable manner and is fit for validation. We recommend this method to enhance the likelihood that new PRO instruments will be appropriate to the research context in which they are used, acceptable to research participants and likely to generate valid data." [Accessed on August 14, 2011]. http://www.biomedcentral.com/1471-2288/11/112.

Journal article: Tore Wentzel-Larsen, Tone M Norekval, Bjorg Ulvik, Ottar Nygard, Are H Pripp. A proposed method to investigate reliability throughout a questionnaire. BMC Medical Research Methodology. 2011;11(1):137. ABSTRACT: "BACKGROUND: Questionnaires are used extensively in medical and health care research and depend on validity and reliability. However, participants may differ in interest and awareness throughout long questionnaires, which can affect reliability of their answers. A method is proposed for "screening" of systematic change in random error, which could assess changed reliability of answers. METHODS: A simulation study was conducted to explore whether systematic change in reliability, expressed as changed random error, could be assessed using unsupervised classification of subjects by cluster analysis (CA) and estimation of intraclass correlation coefficient (ICC). The method was also applied on a clinical dataset from 753 cardiac patients using the Jalowiec Coping Scale. RESULTS: The simulation study showed a relationship between the systematic change in random error throughout a questionnaire and the slope between the estimated ICC for subjects classified by CA and successive items in a questionnaire. This slope was proposed as an awareness measure - to assessing if respondents provide only a random answer or one based on a substantial cognitive effort. Scales from different factor structures of Jalowiec Coping Scale had different effect on this awareness measure. CONCLUSIONS: Even though assumptions in the simulation study might be limited compared to real datasets, the approach is promising for assessing systematic change in reliability throughout long questionnaires. Results from a clinical dataset indicated that the awareness measure differed between scales." [Accessed on October 11, 2011].

Alreck PL, Settle RB. Survey research handbook. 3rd ed. Boston: McGraw-Hill/Irwin.

Yes, Polling Works. Frank Newport. Excerpt: There's little question that some Americans are skeptical of polls and the process by which we use small samples to represent the views of millions of people. We pick up that skepticism when we poll people about polls (something we do from time to time!), and I certainly hear it when I am on a radio talk show or make a speech and get bombarded with questions about the believability of our polls, which are based on what seems to the questioners to be ridiculously small numbers of people. This website was last verified on 2008-01-14. URL: www.gallup.com/poll/7174/Yes-Polling-Works.aspx

Creative Commons License All of the material above this paragraph is licensed under a Creative Commons Attribution 3.0 United States License. This page was written by Steve Simon and was last modified on 2017-06-15. The material below this paragraph links to my old website, StATS. Although I wrote all of the material listed below, my ex-employer, Children's Mercy Hospital, has claimed copyright ownership of this material. The brief excerpts shown here are included under the fair use provisions of U.S. Copyright laws.


4. Stats: Real-life examples of survey mistakes (January 31, 2006). Tzippy Shocat was nice enough to forward a link to an article that she wrote for the iSixSigma website (www.isixsigma.com), titled "Tips for Getting the Most from Six Sigma Surveys." She cites some amusing examples of bad survey practices.


3. Stats: Open-ended questions on a survey (March 25, 2005). No one seems to talk about how to handle those pesky open-ended questions you see on a survey. I usually hold my breath and hope that the researcher doesn't think to mention it. Alicia O'Cathain and Kate Thomas address this important issue in a recently published article and they gently scold us for ignoring an important source of information.


2. Stats: Designing a questionnaire (December 24, 2004). I'm behind in my reading of the British Medical Journal, and the first issue I looked at today has a gem of an article, Selecting, designing, and developing your questionnaire. Boynton PM, Greenhalgh T. BMJ 2004: 328(7451); 1312-5. Questionnaire development is something that many researchers do, but few researchers do well. Here's a quick summary of the questions this paper raises.


1. Stats: So you want to write a questionnaire (July 12, 2002). Dear Professor Mean, I need to write a questionnaire for a research study I am conducting. Can you help me write it? -- Cautious Carmen

What now?

Browse other categories at this site

Browse through the most recent entries

Get help