Survey Format

The Research Behind the Survey Design

When considering the survey design, Tourangeau, et al.’s (2000) four aspects of participant response (comprehension, retrieval, judgement and answer) were used to inform every decision, in the hope of limiting the influence of the researchers’ implicit and explicit ‘contributions to the conversation’ (Schwarz, 2007; page 279).

In terms of comprehension, the combined linguistic and aesthetic components of a survey are known to shape the participant’s overall experience, including their enjoyment (Coates, 2004). In turn, this heavily affects participant commitment (used here to encapsulate their subconscious visceral, behavioural and reflective responses (Norman, 2005)), completion rate (Bowker & Dillman, 2000) and, ultimately, the quality of the resulting data (Mahon-Haft & Dillman, 2010). As such, several design choices were made in the hope of achieving the harmony, rhythm and comfort (Mahon-Haft & Dillman, 2010) necessary to support optimal participation.

Firstly, to ensure that participants were able to understand each question, appropriate language was used (Alderman & Salem, 2010), avoiding any specialised educational vocabulary with which they might not be familiar (Bradburn, et al., 2004). Recognising that participant understanding can also be affected by the visual elements of a survey’s design, attention was paid to its layout, colours, font and graphics (Couper, et al., 2001): a simple, symmetrical aesthetic was applied, using muted colours and a repeating structure in both wording and physical layout (Jones, et al., 2013), so that respondents could expect, and react accordingly to, a consistent design (Henningsson, 2001). For the web survey, only one question was presented per screen to encourage a greater response rate (Toepoel, et al., 2009).

In terms of retrieval, closed-ended questions were used predominantly. This choice was made to: minimise the participant break-off associated with open-ended questions (Roberts, et al., 2014); limit the effect of a participant’s reactive, in-the-moment response to a question (Ottati, 1997); and ensure an equal context for all participants (Gendall & Hoek, 1990). That is, the way in which a participant searches their memory and selects the information (of facts or events (Lavrakas, 2008)) they deem relevant to answering the question (Chen, 2017) is affected by their individual context, internal state and emotional mood (Koriat, 2000). However, more focused retrieval is found to occur when questions are specific (Bekerian & Dritschel, 1992) or, in this case, closed-ended. A participant’s answers are then further influenced by the content and structure of these questions (Schwarz, et al., 1991). As such, the interpretive heuristics that participants may follow (in assigning importance to, or inferring relations between, answer options) (Tourangeau, et al., 2003) were considered when designing the layout of each question:

  • The answer categories were displayed vertically, with an alternating shading pattern, to encourage participants to process all options before making a choice (Dillman & Smyth, 2007).
  • The answer choices were listed in alphabetical order to overcome the bias of attributing the greatest value, and subconsciously higher-quality cognitive processing (Krosnick & Alwin, 1987), to those options listed first (Toepoel, et al., 2006).
  • For the one question displayed horizontally, the Likert-type scale (measuring participant confidence) ran from left to right, 0 to 10, to combat the bias of associating positive answers with those on the left-hand side (Weng & Cheng, 2000).
  • To counteract the participant’s intrinsic tendency towards primacy, drop boxes were avoided and all answer options for each question were displayed at once (Couper, et al., 2004).

However, it was also deemed important to provide families with the opportunity to express themselves in their own words (Geer, 1988), without the impact of context effects (Larsen, 2002) or the subconscious bias that often results from the researcher’s suggested responses in closed-ended questions (Foddy, 1994). Open-ended questions can also be considered more suitable for capturing a greater diversity of answers (Reja, et al., 2003), reflective, in this study, of the diversity of the Scottish population. As such, four open-ended questions (in keeping with the research trend of including a median of three open-ended questions (Tran, et al., 2016)) were purposefully included to echo the study’s sub-questions.

In terms of judgement, both motivation and sensitivity were considered (Geisen & Romano Bergstrom, 2017):

  • Motivation: Most aspects of this survey’s design were focused upon ensuring the parents’ ease of participation and preventing satisficing (shortcuts taken by participants to reduce their cognitive expenditure, leading to poor-quality data (Barge & Gehlbach, 2012)). However, it was also hoped that, because the survey related to, and aimed directly to improve, their children’s education (as explicitly detailed in the ‘Participant Information Sheet’ distributed as the first page of every online and paper survey), parents would be intrinsically motivated to take part and complete it.
  • Sensitivity: Each question was carefully written to aptly determine parents’ salient understanding of Early Years education, whilst avoiding the interrogative pressure (Baxter, et al., 2006) that participants can feel when answering questions of a sensitive nature (Rasinski, et al., 1999) and exposing themselves to judgement (Collins, 2003). Although unintended, participants could categorise the survey as sensitive if they perceived their responses to be judged as synonymous with the quality of their parenting. As such, the questionnaire was designed to be fully anonymous (unless participants chose to opt in to the focus group at the end of the survey). This anonymity was assured in the hope of encouraging parents to disclose more relevant information, even if potentially sensitive or stigmatising (Murdoch, et al., 2014), and especially when pertaining to their attitudes or beliefs (Tourangeau, 2018).

In terms of answer, further considerations were made:

  • Although survey length has been found to be a substantial disincentive for participants (Burchell & Marsh, 1992), increasing burden and reducing data quality (Lavrakas, 2008), there is relatively little research offering guidance on how to choose the ideal survey length (Bogen, 1991). One study, however, found that the median and maximum survey length should be 10 and 20 minutes respectively (Revilla & Ochoa, 2017). Considering the average time it takes a participant to answer the first and second questions of a survey (approximately 75 and 40 seconds respectively (Chudoba, 2019)), and the steps taken in this study to maximise the chances of an equal distribution of participant effort across all survey questions, an assumed one-minute-per-question response rate and a 15-minute maximum time limit resulted in a 15-question threshold (see the worked calculation after this list).
  • It was recognised that too many response choices can in fact be demotivating (Iyengar & Lepper, 2000) and can increase the survey’s response time (Yan & Tourangeau, 2008), as above. An equilibrium was therefore sought between providing enough categories for a participant to choose one that accurately represents their opinion (reducing error) and exceeding channel capacity, the point at which participants are no longer able to distinguish any meaningful difference between categories (Hawthorne, et al., 2006). To support this distinction, examples were also provided for each category.
  • All scaled questions were balanced (Kitchenham & Pfleeger, 2002). For example, the option ‘0 – not at all confident’ is the exact opposite of the option ‘10 – very confident’, and the options relating to the social, emotional and behavioural environment were exact opposites, assigning full responsibility to either the school or the home. In this way, most questions had four or five answer options (in accordance with the magical number seven (Miller, 1956), or five (Simon, 1974), plus or minus two): one or two at either end of the expected scale of opinion, one in the middle and the ‘Don’t Know’ option, as below.
  • A ‘Don’t Know’ choice was included for all questions to ensure that the validity of the survey results was not affected by participants being forced to select an option they didn’t mean (Derham, 2011), or dropping out of the survey in the face of a question they found difficult to answer (Porritt & Marx, 2011).
  • All four formats of closed-ended question were used in this survey, producing single, checklist, ranked and rating-scale responses (Harlacher, 2016), with respective question frequencies of six, two, one and one.
  • For consistency, throughout the survey the participant was asked only to respond to positive stimuli (Fink, 2002), uniformly selecting the options they felt were most important or representative (Frary, 1996).
  • Questions followed a logical order for participants, in that they were grouped by content (Fanning, 2005), and a purposeful order for the researcher, in that, within these groups, the open-ended questions preceded the closed-ended questions (McFarland, 1981) to avoid the priming effects of the latter’s response-category content (Smyth, 2016). In relation to this study’s main objective, an introductory question (affirming parental perceptions of the aims of Early Years education) and a summary question (determining the role parents would like to play in their child’s Early Years education) were used (Jones, et al., 2013).
  • The content of the pre-determined answer choices was researched systematically to ensure each question could be considered a ‘reliable and valid measure’ (Fowler, 1995; page 2).
    See Blog Post: Survey Content.
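
For transparency, the arithmetic behind the 15-question threshold is sketched below in a few lines of Python (an illustrative sketch only: the variable names are hypothetical, and the figures are simply those quoted in the points above).

    max_survey_minutes = 15        # maximum time limit chosen for this survey
    minutes_per_question = 1       # assumed average response rate of one minute per question
    question_threshold = max_survey_minutes // minutes_per_question   # = 15 questions

    closed_ended = 6 + 2 + 1 + 1   # single, checklist, ranked and rating-scale questions, as above
    open_ended = 4                 # one open-ended question per study sub-question
    total_questions = closed_ended + open_ended   # = 14, within the 15-question threshold

On these figures, the question formats described above total 14 questions, sitting just within the 15-question threshold.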
