Voter behaviour

Shaun Ratcliff

Key terms/names

agenda setting, cues, ecological fallacy, framing, heuristics, non-response bias, normative, random sampling, rational choice, response rates, sample size, social desirability bias, survey research

 

Representation is the basis of modern democratic theory. In most mature electoral democracies, it is achieved through regular elections, which provide voters with the opportunity to select representatives whose policy goals align with their own. This chapter explores how citizens vote and some of the key influences on their behaviour.

Research into voter behaviour has been greatly influenced by a shift from normative assumptions about how citizens should behave in democratic society to studying how they actually act. This highlights a troubling and persistent problem for democratic governance: if citizens in representative democracies are largely not interested in politics and are under-informed about basic matters of state, how can they provide any control over public policy through elections or referendums?

Borrowing from social psychology, political science provides an answer to this. While most voters are far from perfectly equipped to analyse political issues, most use limited information to make reasonably sophisticated judgements about political leaders, candidates, parties and salient matters, particularly those relevant to their lived experiences. When voters pool their individual opinions at elections, the resulting collective decision is likely to be better than the typical individual decision.

This chapter will explore the political science research on voter behaviour to better understand how representative democracy functions.

What is public opinion?

Public opinion is a concept frequently used by political leaders, journalists and political scientists to describe and understand politics. It can be viewed as the aggregation of the attitudes and preferences of individuals who comprise the public. This term – ‘the public’ – is widely used, but in political science it has a particular meaning. Sociologist Herbert Blumer suggested three criteria. In his framework, the public consists of a group of people who:

  1. face a common issue
  2. are divided on how to address it
  3. are engaged in discussion or debate about the issue.1

In this view, publics emerge over particular issues, such as immigration or the rate of taxation. To become a member of a public, an individual must join a discourse on an issue, thinking and reasoning with others. According to Blumer, if a public is not critically engaged with an issue, then that public ‘dissolves’, and uncritical and unengaged public opinion becomes mere ‘public sentiment’.

However, this is not a universally accepted definition. More recently, philosopher and sociologist Jürgen Habermas argued that public opinion is context dependent, anchored to the ‘public sphere’ – the political and social domain in which people operate, which changes over time.2 It comprises public discussions about politics outside the formal arena of government, such as conversations in a cafe or bar, talkback radio or what is covered in the editorial pages of a newspaper. Changes in the public sphere include who is permitted to participate and the issues and positions that are considered to be socially acceptable. In the past, women, those who didn’t own property and some ethnic and racial groups were not permitted to engage in Australian political debate or vote in elections. Because it consisted only of the opinions of certain groups of men, the public sphere in mid-19th-century Australia, for instance, did not consider it socially acceptable to discuss issues such as LGBTIQ+ rights.

The history of public opinion as an idea

Most early theorists and philosophers, including Plato and Machiavelli, were generally dismissive of the political opinions of the common people. They believed most citizens did not have the capacity for rational political judgement. However, some were more positive. Aristotle advocated an early version of the wisdom of the crowd. The modern, mostly more positive, attitude towards public opinion can be traced to the Enlightenment, which saw a growth in literacy, the development of early newspapers and the distribution of political pamphlets. Enlightenment thinkers, including John Locke and Jean-Jacques Rousseau, argued for the existence of normative, inalienable rights for individuals, protected by the state, and for greater citizen participation in government.

Lockean political theory was a significant inspiration for the design of the political system and culture of the USA and other modern representative democracies. Locke argued that humanity was subject to three laws: divine, civil and opinion (or reputation). He regarded the latter as arguably the most important. The prospect of poor public opinion – a damaged reputation – could force people to conform to social norms. Despite this, he generally did not consider public opinion to be a suitable influence for governments. Other Enlightenment thinkers had a more positive view. David Hume argued that public support provided government with legitimacy – and was the only thing that could do so. This view is closest to modern normative beliefs about the functioning of democracy.

Modern views of voter behaviour

Despite the early origins of the concept, the study of voter behaviour and public opinion emerged as modern fields of research later, in the 1930s. Key debates included how voters learn, why they believe certain things and prefer particular policy options, how well their attitudes match their behaviours, and how much influence they have on government policy decisions.

Much of our understanding of human behaviour comes from the field of social psychology, where studies of public opinion typically employ one or more of four basic concepts: beliefs, values, attitudes and opinions.

  1. Belief systems tend to be thematically and psychologically consistent. They are the assumptions by which we live our lives, comprising our understanding of the world, our attitudes and our opinions.
  2. Values are ideals. They are our understanding of the way things should be. Many researchers distinguish between ‘terminal’ and ‘instrumental’ values. Terminal values are ultimate social and individual goals, like prosperity and freedom. Instrumental values are the constraints on the means used to pursue our goals, such as honesty and loyalty.
  3. Attitudes are the relatively stable and consistent views we hold about people and objects. These are often defined as evaluations combining emotions, beliefs, knowledge and thoughts about something.
  4. Opinions are the expressions of attitudes, sometimes seen as narrower, more specific and more consciously held (as opposed to unconscious attitudes we may have formed without deliberation) than attitudes. The idea that opinions are separate from attitudes is not universal, though.

Do voters hold meaningful political opinions?

Political science research was deeply influenced by the behavioural revolution that occurred during the mid-20th century. Changes in approaches to investigation permitted researchers to measure citizens’ preferences and behaviours, raising questions about the capacity of citizens and challenging some of the normative assumptions of representative democracy. Researchers began to study whether voters are competent political agents who can be considered rational actors.

Besides social psychology, theories of voter behaviour and public opinion have been heavily influenced by the discipline of economics. Rational choice theory has been one of the most consequential of these theories. It is a set of normative standards and empirical models used to understand human decision making. These operate on the assumption that aggregate social behaviour is the result of independent decisions made by individual rational actors. These decisions are informed by a set of defined preferences from among the available alternatives.

Preferences are assumed to be complete and transitive. Individuals with complete preferences can always say which of two alternatives they prefer or that neither is preferred. Transitive preferences are always internally consistent in their order of desirability. If option A is preferred over option B and option B is preferred over option C, then A must always be preferred over C. When preference order is both transitive and complete, it is commonly called a ‘rational preference relation’, and those who comply with it ‘rational agents’. In this framework, the rational agent can take available information, probabilities of events and potential costs and benefits into account when determining preferences and will act consistently in selecting the alternatives that maximise their interests.3
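To make the idea of a rational preference relation concrete, the short sketch below checks whether a hypothetical voter’s pairwise preferences over three alternatives are complete and transitive. The alternatives and preferences are invented purely for illustration; any set of pairwise judgements could be tested the same way.

```python
from itertools import permutations

# Hypothetical pairwise preferences over three policy alternatives.
# prefers[(a, b)] is True when the voter strictly prefers a to b.
alternatives = ["A", "B", "C"]
prefers = {
    ("A", "B"): True,   # A preferred to B
    ("B", "C"): True,   # B preferred to C
    ("A", "C"): True,   # A preferred to C (required for transitivity)
}

def is_complete(prefs, alts):
    """Every pair of distinct alternatives can be compared."""
    return all((a, b) in prefs or (b, a) in prefs
               for a in alts for b in alts if a != b)

def is_transitive(prefs, alts):
    """If a is preferred to b and b to c, then a must be preferred to c."""
    def better(a, b):
        return prefs.get((a, b), False)
    return all(not (better(a, b) and better(b, c)) or better(a, c)
               for a, b, c in permutations(alts, 3))

# A voter who passes both checks holds a 'rational preference relation'
# in the sense used by rational choice theory.
print("Complete:", is_complete(prefers, alternatives))      # True
print("Transitive:", is_transitive(prefers, alternatives))  # True
```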

Anthony Downs’ An economic theory of democracy is one of the most influential political science works published after the Second World War.4 In a Downsian view of electoral democracy, voters are rational utility maximisers. They support the party with policies closest to their own preferences (which are generally expected to benefit their self-interest). Parties and candidates are also utility maximisers, seeking the private benefits of public office and, therefore, electorally motivated and willing to adjust their policy offerings to match the preferences of the median voter. In doing this, parties provide voters with the greatest utility for their vote and increase their chances of electoral success.

Not all political scientists believe voters are rational actors. The public’s general lack of knowledge about all but the most important political events and actors is one of the best documented findings in all of the social sciences,5 and citizens’ limitations as political actors have led some political scientists to question whether they are capable of acting as we might expect and hope, even in the modern era.

American writer and political commentator Walter Lippmann6 echoed many earlier views of the public. He argued citizens were unable to behave rationally or think deeply. Similarly, one of the founders of modern public opinion research, Philip E. Converse,7 found that, in the 1950s and 1960s, only slim majorities of voters knew the simplest facts about how government worked. Fewer still held informed attitudes on even the most significant political issues, and the opinions they did hold lacked consistency across issues. Citizens’ answers to individual survey questions on one issue were largely unrelated to their answers to other questions on different (but related) topics. For instance, a respondent who wanted lower taxes would not always also support less spending. More broadly, very few voters were consistently on the left or right of the political spectrum.

To make matters worse for normative concepts of democracy, citizens were also found by Lazarsfeld, Berelson and Gaudet,8 Butler and Stokes,9 and Converse to provide inconsistent answers when asked the same question at different times. A respondent, asked whether they supported higher spending or lower taxes one year, often completely changed their position when asked two years later.

Much of the research from social psychology supports this cynicism about citizen competence. Psychological and experimental research has repeatedly demonstrated the irrationality of individuals10 and the influence of context on preferences and decision making.11 Citizens’ policy positions are often unstable and inconsistent.12 Behaviour is frequently influenced by emotion13 and framing.14 Voters use evidence incorrectly or prejudicially and are often overly confident about their conclusions,15 and their acceptance of new evidence is clouded by motivated reasoning.16

Reconciling these findings with democratic theory

Concerns about the capacity of citizens to meaningfully participate in electoral democracy are inconsistent with the general assumptions of classical democratic theory, which requires citizens to be informed and attentive for democracy to properly function. These concerns are typically reconciled with the normative ideals of democratic theory through the wisdom of the crowd argument. Aggregate opinion can be much more stable and apparently ‘rational’ than individual opinion, so long as error in individual opinions is assumed to be random.17 Even large proportions of random error ‘cancel out’ when aggregated, resulting in reasonably efficient and stable collective choices.
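A minimal simulation can illustrate why random individual error cancels out in the aggregate. In the sketch below all numbers are hypothetical: each expressed opinion is an underlying common signal plus individual random noise, and the average across many such opinions lands far closer to that signal than a typical individual does.

```python
import random

random.seed(1)

TRUE_POSITION = 0.6   # hypothetical underlying collective preference (0-1 scale)
NOISE_SD = 0.3        # large individual-level random error
N_VOTERS = 100_000

# Each expressed opinion = underlying signal + individual random error.
opinions = [TRUE_POSITION + random.gauss(0, NOISE_SD) for _ in range(N_VOTERS)]

aggregate = sum(opinions) / N_VOTERS
typical_individual_error = sum(abs(o - TRUE_POSITION) for o in opinions) / N_VOTERS

print(f"Aggregate (mean) opinion:       {aggregate:.3f}")                 # close to 0.6
print(f"Typical individual-level error: {typical_individual_error:.3f}")  # far larger
```

The cancellation depends entirely on the assumption flagged above: if errors are systematic rather than random – for example, widely shared misinformation – they do not wash out when opinions are aggregated.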

There may also be some problems with the concerns about voter competence raised above. In a number of countries, representative democracy appears to be working relatively well. Lau and Redlawsk18 estimate that, at the five US presidential elections between 1972 and 1988, approximately 75 per cent of citizens voted the same as they would have if they had been operating with ‘full information’. Here, full information is the decision they would make if they had the greatest possible understanding of the choice they were making and the alternatives.

Lippmann and Converse may have been overly pessimistic about voters’ political sophistication. It is possible that unrealistic goals were set for the average voter. There were also measurement problems with some of the earlier studies. The period Converse studied may also have been one with unusually low levels of ideological difference between the major political parties in the USA (where he conducted his research), making it harder for voters to understand the differences between the parties or to adopt strong positions on many areas of policy.19

Gerald Pomper20 studied the association between party identification and voter preferences on six issues between 1956 and 1968. Consistent with Converse’s findings, from 1956 to 1960 the relationship between party identification and preferences was weak or non-existent. However, this relationship strengthened for all six issues between 1960 and 1964. Regardless of starting position, from 1964 Democrats were more likely to be liberal and Republicans conservative in all of these policy areas.

Earlier studies of voters’ political preferences also failed to take into account the measurement error inherent in public opinion surveys. Responses to these surveys can be influenced by external stimuli, which may change the salience of different attitudes at different times,21 and questions may be unclear or the respondent may become confused or bored, answering incorrectly or carelessly. These problems with survey design can result in greater apparent instability in the political attitudes held by citizens than is actually the case. Most voters hold relatively stable political preferences, but their survey responses contain a random component that adds noise.22

The general consensus in the modern political science literature is that most voters hold positions on a wide range of public policy issues that can be measured, with error, which is largely created by imprecise question wording and respondent inattention.23
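A similar sketch shows how measurement error alone can make stable attitudes appear unstable across survey waves. The assumptions here are deliberately simple and hypothetical: every respondent has a fixed latent attitude, and each survey answer adds independent noise from vague wording or inattention.

```python
import random

random.seed(2)

N = 5_000
LATENT_SD = 1.0   # spread of stable underlying attitudes across respondents
ERROR_SD = 1.0    # measurement error from vague questions, inattention, etc.

latent = [random.gauss(0, LATENT_SD) for _ in range(N)]
wave_1 = [a + random.gauss(0, ERROR_SD) for a in latent]  # survey in year 1
wave_2 = [a + random.gauss(0, ERROR_SD) for a in latent]  # same question, asked again later

def correlation(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    sx = (sum((xi - mx) ** 2 for xi in x) / n) ** 0.5
    sy = (sum((yi - my) ** 2 for yi in y) / n) ** 0.5
    return cov / (sx * sy)

# Underlying attitudes are perfectly stable, yet the observed wave-to-wave
# correlation is attenuated to roughly LATENT_SD^2 / (LATENT_SD^2 + ERROR_SD^2).
print(f"Wave-to-wave correlation: {correlation(wave_1, wave_2):.2f}")  # about 0.5
```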

In defence of voters

Voters certainly face limitations, but how far do these extend? Voting is cognitively demanding. Most political issues are complex, abstract and remote from citizens’ lives, and voters lack the time and resources to properly make informed policy distinctions between parties.

The average citizen is not always capable of making – or willing to invest the resources to make – optimal choices. Rather, we as individuals are often forced to trade off effort and optimisation. It cannot be expected that voters will have a high degree of familiarity with policy details in most domains, nor should it be expected that they will behave equally rationally across all issues.

Although citizens may not be familiar with policy details, they usually exhibit behaviour that is logical, responding to circumstances with ‘bounded rationality’ to obtain some utility from their vote. ‘Bounded rationality’ makes different assumptions from those of economic theories of rationality.24 Rather than being intimately familiar with policy themselves, citizens learn from their own lived experience and take cues from parties, elites and opinion leaders, who actively promote specific policies and signal to their supporters which political matters and issues are important.25

How citizens learn

Political and social psychology provide substantial critiques of citizens’ capacities to perform their democratic duties, but they also help us reconcile voters’ limitations with the idea that democracies work reasonably well.

Voters do not necessarily need detailed knowledge about politics and policy to fulfil their democratic duty. They can be thought of as ‘cognitive misers’, who minimise the effort involved in making potentially complex or difficult decisions using shortcuts, learning only as much as they need to and receiving and interpreting signals from elected officials, opinion leaders and other sources.

One way voters make political choices (such as choosing who to vote for) without a substantial investment in information gathering is through the use of heuristics, or cognitive shortcuts.26 These are also used when making non-political decisions.

Individuals are using a heuristic, for instance, when they rely more heavily on the first piece of information offered (the ‘anchor’) when making decisions. This is known as the anchoring heuristic. An example is the first price mentioned during a negotiation. If a salesperson offers a very high price to start negotiations, this becomes a psychological anchor for the buyer, meaning the counter-offer and final price are more likely to be higher than otherwise.

The representativeness heuristic is another cognitive shortcut. This involves comparing a problem or decision to the most representative mental prototype. When a voter is trying to decide if a politician is trustworthy, they might compare that politician’s characteristics to other people they have known in the past. If the politician shares traits with a kind grandfather or harsh teacher, they might be assumed to be gentle and trustworthy or critical and mean. This results in classifications that may or may not be correct, but saves on the effort of seeking additional information for critical analysis.

Party identification can also be thought of as a form of heuristic that guides voter behaviour.27 This helps to make politics less cognitively demanding for voters. Once citizens decide which party generally represents their interests, this single piece of information can act as a shortcut guiding how they view issues. A policy advocated by one’s party is more likely to meet with favour than one advocated by another party. For instance, if the Liberal Party promotes a new policy, voters who identify as Liberal supporters may be more inclined to believe this is a good idea than if the Labor Party had proposed the same policy.

Party identification can also guide how we view events. Bartels showed that voters’ statements about objective facts, such as whether unemployment increased or decreased, were heavily influenced by party identification.28 Under Republican presidents, Democratic identifiers were more likely to believe the economy was doing worse than it was and Republicans were likely to believe it was doing better; the reverse is usually true when a Democrat is in office.

Another common shortcut is the availability heuristic. This involves assessing the probability of an event based upon how easy it is to recall similar cases. When you are trying to make a decision, you might quickly remember a number of relevant examples. Since these are more readily available in your memory, you will likely judge these outcomes as being more common than examples that are harder to recall. For example, it might be easy for individuals to remember media coverage of violent crime, but harder to recall car crash fatalities, which are more common but less frequently reported.

The availability heuristic is driven, in part, by the influence of mass media. Newspapers, radio, television and news on the internet provide examples of crime, terrorism, plane crashes and shark attacks out of proportion to their actual incidence compared to other events. This often causes us to overestimate their likelihood. The availability heuristic allows politicians – whose message is amplified by the media – to influence us with cues and gives the media itself the power to help set the agenda.

Agenda setting, elite cues and framing

The reason voters use heuristics or other shortcuts – as Lippmann29 and Zaller30 identified – is that in large and complex societies they generally have no other choice. Their time and attention are finite, and political and policy issues are complicated. There is too much happening, often at a significant distance from their lived experience, for the average citizen to form a detailed and intimate understanding of every event, policy and personality that makes up modern politics in electoral democracies.

One of the major sources of political information relied upon by voters is the media. Its influence on voter attitudes and decision making has long been recognised. It is important to realise that the information the public receives – and that shapes its opinion – is never a full account of all important facts. Rather, it is a selective view of what is happening, which voters use to try to understand their political environment.31

By choosing to report certain stories, the news media and other actors control the flow of information to the public. They cannot necessarily tell people what to believe, but they can shape perceptions of the importance of issues.32 This process is called agenda setting.

The media are not the only actors that influence public opinion. Cues can be taken from parties, elites and opinion leaders, who actively promote specific policies to voters. Individuals use these signals to save time and effort. Rather than attempting to master all the issues that might be important, voters can rely on experts and political elites to help shape their opinions on matters about which they are not well informed.

Political elites are not just politicians but also policy experts and religious leaders, union officials and business executives, environmental campaigners and other interest groups, and journalists. Individuals may also take cues from personal acquaintances if they are seen as being more knowledgeable about a particular issue.33

As with heuristics, the use of cues is an imperfect but necessary part of democratic engagement by ordinary citizens. For the vast majority of individuals, participation would be impossible without it. It can be a reasonably sophisticated process. Voters can take into account the source and nature of cues on a particular issue, including how close the position taken by the source of the cue is to the recipient’s views on other issues.34

Beyond agenda setting and cues, the media and elites – including political campaigns run by parties and candidates – may also use framing to influence voters.35 This occurs when an issue is portrayed a particular way to guide its interpretation. Individuals will react to a choice differently, depending on how it is presented.

Most political issues are heavily framed to persuade voters. In Australia, the decision to call people arriving by boat to seek asylum ‘refugees’, ‘boat people’ or ‘illegals’ is the result of framing. The choice of words and imagery is often deliberate – designed to evoke a particular reaction from the audience. Political actors try to place their cause and message in a positive frame or their opponent’s in a negative frame.

Aggregating individual preferences: studying voter behaviour

We can study voter behaviour in a number of ways: through electoral results (aggregate studies) and using public opinion surveys (individual-level studies). Both have strengths and weaknesses.

Measuring aggregate voter behaviour

The ultimate expression of public opinion is the votes cast by citizens at elections, referendums and plebiscites, which we can examine to understand what voters think about particular issues and how they behaved in different parts of the country.

We can combine election results at the level of legislative districts – the discrete geographic spaces represented in a legislature, such as the Australian parliament – with other information, such as census data on the average age of an electorate. This allows us to see how, for example, the average age of an electorate was associated with support for different political parties or policy preferences.

However, there are risks associated with relying exclusively on these aggregate election results to study voter behaviour. Doing so risks committing an ecological fallacy, a type of error where inferences are made about individuals based on aggregate group-level data. For instance, we may observe that the Liberal–National (Coalition) parties do better in low-income electorates and infer that lower income voters support these parties. However, this aggregate relationship may be misleading: rural electorates tend to have lower average incomes and rural voters tend to be more conservative, so the pattern may reflect geography rather than the preferences of low-income voters themselves. Within individual districts, voters with lower incomes may actually be more likely to vote for the Labor party. We cannot be sure whether this is the case without individual-level data, including the kind of information collected through public opinion surveys.
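The hypothetical simulation below makes the ecological fallacy concrete. All of the numbers are invented: rural electorates are assigned lower average incomes and more conservative voters, yet within every electorate low-income voters are made less likely to vote for the Coalition, so the aggregate and individual-level patterns point in opposite directions.

```python
import random

random.seed(3)

# Invented probabilities: geography drives most of the Coalition vote,
# while within any electorate low-income voters are LESS pro-Coalition.
def coalition_prob(rural, low_income):
    base = 0.60 if rural else 0.35                 # geography effect
    return base - (0.15 if low_income else 0.0)    # individual income effect

voters = []
for _ in range(100_000):
    rural = random.random() < 0.5
    low_income = random.random() < (0.70 if rural else 0.20)  # income depends on geography
    vote = random.random() < coalition_prob(rural, low_income)
    voters.append((rural, low_income, vote))

def share(rows):
    return sum(v for *_, v in rows) / len(rows)

# Aggregate view: the poorer (rural) seats look MORE pro-Coalition.
print("Rural, lower-income seats: ", round(share([r for r in voters if r[0]]), 3))
print("Urban, higher-income seats:", round(share([r for r in voters if not r[0]]), 3))

# Individual view, holding geography constant: low-income voters are LESS pro-Coalition.
for rural in (True, False):
    low = share([r for r in voters if r[0] == rural and r[1]])
    high = share([r for r in voters if r[0] == rural and not r[1]])
    label = "Rural" if rural else "Urban"
    print(f"{label} seats - low income: {low:.3f}, higher income: {high:.3f}")
```

Inferring individual behaviour from the first two (aggregate) numbers alone would get the individual-level relationship exactly backwards, which is why individual-level survey data matter.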

Using surveys to understand voter behaviour

As students and scholars of public opinion, we want to examine the attitudes and behaviours of voters more frequently than every three (or more) years, when elections are held, and to make inferences about the behaviour of individual citizens, not just aggregate-level election results. Generally, electoral returns are not disaggregated by demographics, socio-economic status, issue preferences or other attributes of citizens. We also want to understand attitudes towards issues that elections are not necessarily held on. Quantitative data from random, representative samples of the electorate – public opinion surveys – can provide a snapshot.

Much of our exposure to public opinion surveys (commonly called ‘polls’) is through the ‘horse race’ coverage of politics – who is winning, who is unpopular and how much has changed in recent weeks or months. Survey research can be much more extensive than this and can be used to understand what shapes public opinion (Is it the media, politicians’ messages or culture?). Surveys are useful for understanding citizens’ attitudes towards policies, events and political leaders, and how they might vote at elections and respond to future political decisions. Surveys can also be used to examine the influence of public opinion on political and policy decisions made by leaders.

The history of public opinion surveys

Prior to the development of survey research, sociologists and political scientists generally studied behaviour and opinions by interviewing people in small groups. Although providing detailed information, this often resulted in samples that were too small and too concentrated in limited geographical areas (such as particular neighbourhoods or workplaces), making it impossible to make generalisations about the broader public. Journalists and magazines often conducted informal straw polls and interviews on the street, but these were more for entertainment than serious research.

Most of the tools on which modern sampling is built have their origins in the 1940s and 1950s. In the USA, Australia and most other representative democracies, populations became more urban (and therefore concentrated), household telephones became more common, mailing lists became more accurate and people became generally easier to reach.

A significant incentive for the development of better public opinion measures was the burgeoning US radio industry in the 1920s and 1930s. Broadcasts were primarily funded by advertisers, who wanted to know the size of audiences when agreeing to pay for air time. Statistical sampling provided this, with random samples of hundreds or thousands of people offering relatively accurate estimates of the general population.
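The arithmetic behind this claim is straightforward. The sketch below applies the standard margin-of-error approximation for a proportion estimated from a simple random sample; it ignores design effects, weighting and non-response, all of which widen the margin in real surveys.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion from a simple random sample.

    Assumes simple random sampling; real surveys with weighting and
    non-response have somewhat larger margins.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 500, 1_000, 2_000, 10_000):
    print(f"n = {n:>6}: +/- {margin_of_error(n) * 100:.1f} percentage points")

# n = 1,000 already gives roughly +/- 3 points; quadrupling the sample only
# halves the margin, which is why samples of about a thousand are so common.
```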

Political surveys followed, providing a way to regularly measure citizens’ privately held opinions. This was done by the news media, obtaining measurements of shifting opinion that they could report. Political parties, candidates and leaders also undertook surveys and used the data obtained to guide political decisions.

Early survey research relied on in-person interviews. Home telephones were not yet ubiquitous and were mostly owned by the wealthy. Mail surveys were difficult, as there was often an absence of complete and reliable lists of valid postal addresses. However, face-to-face surveys share many of the drawbacks of the small-group interviews discussed earlier. Regardless, these early efforts at sampling sometimes provided useful data and established the foundations for later efforts.

There are several types of surveys, and methodological decisions can influence the utility of different survey types for different purposes. First, researchers need to decide how they are going to select their sample. The most common method is opt-out, or random, sampling, which sits at the heart of modern survey research. It is built around the idea that every individual in the population of interest (e.g. citizens likely to vote in an election) has a known probability of being sampled. Random sampling helps us to secure a representative sample by providing the means to obtain what is intended to be an unbiased selection of the larger population. From address-based, in-home interview sampling in the 1930s to surveys by mail, random digit dialling after the growth of landlines and mobile phones, and online surveys, researchers have put significant effort into obtaining representative samples.

The near-universal acceptance and use of representative, random samples is due to a high-profile polling error more than 80 years ago. During the 1936 US presidential election, the then very popular magazine Literary Digest ran a mail-in survey that attracted more than two million responses. This is a truly massive sample size (generally a good thing), even by modern standards. Despite this, the magazine incorrectly predicted a landslide victory for Republican candidate Alf Landon over the incumbent Democrat, Franklin Roosevelt, who decisively won the election. The reason for the error? The magazine’s very biased sample of voters. Subscribers to the Literary Digest were predominantly car and telephone owners – an affluent group of voters who were not representative of the wider electorate – and Roosevelt’s supporters were under-represented.36

The attribute that made the Literary Digest sample so large – the huge list of subscribers who mailed in survey responses – also made it more error prone. It used a biased sample. The Literary Digest survey is what we call an opt-in survey. This is the other main form of sampling.

The problem with this form of survey is that the respondents who choose to opt in are often different from the population you are trying to study in important ways that correlate with the outcome you are researching, biasing the results. Smaller surveys conducted by George Gallup, Archibald Crossley and Elmo Roper, with samples composed of randomly selected voters, more accurately predicted the 1936 election results.37 Accordingly, opt-in convenience surveys were largely discarded by researchers in favour of random sampling.
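The Literary Digest episode can be reproduced in miniature. In the hypothetical simulation below, the size and preferences of the affluent ‘subscriber’ subgroup are invented for illustration; the point is simply that enlarging a biased sample does not remove the bias, while a far smaller random sample recovers the population figure.

```python
import random

random.seed(4)

# Invented electorate: 55% support candidate A overall, but only 30% of the
# affluent 'subscriber' subgroup (25% of voters) do.
population = []
for _ in range(200_000):
    subscriber = random.random() < 0.25
    supports_a = random.random() < (0.30 if subscriber else 0.633)
    population.append((subscriber, supports_a))

true_share = sum(v for _, v in population) / len(population)

# Huge opt-in style sample drawn only from subscribers (a biased frame).
subscribers = [v for s, v in population if s]
big_biased = random.sample(subscribers, 40_000)

# Small random sample drawn from the whole electorate.
small_random = [v for _, v in random.sample(population, 1_000)]

print(f"True support for A:      {true_share:.3f}")                            # ~0.55
print(f"Biased sample of 40,000: {sum(big_biased) / len(big_biased):.3f}")      # ~0.30
print(f"Random sample of 1,000:  {sum(small_random) / len(small_random):.3f}")  # ~0.55 +/- 0.03
```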

In addition to the nature of the sample, there are also different methodologies with which to collect a survey sample. The most common forms of surveys are:

  • In-person survey: these allow the interviewer to build a personal rapport with respondents and gain more complete answers. This method can also allow for longer and more detailed surveys, and interviewers can use visual aids. However, in-person surveys are much more expensive than other methods and are geographically constrained by the area an interviewer can cover. They also have significant problems with social desirability bias – the tendency of respondents to answer questions in a way they believe will be viewed favourably by others, under-reporting potentially undesirable behaviour (e.g. eating junk food, smoking) and over-reporting what might be construed as good behaviour (e.g. exercising daily, eating well, working hard). Due to the cost involved, this methodology is not used regularly for surveys in the modern era.
  • Mail survey: these surveys have the benefit of being affordable, suffer less from social desirability bias, as there is no human interviewer directly involved, and can be longer than phone polls. As a result, they have remained popular for academic surveys. However, as there is no person involved – either on the other end of the phone or in the room with the respondent – response rates can be very low.
  • Phone survey: this is the most frequently used survey method. Phone surveys are cheaper than in-person interviews. Most general population telephone samples use random digit dialling, with phone numbers sampled from computerised lists of all possible telephone exchanges in the relevant population. These surveys generally provide a high-quality representative sample and are fast and reliable. A national representative sample of a thousand respondents can usually be collected in a few days at limited cost. However, the rapid spread of mobile phones and caller ID has complicated survey research. In addition, phone surveys tend to be quite time-limited, as it is difficult to keep the respondent on the phone for more than a few questions. To reduce costs, some survey research companies have adopted ‘robocall’ technologies. These use prerecorded questions, with respondents providing answers through the keys on their telephone or through automated voice recognition. This reduces costs and the problem of social desirability bias. There is no interviewer to offend or be judged by – or for the researcher to pay. However, robopolls have high non-response rates and can only be used for shorter interviews, as respondents are more willing to hang up on a machine than a human.
  • Online survey: these tend to have lower response rates than surveys involving human interviewers. However, they have fewer problems with social desirability bias and tend to be affordable. Originally, they were criticised for not being representative, with their samples skewed towards a young, internet-connected population. However, this has become less of a problem as internet penetration has increased. Additionally, some survey research companies have tried to build representative panels that samples can be drawn from, often providing high-quality results.

Conclusions

Learning about voter behaviour is the first step to understanding if and how democracy works. For students of electoral democracy, this is important as representation sits at the heart of democratic theory. Research shows that citizens’ aggregate preferences influence policy outcomes to varying degrees.38

While there are questions about the ability of voters to function as competent political actors, some of the early critiques were found to have been overly pessimistic. It is arguable that many studies set unrealistic expectations of the average voter. Rather, public opinion and the involvement of voters are necessary safeguards of democracy.

References

Achen, Christopher H. (1975). Mass political attitudes and the survey response. The American Political Science Review 69(4): 1218–31. DOI: 10.2307/1955282

Bartels, Larry M. (2002). Beyond the running tally: partisan bias in political perceptions. Political Behavior 24(2): 117–50. DOI: 10.1023/A:1021226224601

Blumer, Herbert (1946). Collective behavior. In Robert E. Park, ed. Principles of sociology, 219–88. New York: Barnes & Noble.

Brader, Ted (2012). The emotional foundations of democratic citizenship. In Adam Berinsky, ed. New directions in public opinion, 193–216. New York: Routledge.

Butler, David, and Donald E. Stokes (1973). Political change in Britain, 2nd edn. London: Macmillan.

Campbell, A., P.E. Converse, W.E. Miller and D.E. Stokes (1960). The American voter. New York: John Wiley & Sons.

Cohen, Bernard C. (2001). The press and foreign policy. New York: Harcourt.

Converse, Philip E. (2000). Assessing the capacity of mass electorates. Annual Review of Political Science 3(1): 331–53. DOI: 10.1146/annurev.polisci.3.1.331

—— (1975). Public opinion and voting behavior. In Fred I. Greenstein and Nelson W. Polsby, eds. Handbook of political science, 75–169. Reading, MA: Addison-Wesley.

—— (1964). The nature of belief systems in mass publics. In David E. Apter, ed. Ideology and discontent. New York: The Free Press.

Delli Carpini, Michael X., and Scott Keeter (1996). What Americans know about politics and why it matters. New Haven, CT: Yale University Press.

Dowding, K. (2009). Rational choice. In M. Flinders, A. Gamble, C. Hay and M. Kenny, eds. The Oxford Handbook of British Politics. Oxford: Oxford University Press.

Downs, Anthony (1957). An economic theory of democracy. New York: Harper.

Feldman, Stanley (1990). Measuring issue preferences: the problem of response instability. In James A. Stimson, ed. Political analysis 1: 25–60. Ann Arbor: University of Michigan Press.

Gilens, Martin (2012). Affluence and influence: economic inequality and political power in America. Princeton, NJ: Princeton University Press.

Gilens, Martin, and Naomi Murakawa. (2002). Elite cues and political decision making. In Michael X. Delli Carpini, Leonie Huddy and Robert Y. Shapiro, eds. Political decision making, deliberation and participation, 15–49. New York: Emerald Group Publishing Limited.

Gilovich, Thomas (1991). How we know what isn’t so: the fallibility of human reason in everyday life. New York: The Free Press.

Gosnell, Harold F. (1937). How accurate were the polls? Public Opinion Quarterly 1(1): 97–105. http://www.jstor.org/stable/2744805

Graber, D. (2001). Processing politics: learning from television in the internet age. Chicago: University of Chicago Press.

Habermas, Jürgen (1989). The structural transformation of the public sphere: an inquiry into a category of bourgeois society. Cambridge, MA: MIT Press.

Hindmoor, A. (2006). Rational choice. New York: Palgrave Macmillan.

Kahneman, Daniel (2003). Maps of bounded rationality: psychology for behavioral economics. American Economic Review 93(5): 1449–75. DOI: 10.1257/000282803322655392

Kahneman, Daniel, Paul Slovic and Amos Tversky. (1982). Judgment under uncertainty: heuristics and biases. Cambridge, UK: Cambridge University Press.

Kahneman, Daniel, and Amos Tversky (1979). Prospect theory: an analysis of decision under risk. Econometrica 47(2): 263–92. DOI: 10.2307/1914185

Lau, Richard R., and David P. Redlawsk (1997). Voting correctly. American Political Science Review 91(3): 585–98. DOI: 10.2307/2952076

Lazarsfeld, Paul F., Bernard Berelson and Hazel Gaudet (1968 [1948]). The people’s choice: how the voter makes up his mind in a presidential campaign, 3rd edn. New York; London: Columbia University Press.

Levendusky, Matthew S. (2010). Clearer cues, more consistent voters: a benefit of elite polarization. Political Behavior 32(1): 111–31. DOI: 10.1007/s11109-009-9094-0

Lippmann, Walter (1927). The phantom public. London: The Macmillan Company.

—— (1922). Public opinion. London: Allen & Unwin.

Lupia, Arthur (2016). Uninformed: why people seem to know so little about politics and what we can do about it. Oxford: Oxford University Press.

Lupia, Arthur, and Matthew D. McCubbins. (1998). The democratic dilemma. Cambridge: Cambridge University Press.

McGann, A. (2016). Voting choice and rational choice. In Oxford Research Encyclopedia, Politics. Oxford: Oxford University Press.

Mills, C. Wright (1956). The power elite. New York: Oxford University Press.

Nie, Norman H., Sidney Verba and John R. Petrocik. (1976). The changing American voter. Cambridge, MA; London: Harvard University Press.

Page, Benjamin I., and Robert Y. Shapiro (1992). The rational public: fifty years of trends in Americans’ policy preferences. Chicago: University of Chicago Press.

Pomper, Gerald M. (1972). From confusion to clarity: issues and American voters, 1956–1968. The American Political Science Review 66(2): 415–28. DOI: 10.1017/S0003055400259285

Popkin, Samuel L. (1991). The reasoning voter: communication and persuasion in presidential campaigns. Chicago; London: University of Chicago Press.

Rabin, Matthew (1998). Psychology and economics. Journal of Economic Literature 36(1): 11–46. http://www.pugetsound.edu/facultypages/gmilam/courses/econ291/readings/Rabin98.pdf

Redlawsk, David P., and Richard R. Lau (2013). Behavioural decision making. In Leonie Huddy, David O. Sears and Jack S. Levy, eds. The Oxford handbook of political psychology, 2nd edn, 130–64. New York: Oxford University Press.

Squire, Peverill (1988). Why the 1936 Literary Digest poll failed. Public Opinion Quarterly 52(1): 125–33. DOI: 10.1086/269085

Tversky, Amos, and Daniel Kahneman (1991). Loss aversion in riskless choice: a reference-dependent model. The Quarterly Journal of Economics 106(4): 1039–61. http://www.sscnet.ucla.edu/polisci/faculty/chwe/austen/tversky1991.pdf

—— (1981). The framing of decisions and the psychology of choice. Science 211: 453–58. http://www.stat.columbia.edu/~gelman/surveys.course/TverskyKahneman1981.pdf

Watts, Duncan J., and Peter Sheridan Dodds (2007). Influentials, networks, and public opinion formation. Journal of Consumer Research 34(4): 441–58. DOI: 10.1086/518527

Zaller, John (1992). The nature and origins of mass opinion. Cambridge, UK: Cambridge University Press.

Zaller, John, and Stanley Feldman (1992). A simple theory of the survey response: answering questions versus revealing preference. American Journal of Political Science 36(3): 579–616. http://www.uvm.edu/~dguber/POLS234/articles/zaller_feldman.pdf

About the author

Dr Shaun Ratcliff is a lecturer in political science at the United States Studies Centre at the University of Sydney. His research examines public opinion, the behaviour of political actors and the role of parties as interest aggregators in the USA, Australia and other democracies. He teaches public opinion and the use of quantitative research methods.

1 Blumer 1946.

2 Habermas 1989.

3 For a general discussion on rational choice, see Hindmoor 2006. For specific discussions on rational choice theory as a framework for understanding politics, see McGann 2016 and Dowding 2009.

4 Downs 1957.

5 Converse 1975; Delli Carpini and Keeter 1996.

6 Lippmann 1927; Lippmann 1922.

7 Converse 1964.

8 Lazarsfeld, Berelson and Gaudet 1968 [1948].

9 Butler and Stokes 1973.

10 Redlawsk and Lau 2013.

11 Rabin 1998.

12 Converse 1964.

13 Brader 2012.

14 Kahneman 2003; Kahneman and Tversky 1979; Tversky and Kahneman 1991.

15 Gilovich 1991.

16 Bartels 2002.

17 Page and Shapiro 1992.

18 Lau and Redlawsk 1997.

19 Nie, Verba and Petrocik 1976, 99, 179–80.

20 Pomper 1972.

21 Zaller 1992; Zaller and Feldman 1992.

22 Achen 1975; Feldman 1990.

23 More recently, Converse (2000) clarified his position on this issue, stating that survey item responses are probabilistic over a ‘latitude of acceptance’, with this probability space varying depending on the political sophistication and interest of the respondent.

24 Kahneman 2003.

25 Gilens and Murakawa 2002; Levendusky 2010; Lupia 2016; Lupia and McCubbins 1998; Popkin 1991.

26 Kahneman, Slovic and Tversky 1982.

27 Campbell et al. 1960.

28 Bartels 2002.

29 Lippmann 1922, 59.

30 Zaller 1992, 6.

31 Graber 2001.

32 Cohen 2001.

33 Watts and Dodds 2007.

34 Gilens and Murakawa 2002.

35 Tversky and Kahneman 1981.

36 Squire 1988.

37 Gosnell 1937.

38 Gilens 2012.