A Primer on Polls

How they work. The history of polls. How a magazine went bankrupt over a poll. Amusing anecdotes about polls. Can you trust a Cosmo or Playboy poll? The accuracy of polls. And many other facts.

by Jean Marie Hamilton

It’s estimated that in 1980 candidates for local and national public office spent approximately $20 million on polls and that over 1,500 surveys were taken during the election year. In addition, corporations spent over $750 million polling Americans about their opinions on everything from soup to nuts. Polling has become a $4 billion-a-year business, and if history is any predictor, polling in 1984 will surpass all previous records.

Beginning officially February 28 with the New Hampshire primary, and unofficially February 20 with the Iowa caucuses, the public will be inundated with pollsters’ perceptions of how it will vote in the November presidential election. While there are as many kinds of polls as there are subjects to be polled, the presidential poll is the standard against which all other polls are measured. It is the poll for which most pollsters reserve their most advanced and expensive techniques, the poll that is immediately judged a success or a failure. It has been known to make or break pollsters and has sent one magazine to its demise. Ironically, for a good many noted pollsters, the principal source of revenue is not the election poll but the consumer market poll and market research.

For the uninitiated, we present here a guide to polling in America – with special emphasis on political polling – something to help the nonpolled (according to research, only 15 to 35 percent of the American public has ever been asked its opinion on soup or nuts or anything else) make it through the 1,500th survey on the race for our vote.

History

The poll as we know it today came into existence in the 1930s, but the art of determining how people are going to vote, who is going to win an election and what the public opinion of the day is has been practiced throughout U.S. history. A look at the time line below makes it clear that the poll has been making news for years, and will continue to do so.

1700s: Poll books of early politicians record who voted and how they voted.

Jefferson Administration, 1801-1809: Regular canvassing of voters – by individual political parties of party members only – begins. Voters are asked only about their voting intentions; no demographic or attitudinal questions are included.

Nineteenth Century: Canvassing becomes widely utilized, with national parties using armies of volunteers to canvass voters.

1824: The first recorded straw vote appears in the Harrisburg Pennsylvanian; Andrew Jackson is the favored presidential candidate in Wilmington.

1850: J.D.B. DeBow, the director of the 1850 census, utilizes the concept of the random sample, sampling 23 counties and cross-tabulating data regarding marriage, schooling and inequality of wealth.

1888: The term “dark horse” is used by the Boston Journal to describe a candidate other than the leading contenders likely to emerge as a winner of an election. The process of election watching and candidate watching becomes known as horse-race journalism.

1892: The Democratic National Committee spends $2.5 million to circulate campaign pamphlets and personalized letters and sponsor 14,000 field workers and orators. The Republican National Committee spends $3.5 million in 1896 to sponsor what author Richard Jensen, writing in Public Opinion, calls polling, with an “intensity never matched before or since in a democratic society.”

1896: Chicago newspapers conduct straw polls to determine the outcome of the McKinley-Bryan presidential election. The Chicago Record spends more than $60,000 to mail postcard ballots to every registered voter in Chicago and to a random sample of one voter in eight in twelve Midwestern states. A quarter of a million returns predict McKinley will win; the prediction is off by only .04 percent in Chicago but fails outside the city.

First three decades of the twentieth century: Straw polls become even more popular and are conducted by the Hearst Newspapers, New York Herald, Cincinnati Enquirer, Columbus Dispatch, Chicago Tribune, Omaha World-Herald and the Des Moines Tribune, among others.

1916: The Columbus Dispatch begins systematic polling in Ohio and by 1920 conducts polls using geographical locations and a quota system based on party, sex, religion, nationality and economic status. Literary Digest, a popular weekly magazine, begins the first of its straw polls, focusing on presidential elections.

World War I: Army psychologists administer intelligence and aptitude tests to place recruits in the right jobs. The art of designing questionnaires improves, and statistical analysis reveals patterns of response.

1920s: Advertising agencies and the Curtis chain pioneer buyer-attitude studies.

1930s: Media researchers, government and academic statisticians improve upon the sampling technique, adopting quota sampling.

1932: Henry Link, psychologist and media, advertising and marketing expert, creates the first modern poll, the Psychological Barometer, for the Psychological Corporation (still in business), surveying public attitudes on various products. Link’s polls are conducted in the home, not by mail, eliminating the problem of nonresponse.

1932: Mrs. Alex Miller runs for secretary of state in Iowa as a Democrat, and her son-in-law tests a public opinion sampling technique he developed while working on his doctorate. Mrs. Miller wins, and her son-in-law, George H. Gallup, is in business.

1935: George Gallup, Archibald Crossley and Elmo Roper launch the modern attitudinal poll, asking those polled about more than just how they intend to vote, correctly predicting Franklin Delano Roosevelt’s victory in 1936. The Gallup Poll is syndicated for newspapers. Roper conducts the Fortune Poll for Fortune magazine.

1936: Gallup gives Roosevelt 55.7 percent, Crossley predicts 53.8 percent and Roper says 61.7 – the president’s actual share is 62.5 percent. The Literary Digest predicts that Republican Alfred Landon will win and moves into polling infamy.

1936: King Features syndicates the Crossley Poll.

1940: President Roosevelt uses public opinion information gathered from polls to lead the public.

1940s: Media-supported or -conducted state polls such as Joe Belden’s Texas Poll (1940), Mervin Field’s California Poll (1947), the Des Moines Register Iowa Poll (1943) and the Minneapolis Tribune’s Minnesota Poll (1944) are organized.

1948: Major polls and pollsters predict a Dewey landslide. Polling suffers a credibility gap.

1956: Harris survey begins.

1960: John F. Kennedy utilizes polls during his presidential campaign.

1960s: Polls are conducted by CBS/New York Times, NBC/Associated Press, ABC/Harris, the Washington Post and the Los Angeles Times. Time and Newsweek sponsor opinion polls.

1967: Exit polls are introduced by CBS News. Voters are asked demographic questions and whom they voted for. In 1972 CBS News adds questions about the mood and motivations of the voters.

1972: Amitai Etzioni’s MINERVA system adds voting capability to the standard home telephone for conference calls of up to 30 people.

1976: Jimmy Carter hires pollster Patrick Caddell and comes from behind to win. Caddell becomes the first pollster to be a full-fledged member of the inner circle at the White House.

1980: NBC predicts Ronald Reagan has won the presidency by 8:15 p.m., before the polls have closed in the western states.

Polling Folklore

Down for the Count

Literary Digest magazine began conducting straw polls in 1916, focusing on presidential elections. The Digest made mass mailings, sometimes as many as 10 million pieces, to its readers and later began to mail ballots to the public at large with a subscription blank enclosed so the voter could see his vote tabulated in future issues. Archibald Crossley, a leading scientific pollster, joined the staff and conducted the Digest’s first systematic polls, using telephone directories and automobile registration files. In 1936, with Crossley no longer at the Digest, the magazine conducted its now-famous poll. To save money (it was the midst of the Depression), the magazine used its 1932 mailing list, sending 10 million ballots to every registered voter in Allentown, Pennsylvania, and to random samples of half the registered voters in Scranton and a third in Chicago. The Literary Digest poll predicted that Republican Alfred Landon of Kansas would defeat Franklin Delano Roosevelt for the presidency by 60 to 40 percent of the vote. Roosevelt was elected by a landslide 62.5 percent of the vote, and the magazine was discredited and went out of business. According to analysts, the poll had failed to take into account the nonresponse factor: one-fourth of the Landon supporters mailed back their cards, versus one-sixth of the Roosevelt voters. In addition, 59 percent of the upper-third income groups supported Landon, versus 30 percent of the lower-third and 18 percent of the voters on relief. (The Digest poll’s use of telephone lists and automobile registrations was biased toward the upper-income voters.)

Leave It to Winston

“Nothing is more dangerous than to live in the temperamental atmosphere of a Gallup poll, always taking one’s temperature,” said Sir Winston Churchill. “There is only one duty, only one safe course, and that is to be right and not to fear to do or say what you believe to be right.”

A Matter of Probability

At a social gathering in Princeton, a lady introduced herself to George Gallup, Sr., and asked why she had never been interviewed. Dr. Gallup explained to her that her chances of being interviewed were about as great as her chances of being struck by lightning. “But Dr. Gallup,” said the lady, “I have been struck by lightning.”

The Squeaky Wheel ...

“Mail, unfortunately, is not true as an indicator of the feelings of the people,” President John F. Kennedy told a 1962 press conference. “I got last week 28 letters on Laos... (and) 440 letters on the cancellation of a tax exemption for a ’mercy’ foundation.”

A Poll Is a Poll, According to Pollsters Surveyed

NBC’s Saturday Night Live generated 466,000 calls when it asked viewers to vote on whether a live lobster named Larry should be boiled alive at the end of a restaurant comedy sketch. Larry was reprieved by a narrow 12,000-vote margin.

The Basics of Polling

In terms of user interest, there are two broad categories of polls: public policy polls, involving issues and campaigns, and all other polls, which are basically consumer polls and market-research polls. Within these two categories, consumer and public, there are two types of polls: the simple, more easily verifiable poll on subjects such as who owns what, how many, how much money a person makes, product familiarity, product use or voting intentions; and the more difficult attitudinal poll, often dealing in abstract thoughts and perceptions such as environmental policy or nuclear arms control – issues that are a matter of interpretation and for which there is rarely a yes or no answer.

Basically, all polls are alike in that they rest to some extent on the theory of mathematical probability: if each member of a population has an equal chance of falling into a sample drawn from it, a number of unknowns about the entire population can be predicted from that sample. Unfortunately, beyond that broad generality, every poll is different in hundreds of ways, each accompanied by its own set of advantages and disadvantages.
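To make that principle concrete, here is a minimal sketch in Python (a modern illustration, not anything the pollsters described here used); the population and its 53 percent “true” figure are invented:

    import random

    random.seed(1984)

    # Hypothetical population: 1 = holds the opinion, 0 = does not.
    population = [1] * 53_000 + [0] * 47_000   # true figure: 53 percent

    # Every member has an equal chance of falling into the sample.
    sample = random.sample(population, 1_500)
    estimate = sum(sample) / len(sample)

    print(f"True figure: 53.0%   Sample estimate: {estimate:.1%}")

Run it a few times with different seeds and the estimate hovers within a couple of points of 53 percent – the entire business of polling rests on that regularity.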

There are three types of sampling done in polling: purposive, quota and probability. Purposive sampling means drawing the sample from a single cluster of the population because that population is such a rarity that this is the only way it could ever be reached economically; for example, to discover the views of an obscure sect, a small community of the sect in New York is surveyed.

Quota sampling requires establishing an overall framework similar to the distribution of the whole population by specific characteristics (males/females; under 18/over 18). The interviewer is then sent to a certain district or block where subjects fall into those characteristic groups and told to conduct interviews with a given number of persons. The choice of interviewee is left to the interviewer, and because the interviewer’s judgment intervenes, randomness is lost and the theory of probability is compromised; there’s a danger that those selected do not represent the characteristic group.

Probability sampling is the most accurate technique. It requires dividing an entire population into separate categories according to the size of the locality the people live in; geographic areas are then chosen on a random basis, and a specified number of interviews is conducted in each. Those interviewed are interviewed solely because the area in which they live has fallen into the sample, not because of any characteristic they are thought to possess. No single interview is representative by itself; only when all these individually unrepresentative elements are added together does the sample become representative.

Because pure probability sampling is expensive and quota sampling is unreliable, reputable pollsters use a technique known as modified probability sampling, which combines probability sampling to choose a geographic area with quotas to obtain those interviewed within that area.
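As a rough sketch of modified probability sampling – hypothetical areas, quota sizes and counts throughout, not any pollster’s actual design:

    import random

    random.seed(7)

    # Hypothetical frame: 100 areas, each holding (person_id, sex) pairs.
    areas = {
        f"area_{i}": [(f"p{i}_{j}", random.choice("MF")) for j in range(500)]
        for i in range(100)
    }

    # Probability step: areas fall into the sample purely at random.
    chosen_areas = random.sample(sorted(areas), 10)

    # Quota step: within each chosen area, fill fixed quotas by sex,
    # standing in for the interviewer's on-the-spot selection.
    interviews = []
    for name in chosen_areas:
        males = [p for p in areas[name] if p[1] == "M"]
        females = [p for p in areas[name] if p[1] == "F"]
        interviews += random.sample(males, 5) + random.sample(females, 5)

    print(f"{len(interviews)} interviews across {len(chosen_areas)} areas")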

Selecting the sample involves, then, a number of problems, the most common of which are listed below.

Sampling Error: There is always the possibility that the sample selected does not accurately represent the entire population. This is called the sampling error, and it does not account for any other errors, such as those introduced by interview techniques. A sample of 1,500 interviews is the standard in national opinion polls (polls conducted by television and newspapers generally have a much smaller sample size) because it has an error margin of plus or minus three percentage points at a 95 percent confidence level, which is considered acceptable. That means an election poll of 1,500 reporting that candidate A leads candidate B by 53 to 47 percent could be describing a race anywhere from 50-50 to 56-44, and in 5 out of 100 such polls the true figures will fall outside even that range. The smaller the sample, the larger the margin of error and the lower the confidence level. Regional breakouts of national election polls of 1,500 have an error of 5 or 6 percentage points; findings for Democrats or independents will be off by 4, Republicans by 5, Jews by 18 percent, blacks by 8 percent – all at lowering levels of confidence. If the sample has been limited to registered voters only, the sampling errors in each of the above cases will be 2 points higher. Generalizing from small subsamples can be very misleading.
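The plus-or-minus-three-points figure can be reproduced with the standard normal-approximation formula for a proportion; here is a minimal sketch in Python (the alternative sample sizes are merely illustrative):

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """Half-width of a 95 percent confidence interval for a proportion."""
        return z * math.sqrt(p * (1 - p) / n)

    for n in (1_500, 600, 200, 65):
        print(f"n = {n:5,d}: plus or minus {margin_of_error(n):.1%}")

    # n = 1,500 yields about 2.5 percent, conventionally rounded to
    # 3 points; the smaller subsamples show why regional and
    # small-group breakouts carry much larger errors.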

Once the sample is drawn, another problem arises when someone selected at random is not at home, refuses to answer the poll or lies. Studies have shown that mail surveys are prone to such inaccuracies, as are telephone surveys. It’s also known that a secret ballot on election choices, administered as part of an otherwise standard interview, is more effective at getting people to respond accurately than either a straight mail ballot or a personal interview. Dishonest answers can also skew a poll: in 1976 and 1978, National Election Studies of voting records conducted by the Center for Political Studies of the University of Michigan found that 14 percent of the sample misreported in the direction of social acceptability.

To compensate for some of these discrepancies, pollsters often weight their samples. For example, because people who are less likely to be found at home when the interviewer calls will be undersampled, compensation is made in the sampling formula.
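A minimal sketch of that kind of weighting, with invented respondents – each reports how many days a week someone is home, and rarely-home interviews count more:

    # Each respondent: (answer, days per week someone is at home).
    respondents = [("A", 7), ("A", 7), ("B", 2), ("A", 7), ("B", 1)]

    weighted = {}
    for answer, days_at_home in respondents:
        weight = 7 / days_at_home          # rarely home => larger weight
        weighted[answer] = weighted.get(answer, 0) + weight

    total = sum(weighted.values())
    for answer, w in sorted(weighted.items()):
        print(f"{answer}: {w / total:.0%}")

    # Unweighted, A leads 60-40; weighted, the hard-to-reach B
    # respondents pull the split to roughly 22-78.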

Whenever human judgment is introduced into the equation, whether by weighting or by the formulation of questions, bias is almost inevitable. In fact, most polling error is not statistical but interpretive. (Technical changes in surveying have had a substantial effect on presidential polling error: in seven of the national elections from 1936 to 1948, the mean error was 4.0 percentage points; in the five elections from 1970 through 1978, it was 1.0 point.) Especially difficult problems occur when citizens are polled about their attitudes.

Studies have shown that respondents can be conditioned by the questions that precede the survey’s main question. Hypothetical questions can weight responses heavily in one direction (“If John Anderson had a real chance of winning...”), and the wording or phrasing of a question in a certain manner can elicit varying responses. The problem is especially troublesome when pollsters must educate their interviewees on the subject of the interview, when interviewees are asked about subjects that are not directly related to their experiences or with which they are unfamiliar, or when they are asked to reconstruct past experiences (“How many times have you eaten Cheerios in the last month?”). The best example of how question phrasing and similar technicalities affect a poll’s outcome is the polling on the Strategic Arms Limitation Treaty (SALT II) between the U.S. and Russia. One poll asked, “Do you favor or oppose a new agreement between the U.S. and Russia which would limit nuclear weapons?” The agreement was favored by 81 percent. Another poll asked those polled if they would “favor ...the U.S. and Russia coming to a new SALT arms agreement,” and 72 percent agreed. A Decision/Making/Information poll, sponsored by the Coalition for Peace Through Strength, asked those polled if they were opposed to any agreement that “does not cover certain types of Soviet bombers and missiles which could strike the U.S.,” and 81 percent agreed. But the most telling poll of all was a New York Times/CBS News poll taken a few weeks before the SALT treaty was signed: fewer than one in three Americans could even identify which countries would sign the pact.

Studies have also found that the stability, coherence and intensity of polling answers are often misreported, skewing poll results. Philip Converse studied the issue positions of people who were interviewed three times over the course of four years by the University of Michigan’s Survey Research Center, repeating several issue questions during each wave of the survey. He found that less than 20 percent of the total sample had real and stable attitudes on any given issue. The shifting opinions of the remaining 80 percent tended to cancel each other out, so that for the sample as a whole the answers remained remarkably stable.
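A small simulation, with invented numbers, shows why the aggregate stays put: suppose 20 percent of 1,500 respondents hold a fixed “yes” view and the rest answer at random on each wave.

    import random

    random.seed(42)

    N = 1_500
    stable_yes = N // 5                   # 20 percent always answer yes

    for wave in (1, 2, 3):
        random_yes = sum(random.randint(0, 1) for _ in range(N - stable_yes))
        print(f"Wave {wave}: {(stable_yes + random_yes) / N:.1%} yes")

    # Individual answers churn from wave to wave, but the totals
    # barely move – the kind of aggregate stability Converse observed.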

Another source of poll error is reportorial. Because polls are news covered by the media, they are subject to the same constraints by which all news is judged.

Because the news media put a premium on speed, complex issues are sometimes oversimplified, or information that is not thought to be timely is never sought. Because of the media’s tight scheduling, polling must be geared to media time frames, and sometimes polling stops too soon. The media also face severe space restrictions, limiting the number of words or minutes that can be devoted to reporting a poll’s results. The media try to avoid the abstract and tend to disregard the issues, focusing instead on the personalities or events involved. Reporters and editors often overstress sampling error and understress other, more important sources of error. Often two response categories are added together, a dangerous practice: when neutral responses are grouped with an extreme position, there’s no way of knowing which position the neutral respondent would take if pressed to give a definite response.

The American Association for Public Opinion Research and the National Council on Public Polls have adopted standards for publishing or broadcasting polls to assure accuracy in polling and poll reporting. Make sure the surveys you read or use include the following: the sample size (the number of persons interviewed); the sponsor or organization responsible for the survey (polling organizations often piggyback questions on surveys designed for other purposes); the complete wording of the questions asked; the sampling error (percentage error at some statistical level of confidence); the definition of the population sampled (registered voters, likely voters); the method of obtaining interviews (in person, telephone, mail); the timing, or dates when the interviews were conducted; and the basis for results that use less than the total sample.

Pseudo Polls

Informal polls asking people how they will vote or what they think about issues have been popular in America since the early days of newspaper reporting, which is to say from the beginning. Today the term also applies to the polls on sex habits, consumer preferences, and so on sponsored and conducted by magazines such as Playboy, Glamour, Psychology Today or Cosmopolitan. The technique has also been used by groups in regional goal setting such as the Goals for Dallas program in the late 1960s or Atlanta 2000 in 1978. The polls are unreliable and inaccurate because the sample from which they are drawn is self-selected and unscientific. Because such polls have no control over who responds, their results are unrepresentative of any larger group. Among the new forms of pseudo-polls are these:

Fund-Raising Polls

Studies have shown that including a poll in a fund-raising letter increases the response to such appeals by one-tenth to four-tenths of a percentage point. (The average successful direct mail campaign has a return of about 1 percent, so by using a poll the return increases to 1.1 or 1.4 percent.) The technique is successful because it raises the potential respondent’s involvement and reinforces his or her beliefs. Groups using the technique include the Republican National Committee and the League of Women Voters.

Telephone/Computer Interviews

After the Carter/Reagan debate, ABC News asked viewers to call a 900-exchange number to register their opinions on who won the debate. The poll recorded 727,000 calls, and Reagan was declared the winner by a two-to-one margin. Again, the results are not scientific and represent only the views of the 727,000 persons who could get through to the telephone poll. In Columbus, Cincinnati, Dallas and Pittsburgh, another application of the technique is employed: viewers of Qube, Warner-Amex Corporation’s two-way cable television system, can send digital signals to a central computer on a number of topics, products and so on. Responses are tabulated by computer in less than ten seconds.

The Presidential Election Poll

Among the most interesting polls are the major election polls. Polling techniques have been improved upon because of them, and a number of discoveries regarding the electorate have been made. Here’s a look at how major presidential election polls are conducted and some problems encountered.

Because of the high cost of polling, most pollsters use modified probability sampling during the early parts of the campaign, supplemented by probability sampling in the later stages.

Since quite a large number of Americans don’t vote in elections, survey results can be very inaccurate if the entire population is sampled. Election precincts are used as the basis for sampling, but survey organizations differ in whom they select to survey – Gallup surveys all registered voters, Harris selects likely voters, Roper polls “certain registered voters” – and they all employ different formulas of selection. Because it is expensive to separate the nonvoters from the voters in a poll, and because last-minute shifts in voter sentiment can alter election outcomes, pollsters save their most accurate methods for the final poll. Gallup generally bases its final election survey on surveys taken in early October and on a final survey taken the Friday before election day. In the first survey in 1980, for instance, Gallup drew its sample from a population of registered voters and conducted in-person interviews. Households were randomly selected within sample precincts, and no callbacks were made. Respondents were systematically selected on the basis of age and sex. The study was weighted to correct for times at home and to reflect demographics. Likely voters were identified on the basis of past behavior, interest, intention to vote and expected turnout. Secret ballots with the tickets labeled by party were used, and undecideds were asked to mark their ballots according to how they were leaning. Adjustments were made for undecideds, and figures were corrected for deviation of sample precincts from national results in 1976.

In the second survey, the voting preference of only those who plan to vote is tabulated, and the trend in the plan-to-vote group is then applied to the first survey, which has been processed completely using all the factors.
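As a hedged sketch of the likely-voter idea – the index items, point values and cutoff below are invented for illustration and are not Gallup’s actual (and more elaborate) formula:

    def likely_voter(r):
        """Toy turnout index from past behavior, interest and intention."""
        score = 0
        score += 2 if r["voted_last_election"] else 0
        score += r["interest"]                # 0-3 self-rated scale
        score += 2 if r["intends_to_vote"] else 0
        return score >= 4

    sample = [
        {"voted_last_election": True,  "interest": 3, "intends_to_vote": True},
        {"voted_last_election": False, "interest": 1, "intends_to_vote": True},
        {"voted_last_election": True,  "interest": 0, "intends_to_vote": False},
    ]
    print([likely_voter(r) for r in sample])   # [True, False, False]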

The Exit Poll

An interesting development in presidential polling in recent years has been the exit poll conducted on election day. In Public Opinion Quarterly, Mark R. Levy reports that during the 1980 presidential elections, exit polls were conducted in precincts or election districts sampled at random from frames that included all voting districts nationwide. Precincts were usually grouped into two to six strata based on factors such as past voting behavior and urban versus rural counties, and sampling was conducted with probabilities proportionate either to the total number of votes cast within precincts in some base year or to current voter registration. Within precincts, respondents were selected on a systematic random basis; that is, interviewers were told to interview every nth voter, skipping those in between. (Intervals were based on the size of precincts or other factors.) Those polled were given a secret, self-administered questionnaire and were asked to fill out 20 to 50 questions, including how they voted, demographic measures, the time and influences of their vote decision and so on. Questionnaires were folded by respondents and deposited in a ballot box.
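The two sampling steps Levy describes can be sketched briefly in Python; the precinct names, vote totals and interview interval are hypothetical, and drawing with replacement is a simplification of real proportionate designs:

    import random

    random.seed(3)

    # Hypothetical precincts with total votes cast in a base year.
    precincts = {"P1": 900, "P2": 2_400, "P3": 600, "P4": 1_100}

    # Step 1: draw precincts with probability proportionate to size.
    chosen = random.choices(list(precincts), weights=precincts.values(), k=2)

    # Step 2: within a precinct, interview every nth exiting voter.
    def every_nth(voters, n, start=0):
        return voters[start::n]

    exiting = [f"voter_{i}" for i in range(100)]
    print(chosen, every_nth(exiting, n=10)[:3])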

The polling techniques had to take into account those who failed to answer all the questions or refused to be surveyed (women, blacks and older Americans are more likely to refuse) and timing (most data were collected by 6 p.m.). Procedures varied according to the polling organization. All surveys used weighting – quite complicated and varying – to adjust their formulas.

Candidate Polls

Candidates, for economic reasons, generally have pollsters conduct a “benchmark” poll early in the campaign to pinpoint its central issues and problems, and then track the candidate’s standing periodically throughout the campaign. To track that standing, a number of self-contained surveys, each based on interviews with a set number of respondents, are conducted at intervals of up to three weeks before the election. Polling can differ as to the number of persons polled and the frequency of the polls. One method is to interview 50 to 150 persons each night in the closing weeks of a campaign. Tracking is meant to measure shifts in voter sentiment, but the number surveyed is so small that the results usually aren’t statistically significant.
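Trackers often pool several nights into a rolling average to tame the noise of such small samples; a minimal sketch with invented nightly percentages:

    # Percent support for the candidate in each night's small sample.
    nightly = [52, 49, 55, 51, 48, 50, 53]

    window = 3
    for i in range(window - 1, len(nightly)):
        avg = sum(nightly[i - window + 1 : i + 1]) / window
        print(f"Night {i + 1}: three-night average {avg:.1f}%")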

Horse-race Journalism

It used to be, in the pretelevision days, that media coverage of presidential elections centered on the issues and the leadership capabilities of the candidates, but today things have changed. An examination of press coverage of the 1976 presidential campaign – focusing on the three major networks, two major news magazines and two newspapers in Pennsylvania and California, plus a comprehensive panel survey of 1,200 respondents – revealed that the coverage concentrated on the strategic game played by the candidates. Winning and losing, strategy and logistics, appearances and gaffes were the dominant categories of election news. Other studies have made similar findings. The focus is now on the race itself, the contest, and polling has played a big part in this trend by bringing the most immediate aspect of the contest – the contestants’ standings – to the public’s attention.

The Last Word

In 1975 a Gallup poll found that 9 percent of the public rated the polls’ accuracy in elections as excellent, 41 percent rated it good and 29 percent had no opinion. Trust in survey results as being almost always right or right most of the time was expressed by 40 percent of respondents in two matched nationwide studies in 1978. A 1976 Lipset survey showed that polls and pollsters shared the same general degree of confidence given to business leaders/big business, union leaders/big labor and advertising/advertising agencies.

Polls can be used by elected officials, the press, business leaders and special interest groups to formulate opinions and courses of action. If they are accurate, they can be the collective voice of the common citizen. That doesn’t ensure that wise decisions will be made or that the wisest decision is the will of those polled.

Jean Marie Hamilton is the editor of Northwest Orient magazine.