Chronicle

Nov. 15, 2001
Vol. 21 No. 5


    NORC completes examination of Florida ballots

    Q&A with ... Kirk Wolter of NORC


    Diana Jergovic, Senior Research Scientist, and Kirk Wolter, Senior Vice President for Statistics and Methodology and Professor in Statistics, supervised the National Opinion Research Center’s review of uncertified ballots in Florida.

    As widely reported earlier this week, the National Opinion Research Center has completed its examination of uncounted ballots cast in the 2000 presidential election in Florida.

    Last January, a consortium of the nation’s leading news organizations hired NORC to independently review the more than 175,000 uncertified ballots. During a roughly three-month period, NORC staffers coded the individual ballots and organized their findings for the sponsoring organizations. After a two-month delay caused by the Sept. 11 terrorist attacks, the sponsoring media, which include The New York Times, The Wall Street Journal, The Washington Post, the Chicago Tribune, the Associated Press and CNN, published their interpretations of NORC’s data.

    Wolter and Jergovic supervised the ballot project. Here, Wolter discusses its operational aspects.

    How did NORC examine the ballots?

    In each of the counties, local election officials assigned county workers to display the ballots. NORC coding teams reviewed each ballot and recorded the markings they observed. The team of coders sat side by side, but members worked independently of each other and made individual determinations of the appearance of the ballots. They did not talk among themselves or consult each other in any way.

    What specifically did NORC coders look for?

    Coders recorded the condition of each ballot examined. For Votomatic (and to some extent Datavote) ballots, coders noted whether chads were dimpled and, if so, whether light shone through the dimple. (Each coder worked with a small light table for this check.) Coders also noted whether chads were completely punched or hanging by one, two or three corners. For optical scan ballots and any Votomatic or Datavote absentee ballots completed outside the voting booth, coders noted whether the ovals or arrows were fully filled or otherwise marked (with a check, slash, X, etc.). Coders noted whether there were stray marks on the ballot that could confuse a scanning machine and whether ballots were uncertified because the wrong color of ink was used. Finally, coders recorded verbatim any written notations on the ballots.
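    To make these categories concrete, here is a minimal sketch in Python of what a single coder’s observation of one chad or candidate space might look like. The class, field and category names are illustrative assumptions, not NORC’s actual coding scheme.

        from dataclasses import dataclass
        from enum import Enum

        class Mark(Enum):
            # Hypothetical categories drawn from the markings described above
            CLEAN_PUNCH = "clean punch"              # chad fully removed
            HANGING_ONE = "hanging by one corner"
            HANGING_TWO = "hanging by two corners"
            HANGING_THREE = "hanging by three corners"
            DIMPLE_LIGHT = "dimple, light visible"   # light table check
            DIMPLE_NO_LIGHT = "dimple, no light"
            FILLED_OVAL = "oval/arrow filled"        # optical scan ballots
            OTHER_MARK = "check, slash, X, etc."
            NO_MARK = "no mark"

        @dataclass
        class Observation:
            county: str
            ballot_id: str
            coder_id: str
            position: int               # chad or space on this county's ballot
            mark: Mark
            stray_marks: bool = False   # marks that could confuse a scanner
            notation: str = ""          # written notation, copied verbatim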

    What was done to ensure accuracy in the field?

    Because this data set is intended to be the authoritative description of the uncertified ballots, we took a number of steps to ensure high quality. First, we hired only qualified individuals to review the ballots. Because of the nature of the task, all coders were administered a near-point vision test before being staffed on the project. Project coders were trained and tested on coding procedures before being allowed to code. Team leaders, who were long-term NORC employees, conducted the training and worked closely with the coders to ensure consistently high performance. Every evening, before shipping the coding forms to Chicago, the team leaders reviewed the forms for completeness and legibility. NORC also attempted to verify the accuracy of the coding by randomly selecting ballots from every county to recode. These recodings were later matched with the original codings and reviewed for consistency.
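    A recode check of this kind can be pictured as a simple agreement computation. The sketch below is an illustration under assumed data structures, not NORC’s actual matching procedure; the function name and key layout are hypothetical.

        def recode_agreement(original, recode):
            """Fraction of positions on which an original coding and a
            quality-control recoding agree.

            Both arguments map (ballot_id, position) -> recorded mark.
            Illustrative only: a real consistency review would also
            examine the pattern of disagreements, not just their rate.
            """
            shared = original.keys() & recode.keys()
            if not shared:
                raise ValueError("no overlapping ballots to compare")
            agree = sum(1 for key in shared if original[key] == recode[key])
            return agree / len(shared)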

    What happened with the data?

    Information on the ballot markings was recorded on coding forms that were sent to our Chicago offices daily. In Chicago, a trained team of data entry specialists entered the information into electronic files.

    What is the final product?

    NORC compiled 17.5 million pieces of information into two primary data sets. One is a ballot-level database (the raw database) that contains information on every chad or candidate space on every ballot across the 67 counties. This file does not attempt to align candidate information across ballots; it simply reflects the reality of the disparate ballot designs used throughout the state of Florida.

    The second is an aligned database that reconciles every coder’s information for every ballot for each presidential and U.S. Senate candidate. This file represents the first processing step needed to compare the codings for each candidate regardless of where that candidate appeared on the various ballots used across the state. The raw database is the definitive historical archive of every mark on the uncertified ballots. The aligned file is an analyst’s tool, presenting only the markings related to the candidate positions on each county’s ballot.
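    The alignment step can be pictured as a lookup from each county’s ballot layout to a common candidate identifier. The sketch below is hypothetical: the layout table, positions and field names are invented for illustration, and the actual file formats are documented with the released data.

        # Hypothetical layout table: (county, position) -> candidate.
        # Each county used its own ballot design, so the same candidate
        # occupied different positions in different counties.
        LAYOUT = {
            ("Alachua", 3): "Bush",
            ("Alachua", 5): "Gore",
            ("Broward", 4): "Bush",
            ("Broward", 6): "Gore",
        }

        def align(raw_records):
            """Translate raw position-level codings into candidate-level
            rows, dropping positions with no tracked candidate."""
            for rec in raw_records:
                candidate = LAYOUT.get((rec["county"], rec["position"]))
                if candidate is not None:
                    yield dict(rec, candidate=candidate)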

    Secondary data sets include the ballot notations copied verbatim by NORC’s coders, the demographic characteristics of the coders, the quality-control recoding data and a number of files produced by the media group. The media group files contain qualitative and quantitative county- and precinct-level information used by the media in their analyses.

    The data may be reviewed by visiting http://www.uchicago.norc.edu.