[Image courtesy of Joe Hall]
Berkeley's Philip Stark and David Wagner recently shared a paper they have submitted for publication entitled "Evidence-Based Elections". While the subject matter is highly technical, the authors do a nice job of making it accessible to the informed layperson - and tucked into the piece is an observation that could significantly revamp the approach to voting technology at every level of government nationwide.
Stark and Wagner start with this assertion: "an election should find out who won, but ... should also produce convincing evidence that it found the real winners - or report that it cannot." Working from that premise, the authors describe various recent elections where voting technology failures created controversy about the validity of the results.
Some of the blame, they suggest, can be laid at the field's continued reliance on a testing and certification process aimed at identifying and screening for problems before a given technology can be used. Indeed, they note that "the trend over the past decade or two has been towards more sophisticated, complex technology and much greater reliance upon complex software--trends [to which] the voting standards and the testing process have been slow to react." Moreover, the continued consolidation of the voting equipment market and the slow development of the federal testing process have resulted in a high cost of testing that serves as a barrier to entry and innovation in the field.
Stark and Wagner suggest that the better approach is an evidence-based "resilient canvass framework" that examines how voting technology actually performed - with "evidence" resting on two ingredients: auditability and auditing.
Auditability, they say, is the ability to "produce a trustworthy audit trail that can be used to check the accuracy of the election." This audit trail, then, would be subjected to a two-step auditing process aimed at validating the results:
a compliance audit and a risk-limiting audit. The compliance audit checks that the audit trail is sufficiently complete and accurate to tell who won. The risk-limiting audit checks the audit trail statistically to determine whether the vote tabulation system found the correct winners, and, with high probability, corrects the outcome if the vote tabulation system was wrong.
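To make the statistical idea concrete, here is a minimal sketch of a sequential ballot-polling risk-limiting audit in the style of Stark's BRAVO method. This is an illustrative simplification, not code from the paper: it assumes a two-candidate race, a reported winner's vote share above 50%, and ballots encoded as 'W' (for the reported winner) or 'L'. Ballots are sampled one at a time; a test statistic grows when sampled ballots favor the reported winner and shrinks otherwise, and the audit stops early once the evidence is strong enough at the chosen risk limit.

```python
import random

def bravo_audit(ballots, reported_share, risk_limit=0.05, seed=1):
    """Sketch of a sequential ballot-polling risk-limiting audit.

    ballots:        list of 'W' (vote for reported winner) or 'L' (other).
    reported_share: the winner's reported vote share (must exceed 0.5).
    risk_limit:     maximum chance of confirming a wrong outcome.

    Returns True if the sample confirms the reported outcome at the
    risk limit; False if the ballots are exhausted without confirmation
    (in practice, that would escalate to a full hand count).
    """
    rng = random.Random(seed)
    order = rng.sample(range(len(ballots)), len(ballots))  # sample without replacement
    t = 1.0                      # sequential test statistic (likelihood ratio)
    threshold = 1.0 / risk_limit
    for i in order:
        if ballots[i] == 'W':
            t *= reported_share / 0.5        # evidence for the reported winner
        else:
            t *= (1 - reported_share) / 0.5  # evidence against
        if t >= threshold:
            return True                      # outcome confirmed; stop sampling
    return False                             # escalate to full hand count
```

With a comfortable reported margin - say 60% of ballots for the winner - the statistic typically crosses the threshold after examining only a small fraction of ballots, which is why the authors note that more accurate initial counts mean less hand counting.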
The real bombshell in the piece, though, is the suggestion that the proposed approach is not only preferable to testing and certification but can be effective even if such procedures did not exist or were not used:
Currently, certification does not serve the interests of the public or local elections officials as well as one might hope. It erects barriers to competition, increases acquisition and maintenance costs, slows innovation, and makes risk-limiting audits harder, slower, more expensive, and less transparent. And risk-limiting audits provide more direct evidence that outcomes are correct than certification can provide. Requiring local election officials to conduct compliance and risk-limiting audits rather than requiring them to use certified equipment would give them an incentive to use the most accurate (and most easily audited) tabulation technology available, because more accurate initial counts require less hand counting during the audit, reducing labor costs and allowing the canvass to finish sooner ... Using voting systems that are designed to support efficient auditing--whether those systems have been certified or not-- can substantially reduce the cost of collecting convincing evidence that the official results are correct. [emphasis added]
This sentiment - favoring auditing over testing - is slowly gaining favor in some circles, and could have a huge impact on the way in which voting technology is developed, marketed and maintained by public and private actors alike.
Kudos to Stark and Wagner for a provocative piece - election geeks of all kinds should add this one to the reading pile, preferably near the top.