Voatz: a tale of a terrible, horrible, no-good, very bad idea

Let’s get the fish in the barrel out of the way. Voatz are a tech startup whose bright idea was to disrupt democracy by having people vote on their phone, and store the votes on, you guessed it, a blockchain. Does this sound like a bad idea? Welp.

It turned out that they seemed awfully casual about basic principles of software security, such as not hard-coding your AWS credentials. It turned out that their blockchain was an eight-node Hyperledger install, i.e. one phenomenologically not especially distinguishable from databases secured by passwords. They have been widely and justly chastised for these things. But they aren’t what’s important.
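For the unfamiliar, the hard-coded-credentials mistake, and its standard fix, look roughly like this. This is a minimal Python sketch, not Voatz's actual code; the environment variable names are the conventional AWS ones, but the helper function itself is hypothetical:

```python
import os

# Anti-pattern: baking credentials into the source means anyone who
# decompiles the shipped app now holds the keys to your backend:
#   AWS_ACCESS_KEY_ID = "AKIA..."        # never do this
#   AWS_SECRET_ACCESS_KEY = "..."        # or this
#
# Instead, read credentials from the runtime environment (or a secrets
# manager), so they never land in source control or a shipped binary.

def load_aws_credentials():
    """Fetch AWS credentials from the environment, failing loudly if
    they are absent rather than falling back to a baked-in default."""
    key_id = os.environ.get("AWS_ACCESS_KEY_ID")
    secret = os.environ.get("AWS_SECRET_ACCESS_KEY")
    if not key_id or not secret:
        raise RuntimeError("AWS credentials not found in environment")
    return key_id, secret
```

Failing loudly when credentials are missing is the point: a silent fallback to a default key is how hard-coded secrets creep back in.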

To their credit, their system is opt-in, and apparently generates real-time voter-verified paper ballots, the single most important thing about any voting system. But still. We need to step back and ask a question here: why are we trying to vote via an app and collate election results on any kind of centralized system at all? We don’t want to make voting more efficient. Efficiency is not the problem we are trying to solve with elections. The inefficiency of paper ballots and their handling and collation and tabulation is a feature, not a bug.

Just ask everyone at Def Con’s Vote Hacking Village, whose successes have been rampant this weekend, in the midst of the enmity of the National Association of Secretaries of State:


— TProphet (@TProphet) August 9, 2018

Voatz were approaching the wrong problem in the wrong way from the start. Even if your blockchain repository is verifiably write-once, which it isn’t, it only records the data sent to it via your app and servers. Voting cannot rely on apps and servers, no matter how secure they are claimed to be. It’s nice that you generate paper ballots for a post-election audit, but since we should not and cannot ever trust voting servers and software, and therefore will need to do a post-election paper ballot count every time — how about we skip the man-in-the-middle, and all of your software, and go straight to that part?

The other point is brought to us by XKCD, whose “Voting Software” strip responded to Voatz:

which in turn brought this response from Facebook’s (soon-to-be-former) CISO Alex Stamos:

I agree with Chris.

This is the kind of thinking that leads to "Why can't we just have building codes for software? It worked to protect against earthquakes and fire!"

Earthquakes and fire aren't conscious adversaries. Try writing a standards document on how to win at chess. https://t.co/eAPk6M3ijN

— Alex Stamos (@alexstamos) August 8, 2018

which in turn brought this response from flight instructor (CFI) and engineer Rob Russell, which a lot of the finest engineers I know have been sharing across social media:

This is wildly disingenuous, I speak as a flight instructor and major IT incident investigator. Modern software authors have the professional discipline of a cute puppy in comparison to aviation practitioners. https://t.co/6GzCqLNpcl

— Rob Russell (@www_ora_tion_ca) August 9, 2018

There are valid points on all sides here. Stamos is right that most spheres, e.g. aviation, don’t have to deal with the constant threat of intelligent adversaries attacking the system in the same way that software does (although, as the events at SeaTac yesterday showed us, they are by no means devoid of such threats).

But Russell brings up the very valid point that because software people are so fixated on adversaries — on hackers and not being hacked — their definition of “security” is often restricted to breaches, exploits, and vulnerabilities, rather than systemic flaws or sloppy development practices, which hurt users’ security even when no external hacker is involved. In fairness, over the last few years the infosec community has gotten better at broadening its definition of “secure” beyond “resistant to external hackers” … but it seems pretty apparent that much, much more work is needed.
