Categories
Technology

Vote-by-Mail

(Image credit: @morningbrew, Unsplash)

Following my recent post on form design, I thought it might be interesting to take a look at what is, in the US, one of the most universal forms: the ballot.

This sits at the intersection of my interests in design and civic engagement. It’s also a much more controversial topic than I’d normally touch with a ten-foot pole, but here I am.

(‘Pole/poll’ pun? Absolutely intended.)

And first, an admission: that image up top, of people at a polling station? That’s an utterly alien concept to me. I’ve never been to a polling station; I live in Oregon, a state that finished moving to universal vote-by-mail when I was in elementary school.

Now, vote-by-mail is a very controversial topic these days, but as someone who grew up with it, I thought it would be interesting to do a case study of how it works in Oregon.

Vote by Mail: A Case Study

Overall, the user experience of voting in an election in Oregon is, to my eye, already a ways ahead of most of the rest of the country. There’s still room for improvement, though.

So, what is Oregon doing right, and what are we doing wrong?

Photograph of 'I Voted' stickers.

Ease of Registration

I take a great amount of civic pride in the lengths my state goes to for voting rights: you can register to vote online, or be automatically registered when you get your driver’s license. I registered to vote at 17, after being handed a registration card by the Oregon Secretary of State at a voter registration drive; you can, in fact, register on your 16th birthday, if you so desire.

(Image credit: @element5digital, Unsplash)

In doing the research for this article, I found out that “register when you get your driver’s license” went from “… if you fill out this extra piece of paper while you’re at the DMV” to being an automatic process, thanks to the amusingly-titled “Motor-Voter Law.”

Photograph of an Oregon Voters' Pamphlet

Ease of Information

While I can’t say we don’t have our share of crappy political ads, the state has a standard way of providing information on everyone running: the voting pamphlet, sent to every household prior to the election. (They are also available online.)

(Image credit: Statesman Journal)

These pamphlets, nice as they are, aren’t perfect. The actual process for putting information in them is a touch convoluted, and surprisingly unregulated. While each entry mentions where the information comes from, and plagiarism or misquoting are banned, there is nothing enshrined in law (or policy) to prevent misleading entries.

Let’s take a look at the information architecture of the section on a single ballot measure.

Screenshot of Measure 96 from the 2016 Oregon voter's pamphlet

At the top, the easily-memorable measure number takes precedence, followed by the shortest-form summary, the title.

Each measure has a very clear summary section, as well as the “Result of ‘Yes’ Vote” and “Result of ‘No’ Vote” area, which state, very explicitly, what each bubble on the accompanying ballot will do.

The estimate of financial impact is prepared by an eminently qualified committee, and the full text of the measure speaks for itself.

But then things break down, with “Arguments in Favor” and “Arguments in Opposition.” This is an official state document; everything about this measure, so far, has been as factual and rigorous as one could hope. These arguments, though, are unregulated beyond “no plagiarism, no misquoting people.” If you want to write 350 words of “why you should vote against this,” pay the fee to have it entered, and file it as an “Argument in Favor,” there’s nothing to stop you.

And that’s a problem. This is an official state document; it’s got the seal on the cover, and a lot of very solid information in it, giving it credence. Unverified information in the Arguments gains legitimacy by association.

(Screenshot taken from the 2016 Oregon Voter’s Pamphlet.)

For comparison, let’s take a look at a different state.

California’s voter pamphlets are laid out very differently. The first thing that caught my eye – and made assembling this comparison in a visually-pleasing way rather difficult – was that each ballot measure’s section doesn’t have a table of contents.

Screenshot from Proposition 20 in the 2020 California Voter's Guide
I’ll give California this, though: they’ve lined up “Proposition” with “20” much better than Oregon did “Ballot Title” and “96.” The all-caps feels a bit “Newspaper Headline”, though, and what is going on with the kerning in “Prepared by the Attorney General”?

To save you a great deal of scrolling, I’ve pulled together the section headings from Proposition 20 in the 2020 Voter’s Guide, summarized somewhat:

  • Official Title and Summary
    • Summary of Legislative Analyst’s Estimate of Net State and Local Government Fiscal Impact
  • Analysis by the Legislative Analyst
    • Overview
    • (Specific Proposal Title)
      • Background
      • Proposal
    • Fiscal Effects
  • Argument in Favor of Proposition
  • Rebuttal to Argument in Favor of Proposition
  • Argument Against Proposition
  • Rebuttal to Argument Against Proposition

Distinctly more words to it, but notice some key differences:

  1. There’s an actual analysis of the legislation, expanding beyond the fiscal impact to the actual outcomes of the bill. This is provided by the Legislative Analyst’s Office, who are explicitly nonpartisan.
  2. Arguments in favor and against are paired with rebuttals, allowing for more of a dialog between sides.

Note, however, that there’s still no legal requirement for an “argument in favor” to actually be in favor. There is a strong precedent for judicial intervention, which is something of an improvement, at least. And the typography makes a statement, too.

Screenshot of the Analysis by the Legislative Analyst section of Proposition 20 in the 2020 California voter's guide.
Screenshot of the Argument in Favor section of Proposition 20 in the 2020 California voter's guide.

On the left, the Analysis by the Legislative Analyst. It’s rich text – there’s use of bold to highlight key points, and various levels of headings to organize it. On the right, the Arguments, sans formatting. They’re also in a smaller font size, and have their own unique style of heading. In short, the Arguments look different, providing a subtle reminder that this section is not the product of the nonpartisan election officers.

And now, back to the other aspects of the election. But first, a reminder to find other sources of information on the candidates and ballot measures. As a good starting point, I recommend Vote411, from the League of Women Voters, and Ballotpedia.

Photo of an envelope labeled 'Business Reply Mail' next to a face mask.

Ease of Voting

A few weeks before the election, the Oregon Secretary of State’s office mails out ballots to all registered voters. The package includes the ballot, a return envelope, and a second ‘privacy’ envelope that you can use if you’re worried somebody might be able to see the contents of your ballot through the outer envelope.

(Image credit: @element5digital, Unsplash)

As of 2019, the return envelopes are postage-paid; if you’d rather drop your ballot off in person, you can take it to a drop box – most public libraries have one, as well as various other locations.

Screenshot of the AIGA proposed ballot design. Original is at https://www.aiga.org/globalassets/migrated-pdfs/dfd_opticalscan_sampleballot_proposed

Ease of Marking

One of the inspirations for this post was the American Institute of Graphic Arts’ “Design for Democracy” program. They did a larger-scale version of my research on form design, investigating and determining best practices for ballot design.

AIGA produced a hand-filled ballot design (pictured, right), as well as a ballot design for touch screen voting machines, and a full set of resources for election design. I’ll be focusing on the hand-filled ballot, though, as that’s what we use in Oregon.

Let’s take a look at a (reasonably) representative sample of an Oregon ballot. This comes from the 2020 primaries, courtesy of Lincoln County:

Screenshot from the 2020 primary election ballot from Lincoln County, Oregon

It’s actually pretty good. The visual design follows most of the guidelines, although the “Write-in” lines and text need a bit more space. The main issue I see is the instructions – they’re fairly clearly written, but they are written. Pairing them with a visual explanation – the cartoons, in AIGA’s example – makes them even easier to follow.

The UX of Elections

Elections in the US, being controlled at the state level, rather than federally designed, are something of an ongoing A/B test. We can see different electoral systems in use across the country, and use opinion polls after the fact to judge how well-represented people feel by the results.

Were I polled right now, I’d feel fairly happy with how my state handles things.

So, what are the key action items for, say, another state, looking to implement some of Oregon’s best practices?

  1. Reduce the friction of voting. Make it easy to register, easy to get your ballot, easy to fill out your ballot, and easy to turn it in.

That’s it, that’s the list, the whole idea. Make it easier to vote, by whatever means possible. I’m an advocate for doing so by enabling universal mail-in voting, thanks to some of its inherent benefits:

  • No standing in line, or going into cramped polling places – an excellent benefit during a pandemic!
  • Voting on your own time, rather than needing to take half a day off to wait in line. (‘On your own time’ within reasonable limits – there’s still a deadline to get it turned in on time.)

But, of course, I’m not done. I have some recommendations for Oregon, as well:

  1. Validate the voting pamphlet materials. Don’t just trust what people submit, make sure it’s actually espousing the viewpoint it claims. Do some fact checking, while you’re at it.
  2. Visually distinguish public submissions. Use some gestalt principles – things that are close together, and look similar, look like they’re part of a group. Move the public submissions a little further away, and make them look different, to remind people that they aren’t from the same source as the rest of the material.
  3. Demonstrate how to fill out the ballot. The instructions are fairly well-written, but “comfortable reading English” should not be a requirement to vote.
Photograph of a polling station.
(Image credit: Elliott Stallion, Unsplash)

Postscript: while I’m advocating for improving voting, I am absolutely not advocating digital voting. No. No, no, no.

Categories
Education Portfolio Technology

HCI, Cognitive Load, and Forms: Pre-Filled and Required Fields

Written for our “Innovations in HCI and Design” course.

Cognitive Load Theory

For form design, cognitive load theory can be boiled down to the idea that people only have so much space in their brain, so don’t overfill it. The exact amount varies depending on context: is the information auditory or visual?1 What stage of processing are you going through? (Gwizdka 3)

Techniques for Reducing Cognitive Load

  • Produce less cognitive load. Intrinsic cognitive load is inherent to what the user is trying to do; extraneous load is the extra work created when the design surrounding that goal is bad (Hollender et al. 1279; Feinberg & Murphy 345).
  • Use multiple modalities. Mixing visual with auditory, for example, allows users to distribute the cognitive load across multiple cognitive subsystems (Oviatt 4).
  • Do the work for them. Pre-filling known fields (i.e., a user’s name and address when they’re already signed in) moves the cognitive load from the user to the computer, saving the user the effort (Gupta et al. 45; Winckler et al. 195).
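That last point can be sketched in a few lines of TypeScript. (The field names and the `Account` shape here are illustrative, not from any real system.) The computer fills in whatever it already knows, leaving the user only the fields it can’t supply:

```typescript
// "Do the work for them": derive a form's initial values from data the
// system already knows, so the user only types what the computer can't.

interface Account {
  name: string;
  address: string;
}

type FormValues = Record<string, string>;

// Fields this hypothetical form asks for, in display order.
const FORM_FIELDS = ["name", "address", "phone"];

function initialValues(account: Account): FormValues {
  // Known values come from the signed-in account; the rest start empty.
  const known: FormValues = { name: account.name, address: account.address };
  const values: FormValues = {};
  for (const field of FORM_FIELDS) {
    values[field] = known[field] ?? "";
  }
  return values;
}

const values = initialValues({ name: "Ada Lovelace", address: "12 Example Ln" });

// Only the fields the system couldn't fill are left for the user.
const leftToFill = FORM_FIELDS.filter((field) => values[field] === "");
```

The user is left typing one field instead of three; the other two moved from their working memory to the computer’s.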

Cognitive Load in Human-Computer Interaction

Under heavy cognitive load, users work slower, and may commit more errors (Rukzio et al. 3). From a young age, humans are goal-oriented; slowing them down as they work towards these goals, unless explicitly a design goal, can only cause frustration (Klossek et al.). Reducing cognitive load leads to happier users.

Applying Cognitive Load Theory to Form Design

Cognitive load theory gives us several key takeaways:

  • Indicate which fields are required. Provide a clear indicator of what is required so your users don’t have to guess (Bargas-Avila, Javier A., et al., 20 Guidelines 5).2
  • Pre-fill data when possible. Use available sources—an existing account, or on-device sensors—to save the user the effort. However, if that data might not be accurate, don’t guess; leave the field blank to prompt the user to enter the correct data (Rukzio et al. 3-4).
  • Don’t interrupt the user by validating data. Real-time validation is fine, as long as it doesn’t force the user to switch from ‘completion mode’ to ‘revision mode’ (Bargas-Avila, Javier A., et al., Useable error messages 5).3

There has not been any research into the combined effects of marking required fields and pre-filling fields; however, we can extend the conclusions in the first two points, above, as such: a required field, even if pre-filled, remains required, and should be marked as such.
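These takeaways combine into a small sketch (TypeScript again, with hypothetical names; this is not any real form library’s API): required fields keep their marker whether or not they were pre-filled, untrusted data is left blank, and validation gathers every error in one pass rather than interrupting after each field.

```typescript
// A field as the form sees it; value is left "" when the source data
// wasn't trustworthy enough to pre-fill.
interface Field {
  name: string;
  required: boolean;
  value: string;
}

// A required field remains required -- and keeps its marker -- whether
// or not the computer pre-filled it.
function label(field: Field): string {
  return field.required ? `${field.name} *` : field.name;
}

// Non-interruptive validation: collect every error in one pass, so the
// user stays in 'completion mode' until they choose to submit.
function validate(fields: Field[]): string[] {
  return fields
    .filter((f) => f.required && f.value.trim() === "")
    .map((f) => `${f.name} is required`);
}

const fields: Field[] = [
  { name: "Name", required: true, value: "Ada Lovelace" }, // pre-filled from the account
  { name: "Address", required: true, value: "" },          // guess wasn't trusted: left blank
  { name: "Nickname", required: false, value: "" },
];

const labels = fields.map(label); // "Name *", "Address *", "Nickname"
const errors = validate(fields);  // only the blank required field
```

Note that “Name” keeps its asterisk even though the user may never touch it: the marker communicates the field’s status, not how its value got there.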

Bibliography

Baddeley, Alan D., and Graham Hitch. “Working memory.” Psychology of learning and motivation. Vol. 8. Academic press, 1974. 47-89.
Bargas-Avila, Javier A., et al. “Simple but crucial user interfaces in the World Wide Web: introducing 20 guidelines for usable web form design.” User Interfaces. InTech, 2010.
Bargas-Avila, Javier A., et al. “Usable error message presentation in the World Wide Web: Do not show errors right away.” Interacting with Computers 19.3 (2007): 330-341.
Budiu, Raluca. Marking Required Fields in Forms. 16 June 2019, www.nngroup.com/articles/required-fields/.
Feinberg, Susan, and Margaret Murphy. “Applying cognitive load theory to the design of web-based instruction.” Proceedings of the 18th Annual Conference on Computer Documentation (IPCC/SIGDOC 2000): Technology and Teamwork. IEEE, 2000.
Gupta, Abhishek, et al. “Simplifying and improving mobile based data collection.” Proceedings of the Sixth International Conference on Information and Communications Technologies and Development: Notes-Volume 2. 2013.
Gwizdka, Jacek. “Distribution of cognitive load in web search.” Journal of the American Society for Information Science and Technology 61.11 (2010): 2167-2187.
Harper, Simon, Eleni Michailidou, and Robert Stevens. “Toward a definition of visual complexity as an implicit measure of cognitive load.” ACM Transactions on Applied Perception (TAP) 6.2 (2009): 1-18.
Hollender, Nina, et al. “Integrating cognitive load theory and concepts of human–computer interaction.” Computers in human behavior 26.6 (2010): 1278-1288.
Klossek, U. M. H., J. Russell, and Anthony Dickinson. “The control of instrumental action following outcome devaluation in young children aged between 1 and 4 years.” Journal of Experimental Psychology: General 137.1 (2008): 39.
Oviatt, Sharon. “Human-centered design meets cognitive load theory: designing interfaces that help people think.” Proceedings of the 14th ACM international conference on Multimedia. 2006.
Pauwels, Stefan L., et al. “Error prevention in online forms: Use color instead of asterisks to mark required-fields.” Interacting with Computers 21.4 (2009): 257-262.
Rukzio, Enrico, et al. “Visualization of uncertainty in context aware mobile applications.” Proceedings of the 8th conference on Human-computer interaction with mobile devices and services. 2006.
Stockman, Tony, and Oussama Metatla. “The influence of screen-readers on web cognition.” Proceeding of Accessible design in the digital world conference (ADDW 2008), York, UK. 2008.
Tullis, Thomas S., and Ana Pons. “Designating required vs. optional input fields.” CHI’97 Extended Abstracts on Human Factors in Computing Systems (1997): 259-260.
Winckler, Marco, et al. “An approach and tool support for assisting users to fill-in web forms with personal information.” Proceedings of the 29th ACM international conference on Design of communication. 2011.


  1. The foremost theory splits it into three: the phonological loop (sound), the episodic buffer, and the visuospatial scratchpad, all controlled by a central executive (Baddeley & Hitch; the episodic buffer was added by Baddeley in a later revision than that cited here). 
  2. There is some dispute over what makes the best indicator; the general consensus in industry is to use asterisks to mark required fields (Budiu). Studies have shown, however, that using a background color in the field to highlight required fields performs better (Pauwels et al.), which in turn is outperformed by physically separating the required fields from the optional ones (Tullis & Pons). All, however, agree that it is preferable to mark the required fields, rather than the optional. 
  3. Non-interruptive real-time validation, say by adding error messages beneath invalid fields, works well for sighted users. Be aware, however, that screen reader software struggles with dynamically-updating pages (Stockman & Metatla); avert this accessibility problem by providing both real-time and on-demand validation, presenting errors in a modal fashion when the user attempts to submit the form with invalid data. 
Categories
Portfolio

Collecting Clippings

I recently started a Master’s in Human-Computer Interaction and Design. While the main goal of the program is, obviously, learning, one of the side goals is to build a portfolio; as such, I’m going to be making an effort to post some of what we’re working on in class here.

One of our assignments was to start collecting images and clippings, and doing some annotations of them. The idea is to build up a library; something like Quiver, but for design instead of development. It was actually very fun; a lot of my classmates did it handmade, and I considered it, but I’ve also been meaning to try Adobe XD and figured, why not?

Source: Paprika

It’s a pretty neat tool; it made it very quick to put these all together. It’s much more focused than my other digital go-to, Photoshop, where I probably could’ve accomplished the same thing, but would’ve spent more time digging around in the ‘draw a line’ settings and whatnot.

Source: Windows Task Manager (Windows 10)

A lot of what I did for mine were mobile apps – that is, after all, my main area of interest – but I also made an effort to branch out to other operating systems and even (gasp) physical devices.

Source: the office Keurig. No, I don’t know how to work it.

A big part of the assignment was giving feedback to our classmates on what they’d collected. The goal wasn’t “hey, that’s a cool interface you were looking at,” but “oh, that’s a good point about the interface.” The exercise, after all, was about the act of collecting and annotating.

Source: Overcast

It was interesting to see how everybody did theirs – most of the people I talked with had done them by hand, although somebody did a lovely blended style one by collecting screenshots and photos and then annotating them with an iPad and Apple Pencil.

Source: Hazel

I’m not sharing all of what I’ve collected here – I’m proud of my annotations of the to-do list app I use, but I don’t actually want to share the contents of my to-do list. And while I had fun marking up Dark Sky, I don’t feel the need to share my home address with the internet.

Source: Calory

I’ve got some pieces from my sketchbook that I might post sometime soon, and I’ll hopefully continue to have interesting things to share as the program goes on!