Are dashboards a design failure? Here’s where Jared Spool and I disagree.

Hans van de Bruggen
10 min read · Mar 30, 2021

A few days ago, I ran across a tweet by Jared Spool that had been making the rounds again. I weighed in with my own thoughts which led to a friendly and substantive debate on the subject of dashboards. But it was more than that — at a more essential level, it was a debate on what role the designer should play regarding the degree of abstraction in a tool.

In other words, what limits should we put on the types or formats of data presented to the user, or on the use of the tool?

I’ll confess to my bias upfront, having designed dashboards for studio executives at Netflix who were adamant they wanted the raw data and not some abstraction. I came to learn why as I watched them use these tools for real work.

It was an interesting conversation and one that was received well by those who could follow along with the branching tweet threads. However, as Twitter doesn’t lend itself well to this format, I think it’s best to post a roundup of the discussion here to make it easier to digest. I’ll do my best to represent his views in good faith, and will be linking back to the original tweets throughout.

I hope you enjoy this discussion, as both of us had a lot of fun with it. The best debates are those that get the audience to think, and this leads inevitably toward a larger pool of opinions. To that end, I’m curious to know your thoughts on this, too. See you in the comments section?

At the outset, Jared said that creating dashboards for users was a likely sign that the designer’s user research was incomplete.* “Dashboards,” he says, “are often a designer’s response to not understanding the full scale of a complex problem ecology.”* In other words, they’re a result of the designer shipping what was asked for instead of what was really needed by the user.

As an example, he says that the dashboard of a car contains a number of gauges that are “vestigial,”* and unnecessary to today’s drivers. He says “[y]ou don’t need to know how much gas is left. You need to know if you have enough fuel to get to your destination.”* Furthermore, “[y]ou don’t need to know how fast the car is going,”* saying the goal is to help the user optimize their speed for safety and asking that, if a car could be made aware of the speed limit the way Waze is, “[w]hy would it even let me drive too fast?”*

He says that in their most basic form, dashboards are “information displays.”* Because of this, they are subject to the user absorbing and making judgements on this information, which leaves the door open for substantial errors. He proposes that to address this, dashboards should be eliminated and instead be replaced with tools that assist with decision making, either via automation or “decision support.”*

I feel this oversimplifies things a bit. There are certainly many cases where a dashboard is the wrong answer, but there are many legitimate uses, as well. He seems to be making two points: first, on what data should be presented to the user, and second, on how flexible a tool should be.

Firstly, regarding what data should be presented to the user, I believe that users will choose the easiest way to get the capabilities they want. To his point, the easiest solution often doesn’t involve a dashboard or raw data alone. It’s easier to know if you’re speeding if a system simply says so, instead of relying on knowing the current speed limit and the current speed and inferring the answer yourself. There’s less to process, mentally.

Despite this, there are scenarios where users can more easily process things mentally, in the moment, than by making a request through the UI. Take his gas gauge example. It’s possible a user doesn’t simply want to know if they can make it to their next destination, but also to the one after that.* Seeing a reading of the fuel level gives users the ability to quickly calculate in their heads, in the moment, without needing to engage the UI.* This is an example of how raw data can help the average user with less common use cases.*
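That head-math is easy to make concrete. Here’s a minimal sketch of the calculation a driver does from a gauge reading; all of the tank sizes, gauge fractions, and fuel-economy figures are hypothetical, not anything from the discussion:

```python
# Illustrative sketch of the driver's mental math: can I reach not just
# the next stop, but the one after it? All numbers are hypothetical.

def remaining_range_km(tank_capacity_l, gauge_fraction, km_per_litre):
    """Estimate how far the remaining fuel will carry the car."""
    return tank_capacity_l * gauge_fraction * km_per_litre

def can_reach(distances_km, tank_capacity_l=50, gauge_fraction=0.25,
              km_per_litre=15):
    """True if the remaining fuel covers the summed legs of the trip."""
    return sum(distances_km) <= remaining_range_km(
        tank_capacity_l, gauge_fraction, km_per_litre)

# A quarter tank (12.5 L at 15 km/L, roughly 187 km of range):
print(can_reach([60]))        # → True: the next destination alone
print(can_reach([60, 150]))   # → False: the next one and the one after
```

The point isn’t that drivers run this code, it’s that the raw gauge reading is the one input that supports every variation of the question, including ones no UI anticipated.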

It’s also important to note who’s asking for the dashboard in the first place. The odds are it’s not a request coming from a “general user,” but a domain expert.* Users with uncommon needs are able to leverage raw information in a wide variety of permutations, and can often arrive at answers more quickly than if a specific UI had been built to accommodate each of these myriad use cases. As an example, Jared notes that most cars made in the past decade have perhaps 500+ sensors collecting all manner of data points.* The majority of this data would be lost on the average driver, but could provide valuable information to a mechanic trying to diagnose an issue.* As a designer, I would not deny the mechanic a dashboard if this is what they’d asked for, but I would also seek to identify their most common use cases and try to accommodate these with specific UI.*

My stance is that Jared’s not completely wrong about dashboards containing “vestigial remnants”* in many cases, and that for most users, some simplification helps. But oversimplification has real downsides as well, for all but the most common use cases. This is true for average non-expert users and domain experts alike.

I believe this also applies to the second point, regarding how much flexibility a tool should have. There are legitimate cases when exceeding the speed limit is the safer option, and this can be best determined by the user acting on information they have that the car isn’t aware of. Take natural disasters for example.* While Jared initially laughed off this suggestion as having a “less-than-once-in-a-lifetime chance” of happening,* the truth is that natural disasters like floods and landslides are something that much of the developing world* faces with some frequency (not to mention wild animals). Creating a vehicle that prevented users from outrunning a natural disaster out of a designer’s hubris in thinking they’ve made things safer is a failure of design.*

Limitations are a double-edged sword. They can help a user avoid coloring outside the lines, but without the ability to override them, they can end up holding users back.* I find it’s helpful to imagine if certain limitations were put on the tools we use as designers. For example, certain font or color pairings may not look good together, but if the tool prevented me from using them in whichever combinations I wanted, I would feel overly constrained. However, if the tool offered me suggestions around which fonts or colors worked well together, that could speed things up for me in certain situations. As the professional, I’ll ultimately know best — the tool should defer to the user.*

Jared later clarified his opinion and agreed that users should be able to override,* and that there shouldn’t in fact be hard limits on the ways tools can be used.* As I like to put it, the tool should offer assistance, not give mandates.* Good defaults and assistive modes that users can easily step into or out of are means to this end. In the context of a dashboard, this means giving users answers to common questions while still allowing them to access any raw data they may request.* In the case of fuel gauges, there’s a real-world example that does this well — Tesla vehicles tell users whether they’ll make it to their next destination (and will offer routes to charging stations as needed) while still providing users information about the battery level.* The common use cases are addressed, and other use cases are supported without substantially complicating the interface or the means of getting answers.

We wrapped up by talking about cameras, which led to some interesting discussion on the limits of simplification and how users select which tools to use. Jared believes that cameras for pro photographers reveal “all sorts of settings, most of which are anachronistic and where there’s many complex interdependencies,”* noting that changes to the ISO, shutter speed, or f-stop (sometimes called the exposure triangle) all affect the exposure, so changes to one element need to be counterbalanced by adjusting one or more of the others to keep the brightness consistent.* He contends that “[t]echnology now solves the interdependency issues that photographers had to keep in their heads,” allowing them to get shots faster than ever before.*

[Image: the Exposure Triangle, with three sides labeled Shutter Speed, ISO, and Aperture; each one changes the overall exposure of the image in addition to other image qualities.]
“The Exposure Triangle comprises aperture, shutter speed, and ISO. These three camera and lens controls work together to regulate the amount of light that makes it to the light-sensitive surface (aperture and shutter speed) and the sensitivity of that surface (film or digital ISO). Not only do those three controls affect the light of a photograph, they also have unique ‘side effects.’ Aperture controls depth of field, shutter speed can blur or freeze action, and ISO can add or subtract film grain or digital noise from an image.” — via B&H

He’s right — to an extent. It’s true that changes to any part of the exposure triangle will change the image brightness, but it’s not enough to simply say these are three different ways to change the image brightness. In reality, they each affect the image in different ways while also affecting brightness.* Higher ISO can increase the amount of noise in an image (or reduce it when reaching a second threshold on a dual-gain sensor), longer shutter exposures can cause motion trails (possibly desired, possibly not), and a wider aperture (a lower f-stop number) will blur out the background.* No photograph gets captured without these elements playing a role, whether they’re exposed to users or not — light will always travel through an aperture, get exposed for an amount of time, and be received by a photosensitive surface at a given sensitivity level.*
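The counterbalancing Jared describes can be made concrete with the standard exposure-value formula. This is a minimal sketch of my own, not anything from the thread; ISO is held fixed at 100 here, where doubling it would likewise shift the exposure by one stop:

```python
import math

# Exposure value (EV) at ISO 100: EV = log2(N^2 / t), where N is the
# f-number and t is the shutter time in seconds. Equal EVs mean equal
# image brightness, whatever the mix of aperture and shutter speed.

def exposure_value(f_number, shutter_s):
    return math.log2(f_number ** 2 / shutter_s)

base = exposure_value(8, 1 / 125)  # f/8 at 1/125 s

# Opening the aperture one stop (f/8 -> f/5.6, i.e. dividing N by
# sqrt(2)) doubles the light; halving the shutter time (1/125 ->
# 1/250 s) halves it again, so the overall exposure is unchanged:
compensated = exposure_value(8 / math.sqrt(2), 1 / 250)

print(round(base, 2), round(compensated, 2))  # identical EVs
```

This is exactly the interdependency a camera can now solve automatically — but as the rest of the section argues, the side effects of each leg of the triangle are why some photographers still want the raw controls.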

Once again, this is a point about different users having different needs. Remember, users will choose the easiest way to get the capabilities they want. The average non-professional photographer is happy to give up some control over the process of creating a photograph in exchange for a photograph that looks more or less like what they saw.* For others, these automatic settings aren’t the easiest solution for the capabilities they’re looking for — some photographers want to create dreamy, clean, hyper-real, grungy, or non-figurative photographs. For them, control over these settings gives them the capability to create the images they want.*

Take aperture, for example, which is historically represented as an f-stop value. Jared says that photographers who want access to indicators like f-stop numbers on prosumer cameras will eventually die out.* He says these numbers are “no longer meaningful” and anachronistic,* theorizing they are offered “as placebos” in order to “make the photographers more comfortable.”* Setting aside his claim that this is partly due to apertures being a fixed size in most cameras today* (which is certainly true of most smartphones but not of prosumer cameras), it’s simply not the case that this information isn’t useful.

Adjusting the aperture can create a brighter image, yes, but it also changes the amount of blur or clarity in the background, how sharp the in-focus areas are, the amount of vignetting in the corners, and the relative ability of autofocus to find its target quickly — all of these tradeoffs are tied to aperture. A photographer taking photos of a situation they may not get to re-shoot, like a wedding, has some very good reasons for wanting direct control over this attribute, and others.

Jared believes that computational power in cameras has reduced the need to know and understand these values.* Again, he’s right — in a sense. Abstracting away what these controls do and what these values mean has worked wonderfully for the average point-and-shoot user.* But there are users for whom abstracting this information doesn’t make any sense.

If we were to create a single control that combined background blur and brightness that abstracts away any mention of f-stop value, this might simplify things for the novice user but complicate things for the experienced photographer who is no longer able to bring their knowledge of f-stops to bear.* Or, if we were to simplify the control further to control background blur alone, the camera would need to counterbalance the brightness via automatic changes to shutter speed and ISO that the user may have good reasons to want to control themselves.*

You might argue that there are new techniques available — that computational photography can analyze a photo and apply background blur via software. But simulated software-based background blur has different properties from lens-based background blur. Either of these techniques might be leveraged to artistic effect, but as each result has a unique character, it’s no different from telling an oil painter they must use watercolors because both are equally able to create paintings. The capabilities of each technique are different.*

We even had some audience participation toward the end. With HDR photography,* Jared posits that users no longer need to calculate the effects themselves and could instead “get immediate quality results” from a camera’s ability to compute results automatically.* My friend Jo chimed in to say that while their camera had the ability to do automatic bracketing, they prefer to control things manually because it gives them results they prefer.* If Jo could get the same results from an automatic process, they would, but the results from auto bracketing aren’t the same. To put it another way, cameras without the ability to do manual bracketing aren’t capable of producing the results Jo wants.* Users will choose the easiest way to get the capabilities they want — capabilities take precedence over ease of use. Ease of learning (Learnability) and ease of day-to-day use (Ergonomics) follow from there.

There are users who want to push the limits of what is possible and want greater control over the outcome. The same is true for a CMO who asks for a dashboard of numbers. There are common questions they may want to have answered, yes, but there are also creative connections they can more easily make by seeing these base truths for themselves.* We should answer the common questions for them, but not prevent them from learning more — especially when they’ve told us they’re interested.

In the end, Jared says “[s]how me a place where you think a dashboard is valuable and I’ll show you a place where we don’t know enough about users and what they need to accomplish,” suggesting this may be the case “98% of the time.”* Perhaps that’s not so far off? Users who ask for traditional “dashboards” by name constitute a very small percentage of the population. That said, while a readout of 500+ sensors would be useless to the average driver,* information displays like the gas gauge can be valuable tools for helping a wide set of users and use cases.*

In short, raw data can help with edge cases, and domain experts are simply users with more edge cases. So don’t worry — sometimes, a dashboard is the perfect tool for the job.

What do you think? Do you agree or disagree? Feel free to share your thoughts in the comments.

Once again,* a kind thanks to Jared Spool for the friendly debate. If you enjoyed this discussion, you may enjoy my upcoming book that explores these ideas of Capability, Learnability, and Ergonomics in greater depth, and explores what drives users to select certain tools instead of others. It also looks at ways to make tools that are easier to use day-to-day instead of simply being easier to learn upfront.

Jared will be receiving an advance copy.


Hans van de Bruggen

Product, Design, and other musings. Author of Learnability Isn’t Enough (book.hansv.com)