Let's look to magicians to better understand technological deception

To understand the new technologies in our lives, it may be more useful to look to stage magicians than to source code.

The science fiction writer Arthur C. Clarke wrote that “any sufficiently advanced technology is indistinguishable from magic.” But I’m not saying that the performances of David Copperfield and the products of Steve Jobs are one and the same. What’s valuable about the example of stage magic is that it helps us think about which technological deceptions should be permissible in society.

We like the deception at the heart of a Penn & Teller show, but we don’t like being “deceived” when the outcome is a negative one. Take, for example, the Volkswagen scheme to evade emissions regulations. At the heart of the plot was a kind of magician’s trick. The company’s “green” diesel cars carried a “defeat device”: software that detected when the car was being tested and produced the illusion of a clean-running vehicle, even though its emissions on the road ran far above the legal limit. Like a trick deck of cards that looks “fair” but is actually rigged to favor the magician, the defeat device masked the car’s reality: that it was a computer with four wheels designed to fool environmental regulators.
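To see the shape of the trick, consider a deliberately simplified sketch in Python. None of this is Volkswagen’s actual code, which has never been published; the signal names and thresholds are illustrative assumptions standing in for the test-detection logic investigators described:

```python
# Illustrative sketch only -- not real firmware. Reports described software
# that inferred a lab dynamometer test from signals such as steering angle
# and wheel speed, and enabled full emissions controls only in that case.

def looks_like_dyno_test(steering_angle_deg: float, wheel_speed_kmh: float) -> bool:
    """On a dynamometer the drive wheels spin while the wheel stays centered."""
    return wheel_speed_kmh > 0 and abs(steering_angle_deg) < 1.0

def select_emissions_mode(steering_angle_deg: float, wheel_speed_kmh: float) -> str:
    if looks_like_dyno_test(steering_angle_deg, wheel_speed_kmh):
        return "full_nox_controls"   # the 'fair' deck shown to regulators
    return "reduced_nox_controls"    # everyday behavior on the road
```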

The Samsung SmartTV is another recent example. While in many respects it looks and behaves like a traditional television, it also has the 1984-style capability to listen in on conversations happening around it. Like Volkswagen’s cars, the TVs rely on an illusion of category: the device takes the form of something recognizable but behaves quite differently under the hood. The SmartTV is not accurately described as a television at all; it is a listening device that happens to produce a television-like experience.

When the illusions were revealed, they set off a wave of privacy concerns among Samsung owners and environmental concerns among Volkswagen drivers, along with anger from commentators and the public at large.

Still, there are many legitimate reasons for engaging in this sort of deception. Taking the form of a familiar technology often directly improves usability and adoption. Samsung would confuse people if it rebranded its television a “two-way entertainment portal.” Because the device looks like a TV, behaves like a TV, and is controlled like a TV, users can take advantage of new features without having to relearn much.

Moreover, these deceptions provide a sense of magic and novelty that is often the selling point for consumers. The thermostats produced by Nest, for instance, borrow the form factor of traditional thermostats, but intelligently adjust temperature, monitor energy consumption, warn of burst pipes, and so on. And indeed, it is these “additional” features that give Nest a competitive advantage over traditional home-appliance companies.

Deception by machines is thus a crucial part of the economy around smart devices, the Internet of Things, and “intelligent” assistants like Siri. As in a magic show, most of these deceptions make the technology better. But the tricks demand our attention because, while sometimes the wool is pulled over our eyes for our amusement, sometimes it is done to take advantage of us.

Looking to the world of magic might help us come up with a way of preserving the good work that techno-illusionists do while limiting the potential harm. After all, the concept of socially acceptable deception is not a new one. Performers of magic tricks (“Illusions, Michael!”) make their entire business out of deceiving the public for profit. Rather than being a source of fear or producing calls for regulation, these activities are permitted, if not actively encouraged.

We’re going to see more techno-illusionism in the future. Computers and sensors are increasingly being embedded in familiar objects that have historically been “dumb.” This allows them to be configured in ways radically different from what we traditionally expect.

Take Mattel’s new “Hello Barbie,” a distributed listening network made to look like a traditional doll, or the new generation of smartwatches that are simply small computers attached to your wrist. The smartphone retains the notional label of “phone” even though nowadays it functions as a bundle of features with a phone vestigially tacked on.

This widening divergence between how we expect things to behave, based on their appearance, and how they actually function can be unsettling. The sometimes-good, sometimes-bad motivations behind these deceptions present challenges for regulators and for those interested in protecting consumers. To what degree should devices be required to reveal their inner workings? Should companies be required to disclose them to the public? How could a regulator effectively audit deceptive software to catch cheaters?

The answer to all this is usually a call for more transparency and “truth” in the design of devices. No one likes being fooled, particularly by a corporation whose financial interests give us reason to doubt its commitment to the public interest (what researcher Christian Sandvig dubs “Crandall’s complaint”). We want to resist a “black box society.”

But companies often argue that transparency might reveal closely held trade secrets, violate the privacy of their users, and expose vulnerabilities. More broadly, it raises a bigger philosophical question of design: is it ethically wrong for new technologies to mimic the interfaces and forms of old devices when they do so much more?

Magic is an important guide here. For one, stage magic suggests that regulating deceptive technologies does not have to fit into the neat binary of a black box that is either open or closed. Magicians’ sleights are permissible in part because of the theatrical signals that accompany their performance: dramatic costume, plentiful smoke, and the particular aesthetic of magical props.

All of these signals are ways of indicating to the audience that an amusing or engaging deception is under way. These cues are critical. Without them the audience has no notice that a trick is taking place, and what would otherwise be a performance becomes a scam, particularly if money or valuables are involved. It’s why the same sleights of hand might be employed by David Blaine and a three-card monte operator, yet one is permissible and the other illegal.

The same may apply to sleights of machine. We might develop a set of cues in machines, the equivalent of the magician’s hat and stage smoke, to alert users to the presence of something unexpected in their devices. These might be simple: imagine an indicator light that flashes every time a device sends local data into the cloud, or every time it accesses certain types of personally identifiable information.
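As a sketch of how such a cue might be wired into software (the names here are hypothetical; no standard API for this exists), a device’s sensitive operations could be wrapped so that every call fires a user-visible signal:

```python
from functools import wraps

def flash_indicator(reason: str) -> None:
    # Stand-in for real hardware: pulse an LED, light a status icon, etc.
    print(f"[cue] {reason}")

def with_cue(reason: str):
    """Wrap a sensitive operation so every call fires a visible cue."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            flash_indicator(reason)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@with_cue("audio leaving the device for the cloud")
def upload_voice_clip(clip: bytes) -> None:
    ...  # the actual network transfer would go here
```

The point is not this particular mechanism but the principle borrowed from the stage: the deception may proceed, so long as the audience is told a trick is in progress.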

Before we can come up with solutions, though, we need more sophisticated language to talk about technological sleight of hand. Magicians have numerous categories for the large variety of magic effects that can be performed in a show, as well as a deep vein of literature on the taxonomy of deception itself. There are “vanishes” (an object disappears “as if by magic”), “transpositions” (an object moves from one position to another), and so on.

These taxonomies are useful in the technological context to differentiate between the many types of deception a device might engage in. It may be necessary to invent new ones, but in many cases, taxonomies from stage magic might directly apply. Joe Bruno’s 1978 The Anatomy of Misdirection, for instance, proposes three types of deception:

  • Distraction, in which many things happen simultaneously, preventing the viewer from perceiving the trick;
  • Diversion, where audience attention is intentionally directed to a point of interest away from where the actual action is happening; and
  • Relaxation, in which an action is repeated several times to lull the spectator’s attention, so that a surreptitious change in the pattern goes unnoticed.

An app that produces a “transposition,” moving user data from the phone to the cloud unexpectedly, is different from, say, a social network that deceives through “relaxation,” giving no notice when it diverges from its usual patterns of behavior, as when your News Feed is suddenly running an unannounced experiment on you. We need to differentiate between them, because these deceptions differ in the harm each might create, and in the cost and design of a remedy that would prevent that harm.
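One could even imagine encoding such a taxonomy directly, so that audits and disclosure rules can name the kind of trick at issue. Here is a toy sketch in Python combining the effects and Bruno’s categories above; the example classifications are illustrative assumptions, not any regulator’s actual scheme:

```python
from enum import Enum, auto

class Deception(Enum):
    VANISH = auto()         # an object (or your data) disappears
    TRANSPOSITION = auto()  # something moves, e.g. phone -> cloud, unannounced
    DISTRACTION = auto()    # too much happens at once to perceive the trick
    DIVERSION = auto()      # attention is steered away from the real action
    RELAXATION = auto()     # a repeated pattern changes surreptitiously

# Hypothetical classifications, for illustration only.
examples = {
    "app silently syncs user data to the cloud": Deception.TRANSPOSITION,
    "news feed quietly runs an experiment on you": Deception.RELAXATION,
}
```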

The increasing speed of technological change is leading designers to bridge the gap between our expectations of technology and its actual capabilities with theatrical devices that take on the shape of old, understood, predictable ones. This makes our devices ever more prone to illusion and hocus-pocus, for good or for ill. We can marvel, but there are risks when we do not know the magician is at work.

Tim Hwang is a writer and researcher at Data & Society. He currently serves as research director of the California Selfie Conservancy, a think tank focused on the economics and public policy of selfie photography. He recently co-authored “The Container Guide”, an Audubon-style field guide to identifying shipping containers in the wild, and is founder of the Awesome Foundation, a global network of giving circles supporting awesomeness. He is @timhwang and timhwang.org.
