Age of Experience

This is an excerpt from an upcoming book on the discipline of Creative Technology.

I have a confession to make. I love watching ghost hunting shows. Since Ghost Hunters premiered on the SciFi channel more than a decade ago, it's been fascinating to watch people explore locations around the world in an attempt to find concrete evidence of paranormal activity. Ghost hunting has blown up since then. We have bros with t-shirts two sizes too small yelling into empty rooms. There are teams of pseudo-scientific engineers building Tesla coil powered pyramids to capture a spirit in a "focus crystal." Beyond the historical appeal of learning about asylums, castles and abandoned mansions, it's also interesting to hear why people believe in ghosts. Not in the sense of mediums and ghost whisperers, but in a physical sense: what environmental effects cause the cold spots people feel? Why do certain rooms in certain buildings fill some people with dread?

In the early 1980s, Vic Tandy, an engineer and IT lecturer at Coventry University, had an encounter with the paranormal. Tandy was working in a medical device research lab when he "was sweating but cold, and the feeling of depression was noticeable – but there was also something else. It was as though something was in the room with me." As the feeling grew more intense, he began to see an apparition in his peripheral vision. When he turned to look at it, the apparition disappeared.

The next day Tandy returned to the same laboratory. An avid fencer, he had clamped his foil in a vice so he could polish it. When he walked away from the blade, it began to vibrate wildly. Another case of the phantom of the research lab? Quite by accident, Tandy had stumbled onto a theory: infrasound was present in the lab.

Sound is an incredibly powerful thing for animals, from vocal communication, which allows us to convey and share complex concepts, to music, which allows us to express a range of emotions. But there are other sounds that trigger something more primal, things our reptile brains instinctually know are dangerous. Have you ever heard a lion roar? Not on TV. In person. That majestic roar carries a deep bass that shakes you to your bones, and it triggers your reptile brain to know that something dangerous is near.

Sound waves are physical pressure waves that vibrate through a medium like air or water. If we step back into high school physics class, waves have a few key properties: the frequency of a wave is the number of cycles per second, measured in hertz (Hz); the period, the inverse of frequency, is the time one cycle takes; the wavelength is the distance over which the wave repeats, equal to the speed of sound divided by the frequency and typically measured in meters; and the amplitude is the maximum displacement of the wave from its resting state, which our ears perceive as loudness.

In terms of human hearing, we generally say that humans can hear sound from 20Hz to 20,000Hz. Frequencies under 20Hz are called infrasound while frequencies above 20,000Hz are called ultrasound.

Armed with that theory, Tandy ran a series of experiments. He was correct: infrasound was present in the lab, generated by a newly installed extractor fan that produced sound waves at 18.9Hz. The infrasonic waves were strongest at Tandy's desk, the very spot where he had seen his apparition and where his fencing foil had mysteriously shaken on its own. When the extractor fan was turned off, the sense of dread lifted and the foil stopped dancing.
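As a quick sanity check on Tandy's numbers, we can compute the wavelength of that 18.9Hz tone. Below is a minimal back-of-the-envelope sketch in Python, assuming the speed of sound in room-temperature air is roughly 343 m/s; the half wavelength comes out to about nine meters, roughly the length of a lab room, which is consistent with a standing wave peaking near the middle of the room.

```python
# Back-of-the-envelope check of Tandy's 18.9Hz tone.
# Assumes the speed of sound in air at ~20C is roughly 343 m/s.

SPEED_OF_SOUND = 343.0  # m/s, approximate for room-temperature air

def wavelength(frequency_hz: float) -> float:
    """Wavelength in meters of a sound wave at the given frequency."""
    return SPEED_OF_SOUND / frequency_hz

fan_tone = 18.9  # Hz, the frequency Tandy measured from the extractor fan
lam = wavelength(fan_tone)
print(f"wavelength:      {lam:.1f} m")    # ~18.1 m
print(f"half wavelength: {lam / 2:.1f} m")  # ~9.1 m, about the length of a lab
```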

Able to reproduce the experiment, Tandy published his findings in the Journal of the Society for Psychical Research. While it had been known for some time that infrasound can trigger feelings of fear and shivering, Vic Tandy was the first researcher to link that experience to ghostly sightings. Tandy followed up by testing his theory at other haunted locations, like Edinburgh Castle and the Tourist Information Bureau next to Coventry Cathedral, and found infrasound present at each one.

Modern day ghost hunters use a wide variety of technology tools to search for the source of apparitions: electromagnetic field meters, white noise generators, thermal cameras. Perhaps they should add an infrasonic microphone to the standard loadout for their next ghost adventure.

So infrasound can instill a sense of fear and dread in people. Unless you are in the business of building haunted houses, which would be very cool, how does this help us build engaging experiences? One of the interesting things about human emotions, especially their physiological expressions, is that they lie on a spectrum. Two different emotions can sometimes trigger a nearly identical physiological response.

Pipe organs trace their origins back to the 3rd century BC, when the Greeks built the hydraulis, a water-powered organ. A hydraulis used the force of water, from either a waterfall or a pump, to generate air pressure that was pushed through pipes of various lengths. To see an amazing modern version of a water organ, look at Nikola Bašić's Sea Organ in Zadar, Croatia.

Around the 2nd century AD, people started replacing the water mechanisms with inflated leather bags, and by the 6th or 7th century AD, bellows were being used to drive the air through the pipes. Pipe organs have grown more mechanically complex over the centuries and still show up in cathedrals and theaters around the world.

In One God Clapping: The Spiritual Path of a Zen Rabbi, Rabbi Alan Lew explains there are several different words for fear in biblical Hebrew. Pachad, or dread, is “projected or imagined fear,” the “fear whose objects are imagined.” This can be thought of in terms of instinctual fears as well, triggered in our reptile brain. Like hearing the roar of a lion.

Lew also describes a second Hebrew word for fear. Yirah is "the fear that overcomes us when we suddenly find ourselves in possession of considerably more energy than we are used to, inhabiting a larger space than we are used to inhabiting." That sounds a bit more like awe.

If you think of fear and awe in terms of physiological reaction, they are often quite similar: goose bumps rise on your skin, the hackles on your neck stand up, your heart rate accelerates. If you experience those physical symptoms in a cathedral, your mind will probably wander to a different place than if you experience them in the basement of an abandoned sanitarium. And could those 64' organ pipes that generate an 8Hz infrasonic tone be reinforcing another feeling in that cathedral?
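Why does a 64' pipe land in infrasonic territory? A rough calculation makes it plain. The sketch below is a minimal estimate assuming an idealized open pipe and a ~343 m/s speed of sound; real pipes have end corrections and tuning conventions, but the ballpark holds.

```python
# Approximate fundamental of an idealized open organ pipe: f = v / (2 * L).
# Ignores end corrections and tuning conventions; this is a rough estimate.

SPEED_OF_SOUND = 343.0   # m/s, approximate for air
FEET_TO_METERS = 0.3048

def open_pipe_fundamental(length_feet: float) -> float:
    """Approximate fundamental frequency (Hz) of an open pipe."""
    length_m = length_feet * FEET_TO_METERS
    return SPEED_OF_SOUND / (2 * length_m)

print(f"{open_pipe_fundamental(64):.1f} Hz")  # ~8.8 Hz, below the 20Hz floor of human hearing
```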

While it's impractical to install a pipe organ in a modern physical-digital experience, lessons from a 2,000-year-old technology can still help us add layers of emotional depth to our experiences. By understanding traditional technologies and the visceral reactions they can trigger in human physiology and emotion, we can build more compelling experiences that shape and focus users' emotional responses.

Technology + Imagination Can Save Brick and Mortar Shopping
Brick and Mortar is Losing
In the war for America's spending dollar, brick-and-mortar stores are fighting e-commerce for survival. They are losing — in fact, they're doing even worse than you might guess.
Part of this loss is natural. As more goods and services become digital, there's no need for a physical location to purchase them. And for many goods, the convenience of shopping at home or on the go just can't be beat.
However, brick-and-mortar is failing as badly as it is because of a pathetic, industry-wide lack of imagination, investment, and courage.

In the face of competing with Amazon, one of the most innovative companies in the world, most brick-and-mortar stores have stood still. Some have piloted one incremental idea at a time — and watched it fail — then thrown up their hands in despair while settling into a deck chair on the Titanic.

Seriously, how hard should it be for a shopping experience crafted by multimillion-dollar companies in large, fully controlled spaces to compete with the shopping experience on a 5-inch touch screen? Phone apps usually lack sound, smell, taste, excitement, group interaction, and more. This contest should be Disney World vs. the Disney website. Instead it's Warehouse vs. Warehouse Catalog.
There is nothing sacred about the standard design of a Mall or Department Store.
Au Bon Marché – The Original Department Store
The design of the modern Department Store dates back to the mid-1800s and a Parisian store called Le Bon Marché. The store was "an entire wonderland of clothes, fabrics, furniture, trinkets, jewelry, and countless other goods — all collected together in one extravagant space." It echoed the "seductive power of sensory overload" demonstrated by the World's Fair of 1855. The idea was to invite shoppers in with fancy windows, trap them by confusing their senses, and sell them things they didn't need as they wandered around in a daze.
Victor Gruen designs the Modern Shopping Mall
The "Modern Mall" was designed in the 1950s by an avant-garde European socialist named Victor Gruen. He was attempting to recreate the vibrancy of a small European village in a weather-protected, managed space. He had planned a beautiful town around the mall — instead, the developers surrounded the market space with a parking lot. The corrupted design did spectacularly well because it suited the lifestyles of 1950s suburbanites: access to all the same goods as their city counterparts, some excitement and social interaction during the cold winters and hot summers, and a place to spend lots of time. (For more, read Wonderland by Steven Johnson. http://amzn.to/2jknWR8 )
Neither the Department Store nor the Modern Mall addresses the needs and desires of modern shoppers. Modern shoppers do not react well to being treated as a mass market, ripe for being fooled by shiny things. Online competitors have shown us that we deserve more: personalized experiences that are time-efficient and fun. The Mall or Store should be doing all the hard work while we are treated like royalty. Satisfying the modern shopper requires reinventing the shopping experience, nuts to bolts.
Below is an example of what we think is possible, using technologies that are available in beta or production today (January 2017). We think this is the very least that must happen to save modern retail.
Shopping at the Mall in the Age of Experience
Priya is a typical suburban mom in a typical large suburb.
She has an automated setup with Amazon.com and various other websites that purchases her consumables when they are needed and delivers them to her door. She goes shopping for things she finds more fun — clothes, gifts, technology, and the like. Today she needs a new dress for a party that she’s attending and new jeans for her son.
Priya's self-driving SUV pulls up to the Mall entrance. Her phone registers her presence with the Mall, which accesses her account. The Mall knows many facts about her and her family, something like the profile sketched in code after this list:
  • family members: Priya, her husband, their 6-year-old son, their 4-year-old daughter
  • credit cards
  • home address
  • clothing sizes
  • shoe sizes
  • interests and hobbies
  • birthdays
  • purchase history
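What might that account look like under the hood? Here is a minimal sketch; every field name and type below is an assumption for illustration, not the schema of any real retail platform.

```python
# Hypothetical shopper profile, sketched as Python dataclasses.
# All field names and types are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class FamilyMember:
    name: str
    birthday: str               # ISO date string, e.g. "2013-02-14"
    clothing_sizes: dict        # e.g. {"jeans": "6 slim", "shirt": "S"}
    shoe_size: str
    interests: list             # e.g. ["dinosaurs", "video games"]

@dataclass
class ShopperProfile:
    account_id: str
    home_address: str
    credit_cards: list          # tokenized references, never raw card numbers
    purchase_history: list      # past order records
    family: list = field(default_factory=list)  # FamilyMember entries
```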
The Mall assigns the car a parking spot and it goes and parks itself there. She walks into the Mall with her kids.
She knows that her son had a recent growth spurt, so she first stops in the fitting booth. He is 3D scanned and his new measurements are stored with the Mall and sent to Priya by email.
She exits the booth and whispers into her phone, requesting a Children's Concierge. A young woman soon pulls up in a small electric vehicle. She greets Priya and the children by name, and wristbands each child. Priya can now track her kids wherever they are in the Mall, and the kids can call her or pay for a snack. After Priya kisses them both, the concierge whisks the kids off to the Child Center, a Mall facility that is part playground, part video game arcade, and part healthy snack bar. Every Mall has one.
Now Priya is free to shop without having to worry about her kids going crazy with boredom, and driving her crazy in response.
Priya has already been to the gym today, so she doesn’t feel the need to walk for exercise. She grabs a Segway from the stand, and activates it using her phone. Her vehicle session is registered to her account, and she’ll be charged a small rental fee unless she purchases something during her time at the Mall. Directed by the vehicle, she rolls up a ramp, through a couple of corridors, and emerges into the women’s formalwear department of Macy’s.
The store is not arranged like the Macy's of today. Gone are the racks of the same garment in different sizes. Each garment is shown on its own, in an ensemble, or on a mannequin. Each display is beautifully laid out, showing off the product in unique ways. The store constantly tries new layouts, and sensors track how many customers visit each display, how long they browse, how happy they seem, and how much they purchase.
Priya parks the vehicle and walks through the displays, enjoying the clothes. She finds a dress that she likes. She tells the display that she likes it, and it indicates that there is a dress in her size. She says that she will try it on. She wanders around and finds 3 more dresses that she likes. Finally she glances at her phone, which shows pictures of the 4 dresses on her list. She tells the phone, "Let's try them on!" The phone displays a map to the nearest dressing room, which is not too far away.
She walks over to the dressing room, and it greets her by name. Lights on the floor guide her to dressing room 4, which has her dresses in it. On the wall is a large video-mirror. Priya tries the dresses on, asking the wall to "take a photo" at various times. Each photo is stamped with Priya's account, the time of day, and the model and size of the dress. She even takes some video of her favorite dress. She tells the dressing room that she'll take her favorite, but she's not sure the others are worth buying. She sends the whole package of photos and videos to a couple of girlfriends for second and third opinions, then leaves Macy's on her Segway and rolls over to a smaller store called The Children's Place.
Her phone asks who she is shopping for, and she answers with her son's name. Now The Children's Place knows the measurements and gender of the target, and it reacts the same way Macy's did. Some of the clothes will run a little long for him — and the displays show her exactly how, by fitting the jeans to a 3D model built from his scan when they walked in. He's growing quickly, so she decides to buy them anyway. She buys 3 pairs of jeans in different styles.
She calls her kids on their bands — they're having fun and don't want to be disturbed. She smiles, rides her Segway over to the food court, grabs a froyo, and just enjoys the solo time.
Finally she rolls over to the Child Center to pick up the children. She could have asked a concierge to bring them to the exit, but her kids typically hate leaving the center, and can get surly. She knows she’s better off being there to pull them away from the fun.
She leaves her Segway (which drives itself back to its stand) and is driven back to the exit by another electric car. Her phone has ordered the SUV to come meet them, and the Mall has delivered her purchases to the exit. All arrive at the same time, and a robot concierge loads the purchases into the back of the car. As the SUV drives them home, Priya gets responses from her friends about the dresses. She's convinced that the green one looks good on her, so she makes the purchase on her phone. An hour later, the dress is delivered to her house by drone.

Fake News in an Infinitely Editable World
We already know that just because we read that "The President said blah blah blah," the President might not have said that. It's easy to fake text; all it takes is a keyboard and a fibber with an agenda. BAM! Fake news, in your face. Nobody automatically trusts that text is evidence of anything.
Photography, once worth a thousand words, is now worth nothing when it comes to evidence. Pyramids are rearranged, women become inhumanly thin and long, and people's faces often appear atop the wrong bodies. As Jon Stewart once said, "Yeah, we have Photoshop." Bzzzz — fake news!
We have, however, as a society, come to trust video. If we can see it happen, we believe that it happened. Sure, there's a little blurring around the edges — the camera angle might hide some of the action, the quality might obscure some detail. But in general, seeing is believing.
That trust is about to die.
So far, creating convincing fake video is difficult, expensive, and slow. It takes a Hollywood special-effects house and millions of dollars to make a scene that looks, feels, and sounds real. As humans, we're good at reading facial expressions — we have low tolerance for RobotFace. We're good at recognizing voices — we know an impostor when we hear one. When we watch an approximation of a real-life scene in a movie, we marvel when it gets close.
In a year or two, creating fake video will become quick, cheap, and easy. How?
Here's one way — Adobe is working on a project called VoCo. It can take 20 minutes of audio of a person speaking and build a voice model from it. Then you can type in arbitrary text and hear the person speak your words, in a believable fashion. It sounds good already — and will undoubtedly get much better before its near-future launch.
Adobe’s VoCo
Here's another — a group of university researchers have demonstrated a system that maps your face — from a webcam — onto another face. Any facial expression, tic, or emotion that crosses your face is reflected on the other. In this demonstration of Face2Face, they show an everyday joe driving The Donald's face with his own. And it looks really, really good.
Face2Face, a university research project
These technologies will be finding their way into consumer products, for fun, gaming, ebook-reading, live-mediated acting, and thousands of other applications. As humans, we’ve always enjoyed mimicry and identity-bending, and these technologies will be all the rage at a Christmas not too far from now.
But what does this mean for recordings of news? The next time you see one man killing another in a video, will you believe what you’re seeing? If you see a politician make a speech and say something controversial, will you wonder if it’s her, or will you assume that it is some kid with Adobe’s Creative Cloud?
As a society, we’ve developed ways of dealing with easy-to-fake evidence. We’ve evolved “authority” — voices that we trust like the Police, News organizations, the Government. At least — our parents used to trust them. We’ve been raised to question everything we’re told, whether it’s news about the President or science about the weather. Many people find it easier to believe completely falsified “news” if it confirms their existing biases and worldview, regardless of the truth. We’ve left “Truth” behind and become a “Truthy” culture, in which nobody can convince anybody of the truth of anything.
Even our major tool in convincing others — Evidence — is about to become “Evidency.”
And down the rabbit-hole we go.

A Westworld Experience Is Closer Than We Think
Last Saturday, I got into a rather excited argument with my four-year-old over what music we were going to listen to. The argument started with, "Alexa, play Stevie Ray Vaughan," to which my son responded, "Alexa, play The Chainsmokers." That sequence repeated itself until my four-year-old won out, because we ain't ever getting older. Put "Closer" on a postmodern player piano and we can transport ourselves to a saloon in Westworld, an anachronistic phenom of a show that outlines some of the coming ethical and moral questions surrounding artificial intelligence.
 
Westworld exists at an unidentified point in time, but one that is likely closer than most of us think. In this world, Hosts, human replicants driven by artificial intelligence, are programmed with complex storylines in a theme park where human Guests can live out their Wild West fantasies. Although our ability to fabricate Hosts to the level of realism portrayed in the show is far off, machine learning and AI are on the cusp of the exponential growth that will make deep conversational interfaces possible.
 
One of the challenges of creating conversational interfaces is the geometric growth of the decision trees that power the experience. Combining machine learning with novel approaches like Computer Assisted Authoring of Interactive Narratives could give us tools that greatly accelerate the rate at which we develop these interfaces while dramatically reducing their complexity. This kind of collaboration, a human/AI symbiosis, is one likely way forward in a world where machines continue to outperform humans at an increasing number of tasks. By amplifying human creativity with machine intelligence, we can create chatbots that deliver semi-scripted experiences, so users have a unique experience with each visit.
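To make that geometric growth concrete, consider a dialogue tree where the user can choose among a handful of responses at every turn. The toy sketch below assumes a branching factor of four, an arbitrary number picked for illustration; the point is how quickly the path count outruns any team of human writers.

```python
# Toy illustration of why hand-authored dialogue trees don't scale:
# the number of distinct conversation paths grows geometrically with depth.

def path_count(branching_factor: int, depth: int) -> int:
    """Distinct root-to-leaf paths in a full dialogue tree."""
    return branching_factor ** depth

for turns in (3, 5, 8, 10):
    print(f"{turns:>2} turns -> {path_count(4, turns):,} possible paths")
# 3 turns -> 64 paths; 10 turns -> 1,048,576 paths
```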
 
In Westworld, the Host Maeve has been programmed to read and manipulate human emotions. Machines that can read and evoke emotions in people may seem like a futuristic concept, but the field of Affective Computing has been around for decades. Advances in machine learning and image recognition have let facial recognition services, like those from Google and Microsoft, bake emotion detection into their systems. Even more impressive is the announcement that we can now detect emotional responses using WiFi signals. While a WiFi router can't approach Maeve's cunning, it is far more ubiquitous. Interlacing conversational experiences with emotional feedback will create more immersive and engaging experiences. And emotion detection can extend to out-of-home displays to create playful experiences with passers-by.
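What might that interlacing look like in practice? Here is a minimal sketch; detect_emotion is a hypothetical stand-in for whatever camera-, voice-, or WiFi-based emotion detector you have access to, and every name and response below is an illustrative assumption, not any vendor's API.

```python
# Hypothetical sketch: branching a conversational turn on detected emotion.
# `detect_emotion` stands in for a real emotion-detection model or service.

def detect_emotion(sensor_input) -> str:
    """Placeholder for a real emotion classifier (camera, voice, WiFi...)."""
    return "frustrated"  # e.g. one of: "happy", "neutral", "frustrated"

RESPONSES = {
    "happy":      "Glad you're enjoying this! Want to go deeper into the story?",
    "neutral":    "Here's the next part of the story.",
    "frustrated": "Let me slow down. Which part should I replay for you?",
}

def next_line(sensor_input) -> str:
    """Choose the next scripted line based on the user's detected emotion."""
    emotion = detect_emotion(sensor_input)
    return RESPONSES.get(emotion, RESPONSES["neutral"])

print(next_line(sensor_input=None))  # -> the "frustrated" branch in this stub
```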
 
As we enter the Age of Experience, it's critical to understand and leverage the amazing array of tools that AI is providing us. As neural networks dream in order to learn faster, teach themselves how to remember, and learn to read lips better than we can, how can we weave these technical marvels into more engaging brand experiences? As experiences become the product, can we use these tools to create nonlinear narratives, replacing traditional digital experiences?
 
Some of the most interesting, and potentially creepy, applications of these technologies will come from merging conversational UIs with adaptive learning built on data from vendors like Acxiom and El Toro. Instead of the sledgehammer of retargeting, we can shape a user's experience with a palette knife, using a combination of AI and segmented behavior analysis. Even better, with the advent of group predictive sentiment analysis, we can custom-tailor experiences for users before they know they want them. An objectively formulated, data-derived experience, born of machine-driven insights but tweaked, tuned, and tailored by human creativity.
 
Chatbots are but one tool in the oncoming stampede of machine learning technologies that marketers can use to drive engagement and create compelling experiences. The best marketers will embrace this rapid onslaught of change and emerge from the maze (so to speak) with the knowledge of how to use these technologies to amplify simple brand truths. And the truly brave will question the nature of their reality, creating anticipatory experiences instead of relying on the same old safe bet.