In our houses, cars, and factories, we’re surrounded by tiny, intelligent devices that capture data about how we live and what we do. Now they are beginning to talk to one another. Soon we’ll be able to choreograph them to respond to our needs, solve our problems, even save our lives.
On a 5-acre plot in Great Falls, Virginia, less than a mile’s stroll through exurban scrub from the wide Potomac River, Alex Hawkinson has breathed life into a lifeless object. He has given his house, a sprawling six-bedroom Tudor, what you might describe as a nervous system: a network linking together the home’s very sinews, its walls and ceilings and windows and doors. He has made these parts move, let them coalesce as a bodily whole, by giving them a way to talk among themselves. Open a telnet session on the house’s digital hub and you can actually spy on his chattering stuff, hear what it says when no one’s listening:
- LIBRARY MOTION SENSOR: DEVICE 0X9E07 ZONE STATUS 0X0031
- CAR DOOR: TEMPERATURE: +13.0C; BATTERY: 2.4V
- CAR GLOVE COMPARTMENT: [87AC] CHECKIN
- FAMILY ROOM LIGHT: 2001-
- KITCHEN COUNTER LIGHT: 2001-
- THERMOSTAT: 4301-
- FOYER LIGHT: 2001-
- COFFEEPOT: 2001-
- LIVING ROOM MOTION SENSOR: DEVICE 0XB247 ZONE STATUS 0X0031
This is the language of the future: tiny, intelligent things all around us, coordinating their activities. Coffeepots that talk to alarm clocks. Thermostats that talk to motion sensors. Factory machines that talk to the power grid and to boxes of raw material. A decade after Wi-Fi put all our computers on a wireless network—and half a decade after the smartphone revolution put a series of pocket-size devices on that network—we are seeing the dawn of an era when the most mundane items in our lives can talk wirelessly among themselves, performing tasks on command, giving us data we’ve never had before.
Imagine a factory where every machine, every room, feeds back information to solve problems on the production line. Imagine a hotel room (like the ones at the Aria in Las Vegas) where the lights, the stereo, and the window shade are not just controlled from a central station but adjust to your preferences before you even walk in. Think of a gym where the machines know your workout as soon as you arrive, or a medical device that can point toward the closest defibrillator when you have a heart attack. Consider a hybrid car—like the new Ford Fusion—that can maximize energy efficiency by drawing down the battery as it nears a charging station.
There are few more appropriate guides to this impending future than Hawkinson, whose DC-based startup, SmartThings, has built what’s arguably the most advanced hub to tie connected objects together. At his house, more than 200 objects, from the garage door to the coffeemaker to his daughter’s trampoline, are all connected to his SmartThings system. His office can automatically text his wife when he leaves and tell his home A/C system to start powering up.
In this future, the intelligence once locked in our devices now flows into the universe of physical objects. Technologists have struggled to name this emerging phenomenon. Some have called it the Internet of Things or the Internet of Everything or the Industrial Internet—despite the fact that most of these devices aren’t actually on the Internet directly but instead communicate through simple wireless protocols. Other observers, paying homage to the stripped-down tech embedded in so many smart devices, are calling it the Sensor Revolution.
But here’s a better way to think about what we’re building: It’s the Programmable World. After all, what’s remarkable about this future isn’t the sensors, nor is it that all our sensors and objects and devices are linked together. It’s the fact that once we get enough of these objects onto our networks, they’re no longer one-off novelties or data sources but instead become a coherent system, a vast ensemble that can be choreographed, a body that can dance. Really, it’s the opposite of an “Internet,” a term that even today—in the era of the cloud and the app and the walled garden—connotes a peer-to-peer system in which each node is equally empowered. By contrast, these connected objects will act more like a swarm of drones, a distributed legion of bots, far-flung and sometimes even hidden from view but nevertheless coordinated as if they were a single giant machine.
For the Programmable World to reach its full potential, we need to pass through three stages. The first is simply the act of getting more devices onto the network—more sensors, more processors in everyday objects, more wireless hookups to extract data from the processors that already exist. The second is to make those devices rely on one another, coordinating their actions to carry out simple tasks without any human intervention. The third and final stage, once connected things become ubiquitous, is to understand them as a system to be programmed, a bona fide platform that can run software in much the same manner that a computer or smartphone can. Once we get there, that system will transform the world of everyday objects into a designable environment, a playground for coders and engineers. It will change the whole way we think about the division between the virtual and the physical. This might sound like a scary encroachment of technology, but the Programmable World could actually let us put more of our gadgets away, automating activities we normally do by hand and putting intelligence from the cloud into everything we touch.
Imagine a house with a nervous system, where the sprinklers take orders from moisture sensors.
Your coffee shop senses your approach and starts preparing your regular order.
THE FIRST STAGE of this transformation—the simple act of putting objects on the network—is well under way, spurred by a few different economic forces. For makers of consumer devices, one way to escape the trap of commodification is to put a device (alarm clock! refrigerator! fitness tracker!) on the network and call it “smart.” No doubt that’s a big reason why more than half of the gadgets displayed at this year’s International Consumer Electronics Show boasted some sort of wireless hookup. But an even bigger reason is that the rise of the smartphone has supplied us with a natural way to communicate with those smart objects. Nearly 700 million new smartphones shipped last year, most of which can communicate with nearby sensors via multiple wireless languages. At the same time, the staggering scale of the smartphone market has spurred sensor manufacturers to miniaturize and innovate, driving the cost of all the wireless chipsets (both sensors and receivers) down to a pittance. This has created a built-in market for these first-stage products—formerly unnetworked items that now deliver simple information to your phone, and from there to the cloud—at a relatively minimal manufacturing cost.
Already, scores of products have emerged to take advantage of Bluetooth Smart, a low-energy radio protocol that hit the market in October 2011. They include watches, heart rate monitors, and even some new Nike shoes (which use four built-in pressure sensors to send workout data back to your phone). One project, called Asthmapolis, uses a sensor that attaches to an asthma inhaler; it maps usage to generate insights into where attacks are likely to occur. Another rising technology is NFC, short for near-field communication; Visa just announced that it plans to let Samsung smartphone users make payments to merchants wirelessly over NFC instead of swiping a card, and some billboards are using the protocol to beam content to passersby who ask for it.
In the industrial realm, there’s a similar dynamic at work but with even higher stakes. Massive US companies like IBM (through its Smarter Planet initiatives), Qualcomm, and Cisco all see ubiquitous connectivity as a way to sell more products and services—particularly Big Data–style analysis—to their large corporate customers. Chinese manufacturers have much the same idea, and the Chinese government is pumping hundreds of millions of dollars every year into so-called Internet of Things–based manufacturing. (This project kicked off a few years ago when China’s then premier Wen Jiabao put forward the following equation in a speech: “Internet + Internet of Things = Wisdom of the Earth.”) Global analysts look at all these developments and project that by 2025 there will be 1 trillion networked devices worldwide in the consumer and industrial sectors combined.
Take one case in point: General Electric, which has been trying to apply the sensor revolution (what it calls the Industrial Internet) to 50 different projects across scores of businesses, from wind turbines to railroad locomotives to a pilot program with Mount Sinai Medical Center in New York that predicts, based on sensors in beds, when rooms will become available. But perhaps GE’s most remarkable application of this program has been to its own manufacturing process at the Durathon battery factory, completed last year in Schenectady, New York. Its biggest manufacturing challenge is the high tech ceramics that separate the electrodes inside the battery: Tiny variations in the mixing and firing process can lead to huge swings in quality and consistency of these ceramics. So the solution, GE’s team decided, was to engineer their way to consistency through data.
Step by step, they developed and refined their process using feedback from the machines. One crucial step was near the beginning, in mixing the powder that would eventually be pressed to form the ceramics. The team didn’t know the optimal mixing time needed to give that powder a perfectly even consistency that wouldn’t vary from batch to batch. And since the raw materials would themselves vary slightly—in density, for example, or moisture content—the mixing time would need to vary too. So, says Randy Rausch, a manager of manufacturing engineering at the plant, “we put a sensor on everything,” from the outside of the factory to the inside of the room to the inside of the vat to the innards of the machines. Eventually the team realized that the powder was ideally mixed when it reached a certain viscosity. The key sensor, it turned out, was inside the mixing apparatus itself: When it needed to draw more than a certain amount of power, indicating that the powder was at just the right thickness, the process was done.
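The stopping rule the GE team arrived at can be sketched in a few lines of code. This is a hypothetical illustration, not GE's actual control software: the threshold value and polling scheme are invented, but the logic is the one Rausch describes, where a rising power draw signals that the powder has thickened to the right consistency.

```python
POWER_THRESHOLD_KW = 7.5  # illustrative cutoff for "thick enough"; not GE's figure

def mix_until_viscous(power_readings, threshold=POWER_THRESHOLD_KW):
    """Count polling intervals until the mixer motor's draw crosses the threshold."""
    elapsed = 0
    for kw in power_readings:
        elapsed += 1
        if kw >= threshold:
            break  # powder is at the right thickness; stop mixing
    return elapsed

# A slightly drier batch thickens later, so it mixes longer:
wet_batch = [5.1, 5.9, 6.8, 7.6, 8.0]
dry_batch = [4.8, 5.2, 5.7, 6.3, 6.9, 7.4, 7.7]
print(mix_until_viscous(wet_batch))  # → 4
print(mix_until_viscous(dry_batch))  # → 7
```

The point of the design is that mixing time becomes an output of the process rather than an input to it: each batch runs exactly as long as its raw materials require.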
In many ways, this is the most extreme possible example of a first-stage usage. GE estimates that this single factory generates some 10,000 data points every second, and using that data has allowed GE to eliminate the high defect rates that typically plague high tech ceramics. Yet it has done it through pure data analysis, not through the actual coordination of these low-level sensors and devices. Like the consumer-hardware makers linking up their products to distinguish them from ordinary toasters and refrigerators, GE is connecting its industrial components to solve a near-term business problem. But in the process, the company and its industrial brethren are laying the groundwork for a far deeper transformation.
The rise of the smartphone has given us a natural way to communicate with all of our smart objects.
THE SECOND STAGE—the yoking together of two or more smart objects—is the trickiest, because it represents the vertiginous shift from analysis, the mere harvesting of helpful data, to real automation. This is a leap that tries our nerves: No matter how thoroughly we might use data to fine-tune our lives and businesses, it’s scary to take any of those decisions out of human hands. But it’s also a challenge to our imagination. In a non-programmable world, when few objects are connected, it can be tough to grasp how even pairs of things might naturally fit together. Alex Hawkinson of SmartThings likes to draw an analogy to Facebook, which has famously described the underlying data it owns as the social graph—the knowledge of who is connected to whom and how. Hawkinson wants us to think of a “physical graph” where all the objects in our lives take on similar underlying connections, based on how we might want the state of one object to depend on the state or behavior of another. But until you actually have the intelligence baked into your objects—until you have, say, a network-connected sprinkler system on one hand and an in-ground moisture sensor on the other—it can be hard to imagine the automation you might someday want, or even need, in your daily life.
Think about where you spend most of your waking hours: your office, perhaps, or your living room or car. There are all sorts of adjustments you make over the course of any given day that are reducible to simple if-then relationships. If the sun hits your computer screen, then you lower a shade. If someone walks in the door, then you turn down your music. If there’s too much noise outside, then you close your window. If you have a Word document open but haven’t finished writing a sentence in 10 minutes, then you brew another pot of coffee. Would you want to automate all of these relationships? Not necessarily. But you might find that automating some of them would make your life easier, more streamlined.
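Those if-then relationships are simple enough to write down as a rules table. Here is a minimal sketch, with invented sensor readings and device names, of how a home hub might evaluate them:

```python
# Each rule pairs a condition on the current sensor state with an action.
# All state keys and action names are illustrative.
rules = [
    (lambda s: s["sun_on_screen"],             "lower_shade"),
    (lambda s: s["someone_entered"],           "turn_down_music"),
    (lambda s: s["outside_noise_db"] > 70,     "close_window"),
    (lambda s: s["minutes_since_typing"] > 10, "brew_coffee"),
]

def evaluate(state):
    """Return every action whose condition holds for this snapshot of sensors."""
    return [action for condition, action in rules if condition(state)]

state = {
    "sun_on_screen": False,
    "someone_entered": True,
    "outside_noise_db": 74,
    "minutes_since_typing": 3,
}
print(evaluate(state))  # → ['turn_down_music', 'close_window']
```

The appeal of this shape is that each rule is independent: you can automate the two pairings you trust and leave the rest to your own hands.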
Perhaps the clearest two-sensor example is where one of the sensors is on us. “Presence” tags—low-energy radio IDs that sit on our keychains or belt loops and announce our location, verify our identity—are what let the SmartThings system text your wife or fire up your A/C when you leave the office. It’s also the principle behind Square Wallet and a number of other nascent payment systems, including ones from PayPal and Google. (When you walk into a participating store today, Square can let the cashier know you’re there; you pay simply by giving your name.) For the four-legged set, Qualcomm has created a product called Tagg, a tracking tool that monitors your pet’s movements while you’re gone, estimating its activity levels and alerting you if it strays too far from home.
With GPS we can reliably know our location within 100 feet, give or take, and that knowledge has transformed our lives immeasurably: turn-by-turn driving directions, local restaurant recommendations, location-based dating apps, and so on. But with presence technology, we have the potential to know our location absolutely, down to a foot or even a few inches. That means knowing not merely which bar your friend is at but which couch she’s sitting on if you walk through the door. It means receiving a coupon for a grocery item on the endcap at the moment you walk by. It means walking through an art museum and having your phone interpret the paintings as you pause in front of them. This simple link—between a tag on us and a tag in the world—stands to become the culmination of the location revolution, delivering on all the promises it hasn’t quite fulfilled yet.
Dennis Crowley, CEO of Foursquare, the location-based social app, thinks of location as a giant X on Earth that grows smaller as our technology improves. “We want to get down to the point where the X is this big,” he says, holding out his hands. “X marks the spot: It’s pirate’s treasure.” Already that X is shrinking: On Google Maps, you can now navigate inside certain airports and stores, with Wi-Fi triangulation helping out your GPS. (The Wi-Fi is especially important to distinguish among levels of a multistory building, which GPS is poorly equipped to handle.) But presence tags can simplify that math, replacing it with a concrete assurance of where we are. And the treasure that shrinking X digs up could be considerable. This is obviously true for retailers: According to a mobile couponing firm called Koupon Media, some 80 percent of customers who buy gas at one major convenience-store chain never walk inside the store, so presence-based coupons could make a huge impact on the bottom line. But it’s also true for our everyday lives. Have you ever lost an object in your house and dreamed that you could just type a search for it, as you would for a wayward document on your hard drive? With location stickers, that seemingly impossible desire has become a reality: A startup called StickNFind Technologies already sells these quarter-sized devices for $25 apiece.
Your office will text your wife when you leave and tell your home A/C system to start powering up.
A simple link—between a tag on us and a tag in the world—will complete the location revolution.
THE THIRD AND FINAL STAGE is to build applications on top of these connected objects. This means not just tying together the behavior of two or more objects—like the sprinkler and the moisture sensor—but creating complex interrelationships that also tie in outside data sources and analytics. Think about how much more intelligent your sprinklers could be if they responded to the weather report as well as to historical patterns of soil moisture and rainfall. Plugged into that information, your system wouldn’t just know how much water is in the soil; it could predict how much there will be, based on whether it’s going to rain or the sun will be baking hot that day. Think about a home medical monitoring system that didn’t just feed back data from diabetic patients but adjusted the treatment regimen as the data demanded. Think about a liquor cabinet that auto-populated your shopping list based on the levels in the bottles—but also locked automatically if your stock portfolio dropped more than 3 percent.
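The smarter sprinkler described above makes a good illustration of what third-stage logic looks like: not just reading the moisture sensor, but weighing it against an outside forecast. The thresholds and forecast format below are assumptions for the sketch, not any real product's logic:

```python
def watering_minutes(soil_moisture_pct, rain_chance_pct, forecast_high_c):
    """Decide how long to water, given the soil sensor and the weather report.
    All thresholds are illustrative."""
    if soil_moisture_pct >= 40:
        return 0                  # soil is already wet enough
    if rain_chance_pct >= 60:
        return 0                  # likely rain today; let the sky do the work
    minutes = (40 - soil_moisture_pct) // 2
    if forecast_high_c >= 32:
        minutes += 5              # a hot day will dry the soil out faster
    return minutes

# Dry soil, no rain coming, hot day → waters longer:
print(watering_minutes(soil_moisture_pct=25, rain_chance_pct=10, forecast_high_c=34))  # → 12
```

The difference from a second-stage system is the forecast input: the sprinkler is no longer reacting only to how much water is in the soil but predicting how much there will be.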
With his 200-plus sensors and objects installed, Hawkinson is using his house as a laboratory to dream up just these sorts of elaborate interconnections. A blond, hulking, bespectacled entrepreneur in his forties, he lays out some of the basic objects on the coffee table in his airy living room. There is a motion sensor, a moisture sensor. There is the “multi sensor,” whose two pieces can be mounted opposite each other on a door and its frame, registering movement and also ambient temperature. There is a power outlet that listens for commands as it sits plugged between the wall and any AC-powered device. And there is the presence tag, worn on a keychain or belt loop, which announces to the house that its bearer is home. Finally, there is the sandwich-sized device that binds them together: the SmartThings Hub.
Right now there are multiple efforts under way to standardize how connected objects talk to one another. Two different projects, led by big companies—AllJoyn, spearheaded by Qualcomm, and MQTT, pushed by IBM and others—are trying to create something like an HTTP for smart objects, giving them a shared language to coordinate their actions. Hawkinson’s strategy, by contrast, is to make his hub a universal translator, deciphering the different types of chatter over multiple wireless protocols and processing it all in the cloud. The SmartThings Hub includes Wi-Fi, Bluetooth Smart, and two mesh technologies called ZigBee and Z-Wave that allow each device to extend the network. He estimates there are already more than a thousand compatible off-the-shelf devices, but he says that if some popular new wireless chipset (or HTTP-style protocol) comes along, “we’ll just throw that in too.”
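The universal-translator idea reduces to a dispatch table: each wireless dialect gets its own parser, and every parsed message comes out in one common shape. The message formats below are invented for illustration; they are not SmartThings' actual wire formats:

```python
def parse_zigbee(raw):
    """Hypothetical ZigBee-style message: 'DEVICE ZONE STATUS VALUE'."""
    device, status = raw.split(" ZONE STATUS ")
    return {"device": device, "status": status, "protocol": "zigbee"}

def parse_zwave(raw):
    """Hypothetical Z-Wave-style message: 'device=state'."""
    device, status = raw.split("=")
    return {"device": device, "status": status, "protocol": "zwave"}

PARSERS = {"zigbee": parse_zigbee, "zwave": parse_zwave}

def translate(protocol, raw):
    """Normalize a message from any dialect; new parsers can be registered later."""
    return PARSERS[protocol](raw)

print(translate("zigbee", "0X9E07 ZONE STATUS 0X0031"))
print(translate("zwave", "foyer_light=on"))
```

Supporting a new chipset, in this design, means nothing more than adding one entry to the table—which is what makes Hawkinson's "we'll just throw that in too" plausible.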
The true genius of SmartThings, though, isn’t in the sensors or the hub but in the system that Hawkinson and his users are building on top of it. Open the SmartThings mobile app and you find its own array of apps inside, a pleasingly designed grid of bubbles that show the status of the people and places and things on your system and the various programs that connect them. Through the SmartThings Store, users and developers can share their simple if-then apps and, in the case of more complex relationships, make money off of apps, just like in the mobile marketplaces.
For example, Hawkinson’s users are already hacking smart thermostats, which draw on sensors and historical usage patterns to help save energy. Essentially these are open source competitors to Nest, the most successful connected home appliance on the market right now. Designed by former Apple engineers, Nest is a proprietary system where all the intelligence is embedded in the thermostat itself. But on the SmartThings platform, a thermostat app can pull in readings from any other device on that platform—motion sensors that might say which room you’re in, presence tags that identify individual family members (with different temperature preferences)—as well as outside data sources like weather or variable power prices.
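A thermostat app of the kind described above might look something like the sketch below. The names, preferences, and setback value are all invented; the point is only to show how a target temperature can be computed from presence tags rather than from a schedule baked into the device:

```python
# Illustrative per-person temperature preferences, keyed by presence tag.
PREFERENCES_C = {"alex": 21, "partner": 23}

def target_temperature(tags_home, away_setback_c=16):
    """Pick a setpoint from whoever's presence tags are in the house."""
    if not tags_home:
        return away_setback_c        # nobody home: fall back to energy-saving mode
    # Average the preferences of everyone currently present.
    prefs = [PREFERENCES_C[p] for p in tags_home if p in PREFERENCES_C]
    return sum(prefs) / len(prefs)

print(target_temperature([]))                   # → 16
print(target_temperature(["alex", "partner"]))  # → 22.0
```

A fuller version could also weigh the motion sensors' notion of which room is occupied, or variable power prices—inputs a sealed, single-device thermostat like Nest cannot see.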
An even more natural category for apps is security. While Hawkinson is away from his house, he receives texts when any door opens, when any motion is detected. When the last person leaves the SmartThings office, it locks itself up, shuts down the lights and thermostat, and activates an alarm system complete with siren, flashing lights, and auto-notifications, all coordinated through the SmartThings system. With so much intelligence built into the structure, why would you ever pay ADT—or one of the other myriad firms that make up that $20 billion market—just to duplicate that effort? For those rare situations when someone needs to come out to the premises, Hawkinson imagines a low-cost security service that will wed his simple sensors and notifications with an on-call platoon of off-duty cops—“an Uber for home protection,” as he puts it.
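The last-person-out routine is, at bottom, a transition on a presence count. Here is a hedged sketch of that logic, with invented device and action names rather than SmartThings' own:

```python
# Actions fired when the office empties out; names are illustrative.
LOCKUP_ACTIONS = ["lock_doors", "lights_off", "thermostat_setback", "arm_alarm"]

def on_presence_change(tags_present, was_occupied):
    """Return the actions to run when the set of presence tags changes."""
    now_occupied = len(tags_present) > 0
    if was_occupied and not now_occupied:
        return LOCKUP_ACTIONS                  # last person just left
    if not was_occupied and now_occupied:
        return ["disarm_alarm", "lights_on"]   # first person just arrived
    return []                                  # no transition: nothing to do

print(on_presence_change([], was_occupied=True))
# → ['lock_doors', 'lights_off', 'thermostat_setback', 'arm_alarm']
```

Everything the alarm company sells—arming, disarming, notification—falls out of this one transition, which is exactly why a hub full of cheap sensors threatens that $20 billion market.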
This, finally, is the Programmable World, the point at which the full power of developers, entrepreneurs, and venture capitalists is brought to bear on the realm of physical objects—improving it, customizing it, and groping toward new business plans for it that we haven’t dreamed of yet. Indeed, it will marshal all the forces that made the Internet so transformational and put them to work on virtually everything around us.
It’s a future where the intelligence once locked in our devices can now flow into the universe of physical objects.
THERE ARE OBVIOUSLY some pitfalls lurking in this future of connected objects. Our fears about malicious hackers preying on our email and bank accounts via the cloud might pale in comparison to how we’ll feel about those same miscreants pwning our garage doors and bathroom light fixtures. The mysterious Stuxnet and Flame exploits have raised the issue of industrial security in the era of connected devices. Vanity Fair recently detailed nightmare scenarios in which hackers could hit connected objects, from our high tech cars (university researchers have figured out how to exploit an OnStar-type system to cause havoc in a vehicle) to our utility “smart meters” (which collect patterns of energy use that can reveal a great deal about our activities at home) to even our pacemakers.
Hawkinson, for his part, sees security as a concern but hardly an existential threat. The traffic between the cloud and the SmartThings Hub is encrypted, making it very difficult for hackers to intercept, let alone to modify for malevolent ends. It might actually be the case that automation, even mediated through the cloud, will make our lives more rather than less secure. As Wired senior writer Mat Honan has documented, our recent hacking epidemic has largely exploited the human interface—the password. We’re always the weak link in online security, and in the Programmable World, our objects will carry out their business without needing us to get involved at all.
A bigger concern, perhaps, is simple privacy. Just because we’ve finally warmed up to oversharing in the virtual world doesn’t mean we’ll be comfortable doing the same in the physical world, as all our interactions with objects capture more and more data about where we are and what we’re doing. Certainly the gradual acceptance of smart toll tags for cars (e.g., E-ZPass) shows that such qualms can be overcome, so long as there’s a demonstrated benefit and a fair assurance of security. In that regard, personalized billboards are arguably a step in the wrong direction, but wireless payments will make users happy; so too will the coffee shop that knows your order and lets you skip the line, or the rental-car seat that adjusts to your preferences before you sit in it. Just as with social networking, the privacy concerns of a sensor-connected world will be fast outweighed by the strange pleasures of residing in it.
No, the main existential threat to the Programmable World is the considerably more mundane issue of power. Every sensor still needs a power source, which in most cases right now means a battery; low-energy protocols allow those batteries to last a long time, even a few years, but eventually they’ll need to be replaced. In a hyperconnected home like Hawkinson’s, that will eventually mean changing scores of batteries every year, and the numbers in a large office complex on the SmartThings system would be even higher. Hawkinson hopes that within a few years we will see the commercial rollout of wireless power, which uses a technology called resonant magnetic coupling to beam power to devices as far as several meters away from a charging station. (MIT recently spun out a company called WiTricity to bring such a system to market; its founders are hopeful that even electric cars could charge up using its technology.)
The idea of animating the inanimate, of compelling the physical world to do our bidding, has been a staple of science fiction for half a century or more. Often we’ve imagined the resulting objects to be perverse in their lack of intelligence, like those remorselessly multiplying brooms conjured up by Mickey Mouse in Fantasia. At other times we’ve feared the perversity that results when our things get too smart, like HAL refusing to open those damn pod-bay doors. In reality, though, just as in our programmable computers, the “intelligence” in our programmable world will never be more or less than the intelligence we can instill into its far-flung moving parts. It’s vanishingly unlikely that we’ll ever have a car like KITT or a house like Tony Stark’s Jarvis, chatting us up in urbane British accents about our built-in weapons systems. But someday soon we’ll have a house that can warn us about a flood or keep an eye on our kids or turn off that stove when we forget—acts of genuine intelligence that will enrich our lives far more than any missile launcher ever could.