WARNING: This is one of those posts I occasionally write to organize my thoughts and test some ideas — it won’t hold together that well and feedback is what I need most, so please comment if you’re interested in the topics below.
Q: What do smartphones, robotics, mobile health, quantified self, and the singularity all have in common?
A: When the “mobile phone” becomes a digital proxy for human identity — the filter through which we perceive the world and the intimate facilitator of an ever-increasing share of our intellectual and physical activity — the barrier between human and machine shrinks dramatically.
A corollary: the more powerful, portable and personalized these machines become, the more we will employ them as interfaces to every other machine we interact with — including our toys, our cars, our televisions, and ourselves.
Smartphones are the Singularity on little cat feet
Since the release of the first iPhone in 2007, this future has been visible, and the competitive energy introduced by Google’s Android project has only accelerated our progress in this direction.
In 2010, 19% of the 1.6 billion “mobile phones” sold around the world were smartphones — personal mobile computing platforms loaded with sensors and pre-configured to run any of the more than 1 million apps currently available. As technology inevitably does, smartphones keep getting both smarter and cheaper (Huawei is now offering an unlocked Android smartphone in Africa for $80), turning a first-world luxury into a global commonplace and further accelerating the mobile innovation ecosystem.
Early attempts to grasp the envisioned future of technology-enhanced human experience — “augmented reality” apps like Google Goggles and Layar, “personal instrumentation” offerings like Fitbit and Wakemate — are faintly ridiculous to “normals” and interesting only to early adopters and future-freaks (myself included) living inside the technology bubble.
But when a leading innovator — and perfectionist brand — like Apple decides to include mobile, personalized artificial intelligence in their flagship product, you know something’s up.
It’s not yet commonplace to ask a machine a question and expect a sensible answer — particularly not an answer that’s conditioned on your current location, past history of questions asked and current appointment calendar (not to mention real-time access to a wide array of cloud-based datastores) — and it will likely take time for the broad swath of consumer culture to embrace the behavior as “normal”.
If any company has demonstrated an ability to shift both behavior and culture it’s Apple, so this will be interesting to watch.
Smartphones will “behead” most consumer electronics
The Unique Device Identifier (UDID) assigned to each smartphone handset is a key value that unifies every aspect of our lives — our social graph, how we pass through space and time, and the content, offers, media + entertainment we consume along the way. The more this data is used to enhance and personalize our mobile experience — as Siri promises to do for iPhone users — the more tasks we will assign to our little digital helpers.
It’s not a wild leap to see that every waking moment of our day (and, via apps like Wakemate, our sleeping hours as well) will ultimately produce a digital crumbtrail — bound to our UDID — rich in meta-data and ready to be converted into labor-saving rule-sets and shared back with us (and others) as insight into how we’re living and how that might be improved.
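As a thought experiment, one entry in such a crumbtrail might look like the sketch below. Every field name and the `derive_rules` helper are invented for illustration (no real platform exposes this API); the point is just that device-bound, metadata-rich events can be mechanically converted into the kind of “labor-saving rule-sets” described above.

```python
from dataclasses import dataclass, field
import time

@dataclass
class CrumbtrailEvent:
    """One hypothetical entry in a device-bound activity log."""
    udid: str       # the device identifier the event is bound to
    timestamp: float  # when the event occurred (epoch seconds)
    kind: str       # e.g. "location", "sleep", "purchase"
    payload: dict = field(default_factory=dict)  # event-specific metadata

def derive_rules(events):
    """Toy conversion of a crumbtrail into a 'labor-saving' suggestion:
    if most coffee purchases happen before 9am, propose a morning reminder.
    Uses UTC (time.gmtime) so the example is deterministic."""
    coffee = [e for e in events
              if e.kind == "purchase" and e.payload.get("item") == "coffee"]
    morning = [e for e in coffee if time.gmtime(e.timestamp).tm_hour < 9]
    if coffee and len(morning) / len(coffee) > 0.5:
        return ["Most coffee purchases happen before 9am — set a morning reminder?"]
    return []
```

A real system would of course mine far richer patterns, but even this toy shows the loop the paragraph describes: raw events in, behavioral insight back out.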
As soon as one machine in our lives takes on this central role — and especially if it endears itself to us with ever-more-clever personalized solutions to our most-common needs — we won’t want to split our time across multiple machines. And we won’t tolerate the need to “train up” a new device when we already have one that gets it.
We’ll want every other machine in our lives to automatically detect the presence of our “master” device and respond as a “slave” — performing its specific function under the direction of the primary device.
This has interesting implications for the full spectrum of consumer electronics makers — especially those that make their margin on hardware sales. Why should anyone have to program their DVR, or the treadmill at the gym, or the navigation system in their car, when their smartphone already has the answers?
Machines that aspire to be “smart” will want to get “dumber” to stay in the game, and machines that “have to” be dumb for cost reasons now have an opportunity to be a whole lot smarter, by borrowing the brain of the master device.
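The master/slave arrangement sketched above can be made concrete with a toy protocol. All of the class names and messages here are invented for illustration — a real pairing would ride on something like Bluetooth or Wi-Fi discovery — but the division of labor is the one described: the “dumb” appliance exposes bare functions, and the master device supplies all the stored preferences and logic.

```python
class SlaveDevice:
    """A 'dumb' appliance: exposes its functions but carries no user logic."""
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = capabilities  # command name -> callable

    def execute(self, command, *args):
        if command not in self.capabilities:
            raise ValueError(f"{self.name} cannot {command}")
        return self.capabilities[command](*args)

class MasterDevice:
    """The smartphone: holds the user's preferences and drives the slaves."""
    def __init__(self, udid, preferences):
        self.udid = udid
        self.preferences = preferences  # (device, command) -> stored args
        self.paired = {}

    def detect(self, slave):
        # Stand-in for a wireless discovery handshake.
        self.paired[slave.name] = slave

    def run(self, device_name, command):
        # Push the user's stored preference down to the dumb device.
        args = self.preferences.get((device_name, command), ())
        return self.paired[device_name].execute(command, *args)
```

Under this split, the treadmill or DVR never needs its own interface for preferences — it just needs enough radio and firmware to answer the master’s commands.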
Humans will use machines to become better humans
Even if the Singularity’s not your bag, there’s still plenty to like about the creeping ubiquity of our smartphone overlords. Business wonks are fond of the saying (apparently misattributed to Dr. Deming): “you can’t manage what you don’t measure”. And there’s nothing that humans like to think about, talk about and “manage” (or try to manage) more than themselves.
The “Quantified Self” movement is currently a geeky fringe culture of tech- and data-enthusiasts who are willing to invest significant effort — and adopt bleeding-edge tools and techniques — to understand themselves better as people and organisms.
With mass adoption of smartphones, a huge slice of the world’s population is voluntarily instrumenting itself with a sensor + transceiver package that effortlessly gathers data about its owner’s activities, movements and need states.
Most of those people care deeply about their own well-being. How many of them will be willing to download an app or opt-in to a service that interprets their activity data and feeds it back to them in the form of insights and constructive suggestions for self-improvement?
(For a mind-bending conversation about the technology and business implications of this trend, check in with Buster and Jen at Habit Labs).
Am I nuts? Not thinking big enough? Let’s discuss!