Making wearables more useful and smart homes less of a chore

Wearables might be set to get a whole lot more useful in future if research being conducted by Carnegie Mellon University’s Future Interfaces Group is indicative of the direction of travel.

While many companies, big and small, have been jumping into the wearables space in recent years, the use cases for these devices often feel superficial — with fitness perhaps the most compelling scenario at this nascent stage. Yet smartwatches have far richer potential than merely performing a spot of sweat tracking.

The other problem with the current crop of smartwatches is that the experience of using apps on wrist-mounted devices does not always live up to the promise of getting stuff done faster or more efficiently. Just having to load an app on this type of supplementary device can feel like an imposition.

If the primary selling point of a smartwatch really is convenience/glanceability, the wearer does not want to have to squint at lots of tiny icons and manually load data to get the function they need in a given moment. A wearable needs to be a whole lot smarter to make it worth the wearing versus just using a smartphone.

At the same time, other connected devices populating the growing Internet of Things can feel pretty dumb right now — given the interface demands they also place on users. Connected lightbulbs like Philips Hue, for example, require the user to open an app on their phone just to turn a light on or off, or change the colour of the light.

Which is pretty much the opposite of convenient, and why we’ve already seen startups trying to fix the problems IoT devices are creating via sensor-powered automation.

“The fact that I’m sitting in my living room and I have to go into my smartphone and find the right application and then open up the Hue app and then set it to whatever, blue, if that’s the future smart home it’s really dystopian,” argues Chris Harrison, an assistant professor of Human-Computer Interaction at CMU’s School of Computer Science, discussing some of the interface challenges connected device designers are grappling with in an interview with TechCrunch.

But nor would it be good design to put a screen on every connected object in your home. That would be ugly and irritating in equal measure. Really there needs to be a far smarter way for connected devices to make themselves useful. And smartwatches could hold the key to this, reckons Harrison.

A sensing wearable

He describes one project researchers at the lab are working on, called EM-Sense, which could kill two birds with one stone: give smartwatches a killer app by enabling them to act as a shortcut companion app/control interface for other connected devices, and thereby also make IoT devices more useful — given their functionality would be automatically surfaced by the watch.

The EM-Sense prototype smartwatch is able to identify other electronic objects via their electromagnetic signals when paired with human touch. A user only has to pick up/touch or switch on another electronic device for the watch to identify what it is — enabling a related app to be automatically loaded onto their wrist. So the core idea here is to make smartwatches more context aware.

Harrison says one example EM-Sense application the team has put together is a timer for brushing your teeth: when an electric toothbrush is switched on, the wearer’s smartwatch automatically starts a timer app, so they can glance down to see how long they need to keep brushing.

“Importantly it doesn’t require you to modify anything about the object,” he notes of the tech. “This is the really key thing. It works with your refrigerator already. And the way it does this is it takes advantage of a really clever little physical hack — and that is that all of these devices emit small amounts of electromagnetic noise. Anything that uses electricity is like a little miniature radio station.

“And when you touch it, it turns out that you become an extension of it as an antenna. So your refrigerator is basically just a giant antenna. When you touch it, your body becomes a little bit of an antenna as well. And a smartwatch sitting on the skin can actually detect those emissions, and because they are fairly unique among objects it can classify the object the instant that you touch it. And all of the smartness is in the smartwatch; nothing is in the object itself.”
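
To make that concrete, here is a rough sketch (in Python, and purely my own illustration rather than CMU’s code) of what an EM-Sense-style pipeline could look like: sample the EM signal coupled through the wearer’s body, reduce it to a spectral fingerprint, match it against known objects and launch the corresponding watch app. The sample rate, the enrolled fingerprints and the app mapping are all invented for the example.

```python
import numpy as np

# Hypothetical EM-Sense-style sketch: classify an object from the EM noise
# coupled through the wearer's body, then launch a matching watch app.
# The signatures and app mapping below are made up for illustration.

SAMPLE_RATE = 1_000_000  # assumed ADC rate on the watch
N_BINS = 64              # coarse spectral fingerprint size

def em_fingerprint(samples: np.ndarray) -> np.ndarray:
    """Reduce a raw EM capture to a normalized coarse magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(samples))
    bands = np.array_split(spectrum, N_BINS)   # pool into N_BINS bands
    fingerprint = np.array([band.mean() for band in bands])
    return fingerprint / (np.linalg.norm(fingerprint) + 1e-9)

# Pretend we enrolled a few household objects earlier (fingerprints are fake).
KNOWN_OBJECTS = {
    "electric_toothbrush": np.random.RandomState(0).rand(N_BINS),
    "refrigerator":        np.random.RandomState(1).rand(N_BINS),
    "laptop":              np.random.RandomState(2).rand(N_BINS),
}
APP_FOR_OBJECT = {
    "electric_toothbrush": "brushing_timer",
    "refrigerator":        "shopping_list",
    "laptop":              "notification_mirror",
}

def classify_touch(samples: np.ndarray) -> str:
    """Nearest-neighbour match of the live fingerprint against known objects."""
    live = em_fingerprint(samples)
    scores = {name: float(np.dot(live, sig / np.linalg.norm(sig)))
              for name, sig in KNOWN_OBJECTS.items()}
    return max(scores, key=scores.get)

def on_touch_event(samples: np.ndarray) -> None:
    obj = classify_touch(samples)
    print(f"Detected {obj}; launching {APP_FOR_OBJECT[obj]} on the watch")

# Simulated capture: in reality this would come from the watch's EM sensor.
on_touch_event(np.random.RandomState(3).rand(SAMPLE_RATE // 100))
```

The real system presumably does far more signal conditioning and uses a properly trained classifier rather than this hand-rolled nearest-neighbour match, but the shape of the idea is the same: touch, identify, surface the right app.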

While on the one hand it might seem like the EM-Sense project is narrowing the utility of smartwatches — by shifting focus from them as wrist-mounted mobile computers with fully featured apps to zero in on a function more akin to a digital dial/switch — smartwatches arguably sorely need that kind of focus. Utility is what’s lacking thus far.

And when you pair the envisaged ability to smartly control electrical devices with other extant capabilities of smartwatches, such as fitness/health tracking and notification filtering, the whole wearable proposition starts to feel rather more substantial.

And if wearables can become the lightweight and responsive remote control for the future smart home there’s going to be far more reason to strap one on every day.

“It fails basically if you have to ask your smartwatch a question. The smartwatch is glanceability,” argues Harrison. “Smartwatches will fail if they are not smart enough to know what I need to know in the moment.”

His research group also recently detailed another project aimed at expanding the utility of smartwatches in a different way: by increasing the interaction surface area via a second wearable (a ring), allowing the watch to track finger gestures and compute gesture inputs on the hands, arm and even in the air. Although convincing people they need two wearables seems a bit of a stretch to me.

A less demanding smart home 

To return to the smart home, another barrier to adoption that the CMU researchers are interested in unpicking is the too-many-sensors problem — i.e. the need to physically attach sensors to all the items you want to bring online, which Harrison argues simply does not scale in terms of user experience or cost.

“The ‘smart home’ notion right now is you stick one sensor on one object. So if I want to have a smart door I stick a sensor on it, if I want to have a smart window I stick a sensor on it, if I have an old coffee machine that I want to make smart I stick a sensor to it,” he tells TechCrunch. “That world I think is going to be very intensive labor to be replacing batteries, and it’s also very expensive.

“Because even if you make those sensors $10 or $20 if you want to have dozens of these in your house to make it a smart house, I just don’t think that’s going to happen for quite some time because just the economies are not going to work in its favor.”

One possible fix for this that the researchers have been investigating is to reduce the number of sensors distributed around a home in order to bring its various components online, and instead concentrate multiple sensors into one or two sensor-packed hubs, combining those with machine learning algorithms that are trained to recognize the various signatures of your domestic routines — whether it’s the refrigerator running normally or the garage door opening and closing.

Harrison calls these “signal omnipotent” sensors and says the idea is you’d only need one or two of these hubs plugged into a power outlet in your home. Then, once they’d been trained on the day-to-day hums and pings of your domestic bliss, they’d be able to understand what’s going on, identify changes and serve up useful intel.

“We’re thinking that we’d only need three or four sensors in the typical house, and they don’t need to be on the object — they can just be plugged into a power outlet somewhere. And you can immediately ask hundreds of questions and try to attack the smart home problem but do it in a minimally intrusive way,” he says.

“It’s not that it’s stuck on the refrigerator, it might be in the room above the refrigerator. But for whatever reason there’s basically — let’s say — mechanical vibrations that propagate through the structure and it oscillates at 5x per second and it’s very indicative of the air compressor in your refrigerator, for example.”
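
As a thought experiment, the sort of pipeline Harrison describes might look something like the following sketch: extract coarse spectral features from short sensor windows, train a classifier on labelled examples of household events, then let the hub report what it hears. Everything here (the event labels, the features, the synthetic training data and the use of scikit-learn) is an assumption for illustration, not the lab’s actual code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical sketch of a "signal omnipotent" hub: one plugged-in sensor box
# learns the vibration/acoustic signatures of household events from labelled
# examples, then reports what it hears. Labels, features and data are invented.

rng = np.random.default_rng(42)

def feature_vector(window: np.ndarray) -> np.ndarray:
    """Simple spectral features for a short sensor window."""
    spectrum = np.abs(np.fft.rfft(window))
    bands = np.array_split(spectrum, 16)
    return np.array([band.mean() for band in bands])

# Fake training data standing in for a labelling phase in the real home.
EVENTS = ["fridge_compressor", "garage_door", "microwave", "quiet"]
X_train = np.array([feature_vector(rng.normal(size=256) * (i + 1))
                    for i, _ in enumerate(EVENTS) for _ in range(50)])
y_train = np.array([label for label in EVENTS for _ in range(50)])

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# At runtime the hub classifies each incoming window and answers questions
# like "did the garage door open in the last hour?" from its event log.
new_window = rng.normal(size=256) * 2
print("Hub heard:", clf.predict([feature_vector(new_window)])[0])
```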

This approach to spreading connected intelligence around a home would also not require the person to make a big-bang spend on a mass, simultaneous upgrade of their in-home electronics. That is never going to happen, and it is one of the most obvious reasons why smart home devices haven’t been generating much mainstream consumer momentum thus far.

“You need a way for people to ask interesting questions,” says Harrison, boiling down the smart home to an appealing consumer essence. “Is the car in the garage? Are my kids home from school? Is the dog bowl out of water? Etc etc. And you just can’t get there if people have to plunk down $50,000. What you have to do is to deliver it incrementally, for $20 at a time. And fill it in slowly. And that’s what we’re trying to attack. We don’t want to rely on anything.”

More than multi-touch

Another interesting project the CMU researchers are working on looks at ways to extend the power of mobile computing by allowing touchscreen panels to detect far more nuanced interactions than just finger taps and presses.

Harrison calls this project ‘rich touch’, and while technologies such as Apple’s 3D Touch are arguably already moving in this direction by incorporating pressure sensors into screens to distinguish between a light touch and a sustained push, the researchers are aiming to go further: to be able, for example, to recover an entire hand position based on just a fingertip touchscreen interaction. Harrison dubs this a “post-multitouch era”.

“We have a series of projects that explore what would be those other dimensions of touch that you might layer on to a touchscreen experience? So not just two fingers does this and three fingers does that… The most recent one is a touchscreen that can deduce the angle that your finger is approaching the screen,” he says.

“It’s stock hardware. It’s a stock Android phone. No modifications. That with some machine learning AI can actually deduce the angle that your finger is coming at the screen. Angle is a critical feature to know — the 3D angle — because that helps you recover the actual hand shape/the hand pose. As opposed to just boiling down a finger touch to only a 2D co-ordinate.”
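
A hedged sketch of how that kind of ‘rich touch’ estimation might be wired up: treat the small capacitance patch around a touch as an image and regress the finger’s pitch and yaw from it with a standard machine learning model. The synthetic touch images and the k-nearest-neighbours regressor below are my own stand-ins, not details from the CMU work.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical sketch of finger-angle estimation: regress the 3D approach
# angle of a finger (pitch, yaw in degrees) from the small capacitance image
# around a touch point. Training data here is synthetic; a real system would
# learn from logged touches with ground-truth angles.

rng = np.random.default_rng(7)

def synth_touch_image(pitch: float, yaw: float) -> np.ndarray:
    """Fake 8x8 capacitance patch: a blob whose offset tracks the angle."""
    ys, xs = np.mgrid[0:8, 0:8]
    cx, cy = 3.5 + yaw / 45.0, 3.5 + pitch / 45.0
    blob = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / 4.0)
    return (blob + rng.normal(scale=0.02, size=blob.shape)).ravel()

# Build a small synthetic training set of (touch image -> angle) pairs.
angles = rng.uniform(low=[-60, -60], high=[60, 60], size=(500, 2))
images = np.array([synth_touch_image(p, y) for p, y in angles])

model = KNeighborsRegressor(n_neighbors=5).fit(images, angles)

# Estimate the angle of a new touch.
test = synth_touch_image(30.0, -15.0)
pitch_hat, yaw_hat = model.predict([test])[0]
print(f"estimated pitch {pitch_hat:.1f}, yaw {yaw_hat:.1f}")
```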

The question then would be what app developers would do with the additional information they could glean. Apple’s 3D Touch tech has not (at least yet) led to huge shifts in design thinking. And anything richer is necessarily more complex — which poses challenges for creating intuitive interfaces.

But, at the same time, if Snapchat could create so much mileage out of asking people to hold a finger down on the screen to view a self-destructing image, who’s to say what potential might lurk in being able to use a whole hand as an input signal? Certainly there would be more scope for developers to create new interaction styles.

Future projections

Harrison is also a believer in the notion that computing will become far more embedded in the environments where we work, live and play in future — so less centered on these screens.

And again, rather than necessitating that a ‘smart home’ be peppered with touchscreens so people can interact with all their connected devices, the vision is that certain devices could have a more dynamic interface projected directly onto a nearby wall or other surface.

Here Harrison points to a CMU project called the Info Bulb, which plays around with this idea by repurposing a lightbulb as an Android-based computer. But instead of having a touchscreen for interactions, the device projects data into the surrounding environs, using an embedded projector and gesture-tracking camera to detect when people are tapping on the projected pixels.
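
One practical piece such a device needs is a mapping from what its camera sees to the pixels it projects, so a fingertip tap on the countertop can be matched to the projected widget underneath it. The sketch below shows one conventional way to do that with a homography (using OpenCV); the calibration points and widget layout are invented for illustration, and this is not the Info Bulb’s actual implementation.

```python
import numpy as np
import cv2

# Hypothetical sketch of the interaction layer an Info Bulb-style device needs:
# map fingertip positions seen by the camera into the projected UI's pixel
# space via a homography, then hit-test against projected widgets.

# Four corners of the projected image as seen by the camera (calibration).
camera_corners = np.float32([[102, 88], [530, 95], [518, 410], [95, 402]])
# The same corners in projector/UI coordinates (a 1280x720 projected surface).
ui_corners = np.float32([[0, 0], [1280, 0], [1280, 720], [0, 720]])

H, _ = cv2.findHomography(camera_corners, ui_corners)

WIDGETS = {
    "recipe_helper": (100, 100, 400, 300),     # x0, y0, x1, y1 in UI pixels
    "family_calendar": (800, 100, 1180, 300),
}

def handle_tap(camera_xy: tuple[float, float]) -> None:
    """Convert a camera-space fingertip tap into a widget event."""
    pt = np.float32([[camera_xy]])                      # shape (1, 1, 2)
    ux, uy = cv2.perspectiveTransform(pt, H)[0, 0]
    for name, (x0, y0, x1, y1) in WIDGETS.items():
        if x0 <= ux <= x1 and y0 <= uy <= y1:
            print(f"tap at UI ({ux:.0f}, {uy:.0f}) -> open {name}")
            return
    print(f"tap at UI ({ux:.0f}, {uy:.0f}) hit no widget")

# A fingertip the gesture-tracking camera detected on the countertop.
handle_tap((180.0, 140.0))
```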

He gave a talk about this project at the World Economic Forum earlier this year.

“I think it’s going to be the new desktop replacement,” he tells TechCrunch. “So instead of a desktop metaphor on our desktop computer it will literally be your desktop.

“You put it into your office desk light or your recessed light in your kitchen and you make certain key areas in your home extended and app developers let loose on this platform. So let’s say you had an Info Bulb above your kitchen countertop and you could download apps for that countertop. What kind of things would people make to make your kitchen experience better? Could you run YouTube? Could you have your family calendar? Could you get recipe helpers and so on? And the same for the light above your desk.”

Of course we’ve seen various projection-based and gesture interface projects over the years. The latter tech has also been commercialized by, for example, Microsoft with its Kinect gaming peripheral and Leap Motion with its gesture controller. But it’s fair to say that uptake of these interfaces has lagged behind more traditional options, be they joysticks or touchscreens, so gesture tech feels more obviously suited to specialized niches (such as VR) at this stage.

And it also remains to be seen whether projector-style interfaces can make a leap out of the lab to grab mainstream consumer interest in future — as the Info Bulb project envisages.

“No one of these projects is the magic bullet,” concedes Harrison. “They’re trying to explore some of these richer [interaction] frontiers to envision what it would be like if you had these technologies. A lot of things we do have a new technology component but then we use that as a vehicle to explore what these different interactions look like.”

Which piece of research is he most excited about, in terms of tangible potential? He zooms out at this point, moving away from interface tech to an application of AI for identifying what’s going on in video streams, which he says could have very big implications for local governments and city authorities wanting to improve their responsiveness to real-time data on a budget. In other words, it’s possible fuel for powering the oft-discussed ‘smart city’. He also thinks the system could prove popular with businesses, given the low cost involved in building custom sensing systems that are ultimately driven by AI.

This project is called Zensors, and it starts out requiring crowdsourced help from humans, who are sent stills from a video feed and asked to answer a specific question about what can be seen in the shot. The humans act as mechanical turks, training the algorithms on whatever custom task the person setting up the system requires. But all the while the machine learning is running in the background, learning and getting better — and as soon as it becomes as good as the humans, the system is switched over to the now trained algorithmic eye, with humans left to do only periodic (sanity) checks.
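
In outline, that handoff could be structured like the sketch below: keep paying humans to label incoming stills, keep retraining a model on those labels, and switch over once the model matches the crowd on held-out frames. Every component here (the simulated crowd, the toy model, the parity threshold) is a stand-in, not the actual Zensors implementation.

```python
import random

# Hypothetical sketch of a Zensors-style handoff: crowd workers answer a
# question about video stills until a model trained on their labels matches
# their accuracy on held-out frames; then the model takes over and humans
# only spot-check. Everything below is a simulated stand-in.

QUESTION = "How many cars are in the parking lot?"
PARITY_THRESHOLD = 0.95
random.seed(0)

def ground_truth(frame: int) -> int:
    """Pretend car count for a frame (slowly varying over the day)."""
    return (frame // 25) % 5

def ask_crowd(frame: int) -> int:
    """Stand-in for sending the still to crowd workers; humans are ~98% right."""
    truth = ground_truth(frame)
    return truth if random.random() < 0.98 else truth + 1

class NearestFrameModel:
    """Toy learner: answer with the label of the closest frame seen so far."""
    def __init__(self):
        self.labels: dict[int, int] = {}
    def train(self, frame: int, label: int) -> None:
        self.labels[frame] = label
    def predict(self, frame: int) -> int:
        nearest = min(self.labels, key=lambda f: abs(f - frame))
        return self.labels[nearest]

model = NearestFrameModel()
validation = list(range(1, 500, 7))   # held-out stills for the parity check

for frame in range(0, 500, 2):        # stills arriving from the camera
    model.train(frame, ask_crowd(frame))           # paid human label
    acc = sum(model.predict(v) == ground_truth(v) for v in validation) / len(validation)
    if acc >= PARITY_THRESHOLD:
        print(f"Model matched the crowd after {len(model.labels)} labels (acc {acc:.2f});")
        print("switching over; humans now only do periodic sanity checks.")
        break
```

The economics Harrison describes fall out of that structure: human labels are only paid for up front, and once the model takes over the marginal cost of each additional answer is close to zero.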

“You can ask yes, no, count, multiple choice and also scales,” says Harrison, explaining what Zensors is good at. “So it could be: how many cars are in the parking lot? It could be: is this business open or closed? It could be: what type of food is on the countertop? The grad students did this. Grad students love free food, so they had a sensor running, is it pizza, is it Indian, is it Chinese, is it bagels, is it cake?”

What makes him so excited about this tech is the low cost of implementing the system. He explains that the lab set up a Zensor to watch over a local bus stop, recording when the bus arrived and comparing that data with the city’s bus timetables to see whether the buses were running on schedule.

“We gave that exact same data-set to workers on oDesk [now called Upwork] – a contracting platform – and we asked them how much would it cost to build a computer vision system that worked at X reliability and recognized buses… It’s not a hard computer vision problem. The average quote we got back was around $3,000. To build that one system. In contrast the Zensors bus classifier, we trained that for around $14. And it just ran. It was done,” he notes.

Of course Zensors aren’t omniscient. There are plenty of questions that will fox the machine. It’s not about to replace human agency entirely, quite yet.

“It’s good for really simple questions like counting or is this business open or closed? So the lights are on and the doors open. Things that are really readily recognizable. But we had a sensor running in a food court and we asked what are people doing? Are they working? Are they talking? Socializing and so on? Humans will pick up on very small nuances like posture and the presence of things like laptops and stuff. Our computer vision was not nearly good enough to pick up those sorts of things.”

“I think it’s a really compelling project,” he adds. “It’s not there yet — it still probably requires another year or two yet before we can get it to be commercially viable. But probably, for a brief period of time, the street in front of our lab probably was the smartest street in the world.”

Harrison says most of the projects the lab works on could be commercialized in a relatively short timeframe — of around two years or more — if a company decided it wanted to try to bring one of the ideas to market.

To my eye, there certainly seems to be mileage in the notion of using a clever engineering hack to make wearables smarter, faster and more context aware and put some more clear blue water between their app experience and the one smartphone users get. Less information that’s more relevant is the clear goal on the wrist — it’s how to get there that’s the challenge.

What about — zooming out further still — the question of technology destroying human jobs? Does Harrison believe humanity’s employment prospects are being eroded by ever smarter technologies, such as a deep learning computer vision system that can quickly achieve parity with its human trainers? On this point he is unsurprisingly a techno-optimist.

“I think there will be these mixtures between crowd and computer systems,” he says. “Even as deep learning gets better that initial information that trains the deep learning is really useful and humans have an amazing eye for certain things. We are information processing machines that are really, really good.

“The jobs that computers are replacing are really menial. Having someone stand in a supermarket for eight hours per day counting the average time people look at a particular cereal is a job worth replacing in my opinion. So the computer is liberating people from the really skill-less and unfulfilling jobs. In the same way that the loom, the mechanical loom, replaced people hand-weaving for 100 hours a week in backbreaking labour. And then it got cheaper, so people could buy better clothes.

“So I don’t subscribe to the belief that [deep learning] technology will take jobs permanently and will reduce the human condition. I think it has great potential, like most technologies that have come before it, to improve people’s lives.”
