Overall, I thought the technology was underwhelming, especially compared to the hype in the media and Google's advertising. The screen is small, the battery is terrible, and the processor is barely fast enough to do anything on the Augmented Reality front. I fully expected it to directly connect my brain to the Google-plex and turn me into some sort of man-machine hybrid that never forgot a face, with the entirety of Wikipedia ready to drop in on any conversation.
What I got was a gadget that showed me text messages and let me take pictures without me having to fish out my phone--which is actually pretty great, but not change-the-world-worthy.
(Not to mention most of the demo videos are fake--there's no way to capture the screen display at a decent frame rate, and definitely not the display and camera at the same time, so be wary!)
I think people (myself included) are misjudging Glass, comparing it against some imagined set of capabilities, and maybe giving it a little more credit--and fear--than it deserves. Glass is not innovative because it puts a screen on your face. That's been done for 30+ years in industry, the military, and academia.
It also isn't going to single-handedly ruin privacy in bars or other public places. You can buy hidden "button" cameras that record for hours (Glass's battery dies after very little recording), and because hard-drive space and cameras are so cheap, stores can put hundreds of security cameras everywhere. And people are already flying drones with hi-def cameras.
I would argue that, on the privacy front, at least Glass is up-front about being a camera! Maybe it could use a "recording" LED to be even more transparent, but privacy is already being encroached upon--and covertly so--with or without Glass.
I think the main innovation is that Google squeezed a small computer, camera and screen into something that's stylish enough to be worn by real people in their daily lives. There are plenty of $5k+ heads-up-displays that are more capable than Glass for specific applications.
Motorola HC1 heads-up display for industrial applications
My initial point is just that we should be judging Glass as a consumer gadget, not as a technical innovation on the camera / heads-up-display front. The rest of this review will go over a bit more of the reality of Glass's current capabilities from the perspective of a developer. I hope to offer a perspective that's a little more aware of the technological landscape than the typical Glasshole :) (Was that a Glasshole thing to say??)
There's No Way to Surreptitiously Read the Glass Screen at Dinner
If you've ever had dinner with someone at a bar while they're watching a TV behind you, that's what it feels like to talk to someone while they're reading something on the Glass screen. The idea that you could wear Glass and be blissfully and secretly reading email during a boring dinner--or any conversation--is about as plausible as doing it with a standard smartphone. The main way to get input into Glass is by speaking, so there's also no way to Google someone mid-conversation other than saying, "Ok Glass, Google Jane Doe."
Not the Augmented Reality Experience You're Thinking of
When I think of Augmented Reality, I think of X-Ray vision of the electrical lines behind a wall, or real-time labeling of people and things, Terminator Vision style.
Terminator Vision - Realtime object identification
X-Ray view of internal parts for an industrial pump.
Facial recognition (banned by Google!): what we expect from Google Glass
The screen is quite a bit smaller than most people think. Here's the same screen shown in an approximation of my field of view.
Glass screen size relative to your field of view. Hold your hand flat about a foot from your face, and that's about the size.
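As a rough sanity check on that hand comparison (these are back-of-the-envelope numbers of my own, apart from Google's oft-quoted description of the display as "a 25 inch high definition screen from eight feet away"), the angular sizes come out in the same ballpark:

```latex
% Hand ~10 cm wide held ~30 cm from the eye (my rough assumptions):
\theta_{\text{hand}} \approx 2\arctan\!\left(\tfrac{5\,\mathrm{cm}}{30\,\mathrm{cm}}\right) \approx 19^{\circ}
% Google's "25-inch HD screen from 8 feet" description (~55 cm wide at ~244 cm):
\theta_{\text{Glass}} \approx 2\arctan\!\left(\tfrac{27.5\,\mathrm{cm}}{244\,\mathrm{cm}}\right) \approx 13^{\circ}
```

Either way, it's a small window in the corner of your vision, not a view-filling overlay.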
But the real-time translation demo (World Lens running on Glass) feels more like a gimmick to me than a real use case. It'd be just as effective to take a single picture and get the translated text on your screen, rather than having to hold your head perfectly steady while it tries to translate frames in real time. You're not looking through the Glass screen at the sign; you're looking at a small video monitor and pointing your head in the direction of the sign.
One benefit of Glass's small screen is that it avoids a lot of the subtle problems that other glasses with larger screens are facing. When a screen starts taking up a significant portion of your view, like the Oculus Rift, any delay between what your brain feels and what your eyes see (when you turn your head, for instance) makes many people nauseous. In the case of a see-through screen, it's also hard to line up the real world with the display, and any lag is incredibly apparent.
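To put rough numbers on why lag is so apparent, here's a quick back-of-the-envelope calculation; the head-turn speed and latency are illustrative guesses of mine, not measurements:

```latex
% Angular error = head angular velocity x end-to-end display latency
\Delta\theta = \omega \cdot t_{\text{lag}} \approx 100\,^{\circ}\!/\mathrm{s} \times 0.05\,\mathrm{s} = 5^{\circ}
```

Five degrees is a very visible offset for an overlay that's supposed to stay pinned to the real world, which is why see-through AR displays are so sensitive to latency.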
Glass's see-through screen is only see-through to avoid blocking your vision when the screen isn't on. You never look at a mix of the real world and the screen, like the annotated dating app above. Even in the World Lens demo, Glass is replaying a video of the camera on the screen.
A final note: having a transparent screen means it doesn't block your peripheral view most of the time, but it also means you can't read the screen against anything too bright. Against a white wall indoors it was still legible, but outside it was impossible to read. And while running outside, not only is it too bright out to read the screen, it's also bouncing around too much to focus on.
Glass Wearers Are Not Recording Everything (Not Enough Battery!)
While Glass is capable of some impressive things, like the real-time translation shown above and face detection, the battery only lasts about 15 minutes when doing intensive work, and Google has said they won't publish any apps that do facial recognition. So everyone's fear that they're being recorded or identified by the wearer is not yet an issue, for battery and policy reasons.
Hands-Free Applications Are the Killer Apps for Glass (Too Bad Voice Recognition Is Still Crippled!)
We developed a prototype industrial app for factory workers (still unreleased and private) that showed them step-by-step pictures and videos for an assembly process--all hands free. Using voice to navigate is something I think the public and most developers assumed would be one of the core features of Glass open to developers, but it's sadly (as far as I know) still crippled, restricted to the main menu and other cumbersome, internet-required interfaces. We had to resort to head tilts to navigate. This looked a little ridiculous, but actually worked very well, and might be necessary regardless, depending on the factory noise level.
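For the curious, here's roughly how head-tilt navigation can be wired up with the standard Android sensor APIs (GDK apps on Glass are just Android apps). This is a sketch of the approach, not our actual app code: the StepNavigator interface, the threshold, and the debounce value are all made up for illustration.

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

/** Hypothetical interface the app would implement to move through instruction cards. */
interface StepNavigator {
    void nextStep();
    void previousStep();
}

/** Sketch of head-tilt navigation: roll your head one way to advance, the other way to go back. */
public class TiltNavigationListener implements SensorEventListener {

    private static final float TILT_THRESHOLD_RAD = 0.35f; // ~20 degrees of head roll (illustrative)
    private static final long DEBOUNCE_MS = 750;           // ignore rapid repeat triggers

    private final float[] rotationMatrix = new float[9];
    private final float[] orientation = new float[3];      // azimuth, pitch, roll (radians)
    private final StepNavigator navigator;
    private long lastTriggerMs = 0;

    public TiltNavigationListener(StepNavigator navigator) {
        this.navigator = navigator;
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ROTATION_VECTOR) {
            return;
        }
        // Convert the rotation vector into azimuth/pitch/roll angles.
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
        SensorManager.getOrientation(rotationMatrix, orientation);
        float roll = orientation[2];

        long now = System.currentTimeMillis();
        if (now - lastTriggerMs < DEBOUNCE_MS) {
            return;
        }
        if (roll > TILT_THRESHOLD_RAD) {         // tilt one way -> next instruction
            navigator.nextStep();
            lastTriggerMs = now;
        } else if (roll < -TILT_THRESHOLD_RAD) { // tilt the other way -> previous instruction
            navigator.previousStep();
            lastTriggerMs = now;
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Not needed for this sketch.
    }
}
```

You'd register the listener in your Activity's onResume() via SensorManager.registerListener() with the TYPE_ROTATION_VECTOR sensor (and unregister in onPause()). The debounce matters in practice: without something like it, a single deliberate tilt, or just walking around the factory floor, can fire several navigation events in a row.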
But despite this minor hiccup, hands-free applications are where Glass is at its best. Recording cooking instructions (still no good app for this, though!), following assembly instructions, taking a picture while holding your dog or baby with both hands--this is where Glass shines. I loved being able to take pictures of some weird shit on GA highways when I normally couldn't get my phone out while driving.
Typical weird shit caught on GA highways--nice to have hands-free Glass at the ready!
Taking a picture of tangled dog leashes while holding two dogs--hands-free camera required!
One caveat is that sometimes you want to use your hands--like if you're on public transit, in the grocery store, or anywhere else you don't want to dictate an email. Google went to great effort to fit everything on your head with no wires or touch pads hanging off, but this has its costs in usability. Both the Meta Pro and the Moverio BT-200 have wired touch pads / battery packs that address this usability issue and also add 4-8 hours of battery life, compared to Glass's 15 minutes for intensive apps.
Meat Controllers and Conversation Bootstrappers
While I was talking about Augmented Reality on the Amp Hour podcast, the host Chris Gammell used the phrase "Meat Controller" to describe a human worker enhanced by technology. For all the talk about technology (robots in particular) taking away jobs, technology like Google Glass could actually create some jobs, giving unskilled people heads-up instructions either from a trouble-shooting application or piped in from an expert. This is already the case in the nuclear repair industry, where experienced workers reach their exposure limit but can pass on their knowledge remotely to others.
Unskilled worker + Google Glass = Meat Controller
And a final comment about Google's decision to ban apps that do facial recognition: just to be clear, there are already apps out there that will do facial recognition--this is not a technical limitation. The algorithms already work. The only limitation is feeding the algorithms data, which, for many of us, could probably be scoured from the internet.
One expectation we have when seeing someone wearing Google Glass is that they're googling us while we're first talking to them. I've explained that this is NOT currently happening--recognition apps are banned, they'd have to say, "Ok Glass, google...", and there's no way to focus on the screen and a person at the same time--but I don't think a future where this happens would be all that bad. What if you could post a favorites list of conversation topics to the cloud, and when meeting someone, they immediately saw those topics? If you both saw right away that you were interested in the same obscure topic, you could get right into it, rather than potentially missing a connection while wading through the typical "Where are you from?" introductions.