Calling the end of innovation in mobile computers

People are lining up to call the market for mobile phones. Analysts and amateurs alike are connecting points on charts and predicting with confidence the future of mobile platforms. Consensus is forming that there is no future but a quiescent state. By the acclamation of pundits, the survivors are declared to be iOS and Android. They are also predictably arranged in a way similar to OS X and Windows. End of story.

Except for one thing.

Three and a half years ago neither of these platforms existed. In fact, it was only two and a half years ago, in mid-2008, that one of the finalists even became a platform with the launch of an app store. The other “winner” only launched in a handset later that year and had no significant volumes until a year ago. In other words, these suddenly predictable platforms have been in existence for less than the life span of one device that runs them.

To point out how extraordinary this claim is, if we look back to the history of computing, could we have declared the PC industry’s future cut and dried in 1978, two years after the founding of Apple? What about 1983, two years after the emergence of the IBM PC? What about 1986, two years after the Mac? Or maybe 1990, after Windows was born? Maybe the writing was on the wall in 1997, two years after Windows 95 caught up with the Mac’s UX. That would be 21 years after Apple’s founding. Some would argue that even today the story of desktop or portable computing is far from over.

Sure, times have changed. Things move much faster now. But markets are also much bigger and take a long time to penetrate. Mobile channels are bounded by provincialism and parochialism.

What’s more, it’s not like there haven’t been platforms before these two upstarts came along. I’ve seen 9 mobile platforms in 9 years (PalmOS, Windows Mobile, Symbian, Java, WebOS, Windows Phone, iOS, Android, RIM Blackberry). New platforms are still emerging and evolving (e.g. MeeGo, Bada, LiMo, QNX and Tapas).

So the existential question is this: If you were to draw a time line, would a mark in early 2011 signify the end of mobile platform history? The point in time when unforeseeable change will cease to happen?

Let me offer a hint as to why not: every major platform battle has been sparked by an innovation built around a new input method. The keyboard integrated into a personal computer in the 1970s, the mouse in the 1980s, the stylus in the 1990s and the fingertip in the 2000s. Each of these input methods created new platforms (DOS/Apple ][, MacOS/Windows, PalmOS/Windows CE, iOS/Android), new ecosystems, new industries, new competitors and new platform battles designed to enrich forecasting pundits. I won’t even try to go back to the changes in the half century of computing prior to the PC.

So here's my challenge to the prognosticators: If you are willing to draw a line in the calendar now and say that no new input methods will ever emerge in mobile computing after this point in time, I'll buy the thesis that it's over and we can count our chips.

If you think that out of the hundreds of user experience patents being filed, and hundreds of prototypes being slaved over in labs throughout the world, not a single new product will emerge with a new way of interacting with a computing device, then it’s time to move on to something other than what can be declared a commodity market.

But if you believe that, as happened four years ago, a new input method previously seen only in mockups and movie magic could once again be shown, in a working product, on stage, then this business still has a chance of staying interesting.

  • Rou

    This sums up the tech pundits and analysts:

    "Post-modernism is born of culture which is at best a mere representation of another, which has lost ambition for itself and the world. It has no sense of the future and cannot make sense of the past. It's born of an ignorance of the past which prevents it from having any sense of the future, so it looks (blindly) backwards and ends up in that permanent now-ness which is so gratefully embraced by those in need of an excuse. They will sneer at any consumer who gets fed up with the joke, and they have nothing but contempt for those too benighted to get the joke in the first place." (google text for source)

    Apple is unashamedly minimalist modernist.

    • julienv42

      Googling the text didn't yield any results (nor Binging it, for that matter).

  • http://twitter.com/bigbadrobbo @bigbadrobbo

    Very interesting. Thanks for the sound analysis.

  • Hamranhansenhansen

    You have to think there is an opportunity for a mobile platform to be the iPhone of voice. Maybe Ford's car system moves into an earpiece.

    It is too early to call the touch phones, though. Apple is not going away, but I think the Apple alternative will have to also have native C apps and its own hardware. Maybe it will be HP? Android is not a platform, and it is becoming less so every day, not more.

    • http://twitter.com/aegisdesign @aegisdesign

      Android *does* allow native development via the NDK. It's, er, how I was looking at porting Qt apps from Symbian/MeeGo/Maemo so I don't have to learn another platform and language.

      Instructions here… http://code.google.com/p/android-lighthouse/wiki/

      You need a simple Java stub program that loads a native library. In the library you have your regular C++ code. Apparently in Honeycomb you can dispense with the Java stub now.
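
      For anyone curious what that stub looks like, here is a minimal sketch of the general pattern (the package, class and library names are made up for illustration, not taken from the android-lighthouse project): a tiny Java Activity loads the NDK-built shared library and then hands control to native code through JNI.

        // Hypothetical stub: loads an NDK-built libnative-app.so and calls into it via JNI.
        package com.example.nativestub;

        import android.app.Activity;
        import android.os.Bundle;

        public class NativeStubActivity extends Activity {

            static {
                // Maps to libnative-app.so produced by the NDK build (name is illustrative).
                System.loadLibrary("native-app");
            }

            // Implemented in C/C++ inside the shared library, e.g.
            // JNIEXPORT void JNICALL
            // Java_com_example_nativestub_NativeStubActivity_runNativeApp(JNIEnv *, jobject);
            private native void runNativeApp();

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                runNativeApp(); // everything else happens on the C++ side
            }
        }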

      • Pieter

        That actually is not nice for the Android platform.
        Currently, due to the Dalvik VM, applications can run on different hardware platforms without recompiling. That way a non-ARM Android device can be made (e.g. an Intel x86 one, not that you would want a power-hungry x86, but still…).
        By compiling for the ARM processor, that advantage is lost…

        In that sense, what would the chance be of Apple introducing an LLVM byte-code based application binary? That way apps would become processor-agnostic, as the final compile would be done on the actual hardware… It would allow e.g. ARM-based 'MacBook Air Light' devices.
        Especially if the Mac App Store would support (require?) those applications, distribution of that kind of app would go quickly…
        Apple has switched processor architectures multiple times; I would expect them to be very interested in such a meta-platform that they have in-house…

      • http://twitter.com/aegisdesign @aegisdesign

        Intel are claiming the new Medfield Atom chipsets match ARM for power usage, and the Medfield CPU in the development handsets Intel/Nokia are using runs at 1.6GHz (I'd guess that's full-on turbo mode).

        If Android goes Intel as well as ARM, then they'll have to address the same issue of either compiling for each architecture and having two binaries in the store, or having fat binaries like Apple do.

        As a developer who really doesn't want to learn Java and another API, recompiling and shipping two binaries/fat binaries is much preferable, and a native app will run quicker than the Java one, so it's win-win as far as I'm concerned.

      • Pieter

        'Intel are claiming the new Medfield Atom chipsets match ARM for power'
        Therefore those processors may be interesting to use, and thus it would be interesting to only have Dalvik binaries…

        'having two binaries in the store or having fat binaries like Apple do'
        But for that you need developer support…
        Apple had to add a 68K emulator to the PowerPC machines and a PowerPC emulator to the Intel x86 machines when they switched processor architectures to support older applications…

        'a native app will run quicker than the Java one'
        Not always true.
        The JIT has information that the compiler does not have.
        HP years ago had a paper about emulating (I think it was) PA-RISC on a PA-RISC.

      • http://twitter.com/aegisdesign @aegisdesign

        Yep, you need developer support and that's coming to Android…

        From the NDK release notes…

        "The latest release of the NDK supports these ARM instruction sets:

        ARMv5TE (including Thumb-1 instructions)
        ARMv7-A (including Thumb-2 and VFPv3-D16 instructions, with optional support for NEON/VFPv3-D32 instructions)

        Future releases of the NDK will also support:

        x86 instructions (see CPU-ARCH-ABIS.HTML for more information)"
        http://developer.android.com/sdk/ndk/overview.htm

  • http://twitter.com/vikingbrad @vikingbrad

    Interesting article; the next big input method would have to be voice. I wonder if it will take 10 years.

    • Pieter

      Everybody blabbing into their computer?
      No thanks…
      It will not work with multiple people at the same time talking, or a TV/radio in the background…
      I don't think it will work for the general purpose computer due to that, maybe for some niches.

  • Macorange

    I was making computer purchase decisions in 1983, and yes, it was all over except the shouting by then. You used a PC at work, and you needed to be compatible with it at home. Apple produced some superior products along the way, but the game was over. Even now Microsoft has a 90% share.

    iOS is going to overtake the old order, but it will have taken 30 years for that to happen. Of course something else will eventually overtake iOS, but if it takes another 30 years, I regrettably will not be on this earth to see it.

  • Dan

    Even before pure voice commands take over, think of Minority Report, et al, where pointing at the hologram will control some of the features, with voice the rest. In time, voice will probably win, with hologram projections the "hardware", if someone wants to "type" rather than speak, for security, privacy, or whatever. Also, eye movement tracking has a place here, as the military can attest, and eventually, pure thought will control most, if not all, functions. Sooner than we might imagine…..

  • slimcode

    Voice. That new input method has to be voice. We've just scratched the surface.

    • THT

      Verbal communication is a rather taxing input mechanism. Not taxing on the computer, taxing on the human. Just think about how hard it is to communicate something to someone else. It is tremendously difficult to do anything outside of the rather simplistic things.

    • Iosweeky

      Could easily be eyeball or motion tracking.

      Think kinect for mobile devices.

      • nns

        I don't think we should jump all over a new input method, just because it's new. Is motion tracking really better than a touchscreen? Personally, I think that multitouch augmented by voice commands is the near-term future for phones.

  • Marcos

    I'd guess that you were not a Mac user back when Windows 95 came out. It did not catch up with Macintosh in terms of user experience. It only got good enough for people to judge based on screenshots that they were roughly the same. In many ways Android is the same: an inferior user experience due to hundreds of small and hard to pinpoint details. But people who don't have experience with the iPhone are fooled into thinking Android has caught up in terms of user experience. Being cheaper does not hurt, either (Mac vs PC).

    People who think technology has stopped evolving forget just how revolutionary the Mac was when it first came out, or the iPhone… I'm sure we will be delighted with new ideas in the world of technology for a very long time.
    Thanks for the interesting blog!

  • THT

    I'd propose you could have called it for Microsoft, which includes both DOS and Windows, in 1982. That was the year after IBM chose Microsoft to provide DOS for its PC, and the year CDP clean-room cloned the IBM PC BIOS, letting Microsoft reap all of the rewards. From there it was over.

    Nothing could have stopped the train. Microsoft had the benefit of shipping an OS on the PC from the computer monopolist at the time (IBM), and the benefit of shipping software on the clones of the PC from the monopolist. Once those two things happened, all MS needed to do was just be good enough.

    An interesting aside is that back in those days, "portable" had a slightly different meaning. Your desktop computer was the "portable" computer. ;)

    • http://twitter.com/NanDuan @NanDuan

      IBM made the biggest "unforced error" (in the words of my Strategy for IT Firms professor) in IT history, by failing to assert control over the IBM PC platform. In this sense, it is only with the benefit of hindsight (and full knowledge of the series of mistakes IBM made after 1982) that you could claim "it was over" in 1982.

      Or to put it another way, the reason you see IBM choosing MS-DOS and CDP cloning the BIOS as the two defining moments is that IBM later went on to make a series of major strategic mistakes, which ultimately handed the market to Wintel and the clones. The development of the market could have taken very different turns, so it would have been premature to declare victory for MS in 1982.

      • 2sk21

        Don't forget the anti-trust problems hanging over IBM at that time.

      • THT

        I agree with you that only in hindsight could we call it for Microsoft in 1982.

        But look at it this way, assuming some analyst had superb insight. Price is the primary driver for sales of a product. IBM couldn't eliminate cloning after, what, the lawsuit with Phoenix on clean-room copying of the BIOS failed? They like to operate with a good profit margin, just like any other company with distinct branding. But once clones became possible, with MS sitting in pole position with OS software, there was nothing they could do. MS was going to win the PC wars.

        IBM tried to take back the PC market with the PS/2, or maybe the PC Jr (memory is escaping me), but their products weren't different enough to become a premium market leader or to take significant share from the clones. I don't think there was anything they could do, as the clone vendors plus Microsoft provided the cheapest product for 90% of the capability. An analyst probably called that back then.

        There was another significant inflection point for GUI layers over DOS between OS/2 and Windows. That is an interesting story in and of itself. Maybe OS/2 would have had a chance if it was lighter, but by then MS had begun dominating office software, resulting in Windows + Office becoming a virtuous cycle.

      • asymco

        By the early 1990s the game was on again as IBM tried to establish OS/2 vs. Windows. Just because in 1982 DOS was a clear winner did not establish Microsoft as the dominant force in software. All eyes were still on IBM.

      • davel

        I would choose 1990, or whenever Win 3.1 came about.

        They had the office suite and the combo pricing, and Win 3.1 made the PC good enough to be useful in a windowed environment.

  • WaltFrench

    Let's add "Kinecting" to the list of input innovations.

    Who really doubts that in a couple of years, a smartphone camera will be able to read our body language? If the hearing-impaired semi-unconsciously fill in the gaps with their sight of a face, why not the phone, too?

  • THT

    If I were to draw the line, yeah, I think Summer of 2010 would be it. It's not the technology superiority of one platform or the other, it's the business models.

    Whose business model could defeat Google's? Whose business model could defeat Apple's? There's always the chance of mismanagement imploding a company, but barring that, it's hard to envision a business model more advantageous than Google's. Apple, they are just freakish. You'd think they would trip up sooner or later.

    • Kizedek

      You'd think so. But even the business models are part of what Horace is talking about. Analysts and pundits are calling it on what they see, and on the basis of what they have seen before, without knowing what they are really seeing. So Google's business model looks advantageous, and Apple's looks freakishly lucky? That's just the point. This is a conclusion based on the PC wars.

      Just a couple of articles back, Horace was wondering just what advantages the Android "platform" really holds for Google. That question extends to business model, too. In fact, the question related particularly to business model.

      As I see it, Apple is not at all freakish or lucky, especially lately with business models. While others march toward the commoditization of devices and OS' and design and UX, Apple has quietly commoditized some processes and business models. While others are busy re-inventing the wheel and wondering how they are going to compete with Apple's business models by building out their own non-starter stores with little or no developer and media company support, Apple is quietly presenting sure-thing business models to developers, music companies, media companies, TV networks, radio broadcasters, book publishers, magazine and newspaper companies, game companies, …and likely soon, credit card companies, banks and retailers of tangible goods.

      On the other hand, Google has to madly scramble to preserve its one-track business model and the income it receives from eyes on ads. How does Google stem the tide of eyes going to apps instead of browsers? How do they add their aging business model to new, quality media that people are increasingly willing to pay for (due to Apple's commoditized business models)?

      • THT

        Google let the genie out of the bottle. What they are doing is using their search monopoly to finance the development of freely licensable operating systems and free services in order to promote the usage of the Internet. Their business is selling ad-space on the Internet. More usage of the Internet means more money for them.

        Even if the companies fork Android to unrecognizability, it'll still mean people are using the Internet. They don't get direct search-based ad-revenue or service-oriented ad-revenue, but they get web-page display ad-revenue.

        So, their primary impetus is to drive down the cost of and to commoditize both hardware and software, so that more and more people use the Internet and their services. They are making the business of selling software obsolete and selling hardware a business of nothing but thin margins.

        So, I'm calling it for Google at least. The business model seems unassailable. If anything disrupts them, it'll be something that challenges Google's revenue model, not the technical superiority of any of their products. It's really hard to compete with free (really ad-supported, of which they are a monopolist or nearly so).

        I say Apple is freakish as a compliment. They are playing a high-wire game of developing hit product after hit product. iMac, iPod, iTunes, MBP/MBA, iPhone, iPad. There's always room in the market for the premium end. But to continue to develop those hits year over year is amazing. Engineering projects fail. Entropy sets in on any organization. Operating at such high levels for such a long time is a freakish feat.

        There will always be a market for the premium end, which gobbles up most of the profits of the market, but it takes tremendous discipline and focus. Whether Apple can continue being the top dog in their respective markets as the top premium vendor is always an open question.

      • http://twitter.com/NanDuan @NanDuan

        I think what's interesting is that while Google carries on with an explicit strategy to commoditize adjacent markets (e.g. mobile OS) and, as you say, push for more Internet usage, Google's primary traffic driver (and hence ad-revenue driver), search, is itself being commoditized.

        The recent Google / Bing spat exemplifies this. This is what directly challenges Google's revenue model. Obviously, Google still enjoys a dominant market position in search, but Microsoft can continue to pump money into Bing simply to undermine Google's revenue (and vice versa, Google undermining MS with Apps).

        And there's also the broader, perhaps overhyped, trend of web 2.0. Facebook is not undermining Google in the sense that it does search better through social, but it will continue to take traffic share away from Google, which in turn directly challenges Google's business model.
        At the end of the day, Google's model is based on owning a large share of web traffic, so it could sell advertising. It doesn't have to be search, that just happened to be where they started. As more and more new web services come into play and take traffic away from Google, Google will feel the pressure.

    • Steven Noyes

      Oracle could easily trip Google. If Google has to pay 10 USD to Oracle for every Android handset, how long do you think Google will support Android?

  • poke

    I will go on record as the anonymous internet commenter who in 2011 said that, in fact, there will not be any more significant innovations in input method. It actually annoys me intensely when an article discusses the iPad and then, in the same breath, discusses "Minority Report" style gestures and similar flights of fancy as if multitouch is just one more iteration on the input innovation treadmill. This misses the deep significance of multitouch entirely. Multitouch is the end point of computer input innovation.

    Multitouch, as it exists on the iPad, is not a gesture-based form of input (although it can do gestures) but a form of direct interaction. It doesn't get any more direct than actually touching the object you're manipulating. All other forms of input, and all speculative forms, are indirect. Typed and spoken commands are indirect, the mouse is indirect, the pen is indirect (except when writing handwritten text or drawing) and gestures are indirect. So there isn't anything beyond multitouch; there isn't a greater level of directness to move to after you've got multitouch. I think the multitouch tablet, in a form factor not significantly different from what we have now, will be with us as long as there are general purpose computers. It's like pixel density. You could argue that screen resolution will increase forever but once you're past any detectable difference there's no need.

    That's not to say there won't be innovations, just not significant innovations, on the level of multitouch. Multitouch devices will be extended, they'll gain greater sensitivity to pressure, they'll be able to detect hovering fingers, etc. There may be voice and gestural input but they'll be auxiliary forms of input. Sometimes indirection is desirable, and voice and gestures can provide that, but those are special cases on a general purpose computer. The only thing that could replace multitouch is a solid hologram projector or the holodeck but neither appears to be within the realm of physical possibility. (There are ways to fake it, such as using VR headsets and force feedback gloves, but they're clumsy.)

    That's not to say there won't be any more disruptive changes in the computer industry. But I think they're more likely to be in the form of software and services rather than input.

    • http://twitter.com/MalphasWats @MalphasWats

      I think you're thinking ever-so-slightly too small – multitouch is just the current iteration of the "finger touch" input method, a refinement really.

      There are 2 input methods that are yet to be implemented usefully anywhere, but which, when cracked, will spark the kind of innovation being discussed here:

      conversational voice input – think star trek computers, rather than Dragon Dictate!
      direct thought control

      Both are areas currently under development (you can even buy some simple brain input devices already). That's pretty much it, though; we can safely call the market for computing devices once they implement a direct brain interface. Cool huh!

      • http://twitter.com/aegisdesign @aegisdesign

        The problem with voice input is that everyone can hear what you're doing with your computer.

        I really don't want to hear someone operating Facebook by voice command and I'm pretty sure they'd not want me to hear either.

      • poke

        I agree that voice control will be part of it (for issuing commands) but I don't believe it will displace multitouch whereas, for example, I am absolutely convinced multitouch will displace the mouse. I think you'll only be able to find a mouse in a museum 10-20 years from now. People who grow up on multitouch will see the mouse as something utterly antiquated and bizarre. Meanwhile, I don't think anybody will ever use voice to layout a document, make a presentation, etc, except in an auxiliary way. There are situations where voice-only makes sense – i.e., any device you only issue commands to – but those are not general purpose computing situations.

        I don't know what you mean by 'direct thought control.' If you mean the sort of devices we have now where you can wear a headband and learn to control a cursor, I don't think they're useful. If you mean some sort of science fiction scenario where we're plugged into the Matrix, I don't think that's possible.

      • kwyjiob

        No they won't. Your thinking is very narrow.

        Direct manipulation of screen elements is not the end point for HCI. Voice and gestures will not merely be "auxiliary", but will probably form the centre of the living space. This is significantly more direct than having to pick up some kind of pad.

        The mouse will see continued use, because the natural place for the screen at a desk does not lend itself to touch manipulation; users soon get tired and develop a case of gorilla arm. It'd work better with screens angled like architectural easels, but I doubt, ergonomically and health-wise, that we'll be looking down at easels rather than up at screens.

        We don't want a screen where the mouse is, because we never want to look at it. Which is why Apple's multitouch magic pad has no screen, which makes it just as abstract, if not more so than a traditional mouse.

        The start of something new, doesn't mean the end of everything else.

    • Steko

      "it doesn't get anymore direct than actually touching the object you're manipulating. All other forms of input and all speculative forms are indirect. Typed and spoken commands are indirect"

      This is inaccurate for several reasons.

      First, directness is not the grail. In many tasks the user wants to work at a high level and have the CPU/AI do the grunt work.

      Second, verbal interaction, as an example, is often a more direct way of doing things. Saying "Call Joe" is far more direct than click phone, click contacts, find Joe, click Joe. Touching things is optimally efficient when everything you'd want to touch is visible.

      Third, input speed matters. People think faster than they talk, talk faster than they type, and type on physical keys faster than they type on tiny mobile keys or soft keys. Typing on soft keys is good enough for mobiles now and will improve with more tactile feedback, but eventually voice input/AI will mature.

      Fourth, touch becomes less relevant as screens are abstracted away.

      I don't think touch is going anywhere but it will certainly be supplemented, augmented and partially replaced by voice, 3D, retina tracking and (eventually) direct thought input.

      • poke

        Let me clarify what I mean by directness. When I'm laying out a presentation and I move and resize images using multitouch that is absolute directness of interaction. That's because the goal of laying out my presentation is just to have certain elements on the screen in certain places. The depiction I'm manipulating is the very thing I'm creating. In this situation, I can get no more direct than multitouch input. Even if I was jacked into the Matrix, if I'm laying out a presentation, the most direct way to do it would be to position things on a plane with my hands.

        With your example of "Call Joe", an onscreen button and a voice command wouldn't be any more or less direct, since the only direct way to speak to Joe involves not using the device at all. Issuing a verbal command might be the faster of the two options but wouldn't be any more direct (there's an ambiguity in the word 'direct' where it can be used to mean doing something without mediation or merely doing it faster, I'm using the former meaning). On the other hand, if you were laying out a presentation, issuing verbal commands (or pressing buttons) would be indirect, as would be using a mouse.

        So what I'm arguing is that there's a large set of tasks where multitouch is the be all and end all of input methods and any general purpose personal computing devices will need to include it, even if it is augmented with voice input. I don't think screens will ever be abstracted away because personal computers inherently involve depiction and hence some form of display. I don't think we'd benefit from indirectness of output (i.e., having a computer describe things to us by voice output). Written text, which is a major part of what we do with computers now, is a representation of speech, of course, but it's one with a great many advantages. We can read faster than we can listen to someone talk, it's easier to scan, etc. But even with recorded speech, it seems that you'd also want video: very few people listen to the radio except in circumstances where they can't watch a television.

      • Steko

        I absolutely agree. Like I said, "touch isn't going anywhere" because for many things it is optimal. But as other input methods mature, the set of things it's optimal for will decrease somewhat.

    • ______

      You made a point pretty strongly and fairly well until you gave yourself an out:
      "The only thing that could replace multitouch is a solid hologram projector or the holodeck but neither appears to be within the realm of physical possibility."

      Don't get me wrong, I appreciate what you wrote, as it made me stop and think about the evolution of input and I think you're right that multitouch (on a surface) is pretty much the foreseeable future.

      I just feel like you made a strong point, then pointed out the one thing that could upend your view, and then dismissed the disruption as a physical impossibility. I think that warrants further discussion on the physics of it all and whether multitouch was considered an impossibility at some point (and if so, when?).

      • poke

        I didn't think of it as an out. I was just trying to be comprehensive. Holograms are optical illusions and there isn't a comparable illusion of solidity without some kind of mechanical interaction. I wouldn't rule out the possibility of pseudo-3-dimensional projection into empty space since, at least here on Earth, there's a medium that could be manipulated in some as yet unforeseen way. I think it's unlikely though. There's also the interesting question of whether 2-dimensional depictions are actually more valuable to us than the illusory creations of 3-dimensional forms in space. We live in a world of texts, images, charts, video, and so forth, and the usefulness of these mediums lies as much in the ways they fail to resemble the real world as much as in the ways they succeed in resembling it. (But then if we did have solid holograms, we could just as easily produce 2-dimensional displays as objects, so…)

        Your last question is interesting. I don't think touch was ever considered an impossibility. One of the interesting things about the computer industry is that it has never really stepped outside the vision of its early pioneers. Touch was conceived of as the natural way to interact with graphical displays and was talked about from the very beginning. Ivan Sutherland's Sketchpad used a light pen to draw on screen in 1963. Alan Kay came up with the Dynabook in 1968. Engelbart invented the mouse and basic GUI concepts in the mid-60s. Touch, graphically rich displays, the tablet form factor and networking have a long history. It's just taken awhile for the technology to really get to the point where a responsive multitouch tablet is possible. There has been a philosophy in computing that takes the computer to be a new medium, comparable to print, television, etc, and I think this has been the most productive way of viewing the computer.

    • nashxena

      Your analysis made me post this. Don't you think touch itself is an indirect way of interacting? The question here is where to draw the line between the system and the environment. Believe me, this line is fading; have a look at these links: http://www.braingate.com/, http://www.spatialrobots.com/2009/09/augmented-reality-glasses-concept-by-nokia/ . We are far, far, far from the end. As Morpheus would have said, "do you think this is an iPad you are touching?"

  • xman

    Voice input is not generally useful. Imagine sitting in a meeting with everyone talking to their devices. Or on a train/plane/bus – it's bad enough with all the half-conversations; imagine hearing someone's meeting minutes or Facebook status updates. Sure, there are times when voice input would be useful, but not nearly as often as people imagine.

    The Minority Report-style thing is just silly – again, imagine a crowd of people all gesturing: it makes no sense. Again, in limited circumstances, yes, but not the normal way of interacting.

    The point about multi-touch is definitely spot on, BUT we also need more styles – personally I would like to be able to use a pen sometimes, as I have big fingers, and I would also like to be able to use a brush for other things. We need screens with hover and pressure sensing, with haptic feedback and the ability to use things other than just your fingers. It's what we do in real life, so let's have it on our devices.

    • http://twitter.com/aegisdesign @aegisdesign

      Nokia are nearly there. The haptic feedback on the C7 is really good, with differing levels of vibration depending on what you're doing. Usually I switch haptics off on phones, but about a month into owning the C7 I realised I'd not switched it off, as it wasn't annoying. It's better than the N8 too, which is earlier hardware.

      On the Intel Medfield based MeeGo handsets, supposedly the capacitive screen has support for capacitive pens with pressure and angle detection so handwriting is possible. They use the Atmel mXT224 controller or that's what's been showing up in bug reports. It's also in the Galaxy Tab.
      http://www.atmel.com/dyn/products/product_card.as

  • Les S

    More precisely, does it qualify as a possible future UI option, rather than signaling some kind of end for current UIs:

    http://m.wired.com/beyond_the_beyond/2008/01/heads-up-displa/

    • Waveney

      Darn, just got round to reading the rest of the thread and you beat me to it ;~) I did expand the idea a bit tho'
      Great link btw

  • steve mobs

    The future of mobile computing is already here and it's called the Motorola Atrix. That's the phone everybody else is trying to copy right now.

    • PatrickG

      I would tend to disagree – in that you seem to be enamored of the hardware and not the actual functionality. This stands to reason, as the device is not in general circulation until after the March release. If you are basing your estimation of that future on what was demonstrated at CES, that would certainly be the case. The only innovation here is the ability to put it into an enhanced docking mechanism which gives it a laptop-esque, Linux-based alter ego – an interesting concept, but scaled towards tech users, not the mainstream. There are many potential problems to address before this is ready for primetime, including the need to transition from direct input a la multitouch to an abstracted keyboard/touchpad, battery life for that dual-core processor from that diminutive 1900mAh battery, moving program operations back and forth between mobile Android and laptop Linux, and so on. Great concept, interesting prototype, but not fully baked and certainly not, in my experience at least, "the phone everyone else is trying to copy right now".

    • nns

      Who exactly is trying to copy the Atrix, lol?

  • Waveney

    OK Horace, I'll bite… sorta, kinda 'ish – but the line in the sand would be very wide and encompass all known and proposed methods of input. The next few years will be about refinement and sophistication. What I'm struggling to say is that 'input' per se is the line in the sand. I think the next paradigm will be 'miniaturisation' and 'display activation', which will revolutionise the way we interact with data.
    The handheld device is simply too big, having to cater for input (touch), storage and a display screen, and also to be big enough to provide ample ports and a chassis housing the antennae for communication, cameras and microphone. I think we will see the components separated and the main device shrunk to something the size of a coat button, or earring, or even a nose clip (fashion), housing the camera/storage/communication functions. The display will be worn (spectacles or contact lenses) and be a heads-up presentation with eye control (which Canon SLRs had ten years ago) and some sort of pocketable/wearable/skin-strip touch interface. NFC will tie the whole lot together and allow for user interaction with a wide range of pervasive environmental information.
    Of course it won't all arrive together, but I would expect someone (Apple?) to come up with a front-mounted camera/eye-control system within the next year or so, and a 3D eye-control heads-up display (think military) not long after. Most of the miniaturisation technology is already available.
    For lack of better words, I expect 'invisible' and 'transparent' to be on everyone's lips. 'i(n)Visible' anyone?

  • Waveney

    *sigh* Oh dear, after reading through my post, I realise I was having a 'Predator' alien moment.

  • ARJWright

    Very refreshing, and appreciated reading. I don't see voice as the input paradigm that others do. I see motion, re the MS Kinect. That would be the shift, and the one most applicable to the paradigm that Horace speaks of here.

    Man I like folks who make me think.

  • Fred

    Voice UI is closer than you think. Apple bought Tom Gruber's company Siri.

    See Siri demo http://siri.com/about/product

    See Tom Gruber's vision of the intelligent interface and digital assistant http://vimeo.com/9221827

    • PatrickG

      The key issue with any input interface, at least thus far, is ensuring discrete resolution of the input and error correction. With keyboards we developed onboard spell-checkers with dictionaries to increase input integrity; with mice we moved from mechanical rollers, to gridded optical pads, to lasers and accelerometers. With voice we need first to distinguish which stream of input to pay attention to (especially in high-noise environments) – helped by noise-cancelling, for example – and then to parse the input into actual data representations – for example, apps like Dragon dictation, but with a full "hands-off" command set. How does a mechanism recognize that when you say "OPEN" in the context of input it is a command and not just the word? Or, if you have a verbal cue, that you are in fact using the command cue and not the string in a sentence? All of this requires processing power and controls refinement, because we aren't talking merely about English here; we are approaching all the world's languages. As for using gestures in public, or speaking commands – that issue has not just been broached in cellphone use, but already parodied in the WP7 commercials.

    • Vatdoro

      I totally agree. Ever since Apple bought Siri last year I've been convinced iOS 5 will have some seriously cool voice technology.

      Imagine receiving a text while driving, telling your iPhone to read it to you, and replying to the text. All without touching or looking at your iPhone.

      Now imagine Apple opening up these powerful voice APIs to 3rd party developers.
      I'm getting giddy just thinking about iOS 5!

      (This is total speculation, but I really think Apple could do it this year.)

  • 2sk21

    The next step is no interface at all. You just carry the device with you and it monitors everything around you and on the network and the device simply does the right thing. For example, if you walk into a meeting room your device brings up all the files relevant to that meeting. Google has talked about ambient searching in which your device is always conducting searches for everything around you.

    • nns

      Now that is an interesting idea. That sounds way out in the future, though.

  • Chris

    I propose that there are four factors to consider:

    1) Hardware: Mainframe / Mini / PC / Notebook / PDA-Phone. I think this moves in the direction of increased portability.

    2) Interface: Toggles / Punch cards / Keyboards / Mice / Touch. I think this evolves in a manner that best lets us harness the power at hand.

    3) Lock in: How easily can people move to the next innovation?

    4) Audience: How many people use the new platform, especially compared to the previous one?

    In moving from minis to PCs, we had new cheaper hardware which allowed for a vastly expanded audience with no ties to the old platform. The Mac then came along, the price putting serious limits on growing the audience in the short term, while those already at the party were locked into DOS.

    With smart phones, the hardware cannot shrink much further while remaining general-purpose. The interface feels correct but does not preclude innovation. People are probably becoming locked in, although more and more services are migrating to clouds. And, critically, there will soon be no larger market as we will all carry smart phones. So, if voice comes along it is likely to simply be integrated into what we already have.

    So, until the device becomes integrated into us, I think we have all the pieces of our technological puzzle for the next while. Just as someone from the 1940s would quickly come up to speed on transportation in the 2010s, so I think we will feel about technology if transported twenty years in the future. Don't quote me on it, though.

  • Fake Tim Cook

    When Steve returned to Apple in the late 90s, he forced the entire management team to watch a week-long Star Trek marathon. The Next Generation, to be exact. He even came in wearing a Star Trek captain’s uniform. At first I thought “what have I gotten myself into”. After it was over he told us that Gene Roddenberry had basically done a lot of the hard work for us. We just needed to set about bringing this vision to life. We all knew he was right.
    We began working on “next generation” input technologies while simultaneously creating Mac OS X as the software foundation for these new innovations.
    So to answer Horace’s question, “Is innovation dead?” No, but there are few companies with the foresight, resources and leadership that can create these kinds of breakthroughs.
    For example, who besides Microsoft could have developed a gestural interface as advanced as Kinect? Shortlist: Apple, Sony, Google and maybe HP. How many are even trying to tackle these types of issues? Kinect was purely a reactionary move to the Wii. Microsoft wasn’t thinking about the future of PC interfaces. Their Slates clearly show they are focused on preserving point and click.
    Innovation is alive and well at Apple. We have been thinking about this stuff for the past decade and we are not afraid of building a better mousetrap.

    • David

      MS didn't develop the Kinect. They bought the company, and for not all that much money. Just look to the startups for innovation. Remember, Apple purchased Fingerworks, Lala and Siri also.

      • Fake Tim Cook

        Yes, it is true that Apple bought FingerWorks. They made some great technology. Can you name one of their products? No. We sell more multitouch devices every minute than FingerWorks ever did.
        Innovation is alive in small companies, but they have a hard time bringing that innovation to the masses.
        There are only a few companies that are truly shaping the future of computing.

    • Vatdoro

      I'm not sure if you actually worked at Apple in the late 90s, since your name is Fake Tim Cook. I've never heard that Steve Jobs held a mandatory Star Trek TNG marathon, but that makes a lot of sense.

      I was in Middle School when The Next Generation was on, and I watched it religiously. To this day when I picture the future I often pull up images in my mind from that show.

      The iPhone and iPad definitely remind me of Star Trek like devices.
      And just as important as a device being "futuristic" it is critical to make them easy to use. This is where Apple and Steve Jobs shine.
      When I think of some character in Star Trek walking up to a computer panel on a wall, I imagine the UI is completely intuitive.

      This is exactly how iOS is designed. It's powerful, but at the same time a 2 year old can pick up an iPad and use it.
      That combination is incredibly rare.

  • Steve

    Contact Lens Displays. You heard it here first.

    • unhinged

      What's going to be really impressive is direct manipulation of the optic nerve or the centres in the brain that process imagery. I remember seeing a documentary from the 1970s where a blind man was hooked up to a computer that was able to display 16-pixel images directly to his brain.

      I'll be equal parts impressed and scared, of course.

  • davel

    The problem with the pc analogy is that Microsoft was able to build a wall.

    They had corporate support because of the IBM connection and those consumers who needed to work on documents at home; then there was the office suite, which took over because of marketing.

    This led to market dominance.

    With phones you do not have that. You need to make calls, have a long battery life and surf the web. You do not need a specific OS for that. Apple is trying to build a wall to keep its customers in with iTunes.

    It has only been around a few years so the lead is not insurmountable.

    There is much time for things to change. The problem is what company has the vision and execution of Apple? I don't think they have competition there from anyone.

  • berult

    The next quantum leap is a paradox. Push the evolution of the computing device into thin client perfection. A thin client to the brain, to human consciousness, is the key to unlock and as fate would have it, unleash "signature" networking, a paradigm shift of universal proportion.

    The idevices are comfortably settling into a thin client groove with the end user's mindset. They are nevertheless destined for the ultimate thin client role of mirroring, and reflecting on the end user's mind …per se. Just as Apple has evolved to become close to perfect thin client to Steve Jobs.

    Symmetry. It empowers one, blinds all.

  • Roger

    Apple are adopting the closed-shop, proprietary model again. This dooms them to being a niche player, albeit a large one, in the mobile market. I believe that if Apple had licensed their technology way back when, there would be no Microsoft today. I was a user and a programmer on both PC platforms then, and there was just no comparison.
    This is not about a better mousetrap; it's about the one most readily acceptable and available.

    • asymco

      A niche is, by definition, not large. Perhaps you can think of a better term to call a platform with a projected half billion tenaciously loyal users.

  • Waveney

    @Roger
    You will not find sustenance here for that assertion. Please read and digest those posts germane to market share/industry profit – here is the latest: http://www.asymco.com/2011/02/02/making-it-up-in-
    Please, enough with the 'doom' scenarios, the world has moved on.

  • http://twitter.com/fictionalui @fictionalui

    Stimulating post.

    Some thoughts:

    # Innovation in mobile computing will keep going steadily; you don't even need to imagine new input methods for that. We're just scratching the surface, literally :) , with the possibilities of handheld touch slates packed with radios, sensors and teraflops of processing power.
    Overall, the platforms and the ecosystems will allow for many innovative usage patterns.

    Think about the WIMP GUI desktop era: the input method basics didn't change for 30 years, but in that timeframe ecosystem growth and concurrent innovations enabled so many breakthroughs in the consumer space, from DTP to A/V to the WWW…
    Think about the touch devices we're going to see in the next few years, and what kind of impact they will have on the whole of our culture…

    # In general, new form factors, input methods and HCI paradigms trail the advancements in computing density and interface technologies.
    Transistor advancements allowed for desktop computers. The GUI was possible when Moore's law allowed for cheap graphics-capable desktops. Moore's law again, plus LCD tech, allowed for laptop computing, with trackpads and trackpoints joining the mouse. Touch screens came to help when computing density allowed for pocket-sized devices where keyboards and trackpads couldn't fit.

    I guess the next step will involve breaking free of the classic limitation of having a display on the device – with devices so small they could fit in a wristwatch, a ring or a jacket button, that would be a constraint.
    I'm curious whether the solution will come from holographic projections, some kind of glasses-like interface, or from turning clothing and other surfaces into bendable and foldable screens, and whether the input method will still be tactile, or motion sensing, or some other kind of muscle feedback.
    I think interaction will have to involve visual display of information for a long time – aural is too limited. Until we go neural :)

  • Omar

    Remember Iron Man 2? Tony Stark spoke to his computer AI to construct a digital wireframe of his father's 1970 expo miniature model! Yeah, voice recognition and command input has a long way to go, and it will possibly be the next evolutionary means of information input for modern computers.

  • chandra2

    What would be the effect of innovations in display technology on mobile device innovation? Like, for example, the roll-up displays that have been talked about for a few years now.