5by5 | The Critical Path #12: Back to the Future

Episode #12 • November 2, 2011 at 5:00pm

Dan and Horace talk about the tension between relying on data and using intuition to make strategy decisions. We also apply this dual approach to think through the next evolution of user interaction and the jobs we might hire mobile computers to do for us.

via 5by5 | The Critical Path #12: Back to the Future.

Dirk tweets an alternate description:

“The iPhone prepaid availability is not the key to emerging market. It’s Siri suiting high illiteracy rates.”

Regarding the title: Asymco in DeLorean | Flickr – Photo Sharing!

  • Amit

    Hi Horace,

    Great as usual. Did you say you will be in London for a paid event? Apologies if I missed it in the podcast, but would it be possible to share the details if it is not invitation-only?

  • http://enkerli.wordpress.com Enkerli

    Not done, yet. Stopped because it sent me on a “drift-off moment” (thinking about the possibilities). Sounds like you keep on the topic of interfaces and literacy, which is an amazingly important topic that is almost never addressed.

    There’s an assumption that literacy is a prerequisite for any type of computing. Those we could call “scriptocentrists” (people who ethnocentrically think that writing is the basis of everything else) have a hard time imagining any type of computer use without some kind of writing.

    Those who work on the digital divide have a lot to say about the prerequisites for computing. It’s quite clear at this point that literacy is the sine qua non of computer use. “Illiterates” using computers is a crazy idea.

    In fact, chances are that I’ll get replies saying that the connection between computing and writing is a necessary one. “Maybe you can have kids’ apps which don’t rely on writing, but anything useful done with a computer requires writing. After all, you can’t code if you’re illiterate. Attempts at visual coding have always failed because it’s more sophisticated than linking little icons.” Or some such.

    Thing is, though, computing can go in different directions, and the fact that so much of computing has been invested in writing doesn’t make the connection a necessary one. After all, in the original meaning of “computing”, numeracy should matter more than literacy. People who don’t read and write text may very well read and write numbers and even be pretty good at working with numbers. (Ron Eglash would have neat things to say about this.)

    Still, even with GUIs, people are stuck on writing as the primary input method for computers.

    Typically, keyboards are the primary support for written interaction with a computer. This is a significant part of what made the OLPC project so myopic: why impose a keyboard on multilinguals who may not be that efficient at writing?

    Of course, as we know so well, many other systems have been tried. Handwriting recognition (Newton), shorthand (Graffiti), specialized keyboards (MessagEase), gesture-based predictive keyboards (Dasher), etc.

    [Since my first Newton MessagePad 130, I’ve been quite interested in all of these. For some reason, they all seem to work rather well, for me, even though my handwriting is recognized as illegible and I’ve never been an incredibly fast typist. Basically, I’m just about as fast on any of these, probably 20–30wpm.]

    Accessibility is probably a key, here. Perhaps it can shake up computing the same way feminism shook up social sciences: by pointing out a basic flaw in the mainstream approach.

    Something which is frequently said, in the field of accessibility, is that it benefits everyone. Funny to notice people’s reactions when they realize that “Google is blind” and that SEO is based on the content screenreaders use. Perhaps more importantly (since there are usually more of them than people with visual and hearing limitations), those with cognitive limitations require well-structured content, something which could clearly benefit everyone.

    Content in “simple English” isn’t just useful for those with low literacy rates. It also shows a careful approach to text crafting which may easily improve those texts. Technical writers probably know this, and those who bemoan the “bad writing” of academics would probably support work in this direction. “Simple language” is quite useful when it comes to computing. After all, computers don’t do well on nuance.

    So, I will resume listening to this episode but what I’ve heard so far already gave me a lot of food for thought.

    Thanks!

  • anonymous

    Great! As always, Horace.

    The fact that you decided not to productize your data reminds me of my school days when, instead of keeping my knowledge secret, I decided to lay it all out for everyone.

    This move has tremendously helped my intellectual development as I get to focus on what really matters. Every single thing that I knew became open to the public for them to not only benefit from it, but also to point out mistakes and to question my every move.

    On top of that, I became a reference point or an unofficial teacher to my classmates because they somehow know that I’m the type of person that will go the extra mile to make sure that I fully answer their question.

    I’m not here to brag (I’m posting this anonymously anyway), I just want to point out that you are on a great path.

    I came to the conclusion that “what you gain is far greater than what you give.” And soon you’ll realize this too.

  • poke

    Great episode. I think the important thing about Siri, and agent-style interfaces generally, is to realise that getting it right is a human-computer interaction (HCI) problem and not an artificial intelligence problem. Yes, artificial intelligence techniques are important to the back-end, but making the interface work is HCI. The companies that are going to get it right are the same ones who get HCI right in other UI paradigms. In fact, I think agent-style interfaces are probably more sensitive to these issues, since discoverability, responsiveness and user expectations are such huge concerns. It’s probably easier to make a “good enough” GUI or touch UI that users can fumble their way through than to make a “good enough” agent that users don’t just give up on.

    • http://enkerli.wordpress.com Enkerli

      Agreed… to a large extent. What I’m finding with Siri is the power of collaborative learning. Siri is adapting to me (a bit; it’s not that obvious) as I’m adapting to Siri. There’s almost a playfulness involved, and the discoverability is almost a collaboration. So it’s HCI, but involving both machine learning and human learning.

      • poke

        Yes, the technology has to be there. I don’t want to overstate my case in that respect. But the whole thing revolves around the “personality” you give the device. Using a dull and uninspiring graphic user interface isn’t a big deal, but who wants to talk to a dull and uninspiring personal assistant?

  • Greg Lomow

    Regarding data vs. intuition, it is not an either-or situation. The obvious bridge is the use of models (and I don’t recall whether models were discussed during the episode). Models are the best antidote to the analyst’s dilemma. Of course models aren’t 100% correct, but they can be validated (and improved) against historical data. And they provide predictions about the future. And they can be easily shared (unlike intuition or gut feelings).

    Keep up the good work.

  • dave

    Great podcast as usual Horace, I really feel that I am learning something when listening to these.

    Do you or others have any other suggestions of similarly illuminating podcasts?