What's a Post-PC device?

Microsoft’s Steve Ballmer argues that the tablet computers (aka slates, media tablets or iPads) are PCs. Steve Jobs argues that they are post-PC devices. There are analogies to trucks, cars and various metaphors for what these new devices symbolize. Some argue that because the iPad needs a PC, it’s not a post-PC device.

But how to define a new generation of computing? Since the PC is not the beginning of computing, it may make sense to look to the eras of computing that preceded it.

The first post-mainframe computers (minicomputers) worked alongside mainframes and were typically used by engineering departments (vs. the finance and accounting departmental role occupied by mainframes). They did not require a dedicated (often external) mainframe service team and could be administered by in-department system administrators. Input was not through dedicated data entry personnel and output was not a ream of folded paper. Users could use a CRT terminal to interact with the computer directly. Digital Equipment Corporation, Control Data, HP, Honeywell, Prime, et al. grew up in the shadow of IBM but thrived. Their products were cheaper and simpler to operate. Initially, the products were not more powerful.

The first post-minicomputer microcomputers (also known as PCs) were used alongside mainframes and mini-computers. They were typically used for 2-dimensional spreadsheet analysis by individuals in many departments and were easier to justify cost-wise than the time-sharing cost structure of the incumbent computers. Many microcomputers were able to access mini-computers and mainframes through terminal emulation software (and sometimes special hardware boards) so they were still able to work with existing business processes. They did not require dedicated personnel for administration but had a shared support department. Input and data storage were on the “desktop” as was printing. Compaq, Dell, HP, Apple and IBM embraced this new form factor. Their products were cheaper and simpler to operate than the previous generation of computers. Initially, the products were not more powerful.

The first post-microcomputer tablets are used alongside microcomputers for tasks such as presentations and entertainment. They have software available that allows access to desktop computers and services dependent on PC-architectured servers. They depend on PCs for data backup and software updates. They do not require IT support. They do not require a keyboard or a desk. Apple and many mobile phone vendors are embracing the new technology. They are cheaper and simpler to operate. The new products are not more powerful.

Defining a new generation of computing by its isolation from and exclusion of the previous generation is not supported by the history of computing.

I would suggest that the definition of a new generation of computing is that the new products rely on new input / output methods and allow a new population of non-expert users to use the product more cheaply and simply.

I might add that the consequences of each generational shift are:

  1. Consumption increases
  2. Skill required decreases
  3. Support required decreases
  4. There are new applications and use cases
  5. The economics are not favorable for incumbents
  6. The economics are favorable for new entrants
  7. The older generation slowly fades through diminished growth but never disappears
  • http://ximagin.co/thecw/ The CW

    Developer resources, excited by the new technology and having easier access to development tools, migrate in large numbers to the new platform at the expense of innovation and improvement in the incumbent platform.

    New programmers generally don't think… "I want to create that killer new program for a workstation!"

  • http://maheshcr.com/blog Mahesh CR

    8. Typically uses new means of interacting with users. Think punch cards, keyboards, mice, touch, etc.

    Works?

    • asymco

      That's more of an extension of the definition (new input method) rather than a consequence (which are enumerated).

  • http://julienboyreau.com Julien Boyreau

    Thank you for this enlightening clarification.

    PC = Personal Computer = One Computer Used By One Person and, maybe, Owned by this person
    (Tablet = Computer) && (Tablet = For One Person) => (Tablet = Personal Computer)
    Touch Smartphone = Tablet For Pocket = a Tablet = a PC
    Tablets (Pocketable or Luggable) = 4th Generation Computer after Mainframe, Mini & Micro

  • FalKirk

    Horace, I think you've provided us all with a fantastic analytical tool. I will be very interested in reading the many comments that are sure to follow in the hope that the brilliant contributors to the ASYMCO Blog may expand, refine or confirm the validity, practicality and the usefulness of this newly formulated guideline.

    • http://twitter.com/davidchu @davidchu

      Einstein said that finding the right question was 90% of solving a problem. To me, disruptive companies are obsessed with two questions: "Why aren't people consuming?" and "How can we deliver to non-consumers in an economically viable way?" The entrenched are usually obsessed with the questions, "How can we leverage our assets to grow?" and "How can we wring the last dollar out of our market?"

      • asymco

        Well put. Another way to look at it is that too often the question is: "How do we get a bigger piece of the pie?" and not "how do we make a bigger pie?"

        Consider how all the headlines related to smartphones "war" are about share of platforms not share of non-platform devices.

      • http://ximagin.co/thecw/ The CW

        or… in Apple's case, "how do we make a new pie entirely?"

      • http://www.dimspace.com Ben

        Or maybe: "For every person with the appetite for pie, there are 20 with an appetite big enough for a cupcake. Why don't we make some cupcakes?"

      • capnbob66

        And for each cupcake there are 14 donut holes and a chocolate croissant. Or are we stretching the tasty pastry metaphor too far? ;-)

      • http://twitter.com/davidchu @davidchu

        Thanks for the simplification.

  • arvleo

    It's interesting you mention "Consumption Increase" & "Skill level required decrease" as consequences…for me they (usability & experience) would be key requirements or objectives on which any generational shift is built…not a consequence

  • http://twitter.com/davidchu @davidchu

    The economics of post-PC devices can't be stressed enough.

    In the old world, you created lots of different hardware for each niche. In the post-PC era, the hardware conforms to each niche through software. That allows a company like Apple to gain massive economies of scale in order to hit price points that old business models can't match. This allows companies like Foxconn to pump out millions of products for Apple for only 10% gross margins.

    When people say that Honeycomb will succeed because manufacturers can make different products to address every niche, I often think that they don't get it. It's not about the hardware anymore. It's the software.

    • Hamranhansenhansen

      Definitely true. But I think it has always been true. A Compaq 486 and Dell 486 were essentially the same Intel hardware and Microsoft software, pumped out in volume and then the magic was all in the apps. Now, Apple is beating the PC industry at their own game. Apple is just doing to consumer electronics what Wintel did to PC's, but it is NeXTApple, not Wintel. Foxconn is Dell.

      All of Apple's 21st century victories have been by out-softwaring their competitor: iTunes beats Windows Media Player, FairPlay beats PlaysForSure, Final Cut Pro beats hardware-assisted Avid, Mac OS X Intel beats Windows on high-end PC's, iOS on iPhone beats Symbian/BlackBerry, App Store beats Java/BREW and all others, iOS on iPad beats TabletPC and Windows Embedded.

      Consumer Electronics (including phones) is a lot bigger than PC's. So being the Wintel of CE is a strong position.

      • sirfixalot

        Yeah, Big Mac, Double Whoppers, KFC, etc have also shown us that it doesn't take the best quality "hardware" to be successful, it's more about how you process your product.

        For what's here right now it may be appropriate to simplify it within contexts of limited scope, but I don't think it is that simple when looking even a couple of years ahead, as it is still early days for smart-devices. New usage scenarios and technologies are being added to the mix faster than ever (e.g. voice/language recognition, location awareness, NFC, augmented reality, motion sensing, transparent displays, attention detection and tracking, biometrics, and more and more coming). So hardware will still matter to some extent, and the fragmentation of the overall market means there is enough room for various competitors to try out a lot of different compositions, silly as some of them may seem to others.

        Also, you don't specify what your criteria are for "beating".
        iTunes beats Windows Media Player?
        iOS beats Windows Embedded?
        Maybe you are right, but I can't see what single quantifiable criterion would produce just those two statements, so some elaboration would be nice here.

        For me, as technical support staff and a developer, and from a general philosophical standpoint, I can't see any absolute advantage for either, though I may be biased because I did invest some time and energy in it. On the other hand, that also goes for any existing technology.

        Still, as far as I can understand, the general assumption of these discussions is that simpler and easier is better. I don't think that is universally true, or even desirable, as the initial fast food reference was meant to hint. But that's a rather lengthy discussion: we would first have to agree on the premises, assumptions and propositions, and the exact predicate in particular I find difficult to formulate accurately, as it has a lot to do with the nature of human reasoning itself and our tendency to be "bipolar ambigamists", selectively arguing for and against open-/closed- [anything – platform, mind, foreign policy, hiring practice, workspace arrangements, etc].

  • George

    Actually, all tablets currently require keyboards. They’re just not physical keyboards. Tablets are probably even easier to use with physical Bluetooth-connected keyboards.

    • FalKirk

      One of Apple's great insights was simply recognizing reality – that people can input data faster using a virtual keyboard than they can through writing by hand. I well remember commentators arguing that the iPad was not even a tablet because a tablet – by definition – had to have a stylus and handwriting recognition software!

      But I don't think that a virtual keyboard defines a tablet. I think "touch" defines a tablet. Right now, the fastest way to enter data via "touch" is with a virtual keyboard. If someone thinks up a better way to touch our iPads and input data faster, then that's what we'll use.

      As for a physical keyboard, it's like attaching a U-Haul trailer to a car. Terribly useful on special occasions. But if you need to do it all the time, you'd be much better off buying a truck instead.

    • Hamranhansenhansen

      They’re easier to use with physical keyboards maybe 10% of the time, like when writing long-form text. The rest of the time they are in the way.

      If you go into a pro music store, you’ll see lots of computers without typewriter keyboards, and you see virtual versions of them on an iPad also. The fact that iPad can emulate them in many cases better than the original device, with connectivity and more storage and longer battery life, is why music and audio people love their iPads as much as their Macs.

      Drawing also favors the touchscreen over a keyboard. Video editors use the keyboard as a set of arbitrary buttons, so much so that there are dedicated keyboards with the labels changed. They are happier with onscreen buttons.

      Even on the Web, most users use the keyboard so little that it is not worth carrying it around.

  • RobDK

    Brilliant analysis! I would guess this is how Apple sees it, and that their approach will be for massive growth with a controlled but diversified product line with several sizes/levels/feature sets, just like the iPod.

  • ozechad

    Thanks for such an insightful post Horace. The only thing I can add to this discussion is…sadly for Microsoft, I don't think many industry people really give a damn what Steve Ballmer says or believes anymore!

    • FalKirk

      You know, it's easy for us armchair CEOs (or at least THIS armchair CEO) to make fun of people who are far more powerful and probably far more intelligent than we are, for failing to see what may only be obvious with 20/20 hindsight. But in Ballmer's case, it's beginning to look like the entire world will recognize Microsoft's dilemma before he does! His lack of insight is truly astonishing. Yet Bill Gates and Microsoft's board of directors continue to support him. What does that say about them?

      • WaltFrench

        Q: “What does that say about them?”

        A1: They think that replacing him would result in worse performance.

        If Ballmer and his board think that he has some impossibly good skill in rolling dice — say, that he will get boxcars 1 out of 25 times versus the textbook 1/36 — they should not be dissuaded by ten or twenty rolls of anything else.

        After he's gone 100 rolls with only one hit, the Board might figure out some way to prove it was HIS inadequacy, rather than their faulty premise, that allowed the losses to go on.

        I actually think Zune, WinMobile, Cloud, WP7 were all understood to be somewhat long odds that the company had to pour money into to protect against being end-run and having their monopoly businesses undercut, since Desktop, Server and Office all continue to be hugely profitable. The company DNA knows massively complex systems and nobody will compete with their core area; the others are crash-absorbent bumpers. I wouldn't reject outright the notion that Ballmer deserves hazardous career duty pay for agreeing to preside over so many likely fails.

        A2: The pay that Ballmer is NOT getting is incentives to create successful new ventures that undercut the mothership. This is an easy call given the Innovator's Dilemma / Disruptive Technology meme that is old hat here. Incentives matter; Ballmer is probably doing what the board expects, what almost any board would expect.

        A3: I continue to be mystified that Ballmer/Microsoft doesn't realize that they will NEVER make as much money on a mobile OS platform as they can make by providing first-class services— Mobile Office, MobileSharePoint, MobileXBox, etc. — to ALL OTHER mobile platforms that will now refuse to touch them, meaning that their development costs are spread over their trivial single-digit-and-falling share. Had the DoJ split the company into these three units, and had Microsoft created a New Ventures separate company with incentive to succeed in the new areas, all four Microsofts might have been even more successful. This failure rests absolutely at the Board.

        Three answers, all about the same, to your good question.

      • http://theappleportfolio.blogspot.com greg

        Ballmer's whole problem is he's not likeable. He is socially awkward and inept, and he can't sell. I would venture that if Microsoft had invented the iPad, it would have been a failure. We live in a world where appearance and likeability are essential, with likeability, I would suggest, being the more important of the two. He fails to realize that people want to buy an Apple product in part because they love Steve Jobs. They want the product to succeed, and they want to be part of that success.
        The world is full of smart people whom history will regard as failures because of their unlikeability, regardless of the lofty positions they may have occupied at one time.
        Dennis Kozlowski, John Thain, Ken Lay, Chuck Prince, Jack Welch, Angelo Mozilo, and Hank Paulson, to name just a few, are not likeable except perhaps to their own families.
        As far as the board supporting him, I think the last three years have shown America that boards of directors are pretty much an inside joke, a joke unfortunately played on all of us.

      • unhinged

        I would say respected rather than liked. From all I've read of their respective personalities (lots about SJ, very little about SB), Steve Jobs has a mean streak and enjoys tormenting other people, Steve Ballmer throws chairs around when he gets mad. Ultimately, it comes down to what is achieved under a person's leadership, and we all know the score there.

      • http://theappleportfolio.blogspot.com greg

        Therein lies the difference. When we hear of Steve Jobs' mean streak we assume it's directed at people like a Steve Ballmer type, and we are OK with that. Steve Ballmer throwing chairs…he's not good enough at what he does to throw chairs, period.

      • Kizedek

        Some may interpret it as a mean streak, others may just say he can't tolerate BS. Between staying healthy so that he can spend time with his family and doing more at Apple, he can't tolerate BS for two seconds; he simply doesn't have time, especially for some journalists and bloggers. …and yet, he's the only CEO I hear of who personally answers emails from the rest of us. But, maybe he knows he has a mean streak and that is what he is trying to compensate for through his Buddhism.

      • chandra2

        How do you explain the success of Microsoft Kinect? Is that in spite of Ballmer? I do not think we can definitely say that if MSFT had invented the iPad it would have been a failure.

  • OpenMind

    How about the difference in cords? Mainframe, a lot of cords. Minicomputer, fewer cords. Desktop PC, 2 power cords, 1 Ethernet cord. Laptop PC, 1 power cord, 1 Ethernet cord. Tablet, no cord. Therefore, for a tablet battery life is very important. I don't think Apple's competitors understand the importance of battery life. That is why Apple is reluctant to add nice-to-have but power-draining features. And those idiots at Mot, HTC, RIM, etc. rush to add unnecessary features that starve themselves of battery juice.

    • thejy

      Totally agree on the idiocy of OEMs; they often miss end-user needs because they don't have strong marketing departments (or haven't yet developed a user-oriented design culture). OEM products seem to be designed by a mix of engineers and bean-counters and the result is crappy and segmented. Apple is by far leading the pack with their unique understanding of user interface constraints and their non-OEM business model. Microsoft had to enforce hardware design to obtain an acceptable level of quality on Phone 7. Will OEMs kill Android as they did WiMo? (I admit MS helped a bit.)

  • Waveney

    I think the iPad (and other tablets) are about to become, or be seen as, the ultimate thin client, and its Post-PC status will be measured by the removal of most of the present computing infrastructure. What remains will be the backend servers, firewalls etc. but even those will disappear as the Cloud takes over.
    The version 1 iPad was judged to be merely a flyweight but turned out to be more of a welterweight, i.e. it boxed 4 divisions higher. iPad 2 sits squarely in the middleweight class but will be beefed up by software and enterprise users to be comfortable in light heavyweight work. By the time a true heavyweight tablet arrives (iPad 4 maybe?) I expect landfill sites to be overflowing with masses of old desktop kludgeware and this, incidentally, will mark the end of the traditional hordes of MS certified IT staffers.
    I agree with every one of your points Horace, and I think they provoke other interesting questions…
    At what point does 'old' technology become unsustainable for incumbent manufacturers? Is it when new technology meets modern requirements? Or maybe when gains are not apparent when measured against investment costs? What is the size of the threatened manufacturing segment?
    Is the iPad a tipping point with a steep decline for established players or will they fight back with much leaner systems trying to delay the inevitable?
    Are the capabilities of a full-blown iPad really necessary in the 'ultimate thin client' scenario?
    I'm beginning to get the full meaning of Jobs saying "We've got a tiger by the tail here."

  • http://theorangeview.net/ Aaron Pressman

    As a person who lived through many of these transitions, I think some of your 8 consequences are off.

    >>Support required decreases
    At my junior high and high schools, we had access to mini-computers and a mainframe via terminals. Zillions of kids used the terminals without ever messing up the entire configuration. Personal computers, especially once Windows proliferated, required a lot more support for each user and were much less robust. IT departments expanded in size for most of the first 15 or 20 years I was in the workforce, before outsourcing came into vogue. Seems like you might be force-fitting the data to support your hypothesis.

    >>The economics are not favorable for incumbents
    Is this necessarily the case? Certainly, there is a lot of Clay Christensen's Innovator's Dilemma at play. But IBM was big in mainframes and then big in PCs for a while. Arguably, if they had struck a different kind of deal with Bill Gates for DOS, they'd have been big for longer. Dominant companies can sometimes leverage their strengths to grab hold of new markets, as you could argue Apple did to get from iPod to iPad.

    Frankly, I think the whole post-PC analogy as you've outlined it is not very enlightening. What's more interesting here is the evolution of the personal computing platform from big desktops to mobile laptops to smart devices like iPhones and tablets. Moore's law makes this possible but then many of your 8 consequences come into play. I think the commenter above who mentioned net services is highlighting a more important trend.

    • asymco

      Support needs to be measured in terms of staff needed per user. Users of computers exploded by orders of magnitude per generational transition. The number of support personnel grew as well but not faster than the user numbers did. If they had, it would not have been economical to switch over.

      Incumbents can survive a generational shift. Note that I did not define each shift as "disruptive". In some cases it's sustaining. However, the tendency is for incumbents to find the new generation uneconomical or asymmetric in business model. You could argue that Apple is enabling the new generation but it still has a strong presence in the old (having pioneered it in fact.)

      To your point about the interesting evolution of personal computing, I have discussed the impact of portability on computing before: http://www.asymco.com/2010/11/19/why-the-mac-keep

      • http://theorangeview.net/ Aaron Pressman

        I'm not sure where we can go for empirical data on support/user ratios but I don't agree. You want to make the point that in moving from PCs to iPads, users require less support and the devices are more robust. That is true but it was not the case in the earlier transition.

        In moving from terminals to PCs, each user required more support and each device required more support. One of the arguments against putting PCs in business was that they cost more per user over time, were less robust and users didn't need that kind of power. It was worth it because despite the higher support costs, productivity and capabilities increased more.

      • http://theorangeview.net/ Aaron Pressman

        From a 1995 Microsoft white paper, and they would hardly be unbiased towards PCs:
        "Quantitative evidence is hard to find. In the 1980s, data centers spent 60% on staff and 40% on equipment – very similar to the Forrester study of PC cost-of-ownership today (see footnote 1)."
        http://www.google.com/url?sa=t&source=web&amp

      • asymco

        I'm sure there are numbers out there, but again, the ratio of users of PCs in the workplace vs. the number of users of computer terminals in the workplace must be multiple orders of magnitude (as in factors of thousands if not tens of thousands). There just weren't many people who used command line interfaces.

      • PatrickG

        Aaron, I think you may be ignoring the applications to which the systems were being addressed, they moved from calculation to representation to documentation as the machine came nearer to the human agent in those earlier examples. Likewise in the PC era, the uses expanded from documentation to entertainment to personal utility (organizing photos, recording or playing music collections, etc.). It is as important to consider those elements as it is to consider support ratio or whether PCs were robust enough to use in business. That wasn't the primary consideration – it was having a set of documentation tools with a decent UI that was (relatively) easy to learn and then apply. Because you have to track the gradual move from analog record-keeping/documentation to electronic record-keeping/documentation.

      • Childermass

        My first computing job was looking after an ICL 1901A (small mainframe) that ran payroll for the organization that owned it. I think you could fairly categorize it as having one user (the company's payroll needs). To make it work we had 12 programmers, 6 data control clerks (one of the 6 being me), 23 typists churning out the punch cards, 4 operators, 3 managers and a driver. That makes 49 to 1.

        Later I worked with a DEC PDP-11 (supermini). Its output went to accounts, shipping, sales and management, four users. There were three of us running it at work and on call we had a team of engineers which we used one at a time, but nearly every day, so let's call it four people. The software was bought off the shelf and was supported by four guys at their office. That makes 2 to 1.

        Not long after, I bought my first microcomputer from Commodore, built to be usable by any number of people. The ratios get more complex now as I don't know how many users there were or how many people supported the machine mechanically and virtually, but I think it is safe to say there were more users than supporters. So, some to many is the ratio.

        With the iPad that ratio is now some to very many. iPads are the best selling computers ever.

        HD is right: point three is true.
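
        A quick back-of-the-envelope check of those ratios, as a tiny sketch (the head counts are the ones given above; nothing else is assumed):

        ```typescript
        // Support ratio = support staff divided by users served.
        // Head counts come from the comment above; nothing else is assumed.

        function supportRatio(supportStaff: number, users: number): number {
          return supportStaff / users;
        }

        // ICL 1901A mainframe: 12 programmers + 6 data control clerks + 23 typists
        // + 4 operators + 3 managers + 1 driver = 49 staff serving one user (payroll).
        console.log(supportRatio(12 + 6 + 23 + 4 + 3 + 1, 1)); // 49

        // DEC PDP-11: 3 in-house staff + 1 on-call engineer + 4 vendor support staff
        // = 8 people serving four user departments.
        console.log(supportRatio(3 + 1 + 4, 4)); // 2
        ```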

      • anobserver

        >> 3. Support required decreases
        >HD is right: point three is true.

        Ah, but how much support actually migrated from dedicated central teams of engineers to the users themselves?

      • Childermass

        Almost none. The time and effort an individual user puts into maintaining a microcomputer is tiny compared to full-time. On the mainframe we worked 3×8 shifts all week all year. Even with a Windows box it's not even close. Macs less and iPads hardly at all. This is HD's main thesis and he's right.

  • famousringo

    Is the future really in network services, though?

    The move to the cloud has been predicted for 20 years, and there's no doubt that as the internet becomes ever more fast, sophisticated, and ubiquitous, we've found a whole lot of new uses for it. The founding idea behind the smartphone market was that you could get network anywhere.

    But the smartphone market didn't really explode until the native app market matured for smartphones. Now we see more and more people fiddling with billions of app downloads. Instead of all media migrating up to the web as so many predicted, we see even web pages migrating down into the app.

    It seems like just having a killer network wasn't enough, you need killer local software on the device to make the network worthwhile. That's because even as the network becomes more powerful and mobile, the local device also becomes more powerful and mobile. Rather than a balance of the opposing forces of network vs. local, it's a glorious whole supported by the pillars of network, software, and hardware.

    • Hamranhansenhansen

      Yeah, the cloud idea is overrated and misunderstood. The client is part of the cloud. The whole idea of HTML5 is migrating computation from server to client. Instead of doing heavy lifting with PHP running on the server before the page is loaded into the browser, you do that heavy lifting with JavaScript after the page is loaded into the browser. You turn your 1000 clients into a distributed computer to run your app. And when the client becomes disconnected from the server, the app continues to run. Years ago, the idea of a client/server app that could run while disconnected was preposterous because the clients were on desks with wired connections. Today, the clients are in a car going through a tunnel.
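
      As a rough illustration of that shift (a minimal sketch only – the word-count task, the URL handling and the caching scheme are invented for the example), the heavy lifting runs in the browser after the page loads and keeps working when the connection drops:

      ```typescript
      // Minimal sketch: fetch content while connected, cache it locally, and do the
      // "heavy lifting" (here, a toy word-frequency count) entirely on the client,
      // even after the device loses its connection. Standard browser APIs only.

      async function loadArticle(url: string): Promise<string> {
        try {
          const res = await fetch(url);              // reach the server while online
          const text = await res.text();
          localStorage.setItem(url, text);           // keep a local copy for offline use
          return text;
        } catch {
          const cached = localStorage.getItem(url);  // in the tunnel: fall back to the cache
          if (cached === null) throw new Error("offline and nothing cached");
          return cached;
        }
      }

      // The computation itself never touches the server.
      function wordFrequencies(text: string): Map<string, number> {
        const counts = new Map<string, number>();
        for (const word of text.toLowerCase().split(/\W+/).filter(Boolean)) {
          counts.set(word, (counts.get(word) ?? 0) + 1);
        }
        return counts;
      }
      ```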

      Some people say Apple is not exploiting the cloud, but that is ridiculous. The installer for every iOS app is in the cloud, along with the music for all the iPods, the operating systems for all their devices, and a chunk of the Mac app platform, too. And WebKit is the leading HTML5 engine, the "cloud browser". Hiding someone's music behind their per-megabyte AT&T connection is misusing the cloud. Pretending bandwidth increases will be magical and free and fix all your problems is misusing the cloud.

      So the bigger the cloud gets, the more powerful your client device needs to be. You don't just suddenly use the 2020 Internet with a 486 one day.

      • Joe_Winfield_IL

        Amen. I believe the Mac App Store is the ultimate use case for the cloud. Within a few clicks, a user can buy a new machine and have it configured to their liking with all their applications – right out of the box. Files can be stored on the server side as well, where it makes sense. However, the App Store is not a "pure" cloud play in the sense that Chrome is. HTML5 is great, but doesn't get you all the way there. I would hate to buy a Google netbook, only to see it bricked every time the network goes down or a data limit is exceeded. The cloud has plenty of weaknesses, and gives way too much importance to the network. I love Netflix streaming video, but I can't use it to watch movies on the plane. News on the web is great when connected, but a client side reader is required to make the information usable offline. Photo and video editing is taxing even on a local machine; constantly moving file versions back and forth to the cloud is just silly. I agree with the article's premise that these are true post PC devices. The current use of the cloud is great, but we don't need to see everything moved online before we can refer to devices as post-PC.

    • Jons

      My argument wasn't really about moving processing power to the cloud. It's more about a paradigm shift to net-centric computing to relieve the end user of maintenance issues and to offer his computing environment on more than one device. HTML5 goes far with this but it has a long way to go to be able to replace today's apps.
      If you look at the iPad/iPhone model, even though the apps are downloaded and run on the device (mostly), your app is at the App Store, so if you get a new iPad you just redownload it. Apple is now negotiating with music content companies about letting users store their music in the cloud. That's not to say a copy won't be downloaded to the device but the drift to net-centric content is clear. The data will be principally stored on the cloud and the device may want to "cache" the data or app.
      It's been predicted for 20 years in all kinds of formats but it is happening slowly – so slowly that it's hardly noticeable.

    • JimWilson

      The tablet is Post-PC? Only if a laptop / nettop is post-pc as well. Tablets are just the latest practical iteration of mobile computing where the focus remains with the abilities of a mainline operating system and associated connectivity rather than the amount of end-user functionality that can be crammed onto a limited hardware platform. See the Asus Eee Slate EP121 as an example of the 'mobile tablet pc'.

      I personally no longer use a phone… landline, feature phone, or smartphone a la iPhone. Appropriately sized tablet PCs with "mobile" broadband connectivity have obsoleted these devices for me.

      I believe Jobs should be referring to "full OS" tablets as post-Phone devices. But then that observation would not serve Apple well at all. Yet.

  • ViewRoyal

    It's very likely that the iPad (and competing Android, WebOS, and other non-PC tablets) will eventually be "Post-PC" devices. But in their current state, they aren't there yet.

    The iPad 2 (and its competitors) is still just a mobile device. Like mobile phones or touch media players like iPod Touch, they can only exist as a computer peripheral. That is, you cannot (yet) replace your personal computer (PC) with a touch tablet or a mobile phone. If you don't have a PC, you can't use the computer-like functions of a touch tablet or a smartphone.

    In the future, iOS and Honeycomb may evolve into desktop OS replacements. When that happens, users will be able to get rid of their desktop or notebook PCs if they want to. But for now, calling these tablets "Post-PC" devices (replacements for PCs) is premature.

    • asymco

      But the article plainly and clearly says that each new generation of computing does not replace the previous generation. Post PC does *not* mean replacement for PC.

      • http://www.noisetech-software.com/Perspectives.html Steven Noyes

        Good point to emphasize. With perhaps "minicomputers" that turned into "workstations" that turned into gaming computers, the old reliables still exist but get re-tasked.

        For example, look at mainframes. They went through a dark ages as PCs replaced their traditional uses but re-emerged as the driving force in the internet.

      • unhinged

        I'm not sure your statement is accurate, Steven. The terms "mainframe" and "minicomputer" and "workstation" have changed their meaning over the years, but I would argue that nowadays we would call minicomputers "servers" because they are generally fulfilling only a small number of tasks (for example, database hosting, email routing, etc) and the original minis were cheaper, dedicated machines that took the load off the expensive but much more capable mainframe. The workstation is where a person is stationed to do the work, so it's the device by which users provide input and process output – the terminal, which these days is the (varyingly powerful) PC.

        The reason mainframes still exist is that they are more economical for hugely complex tasks than a very large cluster of PCs. When the economies change (or when hugely complex tasks no longer exist, ha!) the mainframe will die off.

        The driving force of the Internet is largely servers, or minicomputers, rather than mainframes because the economics of multiple machines are much more achievable. Can you imagine trying to build a single mainframe that could handle all that work?

    • Hamranhansenhansen

      It is a canard to say you cannot replace a PC with an iPad. In his first iPad review, Walt Mossberg, who is fairly techie, said iPad replaced 80% of his PC use, and he was using a high-end MacBook Pro day-to-day. That is replacement. A family with 4 notebook PC's can move to 1 desktop PC with 4 accounts and 4 iPads and you are minus 3 PC's. Plus the desktop acts as a server 24/7, it is a great setup.

      Also, something like 10% of iPad users never used a Mac or PC. Apple will activate your iPad for you at Apple Store, and you can use MobileMe to store your documents on iDisk, contacts and calendars and bookmarks and mail on Apple servers, watch video off Netflix and Hulu. The only thing you lose is system upgrades and backups, which are 2 things most consumer Windows PC users don't do anyway.

      Finally, you have to remember that the majority of Mac/PC users never install a 3rd party native app. They use the browser and whatever is on the system. When you arm one of them with iPad and App Store, they find themselves more capable, more powerful than with a PC, not less. And remember that most PC users HATE their PC. They despise it. They love the Web, they love Facebook, they despise their PC. Once they see the Web and Facebook on a device they love, they are eager to use it, even if it means working around a few minor obstacles once in a while.

      But forget all that, all you need to consider is that in 2006, the Web was PC only (including Mac PC's and Linux PC's) but since 2007, it is also on iPhone, iPod, Android, BlackBerry, Palm, various set-tops, iPad, and it will be on Windows phones after an upcoming update that adds IE9. All of this Web surfing on non-PC devices is PC replacement. 5 years ago, we would have had to use PC's to do it. Now, we do not.

      Even the original iPod counts. In 2001, when it shipped, I had a Power Mac G3 Blue and White in my living room running iTunes 24/7. We replaced it with an iPod that had the exact same music on it.

      And PC's are only one of the devices iPad replaces. There are some iPad apps that replace a whole device. For example, an iPad running AC-7 Core replaces a Logic Control, which is a controller for a music studio. And the iPad is 1/4 the weight, wireless, and cheaper.

  • http://theonda.org/ Antonio Rodriguez

    H—

    Great insight (to look at new platforms in the context of old ones). Still, I wonder what "consumption increases" means in this case as I think the nature of what people do with these computing devices is changing more now than it ever has before. And I am not talking about simplistic arguments like "producing versus consuming" though there is an element of truth in those too.

    For instance, consumption as you define it I think means aggregate minutes spent behind the new platform's screen but I am not sure that this is as relevant as the fact that there are many more sessions that take place in new use cases that are much different from accounting program->spreadsheet. Not quite sure how to wrap my arms around that one but I think it is worth trying to…

    • asymco

      Antonio,
      By consumption I mean, as you put it, hours spent in use. If you consider the historic pattern, the same happened in every generational change. It takes a while for the transition but eventually behavior changes.

  • davel

    "Microsofts Steve Ballmer argues that the tablet computers (aka slates, media tablets or iPads) are PCs. Steve Jobs argues that they are post-PC devices. There are analogies to trucks, cars and various metaphors for what these new devices symbolize."

    They are both right. Ballmer is right in that all the machines you list above are digital machines. They are not much different than the machine depicted in the link below.
    http://en.wikipedia.org/wiki/Difference_engine

    Jobs is right in that he is trying to create a distinction of a new class of machine along a continuum of digital computers. By taking features out and providing a different interface Apple is forcing the user to interact with a computer in a different way.

    Just as a minicomputer is the same but different from a mainframe, and just as a personal computer is the same but different from the other two, and just as the Macintosh was the same but different from the Apple II, the iPad is the same but different from the iMac.

    • KenC

      Not sure Ballmer meant that all tablets are PCs because they are digital. I think he sees anything that can support a full-blown copy of the Windows OS as a PC.

      • davel

        I am sure in Microsoft's world view you are right.

        Their world begins and ends with Windows OS and Office.

        However, my understanding was that in his view a tablet is simply an abbreviated PC.

  • mcsquared

    The iPad is definitely a Post-PC device, but not a fully mature one. The original iPod was like a kid Post-PC, fully dependent on a PC to feed and clothe it. The iPad is a teenager, ready to assert its independence, but still hitting up the PC for cash. In a couple of years he'll move out of home and meet a girl, then PC will only ever see him at Christmas and Thanksgiving.

    • Splashman

      Yep! Cute analogy!

      Or, to use the truck analogy, when people moved from the farm to the city they used their existing trucks less and less, and when it was time to buy a new vehicle, they thought about how little they used their truck for hauling stuff, and bought a car instead. When they needed a truck, they rented one or borrowed one from the cousin.

      Until 2003 or so, I never thought Ballmer was an idiot — I just thought he was a tasteless used-car salesman. He has since proved he is an idiot, first with his reaction to the iPod, then with his reaction to the iPhone, and then with his reaction to the iPad. He simply can't see the forest for the trees. Unless an open-minded non-idiot takes over Microsoft, the company is doomed — not tomorrow, but soon. They are in the truck-support business. Even now, if they jumped with both feet into the business of supporting post-PC devices, they would at least have a shot at maintaining their relevance. For that to happen, though, Ballmer would have to admit that Microsoft is no longer the center of the universe. And that will *never* happen.

  • Sander van der Wal

    Apple is an incumbent, but also the first company to make the new paradigm work for them too.

    • Hamranhansenhansen

      Apple was always more of an outsider in the PC industry than incumbent, though, at least since IBM came into the PC industry. And especially since 1997, when they bought NeXT, who were even more of an outsider. The new Apple had a vested interest in moving to a post-Wintel era, an interest in breaking the status quo. The iMac in 1998 and Mac OS X in 1999-2001 and iPod and Apple Store in 2001 were all seen as pure unadulterated crazy by the rest of the industry. Unix on a PC? Never going to work!

      The new Apple is 14 years old, and they were PC-only for only the first 4.

  • WaltFrench

    Methinks your taxonomy based on I/O overlooks what actually moved people from mainframes to minis to micros.

    Minicomputers had a sweet spot for some isolated divisions and smaller organizations, but I'll wager that many of them were sold to the same sort of large banks, corporations and schools where competitors used timesharing to provide superior throughput, capacity and even flexibility. (Timesharing started with BASIC in 1964 and kept the mainframes dominant up until the 80s.) And just as with minis, the generational change to PCs appears to me more tied to internal costing, turf wars and other such; early PCs of course had people staring at the same green-screen terminals as they did before there was Intel Inside.

    I think the generational change is from the economics of capitalized brainpower. Early computers had rudimentary hardwired knowledge and most of the expense was in the processing. Today, you have a hundred million iOS devices, each containing very roughly 100 million lines of code (as authored) that might cost a billion dollars to reproduce — but amortized over some reasonable horizon that's only $1 per device. A hundred million Androids, each carrying something more like 15 million lines, same very crude math yields 15¢ per device.

    In contrast, IBM's System 360 had something like a million lines of assembly code at introduction; price it at today's engineer costs over 20,000 machines (?) and you have $2000 of code in each one. Early 360s topped out somewhere around 1 MIPS, so the brainpower was not only more expensive but could do so much less.
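
    The arithmetic behind these figures is just total software cost spread over the number of units it is amortized across; a minimal sketch, using the S/360 numbers above and treating the per-line cost and the modern amortization horizon as assumed, illustrative values:

    ```typescript
    // Capitalized brainpower per device = (lines of code × cost per line) / units amortized over.
    // The S/360 figures come from the comment above; the per-line cost and the
    // modern amortization horizon are illustrative assumptions, not accounting data.

    function codeCostPerDevice(linesOfCode: number, costPerLine: number, units: number): number {
      return (linesOfCode * costPerLine) / units;
    }

    // System/360: ~1,000,000 lines of assembly at an assumed $40/line, over ~20,000 machines.
    console.log(codeCostPerDevice(1_000_000, 40, 20_000));          // ≈ $2,000 per machine

    // A modern platform: ~100,000,000 lines costing on the order of $1B, amortized
    // over a multi-year horizon of ~1,000,000,000 devices.
    console.log(codeCostPerDevice(100_000_000, 10, 1_000_000_000)); // ≈ $1 per device
    ```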

    (I would say that S/360 and other systems of the era marked a generational change. Its predecessors managed job queues; it juggled a blend of different workloads, some of which persisted for hours or days.)

    (I'll let somebody else estimate the brainpower captured in an i7 or ARM A9, but again, we're talking HUGE cost bases, spread over hundreds of millions, or billions of devices.)

    With radically different economics of capability, uses are also radically different. I believe that I/O is determined by purpose, not the other way around, so I would focus generation lines around uses.

    Tablets have two uses: first, as a variant of a notebook: better battery life, more portable and with a rotating screen for proper viewing of 8.5×11 format paper-emulating (and especially, multi-column) documents but a lousy keyboard and much less capable CPU for those big spreadsheets, PhotoShopping etc. They can be better or more convenient for traditional notebook jobs, but those uses do NOT define a new generation of device.

    The other uses are as a virtual musical instrument, as a fingerpainting canvas, as a skymap, as a pre-schooler's smartbook, as a surveyor's or nurse's workstation, as location-based social hubs. Those are new devices because there are NOT good alternative platforms for those activities.

    So economics of hardwired brainpower create uses that were impossible or utterly uneconomical a generation earlier. I/O follows the uses.

    • Childermass

      This is a good way of getting at the real cost of the tool but it doesn't help us understand the flexibility or the accessibility of the tool. Computers have gradually become easier to use and the effect on usability is compound – you need the tech-priests less and the tech-priests' control and exclusion diminishes. The tool is more flexible both because it actually can do more things and because more people can get at it to do their thing. The iPad is transformational not just because of the low unit cost of brainpower but because anyone can use it, as in truly anyone. Untrained and untutored. That is why there are emergent uses – the generators of these uses were denied access to the tool before.

      • WaltFrench

        “The iPad is transformational not just because of the low unit cost of brainpower but because anyone can use it, as in truly anyone.”

        This is the point I was trying to get at: the way to measure how bionic / leveraged anybody can be is how much of other human intelligence is working in the device, so that you can use it without expert guidance — rather, that the expert guidance is built in. I wanted to measure how much guidance was built in, compared to the cost of that guidance if personalized.

        I actually started my ramble with another thought and this developed. Thanks for pointing out — for helping me focus on — what I think is the key point.

      • davel

        Yes.

        Apple is well on its way to creating the toaster.

        For the most part Apple devices, esp the later mobile ones, just work.

        They are appliances. The iPhone is a web device with a phone attached. The iPad is a very portable web device. These are the electronic toasters that Steve Jobs described many years ago.

  • stephenreed

    Steve Jobs uses "post-PC" to drive a brand wedge between the previously dominant Microsoft Windows/x86 platform – the PC – and iOS/ARM – the iPhone & iPad. He wants customers to regard the "PC" as old-fashioned, behind the times, and unable to catch up. Post-PC is a brilliant marketing slogan that fully illustrates the positioning of the Apple mobile platform brand.

    Microsoft and Intel are indeed faced with the innovator's dilemma. Both are fully constrained by their legacy product lines, which respectively define their brands. Intel went so far as to sell off its ARM CPU business to Marvell in 2006. Intel's Atom, even if eventually energy competitive with ARM, will face commodity pricing rather than monopoly pricing. Likewise Microsoft already has ARM-compatible Windows CE and Windows Mobile. These do not run typical cash-cow Windows applications. In order to make the leap to the Post-PC era, both Microsoft and Intel must leave their cash cows behind, or worse, participate in their decline.

    We are at just the beginning of the Post-PC era, and I'm very excited about the multitude of sensors and potential actuators that can be coupled with, and enhance the usefulness of, smartphone and tablet devices. The GPS, accelerometer, gyroscope, microphone, speaker, cameras, and NFC are just the starting point. Expected in a few years are barometric pressure, humidity, and temperature sensors. Most of these don't make sense on a PC. Furthermore, smartphones and tablets will mesh with each other and with the increasingly intelligent devices encountered in our everyday environment.

    The Post-PC era is additionally characterized by responsive cloud services that offset the limited hardware capabilities of ARM mobile devices. For example, Google is rolling out a real time speech translation app that simply wraps up the microphone input, translates it at a server farm, then immediately sends the translated speech waveforms to the speaker output. Your Post-PC device can hear and talk. Likewise, imagine what happens when the video camera output is processed by a powerful cloud server – we have product and facial recognition, and beyond that I expect general vision capabilities to arise.
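
    A hypothetical sketch of the client side of that round trip (the endpoint URL and response format are invented for illustration; only the capture/upload/playback shape follows the description above):

    ```typescript
    // Hypothetical client side of the round trip described above: ship recorded
    // audio to a remote translation service and play back the returned waveform.
    // The endpoint and response format are assumptions for illustration only;
    // fetch/Blob/Audio are standard browser APIs.

    async function translateSpeech(recording: Blob, targetLang: string): Promise<void> {
      const response = await fetch(`https://translate.example.com/speech?to=${targetLang}`, {
        method: "POST",
        headers: { "Content-Type": "audio/wav" },
        body: recording,                              // the device only captures and plays back
      });
      if (!response.ok) throw new Error(`translation failed: ${response.status}`);
      const translatedAudio = await response.blob();  // a waveform comes back, not text
      const player = new Audio(URL.createObjectURL(translatedAudio));
      await player.play();                            // the post-PC device "hears and talks"
    }
    ```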

    • PatrickG

      I respectfully disagree with your assertion that the post-PC era is defined by cloud computing. The concept itself is just a reformulation of the remote mainframe model of the computing era, and has been delivered upon internally by most large corporations and their data centers full of file/print, database and web-hosting servers and storage devices. The key issue for taking cloud computing seriously is whether personal data can ever be protected to an acceptable level, and up-time kept within five 9s or better for the consumer population – just as the buzz reaches that necessary threshold of viability, you have a "Danger" incident, or another rolling BlackBerry services outage. Moreover your "powerful" servers are cost prohibitive for a large mass of users, and Google's ability to serve processor-intensive real-time translation will depend on a massive-scale data center to be able to serve anything close to the current computing population. No, I think your cloud has not yet acquired the necessary silver lining, which is why you see Apple putting energy-efficient dual-core processors in the iPad 2 and Motorola, RIM, and Samsung all touting the processor speed and graphics support for these devices and future ones.

      • Hamranhansenhansen

        Yeah, I agree, the Google model where they get all your data is flawed. Even just at the liability level. It's just not going to happen. Journalists, for example, are obligated to keep their notes within their personal possession at all times. They cannot interview a source using a Google cloud translator, or store their notes on a cloud server. If you create original work, you cannot send unpublished work over the Internet. Everything at Google is only a subpoena away from public record because it's 3rd party, it's already been shared once. It's much harder to get what is on your local computer, that is private, unshared, unpublished information.

        We will just have more cores that will sleep to preserve battery and wake up to quadruple your computation for translation or other brute tasks. Today's dual core ARM costs the same as yesterday's single core. Plus ça change, etc.

      • stephenreed

        Patrick,
        If one upgrades "characterized by" to "defined by" then I narrowly agree with your point. Client/Server computing is not new of course, but wireless cloud computing enables a smartphone/tablet platform to satisfactorily perform tasks that its limited hardware would otherwise preclude.

        Your analysis of cloud reliability is flawed in that it ignores the safety of cloud data and program storage vs the loss of a non-backed up local device. I hypothesize that most consumers will prefer other cloud services in much the same way as they prefer server-stored email messages today.

        There is no data I know of that supports your assertion that shared powerful servers are cost prohibitive for a large mass of users. Rather experience shows the opposite phenomenon even in the previous eras. Given that local, personal devices are intermittently used, and that communication costs are reasonably low, then shared, highly utilized resources are indeed cost effective. E.g. a traditional networked file server.

        Regarding your last point, observe that Moore's Law applies to smartphones and tablets. At this point, the hardware specifications of ARM devices are about 8-10 years behind similarly priced PCs of the time – constrained only by energy efficiency and small form factors. I expect post-PC devices to advance to quad core and multi-gigabyte RAM and beyond – following the same progression as PC hardware, but delayed.

      • sirfixalot

        I think New York City is going to be an interesting case to follow, as they decided to go with Microsoft's cloud solutions all the way.

        They're expecting to save $50 million over the next 5 years through this solution, and if NYC trusts the cloud, I would think the safety issues have largely been resolved… although I personally prefer to wait a little and observe this for a while to see if any unforeseen problems arise.

        Most of the cloud service providers have products that scale down to even personal usage, and although the savings might not scale linearly, it should still be reasonably cheaper than upgrading your own hardware now and then.

      • davel

        Why is cloud computing the second coming of a traditional hub and spoke centralized computing metaphor?

        I do not know how companies implement their cloud infrastructure, but you can certainly use any number of SIMD (Single Instruction, Multiple Data) or MIMD (Multiple Instruction, Multiple Data) architectures to model your systems. Many vanilla Unix boxes of the mid-tier variety with some sort of replicated storage and a load-balancing front end will get you away from your mainframe analogy. You can think of it as a big mainframe if you like, but the above architecture utilizes multiple copies of an OS, which can be the same or not, running off of physically and logically disparate machines.

        Personal data can be, and is, stored in some sort of structure that can be replicated over a wide area, making uptime 100%.

        The issue of security is a different issue. If you make anything easily accessible, your data is not secure. A standard 6-9 character password challenge system is not really secure.

  • 21tiger

    The post-PC is a step away from the standard keyboard/mouse layout, I suppose. Remember when the iPhone launched and Mossberg asked Jobs and Gates when we were ever going to ditch the 'same old same old' OS setup (e.g. Windows, a desktop, with menus, etc.)? Well it turns out that with touch screens we can move away from that stuff, because it's more natural to use your finger than to enter things on a keyboard.

    The iPod was referenced as a post-PC device, because it relegated the PC to secondary status (eg. when you used an iPod, you used a PC too, but only to store/back up your songs, not as the main interface).

    I suppose anything that views the PC as a 'backroom server' is a post-PC device

  • Mike.mccloskey

    If memory serves me, IBM had a famous FUD episode in the late 19th century. It involved cash registers, which they also never delivered.

  • Hamranhansenhansen

    I think too many people took “post-PC” to mean “iPad replaces PC, PC dies.” I think that reading is a symptom of the totalitarianism of the PC business under the Microsoft/IBM business computing monopoly. I don’t think Mr. Jobs meant that at all, because he included the iPod from 2001. It replaces a PC in that you can listen to digital music without a PC, but it doesn’t make you stop using a PC. It may make you have fewer PC’s than otherwise, but not zero PC’s. I think that is what Mr. Jobs meant. An end to the totalitarian PC, where every computer is a PC, not an end to all PC’s.

    There are households that used to have 4 notebook PC’s that have gone to a shared iMac and 4 iPads. They still have a PC, but they are very much a post-PC household.

    Also, anyone in the tech press who criticizes Mr. Jobs for pushing a post-PC era is not thinking clearly. His company operated in the shadow of the IBM monopoly and the Microsoft monopoly. Of course he wants to declare that era to be over. And the fact that he started the PC era gives his word some weight on this matter. There’s no denying that many people who never missed a day of PC use now go weeks without using one because of iPod, iPhone, and iPad. I never thought I would use anything other than a Mac to browse the Web, and now I hardly ever use a Mac to browse the Web.

  • kevin

    With mainframes, storage/processing was on mainframe. I/O was punch cards/paper/specialized attendants.
    With minicomputers, storage/processing was on minicomputer. I/O was a remote terminal/keyboard.
    With microcomputers (PC), storage/processing was on microcomputer. I/O was on local microcomputer with display/keyboard/mouse. Internet connection moved some storage/processing onto remote servers.
    With tablets, storage/processing is primarily on tablet, with some storage/processing on remote servers. I/O is on the local tablet with touch display and optional keyboard.

    So to be post-PC, does the storage/processing have to move primarily to remote servers?

    One could argue that the shift from the mini to the PC moved storage/processing to the local computer, and thus that is part of the definition of the shift. However, the shift from mainframes to minicomputers did not switch the location. And within the PC era, a shift has occurred, but not totally. Many young people use the PC purely to access the web (excepting some school papers*); for them, the shift has already occurred within the PC era. (* My kids use many web services for play and schoolwork. They use Google Docs for homework assignments as much as or more than they use Word or Pages.)

    • Hamranhansenhansen

      Computation and storage have always been on both client and server. All computers have computation and storage. Different apps stress the server or the client more. The Internet predates the PC. A PC that was not hooked up to the Internet was a brief anachronism. The reason NeXT was based on BSD Unix was that it was always supposed to be hooked up to the Internet, and that was 1988. Client/server is the rule, not the exception, even on the PC.

  • PatrickG

    One of the necessary apps I use consistently on my iPhone is Dragon Dictation for speech to text, especially since I can speak in a clear, largely uninflected upper Midwest accent (grin). It instantly obviated the need to use a keyboard for most of my daily, momentary input needs – from blog entries to texts and emails. It goes without saying that it serves a very general need and falls short for technical jargon or for someone with a heavier accent than mine. But it points up the "unlocking" of some of the elements that were part of the PC era. Unlike my classically desk-bound desktop (which also has speech to text on it), my iPhone is with me constantly, is instantly available to meet the need at hand, and conveniently returns to its out-of-the-way pocket status. My last evaluation session for my staff was recorded entirely via my iPhone and the dictation app, emailed to my work address, and only then did I sit down at my desk, engage the machine and cut/paste the evals into electronic forms. The true measure of post-PC is increasing integration with the human agent, moment by moment.

  • Sanand

    I think the vision put forward in the March 2 event can be thought of as "Appliance Computing". Apple was projecting the post-PC device, in particular the iPad, as a neutral platform, a "tabula rasa", for realizing any appliance (e.g. a drum, a guitar).

    In Personal Computing, users relate strongly to device and OS constructs – "my keyboard, my mouse, my filesystem, my color scheme, my launcher". Applications are like third parties that share my computer and my filesystem. And the elements of the computer (keyboard, mouse) and the OS (taskbar) are all around you, constantly reminding you that you are working on a computer.

    In Appliance Computing, there are no elements that "belong to" the OS (except the home button, in the case of the iPad):
    The OS doesn't own screen space (therefore no notification area, no taskbar).
    The OS doesn't own UI events (therefore no multi-tasking gestures).
    The OS doesn't own files (therefore no OS file management).

    In short, Apple is saying: do not think of the iPad as a "computer" and then complain about how it is not a "computer".

    • davel

      Very nice.

  • Hamranhansenhansen

    IBM did invent FUD.

  • Pingback: En bra definition av Post–PC | MaxiMac

  • davel

    I do not know the full range of devices CDC produced, but they were early leaders, along with Cray, in the supercomputer field.

  • http://bGenuity.com Martin Halford

    I like the concept. However, it could be argued that the post-PC device has more to do with the paradigm shift associated with the new means of interaction: touch. Each new generation of computing device has been easier to interact with: punch cards to terminals to mouse pointers to stylus and touch. The other aspect is the integration with non-PC hardware and the general “smartification” of previously dumb devices, from cars to microwave ovens.

  • alone_in_a_boat

    Not being tethered to a desktop? – Netbooks and laptops are still PCs

    Keyboardless? – Could still be a PC

    A personal device capable of general-purpose computing (ignoring basic phone features)? – Still a PC.

    A dumb personal device (touch screen & net access), with all the user's files & computation done remotely?
    Ah... now that's definitely a post-PC scenario.

    Wishful thinking on Apple's part.

    • asymco

      All user files and computation done remotely? Sounds like a pre-PC scenario.

      • Jons

        Exactly. Everything goes in circles (or maybe spirals).

        I think in the end most people will want a maintenance-free device. It's the best of both worlds: full functionality (like PCs), but the maintenance work is done by professionals in computer departments somewhere. Another important difference from the pre-PC era is that you're not tied to one service.

        The future of computing is not Android or iOS. It's Chrome OS.

  • Ralph Wilson

    "The first post-mainframe computers worked alongside mainframes and were typically used by engineering departments (vs. the finance and accounting departmental role occupied by mainframes.)"

    I was an IT professional (although the term "IT" was not in use at the time ;-) during _that_ era. Again, minis were not necessarily being used as an adjunct to mainframes. In fact, I worked at a systems house that used 3 minis (two of them working in tandem and a third working independently) to fully automate flight service stations, and the only contact the system had with mainframes was via a dial-up connection to download either radar images or fax images. Other than that, the systems were totally self-contained.

  • Ralph Wilson

    "The first post-minicomputer microcomputers (also known as PCs) were used alongside mainframes and mini-computers."

    Having been in the IT profession during that time, I can tell you that I was creating totally independent systems that did not rely on, nor were they used "alongside," mainframes and [minicomputers]. (Please note: there is no hyphen in minicomputer. ;-)

    I also worked with minis and micros as "just another computer" in a network of computers. Each of the various "sizes" of computer had its task and served its purpose.

    By the way, you also skipped over the "mid-range" computers (e.g. the AS/400, aka iSeries, aka System i). Where do they fit in your taxonomy and progression of systems?

  • Pingback: What is a "Post-PC" device? | Information Overdose

  • StephenH

    Charles Arthur and Killian Fox of The Guardian have quoted Horace in this article, 'How the iPad revolution has transformed working lives', though they have not linked to this post itself.

    I have attempted to share this insight with The Guardian's readership in my comment: http://www.guardian.co.uk/technology/2011/mar/27/

    On The Guardian site there's lots of uninformed debate going on in the comments. By contrast, this blog and the associated comments are a pleasure to read. Many thanks to Horace and all of his knowledgeable contributors. I have learned a great deal since I started reading it.

    Regards

  • Pingback: Post-DVD Ära: UltraViolet und iTunes Play | Digitaler Film

  • Pingback: 10 Device Pc Sites | Sarbuss

  • Pingback: Tablets are not Post-PC devices | Steadier Footing
