Moonshot

When describing the process of disruptive innovation, Clay Christensen also set out to describe the process by which visionaries develop a technology in a commercially unsuccessful way. He called it cramming.

Cramming is a process of trying to make a not-yet-good-enough technology great without allowing it to be bad. In other words, it’s taking an ambitious goal and aiming at it with vast resources of time and money without allowing for the mundane trial-and-error experimentation with business models.

To illustrate cramming I borrowed his story of how the transistor was embraced by incumbents in the US vs. entrants in Japan and how that led to the downfall of the US consumer electronics industry.

Small upstarts were able to take the invention and wrap a new business model around it, one that motivated the current players to ignore or flee their entry. They thus successfully displaced the entrenched incumbents even though the incumbents were investing heavily in the technology and the entrants weren’t.

In the image below, the blue “path taken by established vacuum tube manufacturers” is the cramming approach vs. the green entry by outsiders who worked on minor new products which could make use of the rough state of transistors at their early stages of development.

[Chart: the path taken by established vacuum tube manufacturers (cramming) vs. the entry path taken by outsiders building new transistor-based products]

The history of investment in transistor-based electronics shows how following the money (i.e. R&D) did not lead to value creation; quite the opposite. There are many such examples: the billions spent on R&D by Microsoft did not help them build a mobile future and the billions spent on R&D by Nokia did not help them build a computing future.

There are other white elephant stories such as IBM’s investment in speech recognition to replace word processing, the Japanese government’s spending on “Fifth Generation Computing” and almost all research into machine translation and learning from the 1960s to the present.

But today we hear about initiatives such as package delivery drones and driverless cars and robots and Hyperloops and are hopeful. Perhaps under the guiding vision of the wisest, most benevolent business wizards, breakthrough technologies and new infrastructures can finally be realized and we can gain the growth and wealth that we deserve but are so sorely lacking.

But the failure of crammed technologies isn’t rooted in a lack of wisdom. It was the wisest of minds which foresaw machine learning, advanced computing, mobility and convergence coming decades before they came. It was their wisdom which convinced others that resources should be spent on these initiatives. And it was the concentrated mind power of thousands of scientists, spending hundreds of billions in academic and government research.

What failed wasn’t the vision but the timing and the absence of a refinement process. Technologies which succeed commercially are not “moonshots.”[1] They come from a grinding, laborious process of iteration and discovery long after the technology is invented.

The technology is one part of the problem to be solved, the other is how to get people to use it. And that problem is rooted in understanding the jobs people have to get done and how the technology can be used defensibly. That’s where the rub is. An unused technology is a tragic failure. Not just because it has no value but because the resources (those beautiful minds) used in making it could have been applied elsewhere.

Building usage means imparting and retrieving learning through a conversation with the customer. That conversation is best spoken with a vocabulary of sales and profits. Without profits the value is unclear. Nothing is proven. Without economic performance data, the decision of resource allocation becomes a battle of egos.[2] The decision may be right but, going by history, the odds are low.

Incidentally, timing is the other element that is key to success. It might seem that timing really is a matter of luck. But timing can be informed by the same conversation with the customer. As you observe adoption you can also measure how long it takes for a technology to be adopted. You can do A/B tests and see what is faster.

The most reliable method of breakthrough creation is not the moonshot but a learning process that involves steady iteration: small but profitable wins. A driverless car might be achieved, but first a driver-assisting car might teach the right lessons. An electric car might be achieved, but first a hybrid car might teach the lessons needed. A delivery drone might be achieved, but first a programmable UPS truck might be a better way to learn.

And finally, Android (née Linux) in 2005 might have been foreseen as the future of mobile operating systems but it took the learning from iOS to shape it into a consumer-friendly product. Even when you see a moonshot work, you realize that a lot of learning had to have taken place. It’s like the story of an overnight success that took a lifetime of perseverance.

Notes:
  1. It could be said that the space and nuclear weapons programs during the Cold War were the original “moonshots” and they succeeded. But although successful in their goals, those goals were not commercial value creation and we are left with little to show for it. See also Concorde.
  2. This is how most research is allocated today.
  • willo

    AirShow and this post about delivery drones. It’s a lot easier to autonomously fly drones to certain locations than it is to get a programmable UPS truck to deliver.

    Drones have made heaps of improvements in the last three years. With GPS, sensors, etc. – they are able to fly from A to B unmanned easily and with tons of redundancy. Even a storm wouldn’t be a problem as it’s programmed to adjust for variations in milliseconds and will remain perfectly levelled throughout the flight.

    Roads pose a much higher risk than flying drones at this time. Amazon was spot on, but people are clueless as to just how far this tech has come in the last three years.

    It’s not hard to create enough redundancy to make an almost impossible to crash drone, and you can easily detect humans/avoid obstacles. The FAA might want to restrict the use of drones for reasons other than safety. Security/privacy comes to mind. Plus the fact that a careless pilot might interfere with air traffic.

    • Luis Alejandro Masanti

      quote:

      “It’s not hard to create enough redundancy to make an almost impossible to crash drone,”

      I have worked in the aeronautical, nuclear and systems businesses, and my experience shows me that “it is really, really hard to make anything almost impossible to crash.”
      Remember Three Mile Island, Chernobyl, O’Hare’s L-1011 crash, the Shuttle disasters, the big electric blackouts, and… recently, Obama’s HealthCare.

      • willo

        The Octo drone Amazon showcased on 60 Minutes will still be able to make it home even after 4 of its 8 rotors fail. It’s running DJI’s WKM and can rely on GPS, and even offline GPS for some period of time, for guidance. I would not be surprised if it even had multiple LiPos for redundancy. The chances of that drone crashing or falling down are slim to none, and it’s just a prototype.

        Now the FAA most likely will not approve it, but it’s not a far-fetched idea from Bezos. Horace seems to think autonomous trucks are easier to implement; I beg to differ.

      • EW Parris

        So, looking at Horace’s chart above, wouldn’t that sort of dependability and reliability start to beg the question, “Why not put humans in those drones?” Will Amazon end up in the flying car business?

        It seems like the sort of progression that Japanese electronic entrepreneurs succeeded with.

      • obarthelemy

        Energy. Limited, Hard to transport. Dangerous. Polluting.

        I’m fairly sure successful mid-or even short-term innovations will be in the field of using less energy, not more.

      • http://www.asymco.com Horace Dediu

        I did not make a comment on ease of implementation. My claim is that improvements in road-based transportation are likely to be more commercially viable, and those commercial results will yield the fuel (in terms of information and profit) which will result in a more reliable path to autonomous vehicles.

      • willo

        Horace I was referring to the latest 5by5 show comment about it. Sounded like it was not very feasible at this time.

      • obarthelemy

        How does it handle gusts of wind? Rain? Lightning? Birds? Smog? Construction cranes?

        The hard stuff to handle is the externalities, not single or quadruple engine failure.

    • http://twitter.com/LunaticSX Lun Esex

      Sure, Amazon’s prototype aerial delivery drones already work as a technology demonstration under ideal conditions. That’s miles and miles, many years, and many big hurdles away from practical usage, though.

      Isn’t that the point of the OP?

      It’s not like there haven’t been many other impressive technology demonstrations before that faced insurmountable obstacles to large-scale production and adoption. There were real-life working jetpacks and flying cars way back in the 1960’s, for example. :)

      “Ah, but this time it’s different!”

    • http://www.noisetech-software.com/Home.html Steven Noyes

      As someone who works in avionics, I see serious safety concerns with the prototype shown by Amazon that would need to be worked through. One of these is not hard to manage; thousands is a different story, especially in Class A airspace.

      The other issue was the staged video. If you are using electric motors (God forbid you have 8 gas engines whining away, pumping two-stroke fumes into the air), your range is very limited. With a light payload, you are talking about 2-3 miles with the prototype shown. Wind will absolutely be an issue as these are very susceptible to wind shear, updrafts and microbursts. Even when they stay fully stable, strong winds will increase power usage and diminish their already short range.

      It was a neat tech demo. It still has lots of hurdles to be viable.

      • Peter

        One of the non-trivial hurdles is exactly how to specify the precise target. GPS is good enough to get it to my house and with help it could get it onto the porch. But how can I specify the porch as the target?

    • StevenDrost

      It’s clearly possible and likely will happen at some point, but I think Horace was more focused on the commercial viability.
      It does not make any sense if it costs ten times as much as using the existing infrastructure. To fall back on Apple, their success is not due to inventing new technologies, but rather to choosing to integrate existing technologies at a point where they can become commercially viable.

  • Robert Heiblim

    Good post, I’ve seen this up close. I’ve been guilty of being too far ahead in the performance of items developed, and I’ve also seen the need to build on small wins. Moon shots are great, but what happens after is even more important.

  • Ernest

    One thing I learned early in my career is that the fewer resources people have, the more creative they have to be. And necessity, not more money, is the mother of invention.

    I also observed the converse: throwing more money at a problem often yields inefficient and mediocre results.

  • donsleeter

    This article helps to explain why biotechnology has not become a major established market for products or services in over 30 years of boom-bust investment and failure. We have seen some dramatic technical achievements, but few, if any, sustainable businesses. Probably because the “conversation” you talk of with sales and profits is debased by grant funding and academic/educational goals as the primary metric of output. Clearly we are still in the battle-of-egos stage in biotech. With the moonshot (the human genome) behind us, sustainable incremental improvements in products and services are still over the horizon. Biotech is still looking for the conversation you speak of, with a disruptive low-performance yet profitable start.

    • Xavier Itzmann

      Highly regulated market. Extremely tall entry barriers caused by government diktat. Difficult for any one person, startup, crowdfunding effort, or even VC to get anything off the ground. The only entities able to move the ball forward are deep-pocketed Big Pharma, but Big Pharma is like the U.S. vacuum tube industry. It might be centuries/millennia before biotech is deployed to humans (i.e., until economic/government paradigms change).

      Where has biotech actually brought results? GMO agriculture. Even though the barriers are enormous, at least there is less regulation than in human care.

      • Knowles2

        Where has biotech actually brought results?

        Medicine, especially the production of insulin.
        The problem with biotech is also the price of the technology just to enter the field.
        And given the capacity for biotech to go very wrong, it needs to be highly regulated in my opinion.

      • Ian Ollmann

        Usually what happens is a small biotech comes up with the advancement, but rarely has the capital to take something through Phase 3 clinical studies, or frankly the manufacturing facilities or drug sales organization. They license the discovery to big pharma, who for hundreds of millions takes it the last, very expensive mile and assumes some of the FDA approval risk.

        Biotech is unlikely to go “very wrong”. It is playing with the same toolkit that Mother Nature + cosmic rays + retroviruses + random whatnot have been using for billions of years. The only difference is who is doing it. I am a lot more comfortable with biotech than I am with the idea of so many pigs, humans and chickens living together in Southeast Asia.

        I’m guessing you haven’t done much biotech work.

      • Walt French

        I no longer see an acquaintance (kid’s friend’s dad) who’s the Chief Scientist at one of the world’s biggest biotechs, but I’m pretty sure he’d agree that just about ALL pharma advances in the last couple of decades are biotech. Kinda by definition.

    • crocodilechuck

      Obtaining the one-dimensional sequence of DNA is not a ‘moonshot’. It was just ‘one small step for man…’

      Understanding the fine grain detail of how gene expression/regulation actually work at the molecular level, and the corresponding detail of that of our proteome, will take 40-50 years.

      More like Monsieur Dediu’s notion of ‘steady iteration’.

      By the way, the space program (‘moonshots’) was a dead end.

  • dicklacara

    Tired of getting my posts deleted… Won’t bother anymore… Sigh!

    • http://www.asymco.com Horace Dediu

      Which posts are being deleted? I have no record of deletions of anything you wrote.

      • collins RUDZUNA

        Can’t believe you fell for a troll. Lol

  • handleym

    Horace, you left out two important points.

    On the negative side, we have what I call the “totalizing” impulse within companies, the insistence that the new thing BE an updated version of something that already exists, not a clean version of what it is.

    This sounds a bit abstract, so let me give examples:

    – We have Intel insisting that its mobile CPUs be full x86’s even though that makes no sense because the target market doesn’t care about legacy Windows software, and being a full x86 brings a massive burden in engineering complexity to the problem.

    – We have Microsoft insisting that its mobile OS be “desktop” Windows; and since that’s not feasible as long as “desktop” Windows is a mouse/keyboard OS, let’s change “desktop” Windows to no longer be that.

    – We have Samsung getting the idea of a smart watch, but then insisting that it run Android even though that’s a wholly inappropriately large OS for the target because, hey, Android is how we do mobile.

    As examples in the other direction, we have Apple with both the iPod and the iPhone accepting that what they were building was NOT some sort of mutated Mac, and so allowing the target to be what it should be, not forcing it to be a weird Mac. For a non-Apple example, we could look at the IBM PC which, for all its (god knows there were many) flaws, was again allowed to be what it was, not to be some sort of weird System 370 terminal. (And, of course, IBM’s role in the PC space started diminishing as soon as the drive to convert the PC into that role got serious.)

    For my second point, I’d simply refer you to this magnificent essay:
    http://www.ribbonfarm.com/2012/05/09/welcome-to-the-future-nauseous/

    the point of which is that technologies succeed with customers (as opposed to succeed in taming the physics, or in selling to users with no choice like the military) by “converting insanely weird things into what appears to be normal and familiar”. So a commercial airline, for example, does not flaunt the fact that it’s doing something fundamentally amazing; it makes a flight feel like a bus ride.

    It’s not easy to appreciate this manufactured normalcy because in the cases that work, the result becomes normal! Of course I can fly into a city on the other side of the world, pull out a small device from my pocket, and immediately get a map of where I am, where I want to go, and directions to get there! (Yet for Douglas Adams in 1978, something like this was a product of the far distant ridiculously advanced future.)

    It’s easiest to see this in cases where there have been spectacular failures.
    One is the whole OLE/OpenDoc/Publish and Subscribe thing from the 90s. Plenty of companies tried, but they could not establish any sort of mental models that allowed users to “get” the point of the technology and fit it into their workflows. (This is not the same thing as saying that the technology solved a problem that didn’t need to be solved. There IS a need for publishing information which can then change, and in fact we are using such a tech right now: a URL is, in a sense, the SUCCESSFUL version of Publish and Subscribe.)

    A second such case is Google Wave, where again, everyone agreed that the tech seemed to be interesting, and maybe one day might be useful, but Google never bothered to put in the effort to “normalize” it — showing how it was a natural augmentation of our existing lives and what it could do better.

    • Insider

      The examples of Intel, MSFT and Samsung are actually illustrative of the lack of vision. Or rather of a process where the “corporate antibodies” (as Horace used to call them) are trying to preserve the current system by re-purposing it:
      Intel makes an x86 CPU power efficient, MSFT takes its desktop OS and morphs it into a mobile one, etc. One issue is clearly the timing: if they had done this in 2007, the mobile landscape would have looked entirely different today. They had no vision to do it until Apple showed them how it’s done. Or if they had vision, the corporate antibodies were efficient enough to derail it. The moves from Intel and MSFT are reactions to the massive threat to their core business models and their very existence. None of them has proved innovative in its approach. You cannot solve a problem by following the same path that led you to the problem in the first place. If you have a hammer in your hand everything looks like a nail: if you have a desktop OS, throw it on a mobile device; if you have a desktop CPU, throw it into a cell phone.
      Apple clearly didn’t do that, as they weren’t desperately trying to make Mac OS survive since it was almost dead anyway.

      • Walt French

        I like a couple of your points but feel compelled to make some tweaks.

        First, Intel reported that Apple came to them and asked for a good price on a CPU that Intel was then selling — an XScale processor that was shortly afterwards spun off to Marvell. The excuse was that they just didn’t imagine the volumes necessary to make a business out of the low price Apple was willing to pay. (The CPU apparently was quite similar in performance to the Samsung CPU that Apple settled for.) This is a bit of an “antibodies” story, but more a combo of traditional low-end disruption, combined with the Fog of Business. Le Sigh.

        Maybe John Siracusa or somebody else will look at how OS X has evolved over the years. Installing the Mavericks upgrade, and the Server app on my basement machine, reminded me how totally different the user experience is from the original 10.0 and 10.1 versions. (Which I never used, staying with the original Mac System SW until I got a machine more suited.) But while a lot of the UI concepts are very familiar to Mac users, the overall user experience today is nothing like the days of yore. The Mac OS is alive and well, although in a very different form.

    • Walt French

      @handleym wrote, “We have Intel insisting that its mobile CPUs be full x86’s even though that makes no sense…”

      My immediate reaction was extremely positive. Thank you! But I think you may have pulled up a bit short: isn’t the real problem that Intel’s existing customers demand full X86 compatibility, and new customers, e.g., Samsung, don’t actually give enough of a damn to point out the mismatch between the offering and the needs?

      Intel has shown repeatedly how poorly it understands customers’ needs; the fact that they have no significant OEM relationships in mobile simply makes it worse.

      The same way that Michael Mace called out Microsoft’s lack of customer focus today (at http://mobileopportunity.blogspot.com/2013/12/has-microsoft-gone-nuts.html ), you’ve nailed Intel’s challenge, in just a few words.

      Beautiful minds, wasted. Indeed.

      • charly

        The big advantage of x86 is that it is much more open than ARM. Not so much from the perspective of the OEM, but it is absolutely true from the perspective of end users. On true x86 (this incorporates more than just the CPU) there is an expectation that one can run (maybe badly and not fully functionally) a two-year-old version of FreeBSD without writing a single line of code. That is simply not true on ARM.

        This openness is a big competitive advantage of x86. The problem is that it comes at an energy cost.

      • Walt French

        Boy, I’m sure not understanding why FreeBSD is a Big Deal® in the mobile space.

      • charly

        It is not. The PS4 is based on BSD, IIRC, so it would not surprise me if their other game stuff is also BSD-based. But I used it as a way of saying that even unpopular operating systems have an expectation of running on standard x86. With ARM there is no standard ARM and no expectation of getting it working.

    • Chandra

      I am late to this excellent party, but better late than never.

      I liked your ‘manufactured normalcy’ characterization. Like mechanical engineering, electronic engineering and software engineering, we need to think of ‘normalcy engineering’, which is a meta-engineering process. One aspect of it is the user interface, but not the only aspect. Reliability and price point are part of this ‘normalcy engineering’, and so is safety where it applies. I also think the process of ‘productizing’ is similar. Anyone who has worked on bringing sufficiently complicated technology to product knows well that 90% of the work is in perspiring to complete that last 10%, so we create a functional veneer that hides all that complexity and focuses on the ‘job to be done’.

      And normalcy engineering is related to the S curve as well, since what is normal for early adopters is not the same as what is needed to ramp up on the ‘S’.

      You have correctly pointed out where such normalcy engineering achieved through co-opting existing technologies did not work. This points to the problem with such analysis, where both success bias and failure bias are abundant, and cutting through all that to come up with some principles that are actually actionable is monumentally difficult. (Just to add to the failure bias, two cases where ‘building on what you have’ actually worked are Windows 95 and the Intel Pentium. Remember, at that time it was not clear if building things on top of CISC was the right move. In retrospect Intel seems to have read the market properly (or lucked out, who knows) and won that war.)

      Regarding mobile, I wonder why Intel did not just buy ARM. They knew that Apple went with ARM before everyone else knew. Would it have been blocked on anti-trust grounds? They could have picked it up for quite a cheap price, especially any time during 2004-2008, when Intel’s currency was pretty good and ARM was in a long-term steady pattern. Of course, this is just after-the-fact curiosity, but if the history of the transition to mobile is written I would like to know whether Intel thought about it and, if so, why they did not pursue it.

  • Gene Grush

    Thanks for the article.

    I believe a number of companies waste a good portion of their R&D on unfocused technologies without proper success criteria established. For example, Google Glass may turn out to be a very successful product, but I don’t want one: I don’t want to wear glasses if I don’t have to, other than taking pictures I don’t see what it is good for, and yes, I can surf with the eye display, but I don’t want to always have to talk to my computer to accomplish this. A marvel of engineering, but did they put themselves in the consumer’s shoes and ask why they would buy this product? Was their R&D on Google Glass really focused on a consumer product people wanted, and did they establish the right success criteria?

    On the other hand, the core technologies for making the first iPhone (minus the OS) were basically there; the open question was what the consumer would want before buying one. I don’t buy Apple because it’s Apple, I buy it because I know that their devices enable me to utilize their computing power to the maximum possible extent for managing photos, music, movies, budgeting, taxes, surfing, e-mail, calendar and contacts. This is their vision and mind for their product. It makes my life easier. When they developed the iPhone I am sure that their success criteria were at least: 1) the virtual keyboard must be very usable, 2) switching between applications must be accomplished with only a few hand movements and be straightforward, 3) the device must have most of the functions you currently have on computers (e-mail, calendar, surfing, contacts, games, music/movies, phone, camera, maps, etc.) for full mobility, 4) significant use per battery cycle, 5) an OS that could accomplish all of the above, and 6) growth for new functions. They were not going to market a device until it met these demanding gates. They believed that if they met all these criteria then the consumer would buy their product.

    The other important element for R&D is vision. This may be viewed differently by different people, but I believe it to be the capability of seeing all the basic building blocks available and the numerous possible product/design capabilities that could be accomplished, as well as the path to the delta design work needed to realize those products. A successful visionary must also be capable of seeing themselves as the end user and knowing whether they would truly use the product they envision. Can you connect the dots?

  • Walt French

    “If we knew what it was we were doing, it would not be called research, would it?”—Albert Einstein

    Now, of course he is talking about real research, not the development that is actually what goes on in early stages of a business forming a technology or product, but I think the notion stands: there’s a LOT of uncertainty.

  • bob

    Do Facebook’s, Google’s and Apple’s foundings count as moonshots? Or are you just saying this applies to sclerotic companies? How about Kindle, with 1M books and 60-second downloads: was that a moonshot? Or do you mean it is only a moonshot if a PayPal founder starts a battery-powered car company?

    • mjw149

      Let’s take those in order.

      Facebook is not a technological achievement; it is/was a social club. And as a technology it wasn’t the first one either; even though parts of it were innovative, it didn’t succeed due to tech.

      Google wasn’t the first search company and their primary innovation was the search keyword auction, a business model. Again, though they made many technological innovations, they weren’t the first search engine.

      Apple wasn’t a moonshot; it started as a hobbyist computer. The moonshot Lisa failed, the more affordable Mac succeeded. Fitting Dediu’s theory again.

      GM had an electric car in the 80s, brilliant tech. But the Prius succeeded. MS had a tablet before Apple. Lots of people had voice recognition before Siri. etc.

    • http://www.asymco.com Horace Dediu

      A moonshot is a deliberately targeted opportunity considered beyond reach using existing technology.
      A successful technology business is usually the result of a learning process whose end-goal is unknown when the development is started.
      Facebook, Google, Apple were not founded with the ambition to achieve what they actually achieved and they were not deliberate endeavors.

  • Michael Davis-Burchat

    Thanks so much for the term of art, Horace.

    I have struggled to explain this concept – amidst fellow innovators – for maybe a decade now. And in hindsight ‘cramming’ is very descriptive of how one’s thinking fails them. And by extension distorts the collective logic of their company.

    I intend to start using this idea tomorrow. And I hope to hear it more often in the future. It would save people a lot of wasted time, hope and energy.

  • Sander van der Wal

    I like the use of Unix here. On the one hand, it was born because a “moonshot” development, Multics, failed, and a couple of the engineers who needed a time-sharing system wrote Unix so they could make a PDP-7 or whatnot do useful work. Being programmers, they got exactly what they needed, a system that supported programmers.

    And on the other hand, the success of both iOS and Android is based very much on them being not Unixy at all. Neither for the user, who would interact on a proper Unix system using a terminal and by writing scripts to automate much of their routine work. Like scientists running their data processing pipelines.

    And also not for the programmer, as one can write all kinds of interesting and usable apps without ever doing a proper fork() and exec(), the two system calls that were invented by Unix, the system calls that are to Unix what multi-touch is for iOS.

    Unix as a software system has managed to escape from the clutches of its users. But that isn’t because it is Unix. Linux and BSD, the parents of Android and iOS, are internally very different. POSIX, the API that defines Unix, is not used in iOS and Android. Why then still call it Unix?

    Or attribute the success of iOS and Android to it being Unix?

  • Jessica Darko

    Regarding the footnotes: the space program was a “moonshot”, but it was not a success, if your definition of success is economic development. You talked about the tragedy of beautiful minds being diverted from productive economic work… well, the space race had that in spades. A trillion dollars and all those minds focused on building single-use, uneconomic launch vehicles, etc.

    If we had not done that, and those minds and money had been left in the economy, we would have had the personal computer, internet and cellphones a good 10-15 years earlier.

    • charly

      Deranged economic theorists would believe that, but in the real world it would have been 10-15 years later without the space program. I see the moon program more as a military than a scientific program, and as a military program it was cheap and good for direct economic development.

      PS: PCs were common in 1982, but Apple started in the mid-70s. Subtracting 15 years, I get a date before the moon project started.

    • http://www.asymco.com Horace Dediu

      I made the qualification that it was a success in terms of the goal set (reaching the moon). That was not an economic definition of success.

    • Evan Thomas

      The US computer industry was created by the military, including the space program. Post WWII, none of the major US corporations, including IBM, Westinghouse and GE, wanted to make computers for the Pentagon, so they set up Seymour Cray by paying half up front and half on delivery. IBM got in years later when computers became commercially viable thanks to military-funded research. Silicon Valley sprang up around the space program. The Japanese computer industry lacked a military willing to pay outrageous amounts for products with no apparent commercial viability; consequently they have been non-players in high-tech innovation. The myth that the free market is why the US is so dominant in tech ignores the genesis of high tech. It’s impossible to imagine where we would be in terms of tech without military spending, but supposing that market shares of the slide rule market might be seriously evaluated today is not too far-fetched.

      • http://www.asymco.com Horace Dediu

        There is a lot of myth and lore in this subject. The data on how the industry developed is out there. To begin I suggest looking at the following presentation on the history of Silicon Valley: http://www.youtube.com/watch?v=ZTC_RxWN_xo

    • Walt French

      I recall that years ago we cited Tang and Teflon as the valuable offshoots of the effort to put men in space and on the moon. Computers would’ve been too esoteric to cite. Our values have changed quite a bit.

      What HASN’T changed is the relatively low share of investments that get captured by the public. This is regardless of public or private investments. But it’s hard to see engineers not employed in NASA/military work as blowing up the envelope working at RCA or Zenith.

    • Will

      That money was left in the economy. Paying scientists, creating technology… all of the money spent remained on Earth.

  • ronin48

    Christensen is no genius. Why do people keep citing him? Because of one gimmicky and obvious book?

    He’s said Apple’s business model was flawed and that they would fail. How much more wrong could he have been?

    Christensen called for Apple’s decline and said its business model (integrated hardware and software) would lead to their fall.

    http://www.businessweek.com/stories/2006-01-09/how-apple-could-mess-up-againbusinessweek-business-news-stock-market-and-financial-advice

    That was in 2006 and in the face of overwhelming evidence to the contrary he doubled down again on this prediction in 2010.

    http://www.newsweek.com/predicting-innovation-winners-and-losers-71451

    How does he keep his job and keep publishing? Why are people paying to have him speak?

    http://notes.kateva.org/2011/10/clayton-dilemma-christensen-apple-will.html

    • N

      Even Steve Jobs made mistakes. You need to look at someone’s entire body of work. Even when data is used, business theory is touchy feely because of the human beings who generated the data. It’s not like running a scientific experiment.

      The space program actually spawned a number of industries thanks to advances in materials science, rocket propulsion, etc. It also inspired a lot of people to become engineers. It was government spending at its best.

    • http://www.asymco.com Horace Dediu

      Christensen’s theory can be correct even if he misapplies it. You seem to be confusing the man with the idea.

      • ronin48

        It seems hard to believe that a theory’s originator would misapply his own theory – multiple times. If true, however, it would certainly be an indictment of both the theorist and the theory.

        Maybe it’s not really a theory but more of a meme or simple description masquerading as a theory.

        Good theories don’t simply describe a few past events but can, to some extent, predict future similar events or at the very least hold up when applied to data outside that which was used to develop the theory.

      • preclude

        Requiring that a person never have been wrong before you take them seriously is going to preclude you from ever taking anyone seriously.

      • r00fus

        No one is requiring it – but people are making CC out to be much more than he is – he has a plausible-sounding explanation of why disruption happens, but then goes on to make bad analyses.

        So his theory is nice, but he can’t make good predictions out of it? Not very interesting.

      • victor

        Please take a look at The Innovator’s Manifesto by Michael Raynor to be aware of the predictive power of the theory.

    • Martin

      Theories require data to function. That’s easy in science, where the data is there for anyone willing to make the effort to find it, but hard in fields like business, where the data tends to be kept secret. Looking in from the outside, CC could not see what Apple was doing, but Steve Jobs (almost uniquely) could. Jobs would never advance the theory publicly, because that would undermine Apple’s business advantage. By contrast, CC cannot measure the theory accurately at Apple, because the data is kept so secret by Apple. Only in hindsight, when that data comes out, can you say ‘Aha! That theory turned out to be right after all, but it could not have been predicted because the data needed to make the prediction was unavailable at the time’.

      Disprovability and repeatability are critical elements to scientific theory, but they simply aren’t always possible in messier endeavors. That doesn’t make the theories useless – it simply makes them difficult to apply unless you happen to be in the charmed position to have access to that knowledge. We routinely call those people billionaires.

      In the examples you cite, CC appears to lack the technical imagination to foresee new applications for tight integration. Few people really possess that. Elon Musk would be a great example of someone who does. Horace and a few others have been driving the point of Apple’s relentless incrementalism which eventually leads to emergent applications (5 years development on batteries, screens, multitouch, sensors, processors eventually yields iPhone). There’s quite a good talk about that here: http://www.youtube.com/watch?v=C4IJXAVXgIo related to the development of autonomous flying machines – nothing individually revolutionary but relentless incremental advances in several areas eventually created a whole new market to work in. It’s a very difficult thing to predict, particularly if that incremental development is happening in secret.

      • ronin48

        CC didn’t recognize or allow for bad or missing data. Yet he wasn’t shy about spouting theories and criticizing Apple’s approach again and again. And again and again he was wrong.

        And if you read what he wrote about Apple, he criticized things that were not secret and made his predictions based on known Apple information. And he was wrong.

        So, to the original point, why is CC held up as such an expert and visionary if he is so obviously neither?

      • http://www.asymco.com Horace Dediu

        He is not held up as a visionary or expert. He is held up as a theorist.

      • ronin48

        I’m not sure the semantic distinction matters. The point is that his theories and his theorizing failed. When a theory is wrong it should be discarded and replaced. So why do we continue to look to him and his theories for insight?

      • http://www.asymco.com Horace Dediu

        Of course it matters. A theorist devises theories. An analyst depends on them.
        The measure of a theory is not the accuracy of its application. Theories have anomalies. Building theories is a process of understanding where they don’t apply and adjusting iteratively. You don’t discard Newtonian laws of motion because you discover anomalies around the speed of light. The theory is adjusted.

      • ronin48

        I respectfully disagree. When the theorist and his theory fail in fundamental ways on multiple occasions, more than an “adjustment” is needed and the result is replacement. And I’d never compare CC with Isaac.

  • Yes Again

    Well, I am not for or against either moonshots or small wins. I am for a research portfolio that has both. A completely small-win approach is going to deliver, well, small wins, is it not? A portfolio that is balanced (say, to the risk preference of whoever owns it) may pay off more handsomely, no? Picking your moonshots, now that is interesting in such a portfolio. Assigning market value to such things sort of defeats the purpose. There is something base about it all being about money. I am not naive; of course it’s about money, at every turn. This has driven much basic research into the not-for-profit realm. Maybe that is good, maybe not. As a ‘system’, I suppose not… Bottom line, there is room for both. Will people pick wrong? Yes. Is ‘wrong’ temporal? Yes again.

  • poke

    Companies that set up so-called “skunk works” operations tend to forget that the original Skunk Works had customers and worked closely with those customers. When it operated as a mere technology incubator it tended to produce duds. Its major successes were produced for the CIA and the Air Force.

    Apple is really the moonshot idea turned on its head. It has the “skunk works” at the top and treats traditional operations as part of the product being created. You have to wonder why, if producing a defined product en masse is a solved problem, any company would see fit to put bean counters and salesmen at the top. Product development should be “C-level”, including prototyping, and the process of turning out millions of those products, marketing them and selling them, etc, should be assumed. It should also be headless and hence killable once the market for the product dries up, regardless of how long that takes.

  • Tim Sweetman

    The (Apollo) moon shots were arguably not Moon Shots. There was a customer, a fixed schedule, iteration (Mercury, Gemini, Apollo…), and the determination to build something bad first. Throw THREE rockets away to get to the moon? Return by burning half the spaceship away in the atmosphere, then slam into a poorly specified point in the ocean? Breathe pure oxygen, and hope not to catch fire? Terrible, yet marvellous.

    Mostly, though, this reads like a paean to iteration and feedback, and a jeremiad against big design up front, possibly against research. (Contrast: http://worrydream.com/dbx/)

    What about the (totally bizarre) history of the Internet and the web? Incremental, yes, but with what business model? Internet development was substantially government-funded, World Wide Web was a hobby, Hypertext was a quixotic dream.

    Are there any successful examples of Moon Shots that started with big R&D and became commercially successful? (French nuclear power?) (The Internet, in general?) (The Boeing 747?)