Ten years ago: Clayton Christensen on Capturing the Upside

You can hear this as an MP3.

[It’s important to understand just how much the theory has evolved in the last 10 years. Much more perhaps than in its first eight.]

Doug Kaye: Hello, and welcome to IT Conversations, a series of interview recordings and transcripts on the hot topics of information technology. I am your host, Doug Kaye, and in today’s program, I am pleased to bring you this special presentation from the Open Source Business Conference held in San Francisco on March 16 and 17, 2004.

Mike Dutton: My name is Mike Dutton, and it is my pleasure to introduce to you today Clayton Christensen. Professor Christensen hardly needs an introduction. His first bestseller, “The Innovator’s Dilemma,” has sold over half a million copies and has added the term “disruptive innovation” to our corporate lexicon. His sequel — and you have to have a sequel to be a management guru — is entitled “The Innovator’s Solution” and is currently on Business Week’s bestseller list. Professor Christensen began his career at the Boston Consulting Group and served as a White House fellow in the Reagan administration. In 1984, he cofounded and served as chairman of Ceramics Process Systems Corporation. Then, as he was approaching his 40th birthday, he took the logical step of quitting his job and going back to school, where he earned a doctorate in Business Administration from Harvard Business School. So, today he is a professor of Business Administration at Harvard Business School, where he teaches and researches technology commercialization and innovation. Professor Christensen is also a practicing entrepreneur. In 2000 he founded Innosight, a consulting firm focused on helping firms set their innovation strategies. And according to a recent article in Newsweek, “Innosight’s phones ring off the hook, and the firm cannot handle all the demand,” very similar to all the startups in open source here today. So, please join me in welcoming Clayton Christensen.

Clayton Christensen: Thank you, Mike! I’m 6 feet 8, so if it’s okay, I’ll just… the mic picks up okay. I’m sure delighted to be with you, especially because there is a blizzard in Boston today; my kids have to shovel the snow!

As Mike mentioned, I came into academia late in life, and the first chunk of research that I was engaged in was trying to understand what it is that could kill a successful, well-run company. And those of you who are familiar with it probably know that the odd conclusion I came to was that it was actually good management that kills these companies. And subsequent to the publishing of the book that summarized that work, “The Innovator’s Dilemma,” I’ve been trying to understand the flip side of that, which is: if I want to start a new business that has the potential to kill a successful, well-run competitor, how would I do it? And that’s what we tried to summarize in the book, “The Innovator’s Solution.” It’s really quite a different book than the “Dilemma” was, because the “Dilemma” built a theory of what it is that caused these companies to fail. And for the writing of the “Solution,” I’ll just give you an analogy for where we came out on how to successfully start new growth businesses.

I remember when I first got out of business school and had my first job. I was taught the methods of total quality management as they existed in the 1970’s, and we had this tool that was called a “statistical process control chart.” (Do they still teach that around here?) Basically you made a piece, you measured the critical performance parameter, and you plotted it on this chart. There was a target parameter that you were always trying to make the piece hit, but you had this pesky scatter around that target. And I remember being taught at the time that the reason for the scatter is that there is just intrinsic variability and unpredictability in manufacturing processes.

So, the methods that were taught about manufacturing quality control in the ‘70’s were all oriented to helping you figure out how to deal with that randomness. And then the quality movement came of age, and what they taught us is, “No, there’s not randomness in manufacturing processes.” Every time you got a result that was bad, it actually had a cause, but it just appeared to be random because you didn’t know what caused it. And so the quality movement then gave us tools to understand what are all the different variables that can affect the consistency of output in a manufacturing operation. And once we could understand what those variables were and then develop methods to control them, manufacturing became not a random process, but something that was highly predictable and controllable.
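The control chart Christensen describes can be made concrete: estimate the process center and spread from an in-control baseline, set limits around it, and flag any piece that falls outside them, since on the quality movement’s view such a point has an assignable cause rather than being random. A minimal illustrative sketch in Python; the measurements and the three-sigma limit convention are assumptions for illustration, not from the talk:

```python
# Minimal sketch of a statistical process control (SPC) chart calculation.
# Measurements and the 3-sigma convention are illustrative, not from the talk.
from statistics import mean, stdev

def control_limits(samples, n_sigma=3):
    """Return (lower, center, upper) control limits from baseline measurements."""
    center = mean(samples)
    spread = stdev(samples)
    return center - n_sigma * spread, center, center + n_sigma * spread

def out_of_control(samples, limits):
    """Flag points outside the limits -- each one has an assignable cause."""
    lower, _, upper = limits
    return [x for x in samples if x < lower or x > upper]

measurements = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 13.5, 10.0]
limits = control_limits(measurements[:6])  # baseline from in-control pieces
print(out_of_control(measurements, limits))  # the 13.5 piece gets flagged
```

With this made-up data, the 13.5 measurement falls outside the limits, so under the quality movement’s view it would be investigated for a cause rather than written off as scatter.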

Well, I think that today the creation of new growth businesses is where the quality movement was 30 years ago, and that is that there’s just a widespread belief that it’s random and unpredictable. So, for example, every time venture capitalists invest in a new company, they invest in the belief that it will be successful, but the odds are that only two out of ten really become successful, and the whole industry is structured to help them deal with the alleged variability in creating new growth businesses. And for established companies who launch new products, every time they invest in a new product, they think it will be successful, but actually 75% fail. And so a lot of the methods that are taught for how to manage innovation are really structured around how to deal with the alleged unpredictability. We think that it is not intrinsically unpredictable. If we can actually understand the variables that affect the success of new businesses that we’re trying to start, then we can succeed with a much higher probability than has historically been the case.

And so that’s what I want to talk about is, “What are these variables that we have to control?” And in kind of an unabashed way, I’m going to structure this around theories of strategy and management. And the word “theory” gets a bum rap amongst some managers because it’s associated with the word “theoretical,” which connotes “impractical.” But a theory is actually a very practical thing because it’s a statement of causality, a statement of what causes what and why.

And so, like gravity is a theory; it allows you to predict that if you jumped out of a window on the top floor of this hotel, you’re going to fall, and you don’t have to collect experimental evidence on that question. And what this means is that every time a manager takes an action, it’s actually predicated upon a theory in her head that, “If I do this, I’m going to get the result that I need.” And every time you put a business plan into place, you’re actually employing theories in your mind that if you do these things, you will be successful. It’s just that quite often you don’t know what the theories are that you’re employing and aren’t aware whether they’re good or bad. So, I just want to try to discuss with you some of the theories that we’ve tried to draw upon to put together this work, especially as they relate to how you would create a successful open source software business and how the companies that might be disruptable or threatened by open source software can create new ways of growth that will keep them healthy.

So, I’m going to… move to the next slide please. You’ve got to get a lot of things right, but the ten questions that you need to get right in building a new growth business are: “How do we beat the competitors?” “How do we know what customers we ought to focus on with our first product?” “When we are focusing on a set of customers, how do we know whether they’re going to want to buy the product that we have in mind?” “How do we distribute to them, and how do we build a brand that communicates what we want to communicate to them?” “Of all of the things that have to be in place for the customers to benefit from the product, which should we do ourselves, and what can we rely upon partners and suppliers to provide?” “How do we keep our product from getting commoditized?” “Who should we hire to run this new business, and what kind of person, if we hired them to run it, would kill the business?” “How do we structure the organization, and if it’s within an established company, where is the right home for the new growth business to live?” “How do we know when we’ve got the right strategy, and how do we know when the strategy that has been working will not work in the future?” And finally, “Whose money should we take to fund the business, and whose money, if we took it, would kill the business?” That last one turns out to be quite an important question.

And so I want to just walk through the models that we propose you can use to think your way through these questions as far as we can go in the time that we have. And then we’ve got some time at the end for you to call all of this into question or send barbs or criticisms, or ask other questions as you see fit.

So, let’s start at the top one. It turned out that this one was really quite readily answerable by the model of disruption that we summarized in “The Innovator’s Dilemma,” and for those of you who aren’t familiar with it, I’d like to just walk through it as quickly as we can. There are three parts to this model.

The first one is represented by that line, and what it suggests is that in every market, there is a trajectory of improvement that customers are able to utilize over time. And a good way to visualize that is in the car industry. Every year the car companies give us new and improved engines, and yet we can’t utilize all the improvement that they give us because you’ve got nuisances like police that put a crimp on how much of the engine we can use.

Now to keep the diagram simple, I’ll just depict that ability to utilize improvement as a single line representing the median customers in a market. But remember that there’s really a distribution of customers in every market: at the high end, really demanding applications that are never satisfied with the best they can find, and at the low end, pretty unsophisticated folks who are overserved by very little. So, that’s the first piece of the model: there’s an ability to utilize improvement.

And then the second one is that in each market there’s a trajectory of improvement that the innovating companies provide as they introduce new and improved products. And the most important finding on this is that this trajectory of technological progress almost always outstrips the ability of customers to use that improvement. And so it means that a company whose products aren’t good enough to be used by customers in the mainstream of a market at one point can improve its products at such a rapid rate that it overshoots what they’re able to use at a later point in time. Now they may keep buying that product out here; they just can’t utilize all the improvement that’s made available within it.

And a good way to visualize this one is to go back to the early years of the personal computer industry, when we were first learning how to do word processing. Do you remember how often you had to stop your fingers and let the Intel 286 chip inside catch up to you because it wasn’t good enough even for a simple application like word processing? But as Intel has introduced faster and faster chips that it can sell for more attractive profits to demanding customers in higher tiers of the market, now that they’re at a 3-gigahertz Pentium 4 processor up here, they’ve way overshot the speed that mainstream business users are able to use.

Now at the same time, there’re still some freaky people at the high end that need even faster chips, but they’ve overshot what the mainstream can use. Now some of the innovations that allow a company to move up this performance trajectory are simple incremental year-to-year engineering improvements, and others are these sorts of dramatic breakthrough technologies. Like in telecommunications, the changes from analog to digital and from digital to optical were very complicated technological tours de force. But they had the same effect on the industry as the simple ones, and that is they sustained this trajectory of performance improvement.

And what we found, as you may know in that study, is that it actually doesn’t matter technologically how difficult or radical that innovation is, that almost always the incumbents win these battles of sustaining innovations. Again, it just doesn’t matter technologically how hard it is. It just seems like, if it helps them make a better product that they can sell for more attractive margins to their best customers, they figure out a way to get it done.

But then there was this other kind that we call the Disruptive Technology that comes into the market every once in a while, and open source clearly is one of these. And we called it “disruptive” not because it was a dramatic breakthrough improvement, but because instead of sustaining the trajectory of improvement, it disrupted and redefined it and brought to the market a product that was crummier than those that historically had been available. In fact, it performed so poorly that it couldn’t be used by customers in the mainstream. But it brought to the market a simpler and more affordable product that allowed a whole new population of people now to begin owning and using it, and then, because that trajectory is so steep, what takes root in a simple application then can intersect with the mainstream. And so that’s the basic model of disruption.
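The two-trajectory picture can also be put in toy-model form: an entrant starts well below what mainstream customers can use but improves at a steeper rate than customers’ ability to absorb improvement, so the lines eventually intersect. A hypothetical sketch in Python, with all starting points and growth rates invented for illustration:

```python
# Toy model of disruption: a performance trajectory vs. customers' ability
# to absorb improvement. All numbers are invented for illustration.

def year_of_intersection(entrant_perf, entrant_growth,
                         demand_perf, demand_growth, horizon=50):
    """First year the entrant's fast-improving product meets mainstream demand."""
    for year in range(horizon):
        if entrant_perf >= demand_perf:
            return year
        entrant_perf *= 1 + entrant_growth   # steep technology trajectory
        demand_perf *= 1 + demand_growth     # shallow customer-utilization trajectory
    return None  # no intersection within the horizon

# Entrant starts at a tenth of what mainstream customers need (10 vs. 100)
# but improves 40%/year while absorbable demand grows only 5%/year.
print(year_of_intersection(10, 0.40, 100, 0.05))
```

With these made-up numbers, a product starting at a tenth of what the mainstream needs intersects the mainstream trajectory in under a decade, which is the structural surprise the model is meant to capture.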

One of the companies that has tried to use this a lot in the last few years in managing their strategy and new product development has been Intel because, as I mentioned, they’ve gone from a point of having a product that wasn’t good enough to now overshooting the mainstream of the market. I got invited to go out to a meeting there because, through the late 1990’s, Intel was getting attacked at the low end of the processor market in entry-level computer systems.

Much cheaper processors were made by Cyrix and AMD, and they were just killing Intel at the low end; in fact, their market share in entry — level systems dropped from 90% to 30% in 18 months. And it felt great actually to get driven out of the low end, because as they were losing the volume in the most price — sensitive tiers of the market, they were replacing the volume at the high end with much more attractive margins, and so overall their reported gross profits were improving, and Wall Street just loves gross margins! And so it felt good until they saw this dumb model, and then it helped them see, “My gosh! If we lose the low end today, we may lose the mainstream tomorrow!”

So I got to come out and have this meeting with their executive staff, and their chairman, Andy Grove, was in the audience. And I was going through this, and he had a real puzzled look on his face, and then, like a teacher’s dream, the light bulb turned on, and he raised his hand, and he said, “I see what’s wrong with your idea.” He went up and crossed out the word “technologies” there, and he said, “Clay, if you frame this as a technological problem, you’re going to mislead the world.”

And unfortunately, “The Innovator’s Dilemma” had just been published, and I couldn’t change it. But he said, “If I’ve got the idea right, I would characterize it as just straightforward technology that disrupts the business model of the leaders, and that’s what makes it so hard.” And he then went on to give his view of the puzzle that had been in my mind, the one that kind of triggered the whole line of research, and that was: living in the Boston area, how could Digital Equipment get killed?

I remember watching Digital Equipment grow up through the ‘70’s and ‘80’s. It was probably the most widely admired of all the companies in the world economy. And when you read articles in the business press about why they were so successful, it was always attributed to the brilliance of the management team. Then about 1988, they just fell off a cliff and began to unravel very quickly, and when you then read articles in the business press about why they had stumbled so badly, it was always attributed to the ineptitude of the management team. They were the same folks running the company.

And so for a while, I scratched my head and wondered, “How could good managers get that dumb and that fast?” That really is the bad management hypothesis, the most ready explanation that we can offer for the failure of most companies. But the reason it didn’t quite fit right in this case is that every minicomputer company in the world collapsed in unison. It was not just Digital; it was Data General, Prime, Wang, Nixdorf, Hewlett-Packard. And you might expect them to collude on pricing, but to collude to collapse was a bit of a stretch, so there had to be something more fundamental going on.

And that’s what really precipitated this question. And so Andy then went on to give his view (let’s go to the next slide) of what happened to Digital Equipment. And he said, “In the first place, if we were to line up in this room the sequence of minicomputers that Digital introduced to its markets, they didn’t skip a beat.” If you peeled the covers off and looked at the technologies that were required to make a good minicomputer better: anything that helped them make a better product that they could sell for higher margins to their best customers, they got it done.

But he reminded me, “Do you remember how crummy those early personal computers were?” They were toys. In fact, Apple sold the Apple II primarily as a toy to children. It wasn’t good enough to be used by customers in the mainstream market as it existed in the late ’70’s and early ‘80’s, and that meant that however carefully Digital listened to its customers and tried to reflect their unmet needs in the properties of their next generation product, they got no signal that the personal computer mattered because, in fact, their customers couldn’t use it. And so it took root as a toy, and then, because this trajectory improves at such a rapid rate, within a few years it intersected with the mainstream needs, and it wasn’t just Digital, it was the whole population that got blown out of the water.

And he said, to his earlier point, “This wasn’t a technology problem. Digital’s engineers could have designed a PC with their eyes shut.” But they had a business model: the minicomputers were quite expensive and complicated, and in order to sell them, they had to be sold direct to the customer, and the selling process involved a lot of training and support and service. You just had to have costs like that in the business in order to play in that game.

Given that kind of a business model, Digital had to make about 45% gross profit margins, and a typical computer sold for about $250,000. Now, in that environment, as in most companies’ environments, people are walking in to senior management all of the time with proposals to make better products. Well, some of the proposals that management was entertaining entailed making a better computer than Digital had ever made before. If you looked at those proposals, they typically promised gross margins of 60%, and the machines could easily be sold for half a million dollars.

But at the same time that management was trying to decide if they should invest in those things, other people were walking in with proposals to invest in personal computers, because it was quite obvious by the early ‘80’s that this was going to be a big market. But if you looked at those business plans, in the very best of years they promised gross margins of 40% and were headed down to 20%, and these machines could only be sold for $2000.

And so Andy said, “So really, the decision that management had to make was, ‘Should we invest our money to make better products that our best customers could use, that would improve our profit margins, or should we invest our money to make worse products that none of our customers could use, that would ruin our profit margins? What should we do?’” And I’m pretty quick, and so what I said was, “Andy, since you’re in this situation, my advice is to bail out and become a professor.” But then I realized that the very same thing is happening to the Harvard and Stanford business schools. We have become very good and very expensive, and we’re getting disrupted by crummy, low-end, on-the-job learning experiences like you’re having today! A little bit later I want to talk about that because it really is quite frightening for us!

But anyway, this is kind of my answer for the first of these questions that you need to get right, which is, “How do you beat the competitors?” And the answer is, that if you come into an existing market with a better product, the odds are that the competitors will get you because you’ve taken a piece of real estate up there that is financially attractive for them to pursue. If you come into the market with a disruptive product, the odds are that the entrant will win because you set up a situation where they’re motivated to flee rather than fight. And if you pick a fight like that where they don’t want to fight you because it’s in their interest to move in some other direction, it’s a great way for a little company to beat a big company. And so that’s the answer to the first question of why disruption is a great tool to beat the competition.

Now, I want to skip down. If you look at the stodgy companies today and really crawl back inside their history, most of them started out as disruptive innovators. It’s interesting in Japan. I had a student who went back to Japan and became a senior official in their Ministry of International Trade and Industry a few years ago, and he, poor guy, got sentenced to having to write a plan for the resurrection of Japan’s economy.

And he worked on this thing for about two years and then called up and said, “I don’t think there’s any hope for Japan.” And he was looking at it from a macroeconomic policy perspective and came over and we talked about it for a couple of days. And then what hit us is that every one of the industries that constituted a fundamental engine of Japan’s economic miracle in the ‘60’s, ‘70’s and ‘80’s did this.

And so, those of you who have gray hair may remember that Toyota came into our market in the ‘60’s with a crummy, rusty, subcompact model called the Corona that no self-respecting non-college student would think of owning, and now they make Lexuses. And Sony came in with crummy transistor radios, and now they’re the best consumer electronics maker in the world. Their steel industry came in with the lowest quality steel in the world, and now they’re the highest quality steel companies. Canon did it in photocopiers, and Seiko did it in watches, and over and over.

And just like it happens in our economy, now those companies have become huge global giants, making the highest quality products, serving the most demanding tiers, and there is no growth up there.

Now go back to the early years of the computer industry, when the functionality of the product wasn’t good enough. If a company had tried to somehow cobble together a computer from outsourced subsystems or modules that fit together according to some industry standard, they couldn’t have done it, because the establishment of interface standards takes so many degrees of freedom away from the design engineers that they would have to back away from the frontier of what’s possible. And when the product isn’t good enough, competitively you can’t back off the frontier. And so that meant that in order to play in that game, you had to do everything in order to do anything. There is a huge advantage to being integrated and having a proprietary architecture in this era when the functionality isn’t good enough.

And so in the early years of that industry, IBM just dominated its world. And in the similar period of the automobile industry, General Motors and Ford just dominated their world. And the question comes, “What happens once the functionality and reliability become more than good enough for what customers in the less demanding tiers of the market can use? What do you do to get traction with these kinds of customers if you want to build a new business serving them?”

And the answer is that what’s not good enough now changes, and what begins to matter to these customers is, “I can’t get what I need fast enough, and I can’t get exactly what I need.” And so improvements in speed to market and in the ability to responsively give every customer exactly what they need constitute a new trajectory of innovation along which improvements are rewarded with attractive prices and increases in market share.

And in order to compete in this way to be fast and flexible and responsive, the architecture of the product has to evolve towards a modular architecture, because modularity enables you to upgrade one piece of the system without having to redesign everything, and you can mix and match and plug and play best of breed components to give every customer exactly what they need. And because there are clean interface standards here, when that happens, the industry disintegrates. (And this is just the chart that I put together with Andy Grove a few years ago to illustrate the rough concept.)

So here are the rough stages of value added in the computer industry, and during the first two decades, it was essentially dominated by vertically integrated companies because they had to be integrated given the way you had to compete at the time. We could actually insert right in here “Apple Computer.” (Let me go back to the prior slide.) Do you remember, in the early years of the PC industry, Apple with its proprietary architecture? Those Macs were so much better than the IBMs. They were so much more convenient to use, they rarely crashed, and the IBMs were kludgy machines that crashed a lot because, in a sense, that open architecture was prematurely modular.

But then as the functionality got more than good enough, there was scope to back off the frontier of what was technologically possible, and the PC industry flipped to a modular architecture. And Apple, the vendor of the proprietary system, continues probably to make the neatest computers in the world, but they’ve become a niche player, because as the industry disintegrates like this, it’s kind of like you ran the whole industry through a baloney slicer, and it became dominated by a horizontally stratified population of independent companies who could work together at arm’s length, interfacing by industry standards.

One of the things that was most interesting to us is that where the money is made flips on both sides of this equation. We wrote an article about this that we published in the Harvard Business Review called “Skate to Where the Money Will Be,” in honor of the ice hockey star Wayne Gretzky. Somebody asked him, “How come you’re so good?” and he said, “Well, I never skate to where the puck is; I always skate to where the puck is going to be.”

And the notion here is that if you create a new business that tries to position itself at the point in a value chain where really attractive money is being made, by the time you get there it probably will have gone, and you can tell where it’s gone in a very predictable way, and that’s what I want to try to get at here. Over on this side of the world, the money tends to be made by the company that designs the architecture, the system, that solves what is not good enough. Because it’s functionality and reliability that are not good enough, the company that makes the system that is proprietary and optimized tends to be at the place where most of the profit in the industry is made. The performance of that kind of a product isn’t dictated by the individual components of which it is comprised; it’s determined at the level of the architecture of the system, and that is where the money is made.

So in the early years of computing, IBM had a 70% market share; they made 95% of the industry’s profit. In the similar era in automobiles, General Motors had a 55% market share; they made 80% of the industry’s profit. And if you were a supplier to General Motors or IBM, you just lived a miserable, profit — free existence year after year because the components did not solve the problem of what was not good enough; the system solved the problem.

But on this side, when it becomes more than good enough and the architecture becomes modular, where the money is made flips to the inside of the product. And a good way to visualize this is to imagine that you were working as a computer designer for Compaq, and your boss said, “I want you to go design a better computer than Dell.” How are you going to do this? Put in a faster microprocessor, more pixels on the screen, a higher capacity disk drive? Anything you can do, the competitors can just copy instantly, because in a nonintegrated world you’re outsourcing from a common supplier base, and when the architecture of the system is modular and fits together according to industry standards, better products are not created through clever architectural design; the performance of the product is driven by what’s inside.

And so the ability to make money migrates from the system to the subsystems that define the performance and allow these guys to keep moving up market. And so that’s the answer for why in the computer world IBM in the design and assembly of computers made all of the money, and it was not in the components. And so when they got into the personal computer business, they thought that the same formula would hold here, and so they outsourced the components and stayed in the design and assembly of the computer, and they did just what Wayne Gretzky said don’t do. They skated to where the money used to be and outsourced where the money would be.

You can see the very same thing happening in the automobile industry today. Automobiles have become more than good enough for what all but the most demanding customers are able to use. Our family is a great example. We just sold our Toyota Corolla after about 180,000 miles of loyal problem-free service. Just a beautiful car! But my kids hadn’t been willing to ride in it for about the last four years because it went out of style about five years before it wore out.

And so, do I need Toyota to give me an even more reliable car next year? I can’t absorb more reliability. And so this very same thing is happening in the automobile industry. Over here, when the architectures were like that, it took six years to design a new car; now it takes two years. You can walk into a Toyota dealership today and custom order a car assembled exactly to your spec, and it will be delivered in five days, about as fast as Dell can deliver a computer assembled to your spec.

And the way they’re becoming fast and responsively flexible is that the architectures of the automobiles have evolved from proprietary and interdependent architectures to modular architectures. Over here, they sourced components from hundreds of suppliers, no one of which made a difference. On this side, they source components from a few suppliers that they call “tier-one suppliers.”

On the left hand side, for example, Dana Corporation supplied axles. On the right hand side, Dana Corporation supplies a complete rolling chassis with all of the suspension system and everything. And the smoothness of the ride, or the feel of the ride, isn’t dictated by Ford anymore; it’s dictated by Dana, because that problem is solved in the rolling chassis. Similarly, Johnson Controls on the left hand side supplied seats; on the right hand side they supply the entire interior cockpit subsystem. And Delco supplies the electrical system and Bosch the braking system, and so on.
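The tier-one supplier arrangement is, in software terms, interface-based modularity: the assembler depends only on a clean interface, and any subsystem that meets it can be plugged in without redesigning the rest. A toy sketch in Python, loosely modeled on the Dana rolling-chassis example; the class and method names are invented for illustration:

```python
# Toy sketch of modular architecture: components plug into a clean interface,
# so the assembler can mix and match without redesigning the whole system.
# Class and method names are invented for illustration.
from typing import Protocol

class Chassis(Protocol):
    def ride_quality(self) -> str: ...

class RollingChassis:
    def ride_quality(self) -> str:
        return "smooth"  # ride feel is solved inside the subsystem

class BudgetChassis:
    def ride_quality(self) -> str:
        return "stiff"

class Car:
    """The assembler depends only on the interface, not on any supplier."""
    def __init__(self, chassis: Chassis):
        self.chassis = chassis

    def describe(self) -> str:
        return f"ride: {self.chassis.ride_quality()}"

print(Car(RollingChassis()).describe())  # ride: smooth
print(Car(BudgetChassis()).describe())   # ride: stiff
```

The point of the sketch is that ride feel is determined inside the chassis subsystem, not by the assembler, which is exactly the flip in where value is added that Christensen describes.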

And true to form, the industry has had to disintegrate. And so among the integrated giants that dominated over here, General Motors packaged up all of its components operations and sold them off as a company called Delphi Automotive, and Ford packaged up its components operations and sold them off as Visteon. But you can see that the car companies did exactly what IBM did when it put Microsoft and Intel into business, and that is, they sold off the pieces of value added, the subsystems, where in the future the money would be made, in order to stay at the level of value added, the assembly of a car, where in the past the money was made.

This also highlights, for me, a process that you would call “commoditization.” And what commoditization means is that a company’s product gets better and better and better, and you reach a point where these folks aren’t benefited by an even better product, and so their willingness to pay a higher price for an improved product diminishes to the point that you can’t get pricing to stick for an improvement.

And that’s one dimension of selling a commodity: you just can’t get a premium price for a better product. The other dimension of commoditization is that your ability to differentiate your product disappears. Here, this is highly differentiable; here, it’s not at all differentiable. And so the process of a company’s products becoming commoditized is just a very natural result of the interaction of technological progress with customers’ ability to utilize that progress.

Even a brand can become commoditized. And most companies think, “Well, if our product is really not differentiable, at least we can take refuge in having a brand.” If you think about it in these terms, a brand has value when you’re marketing upward to customers who are not yet satisfied with the best they can find, because the brand serves to close, as much as possible, the emotional gap. But once the product is manifestly more than adequate and you’re marketing down to overserved customers, the brand really does not create value, and the brand itself can become commoditized.

Now, I want to try to walk through how you can use this way of thinking, in a concept that’s called the “law of conservation of modularity.” And I want to illustrate it with a view of what I think is going to happen on the hardware side, in particular in the semiconductor industry, and then try to use that to think about what open source software could mean.

And the core concept of the law of conservation of modularity is that — if you can just visualize — if you are writing a software application to run on Windows, you might go to Redmond and knock on the door and say, “Would you please just let me into Windows? If I could just change these 25 lines of code, the application would run so much better!” But Windows doesn’t dare open the door, does it? Because it has an interdependent architecture, and if you change a couple of lines, who knows what else would get screwed up!

And so the application has to be suboptimized and conform itself to Windows so that Windows can be optimized. And the reason is, according to this model (I’ll go back a slide), historically, in order to fuel Dell’s move up-market so that it could keep competing against Sun Microsystems at the margin there, the fuel that allows Dell to move up is the microprocessor inside and the operating system inside.

That’s what constrains its up-market progress. And so the microprocessor and the operating system have a proprietary and interdependent architecture even while Dell’s product has a modular architecture. And so, back to the software analogy: the application has to be suboptimized so that Windows can be optimized. But if you’re writing an application to run on Linux, because Linux has a modular architecture, you don’t even have to knock on the door. You just walk in and change what needs to be changed, and as long as you don’t screw up the interfaces, the modularity and conformability of Linux allows the application to be optimized.

And so one side or the other needs to be modular and conformable to allow what’s not good enough to be optimized. If you think about it in a hardware context: because historically the microprocessor had not been good enough, its architecture inside was proprietary and optimized, and that meant that the computer’s architecture had to be modular and conformable to allow the microprocessor to be optimized. But in a little handheld device like the RIM BlackBerry, it’s the device itself that’s not good enough, and you therefore cannot have a one-size-fits-all Intel processor inside of a BlackBerry. Instead, the processor itself has to be modular and conformable so that it has on it only the functionality that the BlackBerry needs and none of the functionality that it doesn’t need. So again, one side or the other needs to be modular and conformable to optimize what’s not good enough.

Now, there was a guy at Bell Labs a few years ago who published an article about Moore’s Law, and here is what he showed. The vertical axis here is the complexity of the circuit, which may roughly equate to the speed of the circuit. In the pursuit of Moore’s Law, every year the fabs and Applied Materials make 60% more transistors available on an area of silicon than were available the prior year.

But if you look at the ability of circuit designers to utilize transistors year on year, they are only able to utilize 20% more transistors than they were the year before for any given level of complexity of circuit. The reason is that they just have design budgets; they don’t have enough money or time to design circuits complex enough to utilize all the transistors that Moore’s Law makes available.
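[The compounding gap between those two growth rates is easy to see numerically. Here is a quick illustrative sketch in Python, assuming only the 60% and 20% annual rates cited above and an arbitrary common starting point of 100 units:]

```python
# Hypothetical illustration: compound the two annual growth rates
# Christensen cites for a decade, starting both curves at the same
# arbitrary baseline of 100 units.
available = utilized = 100.0
for year in range(1, 11):
    available *= 1.60  # transistors Moore's Law makes available: +60%/yr
    utilized *= 1.20   # transistors designers can actually use: +20%/yr

print(round(available / utilized, 1))  # prints 17.8
```

[After just ten years at those rates, the supply of transistors outruns designers’ ability to use them by roughly 18 to 1, which is the overshoot that makes fast, modular, customized circuits viable.]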

What that means is that for most of the volume applications in the world, circuit designers are actually awash in transistors. At the very high end, designers still need even finer line widths and demand that Moore’s Law take the next step to the next node of technology, but Moore’s Law has overshot what most circuit designers are able to utilize. And so, what this would then predict is that circuits, which on this side had to be proprietary and interdependent in their architecture, change over here: the way you compete to win the business of those people is going to change, and you’re going to need to be very fast and flexible and responsive, and be able to deliver systems on chips that offer every application exactly the functionality that it needs and none of the functionality that it doesn’t need.

And so, here is how the law of conservation of modularity will play itself out. This is kind of my sense of how the industry was structured in the past. The microprocessor wasn’t good enough. That meant that the desktop computer had to have a modular architecture and conform itself in order to allow the microprocessor to be optimized. And the line widths on the circuit were not good enough either.

Consider the equipment that was made by companies like Applied Materials and Tokyo Electron: each piece of equipment was optimized. It had its own proprietary architecture, and in the sequence of steps that a wafer has to go through, there was no attempt, nor could you make an attempt, to synchronize the flow of material across those machines. Each piece of equipment had to be optimized for itself. That meant that the fabs had to be laid out in a modular way, bay by bay, and the sequential steps in the process had to be buffered, or modularized, by having gobs of work-in-process inventory in those fabs. And that made the fabs very slow, but they actually had to optimize the equipment rather than vice versa, because the equipment wasn’t good enough. And the components that comprised Applied Materials’ equipment did not matter at all. So the money was made here and the money was made here, and these guys, these guys, and these guys lived a miserable, profit-free existence.

Now, in the future, in handheld devices (I’m just talking about this little piece of the world, but I think it applies to almost any situation where logic gets embedded in a system), I’ll talk about a handheld device like the RIM BlackBerry. It’s the device itself that is not yet good enough, and therefore you cannot back off the frontier of what’s technologically possible. It has to be optimized with a proprietary, interdependent architecture. That means that the processor inside of a BlackBerry has to be modular and conformable to allow the device to be optimized.

Now, think about this world, where these chips are customized chips delivered to every customer’s application, and the design cycle out there at the customer’s end is measured in months rather than years. The fabs up here would often take three months to work an order through all of that inventory in the fab. And for a fab to take three months to deliver an order, in a world down here where these are really fast-cycle, custom-designed products, is just intolerable.

And so the fabs are going to need to figure out how to deliver products really fast, and that means that over the next few years, rather than being laid out in a bay structure, a fab is going to need to reconfigure itself around a single-wafer process flow so that it can process silicon like Toyota makes cars, with very little inventory in the process, and that’s what will make fabs really fast.

And nobody has figured out how to do that yet, but the pressure from the market will mean that it’s the fab that is not good enough on this critical performance dimension, which is, get every customer exactly the circuit they need as quickly as they can do it. That then means that the manufacturing equipment from companies like Applied Materials needs to be modular and conformable so that the fab can optimize the flow of product through itself.

And this is possible now because Moore’s Law has overshot what most circuit designers can utilize, and because these products don’t need a Pentium 4 processor, and so you can back away from the frontier. And so what it means is that the places in the value chain where attractive profits can be earned are going to migrate from where they are today in a very predictable way.

So, I’m not a software engineer or designer, but this is what I think Linux does, or MySQL or Apache, whatever it is: because of the open source character of it, the architecture is modular. And what that means is (let me back off from this): what this really tells me is that the microprocessor is going through a process of commoditization as it overshoots and becomes modular and undifferentiable. But whenever a process of commoditization happens at one layer of value added, it initiates a reciprocal process of decommoditization at the next layer of value added.

So, whereas the device up here was a commodity, this is not a commodity. Whereas a fab was a commodity, this is a proprietary architecture that’s not a commodity, and so on. So (and I’ll come back to the software world) there’s a fellow named Tim O’Reilly, whom some of you may know, who’s done a lot of thinking about this. (Is that you? We just emailed. Stand up. He’s a lot smarter than he looks, actually!) And another guy, who runs a company in Santa Clara called Tensilica, has thought a lot about this law of conservation of modularity. Tensilica makes these modular integrated circuits.

So the operating system is going, because of Linux, from a proprietary architecture to a commoditized, modular architecture. And what you’ll see happen then is that the very modularity of the open source architecture allows it to conform itself to allow the application to be optimized. The operating system in many ways just folds itself into the application and disappears.

And so my sense is that if you look at how Red Hat lives, ostensibly they’re an operating system vendor, but really the value that they create is at the next layer: the software that keeps the operating system from ever crashing and keeps maintaining itself, and that’s what is not good enough. And the conformability of Linux allows them to sell what you might call an application that is just extraordinarily optimized, and that’s becoming a noncommodity.

And similarly, in the Oracle world, the database software is proprietary and optimized, and a lot of money was made there. But MySQL allows you to just fold the database into whatever the next layer of value added is, so that the application can be optimized. And Google runs on Linux, and the operating system disappears into the search engine.

And so I don’t think that you can say that open source is a movement in which nobody has figured out how to make money. It’s just that where the money is made migrates to a different layer in the value-added chain, and in fact, it facilitates the decommoditization of the next layer, because what’s not good enough can now be optimized. And so that’s my rough… Tim, do you want to clarify any of that gibberish? Or did I get your argument right?

Tim O’Reilly: I’m actually talking about it tomorrow, but I’m really struck by something else that you are saying here, and actually I’m going to disagree with you about Red Hat, because what I think Red Hat is much more analogous to is your fab. I look at Apple, which with Mac OS X has done something that’s much more analogous to folding the value into an application layer on top. But what Red Hat does, and what I think really all Linux vendors do… actually, somebody else in the audience, Ian Murdock, is really a leading thinker on this.

He’s right over here. It’s really that the critical competency of open source distributions is actually the act of assembly. I think that’s really an interesting thing, and we’re starting to see that what Ian’s new company does is really focus on custom distributions and that ability to be responsive, to be faster. So I think there are a lot of different elements in the story. You did actually correctly characterize my argument that we’re driving value up to things like Google on top of Linux.

There are many, many instances of that, but there are a lot of other pieces of the story. I think there is another one, too. I’m jumping into things that we’re going to ask you in the questions, but with your whole fab analogy here, you’re starting to have a lot of people playing with FPGAs, for example, where you’re actually literally doing a lot of the processor work in a quick, responsive way, and then some.

Clayton Christensen: Okay! Thanks, Tim. I had a student write an article about cellphones and where the value migrates there, because in many ways Nokia and Motorola phones have been over on the left-hand side with proprietary architectures: they do their own processors, they do their own operating system, all optimized. But now those cellphones have so many features that the limitations on the system are not in the handsets themselves; they’re elsewhere in the system.

And so we wrote a couple of things that forecast that the handsets are going to become modular, and because of that, where the money is made in that value chain is going to migrate to the back end. It’ll become a disintegrated industry, and the way to make money would be for Motorola to sell its chip sets to a thousand Chinese assemblers and for Nokia to sell its operating system to a thousand Chinese assemblers, and all of those guys colliding against each other in commodities would then drive the pricing of those things down, and so on, and that’s the way the world would work.

And sure enough, Motorola subsequently announced that they were opening up their system and selling chip sets to anybody who wanted to buy. And then, according to this student, Nokia announced that it would make its operating system available to anybody who wanted to buy it, and so I’m thinking, “Boy, these guys are brilliant, because they followed Clay’s advice.” But then Nokia announced that it was almost giving its operating system away, and I thought, “Those idiots! That’s where the money is going to be made.” But then my student said, “No, they’re a lot smarter than you, Clay, because of the law of conservation of modularity.

“By opening up their operating system and making it essentially free,” he asserted, “what that then allows is that the operating system becomes modular and conformable, so that Nokia can keep optimizing the hardware and keep the hardware part of the system proprietary.” Had they let it become a truly modular world, then Microsoft was sitting there with its own operating system, ready to move in and make all the money in the assembly of a modular handset. So maybe the strategy that Nokia followed is actually kind of clever: essentially wipe out the value that is created at the operating system layer in order to keep playing the game where they had an advantage over Microsoft.

(Okay, now I want to click ahead.) There’s just one other set of concepts that I wanted to go over, and then we can just have some questions. This last one was, I think, question number two on our list: “How do I know who are the right customers to target with my new technology?”

And I think, from what little I know about open source, there are a lot of wrong customers that have been targeted, and that has caused a lot of expense and grief. Now, where this idea came from is actually one of our MBA classrooms. I had written an article about the disruption of the Harvard Business School, and what it asserts is that our MBAs have become extremely expensive.

I would never criticize Stanford in public, so I’ll just talk about Harvard. Our graduates cost about $130,000 to hire. And if you look at who recruits on our campus, operating companies have a very hard time recruiting, because our graduates are so expensive that the companies can’t fit them into their salary structures. So who recruits, increasingly, are venture capitalists, private equity investors, McKinsey, and Goldman Sachs.

Now, the operating companies aren’t getting lower-quality talent; they’re just going into undergraduate programs and raking out the best engineers and others that they can find, putting them to work, and then two years later, at the time when many of them would leave to get an MBA, the companies are saying, “Nope, you don’t need an MBA! We have GE Crotonville, or we have Motorola University or Intel University. IBM spends $500 million a year on management training. We’ll train you right here!”

So anyway, I wrote a case about how on-the-job training is disrupting the Harvard Business School, and one of the students raised her hand and, in a very polite way, said, “Well, excuse me, but I think you can only be disrupted if you have overshot what the market needs, and frankly I am not overserved by your teaching.” So I was convinced that Harvard was getting disrupted, and yet it was very clear that she wasn’t overserved. And so it helped me think through that there are actually two different kinds of disruption, and I want to just talk that through.

So, one type of disruption we’ll plot on this chart, and what we showed before is that if a company’s entry strategy is to bring a better product into an established market, the probability that it will build a successful growth business is zero, because of these asymmetries of motivation that exist. Now, incidentally, if a venture capitalist funds a venture that tries to do this, and they actually do come into an established market with a better product, and their strategy is to turn around quickly and sell out to the incumbent leader, they can actually turn a nice piece of money, but it’s not a strategy to create a new growth business.

Now, one type of disruption we called a “low-end disruption.” It takes root in the very same market where the incumbent leaders are, but it picks it off at the low end, and the disruptors build a business model that can make money at the discount prices that are required to steal the business down here. So it doesn’t create a new growth market, but it does create a new growth business. Among the examples I’ve used in my writings: steel mini-mills did this, and discount department stores did this. They didn’t create a new market; they just had a lower-cost business model, and the incumbents were motivated to flee rather than fight.

But the other type of disruption (and this is what corporate education is) we called the “new-market disruption.” It comes out in a new context, and so it’s almost like you have a third plane of competition out here. By bringing a product that is so simple and inexpensive, a whole new population of people, who historically couldn’t do it because they didn’t have the money or the skill, can now afford to own and use a product. It creates a booming new market out in this new plane of competition and doesn’t affect the business of the original players at all for a very long time.

The personal computer was one of these, right? I remember when I got out of grad school, when I had to compute, I had to take my punched cards to the corporate mainframe center, and the expert ran the job for me. Because it was so expensive and inconvenient, we didn’t compute very much. But when the personal computer was introduced, it was so inexpensive and so idiot-simple that an idiot like me could begin to compute for himself in the convenience of my own office.

And at the beginning, out here in this new plane of competition, those early PCs could barely do word processing. But because I hadn’t been able to do anything myself, I was delighted to have something that wasn’t very good. And then, as the PC and the software associated with it got better and better out in this third plane of competition, ultimately it got good enough that it started to suck applications out of the back plane into the new plane, and little by little the customers left the established players. And so the effect of the disruption was the same, in that the established leaders got killed; it’s just kind of a different animal: bringing something that is so much more affordable and simple that a whole new group of people can begin to do it for themselves.

I want to just illustrate this with a couple of examples from history, and then think through how open source software might be affected by this principle. This is the historical example: the transistor was a disruptive innovation relative to the vacuum tube, because when it emerged in the late ’40s and early ’50s, it simply couldn’t handle the power that was required in the markets that existed at the time: the big tabletop radios and floor-standing televisions and so on.

Every one of the vacuum tube companies took a license to the transistor, but they carried the license into their laboratories and framed it as a technological deficiency. In other words, the transistor isn’t good enough yet to be used in the market. And if you could go back and get all of the expenses out of these companies, they probably, in aggregate, spent $2 billion in today’s dollars trying to make solid-state electronics good enough that you could make big products out of them.

And while they were trying to do that, over here (now I’m going to collapse this back into two dimensions, but when you see green, I really mean that it’s taking root out here in the third plane of competition), the first application was a germanium-transistor hearing aid in 1952. A tiny little market, but it valued the transistor for the very attribute that made it useless in the mainstream, and that was low power consumption. And then in 1955, Sony introduced its first pocket radio. And those of you with gray hair remember how crummy those things were: static-laced, very low fidelity, wouldn’t get a signal from much of a distance.

But Sony chose to sell the pocket radio to the rebar of humanity, people we call teenagers. And the teenagers were delighted to have a product that was not very good, because their alternative was no radio at all, and it allowed them to do something that they had wanted to do but never could, and that is listen to rock ’n’ roll out of earshot of their parents.

So a booming new market emerged in this third plane of competition, and these guys back here felt no pain, because these were all new customers. Had Sony tried to sell its pocket radio to the parents, a crummy product would have been judged to be crummy, because they had the alternative of a high-quality vacuum tube radio. Then in 1959, Sony introduced its first portable television, and again, they competed against nonconsumption. They made it so affordable and simple that a whole new population of households, who didn’t have a big enough apartment for a big floor-standing TV or didn’t have enough money to buy one, could now own one. And because the alternative was no TV at all, they were delighted with the crummy product.

And again, a booming new market emerged in this third plane of competition until the mid-1960s. By then, solid-state electronics was good enough that it could handle the power required to make these big devices. And bam! Within three years, all of the applications got sucked out into the solid-state world, and the vacuum tube companies, venerable institutions like RCA, were just dead. And the punishing thing is that it’s not that they didn’t see the technology coming. They saw it before Sony did.

It was not that they weren’t aggressive and visionary. They invested far more money trying to make the technology good enough than Sony did while Sony was building these growth businesses. The punishing thing is that they targeted the wrong customers. They targeted existing consumers, and the only way the customers back here would have adopted the new technology is if it were better than the old technology and more cost-effective. That was a very demanding technical hurdle for the vacuum tube companies to surmount. Because it came out and competed against nonconsumption, in contrast, Sony had a much more modest performance hurdle: they just had to make a product that was better than nothing, and the customers were delighted.

Now, where you see this happening today is voice recognition software. So, the next time you go to a computer superstore, go to the voice recognition software shelf and pick up the box there that’s called IBM ViaVoice. Now, don’t buy it; just look at it! They have a picture of the customer on the box, and it’s an administrative assistant who is sitting in front of her computer wearing a headset, speaking rather than word processing.

Think about the value proposition that IBM has to be making to this woman. She types 90 words a minute. She is 99% accurate. If she needs to capitalize something, she just instinctively presses Shift and cruises through. And IBM has to say, “No, don’t do that anymore. I want you to put this headset on and teach yourself to speak in a slow and distinct and consistent manner, in complete sentences. If you must capitalize, you must pause, speak the command ‘capitalize,’ pause, speak the word you want to capitalize, pause, speak the command ‘uncapitalize,’ pause. Please be patient; we are 70% accurate. This will get better, we promise.”

This is not an attractive proposition to this customer. And IBM has spent (I’ve not worked with them at all, but as I understand it) maybe $700 million trying to make voice recognition technology good enough that it can be used in that market. This is a very difficult technical hurdle to surmount. Meanwhile, while they are investing that aggressively, Lego comes up with these robots that recognize “stop,” “go,” “left,” and “right,” and the kids are thrilled with the four-word vocabulary. And then press-or-say-one kinds of applications take root, and now directory assistance asks you to say the city and state and so on, which is much simpler, and an interesting market is emerging.

I bet maybe the next place it takes root is in chatrooms, because the kids don’t spellcheck or capitalize anyway, and they would rather speak than type. And maybe the next application after that: you see these stubby-fingered executives with their BlackBerrys trying to peck out emails, and their fingers are four times the diameter of the keys, so they’re only 70% accurate. If somebody gave them a voice recognition algorithm that really didn’t have to be very good, so that they could speak their wireless email rather than peck it out, I bet they’d be thrilled with the crummy product. And ultimately, as it takes root in these new applications, it may get good enough that we can do word processing with it, but it’ll be a long time.

I’ve asked myself, “Why would the IBM engineers have picked off the most demanding application conceivable for this technology?” And the answer probably resides in the resource allocation process of the company, because it’s not just IBM; everybody does it. You’ve got to rule out stupidity, because they’re at least as smart as us. But in order to get funded, the people who had the idea knew that they just couldn’t stand up in front of senior management without PowerPoint and say, “I’m sure there’s going to be a lot of ‘press or say one’ stuff happening some time.”

They’d never get funded. They have to do a PowerPoint presentation that has financial projections, and they have to be able to say, “We hired a consulting firm, and they did a market study, and there are 37.9 million administrative assistants who spend this many hours a day word processing, and this is how big the market is.” And so, in order to get funded, the process forces the company to target the market that ultimately causes it to fail.

And the digital camera people like Kodak fell victim to the same process. These digital cameras are potentially disruptive, but in order to make a digital camera good enough that people would opt to take a digital rather than a film image, they have to cram it full of charge-coupled devices, and that drives the price point up so high that the only people who can afford to buy a digital camera are the people who own film cameras. And so the reward for success, once they got a good enough digital camera, is that they don’t sell film.

So, a massive investment and no growth. And meanwhile, we see taking root out in a third plane of competition credit-card-thickness cameras that have just dirt-cheap CMOS sensors, and now CMOS-sensor-based imagers in cellphones that take crummy images. But they’re so much better than nothing at all that a booming new market is emerging. And whether a disruption creates new growth or a threat to the core business is, in many ways, driven by this very same thing.

Our daughter just finished a mission for our church in Mongolia, where she had been for a couple of years, and we went over to get her in August when she finished her service. And I’d always wondered whether solar energy was ever going to be commercially viable, because the European and American governments have probably invested $15 billion trying to make solar energy good enough that we could use it in our markets.

If you think about it, this is a very difficult technical hurdle, because our homes are filled with air conditioners and power-sucking computers and microwave ovens. And you’ve got minor problems like nights and clouds that get in the way; it is a very difficult problem! And so the technology for making solar cells, with this as the core market that they envision, is very expensive and optimized to try to wring the most power possible out of the materials and technology that are available.

Anyway, when we were in Mongolia, our daughter took us to this big open-air market in the capital city, Ulaanbaatar. There was this big line of stalls in this market where the vendors were selling dirt-cheap solar cells shrink-wrapped with 6” televisions and rabbit-ear antennas, and they were just walking out of the marketplace! Because half the population of Mongolia doesn’t have access to power over a grid. And it doesn’t really matter if it was cloudy today; I couldn’t watch TV yesterday either.

And there are about two billion people in South Asia and Africa who don’t have access to electric power at all, and it turns out they don’t have air conditioners and microwave ovens in their homes, and they’re delighted with something that won’t be very good. Now, if solar energy ever takes root as a viable commercial technology, I don’t think it’ll be driven in the sophisticated engineering labs of North America and Europe. It will be driven by folks out there in the third plane of competition, competing against nonconsumption, and it will get better and better all the time.

So as I look at how different companies have tried to deploy Linux, what this would suggest is that if IBM tries to shove Linux down Merrill Lynch's throat in its core internal mainframe computer system, there are just so many complicated interdependencies when you insert a new technology into an existing system of use that you run into all kinds of problems.

But the end of the world where I think you see open source taking root is in web servers, which is a totally new application taking root out there in Internet-based computing. And as it gets better and better and sucks applications out of an internal system to a web-based model, then gradually Linux disrupts UNIX, as a new-market disruption, in those tiers of the market. But it does it not by a direct attack, but by taking root in a new application.

And similarly what it would forecast, and again I say this is just what the model says, is that in any attempt to deploy Linux right onto the desktop, there are just so many complicated and unforeseeable interdependencies when you insert a new technology into an existing system of use. So many other subtle things would have to change, and it would have to become better than the existing thing in the ways that the consumer measures goodness, that I wouldn't expect it to take root.

But because of the law of conservation of modularity, it can take root out in the new plane of competition with wireless handheld devices, because the modularity of Linux allows the device to become optimized. And then as it gets better and better, applications get sucked off the desktop into the handheld device, and that would be the mechanism by which the Microsoft proprietary operating system would get disrupted by Linux. I mean, I don't know if that's true, but that's what the model would predict.

Well, there are a bunch of other questions that we didn't answer, but I hope this gives you a flavor of at least the way we've been trying to think our way through some of these problems. And the book that we wrote about it, "The Innovator's Solution," gives the models that address the other questions. If you have any interest, they're there. And the work at our business school won't be finished for another 20 years; disruption is a process, not an event. So you could contact me at Harvard if you have any other questions.

Anyway, criticisms or barbs or…. Yes?

(Question by a speaker too distant to be heard clearly.)

Yes, that would work, kind of, in that it's so much better than nothing, and it's a new system of use to an extent. The problem is if, in fact, they bump into existing systems of use, that is, application software that just doesn't run well on it, then you would expect them to run into problems. But China is a much easier place to deploy it than existing markets. Yes?

(Question by another speaker too distant to be heard clearly.)

Well, that’s a great question. Can I give you a longer answer than you asked for? When we were having this case discussion about the disruption of Harvard Business School, I walked into the classroom and took a vote of the students, and there were 100 students in the class. “How many of you think Harvard’s in trouble?” Three students raised their hand; 97 — there were no abstentions — voted that, “Don’t worry, be happy, this could never happen to our school.”

And so I asked one of the three who were worried why he was worried, and he said, "Well, there's a real pattern here," and he listed out the elements of the pattern and then said, "Now look in the case: everything that happened to all of these other people is happening to us. That's why I am worried." So then I turned to the don't-worry-be-happy crowd and said, "So why aren't you worried, given that it fits the pattern so closely?"

And everything that they cited related to the data: more people are applying to Harvard than ever before, our students are getting paid more money than ever before, we're much further ahead of Stanford in the rankings, and so on. And so we had this argument back and forth.

So then I asked one of the students who was the most vociferous defender of Harvard's invincibility, "So imagine you were dean of the school. What evidence would you need to see to become convinced that this is a problem we need to address?" And he said, "Well, I would look at Harvard's market share among the CEOs of the Global 1000 corporations, and if it starts to dip, then I'd worry."

And I said, “Well, when you saw that data, would it signal that the problem needs to be addressed, or that the game is over?” He said, “Oh, yeah, the game would be over!” And so I turned to the rest of the don’t — worry — be — happy crowd and said, “So any of the rest of you, imagine you were dean. Could you give me data that would convince you that it was time to take action?” And actually, every piece of data that they could come up with was evidence that the game was over.

And the idiot-simple insight that we got is that data is only available about the past. And yet, consider the way we teach our students: at our school we teach by the case method, and if a student makes a comment in a case discussion that can't be backed up by his analysis of the data in the case, the instructors are trained to crucify the student on the spot. And so we enshrine the virtues of data-driven analytical decision making in the way we teach at our business schools, and then our students go to work for McKinsey and carry data-driven analytics to the nth degree. But in many ways the very way we teach them condemns our managers to take action only when the game is over.

But the scary thing for Harvard is that this means we need to take action based upon a theory, not evidence. And that's how I got into this thing: well, crum, we're always using a theory anyway, because a theory is a statement of cause and effect. We just hope we have better theories.

So at the time IBM made the decision, there was no theory to guide them; there was only evidence of the past, and the evidence of the past was that the money was made in the system, not in the components. Now, if you have a theory, and hopefully it's a good theory, it allows companies who face this transition to make a decision about whether they ought to create an interdependent or a modular architecture, and when it's going to flip.

For example, look at what's going on right now with wireless access to the Internet: it's not good enough, so it needs a proprietary system. And so companies that have tried to cobble together a system around WAP standards, for example, are prematurely modular. And the companies that have an interdependent system, like DoCoMo and J-Phone in Japan, are making bundles of money, and they're also out there competing against nonconsumption for the business of teenagers. So that's what I hope to accomplish: that we can look into the future and make statements about what will happen before the data is clear, because in the past that hasn't been possible. Yes, Tim?

Tim O’Reilly: I have a question about the theory of conservation of modularity. (The rest of the question is obscured due to the distance of the speaker).

Clayton Christensen: Yeah, I think it would be, Tim. In other words, you have to think of it in relative terms. What is it in the value chain of the system that gets delivered to the customer that is not good enough? That tells you what has to be optimized, and you've got to be integrated across the interface that optimizes it, and what is next to it can be broken up. And you're thinking about it, I think, in a healthy way. Yes?

(Question by another speaker too distant to be heard clearly.)

Yes, his question is: "Does the theory predict that the profits will disappear in every market, or that they will just be made by different parties?" And it's very much the latter. In fact, for marketing purposes, we called this concept the "law of conservation of modularity." In the book, we called it the "law of conservation of attractive profits."

And again, as one layer moves towards commoditization by overshooting and becoming modular, the adjacent layers move towards decommoditization, and so the ability to make money will shift. So in the car industry, for example, the subsystem makers, the tier-one suppliers, are not yet manifestly more profitable than the car assemblers, but we should see that transition occurring over the next few years as the power in that industry shifts. Yes?

(Question by another speaker too distant to be heard clearly.)

Well, I don’t think I can repeat your statement, but I hope everybody heard it. I think the concept is that when you have a modular architecture, it truly is undifferentiable because the pieces fit together in a pattern that everybody follows, and so the product is not differentiable. And in a disintegrated industry structure, the cost structure of the company is heavily variable because you outsource the components.

And so a variable cost structure means that the scale curve is flat, so small companies and large companies have the same cost, and the product is undifferentiable, and that's why it's very hard to make money as an assembler of a modular system. In a proprietary system, it's just the opposite: the proprietary architecture yields a highly differentiable product, and it carries heavy fixed costs, because the design of a proprietary product is expensive. And that creates steep scale economics, meaning that the big players win. That's a recipe for minting money versus not making money.
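[Editor's aside: the scale-curve argument here is simple arithmetic, and a sketch may help. The numbers below are illustrative, not from the talk: when costs are mostly variable, unit cost barely falls with volume; when a heavy fixed design cost dominates, the big player's unit cost is far lower.]

```python
# Illustrative only: why a variable cost structure flattens the scale curve.
# unit_cost = fixed_cost / volume + variable_cost_per_unit

def unit_cost(fixed, variable, volume):
    return fixed / volume + variable

# Modular assembler: almost everything is a variable cost (outsourced components).
small = round(unit_cost(fixed=1_000, variable=90, volume=1_000), 2)
large = round(unit_cost(fixed=1_000, variable=90, volume=100_000), 2)
print(small, large)   # 91.0 90.01 -- nearly flat: scale buys almost nothing

# Proprietary designer: heavy fixed design cost, low variable cost.
small = round(unit_cost(fixed=5_000_000, variable=20, volume=1_000), 2)
large = round(unit_cost(fixed=5_000_000, variable=20, volume=100_000), 2)
print(small, large)   # 5020.0 70.0 -- steep scale economics favor the big player
```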

And so whether MySQL gets folded in to optimize some other application, or Linux gets folded into some other application, or they try to fold an application into the operating system, where the money will be made is wherever you're optimizing the performance of something that is not yet good enough. That's really all I can say, because that creates a condition in which a proprietary architecture will win, and that's where money gets made. So I've got to believe that a lot of money stays in the system; it just gets made at a different level.

(Comment by another speaker too distant to be heard clearly.)

This merits a microphone here.

Member of audience #1: I'm the guy from MySQL, and I think that the value is being produced in the layer above the infrastructure, in web services specifically. Look at Amazon, Yahoo, and Google, who all use standardized open source modular components that they pay virtually nothing for, and then they create a highly interdependent system with a very high value that they can sell to their customers, and they make much more money than the open source players. So I guess that's part of an answer.

Clayton Christensen: Great! Thank you. There, in the middle…?

(Question by another speaker too distant to be heard clearly.)

Northeast. Yeah. Microsoft is in the Northwest.

(Continuation of speaker too distant to be heard clearly.)

I will just conjecture, because I do not consult for them or anything like that, that NT is a disruption relative to UNIX, and that system has just been moving up-market and squeezing the UNIX and RISC world up to the high end of the market. And then Linux on web servers has inserted itself there. It's an interesting question: in the absence of Linux, I think Microsoft would've had a really exhilarating run; there was a lot of headroom above them.

Whether Linux limits that further up-market march as applications move to a web-based system, I don't know. I also think it's just very hard for me to imagine the core franchise of an operating system on a desktop being threatened in the foreseeable future, for the reasons that we've talked about. Now, the growth markets they may very well have a difficult time catching.

I also see them launching their own disruptions against other folks, which is really a great idea. They've got database software coming up against Oracle, and they've got Great Plains, and SAP needs to be disrupted: SAP is a very complicated, high-end, overshot, interdependent system, and there's a lot of growth there. So they're doing a lot of interesting and useful things.

But I think the critical thing, as per what he said, is that if they want to keep being profitable, the mindset has to be: while we milk the operating system business over here, in the future the operating system is not where the money will be made, so don't somehow try to make money off of Linux. Use Linux to optimize the next layer up, and shift how we make money there. But they're a tough company, and a good company.

Member of audience #2: My question is about switching costs. Not in the case where you start a new line of business, but in the one where a lower-cost attacker comes in. In that case, switching costs seem to me to have the effect on the diagram of moving the curves further apart, and that, for instance, is what defends Office against OpenOffice: the attempt is to tie it in and keep the switching costs very high.

Clayton Christensen: Yeah, it’s a very good point, and that’s why the cramming it into the blue space so often is expensive and fails whereas if you take root in a whole new plane of competition, you don’t run into the switching cost problem out there. It’s like this: The shortest distance between two points is not a straight line. Yes?

(Question by another speaker too distant to be heard clearly.)

Well, of course I think it's a perfect theory, or I would've changed it, right? The only way you can develop a better theory is to find anomalies that the theory won't explain. So I'll tell you what I do think about these pressures. Are you familiar with the story of the minimills? In the new book I understood that a lot better and told the deeper story: as the minimills hit the rebar market, because they had a 20% cost advantage, they made tons of money.

And, of course, the integrated mills got out of that business, because there were more profitable investments up-market. And it felt great until 1979, when the minimills finally had the whole market to themselves, and then the price collapsed by 20%. The insight I got was that a low-cost strategy is only viable as long as there's a high-cost competitor around, and so they had to move up, because that's where the high-cost competitor was.

And that’s why Dell has to race up market because it gives them the privilege of competing against Sun Microsystems. So I would say that those pressures to move up exist on everybody. Now, some companies, like in the pharmaceutical industry, they haven’t yet overshot very much, and so the forces are there, it just hasn’t played itself out.

In other industries, there are anomalies that I think you can explain, so let me tell you about one, and then there are some that I don't understand yet. Airlines. Southwest Airlines was a new-market disruption. When they started out, they were competing against buses and cars, not against United, and so just like WalMart locked up the little cities, Southwest locked up the point-to-point routes between the little cities, and they've done okay.

But all of the other discount airlines have come in as low-end disrupters, not new-market disrupters, hitting the low end of the main market. The latest is JetBlue, and they're flying in and out of mainstream airports. And it's easy for a discount airline to create a low cost structure: you just buy fully depreciated jets and hire nonunion people and so on. But notice how many discount airlines have started over the last 20 years; their average life expectancy is about four years.

When they hit the low end of the market, it looks like a classic disruption, but the problem is the major airlines' cost structure: they cannot live without the volume at the low end of the market. And so the discounters can gain a little share, and then the major airlines just set up Shuttle by United or Song and swat them down, and as soon as the discounter has gone out of business, they shut down Shuttle by United and go back up. So it's an eminently disruptable business, but the incumbents cannot flee; they have to fight. And that makes it miserable for everybody except Southwest; they're kind of in their own little world. So that's an anomaly which I actually think the theory can explain.

I can’t explain, for example, why EMC beat IBM. It wasn’t a disruptive technology, it was a better product, and yet they cleaned them out. And there are a couple like that, but I think it’s a pretty pervasive phenomenon. Religion? I haven’t applied it to churches yet, but…

It’s a great question. How does this apply to Intel? So they were getting killed at the lower end, and they then developed what they call the Celeron chip and tried to sell it through the sales force. The problem was the sales force had a problem, because when they were trying to get a design win, say, at Dell, they had to choose, “Should we spend our energy trying to get a design win for a $300 Pentium processor that gives us 80% margins? Or should we get a design win for a $50 or $70 processor that yields 30% margins?”

So they set up a separate marketing organization out of Israel, gave it the responsibility to sell the Celeron chip, and gave it a cost structure such that a $70 price point was attractive. And as soon as they put good technology in a business model that found the disruption attractive, bam! They got the low end of the market back again. Cyrix was gone, AMD was nailed to the wall, and now the Celeron is the highest-volume product in the whole line.
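[Editor's aside: the salesforce incentive in this story is plain arithmetic. This sketch just restates the figures Christensen gives; the ratio is the editor's computation.]

```python
# Gross profit per unit for the two design wins, using the figures from the talk.
pentium_profit = round(300 * 0.80, 2)  # $300 Pentium at 80% margin
celeron_profit = round(70 * 0.30, 2)   # $70 Celeron at 30% margin
print(pentium_profit, celeron_profit)  # 240.0 21.0

# Each Pentium design win is worth roughly eleven Celeron wins to the
# salesperson, so the mainstream organization rationally ignored the low end.
print(round(pentium_profit / celeron_profit, 1))  # 11.4
```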

And the next thing that is happening is the system on a chip: the modular, custom-configured processor that needs to be delivered very rapidly. They have a fab in Marlboro, Mass. with their StrongARM technology trying to do that. And I think this disruption, which is a new-market disruption, is a bigger deal.

(Question by another speaker too distant to be heard clearly.)

Yeah, they’re really…and that’s the evidence that the incumbents actually don’t need to get killed, but they actually they’ve got to set up their separate organization that can focus on the new thing. And the reason you have to set it up separately is that the new game begins before the old game ends, and if you try to do the new thing within the established when you take your off the old game, and there’s still a lot of money to be made there.

There’s a company in Santa Clara called Tensilica that I think is very interesting. In the new market disruption, what happens…I’ll give you an analogy of how this works because I think it’s a big deal for the software industry. If you go back 30 or 50 years ago in organic chemistry, if you wanted a new molecule, you had to go to the labs at Dupont, and they employed the finest organic chemists in the world. And one of them would mix up a few atoms in a beaker and heat it up and draw a fiber out and look at it under a microscope, and then go down and talk to a colleague and say, “What do you think this is?” and the colleague would say, “I have no idea, but heat it up for ten more minutes and let us see what happens.”

And the miracle molecules that we know as nylon and polyester and Kevlar emerged from an unstructured trial-and-error problem-solving process in the labs at DuPont. And nobody could do this like DuPont or like the engineers who were at DuPont; you just had to be the best experts in the world. But as they kept doing this over and over, they began to recognize patterns of cause and effect.

And when that realm of knowledge migrated out of an unstructured, experimental problem-solving mode down into pattern recognition, then you didn't have to be nearly the world's expert in order to make molecules. And so other companies like Dow and Union Carbide could get into the plastics business. And then quantum theory took over. Quantum theory allows you, and I'm going to overstate the case a little bit, to predict that if you put the atoms together in this structure in the molecule, these will be the properties of the material. And that then allows you to write a piece of software to reverse engineer it: if you need these properties, this is how the molecule has to be structured.

And so now you really just need a BS in Chemical Engineering from Montana State and a good piece of software, and you can design better molecules faster and at lower cost than the world’s experts could a generation ago because it’s gone from unstructured problem solving to pattern recognition down to a rules base.

And in many ways the progression from interdependence to modularity follows that same path. In a world of interdependence, you just need the best experts in the world, and through trial and error and working things out, they create great systems. Modularity that defines how the pieces snap together is only possible once you're down in more of a rules-based world, and that allows people with much less skill to do very sophisticated things, because the technology is more rules based.

So what Tensilica is doing is making the software that allows their customers to create custom-configured processors tailored exactly to each situation. Their target is to make it so idiot-simple to design a custom-configured processor that even a software engineer could design a processor to optimize the application. I think that is kind of slick. Now that the functionality is more than good enough and you can back off the frontier, the rules are not yet there, but they are emerging, and it could really change a lot of the world for somebody like Intel.

And another thing: as the law of conservation of modularity works its way through, Intel needs to be an integrated company at the high end, because there's not a clean design-and-manufacturing interface there. But at the low end there are much better defined rules for design and manufacture, and so the fabless companies and the focused fabs are kind of taking over. As that disintegrated industry structure works its way up-market, it would be crazy for Intel to sell off its fabs. Its finance people are going to want to sell them off because they're so capital intensive, but the fab is the place where the money will be made, if they are fast. Yes?

(Question by another speaker too distant to be heard clearly.)

Her question is: "What happens if these new-market disruptors get bought by big players and by the incumbents?" It actually is a great strategy. I want to say two things. Take a company like Johnson & Johnson. As the healthcare industry gets disrupted in a pretty big way (we don't have time to talk about it), they've decided, "We're not good at getting these things off the ground," but they've got a team of people with these lenses on, going around the healthcare industry identifying little companies that are new-market disruptors.

And so a company that has a technology that will allow you to do in the home what today you have to do in an office, or to do in an office what today you have to do in a hospital, that's a new-market disruption, and they're actively trying to buy them. And then if they keep each one separate, let them have their own processes and create their own business model, and just give them money, it's a great way to generate growth. J&J now gets about $9 billion of revenue every year from disruptive companies that they bought a decade ago. If they try to fold a company into the parent, it destroys what they bought, because it has to conform to the business model of the parent.

(Question by another speaker too distant to be heard clearly.)

Can I fit the University of Phoenix into Harvard Business School's picture? Yes, it is exactly one of those things. Not good enough, but there are 160,000 students at the University of Phoenix getting management degrees of one sort or another online, and it is getting better and better every year. The students that they cater to are not people who would have gone to a leading MBA program anyway, and therefore they're really quite happy to be able to do something that's online and ostensibly not as good, but, geez! They're getting good.

(Comment by the same speaker too distant to be heard clearly.)

Well, it is a good question. The University of Phoenix is not something that Harvard should go after. Go back to the model of skating to where the money will be. See, Harvard is an integrated institution. We do the research. The research gets embodied in components that we call cases and articles. We assemble the components into courses. We produce the courses, we market the courses, we soak our students for money to finance the courses, and everything.

And the architecture of the MBA program is interdependent, and so you can’t study marketing if you don’t study product development, and you can’t study product development if you don’t study manufacturing, and you can’t study manufacturing if you don’t study cost accounting, and you can’t study cost accounting if you don’t study organization design, because management is a big hairball, and you’ve got to study all of it in order to understand any of it.

But online or corporate education is modular, and so somebody will call up corporate training and say, "Geez, we've got to give a strategy presentation to the board in June. Can you give us a week of strategy?" And it's not as good as Michael Porter's course, but it does the job. And then they learn a little strategy and go practice it, and then they call up and say, "We've got to get better at product development. Can you find us two weeks of product development?" and so on.

And on that side, in a modular world, the money is made not in the teaching of the courses or the assembly of the courses; in our world, that's where the money has been made. In that world, if Harvard could become the Intel Inside of every corporate university, and I'll put my tongue in my cheek here, if we could make it foolproof for all of these unwashed instructors in corporate training programs to teach great stuff in compelling ways, we would create so much growth in education, and we would still make all the money in the industry. It is a very different business model than what we have now, but that would be the right play. We just don't have any data to support that.

Should we do just one more question? Well, we better not. Talk to you afterwards.

Guys, thanks for wasting the end of your day with me! You’re very helpful.

Doug Kaye: Thank you for listening to this special edition of IT Conversations. This edition was recorded at the Open Source Business Conference held in San Francisco in March 2004. For more information regarding this and future OSBC conferences, please visit www.osbc.com. IT Conversations is a production of RDS Strategies LLC. This program is copyright 2004 by the presenter, the Open Source Business Conference, and RDS Strategies LLC. My name is Doug Kaye, and I hope you will join me next time for another edition of IT Conversations.