When I applied the modularity dichotomy to smartphone operating systems, several implications came to light. One was the question of whether the market has reached the point where products are “good enough” and the speed of innovation becomes less important than price. Another was: will integrated vendors be able to hold on to a healthy share of growth against non-consumption?
Now I bring up another implication of modularity: the “law of conservation of modularity”.
Clayton Christensen illustrated this as follows:
If you are writing a software application to run on top of a platform, are you permitted to modify the platform to make your application run better?
If the platform is interdependent, changing it could cause unintended side-effects, so you will not be allowed to make changes. If the platform is modular, changes are possible and you will have access to the code.
The reason is not political. It’s that in the case of interdependence, the application has to be suboptimized and conform itself to the platform so that the platform can remain optimized (to its goals). The platform needs to remain optimized because it needs to keep improving (since it’s not good enough).
In the case of modularity, the application can be optimized and the platform can conform itself to the application, because the application needs to keep improving while the platform is good enough and does not.
Conformability is what constrains up-market progress.
This also holds in hardware architecture, as was plainly evident in the WinTel era. During that time the microprocessor and the operating system had proprietary, interdependent architectures even while Dell’s product had a modular architecture.
So the microprocessor wasn’t good enough: the line widths on the circuit were not good enough. That meant the desktop computer had to have a modular architecture and conform itself to allow the microprocessor to be optimized.
This is what is meant by conservation of modularity: one side or the other of a value chain boundary needs to be modular and conformable to allow what’s not good enough to be optimized. The key question is what’s not good enough: the device or the platform (the whole or the part)?
How does this apply today in the post-PC era?
If you think about it in a hardware context: because historically the microprocessor had not been good enough, its internal architecture was proprietary and optimized, and that meant the computer’s architecture had to be modular and conformable to allow the microprocessor to be optimized. But in a little mobile computer like the smartphone, it’s the device itself that’s not good enough. You therefore cannot put a one-size-fits-all Intel processor inside a BlackBerry or iPhone; instead, the processor itself has to be modular and conformable, carrying only the functionality that the BlackBerry needs and none of the functionality that it doesn’t (hence the dominance of the modular ARM architecture). So again, one side or the other needs to be modular and conformable to optimize what’s not good enough.
Now today (and, I believe, for some time to come) in smartphones, or in any situation where logic gets embedded in a system, it’s the device itself that is not yet good enough. Therefore, you cannot back off the frontier of what’s technologically possible: the device has to be optimized with a proprietary, interdependent architecture. That means the processor and the operating system inside a smartphone have to be modular and conformable to allow the device to be optimized.
In order to believe otherwise, you have to convince yourself that smartphones, as a system, have reached the limit of what is technologically possible.
The source is a presentation by Clayton Christensen at the Open Source Business Conference in 2004.