iTunes now costs $1.3 billion/yr to run

The iTunes store continues to grow. The data Apple published at its last event included the following:

  • 15 billion iTunes song downloads
  • 130 million book downloads
  • 14 billion app downloads
  • $2.5 billion paid to developers
  • 225 million accounts
  • 425,000 apps
  • 90,000 iPad apps
  • 100,000 game and entertainment titles
  • 50 million Game Center accounts

As this data is added to the existing record and cross-referenced, additional insight into the economics of iTunes emerges.

Since we know something about the average price of songs and apps, and we know the split between developers and Apple (and, roughly, between music labels and Apple), we can make a rough estimate of the amount Apple retains to run its store.

The following chart shows iTunes “content margin” by month. This margin is what Apple “keeps” after paying content owners but before paying other costs, such as payment processing and delivery/fulfillment, which should be accounted for as variable costs. Strictly speaking, this margin is not “gross margin”.

If we add the content margins from music and apps and assume the store runs at break-even, we can get an idea of what it costs to operate. The latest number is $113 million per month (out of a total income of $313 million/mo.), which implies over $1.3 billion per year.
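As a back-of-the-envelope check, the break-even arithmetic can be sketched in a few lines. The only inputs are the $313 million/mo. income and $113 million/mo. margin figures above; the break-even assumption is what lets us read the margin as a cost:

```python
# Break-even arithmetic behind the headline figure.
# Under the break-even assumption, the retained "content margin"
# approximates the cost of operating the store.

total_income_per_month = 313e6     # USD, music + apps combined
content_margin_per_month = 113e6   # USD retained after paying content owners

annual_operating_cost = content_margin_per_month * 12
margin_ratio = content_margin_per_month / total_income_per_month

print(f"Implied annual cost: ${annual_operating_cost / 1e9:.2f}B")  # ≈ $1.36B
print(f"Retained share of income: {margin_ratio:.0%}")              # ≈ 36%
```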

Much of that cost goes into serving the content (traffic and payment processing). Some goes to curation and support. But it is very likely that plenty is left over to be invested in capacity increases.

I would like to hear alternative opinions, but my guess is that much of the capex that went into the new data centers Apple built came from the iTunes operating margin.

  • Why do you assume that the store is running at break even?

    • The Apple execs are always saying that.

      • Ok, this is from your link, written February 2010:
        "[Apple CFO] Peter Oppenheimer
        …Regarding the App Store and the iTunes stores, we are running those a bit over break even and that hasn’t changed. We are very excited to be providing our developers with a fabulous opportunity and we think that is helping us a lot with the iPhone and the iPod touch platform.
        As Oppenheimer says, this isn’t a new development. Apple (AAPL) has always maintained that iTunes wasn’t a real money maker. It’s supposed to help sell iPods, iPhones, and soon, iPads.
        For years, industry observers figured that as the iTunes business scaled, this would change. An alternate theory, held by some of Apple’s media partners–the company was being overly modest about its success."
        I am with the industry observers: the iTunes business has scaled, and it is no longer just a bit above break-even.
        Operational costs could not have grown at the same rate as income; they should climb more like a staircase.

    • David V.

      It's probably an approximation based on Tim Cook's repeated assertion (during quarterly conference calls) that the iTunes store operates slightly above break-even.

  • Ondrej

    Related to this, there's an interesting source of cash not discussed very often: Apple pays both developers and labels (or other rights holders) at specified intervals, so it holds onto the customer payments in the meantime. While that amount might not be large compared to the cash generated by its income, it is still worth looking into because, like any other money source, it can be invested or otherwise put to use.

    • Walter.French@FAFAdvisors.Com French

      Apple is known for extremely conservative investments overall, so “escrowed” monies probably sit in extremely low-interest money market funds.

    • Yeah, Walter's right, but also – lump payments have more to do with transaction costs, which can be a huge percentage for micro-transactions. The float here is not really relevant compared to Apple's ~$60B in cash.

  • Eas

    @Ondrej, that float works both ways. Apple doesn’t bill customers immediately because it tries to bundle multiple iTunes purchases into a single credit card transaction to keep processing fees down.
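    The fee arithmetic behind that bundling is easy to illustrate. The per-transaction fee schedule below is a hypothetical card-network-style rate, not Apple's actual terms:

```python
# Why bundling micro-transactions cuts processing costs.
# The fee schedule is an assumed card-network-style rate,
# not Apple's actual negotiated terms.

def card_fee(amount: float) -> float:
    """Hypothetical per-transaction fee: $0.30 fixed + 2.9% of amount."""
    return 0.30 + 0.029 * amount

purchases = [0.99] * 5  # five 99-cent songs bought over a few days

fees_separate = sum(card_fee(p) for p in purchases)  # billed one by one
fees_bundled = card_fee(sum(purchases))              # billed as one charge

print(f"Separate: ${fees_separate:.2f}, bundled: ${fees_bundled:.2f}")
# Separate: $1.64, bundled: $0.44
```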

  • newtonrj

    Data centers such as Maiden, NC are long-term CapEx projects of 3–5 years. Funding is usually forecast against operating margin into a capital-expenditure portfolio project.

    Additionally, my assumption is that Maiden is not simply a stand-alone data center but also has failover capacity elsewhere. Losing Maiden shouldn't mean a loss to iCloud. -RJ

    • Actually, DCs serve for far more than 5 years; the only thing trashed after 3–5 is outdated hardware, mainly for running-cost reasons.

      So you can count the DC building, cooling, electricity and all that over 20 years instead.

  • Stefan Youngs

    It's very much to Horace Dediu's credit that comments on his articles demonstrate a quality of intelligence and perception and seem to be devoid of the shrill bias that passes for comment on many other blogs and articles. Good job Mr. Dediu and your contributors. May it long continue.

    • Well said. One only has to listen to Episode 1 of The Critical Path to understand why!

    • huxley

      Probably not much. I've seen articles claiming that as few as 70% of gift cards are ever redeemed. Working on the assumption that redemption rates can be estimated at the macro scale, you can avoid eating into your margin even if the cards are sold at a discount to face value.

  • Childermass

    I wonder if Tim Cook was referring to 'a bit over break-even' as a model or as the current state of affairs. If it was the model, then Apple would have had to ramp up expenditure to avoid earning a better return, since the more recent history shows a better margin. If it was merely how things stood at the time (continued below)

  • Ziad Fazel

    Fine work again, Horace. Bit of clarification needed, because it appears you are mixing two kinds of analysis.

    For an Income Statement kind of analysis based on "break-even", the accumulation of "content margin" – aka profit contribution by product line – toward a capital investment would not appear. Capital investment is an application, or sink, of cash that does not appear on the Income Statement; only its depreciation and amortization do, and they do not kick in until the new investment starts generating revenue.

    For a Cash Flow kind of analysis, then definitely the accumulation of cash generated by the "content product line" or iTunes Store for capital investment would appear. Cash generated by the Operations section is applied into the Investments section, regardless of whether the investment has been put into use yet.

    So when CFO Oppenheimer says the iTunes Store is operating around break-even, he is likely talking on an income statement basis, against the depreciation and amortization of the iTunes Store, not its cash flow, which may be very heavily negative during times like these of heavy investment.

    People might take him out of context as "iTunes lost $4 billion last year!!!" when that may just be the cash flow from investment, on which we expect Apple to earn its usual wonderful returns.

    • Childermass

      CapEx. First, we would need to know what else the new facility does. If it is solely to service the iTunes store then its depreciation will go there, but maybe it has many uses. Second, we need to know Apple's policy on depreciation. They seem like the kind of cautious business that would like to write it off as fast as possible.

      • Ziad Fazel

        Childermass, we can find that information in Apple's financial statements, segmented by Apple's geographical basis in some cases: policy on depreciation, amount of depreciation in each period, amount of capital investment in each period, and gross and net asset values after depreciation and amortization.

        However, I think you are making the same mistake as Horace by deducting the capital investment in each period from the revenues. There is a delay between when the investment is made, and recorded on the cash flow statement, and when its value is depleted over its useful economic life as depreciation on the income statement in the years that follow.

        Apple is very much in an investing and growth period, which it describes repeatedly in its financial statements. It has been investing in PPE at more than twice the amount of annual depreciation, for 2010, 2009 and 2008. See also Note 4 on page 66 which shows the gross PPE at end 2010 as $7.2b with a net $4.8b still to be depreciated or amortized.

        [p49] "Payments for acquisition of property, plant and equipment (2,005) (1,144) (1,091)"

        [p56] "Property, Plant and Equipment

        Property, plant and equipment are stated at cost. Depreciation is computed by use of the straight-line method over the estimated useful lives of the assets, which for buildings is the lesser of 30 years or the remaining life of the underlying building, up to five years for equipment, and the shorter of lease terms or ten years for leasehold improvements. The Company capitalizes eligible costs to acquire or develop internal-use software that are incurred subsequent to the preliminary project stage. Capitalized costs related to internal-use software are amortized using the straight-line method over the estimated useful lives of the assets, which range from three to five years. Depreciation and amortization expense on property and equipment was $815 million, $606 million and $387 million during 2010, 2009 and 2008, respectively.

        Long-Lived Assets Including Goodwill and Other Acquired Intangible Assets

        The Company reviews property, plant and equipment and certain identifiable intangibles, excluding goodwill, for impairment. Long-lived assets are reviewed for impairment whenever events or changes in circumstances indicate the carrying amount of an asset may not be recoverable. Recoverability of these assets is measured by comparison of their carrying amounts to future undiscounted cash flows the assets are expected to generate. If property, plant and equipment and certain identifiable intangibles are considered to be impaired, the impairment to be recognized equals the amount by which the carrying value of the assets exceeds its fair market value. The Company did not record any significant impairments during 2010, 2009 and 2008."
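        The straight-line method quoted above is simple to illustrate. The asset cost below is hypothetical; only the five-year life comes from the equipment upper bound Apple cites:

```python
def straight_line_depreciation(cost: float, salvage: float, life_years: int) -> float:
    """Annual depreciation expense under the straight-line method."""
    return (cost - salvage) / life_years

# Hypothetical data-center equipment: $10M cost, no salvage value,
# depreciated over the five-year upper bound Apple cites for equipment.
annual_expense = straight_line_depreciation(10e6, 0, 5)
print(annual_expense)  # 2000000.0 per year
```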

      • Childermass

        Thank you.

        I do not think I have suggested deducting capex from revenues; rather I was asking, firstly, what proportion of the new centre is allocable to iTunes and, secondly, how aggressively Apple will write the asset down. All of it or some of it? Three years, or five? For the iTunes P&L the differences could be large.

        We are working in a bit of a fog, but the end game is to find out how profitable iTunes is.

      • Ziad Fazel

        Enjoying our conversation, Childermass.

        The new centre would not be a single asset. It would be composed of thousands of assets, all in different classes with their own depreciation rates: land, building, furniture, computers, capitalized software development, etc. Those rates are defined partly by IRS for taxes, and partly by the SEC for GAAP reporting, and Apple would have to be as consistent in their application for the assets in the new centre as for its existing business.

        We haven't seen the new centre in Apple's income statements yet, but it is part of the PPE line in the Cash Flow Investing section.

        How Apple allocates the depreciation, and overhead in general, to its different product lines will be interesting. Just off the top of my head, I would do it by throughput, allocating the costs of iCloud to the iOS and Mac divisions by volume of transactions with each device. Or maybe by revenue from iOS apps v OS X apps.

        Apple can run iTunes at break-even because it generates its own sales for its content, like Amazon's formidable servers. So iTunes' ability to drive more sales of Macs and iOS devices is pure gravy. Not like Microsoft's Online Services Division, whose massive perennial losses, despite search ad revenue, are very weakly justified (not at all to me) by stimulating or enhancing Windows or Office. Heaven forbid, if Apple pulled a Vista and Mac sales tanked for a couple of years, the iTunes Store would continue at break-even. Without the cash from Windows and Office, Bing and most of Windows Live would burn through Microsoft's cash reserves until the board fired Ballmer and put the division out of its misery.

        I am sure Apple will segment the information from the new centre internally, and sift it 5 ways from Sunday to manage the business profitably. But like Google, Facebook, and Amazon, they won't share detailed information about the cost of their centres externally.

        Unlike iTunes, I think the new centre will be a large net expense, intended to drive sales of iOS and Mac devices. It may be divided as part of the P/L of those business units, and app sales may move from iTunes to the iCloud P/L. But yeah, we will be in a fog about that because Apple usually resists analyst questions about the profitability of particular product lines.

  • Childermass

    (continued) then the new business since then will have made a proper contribution.

    It is hard to imagine Apple deliberately increasing cost, or running a long-term break-even activity.

    Maybe looking at February 2010 as the real cost base for that level of business would allow us to estimate how profitable the business actually is eighteen months later.

    • If the CFO said it in a conference call (and he has more than once), he's telling the truth under penalty of law. He'd be going to country club prison, but still prison, if he said anything fraudulent. If, at the next conference call he says something different, believe him. He has no reason to lie, and many reasons not to lie.

      • Childermass

        I think you have missed my point. The statements we are referring to were made nearly eighteen months ago. Maybe the business is more profitable now, as HD's graphs suggest.

      • Ziad Fazel

        Marcus, neither Childermass nor I is accusing the CFO of lying.

        My point is that by using the word "break-even" he is talking about an Income Statement, where you would see the depreciation on the assets as an expense. Nothing misleading there.

        I believe Horace is improperly deducting a Cash Flow item – capital investment – from the iTunes revenues to reach the break-even. That is where the confusion is coming from.

      • That’s a naive view. Every single accountant is a professional liar, CFOs as well.

        There are many ways to present the truth, and saying iTunes is a free service (lol) not generating incredible margins (lol) is pure PR.

  • Tatil

    $1.3 billion is a lot to run a business at cost. I understand that it is part of an ecosystem, so it helps sell iPods, iPhones, iPads and maybe even some Macs, but then the gross margins in those business lines are misleading, and they should somehow include iTunes operations as part of their cost.

  • Eric D.

    If Apple is breaking even, then iTunes is a very good deal for them. It has sparked a renaissance in software development and an explosion in the use of apps. Because iTunes eliminates so many middlemen – manufacturers, retailers, DRM coders, packagers, traditional marketers – software is available to consumers at all-time low prices, even as more indie developers are thriving than ever before. The mobility and simplicity of the iOS line, combined with the lowered monetary threshold to develop and launch an app, has brought the power of computing into many new spheres of life. Even Apple is surprised at the success and growing ubiquity of the iPad.

    If Apple needs 1.3 billion dollars to maintain an environment where the efforts of developers are sheltered, then they a) attract more developers, which in turn b) expands their potential consumer and enterprise market and c) raises the bar very high for any competitors. So far, only Amazon and Google seem to have the financial and technical means to challenge Apple in this area. But it's just a loss-leader for Apple, since the bulk of their profits come from selling hardware. Actually, if it breaks even, it's not even a loss-leader.

    Maintaining the iCloud will no doubt be an order of magnitude higher in cost and complexity. But again, that means the competition has an even loftier bar to reach. Or to use Horace's military metaphor, a wider moat to bridge. The iCloud is going to be a big part of the solution for the millions of us who are overwhelmed by our ever-growing mountain of media and data. But Apple's solution also represents a huge economy of scale. By recognizing the titles in your iTunes library and matching them with virtual, high-fidelity avatars on-line, Apple eliminates at one stroke a tremendous potential for redundancy. Google and Amazon -must- follow this model, or they will be drowning in duplicated media files down the road.

    I like Apple's chances in this new battle zone. Thanks, Horace, for bringing some light into this new sphere.

    • nice words, but you’re mostly wrong
      1) iOS is a piece of crap, and so is android, both are simple on the surface and crappy below, both are spyware
      2) software has never been as expensive as it is with apps, where a single feature costs you 3 dollars – still better if you only need that one feature though
      3) apple doesn’t have any technical strength, they’re a marketing machine, they haven’t demonstrated any technical superiority in 15 years
      4) as soon as a serious competitor arrives in the mobile sector, greedy models such as apple’s and google’s will crumble against a technically superior and cheaper alternative.
      5) google and amazon and everyone in the world use block-level deduplication and single instance storage, so nobody lives in your duplicated media file world.
      6) money wise, apple is on the down slope, big time. The ipod>iphone>ipad>icrap era is at an end and they have nothing up their sleeve

      • I’ve never heard of any modern filesystem that does block-level deduplication. *Certainly* not in the consumer market. HFS+ doesn’t do it – that’s iOS and OS X right there. NTFS is a crap filesystem and definitely doesn’t do this – Windows there. And ext4 doesn’t do this either – GNU/Linux. People are using btrfs more and more, though – and btrfs does the exact opposite (it does data duplication). Pretty much every enterprise is using GNU/Linux, and they’re using either ext4 or btrfs, both of which I covered above.
        If anything, people do more duplication, with RAID (e.g. btrfs, hardware RAID, LVM). If you’re Google, and you only store someone’s emails on one disk to save space, you’re an idiot. Because what happens when that disk fails? Your customers get angry.

        Not only that, but duplication of media isn’t just about block-level optimization. It’s also annoying for users to sort through duplicate files even if those files are the same on disk.

      • It’s called ZFS.
        You’re welcome.

      • Okay, so there’s maybe one filesystem that does block-level deduplication (and note that ZFS is largely superseded by btrfs nowadays). You still haven’t addressed any of my other points, especially the fact that you shouldn’t necessarily be doing block-level deduplication at all.

      • Dear Alex, I have spent a lot of time studying modern enterprise storage, built my own 1M-IOPS flash NAS system, and I can tell you that ZFS is still on top in that area, which is about the only area where it matters, i.e. centralized storage.

        Block-level deduplication yields excellent ratios on any kind of large amount of data.

        Running ZFS on your web server or desktop machine is indeed not that great, but on a storage appliance it’s a must.

        SIS (single-instance storage) and other ways to avoid duplication are indeed useful, but for such systems they are complementary.

        The other very important feature of ZFS is compression, as many files are still in uncompressed formats and thus see good results.

        ZFS isn’t just another file system; it’s the only modern, kick-ass file system that has seen widespread enterprise use, and if you knew something about storage you wouldn’t diss the Z.

        BTRFS may see victory some day, but right now, it’s nowhere close to prime time it seems.

        Now, of course, if you’re talking about file systems for your own personal machine, I bet you won’t ever see the difference between indexed NTFS and any of the others for standard use, so who cares?

      • Indexed NTFS? Uhhhh… UNIX permissions??
        Also, just for the record, btrfs does compression too.
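        Whatever the merits of either side in this thread, the block-level deduplication being argued about reduces to content hashing; a minimal sketch:

```python
import hashlib

def dedup_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and store each unique block
    once, keyed by its content hash. Returns (store, index): the unique
    blocks plus the hash sequence needed to reconstruct the data."""
    store, index = {}, []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # keep one copy per unique block
        index.append(digest)
    return store, index

# Two identical 8 KB payloads deduplicate to a single stored block.
payload = bytes(8192) + bytes(8192)
store, index = dedup_blocks(payload)
print(len(store), len(index))  # 1 4
```

Real systems such as ZFS do this at the filesystem layer with far stronger bookkeeping; this only shows the core idea.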

  • Wow, imagine how much it generates if it cost this much to run.

    • Ian Hawley

      You can see how much it generates for developers. Take that figure as 70% and you can calculate the 30% cut that Apple takes from sales.

      However, the ecosystem exists as an attractor so that people will buy iOS devices. It is hard to say what the tangible worth of the store's existence is, but I would guess that owning an Apple product is greatly enhanced by it, and that the success of iOS devices is, in large part, down to the success of the store and the wealth of stuff you can buy from it.
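      That arithmetic, applied to the cumulative figures at the top of the post, looks like this (the 70/30 split is the publicly stated App Store revenue share):

```python
# Developer payouts imply gross app revenue and Apple's cut
# under the standard 70/30 App Store split (figures from the post).

paid_to_developers = 2.5e9  # USD, cumulative 70% share paid out

gross_app_revenue = paid_to_developers / 0.70
apple_cut = gross_app_revenue * 0.30

print(f"Gross app revenue: ${gross_app_revenue / 1e9:.2f}B")  # ≈ $3.57B
print(f"Apple's 30% cut:   ${apple_cut / 1e9:.2f}B")          # ≈ $1.07B
```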

  • Joe

    I was wondering whether the relationship between hardware sales and iTunes sales is discussed anywhere, as well as the per-sale margin for each.

  • I am confused by Mr. Dediu's music content split of 90% music content / 10% Apple. As a music content owner with a direct contract, daily experience and accountings from iTunes, along with most if not all published payout numbers in the press, the split is 70% to music content owners and 30% to iTunes. Am I misreading Mr. Dediu's assumptions? And if not, how does that affect his analysis?

    • asymco

      Thanks for pointing this out. I was using an assumption that the split was 90:10. If your experience is 70:30 then I'll take that as valid data.

      I'll revisit the model to see how it affects the curves. My guess now is that the music business will be materially higher relative to apps, but the slope of app growth will still suggest a cross-over point in the near future.
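      The sensitivity is easy to see in isolation (the monthly music revenue below is an illustrative placeholder, not a disclosed figure):

```python
# How the music "content margin" responds to the assumed split.
# The monthly music revenue is an illustrative placeholder.

music_revenue_per_month = 100e6  # USD, hypothetical

margin_90_10 = music_revenue_per_month * 0.10  # original 90:10 assumption
margin_70_30 = music_revenue_per_month * 0.30  # split reported by the commenter

print(f"{margin_70_30 / margin_90_10:.1f}x")  # 3.0x -- the music margin triples
```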

  • ber

    Impressive number – yet a bad product. I think it is an annoying bottleneck from a user perspective. So much so that I even heard of people giving up their iPhones for an Android device just to get rid of iTunes.

    • sdasg

      With iOS 5, iTunes will not be a problem anymore.

  • Ludovic Urbain

    Their store doesn’t cost that much to operate; they’re reinvesting all of the profits into something else.

    I.e., a few billion MP3 and app downloads don’t cost much if you have your own data rooms.

    It’s basically like saying that 30 billion downloads (an extremely rough estimate, probably 4x bigger than reality) of 20 megs each, plus processing 30 billion transactions (surely far fewer – nobody buys all those MP3s one by one), would cost $1 billion (say all their marketing and dev cost $300 million, and we’re already inflating the figures way too much), which is ludicrous.

    For a transaction, PayPal charges 3% and makes money, so let’s remove 3%; that leaves $970 million.

    For the rest … seriously if megaupload can live with a little advertisement when people download 200 megs, I’m pretty sure 30+ cents for 20 megs is beyond 1000% profit.

    Hell I could have my servers serve 20 gigs for 30 cents …

    And then … there’s the part where in fact that apple store and that iTunes store are in fact selling tons of apple iDevices just by existing and actually bring in much more than their own sales.

    The thing simply is that music is more expensive with iTunes than on CD, and Apple doesn’t want too many competitors in this ridiculously lucrative business, so they keep on downplaying the ridiculous profit rates.

    • Ian Hawley

      You grossly underestimate the infrastructure Apple has in place. I am an iOS developer; if you’ve ever logged into the developer portals – iTunes Connect, for instance – you will see a LOT of machinery behind the curtain.

      Your comparisons are crazy. A dumb upload site backed by S3, funded by advertising, is easy to make and requires little maintenance. There are hardware costs; even if Apple were backed by AWS, the cost of running a scaling web infrastructure – a store and developer portals, on demand, always on, with millions of transactions an hour – is intense and VERY expensive.

      Yes, Apple sell devices; they are a HARDWARE company. These numbers are about what it costs to run their ecosystem. Clearly the ecosystem is an attractor to the hardware, but the point is, it’s not cheap to run.

      You think Apple are making the numbers up? They’re a public company, they can’t just pluck figures out of the air.

      Your maths, costs and comparisons make sweeping assumptions that have no basis in fact and show no knowledge of the underlying infrastructure or technology. Or, if you had any, you would surely see how many holes are in your argument.

      • I believe you grossly overestimate your understanding of IT.

        But fear not, most IT professionals are like you.

      • Ian Hawley

        Care to qualify that?

        Seems your response is more of a dig than any kind of intelligent debate.

      • If you believe it can cost anywhere close to that price, you must lack severely in the areas of hardware design, low-level coding and general solutions design as well as procurement.

        For a service as simple as iTunes and a budget as huge as the one mentioned, investing a couple millions in finding the most optimized architecture, and another few millions in rewriting everything in C on a stripped-down *nix will yield a level of performance above 50% of pure hardware speed, meaning there will not be large server costs at all.

        The main price element will be internet bandwidth, but since 95% of the bandwidth will be used by the top 5% content, it makes sense to factor in heavy CDN use which will drop real bandwidth costs closer to 10% than 100%.

        For any large scale service like iTunes, all costs are eclipsed by hardware and bandwidth costs.

        Considering fast 3-year cycles, how many downloads and payments do you think a $3B infrastructure can process?

        30 billion downloads of <20MB would net you 600PB of traffic yearly… which on a CDN wouldn't even cost you $1M.

        Sure you need to take the rest into account, but it can be done for less than $100M imo.
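        The traffic arithmetic in this comment checks out on its own terms (the download count and file size are the commenter's assumptions, not reported figures):

```python
# Check of the commenter's traffic arithmetic.
# Both inputs are the commenter's assumptions, not reported figures.

downloads = 30e9       # claimed download count (rough estimate)
avg_size_bytes = 20e6  # "<20MB" taken at the upper bound

total_petabytes = downloads * avg_size_bytes / 1e15
print(total_petabytes)  # 600.0
```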

      • Ian Hawley

        “For a service as simple as iTunes”

        You lost all credibility at this point.

        Your numbers are plucked out of thin air. These are

      • I just think you lack perspective on complexity.

        About the numbers, if you want me to write the 60 pages it would take you to understand them, I’ll do it for only 5K.

      • Ian Hawley

        “I just think you lack perspective on complexity”

        This is my point. You have said above that you could replicate the entire infrastructure and pay all the costs to replace iTunes for $100 million.

        And… you think this sounds sensible.

        I wouldn’t trust you to accurately count your own toes, I’m hardly going to pay you 5K for 60 pages of conjecture.

        In all of the above you seem to assume no hardware costs, no websites or content to maintain, no staff to pay and no buildings to house the staff.

        Apple’s ecosystem = $100 million. And you think I lack perspective on complexity.

      • $1M CDN=all downloads
        $99M infra=shop, sites, people, hardware

        With $1M in hardware per year (i.e. $3M over 3 years), I can build you a system that does over 100 Million IO operations per second, with redundancy, deduplication and of course some processing (but we all know IO is the bottleneck).

        I believe that’s already far more than you could ever need for a thing like iTunes, where no heavy lifting happens ever.

        With $98M, I think you should be able to find and house a dozen excellent programmers, a dozen designers and a bunch of people to fill in the content without too much trouble, right?

        I’d say that if you have the time and interest to go present such a plan to Apple, $5K for the plan is pretty cheap.

        You should really purchase it right now !

      • Ian Hawley

        Oh I think you should present your figures to Apple and see their response.

        Just out of interest, I looked at the list pricing for 1 million DynamoDB operations at 1 KB/transaction, 1 TB of data/month, and 1 million reads and writes per second. The monthly bill from AWS is ~$600K. Just to handle simple database transactions.

        Now you probably wouldn’t do it like that, but 100 Million IOPs/Second?

      • Why would you use Amazon’s overpriced services if you’re not doing small-scale early-stage startup stuff ?

        They cost roughly 20 times the price of real servers, and far more when you look at pure performance as you just did.

      • Ian Hawley

        Industry standard benchmark?

        For your ‘costs 20 times the price of real servers’ [Citation Needed]

      • I don’t think you’re reading my posts actually.

        And remember, “AWS cloud engineer” means “AWS service user”, there’s no need for cocky titles when all you do is use what others created in the way they intended it to be used.

        Lastly, I did talk extensively about the role of optimization in cost reduction, which you seem to be forgetting completely.

        AWS kills a bit of performance, their choice of server configurations is not optimal for your workload, your service doesn’t use a stripped down optimized unix, I’m pretty sure your code is far from being optimized, and may not even be in C.

        In large scale projects, optimization has a major impact on costs, and you can be sure that your service is (with good reason) much more inefficient than any large scale service.

      • Ian Hawley

        “…all you do is use what others created in the way they intended it to be used”.

        Again you underline how little you appreciate the complexity of the problem. I used the term ‘Cloud Engineer’ in the same way someone else might term themselves ‘Database Engineer’. I also used it to illustrate that I am talking from a position of knowledge. I wasn’t being snooty as you seem to think; I was using a legitimate title to describe the work that I do.

        I used the title to give context to the kind of programming that I do. Because, like the database engineer, there are different problems to solve. Obviously many engineering problems are the same, but database programmers have to think about transactions and rollback for instance, and cloud engineers have to think about an application running remotely: not only running on many different machines, but having the whole solution, the code, the configuration, split across many machines, with much more infrastructure, such as subnets and VPCs, that a regular .Net engineer may never have to deal with. And that’s before even getting near scaling, redundancy, security and availability.

        We’re all using technology stacks written by other people in the way those stacks were intended to be used. It doesn’t mean the products created are any less complex because they were built with appropriate tools. .Net for instance is a great framework, but I am certainly not going to be so rude (unlike you) as to suggest that all .Net engineers do is “use what others created in the way they were intended to be used.”

        We build on other tools/frameworks because by doing so, we can be more productive. It’s also for this reason that many people use C#, Java, JavaScript for that matter. C is not necessarily the best language for cloud computing. It is efficient, yes, but many people will tell you that Scala is better suited, especially when trying to leverage that infinite compute power people talk about. Perhaps were companies coming to the cloud as startups they would use Scala (or C or C++), but the reality is that companies have a set of engineers and a set of legacy programs. It is not efficient to sack all your engineers in favour of Scala or C/C++ ones because you think that will be more efficient. It is also naive to assume you know where you need to be efficient. Also, ‘best’ is a hard thing to quantify really. We should do work that meets the acceptance criteria for the problem we are solving. Part of those criteria might be being in market quickly and not five years from now. In fact, it is very rare that a software product will take such a long time to come to market, but then we are getting into the realm of a BDUF versus Agile conversation and, well, this is already far too long a response.

        The beauty of scaling systems however, is that you can always add more machines. The advice we have adhered to is, worry about the electric bill later – because then you’re spending a lot of money in the cloud and you know what the problems are (don’t be foolish and think you will know ALL of these before you make your services) and you are dealing with a success: don’t spend 5 years making the most efficient system, only to find that 50 people use it. Don’t be afraid to iterate on your solution.

        “AWS Service User”? AWS is a little more than half a dozen REST calls. Have you actually USED AWS?

        AWS produce a number of publications on their services. Check these out, they are not a few A4 sides (for the most part), and it looks like there are 54 ‘books’ available published by AWS, some of which I have read.

        “AWS kills a bit of performance, their choice of server configurations is not optimal for your workload, your service doesn’t use a stripped down optimized unix, I’m pretty sure your code is far from being optimized, and may not even be in C.”

        Lord, AWS works. It goes down now and then, but it works. There are reasons they are the industry leaders here. Your suggestion that you could do better is ridiculous. The speed of the OS is nothing to the speed of I/O, which I think we agree here is king and is your main bottleneck, so using a cut-down Unix OS and writing everything in C is going to take you a lot longer and for probably no tangible benefit. It is received wisdom to observe a performance problem and then fix it, to optimise at the end. Sure, get the right approach to begin with, but prove it if necessary.

        Also, take a look at this:

        I haven’t read all of this, but the reality is that C# isn’t a million miles behind C++ and that the convenience of working in C# is often warranted. Some of the conveniences being the extensive .Net framework, third party libraries and the ease of testing and maintenance.

        If we had to pick the most efficient thing we’d all be programming in assembler and circumventing the OS, but it would take orders of magnitude longer to produce working software AND it would likely be far less stable.

        At last year’s AWS conference in NY, the Netflix CEO said that “Netflix is a service that writes logging and occasionally streams some video.” This statement goes some way to explaining the complexity of cloud computing. You are likely to generate terabytes of logs for a large system, and not only logs but transactional data; there are so many moving parts and so many parts of the system can fail and you have to recover. It’s not one piece of software anymore that one user is using, it’s one system comprised of many pieces of software with distinct responsibilities, that millions are using. And when something goes FUBAR for user 4,300,157, the system cannot explode with stunning force for everyone else.

        Anyway, I will try and extract myself from this thread now. Suffice to say, the knowledge and experience I have means that I can say with confidence that iTunes is a very complicated system that will cost billions to run, inclusive of all costs. The idea that a dozen developers and designers (completely the wrong ratio incidentally) and a bunch of people to “fill in the content” could recreate all that is iTunes is extremely short sighted at best, not going to happen, and far from reality. The staffing numbers you suggest there are probably out by 100 or more at a very rough guess.

        Still, given the fact that you keep on with this, I can see that we will never agree on any real part of it, except perhaps to agree to differ. I say it’s expensive (because of all the above) you say it isn’t, because you can add up a few ballpark figures without any references and assume you understand the whole problem and everything it entails. Let’s be clear: I do NOT understand the whole problem either, but I understand enough of it, can see what our tiny (by comparison) service costs to run on AWS and I know that this stuff is expensive.

      • AWS is pretty unreliable for a cloud service.
        C# is nowhere close to touching C (and C++ is not as fast as C).
        You talk about RAM, I tell you AWS servers are not tailored to your needs.
        I know a lot about building on top of what others built, and I know that most of the time you need to start from scratch because they did a bad job.
        Hell, even Linux’s raid1/10 is still a failure, why would you expect some obscure library or a commercial service (AWS) to be any better?
        The speed of the OS is irrelevant to you because your whole stack is inefficient, and just speeding up the OS would do little for the total, but once you optimize your structures and code, you will discover that changing language makes a huge difference, and same goes for OS optimization, hardware selection, and all the other optimization bits.
        You may even discover that your OS has a lot of impact on your IO, once you start removing the layers of inefficiency from your stack.

        You live in a world of inefficiency, where most programmers live, thinking that running crappy code using a crappy library on top of a crappy language on top of a crappy OS on top of a random machine isn’t much slower than good code, good lib, good language, good OS.

        I live in a world where those optimizations very often yield a 10x better efficiency or more (far more if you’re using slow languages to begin with).

        Now maybe your company can’t afford that optimization, but I’m pretty sure Apple can, and if they didn’t it’s their loss.

      • Ian Hawley

        What do you actually do for a living?

        You say lots of things but provide no references to support your arguments. Did you even read the comparison link?

        Where’s your metric to support AWS being down a lot?

        Stop talking and start backing up your answers with facts.

        Let’s start with what you do for a living and how many cloud applications you’ve built.

      • People who don’t have logic require arguments of authority.

        If your link supports your theory, it’s garbage written by ignorants, focusing on unrealistic use cases and calling 2x overhead “not an issue”.

        If it does not, why would I read it?

        Actually if you had any idea of what I built you wouldn’t even have started this discussion.

      • Ian Hawley

        And I STILL don’t have any idea what you build. I have told you the kind of thing I build, I have expressed an opinion based on my experience. I do not see you doing the same.

        You seem to think that iTunes is just S3. I suggest you go and log into Apple’s dev portal, it’s free to register I believe:

        So sure of your argument you will not consider that I may have information that refutes your position, so you don’t even look at the links I am providing, saying that if it doesn’t agree with YOU then it is nonsense.

        Let me make something clear. I am not trying to convince you of anything, it is abundantly clear you are not listening – you won’t even read my links – and given you won’t tell me what you do, what your insight and background and experience is, I can only assume you are a pundit, with very little technical skill or experience.

        I started this thread by refuting your position, because it’s extremely short-sighted (read all of the above as to why), and to inform you and anyone else who might read it that the position is very much as advertised – iTunes costs a lot.

        You can believe whatever you like, of course, and yes you don’t need my permission to do that, that’s not what I am saying. I am saying, do believe what you like, but black will continue to be black, however hard you try to argue that it is green.

        Good Luck to you.

      • You do realize I gave you numbers based on facts and you were unable to give any fact-based reason that would imply otherwise.

        I have an opinion based on my experience, which is one of thinking, designing, creating and building things, many of which are software and some of which are service architecture.

        My experience includes understanding of the capabilities of old and modern hardware, nothing incredible but more than enough to know the best server config for a given workload.

        It also includes software design, as my framework enables me to build applications in 30 minutes while being only 94KB of backend and 85KB of frontend code, both in human readable format.

        And it also includes programming, as I have consistently been able to both find an elegant and efficient solution to any problems I wanted to solve, and drastically improve the performance characteristics of any piece of code I’ve touched.

        I believe the problem is that you do not have enough of those specific skills to realize that a much better job can be done.

        It is obvious from our discussions that you lack insight on performance matters, and since most of the savings I believe Apple could have made are performance related, it does make sense that my numbers don’t compute for you.

      • Ian Hawley

        “It is obvious from our discussions that you lack insight on performance matters, and since most of the savings I believe Apple could have made are performance related, it does make sense that my numbers don’t compute for you.”

        In what way is it obvious I know nothing about performance?

        I have been a software engineer for 20 years; I have been programming since I was ~10. I have programmed in 6502, 8086, 68000 at the machine level (i.e. pushing in bytes) and in assembly language. In ~1998 I was using C++ and writing my own triangle rasterization algorithms in assembly language. Lacking 386 support in my 32-bit compiler, I wrote macros to emit the 386 op-codes into the C++. I have (very rusty now) knowledge of cache stalls, ALU stalls and the pipelining techniques used in ASM to improve throughput of instructions and ensure each execution unit can do something and isn’t stalled waiting on cache or ALU.

        I have learned a lot about performance during my time with assembly language, but I now use C++, C# and am familiar with modern technologies. If I coded everything in assembler it would all run faster, but would it be appreciably faster? No it would not, and probably very little of it would benefit from that choice of language. The headache of coding that way means a huge cost in productivity, so we use more modern languages. I point this out because your idealistic idea of using a cut-down BSD is crazy. You would have to have the cloud infrastructure in your hands in order to deal with the OS image, and YOU think that people, not knowing their success, should build a cloud infrastructure themselves with NO idea how big their business is going to be. This is complete garbage. NOBODY does this. OnLive spent too much up front on their infrastructure, expecting huge success, and had to go under and re-surface (in a highly dodgy maneuver). The whole point of infinite compute power is that as your success grows you can buy more; the risk is reduced and your spend increases with your success. Yes, when you have a huge success like Apple you can start building data centres so you can bring down the cost. But it’s not trivial, it costs billions to do this, because you have to at least build two to ensure you have redundancy, then you have to build all the software infrastructure you are likely missing, and you then have to re-work your software to go through an interface layer, a HAL for the cloud OS as it were, so that you can work with YOUR datacentre or your typical provider like AWS. Still, people do this, but only WHEN they know what they need.

        I write cloud applications for a living.

        You do NOT

        You are basing your entire argument on CONJECTURE and not experience.

        It is clear to me that you have some experience with the web, with programming but you do not have any experience with cloud computing.

        If you are REALLY going to prove you are a capable individual who wishes to LEARN things instead of stay rooted to their conviction EVEN in the weight of all evidence then you would go and LOOK at this stuff, you would read the links I have posted, you would look at the iTunes Connect for instance, or create a free AWS account and see, but you do not do this and I don’t hear you saying you’ve already done it.

        INSTEAD, you say things like Amazon aren’t doing a good job. You have NO CREDIBILITY now: the LEADING cloud platform are doing it wrong and YOU know better.

        You have, ABSOLUTELY NO IDEA, what you are talking about with respect to the cost of cloud services.

        Go learn something before you speak again.

      • If your C programs are not appreciably faster than your C# programs, you are probably not very good at programming, or at appreciating speed.

        I told you several times that the cloud has its uses, and OnLive is a clear example of a company that would have benefitted from it.

        I also told you several times that Apple, for a service like iTunes, would make insane savings by using a fully optimized architecture.

        If you want to turn that into another unrelated discussion, have fun.

        I understand that you have a horse in the race too, so it’s probably best we leave it at that.

      • Ian Hawley

        I refer you to my link comparing the speed of C# and C++, if you want to refute THAT evidence, knock yourself out.

        I never said my C++ was slower than my C#; I said there is no point writing it (all) in C++. You pick those things that need to be fast and those are the bits you write in C++; I am doing that very thing today as it happens.

        What would be the cost of implementing that fully optimised architecture? You have to live in the real world. The reality is that businesses (should) start with a cloud provider and then build out their offering using partly owned data centres and punch out to the cloud when they need the extra compute power. However, by the time you’ve got an idea that you need a data centre, time has passed, your solution is running using existing services and so it a:) is hard to use your own data centre and support the same services and b:) requires a lot of re-factoring. If you then choose to cut everything down to do the leanest, most cost-effective thing, you incur a large cost in development to switch to that leanest thing, with which few of your team are familiar.

        People have legacy software, the cost and downtime or trade-off in time to market to refactor that is too hard to measure vs adding new features.

        Note that if you own your own data centre, it’s still the cloud, you just happen to own it.

        On an unrelated note, I apologise if my previous reply was a little ranty. 😀

        And, I concede that yes, you could do it cheaper, BUT, a:) where we started, however, was you saying that the store doesn’t cost that much to operate, and b:) though it could be done cheaper, I believe it is at far greater risk – I would not want to set out or even suggest to my manager or our director of engineering for our small group that we should make our own cloud because we could do it better and more efficiently than Amazon.

      • The article you cited is full of nonsense.
        When one example shows C# to be 5 times as fast as C++, it’s obvious that there are important details being overlooked.

        Plus, these are basic tasks, i.e. where C# is 100% translated to C, like how JavaScript V8 is very fast when it simply calls a C lib.

        I use the goddamn cloud, and of course everyone should use the cloud and not hard servers to begin with.

        BUT, once your service gets any kind of size, you should move to a real optimized infrastructure, because the savings are insane.

        That is true for hardware, but also OS, database, backend, etc.

        At one point, the running costs savings simply pay for the optimization and you save a lot of money by switching to an optimized infrastructure.

        The bigger the savings, the earlier that shift needs to happen, that is why I believe it is critical that people understand that:

        a) the cloud costs 20x the price of a dedicated server for comparable CPU power / RAM

        b) an optimized C application is OFTEN 100x faster than your C# / Python / Java / PHP prototype.

        c) optimized data structures save storage space but more importantly memory and IO

        d) if you make your own infra you get IO for 1/1000th of the AWS price.

        I’m working on creating a cloud service that does way more than the cloud does in terms of time saving and I understand that this service is the optimal choice from day 0 to day lots_of_money.

        There will be an attempt to make that service efficient enough that our clients will never gain anything by leaving us at all, but that will often result in us developing the optimized infrastructure for them.
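        Claims a) through d) above are the commenter’s figures, not measured benchmarks, but taking them at face value the break-even arithmetic is straightforward: a one-off optimization spend pays for itself once the accumulated monthly savings exceed it. A minimal sketch, with purely illustrative inputs:

```python
def payback_months(cloud_bill_monthly, optimization_cost, savings_fraction):
    """Months until a one-off optimization spend is repaid.

    savings_fraction is the share of the bill the optimized setup
    eliminates; 0.95 corresponds to the claimed 20x price gap (1 - 1/20).
    Both inputs are illustrative assumptions, not measured figures.
    """
    monthly_saving = cloud_bill_monthly * savings_fraction
    return optimization_cost / monthly_saving

# Illustrative only: a $1M/month cloud bill and a $10M optimization effort
months = payback_months(1_000_000, 10_000_000, savings_fraction=0.95)
print(f"pays back in about {months:.1f} months")
```

        The larger the bill, the shorter the payback, which is the mechanism behind “the bigger the savings, the earlier that shift needs to happen.”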

      • Ian Hawley

        There is no helping you when you see an article laying down the facts and refuse to believe them.

        You would stand on a black paving slab and despite the evidence of your eyes claim that it was red.

      • You don’t really think C# can be faster than C with no major measuring mistakes, right?

      • Ian Hawley

        That’s not what I said. You said C++ could be 100s of times faster. I said prove it.

        I don’t see you proving it.

      • read. it’s great.

      • Ian Hawley

        Proof. Show me the program, same algorithm, that is 100x slower in C# than C++.

        Stop talking and prove it.

        But you can’t can you, you won’t write a program to prove your wild, ridiculous claim.

        Because you can’t.

      • You and I both know that there are MANY programs out there with an inefficiency factor over 100x, and most of them are written in OOP languages, like C#.

        C#, like most languages, appears to be pretty close to C when coded just like C (do note that even PHP does that, outside of arrays that is), and that’s always going to be true for small benchmarks.

        The real plague of it is not C#, or the C# compiler, it’s people who think “performance does not matter, so I’ll just use C# and OOP”.

        The direct result of that mentality (which enables the choice of C# in the first place) is pure inefficiency cake.

        And don’t forget, I said C, not C++.

        Now I believe we can drop this conversation. Everyone knows optimized vs non-optimized is often a 25x improvement, and you have slides saying C# can be 4 times slower than C++ using bad compiler settings for C++, and not the best compiler or platform in the world to begin with. And C++ is slower than C for any real-world task, although for very simple examples C++ can be just as fast.

      • Ian Hawley

        Yes, lots of people write bad programs. That doesn’t make a language bad or slow. If you use the languages correctly then C# is not 100x behind C or C++, and if you’re snubbing C++ because C is faster then you have other problems as well.

        Everyone can drive a Ferrari slowly, and in this scenario C# is a BMW M3

      • IF, on average, your application uses 20x fewer resources, you save 95% on your server bill, especially if you’re in da cloud.

        Accepting GC or JIT means you don’t care about performance.

        Not caring about performance usually yields programs that are 100x slower, what are you not understanding?

        One thing at a time too, we may go back to the iTunes cost discussion once you understand that indeed there can be major cost savings.

        Otherwise the very possibility of an optimized iTunes doesn’t exist, and whether or not apple has one is a moot discussion.
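        The arithmetic inside that claim is not in dispute, only the 20x factor is: if the optimized system needs 1/N of the resources, the resource bill shrinks by 1 − 1/N:

```python
def bill_saving(efficiency_factor):
    # Fraction of the resource bill eliminated if the optimized
    # system needs only 1/efficiency_factor of the resources.
    return 1 - 1 / efficiency_factor

print(f"{bill_saving(20):.0%}")   # 20x fewer resources -> 95% saved
print(f"{bill_saving(100):.0%}")  # 100x -> 99% saved
```

        Whether a real workload ever sees a 20x factor is exactly the part the two sides here disagree on.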

      • Ian Hawley

        Prove it.

        JIT happens once.

        GC is to be managed. Much like in an efficient C program, you can avoid unnecessary allocation and deallocation.

        Clearly YOU don’t know how to program in C#, you’ve just decided it’s rubbish.

        Again. Prove it.

        Don’t spout nonsense. Everything you save costs you in time to market and development and staffing costs.


        Prove your ridiculous C# point.

        You do not optimise unnecessarily or you are WASTING time and effort.

      • Ian Hawley

        Okay, let’s squash some of your nonsense. Write a C++ function and then write the C# one that is 100x slower.

        Off you go…

      • It’s a program we’re talking about, not a function.

        A program can easily stack layers of inefficiency and reach ridiculous numbers like that, and OOP makes it worse.

        If you remove the unneeded layers and optimize the hell out of the rest in C, you will definitely be able to find ratios between 50 and 200.

      • Ian Hawley

        Fine, write the same algorithm in both languages

      • “The speed of the OS is nothing to the speed of I/O, which I think we agree here is king and is your main bottleneck”
        Well that’s just wrong.
        “so using a cut-down Unix OS and writing everything in C is going to take you a lot longer and for probably no tangible benefit.”
        It does take a few days to take a BSD and strip it down; I believe that is indeed too expensive for a company such as Apple.

        In many cases, optimization is not the way to go. When you spend millions on servers, it is.

      • Ian Hawley

        Right, these numbers I have discounted. Explain where your pricing comes from and we will talk more. I have provided you with a brief pricing estimate for transactions alone from Amazon, YOU tell me, WHERE I can do the same for less.

        While we are at it, explain:

        Why Amazon are not a good company.

        Why they are not reliable and down a lot as I believe you said.

        Who you would use instead.

        If your answer is build it yourself, then epic fail again. See below and just generally learn about the process of starting up cloud services and when to build data centres.

      • You can do the same for less, in your backyard, using real hardware instead of overpriced cloud services.

        Amazon is a great company, all cloud services are overpriced.

        A cloud service’s main attribute for someone like me is its reliability. I fully expect it to have 100% uptime outside of catastrophic and unlikely events. If it does not meet that availability criterion, it cannot possibly justify its price.

        The answer is build it yourself, and it will always be the best answer as soon as you’re spending a lot of money on servers, that’s also why every large scale service is running on company-owned servers, in company-owned datacenters.

        When you reach $1 billion in server costs, you are way past the level where it makes financial sense to build your own DC.

        When you reach $1 million in server costs, you are way past the level where it makes financial sense to use dedicated servers.

        There are very niche cases where cloud is better, and most of those cases revolve around business models and growth strategy, not technological reasons.

      • Ian Hawley

        Well, the received wisdom is to start in the cloud and then build your own data centres, but in all honesty I wouldn’t bother with that, because I don’t believe you want to get into the business of managing your own data centre; it adds a cost not just in making the thing, but in retrofitting your code to make use of it.

        If you run servers in your backyard that are connected to the web, as they must be to perform functions required across the internet, then:

        a:) your access to the backbone, to provide you with the traffic and bandwidth requirements you have, had better be good.

        b:) you need at least two backyards and possibly two additional backyards in other countries so that if one of your backyards goes down your others can continue to manage your services.

        c:) you have a lot of software to write to handle the management of services such as SWF, S3, Redshift and Dynamo that you would typically need.

        Amazon is not up 100% of the time, but as I said further down, do you really want to stand up and say “Amazon only guarantee 99.5% uptime, but we can do it better”?

        This stuff is very hard. Doing it yourself is extremely hard and Amazon are doing it very, very well.

        Is it overpriced? You’re working on your cloud services, you decide you need another machine to house another service; you’re using AWS, so you pick an instance and boot it up. You need some region-level redundancy, so you boot another up in another AZ; the traffic needs to be balanced, so you make an ELB to split the traffic between the two of them.

        Right, so you’re happy with that, but nobody can tell you how much load the serves are going to be under, so you pick some rules so that when the CPU hits a threshold, you start up some more machines, alternating AZs in your region.

        Now consider the same scenario if you’re using your own hardware. If it’s a bunch of bare-metal hypervisors then you could virtualize it, but then you’re dumbing down the rest of the machines on the system. If you’re using real hardware, then you have to go buy something and then network it up. Wait, you only bought one machine; okay, go buy another. Right, what happens with the load? Hmmm, time for someone to stick a finger in the air and guess whether your machines are good enough in tandem (presuming you set up your load balancing right?), or whether you might need more. You can’t automatically make more, so you just have to buy the right number. Off you go and fetch those and do all the networking.

        The real-world analogy of this problem above means that you move really slowly. If you make a mistake on your machine type, you’re stuffed, you have what you bought, you either lump it or go buy more. These are all rack mount units too, so they cost a lot of cash, more than a self-build desktop machine. They probably have an IBM badge on the front and so are even more expensive, but you like that they don’t fail very often… but they do fail.

        With AWS, you can make those machines, have them load balanced and install your software on them in probably the time it took you to go to the store and buy your machines. Though as you’re buying a rack-mount IBM box, there is no store in all likelihood. You did check that your uninterruptible PSU could handle the load and your generator too right? Not to mention your aircon in your rack room is still going to cool all your machines? And of course, you need to do this in both your backyards.

        The cost saved by the speed at which you can do this in AWS is huge. The cost of the machine is charged hourly and you can reduce that hourly rate by paying up-front fees. You can also grab machines based on the market price for them (spot instances) so that you are paying a lot less than the on-demand price. You can pay as little as a few cents per hour for a machine on-demand, and on-demand and spot instance types provide some significant horsepower and can be had for reasonable costs. You can spin things up and down merrily, so controlling the cost is something you can bake into your implementation. Not knowing how much capacity you need means you can scale things up and reserve instances as you go.

        I cannot imagine trying to make something like iTunes using servers in your backyard or attempting to beat the reliability, ease of use or feature set of AWS.

        I’ve already conceded that yes, you could do things a lot cheaper, from a hardware perspective. Yes, you could write code in C++ and you could run everything on a flavour of Linux, but the cost of doing all that is measured in time and money spent on expertise, the cost of getting it wrong, of having to make all the base services that would otherwise just be there in AWS, the cost of being slow to market.

        It’s an insane proposition.

        Recently for personal projects, I started using Jira. I can host that myself and pay $20 for the bits I need, but they will host it for me for $200/year, so it’s really a no brainer – I need my time more than I need $200/year.

      • There is an upfront cost to doing things right, but it does save you 95% on cloud.

        That’s all I said and it seems you actually agree with me.

        I think we can agree that it doesn’t cost more than $13 million to design the perfect scalable infrastructure for iTunes, so it makes zero sense to use cloud at that point.

        Or do you believe it costs way more?

      • Ian Hawley

        I don’t agree with you. It’s cheaper, but it’s sure as hell not a 95% saving, and by the time you’ve done it and struggled to make everything you need, your competitors have taken your business and you’ve spent your saving on staff and tech, much of which is the wrong thing because your ideas changed during development. You’ve no experience, you don’t know what you’re talking about.

        Did you log into AWS to see what it is you’re saying you can do better in your backyard?

        No, I don’t think you did.

  • james braselton

    hi there, your data is way old. the last announcement live of new app products and new OS: 75 billion downloads now, notification widgets (which are awesome) and my favorite, Apple Pay