"Objectively" Correct Public Market Hyperscaler Investment
Skinning cats, stacking full stacks
If you’ve been an investor since I developed market cognizance (the 2011-2012 Facebook IPO era), you’ve seen the business trajectories of Amazon, Facebook, Microsoft, and Google change dramatically. These four of the FANG/MAG7 remain distinct:
Google had a monopoly search business and was pursuing multiple avenues for redeploying that cash flow into “moonshots”
Facebook had an Ads business and was pursuing AR/VR/Oculus
Microsoft had Office and Windows, which would eventually carry it into Azure
Amazon’s retail business gave rise to the cloud business (AWS), which would ultimately become the more profitable of the two. The ad business would come later
Throughout the ensuing decade and a half, these companies had varying degrees of success in organically developing market-leading products and services outside of their quasi-monopolies. Ultimately, being in the “Cloud business” (e.g. Azure/AWS/GCP) became a distinction without a difference for these businesses. All four decided to build out their own compute capacity, whether to rent it out (B2B or B2C) or simply to use it internally (Meta in particular).
The genesis of this post comes strictly from one rhetorical question: how might someone who bought one of these four names on a monopoly-core-product-GARP thesis at any point in the 2012-2018 period react to the idea that the capital-light business model would eventually lead to the plowback of $100B+ into data center buildouts? That all that modeled free cash flow wouldn’t go to buybacks, or to acquiring the next Instagram/YouTube, but to data centers?
The Correct Response
“Obviously, we have made the decision that building out our own compute capacity is the best way to defend our core product. We’re already in the Cloud business, and AI development will revolutionize our product/service. Owning and operating this hardware, and the power on which it runs, is simply the best way to ensure we don’t lose market share or see our core product, or our Cloud business, completely displaced. Pouring our cash flow into our own data centers is playing offense through defense. We’ll be able to ship new products, improve our existing ones, and have the optionality that comes from owning our own supply. The compute capacity will be there when we need it and, partly, we’re using OPM to do it (credit funds have been getting involved in DC buildouts), replicating the economics of real estate development, in a way.”
Wait, didn’t your core business just get much worse? If spending at this scale becomes necessary, surely it’s hard to justify a higher multiple than before this competitive dynamic emerged? Or should I trust management and the process, and expect healthy paybacks from these investments?
A 500 MW hyperscale build could easily run $5–10+ billion (rough order of magnitude: ~$10–20M/MW all-in, depending on design). A rough split of that capex:
~40% of capex: building shell & power/cooling infra → depreciated mostly over 15–40 years.
~50%: servers & networking → depreciated over 3–5 years.
~10%: land (non-depreciable) & misc.
For tax (MACRS, U.S.), servers can qualify for 5-year recovery, some power/cooling as 7–15 year, and buildings as 39-year property. Bonus depreciation rules can accelerate the write-offs further.
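To make the tax point concrete, here is a minimal sketch (illustrative figures, not any company’s actual filing) of the server/networking slice flowing through the standard 5-year MACRS percentages under the half-year convention, before any bonus depreciation. The ~$3.75B basis is just an assumed ~50% share of a hypothetical ~$7.5B, 500 MW build.

```python
# Sketch: 5-year MACRS recovery (half-year convention, no bonus depreciation)
# applied to the server/networking slice. The basis is an illustrative assumption.

MACRS_5YR = [0.20, 0.32, 0.192, 0.1152, 0.1152, 0.0576]  # standard IRS 5-year table

server_basis = 3.75e9  # assumed: ~50% of a ~$7.5B, 500 MW build

for year, pct in enumerate(MACRS_5YR, start=1):
    print(f"Tax year {year}: ~${server_basis * pct / 1e9:.2f}B deduction")
# When 100% bonus depreciation applies, that entire basis can instead be expensed in year 1.
```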
A 500 MW hyperscale data center would likely have 3–5 year depreciation on ~half the assets (servers, networking), 10–20 years for cooling/power, and 30–40 years for the shell. This creates a P&L profile with very heavy depreciation expense in the first 3–5 years, tapering off as server refresh cycles settle into steady-state capex.
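On the book side, here is a rough straight-line sketch of the same hypothetical buildout, using assumed figures consistent with the split above (~$7.5B total, a 4-year server life, a 20-year life for shell/power/cooling, land non-depreciable). It shows the front-loaded expense and the drop-off once the initial server cohort is fully written down:

```python
# Sketch: straight-line book depreciation for a single hypothetical 500 MW buildout.
# All figures are illustrative assumptions, not disclosures.

TOTAL_CAPEX = 7.5e9                     # assumed ~$15M/MW * 500 MW
SPLITS = {                              # asset class: (share of capex, useful life in years)
    "servers_networking": (0.50, 4),    # short-lived IT gear
    "shell_power_cooling": (0.40, 20),  # building shell + electrical/cooling plant
    "land_misc": (0.10, None),          # land is non-depreciable
}

def depreciation_by_year(horizon: int = 10) -> list[float]:
    """Annual straight-line depreciation for the initial buildout only
    (no server refresh), which is what produces the front-loaded P&L hit."""
    schedule = []
    for year in range(1, horizon + 1):
        expense = 0.0
        for share, life in SPLITS.values():
            if life is not None and year <= life:
                expense += TOTAL_CAPEX * share / life
        schedule.append(expense)
    return schedule

if __name__ == "__main__":
    for year, expense in enumerate(depreciation_by_year(), start=1):
        print(f"Year {year:2d}: ~${expense / 1e9:.2f}B depreciation expense")
    # Years 1-4: ~$1.09B/yr; years 5+: ~$0.15B/yr, until the next server cohort is purchased.
```

In practice, refresh capex kicks back in right around that drop-off, which is what the steady-state ends up looking like.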
The dollar-value pace of capital expenditures has continued to escalate. Perhaps it’s a one-time blip, and the rate of change will slow once a requisite level of compute capacity has been built out. Perhaps that entire generation of GARP-focused investors has turned over, and a new generation of investors sees the potential in data centers better than those who were simply interested in investing in the core product. Perhaps there’s a coming data center bust, and the compute capacity buildout is over its skis (much like the cable overbuild of the dot-com days).
I am certainly taking advantage of this free compute/22-year-old-analyst capacity by peppering various LLMs with mundane questions that 22-year-olds would generally take 1-2 weeks to come back with. Fantastic for idea throughput and rough guesswork, but it lulls one into a false sense of confidence (GIGO, YMMV).