
A core business problem and the core business problem may or may not be the same.[1] At this scale, infrastructure cost is a critical problem that can justify a dedicated team/vertical trying to solve it.

Hypothetically, if you are spending $50 million+/year on the cloud, a dedicated team of even 10 senior engineers, set up to run your own hardware in a co-location facility and migrate your costliest and least cloud-native components, might cost $2-5M more. With the attractive debt financing readily available these days, you can easily amortize the purchase expense over the 2-3 year hardware lifecycle and realize savings. There is not much justification for not pursuing this alongside all the other features you are also pursuing.
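The amortization point can be sketched with a toy model. All the figures and the straight-line-plus-interest model below are my own illustrative assumptions, not numbers from the comment:

```python
def annualized_cost(purchase_price, lifetime_years, annual_interest=0.0):
    """Rough annual cost of a financed hardware purchase.

    Straight-line amortization of the principal, plus interest charged on
    the average outstanding balance (a simplification of real financing).
    """
    principal_per_year = purchase_price / lifetime_years
    avg_balance = purchase_price / 2
    return principal_per_year + avg_balance * annual_interest

# Hypothetical example: $12M of hardware over a 3-year lifecycle at 8% financing.
print(annualized_cost(12_000_000, 3, 0.08))  # 4480000.0 per year
```

The point of the model: a large up-front purchase turns into a predictable yearly line item, which is what makes it comparable against an annual cloud bill.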

The investment is very low compared to your spend, with potential for very high ROI, so even if the chance of success is low you should give it a shot. For example, if you could save $10 million on the $50 million, your $2 million investment needs only a 20% probability of success for the expected value to be in the green.
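The break-even math above can be written out directly (the $2M/$10M figures are the hypothetical numbers from this comment):

```python
def breakeven_probability(investment, potential_savings):
    """Minimum success probability for non-negative expected value.

    EV = p * potential_savings - investment >= 0
      => p >= investment / potential_savings
    """
    return investment / potential_savings

# Hypothetical figures from the comment: $2M investment, $10M potential savings.
print(breakeven_probability(2_000_000, 10_000_000))  # 0.2, i.e. 20%
```

Anything above a 20% chance of hitting the savings target makes the bet positive-expected-value under these numbers.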

[1] I don't think there should be only one core (the) business problem for a startup; there are always a few critical problems startups have to solve at any given stage.



The parent was mentioning operational agility: the ability to quickly, almost 'hot swap', bring in live, fully formed instances, and with a footprint of multiple petabytes of storage.

Their core business problem is providing databases, and apparently they see leveraging the huge VM and storage pools available at AWS as a major advantage here (and I for one can't blame them), over absolute efficiency of hardware spend.

Being able to provision a couple hundred TB of extra SSD with a config file change (or return it and stop paying for it almost immediately) has real advantages over rolling it yourself, especially if you only have a 10-person ops team or the like.

Considering the apparent business model, I can see their point.

The project being discussed in this thread is likely a couple of folks for a few months: low-hanging fruit to save millions. What you're referring to is a major business effort for such a company, if not a doubling of headcount, with at best a similar payoff. Running their own colos also means a lot of thinking, planning, and lifecycle management when it comes to equipment generations, upgrades, and making sure you've got the right amount of spare capacity but not too much.

Also, let’s not forget geo/availability zones.

Not saying co-located hardware is never worth it; rather, they seem to be aware of the trade-offs and are making a rational decision based on their business model.

Later, if they have switched from ‘rapid growth and adjustment’ to a more stable state where they can predict things more in advance, maybe they’ll switch. Maybe they won’t.

It's like a large energy consumer: at a certain size, in a certain area, running on the utility grid is often much better than rolling your own generation capacity. Sometimes rolling your own is impossible or less cost-effective. Sometimes it doesn't make sense to even do the math; just get hooked up to the grid.


It's impossible to know the best approach without understanding the specifics. It's true that adding more resources quickly when you own the hardware takes longer, but given how much cheaper it is, you can seriously over-provision. Using AWS is probably a more efficient use of hardware; that's why it's not even more expensive than it is. The thing is that it's often not the most efficient use of money, especially at that scale.



