
I really agree with a lot of this.

I'll tell you what though, if you have a programming task that you find boring, over-engineering it and over-architecting it can make it _so_ much more enjoyable.



You better have the self-delusive ability to love and cherish that over-engineered system, because you will go from "not enough to do" to "can't keep up with the business because every little change requires 40 hours of heads-down work" in no time flat.


I think if you end up in that situation because of overengineering, your engineering wasn't very good in the first place.

Even overengineered systems should be simple to change.


> Even overengineered systems should be simple to change.

Clearly you've never met a 10x over-engineer

Over-engineering (for example over-generalising or adding too many layers of abstraction) is typically done with certain sorts of potential future changes in mind, so in the resulting system some changes are easy but some types of change (the ones the over-engineer didn't anticipate) can only be done with massive refactoring.
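The pattern above can be sketched in a few lines. A hypothetical Python example (all names invented for illustration): one concrete need is buried under speculative indirection, which makes the anticipated kind of change trivial and the unanticipated kind expensive.

```python
# Hypothetical sketch of premature generalisation: one concrete need
# (send an email) wrapped in layers added "for future flexibility".
from abc import ABC, abstractmethod

class Notifier(ABC):
    """Abstract base, in case we ever support SMS, push, carrier pigeon..."""
    @abstractmethod
    def send(self, recipient: str, message: str) -> None: ...

class EmailNotifier(Notifier):
    def send(self, recipient: str, message: str) -> None:
        print(f"emailing {recipient}: {message}")

class NotifierFactory:
    """A factory with exactly one product."""
    _registry = {"email": EmailNotifier}

    @classmethod
    def create(cls, kind: str) -> Notifier:
        return cls._registry[kind]()

# The anticipated change (a new channel) is easy: register another class.
# An unanticipated change (say, batching, or structured payloads instead
# of strings) touches the ABC, every subclass, and every call site.
NotifierFactory.create("email").send("ops@example.com", "disk full")
```

The anticipated axis of change is cheap by construction; everything off that axis now costs more than it would have in a direct implementation.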


I worked with such a person, briefly. It seems he had never heard the phrase "the shortest distance between two points is a straight line". He had plans for the product stretching out 5 years, and architected accordingly. He knew exactly why each piece was done the way it was done, and could hold it all in his head. But when he was reassigned to a different project after 6 months, the rest of us had to pick up the pieces.


The general problem is that it is hard to estimate the amount of changes needed in the future. You have to make a guess. Sometimes this leads to over-engineering. Sometimes it leads to under-engineering. Depends on the case, the future, and the skill of the engineer.


My favorite designs are the ones that can't even solve current problems, much less anything that might need to change later, because they spent so much time optimizing for non-existent future scenarios that the present ones suffer.

"Sure, our web application might take a minimum of two seconds to handle trivial get requests, and logging in doesn't work half the time, but say the word and I can instantly migrate from Postgres to MySQL."


That's a good example: preparing for a future that probably never comes.

It is not easy to predict the future of a software application. In Louisiana they built levees to withstand a once-in-100-years flood, or something like that. Based on historical weather reports, they were able to estimate how high and strong the levees needed to be. But with software it is hard to see how we could estimate what kinds of requirement changes might be needed during the next 100 years for any application.


To me, overengineered means you engineered something with the best intentions (speed of development, malleability, testability, operational resilience, etc.) but you made design choices that carry stiff costs and only pay off at scales that aren't relevant to your actual situation.

So, for example, you architected the system so that a dozen teams with a dozen developers each can hack independently on the system with minimum conflict, but you only have one team with four developers, who would have been able to iterate much faster on a simpler codebase, and who pay a high cost for maintaining and operating such a complex system.

Often the realization that a system is overengineered in one dimension happens simultaneously with the realization that it is underengineered in another. For example, you realize that your system that is engineered to scale to petabytes of traffic per day scales awkwardly to more than three or four customers. It takes weeks to onboard a new customer, and it's impractical to onboard more than one customer at a time. Meanwhile your first four customers are only sending you hundreds of megabytes of traffic per day, and the mechanisms you added to scale to petabytes of traffic are making it really awkward to reengineer the onboarding process. Salespeople are quitting because they have prospects in the pipeline that they aren't allowed to move forward on, and upper management is demanding to know why we need more integration engineers when they already outnumber our existing customers. But by god, if one of these customers wants to send us tens of millions of requests per second, we're ready (for some untested meaning of the word "ready.")

People who overengineer systems are often looking for a challenge because the real needs of the business (such as onboarding new customers, making the UI brandable, integrating with a dominant third party ecosystem) don't seem like engineering challenges. They want to do good engineering work, so they pick a challenge that they think of as engineering (tail latency, resilience, "web scale," you know, engineer stuff) and they throw themselves into it. And they run afoul of the truth that LOC (and architectural complexity) are liabilities.


> Even overengineered systems should be simple to change.

One of the most significant qualities of an over-engineered system is an unusual resistance to change.


not to nitpick, but "quality" usually means some desirable aspect... here I would have used the word "characteristic" :)


Totally fair. Good nitpick.


I disagree. A well engineered system that 2,000 devs work on will look vastly different from a well engineered system that 10 devs work on. If you try to build the former for the latter you will make a system that is hard for 10 devs to change. And if you build the latter for the former you'll build a system that is impossible for 2,000 devs to change.


one could argue a well engineered system wouldn't require 2000 devs to work on it...


My favourite unpopular opinion: Microservices are a great way to enable 100 engineers to work on a product when otherwise you would have needed at least 10 engineers to achieve the same.


So any system that has required >= 2,000 devs could have been engineered to need less than that many? Seems like a pretty lofty goal: a hard cap on any system you could imagine.


How does it relate to Linux?


You missed the second part: work on it for a year, declare it well scoped and on track, then move on to greener pastures, dumping the actual work on some juniors.

The best part about the people who do such things is that they may not even realize they do it!


unless your engineering efforts are spent entirely on making the system highly operable, your code easy to read, document, and test.


And that, my friends, is why the FAANGs need to hire tens of thousands of software engineers to copy protobufs from one place to another...


I'm glad that works for you, but I am also really glad that I don't work with you.


But that makes it difficult for others to maintain it.


Yes, it does. Better get some more head count for that.


Of course. I'm trying to point out one of the causes of over-engineering.


Sorry for obvious reaction. Already burned by this. :-)


Yes, it's a widespread problem. Crafting code for maximum maintainability is often really important and often overlooked.


Maintainable code can be changed more easily than unmaintainable code. Each change will tend to make code either more or less maintainable. Over time, the probability of maintainable code being refactored into unmaintainable code approaches 1. Most older code bases observed were in fact found to be unmaintainable, supporting the theory.
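Taken half-seriously, the argument above is an absorbing-process claim: if each change has some fixed nonzero chance of tipping code into unmaintainability, and rescues are rarer than degradations, most codebases end up unmaintainable. A toy simulation (all probabilities invented for illustration):

```python
# Toy model of the "decay toward unmaintainability" claim above.
# The probabilities are made up; only the qualitative shape matters.
import random

random.seed(0)

P_DEGRADE = 0.05   # chance a change tips maintainable code into a mess
P_RESCUE = 0.01    # chance anyone refactors the mess back to health

def simulate(changes: int) -> bool:
    """Return True if the codebase ends up unmaintainable."""
    maintainable = True
    for _ in range(changes):
        if maintainable:
            if random.random() < P_DEGRADE:
                maintainable = False
        elif random.random() < P_RESCUE:
            maintainable = True
    return not maintainable

runs = 1000
doomed = sum(simulate(changes=500) for _ in range(runs))
print(f"{doomed / runs:.0%} of simulated codebases end unmaintainable")
```

With these invented numbers the long-run fraction of unmaintainable codebases settles near P_DEGRADE / (P_DEGRADE + P_RESCUE), i.e. most of them, which is the comment's point: unless rescues keep pace with degradations, decay wins.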


Pity those that have to maintain it later though.



