
> That's just flat wrong. A global solution can solve global concerns and also allow for local conditions.

Past a certain level of complexity, that's no longer true.

_Seeing Like a State_ is a great introduction to this, but I think Carol Sanford's work goes much more into detail. The main thing with the high-modernist view that James Scott was critiquing is that it comes from what Sanford would call the Machine World View. This is where the entire system can be understood by how all of its parts interact. This view breaks down at a certain level of complexity, and James Scott's book is rife with examples of that breakdown.

Sanford then proposes a worldview she calls the Living Systems World View. Such a system is capable of self-healing and regenerating (such as ecologies, watersheds, communities, polities), and changing on its own. In such a system, you don't effect changes by using direct actions like you do with machines. You use indirect actions.

Kubernetes is a great example. If you're trying to map how everything works together, it can become very complex. I've met smart people who have trouble grasping just how Horizontal Pod Autoscaling works, let alone understanding its operational characteristics in live environments. Furthermore, it can be disconcerting to be troubleshooting something and then have the HPA reverse changes you are trying to make ... if you are viewing this through the Machine World View. But viewed through the Living Systems World View, it bears many similarities to cultivating a garden. Every living thing is going to grow on its own, and you cannot control for every single variable or condition.
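The HPA's "reversing your changes" behavior falls out of its core scaling rule, which the Kubernetes docs give as desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue), with a tolerance band where it does nothing. A minimal Python sketch of that rule (function and parameter names are illustrative, not the actual controller code):

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float,
                     tolerance: float = 0.1) -> int:
    """Sketch of the HPA scaling rule from the Kubernetes docs:
    desired = ceil(current * currentMetric / targetMetric).
    Within the tolerance band around the target, no scaling happens."""
    ratio = current_metric / target_metric
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas  # close enough to target: leave it alone
    return math.ceil(current_replicas * ratio)

# e.g. 4 pods at 90% CPU against a 60% target -> scale up to 6
print(desired_replicas(4, 90.0, 60.0))  # -> 6
```

This is why manually scaling a deployment down while its metric is above target doesn't stick: on the next reconciliation the controller recomputes a higher desired count and scales it right back up. You steer it indirectly, by changing the target or the workload, not by setting the replica count directly.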

For similar ideas (which I won't go into in detail), there are Christopher Alexander's ideas on Living Architecture. He is a building architect who greatly influenced how people think about Object Oriented Programming (http://www.patternlanguage.com/archive/ieee.html) and Human-Computer Interface design (what the startup world uses to great effect in product design).

Another is the Cynefin framework (https://en.wikipedia.org/wiki/Cynefin_framework). Cynefin identifies different domains -- Simple, Complicated, Complex, and Chaotic. Engineers are used to working in the Complicated domain, but when the level of complexity phase-shifts into the Complex domain, the strategies and ways of problem-solving that engineers are used to will no longer work. This includes clinging to the idea that for any given problem, there is a global solution which will satisfy all local conditions.



> He is a building architect that greatly influenced how people think about Object Oriented Programming (http://www.patternlanguage.com/archive/ieee.html)

The funny part about this speech is that he's just telling everyone they did it wrong, and Richard Gabriel agrees:

https://dreamsongs.com/Files/DoingItWrong.pdf

The point of his pattern language is to enable people to create their own architecture for their own needs. The point of OOP design patterns is to lock you in a prison of enterprise Java programming in the Kingdom of Nouns. Of course, I think everyone realized this like a decade ago.


> This includes clinging to the idea that for any given problem, there is a global solution which will satisfy all local conditions.

The parent comment wasn't stating this. It was stating that there could be a partial global solution that would benefit all microservices, a solution which teams would have to adapt to cover local conditions as well. A middle ground, so to speak.

Thanks for sharing the "Living Systems World View" btw, very interesting!


Truly, a fascinating perspective, thank you.

Just for context: I would say I'm a natural intuitive bottom-upper, except that I can't help but reconsider everything my intuitive self learns in a strongly analytical, top-down way.

From that perspective and 30+ years of experience (where I like to think I'm at least open to being completely wrong about anything and everything), I think top-down, prescriptive solutions can be useful and effective, but they need to understand and carve out the holes (and conservatively large ones at that) for "local" concerns - BTW, "local" often just means lower, where the lower level itself can have its own "global" and "local" concerns.

Now, I know this often doesn't happen, so let's lay out how it can work:

- there's a top-down person -- "Tim" in the article -- who has responsibility for developing a solution

- there are the separate teams, who are responsible for communicating feedback on potential solutions.

Also, I wish I didn't need to point this out, but "responsibility for" === "authority/control over".

(If that's not the case, then never mind: you essentially have a "free-for-all" organization, and just better hope someone who's not too crazy wins the cage-match and can hang on long enough to be a net positive.)


A point I think you are missing is that unless all of the separate teams fully understand the potential solution, they can't provide useful feedback.

If team X doesn't know Kafka, then they can't tell you the ways in which it's not as good as their existing (potentially ad-hoc, but definitely working!) message system. There may be things that their system does that the team just automatically assumes all message-brokers will do because it's "obvious" that it's needed.

If, on the other hand, someone on team X organically considers Kafka as a local solution, learns it, tries it out, all of this stuff becomes obvious immediately.

So the pure top-down approach has two possible outcomes:

1. It gets useless feedback "meh, seems fine"

2. All N organizations actually take the time to try out the solution before giving feedback, which means you spend a lot of resources evaluating each top-down hypothesis

The suggested solution from TFA is to have a top-down person embed in one team, find some improvements that work locally, then embed in a second team and do the same. Only then should one try to generalize from the shared experiences of the team. It recognizes that good feedback is expensive and bad feedback is likely, so just cut out the whole "give feedback" stage and have the top-down person learn from actually being embedded in a couple of teams.


Thanks for the reading recommendation! Learning about the Cynefin framework and thinking about those kinds of problems led me to James Scott and to Hayek, but I haven't come across Sanford's work before.


Oh yeah, and I just remembered -- Go. It's a great way of training strategic analysis and decision-making. After moving past the basics, what one explores are global vs. local, influence vs. territory, discerning urgent vs. big moves, direction of play, and so forth. It is theoretically a perfect-information game, but it is sufficiently complex for humans that it simulates the fog of war and having to make decisions in the face of uncertainty.


"It is theoretically a perfect-information game"

Ha, ha! A concept for suckers.


Perfect information must not be confused with perfect information processing.

GIGO does not imply its opposite.



