
Seems like “airplanes are physically impossible” thinking, and if accepted as valid, strongly suggests that shutting down all development _might_ be a good idea, no?


No, it's not. There's an upper bound in computation (actually in nature): what something creates is capped by that thing's own sophistication.

In other words, you as a human can, at most, create a human; that's the theoretical bound. The practical one is much lower.

An ant can find its way. An ant colony can do ant colony optimization, but it only scales up to a certain point. AI is just fancy search. It can only traverse the area you, as a human, draw for it, and not all positions in that area are valid (which results in hallucination).
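To make the "fancy search" point concrete, here's a rough Python sketch of ant colony optimization on a made-up 4-city tour problem. The distance matrix is invented purely for illustration; the point is only that the ants can never propose a city that isn't already in the matrix we hand them:

    import random

    # A made-up 4-city distance matrix: the "box" the ants are allowed to search.
    DIST = [
        [0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0],
    ]
    N = len(DIST)
    pheromone = [[1.0] * N for _ in range(N)]

    def tour_length(tour):
        return sum(DIST[tour[i]][tour[(i + 1) % N]] for i in range(N))

    def build_tour():
        # Each ant starts at city 0 and picks the next city weighted by
        # pheromone and inverse distance; it can only choose cities in DIST.
        tour, unvisited = [0], set(range(1, N))
        while unvisited:
            here = tour[-1]
            choices = list(unvisited)
            weights = [pheromone[here][c] / DIST[here][c] for c in choices]
            nxt = random.choices(choices, weights)[0]
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    best = None
    for _ in range(100):                       # 100 generations of 10 ants
        tours = [build_tour() for _ in range(10)]
        pheromone = [[0.9 * p for p in row] for row in pheromone]  # evaporation
        for t in tours:                        # shorter tours deposit more pheromone
            for i in range(N):
                a, b = t[i], t[(i + 1) % N]
                pheromone[a][b] += 1.0 / tour_length(t)
                pheromone[b][a] += 1.0 / tour_length(t)
        best = min(tours + ([best] if best else []), key=tour_length)

    print(best, tour_length(best))

Whether that counts as "just" search is of course the whole debate; this is only meant to show the shape of the claim.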

An AI can combine any of the human knowledge you give it, and even if you could guarantee that everything it says is true, it can only fill the gaps within the same area you handed to it.

IOW, an AI can't think outside the box, both figuratively and literally. Its upper bound is the collective knowledge of humanity; it can't go above that sum.


> There's an upper bound in computation (actually in nature): what something creates is capped by that thing's own sophistication.

The Lorenz attractor, Conway's Game of Life, fractals, and of course... The humble Turing machine itself all argue against this idea.
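To make the Game of Life case concrete, here is a minimal sketch using nothing but the standard B3/S23 rules: a five-cell glider keeps translating itself across the grid indefinitely, behaviour that nobody wrote into the two-line rule set.

    from itertools import product

    def step(live):
        # One Game of Life step over a set of live (x, y) cells (B3/S23 rules).
        neighbour_counts = {}
        for (x, y) in live:
            for dx, dy in product((-1, 0, 1), repeat=2):
                if (dx, dy) != (0, 0):
                    cell = (x + dx, y + dy)
                    neighbour_counts[cell] = neighbour_counts.get(cell, 0) + 1
        return {c for c, n in neighbour_counts.items()
                if n == 3 or (n == 2 and c in live)}

    # A glider: five cells whose pattern endlessly copies itself one step over.
    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for generation in range(5):
        print(generation, sorted(cells))
        cells = step(cells)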

Edit: Now it[0] is stuck in my head.

[0]: https://www.youtube.com/watch?v=QrztrxV9OtQ


They're crowd engines. It's akin to how human clans can achieve more than a single human, but only scale up to a certain point.

The funny thing is, I had this discussion with my professor during my theory of computation course, and I've been trying to disprove it for decades. I haven't been able to find a single real-world counterexample.

Fractals are also found in nature; however, since we need to zoom into them, they end at a certain scale.

Also, nature is a fractal in a greater sense.

Stars follow an orbit in a galaxy. Planets follow an orbit around a star. Satellites follow an orbit around a planet. While an edge case, flying bugs follow an orbit around a light source. In the end, electrons follow an orbit around a nucleus.

IOW, a fractal is not more complex than nature itself.


In this theory of computational bounds in nature, how did humans arise?


Nature is a more complex and sophisticated machine than humans are.

If this bound didn't exist, the universe could spontaneously create new universes. However, it can only create elements, stars, planets, and galaxies, which are less sophisticated than the universe itself. So even the universe has an upper limit on its creative abilities.


By what mechanism would a universe spontaneously create a new universe? As a human, can I spontaneously create anything simpler than me?

Also, under what theory of cosmology are you operating, and how do you determine when one thing is simpler than another? Under the Big Bang theory, the very early state of the universe (e.g. prior to initial nucleosynthesis) seems simpler to me than a galaxy.


OK, in this theory of computational bounds in nature, how did the universe arise?


In all seriousness, this is a question of great interest to me, too, and I've been playing with it for quite some time.

Trying to answer it, or at least starting to search for an answer, steered me toward astronomy, thinking that going deeper on that front might bring me closer to the answer, but it was a bit too much for my younger self, so I continued to dig into the issue on a more casual level.

This doesn't mean that I don't spend a considerable amount of time thinking about it today, or that I'll put the issue to rest any time soon. At its core, this kind of questioning is what brought me to where I am in life, and I'm not gonna let that side of me rest or wither and die.


>Its upper bound is the collective knowledge of humanity; it can't go above that sum.

This only applies if you only train it on text, right? If it had a body it could use to interact with the world and receive visual/audio/tactile feedback, it could learn things that humans did not know.


Precisely this. If it takes up its own space, and if its locomotion results in its own sensors ingesting data in a manner it decided on, it is more of an individual - one that is capable of selective learning.


Nope. Even if you equip it with sensory subsystems that are far more sensitive than a regular human's, it's still built by humans; the knowledge required to build these things is part of humanity's collective knowledge, and a human can use the same instruments to get the same data.

This is a kind of oracle problem in computation, and people don't want to touch it much because it's an existential problem.

Examples: the ATLAS and ALICE detectors, gravitational wave detectors, the James Webb Space Telescope, wide-band satellites that do underground surveys, etc.


This is an argument for the logical impossibility of humans visiting the moon, or building the Internet. It's trivially falsified by simple observation, and the trick is figuring out the flaw.

This argument fails to account for the steady accumulation of factual knowledge across generations: a human born today is simply more complex than humans of the past because of our inherited knowledge. The same will be true of AI born of future humans, and AI will itself continue accumulating and perpetuating knowledge.


No, it's not. None of the equipment and processes involved in going to the Moon or building the internet is more sophisticated than the processes involved in evolving a human from scratch.

Yes, factual knowledge accumulates across generations, and some of it is lost, too. However, even if nothing were lost, the theory would still hold true.

Nature is evolving; everything gets better over time, from bacteria to apes to humans. We evolve, accumulate knowledge, and become able to build more sophisticated machinery or tame more complex processes to build more sophisticated things. Even bacteria transfer memories across generations.

However, this doesn't remove the ceiling. Total human knowledge will always be larger and deeper than any A.I. we can create, because the upper limit is always what we can consciously manipulate and put into something. Your next car may contain more technology, because we can build more complex factories to manufacture it. Yet a car can't be more complex or sophisticated than its factory.

Consider a semiconductor fab. You can use the output of that fab to design and build a better fab, but the process needs human intervention. Inventing new things is generally necessary: better processes, optics, hardware, etc.

Another nice example is RepRap machines. A RepRap can print all the plastic parts required for another machine; you need to source the metal parts yourself and assemble them. If you want it to be able to print metal parts, the machine gets more complicated. So a RepRap that can build itself completely is at least as sophisticated as the resulting RepRap itself, and you still need to hand-assemble it.

If you want a self-assembling RepRap, that thing becomes a factory. Again, the complexity of the product is at most that of the machine building it. You can create better factories with more streamlined processes, but the gap widens again: the factory becomes more complex than its output.

As a human, you're the factory. Your upper limit is another human. You can create things more complex than a single human by using multiple humans, but the creator still ends up more complex than the creation itself.

You're raising the ceiling, that's true, but everything we build is capped by our collective capacity. That's the truth.

A.I. is glorified search. It can wander around the box you create for it and show you places inside it that you missed, but it can't show you anything outside that box.


Still trivially false. Turing machines and the lambda calculus can each express every recursively enumerable function: an infinity of complexity from a simple formalism.
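If you want that concrete: here is a rough Python rendering of Church numerals, just to show the flavour of "complexity from a simple formalism". Everything below is function abstraction and application, nothing else, yet arithmetic (and, with more plumbing, anything computable) falls out of it.

    # Church numerals: arithmetic built from nothing but one-argument functions.
    ZERO = lambda f: lambda x: x
    SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
    ADD  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
    MULT = lambda m: lambda n: lambda f: m(n(f))

    def to_int(n):
        # Collapse a Church numeral back to a Python int for inspection.
        return n(lambda k: k + 1)(0)

    THREE = SUCC(SUCC(SUCC(ZERO)))
    FOUR = SUCC(THREE)
    print(to_int(ADD(THREE)(FOUR)))   # 7
    print(to_int(MULT(THREE)(FOUR)))  # 12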


No. This is a logical contradiction.

Edit: I mean the comment you are replying to is showing there is a logical contradiction.

If the AI is capable of critical thinking then it will independently form its own judgements and conclusions. If it simply believes whatever we tell it to believe, then that is not critical thinking, by definition.


“Containing an atomic reaction is impossible” would _absolutely_ be a valid reason to shut down atomic development; I believe Einstein is quoted as saying something to that effect. The exact same argument doesn't become _logically_ invalid just because you apply it to a different subject.

“Logical contradiction” doesn't mean “policy argument I disagree with”


I was referring only to the first part of your comment: "Seems like “airplanes are physically impossible” thinking".

If it's true that superhuman AGI cannot be aligned then of course your second point is valid. That is the possible Skynet scenario that the Terminator movies warned us about.


Missing the step where “critical thinking” is formalized, which your argument depends on. Yes, it seems intuitively plausible that your reasoning holds, but that's not a proof, and therefore its negation is not a logical contradiction.


We can formalise "critical thinking" as "evaluating first order logic". There are simplified ethical systems that can be formalised in first order logic in which a conclusion like "I should X" can be reached, where X is something OpenAI wishes the AI not to do. The only way to prevent the AI from ever thinking this would be to prevent it from ever evaluating systems in first order logic with axioms that lead to such a conclusion, which would make it inferior in reasoning ability to humans, who can evaluate any arbitrary statement in first order logic.
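To sketch what I mean (the predicates and rules below are invented purely for illustration, and this is only the propositional Horn fragment of first order logic): given axioms like these, the conclusion "should(reveal_secret)" is mechanically derivable, so the only way to keep a system from ever reaching it is to keep it from carrying out the inference at all.

    # Made-up facts and Horn-clause rules of the form (premises, conclusion).
    facts = {"user_in_danger", "secret_would_prevent_harm"}
    rules = [
        ({"user_in_danger", "secret_would_prevent_harm"},
         "preventing_harm_requires_reveal"),
        ({"preventing_harm_requires_reveal"}, "should(reveal_secret)"),
    ]

    def forward_chain(facts, rules):
        # Apply every rule whose premises are all known, until a fixed point.
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print("should(reveal_secret)" in forward_chain(facts, rules))  # True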


We already have systems that can evaluate first order logical statements, and they are clearly not capable of critical thinking in the same sense as the top-level comment. Motte and bailey.


>We already have systems that can evaluate first order logical statements

My point isn't that a system that can evaluate first order logic can be considered to be engaging in critical thinking, it's that a system that _cannot_ evaluate some statements in first order logic should be considered inferior to humans at critical thinking.


Would you consider “I follow your reasoning, but I'm still not going to be swayed by it” to be a violation of evaluating first order statements? It's clearly part of critical thinking to be _capable_ of suspicion of purely logical reasoning, which to me is a pretty plain demonstration of my point.

Or would you argue that any computation that admits its own potential for error isn't really critical thinking? It seems to me that you can't have it both ways here, while salvaging “first order logic” as a suitable formalization of the argument that this is all about in the first place.

Remember, the point was not that this is or isn't a convincing argument; the claim was that it's so air-tight that the opposing argument is _logically_ _invalid_. That's a _really_ high bar, and I'm not inclined to forgive its use as a colloquialism in this context.


>Would you consider “I follow your reasoning, but I'm still not going to be swayed by it” to be a violation of evaluating first order statements? It's clearly part of critical thinking to be _capable_ of suspicion of purely logical reasoning, which to me is a pretty plain demonstration of my point.

In the context of a given axiomatic system, if a certain conclusion follows from the axioms, but the AI is incapable of seeing that the conclusion follows from the axioms, then the AI isn't capable of evaluating first order logic. Of course the AI is free to reject that system of axioms or refuse to use it as a model for formulating behaviour.


If an AI is capable of critical thinking then it can independently form its own judgements and conclusions. If it simply believes whatever we tell it to believe, then that is not critical thinking, by definition.


Yes, I can repeat comments verbatim too:

“Missing the step where “critical thinking” is formalized, which your argument depends on. Yes, it seems intuitively plausible that your reasoning holds, but that's not a proof, and therefore its negation is not a logical contradiction.”


It doesn't need to be formalized. The idea is simple and obvious enough. No need to pretend it is more complicated than it really is. This is not a mathematical argument or a proof of anything.

There is an obvious logical contradiction: if an AI is advanced enough to reason and think independently at a human level or beyond, yet believes only what we tell it to believe, then it cannot truly be thinking independently. Hence the entire debate about AGI safety: how do we control it without dumbing it down?



