
What's the argument that understanding neurons is necessary?

Perhaps intelligence is like a black box input to our bodies (call it the "soul", even though this isn't testable and therefore not a hypothesis). The brain therefore wouldn't play any more of a role in intelligence than the eye does. And I'm not sure people would say the eye is necessary for understanding intelligence.

Now, I'm not really in a position to argue for such a thing, even if I believe it, but I'm curious what argument you might have against it.



You can actually hypothesize that a soul exists and that intelligence is non-material; it's just that your tests would quickly disprove that hypothesis - crude physical, mechanical modifications to the brain cause changes to intellect and character. If your hypothesis were correct, you would not expect to see changes like that at all.

Some people think that neurons specifically aren't necessary for understanding intelligence - but only in the same way that understanding transistors isn't necessary to understand computers - and that neurons comprise the units that most readily explain intelligence.


I’m here playing devil’s advocate - this test doesn’t work. Here are some related thought experiments.

Suppose a soul is an immaterial source of intelligence, but it controls the body via machine-like material hardware such as neurons.

Or, as an alternative, suppose there is a soul inside your body “watching” the sensory activations within your brain like a movie. The brain and body create the movie & have some intelligence, but other important properties of consciousness are bound to this observer entity.

In both these cases, the test just shows that if you damage the hardware, you can no longer observe intelligence because you’ve broken the end-to-end flow of the machine.


It's fine whether you are playing or supposing in seriousness but with good humor; it doesn't really change how anyone else should interact with you :)

But yes, supposing that, you would expect to see only damage that corresponds to broken machinery - different forms of paralysis or other purely mechanical impairments - not things that change the interior perspective.

Otherwise you start postulating the existence of a thing whose sole justification is your desire for the existence of that thing, which is natural when you start questioning beliefs and kick out all the supports without meaning to.

I think this is what Bertrand Russell's teapot was meant to elucidate.


> You can actually hypothesize that a soul exists and that intelligence is non-material; it's just that your tests would quickly disprove that hypothesis - crude physical, mechanical modifications to the brain cause changes to intellect and character. If your hypothesis were correct, you would not expect to see changes like that at all.

That’s not necessarily a disproof. It’s also not necessarily reasonable to conflate what we call “the soul” with intelligence.

This is entering the world of philosophy, metaphysics and religion and leaving the world of science.

The modern instinct is to simply call bullshit on anything which cannot be materially verified, which is in many ways a very wise thing to do. But it’s worth leaving a door open for weirdness, because apart from very limited kinds of mathematical truth (maybe), I think everything we’ve ever thought to be true has had deeper layers revealed to us than we could have previously imagined.

Consider the reported experience of people who’ve had strokes and lost their ability to speak, and then later regained that ability through therapy. They report experiencing their own thoughts and wanting to speak, but something goes wrong/they can’t translate that into a physical manifestation of their inner state.

Emotional regulation, personality, memory, processing speed, etc… are those really that different from speech? Are they really the essence of who we are, or are they a bridge to the physical world manifest within our bodies?

We can’t reverse most brain damage, so it’s usually not possible to ask a person what their experience of a damaged state is like in comparison to an improved state. We do have a rough, strange kind of comparison in thinking about our younger selves, though. We were all previously pre-memory, drooling, poorly regulated babies (and before that, fetuses with no real perception at all). Is it right to say you didn’t have a soul when you were 3 weeks old? A year? Two years? When exactly does “you” begin?

I can’t remember who I was when I was 2 months old with any clarity at all, and you could certainly put different babies in doctored videos and I wouldn’t be able to tell which was me/would make up stories and probably just absorb them. But I’m still me, and am that 2-month-old, much later in time. Whatever I’m experiencing has a weird kind of continuity. Is that encoded in the brain, even though I can’t remember it? Almost definitely, yeah. Is that all of what that experience of continuity is, and where that sense is coming from? I’ve got no idea. I certainly feel deeper.

Remember that none of us is living in the real world; we’re all living in our conscious perception. The notion that we can see all of it within a conscious mirror is a pretty bold claim. We can see a lot of it, and we can damage the bridge/poke ourselves in the eyes with icepicks and whatnot, and that does stuff, but what exactly is it doing? Can we really know?

Intuitively most people would say they were still themselves when they were babies despite the lack of physical development of the brain. Whatever is constructing that continuous experience of self is not memory, because that’s not always there, not intelligence, because that’s not always there, not personality, because that’s not always there… it’s weird.

I think it’s important to remember that. Whenever people think they have human beings fully figured out down to the last mechanical detail, and have sufficient understanding to declare who does and doesn’t have a soul and what that means in physical terms, bad things tend to happen. And that goes beyond a plea to be cautious about this kind of stuff purely out of moral hazard; the hazard is continual, and it is as empirical as it is moral. We can never really know what we are. Our perceptual limitations may prove assumptions we make about what we are to be terribly, terribly wrong, despite what seems like certainty.


Brain damage by physical trauma, disease, oxygen deprivation, etc. has dramatic and often permanent effects on the mind.

Drugs (including alcohol) also have marked effects on the mind. Of note is anesthesia, which can reliably and reversibly stop internal experience in the mind.

For a non-physical soul to hold our mind we would expect significant divergence from the above. Out of body experiences and similar are indistinguishable from dreams/hallucinations when tested against external reality (remote viewing and the like).


> Brain damage by physical trauma, disease, oxygen deprivation, etc. has dramatic and often permanent effects on the mind.

That's not a completely watertight argument.

Consider a traditional FM/AM radio. You can modify it, damage it, and get notable changes to its behaviour...

> Of note is anesthesia which can reliably and reversibly stop internal experience in the mind

...turn it off and on again...

> For a non-physical soul to hold our mind we would expect significant divergence from the above.

... yet concluding that all the noises produced from the radio are purely internal, mechanical and physical would be the wrong conclusion.

(I'm not arguing that the human brain/mind is anything like analogous to a radio, just pointing out the limits of this approach.)


I mean, if we're really going to go there, who's to say that a large enough LLM doesn't automatically receive a soul simply because that's one of the fundamental laws of the universe as decreed by the Creator?


Going where? I wasn't arguing for the existence of a soul.

Although, sure, if we could somehow manage to determine that souls did exist then presumably an AI model as capable as a human would also be eligible for one.


“For a non-physical soul to hold our mind we would expect significant divergence from the above.”

This sounds like it assumes a physical mind could access a non-physical soul. All we probably know is that we have to be using an intact mind to use free will.


The other comments have pretty much covered it. We can pretty clearly demonstrate that neurons in general are important to behavior (brain damage, etc.), and we even have some understanding of specific neurons or populations/circuits of neurons and their relation to specific behaviors (grid cells are a cool example). This work is all ongoing, and we're also starting to relate the connectivity of networks of neurons to their function and role in information processing. Recently the first full connectome of a larval fruit fly was published - stay tuned for the first full adult connectome from our lab ;)

Again, IANA neuroscientist, but this is my understanding from the literature and conversations with the scientists I work with.


Why would you doubt neurons play a role in intelligence when we've seen so much success in emulating human intelligence with artificial neural networks? It might have been an interesting argument 20 years ago. It's just silly now.


> It might have been an interesting argument 20 years ago. It’s just silly now.

Is it?

These networks are capable of copying something, yes. Do we have a good understanding of what that is?

Not really, no. At least I don’t. I’m sure lots of people have a much better understanding than I do, but I think it’s hard to know exactly what’s going on.

People dismiss the stochastic parrot argument because of how impressive big neural nets are, but it doesn’t really invalidate that argument. Is a very, very, very good parrot that learns from everyone at once doing basically the same as what we do? I’d argue no, at least not fully. It’s absorbed aspects of us extremely well/is a very weird, sophisticated mirror, yes, and is copying something somehow, probably in a way reminiscent of how we copy. Is it basically the same as what we’re doing when we think? Partially? Fully? Not at all?

A typical engineer would say “good enough”. That type of response is valuable in a lot of contexts, but I think the willingness to apply it to these models is pretty reckless, even if it’s impossible to easily “prove” why.

To be clear on the exact statement you made, I think you’re right/it’s pretty clear neurons play some very important role/potentially capture a ton of what we consider intelligence, but I don’t think anyone really knows what exactly is being captured/what amount of thought and experience they’re responsible for.


That person's argument is borderline insane to me - a severe lack of awareness of what is unknown, and a reverence for current best models (as regards modern science, including neurology - yet open-minded investigation beyond them is also a requisite here). And the pomposity is what truly boggles my mind ("It's silly to believe this, now."). A look in the mirror would suffice, to say the least...

Anyway, thank you for a great answer and conversation throughout this thread.

Regarding neural networks, parroting, and the emulation of intelligence (or the difference between an emulation and the "real thing"):

Well, somewhat like you say, we cannot propose a valid comparison from one to the other without an understanding of one (consciousness) or both. It's fascinating that there are open, valid, and pressing questions about what this new wave of software outputs and how that output is concretized (from foundational, semi-statistical algorithms, in this case).

Yes, I do agree neurons have something to do with the "final output". But this is a subsection of the problem - organic neurons are an order (or orders) of magnitude more complex than what the tricky "parrot" is up to. Moreover, these components function very differently - compare the known functions of the neuron with ANNs, backprop, etc. (the entire stack).

P.S.: One interesting theory I like to simulate and/or entertain is that every organic cell in the body has something to do with the final output of consciousness.


Please read the comment I was responding to. I was addressing the suggestion that perhaps the brain is as relevant to intelligence as the eye. Cognitive Neuroscience has been a thriving field for nearly half a century now. I didn't say we have it all figured out, just that it's obvious neurons are a piece of the puzzle.


Your theory makes sense in an evolutionary context. It is possible that all cells and organisms have some general intelligence. Humans do not have the ability to recognize this because evolutionarily it was only helpful to recognize intelligence when it could pose a threat to us. And the biggest threat to us in general was other humans as we are tribal animals. So we don't see it, we only see specialized intelligence that historically posed a threat to us.

It would explain why most "experts" didn't see GPT-4's abilities coming. Many of them expected that it would take a major algorithmic or technological improvement to do "real intelligent" things, because they fundamentally misunderstood intelligence.


Thank you, appreciate the compliment.

And yeah, there are definitely a lot of open questions related to all of this. Love how it's brought so many deep questions into focus.


If anything, the experience with artificial neural networks argues the opposite - biological neurons are quite a bit different from the "neurons" of ANNs, and backpropagation is not something that exists biologically.
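
To make that contrast concrete, here is a minimal sketch (plain Python, with made-up example numbers) of everything a standard artificial "neuron" computes: a weighted sum of its inputs followed by a fixed nonlinearity. Biological neurons involve spiking dynamics, dendritic computation, neurotransmitters, and ongoing plasticity, none of which appears below.

    import math

    def ann_neuron(inputs, weights, bias):
        # The entire computation of a standard artificial "neuron":
        # a weighted sum of the inputs passed through a sigmoid nonlinearity.
        pre_activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-pre_activation))

    # Arbitrary, made-up values purely for illustration:
    print(ann_neuron([0.5, -1.2, 0.3], [0.8, 0.1, -0.4], bias=0.2))  # ~0.59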



