There's equally no reason to believe that a machine can be conscious. The fact is, we can't say anything about what is required for consciousness because we don't understand what it is or how to measure or define it.
I disagree. I think the leap of faith is to believe that something in our brains made of physical building blocks can’t be replicated on a computer, when so far computers have proven very capable of simulating those building blocks.
Your emotions are surely caused by the chemical soup, but a chemical soup need not be the only way to arrive at emotions. It is possible for different mechanisms to achieve the same outcomes.
Perhaps we could say that we don't know whether the human biological substrate is required for mental processes or not, but either way we don't know enough about that substrate, or about our mental processes, to settle the question.
> How do we know we've achieved that? A machine that can feel emotions rather than merely emulating emotional behaviour.
Let me pose back to you a related question as my answer: How do you know that I feel emotions rather than merely emulating emotional behavior?
This gets into the philosophy of knowing anything at all. Descartes would say that you can't. So we acknowledge the limitation and do our best to build functional models that help us do things other than wallow in existential loneliness.
And Popper would say you cannot ever prove another mind or inner state, just as you cannot prove any theory.
But you can propose explanations and try to falsify them. I haven’t thought about it but maybe there is a way to construct an experiment to falsify the claim that you don’t feel emotions.
I suppose there may be a way for me to conduct an experiment on myself, though like you I don't have one readily at hand, but I don't think there's a way for you to conduct such an experiment on me.
I wonder what Popper did say specifically about qualia and such. There's a 1977 book called "The Self and Its Brain: An Argument for Interactionism". Haven't read it.
Preface:

> The problem of the relation between our bodies and our minds, and especially of the link between brain structures and processes on the one hand and mental dispositions and events on the other is an exceedingly difficult one. Without pretending to be able to foresee future developments, both authors of this book think it improbable that the problem will ever be solved, in the sense that we shall really understand this relation. We think that no more can be expected than to make a little progress here or there.
Philosophers have been worrying about the question of how you can know anything for thousands of years. I promise that your pithy answer here is not it.
I don’t think that’s an argument from authority. “Experts have been discussing X without reaching a conclusion for a long time” is a premise from which a reasonable argument can be made that an off-hand comment on HN is unlikely to have solved X. An argument from authority doesn't take that form, though the two do have invoking authorities in common.
Ok, but ChatGPT speaks this language just as well as I do, and we also know that emotion isn't a core requirement of being a member of this species because psychopaths exist.
Also, you don't know what species I am. Maybe I'm a dog. :-)
Human-to-human communication is different from human-to-computer communication. The Google search engine speaks the same language as you; heck, even Hacker News speaks the same language as you, in that you are able to understand what each button on this page means, and it will respond correctly when you communicate back by pressing e.g. the “submit” button.
Also, assuming psychopaths don’t experience emotions is going with a very fringe theory of psychology. Very likely psychopaths experience emotions; they are maybe just very different emotions from the ones you and I experience. I think a better example would be a comatose person.
That said, I think talking about machine emotions is useless. I see emotions as a specific behavior state (that is, you will behave in a more specific manner) given a specific pattern of stimuli. We can code our computers to do exactly that, but I think calling it emotions would just be confusing. I would much rather simply call it a specific kind of state.
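To make the "behavior state given a pattern of stimuli" idea concrete, here is a minimal sketch. The states, stimuli, and transition rules are entirely made up for illustration; the point is only that such a mapping is trivially codable, which is why calling the result "emotions" feels like a stretch.

```python
from enum import Enum, auto

class State(Enum):
    """Hypothetical behavior states, loosely named after emotions."""
    CALM = auto()
    STARTLED = auto()
    AVOIDANT = auto()

# Hypothetical mapping: (current state, stimulus) -> next behavior state.
TRANSITIONS = {
    (State.CALM, "loud_noise"): State.STARTLED,
    (State.STARTLED, "loud_noise"): State.AVOIDANT,
    (State.STARTLED, "quiet"): State.CALM,
    (State.AVOIDANT, "quiet"): State.CALM,
}

def react(state, stimulus):
    """Return the next behavior state; stay put if no rule applies."""
    return TRANSITIONS.get((state, stimulus), state)

state = State.CALM
for stimulus in ["loud_noise", "loud_noise", "quiet"]:
    state = react(state, stimulus)
# state is now State.CALM again
```

This is exactly the kind of "specific kind of state" described above: observable, reproducible, and not obviously deserving of the word "emotion".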
1) I know that I have emotions because I experience them.
2) I know that you and I are very similar because we are both human.
3) I know that we can observe changes in the brain as a result of our changing emotions and that changes to our brains can affect our emotions.
I thus have good reason to believe that since I experience emotions and that we are both human, you experience emotions too.
The alternative explanation, that you are otherwise human and display all the hallmarks of having emotions but do not in fact experience anything (the P-zombie hypothesis), is an extraordinary claim that has no evidence to support it and not even a plausible, hypothetical mechanism of action.
With an emotional machine, I see no immediately obvious evidence, even hypothetical, to lend it support. In light of all this, it seems extraordinary to claim that real emotions (not emulated emotions) are achievable by non-biological means.
After all, emulated emotions have already been demonstrated in video games. To call those sufficient would be setting an extremely low bar.
There is exactly one good reason, at least for consciousness and sentience, and that reason is anthropism: those concepts are so vaguely defined (or rather, defined by prototypes, à la Wittgenstein [or JavaScript before classes]).
We only have one good example of consciousness and sentience, and that is our own. We have good reason to suspect other entities (particularly other human individuals, but also other animals) have it as well, but we cannot access it, nor even confirm its existence. As a result, using these terms for non-human beings becomes confusing at best and is never actually helpful.
Emotions are another thing: we can define them outside of our experience, using behavior states and their connection with patterns of stimuli. On that definition we can certainly observe and describe the behavior of a non-biological entity as emotional. But given that emotion is a regulator of behavior which has evolved over millions of years, whether such a description would be useful is another matter entirely. I would be inclined to use a more general description of behavior patterns, one which includes emotion but also other behavior regulators.
They do not, but the same argument holds in the other direction: true human nature is not really known, so any attempt to define what human-like intelligence would consist of can only be incomplete.
There are many parts of human cognition, psychology, etc., especially those related to consciousness, that are known unknowns and/or completely unknown.
A mitigation for this issue would be to call it "generally applicable intelligence" or something, rather than human-like intelligence, implying it's not specialized AI but also not human-like. (I don't see why it would need to be human-like; even with all the right logic and intelligence, a human can still do something counter to all of that. Humans do this every day: intuitive action, irrational action, etc.)
what we want is generally applicable intelligence, not human like intelligence.
What if our definition of those concepts is biological to begin with?
How does a computer with full AGI experience the feeling of butterflies in your stomach when your first love is requited?
How does a computer experience the tightening of your chest when you have a panic attack?
How does a computer experience the effects of chemicals like adrenaline or dopamine?
The A in AGI stands for “artificial” for good reason, IMO. A computer system can understand these concepts by description, or recognize some of them via computer vision, audio, or other sensors, but it seems as though it will always lack sufficient biological context to experience true consciousness.
Perhaps humans are just biological computers, but the “biological” part could be the most important part of that equation.
There is reason to believe that consciousness, sentience, or emotions require a biological base.
Or
There is no reason to believe that consciousness, sentience, or emotions do not require a biological base.
The first is simple: if there is a reason, you can ask for it and evaluate its merits. Quantum stuff is often pointed to here, but the reasoning is unconvincing.
The second form is:

There is no reason to believe P does not require Q.
There are no proven reasons, but there are suspected ones. For instance, if the operation that neurons perform is what makes consciousness work, and that operation can be reproduced non-biologically, it would follow that non-biological consciousness is possible.
For any observable phenomenon in the brain the same thing can be asked. So far it seems reasonable to expect most of the observable processes could be replicated.
None of these act as proof, but they probably rise to the bar of reasons.
What is the "irreplaceable" part of human biology that leads to consciousness? Microtubules? Whatever it is, we could presumably build something artificial that has it.
We “could presumably build” it? Maybe we can do that once we figure out how to get a language-prediction model to comprehend what the current date is, or how to spell “strawberry”.
All right, same question: Is there more reason to believe that it is one breakthrough away, or to believe that it is not? What evidence do you see to lean one way or the other?
It’s clearly possible, because we exist. Just a matter of time. And as we’ve seen in the past, breakthroughs can produce incredible leaps in capabilities (outside of AI as well). We might not get that breakthrough(s) for a thousand years, but I’m definitely leaning towards it being inevitable.
Interestingly, the people doing the actual envelope-pushing in this domain, such as Ilya Sutskever, think that it’s a scaling problem and that neural nets do result in AGIs eventually, but I haven’t heard them substantiate it.
> This is not much different than saying that it’s possible to fly a spacecraft to another galaxy because spacecrafts exist and other galaxies exist.
It is very different. We have never seen a spacecraft reach another galaxy so we don't know it is possible.
We have an example of what we call intelligence arising in matter. We don't know what hurdles there are between current AI and an AGI, but we know that AGI is possible.
You didn't answer the question. Zero breakthroughs away, one, or more than one? How strongly do you think whichever you think, and why?
(I'm asking because of your statement, "Don’t fool yourself into believing artificial intelligence is not one breakthrough away", which I'm not sure I understand, but if I am parsing it correctly, I question your basis for saying it.)
Douglas Hofstadter wrote Gödel, Escher, Bach in the late 1970s. He used the short-hand “strange loops”, but dedicates a good bit of time to considering this very thing. It’s like the Ship of Theseus, or the famous debate over Star Trek transporters: at what point do we stop being an inanimate clump of chemical compounds and become “alive”? Further, at what point do our sensory organs transition from the basics of “life” to forming “consciousness”?
I find anyone with confident answers to questions like these immediately suspect.
We have no known basis for even deciding that other than the (maybe right, maybe wrong) guess that consciousness requires a lot of organized moving complexity. Even with that guess, we don't know how much is needed or what kind.