According to the theory, consciousness is an attention model. Here's artificial consciousness in three steps:
1. Have a robot build perception models of its environment and itself
2. Have the robot allocate computational resources and sensory bandwidth to the models using attention
3. Have the robot control attention using model predictive control
Because the attention model is, by virtue of being a model, less detailed than the actual attention process, it doesn't represent the mechanisms of attention or modeling accurately. Instead, it uses non-physical concepts such as "mental possession" to model itself or other agents paying attention to things, or "qualia" to denote the recursion that occurs when percepts we attend to are summarized by the attention model (which in turn can be attended to, and so on).
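Here's a toy sketch of how those three steps might fit together. Everything below (the names, the error dynamics, the candidate-generation scheme) is my own invention for illustration, not something specified by the theory:

```python
import numpy as np

class Agent:
    def __init__(self, n_models: int, horizon: int = 5):
        self.n = n_models
        # Step 2: attention as shares of compute/sensory bandwidth.
        self.attention = np.full(n_models, 1.0 / n_models)
        self.horizon = horizon

    def prediction_errors(self, observations: np.ndarray) -> np.ndarray:
        # Step 1 stand-in: each perception model's current error
        # (here, measured against a trivial all-zeros prediction).
        return np.abs(observations)

    def rollout_cost(self, attention: np.ndarray, errors: np.ndarray) -> float:
        # Predicted cumulative error if this allocation were held fixed;
        # models that receive more attention are assumed to improve faster.
        total, e = 0.0, errors.copy()
        for _ in range(self.horizon):
            e = e * (1.0 - 0.5 * attention)
            total += e.sum()
        return total

    def step(self, observations: np.ndarray) -> None:
        # Step 3: model predictive control over attention. Score a few
        # candidate allocations by rolling out their predicted errors.
        errors = self.prediction_errors(observations)
        candidates = [self.attention] + [
            0.5 * self.attention + 0.5 * np.eye(self.n)[i] for i in range(self.n)
        ]
        costs = [self.rollout_cost(c, errors) for c in candidates]
        self.attention = candidates[int(np.argmin(costs))]

agent = Agent(n_models=3)
agent.step(np.array([0.1, 2.0, 0.4]))
print(agent.attention)  # weight shifts toward the model with the largest error
```

The MPC step is just attention being chosen by simulating its own future consequences, which is also where the recursion above enters: the summary that drives the rollout is itself something the loop can attend to.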
Hard to say. It might simply be a supply problem: most movies, books, etc. don't really promote this vision of humanity's place in the world, so it might indeed be a question of accessibility. Historically, belief systems definitely had a wide range of "content" aimed at different levels of society.
But there also seems to be a real interest in “spiritual” needs, whatever that might mean in particular. And so a purely scientific approach may not be enough in the first place.
Personally I think it may be too much of a past-focused narrative to be very compelling. Most successful religions have a vision of the future, not merely the past.
So a proposal might be: construct a narrative that defeats these two biases in a convincing way:
But there are two very human biases on display here: the idea that occupying large amounts of physical space is indicative of "importance", and the idea that things which exist for long durations of time are inherently more valuable. These are human biases, and there are many examples in nature of the exact opposite being true.
Are they getting in the way of finding a middle ground between being lost in space and being the center of the universe? Do we need to find a middle ground?
In my opinion, a compelling narrative primarily needs to address the problems humanity is facing today: poverty and wealth inequality, climate change, gender inequality, war and conflict, religious disagreement, racism and prejudice, injustice...
I think that to address these, it doesn't matter much whether we are lost in space or the center of the universe.
It isn’t a comforting story. It doesn’t give meaning or explanation. People imagine themselves as a tiny ape on a giant rock hurtling through the void and shiver at the nihilistic vision.
Now, being made by a magic man who will welcome you to his magical, wonderful home after this short, nasty, brutish life? There's a comforting story.
It's not a comforting story, but I think it can provide meaning. It does for me at least. While we are not gods, we are different from all other animals - we are where "the fallen angel meets the rising ape" as Terry Pratchett wrote.
While we might not have free will in an absolute, metaphysical sense, we can self-reflect, practice self-control, shape our environment, and even change our nature. What will we do with this power?
There is no eternal afterlife to go to, but we now understand what actual life is, and it's no longer so nasty and brutish. Perhaps soon it won't be so short either. We are certainly capable of extending it in principle; we just need to get our shit together.
Comforting stories are cozy, but we are growing up. We've become smart enough to cause a whole lot of trouble for ourselves, and are not yet wise enough to fix it. We're confused, can't make sense of things and constantly whine about it. But that's how growth works. Humanity might just be in its awkward emo teenage phase.
You've called J and T into question, so let's do B as well. Physicists know that QM and relativity can't both be exactly true, so it's fair to say that they don't believe in these theories, in a naive sense at least. In general, anyone who takes Box's maxim that all models are wrong (but some are useful) to heart doesn't fully believe in any theory in a straightforward sense. But clearly we'd say physicists do have knowledge.
Sure, we'd say physicists have knowledge of quantum mechanics and general relativity. And we can also say physicists have knowledge of how to make predictions using quantum mechanics and general relativity. In this sense, general relativity is no more wrong than a hammer is wrong. Relativity is simply a tool that a person can use to make predictions. Strictly speaking, then, relativity is not itself right or wrong; rather, it's the person who uses relativity to predict things who can be right or wrong. If a person uses general relativity incorrectly, say by applying it to a domain where it can't make predictions, such as the quantum domain, then it's the person using relativity as a tool who is wrong, not relativity itself.
As a matter of linguistic convenience, saying that relativity (or theory X) "is right" is shorthand for saying that people who use relativity to make predictions make correct predictions, as opposed to relativity itself being correct or incorrect.
My point is that QM and GR make very different claims about what exists. Perhaps it's possible to unify the descriptions. But more likely there will be a new theory with a completely different description of reality.
On small scales, GR and Newtonian mechanics make almost the same predictions, yet make completely different claims about what exists in reality. In my view, if two theories made equally good predictions but still differed so fundamentally about what exists, that would matter, and it would imply that at least one of the theories is wrong. This is more a realist than an instrumentalist position, which is perhaps what you subscribe to, but tbh instrumentalism has always seemed indefensible to me.
If you are aware that "Maxatar's conjecture is that 1 + 1 = 5", then it's correct to say that you have knowledge about "Maxatar's conjecture", regardless of whether the conjecture is actually true or false. Your knowledge is that there is some conjecture that 1 + 1 = 5, not that it's actually true.
In that sense, it's also correct to say that physicists have knowledge of relativity and quantum mechanics. I don't think any physicist, including Einstein himself, thinks that either theory is actually true, but they do have knowledge of both theories in much the same way that one has knowledge of "Maxatar's conjecture", and in much the same way that you have knowledge of what the flat Earth proposition is, despite both being false.
It seems fairly radical to believe that instrumentalism is indefensible, or at least it's not clear what's indefensible about it. Was it indefensible for NASA physicists to use Newtonian mechanics to send a person to the Moon because Newtonian mechanics is "wrong"?
What exactly is indefensible? The observation that working physicists don't really care whether a physical theory is "real", as opposed to trying to come up with formal descriptions of observed phenomena that make future predictions, regardless of whether those formal descriptions are "real"?
If someone chooses to engage in science by coming up with descriptions and models that are effective at communicating observations and experimental results to other people, and whose results go on to enable engineering advances in technology, are they doing something indefensible?
Yes, it's correct to say that I have knowledge of your conjecture, and in the same way physicists have knowledge of QM and GR regardless of their truth status. But beyond just having knowledge of the theory, they also have knowledge of the reality that the theory describes.
> Was it indefensible for NASA physicists to use Newtonian mechanics to send a person to the Moon because Newtonian mechanics is "wrong"?
No, it was defensible, and that's exactly my point. Even though they didn't believe in the content of the theory (and ignoring the fact that they knew a better one), they did have knowledge of reality through it.
I don't think instrumentalism makes sense for reasons unrelated to this discussion. A scientist can hold instrumentalist views without being a worse scientist for it; it's a philosophical position. Basically, I think it's bad metaphysics. If you refuse to believe that the objects described by a well-established theory really exist, but you don't have any concrete experiment that falsifies it or a better theory, then to me it seems like sheer refusal to accept reality. I think people find instrumentalism appealing because they expect that any theory could be replaced by a new one that could turn out very different, and then they see it as foolish to have believed the old one, so they flatly refuse to believe or care what any theory says about reality. But you always believe something, whether you are aware of it or not, and the question is whether your beliefs are supported by evidence and logic.
I think a much closer analogy to function inversion is MCMC (or Bayesian inference in general), where we can easily evaluate the density p(x) at any point x, but going the other way is intractable. Strictly speaking, it's about finding a set of x's that are distributed as p(X), not recovering an x from a single density value p(x), but it's close.
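A minimal Metropolis-Hastings sketch of that asymmetry (my own toy example, with a standard normal target): evaluating the unnormalized density at a point is one line, while producing samples distributed according to it takes an iterative procedure:

```python
import numpy as np

def log_density(x):
    # The easy direction: evaluate the (unnormalized) log density at any x.
    return -0.5 * x**2  # standard normal, up to a constant

def metropolis_hastings(n_samples, step=1.0, seed=0):
    # The hard direction: generate x's distributed as p(X), approximately.
    rng = np.random.default_rng(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        proposal = x + step * rng.normal()
        # Accept with probability min(1, p(proposal) / p(x)).
        if np.log(rng.uniform()) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)
    return np.array(samples)

samples = metropolis_hastings(10_000)
print(samples.mean(), samples.std())  # roughly 0 and 1
```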
Relatedly, probabilistic programming was originally imagined pretty much like your second quote: you define a model, get some data, run them both through the built-in inference engine, and you get the parameters of the model likely to have produced the data. In practice though, there's no universal inference engine that works for everything (some people disagree, but they're NUTS ;)
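For concreteness, that workflow looks roughly like this in PyMC (a sketch from memory of the v5 API; the data is made up, and pm.sample() does default to NUTS):

```python
import numpy as np
import pymc as pm

# Made-up observations from a normal with unknown mean.
data = np.random.default_rng(0).normal(3.0, 1.0, size=100)

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)           # prior on the parameter
    pm.Normal("obs", mu=mu, sigma=1.0, observed=data)  # likelihood
    idata = pm.sample()  # the "built-in inference engine" (NUTS by default)

print(float(idata.posterior["mu"].mean()))  # close to 3.0
```

This works because the model is small and smooth; swap in a discrete or badly multimodal model and the same one-liner can fail, which is the "no universal inference engine" point.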
I guess pretty much for the same reason P is probably not equal to NP.
> I guess pretty much for the same reason P is probably not equal to NP.
Yep. In particular, there are classes called #P and PP, closely connected to NP, that capture the hardness of problems like computing partition functions, sampling from posterior distributions, and so on. The canonical example is #SAT, counting the satisfying assignments of a Boolean formula, which is #P-complete.
Didn't know about these, thanks for the pointer! Do you have a good resource for learning about these (specifically about the hardness of sampling from posterior distributions)?
The argument in the paper (that AGI through ML is intractable because the perfect-vs-chance problem is intractable) sounds similar to the uncomputability of Solomonoff induction (and AIXI, and the no free lunch theorem). Nobody thinks AGI is equivalent to Solomonoff induction. This paper is silly.
NP-hardness was a popular basis for arguments for/against various AI models back around 1990. In 1987, Robert Berwick co-wrote "Computational Complexity and Natural Language", which proposed that NLP models that were NP-hard were too inefficient to be correct. But given the multitude of ways in which natural organisms learn to cheat any system, it's likely that myriad shortcuts will arise to make even the most inefficient computational model sufficiently tractable to gain mindshare. After all, look at Latin...
Even simple inference problems are NP-hard (k-means, for example). I think what matters is that we get decent average-case performance (and sample complexity); see the k-means sketch below. Most people can find a pretty good solution to a traveling salesman problem in 2D. Not sure if that should be chalked up to myriad shortcuts or to domain specialization... maybe there's no difference. What do you have in mind re: Latin?
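To make the average-case point concrete, here's a minimal Lloyd's-algorithm sketch (my own toy example): exact k-means is NP-hard, but this simple local search typically lands on a good clustering for benign data:

```python
import numpy as np

def lloyd_kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest center...
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        # ...then move each center to the mean of its assigned points.
        new_centers = np.array([
            X[labels == j].mean(axis=0) if (labels == j).any() else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):  # converged to a local optimum
            break
        centers = new_centers
    return centers, labels

# Two well-separated blobs: Lloyd's finds them almost immediately.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(8, 1, (100, 2))])
centers, labels = lloyd_kmeans(X, k=2)
print(centers)
```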
>it's what capable and smart people value and pursue that makes all the difference.
How do you know capable and smart people will keep having good values? Seems to me that it's true until it isn't: populism takes over politics, ideology takes over the humanities, science gets Goodharted to death, etc. Values are highly circular: we value what high-status people in our (sub)culture value, and people become high-status by getting what others value. This holds for smart people as well.
Fair enough, but for the sake of this conversation, if we say 'good' values are those that keep things from staying the same, aren't the values of smart people just as likely to evolve towards 'bad' ones? For example, I'm sure most people know at least one smart person who only plays video games; it does seem that we'll keep inventing forms of entertainment that wirehead people more and more effectively, which seems in line with the Brave New World scenario.
Do you still see such dynamics in the coefficients if you have an order of magnitude more data, or fewer dimensions? 100 points in 6D is not much even for linear regression; the model might just be too high-variance to interpolate monotonically between the two populations. (Quick bootstrap check below.)
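One way to check the variance concern, using my own toy data rather than the article's: refit OLS on bootstrap resamples of 100 points in 6D and watch how much the coefficients move.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 6
X = rng.normal(size=(n, d))
y = X @ np.ones(d) + rng.normal(size=n)  # true coefficients are all 1

coefs = []
for _ in range(200):
    idx = rng.integers(0, n, size=n)  # bootstrap resample
    beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    coefs.append(beta)

# Per-coefficient spread across resamples; it shrinks roughly like
# 1/sqrt(n), so it's an order of magnitude smaller at n = 10_000.
print(np.std(coefs, axis=0))
```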
The amount of data rapidly factors out. One can get the effect even for millions of points in 6D. It's the complexity of the model solution, which is going to be degree D-1 polynomials over a shared degree D-1 denominator, that drives the effect.
"Rives sees ESM3’s generation of new proteins by iterating through various sequences as analogous to evolution."
Except for the part where a sequence is actually deemed more fit, i.e. natural selection? And the part where mutations are random, rather than sampled from the training-data manifold, which is far more constrained?
...so really it's a worse version of random search?