The problem is confabulations. In my benchmark (https://github.com/lechmazur/confabulations/), you see models produce non-existent answers in response to misleading questions that are based on provided text documents. This can be addressed.
> the open-domain Frame Problem is equivalent to the Halting Problem and is therefore undecidable.
Thank you. Code-as-data problems are innate to the von Neumann architecture, but I could never articulate how LLMs are so huge that they are essentially Turing-complete and computationally equivalent.
You _can_ enumerate their combinations, just not in our universe.
This is very wrong. LLMs are very much not Turing-complete, but they are algorithms running on a computer, so they definitely can't compute anything uncomputable.
Turing machines are typically described as having an infinite tape. The machine may not be able to traverse all of that tape in finite time, but the tape itself is not bounded to any finite length.
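To make the "unbounded but only lazily used" tape point concrete, here is a minimal simulator sketch (the toy unary incrementer and all names are invented for illustration):

```python
from collections import defaultdict

# The tape is conceptually infinite, but the simulator only materializes the
# finitely many cells the machine actually visits.
def run_tm(transitions, tape_input, start="q0", halt="halt", max_steps=10_000):
    tape = defaultdict(lambda: "_")        # blank symbol everywhere by default
    for i, sym in enumerate(tape_input):
        tape[i] = sym
    state, head = start, 0
    for _ in range(max_steps):
        if state == halt:
            break
        state, tape[head], move = transitions[(state, tape[head])]
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Toy machine: skip right over 1s, write a 1 on the first blank, then halt.
increment = {
    ("q0", "1"): ("q0", "1", "R"),
    ("q0", "_"): ("halt", "1", "R"),
}
print(run_tm(increment, "111"))            # -> 1111
```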
But it doesn't matter; it is an abstract model of computation.
But it doesn't matter: the Church–Turing thesis states that a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine.
It doesn't matter whether you put the algorithm on paper, on one tape or k tapes, etc.
The Rice's theorem I mentioned above is analogous to the Scott–Curry theorem in lambda calculus. Lambda calculus is Turing-complete; that is, it is a universal model of computation that can simulate any Turing machine.
The analogous problems with non-trivial properties of TMs end up being recursively inseparable sets in lambda calculus.
https://www.mdpi.com/1999-4893/13/7/175
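For reference, the standard formulations as I remember them (paraphrased, not quoted from the link above):

```latex
% Rice: no non-trivial semantic property of programs is decidable.
\textbf{Rice.} If $\mathcal{P}$ is a set of partial computable functions with
$\emptyset \neq \mathcal{P} \neq \{\text{all partial computable functions}\}$,
then the index set $\{\, e : \varphi_e \in \mathcal{P} \,\}$ is undecidable.

% Scott--Curry: the lambda-calculus analogue, stated via inseparability.
\textbf{Scott--Curry.} If $A$ and $B$ are disjoint, nonempty sets of
$\lambda$-terms, each closed under $\beta$-equality, then $A$ and $B$ are
recursively inseparable.
```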
> the open-domain Frame Problem is equivalent to the Halting Problem and is therefore undecidable.
Diaconescu's Theorem helps show where Rice's theorem comes into play here.
Littlestone and Warmuth's work explains where PAC learning really depends on a many-to-one reduction that is similar to fixed points.
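To make "many-to-one reduction" concrete, a throwaway sketch in the Rice's-theorem setting (`run`, `halts`, and `decides_always_safe` are hypothetical names used only to show the shape of the argument):

```python
# Hypothetical many-one reduction: deciding the semantic property
# "always returns 'SAFE'" would let us decide halting, which is impossible.

def reduce_halting_to_property(program_src: str, input_data: str) -> str:
    """Emit source for a program whose property ("always returns 'SAFE'")
    holds iff program_src halts on input_data."""
    return f"""
def wrapped(_x):
    run({program_src!r}, {input_data!r})   # simulate; may loop forever
    return "SAFE"                          # reached only if the simulation halts
"""

# If a total decider for the property existed, halting would be decidable:
#     decides_always_safe(reduce_halting_to_property(p, x))  ==  halts(p, x)
# Rice's theorem rules this out for every non-trivial semantic property.
```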
Viewing supervised learning as parametric linear regression, which depends on IID data, and unsupervised learning as clustering, which depends on AC (the axiom of choice), will help with the above.
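A toy illustration of the IID dependence (the data and model are made up; it is just ordinary least squares evaluated in and out of distribution):

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: x ~ Uniform(0, 1), mildly nonlinear ground truth.
x_train = rng.uniform(0, 1, 500)
y_train = np.sin(2 * x_train) + rng.normal(0, 0.05, 500)

# Fit the parametric model y ≈ a*x + b by ordinary least squares.
A = np.vstack([x_train, np.ones_like(x_train)]).T
(a, b), *_ = np.linalg.lstsq(A, y_train, rcond=None)

def mse(x):
    return float(np.mean((a * x + b - np.sin(2 * x)) ** 2))

# Test points drawn IID from the training distribution: small error.
print("IID test MSE:    ", mse(rng.uniform(0, 1, 500)))

# Test points from a shifted distribution: the IID assumption is violated
# and the same fitted line extrapolates badly.
print("Shifted test MSE:", mse(rng.uniform(2, 3, 500)))
```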
Another lens: both IID and AC imply PEM (the principle of the excluded middle).
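The AC-implies-PEM step is exactly Diaconescu's argument; a condensed sketch (standard textbook form, not taken from the linked paper):

```latex
% Diaconescu: a choice function forces excluded middle.
Fix a proposition $P$ and define
$U = \{x \in \{0,1\} : x = 0 \lor P\}$ and $V = \{x \in \{0,1\} : x = 1 \lor P\}$.
Both are inhabited ($0 \in U$, $1 \in V$), so a choice function $f$ gives
$f(U) \in U$ and $f(V) \in V$.

If $f(U) = 1$ or $f(V) = 0$, then $P$ holds by the defining disjunctions.
Otherwise $f(U) = 0 \neq 1 = f(V)$; but $P$ would force $U = V = \{0,1\}$ and
hence $f(U) = f(V)$, so $\neg P$. Either way, $P \lor \neg P$.
```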
Basically, for problems like protein folding, whose rules have the Markov and ergodic properties, it will work reliably well for science.
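A toy sketch of why "Markovian + ergodic" buys reliability: a made-up 3-state chain whose long-run statistics come out the same no matter where it starts:

```python
import numpy as np

# Toy 3-state transition matrix (rows sum to 1), irreducible and aperiodic,
# hence ergodic: a unique stationary distribution exists and is reached from
# any starting state.
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

dist = np.array([1.0, 0.0, 0.0])      # start entirely in state 0
for _ in range(200):
    dist = dist @ P                   # propagate the distribution
print("long-run distribution:", dist.round(4))

# Cross-check: the stationary distribution is the left eigenvector of P
# for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
print("eigenvector check:    ", (pi / pi.sum()).round(4))
```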
The three basic properties (confident, competent, and inevitably wrong) will always be with us.
That doesn't mean we can't do useful things with them, but if you are waiting for the hallucination problem to be 'solved', you will be waiting for a very long time.
What this new combination of elements does do is seriously help with leveraging base models to do very powerful things, without waiting for some huge group to train a general model that fits your needs.
This is a 'no effective procedure/algorithm exists' problem. Leveraging LLMs for frontier search will open up possible paths, but the limits of the tool will still be there.
The long-term stability of planetary orbits is an example of another limit of math, but JPL still does a great job regardless.
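In the same spirit, a throwaway sketch of how numerical integration sidesteps the lack of closed-form guarantees (normalized toy units and a symplectic Euler step; nothing to do with JPL's actual tooling):

```python
import numpy as np

GM = 1.0                      # normalized gravitational parameter
dt = 1e-3
pos = np.array([1.0, 0.0])
vel = np.array([0.0, 1.0])    # circular-orbit speed for r = 1 when GM = 1

def energy(p, v):
    return 0.5 * (v @ v) - GM / np.linalg.norm(p)

e0 = energy(pos, vel)
for _ in range(100_000):                  # ~16 orbital periods
    r = np.linalg.norm(pos)
    vel = vel - GM * pos / r**3 * dt      # kick (semi-implicit / symplectic Euler)
    pos = pos + vel * dt                  # drift
print("relative energy drift:", abs(energy(pos, vel) - e0) / abs(e0))
```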
Obviously someone may falsify this paper... but the safe bet is that it holds.
https://arxiv.org/abs/2401.11817
Heck, Laplacian determinism has been falsified, but since scientists are more interested in finding useful models, that doesn't mean it isn't useful.
'All models are wrong, some are useful' is the TL;DR.