You seem to know a lot. So where are we in terms of research, and what is public? Is the state of the art even further ahead? What's top of mind that's most interesting, or what needs to happen next for the big wow? What's your favorite example so far of Large Language Models?
I haven't seen any evidence that leading-edge research is anything but public.
The leading labs (Google Brain/DeepMind/NVIDIA/Meta/Microsoft/OpenAI) all publish in the open.
I'm excited by three things:
- This emergent phenomenon thing: as we build bigger models, there is a step function where they suddenly develop new abilities. Unclear where that ends.
- The work people are doing to move these abilities to smaller models.
- Multi-modal models. If you think this is impressive, just wait until you can do the same with images and text and video and sound and code all in the same model.
Do you ever wonder about what the military may have in terms of sophistication compared to enterprise? What are your thoughts on the emergent phenomenon in the realm of metaphysics, philosophy, and outlier conditions? Is it plausible that language is, in and of itself, what consciousness is? Is language a natural phenomenon of the universe (an analog of a pattern being a representation of a pattern, with all things that can sense a signal being essentially patterning entities)?
In an odd way, it kind of reminds me of the beginning of the Bible.
> In the beginning was the Word, and the Word was with God, and the Word was God.
Has a large language model feel to it, doesn't it? hah.