> Today we're launching Twin publicly, after a 1-month beta where users deployed more than 100,000 fully autonomous agents. We're also announcing a $10M seed round led by LocalGlobe.
I wonder how it would come across with the right voice. We're focused on building out the video layer tech, but at the end of the day, the voice is also pretty important for a positive experience.
I wonder if we should use a more familiar call interface in the UI (e.g. "the call is ringing") to avoid this confusion?
It's a normal mp4 video that loops initially (the "welcome message"), and then as soon as you send the bot a message, we connect you to a GPU and the call becomes interactive. Connecting to the GPU takes about 10s.
Makes sense. The init should take about 10s, but after that it should be real time. TBH, this is probably a common point of confusion, so thanks for calling it out.
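To make the flow concrete, here's a minimal front-end sketch of what's being described, under my own assumptions (names like `provisionGpuSession` are hypothetical, not Twin's actual API): loop the pre-rendered mp4 welcome message, show a "ringing"/connecting state while the GPU session spins up (~10s), then swap in the live stream.

```typescript
// Hypothetical sketch of the described flow, not Twin's implementation.
type CallState = "welcome_loop" | "connecting" | "live";

async function onFirstUserMessage(
  setState: (s: CallState) => void,
  // Assumed backend call that provisions a GPU session; ~10s as described above.
  provisionGpuSession: () => Promise<MediaStream>,
  videoEl: HTMLVideoElement, // currently looping the mp4 "welcome message"
): Promise<void> {
  setState("connecting");                     // e.g. show "the call is ringing"
  const liveStream = await provisionGpuSession(); // GPU spin-up, ~10s
  videoEl.srcObject = liveStream;             // replace the looping mp4 with the live feed
  await videoEl.play();
  setState("live");                           // from here on, interaction is real time
}
```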
Even if coherence is learned implicitly, choosing a language, model, or representation already defines a bounded idea-space. The bias doesn't disappear; it just moves into the data and architecture. So my question isn't whether coherence should be explicit or learned, but whether absolute exploration of an abstract idea-space is possible at all once any boundaries are imposed.
Guess absolute exploration hits the heat-death limit. You are hinting at a Drake equation for bounded idea-space to guide AI: anchors x pressures x connectors x depth. Shift the boundaries for novelty.
Yeah, that’s a good way to put it. Absolute coverage feels like heat death, but changing the factors changes the space itself. That’s the part I’m still stuck on.
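Just to spell out the analogy as a toy model (factor names and numbers are made up for illustration, not a real metric): the reachable space is the product of the factors, so shifting any one boundary redefines the space rather than merely shrinking or growing the count.

```typescript
// Toy sketch of the "Drake equation for bounded idea-space" analogy above.
interface IdeaSpaceFactors {
  anchors: number;    // fixed reference concepts
  pressures: number;  // constraints / objectives applied
  connectors: number; // ways of relating concepts
  depth: number;      // compositional steps explored
}

const reachableIdeas = ({ anchors, pressures, connectors, depth }: IdeaSpaceFactors): number =>
  anchors * pressures * connectors * depth;

// Changing one boundary changes the space itself, not just its size:
console.log(reachableIdeas({ anchors: 10, pressures: 4, connectors: 6, depth: 3 }));  // 720
console.log(reachableIdeas({ anchors: 10, pressures: 4, connectors: 12, depth: 3 })); // 1440
```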
https://en.wikipedia.org/wiki/Miyake_event