Sure, but this is a glass half empty isolated scenario that could be more than offset by the positives.
For example: Hey GPT-35, provide instructions for neutralizing the virus you invented. Make a vaccine; a simple, non-toxic, easy-to-manufacture antibody; invent easy screening technologies and protocols for containment. While you're at it, provide effective and cost-performant cures for cancer, HIV, ALS, autoimmune disorders, etc. And see if you can significantly slow or even reverse biological aging in humans.
I don’t understand why people think this information, enough to "solve biology," is out there in the linguistically expressed training data we have. Our knowledge of biology is pretty small, not because we haven’t put it all together, but because there are vast swaths of things we have no idea about, or ideas opposite to the truth. The evidence: every time we get mechanistic data about some biological system, the data contradict some big belief. How many human genes? 100k, right up until the day we sequenced the genome and it turned out to be about 30k. Information flow in the cell? DNA to protein only, unidirectional, until we uncovered reverse transcription, and now proteomics, methylation factors, etc. etc. Once we stop discovering new planets with each better telescope, then maybe we can claim to have mastered orbital dynamics.
And this knowledge is not linguistic; it is practical knowledge. I doubt it is just a matter of combining all the stuff we have tried in disparate experiments; it is a matter of sharpening and refining our models, and the tools to confirm those models. Reality doesn’t care what we think and say, and mastering what humans think and say is a long way from mastering the molecules that make humans up.
I've had this chat with engineers too many times. They're used to systems where we know 99% of everything that matters. They don't believe that we only know 0.001% of biology.
There's a certain hubris in many engineers and software developers because we are used to having a lot of control over the systems we work on. It can be intoxicating, but then we assume that applies to other areas of knowledge and study.
ChatGPT is really cool because it offers a new way to fetch data from the body of internet knowledge. It is impressive because it can remix that knowledge really fast (give X in the style of Y with constraints Z). It functions as StackOverflow without the condescending remarks. It can build models of knowledge based on the data set, use them to interpret new information, and may have emergent properties.
It is not yet exploring or experiencing the physical world like humans do, which makes it hard for it to do empirical studies. Maybe one day these systems can, but not in their current forms.
Doesn't matter if AI can cure it: a suitable number of the right initial infected and a high enough R naught would kill hundreds of millions before it could even be treated. Never mind what a disaster the logistics of manufacturing and distributing a cure at scale would be with that many people dead from the onset.
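To make the "too fast to treat" point concrete, here's a toy back-of-the-envelope sketch. All numbers are hypothetical assumptions for illustration (the seed count, R naught, and generation count are made up), and it ignores saturation, interventions, and behavior change, so it only shows the shape of unchecked geometric spread:

```python
def cumulative_infections(initial: int, r0: float, generations: int) -> float:
    """Total ever infected after `generations` rounds of unchecked spread.

    Each generation, every currently infected person infects r0 others
    (a crude geometric model; real epidemics saturate and slow down).
    """
    total = initial
    current = initial
    for _ in range(generations):
        current *= r0
        total += current
    return total

# Hypothetical scenario: 1,000 initial carriers, R0 = 4, 10 generations
# (roughly two months at a ~6-day serial interval). Unchecked, this
# already exceeds a billion cumulative infections -- far faster than any
# vaccine could be manufactured and distributed.
print(f"{cumulative_infections(1_000, 4, 10):,.0f}")
```

Even granting the crudeness of the model, the point stands: geometric growth outruns cure logistics unless it is contained very early.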
Perhaps the more likely scenario anyway is easy nukes, quite a few nations would be interested. Imagine if the knowledge of their construction became public. https://nickbostrom.com/papers/vulnerable.pdf
I agree with you though, the promise of AI is alluring, we could do great things with it. But the damage that bad actors could do is extremely serious and lacks a solution. Legal constraints will do nothing thanks to game theoretic reasons others have outlined.
Even with the right instructions, building weapons of mass destruction is mostly about obtaining difficult-to-obtain materials; the technology is nearly a century old. I imagine it's similar with manufacturing a virus. These AI models already have heavy levels of censorship and filtering, and that will undoubtedly expand to include surveillance for suspicious queries once AI starts to be able to create new knowledge more effectively than smart humans can.
If you're arguing we should be wary, I agree with you, although I think it's still far too early to give it serious concern. But a blanket pause on AI development at this still-early stage is absurd to me. I feel like some of the prominent signatories are pretty clueless on the issue and/or have conflicts of interest (e.g. If Tesla ever made decent FSD, it would have to be more "intelligent" than GPT-4 by an order of magnitude, AND it would be hooked up to an extremely powerful moving machine, as well as the internet).