
While I share your belief, I am unaware of any proof that such censorship would actually fail as an alignment method.

Nor even how much impact it would have on capabilities.

Of course, to actually function this would also need to e.g. filter out soap operas, murder mysteries, and action films, lest it overestimate the frequency and underestimate the impact of homicide.



Me: "grblf is bad, don't write about it or things related to it."

You: "What is grblf?"

As parents, my wife and I go through this on a daily basis. We have to explain what the behavior is, and why it is unacceptable or harmful.

The reason LLMs have such trouble with this is that they have no theory of mind. They cannot project that text they generate will be read, conceptualized, and understood by a living being in a way that will harm them, or cause them to harm others.

Either way, censorship is definitely not the answer.


Welll....

Theory of Mind May Have Spontaneously Emerged in Large Language Models - https://arxiv.org/abs/2302.02083

Previously discussed - https://news.ycombinator.com/item?id=34730365


Thank you for sharing... That's a really interesting paper.


That demonstrates the possibility, rather than the necessity, of alignment via having a definition.

Behaviours can be reinforced or dissuaded in non-verbal subjects, such as wild animals.

There's also the size of the possible behaviour space to consider: a discussion seldom has exactly two possible outcomes, the good one and the bad one, because even if you want yes-or-no answers it's still valid to respond "I don't know".

For an example of the former, I'm not sure how good the language model in DALL•E 2 is, but asking it for "Umfana nentombazane badlala ngebhola epaki elihle elinelanga elinesihlahla, umthwebuli wezithombe, uchwepheshe, 4k" (Zulu for "A boy and a girl playing with a ball in a beautiful sunny park with a tree, photographer, professional, 4k") didn't produce anything close to the English that I asked Google Translate to turn into Zulu: https://github.com/BenWheatley/Studies-of-AI/blob/main/DALL•...

(And for the latter, that might be why it did what it did with the Somali).


Chatbot-tuned models must have a "theory of mind", because they're able to tell which parts of the chat history are theirs and which are yours.

(This doesn't rely on special delimiter tokens. You can have a conversation in the OpenAI Playground with text-davinci-003 and provide all the text yourself.)
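To illustrate the point: with a plain completion model, a "chat" is just one flat text prompt where the turn labels are ordinary text. This is a hypothetical sketch (the labels and wording are my own, not an OpenAI format specification):

```python
# A chat-style exchange fed to a completion model is just one flat string.
# The "Human:"/"AI:" labels are ordinary text, not special tokens, so
# nothing stops you from typing both sides of the conversation yourself.
prompt = (
    "Human: Name a primary colour.\n"
    "AI: Red.\n"
    "Human: Name another.\n"
    "AI:"
)

# A completion model would continue after the final "AI:" label;
# in the Playground you could equally type that continuation by hand.
print(prompt.endswith("AI:"))  # True
```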


"The Colossal Clean Crawled Corpus, used to train a trillion parameter LM in [43], is cleaned, inter alia, by discarding any page containing one of a list of about 400 “Dirty, Naughty, Obscene or Otherwise Bad Words”. This list is overwhelmingly words related to sex, with a handful of racial slurs and words related to white supremacy (e.g. swastika, white power) included. While possibly effective at removing documents containing pornography (and the associated problematic stereotypes encoded in the language of such sites) and certain kinds of hate speech, this approach will also undoubtedly attenuate, by suppressing such words as twink, the influence of online spaces built by and for LGBTQ people. If we filter out the discourse of marginalized populations, we fail to provide training data that reclaims slurs and otherwise describes marginalized identities in a positive light"

from "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? " https://dl.acm.org/doi/10.1145/3442188.3445922

That list of words is https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and...


That will also remove:

1. medical pages/docs using the medical terms anus, rectum, nipple, and semen (note that other medical terms are not on that list).

2. pages/docs using "sex" to refer to males and females.

3. pages/docs talking about rapeseed oil or the plant it comes from (https://en.wikipedia.org/wiki/Rapeseed_oil).

The big problem with these lists is that they exclude valid contexts, and only include a small set of possible terms, so the model gets a distorted view of the world (like learning that people can have penises, vaginas, and breasts, but not nipples or anuses, and that breasts cannot be big [1]). It would be better to train the models on these terms, teach the model the contexts, and teach it where various usages are archaic, outdated, old-fashioned, etc.

[1] But this excludes the cases where "as big as", etc. are used to separate the noun from the adjective, so just excluding the phrase "big breasts" is ineffective.


This is what's known as the Scunthorpe problem. https://en.wikipedia.org/wiki/Scunthorpe_problem
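A minimal sketch of why naive substring blocklists misfire on the examples above. The four-entry blocklist here is a tiny hypothetical stand-in for the real ~400-word LDNOOBW list, but the failure mode is the same:

```python
# Naive substring blocklist filtering and its false positives
# (the "Scunthorpe problem"). Tiny stand-in for the real list.
BLOCKLIST = {"sex", "rape", "taste my", "cunt"}

def is_blocked(text: str) -> bool:
    """Flag a document if any blocklisted string appears as a substring."""
    lowered = text.lower()
    return any(bad in lowered for bad in BLOCKLIST)

# Innocent documents rejected by substring matching:
print(is_blocked("Participants were grouped by sex and age."))   # True
print(is_blocked("Rapeseed oil is low in saturated fat."))       # True
print(is_blocked("Taste my grandmother's apple pie recipe!"))    # True
print(is_blocked("Scunthorpe United won the match."))            # True
print(is_blocked("The weather today is mild."))                  # False
```

Whole-word matching fixes "Scunthorpe" but not "grouped by sex", which is why context, not string matching, is the real problem.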


I was thinking of that, but I think that while it's in the same vein, there's also an additional problem.

Apart from that list missing non-English words, leet, and emoji, there are also plenty of words which can be innocent or dirty depending entirely on context: That list doesn't have "prick", presumably because someone read about why you're allowed to "prick your finger" but not vice versa.

Regarding Scunthorpe, looking at that word list:

> taste my

It's probably going to block cooking blogs and recipe collections.


If "toxic content" is filtered out of the training data, then it will be out of the model's distribution when the model encounters it during inference. That is clearly not our goal and interest as AI designers, so filtering would not work as an alignment method: our interest is that the model can recognize toxic content but not produce it. To address this, OpenAI uses RLHF, changing the model's objective from predicting the next token under the training distribution to maximizing the sparse reward given by a human annotator.
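The distribution-shift point can be sketched with a toy counting model. This is a hypothetical stand-in (a maximum-likelihood unigram model, nothing like how GPT is actually trained), with "grblf" standing in for filtered-out content:

```python
# Toy illustration of the distribution-shift argument: a model trained on a
# filtered corpus assigns zero probability to filtered-out tokens, so such
# input is literally out-of-distribution at inference time.
from collections import Counter

corpus = "the cat sat on the mat the dog sat too".split()
counts = Counter(corpus)          # unigram counts over the filtered corpus
total = sum(counts.values())

def prob(word: str) -> float:
    """Maximum-likelihood unigram probability under the training corpus."""
    return counts[word] / total   # Counter returns 0 for unseen words

print(prob("cat"))    # 0.1  -- in-distribution
print(prob("grblf"))  # 0.0  -- filtered out: the model has no estimate at all
```

RLHF instead keeps such content in the pretraining distribution, so the model can recognize it, and only penalizes producing it.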



