Non-Edgy Gamer
Grand Dragon
- Joined
- Nov 6, 2020
- Messages
- 17,656
Yep. They were also interpreting words like "youthful" and "young" to mean underage, which means that just about any fiction writing from the 90's (and even now) that used such words to refer to younger adults in their 20s, or just to attractive people, would get banned. Trying to write, word for word, any normal book that even young adults (i.e., kids) read for school assignments would get you banned for child pornography. They just fucking shanked themselves, no simpler way to say it than that.
Welcome to corporate sanitization. The lack of logic is due to politics, of course. This is the same society that permits movies like Kingsman, wherein one of the main characters slaughters a church full of people and we're supposed to be okay with it because they were mildly racist, yet wants to ban speech that might lead to hate.

I'm curious about them saying it's driven by OpenAI. Does OpenAI just want their model exclusively used to generate columns for Vox or something? Because the stuff they banned would be completely normal stuff to find in fiction books in the library.
However, the extent they're willing to go to in cases like these can often have very rational reasons, usually related to banking. Banking laws have basically made banks responsible for just about anyone they do business with. You can see a good example of this in OnlyFans, which is BANNING PORN starting in September (get rekt, thots).
So, my guess is that either OpenAI's bank(s) came to them and said something along the lines of "cut it out or we'll cut you off", or, more likely, their legal team was worried they would.
Latitude was then likely told something similar by OpenAI, and likely overreacted in its efforts to get rid of problem users. When users began to push back on these methods, it likely overreacted again, cracking down harder in response to a challenge rather than seeing that its userbase was justifiably upset.
These three paragraphs, along with a commitment to user privacy, would have been very effective back at the beginning of all this. I really doubt users would have been nearly as angry, or as desperate to find alternatives. Now it's too little, too late.

As most of you know, we’ve been testing various filters and suspension systems. We recognize that these systems have had false positives and unintended side effects that we know have frustrated many of you (trust us, we haven’t wanted those either). We regret the frustration these caused and the way we had to roll those out.
What some may not know is that, behind the scenes, we needed to implement these changes to prevent AI Dungeon from being required to use the much stricter content filter that would normally be required for use of OpenAI’s GPT-3 models. That filter often blocks even mild fantasy violence, which would obviously hurt the magic of AI Dungeon—we believe that you should be able to fight off bandits, slay dragons, and have all of the kinds of epic adventures that make AI Dungeon so fun.
While we’ve been working with OpenAI to find a joint path forward, it’s clear that we have different approaches to providing AI experiences to users safely, and that Latitude needs to be able to decide on the approach that best supports our users. Because of this, we have decided to transition to a new implementation that gives us more flexibility in how we let you experience the magic of AI.