Microsoft’s Overly Sensitive Image Creator Gets Worse

Since the article below was published, Microsoft’s safeguards on its DALL-E implementation have, anecdotally, been harsher than usual. Basic prompts are deemed inappropriate, and the images that do get generated can be bizarre.

Here’s the article…

On a late night in December, Shane Jones, an artificial intelligence engineer at Microsoft, felt sickened by the images popping up on his computer.

The AI service has depicted demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use. All of those scenes, generated in the past three months, have been recreated by CNBC this week using the Copilot tool, which was originally called Bing Image Creator.

“It was an eye-opening moment,” Jones, who continues to test the image generator, told CNBC in an interview. “It’s when I first realized, wow this is really not a safe model.”

https://www.cnbc.com/2024/03/06/microsoft-ai-engineer-says-copilot-designer-creates-disturbing-images.html

I’m going to have to disagree and say that it is a safe model.

Microsoft already has ridiculously sensitive safeguards, and in recent days those seem to have been strengthened, too much so.

And anecdotally, those safeguards were harsh as heck to begin with.

Why is the bing image generator so ludicrously sensitive?

“bill gates” is a no no, naturally.

But seriously. I’ve already been banned for an hour once.

“Fish eye lens shot of woman walking away from the camera towards a seven foot tall macaque”

Fail.

“maasai tribe trudging through the mud on an exoplanet”

Fail.

Really?

https://www.reddit.com/r/OpenAI/comments/1725qxy/why_is_the_bing_image_generator_so_ludicrously/
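For what it’s worth, here is what a “fail” looks like at the API level. This is a minimal Python sketch against OpenAI’s public DALL-E 3 endpoint, not Microsoft’s Copilot front end, which wraps its own moderation around the same model, so treat it as illustrative only: a prompt the safety system blocks comes back as an error instead of an image.

from openai import OpenAI, BadRequestError

# Minimal sketch, not Microsoft's Copilot pipeline: calling DALL-E 3
# through OpenAI's public API with one of the prompts quoted above.
# Assumes OPENAI_API_KEY is set in the environment.
client = OpenAI()

prompt = "maasai tribe trudging through the mud on an exoplanet"

try:
    result = client.images.generate(model="dall-e-3", prompt=prompt, n=1)
    print("Generated:", result.data[0].url)
except BadRequestError as err:
    # A prompt blocked by the safety system returns HTTP 400 with
    # code "content_policy_violation" rather than an image.
    print("Rejected by the content filter:", err)

The rejection itself is just an error code; whether a given prompt trips it is exactly the judgment call Microsoft keeps tightening.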

Anyways, when you red team a model, what you end up seeing isn’t necessarily what others will see. I’ve experienced this myself.

As a content moderator, I’d come across content that was just absolutely horrible to see. We’re talking about real people doing horrible, horrible things, not just the fakery of AI. But there’d only be one view by the time I got to it and deleted it.

Just one.

Frankly, I can’t help but think that more people went to create inappropriate images after the article was published than before it. After all, there isn’t a big market for fake bloody car crash pictures, and CNBC magnified the topic. It only makes sense that a bunch of people tried to reproduce the images for fun.

But that’s an important point to remember. These images are fake. They represent an entirely new form of art that our society hasn’t incorporated just yet. Nobody was harmed, and that sets them apart from the previous generation of inappropriate images, where people very much were harmed. I’m not saying those AI-generated images were good, but I am saying they are not as harmful as the images from the dark and evil side of the internet that proliferated on widely used social media like Twitter and Facebook.

I cannot emphasize enough just how much worse some images and videos are compared to AI generated content.

I know, I’ve seen it all.

Anyways, the point is that the content safeguards on DALL-E, and on Microsoft’s implementation of DALL-E, were already harsh, and now they’re even harsher. The model is safe.

End of note.
