
Why don't indie devs use AI-generated images as art?

Non-Edgy Gamer

Grand Dragon
Patron
Glory to Ukraine
Joined
Nov 6, 2020
Messages
17,656
Strap Yourselves In
They pretty much all seem to need Nvidia GPUs. Supposedly there are workarounds for AMD GPUs for some but seems to be ignored for the most part.
You can rent cloud computing for them very cheap.
The software itself might be open source but the data "weights" has some weird ass licenses that require logins, etc.
In order to download it, nothing more. "Open source" still isn't "totally free for whatever", because they are still licensed products. The license keeps companies from doing negative things with them, like claiming the product is theirs, patenting it and then suing anyone who releases an open source product based on it. Or releasing a product that was meant to be free forever under a paid license, or with DRM, etc.

Stable Diffusion was released under the Creative ML OpenRAIL-M license in order to restrict it from certain uses like breaking laws, generating misinformation and providing medical advice. Probably more a limitation of liability for the developers than anything. You can, and people are, generating images that some would consider illegal. There is no restriction on this, and no 'phone home' feature to let the developer revoke your license.

What i'd like to see is some sort of program that i can download on my PC, with all the data locally that i can easily back up and be able to use as i like
That's exactly what Stable Diffusion is. I run it on my PC. You can even run it on your CPU if you can't use a GPU or a cloud host. It will just be slow.

And if you don't want to agree to weird licenses, then you can download it from some mirror somewhere else, or download one of the trained versions of the model. You're still only using it under license legally though, which is the important part.
 
Self-Ejected

Davaris

Self-Ejected
Developer
Joined
Mar 7, 2005
Messages
6,547
Location
Idiocracy
View attachment 28238

So close... I can't convince it to move the camera up into the sky, and look down on the figure like it was an isometric game.

The problem is that when online stores advertise lead figures, they take photos of them like the one above. They don't take photos looking down on them as if they were on a game map.

What they need is a new dataset to train it on. Either build a machine that can look at many lead figures from many different angles and plug the AI into that, or get the AI to do something similar in a game-like environment with 3D models.
 

Non-Edgy Gamer

Grand Dragon
Patron
Glory to Ukraine
Joined
Nov 6, 2020
Messages
17,656
Strap Yourselves In
So close... I can't convince it to move the camera up into the sky, and look down on the figure like it was an isometric game.
(((isometric perspective))) and then put the word grid as a negative prompt.

Not sure why you'd want to though. You'd lose all the detail and it wouldn't rotate. It also wouldn't be the same image, so I'm not sure how it would affect it if you were using a seed.

Better to create a 3D model from the image and rotate that.
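For reference, the triple parentheses are the emphasis syntax from the AUTOMATIC1111 web UI (an assumption here; other front ends use different conventions): each pair of round brackets multiplies the attention weight of the wrapped phrase by 1.1, and square brackets divide it by 1.1. A toy sketch of that arithmetic:

```python
def prompt_weight(token: str) -> float:
    """Emphasis multiplier implied by wrapping brackets, AUTOMATIC1111-style.

    Each "(...)" layer multiplies the weight by 1.1; each "[...]" layer
    divides it by 1.1. Plain text keeps weight 1.0.
    """
    weight = 1.0
    while token.startswith("(") and token.endswith(")"):
        weight *= 1.1
        token = token[1:-1]
    while token.startswith("[") and token.endswith("]"):
        weight /= 1.1
        token = token[1:-1]
    return round(weight, 3)

# prompt_weight("(((isometric perspective)))") -> 1.331 (i.e. 1.1 cubed)
# prompt_weight("[grid]") -> 0.909 (square brackets de-emphasize; a negative
# prompt is a separate, stronger mechanism than bracket de-emphasis)
```

So (((isometric perspective))) asks the sampler to weight that phrase roughly 33% more heavily than the rest of the prompt.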
 
Last edited:
Self-Ejected

Davaris

Self-Ejected
Developer
Joined
Mar 7, 2005
Messages
6,547
Location
Idiocracy
So close... I can't convince it to move the camera up into the sky, and look down on the figure like it was an isometric game.
(((isometric perspective))) and then put the word grid as a negative prompt.

Not sure why you'd want to though. You'd lose all the detail and it wouldn't rotate. It also wouldn't be the same image, so I'm not sure how it would affect it.

Better to create a 3D model from the image and rotate that.

Thanks, that looks better.

I'm not going to make 3D models.
 
Last edited:

Bad Sector

Arcane
Patron
Joined
Mar 25, 2012
Messages
2,334
Insert Title Here RPG Wokedex Codex Year of the Donut Codex+ Now Streaming! Steve gets a Kidney but I don't even get a tag.
with respect to some licensing issues: in USA it actually became settled rather quickly, AI generated content is not bound to the licenses of the data used to train it

I meant the licenses for the trained data - i.e. the data that the training process produces, not the data that was used for the process.

You can rent cloud computing for them very cheap.

I avoid anything related to "cloud", i want to have control over my computing.

In order to download it, nothing more.

There is still a license in place.

Stable Diffusion was released under the Creative ML OpenRAIL-M license in order to restrict it from certain uses like breaking laws, generating misinformation and providing medical advice.

This license is not Open Source compatible though; there are some freedoms that FLOSS licenses provide, and an important one is that they do not limit your use of the program (aside from the law itself).

That's exactly what Stable Diffusion is. I run it on my PC. You can even run it on your CPU if you can't use a GPU or a cloud host. It will just be slow.

This is not "exactly" what i wrote though, you are missing the rest of the context.

And if you don't want to agree to weird licenses, then you can download it from some mirror somewhere else, or download one of the trained versions of the model. You're still only using it under license legally though, which is the important part.

There are always licenses in place even if i download it from a mirror.

Of course i can always just grab the files from one of those Mega links where people dump stuff and hack together a version that runs inside a VM or whatever and call it a day while ignoring any licensing or whatever, but that wasn't the point of what i wrote, the point was doing that officially and properly (not necessarily by whoever made Stable Diffusion).

Perhaps a better way to put it would be "i wish this was available from Debian's main repository" (with all the implications and requirements that would entail).
 

Non-Edgy Gamer

Grand Dragon
Patron
Glory to Ukraine
Joined
Nov 6, 2020
Messages
17,656
Strap Yourselves In
There is still a license in place.
One that gives you permission to use it for free, so long as you aren't using it for something criminal.

So does Linux btw. Nearly every piece of software on your PC has a license.

Over $600k, and who knows how much time and effort, went into this; the least you can expect is a license. Not many are going to spend that money and then just dump it online. You might as well hope someone deposits free money in your bank account while you're at it.

This is not "exactly" what wrote though, you are missing the rest of the context.
No, I get you, you want this to be the one piece of higher-end software you own with zero license at all. Good luck with that.
There are always licenses in place even if i download it from a mirror.
Of course. That's part of the redistribution license.
Perhaps a better way to put it would be "i wish this was available from Debian's main repository" (with all the implications and requirements that would entail).
You mean like this?

Licenses currently found in Debian main include:

:nocountryforshitposters:
 

Bad Sector

Arcane
Patron
Joined
Mar 25, 2012
Messages
2,334
Insert Title Here RPG Wokedex Codex Year of the Donut Codex+ Now Streaming! Steve gets a Kidney but I don't even get a tag.
One that gives you permission to use it for free, so long as you aren't using it for something criminal.

So does Linux btw. Nearly every piece of software on your PC has a license.
No, I get you, you want this to be the one piece of higher-end software you own with zero license at all. Good luck with that.
You mean like this? <shows a bunch of licenses>

Dude, stop. These responses show that you clearly did not understand what i wrote at a fundamental level and i am not interested in explaining to you your gross misunderstanding. Re-read what i wrote from the beginning without trying to "prove me wrong" or whatever mode your brain is in right now because you are reading things that do not exist.

Three hints:
  1. Check your use of the word "license".
  2. Debian has some specific rules and guidelines for licenses and the stable diffusion license is not compatible with those guidelines for the stuff that goes into "main".
  3. I never wrote that i want the program and data to have no license, but that the license should be open source - and since some people misunderstand what that means, i brought up the Debian "main" repository, which only accepts licenses that follow their guidelines.
Anyway, i wrote my thoughts on the topic and i don't feel like continuing this discussion, as any attempt at that will be a certain waste of time.
 

Darkozric

Arbiter
Edgy
Joined
Jun 3, 2018
Messages
1,843
Renaissance of the Point & Click Graphic Adventure genre based on AI art?
There is potential art-wise, but imo renaissance is a bit of an exaggeration, since you also need devs with the passion to design clever gameplay. Do those devs exist? :philosoraptor:
 
Self-Ejected

Davaris

Self-Ejected
Developer
Joined
Mar 7, 2005
Messages
6,547
Location
Idiocracy
Renaissance of the Point & Click Graphic Adventure genre based on AI art?
There is potential art-wise, but imo renaissance is a bit of an exaggeration, since you also need devs with the passion to design clever gameplay. Do those devs exist? :philosoraptor:

An AI plugin to make clever gameplay will be coming in the near future. I'm not sure if I'm joking about that.

IMO AI art is currently in the gimmick, waste-your-time phase, but may become the bee's knees with good datasets and tooling in 5 or 10 years. The only way you could reliably make money using it at present is to build your own GPU farm, hook it up to a website prompt with Stable Diffusion and advertising, then spam forums and social media about how awesome AI art is while dropping links to your website.

Artists who are scared of not having a job in the future should maybe consider moving into this area instead of quitting the business completely, as choosing a good dataset for an AI to train on is an art in itself. For proof of that, look at the difference between MidJourney and Stable Diffusion output.
 

Blaine

Cis-Het Oppressor
Patron
Joined
Oct 6, 2012
Messages
1,874,787
Location
Roanoke, VA
Grab the Codex by the pussy
1) They're small. Upscaling algorithms suck and make the image total shit most of the time.

Upscaling will continue to suck for the rest of eternity because the visual information (fewer pixels = less information) simply isn't there, and guesses will never be good enough, because they can only ever be based on the existing, surrounding information.

This is directly relevant to the relationship between accuracy and precision, now that I think of it. Given an identical target integer of 4,357, a measurement of 4,000 is accurate, whereas a measurement of 4,350 is both accurate and precise (but not as precise as possible).

Smaller images are like the 4,000 measurement, while bigger ones are like the 4,350 measurement; trying to reconstruct a precise large image from its small duplicate is similar to trying to reconstruct 4,350 from 4,000, only there are far more variables involved.
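The information-loss argument above can be made concrete with a toy round trip: average neighbouring pixels to downscale, then upscale by duplication, and the original pattern is unrecoverable (a deliberately crude sketch, not how real resamplers work):

```python
def downscale_2x(row):
    """Crude 2x downscale of a 1-D pixel row: average adjacent pairs."""
    return [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]

def upscale_2x(row):
    """Nearest-neighbour 2x upscale: duplicate every pixel."""
    return [v for v in row for _ in range(2)]

original = [10, 250, 10, 250]        # high-contrast detail
small = downscale_2x(original)       # [130.0, 130.0]: the contrast is gone
restored = upscale_2x(small)         # [130.0, 130.0, 130.0, 130.0]
```

Both [10, 250, 10, 250] and [130, 130, 130, 130] downscale to the same two pixels, so no upscaler looking only at the small image can tell which one it came from.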
 

Non-Edgy Gamer

Grand Dragon
Patron
Glory to Ukraine
Joined
Nov 6, 2020
Messages
17,656
Strap Yourselves In
IMO AI art is currently in the gimmick, waste-your-time phase
With the Krita plugins and img2img, it's a very powerful addition to any digital artist's toolbox, just like Photoshop's Content Aware features have been for several years, so I don't think it's a waste of time at all.

ECTdIEc.png


I would say that phase was Disco Diffusion or the free Dalle stuff that was out there a couple of months ago, which only seemed to be useful for generating blurry memes and nightmare fuel.

Now, if you're planning on using AI only for your game art, then, yes, it's a gimmick. A fun gimmick at times though.
 

Dexter

Arcane
Joined
Mar 31, 2011
Messages
15,655
Upscaling will continue to suck for the rest of eternity because the visual information (fewer pixels = less information) simply isn't there, and guesses will never be good enough, because they can only ever be based on the existing, surrounding information.
[Attached: three renders of "Cyberpunk Cityscape by Frederic Edwin Church" at progressively higher upscaled resolutions]
 

Non-Edgy Gamer

Grand Dragon
Patron
Glory to Ukraine
Joined
Nov 6, 2020
Messages
17,656
Strap Yourselves In
Dexter, it does look good enough for most purposes. One AI image even won an art competition at the Colorado State Fair.

However, fine details of things like fabric patterns on garments don't scale up well.

In the future, it's possible that extremely big-brained models will be able to interpolate or flat-out make up what is likely in an area based on pattern recognition. E.g., the model "knows" what young skin looks like and can therefore render pores, even if what was originally there was only a few pixels. I guess that's basically what GFPGAN and Codeformer do for faces, but I've seen both create some ugly fixes based on false positives.
 
Last edited:

Derringer

Prophet
Joined
Jan 28, 2020
Messages
1,934
ai is basically tracing but nobody gives a shit since all people do is trace and change enough shit so it doesn't look the same, that goes for music as well
A bit of a simplification. More accurate to say it's copying. You are asking the AI to copy something similar to hundreds, thousands or tens of thousands of things it's seen.

It's like asking an experienced artist to draw for you, only this artist is more likely not to understand you and draw a human centipede. :M
People do the same thing as well. Both have to be 'programmed' to generate an image, either from multiple examples or from just one; the general idea behind 'ai' is not having to specialize in doing that, or waste one's time doing it. Like a typical tool, you just organize a pool and let it do its own thing.
 
Self-Ejected

Davaris

Self-Ejected
Developer
Joined
Mar 7, 2005
Messages
6,547
Location
Idiocracy
Making portraits and pictures of landscapes might be very useful for an advertising agency, but it's not useful for making games. Except maybe text adventures, which don't sell anyway.

Again, what smart people are doing is what I suggested above: make an AI art generation website, then spam it on forums and social media. While you sit back and relax, the ad revenue will come in.
 

Dexter

Arcane
Joined
Mar 31, 2011
Messages
15,655
IMO AI art is currently in the gimmick, waste-your-time phase, but may become the bee's knees with good datasets and tooling in 5 or 10 years.
It doesn't do 3D, but I think you could generate almost everything you'd need for a simple 2D Point&Click Adventure game or Visual Novel and the likes with what's currently out there. Highly detailed backgrounds, character art and portraits and items. You'd just need to do stuff like Animations/interpolation and bare-bones coding.

I'm still amazed at the kind of shit people can come up with with the right prompts:
[Attached: 16 example images]

For proof of that, look at the difference between MidJourney and Stable Diffusion output.
Pretty sure the main difference between the two is that Midjourney is Closed Source, and that they manipulate the user input server side to give it the look other people refer to as the "Midjourney look/style", rather than that they have some sort of Super Advanced model. In SD you have to reach into the plumbing to make it do something extraordinary, but you seem to have a lot more control over the process. Maybe they also employ a somewhat better language model to interpret prompts.

it's possible that extremely big-brained models will be able to interpolate or flat out make up what is likely in an area based on pattern recognition though
But it's already doing that. It doesn't really matter to the model whether the output is 512x512, 1792x1024 or 3584x2048; it just hallucinates more detail based on the available image and prompt specification. The biggest problem with the current implementation of SD Upscale is that it separates the image into small pieces to iterate over (in the case of the final image above, it was 25 different pieces), which leads to visual aberrations where they're stitched back together (like with the sky at the top), and more so the higher you set the Denoise it needs to imagine more detail.
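The tile arithmetic behind that 25-piece example can be sketched like so (the 2048px target, 512px tiles and 128px overlap are illustrative assumptions, not the script's actual defaults):

```python
import math

def tile_grid(width, height, tile=512, overlap=128):
    """Columns and rows of overlapping tiles needed to cover an image.

    Each step advances by (tile - overlap) pixels, so neighbouring
    tiles share an overlap band that gets blended to hide seams.
    """
    step = tile - overlap
    cols = max(1, math.ceil((width - overlap) / step))
    rows = max(1, math.ceil((height - overlap) / step))
    return cols, rows

cols, rows = tile_grid(2048, 2048)   # (5, 5): 25 pieces to denoise and stitch
```

Every shared band is a place where a high Denoise setting can hallucinate differently on each side of the seam, which is where the stitching artifacts come from.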
 
Self-Ejected

Davaris

Self-Ejected
Developer
Joined
Mar 7, 2005
Messages
6,547
Location
Idiocracy
The difference between MidJourney and Stable Diffusion output that I have seen is that one looks awesome and the other looks pretty bad. Now, MidJourney may have a super secret algorithm that does this, or perhaps they selected a subset of data using only very skilled artists, while Stable Diffusion just grabbed everything out there.
 

Non-Edgy Gamer

Grand Dragon
Patron
Glory to Ukraine
Joined
Nov 6, 2020
Messages
17,656
Strap Yourselves In
Making portraits and pictures of landscapes might be very useful for an advertising agency, but it's not useful for making games. Except maybe text adventures, which don't sell anyway.
Yeah, visual novels. How much money could that possibly make?
https://sensortower.com/blog/fate-grand-order-revenue-4-billion
lol
But it's already doing that.
I know. But more so, and more intelligently. To the point that you can do Bladerunner-esque zooming in to any detail.

Current models just don't have the kind of complexity to be able to fill in things like patterns with stuff that's believable.

Take the city you posted. Can you enlarge it to the point that details like windows become visible? That some buildings don't seem to blend into others? Had I saved it, I could have shown you a Codeformer fail I ran into the other day where it gave a woman a mustache.

The tech is impressive and useable, but not perfected by any means.
 

Non-Edgy Gamer

Grand Dragon
Patron
Glory to Ukraine
Joined
Nov 6, 2020
Messages
17,656
Strap Yourselves In
The difference between MidJourney and Stable Diffusion output that I have seen is that one looks awesome and the other looks pretty bad. Now, MidJourney may have a super secret algorithm that does this, or perhaps they selected a subset of data using only very skilled artists, while Stable Diffusion just grabbed everything out there.
I have seen some really decent images come out of the newer Midjourney models, but I think a lot of the difference you're seeing is in who is using it and who is posting images.

People who actually pay money for Midjourney for art and regularly post images are probably going to be of a superior skill and taste compared to some random who just spammed out 50 images from SD on a lark and posted an image he thought was cool.
 
Self-Ejected

Davaris

Self-Ejected
Developer
Joined
Mar 7, 2005
Messages
6,547
Location
Idiocracy
In the showcase section of MidJourney, they post the words that were used to make the art. So I tried them in Stable Diffusion, and everything that came out was terrible by comparison. That's why I think they have a team of very good artists curating their dataset.

Also they have an agreement with a social media company, since they are making people join Discord if they want to use MidJourney, so I think they are very well funded.
 

Dexter

Arcane
Joined
Mar 31, 2011
Messages
15,655
I have seen some really decent images come out of the newer Midjourney models, but I think a lot of the difference you're seeing is in who is using it and who is posting images.
I think it's mostly fine-tuning. Midjourney essentially seems to be doing something similar to what people are trying to do by adding "by Greg Rutkowski, Artgerm, Alphonse Mucha" to their prompts in SD (but probably a lot more complicated), which also makes its outputs all seem to have a specific, recognizable "look". It's a black box, with the only way to interact with it being Discord. There's no way to know what happens with the prompt on the backend, and you obviously won't get the same result trying to use it 1:1 like he did (you wouldn't anyway, since it's a different model). You have to come up with these "optimizations" yourself with SD, but that also means you have a lot more fine control over the resulting output.
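A hypothetical sketch of the kind of server-side prompt rewriting being described here: the backend quietly appends a fixed house-style suffix to whatever the user typed. The modifier text below is invented for illustration; Midjourney's actual pipeline is a black box.

```python
# Invented example modifiers; Midjourney's real backend is unknown.
HOUSE_STYLE = "highly detailed, dramatic lighting, painterly"

def rewrite_prompt(user_prompt: str, style: str = HOUSE_STYLE) -> str:
    """Append the house-style suffix unless it is already present."""
    if style in user_prompt:
        return user_prompt
    return f"{user_prompt}, {style}"

# rewrite_prompt("a koala") -> "a koala, highly detailed, dramatic lighting, painterly"
```

This would also explain why pasting a showcase prompt into vanilla SD looks so much worse: the user never sees the suffix, so it never makes it into the copied prompt.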
Also they have an agreement with a social media company, since they are making people join Discord if they want to use MidJourney, so I think they are very well funded.
Wot?
koala.png
 
Self-Ejected

Davaris

Self-Ejected
Developer
Joined
Mar 7, 2005
Messages
6,547
Location
Idiocracy
Also they have an agreement with a social media company, since they are making people join Discord if they want to use MidJourney, so I think they are very well funded.
Wot?
koala.png

"However, Midjourney AI, developed by a team at a private research center and headed by David Holz, is the most recent AI image generator to go viral online. This generator’s one drawback is that it can only be accessed through Discord, which uses a chat server to communicate and deliver prompts. "

https://insidetelecom.com/midjourney-ai-discord-based-ai-art-generator/
 

Non-Edgy Gamer

Grand Dragon
Patron
Glory to Ukraine
Joined
Nov 6, 2020
Messages
17,656
Strap Yourselves In
In the showcase section of MidJourney, they post the words that were used to make the art. So I tried them in Stable Diffusion, and everything that came out was terrible by comparison. That's why I think they have a team of very good artists curating their dataset.
No. At best, they're probably pulling from highly rated images in certain datasets for their finetunes, as Dexter suggested. Maybe even using their own users' most upscaled photos along with user text inputs to improve their training as well.

Companies like Google and (so-called) OpenAI have the kind of cash for large-scale curation - probably via things like internet CAPTCHAs.

More on their methods:

https://www.forbes.com/sites/robsal...-on-art-imagination-and-the-creative-economy/
Midjourney Founder David Holz
How was the dataset built?

It’s just a big scrape of the Internet.
We use the open data sets that are published and train across those. And I’d say that’s something that 100% of people do. We weren’t picky. The science is really evolving quickly in terms of how much data you really need, versus the quality of the model. It’s going to take a few years to really figure things out, and by that time, you may have models that you train with almost nothing. No one really knows what they can do.

Did you seek consent from living artists or work still under copyright?

No. There isn’t really a way to get a hundred million images and know where they’re coming from. It would be cool if images had metadata embedded in them about the copyright owner or something. But that's not a thing; there's not a registry. There’s no way to find a picture on the Internet, and then automatically trace it to an owner and then have any way of doing anything to authenticate it.
Also they have an agreement with a social media company, since they are making people join Discord if they want to use MidJourney, so I think they are very well funded.
Stable Diffusion also ran via Discord during their beta. No agreement was needed, just community-aided content moderation, which Midjourney also does. Various Stable Diffusion users have set up their own Discord bots. The Codex could as well, if we wanted.
 
