
Why don't indie devs use AI-generated images as art?

Nazrim Eldrak

Scholar
Joined
Oct 2, 2015
Messages
270
Location
My heart
Found this from 28 Sep 2022 (don't know if this has already been posted. I just have a fucking desire to share it):
 

Darboven

Novice
Joined
Feb 9, 2022
Messages
19
Can it generate animations for those 3d models?
Might be of interest: https://guytevet.github.io/mdm-page/
https://github.com/GuyTevet/motion-diffusion-model


Some other recent stuff: https://imagen.research.google/video/


These things have already gotten some simple implementations in the SD webUIs over the past few days:
Making AI Variants: https://arxiv.org/abs/2208.01626 https://www.youtube.com/watch?v=XW_nO2NMH_g
https://energy-based-model.github.i...-Generation-with-Composable-Diffusion-Models/ https://arxiv.org/pdf/2206.01714.pdf

Wow, daily optimizations that cut memory usage in half! Someone must be replacing Python code with C.
Might propagate down to stuff people use soon: https://ai.facebook.com/blog/gpu-inference-engine-nvidia-amd-open-source/
To address these industry challenges, Meta AI has developed and is open-sourcing AITemplate (AIT), a unified inference system with separate acceleration back ends for both AMD and NVIDIA GPU hardware. It delivers close to hardware-native Tensor Core (NVIDIA GPU) and Matrix Core (AMD GPU) performance on a variety of widely used AI models such as convolutional neural networks, transformers, and diffusers. With AIT, it is now possible to run performant inference on hardware from both GPU providers. We’ve used AIT to achieve performance improvements up to 12x on NVIDIA GPUs and 4x on AMD GPUs compared with eager mode within PyTorch.

AITemplate is a Python framework that transforms AI models into high-performance C++ GPU template code for accelerating inference. Our system is designed for speed and simplicity. There are two layers in AITemplate — a front-end layer, where we perform various graph transformations to optimize the graph, and a back-end layer, where we generate C++ kernel templates for the GPU target. In addition, AIT maintains a minimal dependency on external libraries. For example, the generated runtime library for inference is self-contained and only requires CUDA/ROCm runtime environments. (CUDA, NVIDIA’s Compute Unified Device Architecture, allows AI software to run efficiently on NVIDIA GPUs. ROCm is an open source software platform that does the same for AMD’s GPUs.)
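For context on where those "daily optimizations that cut memory usage in half" came from: alongside compiler-level work like AIT, most of the big VRAM reductions in the SD tools came from half-precision weights and "attention slicing", i.e. computing the attention scores in chunks instead of materializing the whole matrix at once. A minimal numpy sketch of the idea (toy sizes and hypothetical function names, not the actual UNet):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_full(q, k, v):
    # Standard attention: materializes the full (n x n) score matrix,
    # which is what blows up VRAM at high image resolutions.
    scores = (q @ k.T) / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def attention_sliced(q, k, v, slice_size=64):
    # Attention slicing: process queries in chunks so only a
    # (slice_size x n) score matrix exists at any one time.
    # Same output, much lower peak memory, slightly slower.
    out = np.empty((q.shape[0], v.shape[1]))
    for i in range(0, q.shape[0], slice_size):
        scores = (q[i:i + slice_size] @ k.T) / np.sqrt(q.shape[-1])
        out[i:i + slice_size] = softmax(scores) @ v
    return out
```

The trade is memory for a bit of speed, which is why these tricks spread through the webUIs so fast: they need no retraining, just a different evaluation order.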
Also, Textual Inversion is down to 6GB of VRAM and Dreambooth to 8GB now (unfortunately the latter doesn't work on Windows yet), although with a higher regular RAM cost (24GB): https://github.com/Ttl/diffusers/tree/dreambooth_deepspeed

You can now also install and try the first implementation of this: https://github.com/ashawkey/stable-dreamfusion
It seems rough so far:

Interesting, thanks. All this sounds like it could be a step towards not all enemies/NPCs/whatever in games being clones of each other.
 

V17

Educated
Joined
Feb 24, 2022
Messages
323
Entertaining news from the world of open-source AI!

Stability AI, the authors of Stable Diffusion, originally promised to release their slightly improved v1.5 model back in September. Then they kept delaying, for reasons first undisclosed, later revealed to be attempts at neutering its ability to produce CP (and possibly also celebrity porn) because of potential legislation/lawsuits. They also said in unofficial channels that v1.5 might be released with no NSFW capability at all. No release date was announced; they claimed the legal & engineering work was taking time. Some people are having doubts about how serious Stability AI is about promoting open-source.

RunwayML, a research company that collaborated with Stability AI on the development of Stable Diffusion, apparently got annoyed by the delays and decided to just say fuck it and release the v1.5 model themselves. This seems to be legal, but they had apparently explicitly agreed earlier not to do this. Stability AI got mad and almost immediately filed a takedown request against the model file, which they retracted a few hours later when RunwayML's CTO publicly said there's no legal issue and they're within their rights to do it.

So the model is out, but it's not even that great, it's just a small change. It does hands and other details slightly better. I have not tested it yet, so no idea about the NSFW capabilities. Might provide a good basis for further community training because the main change apparently was better filtering of low-quality data from the training dataset.

What's slightly more interesting is the v1.5 model trained for inpainting (and outpainting), also released by RunwayML a couple days earlier. That one actually brings improvements and apparently pushes outpainting quality closer to Dall-E.


-----


That is not all. Some of the GUI apps for Stable Diffusion use single or multiple parentheses and brackets to emphasize or de-emphasize parts of the text prompt. So if you want to really emphasize some word or phrase, you may want to write it (((like this))).
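For reference, the weighting behind those parentheses is simple. In the AUTOMATIC1111-style convention, each layer of `(` multiplies a token's weight by roughly 1.1 and each layer of `[` divides it; a toy parser sketching that idea (simplified and hypothetical — real GUIs also support explicit `(word:1.5)` weights and escaping):

```python
def prompt_weights(prompt, up=1.1, down=1.1):
    """Assign an emphasis weight to each word from the nesting depth of
    ( ) and [ ] around it, mimicking the convention some SD GUIs use:
    each '(' multiplies the weight by `up`, each '[' divides by `down`."""
    weights = []
    depth_up = depth_down = 0
    word = ""

    def flush():
        nonlocal word
        if word:
            weights.append((word, (up ** depth_up) / (down ** depth_down)))
            word = ""

    for ch in prompt:
        if ch == "(":
            flush(); depth_up += 1
        elif ch == ")":
            flush(); depth_up -= 1
        elif ch == "[":
            flush(); depth_down += 1
        elif ch == "]":
            flush(); depth_down -= 1
        elif ch.isspace():
            flush()
        else:
            word += ch
    flush()
    return weights
```

So `(((masterpiece)))` ends up weighted about 1.1³ ≈ 1.33 relative to unmarked words, which is why people stack parentheses.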

Well, reddit started using automated global bans based on word filters a few weeks ago, and the filter is retarded and overzealous, as usually happens with similar solutions. And at least one person has been banned for hate speech after sharing the prompt used to generate their image, because the prompt used triple parentheses. Absolutely hilarious.
 

Justinian

Arcane
Developer
Joined
Oct 21, 2022
Messages
292
I'm using AI generated art in my game. Just for the title page and some icons. Since the game is low res pixel art (16x16 base tile size) it doesn't really make sense to use AI for more, esp since the tools still have some ways to go.

No giant asses though, sorry.

Gmkk916.png

jHO2zcH.png
 

ProphetSword

Arcane
Developer
Joined
Jun 7, 2012
Messages
1,758
Location
Monkey Island
I'm definitely using AI artwork for the Gold Box-inspired CRPG that I'm working on. It took a while, but I managed to get it to create artwork that is reminiscent of old-style D&D artwork (I often end up fixing things in Photoshop, but the AI gives me a good place to start). So, if you're willing to put in the work, you can definitely get good output.

Main reason I'm doing it is that all the artwork I was using for the game, originally purchased from the Unity Asset Store, started showing up in a bunch of other games. I want original artwork that nobody else is using, not stuff that makes people think they've seen it someplace else.

Anyway, here are some screenshots:

aiscreen3.jpg



aiscreen2.png.jpg



aiscreen1.jpg
 

Dexter

Arcane
Joined
Mar 31, 2011
Messages
15,655
This seems to have turned into an adult game, I guess using NAI: https://f95zone.to/threads/locus-tenebrarum-demo-rue.134235/
2130665-1666063546423.png

Apparently there's now a Unity Demo available? https://mega.nz/file/UbdEXZjS#Z91ul2_EP2Yg6gs7tJMAQFP4fU0uRSL8a5z7Dfc-l4k

Is this the first playable game prototype using mostly AI art aside from https://store.steampowered.com/app/1889620/AI_Roguelite/

Anybody know of anything else? Not sure if enough time has passed for any full games using this tech to have been released.

On the other hand, I guess many would just release their games without drawing any attention to having used AI to generate art assets, as it wouldn't make much of a difference and might attract unnecessary drama. I don't think anybody would notice that what Justinian or ProphetSword posted above, for instance, was AI generated if they didn't point it out themselves.

Also saw this as a proof-of-concept, although I don't think the guy actually followed up on it:


Entertaining news from the world of open-source AI!
I posted a bit more about that recent Drama here: https://rpgcodex.net/forums/threads...ts-tuning-and-other-stuff.144920/post-8184729
What's slightly more interesting is the v1.5 model trained for inpainting (and outpainting), also released by RunwayML a couple days earlier. That one actually brings improvements and apparently pushes outpainting quality closer to Dall-E.
I haven't tried it myself (I don't use those features much so far anyway), but saw various people proclaim that it works wonders now and they got much better results.

Inpainting example, supposedly this came out on the first attempt to fix the hand by typing "gloved hand" and to get rid of the weird stuff at the bottom left without providing any further context:
Inpainting-Model1-5.jpg
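For context, mask-guided inpainting conceptually composites the model's proposal with the untouched original outside the mask at every denoising step (real pipelines do this on noised latents, not raw pixels). A toy numpy illustration of just that compositing step, with hypothetical array shapes:

```python
import numpy as np

def inpaint_composite(original, proposal, mask):
    # Keep the original image where mask == 0, take the model's
    # proposal where mask == 1. Diffusion inpainters apply this kind
    # of blend at each denoising step so only the masked region
    # (e.g. the hand) is regenerated while the rest stays untouched.
    m = mask.astype(float)[..., None]  # broadcast over color channels
    return m * proposal + (1.0 - m) * original
```

A dedicated inpainting model improves on this by also being trained to condition on the unmasked surroundings, which is why the v1.5 inpainting checkpoint blends edits in so much better than masking alone.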


Outpainting example:
2jc50ek3o3v91.png


Examples of inpainted faces and details on generated pieces:
surtiqr6jlv91.png
exorfur6jlv91.png

x7pcmsr6jlv91.png
8c7k5ts6jlv91.png

876fntr6jlv91.png
xd4suwr6jlv91.png
 

V17

Educated
Joined
Feb 24, 2022
Messages
323
Apparently the new version of Midjourney is slowly starting to understand pixel art. None of the results are usable on their own, but I wonder if a person who learned the basic pixel art techniques but simply isn't artistically inclined, doesn't really understand color etc., could sort of trace over it or just use it as a decent starting composition.



 

Atrachasis

Augur
Joined
Apr 11, 2007
Messages
211
Location
The Local Group
Apparently the new version of Midjourney is slowly starting to understand pixel art. None of the results are usable on their own, but I wonder if a person who learned the basic pixel art techniques but simply isn't artistically inclined, doesn't really understand color etc., could sort of trace over it or just use it as a decent starting composition.
Not being an artist, I am curious: What aspect of pixel art has midjourney picked up on? I realize that there is more to good pixel art than just scaling everything down to 320x200 and then back up again; it probably has its own best practices of composition, contrasts, geometry, projection (which is why we've had such a slew of bad pixel art games over the past few years). What is the secret that midjourney has learnt? If you say, "blocky pixels", I'm going to be disappointed.
 

V17

Educated
Joined
Feb 24, 2022
Messages
323
Apparently the new version of Midjourney is slowly starting to understand pixel art. None of the results are usable on their own, but I wonder if a person who learned the basic pixel art techniques but simply isn't artistically inclined, doesn't really understand color etc., could sort of trace over it or just use it as a decent starting composition.
Not being an artist, I am curious: What aspect of pixel art has midjourney picked up on? I realize that there is more to good pixel art than just scaling everything down to 320x200 and then back up again; it probably has its own best practices of composition, contrasts, geometry, projection (which is why we've had such a slew of bad pixel art games over the past few years). What is the secret that midjourney has learnt? If you say, "blocky pixels", I'm going to be disappointed.
Since I don't use Midjourney I'm only speculating based on those few images, but it seems to me that it creates a good basis for a limited palette, and it creates a game-like perspective (the scene has perspective, but the objects in the back seem to be flat and front-facing even when viewed from an angle). And yes, part of it is also blocky pixels: usually if you just scale an image down it doesn't work as well. With low-res images you're kind of creating a symbol of the thing, where certain features are exaggerated or suppressed so that the result looks clear. Which I think these images show to a degree.
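To illustrate why "just scale it down" tends to look bad: the lazy pipeline only samples pixels on a grid and snaps them to a palette, with no deliberate shaping of silhouettes or exaggeration of features. A toy numpy version of that lazy pipeline (hypothetical helper, purely for comparison with hand-made pixel art):

```python
import numpy as np

def lazy_pixelate(img, factor, palette):
    """Naive 'pixel art' conversion: nearest-neighbor downscale, then
    snap every pixel to the closest palette color. This picks whatever
    color the sample grid happens to land on, instead of the deliberate,
    readable shapes a pixel artist would draw."""
    small = img[::factor, ::factor]                 # grid-sample downscale
    flat = small.reshape(-1, 3).astype(float)
    pal = np.asarray(palette, dtype=float)
    # nearest palette color per pixel (squared Euclidean distance in RGB)
    idx = ((flat[:, None, :] - pal[None, :, :]) ** 2).sum(-1).argmin(1)
    return pal[idx].reshape(small.shape).astype(np.uint8)
```

Everything a pixel artist actually does — choosing which features to exaggerate, keeping silhouettes readable, dithering by hand — happens outside this loop, which is exactly the part Midjourney seems to be starting to approximate.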
 

infidel

StarInfidel
Developer
Joined
May 6, 2019
Messages
497
Strap Yourselves In
MidJourney has implemented a new "remix" mode, which is just what we need, bro! You basically run it with two links, image 1 and image 2, and it will try to merge them, adding copious amounts of other images, and the results are simply amazing. I just finished a new track, so I was in need of a hot girl image; specifically, I wanted Jeanette from VTMB. My first attempts were great visually but not at all what I was looking for:

Brianna Brown + Jeanette cosplay resulting in a "Hey, I'm a corpo-chick on Halloween and I'm cosplaying a vampire nurse, sooo crazyyyyy!"
YqITchQ.jpg



Brianna Brown + Vampirella, smokin' hot corpo chick:
YWQiluB.jpg



Vampirella + I forgot who, totally would though:
ukQCYOo.jpg



Dimitrescu cosplay + Brianna Brown? Solid crazy smile, getting closer. The hat gives the AI trouble. I actually noticed that hats and glasses had better exist in both images or it'll often bug out, like here:
e2EF18l.jpg



More of the same, was checking something out:
7R7ub3t.jpg



Vampirella + Dimitrescu, very good already, has some character:
AA3anSL.jpg



Zatanna from Smallville + drawn comics Zatanna. That's just insanely good, though obviously too close to use in any commercial capacity. In fact it's so good that I'm starting to have doubts about how it works. As in, how many actual images does it combine? As in, will any of the original authors be able to sue you? Note the different things on her chest. Firstly, there was never any cleavage on the originals (MJ is hell-bent on filtering any NSFW). And secondly, the ornaments:
pz0dDW6.jpg



Dimitrescu from screenshot mixed with cosplay Jeanette. The colors don't mix too well but the result is interesting:
bTRCcZs.jpg



BOOM! Jeanette cosplay + sorta-anime Jeanette! Excellent mix for what I need:
D3fqF6M.jpg



Another cosplay but the same anime style. That is just perfect (though it also looks like Harley Quinn, and there are obviously minor issues):
Dq5HNn5.jpg



That was made from this:
1668334234853.png
+
1668334264657.png



So that is the final result on the thumbnail:


MidJourney hates anything they deem NSFW so no boobs. Though sometimes you get lucky and they appear if you get someone with big clothed boobs and mix with some amount of skin visible but not enough to trigger their retarded filter.

EDIT: Sorry, didn't realize codex attachments to private posts are gated.
 

Seethe

Cipher
Joined
Nov 22, 2015
Messages
994
we need a 'shitty artist gets mad at stablediffusion' megathread
Shitty artists, yet techno bugmen use their art in their plagiarism software to shit out unappealing soulless garbage. Make up your mind, misanthropic soulless husks. It's actually insane how a good portion of the world swirls around the people who actually create visual art like vultures around a carcass, ripping it apart piece by piece.

The ramifications of this AI shit and its negative impact on society are also far more demented than mere "job replacements".

How is it possible that you need fewer people to program a game engine that runs on multiple platforms and renders 100 frames per second than you need to populate that game with grass textures and NPCs?
because artists are lazy and do chimp work which is why they get paid 1/5th as much as someone writing the engine

And yet no one would even look at those things without artists. Many, many entertainment corporations/businesses would be nothing without artists' high-quality visuals, and yet they're swimming in profits. Hell, the makers of these A"""""I"""""s are standing on the shoulders of giants, many of them dead, like Ivan Shishkin or Gustave Doré, and they would be absolutely no one without the works they're ripping off.

While tech bugmen like you are probably envious of their creative skill, or you're merely a failed artist yourself feeling envy and seething whenever a proper artist gets complimented.
 

infidel

StarInfidel
Developer
Joined
May 6, 2019
Messages
497
Strap Yourselves In
Dayuuuum. I wish I had that a year earlier ("girl attacked by purple alien", in the style of junji ito):
3zCSAML.png


In other news, Jeffrey Combs photo from Re-Animator + "gentleman in glasses":
jFYxFQX.png


Same, plus the word "grim":
RKJDEyZ.png


Having fun with an old photo of my bud (black and white photos with shadows work best):
2C7LZIV.png
XKAxR3F.png

xWphbIt.png
lYOroDL.png

ucZP4wS.png

Qar3RB5.png

hzahHY7.png

FFS, we need a Lovecraftian blobber ASAP, guys.
 

J1M

Arcane
Joined
May 14, 2008
Messages
14,745
we need a 'shitty artist gets mad at stablediffusion' megathread
Shitty artists, yet techno bugmen use their art in their plagiarism software to shit out unappealing soulless garbage. Make up your mind, misanthropic soulless husks. It's actually insane how a good portion of the world swirls around the people who actually create visual art like vultures around a carcass, ripping it apart piece by piece.

The ramifications of this AI shit and its negative impact on society are also far more demented than mere "job replacements".

How is it possible that you need fewer people to program a game engine that runs on multiple platforms and renders 100 frames per second than you need to populate that game with grass textures and NPCs?
because artists are lazy and do chimp work which is why they get paid 1/5th as much as someone writing the engine

And yet no one would even look at those things without artists. Many, many entertainment corporations/businesses would be nothing without artists' high-quality visuals, and yet they're swimming in profits. Hell, the makers of these A"""""I"""""s are standing on the shoulders of giants, many of them dead, like Ivan Shishkin or Gustave Doré, and they would be absolutely no one without the works they're ripping off.

While tech bugmen like you are probably envious of their creative skill, or you're merely a failed artist yourself feeling envy and seething whenever a proper artist gets complimented.
If those artists didn't exist, this technology would be emulating photographs and the natural world. The fact that it can chew through some examples to emulate any style only makes it more impressive.
 
