
Bringing D&D/AD&D campaign settings to life with Stable Diffusion

Joined: Oct 1, 2018 · Messages: 2,323 · Location: Illinois
And the "Best female lich picture Cow's ever seen" award goes to... THIS!
THE GEM EYES! THE MASK! THE GRIMACING SKULL WITH THE TATTERS OF FLESH! THE WAIFU OF THE AGES!
:necro:

xsFAdru.png


Edit: Alright I fucked around more and this isn't as good but I can still dig it.

7uL4rl7.png


Edit 2: And more fucking. I think I've lost the plot.

vhyhaeY.png
 
Joined: Oct 1, 2018 · Messages: 2,323 · Location: Illinois
Been messing with a porn model; it takes more wrestling with the settings to reduce the tits and ass and get some fantasy out of it, but it's not too shabby. The first is me trying some Dark Sun and getting decent barbarian cheesecake that still desperately needs more Dark Sun-ness (looks like metal in there, god DAMN it computer), the other two are me fucking around with adventurers. Could ultimately be a losing proposition, but it seems promising enough to keep fucking around.

IINIF7g.png

ijeLK1c.png

r141482.png
 

orcinator
Liturgist · Joined: Jan 23, 2016 · Messages: 1,704 · Location: Republic of Kongou
Still too lazy to set up the downloadable AI programs, so I've been using Midjourney and burning through throwaway Discord accounts.
Getting "okay" results using screenshots of NPCs from an open world game and the greg rutsomethingski artstyle prompt for a large number of consistent-looking portraits. Wish I knew how to make them weirder-looking while still maintaining consistency, since so far I can only produce basic modern humans and not the weird Bioware-enhanced ones.
 
Joined: Oct 1, 2018 · Messages: 2,323 · Location: Illinois
Still too lazy to set up the downloadable AI programs, so I've been using Midjourney and burning through throwaway Discord accounts.
Getting "okay" results using screenshots of NPCs from an open world game and the greg rutsomethingski artstyle prompt for a large number of consistent-looking portraits. Wish I knew how to make them weirder-looking while still maintaining consistency, since so far I can only produce basic modern humans and not the weird Bioware-enhanced ones.
Haven't the foggiest idea of how to use Midjourney properly. The results you get from a prompt vary wildly from AI to AI (even with stable diffusion, where the majority of models are built on the same data and just change how the AI weights things behind the scenes, that alone makes what each one does with a prompt radically different), and I'm almost completely in the dark when it comes to paid AI like Midjourney and NovelAI. I will say that you might be hitting a weak point of MJ, because I believe the way they've got their shit set up it's less "creative", since they're sacrificing weird shit in favor of consistency. You can run into that with stable diffusion as well, especially with the more "refined" spinoff models. It's why, for example, the porn model I was trying up above is really fucking good at photorealistic bodies but also has a tendency to have tits out even if you're describing clothing, and is inordinately fond of having wizards holding wands and staves near their mouths.

Two quick tips, which may not actually be helpful in your case since I've never used Midjourney: try increasing emphasis on stuff you want, and absolutely make use of negative prompts. In a perfect world you hammer out some basics using the same seed, adjusting things until you find a look you like (rough sketch of that workflow after the link below), but I don't know if you even CAN do that with MJ, and since it's a paid service and you're using burner accounts for trials it would probably be a pain in the ass. Uh... Actually lemme look something up, I think I saw a place a while back that shared MJ prompts since I bumped into it while I was eyeballing an SD model trained on MJ output.

Bingo. Take a look at this and cannibalize how others are assembling prompts; that's the best way to start learning how to do shit yourself. Might just be that MJ isn't as good at doing fucky stuff, though.
https://prompthero.com/midjourney-prompts
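
And circling back to the same-seed tip: for anyone running this locally rather than through MJ, here's a rough sketch of that workflow scripted with the diffusers library instead of the webui. The model ID and prompt variants are just placeholders, and webui-style (word:1.3) emphasis isn't parsed by plain diffusers, so the variants below only change the wording.

import torch
from diffusers import StableDiffusionPipeline

# Load a base Stable Diffusion checkpoint (placeholder model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The negative prompt does a lot of the heavy lifting.
negative = "lowres, bad anatomy, text, watermark, blurry"

# Same seed every run, only the prompt wording changes, so any difference
# between the images comes from the prompt and not from fresh random noise.
# (In the webui you'd do the same thing by locking the Seed field and
# nudging (emphasis:1.2) weights between generations.)
variants = [
    "a barbarian warrior in a desert wasteland, fantasy art",
    "a barbarian warrior in a scorched desert wasteland, dark sun, fantasy art",
]
for i, prompt in enumerate(variants):
    generator = torch.Generator("cuda").manual_seed(123456789)
    image = pipe(
        prompt,
        negative_prompt=negative,
        num_inference_steps=30,
        guidance_scale=7,
        generator=generator,
    ).images[0]
    image.save(f"variant_{i}.png")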

Edit: Just struck me that I was talking shit about wizards but hadn't posted them, so I'll toss a few in, spoiler-tagged since I'm spamming this shit too much. They're actually relatively old, but I haven't been doing anything noteworthy lately anyway. Not particularly happy with any of them (except the last one, which I liked enough to give him a new face since the original was completely fucked), but the sorcerer wearing sorceress robes for some inexplicable reason is funny at least.

tuZ9egS.png

hv52dHF.png

eYVlERi.png

VUAtwpI.png
 

orcinator
Liturgist · Joined: Jan 23, 2016 · Messages: 1,704 · Location: Republic of Kongou
Bingo. Take a look at this and cannibalize how others are assembling prompts; that's the best way to start learning how to do shit yourself. Might just be that MJ isn't as good at doing fucky stuff, though.
https://prompthero.com/midjourney-prompts
That's certainly more reliable than the Discord, which has search function outages most of the time.
Though upon examination I can't find more examples of that Chrome Lords style, despite being able to get something similar myself.
 

Zed Duke of Banville
Dungeon Master · Patron · Joined: Oct 3, 2015 · Messages: 11,756
The Infinite Monster Manual

The latest version of the webui for Stable Diffusion includes functionality for Textual Inversion, which can be used to generate new concepts for Stable Diffusion from a set of images. I attempted to train it on 45 artworks of David A. Trampier, almost entirely from the AD&D Monster Manual, and then generated output with this Trampier embedding and the Stable Diffusion v1.5 checkpoint. The results are tantalizingly close to being good enough for use but not quite there, so if anyone has any suggestions for obtaining a better embedding from the webui's training function or is aware of good instructions for using another version of Textual Inversion, this information would be quite helpful.
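
For anyone who'd rather script this than use the webui, here's a rough sketch of generating with a trained embedding via the diffusers library. It assumes a recent diffusers release that can load webui-style .pt embeddings, and the model ID, filename, and trigger token are placeholders.

import torch
from diffusers import StableDiffusionPipeline

# Base SD 1.5 checkpoint (placeholder model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach the textual-inversion embedding; the checkpoint itself is unchanged,
# only the text side learns a new trigger token.
pipe.load_textual_inversion("trampier-embedding.pt", token="<trampier>")  # hypothetical file/token

image = pipe(
    "a black and white pen and ink illustration of a beholder, <trampier>, monster manual art",
    negative_prompt="blurry, text, watermark, signature",
    num_inference_steps=30,
    guidance_scale=7,
    generator=torch.Generator("cuda").manual_seed(1),
).images[0]
image.save("beholder_trampier.png")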

iNqcwpV.png
SF3PoPH.png

SCDGwa9.png
UKCzCZA.png

G8MeunT.png
RVJ5IG3.png

kueU9EQ.png
bVPjfA6.png

sdeOObX.png
wwFsVi8.png
ZxCbg8N.png

mjMQdah.png
Up2ef0c.png

6MgiXdw.png
wD7GrkD.png

detFbk1.png
81DHrbc.png
SYIIpbD.png

0fOwyyH.png
lwwbcPH.png

j0c6RQA.png
s2wXaTc.png
LsU7Tt2.png


  1. Dragon
  2. Orc
  3. Mind Flayer
  4. Cthulhu
  5. Tyrannosaurus Rex
  6. Werewolf
  7. Elf
  8. Dwarf
  9. Beholder
  10. Ogre
 
Joined: Oct 1, 2018 · Messages: 2,323 · Location: Illinois
The Infinite Monster Manual

The latest version of the webui for Stable Diffusion includes functionality for Textual Inversion, which can be used to generate new concepts for Stable Diffusion from a set of images. I attempted to train it on 45 artworks of David A. Trampier, almost entirely from the AD&D Monster Manual, and then generated output with this Trampier embedding and the Stable Diffusion v1.5 checkpoint. The results are tantalizingly close to being good enough for use but not quite there, so if anyone has any suggestions for obtaining a better embedding from the webui's training function or is aware of good instructions for using another version of Textual Inversion, this information would be quite helpful.
I would hazard a guess that your embed may already be good enough as it is, and the rest may come down to prompting and futzing with settings to squeeze the juice out of it. No idea what you did for a prompt on those, but I made these with the general settings below on a horror porn model with the normal old 840000 VAE. They aren't anything too special BUT they look close enough for government work, and slapping your embed on top could wring better results out of them. That could call for additional fuckery with the embed, however, since I added things like noise to the prompt because it gives a slightly scratchier texture, and added art nouveau to the negative since it was inexplicably art-nouveauing up my shit. Noise may be unnecessary with the embed, since it's clearly pulling enough of his style in to get the idea, so extra embellishing may go overboard. Steps and CFG can also be really persnickety with embeds, so setting yourself a specific seed and repeatedly adjusting settings and changing the prompt to work toward your goal is a good way to figure out the sweet spot.

As for your actual question of getting the best results out of making an embedding yourself, I'm not much good there. I made one a few months ago as a test, before they put it in the webui, by running it through a Colab notebook, but that's the disadvantage of my crusty old 970: that limited VRAM's not good for making embeds, and I can't be bothered to use Colab much. I keep thinking I need to buy one of those non-Ti 3060s, since it would be a big upgrade for gaming ANYWAY and would also give me 12 gigs of VRAM for more fucking around with this, but I've been dragging my heels since I've been doing 3+ hours of this a night for the past 6-7 months anyway, so it's not like I NEED it.

Prompt: masterpiece, best quality, an exquisitely detailed black and white pen sketch portrait of a skeleton wizard wearing an elaborate robe, thick line art, dungeons and dragons, (monster manual art), (david a trampier), (noise)

Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, extra fingers, mutated hands and fingers, 3d, (blurry), background, art nouveau

Steps: 30
Sampler: DPM++ 2M Karras
Width x height: 768x768
Highres fix: Yes, denoising strength 0.6
CFG scale: 7
Seed: 123456789 (the second image is obviously 123456790)
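
For reference, if anyone wants to reproduce roughly those settings outside the webui, a hedged diffusers sketch of the mapping looks like this. The model ID is a placeholder, webui-style (emphasis) parentheses are dropped since plain diffusers doesn't parse them, and highres fix has no one-liner equivalent here (see the note further down).

import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # placeholder; swap in your model
).to("cuda")

# "DPM++ 2M Karras" in the webui corresponds to multistep DPM-Solver++ with Karras sigmas.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="dpmsolver++", use_karras_sigmas=True
)

prompt = (
    "masterpiece, best quality, an exquisitely detailed black and white pen sketch "
    "portrait of a skeleton wizard wearing an elaborate robe, thick line art, "
    "dungeons and dragons, monster manual art, david a trampier, noise"
)
negative = (
    "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, "
    "fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, "
    "signature, watermark, username, extra fingers, mutated hands and fingers, 3d, "
    "blurry, background, art nouveau"
)

image = pipe(
    prompt,
    negative_prompt=negative,
    num_inference_steps=30,  # Steps: 30
    guidance_scale=7,        # CFG scale: 7
    width=768, height=768,   # going straight to 768 without a highres-fix pass may introduce artifacts
    generator=torch.Generator("cuda").manual_seed(123456789),  # Seed
).images[0]
image.save("skeleton_wizard.png")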

Ok3lAGj.png

wg9mRwW.png


Note: I haven't done a git pull in ages, so I'm on the old highres fix system; on the new one, the equivalent should be setting the resolution to 384x384 and then letting highres fix double it. You can also change pen sketch to pencil sketch for a softer, more detailed picture, but pen sketch is closer to the style you're shooting for. The important thing is trying it with your embed.

fadyU6q.png


i8kpxLs.png


NOT REALLY AN EDIT BUT I GOT OFF MY ASS AND LOADED IN SD1.5:
Alright, unsurprisingly vanilla SD1.5 behaves wildly differently, so I rejiggered shit to get a good launch point. Went for a plainer approach with vanilla, since theoretically it'll be picking up more detail and style from your embed. I'm actually not sure which direction would work better for the embed: a more detailed initial prompt you then slap the trigger word onto so it chunks things up, or a simpler prompt where the embed supplies more of the implied detail. Regardless, I'd say keep fucking with your embed, since the results you posted look like there might be enough in there for it to do what you want. ALSO bear in mind you can increase and decrease the emphasis of your embed like you can with anything else, e.g. (zedduke:1.2) or (zedduke:0.8); there's such a thing as overdoing an embed, so it's possible it may need a lighter than normal touch to give good results.

Prompt: a black and white pen sketch (portrait:1.2) of a skeleton wizard with glowing eyesockets wearing an elaborate robe, thick line art, (classic dungeons and dragons monster manual art), (david a trampier), (noise)

Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, extra fingers, mutated hands and fingers, 3d, (blurry), background, art nouveau

Steps: 15
Sampling method: DPM++ 2M Karras
Width x height: 768x768
Highres fix: Yes, denoising strength 0.6
CFG scale: 15
Seed: 123456789

FujmtVm.png

71CnTyD.png


I'd definitely keep fucking with your current embed for a while, just to make sure there isn't a certain angle of attack that makes it work well, because it's looking like it's got the gist of it; that's why I'm guessing you just need to figure out whether a certain manner of prompt makes it pop, or whether the emphasis of the trigger word needs increasing or decreasing if it's coming in like a ton of bricks, etc. Embeds are cool since you can get consistent results with them, but they can also be a big pain in the ass to figure out: how they behave on each individual model, how they behave with your prompts, your settings, etc.

And an actual edit this time: Oh yeah, another small general protip when it comes to highres fix: I'd recommend iterating on style without it enabled, just working at the standard 512x512 resolution, and then once you're getting results that look promising, kick in the highres fix and bump the resolution. It increases the time to generate the images fairly dramatically, but it DOES also tend to produce better detail and good results, so after you do it enough you start getting a sense of "I like how this is looking and I know it'll look better once I do a higher-res version".
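
For people scripting this outside the webui, highres fix boils down to "generate small, upscale, then run the upscale back through img2img at your denoising strength". Here's a rough two-pass approximation with diffusers; the model ID is a placeholder and this is a sketch of the idea, not the webui's exact implementation.

import torch
from PIL import Image
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

model_id = "runwayml/stable-diffusion-v1-5"  # placeholder; swap in your model
txt2img = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
# Reuse the already-loaded components so the model isn't loaded twice.
img2img = StableDiffusionImg2ImgPipeline(**txt2img.components)

prompt = "a black and white pen sketch portrait of a skeleton wizard, thick line art"
negative = "lowres, blurry, text, watermark"
seed = 123456789

# Pass 1: iterate cheaply at 512x512 until the composition looks right.
base = txt2img(
    prompt, negative_prompt=negative, width=512, height=512,
    num_inference_steps=30, guidance_scale=7,
    generator=torch.Generator("cuda").manual_seed(seed),
).images[0]

# Pass 2: upscale the keeper and re-denoise it at the higher resolution;
# strength ~0.6 mirrors the "denoising strength 0.6" setting above.
upscaled = base.resize((768, 768), Image.LANCZOS)
final = img2img(
    prompt, image=upscaled, strength=0.6,
    negative_prompt=negative, guidance_scale=7,
    generator=torch.Generator("cuda").manual_seed(seed),
).images[0]
final.save("skeleton_wizard_highres.png")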
 

Dexter
Arcane · Joined: Mar 31, 2011 · Messages: 15,655
The latest version of the webui for Stable Diffusion includes functionality for Textual Inversion, which can be used to generate new concepts for Stable Diffusion from a set of images. I attempted to train it on 45 artworks of David A. Trampier, almost entirely from the AD&D Monster Manual, and then generated output with this Trampier embedding and the Stable Diffusion v1.5 checkpoint. The results are tantalizingly close to being good enough for use but not quite there, so if anyone has any suggestions for obtaining a better embedding from the webui's training function or is aware of good instructions for using another version of Textual Inversion, this information would be quite helpful.
You might want to try Dreambooth instead of Textual Inversion for this kind of stuff:
vl01e5grs6ca1.png

https://www.youtube.com/watch?v=dVjMiJsuR5o
https://civitai.com/

To keep it simple, Textual Inversion doesn't add or change anything about the model itself; it works at the text-encoding stage by finding the vectors already in the model that come closest to your example images and binding them to a new keyword, effectively pointing that keyword at concepts the model already knows. As such it can sometimes work, but it isn't always optimal for what you're trying to do. Dreambooth continues training the original model, adding the concepts it learns from your example images to it, which is what you want if you need the results to be as close as possible to your examples or if you're training some new concept that isn't already present in the model data (like a face or a specific art style).
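
In practical terms, the difference looks roughly like this in diffusers (paths and filenames are placeholders, a sketch rather than gospel): a textual inversion embedding is a small file layered on top of an unchanged base checkpoint, while a Dreambooth run writes out an entire fine-tuned checkpoint that you load instead of the base model.

from diffusers import StableDiffusionPipeline

# Textual Inversion: the base model stays as-is, the embedding just teaches
# the text encoder a new token that points at concepts the model already has.
pipe_ti = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe_ti.load_textual_inversion("trampier-style.pt", token="<trampier>")  # hypothetical embedding file

# Dreambooth: training wrote out a whole new checkpoint with updated UNet
# (and often text encoder) weights, so you load that in place of the base.
pipe_db = StableDiffusionPipeline.from_pretrained("./dreambooth-trampier-model")  # hypothetical output dir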
 

Zed Duke of Banville
Dungeon Master · Patron · Joined: Oct 3, 2015 · Messages: 11,756
I had better luck using textual inversion with classic AD&D art of Illithids than with Beholders. Guess the artists:

OhF8tPA.png
mbpbMpx.png

vCbmRGW.png
P8vYNOX.png

l92thJR.png
mxGL6g1.png

UcXtqIv.png
GhGT2S4.png

sxEJ1I9.png
eKDXg5c.png

z0EEjbg.png
SH9NeLH.png

KCrLPq3.png
KjIkl8Z.png

mmNVvQt.png
GnKezCk.png


  1. Hieronymus Bosch
  2. El Greco
  3. Rembrandt van Rijn
  4. William Blake
  5. Caspar David Friedrich
  6. Arnold Böcklin
  7. Henri Rousseau
  8. David A. Trampier
 

Zed Duke of Banville
Dungeon Master · Patron · Joined: Oct 3, 2015 · Messages: 11,756
I generated a second embedding on the same set of David A. Trampier artwork, but assigned a larger number of vectors per token. This seems to produce somewhat better results generally, though still short of what I had hoped for.
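
For the curious, "number of vectors per token" is literally how many embedding rows the trigger word gets, and you can see it by poking at the .pt file the webui saves. This is a sketch assuming the webui's usual string_to_param layout; the filename is a placeholder.

import torch

# Inspect a webui textual-inversion embedding (placeholder filename).
emb = torch.load("trampier-v2.pt", map_location="cpu")

# The webui stores the learned vectors under "string_to_param"; each tensor is
# (num_vectors, embedding_dim), e.g. (8, 768) for 8 vectors on an SD 1.x model.
for name, tensor in emb["string_to_param"].items():
    print(name, tuple(tensor.shape))

# More vectors give the embedding more capacity to capture a style, but the
# trigger word then eats more of the prompt's 75-token budget.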

Dragon:
W4EXTLY.png
dnMdmsc.png


Orc:
FHtDBpx.png
pDBSrtl.png


Cthulhu:
fNd8yJO.png
qSc2jKE.png


Tyrannosaurus Rex (which oddly generated the best Trampier-style dragon thus far):
lSDaN9S.png
i5QNNMl.png


Werewolf:
LhhLwcQ.png
8oulsQH.png


Wolf:
tvVWTc0.png
irJkRU7.png


Elf:
FSYojvo.png
xOa0ncG.png


Dwarf:
RbiUkek.png
BbmK0zG.png


Ogre:
2DUZsp1.png
JONeIsE.png


Owlbear:
oOMfik3.png
OBDCdV5.png


PO0wOIy.png


Winged owlbear!
 

Zed Duke of Banville
Dungeon Master · Patron · Joined: Oct 3, 2015 · Messages: 11,756
Anyone managed to replicate Erol Otus yet?
Erol Otus is still alive, and I wouldn't use textual inversion (or any alternative method) to replicate the style of a living artist, or even that of Keith Parkinson, who sadly passed away in 2005 while in his 40s and whose website is still maintained on his behalf.

David A. Trampier famously abandoned his art career in 1988 (the last appearance of his Wormy comic was in Dragon Magazine #132 in April of that year) and died in 2014, so his style seemed fair game. :M
 

PapaPetro
Guest
David A. Trampier famously abandoned his art career in 1988 (the last appearance of his Wormy comic was in Dragon Magazine #132 in April of that year) and died in 2014, so his style seemed fair game.
Was looking him up not too long ago. He fell off the face of the earth and resurfaced as a taxi driver years later, found only because he turned up in a stray college newspaper story.
 
