
Can AI make pre-rendered backgrounds (e.g. for a mod of Deadfire, say?)

gurugeorge

Arcane
Patron
Joined
Aug 3, 2019
Messages
8,059
Location
London, UK
Strap Yourselves In
I've just been noodling around with Deadfire in turn-based mode and it occurred to me how wonderful it would be if you could mod a whole Deadfire campaign, or I suppose basically a POE quest or questline (using any part of the lore), with the help of AI.

The nice thing about Deadfire is that, graphics-wise, it pretty much carries on and improves the strictly isometric, jewel-like graphics vibe of the classic Infinity Engine games. If you could mod Deadfire in any flexible way, there could be a treasure trove of user-made content. But the big problem is the labour-intensiveness of the pre-rendered backgrounds method.

Given that AI usually does a nice job of rendering quasi-realistic art styles, would it be capable of taking some input (say, a drawing or painting of a scene by an artist) and turning that into a classic pre-rendered isometric level (given the ability to place items and spawn things, etc.)?

(On a secondary note, the basic gameplay for turn-based Deadfire is very enjoyable in actual play. But I wonder if you could improve some of the numbers, abilities and balance to be a bit meatier?)
 

deuxhero

Arcane
Joined
Jul 30, 2007
Messages
12,070
Location
Flowery Land
In theory, yes. The problems are:
1: It's a whole other layer to get the backgrounds to actually be a level. Beyond actually placing chests, building the navmesh, etc., you have to come up with a way to get the AI to generate an actual, playable map under the pretty pictures, rather than an unsolvable maze full of inaccessible rooms that exist for no reason (let alone one that's actually fun to explore or good for tactical combat) which merely looks like the target style. I think you could pull this off with image-to-image (maybe train on navmeshes + map pictures?) by making the skeleton of the actual level first and transforming it, but it wouldn't be an off-the-shelf thing.
2: Backgrounds that hold up to scrutiny are actually something AI struggles with, even more than hands. The widespread models already struggle to understand depth and will frequently produce things like a character in the foreground leaning on a table that's in the background. Again, something you can fix with lots of manual work via image-to-image, but don't expect a miracle machine.
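The "skeleton first" idea from point 1 can be illustrated in miniature: lay out rooms and corridors on a grid so the map is connected by construction, then flood-fill to confirm nothing is inaccessible before any image-to-image pass runs over it. A toy sketch (all names hypothetical; this is not any existing tool):

```python
import random
from collections import deque

def make_skeleton(w=40, h=30, n_rooms=6, seed=0):
    """Carve rectangular rooms on a grid, then connect each room to the
    previous one with an L-shaped corridor, so the map is reachable by
    construction. 0 = wall, 1 = floor."""
    rng = random.Random(seed)
    grid = [[0] * w for _ in range(h)]
    centers = []
    for _ in range(n_rooms):
        rw, rh = rng.randint(4, 7), rng.randint(3, 5)
        x, y = rng.randint(1, w - rw - 1), rng.randint(1, h - rh - 1)
        for j in range(y, y + rh):
            for i in range(x, x + rw):
                grid[j][i] = 1
        cx, cy = x + rw // 2, y + rh // 2
        if centers:  # L-shaped corridor back to the previous room's center
            px, py = centers[-1]
            for i in range(min(px, cx), max(px, cx) + 1):
                grid[py][i] = 1
            for j in range(min(py, cy), max(py, cy) + 1):
                grid[j][cx] = 1
        centers.append((cx, cy))
    return grid, centers

def all_reachable(grid, start):
    """BFS flood fill from one floor tile; True iff every floor tile
    can be walked to (i.e. no inaccessible rooms)."""
    h, w = len(grid), len(grid[0])
    seen, q = {start}, deque([start])
    while q:
        x, y = q.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] and (nx, ny) not in seen:
                seen.add((nx, ny))
                q.append((nx, ny))
    return len(seen) == sum(row.count(1) for row in grid)
```

The validated grid would then be rendered as a crude blockout image and fed to img2img for styling; the point is that playability is guaranteed before the AI ever touches it.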

I think you could, right now, rip the backgrounds from BG/BG2/IWD/IWD2/ToEE/PoE/PoE2/their expansions/etc., crop them into reasonably sized chunks (like each building as its own file), tag them, and train a LoRA on them (or something like it, such as a DoRA or LyCORIS). It would make pictures that look like the art style, but points 1 and 2 are going to need work. I'm fully interested in someone trying that and then doing the image-to-image approach to turn a basic sketch into a full background, but I doubt it would work as more than a novelty.
 
Developer
Joined
Oct 26, 2016
Messages
2,440
Basically no, for many reasons: the scale is off, the projection is off, the style is off, and there are lots of artifacts.

How I would use it if I had to:

Layer 1. I would set up an isometric grid in Krita.

Layer 2. I would add the generated image as a new layer.

Layer 3. I would sketch out what I wanted by hand using a drawing tablet.

Layer 4. I would grab something out of the generated image like a brick or a stone or a plank, pull it into a new layer. Fix up projection using reference grid.

You could assemble a set of "brushes" or "stamps" to work with. I have in the past used photographs or pictures to make brushes; these are things like a brick, stone or plank. I then assemble a more complex object from these sources, using layer 3 as a reference.

Layer 5. Apply local shadow.

Layer 6. Apply global shadow.

Probably less work to just use the original reference material if I am being honest.
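The projection fix in layer 4 can be sketched mechanically: map a flat, top-down texture patch onto a 2:1 isometric diamond so grabbed bricks and planks line up with the reference grid. A rough Pillow sketch (the function name and the 2:1 ratio are my assumptions, not anything from the post; Pillow's AFFINE transform takes the inverse, output-to-input, mapping):

```python
from PIL import Image

def to_isometric(patch, size=None):
    """Project a square top-down texture patch onto a 2:1 isometric diamond.
    Forward mapping for a patch of side s:
        x' = (u - v) + s
        y' = (u + v) / 2
    Pillow's Image.transform(AFFINE) wants the inverse coefficients
    (a, b, c, d, e, f) such that u = a*x' + b*y' + c, v = d*x' + e*y' + f."""
    s = patch.size[0] if size is None else size
    patch = patch.convert("RGBA").resize((s, s))
    # inverse: u = 0.5*x' + y' - s/2,  v = -0.5*x' + y' + s/2
    coeffs = (0.5, 1.0, -s / 2.0,
              -0.5, 1.0, s / 2.0)
    return patch.transform((2 * s, s), Image.AFFINE, coeffs,
                           resample=Image.BILINEAR)
```

Pixels outside the diamond come back fully transparent, so the result can be pasted straight onto a stamp layer over the reference grid.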
 

Twiglard

Poland Stronk
Patron
Staff Member
Joined
Aug 6, 2014
Messages
7,534
Location
Poland
Strap Yourselves In Codex Year of the Donut
You can generate areas, scenery objects and such using a particular projection, stitch them up and fix the seams. This is what I got for a completely different project using mostly the Dreamshaper 8 model. It works better with custom projections than SD 2.1.

[Attachment: _isometric_2__dirty_apartment_room__trash__wall_la_RA8HNAL3.webp]


{
  "prompt": "(isometric:2) dirty apartment room, trash, wall lamp and desk and wall cabinet and bookshelf, 300mm, blender 3d render, isometric projection, axonometric projection, sharp focus, 16-Bit",
  "negative_prompt": "blurry, grainy, badly drawn, bad composure, bad composition, missing image",
  "seed": 747920500,
  "use_stable_diffusion_model": "dreamshaper_8",
  "clip_skip": false,
  "use_controlnet_model": null,
  "control_alpha": null,
  "use_vae_model": "",
  "sampler_name": "euler_a",
  "width": 768,
  "height": 512,
  "num_inference_steps": 80,
  "guidance_scale": 3.5,
  "use_lora_model": null,
  "use_embeddings_model": null,
  "tiling": null,
  "use_face_correction": null,
  "use_upscale": null
}


[Attachment: _isometric_2__dirty_apartment_room__trash__wall_la_8INF4VM0.webp]

Prompt: (isometric:2) dirty apartment room, trash, wall lamp and desk and wall cabinet and bookshelf, 300mm, blender 3d render, global illumination, isometric projection, axonometric projection, Cold Color Palette, Global Illumination, Dynamic Lighting, Digital Art, Electric Colors
Negative Prompt: window
Seed: 7877484
Stable Diffusion model: dreamshaper_8
Clip Skip: False
ControlNet model: None
ControlNet Strength: None
VAE model:
Sampler: euler_a
Width: 768
Height: 512
Steps: 50
Guidance Scale: 7.5
Prompt Strength: 0.7
LoRA model: None
Embedding models: None
Seamless Tiling: None
Use Face Correction: None
Use Upscaling: None


[Attachment: isometric_small_dive_bar_with_counter_and_liquor_cabinet_and_round_sto_S9630453_St50_G7.5.jpeg]

[Attachment: small__isometric_2.0__dilapidated_basement_room__c_8J2H0KX0.webp]


Then there are keywords such as
  • 3d blender render
  • simple render
  • negative: global illumination, lamp
Actually most people use this specific model for fantasy stuff so you might have a better experience than I did. It's fairly easy to use with the EasyDiffusion software [1] [2].
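The "stitch them up and fix the seams" step can be approximated with a simple cross-fade: overlap two generated tiles and ramp between them so the join doesn't show as a hard edge. A minimal Pillow sketch (function name and overlap width are made up for illustration; real seams usually also want an inpainting pass):

```python
from PIL import Image

def stitch_horizontal(left, right, overlap=64):
    """Join two generated tiles side by side, cross-fading a vertical
    overlap strip with a linear alpha ramp to soften the seam."""
    w, h = left.size
    assert right.size[1] == h, "tiles must share a height"
    out = Image.new("RGB", (w + right.size[0] - overlap, h))
    out.paste(left.convert("RGB"), (0, 0))
    out.paste(right.convert("RGB"), (w - overlap, 0))
    # re-blend the overlap column by column: left fades out, right fades in
    lstrip = left.convert("RGB").crop((w - overlap, 0, w, h))
    rstrip = right.convert("RGB").crop((0, 0, overlap, h))
    for x in range(overlap):
        a = x / (overlap - 1)  # 0.0 at the left edge of the strip, 1.0 at the right
        col = Image.blend(lstrip.crop((x, 0, x + 1, h)),
                          rstrip.crop((x, 0, x + 1, h)), a)
        out.paste(col, (w - overlap + x, 0))
    return out
```

This only hides colour discontinuities; mismatched geometry across the seam (a wall that doesn't line up) still needs manual repainting or img2img over the joined strip.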
 

Developer
Joined
Oct 26, 2016
Messages
2,440
You can generate areas, scenery objects and such using a particular projection, stitch them up and fix the seams. [...]
You could fix up the couch and table and TV and some other small assets. It seems to mostly be generating these kinds of rooms. But I think this will not work well at all if you are going for those BG type of assets.
 

Twiglard

Poland Stronk
Patron
Staff Member
Joined
Aug 6, 2014
Messages
7,534
Location
Poland
Strap Yourselves In Codex Year of the Donut
You could fix up the couch and table and TV and some other small assets. It seems to mostly be generating these kinds of rooms. But I think this will not work well at all if you are going for those BG type of assets.
These are the results I actually needed for my own specific project. Try the model for something more suitable and judge for yourself.

I don't have my old SD images of isometric evening downtown streets but you can try that too.
 
Developer
Joined
Oct 26, 2016
Messages
2,440
You could fix up the couch and table and TV and some other small assets. It seems to mostly be generating these kinds of rooms. But I think this will not work well at all if you are going for those BG type of assets.
These are the results I actually needed for my own specific project. Try the model for something more suitable and judge for yourself.

I don't have my old SD images of isometric evening downtown streets but you can try that too.
I was referring to what I have seen. It's particularly bad at medieval isometric assets. Maybe it's just a lack of training data or whatever, but that's how it is.
 
