Still trying to figure out what the "best" combination of settings is, but I've gotten satisfactory results so far.
This has been my experience as well. If you give it a very simple prompt and let it do more or less what it wants, it can give you stunning results: reasonably real-looking people, etc. But as soon as you need to wrangle it into creating something more specific or complicated, with more distinct elements in the image, it becomes a struggle. For example, I tried creating a "photo" of a building with a large white billboard with text on it, so the scene usually included part of a street with a few cars, streetlights, and various other city things. It could not produce anything that didn't immediately look incredibly AI-like, with recognizable shapes that fall completely apart on closer inspection.[...]
I'll give it another shot, but I think it's better at illustrating "regular" fantasy/medieval scenes than unusual ones.
As for game assets, I don't think we are there yet at all, even though some have used it to generate tilesets.
This is basically the thought process behind population reduction via vaccines or whatever. We have a growing mass of mostly incompetent humans who were only ever capable of doing things like moving boxes and flipping burgers, and they're going to be out of anything to do in their lives, forever.
And I thought Perkel was a tech illiterate.

In comparison to generic mobile game art, I think anything created with DALL-E (or, even better, Stable Diffusion) is exceptionally good.. but again, it's just the start.. it's not like the first 3D models were amazing either :p This will be bigger than even 3D engines, I believe, because it's just so easy to use. Maybe we'll see new 2D engines with graphics we didn't think were possible in 2D. There's so much happening with it right now that it's unlikely to slow down any time soon. Every month is a big step forward.
For anything AI, this is a good channel to follow: https://www.youtube.com/c/KárolyZsolnai/videos
The closest thing to tile generation I've seen is this. It isn't isometric, but it serves as a PoC, even if it's a little dated. Given enough time, the software may mature to the point where you can actually get the results you want with relative ease. For now I'll screw around with the image-to-image processor and see if I can force some isometric perspectives.
TBF a couple of the portraits look OK, but there have always been programs that generate faces. So I'm not won over.
Some of the forest scenes look OK, where you're staring at leaves or noise. They could possibly be used here as loading screens/backdrops.
Everything else requires a lot of work to correct.
The isometric stuff you could maybe chop up for tiles, but TBH that's a huge amount of work; it's not saving any time IMO.
Can you just generate a regular isometric tile? Say a wall piece, ground, stairs, etc.?
I think what's particular about this technology, and what has people scared and excited, is how close the prompt interface is to actually working with a concept artist. It's just missing the ability to segment and iterate on details like "create a light source here and make her eyes green".

You can create anything with AI. You just use an init image, and you can have hundreds or thousands of variations of that image, in any style you can imagine, with any additions you want. Let it batch process for 10-15 minutes, come back, and go through the images, hitting delete on the crappy ones.. out of 100 images you're more than likely to have 5-10 great ones.

I could use my own art or someone else's; it will change it beyond recognition, depending on init image strength. Guiding the AI (instead of using noise, which is the default) is really the way to use this in its current state, IMO.. even something quick in MS Paint to guide the AI toward what you want is better than noise and hoping to get lucky.

This is (in its current state) all about mass producing, mass deleting, and luck. The only real skillset is taste: knowing what to keep and what to delete.
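If anyone wants to try that batch-and-curate loop, here's a minimal sketch using the Hugging Face diffusers library; the checkpoint, prompt, and file paths are my own placeholders, not anything from this thread:

```python
# Minimal sketch of the mass-produce / mass-delete loop described above,
# using Hugging Face diffusers. Checkpoint, prompt, and paths are placeholders.
import os
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint
    torch_dtype=torch.float16,
).to("cuda")

init = Image.open("init_sketch.png").convert("RGB").resize((512, 512))
os.makedirs("out", exist_ok=True)

# Generate 100 variations from one init image, then curate by hand:
# keep the 5-10 good ones and delete the rest.
for i in range(100):
    image = pipe(
        prompt="fantasy tavern interior, concept art, detailed",
        image=init,
        strength=0.65,  # how far each variation may drift from the init image
        guidance_scale=7.5,
        generator=torch.Generator("cuda").manual_seed(i),  # one seed per variation
    ).images[0]
    image.save(f"out/variation_{i:03d}.png")
```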
I've done plenty of commission work now and people have been happy. No one cares if you're not sold on AI lol.. it's obviously in its infancy. Every month is another leap forward.
Ironically, the same tech has detractors who have never worked with artists acting dismissive of it. I can only assume they think art directors communicate via mind melds or something.
That sounds great. My point was that the interface is different from working with a real artist for those details.

I know, I'm just saying it's even closer than you perhaps thought.
It's not missing that at all; that's how you guide the AI with the init image. Again, you can create anything, and you can direct the AI very, very easily. A crude MS Paint drawing of a human with green blobs where the eyes are will direct the AI to produce a human, and one with green eyes at that. For a light source, try drawing a sun with light from it falling onto the subject, or paint white onto the black crude human form; make sure to direct it via text too.

If you already have an image close to what you want, even better: just import it into Photoshop, cut and paste stuff (it does not need to look good at all), and guide the AI with that.
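For what it's worth, here's a rough sketch of that crude-guide idea with diffusers, sweeping init-image strength to control how much the model may drift from the guide; the checkpoint and file names are assumptions:

```python
# Rough sketch: img2img from a crude MS Paint guide, sweeping init-image
# strength. Checkpoint and file names are placeholders, not the thread's setup.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# A crude guide: human silhouette, green blobs for eyes, white where light hits.
guide = Image.open("mspaint_guide.png").convert("RGB").resize((512, 512))

# Reinforce the guide through the text prompt as well.
prompt = "portrait of a woman with green eyes, warm sunlight from the left"

for strength in (0.4, 0.6, 0.8):
    image = pipe(prompt=prompt, image=guide, strength=strength,
                 guidance_scale=7.5).images[0]
    image.save(f"guided_{strength}.png")  # lower strength stays closer to the guide
```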
How can you iterate, though? For instance, if you want to add something, like a tree, to an AI-generated image, do you just erase the part where you want the tree, hand-draw the tree yourself, and ask the AI to iterate on that?
Anyway, it is not that different from working with artists, as sketching things is also much faster than just using text to communicate.
img2img
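Roughly, that's the erase-and-regenerate idea from the question: mask the spot where the tree should go and let the model repaint just that region. A hedged sketch with diffusers' inpainting pipeline (the checkpoint and file names are placeholders):

```python
# Sketch: add a tree to an existing image by masking the target region and
# repainting it. Checkpoint and file names below are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # placeholder checkpoint
    torch_dtype=torch.float16,
).to("cuda")

base = Image.open("generated_scene.png").convert("RGB").resize((512, 512))
# White pixels in the mask mark the region to repaint (where the tree goes);
# black pixels are left untouched.
mask = Image.open("tree_mask.png").convert("L").resize((512, 512))

result = pipe(
    prompt="a large oak tree, matching the existing lighting",
    image=base,
    mask_image=mask,
).images[0]
result.save("scene_with_tree.png")
```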
How is it always cartoonists who get so irrationally angry? The pattern repeats: "oh, I made a small controversy; quick, let's use it for advertising". But so far it's never people who make actual art doing it, just mediocre cartoonists.
Bonus points for using the word "hellsite".
The hype behind AI is a psyop to acclimate the public to the idea of impersonal intelligence so that the """""""elite""""""" can do what they want and shift the blame for the consequences on AI's purportedly impartial, optimized calculations when things get ugly.
Apparently the result of: "fallout 5 tarkov stalker 2, canon50 first person movie still, ray tracing, 4k octane render, hyperrealistic, extremely detailed, epic dramatic cinematic lighting;
width:768 height:448 steps:50 cfg_scale:10 sampler:k_euler_a"
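For reference, here's roughly how those settings would map onto a scripted pipeline like diffusers; "k_euler_a" corresponds to the Euler ancestral sampler, and the checkpoint name is an assumption:

```python
# Sketch: the quoted settings expressed as a diffusers text-to-image call.
# The checkpoint is a placeholder; only the numbers come from the post above.
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder checkpoint
    torch_dtype=torch.float16,
).to("cuda")
# sampler:k_euler_a -> Euler ancestral scheduler
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    prompt=("fallout 5 tarkov stalker 2, canon50 first person movie still, "
            "ray tracing, 4k octane render, hyperrealistic, extremely detailed, "
            "epic dramatic cinematic lighting"),
    width=768, height=448,    # width:768 height:448
    num_inference_steps=50,   # steps:50
    guidance_scale=10.0,      # cfg_scale:10
).images[0]
image.save("still.png")
```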
I was just thinking about the technology that is becoming obsolete. Game engines, 3D editors, 2D editors, IDEs, programming languages. Billions of dollars of R&D up in smoke.

I think you're way off on the temporal frame. Maybe two decades from now something like what you're talking about might exist, but so far I can only render 1 FPS at 512x512 on a 3080 Ti with 12GB VRAM under the best of circumstances.