Nazrim Eldrak
Scholar
Found this from 28 Sep 2022 (don't know if this has already been posted; I just have a fucking desire to share it):
> Can it generate animations for those 3d models?
Might be of interest: https://guytevet.github.io/mdm-page/
https://github.com/GuyTevet/motion-diffusion-model
Some other recent stuff: https://imagen.research.google/video/
These things have already gotten some simple implementations in the SD WebUIs over the past few days:
Making AI Variants: https://arxiv.org/abs/2208.01626 https://www.youtube.com/watch?v=XW_nO2NMH_g
https://energy-based-model.github.i...-Generation-with-Composable-Diffusion-Models/ https://arxiv.org/pdf/2206.01714.pdf
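The composable-diffusion paper linked above combines several conditioned noise predictions into one (the conjunction/AND rule). Here's a toy numpy sketch of that combination step; the arrays are my own stand-ins for a real denoiser's outputs, not code from the repo:

```python
import numpy as np

def compose_eps(eps_uncond, eps_conds, weights):
    """Conjunction rule from composable diffusion:
    eps = eps(x) + sum_i w_i * (eps(x|c_i) - eps(x))."""
    eps = eps_uncond.copy()
    for eps_c, w in zip(eps_conds, weights):
        eps += w * (eps_c - eps_uncond)
    return eps

# Toy stand-ins for a denoiser's outputs (a 4-"pixel" image).
eps_uncond = np.zeros(4)
eps_cond_a = np.array([1.0, 0.0, 0.0, 0.0])  # "condition A" direction
eps_cond_b = np.array([0.0, 1.0, 0.0, 0.0])  # "condition B" direction

combined = compose_eps(eps_uncond, [eps_cond_a, eps_cond_b], [7.5, 7.5])
print(combined)  # [7.5 7.5 0.  0. ] -- both conditions pull at once
```

With a single condition this reduces to ordinary classifier-free guidance, which is why it slots easily into existing samplers.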
> Wow, daily optimizations that cut memory usage in half! Someone must be replacing Python code with C.
Might propagate down to stuff people use soon: https://ai.facebook.com/blog/gpu-inference-engine-nvidia-amd-open-source/
Also Textual Inversion is down to 6GB VRAM and Dreambooth to 8GB VRAM now (unfortunately the second doesn't work on Windows yet), although with a higher normal RAM cost (24GB): https://github.com/Ttl/diffusers/tree/dreambooth_deepspeed

From the Meta AI blog post linked above:
> To address these industry challenges, Meta AI has developed and is open-sourcing AITemplate (AIT), a unified inference system with separate acceleration back ends for both AMD and NVIDIA GPU hardware. It delivers close to hardware-native Tensor Core (NVIDIA GPU) and Matrix Core (AMD GPU) performance on a variety of widely used AI models such as convolutional neural networks, transformers, and diffusers. With AIT, it is now possible to run performant inference on hardware from both GPU providers. We've used AIT to achieve performance improvements up to 12x on NVIDIA GPUs and 4x on AMD GPUs compared with eager mode within PyTorch.
> AITemplate is a Python framework that transforms AI models into high-performance C++ GPU template code for accelerating inference. Our system is designed for speed and simplicity. There are two layers in AITemplate — a front-end layer, where we perform various graph transformations to optimize the graph, and a back-end layer, where we generate C++ kernel templates for the GPU target. In addition, AIT maintains a minimal dependency on external libraries. For example, the generated runtime library for inference is self-contained and only requires CUDA/ROCm runtime environments. (CUDA, NVIDIA's Compute Unified Device Architecture, allows AI software to run efficiently on NVIDIA GPUs. ROCm is an open source software platform that does the same for AMD's GPUs.)
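The front-end/back-end split described there can be made concrete with a toy sketch. To be clear: this is entirely my own illustration with made-up names and hard-coded placeholder constants, not AITemplate's actual API. A tiny graph pass fuses runs of adjacent elementwise ops, and a "back end" fills a C++/CUDA-style kernel string template for the fused op:

```python
# Toy sketch of the two-layer idea above -- NOT AITemplate's real API.
# Front end: fuse adjacent elementwise ops in a tiny op graph.
# Back end: emit C++-style kernel source from a string template.

FUSABLE = {"add", "mul", "relu"}

def fuse_elementwise(graph):
    """Merge runs of adjacent elementwise ops into single fused nodes."""
    fused, run = [], []
    for op in graph:
        if op in FUSABLE:
            run.append(op)
        else:
            if run:
                fused.append("fused_" + "_".join(run))
                run = []
            fused.append(op)
    if run:
        fused.append("fused_" + "_".join(run))
    return fused

CPP_TEMPLATE = """__global__ void {name}(const float* x, float* y, int n) {{
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) {{ y[i] = {expr}; }}
}}"""

def emit_kernel(fused_name):
    """Generate source for a fused elementwise kernel.

    Op parameters (the 1.0f / 2.0f constants) are hard-coded stand-ins;
    a real system would thread the actual operands through."""
    expr = "x[i]"
    for op in fused_name.removeprefix("fused_").split("_"):
        expr = {"add": f"({expr} + 1.0f)",
                "mul": f"({expr} * 2.0f)",
                "relu": f"fmaxf({expr}, 0.0f)"}[op]
    return CPP_TEMPLATE.format(name=fused_name, expr=expr)

graph = ["matmul", "add", "relu", "matmul", "mul"]
optimized = fuse_elementwise(graph)
print(optimized)   # ['matmul', 'fused_add_relu', 'matmul', 'fused_mul']
print(emit_kernel("fused_add_relu"))
```

The payoff of this codegen style is the one the quote claims: a fused kernel touches memory once instead of once per op, and the generated source has no framework dependency beyond the GPU runtime.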
You can now also install and try the first implementation of this: https://github.com/ashawkey/stable-dreamfusion
It seems rough so far:
you forgot to include (((big boobs))) in your prompt
This seems to have turned into an adult game, I guess using NAI: https://f95zone.to/threads/locus-tenebrarum-demo-rue.134235/
> Entertaining news from the world of open-source AI!
I posted a bit more about that recent Drama here: https://rpgcodex.net/forums/threads...ts-tuning-and-other-stuff.144920/post-8184729
> What's slightly more interesting is the v1.5 model trained for inpainting (and outpainting), also released by RunwayML a couple days earlier. That one actually brings improvements and apparently pushes outpainting quality closer to Dall-E.
I haven't tried it myself (I don't use those features much so far anyway), but I saw various people proclaim that it works wonders now and that they got much better results.
> I posted a bit more about that recent Drama here: https://rpgcodex.net/forums/threads...ts-tuning-and-other-stuff.144920/post-8184729
I'm not allowed into those parts yet, so I guess I may be reposting shit when talking about things that are not immediately game-related.
> …use it as a decent starting composition.
Always was going to end up the main use of this.
> Apparently the new version of Midjourney is slowly starting to understand pixel art. None of the results are usable on their own, but I wonder if a person who learned the basic pixel art techniques but simply isn't artistically inclined, doesn't really understand color etc., could sort of trace over it or just use it as a decent starting composition.
Not being an artist, I am curious: What aspect of pixel art has midjourney picked up on? I realize that there is more to good pixel art than just scaling everything down to 320x200 and then back up again; it probably has its own best practices of composition, contrasts, geometry, projection (which is why we've had such a slew of bad pixel art games over the past few years). What is the secret that midjourney has learnt? If you say, "blocky pixels", I'm going to be disappointed.
Since I don't use Midjourney I'm only speculating based on those few images, but it seems to me that it's creating a good basis for a limited palette, and it creates a game-like perspective (the scene has perspective, but the objects in the back seem to be flat and front-facing even if they're viewed from an angle). And yes, part of it is also blocky pixels: usually if you just scale an image down it doesn't work as well; with low-res images you're kind of creating a symbol of the thing, where certain features are exaggerated or suppressed so that the result looks clear. Which I think these images show to a degree.
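For reference, the naive "scale it down and back up" baseline the question dismisses can be sketched in a few lines of dependency-free Python (all helper names here are my own, operating on a grid of RGB tuples). Real pixel art layers deliberate shape and palette decisions on top of this, which is exactly what plain scaling misses:

```python
from collections import Counter

def downscale_nearest(img, factor):
    """Nearest-neighbour downscale of a grid of (r, g, b) tuples."""
    return [row[::factor] for row in img[::factor]]

def upscale_nearest(img, factor):
    """Blow each pixel back up into a blocky factor x factor square."""
    out = []
    for row in img:
        wide = [px for px in row for _ in range(factor)]
        out.extend(list(wide) for _ in range(factor))
    return out

def limit_palette(img, n_colors):
    """Snap every pixel to one of the n most common colours."""
    counts = Counter(px for row in img for px in row)
    palette = [c for c, _ in counts.most_common(n_colors)]
    def nearest(px):
        return min(palette,
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(px, c)))
    return [[nearest(px) for px in row] for row in img]

# A toy 4x4 "photo": mostly red, with an off-red and a blue pixel.
img = [[(250, 0, 0)] * 4 for _ in range(4)]
img[0][0], img[3][3] = (255, 10, 10), (0, 0, 255)

small = downscale_nearest(img, 2)                 # 2x2
blocky = upscale_nearest(limit_palette(small, 2), 2)
print(len(blocky), len(blocky[0]))                # 4 4 -- chunky again
```

Note how the blue pixel simply vanishes: nearest-neighbour sampling and frequency-based palettes throw away rare-but-important details, whereas a pixel artist would exaggerate them into a readable symbol.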
> we need a 'shitty artist gets mad at stablediffusion' megathread
Shitty artists, yet techno bugmen use their art in their plagiarism software to shit out unappealing soulless garbage. Make up your mind, misanthropic soulless husks. It's actually insane how a good portion of the world swirls around the people who actually create visual art like vultures around a carcass, ripping it apart piece by piece.
> How is it possible that you need less people to program a game engine that runs on multiple platforms and renders 100 frames per second than you need to populate that game with grass textures and NPCs?
because artists are lazy and do chimp work which is why they get paid 1/5th as much as someone writing the engine
> Shitty artists, yet techno bugmen use their art in their plagiarism software. Make up your mind, misanthropic soulless husks.
Learn to dig coal.
> Learn to dig coal.
Oh I'm sure you enjoy digging coal.
> Shitty artists, yet techno bugmen use their art in their plagiarism software to shit out unappealing soulless garbage. Make up your mind, misanthropic soulless husks. It's actually insane how a good portion of the world swirls around the people who actually create visual art like vultures around a carcass, ripping it apart piece by piece.
If those artists didn't exist, this technology would be emulating photographs and the natural world. The fact that it can chew through some examples to emulate any style only makes it more impressive.
The ramifications of this AI shit and its negative impact on society are also far more demented than mere "job replacements".
> because artists are lazy and do chimp work which is why they get paid 1/5th as much as someone writing the engine
And yet no one would even look at those things without artists. Many, many entertainment corporations/businesses would be nothing without the high-quality visuals of artists, and yet they're swimming in profits. Hell, the makers of these A"""""I"""""s are standing on the shoulders of giants, many of them dead, like Ivan Shishkin or Gustave Doré, and they would be absolutely no one without the works they're ripping off.
Meanwhile, tech bugmen like you are probably envious of their creative skill, or you're merely a failed artist yourself, seething with envy whenever a proper artist gets complimented.