How do you use Daz3D products and AI together? My two cents
How have you been combining Daz3D products and AI together?
Various forms of AI are being incorporated into digital art tools. This is true with or without Daz3D, and with or without Midjourney, DALL-E, etc.
(1) Postwork processing? Digital image editors are incorporating AI in the background for generative fills. So no matter what Daz3D does, many 2D renders of its assets will be postworked with some form of AI by some users, perhaps most. Filter Forge is essentially an AI image processor.
(2) Pre-work? Does anyone use Daz3D products to 'block out' a scene? An example would be rendering a base scene and then using ControlNet to detect the basic lines, or the depth, or the... for the composition of the desired final output. (See the sketch after this list.)
(3) Backgrounds and Shadow Catchers? Some people may use an AI processor to generate a background farm or office or ship or..., and then render their characters using Iray, and then integrate the two in an image editor.
(4) Iterative? One could use AI to block out a basic scene, use that as a backdrop for Daz3D character renders, which are then sent back to AI, and then reloaded in Daz Studio, then...
(5) Other?
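For anyone curious what (2) can look like in practice, here is a minimal sketch using Hugging Face diffusers. The filename, prompt, and model IDs are my own placeholder assumptions, not anything Daz ships:

```python
# Minimal sketch: use a Daz Studio render as a ControlNet "blockout".
# Assumes torch, diffusers, opencv-python, and pillow are installed;
# "blockout_render.png" is a hypothetical render of the base scene.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Detect edges in the 3D render; ControlNet will follow these lines.
render = cv2.imread("blockout_render.png")
edges = cv2.Canny(render, 100, 200)
edges = Image.fromarray(np.stack([edges] * 3, axis=-1))  # 1 -> 3 channels

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The prompt restyles the scene while the detected edges keep the
# composition of the original blockout.
image = pipe(
    "a fishing village at dusk, cinematic lighting",
    image=edges,
    num_inference_steps=30,
).images[0]
image.save("final_composition.png")
```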
I am glad to see Daz3D make their first stab at incorporating their large library of assets into AI. I call this a 'first stab' because I am not confident this is the way to go. It seems to me they will be giving themselves an incentive to shut off their library from use with the other big AI programs. Adobe, OpenAI, etc. ain't Smith Micro / Poser. The Daz library of assets, as impressive as it is, is no match for Adobe's stock photo collection.
--- Another workflow is to start with low-res objects, not highly detailed realistic ones. For this, I used my own low-res custom female figure, two of my own simple terrains, and my own low-res windmill. I used Stable Diffusion to go from my low-res scene to the output, relying on its ControlNet extension to detect edges and depth from my 3D renders. So I know I can do anything without the Genesis figures and their associated content. But being able to do anything without the library of assets is not the same as wanting to do without it. So I am going to be curious to see where this all leads. Will the Daz content library be the front end of art projects, used to block them out? Or the database used to go from low res to high res? Or the back end, used to add fine detail? All of these? Something else?
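One way to reproduce that low-res-to-output step is diffusers with a depth ControlNet. This is only a sketch under my own assumptions (filenames, prompt, and model IDs are placeholders), not the exact setup described above:

```python
# Minimal sketch of the low-res render -> Stable Diffusion step.
# Assumes torch, diffusers, and controlnet_aux; "lowres_scene.png" is a
# hypothetical render of the low-res figure, terrain, and windmill.
import torch
from PIL import Image
from controlnet_aux import MidasDetector
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline

scene = Image.open("lowres_scene.png").convert("RGB")

# Estimate a depth map from the render so the layout survives restyling.
midas = MidasDetector.from_pretrained("lllyasviel/Annotators")
depth = midas(scene)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# strength controls how far SD may drift from the low-res input.
result = pipe(
    "a woman near an old windmill on open terrain, photorealistic",
    image=scene,
    control_image=depth,
    strength=0.7,
    num_inference_steps=30,
).images[0]
result.save("detailed_output.png")
```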
So, how do YOU use Daz3D products and AI together?
Comments
My First DAZ AI prompt output
My prompt was "A man rowing a boat on an ocean under a stormy sky"
The result was: (image attached)
I cover some of these workflows in my tutorial at the other store. There are many opportunities for using Daz together with Stable Diffusion and other AI tools. Some other things:
Great info. Thanks.
I usually treat whatever random AI image I get (I have used a lite version of Stable Diffusion, then the DALL-E that Copilot uses, and Craiyon) as a "base", then render a DAZ human character that looks similar to the one in the AI image, and manually combine them with GIMP.
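For anyone who wants to script that combine step instead of working manually in GIMP, here is a minimal Pillow sketch. The filenames, scale, and placement are placeholder assumptions, and it presumes the DAZ character was rendered against transparency:

```python
# Minimal sketch of the "combine them" step, using Pillow instead of GIMP.
# "ai_base.png" and "daz_character.png" are hypothetical filenames; the
# character render must be a PNG with an alpha channel.
from PIL import Image

base = Image.open("ai_base.png").convert("RGBA")
character = Image.open("daz_character.png").convert("RGBA")

# Scale the character down, preserving aspect ratio.
scale = 0.5
character = character.resize(
    (int(character.width * scale), int(character.height * scale))
)

# Paste using the character's own alpha channel as the mask, so only
# the figure (not its transparent background) lands on the AI image.
position = (base.width // 3, base.height // 3)  # rough placement
base.paste(character, position, mask=character)

base.convert("RGB").save("combined.png")
```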
In all, it's just been entertaining photomanipulation practice/doodles for me. If I wanted to create a more "heartfelt" illustration, I'd probably draw it by hand (with DAZ serving as a pose reference). I could possibly base it on AI, but there's no fun in handing the brainstorming over to AI.
I agree with others here and elsewhere that this should be a feature within DS itself, for instance for backgrounds, changing the lighting, postprocessing the render, etc. That would be very useful! The way it is now, it seems like a nice, fun thing to play with, especially since DAZ say the images were sourced ethically. That's a good thing for sure. But how do I get it to include what's in my library?
I read somewhere that the model Daz trained was trained on top of SDXL. So, if this is true, the base model is the same as every other SDXL model, trained with images from the wild. I stress that I don't agree with the view that this is a copyright infringement in itself (we could have a long technical/philosophical discussion here and get nowhere, like everyone else; I'm only saying this to make it clear that this is not a problem FOR ME), but if they really have used SDXL as a base, there's no difference from any other SDXL-based model.
My hope is that in a few years we can use all the licensed 3D models as input for generative AI.
This would be a huge help in creating consistent characters from different angles.
###
For now I have been using Daz Studio to render out different types of images for img2img workflows with ComfyUI.
Some examples:
Face Swap
I made a template scene in Daz Studio that creates portrait renders of licensed Daz characters without hair.
Those renders are then used as source in Stable Diffusion to replace just the face of a character.
This works similarly to the Face Transfer option in Daz Studio.
For research purposes I created a workflow to generate and save two versions of each image.
One with a face generated by a Stable Diffusion checkpoint and one with a licensed Daz Character.
Workflow example:
Result with Joan 9 HD High Elf Add-On:
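Outside ComfyUI, one way to approximate this face-replace idea is diffusers inpainting with an IP-Adapter, feeding the bald Daz portrait in as the identity reference. This is only a sketch of the concept, not the node graph above; the filenames, hand-made face mask, and model IDs are all assumptions on my part:

```python
# Minimal sketch: regenerate only the face region of a generated image,
# steered toward a Daz portrait render via IP-Adapter.
# "generated_character.png", "face_mask.png" (white over the face, black
# elsewhere), and "daz_portrait_no_hair.png" are hypothetical files.
import torch
from PIL import Image
from diffusers import AutoPipelineForInpainting

target = Image.open("generated_character.png").convert("RGB").resize((512, 512))
face_mask = Image.open("face_mask.png").convert("RGB").resize((512, 512))
daz_face = Image.open("daz_portrait_no_hair.png").convert("RGB")

pipe = AutoPipelineForInpainting.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
)
pipe.set_ip_adapter_scale(0.7)  # how strongly the Daz face drives the result

# Only the white (masked) region is regenerated; the rest of the image,
# including hair and body, is left untouched.
result = pipe(
    prompt="portrait, detailed face",
    image=target,
    mask_image=face_mask,
    ip_adapter_image=daz_face,
).images[0]
result.save("face_swapped.png")
```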
Poses
I set up a scene with one or two Genesis 9 default characters in Daz Studio.
It seems sufficient to render the image in White Mode.
That pose render can be used as input for different ComfyUI ControlNet nodes.
Result with Xola 9 HD:
Canny is a way to create a line-art version when you want to capture more detail than the pose alone. (example attached)
OpenPose is an option if you are just interested in the pose and want Stable Diffusion to have more freedom. (example attached)
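For reference, here is roughly what the OpenPose variant looks like with diffusers instead of ComfyUI nodes. Filenames, prompt, and model IDs are placeholder assumptions:

```python
# Minimal sketch: extract an OpenPose skeleton from the white-mode Daz
# render, then let a pose ControlNet constrain generation.
# "g9_white_mode_render.png" is a hypothetical filename.
import torch
from PIL import Image
from controlnet_aux import OpenposeDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

pose_render = Image.open("g9_white_mode_render.png").convert("RGB")

# The detector turns the render into a stick-figure pose map.
openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose_map = openpose(pose_render)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Only the pose is constrained; everything else is left to the prompt.
image = pipe(
    "a dancer on a rooftop at sunset",
    image=pose_map,
    num_inference_steps=30,
).images[0]
image.save("posed_output.png")
```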
Clothing
I did some experiments with rendering out images of clothing from Daz Studio.
With IP-Adapter you can then use them as an img2img source.
Some of those workflows require the characters to face the viewer to work properly.
As soon as the head is turned to the side, the faces become distorted or the clothing takes on a noticeably different look.
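For completeness, the diffusers version of the IP-Adapter clothing idea looks roughly like this. Again a sketch under my own assumptions (filename, prompt, and adapter scale are placeholders):

```python
# Minimal sketch: feed a Daz clothing render in as an image prompt via
# IP-Adapter, so the generated character tends to wear similar clothing.
# "daz_outfit_render.png" is a hypothetical filename.
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline

clothing_ref = Image.open("daz_outfit_render.png").convert("RGB")

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin"
)
pipe.set_ip_adapter_scale(0.8)  # higher = stay closer to the clothing render

image = pipe(
    prompt="full body photo of a woman facing the camera",
    ip_adapter_image=clothing_ref,
    num_inference_steps=30,
).images[0]
image.save("clothed_character.png")
```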
###
Daz AI Studio could now provide an alternative option for those challenges.
Guess we will have to test what happens when we stack LoRAs for characters, hair, and clothing together once those features become available as inputs...
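If such LoRAs do appear, stacking them in diffusers would presumably look something like this. The three LoRA files named here are hypothetical stand-ins; nothing like them exists yet:

```python
# Minimal sketch of stacking several LoRAs on an SDXL pipeline.
# The three .safetensors files are hypothetical; requires peft installed.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Each LoRA is registered under its own adapter name...
pipe.load_lora_weights("character_lora.safetensors", adapter_name="character")
pipe.load_lora_weights("hair_lora.safetensors", adapter_name="hair")
pipe.load_lora_weights("clothing_lora.safetensors", adapter_name="clothing")

# ...then activated together, each with its own strength.
pipe.set_adapters(
    ["character", "hair", "clothing"], adapter_weights=[1.0, 0.8, 0.8]
)

image = pipe("full body profile shot of the character").images[0]
image.save("stacked_loras.png")
```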
Profile shot generated in Daz AI Studio:
Some additional examples are attached:
This is the most interesting thing I've read today so far :-)
My only hope is that, FWIW, if they ever bring AI features into Studio itself, I hope they'll have a universal off switch. Adobe doesn't and it's one of the things that sucks about using Photoshop these days. Those of us who don't want to use the features shouldn't have to dance around them. Just let us turn them off altogether.
Agree, but it should be opt-in, not opt-out. I don't want to tell the waiter all the food I don't want; I'd prefer to just say what I do want. Takes less time.
Here is my Gen 8.1 Flight Attendant rendered in DAZ Iray (top) and the same rendered image brought into Leonardo AI Img to Img with a simple prompt (below).
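For anyone who prefers open tooling over Leonardo's web UI, the same render-to-img2img pass looks roughly like this with diffusers. The filename, prompt, and strength value are my own placeholder assumptions:

```python
# Minimal sketch: run an Iray render through Stable Diffusion img2img.
# "flight_attendant_iray.png" is a hypothetical filename.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

iray_render = Image.open("flight_attendant_iray.png").convert("RGB")

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A moderate strength keeps the render's pose and outfit; the prompt
# mostly adjusts style and lighting.
image = pipe(
    prompt="photo of a flight attendant, soft studio lighting",
    image=iray_render,
    strength=0.45,
).images[0]
image.save("flight_attendant_ai.png")
```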
Very nice. Love the sky. Wondering why he is rowing away from the ship and tossing his oar away. Is Daz AI suicidal? Or maybe homicidal? Do they still use the Three Laws of Robotics?