Remixing your art with AI


Comments

  • Artini Posts: 9,458

    And yet another conversion from SD

  • Artini Posts: 9,458
    edited June 2023

    When fingers appeared in the images, I could not get very good results, and it also took a significant number of attempts to get something nice.

    My conclusion is that one needs to practice a lot, even if it is just a text-to-image prompt.

    Post edited by Artini on
  • RenderPretender Posts: 1,041
    edited June 2023

    Artini said:

    When fingers appeared in the images, I could not get very good results, and it also took a significant number of attempts to get something nice.

    My conclusion is that one needs to practice a lot, even if it is just a text-to-image prompt.

    I like what you're doing, using your rendered poses in ControlNet. Sadly, I don't have the VRAM to use ControlNet, so I've had to get creative with IMG2IMG as in my above posts, basically producing a hybrid of my original render and the AI's interpretation of it, followed by some post-processing. My goal is to break further past the barrier between 3D rendering and photographic realism. Great work with ControlNet!
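    The "hybrid of my original render and the AI's interpretation" step can be pictured as a per-pixel blend between the two images. This is only an illustrative sketch in plain Python (not RenderPretender's actual post-processing), and the `alpha` weight is an assumed parameter:

    ```python
    def blend_pixel(render_px, ai_px, alpha):
        """Linearly blend two RGB pixels: alpha=0 keeps the render,
        alpha=1 keeps the AI output."""
        return tuple(
            int(round((1 - alpha) * r + alpha * a))
            for r, a in zip(render_px, ai_px)
        )

    def blend_image(render, ai_image, alpha=0.5):
        """Blend two images given as nested lists of RGB tuples."""
        return [
            [blend_pixel(rp, ap, alpha) for rp, ap in zip(rrow, arow)]
            for rrow, arow in zip(render, ai_image)
        ]
    ```

    In practice the same merge is a single `Image.blend(render, ai_out, alpha)` call in Pillow, optionally restricted by a mask so that only selected regions (a face, skin, hair) lean toward the AI pass.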

    Post edited by RenderPretender on
  • Artini Posts: 9,458

    Thanks a lot, @RenderPretender.

    My graphics card has 8 GB of VRAM, so I also need to experiment with different settings to get SD to work.

    Your images are great, and I like the post-processing on them.
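    For intuition about why SD can run in 8 GB at all: Stable Diffusion 1.x denoises in a latent space downsampled 8× from pixel space, with 4 channels, so the working image tensor is small; most VRAM goes to the model weights and attention. A sketch of the latent-shape arithmetic (SD 1.x conventions; the function names are mine):

    ```python
    def latent_shape(width, height, channels=4, factor=8):
        """Shape (C, H, W) of the SD 1.x latent tensor for a given
        output resolution; sizes must be multiples of the VAE factor."""
        assert width % factor == 0 and height % factor == 0
        return (channels, height // factor, width // factor)

    def latent_elements(width, height):
        """Number of values in the latent tensor."""
        c, h, w = latent_shape(width, height)
        return c * h * w
    ```

    A 512×512 image is denoised as a 4×64×64 latent, which is why resolution and batch size, rather than the finished image itself, dominate VRAM use.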

     

  • wolf359 Posts: 3,828
    edited June 2023

    @Cris Palomino
    Actually, I am using the 360 skyboxes more as fake world environments, much like a game engine does.
    I am mostly doing NPR/toon styles these days,
    so true HDR lighting is not really needed.

     

    Post edited by wolf359 on
  • RenderPretender Posts: 1,041

    Artini said:

    Thanks a lot, @RenderPretender.

    My graphics card has 8 GB of VRAM, so I also need to experiment with different settings to get SD to work.

    Your images are great, and I like the post-processing on them.

     

     Thank you. I'm working on another set of images right now, with the goal of honing a technique for getting a character's face to be reliably and consistently recognizable across a series of images.

  • WendyLuvsCatz Posts: 38,214
    edited June 2023

    Any 360 panoramic image works as IBL in Carrara, Octane Render, Poser, and presumably 3Delight's UberEnvironment; it's just Iray and Filament that seem to have issues with them.

    Probably an unbiased-renderer thing.

    I definitely get shadows and coloured lighting from the Blockade Labs skydomes in Octane

    (well, I did; I am without that PC right now).
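    For context on why any 360 panorama can act as IBL: the renderer simply maps each pixel's (u, v) position in the equirectangular image to a direction on the sphere and treats that pixel's colour as light arriving from that direction. A sketch of the lookup (axis conventions vary by renderer; this assumes +Y is up and v = 0 is the zenith):

    ```python
    import math

    def equirect_to_direction(u, v):
        """Map equirectangular coordinates (u, v in [0, 1]) to a unit
        direction vector on the sphere."""
        phi = u * 2.0 * math.pi    # longitude: full turn across the width
        theta = v * math.pi        # colatitude: pole to pole down the height
        return (
            math.sin(theta) * math.cos(phi),
            math.cos(theta),
            math.sin(theta) * math.sin(phi),
        )
    ```

    Whether the panorama gives convincing shadows then depends on its dynamic range: an 8-bit skybox clips the sun to white, while a true HDR keeps it thousands of times brighter than the sky, which may be why some engines respond to these skydomes better than others.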

    Post edited by WendyLuvsCatz on
  • WendyLuvsCatz Posts: 38,214
    edited June 2023

    The AI here is the music and singing (SongR AI and Meta's MusicGen);

    the video itself was rendered entirely in Carrara (OK, a bit of Particle Illusion postwork was added in the Blender video editor).

    Post edited by WendyLuvsCatz on
  • FirstBastion Posts: 7,760

    Are those your lyrics for the potion shoppe? A nice little descriptive story in there.

    Music-wise, I liked the first version better; the second one had a few too many dissonant chords.

  • WendyLuvsCatz Posts: 38,214

    FirstBastion said:

    Are those your lyrics for the potion shoppe? A nice little descriptive story in there.

    Music-wise, I liked the first version better; the second one had a few too many dissonant chords.

    No, SongR AI wrote it from the prompt "magic potion shop."

  • RenderPretender Posts: 1,041

    Attached are a few more trial images that I have put through my experimental DAZ-through-SD workflow. One of the most frustrating aspects of AI/Stable Diffusion for me, even when using my DAZ renders as input/reference images, has been trying to produce consistently recognizable facial characteristics across image series. I used the technique of prompt scheduling for this set of four images, and I'm quite pleased with the result. Even to a very discerning viewer, I think there is no question that these images are of the same woman.
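    For readers unfamiliar with prompt scheduling: in the AUTOMATIC1111 web UI, the prompt-editing syntax `[from:to:when]` swaps one phrase for another partway through sampling, which is one way to let early steps fix the composition while later steps lock in a specific face. A minimal sketch of how such a schedule resolves at a given point in sampling (an illustration of the idea, not the web UI's actual parser; the character name is from my example images):

    ```python
    import re

    # Matches [from:to:when], where `when` is a fraction of total steps.
    _SCHED = re.compile(r"\[([^\[\]:]*):([^\[\]:]*):([0-9.]+)\]")

    def resolve_prompt(prompt, progress):
        """Return the prompt active at `progress` (0.0-1.0 through
        sampling): the 'from' text before `when`, the 'to' text after."""
        def pick(m):
            start, end, when = m.group(1), m.group(2), float(m.group(3))
            return start if progress < when else end
        return _SCHED.sub(pick, prompt)
    ```

    With a prompt like `photo of [a woman:Michaela:0.3] smiling`, the sampler works from the generic phrase for the first 30% of the steps and from the specific one after that, which helps keep the face consistent across a series.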

    G8F-Michaela - AI Test-merge.png
    706 x 1024 - 399K
    G8F-Michaela - AI Test 2-merge.png
    706 x 1024 - 454K
    G8F-Michaela - AI Test 3-merge.png
    706 x 1024 - 447K
    G8F-Michaela - AI Test 4-merge.png
    706 x 1024 - 448K
  • juvesatriani Posts: 556
    edited June 2023

    https://youtu.be/iAhqMzgiHVw

    Here is a cool video about generating the same character in SD with the help of ControlNet and some manual fixes.

    The video itself is actually about preparing a character sheet or data set for LoRA training, but I think we can use several tricks from it to generate the same character in a bunch of poses (which we can reuse or copy-paste into other scenes).

    You can start with a DAZ preview or Filament render as the ControlNet source or IMG2IMG guide.
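    ControlNet steers generation with a preprocessed map of the source image, commonly an edge map (the web UI typically runs a Canny preprocessor over the input). As a toy illustration of what that preprocessing extracts from a DAZ preview or Filament render, here is a minimal Sobel gradient-magnitude sketch in pure Python (not the actual ControlNet preprocessor):

    ```python
    def sobel_edges(img):
        """Gradient magnitude of a grayscale image given as a list of
        rows of floats; border pixels are left at zero."""
        h, w = len(img), len(img[0])
        out = [[0.0] * w for _ in range(h)]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                      - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
                gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                      - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
                out[y][x] = (gx * gx + gy * gy) ** 0.5
        return out
    ```

    The point of conditioning on an edge (or pose) map rather than the render itself is that the AI is free to repaint surfaces and lighting while the silhouette and pose you set up in DAZ stay fixed.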

    Post edited by juvesatriani on
  • WendyLuvsCatz Posts: 38,214

    SongR +ScreamingBee 

  • Chakradude Posts: 260

    Sven Dullah said:

    I think it looks neat, like some old Elton John album.

    Loved that album! And the cover.

  • Griffin Avid Posts: 3,764

    For the LenZmen video "Heartbeat" I ran Daz Studio Renders through Kaiber to get the morphing reels and also turned the characters into rappers using AI. Here are some of the images I used. 

    There are also some animated segments, rendered with the Daz Studio Filament engine. I stitched everything together using Adobe Premiere.

    Image of the Lenzmen in the studio

    Lenzmen Earthadox for Heartbeat video.

    Lenzmen Dynamics Plus sitting on a giant tape machine from Heartbeat song.

     

    Lenzmen Dokta Strange standing by his car from the Lenzmen music video heartbeat.

    Dokta Strange Heartbeat video

     

  • csaa Posts: 823
    edited July 2023

    juvesatriani,

    Thanks for sharing that video! My first impressions?

    There are a number of software tools to master, whose outputs cascade into one another. For someone new to generative AI art, there's a good amount of technical learning involved here. There's also a lot of room to make mistakes, but just the same there's room to develop one's own style and adapt the process to achieve one's unique goals.

    Also, even with generative AI's assistance, the techniques here still require effort and judgement on the part of the user. This isn't just in the manual steps of cleaning the intermediate images or organizing them into sheets; more importantly, it takes a trained eye to separate the good images the AI outputs from the bad. Clearly it takes a fair amount of patience to put together a good training set. As with anything to do with software, garbage in equals garbage out. The better the garbage man -- in this case, the better the artistic vision and the higher the artistic standards -- the better the overall output will be, so to speak.

    Again, thanks for sharing the video.

    3Diva, the video is worth looking into.

    Cheers!

    juvesatriani said:

    https://youtu.be/iAhqMzgiHVw

    Here is a cool video about generating the same character in SD with the help of ControlNet and some manual fixes.

    The video itself is actually about preparing a character sheet or data set for LoRA training, but I think we can use several tricks from it to generate the same character in a bunch of poses (which we can reuse or copy-paste into other scenes).

    You can start with a DAZ preview or Filament render as the ControlNet source or IMG2IMG guide.

    Post edited by csaa on
  • Diomede Posts: 15,169
    edited July 2023

    Regarding the video training for creating custom characters: didn't Daz recently change the licensing for its content? Doesn't the new wording exclude incorporating Daz content in the training of AI? Or maybe I misread that, or my memory is incorrect. I am posting this as an 'I am not sure,' not as a 'you can, or cannot, do something.' With that in mind, there may be, just may be, an issue with using Daz renders as part of the training for custom characters.

    With that 'I am not sure' out of the way, thanks for posting the video. It looks like a great resource with or without input from Daz stuff, and could be combined with Daz stuff in post-processing in any case. I will need to watch it several times to get the steps.

    Post edited by Diomede on
  • WendyLuvsCatz Posts: 38,214

    Certainly not as a LoRA model for redistribution;

    for your own use, it would likely be the same as 2D renders.

    Nonetheless, I myself tend to use other AI images, selfie video, and other real-life footage/photos for OpenPose, as otherwise why wouldn't I just be doing a render?

    I, and I dare say most DAZ users, want the opposite: OpenPose to 3D pose, like Plask mocap does.

  • WendyLuvsCatz Posts: 38,214

    A few of my video renders were used as OpenPose animations in ControlNet for this,

    although I cannot say it's at all coherent.

    I just love the clothes and wish I had this Victorian Gothic fantasy stuff for my DAZ people

    (yes, I have just about everything in that genre from the store already, but these look nicer).

  • wsterdan Posts: 2,348

    WendyLuvsCatz said:

    A few of my video renders were used as OpenPose animations in ControlNet for this,

    although I cannot say it's at all coherent.

    I just love the clothes and wish I had this Victorian Gothic fantasy stuff for my DAZ people

    (yes, I have just about everything in that genre from the store already, but these look nicer).

    Neat! I hear what you're saying; some of the clothing (and in particular some sci-fi uniforms) is great inspiration for people modelling clothing.

    -- Walt Sterdan

  • 3Diva Posts: 11,518

    csaa said:

    juvesatriani,

    Thanks for sharing that video! My first impressions?

    There are a number of software tools to master, whose outputs cascade into one another. For someone new to generative AI art, there's a good amount of technical learning involved here. There's also a lot of room to make mistakes, but just the same there's room to develop one's own style and adapt the process to achieve one's unique goals.

    Also, even with generative AI's assistance, the techniques here still require effort and judgement on the part of the user. This isn't just in the manual steps of cleaning the intermediate images or organizing them into sheets; more importantly, it takes a trained eye to separate the good images the AI outputs from the bad. Clearly it takes a fair amount of patience to put together a good training set. As with anything to do with software, garbage in equals garbage out. The better the garbage man -- in this case, the better the artistic vision and the higher the artistic standards -- the better the overall output will be, so to speak.

    Again, thanks for sharing the video.

    3Diva, the video is worth looking into.

    Cheers!

    juvesatriani said:

    https://youtu.be/iAhqMzgiHVw

    Here is a cool video about generating the same character in SD with the help of ControlNet and some manual fixes.

    The video itself is actually about preparing a character sheet or data set for LoRA training, but I think we can use several tricks from it to generate the same character in a bunch of poses (which we can reuse or copy-paste into other scenes).

    You can start with a DAZ preview or Filament render as the ControlNet source or IMG2IMG guide.

    Thank you for the suggestion! The video is really cool. I don't think my PC can handle model training, though. I'm sure they'll soon make model training a lot more user-friendly for lower-end computers. :) At least I hope so!

    Here are before and after images I sent through Stable Diffusion. I think it turned out kinda cool, but replicating the results has proved elusive.

  • Deecey Posts: 136
    edited July 2023

    I am looking through these renders mixed with AI and they are pretty cool.

    But I have to admit I am overwhelmed with which AI engine to use, and was wondering if someone could help point me in the right direction.

    I'm trying Dream Studio and the results are kind of sad, probably because I am lost with how to prompt and whatnot.

    Can someone recommend a good AI engine to start with, and some sort of tutorial on how you can get good results from an uploaded render?

    Thanks 8-)

    This is the image that I am playing with .... (EDIT: Or maybe not, it's not uploading LOL)

    OK let's see if this link works:

    https://www.daz3d.com/gallery/user/6147139440738304?edit=albums#image=1301334

    Post edited by Deecey on
  • Artini Posts: 9,458

    There is an AI-based editing program at Humble Bundle.

    Maybe that one will be easier to use.

    Stable Diffusion and Midjourney may disappear if AI regulations are implemented as proposed by Adobe (the C2PA metadata system).

     

  • csaa Posts: 823

    Re-sharing info posted by wolf359 about his NPR/Anime project with AI elements. The links are here and here. Worth looking into.

    Cheers!

  • Artini Posts: 9,458

    From this Daz 3D render of https://www.daz3d.com/oso-cat-for-genesis-9

  • Artini Posts: 9,458
    edited August 2023

    To these AI-edited images - 01

    Post edited by Artini on
  • Artini Posts: 9,458

    No 02

  • Artini Posts: 9,458

    No 03

  • WendyLuvsCatz Posts: 38,214

    I am so glad it didn't misinterpret your pokethrough as a happy boy cat

  • Artini Posts: 9,458

    I increased the scaling on the shorts to 300%, I think, and still got poke-through in Daz Studio.

    The AI treatment was much easier to handle and still took some guidance from the DS render.

    It is also possible to get digitigrade feet in AI, if one desires.

     
