Comments
I shared this in the Carrara forum earlier but will share here too
(it wasn't until I had uploaded it that I realised I had rendered it in Carrara, not DAZ Studio)
the embedding seems particularly good at replacing faces; my only prompt was wendyluvscatz, which brings up the trained embedding
so the DAZ characters all get my face
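(In case anyone wonders how that works mechanically: a textual-inversion style embedding is loaded into the pipeline and then triggered by its token in the prompt. A minimal sketch using the Hugging Face diffusers library; the model ID and embedding file name below are illustrative placeholders, not Wendy's actual setup.)
# Minimal sketch: load a trained textual-inversion embedding and trigger it by its token.
# The model ID and embedding file name are placeholders/assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The embedding file produced by textual-inversion training; the token is the
# trigger word you then use in prompts.
pipe.load_textual_inversion("wendyluvscatz.pt", token="wendyluvscatz")

# Using the trigger word alone as the prompt, as described above.
image = pipe("wendyluvscatz").images[0]
image.save("embedding_test.png")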
Interesting, uh, hat?
Did you try to make it render the character with ....? Could it actually render a hat, like a squirrel, in isolation, so one could use that to add to a character, in theory? Maybe that's asking too much from the AI, as it doesn't follow descriptions like we can; instead it combines based on patterns (yes, there is grammar contained in language models, but the constructive part is not like a servant doing what you say, unless I am mistaken).
oh I misunderstood your request
I thought you wanted me as a squirrel
a hat should be easy, I can try later
Another danger to ponder: bureaucratic politics might actually want this kind of transformation. No more of those random artists doing arbitrary stuff, with no way to tell if, what, and when. Instead you have the corporate model providing everything in an easy-to-overview, simple-to-administer fashion!
While "AI" can help with many developments, it doesn't provide formal means of understanding what we can and cannot do. Advancing in science, of course, keeps enabling us to. Still, I am somewhat wary of too much hype on either side there.
"AI" could enable the zombie apocalypse, where we eat each other (to the benefit of a few people), and in the end we are left without any civilization (a few nobles or a few specialists cannot pass for a civilization, unless you're OK with the archaeological sense, meaning "dead already"). Letting cloud services emerge based on unregulated scraping seems to be along the lines of those scenarios. In less fancy words: a race to the bottom, structurally.
If you need an example of where these attitudes can lead once in charge for good, consider the irony: "While you're not 'communists', you'll still walk down the same path, and look where they are now!"
And it's simple to explain in abstract terms: not protecting the people means ending up on the same page, no matter how efficient "(more) free markets" are for this and that. So the decision is about having a civilization that is "worthwhile", directed at "eternity" - that's the only possible continuation, virtually from anywhere, on the broader scale. And again we see that both the random folks and the random 'great leaders' of more ancient times had good ideas, at least one, but couldn't manage to get there, technically, probably because they all ended up on the same page with the 'communists' in the end?
Not convinced? Not my job ;).
Apologies for the misunderstanding! I was just trying to make you try stuff at your expense...
I really meant a squirrel as a hat, rather - a squirrel-hat, possibly to be distinguished from the hat-squirrel.
my expense? I am not paying for this, except time maybe, but I have plenty of that; just busy rendering stuff at the moment, DAZ takes longer than AI
and of course uses my GPU
and will share a horrifying video I uploaded to Facebook while I am here
https://www.facebook.com/wendy5/videos/3290977477714530
I heard the Graham voice on another YT clip the other day. Can A.I., say, take your voice and warp it to speak with Graham's accent? Or vice versa? That would be very helpful.
Just wondering how many of those who posted to the forum threads
are actually AI presenting opinions about the topics discussed.
This very expensive service can completely alter your voice, but I do not think it can recreate your voice with a different language accent.
https://www.altered.ai/
Oh, hmmm. Thanks for that. I'll bookmark it but I'll try & hold out for something more akin to my original question. Keep up the good work on your YT.
I don't have a link, but that kind of system is being worked on, or is at least under consideration, probably within scope. There was a news report about a system "removing" the accent, with a conceivable use case for call centers, e.g. in India. Not judging that approach in this post.
AI messes up fine details like zippers, buttons, and fingers on hands... so it will be a while until it really is a threat.
JD
A very short while IMHO.
My interest is how animation will be assisted by AI
This Video is entirely created by AI, from the script to the talking head and all of the other images.
It ain't happening with me. Ever.
I get paid to make art in my style, and that style has to be consistent.
If I go the AI route, then it's no longer my style.
It would be the style of an algorithm. This would be insulting to me.
I could not call it my own, and my followers/clients would know the difference.
What if you had to duplicate it?
Just my take on it.
...
I happened to come across this ad on my phone when my ad block was off, and I just had to screenshot it.
The more you look at it, the worse it gets. But at any rate, this is one less stock image used for whatever this is. I think the stock image sites are going to be hit first by this stuff.
It makes me wonder if the AI is trained on the faces of some of the people who pop up in lots of stock images. The image looks a lot like a routine stock image.
still looks better than the salad-hands Game of Thrones render DAZ supplied, Nifty
but both seriously need a human hand in Photoshop
this is the sort of stuff I am trying to do with ai
one of mine
Short: I judge it differently: a lot more "correctly tagged data" is needed to make that work, if at all, or a much different architecture, which may not even exist in theory yet. That might not happen too fast. I don't believe that adding a few images of "just zippers" will suffice there, but you never know. Virtually all images lack descriptions for all sorts of contained details, so I wouldn't assume such things can be resolved quickly. To correct from the end result, e.g. based on user reports, we might need some form of accountability for which training data was used predominantly in an image, which appears not to exist with these models yet. At least it's not like a bug fix.
Animation: From a text prompt? That would be interesting too, like an assistant. I'd assume less generative means of support could work out well too, like auto JCMs learned from sports clips for humans and animals, similar to motion capture from 2D video, more or less in real time, with the ability to structurally adjust whatever parts, with smart adaptation or recalculation, and so on. Even without complex language models, you could do things like matching bones/rigging somehow, and then animate the walk like a dog and the wings like a dove, possibly with a more simplistic interface (drop-down menus instead of arbitrary text). Not sure what exactly you had in mind in the first place...
(Concerning the tagging: imagine the reCAPTCHA data set for image recognition and a player like Google... OK, it's photos; no one knows where that will go, except perhaps Google. It probably forms some kind of monopoly position on that data concerning generative models / photos.)
There is already text to animation published: https://guytevet.github.io/mdm-page/
Demo: https://replicate.com/arielreplicate/motion_diffusion_model
Steps to make a BVH (never tried it, so no idea how workable this is): https://github.com/GuyTevet/motion-diffusion-model/issues/32
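(If you'd rather script the Replicate demo than click through the web page, something along these lines should work with their Python client; the input field name here is an assumption on my part, so check the model page linked above.)
# Rough sketch of calling the hosted motion_diffusion_model via the Replicate
# Python client (pip install replicate, with REPLICATE_API_TOKEN set).
# The input key "prompt" is an assumption; verify it on the model page, and
# you may need to pin a specific model version hash.
import replicate

output = replicate.run(
    "arielreplicate/motion_diffusion_model",
    input={"prompt": "a person jumps over an obstacle"},
)
print(output)  # typically a link to the generated motion output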
You could train it on only your art/art style, then see if it helps speed up your process or is useful as a reference piece, etc... That would be an interesting experiment.
It would be good if someone who knows could explain in simple words how these art AIs work.
I have only heard that reference images are converted to different noise patterns,
and together with the keywords create some kind of database that the AI algorithm works with.
One of the videos above has a good rough explanation. I'll try to summarize:
Training data: images with text descriptions (often inaccurate or incomplete). These may or may not be tweaked or improved.
Language model: a language model is somehow included to make the text possible to reference. (Not trivial.)
Noise: (training data may get processed, e.g. scaled to a certain size, before the actual training.) A certain type of neural network is trained on the training images transitioning to noise in multiple stages, and learns to reverse the process as well. Together with somehow relating to the text description, this makes both noise vs. image and the description work in the reverse direction too (train on "to noise", and you have learned "from noise"). Of course you can't see from pure 100% noise what it had been before, but that's roughly the technique. All of this is baked into the model, which of course is tied to the construction of the neural network(s) contained.
In the end you have a system that makes up an image for your text description, starting from noise and your text input. In a way, you might imagine "hamburger with a gun": it starts off with just noise, and then successively makes "hamburger" and "with a gun" materialize, going from very noisy versions to something more distinct. Imagining (which may not be 100% what happens) that you plonk together very noisy versions of a hamburger and a gun, the whole thing gets made less noisy based on the abstraction/"experience" built into the "machine", which includes the language-model-based "ideas" of "with a" and maybe specifically "with a gun" - so it could end up adding a gun somewhere, or adding extremities holding a gun. So it's not just copying images together, but deducing them somehow from the depths of its "from noise" knowledge. Imagined as an iterative process, the system moves the whole thing from noise to image, which probably makes it plausible that the images aren't just a collage. And of course it doesn't learn or gather experience, and it doesn't interact, while you're using it. It's all baked in, like an encyclopedic system.
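(A toy illustration of the "to noise" direction the training uses, just to make the idea concrete; the real models do this in a latent space and are conditioned on the text, which this little sketch ignores.)
# Toy sketch of the forward "image -> noise" schedule a diffusion model is trained on.
# Each step mixes in a bit more Gaussian noise; the network learns to undo one such
# step at a time, which is what later lets it start from pure noise and a prompt.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))            # stand-in for a training image, values in [0, 1]
num_steps = 10
betas = np.linspace(1e-4, 0.2, num_steps)  # how much noise each step adds

x = image
for t, beta in enumerate(betas):
    noise = rng.standard_normal(image.shape)
    x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise   # one noising step
    print(f"step {t}: remaining signal scaled by {np.sqrt(1.0 - beta):.3f}")
# After enough steps x is essentially pure noise; the trained network learns the reverse.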
(Feel free to correct me where I am wrong. Of course there are more questions than explanations if you start at zero.)
Thanks for the explanation - it is enough for me, now.
Also wondering if ChatGPT would elaborate more on such AI techniques.
Will wait for any legal solutions to AI generated art, then.
meanwhile...
back to
Remixing your art with ai
some postwork in GIMP, mostly erasing backgrounds of the AI ones, with the DAZ one added back
Nice.
How is it with posing characters generated with AI?
Do you also describe in text what the pose is supposed to be?
pretty good, and no, I don't describe the pose
it was "dwarf with luscious braided beard and hair in front of a building"
negative prompt was the standard:
disfigured kitsch ugly oversaturated grain low-res Deformed blurry bad anatomy disfigured poorly drawn face mutation mutated extra limb ugly poorly drawn hands missing limb blurry floating limbs disconnected limbs malformed hands blur out of focus long neck long body ugly disgusting poorly drawn childish mutilated mangled old surreal
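(For reference, this is roughly how a prompt plus a standard negative prompt like that get passed if you run Stable Diffusion locally through the diffusers library; a web UI does the same thing behind the scenes, so treat this purely as an illustration of the mechanism, not of Wendy's actual setup.)
# Illustrative only: positive prompt plus a standard negative prompt in diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "dwarf with luscious braided beard and hair in front of a building"
negative = ("disfigured, kitsch, ugly, oversaturated, grain, low-res, deformed, blurry, "
            "bad anatomy, poorly drawn face, mutation, extra limb, poorly drawn hands, "
            "missing limb, floating limbs, malformed hands, out of focus, long neck")

image = pipe(prompt, negative_prompt=negative, num_inference_steps=30).images[0]
image.save("dwarf.png")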
a video using EbSynth
another video with every frame put through Stable Diffusion and interpolated
Thanks for the explanation, Wendy.
Hope to try it at some point. It seems unavoidable...
more or less my reasoning
I don't do commercial work, 3D is just a hobby for me, so I feel less guilty
I understand the artists' point of view, but I am not on one side or the other, and I resent being called a thief because I am exploring and educating myself on new technology
the minute I deprive someone of some income by my actions I will accept the thief label
I haven't
it's not black and white for me, it's a spectrum and I am in the middle or grey area on my feelings
I am sad that DAZ PAs, whom I have bought products from even recently, like in the last week, are so angry about it
for them it seems a black-and-white issue, but they created their models; I am just a user/customer, and IMO not an artist either (me personally, that is) - I am a hobbyist who uses premade models by others
I don't steal 3D models, I don't upload other people's art
I have Greg Rutkowski's Witcher art in my desktop rotation (it came with the Witcher games I bought, and he was paid by them)
I have looked at it and my DAZ stuff and considered doing a render based on it, shoot me!
I don't think you need to feel guilty at all. Might keep checking the news in this context...