Mimic: Who owns what?

wsterdan Posts: 2,344
edited September 2023 in Daz Studio Discussion

I'm just trying to wrap my head around DAZ's Mimic history to see if I really understand the software's situation.

This is how I remember the history, but I suspect I may have misunderstood something along the way and would like any of my misunderstandings clarified.

I think my main misunderstanding may be who actually wrote which versions of the Mimic software.

I've purchased Mimic, Mimic 2 Pro for Poser, Mimic Lite 3.1, Mimic Pro 3.1 and the Mimic Pro for Lightwave Plug-in. I didn't purchase the Carrara Plug-in, nor the Windows-only Mimic Live. My understanding was that DAZ licensed the Talkback™ engine and wrote all of the listed Mimic products.

We've been told that the reason Mimic exists only in the 32-bit version of DAZ Studio, and is absent from the 64-bit version, is that they didn't license the 64-bit version (I assume they're talking about the 64-bit version of the Talkback™ engine?).

Where I get confused is that both the Carrara and, I believe, the Lightwave Mimic plug-ins are 64-bit.

I get further confused when the Windows-only Mimic Live version is also 64-bit.

Someone wrote in a Mimic Live review that "If I remember correctly, Daz lost access/rights to the original source code at one point so when Daz decided to bring it back, they had to code it from scratch."

That makes no sense to me, since I assume they still have the source code for the 32-bit version inside DAZ Studio. Moreover, if they actually rewrote all of the code themselves, from scratch, why not just add it to the 64-bit version of DAZ Studio instead of creating a "Live", Windows-only version?

Can anyone clarify the actual history/situation?

Is there any chance at all of getting a Mimic version inside DAZ Studio 5?

For the record, I'm more than willing to pay for a 64-bit Mimic plug-in for DAZ Studio 5.

-- Walt Sterdan

 

Post edited by wsterdan on

Comments

  • it is still better than all the latest Apple iPhone hardware dependent products they are pushing now angry

  • wsterdan Posts: 2,344

    WendyLuvsCatz said:

    it is still better than all the latest Apple iPhone hardware dependent products they are pushing now angry

    That always confuses me, to be honest. I know that they get more accurate readings from Apple's TrueDepth cameras, but if I were writing the software, I'd either sell two versions of the software (one that uses TrueDepth and a less-accurate one that uses regular cameras) or a single version that has both options available. I use the camera on my computers with Reallusion's Cartoon Animator and Adobe's Character Animator, and they work more than well enough for what I, and I think most people, need.

    For what I'm doing, Mimic is hands-down the best for me for 3D; if I have 100 audio files to lip sync, the last thing I want to do is sit in front of my computer recording myself doing the lip syncing.

    -- Walt Sterdan
     

  • WendyLuvsCatz Posts: 38,208
    edited September 2023

    yeah, Mimic Live had that issue too; Windows updates destroyed the Stereo Mix option I used for playing audio files, making it useless

    for PoseRecorder I resorted to using videos rendered in iClone as the source

    mostly, though, I use the DAZ 32-bit version, but sometimes I cannot save, reload, and do that easily, because one can only run one instance of D|S

    it means using my other computer

    Post edited by WendyLuvsCatz on
  • wolf359 Posts: 3,828

    For a straightforward, audio-based lip sync, the 32-bit Mimic in Daz Studio still holds up IMHO,
    even more so now that we have viable AI voice generators to create dialogue without voice actors.

  • wsterdan Posts: 2,344

    wolf359 said:

    For a straightforward, audio-based lip sync, the 32-bit Mimic in Daz Studio still holds up IMHO,
    even more so now that we have viable AI voice generators to create dialogue without voice actors.

    Totally agree; I haven't seen anything that does what it does as easily as it does it. There are some similar features in plug-ins for Maya and Blender, but even they miss the odd feature here and there. Mimic does exactly what I want it to do for the dialogue-driven animation I'm trying, except work in 64-bit; not being able to hear the audio in the scene file to help fine-tune facial animation, or to have five or six different characters talking in the same file, adds so much more work to the animation that it starts to lose its advantage.

    Since DAZ has been working on DAZ Studio 5 for at least three years now, and the only thing we know to be "Coming soon to DAZ Studio 5" is Omniverse, I think the time has come to take down the hopes I'd pinned on using DAZ Studio and move on. If and when it does come out, and if it manages to add back the things it lost moving to 64-bit, I'll take another run at it.

    -- Walt Sterdan

  • Dartanbeck Posts: 21,568

    This is a really interesting topic that I toil over as well. Not having Mimic for Daz Studio 4 Pro 64 bit truly is a bummer. 

    I went back to Mimic Live in the store many times to check (again) if I could use it like Mimic Pro, but... no. Just live :(

     

    I haven't tried it yet, but I doubt it would work to use a pose converter to get Genesis 1 facial animations working on Genesis 3, 8 (or now 9). My friend made a Genesis 1 Head CR2 for Mimic 3.1 Pro, so I can use the amazing Mimic Pro for Genesis, but that, unfortunately, is where it ends.

     

    I do have Mimic Pro for Carrara, which allows us to use it with literally anything that we can make a config file for. And then we can customize everything beyond that even. But even that still isn't the same as using that amazing little Mimic 3.1 Pro interface for making lip sync. 

     

    In a sale, I ended up grabbing Anilip 2, which provides a method for me to get from audio to lip - which should be fine for me. The rest of the face and head... I actually enjoy animating those parts.

    On that note, however, I more recently got PoseRecorder. Pretty sure that one requires video. I did a simple and quick test with it: the real Rosie acting a simple phrase.

    It turned out really cool! However, I had asked her to exaggerate her facial expressions; with PoseRecorder, there's no need for that. It picks it all up very efficiently.

     

    So with all of these options, my first real attempt forward from here will be with PoseRecorder. Like the FACS systems, PoseRecorder does a true facial performance, which I think will work a LOT better for me than audio to lip. I'm lucky in that I have several people who would be happy to perform into their phone for me - and PoseRecorder actually says that phone recordings are preferred over webcams.

    So what about people who don't have actors?

     

    How about this for food for thought:

    Use your Mimic Pro on M4 or V4 (or whatever), dial that in to your liking, and render it face-on. No camera movements, and, if the head is moving, parent the camera to the head so it stays focused on the face. Now feed that rendered video into PoseRecorder and save the result as an animated pose file.

     

    FYI Note about PoseRecorder:

    The instructions say to save your results to your Textures folder. That part really threw me, but I did it anyway. Nope. Save to your Pose folder! LOL

    PoseRecorder

  • Dartanbeck Posts: 21,568
    edited September 2023

    Back to the idea of Mimic Pro for Carrara, there are workarounds for getting Genesis 3 through 9 into Carrara, and Fenric's plugins (BVH/PZ2 Exporter, in this case) are now free.

    So we could import Genesis 9, for example into Carrara, create a new (whatever it's called) config file, and use Mimic in a much less realtime way, then save the result as a BVH or PZ2 animated pose file.

     

    The attractive part of this idea is that we could do a lot more with it before exporting the PZ2.

     

    Even better, we could export the PZ2, then make changes and export as another option, rinse and repeat.

     

    Again... just more food for thought - and Mimic Pro for Carrara really does have a lot of power with a nice little manual explaining how to configure and use it, etc., with tips and such.

     

    Wait... I was just looking to grab a link for you. Mimic Pro for Carrara is no longer available? What a freaking Crime!!! This is starting to go Too Far!

    Post edited by Dartanbeck on
  • Dartanbeck Posts: 21,568
    edited September 2023

    Back to the PoseRecorder idea, like I said before - it suggests using a smart phone for recording the performance, trying not to move the head while performing.

    The promo page has three example videos - one shown here. This is the result to expect from the software, which is really quite good performance capture for something so incredibly inexpensive.

    The reason I'm showing this video: Watch closely - even a few times over. Perhaps also the other two on the store page. Watch the facial performance of the actor and watch the results on the figure. You'll see what I mean that we don't need any embellishment from the actors. Just deliver the lines as if acting.

     

    I plan to buy a bracket for the head that holds the phone directly in front of the face, so that my actors can actually act while delivering the speech if they want/need to. My cast of actors is very small, so this won't be a problem as far as I can tell. I do most of the background people, creatures, monsters and many sound effects, etc., Rosie does Rosie, and my crew from the sound studio said that they would love to do whatever I need - just for the fun of it.

     

    What I love about this product is how well it translates the video performance into facial morph recognition. I mean... it works the morph dials really well! 

    Post edited by Dartanbeck on
  • wsterdan Posts: 2,344
    edited September 2023

    There are already .DMC files for DAZ characters going from at least M3/V3/Aiko3/Hiro3 all the way up to Genesis 8. Wendy has a post somewhere that tells you how to edit the .DMC for other characters as well (thanks again, Wendy!).

    When I did the lip syncs for my very first 3D attempts at animation (Hrimfaxi Adventures Episodes One and Two https://www.youtube.com/channel/UCuCYsS1h_OIJxpipx893eNA), I loaded one of my characters in the 32-bit version of DAZ Studio, loaded my first sound file, applied the Genesis 8 DMC and lip sync, saved a partial pose file of *just* the head and head parts, hit undo, loaded the next voice clip, lip synced, saved, undid, opened the next one, etc. I created 65 lip-synced head/face poses in under 90 minutes. I then switched to DAZ Studio 64-bit and started animating. Selecting your character and applying the partial poses doesn't change any other animation applied to the character, so if, for example, you have two people walking, you can apply the poses and it doesn't affect their walking, or running, etc. You can also set your character's mood (angry, confused) pose and then apply the face pose.

    One thing to note: there aren't different "Genesis 8" .DMC files for males and females, or anything other than generic Genesis 8, so you can load any character from that generation, do all your lip syncs using that one character, then apply them to a hundred different Gen 8 characters.
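The save/undo loop described above is very mechanical, so here is a sketch of its bookkeeping in Python. To be clear, this is purely illustrative: the helper name and the pose-file naming convention are my assumptions, not part of any DAZ Studio API, and the real steps are performed by hand in the 32-bit UI.

```python
# Hypothetical sketch of the batch lip-sync loop described above.
# For each clip: load audio, apply the DMC, run the lip sync, save a
# partial pose of just the head, then undo before the next clip.
# This only models the bookkeeping: one head-pose file name per clip.

def batch_lipsync_names(audio_clips, dmc_profile="Genesis8"):
    """Return one partial head-pose file name per audio clip."""
    pose_files = []
    for clip in audio_clips:
        stem = clip.rsplit(".", 1)[0]  # "line01.wav" -> "line01"
        pose_files.append(f"{stem}_{dmc_profile}_head.duf")
    return pose_files
```

Sixty-five clips in, you would have sixty-five head-only pose files ready to apply in the 64-bit build; `batch_lipsync_names(["line01.wav", "line02.wav"])` returns `["line01_Genesis8_head.duf", "line02_Genesis8_head.duf"]`.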

    -- Walt Sterdan

    Post edited by wsterdan on
  • Dartanbeck Posts: 21,568
    edited September 2023

    Something to note about that video is that it's just a Genesis 8 Male figure - no shaping morphs applied, yet it did an amazing job of shaping the muscles to the actor's delivery.

    This is important because it shows that it doesn't try to 'resemble' the actor, but to deliver the muscle movements.

     

    This way we can have any character shape use the performance. Imagine that same performance added to Dasan 8. Now imagine it on one of your characters. 

    So, if you have someone who can message you a video of themselves, you can, if you like, direct them about the character they're portraying. Like: Misc always talks from one side of his mouth or the other. Rarely does he ever use his whole mouth - as an example.

     

    This sort of thing goes pretty far beyond anything that audio/text to speech can do. Sure, we can work in the expressive morphs ourselves - no problem. But it is time-consuming. The thing is, we can also edit morph changes to PoseRecorder results too. Pretty powerful stuff.

     

    The thing about Mimic is that we might be remembering it as a lot better than it ever was, just because we were so excited to see it work. Don't get me wrong - I love Mimic Pro. But the results aren't nearly what PoseRecorder puts out. Here's a simple, quick test I did years ago on Genesis using Mimic Pro for Carrara, without any tweaks to the Mimic result.

    Post edited by Dartanbeck on
  • wsterdan Posts: 2,344

    One other tip for using this method when you have background characters that aren't speaking: I lip synced some extended speeches and did partial pose saves of only the eyes. When I'd go to animate a scene, I'd apply the eyes-only partial pose to the background characters, applying it at a different point in the timeline for each background character (so they wouldn't all blink in unison), so after only a minute or two you can have five different characters blinking during the scene, making them a tiny bit more "alive". I also used Riversoft's random pose generator to have them move a bit as well, bodies and/or heads.
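The "don't blink in unison" trick is just a matter of giving each background character a different timeline offset for the same eyes-only pose. A minimal sketch of that idea (plain Python, nothing DAZ-specific; the function name is mine):

```python
import random

def blink_offsets(n_characters, scene_frames, seed=None):
    """Pick a distinct start frame for each background character's
    eyes-only partial pose, so no two characters blink in unison."""
    rng = random.Random(seed)
    # sample without replacement: every character gets a unique frame
    return rng.sample(range(scene_frames), n_characters)
```

Applying the same eyes-only pose at, say, `blink_offsets(5, 300)` gives five characters blinking out of step across a 300-frame scene.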

    -- Walt Sterdan

  • wsterdan Posts: 2,344

    Dartanbeck said:

    Wait... I was just looking to grab a link for you. Mimic Pro for Carrara is no longer available? What a freaking Crime!!! This is starting to go Too Far!

    Thanks for thinking of me, but yeah, it's been gone for a while now. I think I looked into buying it about a year ago and I think it was gone by then.

    -- Walt Sterdan 

  • Dartanbeck Posts: 21,568

    wsterdan said:

    One other tip for using this method when you have background characters that aren't speaking: I lip synced some extended speeches and did partial pose saves of only the eyes. When I'd go to animate a scene, I'd apply the eyes-only partial pose to the background characters, applying it at a different point in the timeline for each background character (so they wouldn't all blink in unison), so after only a minute or two you can have five different characters blinking during the scene, making them a tiny bit more "alive". I also used Riversoft's random pose generator to have them move a bit as well, bodies and/or heads.

    -- Walt Sterdan

    That's beautiful!

    I bought the Face Controls for Genesis 3 & 8, which comes with example animations. I use bits of those examples for my background characters too. I saved chunks that don't have speech, but many of my background characters don't mind having things to say that we just can't hear! :)

  • Dartanbeck Posts: 21,568

    wsterdan said:

    There are already .DMC files for DAZ characters going from at least M3/V3/Aiko3/Hiro3 all the way up to Genesis 8. Wendy has a post somewhere that tells you how to edit the .DMC for other characters as well (thanks again, Wendy!).

    When I did the lip syncs for my very first 3D attempts at animation (Hrimfaxi Adventures Episodes One and Two https://studio.youtube.com/channel/UCuCYsS1h_OIJxpipx893eNA/videos/upload?filter=%5B%5D&sort=%7B%22columnType%22%3A%22date%22%2C%22sortOrder%22%3A%22DESCENDING%22%7D), I loaded one of my characters in the 32-bit version of DAZ Studio, loaded my first sound file, applied the Genesis 8 DMC and lip sync, saved a partial pose file of *just* the head and head parts, hit undo, loaded the next voice clip, lip synced, saved, undid, opened the next one, etc. I created 65 lip-synced head/face poses in under 90 minutes. I then switched to DAZ Studio 64-bit and started animating. Selecting your character and applying the partial poses doesn't change any other animation applied to the character, so if, for example, you have two people walking, you can apply the poses and it doesn't affect their walking, or running, etc. You can also set your character's mood (angry, confused) pose and then apply the face pose.

    One thing to note: there aren't different "Genesis 8" .DMC files for males and females, or anything other than generic Genesis 8, so you can load any character from that generation, do all your lip syncs using that one character, then apply them to a hundred different Gen 8 characters.

    -- Walt Sterdan

    I tried the link but it says that I don't have permission to view the page. Bummer. I want to see it!!!

    Yes. Partial pose files are Awesome!

    I use aniMate 2 a lot. We can save facial animations (partials) to aniBlocks as well by checking the Morphs box when creating a new aniBlock. This way I can take my partials and chop them up, speed them up or down, reverse them... it's amazing how far we can take this stuff. 

    With my PoseRecorder result, the first thing I did was to make it into a partial aniBlock.

    Now if only GoFigure could add a "Strength" control for aniMate 3 (hint, hint) so that we could add an aniBlock and raise or lower the peaks and valleys of the values. That would be SWEET!!!
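The wished-for "Strength" control is simple to describe mathematically: scale every keyed value's distance from a baseline. A sketch of that math in Python (hypothetical; no such aniMate control exists, and the function name is mine):

```python
def scale_strength(values, strength, baseline=0.0):
    """Raise or lower the peaks and valleys of a keyed value track:
    strength=1.0 leaves it unchanged, 0.5 halves the motion around
    the baseline, 2.0 doubles it."""
    return [baseline + strength * (v - baseline) for v in values]
```

For example, `scale_strength([0.0, 1.0, -0.5], 0.5)` flattens an aniBlock's morph track to `[0.0, 0.5, -0.25]`, half its original intensity.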

  • Dartanbeck Posts: 21,568
    edited September 2023

    I couldn't take it anymore, so I typed "Hrimfaxi Adventures" into YouTube. OMG!!!

    Instant Fan!!! Sub Freaking Scribed!!!

     

    I Love the expressiveness of these Wonderful Characters!!! Lovely! Just Lovely!

    Your actors, by the way, are Excellent!

    Post edited by Dartanbeck on
  • Dartanbeck Posts: 21,568

    Looks like these are all very recent. So you're in the middle of these? You've got a lot done and it looks Fantastic! I'm also a big Gerry Anderson fan! I have the full Stingray collection and after watching The Probe, I'm inspired to watch the whole series again. Well... after watching Part two of the Hrimfaxi Adventures, and then Starship Hrimfaxi episodes!!!

    Kudos!

  • wsterdan Posts: 2,344

    Dartanbeck said:

    Looks like these are all very recent. So you're in the middle of these? You've got a lot done and it looks Fantastic! I'm also a big Gerry Anderson fan! I have the full Stingray collection and after watching The Probe, I'm inspired to watch the whole series again. Well... after watching Part two of the Hrimfaxi Adventures, and then Starship Hrimfaxi episodes!!!

    Kudos!

    Thanks very much! Sorry I botched the link; it's been fixed now, hopefully.
    When Mimic first came out, I thought it made animations much more possible, so I sat back and gathered assets for when I would have time to try animation, which turned out to be this year, as I finally retired. Sadly, DAZ only has the 32-bit Mimic that I can use; I can no longer add audio tracks to my DAZ scenes to help sync facial reactions better; dozens and dozens of characters that I'd prepared over time now have to be remade, because an update this year made Genesis 1 and Genesis 3 characters less compatible (characters that I'd assembled using both have to be remade, but without the ability to use Genesis 1 with G3); and so on.
    The actors' recordings were actually for a simple teaser I'd wanted to put together, but it turned out I had enough for two short episodes. With any luck I'll be able to track down all the actors and do a third episode of Hrimfaxi Adventures at some point.
    In the meantime, without knowing what DAZ Studio 5 will bring, I switched to 2D using Reallusion's Cartoon Animator for Starship Hrimfaxi, which allowed me to do a 40-minute cartoon in 65 hours. Well, that and computer-generated voices. I'll probably stick with 2D until we see what D|S 5 gives us.

    I'm currently prepping to do a slightly different style of characters for a show that's even more Fireball XL5 and Tom Corbett, Space Cadet, called "Polaris X5", hopefully in the next couple of weeks, and then back to another Starship Hrimfaxi story, "The Dark Nebula Jump".

    So much to learn, but having fun while I do.

    -- Walt Sterdan

  • Dartanbeck Posts: 21,568

    Very, very cool!

    Are the "Starship Hrimfaxi" episodes 2D then? I guess I'll be finding out. I only take internet breaks when I'm simulating or rendering. Otherwise I'm animating too. Love it!

    I have long looked at Reallusion with wishful eyes but every time I do, I come to the realization that I'm really quite happy - for what I do. I think we're agreed on that. I just keep learning and pushing forward. Then change... etc., etc.,

    I thought that Carrara was my end game software. I probably will always use it for a lot of things. Discovering what I can do with my other apps that are constantly staring me in the face turned my head pretty hard. It was that darned hair!!! LOL I had to figure a lot out and make my own tools, but I got to a happy entry point. 

    As long as we're having fun, Walt... as long as we're Having Fun!!!

  • Dartanbeck Posts: 21,568

    I'm eager to see your next offerings!

    The characters in this first series are really nicely thought out, written, and portrayed. And then the art direction is really, Really good! I love the use of textures, rendering style, product use... it's all a very cohesive story that's delicious to see, nice to listen to, and an overall very nice production from start to finish. Oh man... the way we can tell who is who just by how they express their faces... fantastic! Very identifiable, interesting, and well made.

    Big Fan!

  • wsterdan Posts: 2,344
    edited September 2023

    Dartanbeck said:

    Back to the PoseRecorder idea, like I said before - it suggests using a smart phone for recording the performance, trying not to move the head while performing.

    The promo page has three example videos - one shown here. This is the result to expect from the software, which is really quite good performance capture for something so incredibly inexpensive.

    The reason I'm showing this video: Watch closely - even a few times over. Perhaps also the other two on the store page. Watch the facial performance of the actor and watch the results on the figure. You'll see what I mean that we don't need any embellishment from the actors. Just deliver the lines as if acting.

     

    I plan to buy a bracket for the head that holds the phone directly in front of the face, so that my actors can actually act while delivering the speech if they want/need to. My cast of actors is very small, so this won't be a problem as far as I can tell. I do most of the background people, creatures, monsters and many sound effects, etc., Rosie does Rosie, and my crew from the sound studio said that they would love to do whatever I need - just for the fun of it.

     

    What I love about this product is how well it translates the video performance into facial morph recognition. I mean... it works the morph dials really well! 

    Sounds pretty cool, looking forward to seeing more. I have a plug-in for Cartoon Animator that uses the computer camera to do face tracking, but I find that if I have the computer-generated audio (which, as wolf359 mentioned, is getting better all the time), it's easiest and fastest to just use the face puppeteering instead. Select a character, apply the audio, then the next character, their lines, and so on until everyone has said their lines; then select the first character, select face puppeteering, and start recording. As different people say their lines, you use the mouse to change the character's focus until the scene is complete, then do the same for the next character, and so on. Once you have everyone animated, render out the whole scene, or parts of it to change camera focus, etc. Everything is done in almost real-time.

    -- Walt Sterdan

    Post edited by wsterdan on
  • Dartanbeck Posts: 21,568
    edited September 2023

    EDIT: This reads rather rudely since it was written before I saw the above post. No rudeness or disagreement was meant in this post ;)

    Oh, and I really like the human actors. I haven't seen the Starship series yet - are those the digital voices?

    I bought in on some digital actor AI and have tried it quite a bit. Made some things that I think are kinda cool. But it's just not there yet for what I'd need. So I'm keeping dialog out until I get some stuff recorded. I tried using various methods to add more human feel to the virtual voices, which actually aren't bad to start with, but they are absolutely Nothing like what an actor would do.

    "Okay, you've just landed your ship and walked down the ramp. You see a guard eager to give you grief, so you say"

    Say that to a real actor, and you'll get something So different than if you typed "My credentials are in the aft bay. Come on, I'll take you to them" into a text-to-speech. 

    Post edited by Dartanbeck on
  • Dartanbeck Posts: 21,568
    edited September 2023

    wsterdan said:

    Dartanbeck said:

    Back to the PoseRecorder idea, like I said before - it suggests using a smart phone for recording the performance, trying not to move the head while performing.

    The promo page has three example videos - one shown here. This is the result to expect from the software, which is really quite good performance capture for something so incredibly inexpensive.

    The reason I'm showing this video: Watch closely - even a few times over. Perhaps also the other two on the store page. Watch the facial performance of the actor and watch the results on the figure. You'll see what I mean that we don't need any embellishment from the actors. Just deliver the lines as if acting.

     

    I plan to buy a bracket for the head that holds the phone directly in front of the face, so that my actors can actually act while delivering the speech if they want/need to. My cast of actors is very small, so this won't be a problem as far as I can tell. I do most of the background people, creatures, monsters and many sound effects, etc., Rosie does Rosie, and my crew from the sound studio said that they would love to do whatever I need - just for the fun of it.

     

    What I love about this product is how well it translates the video performance into facial morph recognition. I mean... it works the morph dials really well! 

    Sounds pretty cool, looking forward to seeing more. I have a plug-in for Cartoon Animator that uses the computer camera to do face tracking, but I find that if I have the computer-generated audio (which, as wolf359 mentioned, is getting better all the time), it's easiest and fastest to just use the face puppeteering instead. Select a character, apply the audio, then the next character, their lines, and so on until everyone has said their lines; then select the first character, select face puppeteering, and start recording. As different people say their lines, you use the mouse to change the character's focus until the scene is complete, then do the same for the next character, and so on. Once you have everyone animated, render out the whole scene, or parts of it to change camera focus, etc. Everything is done in almost real-time.

    -- Walt Sterdan

    Wow! Really? That is really cool!

    Oh... do you have aniMate 2? I know that there's an Import Audio on that. I wonder if that would work/give you a waveform? Never tried it. 

    I haven't even done any dialog stuff yet except for tests.

    Post edited by Dartanbeck on
  • Dartanbeck Posts: 21,568

    The lovely part about AI voices is that we can 'just do it'. That's huge right there!

  • wsterdan Posts: 2,344

    Dartanbeck said:

    The lovely part about AI voices is that we can 'just do it'. That's huge right there!

    Very true. I found, with the current cartoon I'm working on, that I had only written a few lines of dialogue when I started recording the voices, and as I heard the opening lines being "read" back to me by the AI generator, I started writing the dialogue for everyone, one line at a time, inside the voice generator, hearing each line read, then doing the next one. In half an hour or so I had the first draft done, but instead of a first written draft, I had a sort of radio play running roughly nine minutes, which I then listened to on my phone a few times the next day and used to visualize how the animation would look, as well as to spot errors here and there that I went in and corrected, deleted, or added to in the next run.
    It's a totally different way of working, for sure, but very liberating.

    -- Walt Sterdan

  • Dartanbeck Posts: 21,568

    For some of my experiments I made more than one 'take' for the spoken lines and blended between them in Resolve, and used the retiming tools in Resolve in an attempt to add more of a human feel. I could see in that experiment how I could easily get lost in working out dialog alone - it was so fun and had some very pleasing results!

    The crazy part comes whenever I bring a line to the real Rosie. The first time I wrote them down. Now I just mention what she should say and she breaks into character. She should have been a Hollywood actress, I swear! So for the Rosie character, my AI attempts are out of the question - but that's a good thing because Rosie will save me a Lot of time and already knows that I like several versions of the same lines, so she just does that instinctively. 

    I have folders full of partially written scripts and ideas, but then I forget about them as I continue to animate. One day I'll get something cohesive together and churn out a video with dialog. I love acting too. I did a lot of it. Used to be a professional monster when I was a teen! Zekiel Raven (everyone tells me it's really Ezekiel, but they're wrong) Lord of the Dark. Zekiel then did the introduction to Tales from the Darkside on our local TV - so I'm eager to get going as well. 

  • wsterdan Posts: 2,344

    Dartanbeck said:

    EDIT: This reads rather rudely since it was written before I saw the above post. No rudeness or disagreement was meant in this post ;)

    Oh, and I really like the human actors. I haven't seen the Starship series yet - are those the digital voices?

    I bought in on some digital actor AI and have tried it quite a bit. Made some things that I think are kinda cool. But it's just not there yet for what I'd need. So I'm keeping dialog out until I get some stuff recorded. I tried using various methods to add more human feel to the virtual voices, which actually aren't bad to start with, but they are absolutely Nothing like what an actor would do.

    "Okay, you've just landed your ship and walked down the ramp. You see a guard eager to give you grief, so you say"

    Say that to a real actor, and you'll get something So different than if you typed "My credentials are in the aft bay. Come on, I'll take you to them" into a text-to-speech. 

    I missed this as we were leap-frogging each other; I don't see any rudeness at all, so no worries.

    I agree that human actors can do a better job, no question, but the AI voices are getting better every day. I had started to record voices for the first Starship Hrimfaxi 2D using two services I'd subscribed to, but then tried the "Ultra-Realistic" voices by PlayHT and jumped over to that immediately, tossing everything I'd already done. I think those voices are among the best I've sampled so far, and overall I was very happy with what I got in the four parts of the 2D Starship Hrimfaxi.

    What's interesting is that, just as with human actors, the first read might not come out quite right, so you ask the human actor (and the AI) to read it again, and possibly again, until they get it "right". The advantage with the human actor is that you can describe how you want them to change it; with the PlayHT actors I don't feel that same amount of control, but what I did find was that every time you ask them to redo the lines, they come back a little differently, and usually I can get what I wanted on the second or third "take". Sometimes I *can't* get what I want with the lines "as is", but I find that I might rewrite the line a little and suddenly they'd nail it.

    Personally, I'm happy with the quality of the voices used in the 2D Starship Hrimfaxi cartoon (you can judge for yourself if they're good enough), and the little bit of higher quality I would get from the human actors is more than balanced by the ability to decide I want to change a few lines at 2:00 am and then get the new lines recorded by 2:15 am.

    Two of the best things about all of the AI voices today is that they're all getting better as time goes on, and that they all have free trials available.

    -- Walt Sterdan

  • wsterdanwsterdan Posts: 2,344

    Dartanbeck said:

    For some of my experiments I made more than one 'take' for the spoken lines and blended between them in Resolve, and used the retiming tools in Resolve in an attempt to add more of a human feel. I could see in that experiment how I could easily get lost in working out dialog alone - it was so fun and had some very pleasing results!

    The crazy part comes whenever I bring a line to the real Rosie. The first time I wrote them down. Now I just mention what she should say and she breaks into character. She should have been a Hollywood actress, I swear! So for the Rosie character, my AI attempts are out of the question - but that's a good thing because Rosie will save me a Lot of time and already knows that I like several versions of the same lines, so she just does that instinctively. 

    I have folders full of partially written scripts and ideas, but then I forget about them as I continue to animate. One day I'll get something cohesive together and churn out a video with dialog. I love acting too. I did a lot of it. Used to be a professional monster when I was a teen! Zekiel Raven (everyone tells me it's really Ezekiel, but they're wrong) Lord of the Dark. Zekiel then did the introduction to Tales from the Darkside on our local TV - so I'm eager to get going as well. 

    Having an actor that's both good and already knows what you want is awesome, it really takes a lot of the work out of it.

    For the human actors, I wrote a basic script and my daughter took care of tracking down the actors and getting the voice work done. The voices were recorded by them back in early 2021, but I never had time while I was working to use them. When I finally finished and posted my first 3D animations, I decided to do a third part to at least finish this story, but we've only heard back from two of the actors to date, so finishing the story may take a bit of a different format.

    -- Walt Sterdan

  • DartanbeckDartanbeck Posts: 21,568

    Fantastic! I'm in the middle of an animation, so I haven't watched the Starship Hrimfaxi series yet, but I will very soon.

    The cost is a bit out of my reach currently. My blown-out back and nervous system forced me into early retirement without being well insured or financially ready to do so, so... yeah.

    At least I can still play my drums. That brings in enough for a second helping of beans at least! LOL

     

    But really. That's not a bad price tag if it gets used at all. And the fluctuations are pretty darned cool. I have Talkia. I was told about it at a time when I could get a lifetime subscription for a single fee to help get them financed. It works pretty well and I haven't checked for updates in quite some time.

     

    Back to the original topic, using your (Wolf's) Puppeteer technique seems to make the 32-bit Mimic a real working solution. I could do the same using my FACS, perhaps along with Puppeteer - though I really enjoy adjusting it manually. I don't know if you have my course, but I made special controls for the FACS system to make it easier for dial-spinning me. I can't get used to that gizmo to save my life! LOL

    While I really am looking forward to employing PoseRecorder for real, I still have Anilip 2 and Mimic for 32-bit DS to fall back on. Anilip 2 has some impressive features to work with. Their promo could use the help of an animator for the rest of the motions, but it's pretty cool how editable the lip control can be - which is what I always loved about Mimic.

    The initial speech in this has some pretty iffy lip sync going on, but the software is capable of getting it better. The render is great but the mouth looks funny. After he's done, the lady has a bit better lip sync with a scarier voice. As for the visemes she speaks of in the beginning... PD Howler comes with some pretty cool versions of those for hand-drawn 2D animations. It includes a full set in at least two styles of cartoon, though I think there might be a third as well. Anyway, like I said before, this promo could look more realistic if an animator did the animation for everything but the lips. Not just head movements - expressives and FACS can really make the software-created lip motions come alive.

  • wsterdanwsterdan Posts: 2,344

    I checked out Talkia; it's pretty decent, and the ability to adjust the different speech parameters definitely pumps it up a notch or two.

    As a Mac user, Anilip 2 isn't an option for me, though, same as Mimic Live.

    I'm slowly catching up on your YouTube postings (many of which I've seen before), but between you, Wendy and wolf359, a guy's day can be swallowed up pretty quickly!

    Thanks for sharing.

    -- Walt Sterdan

  • DartanbeckDartanbeck Posts: 21,568

    It really looks to me like you've got a great speech system going already.

    Yeah... hard to find time to watch everything there is to watch! I've been a fan of Wolf's stuff for many years now.
