At home iray server. Is it possible?
milliethegreat
Posts: 304
I have a bunch of old (yes, still Iray-compatible) GPU servers, and I was wondering if I could use one (or all three) as an Iray Server instead of renting one, since I already have them. Is that possible, and if so, how?
Comments
You would need the Iray Server application, which I believe is $299 per machine per year. You could have each machine rendering a different scene independently, however.
Do they have a monthly subscription?
Well, something has changed.
I just went to the Iray Server site, and apparently it's 'free' now; you just have to provide your name, email, and some other info.
I can get it running locally, single system, but haven't quite figured out how to get cluster mode enabled or connect to any other machine on my network.
Anybody playing with the current version of iray server know how to do this?
EDIT/UPDATE:
Did some testing with stand-alone mode, and the overhead isn't going to be worth it.
I lost too much VRAM on my GPUs: 384MB on my M40s and 559MB on my P40, plus 200MB on the video-out card in my main workstation, a Quadro M4000.
Since I regularly do scenes that are right at the edge of the GPUs' VRAM, the time it might save me, compared to opening a DS instance and loading the scene on my render box, isn't worth having a render fall back to CPU, especially when I only have a quad-core in one of my render boxes.
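As a quick illustration of how tight that gets, here's a back-of-the-envelope check in plain Python; the overhead numbers are just my measurements above, and the scene sizes are made-up examples of near-limit scenes, not anything the server reports:

# Rough headroom check (values in MB): does a scene still fit on the card
# once Iray Server's idle overhead is sitting in VRAM?
def fits_on_gpu(scene_vram, card_total, server_overhead, reserve=0):
    return scene_vram + server_overhead + reserve <= card_total

# P40: 24GB card, ~559MB lost to Iray Server, hypothetical near-limit scene
print(fits_on_gpu(scene_vram=22000, card_total=24576, server_overhead=559))
# M4000: 8GB video-out card, ~200MB overhead; an 8000MB scene no longer fits
print(fits_on_gpu(scene_vram=8000, card_total=8192, server_overhead=200))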
What M40 and P40 are you using? How many? How big are your scenes? Depending on the scene and the type of M40 and P40 (and, in my case, also what type of Quadro), 2x 12GB Quadros and 8x 24GB M40s should be plenty, as that's a total of 216GB of VRAM. If that's not enough for you, you're either doing professional productions or horrible at scene optimization. That's plenty (if not overkill) for me.
A little late in my reply, but yes, I have done this. I haven't played around too much with the cluster mode since I'm not doing animation streaming, but I have set up a local "farm" using the farming mode consisting of two of my GPUs, one on the same box where I run Studio and one on a separate PC on my local network. The two machines share a folder where result files are dropped by the iRay servers. I turn the CPU render off on both of them, and let the "manager" node handle farming the work out to the next available GPU.
So far, it hasn't been all that bad. Granted, my main Studio machine has an RTX 4090, so I have plenty of overhead to run Studio and the iRay Server master on the same box. Basically you send the scene to the iRay Server master and let it figure out which of your GPUs to render on. I've also written a batch script to use the iRay Server bridge to submit a bunch of scenes to the farm at once, including submitting multiple camera angles and animation frames.
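Purely as an illustration of the batching idea (not the actual script), it boils down to a loop over scene/camera combinations; a rough Python sketch, where submit_to_queue() is only a placeholder for however the job actually gets handed to the Iray Server queue (in my case, the Studio bridge does that part):

from itertools import product
from pathlib import Path

# Hypothetical sketch: one render job per scene/camera combination.
# submit_to_queue() is a placeholder, not a real Iray Server API call.
scenes = sorted(Path("D:/render_queue").glob("*.duf"))
cameras = ["Camera_Main", "Camera_Closeup"]

def submit_to_queue(scene, camera):
    print(f"queueing {scene.name} with {camera}")  # replace with real submission

for scene, camera in product(scenes, cameras):
    submit_to_queue(scene, camera)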
As long as the scenes themselves are not horrendous in the amount of VRAM they consume, and you have a little bit of VRAM overhead for normal ops, this setup seems to work decently, though at times it can slow things down depending on what I'm working on in Studio. My normal workflow is to leave the iRay Server queue itself turned off and just batch up the scenes during the day and then when I'm ready to stop work, turn the queue on and let it crunch through the night. It is surprisingly quick for most of the scenes that I work on, even relatively complex ones -- though I'll be the first to admit it could just be dumb luck. :-)
The setup is not that difficult, but there are some gotchas that one needs to be aware of. The documentation isn't exactly clear, and it doesn't always match the version of the server. One of the more complex issues was making sure that the iRay server was only attempting to give work to a worker of the same protocol version as the DAZ iRay bridge. If you don't do this, and your iRay server has been through a few updates, it will sit there and try all the previous protocols until it finds the right one, which makes it look like it's hung or crashed. Adding an argument to the shortcut that starts the server fixed that, so no problems on that score.
All in all, it's mostly a stable solution, and I'm quite pleased with how much it has improved my workflow now that I can continue to use Studio while the rendering is happening somewhere else.
Circling back around to this and doing a bit more in-depth benchmarking: IMHO it's fine if you have multiple computers and/or multiple GPUs in a single system, and the overhead to handle it, but otherwise it can be rather problematic.
My usual caveat of "your results may vary" applies.
Test systems: Dell T5810, Xeon E5-1630 v3 (4c/8t @ 3.7GHz), 64GB DDR4 (2x32GB 2133MHz ECC), M4000 (8GB, video out), P40 (24GB, render only).
DL380 G8, Xeon E5-2680 v2 (10c/20t, 20c/40t total, @ 2.8GHz), 128GB DDR3 (16x8GB 1333MHz ECC), K2200 (4GB, video out), 4x M40 (12GB, render only).
All systems running Server 2019, driver 551.61, Studio version 4.22.0.16, Iray Server 2024.0.4-377400.3959.
The biggest problem is the additional overhead.
In the latest version of the server, it's worse than what I had previously.
Starting Iray Server takes ~367MB of VRAM off my M4000 and 756MB off my P40, and uses 1.8GB of system RAM on my T5810.
For my DL380: 266MB lost on the K2200, 505MB on the M40, with 865MB of system RAM used.
Not that big a deal, if you're judicious in resource management.
Except for the memory leak problem.
For the unaware, Iray doesn't fully release VRAM after a render is completed and saved, cancelled, or after switching modes if running in Realtime mode. In my case, the P40 generally reports 2430MB of VRAM unreleased for Studio Iray, and 3186MB for Iray Server, via SMI (the NVIDIA command-line utility). This isn't much of an issue in my case, as the total usage doesn't increase when I render, unless I've changed something in the scene.
For instance, in Studio, if I have a scene that takes up 2770MB at render, it won't go above that if I run the same scene again. The same applies to Iray Server.
The issue is if you run a render on the server, then in Studio, or vice versa. In that case, it can push your GPU over its limit.
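If you want to keep an eye on this yourself, nvidia-smi can report per-GPU memory from the command line; a small Python wrapper might look like this (the query fields are standard nvidia-smi options):

import subprocess

# Poll per-GPU memory via nvidia-smi (the SMI utility mentioned above)
# to spot VRAM that Studio Iray or Iray Server hasn't released.
out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=index,name,memory.used,memory.total",
     "--format=csv,noheader,nounits"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.strip().splitlines():
    idx, name, used, total = [field.strip() for field in line.split(",")]
    print(f"GPU {idx} ({name}): {used} / {total} MB in use")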
In a couple of tests, I set up a scene that, when rendered, took up 21500MB of VRAM in Studio Iray and 22266MB in Iray Server. Pretty close to the 24570MB of the GPU.
If I rendered it in Studio or the server, then in the other, the total unreleased VRAM jumped to 5434MB, and any subsequent render attempt, in either, would drop to CPU or fail.
Even if I closed Iray Server, Studio Iray would still fall back/fail and had to be closed out to get GPU rendering re-enabled. Iray Server would render a pending job in the queue on the GPU after Studio was closed out.
Side note: if the command prompt for Iray Server crashes, or you manually close it while connected to Studio, it may crash Studio.
As for Iray Server being faster: at least in my tests, not so much.
While Iray Server may report significantly faster render times, such as 39 seconds for the 22GB scene compared to 3 minutes 3.55 seconds in Studio (per the log), it's not accounting for processing/load time the way Studio does.
When I accounted for that with a stopwatch, the total render time was 5m 5.77s, over 2 minutes slower.
The Studio log was basically useless for determining processing time, with only about 1 minute reported, compared to the 3 minutes on the popout. The command prompt was a bit more useful, with a total time of 2m 3s based on the time indexes.
Even just taking the reported times into account, it's a wash.
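If you want to make the comparison yourself, time the whole round trip with a wall clock instead of trusting the reported render time; a minimal Python sketch, where render_job() stands in for however you actually kick off and wait on the render:

import time

def render_job():
    # Placeholder: submit the scene and block until the result file lands.
    pass

start = time.monotonic()
render_job()
elapsed = time.monotonic() - start
print(f"wall-clock total (load + render + save): {elapsed / 60:.2f} min")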
Lastly, network rendering.
There are a few different ways you can use Iray Server for network rendering.
Stand-alone mode. In stand-alone mode, you connect directly to the Iray Server on the render server through the Studio bridge. Iray Server does not need to be running on any other machine.
Cluster mode. This enables you to use all your machines together for 'faster' rendering. In this case, you'll need Iray Server running on all systems to be used, with one set as the master, preferably your main workstation/computer.
Farming mode. In this mode, each instance of Iray Server only renders a single frame, as opposed to them working together on a particular frame. Like cluster mode, you'll need to have Iray Server running on all systems to be used.
Regardless of which mode you run it in, you'll still have the additional overhead of running Iray Server, as I covered earlier.
Render times were universally worse for Iray Server compared to a Studio render, with the gap getting worse the more system resources the scene took up at render time.
Using the same 22GB VRAM scene from earlier, rendering over the network from the DL380 to the T5810 added over one minute to the total render time for Iray Server, and over 2 minutes compared to the same scene rendered directly from Studio on the T5810.
3m 3.55s on the P40 in Studio, 5m 5.57s for Iray Server locally on the P40, and 6m 17.63s over the network to the P40.
In conclusion, I'll need to do some more benchmarking, comparing the higher render time of Iray Server to my current method of loading and then rendering on my render server, along with the power usage difference.
The only issue I'd have is not being able to monitor the render as I do with Studio. I'll often catch things I may have missed in the test renders, or need to make adjustments to render settings, such as increasing iterations, which I can't do with Iray Server, AFAIK.
I also need to figure out why having more than one of my M40s connected to the DL380 causes Iray Server to throw an API error.
Regardless of my results or personal decision, I'd say give it a whirl and see if it works for you.
peace folks.