Is there a roadmap to fix the world centre offset rendering issues?

Hi

I'm still finding my way around Daz so please forgive me if I get the terminology wrong!

I've run into the issue where certain textures or transparencies don't render correctly the further the character is from the world origin point. I know the work around now, but is there any plan to fix the issue?

I've searched around but can't find anything.

Thanks,

Rich.

Comments

  • Togire Posts: 408

    You mean fix the problem in Iray, by NVIDIA? Probably never. This is not really a bug, but a limitation inherent to the way the data is encoded.

    Data in a given encoding has finite precision. For instance, assume it can hold three digits after the decimal point. If two objects one metre from the origin are separated by 1 mm, the machine stores them as 1.000 and 1.001; the values are encoded differently and the processing can be correct. If they are 10 m from the origin, the machine encodes their distances as 1.0000 × 10 and 1.0001 × 10 (only one digit is allowed to the left of the decimal point). But since only three digits after the decimal point are kept, both distances are stored as 1.000 × 10 and the objects collide.
    The only simple solution offered by the underlying encoding (IEEE 754) is to double the number of bits to gain more decimal places. But this has the drawback of doubling the memory footprint and the transfer time between memory and the arithmetic units, which hurts processing time. I'm not sure users would be happy with a "fix" that roughly halves the effective VRAM and increases render times.
    Other solutions would require completely rethinking the way images are rendered, but that is much more complex and probably not on NVIDIA's roadmap.
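    The decimal analogy above maps directly onto binary float32. A small illustrative sketch (using NumPy; the numbers and array sizes are arbitrary, not anything from Iray) shows how the spacing between representable values grows with distance from the origin, and how the "simple fix" of doubling precision doubles memory:

    ```python
    import numpy as np

    # Spacing between adjacent representable float32 values (the "ULP")
    # grows with magnitude: tiny near the origin, coarse far away.
    near = np.spacing(np.float32(1.0))       # on the order of 1e-7
    far = np.spacing(np.float32(100000.0))   # on the order of 1e-2

    # A 0.001-unit offset survives near the origin but vanishes far out:
    assert np.float32(1.0) + np.float32(0.001) != np.float32(1.0)
    assert np.float32(100000.0) + np.float32(0.001) == np.float32(100000.0)

    # And moving vertex data to IEEE 754 double precision literally
    # doubles the storage it occupies:
    verts32 = np.zeros((1000, 3), dtype=np.float32)
    verts64 = verts32.astype(np.float64)
    assert verts64.nbytes == 2 * verts32.nbytes
    ```

    The two assertions in the middle are the "objects collide" case from the explanation above: the same 0.001 separation that is representable near the origin rounds away entirely at large coordinates.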

  • DustRider Posts: 2,717

    It's an issue with Nvidia Iray, so it's not something DAZ can fix. You would need to check Nvidia's website to see if a fix is proposed for the future. My guess is there isn't one planned (but I could be wrong), since it seems to be inherent to the way the render engine is designed. Changing the behavior (coordinate precision/rounding) might slow Iray down quite a bit, even when the coordinates are near the world center.

  • PerttiA Posts: 10,015

    It's about how much memory is used to store the location of each vertex; "fixing" it would mean doubling the amount of memory used, so it's highly unlikely it will be "fixed".

    It's also not about DS or even Iray; the first time I encountered this problem (floating-point limitations) was in the '90s with AutoCAD.

  • Hi

    Thanks all. I understand the issue now (thanks Alain, excellent description). It's a bit annoying, but I can see why it's unlikely to get fixed!

    Rich

  • stefan.hums Posts: 132

    alainmerigot said:

    You mean fix the problem in Iray, by NVIDIA? Probably never. This is not really a bug, but a limitation inherent to the way the data is encoded.

    Data in a given encoding has finite precision. For instance, assume it can hold three digits after the decimal point. If two objects one metre from the origin are separated by 1 mm, the machine stores them as 1.000 and 1.001; the values are encoded differently and the processing can be correct. If they are 10 m from the origin, the machine encodes their distances as 1.0000 × 10 and 1.0001 × 10 (only one digit is allowed to the left of the decimal point). But since only three digits after the decimal point are kept, both distances are stored as 1.000 × 10 and the objects collide.
    The only simple solution offered by the underlying encoding (IEEE 754) is to double the number of bits to gain more decimal places. But this has the drawback of doubling the memory footprint and the transfer time between memory and the arithmetic units, which hurts processing time. I'm not sure users would be happy with a "fix" that roughly halves the effective VRAM and increases render times.
    Other solutions would require completely rethinking the way images are rendered, but that is much more complex and probably not on NVIDIA's roadmap.

    Well, it is less a problem of doubling the number of bits and the resulting "loss" of VRAM. The real obstacle to using double-precision floating-point arithmetic (FP64) is the architecture of the GPU cores: they are built and optimized for single precision (FP32), and so is Iray. An FP64 version of Iray would not solve the problem either, because at FP64 the GPU cores become extremely inefficient; on NVIDIA's Ampere architecture in particular, FP64 throughput is 1/64 of FP32. Any 8-core mainstream CPU would laugh at the FP64 "performance" of an RTX 3090 and finish the render in less time. ;)
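    For completeness, a quick NumPy check (illustrative only; the coordinates are made up) confirms the point above: double precision would keep such points distinct just fine, so the obstacle is hardware throughput, not representability.

    ```python
    import numpy as np

    a, b = 100000.0, 100000.001  # two points 0.001 units apart, far from origin

    # In float32 (FP32) the two positions collapse to the same value...
    assert np.float32(a) == np.float32(b)

    # ...while float64 (FP64) keeps them distinct. The catch, as noted
    # above, is that consumer GPUs run FP64 at a small fraction of FP32 speed.
    assert np.float64(a) != np.float64(b)
    ```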

  • The simple solution is to position whatever you're rendering at or near the origin. Given that there's really no fix for this, it's reason enough to opt for large environments that are modular and can be shifted around, so that you can move the modules instead of moving a character thousands of units away from the origin to render them in a specific spot.
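  • The recentering workaround can be sketched in a few lines of NumPy (names and coordinates are illustrative, not from any Daz API): keep positions in double precision on the application side, subtract a pivot such as the character's location, and only then hand float32 coordinates to the renderer.

    ```python
    import numpy as np

    # Two vertices 0.001 units apart, ~100,000 units from the world origin,
    # stored in double precision on the application side.
    world = np.array([[100000.0,   0.0, 50.0],
                      [100000.001, 0.0, 50.0]])

    # Naive conversion to the renderer's float32: the points collapse.
    naive = world.astype(np.float32)
    assert np.array_equal(naive[0], naive[1])

    # Recentre on a pivot (e.g. the character's location) *before*
    # converting: the separation is preserved near the new origin.
    pivot = world[0]
    local = (world - pivot).astype(np.float32)
    assert not np.array_equal(local[0], local[1])
    ```

    This is exactly what moving the modules instead of the character achieves by hand: the subject stays in the high-precision region near the origin.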
