Disable Swarm on local machine whilst Rendering

Collapse
X
 
  • Filter
  • Time
  • Show
Clear All
new posts

  • Disable Swarm on local machine whilst Rendering

    Problem: Currently, when multiple people are rendering, the machines will render to each other (as they are also part of the swarm).

    You then get a situation like this: PC1 is rendering with help from PC2-5, then PC2 starts its own render, which uses PC1 and PC6-10. The problem is that PC1 is already rendering, but because it isn't marked as rendering on the swarm, the swarm allocates it to PC2. That PC then has two renders being processed on it. This causes problems for PC2 because it gets allocated segments (I don't know exactly how it works, technically speaking), but it's as if those threads are pulled away from the swarm process to focus on the local one. This leaves sections of the render unprocessed, and because PC2 hasn't actually lost the connection to PC1, it doesn't try to switch them over to a different machine.
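    To make the scenario above concrete, here is a small illustrative sketch (not V-Ray code; all names are made up) of why the "double booking" happens when the allocator only knows about the renders it started itself:

    ```python
    def allocate(candidates, swarm_busy, count):
        """Naive allocator: skips only nodes the swarm itself marked busy."""
        free = [n for n in candidates if n not in swarm_busy]
        return free[:count]

    nodes = [f"PC{i}" for i in range(1, 11)]

    # PC1 starts a local render; the swarm hands it helpers PC2-PC5.
    swarm_busy = set()
    helpers_for_pc1 = allocate([n for n in nodes if n != "PC1"], swarm_busy, 4)
    swarm_busy.update(helpers_for_pc1)   # PC2-PC5 are marked busy by the swarm...
    locally_rendering = {"PC1"}          # ...but PC1's own render is invisible to it

    # PC2 starts a render next; the allocator happily hands out PC1 again.
    helpers_for_pc2 = allocate([n for n in nodes if n != "PC2"], swarm_busy, 5)
    print(helpers_for_pc2)  # → ['PC1', 'PC6', 'PC7', 'PC8', 'PC9']

    # PC1 is now allocated by the swarm while already rendering locally: double-booked.
    double_booked = set(helpers_for_pc2) & locally_rendering
    print(double_booked)    # → {'PC1'}
    ```

    Tracking local render processes in addition to swarm-started ones (i.e. adding `locally_rendering` to the exclusion set) would avoid the double booking entirely.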

    The only way I have found to resolve this problem so far is to go into the Swarm network management and disable that PC, so the swarm switches over to one of the other PCs.

    The more people rendering on the network, the worse it gets. In our situation we have 20 3D designers, any of whom could be rendering at any time, especially when deadlines are due.

    Now that I've got that awfully confusing situation out of the way, here's what I would like to see happen.

    Suggestion: Automatically disable swarm on the local PC when rendering.

    It can be done manually simply by going into the Start menu, selecting V-Ray Swarm UI, and toggling the "Enable/Disable" switch, so I can't imagine it would be difficult to build into V-Ray itself. This would stop the machine from being "double booked" by the swarm.
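    The suggested behavior could be sketched as a simple wrapper that disables the local node for the duration of a render and always re-enables it afterwards. This is a hedged illustration only: `set_swarm_enabled` is a placeholder for whatever mechanism your setup actually uses (the Swarm UI toggle, or stopping the service), not a real V-Ray API.

    ```python
    from contextlib import contextmanager

    swarm_state = {"enabled": True}  # stand-in for the real node state

    def set_swarm_enabled(enabled: bool) -> None:
        # Placeholder: in practice this would flip the actual Swarm node toggle.
        swarm_state["enabled"] = enabled

    @contextmanager
    def local_swarm_disabled():
        set_swarm_enabled(False)     # "un-book" this machine before rendering
        try:
            yield
        finally:
            set_swarm_enabled(True)  # always re-enable, even if the render fails

    with local_swarm_disabled():
        # Safe to start the local render here: this node is out of the pool.
        assert swarm_state["enabled"] is False
    # Back in the pool automatically -- no one has to remember to re-enable it.
    print(swarm_state["enabled"])  # → True
    ```

    The `finally` clause is the point: it also addresses the "forget to re-enable the nodes" problem mentioned below.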

    As the IT manager for the company, I am constantly keeping an eye on who is rendering and manually disabling/enabling the swarm nodes so they experience the least problems possible. But I can't catch them all, and I can't watch the swarm constantly all day. It would be fantastic for it to be able to manage itself.

    I am aware that you can toggle "Cap CPU Utilization"; however, due to other issues, which I will likely outline in another post, the swarm doesn't kick in 100% reliably. When it doesn't, the render is left to run on a single thread, which is any designer's worst nightmare.

    To expand on my suggestion: it would be awesome to be able to see in the V-Ray Swarm network which devices were automatically disabled and which were manually disabled. Some of our designers have watched how I fix this issue and gone about doing it themselves, only to forget to re-enable the nodes.


    So yeah, I'd be interested to know if anyone who reads this has the same issue and how they get around it.





  • #2
    Agreed. As a workaround, I've manually overridden the core count on all my workstation swarm slaves so that Swarm maxes out at 50% CPU usage. Most people won't really notice this, so it ends up working out OK. It's not great and not ideal, though, as I have to manually change it back if I need maximum compute resources in a pinch and that person has gone for the day.


    • #3
      Hmm, that's an interesting idea. I hadn't thought of it, to be honest. We have about 40 machines in our swarm now, and I just can't fathom losing half of the threads. Maybe I'm just being greedy? At the end of the day, I'd rather the designers have a smooth experience using 200 threads between them than a rocky and frustrating experience using 400. Food for thought.


      • #4
        Hello!

        Thanks for the feedback.

        As you pointed out, the problem is that there is more than one render process on the machine. V-Ray Swarm tracks only the processes that it started, so if there is another render process, the behavior described above occurs. At the moment, the workaround suggested by delineator is OK, and we plan to improve this workflow in future releases.

        Kind regards,
        Pavel Angelov
