RTX speed increase?

  • #31
    Same here, trying to get away from Facebook...

    Muhammed_Hamed thanks for sharing that here as well. That's very helpful, since the logs will tell you how many samples were taken rather than subdivs. Now I can easily convert!
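The conversion mentioned above is just a square: a subdivs value of N corresponds to up to N² samples, which is what the log reports. A minimal sketch (the function names are mine, not part of any V-Ray API):

```python
# V-Ray's subdivs value N corresponds to N^2 actual samples,
# which is the number the render log reports.
def subdivs_to_samples(subdivs: int) -> int:
    return subdivs * subdivs

def samples_to_subdivs(samples: float) -> float:
    # Inverse: the subdivs value you would set for a target sample count.
    return samples ** 0.5

print(subdivs_to_samples(8))   # 64
print(samples_to_subdivs(64))  # 8.0
```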
    --=============--
    -DW
    -buck.co
    --=============--



    • #32
      Glad this helped.
      I think the thought behind having a samples limit in 3ds Max is to make render settings easier (especially for new users).
      It used Max subdivs in the past, similar to all other V-Ray plugins.
      And again, sampling in V-Ray is quite easy these days; I use the same settings for pretty much all scenes.
      Muhammed Hamed
      V-Ray GPU product specialist


      chaos.com



      • #33
        Originally posted by francomanko View Post
        So does anyone have a scene where they have seen a noticeable speed increase using RTX over CUDA? Before getting my 2080Ti I was reading that you could expect up to a 30% increase, and I know it's scene specific, but I would have thought I would see some increase.
        It is at least 50% faster for my interior scenes, and yeah, I don't use Hybrid rendering anymore on my machine.
        2x Titan RTXs in RTX mode are quite fast.

        And yeah, things like fur or grass will benefit a lot from RTX; this part of V-Ray GPU has been rewritten to use the RT cores. Same for interior scenes where you have a lot of bounces.
        I can send you a scene that shows over a 50% speed increase, let me know.
        Muhammed Hamed
        V-Ray GPU product specialist


        chaos.com



        • #34
          Originally posted by seandunderdale View Post
          I've found some serious slowdowns on GPU using stochastic flakes, but for more simple stuff, with denoiser, it can be really quick.
          I think this is a bug; I came across this multiple times. Some setups with stochastic flakes can make V-Ray slow, especially the LC pass.
          I will report this in the GPU forum when I get a chance.
          Muhammed Hamed
          V-Ray GPU product specialist


          chaos.com



          • #35
            Originally posted by Joelaff View Post
            But we really NEED independent user control over bucket sizes for GPU, with independent control for the GPU bucket size and the CPU bucket size. The default bucket sizes are pretty absurd
            They use big buckets for GPU for maximum GPU utilization, and they don't allow user control here on purpose (I tested that in Cycles as well; big buckets are necessary for GPU rendering, it is just how it works). Still, it would be nice to control the CPU buckets at least (when using Hybrid).
            Oddly, in V-Ray for SketchUp users can control GPU bucket size, so this is definitely possible.

            Muhammed Hamed
            V-Ray GPU product specialist


            chaos.com



            • #36
              Originally posted by Karol.Osinski View Post
              Just the fact it often takes AGES to load IPR or Production render in CUDA / RTX drives me crazy...
              The GPU IPR starts instantly for me, similar to Octane or FStorm; you guys need to follow up with support to see what is causing the slowdowns in your scenes.
              It is important to follow along and make sure the issues are solved.
              I have been using GPU for most projects since 2017. There are issues, but I think it is in a pretty good spot right now; it is fair to say it is production ready, at least for what we do.
              Muhammed Hamed
              V-Ray GPU product specialist


              chaos.com



              • #37
                Originally posted by Muhammed_Hamed View Post
                They use big buckets for GPU for maximum GPU utilization, and they don't allow user control here on purpose (I tested that in Cycles as well; big buckets are necessary for GPU rendering, it is just how it works). Still, it would be nice to control the CPU buckets at least (when using Hybrid).
                Oddly, in V-Ray for SketchUp users can control GPU bucket size, so this is definitely possible.
                I think the user actually knows best what bucket size to use. Fixed, hard-coded sizes are not useful. Each scene and each graphics card requires its own tuning. A one-size-fits-all approach does not work with a heterogeneous render farm.

                Sure, smaller may be less efficient, but if you are waiting on a big block to finish then you lose any advantage. Efficiency has to be viewed based on the total time per frame, not time per bucket.

                I want more user control. As a pilot, I know the more control I have over every aspect of my plane, the better. The same applies to my software.



                • #38
                  Originally posted by Joelaff View Post
                  I think the user actually knows best what bucket size to use
                  I disagree. This is something that users mess up all the time in other engines; how would people know that bigger buckets work best for GPU rendering?
                  I think the right call is to hide these controls, honestly. I talked to Blago about this when they first released bucket mode; there is just no point.
                  In Cycles I only use a 256 bucket size. I have never changed that for any configuration, and their docs recommend against it.

                  Originally posted by Joelaff View Post
                  Each scene and each graphics card requires its own tuning
                  This is not how it works; how did you come to this conclusion?

                  Originally posted by Joelaff View Post
                  I want more user control. As a pilot I know the more I have control over every aspect of my plane the better.
                  We can agree to disagree here as well; I prefer a cleaner UI. V-Ray has been about ease of use recently, which is great.
                  I'm glad the devs did a cleanup of the render settings with V-Ray 5, removing min shading rate, for example, and the local subdivs controls. These are settings I haven't touched in years.
                  Muhammed Hamed
                  V-Ray GPU product specialist


                  chaos.com



                  • #39
                    Originally posted by Muhammed_Hamed View Post
                    I disagree. This is something that users mess up all the time in other engines; how would people know that bigger buckets work best for GPU rendering?
                    I think the right call is to hide these controls, honestly. I talked to Blago about this when they first released bucket mode; there is just no point.
                    In Cycles I only use a 256 bucket size. I have never changed that for any configuration, and their docs recommend against it.


                    This is not how it works; how did you come to this conclusion?
                    Let me paint a scenario...

                    For simplicity let's say you have a render farm with the following nodes:

                    1. A ThreadRipper and a 2080Ti.

                    2. A Dual Core Xeon and a 780Ti

                    3. A 3-year-old Intel processor and a 1080Ti

                    You are using hybrid CUDA (because on those ThreadRippers, in my tests, CUDA is ALWAYS faster than RTX...), because not all machines have RTX cards, and because the cards they have are not all faster than the CPUs.

                    If you use bucket mode and enable the GPUs and the CPUs in hybrid CUDA here is what happens.

                    The GPUs all use the absurdly large 256-pixel bucket size (correct me if that is not the default, but it is big). The CPUs use 32, I think; whatever the hard-coded default is (programmers should see "hard-coded value" and cringe!).

                    On Machine 1 the frame finishes the fastest of course. The default bucket sizes work pretty well. All seems good. Great default values! But...

                    On Machine 2 the render hangs on the last buckets (or perhaps even the FIRST buckets) from the GPU because they are absurdly large. This machine takes way longer to render than it should because we are waiting for those last GPU buckets to render. We could disable the GPU and just use the CPU, but now we are wasting the perfectly good (albeit not lightning fast) GPU.

                    On Machine 3 we end up waiting on the CPU buckets. Since the GPU buckets were so absurdly large to begin with, we end up with regions where there is no more room for the giant buckets and they must be finished with the CPU. Maybe it will intelligently subdivide the GPU buckets (I don't recall), but let's say the CPU buckets are arranged such that there are no spaces large enough for the GPU buckets.


                    So we have two machines running very inefficiently. If the user could tune the bucket sizes, then we could finish those frames faster. Ideally this is something like a config file or environment variable per node that is a multiplier. For Machine 2, for instance, you could have the bucket size always be 0.25x (1/4 the size) of the bucket size set in the scene. Note that these two options (config file or environment variable) add ZERO clutter anywhere, because the average user who only has one or two machines doesn't even know about them.
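The per-node multiplier proposed here could be as small as this sketch; VRAY_BUCKET_SCALE is an invented variable name for illustration, not an actual V-Ray setting:

```python
import os

# Hypothetical per-node bucket-size multiplier read from an environment
# variable. VRAY_BUCKET_SCALE is made up for illustration only.
def node_bucket_size(scene_bucket_px: int, env=None) -> int:
    env = os.environ if env is None else env
    scale = float(env.get("VRAY_BUCKET_SCALE", "1.0"))
    # Clamp so a bad value cannot produce a degenerate bucket size.
    return max(8, min(512, round(scene_bucket_px * scale)))

# A slow node exporting VRAY_BUCKET_SCALE=0.25 would render 64 px
# buckets for a scene whose bucket size is set to 256 px.
```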

                    I still think we need to be able to set the sizes for workstations too, especially when doing DR. If you have ever done DR, even with CPU, it typically works best with very small bucket sizes, because that is what finishes the frame the fastest. Each pixel may take more CPU resources to render, but we are never waiting on that last block from the slowest machine (Last Block Syndrome -- LBS. It is EVIL!). In the case of DR with GPU CUDA hybrid, we are waiting on the last block from the slowest CPU or GPU, and if it's the GPU it will take forever, because the block is absurdly large.
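Last Block Syndrome is easy to show with a toy scheduler. The device speeds below are invented numbers purely for illustration, not benchmarks:

```python
import heapq

def frame_time(width, height, bucket, speeds):
    """Greedy bucket scheduler: whichever device frees up first takes the
    next bucket. Speeds are in pixels/second; returns wall-clock seconds."""
    n_buckets = -(-width // bucket) * -(-height // bucket)  # ceil division
    devices = [(0.0, s) for s in speeds]  # (finish_time, speed)
    heapq.heapify(devices)
    for _ in range(n_buckets):
        t, s = heapq.heappop(devices)
        heapq.heappush(devices, (t + bucket * bucket / s, s))
    return max(t for t, _ in devices)

fast_gpu, weak_cpu = 4e6, 2.5e5  # made-up pixels/sec for a mismatched pair
big = frame_time(1920, 1080, 256, [fast_gpu, weak_cpu])
small = frame_time(1920, 1080, 32, [fast_gpu, weak_cpu])
# With 256 px buckets, a single slow block on the weak device dominates
# the frame; 32 px buckets finish the same frame sooner overall.
```

This is the argument in one number: per-bucket efficiency is irrelevant when total frame time is dominated by the last oversized block on the slowest device.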

                    We can agree to disagree here as well; I prefer a cleaner UI. V-Ray has been about ease of use recently, which is great.
                    I'm glad the devs did a cleanup of the render settings with V-Ray 5, removing min shading rate, for example, and the local subdivs controls. These are settings I haven't touched in years.
                    I know lots of people disagree with me here. For them there are the other display modes in the UI (Basic, Advanced, Expert). If you don't want to see so many controls, then don't change the mode to Expert. It's really that simple. But don't limit users' tools in the name of de-cluttering things. (Man, I have said this a million times.) I have seen replies that "everyone just sets it to Expert." Well, guess what, then it is their problem! (Or really their benefit.)

                    Min Shading Rate is (thankfully) still there for CPU. We tune that on every scene. It sometimes makes very little difference, and sometimes makes a good amount of difference (like with scenes with a lot of DOF (limited focus)).

                    Seriously, I thought the Basic/Advanced/Expert thing was a brilliant UI compromise that was going to let us keep all of the settings while placating people who thought the interface was cluttered. They could add a "Gimme All The Switches" setting; just don't remove anything else.




                    • #40
                      Right now, with the hard-coded block sizes, GPU DR mode is useless to many users in bucket mode (as outlined above).

                      In fact, even for frame rendering (normal farm rendering), it is useless if you have slower GPUs at all. You are forced to disable them completely, and then what is the point of using GPU vs. CPU? Sure, we could buy like 40 RTX cards for $60k, LOL.
                      Last edited by Joelaff; 11-08-2020, 04:15 PM.



                      • #41
                        can you buy me 2?
                        e: info@adriandenne.com
                        w: www.adriandenne.com



                        • #42
                          Originally posted by francomanko View Post
                          can you buy me 2?
                          I would if I could. I mean 62 is no biggie once you buy 60.



                          • #43
                            Hey, if it brings your expenses up high enough, it might knock your profits down below Autodesk's 100k per year and save you a fortune on Max licenses.



                            • #44
                              Originally posted by joconnell View Post
                              Hey, if it brings your expenses up high enough, it might knock your profits down below Autodesk's 100k per year and save you a fortune on Max licenses.

                              Rotfl! That was funny!



                              • #45
                                Originally posted by Joelaff View Post
                                Let me paint a scenario...

                                For simplicity let's say you have a render farm with the following nodes:

                                1. A ThreadRipper and a 2080Ti.

                                2. A Dual Core Xeon and a 780Ti

                                3. A 3 year old Intel Proc and a 1080Ti
                                I'm not a big fan of this setup, and using DR on a single frame for GPU rendering is not a good idea.

                                You should render a frame per device to avoid stuck buckets, and this way you will be able to use the 2080Ti in RTX mode.
                                So a frame on the ThreadRipper, a different frame on the 2080Ti, a frame on the Dual Xeons, and a frame on the 1080Ti.

                                This is how we render our animations; we have 7x 2080Tis per node, and each card works on its own frame simultaneously.
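The frame-per-device scheme described here is essentially a frame queue instead of DR on a single frame. A minimal sketch (the device names are just labels):

```python
from itertools import cycle

# Frame-per-device scheduling: each render device gets whole frames,
# so no device ever waits on another device's bucket.
def assign_frames(frames, devices):
    plan = {d: [] for d in devices}
    for frame, device in zip(frames, cycle(devices)):
        plan[device].append(frame)
    return plan

plan = assign_frames(range(1, 9), ["ThreadRipper", "2080Ti", "DualXeon", "1080Ti"])
# plan["2080Ti"] -> [2, 6]
```

In practice a render manager does this dynamically (each device pulls the next frame when it goes idle), which also balances mismatched device speeds automatically; the round-robin above is just the simplest static version.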

                                On another note, there is no point using a 780Ti in GPU rendering; it will only cause issues. It has 3 GB of VRAM (Windows probably takes half of that) and it doesn't support the new Studio drivers.
                                Same goes for the old CPU in the third machine; we only use Hybrid if it is worth the trouble (on stronger CPUs, because it can slow down rendering a lot).

                                Originally posted by Joelaff View Post
                                If you have ever done DR even with CPU it typically works best with very small bucket sizes because this is what finishes the frame the fastest
                                Using smaller buckets with GPUs is quite bad; your GPUs will not be at 100% load, which is a bigger issue in my view.
                                But fair enough, if you have an outdated GPU like the 780Ti you might want to control GPU bucket size?
                                With this kind of hardware you should reconsider using GPU rendering, honestly, because of the memory limitation (3 GB in your case), and you give up a lot of features that the CPU engine has.

                                Not telling you what to do here, but a 2060 Super is 400 euros; you can stack 4 of these in one of your systems and stick to GPU rendering in RTX mode for good.
                                That is going to score around 900 in the V-Ray benchmark, twice as fast as your 3 machines combined, and you will not need to bother with Hybrid rendering (you don't need to spend 60k on hardware to use GPU rendering).

                                Muhammed Hamed
                                V-Ray GPU product specialist


                                chaos.com

