  • V-Ray Next for Modo: 28 new features, 50 workflow improvements and a new GPU rendering architecture

    In this thread I will discuss the new features and workflow improvements in V-Ray Next for Modo. I will show you how V-Ray Next can optimize your workflow, speed up your renders, and give you amazing-looking renders out of the box.
    There are drastic improvements to GPU rendering in V-Ray Next. A new kernel architecture increases overall GPU performance, doubling it in many situations (examples and comparisons later in this thread). The new Adaptive Dome Light halves the time of your image-based environment lighting with no setup or parameter tweaking. Quality and capability have climbed as well, starting with displacement, which produces better results, renders faster and consumes less memory than before. The new Hair Next shader provides additional controls and realism, and with NVLink a system housing two 2080 Tis provides 22 GB of GPU memory for handling large scenes.
    V-Ray Next also includes a fully redesigned GPU IPR, packed with new features that make look development smoother and more flexible.


    List of all new features,

    -Adaptive Dome Light.
    -New, reworked GPU IPR.
    -Fully featured volumetrics on GPU.
    -Bucket mode on GPU.
    -Cryptomatte support on GPU.
    -Dispersion support on GPU.
    -Metalness option in the V-Ray material for PBR workflows and physically accurate metals.
    -Improved NVLink support for RTX cards (Titan RTX, 2080 Ti and 2080) for memory pooling and rendering big scenes.
    -ALSurface material support on GPU.
    -Curvature texture support on GPU.
    -Procedural support for Bercon noise and V-Ray's native noise on GPU.
    -New physically based hair material with glint and glitter controls.
    -New V-Ray Toon options.
    -Volume Grid support on GPU.

    -V-Ray Volume Scatter shader, which supports random walk SSS like Arnold.
    -VRscans support on GPU, with built-in triplanar support.
    -Glossy Fresnel on GPU, for realism and physically accurate shaders.
    -Updated GPU displacement that consumes less memory, renders faster and produces better results.
    -New Light Grid for faster GI calculations.
    -Rolling shutter option in V-Ray's physical camera.
    -Lighting Analysis render element.
    -NVIDIA OptiX denoiser, which works interactively in the IPR.
    -Denoised render elements, accelerated by GPU.
    -LUT weight and controls in the VFB.
    -New lens effects in the VFB, accelerated by GPU, producing the same quality as Corona or FStorm.
    -Support for GPU rendering on V-Ray Cloud.
    -Production mode for GPU rendering directly inside Modo, enabling Light Cache, animation rendering, bucket mode, etc.
    -New, cleaner UI for render settings and V-Ray's advanced material through Modo's "proficiency levels".
    -New tessellation options for V-Ray Fur.


    Changelog for a total of 77 changes in this release:
    https://drive.google.com/file/d/1ryy...ew?usp=sharing

    The new Adaptive Dome Light

    Image-based lighting involves a lot of sampling, especially for interior scenes. Light typically enters interior spaces through small openings like windows and doors, which makes sampling the dome light with an HDR image very hard. Traditionally this requires placing Portals on openings such as doors and windows to help direct the light samples, but that is a manual process that takes time, and it's not accurate.
    The new Adaptive Dome Light uses the Light Cache calculation phase to learn which parts of the dome light are most likely to affect the scene. It automatically figures out which portions of the environment to sample and which ones to ignore, with no need to set up Portals.
    For more technical details on how it works, check out this post on Chaos Group Blog,
    https://www.chaosgroup.com/blog/v-ra...ive-dome-light
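    To give a feel for the idea (this is an illustration of environment importance sampling in general, not V-Ray's actual code), here is a minimal Python sketch: the HDR map is turned into a probability distribution over its texels, so bright regions such as a sun disc or a window opening get sampled far more often than dark ones. The array sizes and the bright patch are made up for the example.

```python
import numpy as np

# Toy equirectangular HDR environment (height x width x RGB), made up for illustration.
h, w = 64, 128
rng = np.random.default_rng(0)
hdr = rng.uniform(0.01, 0.2, (h, w, 3))
hdr[20:24, 60:66] = 50.0  # a small, very bright region (e.g. the sun)

# Luminance per texel, weighted by sin(theta) so rows near the poles
# (which cover less solid angle) are not over-represented.
lum = hdr @ np.array([0.2126, 0.7152, 0.0722])
theta = (np.arange(h) + 0.5) / h * np.pi
pdf = lum * np.sin(theta)[:, None]
pdf /= pdf.sum()

# Discrete CDF over all texels; draw samples from it.
cdf = np.cumsum(pdf.ravel())
samples = np.searchsorted(cdf, rng.random(10_000))
rows, cols = np.unravel_index(samples, (h, w))

# Most samples land on the bright region instead of being spread uniformly.
hits = np.mean((rows >= 20) & (rows < 24) & (cols >= 60) & (cols < 66))
print(f"{hits:.1%} of the samples went to the bright patch")
```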


    Advantages,
    -Doubles or triples your rendering speed on average for interior scenes: just tick the Adaptive checkbox and you get a cleaner result. Exterior scenes benefit from the Adaptive Dome Light as well.
    -Setting up image-based lighting is a single-click setup in V-Ray Next; you no longer need to manually add Portals on windows or doors.
    -More accurate lighting; the difference can be massive in some situations.


    Examples,

    This first example was rendered at 4K resolution on two 1080 Tis (you can right-click on the image, then open it in a new tab to view the full resolution).
    Scene by the talented Michele Faccoli.
    Check out more of his work here:
    https://michelefaccoli.com/en/
    He uploaded the Modo scene so you can take a look at his setup
    https://www.dropbox.com/s/mqki21b1t3...Villa.rar?dl=0
    [Image: Villa_DT_02.jpg]

    In this example, rendering was nearly 4 times faster just by ticking the Adaptive checkbox in the Dome Light settings.
    [Image: Interior_Night_B_On_9m59s_03.jpg]







    More examples from ChaosGroupTV on YouTube:
    https://www.youtube.com/watch?v=NFa1aDmeBcQ


    Conclusion,
    Using an HDRI with good dynamic range produces nice soft shadows and adds pleasing contrast and saturation to your renders; it is very effective for lighting interior scenes compared to using only area lights. Over the years, archviz artists have tended to avoid HDRI lighting because of the render times and how hard it is to get rid of the noise; this is the case with Modo's native renderer, for example.
    The new Adaptive Dome Light in V-Ray Next solves all of these issues, and it is just a single-click setup.
    You can download my scene here and experiment with lighting, or use Michele's scene above
    https://drive.google.com/file/d/11wJ...fAUfHO8In/view


    One slider to control all sampling settings and new cleaner UI for render settings

    In V-Ray Next there is a new, cleaner UI for the render settings, using Modo's "Proficiency Levels" to hide advanced channels. By default you will see standard controls like the Image Sampler, Anti-Aliasing filter and Denoiser settings. You can click on "More" to show all controls or "Less" to show only the core controls. This works for the Global Illumination and RT tabs as well.
    On the left are the standard controls (the default), and on the right are the core controls (after clicking "Less").


    Sampling in V-Ray Next,
    This is where V-Ray is miles ahead of the competition: the Noise Threshold is the only slider you use to control quality and render time in V-Ray. You don't need to touch any local samples for lights, GI, reflections, refractions, DOF, SSS, etc.; there is no such thing as a "dealing with noise" guide in V-Ray.
    Noise is a big deal in many engines, specifically Modo's native renderer and Redshift. Sampling workflows can get very technical, and optimizing your scenes takes time. In Redshift you have to use local samples for your lights, reflections, refractions and SSS, or you will end up with noise or super-sampling.
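    Conceptually, a noise-threshold-driven sampler keeps adding samples to a pixel only while its estimated error is above the threshold, so clean areas stop early and noisy areas get more work. Here is a tiny illustrative sketch of that loop (my own toy code, not V-Ray's), using the standard error of the mean as the noise estimate:

```python
import numpy as np

def render_pixel(sample_fn, noise_threshold=0.01, min_samples=16, max_samples=4096):
    """Keep sampling until the estimated error of the pixel mean drops below the threshold."""
    rng = np.random.default_rng()
    values = []
    while len(values) < max_samples:
        values.append(sample_fn(rng))
        if len(values) >= min_samples:
            v = np.asarray(values)
            err = v.std(ddof=1) / np.sqrt(len(v))  # standard error of the mean
            if err < noise_threshold:
                break
    return float(np.mean(values)), len(values)

# A smooth pixel converges quickly; a high-variance pixel (e.g. a hard light/shadow edge)
# automatically receives many more samples.
smooth = lambda rng: 0.5 + 0.01 * rng.standard_normal()
noisy = lambda rng: float(rng.random() < 0.5)
print(render_pixel(smooth))  # few samples
print(render_pixel(noisy))   # many samples
```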


    Advantages of smart sampling in V-ray,
    -One slider to control quality and noise; you waste zero time on optimizing your scenes. The default settings for GI and Min/Max subdivs work for all scenes.
    -Depending on the scene, it is totally up to the user to choose Progressive or Adaptive (Bucket) rendering. Both are very fast, both rely on smart sampling and, most importantly, the render results will match. This matters in practice: for example, I use Progressive rendering for animation to set a render time limit for each frame (and to avoid stuck buckets). On the other hand, Bucket mode can be handy for high-resolution renders as it uses less system memory, and for DR, Bucket mode will use about 10 times less network traffic.
    In Redshift, Progressive rendering ignores all sampling settings and can only use BF/BF, which makes it very slow compared to Buckets, and if you have Point Cloud SSS it will fall back to raytraced SSS, so your render will look different. You need to use Buckets for your final renders in Redshift.
    In Cycles, Progressive rendering is very slow as well, so you need buckets for your final rendering. In other engines like Octane, FStorm and Corona you only have a progressive mode. V-Ray is the only renderer that can do both (without issues or giving up speed or biased GI).
    This video talks briefly about sampling and how you can make use of the Progressive or Bucket (adaptive) modes in V-Ray:
    https://www.youtube.com/watch?v=oAiUy4in7Zo
    -The Light Cache is very fast to calculate in V-Ray Next with the new Light Grid. It has 100 bounces by default, which gives you nice soft shadows and balanced contrast, especially when you have bright colors in your interior. It is far superior to any other secondary GI solution in the industry, it gives a massive speed boost in GI-intensive scenes, and the information from the LC is used by the new Adaptive Dome Light, Adaptive Lights, Auto-exposure and Auto-white balance features.
    This is how the LC can affect render time for interior scenes; pay attention to the difference in the shadows:
    https://www.youtube.com/watch?v=J6J3...youtu.be&t=194
    More details on how Adaptive Lights work in V-Ray are here; some of the scenes rendered 8 times faster. V-Ray Next includes version 2 of Adaptive Lights:
    https://www.chaosgroup.com/blog/unde...daptive-lights
    -This leads up to the final point of this section: V-Ray Next is very fast! Old GPU scenes render 50% faster out of the box. Using the new Adaptive Dome Light you can double or triple your render speed, as shown in the examples above. Adaptive Lights version 2 helps with complex scenes where you have many lights, giving up to 8 times faster render times. You only see these numbers in V-Ray.
    Download Michele's scene from the first example, or use my scene here:
    https://drive.google.com/file/d/11wJ...ew?usp=sharing
    Last edited by Muhammed_Hamed; 14-05-2019, 12:32 PM.
    Muhammed Hamed
    V-Ray GPU product specialist


    chaos.com

  • #2
    New reworked GPU IPR and new Denoising options,

    This is where V-Ray is miles ahead of the competition again. The GPU IPR has been improved quite a lot in V-Ray Next. The first thing you will notice when you click the green GPU IPR button is that the time between clicking the button and the first pixels starting to render has been significantly reduced; this lets you see your renders much faster, and the same goes for stopping or pausing the IPR. Undersampling and overall interactivity are much better!
    You will need to turn on "immediate camera update" in the settings for camera movement to register without releasing the mouse, and the IPR will switch to the current active viewport if you are not on your render camera; this is a Modo SDK limitation. See the difference in interactivity between 3.6 and V-Ray Next here:
    https://www.youtube.com/watch?v=gdhY...ature=youtu.be

    V-Ray's IPR integrates nicely with Modo's Shader Tree and supports many nice features like isolating textures, materials and objects; you can control the camera's depth of field or shutter speed interactively in the virtual frame buffer, you can bookmark your render region, and more. Vladimir Nedev did an amazing job on V-Ray's IPR integration in Modo; many of these features are not available in any other V-Ray plugin.
    This video explores V-ray's IPR in Modo,
    https://www.youtube.com/watch?v=Joi_U7aeZ44


    The GPU IPR in V-Ray scales perfectly with multiple GPUs. In my case I have 3x 1080 Tis in my machine at home and 7x 1080 Tis at the office. I usually use the "low GPU thread priority" option, which keeps Modo's viewport smooth and interactive while using the IPR. V-Ray has no limit on the number of cards it can use for IPR or production rendering; it will use as many GPUs as your operating system can recognize. Redshift is limited to 8 GPUs per machine, and you will need to use 2-3 cards for the IPR for the best interactivity, since using more cards will not scale linearly and can cause issues.

    In V-Ray Next you can use NVIDIA's AI denoiser with the CPU or GPU IPR interactively to get instant feedback, which is what the AI denoiser is designed for (in Modo's native renderer you cannot use NVIDIA's denoiser with Preview/IPR). To make the AI denoiser update more frequently, simply change the "image effects update frequency" to 100 in the main V-Ray tab. This is very helpful in lighting and look development; see here:
    https://www.youtube.com/watch?v=kIRpajvrwZc

    If you are on macOS or you don't have an NVIDIA GPU, you can use V-Ray's own denoiser for the IPR. For final rendering it is recommended to always use V-Ray's denoiser, for several reasons:
    -Supports denoising of multiple render elements.
    -Works better for animation, avoiding flickering and other issues.
    -Respects high-frequency details in bump mapping, stochastic flakes, etc.
    -Uses much less memory, literally something like 200 times less than NVIDIA's denoiser.
    -Supports all CPUs/GPUs and all operating systems.
    More details about Denoising in V-ray and examples here,
    https://www.chaosgroup.com/blog/v-ra...-in-production


    Fully-featured Volumetrics and V-ray Volume Grid on GPU

    The volumetric effects in V-Ray simulate fog, atmospheric haze, etc. Volumetrics also include Volume Grid rendering, which works with grid-based cache formats to create effects such as dynamic plumes of smoke.
    Volume rendering brings a realistic, cinematic look to your GPU-rendered shots. V-Ray Environment Fog can instantly improve the look of most shots. Lighting will behave more accurately, giving you a realistic hazy look as it scatters through the fog and loses some of its intensity; this also affects reflections and refractions in a physically correct way. In the examples below by Dabarti Studios, you can see the advantages of using fog in your scenes.



    Without fog vs With Fog


    V-ray Next GPU has a novel sampling strategy for fog, which works extremely well in all cases, including dense fog and many lights. GPU scenes using fog will render 3 or 4 times faster compared to old builds of Next. See here,
    https://www.youtube.com/watch?v=djbFT7YSDeQ

    Check the difference in render time, Old vs New


    Examples for Volume Grid rendering with V-ray GPU,


    Watch here,

    https://vimeo.com/263008392
    https://vimeo.com/262755536
    And make sure to check out Dabarti Studios

    New Lens Effects accelerated by GPU and new LUT controls in the VFB

    In V-Ray Next, Bloom and Glare are very fast to calculate and produce amazing results, similar to Corona and FStorm. Personally, I try to get final renders right out of the frame buffer, without using any post-production or color grading outside of V-Ray. The new Bloom and Glare and the LUT controls in the virtual frame buffer have helped with that; in many cases I can use my renders without any post work in Photoshop. A few examples of the new lens effects by one of my favorite automotive artists, Andre Matos, using the obstacle image option:



    Jaw-dropping results with fog and the new lens effects by Andre Matos for Porsche:

    https://vimeo.com/300474233
    https://vimeo.com/301204712


    See Bloom and Glare in action here,
    https://www.youtube.com/watch?v=t_G9QmDkWiM

    There is a new slider to control the strength of your LUT, and you can now burn it into your saved image.
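    The weight slider acts like a straight linear blend between the original image and the fully graded one. A minimal illustrative sketch (the function and parameter names are mine, not V-Ray's):

```python
import numpy as np

def apply_lut_1d(image, lut, weight=1.0):
    """Blend `image` with its LUT-graded version; `lut` maps [0, 1] -> [0, 1]."""
    idx = np.clip(image, 0.0, 1.0) * (len(lut) - 1)
    graded = lut[idx.round().astype(int)]      # nearest-neighbour lookup for brevity
    return (1.0 - weight) * image + weight * graded

# Example: a simple contrast-boosting LUT applied at 50% strength.
lut = np.linspace(0.0, 1.0, 256) ** 1.8
img = np.random.default_rng(1).random((4, 4, 3))
print(apply_lut_1d(img, lut, weight=0.5))
```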


    Last edited by Muhammed_Hamed; 30-04-2019, 05:19 AM.
    Muhammed Hamed
    V-Ray GPU product specialist


    chaos.com



    • #3
      New Metalness parameter for achieving physically accurate metals and PBR workflow

      PBR materials allow people to use textures in many different types of renderers, especially real-time ones such as Unreal Engine, Unity and Quixel. Another popular tool is Allegorithmic's Substance Designer. Substance lets artists paint maps that fit PBR shading, including a metalness map, which is essentially a mask between two different types of materials: dielectric or conductive. Substance is not just popular with game designers; it is also being used in archviz, VFX and other industries that do CG rendering, and V-Ray Next has adapted by adding the Metalness slider.
      The new Metalness parameter makes it easier to create metals. In physics, conductive materials have different reflective properties, which is why most people see them as very reflective with no diffuse component. In V-Ray 3, the V-Ray material was based on dielectric properties, and metals were generally approached by removing the diffuse and giving the shader a very high Fresnel value. In V-Ray Next, the new Metalness parameter switches the reflection model of the material from dielectric to metallic. A material is either dielectric or conductive; there is no in-between state.
      The new metalness workflow is more accurate than how people used to approach metals in the past, using falloff maps or Fresnel curves.
      To create a metal material:
      -First, change the Metalness value to 1.
      -Since there is no diffuse color in metals, the diffuse color becomes what is known as the base color or albedo.
      -The Reflection color should be set to white to get the proper reflectivity and energy preservation; without this, the glancing angle will never be 100% reflective (and it needs to be).
      -How much of that reflection is blended in is still controlled by the same IOR value of the Fresnel effect; you'll notice it now has a very subtle effect, as the entire material is basically reflective.
      -Glossiness controls how shiny your metal is; however, if you use a roughness map (such as what Substance produces) you will need to invert it.
      The Index of Refraction plays an important role when creating a physically based material, so it has to be considered part of the shader. The site refractiveindex.info is a great resource for finding the proper IOR of different materials. However, inputting those numbers into a V-Ray shader will not match exactly, so Vlado has created this chart to help make the translation easier.
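      As a rough mental model (not V-Ray's internal code), metalness effectively switches the reflectance at normal incidence from the small dielectric value derived from the IOR to the base colour itself; Schlick's approximation makes that easy to see. The sketch below also shows the glossiness/roughness inversion mentioned above:

```python
import numpy as np

def f0_from_ior(ior):
    """Dielectric reflectance at normal incidence, e.g. IOR 1.5 -> ~0.04."""
    return ((ior - 1.0) / (ior + 1.0)) ** 2

def fresnel_schlick(cos_theta, f0):
    """Schlick's approximation of the Fresnel term."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def reflectance(cos_theta, base_color, metalness, ior=1.5):
    # Dielectrics reflect ~4% head-on (from the IOR); metals reflect their base colour.
    # In practice metalness is 0 or 1, since a surface is physically one or the other.
    f0 = (1.0 - metalness) * f0_from_ior(ior) + metalness * np.asarray(base_color)
    return fresnel_schlick(cos_theta, f0)

glossiness = 1.0 - 0.3                          # a Substance roughness of 0.3, inverted
gold = np.array([1.0, 0.78, 0.34])              # base colour doubling as the metal's albedo
print(reflectance(1.0, gold, metalness=1.0))    # head-on: roughly the base colour
print(reflectance(0.1, gold, metalness=1.0))    # grazing: approaches white
print(reflectance(1.0, gold, metalness=0.0))    # dielectric: ~0.04
```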

      For more information about Metalness, check out this post on the Chaos Group blog and this video on ChaosGroupTV.

      Speaking of metals in V-Ray, I have to mention the GGX tail falloff, which is something very special about V-Ray and has been one of my favorite features over the years. Basically, you can fully control how GGX looks on shiny metals and plastics, and you can get nice highlights without the need to layer multiple materials, saving a lot of render time. Check out the examples below; in other engines you would need to layer many coats to achieve the same highlights.
      For the GGX tail falloff to work you need a reflection glossiness other than 1; for metals try 0.9 to 0.95, then adjust the tail falloff to 1.4 or similar. For something like a metallic car paint you will need a high value like 2.4 or 3.
      The first two models are by Grant Warwick; the jug was rendered in V-Ray for Modo and you can download it here. The RC car is by Luca Veronese (right-click and open the image in a new tab for full resolution).
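      For reference, this control corresponds to the Generalized Trowbridge-Reitz (GTR) family of microfacet distributions, where an exponent reshapes how quickly the highlight's tail dies off; γ = 2 recovers plain GGX. Treating V-Ray's slider as that exponent is my assumption here; the formula is only meant to show the shape of the distribution:

```latex
% GTR microfacet distribution; c is a normalization constant and
% gamma reshapes the tail of the highlight (gamma = 2 is standard GGX).
D_{\mathrm{GTR}}(\theta_h) \;=\; \frac{c}{\bigl(1 + (\alpha^{2} - 1)\cos^{2}\theta_h\bigr)^{\gamma}}
```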





      Fernando Bernardo, Lighting TD at Digital Domain, talked about the work they did on Avengers: Endgame and how they used the GGX tail falloff to avoid multiple coats and blends. It is an interesting article about their pipeline; you can check it out on the Chaos Group blog here.

      New Physically based Hair shader

      This is where V-Ray is miles ahead of the competition again: instead of tweaking arbitrary colors that mix together, the new Hair Next material uses a simple melanin slider that determines the hair color based on the physiology of real hair. The material is the product of research based on the paper "A Practical and Controllable Hair and Fur Model for Production Path Tracing".
      You can set the hair color with just a single slider control. A value of 0 is white hair, and a value of 1 is black hair. All other hair colors fall somewhere in between.

      This video by CG Labs explores the various controls of the new Hair shader,

      https://www.youtube.com/watch?v=Bpg5IF9rBgA

      And here is the post on CG Blog.

      Ian Springs has been using the new Hair Material in V-ray Next for his stunning Portraits,


      The Hair Next material works nicely with fur as well; here are a few examples I did in V-Ray for Modo.



      New VolumeScatter material for Random Walk SSS

      Unlike the other SSS models in V-Ray, which use a single bounce, the new Volume Scatter material offers a bounce control with a default value of 5 bounces. You can set it up to something like 50, for example, and compare the result to using a single bounce (which is the case with FastSSS2 or the ALSurface material); the difference is just massive.
      The Volume Scatter material is a brute-force option for SSS that produces an unbiased result; for complex examples, see below. It has to be used as the base of a V-Ray Blend, with a V-Ray material as the coat and 100% white as the blend amount. Check out the setup here:
      https://docs.chaosgroup.com/display/...olume+Material
      For now it only works on CPU, but it should be supported on GPU very soon. A few examples by Eugenio:
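      A minimal sketch of the random-walk idea (illustrative only, not V-Ray's implementation): inside the medium each path takes free-flight steps sampled from the extinction coefficient and scatters with the single-scatter albedo, up to a bounce limit. Capping the walk at one bounce throws away the energy that multiple scattering would have carried back out, which is why the bounce count matters so much:

```python
import numpy as np

def random_walk_escape(albedo=0.9, sigma_t=1.0, slab_depth=1.0,
                       max_bounces=50, paths=20_000, seed=0):
    """Fraction of energy escaping a slab via a simple isotropic 1D random walk."""
    rng = np.random.default_rng(seed)
    escaped = 0.0
    for _ in range(paths):
        x, direction, weight = 0.0, 1.0, 1.0                 # enter the surface heading inward
        for _ in range(max_bounces):
            x += direction * rng.exponential(1.0 / sigma_t)  # free-flight distance
            if x <= 0.0 or x >= slab_depth:                  # left the slab -> contributes
                escaped += weight
                break
            weight *= albedo                                  # lose a bit of energy per scatter
            direction = rng.choice((-1.0, 1.0))               # isotropic (1D) scatter
    return escaped / paths

print("1 bounce  :", random_walk_escape(max_bounces=1))
print("50 bounces:", random_walk_escape(max_bounces=50))
```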







      Support for the ALSurface shader on GPU
      The ALSurface shader is V-Ray's implementation of Anders Langlands' alShader, which is designed to reproduce the appearance of skin. It takes into account diffuse, two levels of specular, and subsurface scattering. Unlike V-Ray's skin material, the diffuse and subsurface colors are driven by the same map, and there is an SSS mix that controls how much of the surface is affected by the subsurface scattering versus how much is driven by the Lambertian diffuse. It also has three levels of depth for the SSS, with weight controls on each one.
      It offers several SSS modes, and Vlado has implemented two of them for the V-Ray version of the shader: Diffusion, which preserves more detail compared to the standard dipole model, and Directional, which allows for an approximation of single-scatter maps for even greater detail preservation.
      It uses a two-lobe specular model; however, instead of using a smooth BRDF like Phong or Blinn, it uses a microfacet one. The Fresnel effect is computed as part of the BRDF calculations (a.k.a. "glossy Fresnel") and takes into account the viewing direction, the surface normal and the light directions. The user has a choice between the GGX and Beckmann BRDF models. Because of the nature of the microfaceting, it can avoid the darkening effect at glancing angles through retro-reflection. Additionally, it does not cover the SSS at those same glancing angles.

      Few Examples by ChaosGroup,



      Last edited by Muhammed_Hamed; 01-05-2019, 12:46 AM.
      Muhammed Hamed
      V-Ray GPU product specialist


      chaos.com



      • #4
        Support for Glossy Fresnel on GPU

        This makes the standard material physically accurate and helps with realism. In a nutshell, it fixes the bright halo effect at the grazing angle, which was a big issue before, especially with low glossiness values (left: glossy Fresnel on, right: glossy Fresnel off).

        A microfacet BRDF such as GGX mimics the roughness of the surface as if it were made of a bumpy surface at a microscopic level, so it is a more accurate representation of what the surface is actually doing. Instead of imagining a single ray being evenly scattered, you can imagine individual rays scattering in different directions based on the microfacets. Each microfacet has its own normal which, depending on the surface, is generally oriented towards the ray's direction. The grazing angle on a rough surface has normals that are not as acute as those on a smooth surface, meaning that they point towards the camera more than on a glossy surface. Therefore, the grazing angles of a rough surface will show less of a Fresnel effect than those of a glossy surface.
        More information and comparison on CG blog here
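        Written out, the microfacet model evaluates the Fresnel term with the sampled microfacet normal (the half vector) rather than the geometric normal, which is what tames the bright grazing-angle halo on rough surfaces. A sketch of the standard Cook-Torrance form, in my own notation:

```latex
% D = normal distribution (e.g. GGX), G = shadowing/masking,
% F = Fresnel evaluated with the half vector h instead of the geometric normal n.
f_r(\mathbf{l}, \mathbf{v}) \;=\;
  \frac{D(\mathbf{h})\, G(\mathbf{l}, \mathbf{v})\, F(\mathbf{v} \cdot \mathbf{h})}
       {4\, (\mathbf{n} \cdot \mathbf{l})\, (\mathbf{n} \cdot \mathbf{v})},
\qquad
\mathbf{h} \;=\; \frac{\mathbf{l} + \mathbf{v}}{\lVert \mathbf{l} + \mathbf{v} \rVert}
```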

        Support for dispersion on GPU

        In V-Ray Next, dispersion is supported on GPU. You control the effect through the Dispersion Abbe value; lowering it widens the dispersion, and vice versa. See here:

        https://www.youtube.com/watch?v=q_qszlnZdPE
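        For reference, the Abbe number is defined from the refractive indices at three standard wavelengths; a lower Abbe number means the index varies more across the visible spectrum, i.e. stronger dispersion:

```latex
% Abbe number: n_d, n_F, n_C are the refractive indices at 587.6 nm, 486.1 nm and 656.3 nm.
V_d \;=\; \frac{n_d - 1}{n_F - n_C}
```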


        Support for the Curvature texture on GPU

        The Curvature texture can be used to create procedural edge wear, for example; it samples the underlying mesh for curvature based on the normals, similar to V-Ray Dirt. See the workflow here.

        Support for Procedural Bercon Noise and V-ray noise on GPU

        You can create endless variations with these procedurals; a few examples by Luca Veronese:




        You can use them for displacement as well, example by Ariko Ninio

        New cleaner UI for V-ray's standard material to hide advanced channels

        In V-Ray Next the standard material has a new smart UI using Modo's "Proficiency Levels". By default you will see the standard controls; you can click on "More" at the bottom to show all controls, or click on "Less" for an even cleaner UI with only the core controls, like Diffuse color, Reflection color, etc.
        On the left are the standard controls and on the right are all controls.


        New V-ray toon shading

        V-Ray Mtl Toon is an add-on to the V-Ray material that produces cartoon-style outlines on objects in the scene. This includes both simple, solid-color shading and outlined borders on the edges of the objects. See the workflow here.

        Triplanar mapping for VRscans and GPU support

        I've been using VRscans on almost every project since they were introduced two years ago. What makes them very special is that each scan has its own Bidirectional Texture Function, which is far better than the usual BRDF that renderers use. The kind of variation and realism you get is just insane; it would take hours of traditional shading to get even close to matching these scans. They render very fast on CPU and GPU, and they barely use any memory. They are customizable and can be part of any shading network, so you can combine them with displacement, normal mapping, bump mapping, dust, scratches, etc.
        In V-Ray for Modo, triplanar support was added so you don't need to worry about UVing your meshes, and VRscans now work on GPU, in addition to a few other improvements. The library has over 1,100 scans now, 400 of them fabrics.

        This video explores VRscans in Modo,
        https://www.youtube.com/watch?v=MMyWjcwj35A
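        Triplanar mapping itself is simple to picture: the texture is projected along the three world axes and the projections are blended by the absolute components of the surface normal, so no UVs are needed. A minimal sketch of the blend (illustrative only, not the plugin's code):

```python
import numpy as np

def triplanar_sample(texture_fn, position, normal, blend_sharpness=4.0):
    """Blend three axis-aligned planar projections of texture_fn(u, v) by the normal."""
    n = np.abs(np.asarray(normal, dtype=float))
    weights = n ** blend_sharpness
    weights /= weights.sum()                      # per-axis blend weights
    x, y, z = position
    tx = texture_fn(y, z)                         # projection along the X axis
    ty = texture_fn(x, z)                         # projection along the Y axis
    tz = texture_fn(x, y)                         # projection along the Z axis
    return weights[0] * tx + weights[1] * ty + weights[2] * tz

# Example: a procedural checker standing in for the scanned texture.
checker = lambda u, v: float((int(np.floor(u)) + int(np.floor(v))) % 2)
print(triplanar_sample(checker, position=(0.3, 1.7, 2.2), normal=(0.0, 0.9, 0.44)))
```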

        A few examples rendered with V-Ray for Modo:




        Last edited by Muhammed_Hamed; 30-04-2019, 05:21 AM.
        Muhammed Hamed
        V-Ray GPU product specialist


        chaos.com



        • #5
          New fast, memory efficient GPU Displacement

          In V-Ray 3.6, low settings for displacement did not necessarily achieve the most pleasing results. The workaround was to crank the subdivisions up high enough to get the desired quality. One of the side effects was that this required a lot more geometry, which put a strain on GPU memory, which can be at a premium. The first example uses V-Ray 3.6 (2 subdivs), and the second uses V-Ray Next (2 subdivs).


          Another example, using 8 subdivs this time; V-Ray Next on the right, V-Ray 3.6 on the left.

          New Rolling shutter option in Vray's Physical camera


          This option works when motion blur is enabled; it is a nice addition to V-Ray's physical camera. See here:
          https://www.youtube.com/watch?v=QSejIVKuT84

          New Lighting Analysis render element

          It will help you measure and analyze the light levels in your scene. You'll be able to create false-color heat maps and data overlays to show luminance (in cd/m²) or illuminance (in lux) values.
          Check out Vlado's post about it here,

          https://www.chaosgroup.com/blog/how-...-in-v-ray-next
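          For interpreting the overlays: illuminance is luminous flux per unit receiving area (lux = lm/m²), while luminance is flux per unit projected area per unit solid angle (cd/m²), which is why the two false-color modes report different units:

```latex
% Illuminance (lux) and luminance (cd/m^2) expressed in terms of the luminous flux \Phi_v.
E_v \;=\; \frac{d\Phi_v}{dA} \quad [\mathrm{lx} = \mathrm{lm/m^2}],
\qquad
L_v \;=\; \frac{d^{2}\Phi_v}{dA\,\cos\theta\, d\Omega} \quad [\mathrm{cd/m^2}]
```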




          Muhammed Hamed
          V-Ray GPU product specialist


          chaos.com



          • #6
            Great summary and reference, thank you!



            • #7
              Thanks! Will do another post on GPU production rendering in Modo
              Muhammed Hamed
              V-Ray GPU product specialist


              chaos.com



              • #8
                Nice summary Muhammed! Great that you include links to examples and setup docs too so we can get started right away.



                • #9
                  Cheers Gideon!
                  Muhammed Hamed
                  V-Ray GPU product specialist


                  chaos.com



                  • #10
                    Very good job and Amazing Muhammed !!
                    Main computer : Threadripper 2990WX - 2 x GForce 1080 8Go - 1 GTX 1070 8Go - 64 Go
                    Softimage 2015 x64 | Lightwave 2020 | modo v14



                    • #11
                      Thanks Phil!
                      Muhammed Hamed
                      V-Ray GPU product specialist


                      chaos.com



                      • #12
                        Excellent introduction to V-Ray Next for MODO, Muhammed! Thanks for doing this.

                        Now we only need an easy way to upgrade; it should be possible to fix that hurdle eventually. We did put a man on the Moon, after all!



                        • #13
                          Thanks Nils
                          I agree on the upgrade path; I've made a thread about this here and Vlado replied. They will improve this in the future:
                          https://forums.chaosgroup.com/forum/...not-buy-online
                          Muhammed Hamed
                          V-Ray GPU product specialist


                          chaos.com



                          • #14
                            Great recap! Thanks for your time in organizing this post!



                            • #15
                              Cheers Aluxado
                              Muhammed Hamed
                              V-Ray GPU product specialist


                              chaos.com

