Extract camera from EXR metadata?

  • #1

    So I've been reading a bit about a Reactor script for Blackmagic Fusion that extracts a 3D camera from the metadata in an EXR rendered out of Redshift. Does V-Ray also make this metadata available, so the same can be done?

  • #2
    Indeed: saving a camera to use in post is vital for a number of effects (from space transforms for normals, to reprojections, to augmenting the 3D footage with cheaper moblur and DoF), and V-Ray has had this since time immemorial.

    Ensure the following conditions are met:
    a) Actually have a camera in the scene.
    b) Save the file from the raw file output, *not* off the VFB after the render.

    See the attached screenshot with the camera-related metadata viewed inside Nuke.
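
    For anyone who'd rather inspect this outside of Nuke: a single-part EXR file is just a magic number, a version field, and a list of named header attributes, so the camera metadata can be read with a few lines of code. The sketch below builds a toy header and parses it back using only the standard library; the attribute names ("cameraTransform", "cameraFov") are illustrative examples, not necessarily the exact names V-Ray writes.

```python
import struct

def read_exr_header_attrs(data: bytes) -> dict:
    """Parse the attribute list from a single-part OpenEXR header.

    Returns {name: (type_name, raw_bytes)}."""
    magic, _version = struct.unpack_from("<II", data, 0)
    if magic != 0x01312F76:          # OpenEXR magic bytes 76 2F 31 01
        raise ValueError("not an EXR file")
    pos, attrs = 8, {}
    while data[pos] != 0:            # the header ends with a lone null byte
        end = data.index(b"\x00", pos)               # null-terminated name
        name = data[pos:end].decode("ascii"); pos = end + 1
        end = data.index(b"\x00", pos)               # null-terminated type
        type_name = data[pos:end].decode("ascii"); pos = end + 1
        (size,) = struct.unpack_from("<I", data, pos); pos += 4
        attrs[name] = (type_name, data[pos:pos + size]); pos += size
    return attrs

def _attr(name: str, type_name: str, payload: bytes) -> bytes:
    """Encode one header attribute: name\\0 type\\0 size payload."""
    return (name.encode() + b"\x00" + type_name.encode() + b"\x00"
            + struct.pack("<I", len(payload)) + payload)

# Toy header carrying camera metadata (attribute names are illustrative only;
# the exact attributes a renderer writes may differ).
header = struct.pack("<II", 0x01312F76, 2)
header += _attr("cameraTransform", "m44f",
                struct.pack("<16f", *([1.0] + [0.0] * 15)))  # identity matrix
header += _attr("cameraFov", "float", struct.pack("<f", 45.0))
header += b"\x00"

attrs = read_exr_header_attrs(header)
fov = struct.unpack("<f", attrs["cameraFov"][1])[0]
```

    In production one would of course use the OpenEXR bindings (or Nuke/Fusion themselves) rather than hand-parsing, but the layout above is all there is to the metadata.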
    Lele
    Trouble Stirrer in RnD @ Chaos
    ----------------------
    emanuele.lecchi@chaos.com

    Disclaimer:
    The views and opinions expressed here are my own and do not represent those of Chaos Group, unless otherwise stated.



    • #3
      Interesting. Thanks Lele.

      Would you call this deep compositing, or maybe pseudo-deep? Working in archviz has probably kept me too sheltered from these kinds of advanced compositing techniques. I'm keen to learn more about the things you mentioned, preferably with Fusion. Are there any training resources you could recommend that cover these?



      • #4
        It'd be standard 2.5D compositing, with all its pitfalls (e.g. moblur and DoF don't work too well with transparencies, and require a metric ton of artist skill in comp to look half decent).
        For deep compositing you'd need to write deep EXRs, which contain the extra information that allows a number of those standard 2.5D effects to work better.
        It still requires some serious skill when one gets to the nitty-gritty, as the fragments aren't quite perfect (I personally still long for the old RPF format...).
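
        To make the DoF half of that concrete: a 2.5D defocus tool reads a Z pass and computes, per pixel, a thin-lens circle of confusion that drives the blur size. A minimal sketch of that formula, with made-up example camera values:

```python
def coc_pixels(depth, focus_dist, focal_len, f_stop, sensor_w, img_w):
    """Thin-lens circle of confusion diameter, in output pixels.

    depth and focus_dist are in the same units as focal_len and
    sensor_w (millimetres here)."""
    aperture = focal_len / f_stop                     # entrance pupil diameter
    coc_mm = (aperture * focal_len * abs(depth - focus_dist)
              / (depth * (focus_dist - focal_len)))
    return coc_mm * img_w / sensor_w                  # mm on sensor -> pixels

# A pixel exactly at the focus distance stays sharp; a far pixel blurs.
# All values below are invented for illustration.
sharp = coc_pixels(depth=3000.0, focus_dist=3000.0,
                   focal_len=50.0, f_stop=2.8, sensor_w=36.0, img_w=1920)
soft  = coc_pixels(depth=9000.0, focus_dist=3000.0,
                   focal_len=50.0, f_stop=2.8, sensor_w=36.0, img_w=1920)
```

        The pitfall Lele mentions is visible right in the formula: it assigns one blur radius per pixel from one depth value, so transparencies and edges (where a pixel mixes several depths) have no correct answer.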

        I've learnt in the field (I'm old enough to have seen the approaches develop over the years), so I'm not quite privy to, or current on, tutorials on either subject.
        In general, however, both approaches are quite canon these days: I'm sure Google, and some judgement (it may be with another renderer, or compositing package), will get you where you'd like to be.



        • #5
          Yeah, I'd imagine that to be the case. I'd just be interested to know what more is possible in comp using advanced techniques like these, rather than always depending on doing everything in render. When you don't know what you don't know, Googling isn't even an option.



          • #6
            Ah!
            Well, in general, when the task is enormous (say, 300 shots of 100 frames each on average), it's *very* advantageous to have flexibility.
            If one can hold off rendering expensive effects (say, motion blur, assuming, wrongly, that 2.5D and 3D moblur are analogous), one gets both a quicker render and more directability for the effect (as moblur in post is then near-realtime).
            Some things will be nearly unavoidable: a curved motion blur trail for, say, a plane propeller will need 3D motion blur, or as many plain renders as there are motion blur segments, which would quickly become more expensive than rendering motion blur in 3D.
            Also, the artifacts the technique generates may or may not be tolerable depending on conditions: e.g., with a quick camera pan one's likely OK with any artifacts, as the screen becomes essentially just a smear.
            But if the motion blur has to be applied to a lead character walking in and out of glass, then the process all of a sudden becomes a lot more complex, and will involve masks, multiple passes, and a frame wholly reassembled from those pieces in post.
            That's often when 3D moblur becomes the advantageous option.
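
            The 2.5D moblur itself is conceptually just a smear along a velocity pass. A deliberately tiny 1-D sketch of the idea (real vector-blur tools work in 2D and handle edges and occlusion far more carefully):

```python
def smear_row(row, vel, samples=9):
    """Average each pixel along its motion vector (1-D toy vector blur).

    row: a scanline of intensities; vel: per-pixel velocity in
    pixels per frame, as a velocity pass would provide."""
    n, out = len(row), []
    for x in range(n):
        acc = 0.0
        for s in range(samples):
            # sample positions spread over the shutter interval [-0.5, 0.5]
            t = s / (samples - 1) - 0.5
            xi = min(n - 1, max(0, round(x + vel[x] * t)))  # clamp to edges
            acc += row[xi]
        out.append(acc / samples)
    return out

# No motion: the scanline passes through unchanged.
static = smear_row([0.0, 0.0, 1.0, 0.0, 0.0], [0.0] * 5)
# 4 px/frame of motion: the bright pixel streaks out into its neighbours.
moving = smear_row([0.0, 0.0, 1.0, 0.0, 0.0], [4.0] * 5)
```

            Because the whole effect is a cheap per-pixel loop, re-timing or re-grading the blur is near-realtime, which is exactly the directability argument above.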

            In all my years of production, I have *never* once used a Beauty render for the final product, or at the very least never one with all the elements in it (i.e. the finished frame).
            If we needed the propeller motion blurred, we'd render passes: the plane with its specific settings, the propeller with its own, and the various ancillary bits (reflections, refractions, data passes) with their own specific settings as well.
            The man-hours tend to grow this way (moderately so, if one has a pipeline and a TD or ten to write the tools for it), but the time to a complete frame drops dramatically when considered overall.
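
            Reassembling the frame from such passes ultimately rests on the classic Porter-Duff "over" operation on premultiplied RGBA. A minimal sketch (the pass values below are invented for illustration):

```python
def over(fg, bg):
    """Porter-Duff 'over': composite premultiplied RGBA fg onto bg."""
    k = 1.0 - fg[3]                 # how much the foreground lets through
    return tuple(f + b * k for f, b in zip(fg, bg))

# Hypothetical propeller pass composited over the plane pass:
prop  = (0.8, 0.1, 0.1, 1.0)        # fully opaque where the blade is
plane = (0.1, 0.2, 0.9, 1.0)
comped = over(prop, plane)          # opaque foreground wins outright

# Where the motion-blurred propeller only half covers the pixel
# (alpha 0.5), the result is a proper mix of the two passes.
half = over((0.4, 0.05, 0.05, 0.5), plane)
```

            Each pass can then be re-rendered or re-graded independently, which is where the overall time-to-frame saving comes from.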

            If you search for "deep compositing tutorial", for example, you'll get some quite decent material to take a look at.
            Then it's a case of following the stream of tutorials from this, that, or the other channel, to get to the specific piece of info (which by then you'll know you need! :P).

            P.S.: I have been out of the loop for a long time. The likes of joconnell and Dave_Wortley will be able to share up-to-date info, and from the top dog in the industry, no less.
            Last edited by ^Lele^; 19-10-2021, 04:23 AM.

