render stereo panoramas

  • render stereo panoramas

    My wish thread on rendering stereo panoramas in V-Ray -

    After sharing panoramic renderings through the Oculus today, I had to post. The quality just isn't there when you try to use a depth map to shift the view for the second eye. The ability to render a panoramic stereo pair is crucial for getting a proper view for look dev with an Oculus Rift. Also, no other renderer supports this yet!

    At the moment there's another potential low-tech approach: Poppy3D (http://poppy3d.com/), which is basically a View-Master that your iPhone goes into. I've been using it for look dev, and it works wonderfully with the stereo helper & non-spherical cameras. It gives the viewer a much better understanding of the depth of a space than a flat render. I could also see an app being developed down the road to load a stereo panorama and view it with the Poppy & the iPhone gyroscope. But the Oculus would be killer, with the level of immersion it brings to the table.
    Brendan Coyle | www.brendancoyle.com

  • #2
    current wish:

    An option within the VRayStereoscopic helper, when rendering both views together, to choose whether they render side by side or over/under. If the views could be rendered over/under, it would help simplify my current process of obtaining a stereo pair through slices.



    Edit:

    Thought I'd detail out my current process to help anyone else interested in creating stereo panoramas. This is the best & easiest way I've found so far to create them in V-Ray.

    - Set timeline to 360 frames
    - Animate the camera to rotate a full 360 degrees (keys on frames 0-360)
    - Create a VRayStereoscopic helper
    - Set eye distance to a 3.5 inch separation (the distance I've currently been experimenting with; not sure why the default is 6.5 - that seems too wide, and 2.5 or 2.6 in. is probably closer to the average)
    - Set view to Left (if the images could render over/under, you could render left & right at the same time)
    - Keep Interocular method at Shift both, so both views are captured offset from the central rotation point
    - In the V-Ray camera properties, set Type: Spherical and Override FOV: 1.0
    - Render frames 1-360, saving as .jpg since StereoMaker is limited in its input formats

    This will render 360 .jpgs that need to be stitched back together - there's a MaxScript sketch of this render loop just below.
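
    For reference, a minimal MaxScript sketch of that loop - it spins the camera in script rather than using timeline keys, and the camera name and output path are placeholders:

    -- Render 360 one-degree slices for one eye by spinning the camera.
    -- "PanoCam" and the output path are placeholder names for this sketch;
    -- assumes the VRayStereoscopic helper is already set up (view = Left).
    cam = getNodeByName "PanoCam"
    for i = 1 to 360 do
    (
        fname = "C:\\renders\\left\\slice_" + (formattedPrint i format:"03d") + ".jpg"
        render camera:cam outputFile:fname vfb:off
        in coordsys world rotate cam (angleaxis 1.0 [0,0,1]) -- advance one degree
    )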

    - Fire up Stereomaker, http://www.stereomaker.net/
    - File / Mosaic Images, find all the slices and select them all, then Connect Selected Files
    - Ta-da! This will auto-save a final image to your desktop by default.

    Now repeat the process for the right eye, and finally join the two images - either over/under or side by side - for viewing in VR Player.
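
    If you'd rather script the stitch instead of using StereoMaker, here's a rough MaxScript sketch - the slice size is purely illustrative, so match it to your render output:

    -- Paste the 360 slices side by side into one panorama bitmap.
    sliceW = 10
    sliceH = 2048
    pano = bitmap (sliceW * 360) sliceH
    for i = 1 to 360 do
    (
        b = openBitmap ("C:\\renders\\left\\slice_" + (formattedPrint i format:"03d") + ".jpg")
        pasteBitmap b pano (box2 0 0 sliceW sliceH) [(i - 1) * sliceW, 0]
        close b
    )
    pano.filename = "C:\\renders\\left_pano.jpg"
    save pano
    close pano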
    Last edited by cheerioboy; 09-02-2014, 05:31 PM. Reason: detailed process
    Brendan Coyle | www.brendancoyle.com



    • #3
      [Attached images: 140130_Riverhouse_VR_Full.jpg and 140201_Riverhouse_VR.jpg]

      I finally have a little project I can share as I work on this process --

      Progress renderings of my method at the moment: 360 slices stitched together, x2 for the stereo pair.
      If anyone has an Oculus Rift, you can load these into VR Player and take a look.

      Hopefully I can get more interest so that Chaos Group might code up a stereo cam for spherical panos
      Brendan Coyle | www.brendancoyle.com



      • #4
        I tried the depth map method with the Oculus as well and found it mostly lacking. The strip method seems like the only real solution I can think of for a stereo panorama. Would be great if V-Ray supported this as a stereo camera type of some kind.

        I looked up the average and it was about 6.4 cm from the chart I saw. A quick check on me and my girlfriend shows we're both right around 5.7 cm, but that was very rough, so maybe 6 cm. I'm guessing that's where the 6.5 comes from?

        But in reality the interocular will vary a lot depending on your display type, viewer distance, scene scale, and the volume/depth of your 3D scene. So without a tool that takes your focal/screen plane, 3D volume, viewing display, and viewing distance into account, a generic 6.5, or 60, or .65, or whatever it converts to in inches, seems like a fair enough starting point.

        I rarely dive into the formulas that let you work out all the stereo variables - volume, scale, displays, viewing distance - to get more "scientifically" accurate 3D volumes and sizes. I find I just eyeball it while I'm working, 98% of the time. Of course nothing I work on needs to be "scientifically" accurate in any way, so I can get away with it.

        By day I work on stereo feature animations, and the target screens are 60-foot movie screens (not sure what viewer distance we assume, but it's about the center of an average theater), and that's what all of the initial settings are targeted to look good on. There are all kinds of fancy in-house tools to help with this, but at the end of the day they do test screenings on a target-size screen to make sure everything is working as expected. From there it's mostly just eyeballing the stereo adjustments to make the final image look as good as possible. On many of my shots I'm just nudging stuff around in Nuke randomly till it feels good; then a stereo specialist reviews everything and sends back notes. I hardly ever have my non-scientific adjustments sent back. Granted, most shots start from some generic formula, but even if you were to look at those you would find interoculars all over the place depending on lenses, volume, screen plane/focal distance, and viewer distance. And even after that we still find lots of shots that need more nudging than you would expect to really look good in stereo.

        I guess what I'm getting at is that 6.5 seems as good a default as any, unless we're going to get a much more advanced tool to help automate stereo settings.
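
        If you want a slightly less arbitrary starting point, there's the old stereographer's 1/30 rule of thumb: interaxial at roughly 1/30 of the distance to the nearest subject. A throwaway MaxScript check (both node names are hypothetical):

        -- 1/30 rule of thumb: eye distance ~= nearest-subject distance / 30.
        cam     = getNodeByName "PanoCam"    -- hypothetical camera node
        nearest = getNodeByName "NearestObj" -- hypothetical nearest subject
        format "suggested eye distance: %\n" ((distance cam.pos nearest.pos) / 30.0)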

        I'm sure with your scene scale, and the fact that you'll be displaying on the Oculus, you can find a setting that works really well most of the time. If so, you may be able to save those settings in your default start scene so you get them instead of 6.5. Not sure if that will stick unless you save the default scene with the stereo helper in it, which would be easy enough to do. I also tend to save out generic template scenes for different projects, so if I'm on a stereo project and an interocular of 24 was the magic number, I'd save out a template with the stereo helper and vrayRT interocular set to 24.

        What would be really cool would be an Oculus option in the vrayRT stereo settings -:]

        Well, that and the panoramic stereo camera strip-mode thing as well.
        t1t4
        www.boring3d.com



        • #5
          Hey t1t4,

          Thanks for putting in some info.

          I updated my previous post to be more specific about what measurement I was using for the eye distance - since I'm in the US and do things wonky, I'm working in inches. So all those numbers were in inches. 6.5 cm is a solid starting point for eye separation, but for some reason the V-Ray stereoscopic helper defaults to 6.5 inches, which converted to cm is 16.51 - so I think that needs to be adjusted, no?

          Oculus VR shared a 'best practices' guide (http://static.oculusvr.com/sdk-downl...tPractices.pdf, page 10) that recommends setting the eye separation at 63.5 mm (2.5 inches), which is where I'm now setting my tests.

          And exactly like you mention, a co-worker of mine is also interested in using vrayRT to just live-render a scene. No idea how that would look; maybe with enough GPUs it could work... but right now, if you turn on side-by-side stereo in vrayRT, you'd have to position the left and right windows yourself - oh, and you also need the images to be warped to counter the lenses on the Oculus... would need to look into the right lens profile to distort a V-Ray physical camera... things to look into.
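
          From what I've read, the Rift's lens correction is just a radial "barrel" polynomial applied per eye. A minimal sketch - the k values are the commonly quoted DK1 defaults, so treat them as assumptions that vary per device:

          -- Scale each radius r (0 at the distortion center) by a polynomial.
          fn oculusWarp r k0:1.0 k1:0.22 k2:0.24 k3:0.0 =
              r * (k0 + k1 * r^2 + k2 * r^4 + k3 * r^6)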

          Someone on the Oculus forum tried to say they were getting stereo panos out of Octane - but I just emailed them and they said they only render spherical panos, not in stereo. So c'mon V-Ray
          Brendan Coyle | www.brendancoyle.com



          • #6
            I think in Max, if you have units set to inches and switch to cm or whatever, it does the conversion, or at least asks you if you want to do the conversion. I think. I usually have Max in generic mode, since most of the time I don't really care much about real-world scale, so I don't run into that problem much. I figured 6.5 was in cm, since that would make sense for a base interocular. I tend to just guesstimate most of my stereo stuff. I also do a lot of 3D printing, and for whatever reason, if I set Max to any scale - inches, mm, cm, whatever - it throws things out of whack when I export my STLs. If I leave it in generic mode, the units always seem to export in mm. I think that's another reason that, although I'm in the US as well, I've been thinking in metric for the past 3 or 4 years vs. our weird English system.

            But with the Oculus it does make sense to use that as the normal setting, though depending on the situation I can still imagine it needing to change from time to time. I think the most advanced stereo setup, even for the Oculus, would be dynamic depending on the situation. Imagine a Superman flying sim (no cockpit or very near objects): if you don't spread the interocular, you will probably get very little depth unless something comes pretty close to you. Of course that assumes you are using real-world scales and object sizes. When flying, most things are pretty far away, and to make them feel more 3D you usually use a much larger interocular. A stereo rig intended not to be scientifically real but to give you a better sense of depth would do some analysis of the view and adjust the interocular on the fly to make the stereo more fun - lowering it slowly as you came closer to the ground, or increasing it as you moved away.

            Another fun trick is to use multiple interoculars or do some stereo 3D space warping. Multiple interoculars are the easiest thing to do. Take a flying situation, this time in an airplane, where you want the cockpit and instruments to look more correct but still make the outside feel like it has some depth: you render the close objects, like the cockpit, with the normal 6.5cm/2.5" interocular and adjust the outside world on the fly. I've heard talk of even more interesting ways of doing 3D space transforms, using 3D gizmos and such to do this in a much more advanced way, though I've never seen it in action.
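
            A minimal MaxScript sketch of that on-the-fly idea, easing the eye distance toward 1/30 of the nearest-object distance each frame (the clamp range, easing speed, and the 1/30 factor are all illustrative assumptions):

            -- Ease the interocular toward ~1/30 of the nearest-object distance,
            -- clamped so it never collapses to zero or blows way out. Units: cm.
            fn dynamicInterocular nearestDist current speed:0.1 ioMin:1.0 ioMax:30.0 =
            (
                local target = amax ioMin (amin ioMax (nearestDist / 30.0))
                current + (target - current) * speed -- returns the eased value
            )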

            I think I tried one time to use vrayRT with the Oculus and it was kind of working. I really haven't had enough time to play with my Oculus and the computer. I've been working more on some FPV stuff with my Oculus when I've had time to play with it.

            The other missing piece for the Oculus and 3ds Max is a driver so you can use it as a real-time controller to connect to a camera or other rig of some kind. With that, and a good lens profile to plug into V-Ray, I bet I would use my Oculus a lot more.
            t1t4
            www.boring3d.com



            • #7
              update --

              George Renner (http://rennervfx.com/) posted a script on the Oculus VR forums for automating the slice rendering technique for a stereo pair of spherical renderings - http://rennervfx.com/share/StereoPanoRender.ms

              Although I haven't had much luck with it (I get gaps between slices, or alignment issues with too few slices), I'd told a friend about my situation earlier and he put a crude script together that does basically the same thing. I thought I'd share the results - they're especially interesting if you have an Oculus to view them with. Otherwise, just loading them into VR Player with mouse-look could work.

              https://www.dropbox.com/s/6pcmokajxh...mation_001.rar
              https://www.dropbox.com/s/brwwnh4j5p...ated_3d_v2.rar

              I could see this script process working well for hassle-free single images, but it needs the ability to utilize a render farm - either through DR or Backburner - to really be viable for animations.
              Brendan Coyle | www.brendancoyle.com



              • #8
                Now with the release of Development Kit 2 (http://www.oculusvr.com/dk2/), there's a new challenge at hand. With '6 degrees of freedom' there's a head-tracking component that recognizes your head translating in space, so you've got both rotation & translation, to a certain degree.

                My immediate thought was some combination of rendering a stereo panorama along with corresponding z-depth renders that could be used to displace the images. The depth map worked terribly for creating a second eye, but hopefully with a color and depth render for each eye the displacement tearing will be less painful, and the subtle shifting/parallax will only increase the believability/realism of the view you're looking at.
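
                As a back-of-the-envelope check on how much shift that needs: for a sideways head translation t and a point at depth d, a pixel should move by roughly t/d radians in the pano (small-angle approximation). In MaxScript terms:

                -- Approximate angular shift (in degrees) of a pano pixel when the
                -- head translates sideways by t, for a point at depth d (same units).
                fn parallaxShiftDeg t d = radToDeg (t / d)
                -- e.g. a 5 cm head move against a wall 2 m away:
                -- parallaxShiftDeg 5.0 200.0 --> ~1.43 degrees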

                So I'm back to rendering slices and stitching manually so I can get the depth pass - and hoping a VR player updates with a solution to what I have in mind.

                Another direction would be some way to pull V-Ray's beautiful work into a real-time engine - either baking onto the mesh (which has never worked smoothly for me), or some sort of point-cloud capture, meshing, and reprojection. Can deep EXRs store data behind points? What if V-Ray rendered what wasn't visible from the camera? Anyone have any thoughts on methods I might experiment with?
                Brendan Coyle | www.brendancoyle.com



                • #9
                  I've since been exploring a baking & export workflow into a real-time engine.

                  But check this out, http://render.otoy.com/newsblog/?p=547
                  It looks like an interesting method of capturing a digital point of view, while giving you some freedom of movement. Would be perfect for viewing in the Rift with its positional tracking.

                  Static points of view are still the best way to let people peer into a virtual world, with the least amount of motion sickness.
                  Brendan Coyle | www.brendancoyle.com



                  • #10
                    This method looks to be a step in the right direction - light field synthesis - and looks quite similar to what OTOY was showing in the previous post.

                    https://forums.oculus.com/viewtopic....234521#p231200
                    http://www.reddit.com/r/oculus/comme...orking/cn6pc43
                    Brendan Coyle | www.brendancoyle.com



                    • #11
                      That'd be a hell of a lot of rendering with V-Ray though, yeah?

                      Or I guess an optimised version would mean there'd be shared rendering of diffuse/lighting, with just individual reflection/refraction/specular...


                      Interesting stuff...
                      Maxscript made easy....
                      davewortley.wordpress.com
                      Follow me here:
                      facebook.com/MaxMadeEasy

                      If you don't MaxScript, then have a look at my blog and learn how easy and powerful it can be.



                      • #12
                        Originally posted by Dave_Wortley View Post
                        Or I guess an optimised version would mean there'd be shared rendering of diffuse/lighting, with just individual reflection/refraction/specular...
                        Diffuse lighting can be optimized, but reflections/refractions/speculars will need to be retraced. They need quite good precision and I don't think this particular technique can handle the amount of data needed for high-resolution reconstruction of an arbitrary view.

                        Best regards,
                        Vlado
                        I only act like I know everything, Rogers.



                        • #13
                          Another interesting video of creating pre-rendered content with parallax. Not sure how they're doing it.
                          https://vimeo.com/116051376
                          Brendan Coyle | www.brendancoyle.com



                          • #14
                            It appears that Octane's VR rendering is out now
                            http://www.reddit.com/r/oculus/comme...for_rendering/
                            http://render.otoy.com/newsblog/?p=547
                            Last edited by cheerioboy; 06-02-2015, 12:40 PM. Reason: added another link
                            Brendan Coyle | www.brendancoyle.com

