I've been hearing that the guys at Vision are working on a DL2 fixture for their visualizer. How will the content be handled for this? Will I need to network an Axon to my PC in order to view the DL2 in Vision, or will there be some sort of low-res version of the content (maybe watermarked) that I can download and use with Vision?
The way to visualize DL-2s using ESP Vision involves using a live video input. They have a fixture that will move this projected video around in your 3D space. It wouldn't be practical for any visualizer to simulate all DL-2 functionality natively: that would mean embedding our entire graphics engine in their application and running an instance of it for every DL-2 in the rig, with the added hassle of keeping it in sync with the DL-2 software version.
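To make the video-fixture idea concrete, here is a minimal Python sketch of how a visualizer fixture like this might turn 16-bit pan/tilt DMX values into an aiming direction for the projected video in 3D space. The pan/tilt ranges and channel pairing are assumptions for illustration only, not the actual Vision profile.

```python
import math

# Hypothetical ranges for illustration only; check the actual
# fixture profile for the real pan/tilt limits.
PAN_RANGE_DEG = 540.0
TILT_RANGE_DEG = 270.0

def dmx16_to_fraction(coarse, fine):
    """Combine a 16-bit coarse/fine DMX pair into a 0.0-1.0 fraction."""
    return ((coarse << 8) | fine) / 65535.0

def movement_to_direction(pan_c, pan_f, tilt_c, tilt_f):
    """Turn pan/tilt DMX values into a unit vector a visualizer
    could use to aim the projected video input in 3D space."""
    pan = (dmx16_to_fraction(pan_c, pan_f) - 0.5) * math.radians(PAN_RANGE_DEG)
    tilt = (dmx16_to_fraction(tilt_c, tilt_f) - 0.5) * math.radians(TILT_RANGE_DEG)
    # Spherical-to-Cartesian: pan rotates around the vertical axis,
    # tilt swings the beam up or down from straight ahead.
    return (math.cos(tilt) * math.sin(pan),
            math.sin(tilt),
            math.cos(tilt) * math.cos(pan))

# Example: pan centered, tilt a quarter of the way through its range.
print(movement_to_direction(128, 0, 64, 0))
```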
The current functionality works very well, and ESP Vision supports multiple video inputs. You can use Axon servers in your pre-visualization suite and be very successful writing accurate cues.
I hope this answers your question. Please let me know if you have any others.
This is something I was about to touch on as well, since I try to do as much pre-cueing as I can.
I am more concerned, though, about using scaling, positioning, and spherical correction as I program in Vision, since I often visualize to show a client realistic possibilities (such as putting a dozen DL2s in a nightclub).
Rendering in Max only goes so far.
Scott, I'd like to collaborate with you off-list if you're interested.
You're absolutely correct. ESP Vision is only going to listen to the movement channels of the fixture and use them to move the video input around in virtual space. You would actually output DMX to your media server for your global and graphic layers, which would then feed video to the Vision input.
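For anyone wiring this up, here is a rough Python sketch of that signal flow, assuming the DMX travels over Art-Net (UDP port 6454): the console broadcasts one universe, Vision patches the movement channels, and the media server patches the content channels. The channel layout in the example is hypothetical; substitute your actual fixture profile.

```python
import socket

ARTNET_PORT = 6454

def artdmx_packet(universe, dmx_data, sequence=0):
    """Build a raw ArtDmx packet (Art-Net opcode 0x5000)."""
    packet = bytearray(b"Art-Net\x00")
    packet += (0x5000).to_bytes(2, "little")    # OpCode: ArtDmx
    packet += (14).to_bytes(2, "big")           # protocol version 14
    packet += bytes([sequence, 0])              # sequence, physical port
    packet += universe.to_bytes(2, "little")    # SubUni / Net
    packet += len(dmx_data).to_bytes(2, "big")  # data length
    packet += bytes(dmx_data)
    return bytes(packet)

# One 512-channel universe; fixture patched at channel 1 (index 0).
dmx = [0] * 512
dmx[0:4] = [128, 0, 64, 0]  # pan/tilt coarse+fine -> Vision listens here
dmx[4:6] = [5, 255]         # e.g. media file / intensity -> media server
                            # (channel assignments are hypothetical)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(artdmx_packet(universe=0, dmx_data=dmx),
            ("255.255.255.255", ARTNET_PORT))
```

Both listeners see the same universe; Vision simply ignores the graphic-layer channels while the media server ignores pan and tilt, which is why one DMX stream can drive both.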