While digging around the other day, I found this really cool experimental film, shot on a Canon EOS 5D Mark II, that combines the Kinect for Xbox 360 with HDSLR footage, and it has some exciting potential for future use. It turns out the RGB+D team, James George and Jonathan Minard, are also releasing the software as open source for anyone to use and improve upon: it is called the RGBDToolkit.
On the RGBDToolkit page, they also have several more video samples and instructions on how to build a mount that holds the Kinect and the HDSLR on the same tripod, as well as the checkerboard you'll need to print to calibrate the Kinect.
Though they used a stationary, single-camera setup, the 3D Kinect data they capture lets them render out the images any way they can dream up (and code up) and rephotograph their subjects from any angle in post-production.
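The "any angle in post" trick rests on simple pinhole-camera math: every depth pixel is back-projected into a 3D point using the sensor's intrinsics, and the resulting point cloud can then be projected into any virtual camera you like. A minimal numpy sketch of that round trip — the intrinsic values below are illustrative placeholders, not the Kinect's actual calibration:

```python
import numpy as np

# Illustrative pinhole intrinsics (focal lengths and principal point, in pixels).
# Real values would come from calibrating the Kinect with the checkerboard.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def depth_to_points(depth):
    """Back-project a depth image (meters) into an Nx3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop pixels with no depth reading

def project(points, rotation, translation):
    """Project 3D points into a virtual camera posed by (rotation, translation)."""
    cam = points @ rotation.T + translation
    cam = cam[cam[:, 2] > 0]           # keep only points in front of the camera
    u = FX * cam[:, 0] / cam[:, 2] + CX
    v = FY * cam[:, 1] / cam[:, 2] + CY
    return np.stack([u, v], axis=-1)

# Example: a flat "wall" 2 m away, rephotographed from a camera moved 0.5 m right
# (a rightward camera move is a leftward translation of the points).
depth = np.full((480, 640), 2.0)
cloud = depth_to_points(depth)
shifted_view = project(cloud, np.eye(3), np.array([-0.5, 0.0, 0.0]))
```

Colors for each point come from the calibrated DSLR frame the same way, by projecting the cloud into the HDSLR camera's pose and sampling the video pixel that lands there.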
Clouds features prominent and emerging computer hackers, media artists, and critics talking about the creative use of code, the future of data, interfaces, and computational visuals. It is presented as a series of conversational vignettes centered around thematic topics. The subjects of the film float in a black void, their figures composed of tiny points connected by lines that flicker and break apart at the edges. They're made out of pure computational matter, the same class of material the artists depicted work with on a daily basis. The entire film is shot using a DSLR attached to a Microsoft Kinect sensor and rendered using our open source editing suite, the RGBDToolkit, to generate a true CGI and video hybrid. The Clouds video is a step along the longer path for this concept. We are continuing to chase the idea of creating an infinite conversation in the form of an interactive application for navigating clouds of related dialog. In the final instance of the documentary, the user will send a query resulting in a stream of figurative point clouds speaking to the subject. We imagine this working in a real-time environment where the viewer can control the camera and choose who to listen to by flying through the space virtually.
A Kinect + DSLR overview [tentblogger-vimeo 40058384]
Documentation of our workshop at Resonate Festival in Belgrade 2012.
We taught a one-day intensive workshop on how to record a unique type of video using a Kinect paired with a digital SLR and the RGBDToolkit.
rgbdtoolkit.com + rgbd.tumblr.com
The RGBDToolkit has been developed with support from the STUDIO for Creative Inquiry at Carnegie Mellon University (studioforcreativeinquiry.org/)
Edited by Jonathan Minard
We also found an article with an interview of the team and more info – here's a sample…
What was the goal in developing the RGB+D toolkit and the documentary itself?
James: The goal behind the toolkit is to capture our aesthetic research and make discoveries repeatable. We’ve been working with the Kinect+SLR technique since Alexander Porter and I first experimented in the NYC subway with a sensor rubber-banded to an SLR. Every time we do a new project the tool grows. Because everyone involved has different backgrounds, it’s super important that the tool can be used by people who aren’t programmers, so everything is controlled through a graphical user interface.
Jonathan: As a documentary filmmaker with a background in new media art and anthropology, I am interested in stories of invention and discovery: how people develop new tools and how those tools shape the collective imagination. This project has all those elements. It's no coincidence that this documentary, realized in a computational format, explores the subject of code and culture. Form should reflect content. A principal feature of this project is that it's a documentary about media art in which the movie is itself a new media experiment.
What do you think?
Is that cool? What would you use it for?
(cover photo credit: snap from the video)