This past weekend, I participated in the most interesting hackathon I've attended. Sponsored by the PBS documentary series POV, the hackathon paired media makers with technologists, challenging the teams to experiment with adding interactivity to the documentary process. My colleague Greg Prentice and I were paired with Kelly Sears, an animator and filmmaker, and our challenge was to use technology to help her tell her story about tracking and surveillance.
In two days' time, we ended up creating a Windows Store app that used multiple layers of video, still imagery, map imagery, webcam output, and multiple audio tracks to dynamically collage and surface content based on the user's interaction and location. We decided to take advantage of the fact that a Windows tablet, like many other tablets, has an array of sensors (cameras, geolocation, and an inclinometer that measures the device's orientation) which, coupled with the touch screen, allow a wide range of inputs to serve as triggers for dynamically altering the experience of the documentary.
We wanted the technology to fade into the background rather than call attention to itself – truly content over chrome. We also wanted to experiment with breaking the rules of the traditional video playback commonly seen on the web and in apps. As in the cinema, there are no playback controls and no time display – just the visual imagery filling 100% of the screen for the entire playback of the experience.
If you look at Kelly’s previous film work, you will see that she employs a collage animation technique, using After Effects for compositing, masking, and collaging video with still images. We wanted to use interactive techniques to create dynamic layers in the experience that the user could manipulate, both intentionally and unintentionally, through their interaction with the device. We wanted the dynamic technique in the app to both mirror and complement her static techniques in After Effects.
I did a quick sketch/watercolor in my journal of the prototype, which we built in Visual Studio and Expression Blend:
In the app, we had multiple layers of content (video, map imagery, webcam output, a looping texture) and varied each layer's opacity depending on how the user interacted with the device. We also played multiple simultaneous audio streams and varied the volume of each stream based on those same interactions:
- Laying the tablet flat triggered a visual state in which the map imagery became visible and a droning sound grew louder.
- Touching the screen triggered a similar visual state in which the map imagery was visible, but it appeared more quickly and a different droning sound played.
- In the narration, when certain words like “Viewer” were spoken, we flashed a quick image of the viewer captured through the webcam (like Fight Club) – these flashes were triggered by media markers with time codes added to the video.
- We used the geolocation sensor (GPS) to animate a slowly scrolling satellite imagery map from Bing. Starting at a point 0.5 degrees of latitude/longitude away from the user's current location in a random direction, we timed the map so that it arrived directly over the user's location just as the video finished.
- At the start of playback we log the user's geolocation to a Microsoft Azure Mobile Service, and at the end of the video we zoom the map out to show where everyone who has viewed the experience is located. We also superimpose that view with the user's own webcam feed. The filmmaker wanted to make a point about surveillance: we often take for granted how we are being tracked.
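To give a feel for the tilt- and touch-triggered states above, here is a minimal sketch of the selection logic. The actual app was C#/XAML driven by the Windows inclinometer API; this Python version, with assumed state names and an assumed flatness threshold, just shows the shape of the decision:

```python
def visual_state(pitch_deg, touching):
    """Choose a visual state from sensor input.

    pitch_deg: device tilt from horizontal, as an inclinometer might report.
    touching:  whether a finger is currently on the screen.
    State names and the threshold are illustrative, not the app's actual values.
    """
    FLAT_THRESHOLD = 10.0  # degrees from horizontal; assumed value
    if touching:
        return "map_fast"   # map imagery appears quickly, different drone track
    if abs(pitch_deg) < FLAT_THRESHOLD:
        return "map_slow"   # tablet lying flat: map fades in, drone swells
    return "video_only"     # default playback state
```

In the real app each returned state would map to a Visual State in XAML, animating layer opacities and audio volumes.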
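The word-triggered webcam flashes come down to checking, on each playback tick, whether the current position has crossed a marker's time code. The app used the media player's built-in marker events; this Python sketch, with hypothetical marker times and action names, shows the equivalent check:

```python
# Hypothetical marker table: narration time codes (seconds) mapped to actions.
MARKERS = [(12.4, "flash_webcam"), (47.0, "flash_webcam"), (63.2, "show_map")]

def due_actions(prev_time, now, markers=MARKERS):
    """Return actions whose time codes fall inside the playback
    interval (prev_time, now] - the per-tick check a player would run."""
    return [action for t, action in markers if prev_time < t <= now]
```

A "flash_webcam" action would briefly raise the opacity of the webcam layer, producing the subliminal-style cut.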
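The scrolling-map trick is two small pieces of math: pick a starting point 0.5 degrees away in a random direction, then interpolate the map center so it lands on the viewer exactly when the video ends. A sketch of that, assuming simple linear interpolation in latitude/longitude (the app animated Bing map imagery, not raw coordinates):

```python
import math
import random

def start_point(lat, lon, offset_deg=0.5):
    """Pick a map start center offset_deg away in a random direction."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    return (lat + offset_deg * math.sin(theta),
            lon + offset_deg * math.cos(theta))

def map_center(start, end, elapsed, duration):
    """Interpolate the map center so it arrives over the viewer's
    location (end) exactly when the video of length duration finishes."""
    t = min(max(elapsed / duration, 0.0), 1.0)  # clamp to [0, 1]
    return (start[0] + (end[0] - start[0]) * t,
            start[1] + (end[1] - start[1]) * t)
```

Because sin and cos trace a unit circle, the start point is always exactly 0.5 degrees from the viewer's location in coordinate space, only the direction varies per viewing.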
I would recommend participating in hackathons to anyone who uses code in their craft. They are where we got to experiment with some really fun and interesting concepts and ideas. You can read in the press that interactive technology is coming to film and video, and Microsoft is making big investments in this area with Xbox Entertainment Studios.
Take a look at this video that we put together to describe the prototype we built in two days. I’d love your feedback! Do you have a film or video project that you want to bring to life?