Skeleton Tracking with Kinect and Processing

Ever since PrimeSense open sourced the Kinect middleware, it’s been clear that body tracking interfaces were going to rapidly come within reach of hobbyists and artists.

PrimeSense demo video of skeleton tracking with the Kinect and their OpenNI middleware

The PrimeSense middleware takes the depth data coming from the Kinect camera and performs a process called “skeletonization”. It detects individual users who are within view of the camera and then tracks the positions of their bodies. All of a sudden, as a programmer, rather than having access to the pixels of a flat image, or even the depth information from the raw Kinect data, you now have a description of your users as a series of joints in space. For many applications, especially gesture-based interfaces and motion capture, this is exactly the data you need to get started.

Unfortunately, when the PrimeSense middleware was released, the software reference implementation that went along with it was Windows only. Slowly but surely since then, this situation has improved. PrimeSense has released packages for Linux and OS X, and the open source community has started working to integrate them into more accessible environments like Processing and openFrameworks. This process is not yet complete, but it has gotten to the point where an adventurous person can get started. This post documents my first successes doing just that.

The first step is getting all of the dependencies installed — a bit of a challenge at this point, but possible. (Note: this post is based on my experience of doing this on OS X and will be specific to that.) Tohm Judson’s OpenNI to Max/MSP via OSC tutorial is the best place to start. If you have never installed MacPorts before, his directions will go rather smoothly (though there are a lot of steps). If you were unfortunate enough to have installed MacPorts before upgrading to Snow Leopard, you’re in for a bit of a struggle, as the MacPorts automatic upgrade path seems to have gotten badly broken with that change. After much frustration, I managed to uninstall all the ports that were affected by the upgrade problem (the MacPorts migration page is a good place to start if you’re in a similar situation) and then proceeded through the rest of the steps outlined in Judson’s tutorial.

Judson’s tutorial is based around OSCeleton, a proxy that broadcasts the skeleton data from the Kinect middleware as OSC messages. OSC is a standard format for real time messaging, similar to MIDI, and is supported in many languages and platforms, including Processing and openFrameworks. Once you’ve successfully gotten to the end of Judson’s tutorial, you’ll have OSC messages representing the skeleton data being transmitted, and you can start writing your own code that receives those messages and does whatever you want with the information.
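
To make the receiving end concrete, here’s a minimal Processing sketch using the oscP5 library. The /joint address pattern, the default port 7110, and the name/user-id/x/y/z argument layout are what OSCeleton was sending in my setup — treat them as assumptions to verify against your version:

```java
import oscP5.*;

OscP5 osc;

void setup() {
  size(640, 480, P3D);
  // OSCeleton broadcast port -- 7110 in my setup, verify yours
  osc = new OscP5(this, 7110);
}

void draw() {
  background(0);
}

// oscP5 calls this for every incoming OSC message
void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/joint")) {
    String joint = msg.get(0).stringValue(); // e.g. "r_hand"
    int userId   = msg.get(1).intValue();    // which tracked user
    float x      = msg.get(2).floatValue();
    float y      = msg.get(3).floatValue();
    float z      = msg.get(4).floatValue();
    println(joint + " (user " + userId + "): " + x + ", " + y + ", " + z);
  }
}
```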

Once I’d gotten everything successfully installed, I ran the OSCeleton Stickmanetic example just to make sure things were working:

This sketch simply uses the skeleton position information in 2D as an obstacle to some particles falling out of the sky, with Box2D for physics. It’s relatively silly, especially the choice of connecting the shoulder joints directly to the head rather than to the neck, which would seem a lot more intuitive, but it did prove to me that everything was installed and working successfully.

Then, as a basis for my own code I started with the OSCeleton Processing MotionCapture3D example. This is a Processing sketch that reads the incoming OSC messages from OSCeleton, converts them into points in 3D space representing each of the joints of the body and draws a series of spheres at those points.

I also wanted to add lines between the joints, so, after some experimentation, I used Processing’s beginShape() function and treated each adjacent pair of joints as the vertices of a line. In working through this exercise, I constructed the following map of how OSCeleton names each joint:

Obviously, I’m only going into detail on the right side of the body, but equivalent nodes are available for the left arm and leg as well. In addition, it’s worth noting that for whatever reason I wasn’t actually seeing any collar, finger, or ankle joints. I don’t know what causes these to not come across, but in my setup they were not appearing in the OSC messages sent by OSCeleton.
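
Here’s a simplified sketch of that drawing approach, assuming the joint positions have already been parsed out of the OSC messages into a HashMap (the oscEvent() above would do joints.put(joint, new PVector(x, y, z))). The joint names are what OSCeleton reported on my setup, minus the collar, finger, and ankle joints that never showed up; the null check skips joints that drop out of view rather than drawing them at the origin:

```java
import java.util.HashMap;

HashMap<String, PVector> joints = new HashMap<String, PVector>();

// pairs of joints to connect with a line, one row per limb segment
String[][] limbs = {
  {"head", "neck"}, {"neck", "torso"},
  {"neck", "r_shoulder"}, {"r_shoulder", "r_elbow"}, {"r_elbow", "r_hand"},
  {"neck", "l_shoulder"}, {"l_shoulder", "l_elbow"}, {"l_elbow", "l_hand"},
  {"torso", "r_hip"}, {"r_hip", "r_knee"}, {"r_knee", "r_foot"},
  {"torso", "l_hip"}, {"l_hip", "l_knee"}, {"l_knee", "l_foot"}
};

// call this from draw() once the 3D view is set up
void drawSkeleton() {
  stroke(255);
  beginShape(LINES);
  for (String[] limb : limbs) {
    PVector a = joints.get(limb[0]);
    PVector b = joints.get(limb[1]);
    if (a == null || b == null) continue; // joint not currently tracked
    vertex(a.x, a.y, a.z);
    vertex(b.x, b.y, b.z);
  }
  endShape();
}
```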

Once I’d successfully finished drawing the vertices, I tried my sketch out with my roommate. Lo and behold, you can track two completely separate users no problem.

A couple of notes about this video. Its sluggishness was caused by the screen capture software I used to record it, not the code itself. When not recording, it ran smoothly at a much higher frame rate on my machine. Also, many of the glitches here are caused by the constrained space of my room. The Kinect can obviously only process the parts of your body that it can see. My room is cramped enough that the two of us could barely fit within the Kinect’s field of view simultaneously. Some of the weird glitches you’re seeing here are when individual joints disappear from view and my code draws them as if they were at the top left corner of the screen.

But now that I’ve gotten the skeleton data into a form that I can use, what to do with it? The first thing I thought of was to use it to change the view of the skeleton itself. After all, even though I’m gathering this data in 3D, you’d barely know it from the display you’re seeing here. And most 3D browsing interfaces are incredibly unintuitive and hard to learn; maybe that’s an area of design where full-body gestures could actually be useful.

I added the Obsessive Camera Direction library to my Processing sketch. OCD is the best Processing camera library I know for intuitive control of the viewport. It has slightly more controls than the commonly used PeasyCam, but is less surprising in my experience.

After I had OCD installed, I configured it to always aim at the joint representing the right hand of the detected figure. Then I calculated the distance between the right and left hands and made that distance control the zoom: moving your hands closer together causes the camera to zoom in, moving them further apart zooms out. Finally, I made it so that raising both hands above your head rotates the camera around the figure one way and moving both hands below the hips rotates it around the opposite way.
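
Stripped of the OCD specifics, the zoom mapping boils down to something like this — a simplified sketch using Processing’s built-in camera() instead of the OCD API, with illustrative scaling constants (the ranges depend on how you scale the incoming joint coordinates):

```java
PVector rightHand, leftHand; // updated from the incoming OSC joint messages

void updateCamera() {
  if (rightHand == null || leftHand == null) return;

  // distance between the hands drives the camera distance:
  // hands together = zoomed in (camera close), hands apart = zoomed out
  float handSpread  = PVector.dist(rightHand, leftHand);
  float camDistance = map(handSpread, 0, 1200, 200, 2000);

  camera(rightHand.x, rightHand.y, rightHand.z + camDistance, // eye
         rightHand.x, rightHand.y, rightHand.z,               // aim at right hand
         0, 1, 0);                                            // up vector
}
```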

Here’s what it looked like when I started playing with it:

The code for this is available here: controlling a 3d camera via gestures with kinect in Processing. This video is dramatically improved from the one above because, in the interim, I discovered MovieMaker, a built-in class that makes it incredibly easy to record movie files of a sketch directly within Processing.

An obvious next experiment along this path would be to use this interface to navigate more interesting 3D data, like a pre-existing 3D model. It would be especially cool to use your head to determine the location and angle of a camera within a 3D space, providing navigation and recording of a virtual environment, and then to use the position of your hands to fast forward or rewind various 3D motions being played back within that environment.

Another interesting area that I plan to explore soon is creating 3D “hot spots” for interfaces. In other words, mapping particular parts of 3D space to various application controls that can then be triggered by moving different parts of your body into them. Matching these hot spots to actual physical objects or locations within a real room is particularly interesting. Imagine: bringing your left hand near the top of your bookcase turns on a light, putting your right hand there turns it back off, etc. The possibilities are endless.
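
A hot spot can be as simple as an axis-aligned box in tracked space, tested against a joint position each frame. Here’s a rough sketch along those lines — the box bounds and the light-toggling action are invented for illustration, and the joints map is the one from the skeleton sketch above:

```java
// a rectangular region of tracked space that fires when a joint enters it
class HotSpot {
  PVector boxMin, boxMax;
  boolean occupied = false;

  HotSpot(PVector boxMin, PVector boxMax) {
    this.boxMin = boxMin;
    this.boxMax = boxMax;
  }

  // true only on the frame the joint first enters the region,
  // so a toggle fires once instead of flickering every frame
  boolean entered(PVector joint) {
    boolean inside = joint != null
      && joint.x > boxMin.x && joint.x < boxMax.x
      && joint.y > boxMin.y && joint.y < boxMax.y
      && joint.z > boxMin.z && joint.z < boxMax.z;
    boolean fired = inside && !occupied;
    occupied = inside;
    return fired;
  }
}

// hypothetical region near the top of the bookcase
HotSpot lampSwitch = new HotSpot(new PVector(100, 0, 500), new PVector(300, 200, 900));

// call from draw() each frame
void checkHotSpots() {
  if (lampSwitch.entered(joints.get("l_hand"))) {
    println("left hand entered the bookcase hot spot: toggle the light");
  }
}
```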

Night Flight: a miniature light sculpture cityscape using fiber optics

Flying over a dense city at night is a singularly modern experience. The confusions and dislocations of street-level life flatten into schematic clarity; the electric glow gives the terrain the sci-fi feeling of electronic circuitry; the unlit natural areas — parks, rivers, bodies of water — disappear into invisible black pools of absence. All of it lies far below you, beautifully complex and detailed, yet seemingly laid out specifically for your comprehension.

Recently, Allison Eve Zell and I set out to build a light sculpture that would reproduce some of these qualities.

We started by acquiring fiber optic lights from a local lighting store. These are glass or plastic fibers that transmit light from one end to the other. Hence, when attached in a bundle to a strong light source at one end, the fibers can produce a series of tiny, intense points of light — the perfect thing for emulating a nighttime cityscape.

One of the inspirations for trying this technique was knowing that fiber optics had been used in the miniature shots for Blade Runner. In fact, I recently saw one of the actual miniatures used in that film at the Museum of the Moving Image, albeit without the fiber optics turned on:

Blade Runner miniature fiber optics

This YouTube video: Mark Stetson Blade Runner Tyrell Pyramid (embedding disabled) shows some behind-the-scenes footage of the making of these miniatures, and the Blade Runner DVDs have a great documentary on the subject.

After some experimentation with these and a small halogen light, we started working on shaping the fibers into the city grid. We downloaded and printed out a map of lower Manhattan to act as a guide.

We decided to focus on a slice of the city around the lower edge of Central Park, including both rivers, and parts of New Jersey and Brooklyn. We attached the map to a piece of black construction paper and proceeded to punch holes along the streets and buildings anywhere we wanted to place a light.

Punched midtown holes

Once this was in place, I built a wooden box to hold the map in a fixed position above the lights so we could mount the fibers and thread them through the holes. We mounted a piece of translucent orange material over one of the bundles of fiber so that some of the lights would have that signature orange quality of street lights in Manhattan. Then we spent a couple of days in the painstaking hand work of putting individual fiber optic strands through tiny pin prick holes.

Side view of inside of sculpture with fibers installed

Once these were all installed, I gave the fibers a kind of haircut, chopping them down to size so they’d float close above the black cardboard, like building and street lights.

At this stage, all that remained was to install a mirror within the box and to close the box up with black foam core so that the viewer would see it through a peephole. The mirror increased the illusion of depth within the box, let us control the angle and framing of the lights, and echoed the round shape of an airplane window. Beyond the obvious necessity of keeping the box dark, forcing the viewer to look through a peephole had the additional effect of helping to transport you to another perspective. With your eye pressed up against the small hole, it really felt like you were suddenly a few thousand feet up, getting ready to descend towards one of the area airports. In the critique, one of our classmates even suggested we should add the smell of stale coffee and carpet to complete the effect.

The final piece is very hard to document, but here’s a photograph that will give you an idea of what it looked like through the peephole:

Night Flight

Looking Up At The MJT

Last month, I participated in two sessions at the 2011 Modern Language Association conference in Los Angeles. I was recruited by acquaintances associated with the MLA because of my relationship with David Wilson at the Museum of Jurassic Technology. Given the conference’s presence in LA, the MLA wanted to have David give a talk, but were having trouble getting ahold of him.

After some back and forth, we agreed that Craig Svonkin from the Metropolitan State College of Denver (the organizer of PAMLA at which I presented in 2009 on Project Cybersyn) and I would interview David on stage and that I would participate in a panel discussion afterwards with four other scholars.

David is a terrific speaker, but he’s usually quite cagey on the topic of the Museum itself and the motivations behind it, at least in public, preferring to discuss the content of individual exhibits — early 20th century Russian spaceship designers or micro-miniature sculptors. The idea of the interview format was to draw him out on that subject while making the experience less painfully confessional for him.

After much correspondence and an extensive trip to the Museum to talk with David in the days before the conference, Craig and I came up with a set of eight questions. We also left time for David to give a brief slide-based tour of the Museum, to add follow-ups on the fly, and to let the audience get in a few of their own. Unfortunately, the session wasn’t recorded, but I’m hoping to gather some notes taken by a few audience members and I’ll post those here when I get them. In the meantime, here were the questions Craig and I composed in advance:

1. CRAIG: Can you tell us a bit about your childhood experience of museums? Were there museums that you visited frequently or of which you have particularly strong memories? Why those?

2. GREG: For many lovers of the MJT, the qualities of light and sound within the museum are important to its aesthetic and emotional impact. You studied film at CalArts and worked in special effects and animation, and much of your focus now is on making films. How does making the museum exhibits relate to filmmaking?

3. CRAIG: Can you describe how a single exhibit, such as Tell It To The Bees or Kircher, came about, from its inception to its completion?

4. GREG: It seems to me that the MJT has really transformed in the last ten years or so, perhaps focusing more of its exhibits around extraordinary people. Do you agree that the MJT has changed, and if so, how do you see this change?

5. CRAIG: Entering the MJT often feels like departing from the quotidian world outside, but much of the museum’s aesthetic seems to reflect the wider cultural values of Los Angeles. How have the museum’s immediate surroundings and neighborhood shaped it over time?

6. GREG: Having worked briefly at the museum, I know that there’s very much a family or community of people behind the scenes helping shape and execute your ideas. The museum is, in many ways, your home. Will you talk about how this domestic and social environment has shaped what the public sees at the museum?

7. CRAIG: Are there any ideas for exhibits that just didn’t pan out?

8. GREG: Mortality and decay seem to be repeating themes at the MJT. To broach an uncomfortable subject, have you thought about how the MJT might continue after you’re gone?

The session went really well. I think David felt comfortable talking directly to Craig and me and was able, somewhat, to forget the presence of the larger audience. Lots of people came up afterwards to say that it was a much more detailed and intimate look at the Museum than they’d seen elsewhere.

After the conversation session was over, the panel began. Organized by Andrew Howe, the panel also included Kristen Koster, Jem Axelrod, Jeanne Scheper, and M. Catherine Coleman. Unlike the other scholars, who gave theoretical, historical, and political interpretations of the MJT, I tried to present some close observations of how the Museum uses light to create specific emotional and spatial effects. As an artist and visual person, I find that too much of the academic discussion of the Museum treats it as a “text” to be read rather than the profoundly visual, theatrical, and bodily experience that it is. Also, as someone close to the Museum staff, I didn’t feel that I had the detachment or impartiality for straight interpretation. But, simultaneously, as someone who’s spent quite a lot of time in the Museum over the last ten years (as opposed to a few of the panelists, who had briefly visited for the first time during their trips to LA for the conference), I thought that my detailed observations might be able to inform the other interpretations, possibly even restraining some of their more theoretical flights of fancy.

I’ve included the slides from my presentation at the bottom of this post; they’re mostly pictures of the parts of the Museum above, around, and behind the exhibits: the clusters of lights and mirrors that illuminate the objects in the displays. Below, I’m also including a brief essay I wrote on returning to New York that tries to translate some of the visual experience of navigating the MJT into language.

White text glows with reflected golden light. Deep shadows rake angles across the text’s plexiglass panel, leaving its corners illegible. But its edges catch internal reflections and their incandescence seems to float the panel off the wall.

As you lean to look closer at the jawbone fragment the text explains, the darkness around you thickens. You’re blind besides this text, this bone. And the space of the museum behind you stretches out, other exhibits rushing away, footsteps and whispered chats muffled.

The warm light pinkens the bone of the jaw, but somehow leaves the teeth a pearlescent white. The cracks on the bone’s underside cast deep shadows and you only notice the black rod that floats it above its platform when you’re leaning over it, casting your own shadow.

This movement, this break in the light, feels so violent that you reel, turning to look over your shoulder for its source. You see instead the source of the light: two ordinary bulbs racked on a totally unromantic clutter of hardware and clamps in the room’s corner. They’re cobweb-covered and dusty. Beneath them you notice a gap between wall panels — a wound in the space — metallic tubing and sloppily splayed cable stuffed just behind.

You’ve never looked up before, when you’ve been here.

Until this moment your image of the place ended not far above your head, fading into unconsidered fuzziness. But now you’re really seeing it. The tops of false walls that don’t meet the ceiling end abruptly without molding. Apparatus bristles: lights in jury-rigged fixtures, cheap commercial speakers, even extension cords. A whole hidden Home Depot shelf’s worth of guts labors to produce the soft pools of light, the gentle encompassing blindness.

Slowly you turn back towards the jawbone to look again at the lights’ effects. This time you notice your eyes adjust; you see the bright light force your irises to clamp down against it, hiding the dimmer surroundings like the low glass case of pinned butterflies off to the left.

As you settle in to look you realize that even though you’ve now seen how the trick is done, its effect remains undiminished. The box of warm light reflected up onto the text by the brass rectangle beneath it, the lushness of the red cloth covering the plinth, the rich shadow on the riser behind the bone, these things still stir an unnameable feeling triangulated between reverence, delight, and a deep melancholy.

After letting this feeling hold you a moment more you pull back and turn away, moving past the butterflies and on into the rapidly receding dark.

Looking Up at the MJT
