Showing at Ventana244 Gallery

I’m proud to announce that I’m going to be participating in my first gallery show, It Reminds Me, at Ventana244 in Williamsburg. The show opens February 11 at 7:30pm and will be up for the rest of February.

I’ll be showing my piece “LO: October 29, 1969”, which is a multimedia monument to the birth of the Internet.

Side by side detail

I’ll be posting more thorough documentation of ‘LO’ soon, but for now, here’s the description from the It Reminds Me flyer:

‘LO’ tells the story of the first Internet transmission between UCLA and Stanford on October 29, 1969. The piece dramatizes the interaction between the counterculture and the military industrial complex that was key to the formation of our contemporary technoculture. Two half-cube miniature scenes with projection and electronic screens are networked together to represent the two labs.

It Reminds Me also features two of my fellow ITP students, Nick Yulman and Merche Blasco. The curators at the gallery organized the show around common ideas in our work about “the relation between memory, technology and change”. Nick and Merche will both be performing with their pieces at the opening on February 11.


The Incident: Sweded!

Yesterday, for the last day of ITP’s 4-in-4, Mike Cohen, David Phillips, Spike McCue, Liza Singer and I sweded The Incident. “Sweded The Incident,” of course, when written out makes about as much sense as “Yahoo Serious Festival”, but let me explain what that means.

“Sweding” is a style of lo-fi recreation of movies that was invented by Michel Gondry for his film, Be Kind Rewind. Check out Gondry’s original explanatory video, “How To Swede”:

The basic idea of sweding is that you make your own version of an existing movie using whatever you have around. There are a ton of them on YouTube. It’s clearly a popular high school video class assignment. The thing that’s great about sweded movies is that since we all know the original movie being sweded, it’s easy to tell what’s supposed to be happening even when the camera work is dim and unclear, the actors are babbling near-incoherently, the props and costumes are made of garbage, etc. Your brain pretty much fills in the gaps from memory. It’s the same thing that makes Star Wars Uncut work.

One thing that both Mike and I love is The Incident, a retro 8-bit style iPhone game from Neven Mrgan and Matt Comi at Big Bucket Software. In The Incident you play Frank, an ordinary-seeming guy in a suit who begins to have a very bad day when every object in the world mysteriously starts falling out of the sky:

We’d never seen anyone swede an iPhone game and we thought this might be a perfect candidate. So, we recruited a group of fellow ITPers and we were off. Here’s the result (more about the making below):

The Incident: Sweded from Greg Borenstein on Vimeo.

We had a couple of basic planning meetings and spent a few days gathering cardboard from around the floor. Then yesterday we met at ITP around 11 and started clearing the furniture out of a classroom. We made a quick supply run to K-Mart and then started cutting out cardboard. We focused on memorable elements from the game: the balloons, the taxi cab, the flux capacitor, the monolith, etc. Once we’d built some of those, we basically moved on to trying to build any object we could think of.

The whole process was frantic and fun. Then, around 4pm, we started shooting. Mike had volunteered to play Frank, the main character of the game, and so he changed into his suit. He has a beard, so we thought (after I failed to convince him to shave for the project) that it would be a nice reference to Beard Mode in the game, which I won’t explain here for fear of spoilers.

We’d figured out beforehand that we wanted to frame the video by having it appear to take place within an iPhone. We spent the first stretch of the shoot working out the details of the opening credits and the transition into the phone. Once we finally got that working, the rest of the shoot went fast and was chaotic fun.

One of the hardest parts was actually playing the game long enough to achieve the Death by Monolith that matched the footage we’d shot. We almost gave up a couple of times before I got it by luck, when I was barely looking at the phone.

We finished shooting around 6 or 7, and then Mike and I spent another few hours working on it, him recording the music and sound effects, me doing the editing. We finished around midnight and uploaded it. It was a classic ITP day: 14 hours of crazy group work doing silly things that take you out of your element, all to bring a totally goofy, fun vision to life, with each person pushing the group to take the idea even further.

After we uploaded the video, I mentioned it to Neven and Matt on Twitter, who loved it, which was very satisfying.

Neven and Matt helped spread the word today and we’ve gotten lots of nice comments and even some coverage on Game Set Watch.


What Makes a Programming Language Good

A year and a half ago, I quit being a professional programmer. For the four or so years before that, I’d made websites for a living, either for clients or for my own startup.

At that time, if you’d asked me what matters in a programming language — what makes one good — I would probably have talked about features. The languages I knew (and loved) best were Ruby and JavaScript, so I would have sung you a song of the epic deeds of dynamic languages: the clean, uncluttered syntax, the power of easy extensibility, etc.

At that time I thought of a programming language’s virtues as inherent to the language. Having struggled, and mostly failed, to learn C and PHP before picking up Ruby, I felt strongly that I’d succeeded with Ruby due to its technical elegance. I’d failed with C and PHP for very different reasons: C because it was too technical, too close to the machine; PHP because it was too messy and cluttered. Ruby was the right balance.

But then I packed up my laptop and went to art school, where I share a floor with 200 other students, most of whom are beginner programmers. Hence, I’ve found myself helping a lot with other people’s code. This help ranges from debugging a simple typo, to working with someone until they really get objects for the first time, to teaching a semester-long weekly tutorial on web technologies.

Simultaneously, I’ve been learning a lot myself. I’ve wildly expanded the number and variety of languages that I work with. While I haven’t been programming professionally, I have been doing so creatively in a way that’s required me to take on everything from ATMega8 assembly to C++, Java, and Python.

These parallel experiences have deepened my love for Ruby while completely transforming my understanding of what exactly Ruby got so right. While I started off thinking it was Ruby’s intrinsic qualities that made it good, I now realize that most of what I love about Ruby is not language features, but rather systemic and cultural qualities that I took for granted at the time, never having experienced their lack.

Put simply, Ruby makes it easy to find and install code, easy to figure out what the code does and how to use it, and easy to run the code and see its results. That may sound like a statement so simple as to approach absurdity. Getting, writing, and running code are pretty much the only things that programmers do. However, it is increasingly shocking to me how painful these things can be in each new language I learn. I think the reasons for this are partially cultural and partially historical and I’ll return to them at the end of this post, but first I want to be more specific about what I think Ruby does so well — the parts of the Ruby ecosystem every other language most desperately needs to emulate.

Getting Code

It sounds kind of crazy when you type it out, but Rubygems might just be Ruby’s best feature. Rubygems is a package manager. It makes it possible to install Ruby libraries from the internet with a single command and then to use them with a single line of code. Importantly, Rubygems doesn’t force the user to think about where on their system the code is installed; they don’t need to know anything about paths or multiple versions of libraries or nested dependencies or anything like that. They just install the library and then use it. Rubygems also has a built-in search command that lets users find libraries, so, for example, if you needed a gem to wrap the Foursquare API, you’d immediately find four results, which you’d then be one additional command away from installing.
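To make that concrete, here’s roughly what that Foursquare session looks like at the command line (the gem name is illustrative):

    $ gem search -r foursquare    # query rubygems.org for matching gems
    $ gem install foursquare      # fetch and install one, plus any dependencies
    $ ruby -e "require 'rubygems'; require 'foursquare'"   # one line to load it in code

That’s the whole loop: find, install, require.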

Rubygems has two major characteristics that distinguish it from other languages’ dependency management systems: it is centralized and it takes the existence of the internet for granted. The gem system is organized around Rubygems.org, a community-maintained site that provides free hosting for gems and their documentation as well as APIs that make the command line discovery and publishing tools work so well.

While anyone is free to run their own gem server or distribute gems by Jaz drive or whatever other means they find appropriate (in fact, the current rubygems.org started off as Gemcutter, a competitor to the original centralized gem host), in practice 99% of publicly available code is on rubygems.org. And the technical infrastructure around gems makes it simple for users to configure their machines to access multiple hosts transparently. As we’ll see over and over again, this is an example of Ruby’s culture, which values competition to come up with the best way to do something, combined with rapid, total adoption of new systems whenever consensus is reached.
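Pointing your machine at an extra host really is a one-liner (the URL here is hypothetical); after that, installs draw on both hosts transparently:

    $ gem sources --add http://gems.example.com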

These characteristics also make it extraordinarily easy to publish a gem. It is a simple matter of moving your code into a couple of files in a specific format and then running some command-line tools that package and submit the gem via rubygems.org’s API. There’s no hosting to set up or maintain. It really is fire and forget.
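Concretely, publishing comes down to something like this (the gem name is hypothetical; gem push is the command that grew out of the Gemcutter work mentioned above):

    $ gem build foursquare_wrapper.gemspec    # package your code and metadata into a .gem file
    $ gem push foursquare_wrapper-0.1.0.gem   # upload it to rubygems.org via its API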

Figuring Out Code

But Ruby programmers don’t tend to fire their code and forget it. They tend to write thorough blog posts and clear READMEs, produce slick tutorial videos, and publish regular podcasts explaining it and teaching people how to use it. This is the next “feature” of Ruby that I most miss when dealing with other languages: the culture of documentation.

With many other languages and frameworks, when you google looking for code or answers, you’re most likely to find forum threads, Q&A sites, and a heap of broken links. With Ruby, you’ll usually find the gem’s RubyForge page and a recent blog post. To continue the Foursquare example I started above, I just popped “ruby foursquare api” into Google. The first two results were the RubyForge site for the API wrapper and a blog post from November about it.

At a deeper non-day-to-day level, Ruby has amazing materials for beginners: Chris Pine’s “Learn to Program”, which may be the best introductory programming text I’ve ever seen and, of course, the legendary _why’s Poignant Guide to Ruby. It also has great podcasts and about a billion blogs tracking all the most recent developments.

Ruby has these things because they are valued by the community. People like Geoffrey Grosenbach at Peepcode can make a living creating awesome, beautifully well-crafted educational materials because Ruby programmers are willing to pay for something so good. I know from personal experience that people who write about Ruby well and often get offered conference gigs and jobs.

Writing Code

When you’re working on software, anything that isn’t thinking, typing, or watching your code run is extraneous bullshit. This includes waiting for your editor to launch, waiting for your code to compile, editing build scripts, creating arcane directory structures, and modifying configuration files or boilerplate code, amongst many other things.

There is simply no other development environment I’ve found that includes as few of these things as the Interactive Ruby Console (IRB). Being able to input code and immediately see the results means you can explore an API, build a data structure, or test out an algorithm at a rate limited only by your mind and fingers.

While other languages and platforms have similar setups (notably JavaScript with Firebug and the WebKit Error Console), in my experience code written in IRB translates into “real” project code better than in any other such system.
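If you haven’t used it, an IRB session is just a running conversation with the interpreter. A trivial example, but the type-and-see loop is the whole point:

    $ irb
    >> words = ["getting", "writing", "running"]
    => ["getting", "writing", "running"]
    >> words.map { |w| w.capitalize }
    => ["Getting", "Writing", "Running"]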

Further, while I know there are people who like IDEs, for me they serve to cripple the exploratory play of coding so that it’s about as fun as filling out tax forms. A big exception here is the Processing and Arduino IDEs which, though imperfect as text editors, really do eliminate an immense amount of bullshit usually associated with building and running programs in Java and C++. But, in general, if your language, platform, or environment has such a complex dependency-management situation that it seems to demand an IDE, maybe you should solve that by, I don’t know, improving the packaging system, rather than also ruining the editor.

Granted, the command line also has a learning curve and can be intimidating for beginning users. But I don’t think these problems compare in any way to the ones created by IDEs or complex builds. There’s even an eccentric tradition in the Ruby community of trying to automate and GUI-ify the running of command-line scripts to address this problem (see, for example, Aaron Quint’s Vegas, and Locomotive for Rails from back in the day).

Having only the tab key, the up arrow, and the return key between having made changes in your code and seeing them run on the command line or in the browser is priceless.

Conclusion

Obviously there are ways in which these extrinsic qualities are made possible or at least easier by some of Ruby’s intrinsic properties. Rubygems would be harder to implement in a static language that couldn’t override its own built-in “require” method. The interactive console is mainly the domain of interpreted dynamic languages (even though compiled Erlang has one too).

But other languages with very similar technical aspects, notably Python, have not succeeded as fully on the things listed here, mostly for cultural reasons. For example, Python has at least three package distribution systems you’ll encounter when attempting to install software of any complexity (eggs, easy_install, and plain old downloading files into place). And great ideas in Python that need standardization (such as WSGI) have failed to achieve universal adoption, only to be picked up by the Ruby community to great effect (namely, Rack).

Why? I don’t know enough about the Python community to really answer, but since it’s basically the only major contrast between them, I’ll bet it has at least partially to do with the difference in the languages’ ages.

What makes Ruby different? Why does Ruby get these things right where so many other languages get them wrong? How did Ruby acquire the good parts of its culture? I think the answer lies, mainly, in Ruby’s youth. Ruby was first created in the mid-90s and didn’t even begin to achieve adoption until 2003 or 2004. Hence it is the first widely influential language born and bred in the web era.

Nearly all of the positive aspects I described in this post would be hard to imagine without the omnipresence of the internet. The Ruby community comes to consensus quickly because all of its communication tools have always been online (and as Web 2.0-ish communities rather than old school geek slugfests) and many of its users are “digital natives”. It uses centralized but loosely-coupled technical infrastructure (such as Rubygems.org and Github) because that’s the kind of infrastructure the internet encourages. Reputations are made online via public contributions to code and documentation rather than privately through corporate jobs and invisible projects.

If I were a zealot for one of the older languages (C++ or one of its frameworks; Java or one of the new languages running on the JVM, etc.) I wouldn’t be porting web application design patterns or bragging about performance. I’d be making sure it was stupid simple for web-connected users to find and install libraries, that no Google search ever returned an 18-month-old forum thread, that any user who wrote awesome blog posts was treasured, that creating a new project or running an example app was always a one-click operation, and, above all, that getting, writing, and running code on my platform was easy and surrounded with joy.


Rise of the 21st Century City-State Final Presentation

Last month, I wrote about my group’s presentation for my scenario planning class, The Rise of the 21st Century City-State. Since that post, we, of course, finished the presentation and performed it publicly at ITP. In this post, I’ll present my portion of the final presentation as well as the full slide deck used by the rest of my group.

You may remember from my earlier post that our group presented a future

“where a lost decade of economic doldrums has left federal governments paralyzed by enormous deficits and defaulting on their debts, increased global competition over energy and reduced post-peak oil supply has driven up the cost of transportation dramatically, the last century’s rapid migration from rural areas to urban centers continues at an ever-increasing rate, and maturing digital fabrication technologies make local production a real possibility even in slums and favelas.”

In my segment, I read a Facebook message received from my high school friend Mikey Clark in the year 2022 (note: Mikey is a real high school friend of mine who’s actually in the Marines. He’s not, as far as I know, stationed at Pendleton, I have no idea what he’ll be doing in 2022, and he in no way approved or even knew of this presentation).

I prefaced the reading of the message by saying the following:

“We’ve all been hearing so much about what’s been going on this year in California, of course. Well, I wrote to an old high school friend who’s in the Marines to see if he knew any more about what’s going on.”

I then proceeded to read the text of this hypothetical Facebook message:

Yo Greg,

Maria, Carlos, and I are all fine. No worries!

And yup, I’m still down here at Pendleton. Supposed to be flying EA-6B Prowlers, but haven’t been doing much of that lately since the cuts after the most recent government shutdown last year. Thought we’d get deployed to Nigeria against the Chinese, but it never happened. A couple of my friends have even cut ranks for BHX. Now with the Gulf Cartel causing problems it actually looks like we might be getting into the air again sooner than we thought.

The situation is…interesting, but not as bad as the President’s tweets are making it sound. Here’s the deal:

Do you remember Proposition 420 a couple of years back? Well, after the legalization, that giant farming company Cargill came in and really started growing the stuff. They took over most of the abandoned eastern California farmland and just put in Mary Jane as far as the eye could see. All those old onion and soybean farms that got ditched when the Feds couldn’t pay the farm subsidies got replanted with the state’s new hot cash crop: reefer.

President Palin tried to stop it, but Cargill got half the House of Representatives elected, so it may be illegal, but that’s the way things are down here.

Well the one person who took to Cargill’s move even worse than President Palin was El Chapo. He’s the kingpin of the Gulf Cartel and he pretty much ran the drug trade with their smuggling tunnels, but with Cargill’s cheap stuff on the scene and California’s new open border policy, American pot is actually moving into Mexico nowadays. Imagine that!

Anyway, Chapo’s been trying to put a dent in Cargill’s business. And he’s been doing it with some surprisingly sophisticated stuff: Apache helicopters, drone attacks, even some low-grade anti-crop chemical weapons, apparently. This guy makes Osama Bin Laden look like a school yard bully.

So, obviously, the Governor asked the Pres to send in the Marines, right? I mean, that’s what we’re fucking here for! And the Pres keeps sounding all kinds of gung ho, but she can’t get Congress to get off their duffs and make it happen. So now, apparently, the Governor is talking directly to Colonel Hodgins, the base chief, about stepping in without waiting for the official order. We’re right here in it, and apparently the Governor offered to use some of those spare billions of marijuana tax profits to help fix up the base and get us some of the back pay the Feds owe us.

No word yet on whether that’s going to happen, but I’ll do my best to let you know. I’m sure we could wrap these punks up in no time if they let us in there.

Hope to see you in LA over the holidays,

Mikey

The slides for the full presentation are available here: The Rise of the 21st Century City-State (slides).


43 Books in 2010

Last year, in the midst of moving across the country, adjusting to New York, and starting school, I didn’t read much, contrary to my usual habit. In fact, I read exactly 13 books. This year, however, things were quite different. I spent a few hundred hours by myself in the dark shooting a stop motion film; I had a semi-idle summer working part-time for artists; I commuted on foot every day for 45 minutes or more. The result was a lot more reading of physical books, digital books, and audio books. In the end, I read more than three times as many books this year as last, for a total of 43.

Looking back over the list now, I see a couple of themes. There’s a cluster of older sci-fi titles that I’d always meant to get around to reading: All Tomorrow’s Parties, Singularity Sky, Zodiac, The Lathe of Heaven, etc. There’s a bunch of special effects-related books, a long-standing and fast-growing aesthetic, theoretical, and practical interest of mine that’s becoming more and more central to my work at ITP: The Making of Star Wars, Filming the Fantastic, From Vatican to Vegas, Filming the Future. There’s an increasing amount of tech history (The Engelbart Hypothesis, the Licklider biography The Dream Machine, etc.), again relating to my ITP thesis work. And there’s a smattering of art-related writing: I revisited Dave Hickey’s classic Air Guitar on the occasion of the re-release of his long-out-of-print and controversial The Invisible Dragon. Following the lead of John Powers, I dove deeply into Robert Smithson’s writing and thought.

Anyway, here they are, more or less in the order that I read them:

  1. The Atrocity Archive by Charlie Stross
  2. The Jennifer Morgue by Charlie Stross
  3. The Big Short by Michael Lewis
  4. The Lathe of Heaven by Ursula K. Le Guin
  5. All Tomorrow’s Parties by William Gibson
  6. The Island at the Center of the World: The Epic Story of Dutch Manhattan and the Forgotten Colony That Shaped America by Russell Shorto
  7. Singularity Sky by Charlie Stross
  8. Shop Class as Soulcraft: An Inquiry into the Value of Work by Matthew Crawford
  9. You Are Not A Gadget by Jaron Lanier
  10. Zodiac by Neal Stephenson
  11. Air Guitar: Essays on Art and Democracy by Dave Hickey (reread)
  12. Point Omega by Don DeLillo
  13. The Invisible Dragon: Four Essays on Beauty by Dave Hickey
  14. Rural Studio: Samuel Mockbee and an Architecture of Decency by Andrea Oppenheimer Dean and Timothy Hursley
  15. The Book of Illusions by Paul Auster
  16. Cognitive Surplus: Creativity and Generosity in a Connected Age by Clay Shirky
  17. The Big U by Neal Stephenson
  18. Liar’s Poker: Rising Through the Wreckage on Wall Street by Michael Lewis
  19. What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry by John Markoff (reread)
  20. Robert Smithson: The Collected Writings
  21. The Fuller Memorandum by Charlie Stross
  22. Seven Days in the Art World by Sarah Thornton
  23. Star Wars Thrawn Trilogy by Timothy Zahn
  24. Art/Work: Everything You Need to Know (and Do) As You Pursue Your Art Career by Heather Darcy Bhandari and Jonathan Melber
  25. A Century of Stop Motion Animation: From Méliès to Aardman by Ray Harryhausen and Tony Dalton
  26. The Making of Star Wars by JW Rinzler
  27. The Making of Empire Strikes Back by JW Rinzler
  28. How To Start and Run a Commercial Gallery by Edward Winkleman
  29. The Engelbart Hypothesis: Dialogs with Douglas by Valerie Landau and Eileen Clegg
  30. Medium Raw: A Bloody Valentine to the World of Food and the People Who Cook by Anthony Bourdain
  31. Kitchen Confidential: Adventures in the Culinary Underbelly by Anthony Bourdain (reread)
  32. Au Revoir To All That: Food, Wine, and the End of France by Michael Steinberger
  33. Where Good Ideas Come From: The Natural History of Innovation by Steven Johnson
  34. Listen To This by Alex Ross
  35. Filming the Fantastic: A Guide to Visual Effects Cinematography by Mark Sawicki
  36. How To Build Dioramas by Shepherd Paine
  37. 2001: Filming The Future by Piers Bizony
  38. From The Vatican To Vegas: The History of Special Effects by Norman Klein
  39. The Art of 3D Computer Animation and Effects by Isaac Kerlow
  40. An Object of Beauty by Steve Martin
  41. Shopgirl by Steve Martin
  42. The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal by M. Mitchell Waldrop
  43. The Lord of the Rings by J.R.R. Tolkien

Design and Laser Cutting Tests for Sculpture about the First Internet Transmission

After a semester of pre-production work on my Augment project — outlining, storyboarding, writing — I decided it was finally time to build something. I wanted to build one of the scenes in order to further investigate the materials I’ve chosen and to see about the storytelling power of this format.

Since the first piece will inherently need to stand without the others, I needed to select a scene from Engelbart’s story that’s capable of making sense on its own without the context of the larger arc. After some deliberation, I chose the first transmission of data on the internet between the Augment team at Stanford and UCLA on October 29, 1969, scene 10 in the full story, near the end of my second act:

After some sketching, I settled on imagining the piece as two halves of a divided cube: one containing the UCLA side (all white, smooth, and institutional, with men in suits leaning over massive mainframes) and one the Augment side (funky, cluttered furniture in lurid 60s colors, unkempt hippie-hackers sitting at the small, familiar-looking NLS system). Each half would have animated projected figures moving around and small screens enacting the transmission of data between the two halves. The two halves would be connected by a series of miniature telephone poles carrying the actual cables that coordinate the two sculptures.

I got started by building a cardboard prototype of half of the piece and then experimenting with projecting onto a scrim suspended in it, using LED lighting to blast out the parts of the image that make it through the scrim onto the walls and furniture:

Prototype for UCLA side of first ARPANet Transmission

Here’s a video of the prototype that makes it a little clearer how it works:

First ARPANet transmission: UCLA set prototype from Greg Borenstein on Vimeo.

While I learned a lot from this prototype, I want the final version of the UCLA half to have a slick, shiny finish with the classic coldness of marble, but achieved with laser-cut plexiglass. But I also want to be able to include a level of “gizmology” detail that gives a feeling of the texture of the real environment. For example, here’s a photo of the Sigma-7 mainframe that was in use at UCLA at the time of the transmission (and is modeled in cardboard in my prototype):

But how to achieve this level of detail while still retaining the clean classic white plastic look I want for the whole piece?

After extensive discussions with ITPers proficient in the use of the laser cutter, I decided to test out using the laser to raster etch the plexi to create a kind of bas relief to represent the Sigma-7’s panels. After designing a fragment of the panel in Illustrator, I sat with ITP resident Paul Rothman and ran a series of tests with different laser settings. Here’s what the results looked like:

Laser Cut Test for Sigma-7

(Check out the larger size on Flickr)

While I was skeptical of this etching approach at first, even the first test (on the very left) was quite encouraging. Though shallow and hard to see from far away, the details were definitely legible. So, we proceeded with more tests, constantly tweaking the laser settings to achieve a deeper etch and more definition between the three depths we were cutting.

Here’s a table of the settings we used for each iteration, moving from left to right in the sample illustrated above (Layer 1 is the deepest cut, Layer 2 is in the middle, and Layer 3 is the shallowest; each cell below is Power / Speed):


            Attempt 1    Attempt 2    Attempt 3    Attempt 4    Attempt 5    Attempt 6
  Layer 1   70 / 100%    100 / 100%   100 / 90%    100 / 50%    100 / 30%    100 / 20%
  Layer 2   50 / 100%    60 / 100%    60 / 90%     60 / 70%     60 / 60%     60 / 55%
  Layer 3   30 / 100%    20 / 100%    20 / 90%     20 / 90%     20 / 90%     20 / 90%

In the end, I was very happy with how the relief turned out. As the cut got deeper, shadows started to emerge that made the details readable even at a pretty good distance, and the parts ended up reasonably close to their real scale.

I’m now designing the rest of the hardware details for cutting and assembly. I’ll post more pictures of the final pieces as they come together.


The Rise of the 21st Century City-State

As my scenario planning class, Future of the Infrastructure, moves towards its conclusion this semester, we’ve divided into four groups, each working on imagining a different possible divergent future.

I’m in a group working on a scenario titled “The Rise of the 21st Century City-State”. We’re imagining a future where a lost decade of economic doldrums has left federal governments paralyzed by enormous deficits and defaulting on their debts, increased global competition over energy and reduced post-peak oil supply has driven up the cost of transportation dramatically, the last century’s rapid migration from rural areas to urban centers continues at an ever-increasing rate, and maturing digital fabrication technologies make local production a real possibility even in slums and favelas.

In such a future, cities begin to come into conflict with federal governments that no longer have the wherewithal to support them financially, but still make policy demands on them. A 2014 Headline: “Mayors of Los Angeles and San Diego Defy President Palin, Continue Amnesty for Mexican Immigrants”. Or maybe even sooner: “Beijing Implements Passports for its Residents to Stem Tidal Wave of Rural Migrants”.

Cities suffer the effects of climate change and other natural disasters and are forced to cope without aid from national governments. A 2016 Headline: “Blackwater Emergency Management Forces Arrive in Hong Kong Following Tsunami: HK Gov Shelling out $100M A Day for Relief Services”. Or, closer to home, in 2015: “New Orleans Residents Refuse to File Federal Tax Returns in Protest on 10th Katrina Anniversary: Newly-Elected Tea Party Mayor to Hold April 15th Rally in Still-Unrebuilt Ninth Ward.”

In order to more vividly imagine this future, we recently conducted an exercise where we pictured how it would affect the normal life of a single imaginary person in the year 2022. After generating some demographic details (age, place of birth, place of residence, occupation, etc.) we spent ten minutes composing a quick portrait of our fictional characters and their lives in this City-State world.

Here was mine:

My name is Alan Ornstein. I’m 17 years old. I’m originally from Walnut Creek, California, but two years back my parents moved us to Paris. We live in the 5th Arrondissement — the New Rice Quarter — in one of the many new co-op local organic restaurants that grew up on the swollen banks of the overflowing Seine.

I’m an only child, and most of the seven or so people who come and go from our communal apartment work in the paddies or in the kitchen. The sous chef even lives with us, which is great because he tries out new recipes on us. I cook a bit as a hobby, but don’t actually work in the restaurant. My parents are crazy foodies, so they’re a little disappointed because they were hoping I’d take over the restaurant.

Though I do miss seeing my American friends in real life, I didn’t mind leaving Walnut Creek too much. After all, I can do my job from anywhere. See, I’ve worked for Lucasfilm since I was about 12 doing 3d modeling and scanning for the movies. My friends and I taught ourselves after the school got really bad and our parents stopped sending us.

We’ll be presenting our scenario next Sunday, December 12th at 5pm at ITP. The event is open to the public and there should be quite an interesting discussion.


Kinect Experiments: compositing and 3d

So, I’ve gone a little Kinect crazy. Ever since the open source community figured out the USB protocol and started working on a set of drivers and libraries for it (libfreenect), I’ve been following the progress closely and scheming about making my own projects with it.

Thankfully, the progress in the open source community has been absolutely astounding and within a week of the Kinect’s launch there were already libraries available for working with it in Processing, Open Frameworks, and Cinder amongst other platforms.

A couple of weeks ago I hit the point where I could no longer restrain myself, and so I ran out to Best Buy and bought myself a Kinect. Pretty much immediately, I got the Processing hello world example working. Here’s the required hello world screenshot:

But what do I really want to do with the Kinect? The two main areas I’m interested in exploring are live compositing (combining two images to create the illusion of a coherent scene) and 3d printing (capturing data about the 3d shape of objects in order to print small versions of them and to use them in 3d animations). Towards those ends, I wrote two basic Processing sketches that use the Kinect to begin to explore these areas.

The first one uses the depth image that comes from the Kinect and allows you to select a particular depth within the image by clicking. It then replaces all the parts of the image at that depth with a pre-loaded static image (in this case, a random picture of a shoe from the internet). In this example video I demonstrate the possibilities for using this technique for both background replacement (replacing the wall behind me with the shoe) and in-scene compositing (replacing the box on the table in front of me with the shoe).

Background Replacement with Kinect and Processing from Greg Borenstein on Vimeo.

If you watch the logging below the image, you can also see that I’m calculating the exact distance in inches to the object targeted for replacement, and increasing and decreasing the threshold for which pixels to replace. The full code for this example is here: kinect_processing_background_replacement.java
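The heart of that file is just a per-pixel depth test. Here’s a minimal Processing sketch of the idea (the function and names here are mine, not the linked code, and it assumes the raw depth array lines up one-to-one with the image’s pixels):

    // Depth keying: wherever the Kinect's depth is close enough to the clicked
    // target depth, swap in the corresponding pixel from a replacement image.
    PImage composite(int[] rawDepth, PImage live, PImage replacement,
                     int targetDepth, int threshold) {
      PImage out = live.get();    // work on a copy of the live frame
      out.loadPixels();
      replacement.loadPixels();
      for (int i = 0; i < rawDepth.length; i++) {
        if (abs(rawDepth[i] - targetDepth) < threshold) {
          out.pixels[i] = replacement.pixels[i];    // pixel is "at" the target depth
        }
      }
      out.updatePixels();
      return out;
    }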

Obviously, it will eventually be desirable to do this with live video instead of just a static image, to smooth out the mask being created from the noisy depth image the Kinect provides, and to use the Kinect’s RGB image as part of the composite rather than just the grayscale depth image. All of these features are things I’m working towards that are held up by various factors, ranging from the need to learn Cinder in order to rewrite this example in C++ for performance, to waiting for the authors of the OpenKinect libraries to implement the math for aligning the Kinect’s RGB pixels with its depth image.

Creating 3d models from the Kinect depth data is a greater challenge than basic compositing. It involves translating the 255 depth values that the Kinect provides into accurately scaled physical measurements and then using those to project the positions of pixels in the two-dimensional image plane into an arrangement in 3d space.
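The naive version of that projection is a pinhole-camera model. Here’s a sketch of what I mean in Processing, with a guessed-at field-of-view constant standing in for real calibration (the distortions I describe below come exactly from guessing at constants like this):

    // Naive pinhole back-projection: place depth pixel (u, v) in 3d space,
    // assuming z has already been converted into real-world units.
    PVector pointFor(int u, int v, float z, int w, int h) {
      float f = (w / 2.0) / tan(radians(57.0) / 2.0);  // ~57 degree FOV, illustrative
      float x = (u - w / 2.0) * z / f;
      float y = (v - h / 2.0) * z / f;                 // assumes square pixels
      return new PVector(x, y, z);
    }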

Having never worked with 3d in Processing before, and not really fully understanding the math involved in these kinds of transformations, it took me a while to come up with code that even generated a 3d point cloud, and even then I was never able to overcome some of the distortions that come from the naive math I applied. Hence, my 3d examples are not as impressive as some of the more sophisticated things people are doing out there.

Here was my first attempt:

Live 3D Point Cloud with Kinect in Processing from Greg Borenstein on Vimeo.

Obviously, in that example, the 3d space is somewhat flattened. After some work, I managed to produce a 3d rendering that, while still wildly distorted at depth, was at least significantly more 3d:

Kinect 3D Point Cloud in Processing with improved projection from Greg Borenstein on Vimeo.

The code for this is here: kinect_3d_point_cloud.java

More than a week has passed since I produced these two demos, and in that time I’ve made some progress working with Cinder and doing some experiments with using the Kinect for projection mapping. I’ll write those up soon once they’re a little more fleshed out.


Artist Statement, Bibliography, and Sibling Work

In both the thesis preparation process and in the context of this recent video piece, I’ve been working on articulating my artistic project and the various influences, references and ideas around it. This post will be a brain dump of the results of that process.

First the “artist statement”. This is meant to communicate what drives my work in as brief and clear a format as possible. It’s what you would find on a one-page takeaway sheet in a gallery after seeing my work:

My work explores the use of special effects as an artistic medium. I am fascinated by how special effects techniques cross the boundary between images and the physical objects that make them: miniatures, animatronics, motion control photography, 3D rendering, physical fabrication.

I frequently use this medium to examine the cultural history of technology, a history that is intimately tied up with the evolution of these techniques themselves through their common roots in an encounter between the military industrial complex and the counterculture at midcentury.

Next is the “project proposal” for my thesis. This will be increasingly fleshed out in the next few weeks as we reach the end of the thesis preparation process and start work on the actual thesis. It includes some of the background that drives my thesis ideas and the beginnings of an outline for how I will achieve them. Here it is as it currently stands (I’ll probably post a revised draft in the next few weeks when it is officially “completed”):

The mid-century American techno-cultural movement which created the personal computer and internet industries is beginning to enter history. As the internet grows towards ubiquity, the ideas, artifacts, and culture of the small group of people involved in this movement become massively influential on the larger society.

It is time for an art that addresses that history, an art that will tell its story, explore its material forms, and investigate its way of seeing the world. This art should use tools of representation appropriate to the mid-century technological milieu as well as contemporary tools sympathetic to its aesthetic and descended from its ideas.

Both the personal technology that surrounds our daily lives and the special effects tradition of image making arose from an encounter between the military industrial complex and the counterculture at midcentury. Hence, the techniques and methods appropriate to representing this material will derive from special effects movie-making.

These techniques/media include:

  • special effects
  • interactive electronics
  • networking
  • rapid prototyping and fabrication
  • motion control
  • miniature photography
  • 3D modeling

Further, I plan to approach producing the project as if it were itself a special effects-laden Hollywood blockbuster, dividing the process into pre-production, production, and post-production phases. This methodology will force me to focus on the structure, clarity, and strength of the storytelling elements as well as the technological developments. This semester I’ve made great strides in the pre-production process; some of what’s listed below in that area I’ve already written about here before.

Pre-production:

  • story work, character work, set design, and effects proof-of-concept
  • much of this work has begun this semester but it will continue for the first few weeks of next semester
  • this process will end with a full production schedule

Production:

  • building the sets and materials for a series of sculptures (3-6)
  • organized into 1-2 week chunks based on medium and material
  • producing whatever video and digital content is necessary for the pieces

Post-production:

  • combining the materials into final pieces
  • editing and refinement
  • finish and details
  • last 4-6 weeks of the semester

Next is the bibliography. Some of this is reading I’ve already done, some is stuff I intend to read over break and during the actual thesis process. I’ve organized it by area: Computer History, Art, Special Effects.

Computer History:

Art

Special Effects

Another exercise from the thesis project was to list keywords we associate with our projects. This is part of the process of examining the aesthetic and intellectual terrain immediately surrounding our project. Here were mine:

  • movies
  • special effects
  • phantasmagoria
  • pepper’s ghost
  • technology
  • history
  • history painting
  • miniature
  • monument
  • narrative
  • counterculture
  • psychedelia
  • projection

And, finally, here’s another link dump, this time of links to artists, films, and other aesthetic work relevant to my project. The idea here was to find the “siblings” to our practice: people and groups who make work that has a family resemblance and maybe even a bit of sibling rivalry.


Tabula Rasa finished

A while back, I mentioned my idea for a video piece to accompany Arvo Pärt’s Tabula Rasa. After exploring a number of technical and aesthetic options for achieving the effect I had in mind, I finally finished this video at the end of this past week. In this post, I’ll talk about some of the compositing experiments I conducted as part of this process, but first, here’s the final video:

Tabula Rasa from Greg Borenstein on Vimeo.

In the end, the procedure I used was very similar to the basic sketches I showed in my earlier post: embedding a still photograph of the white plastic car in the space of a video. However, I learned a number of things about how to do this better by first trying a few wildly different approaches.

My initial composite tests had two major flaws: poor perspective matching and poor lighting matching. The white car never really looked like it was sitting on the street and it didn’t look like it was affected by the lights around it. To work around these problems, I first attempted to achieve my shots in-camera by using traditional forced perspective techniques. Before heading out into the night street, I tested this technique in a controlled environment:

White Car table top test from Greg Borenstein on Vimeo.

This first example is the closest to a true in-camera forced-perspective shot. I put the white car on a tripod in front of my camera and moved it around until it appeared to be sitting on top of the table. After the fact, I did end up doing a little bit of clean-up in After Effects to remove some of the lines between the black plank on which the car sat and the table around it, but it was basically a straight forced-perspective shot.

The main problem with this approach was that in order to get a deep enough depth of field to have both the car and the table in focus simultaneously, I had to shoot at my camera’s highest f-stop and ISO settings, which meant a very dark and noisy image. And even at those settings, I was only able to get the car in focus at about 12 inches from the camera, a scale at which the size gain from the forced perspective was really not dramatic at all.

This next clip shows my second approach, a kind of compromise between full in-camera forced perspective and compositing. I used a lower f-stop than last time to get a brighter image, which resulted in a shallower depth of field, so I took two clips: the first with the car in focus and the second with the table in focus, without moving the camera in between. Then I used After Effects to glue them together:

White Car Table Top Test 2 from Greg Borenstein on Vimeo.

This is nice because it lets the car be bigger and the whole image brighter, but I thought there was something not quite right about the composite. A big part was the limitation of the profile positioning: the car itself doesn’t recede into enough depth. So I tried one more time with basically the same approach and a different pose:

White Car Table Top Test 3/4ths view from Greg Borenstein on Vimeo.

This was the first composite that I was really happy with. The recession of the car into depth really helps sell the effect.

With this trick in hand, I set out to the street to try to use it for my in situ shots. The problem was simply not enough light. Even at the highest f-stop and ISO settings, in the dark of the street there just wasn’t enough light to get a solid focus on the car, especially not close enough up to get it to composite at scale.

When I got back to the lab and tried and failed at these composites, I decided to go back to my original approach, but with one big new advantage: since I’d shot footage of the car in position, I had a guide as to what position I needed to put the car in for it to mate with the scene. I just set up the car in a well-lit spot and moved it around until the shadows were in the right places and it was at just the right angle.

Once I had those stills, getting them to mate into the scene was just a matter of getting the brightness and contrast right, doing some mattes for things moving in front, and using AE’s remap distort function to push parts of the lights in the scene onto the car (the neon storefronts, the passing car headlights, etc.).

Posted in Art | Leave a comment