Engelbart Haunts SRI

Last summer, when I first began seriously thinking about making art about the history of the personal computer, one of the early images that emerged for me was of a ghostly Doug Engelbart floating over the classically-white colonnaded building of the Stanford Research Institute.

I imagined a table-top monument: a square box of shiny iPod-white plastic containing a rolling Silicon Valley vista of blue foam covered by a thin veneer of fake grass. At the center of this would sit the temple-like SRI building in rough 3d-printed white, like a post-modern Parthenon. And, hovering over this, would be a translucent Engelbart, a Pepper’s ghost reproduction of the classic portrait: pointing forward to the future with the cord of some unknown machine augmentation dangling out of his ear.

I even made a furious sketch of the idea on a flight out west, complete with classical pedestal:

Engelbart's ghost

When I began talking about my project around ITP this semester, I used this image as a starting point, taking off from it to explain the larger idea of making art about the history of the personal computer. I found that it worked quite badly for this purpose. In order to get fellow students and professors to understand any of the importance I attached to this image — why I’d want to make such a seemingly strange and specific monument — I found myself gropingly telling the story of Augment as a linear narrative. It was a monument that only had meaning if you already knew the details of the story and why they mattered. Hence all the work I’ve been doing (and documenting on this blog) to figure out how to tell Doug Engelbart’s story in a way that is engaging and powerful even for the uninitiated.

Last week, in my Materials and Building Strategies class, we presented prototypes for larger projects we intend to work on for the rest of the semester. Even though I now realize that this piece about Engelbart haunting SRI is not a good starting point for viewers of my project, I’ve yet to decide which scene is, exactly. Should it be the scene of Engelbart and English assembling the first mouse prototype? Or the scene of SRI participating in the first ARPAnet transmission? Or the scene of Stewart Brand and Ken Kesey visiting Augment with the Merry Pranksters bus?

Due to this indecision, I went ahead and prototyped this original idea. I figured that many of its elements would end up being included in whichever scene I finally decide to take on.

Engelbart Haunting 333 Ravenswood prototype from Greg Borenstein on Vimeo.

I built the box out of foam core. The base is architectural modeler’s grass over blue foam. The building is blue foam painted white, and the Pepper’s ghost is just a piece of plexi attached at a 45-degree angle to a cheap digital picture frame.

Posted in Art, thesis

The Era of Big VC is Over

The US venture capital industry is a transient epiphenomenon built upon the greatest bull market in the history of capitalism.

— William Janeway, Warburg Pincus

The venture capital industry underwent an epochal expansion around the year 2000 in the wake of the internet boom. In response to the incredible success of web companies, the amount of money invested in VC funds increased ten-fold before falling rapidly back to pre-boom levels after the crash.1

The result was a decade in which an unprecedented quantity of capital was available to technology entrepreneurs as venture capitalists invested the money raised during the boom. This period has now come to an end. Startups in the next ten years will have an order of magnitude less money available to them during the early stages of their development. This change will have dramatic effects on the kinds of opportunities available to entrepreneurs and technologists and on the kinds of change effected by the products they develop.

The History and Structure of Venture Capital Firms

Since its advent in the late 40s2, the venture capital industry has provided essential support to companies developing and commercializing new technology. Venture firms provide capital to startup businesses lacking access to traditional bank loans due to the unproven or risky nature of their enterprise. VCs played a central role in developing and scaling up many important technologies in the 20th century from the semiconductor to the personal computer and the web.3

The size of the industry grew dramatically in the 80s and 90s. During the PC boom of the 80s, the amount of money VC firms had under management grew from $3 billion to $31 billion and the number of operating firms shot up to 650.4 This expansion then accelerated during the late-90s dot-com bubble when, in 2000, VCs raised nearly the previous total holdings of the industry in each quarter, spiking total funds up to $230 billion in just a few years.5

Quarterly venture capital investment data courtesy of the National Venture Capital Association

After 2000, fundraising returned to pre-boom levels with around $9 billion raised so far in 2010.6

In other words, in both the 80s and 90s, the great body of venture capital investment acted as a trailing indicator of technical innovation — first following opportunity in personal computers after its early successes had become obvious and then doing likewise with the web.

Most venture capital funds are structured as private partnerships with a fixed term of seven to ten years.7 Funds spend the first half of their terms selecting companies for investment and the second half shepherding their companies through “liquidity events”, acquisitions or Initial Public Offerings which end the ownership of the company by the fund and provide the investors with a return on their money.

The combination of this sudden spike in dollars invested and this fixed-term structure resulted in a unique decade-long period in which a lump of cash on the order of $200 billion worked its way through the startup ecosystem. Much of our conventional wisdom about the culture and economics of startups is based on these conditions and is unlikely to hold up well outside of them.

The Last Ten Years: No Exit

In addition to a surplus of venture capital funding there were three other major economic factors shaping the startup ecosystem in the last decade: the significantly lower cost of starting an online business, the weakness of the Initial Public Offering market, and the flight of institutional investors to other more attractive forms of high risk capital. Together they formed a kind of Bermuda Triangle for the VC industry: many entered, but few who did were ever seen again.

During the web boom, both hardware and software costs were high for nascent startups. Web server software was mostly proprietary and hence involved large licensing fees. For example, in 1995, a single license for Windows NT Server 3.5 cost $1,495.8 Worse still, typical server hardware for running this operating system would have cost around $15,500.9

Today, open source software, cheap and extremely capacious commodity hardware, and the use of “cloud computing” services such as Amazon Web Services have driven both software and hardware costs down substantially,10 a situation which reached a notable milestone recently when Amazon announced a free pricing tier for its EC2 service. This lowering of costs has reduced startups’ demand for venture capital. Since it no longer takes millions of dollars up front to start a web company, founders are less motivated to sell off equity in exchange for funding.
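To put the shift in perspective, here is a back-of-the-envelope comparison using only the figures quoted in this post (the 1995 NT Server license plus typical server hardware, against an open source stack running on EC2’s free tier); inflation is ignored for simplicity:

```python
# Rough up-front infrastructure cost for a single 1995-era web server,
# using the figures quoted above, versus a modern bootstrap stack.
nt_license_1995 = 1495        # Windows NT Server 3.5 license
server_hardware_1995 = 15500  # typical NT-capable server hardware

total_1995 = nt_license_1995 + server_hardware_1995
total_today = 0  # open source software + Amazon EC2 free pricing tier

print(f"1995 up-front cost: ${total_1995:,}")  # 1995 up-front cost: $16,995
print(f"Savings: ${total_1995 - total_today:,}")
```

Nearly $17,000 of unavoidable up-front cost, per server, reduced to roughly zero.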

On the other side of the equation, venture funds found fewer opportunities to profitably exit from their investments. In the 80s and 90s, VCs had two exit strategies: acquisition by existing large companies and the issuing of public stock. Due to a murky combination of factors, including the increased regulation of publicly owned companies enacted by Sarbanes-Oxley in 200211, the number of venture-backed technology IPOs fell precipitously in the past decade. Between 1995 and 2000 the market averaged 204.5 IPOs per year, while the next seven years saw an average of just 49.12

Taken together, these two factors meant a large decrease in profit-making for venture capital firms, down from an 85% return on investment for an average firm during the bubble years to -3% over the last five.13 This catastrophic drop-off has caused venture firms to raise far fewer funds and some firms to close altogether.14

And, to make the situation even worse for VC firms, institutional investors have begun withdrawing their vast sums under management from the VC industry. During the 80s and 90s, these investors gradually increased their investment in VC, contributing significantly to the build-up of VC funds. But the crash of 2008 made them decidedly more risk averse, and they have since fled for safer options.15

What About Biotech?

One objection to this rather bleak portrait of the prospects for venture capital might be that the problem is confined to the web. A case might be made that the web has reached maturity as a technology, no longer needs VC investment to develop or scale, and that we should therefore look for bright spots for venture investors in other sectors.

A prime candidate for such a sector would be biotechnology. And VC investment in biotechnology has increased since 2000 both as an absolute number and as a portion of total VC funds, growing from a mere 7.5% in 2000 to 34.6% in 2009.16

Life Sciences Investment data courtesy of the National Venture Capital Association

However, there are reasons to doubt venture capital’s effectiveness in supporting biotechnology. William Janeway, a Managing Director and Senior Advisor at venture firm Warburg Pincus, has argued that the long time it takes for life science innovations to progress from lab to clinic, combined with the high rate of attrition and the lack of positive cash flow for early investors, makes the biotech sector a tough environment for venture capitalists.17 Further, the industry’s seeming profitability appears to have been artificially buoyed by the presence of a single highly profitable company: “if we take the largest and most profitable firm Amgen, out of the sample the industry has sustained heavy losses throughout its history.”18

Finally, counterintuitively, the prospects for profitability by biotech startups may actually be getting worse over time. “The field gets more complicated every year (from the Central Dogma, to Epigenetics, to Systems Biology),” Janeway says. “The big winners in BioTech came from the synthetic production in bacteria of naturally occurring proteins (insulin, human growth hormone) whose function had evolved over millions of years; the effort to produce truly novel therapeutics (e.g., fragments of protein) has been radically less productive. This suggests that it will get harder, not easier to make money in Biotech.”19

Conclusion

The state of the venture capital industry affects the kinds of startups that get founded and, through them, the types of jobs that are available in technology-related industries as well as the kinds of technologies that reach scale and affect our daily lives. In the 80s, when venture capital supported the personal computer industry, the jobs were at Microsoft, Apple, and Dell, doing hardware and operating systems engineering. In the 90s, when VC supported the web, the jobs were at Amazon and eBay and at short-lived, very well-funded flashes in the pan like Pets.com; they mainly involved server programming. In the last decade, VC massively supported social media sites like Facebook, Twitter, and Flickr. Most of the jobs at those companies were in still softer fields such as customer support, design, and marketing. In many ways these startups are more like media companies than the technology companies of the past.

What will the startups that thrive in the low-VC environment of the next decade look like? What will it feel like to work for web companies that bootstrap their own growth through revenue rather than taking big investment up front? Which technologies will fail to achieve wide adoption for lack of big up-front investment? Will another non-web industry come along and initiate a new boom and another trailing order-of-magnitude growth in VC investment?

Only time will tell.

Notes
  1. Based on quarterly fundraising data gathered by the National Venture Capital Association. The data is available interactively on the NVCA website. [back]
  2. Georges Doriot founded the first venture firm, American Research and Development Corporation, in 1946. The firm’s greatest success was Digital Equipment Corporation, a leader in the development of the minicomputer. Spencer Ante, Creative Capital: Georges Doriot and the Birth of Venture Capital, Harvard Business School Press, 2008. [back]
  3. Legendary Silicon Valley firm Venrock Capital alone invested in Fairchild Semiconductor, Intel, and Apple. See the Venrock company timeline. Similarly, Kleiner Perkins Caufield and Byers supported many of the most important web companies from Netscape to Google. Kleiner Perkins Caufield and Byers portfolio. [back]
  4. Andrew Pollack, “Venture Capital Loses Its Vigor”, The New York Times, October 8, 1989. [back]
  5. The totals under management come from NVCA data sent to me via email by William Janeway, Managing Director and Senior Advisor at Warburg Pincus. [back]
  6. Ibid. [back]
  7. Jennifer A. Post, “An overview of US venture capital funds”, AltAssets, 2001. [back]
  8. Jason Pontin, “Windows NT starts to pick up steam”, InfoWorld, January 30, 1995. [back]
  9. When InfoWorld ran a web server comparison test in April of 1996, they used “a Compaq ProLiant 1500/5133 with two 133-MHz Pentiums, 6GB of storage, 128MB of RAM and two Netflex 3P network interface cards” for their NT tests, a system which clocked in at $15,569 and was still less expensive than its Sun/Unix equivalent, which cost $22,995. “Are You Being Served?”, InfoWorld, April 8, 1996. [back]
  10. Guy Kawasaki, “By the Numbers: How I built a Web 2.0, User-Generated Content, Citizen Journalism, Long-Tail, Social Media Site for $12,107.09”, blog.guykawasaki.com, June 3, 2007. [back]
  11. Opinions differ on the effect of “SOX” on the IPO market. See Amy Feldman, “What Does Sarbanes-Oxley Mean for Companies That Want to Go Public?”, Inc., September 1, 2005, for pro, and Lynn Stephens and Robert G. Schwartz, “The Chilling Effect of SARBANES-OXLEY: Myth or Reality?”, The CPA Journal, June 2006, for con. [back]
  12. Jay Ritter, IPO Data. Venture Expert, Thomson Financial. [back]
  13. Data sent to me via email by William Janeway of Warburg Pincus. [back]
  14. Ibid. Janeway’s numbers place the number of funds raised at 653 in 2000 and 124 in 2010, a decline of 81%. [back]
  15. Udayan Gupta, “Why Institutional Investors Are Turning Down Venture Funds”, Institutional Investor, September 21, 2010. [back]
  16. Data sent to me via email by William Janeway of Warburg Pincus. [back]
  17. Email conversation with the author. [back]
  18. Gary P. Pisano, Science Business, HBS Press, 2006, p. 117. [back]
  19. Email conversation with the author. [back]
Posted in Uncategorized

Tabula Rasa

For a recent assignment in my Materials and Building Strategies class, we were supposed to make a mold and cast of some object. I chose a Matchbox car:

Car and cast

I was thinking about a scene from the Augment storyboards where Engelbart parks by the side of the road and has a vision of the personal computer and imagining the cast car could be a prop.

However, once I had the small, perfectly smooth, white plastic car in my hand, it started to take on a life of its own. I started imagining it at full-scale on the city streets, at night, immobile, the multi-colored lights of the city reflecting in its glossy surface. There was something about the image I found both haunting and calming.

The feeling reminded me of the first movement of Tabula Rasa by Arvo Pärt. The piece has a combination of spiritual calm, large expanses of negative space, and a delicate sense of longing or melancholy.

I shot some night footage of the streets around ITP and did a few very basic experiments in After Effects with placing the car into the scene.

White car experiments from Greg Borenstein on Vimeo.

White car test footage from Greg Borenstein on Vimeo.

In both of these shots the car is moving, which feels wrong to me, a violation of the eloquent stasis of Pärt’s music and the effect I’m going for.

Also, compositing the car into these night shots results in lighting on the car that is totally different from the lighting in the scene around it. In theory this was a desirable effect, emphasizing the car’s otherness, but in practice it just made the car not quite read as part of the scene. This problem was further exacerbated by the difficulty of matching the angle of the car to the perspective of the footage so that it would truly appear to sit on the street.

While discussing the project in class, a fellow student, Chris Langer, suggested achieving the effect via forced perspective: simply placing the miniature car in front of the camera so that it appears to be at full scale and sitting correctly on the street. This would allow the car to actually sit in the correct lighting environment and would make the whole process much simpler. I intend to do some experiments with this technique in the next few days.
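Forced perspective works because an object’s apparent (angular) size is proportional to its real size divided by its distance from the camera: a miniature at scale s, placed at s times the distance of the implied full-size object, subtends exactly the same angle. A quick sketch of the arithmetic, using hypothetical numbers for a roughly 1:64 Matchbox car standing in for a full-size sedan:

```python
def apparent_size(real_size_m, distance_m):
    """Angular-size proxy under the small-angle approximation: size / distance."""
    return real_size_m / distance_m

SCALE = 1 / 64  # hypothetical Matchbox scale

# A 4.5 m car seen 16 m down the street...
full_size_car = apparent_size(4.5, 16.0)

# ...matches a ~7 cm miniature placed 25 cm from the lens.
miniature = apparent_size(4.5 * SCALE, 16.0 * SCALE)

assert abs(full_size_car - miniature) < 1e-12  # identical apparent size
```

The practical catch, which the experiments will have to sort out, is depth of field: the camera has to hold both the 25 cm miniature and the distant street in acceptable focus for the illusion to read.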

A few final thoughts on the combination of the music and the image of the white car. While I was presenting the idea in class, I realized that, of course, “Tabula Rasa” means ‘blank slate’, which presents an interesting resonance with the white form of the car. Further, the title of the first movement of the piece is “Con moto”, which means “with movement”, whereas my car will be surrounded by movement while remaining absolutely still itself.

Posted in Art

Augment Mood Board

While working so hard on shaping the Augment story, it’s important not to lose track of the aesthetic component: what will the pieces look and feel like? What will they be made of?

This week I explored those questions by creating a “mood board” — a collage of existing visual elements that give a sense of the colors and textures that I want the final piece to have.

Augment mood board

Check out the larger size on Flickr.

As I worked on the collage, three clusters of images took shape: color snapshots from the 60s, special effects making-of imagery, especially blue screen work, and white architectural imagery, especially that with a Greek, neo-classical, or minimalist look.

The 60s snapshots (many of which are of actual period Augment activities) have a saturated color palette and a kind of soft focus that is core to how I visually imagine that style. I imagine this look being especially prevalent for the psychedelic and countercultural material in acts 2 and 3.

The blue screen and special effects imagery comes out of my sense of the relationship between the technical forms of image making that make up the modern “cinema of attractions” (computer graphics, motion control, compositing) and the early personal computer world of Augment. Behind-the-scenes process shots from that world feel like a contemporary counterpart to equivalent 60s scenes of bearded men in blue jeans playing primitive computer games and building the first robots.

Finally the classical/minimalist white architectural imagery has a relationship to the material that’s a little bit harder to define than the other two clusters. Partially it comes out of my sense that what I’m trying to build is a kind of monument. Art historically, monumental sculpture (and, ironically, the ground-based minimalist sculpture that most sought to resist it) has tended to have a strong relationship to architecture via the formats of the plinth and the pedestal. Further, the austere white in which classical architecture and sculpture comes down to us (however distant that is from how it was originally constructed) has developed deep associations with the arts of memorialization and monument building. I envision this portion of the aesthetic spectrum as, at least in part, relating to the framing and presentation of the work.

Posted in Art, thesis

Augment Storyboards

A couple of posts ago, I presented an outline for the story of the rise and fall of Doug Engelbart. Since then, I’ve been working on how to translate that outline into something visual. The core of the challenge is: how do you condense a large amount of narrative material into a short series of sculptures/visual objects? How does each sculpture communicate the plot and character points necessary to convey that step in the story? Further, how do you get across the changes that the characters, places, and ideas involved in the story are undergoing across the pieces? Specifically in the case of the story of Engelbart and Augment, how do you show the transition from the military/engineering context to the countercultural one? How do you get across the tragic arc of the whole story through a series of static scenes?

To try to start answering these questions, I’ve been working on expanding the outline into a series of storyboards. In this post, I’ll go through the storyboards for each of the acts and mention a few things that I learned while working on them.

Act 1:

Act 1 Scene List/Storyboard

As a reminder, here’s the written outline for this act:

  • Engelbart ships out on VJ day. The end of the war is declared as his ship pulls away from the dock
  • Engelbart is bored in a grass hut in the Philippines on radio duty. Someone drops off a copy of the Atlantic Monthly with Vannevar Bush’s As We May Think in it. He dreams of Memexes.
  • Engelbart, returned to California, happily married, and bored, pulls over on the side of the road suddenly and realizes he’s achieved all of his dreams. He has a vision of using computers to help people better understand their complex world.
  • Now, having finished grad school, and working as a junior researcher at the Stanford Research Institute, Engelbart meets Bill English, a well-liked fellow researcher who has a knack for actually implementing things and getting people to follow him. Doug tells Bill about his Augment dream.
  • Bob Taylor from NASA hears about Doug’s Augment idea and decides to support it with its first real funding.
  • Doug gets a lab for the first time.

Working on this act, I was surprised at which scenes were easy and which were difficult. For example, scene #5 here, Bob Taylor giving Engelbart his first federal funding, seemed incredibly difficult and dry to me in theory, but in the process of drawing it, something about having the Washington Monument out the window and figuring out Taylor’s square military haircut as the core of his caricature really made it come together. Similarly, the moment of Engelbart and English meeting at SRI seemed overly abstract in spec, but was crystallized for me when I saw the photo on SRI’s page about the history of its magnetic logic lab.

On the other hand, a scene which had been especially vivid in my imagination, Engelbart’s vision by the side of the road, doesn’t look like much when I draw it out. I’m going to have to think hard about how to bring that scene to life.

Act 2:

Act 2 Storyboard

  • Engelbart has 2 LSD experiences at the International Foundation for Advanced Study. In the first one he is by himself and stares catatonic at a wall for 8 hours. In the second one, he’s with other engineers and co-workers and has a vision of a “tinkle toy” for helping potty train young boys.
  • Engelbart and English build the first mouse prototype and it performs extremely well in early tests of input devices
  • Augment people visit the Stanford Artificial Intelligence Laboratory and play Space War
  • Augment participates in sending the first ARPAnet communication to UCLA, the command “LOGIN”, which causes the system to crash after “LOG”
  • Stewart Brand helps Augment prepare the Mother of All Demos presentation, their public triumph

One of the challenges here is going to be to figure out how to bring out the counterculture thread that starts in this act. Opening it with Engelbart’s LSD vision gets us off in the right direction (and I love the image of the toilet vision), but it’s something of a challenge to keep that in focus in the other scenes. One important opportunity for that is going to be in the costume and set dressing of the scene at SAIL: the engineers are going to need to start looking scruffy and the environment should start having some psychedelic art/coloration going on. A key to this as well, I think, is going to be shifting the color palette away from the austere almost black-and-white of the 50s towards the washed out rich colors we associate with the 60s. That transition should start here and really reach completion in Act 3.

Also, the idea I came up with for how to tell the story of the first ARPAnet message — combining the UCLA and Augment sides of the scene so that they almost appear to be in the same room — will be a bit of a challenge to realize. How do you represent the lightning bolt/scene divide? Also who are the people on the UCLA side and how do we know they’re not just some Augment extras we haven’t encountered before?

Act 3:

Act 3 Storyboard

  • Stewart Brand brings Ken Kesey to the lab to use the Augment technology and he says “It’s the next thing after acid.”
  • Augment researchers visit Lama, a hippie commune in Taos, New Mexico, with Brand and live temporarily amongst the Bucky domes and hippies.
  • Augment researchers become obsessed with est, a pseudo-psychological cult of ‘interventions’; members of the lab are caught doing a drug-fueled, computer-enhanced encounter session while Engelbart is giving a tour to Pentagon funders
  • Bob Taylor recruits Bill English away to Xerox PARC; he’s the first of many top Augment staff to leave.
  • JCR Licklider returns to ARPA and cuts off Augment’s funding
  • Engelbart is left alone, using NLS by himself in a closed-down Augment lab.

Now here we have lots of new elements: exterior sets for the first time with the Lama visit and English’s recruitment at PARC. There are also some interesting echoes of earlier scenes; for example, the est intervention session makes an interesting parallel with the SAIL visit from Act 2. Also, and this was not something I realized until I drew the storyboard, the last two scenes of this act (Licklider cutting off Augment’s funding, and Engelbart alone in a cleared-out Augment) echo the last two scenes of Act 1 in a way that I find really satisfying. You get the whole arc of the story right there: he’s back where he started, but now the personal computer is a real thing sitting on his desk.

Another question for all of these acts is: what role can a moving camera play? If this were a traditional Hollywood film, you’d want to vary the camera approach from scene to scene so that the film took on a more dynamic, exciting feeling during important action sequences. What’s the equivalent of that here? It seems important to use some motion to break up the static quality things could fall into, especially as the arc of the story moves from the 50s into the 60s countercultural parts, but how to do it?

One big positive thing that’s come out of making these storyboards is that they’ve gotten me thinking about the kinds of things I’m going to have to build or represent in each scene. I’ve got the beginnings of a catalog of effects, props, people, and locations going in my head now.

Posted in Art, thesis

Augment Caricatures

I mentioned last time that I’m starting to approach my thesis as if it was a Hollywood blockbuster. Right now I’m in the pre-production phase which means: story work, character work, casting, design of the sets and props, special effects planning, etc.

In this post, I’d like to talk about one component of that process: the character design. Unlike most blockbusters, I’m starting with real people and using media and objects instead of actors to represent them. Hence character design is extremely important. Will the viewer immediately be able to recognize each of my four or five main characters across different pieces that render them in different forms and possibly different media?

In order to ensure that they will, I’m conducting a series of visual studies of my main characters designed to reduce them to caricatures. For each person who figures prominently in the story, I’m making study drawings from historical photographs with an eye towards the question: what is the single defining visual characteristic needed to represent them? How simply can I indicate their visual identity?

Here are a few examples of what I’ve come up with so far.

Bill English Using Mouse and Chording Keyboard

Bill English. Engelbart’s right-hand man, the Chief Engineer and Assistant Director of Augment. His two defining features seem to be his big dark glasses and his round domelike forehead, emphasized by his slightly receding haircut.

Doug Engelbart

Doug Engelbart. Founder of Augment. Our hero. This is a bit of a tough one, as in many ways Engelbart was the typical suit-and-tie engineer type. But after making this drawing and a few others, I focused in on the hair and the head shape. His hair has a signature pompadour flow and striking shocks of gray, and his whole head has a solid rectangular shape with a very strong chin.

Now, here’s an example of these two caricatures in action. The following is a frame from my storyboard of the scenes in Act 1 (about which more in the future):

Founding of Augment storyboard (closeup)

Hopefully, even with just those few extremely messy strokes, the presence of the hair and the glasses let you know that what you’re seeing is Engelbart and English looking through a door into an empty room.

Let’s take a look at two of our secondary characters as well. In many ways these are harder as they are likely to get less “screen” time and hence need to be instantly recognizable.

John McCarthy

This first one is John McCarthy, the head of the Stanford Artificial Intelligence Lab, which had a kind of sibling-rivalry relationship with Augment. Thankfully, McCarthy is a caricaturist’s dream: giant halo of frizzy hair and beard, big black glasses, and a prominent nose. The glasses alone wouldn’t do it for us, as we’ve already got glasses as the primary element of English’s caricature, but with all of McCarthy’s frizzy hair poking out in every direction the two shouldn’t be confused.

This last “character” is a little different, but I think it plays an important enough role in the story to actually be worth considering as a character rather than a prop or an effect.

Mouse prototype

The original mouse prototype. English built it out of wood and two potentiometers from a sketch by Engelbart; its invention was one of the most famous achievements of the lab. As a caricature, I think the things most important to emphasize are its large size compared with today’s mice, its primitive construction, and the fact that it’s made out of wood.

Next time, I’ll have an example of using these caricatures: a basic storyboard from the outline I presented last time.

Posted in Art, thesis

Augment: The Rise and Fall of Doug Engelbart, an Outline

In my last post, I made a case for myth making as an important role for art which has once again become possible after a long absence during the modernist period. “Could art actually be important to how a group of people define and understand themselves?” I asked.

So, what group of people, what myth to tell, and how to tell it? As an answer, here’s the start of a manifesto or mission statement:

The mid-century American techno-cultural movement which created the personal computer and internet industries is beginning to enter history. As the internet grows towards ubiquity, the ideas, artifacts, and culture of the small group of people involved in this movement become massively influential on the larger society. It is time for an art that addresses that history, an art that will tell its story, explore its material forms, and investigate its way of seeing the world. This art should use tools of representation appropriate to the mid-century technological milieu as well as contemporary tools sympathetic to its aesthetic and descended from its ideas. These include special effects (miniature photography, compositing, etc.), rapid prototyping and fabrication, motion control, and computer graphics.

Who created the personal computer? What were they trying to achieve in doing so, and what became of them? It is amazing that, in contrast to other technologies with equal impact on our lives, we have no shared myth to answer these questions. What is the personal computer’s equivalent of the story of Ben Franklin with his kite or Alexander Graham Bell with his wire?

The answer lies in the biography of Doug Engelbart, a World War II radio operator and early computer engineer who is the first person to have conceived of the computer as a device that could be used by individuals to aid in their own personal work for memory recall, information organization, communication, etc.

Sculpting the rich facts of Engelbart’s life into a story that has the shape and impact of myth is a challenging prospect. As a way into that problem, I’ve been employing the process screenwriters use in developing movie scripts: trying to boil down the events of Engelbart’s life into a three act outline that focuses on the ways in which Engelbart and a small number of supporting characters around him were changed by the events that took place. As an artistic choice, emulating the “pre-production” process of a major motion picture seems especially appropriate for this subject matter, since I intend to use many of the techniques and materials of movie special effects to create the final images and objects. As I understand it now, the final result will not likely be a big budget blockbuster, but I’m treating the storytelling and design process as if that were what I was working towards.

So, here’s my current draft outline for a three act story that could be called something like “Augment: The Rise and Fall of Doug Engelbart.”

Act 1

  • Engelbart ships out on V-J Day; the end of the war is declared as his ship pulls away from the dock
  • Engelbart is bored in a grass hut in the Philippines on radio duty. Someone drops off a copy of the Atlantic Monthly with Vannevar Bush’s As We May Think in it. He dreams of Memexes.
  • Engelbart, returned to California, happily married, and bored, pulls over on the side of the road suddenly and realizes he’s achieved all of his dreams. He has a vision of using computers to help people better understand their complex world.
  • Now, having finished grad school and working as a junior researcher at the Stanford Research Institute, Engelbart meets Bill English, a well-liked fellow researcher with a knack for actually implementing things and getting people to follow him. Doug tells Bill about his Augment dream.
  • Bob Taylor from NASA hears about Doug’s Augment idea and decides to support it with its first real funding. Doug gets a lab for the first time.

Act 2

  • Engelbart has 2 LSD experiences at the International Foundation for Advanced Study. In the first one he is by himself and stares catatonic at a wall for 8 hours. In the second one, he’s with other engineers and co-workers and has a vision of a “tinkle toy” for helping potty train young boys.
  • Engelbart and English build the first mouse prototype and it performs extremely well in early tests of input devices
  • Augment people visit the Stanford Artificial Intelligence Laboratory and play Spacewar
  • Augment participates in the first ARPAnet transmission, a remote login between UCLA and SRI; the system crashes after only the “LO” of “LOGIN” gets through
  • Stewart Brand helps Augment prepare the Mother of All Demos presentation, their public triumph

Act 3

  • Stewart Brand brings Ken Kesey to the lab to use the Augment technology and he says “It’s the next thing after acid.”
  • Augment researchers visit Lama, a hippie commune in Taos, New Mexico with Brand and live temporarily amongst the Bucky domes and hippies.
  • Augment researchers become obsessed with est, a pseudo-psychological cult of ‘interventions’; members of the lab are caught holding a drug-fueled, computer-enhanced encounter session while Engelbart is giving a tour to Pentagon funders
  • Bob Taylor recruits Bill English away to Xerox PARC; he’s the first of many top Augment staff to leave.
  • J.C.R. Licklider returns to ARPA and cuts off Augment’s funding
  • Engelbart tries to convince Bob Taylor to bring all of Augment over to PARC, but Taylor refuses
  • Engelbart is left alone, using NLS by himself in a closed-down Augment lab.

Now, with the story in mind, let’s return to the question of what group of people might find this myth important to their self-definition and identity. (Note: every point in that outline is true insofar as it is supported by one or more parties in the current historical record of this period as it is available to us; it is a myth solely in the sense that I’ve tried to reduce a complex and multifaceted story into a linear one with a clear and iconic shape.)

At the broadest level, Engelbart’s story is important to all of us who live surrounded by personal computers, smart phones, social networking sites, and the other media through which we conduct contemporary life, all of which have their roots in this encounter between the military-industrial complex and the counterculture in the Bay Area in the 1960s.

At a deeper level, this myth is vitally important to people who work in these fields: making websites, designing and building computers and the next generation of technology. It puts their work in context and gives it a sense of importance and tradition. From the founders of Google, Apple, and Microsoft to the young engineers and designers working on Twitter and Facebook, this story is at the heart of the battle over the meaning of what they do.

Even more specifically, this story is a matter of active concern to the older generation of technologists who were actually part of it and who are now entering the phase of life where they find themselves concerned with their place in history. Many of them have dedicated their energies to preserving aspects of the history of technology, from Andy Hertzfeld’s work at folklore.org documenting the creation of the Macintosh to Vint Cerf’s role at Google as Chief Internet Evangelist, a job that often includes memorializing and storytelling.

I want all of these groups to be the audience for this work, especially the latter two, who are not especially well-known as patrons of the arts and who have a particular interest in the meaning and interpretation of this story. In my last post, I described the “complex terrain of political and aesthetic geography” provided by princely patrons as being creatively stimulating for artists as they tried to work “within and against it”. Similarly, I want these people as patrons because they have powerful opinions about the meaning of this story itself. The friction and feedback they provide as I attempt to render it into meaning and myth will make for better art and a more lasting monument to this story of invention.

Posted in Art, thesis

Who is art for?

Note: this is the first in a series of posts exploring ideas for my ITP thesis. These posts will range in style and subject matter from general background philosophizing to concrete documentation of my own peculiar process.

Who is art for and what can it be?

Up until the advent of the art market in the late 19th Century, one of art’s central roles was to tell important historical stories in public ways that helped define the identity of countries and peoples. Artists worked for kings and depicted their military victories; they worked for churches and depicted their doctrines and stories. They created myths that gave these groups their collective meaning.

These relationships had obvious downsides: restrictions on the subject matter, stylistic approaches, and political positions artists could employ. But they also had one big upside: they placed art at the core of the process of defining our communal identities; they gave artists a complex terrain of political and aesthetic geography to work within and against.

When the market came along art became wildly more diverse. Instead of funding their work through the direct support of individual patrons, artists sold their work to a diverse collective enterprise that abstracted the demands of its buyers: galleries, dealers, agents. As long as they could find some critical mass of interested buyers, artists could find support for the work they wanted to be doing no matter its subject, style, or politics.

However, the art market did have one large limitation that gradually became obvious as it matured. To wit: in such a complex and diverse enterprise the only thing that all of the market’s members had in common was art itself. Hence any work that could be widely appreciated by the full market needed, almost by necessity, to be about art itself. The result was that art moved away from the kind of direct engagement with important issues of collective identity that it struggled with in the era of princely patronage. In the process the market for art became increasingly circumscribed: as art became more about its own concerns the barrier to entry for new viewers got higher and hence the average level of insider knowledge rose further, and on and on in an upward spiral of sophistication, specialization, and isolation.

In summary, we call this period ‘modernism’. And, in general, we think of it as having come to an end sometime in the last quarter of the 20th century.

Obviously, historical periods don’t end neatly or all at once. But sometime in that period, the regime in which the market — and the attendant navel-gazing it induced in art — fully dominated the art world began to wind down and be replaced by something new and, as yet, not fully defined. This new world promotes museums and international biennials to the rank of first-class patrons. It has also opened up art to a whole new range of interests outside the art world, from politically-driven work to pieces inspired by the insights of science.

But what about making myths and defining collective identities? Is this something art can now start to do again? Could art actually be important to how a group of people define and understand themselves? If so, who would fund it and where would it be shown or sold?

I think the answer to these questions is yes. And I want to set about making work that aspires to do exactly these things. In future posts, I’ll explain exactly the myth I want to make, to whom I think it will matter, and how I propose to go about doing it.

Posted in Art, Opinion, thesis

3D Space and Alignment in After Effects

I mentioned earlier that I’ve been working on matching camera movements from After Effects in the physical world. Since that earlier post, my experiments ran into a bit of trouble: I found that it was extraordinarily difficult to get a physical model to line up proportionally to one constructed in 3D within After Effects.

In this post, I’ll detail the failed experiments that led me to realize that I didn’t understand the relationship between the real physical world in front of the camera and the virtual 3D world inside of After Effects. Then, I’ll explain the controlled experiment I conducted to nail down this relationship and my current, much improved, understanding of it.

Building on my earlier success with the servo, I set about reproducing a two-axis move. I borrowed an X-Y table from Shahar Zaks and set about figuring out how to control it:

Motion control rig with exacti and janky tripod

Once I was up-and-running with Shahar’s electronics, I put together a simple shot in After Effects: a slow push-in on the Stanford Research Institute, through the window of which you would see Doug Engelbart and the rest of the Augment Research Center’s staff working on early computers. Here’s a draft (i.e. low-res) render of the animation:

333 Ravenswood test comp (augment around a table) from Greg Borenstein on Vimeo.

My goal was to replace the architectural drawing with a miniature 3D-printed version that I would place in front of a green screen, shoot with the moving camera, and live-composite into this animation, giving it something of the texture of early special effects movies from the 60s and 70s, and allowing the viewer to see how the image was constructed from physical objects in real time.

333 Ravenswood 3D print

While I was waiting for the miniature to come back from the 3D printer, I did some experiments with a scale-model I constructed out of blue foam. I exported the camera position data from my After Effects Composition in the manner I described in my previous post and played it back through a Processing sketch which communicated with an Arduino to move the real video camera in a manner that would reproduce the virtual camera move from my animation:

Motion-controlled test shot for tracking from Greg Borenstein on Vimeo.

You can see some obvious flaws in this footage: some camera jitter from the X-Y table’s stepper motors and some back-and-forth jerking at the end of the shot when the table runs into the side of its movable area. However, when I tried to composite this shot into my After Effects animation, I discovered much more dramatic problems: the camera seemed to be moving faster in the physical version than in the virtual one, and it was impossible to arrange it in the virtual space so that it fit the composition.

I discovered two problems with the footage. The first was simple to solve: I was recording video at 30 frames/sec and the animation had been rendered at 30 frames/sec, but in between I was playing back the camera position information at 24 frames/sec (because that was the frame rate I happened to have set for my Processing sketch). Once I tried the experiment again with that problem corrected, I discovered a deeper one: the parallax was incorrect.
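
The frame-rate half of that fix is just a resampling problem. The original playback ran through a Processing sketch; here is the idea sketched in JavaScript instead, with hypothetical names, assuming one camera-position sample per animation frame:

```javascript
// Resample a per-frame list of camera positions from one frame rate to
// another by linear interpolation, so that data exported at 24 fps can
// drive a rig recorded at 30 fps. Sketch only; names are made up.
function resamplePositions(positions, fromFps, toFps) {
  const frameCount =
    Math.round(((positions.length - 1) * toFps) / fromFps) + 1;
  const out = [];
  for (let i = 0; i < frameCount; i++) {
    const t = (i * fromFps) / toFps; // fractional index into the source
    const lo = Math.min(Math.floor(t), positions.length - 1);
    const hi = Math.min(lo + 1, positions.length - 1);
    out.push(positions[lo] + (positions[hi] - positions[lo]) * (t - lo));
  }
  return out;
}
```

Run with matching rates it returns the input unchanged, which makes a mismatch like mine easy to test for.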

Parallax is the optical effect that causes objects farther from a given point of view to appear to move more slowly than closer objects as the point of view shifts. The distance between the physical camera and the miniature building was not proportional to the distance between the virtual camera and the digital building. Further, these ratios didn’t reflect the size relationship between the physical building and its digital stand-in.
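
In a simple pinhole-camera model, that relationship reduces to an inverse proportion. This tiny sketch is illustrative only, with arbitrary units:

```javascript
// For a sideways camera move in a pinhole model, an object's apparent
// shift on the image plane scales with the move and the focal length,
// and inversely with the object's distance from the camera.
function apparentShift(cameraMove, focalLength, distance) {
  return (cameraMove * focalLength) / distance;
}

// A nearby miniature moves ten times as far on screen as a backdrop
// ten times more distant:
console.log(apparentShift(1, 50, 10));  // 5
console.log(apparentShift(1, 50, 100)); // 0.5
```

This is why the shot falls apart unless the physical and virtual distances stay in the same proportion.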

In order to get the physical shot to match the digital one, all of these proportions would need to come into alignment. But how to achieve that without an enormous, time-consuming, and un-reproducible guess-and-check process?

After talking to Chris Kairalla extensively about the problem, I began to understand that what I really needed was to learn the proportion between the pixels that measure After Effects’ 3D world and the inches of the real one. Chris and I devised an experiment to determine exactly this proportion.

The experiment was as follows:

  • Print out a reference image, for example a 1 inch grid.
  • Place the grid in front of a camera on a tripod at, say, 3 feet and take a photograph. Then move the camera back one foot and take a second photograph.
  • Bring both photographs, as well as the un-scaled grid, into an After Effects composition. Make the un-scaled grid a 3D layer so that it is affected by virtual cameras within the comp; leave the two photographs as 2D layers so they are not.
  • Introduce a virtual camera. While viewing only the un-scaled grid and the photograph taken at 4 feet, move the camera back from the origin and stop when the grid appears to match the picture perfectly.
  • Introduce a second camera. Hide the 4 foot picture, reveal the 3 foot picture, and move this camera back from the origin until the un-scaled grid appears to match it.

You will now have two measurements, each of which corresponds to a known physical distance. If the measurements are consistent, you have an answer to the relationship between pixels and inches: simply divide the pixel distance of the camera that matched the 3 foot picture by 36 and that of the 4 foot camera by 48. These should come out to about the same number.
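
The arithmetic in that last step amounts to a couple of divisions; the pixel distances below are hypothetical stand-ins for the measured virtual-camera positions:

```javascript
// Convert a matched virtual-camera distance (in After Effects pixels)
// and the corresponding physical distance (in inches) into a ratio.
function pixelsPerInch(pixelDistance, inches) {
  return pixelDistance / inches;
}

// Hypothetical measurements from the two matched cameras:
const nearRatio = pixelsPerInch(3600, 36); // camera matched to the 3 ft photo
const farRatio = pixelsPerInch(4800, 48);  // camera matched to the 4 ft photo

// If the model holds, the two ratios agree:
console.log(nearRatio, farRatio); // 100 100
```

Agreement between the two ratios is what makes the measurement trustworthy; a single camera position alone proves nothing.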

Chris predicted the number would be either 72 because that is “a magic number for pixels” or 100 which would have been the sensible choice.

After some fiddling and false starts arising from trying to be too anal-retentive about the match between layers (I forgot to take into account that my photo setup was imperfectly aligned, so I shouldn’t expect a pixel-perfect match), I came up with evidence in favor of a 100px/inch correspondence. Here’s the output of rendering the comp arranged in the manner described above:

After Effects 3D alignment test 2 from Greg Borenstein on Vimeo.

Note here that I set up the un-scaled grid as a “difference” layer so that the places where it perfectly matched the photo’s grid would be black and the imperfect places would be white. The background field is gray because the exposure on the stills didn’t render the raw paper perfectly white.

With this result, we now know for sure: in After Effects 100 pixels equals 1 inch in the real world.

So now, with this new knowledge, I can go back and redesign my comp and my motion control setup in order to ensure that they are in scale with each other.

The one other possible wild card would be if my camera (a Canon T2i, by the way) somehow did something different when shooting video that affected the variables in this result. To test this possibility I made another comp in which I used techniques similar to those I’ve already described to match up the video I took at 3 and 4 feet with the stills taken from the same positions. Thankfully, things came out as expected:

Test of matching still and video footage in After Effects from Greg Borenstein on Vimeo.

The video was smaller and had a wider aspect ratio (actually making it a much better match for my comp, the way I had it set up), but once I overcame these differences via scaling, the “difference blend” test yielded these tight results, confirming no other strange problems.

The next experiment will be to design a still shot that reproduces the proportional positions of a few objects in a comp and make sure that it composites correctly. After that, I’ll move on to a one-axis camera move and, eventually, build back up to a complex two-axis move like the one I attempted here.

Posted in Art, Science

360 Degree Interactive Camera with HTML5 Video, Canvas, and Processing.js

A couple of days ago, I posted about a project I’m working on with some fellow students to create a system for making 360 degree documentation of circuits. At that point we had our demo working in Open Frameworks, while our eventual goal was to implement it on the web and also add annotations as overlays.

We’ve made progress! Today Rune got Processing.js working on top of a video element. His demo uses Processing.js to draw on a canvas element that is absolutely positioned on top of a video tag. This was proof that the basic structure would work: using javascript to overlay annotations on top of a moving piece of video.

This afternoon, I took the next step: using javascript to control the playback of the video. In HTML5, the video element has a pretty straightforward API, allowing you to play, pause, and set the currentTime, everything one would need to recreate our rotation-via-scrubbing effect. The basic idea is to move to a point in the video’s full duration that corresponds proportionally to the position of the mouse horizontally over the width of the video on the screen: move the mouse left and the video “rewinds”; move it right and it “fast forwards”.
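
A minimal sketch of that mapping, with the mouse-to-time math pulled out into a testable function; the element wiring in the comment is hypothetical, not the actual demo code:

```javascript
// Map the mouse's horizontal position over the video to a proportional
// playback time, clamped to the video's duration.
function scrubTime(mouseX, videoLeft, videoWidth, duration) {
  const fraction = Math.min(Math.max((mouseX - videoLeft) / videoWidth, 0), 1);
  return fraction * duration;
}

// Browser wiring (sketch; the element id is made up):
// const video = document.getElementById('rotation-video');
// video.addEventListener('mousemove', function (e) {
//   const rect = video.getBoundingClientRect();
//   video.currentTime = scrubTime(e.clientX, rect.left, rect.width,
//                                 video.duration);
// });
```

Clamping matters: without it, a mouse just outside the element would seek past the ends of the clip.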

At first I encountered some frustrating problems due to the fact that when the video first renders it is not ready to be manipulated. While you can read attributes like currentTime and duration and call functions like play() and pause(), you cannot set the playback position until the video’s readyState attribute reaches 4 (HAVE_ENOUGH_DATA). This was rather buried in the documentation and hard to figure out, as code I would construct in Firebug would work successfully there and then fail when run within the page’s script tag.
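
One way around it is to defer any seeking until readyState reaches 4. This is a sketch of a simple polling approach; the function name and the 50 ms interval are mine:

```javascript
// Invoke the callback only once the video reports readyState 4
// (HAVE_ENOUGH_DATA), polling until it does. Listening for the
// 'canplaythrough' event would work as well.
function whenSeekable(video, callback) {
  if (video.readyState === 4) {
    callback();
  } else {
    setTimeout(function () { whenSeekable(video, callback); }, 50);
  }
}
```

With the scrub handler installed inside the callback, nothing touches the video until the browser says it is safe.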

Once I understood this problem and worked around it, implementing the scrubbing as described was pretty straightforward. Play with the demo (movie may take some time to load; be patient). Read the code on GitHub. And keep an eye out for further advancements coming soon.

Posted in Opinion