Opinion – Ideas For Dozens http://urbanhonking.com/ideasfordozens Wed, 30 Mar 2016 22:39:34 +0000 en-US hourly 1

Compelling vs Compulsive Games: Threes, Super Hexagon, and The Replay Urge
http://urbanhonking.com/ideasfordozens/2014/02/19/compelling-vs-compulsive-games-threes-super-hexagon-and-the-replay-urge/
Wed, 19 Feb 2014 21:01:15 +0000

Note: As part of an independent study this semester at the MIT Media Lab on game design, I’ve begun blogging about games at playedgame.tumblr.com. I’ll occasionally cross-post more developed pieces like this one, but feel free to follow that Tumblr if you’d like to read more of my writing about games.

Like their coin-op arcade ancestors, today’s casual mobile games live or die on their ability to create The Replay Urge: that itch you get to play again immediately at the end of each round. The Urge consists of complex emotional components: the frustration of failure, the drive towards self-improvement, the joy of skill development, the tactile pleasure of manipulating an interface, the inherently addicting quality of random rewards.

Games that induce enough of these become “addicting” and find success through deep user engagement and viral spread. Beyond commercial success, however, the specific mix of components a game uses to achieve addictiveness determines its aesthetic effect: whether or not it is actually fun. Depending on each game’s particular combination of these components, the result can be truly compelling or merely compulsive.

Compelling games reward you for repeated play, investing the time they extract from you into a deepening experience. Usually this means the gradual building of skills. Each round of play, however frustrating, acts as practice, building reflexes and sharpening systemic understanding. Compelling games repay your time investment with the feeling of mastery.

For me, an example of a compelling game is Terry Cavanaugh’s Super Hexagon.

Super Hexagon is a brutally hard, fast-paced obstacle avoidance game where you rotate a small triangle to avoid a series of incoming geometric patterns of walls. The goal is to survive for 60 seconds on each level of difficulty without crashing into any walls. As a beginner, dying almost immediately is routine. The game displays your survival time down to the hundredths of a second.

However, with each repetition you slowly build skill: increasing the speed and precision with which you move, learning how to avoid new patterns of walls, executing more consistently. You gradually survive longer and, eventually, beat each round. The difficulty ramps up smoothly as your skill improves, the current speed and batch of puzzles always seeming frustratingly impossible and the just-completed ones insultingly easy.

Despite having played for countless hours, I’m proud of having beaten the Hexagonest stage and one of the three Hyper Mode stages (and of my current best time of 58:29 on “HARDESTEST”). These hours (built out of binges on long bus and plane trips and carved out of the interstices of the day) were devoted to building a skill. And, however arcane and useless that skill, its very difficulty gives it value to me.

Compulsive play, on the other hand, wastes the time it wrings out of you. While compulsive games successfully kindle the Replay Urge, they fail to convert the resulting repetitive play into sufficient skill building. Playing compulsively leads to a feeling of self-disgust that grows stronger the longer you play. And it leaves behind no pride or skill, but only wasted time.

For me, Threes is an example of a compulsive game. Created by Asher Vollmer and team, Threes is a clever variation on the Match Three genre of puzzle game made ubiquitous by Bejeweled and Candy Crush Saga. It presents you with a 4-by-4 grid, sparsely populated with numbered tiles. The goal is to merge tiles, combining their values into an ever-growing sum.

On each turn, you swipe the grid left, right, up, or down, merging adjacent tiles that match, moving tiles into empty squares, and making room for the next arriving tile. White tiles start at 3 and match if they have the same number, combining into a doubling sum: 3s combine into 6, 6s into 12, and so on. Red and blue tiles, numbered 1 and 2, can only be merged with each other to create a 3. When your board is full and there’s no swipe direction that can merge two tiles to create an opening for the next tile, the game is over. You’re rewarded for creating the highest numbered tiles, with scores increasing geometrically with each doubling.
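The merge rules above can be sketched in a few lines of Ruby. To be clear, the method names and tile representation are my own invention, and the scoring formula (a tile of value 3·2^k is worth 3^(k+1) points) is the commonly reported rule rather than anything taken from the game’s code:

```ruby
# Illustrative sketch of the Threes merge rules described above.
# Tiles are plain integers; names and structure are my own, not the game's.
def merge(a, b)
  return a + b if a >= 3 && a == b   # matching white tiles double: 3+3=6, 6+6=12...
  return 3 if [a, b].sort == [1, 2]  # a red 1 and a blue 2 combine into a 3
  nil                                # any other pair can't merge
end

# Commonly reported scoring rule: a tile of value 3 * 2**k scores 3**(k + 1),
# so scores grow geometrically with each doubling. 1s and 2s score nothing.
def tile_score(tile)
  return 0 if tile < 3
  3 ** (Math.log2(tile / 3.0).round + 1)
end

merge(3, 3)      #=> 6
merge(1, 2)      #=> 3
merge(6, 3)      #=> nil
tile_score(384)  #=> 6561
```

Under this rule a 384 tile alone is worth 6561 points, which is why a couple of high tiles dominate the score of a whole board of small ones.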

The game has a minimalist style and satisfyingly smooth touch interactions, both of which are highly reminiscent of Loren Brichter’s Letterpress. On first playing, it’s natural to enjoy simply batting the board back and forth, watching a few tiles merge as the empty spaces quickly fill up until the game ends.

However, the problems begin as soon as you knuckle down and start trying to build skill and learn strategies to improve your score. Upon first hitting this phase, I wasn’t even sure if it was possible to have a strategy. Tiles seemed to arrive randomly, my options for how to deal with them were extremely limited, and, once the board started filling up, there didn’t seem to be any way to empty it again.

After seeing Sermad Bruni tweet an impressive score, I asked him for tips and he replied with a few.

All of these are techniques for maintaining the maximum number of move options. As rows or columns on the Threes board fill up with un-combinable tiles, swiping no longer moves them. When multiple red or blue tiles end up next to each other or move into the interior of the board, they become impossible to match and cause the board to quickly congeal. Large-numbered tiles have a similar effect.

After putting this new understanding into action, my results improved somewhat. I started routinely scoring in the 2,000–4,000 range and even had a couple of games where I constructed a 384 tile and scored over 8,000.

I began to see that Threes does require certain skills that can be learned and improved. As with a Rubik’s Cube, a toolkit of move combinations can be learned by rote to handle specific tactical situations: the leftward move that keeps two tiles on an outer row or column centered to leave more options for handling arriving tiles, the choice not to combine neighboring red and blue tiles in order to first correctly place an incoming tile, etc.

However, my skill quickly plateaued. I wasn’t able to come up with any additional strategies that would improve my results and there were no Super Hexagon-like twitch skills to practice. Regardless, the power of the game’s Replay Urge kept me playing long batches of repeated rounds, my sense of self-disgust growing.

I began to suspect that Threes is merely a compulsive game.

But then Ashley Esqueda tweeted a link to this TouchArcade post by a user, y2kmp3, who created the highest-numbered tile in the game: 6144. y2kmp3’s post describes playing a single game of Threes for 10–15 hours to achieve this result. Rather than batting tiles back and forth, y2kmp3 plans each move like a chess grandmaster, considering options and repercussions across a number of play sessions. This could not have been more different from the way in which I was playing Threes. Coming across this thread was like discovering a completely different game being played with the same pieces and rules I’d been using.

Part of what boggles the mind about such an effort is that Threes doesn’t seem to give you enough information to conduct long-term planning. At any given point you know the state of the board: the potentially squashable pairs, the free directions of movement, and the next tile. That’s it.

The other two key elements of the game, on the other hand, are random: which tile will arrive after the next one and onto which of the multiple open squares it will drop. (y2kmp3 also seems to have a mental model of the proportional odds of future tiles, as expressed in their image of the “stack”. This would help manage the former source of randomness but is, as we’ll see, a very advanced skill; I have no inkling of it after 10–15 hours of play.)
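That kind of mental model of the stack amounts to a weighted draw over possible next tiles. A minimal sketch, with the caveat that the weights below are entirely invented for illustration; the game’s real odds were only established by the reverse-engineering effort discussed later:

```ruby
# Hypothetical next-tile stack: tile value => weight. These weights are
# made up for illustration, not the game's actual (reverse-engineered) odds.
STACK_WEIGHTS = { 1 => 4, 2 => 4, 3 => 4, 6 => 1 }

# Draw one tile in proportion to its weight.
def draw_tile(weights = STACK_WEIGHTS, rng = Random.new)
  pick = rng.rand(weights.values.sum)  # integer in [0, total weight)
  weights.each do |tile, weight|
    return tile if pick < weight
    pick -= weight
  end
end

# Under these made-up weights a 6 appears roughly 1 time in 13, which is the
# sort of proportion a player's mental model of the stack would track.
```

A player who has internalized the proportions can plan for likely arrivals, but the drop position remains out of their control.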

The presence of these random elements distinguishes Threes from Chess, Go, and other fully deterministic games where you normally see this kind of highly contemplative play. In such games, the advantages or disadvantages of a proposed move could theoretically be fully analyzed given sufficient time, resources, and skill.

High-level Threes strategy is designed entirely around managing randomness. y2kmp3 describes aiming to “maintain two separate chains” of high number tiles in order to create the possibility of later merges. Apparently this strategy fails in the later stages of the game, as higher numbered tiles begin appearing in the stack, forcing y2kmp3 to change strategy and “create only one ‘high number’ tile card of each kind, so that whatever the random ‘high number’ tile card appears, you can make use of it to escalate.”

This is obviously quite different from a game like Chess or Go, where high-level strategy emerges as a negotiation between game rules and player personalities. Chess grandmasters are known for their inclinations towards particular styles of defensive or aggressive play, how they value different intermediate objectives like mobility and king safety, whether they’re stronger in tactics or positional play, etc. High-level Threes play seems to offer no possibilities for this type of personality expression. Instead, strategies are dictated by the composition of the stack and the odds that govern random tile appearance.

Beyond Threes’ limitations as a platform for high-skill strategic play, the game’s design has a bigger problem. Discovering y2kmp3’s post shocked me because the gap between that level of play and my own is so huge as to seem unbridgeable. Perusing the rest of that TouchArcade thread reveals an amazing research effort, chiefly led by a user named kamikaze23, to reverse engineer the algorithm the game uses to populate the stack of next tiles. kamikaze23 solicited logs of Threes games from users, studied the patterns of tile appearance, and put together a speculative account of the algorithm that players like y2kmp3 use to achieve their high scores.

The need for this kind of research to understand the odds of the stack is evidence of a profound discontinuity in the curve of difficulty vs. increasing skill the game presents to players. It implies that all Threes players will hit the wall of frustration at which I currently find myself, unable to improve their abilities through iterative practice. Only a tiny minority of such players will decamp to the abstract realm of research and statistical study that’s apparently necessary to reach the higher levels of play. Abstract systematic analysis like this is how we solve problems that don’t yield to our basic ability to learn from experience. It’s at the core of our most rarefied rational endeavors, like science and mathematics. The incongruity of requiring this mode of thought in a casual game is profound.

As a game designer, I wonder about changes you could make to Threes to ameliorate this problem, to provide a smoother learning curve that would allow more users to ascend to these higher levels of play. The chief idea that occurs to me is to show the user more of the stack. The ability to see the next three or four tiles or more would facilitate longer-term planning and would make it easier to incrementally internalize the behavior of the complex stack algorithm described by kamikaze23. Further, currently all of the random elements are both biased against the player and purely destructive. I wonder if you could introduce an additional random element that would benefit the player and rebalance the game. Maybe a “wild tile” that could be combined with any neighbor like a blank Scrabble tile? This would let the player dig out of full and badly wedged boards and could result in an interesting set of strategic options around how long to save it and when to play it.

Threes is obviously a carefully considered game that was the product of a great deal of design work. So, I’m aware of the hubris of cavalierly suggesting changes like these without the ability to playtest them. For all I know Vollmer tried these variations and found downsides I haven’t anticipated. And creating a game with players as dedicated to discovering high-level strategies as y2kmp3 and kamikaze23 is an achievement in itself.

But, for me, as the game currently stands, playing feels compulsive instead of compelling and I regret much of the time I’ve spent playing it rather than feeling pride in it.

OOO and OOP
http://urbanhonking.com/ideasfordozens/2013/06/21/ooo-and-oop/
Fri, 21 Jun 2013 17:26:44 +0000

Graham Harman has always denied any deep connection between his Object-Oriented Philosophy and Object-Oriented Programming, from which he borrowed the name. Initially, fellow OOO-er Ian Bogost even resisted the use of “object-oriented” as confusing, due to the failure of the computer science sense of the term to “mesh” with the philosophical one.

But, the more I’ve read of Harman and other OOO thinkers, the more I disagree with Bogost. While Harman’s objects are not identical to those that are familiar to programmers, the relationship goes well beyond shared terminology. They are deeply enmeshed, both conceptually and etymologically.

In this post, I’ll try to tease out some of these connections, by looking briefly at two key terms in Harman’s philosophy: “black boxes” and “withdrawal”.

Black Boxes

The term “black box” plays a key role in Harman’s Prince of Networks, in which Harman reads French sociologist Bruno Latour’s oeuvre in order to spell out the metaphysical system at play within it. He then goes on to define his own Object-Oriented Philosophy by building on – and diverging from – Latour’s ideas.

Latour defines the term “black box” thusly (as cited by Harman in Prince of Networks):

A black box is any actant so firmly established that we are able to take its interior for granted. The internal properties of the black box do not count as long as we are concerned only with its input and output.

Latour deploys the term to solve a key problem in his Actor Network Theory: how can we talk about individual members of the Network when we know that, if we looked more closely, they’d teem with other actants and their relations. We can only consider an individual actant in itself by hiding the other actants that make it up within a “black box” abstraction that reveals only those actants’ collective effects, properties, and relations.

In the sciences, the term “black box” seems to have first appeared in the work of German mathematician Wilhelm Cauer, but it was made famous by Norbert Wiener, the legendary mathematician who founded the field of cybernetics. In Cybernetics: or Control and Communication in the Animal and the Machine (1948), Wiener defined a “black box” as

a piece of apparatus which performs a definite operation on the present and past of the input potential, but for which we do not necessarily have any information of the structure by which this operation is performed.

(Latour explicitly mentions this cybernetic origin when he introduces the term in Science in Action.)

In the intervening 65 years, this idea has found many applications in engineering and programming, from code breaking to electronics to the design of software systems.

In its most common programming usage, the “black box” stands for the principle of abstraction: the goal that individual components should be built with as few assumptions as possible about the wider system so that they can be reused in different situations without needing to be changed.
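In code, that principle looks like a class exposing only an input/output contract while keeping its internals private. A toy sketch (the class and its internals are my own illustration, not drawn from any source discussed here):

```ruby
# A toy "black box": callers see only the input/output contract. The internal
# state is hidden and could be reimplemented without callers noticing.
class RunningAverage
  def initialize
    @sum = 0.0    # hidden internals
    @count = 0
  end

  # Feed in a number; get back the running mean. How it's computed is opaque.
  def add(x)
    @sum += x
    @count += 1
    @sum / @count  # only the output escapes the box
  end
end

avg = RunningAverage.new
avg.add(2)  #=> 2.0
avg.add(4)  #=> 3.0
```

The caller can treat `RunningAverage` as a single unit even though it “teems” with state and logic inside: exactly the move Latour’s black box licenses.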

Latour’s use of “black box” is perfectly resonant with its mathematical and programming usage. As is Harman’s.

Harman adopted the term from Latour to battle what he calls the “undermining” instinct: the tendency to reduce an object to its components. An underminer would look at the laptop on which I’m composing this blog post and say: well the laptop is really just a collection of aluminum, glass, and silicon components. And those components are really just collections of atoms. And the atoms are really just collections of sub-atomic particles. And the sub-atomic particles are really just probability waves in a vacuum.

Harman deploys the black box in defense against this, essentially nihilistic, gesture. The black box allows Harman to think about objects as unitary wholes, with their own effects and relations distinct from those of their components, even while knowing that they contain complex networks of other objects inside of them. Again from Prince of Networks:

The black box replaces traditional substance…while traditional substances are one, black boxes are many – we simply treat them as one, as long as they remain solid in our midst. Like Heidegger’s tools, a black box allows us to forget the massive network of alliances of which it is composed, as long as it functions smoothly.

The distinction between Harman’s use of “black box” and Wiener’s is subtle. For Harman, an object has effects and relations in excess of those of its components. For Wiener, a black box represents a (frequently voluntary) boundary of knowledge. How an object transforms its input into its output may be completely determined by the objects inside its black box, but we either can’t know, or choose not to know, exactly how they contribute to that process. However, in the practice of programming, our black box abstractions always do have effects beyond simply hiding their internals. The abstractions we inherit and choose to create play a major role in the social and technical evolution of our systems, helping to determine the boundaries between different pieces of software, teams and companies, and even pieces of hardware infrastructure. They certainly have effects and relations distinct from their components.

Withdrawal

“Withdrawal” is an even more interesting example of this resonance. Withdrawal is a core concept in Harman’s thought. It is what gives objects their continuity over time and their persistence despite change.

For Harman, real objects withdraw. There’s always more to them than the qualities and relations we observe them to possess. Harman calls the part of the object we can observe and relate to the “sensual object”, in distinction to this “withdrawn object”.

In Prince of Networks, Harman introduces withdrawal in order to solve a problem he saw in Latour with the continuity of objects. For Latour, an object is nothing more than its relations. Objects (or “actants” in Latour’s terminology) sit in a network of other objects, acting as their supporters, components, evidence, masters, associates, etc. They are entirely determined by these relations. However, Harman observed that if any of these relations changed (which they obviously do over time) then, in this understanding, the object must cease to exist and become a different object (objects being nothing more than their relations).

For Latour, the demitasse out of which I drink my espresso becomes a fundamentally different object when the barista fills it, another different one when she passes it to me, and a third, fourth, fifth, and sixth with each sip I take from it. Six completely different objects.

Harman observed that this point of view makes it hard to think about change. There’s no object that persists through this series of alterations, undergoing changes. He introduced withdrawal to explain change. The withdrawn object is what persists even as its qualities and relations change.

How does this relate to Object-Oriented Programming? Let me explain through some example code.

In Ruby, when you create a new object, it has an object_id:

o = Object.new
puts o.object_id
#=> 70152737816360

This object_id is just a number. The Ruby environment uses it to remember where the object is located in memory, but it is independent from the particulars of the object itself.

You can add data to the object by defining instance variables on it:

o.instance_variable_set("@name", "spot")

and its object_id won’t change:

puts o.object_id
#=> 70152737816360

even though the data is now there:

puts o.instance_variable_get("@name")
#=> "spot"

You can also remove data without affecting the object_id:

o.send(:remove_instance_variable, "@name") # remove_instance_variable is private, hence send
puts o.object_id
#=> 70152737816360
o.instance_variable_get("@name")
#=> nil 

And, of course, the same thing holds for adding and removing methods:

def o.cat!
    puts "meow"
end

def o.dog!
    puts "arf"
end

o.cat!
#=> "meow"
o.dog!
#=> "arf"
puts o.object_id # no change
#=> 70152737816360 

class << o
    remove_method(:dog!)
end

o.dog!
#=> NoMethodError: undefined method `dog!'...
puts o.object_id # no change
#=> 70152737816360 

In a very real way, a Ruby object’s id is the essence of the object; it’s how the system knows where to find the object in memory. If the object_id were somehow removed or changed, the object would be lost. On the other hand, the object_id is utterly uninteresting. It doesn’t tell us anything about the object’s data or methods – the parts of the object that interact with other objects and that we spend most of our time thinking about and manipulating.[1]

This is profoundly parallel with Harman’s idea of withdrawal. The object_id is an analogue for Harman’s withdrawn object while the methods and data make up the “sensual object” we perceive and to which we (and other objects) relate.

One common misinterpretation of Harman’s position is to imagine that the withdrawn portion of the object is always its most important part, even its “soul”. However, as spelled out in The Quadruple Object, Harman’s system sees objects as having a four-fold structure. And, in some regards, the withdrawn object is the least interesting of the four. Just like the Ruby object_id, even though the withdrawn object is mysterious and inaccessible, it doesn’t determine the object’s qualities and relations, which are frequently what we’re most interested in.

Politics and the Fear of Technical Associations

There are many additional facets of Object-Oriented Philosophy that have similarly interesting relations to Object-Oriented Programming. There’s probably a dissertation-scale study to be done just in comparing Harman’s thought with that of Alan Kay, the legendary computer scientist who coined the phrase “object-oriented programming”. Such a study could serve as an intellectual history tracing the movement of these ideas back and forth between technical and philosophical disciplines, provide a source of additional philosophical material unearthed by computer scientists, and introduce a new axis for understanding the work of computer scientists by unpacking their various philosophical positions.

However, I find it very unlikely that such work will be done in a philosophy department any time soon – and not just because of the difficulties involved in mastering both the computer science and philosophical technicalities involved (after all, Harman’s work reading Latour as a philosopher required a similar engagement with the sociological specificities of that oeuvre).

One of the chief criticisms leveled against OOO is political: specifically, that it fails to provide a basis for radical political action against the state. Alex Galloway has been perhaps the most prominent spokesman for this position, accusing Harman of “depoliticization and neutralization” and arguing that politics must precede other forms of thought: “to be political means that you have to *start* from the position of incompatibility with the state”.

Galloway blames these political failings precisely on OOO’s technical heritage, arguing that “OOO is politically naive because it parrots a kind of postfordist/cybernetic thought” – essentially saying that OOO’s roots in programming make it complicit with “digital capitalism”.

Harman and other OOO thinkers have responded vigorously to this critique (see Bryant and Donkey Hottie, amongst many others). In addition to criticizing the weak ties that Galloway sloppily used to bind OOO to “digital capitalism”, the OOOers have validly questioned Galloway’s basic premise that politics must always precede and determine other forms of philosophical inquiry.

Despite this vigorous defense, these kinds of attacks have left OOO with a lasting fear of an over-close association with technical fields like computer science and programming. This is a serious shame as it obscures a significant part of OOO’s heritage, prevents what could be a powerfully productive interdisciplinary collaboration, and (maybe worst) leaves OOO’s critics as the only voices defining this connection.

  [1] The class of a Ruby object is also unchangeable, but this feels like a way that Ruby is a bit more “class-oriented” than “object-oriented”. Relatedly, one of Bogost’s objections to the philosophical use of “object-oriented” was that computational objects “describe a pattern, not a thing.” I think Bogost here mistakes classes, which are patterns for objects, for the instantiated objects themselves – a common and understandable slippage, since in most mainstream object-oriented languages, like Ruby, programmers spend much more of their time thinking about and writing classes than they do manipulating individual objects as I’ve been doing in this example code.
In-Screen Graphics as Religious Experience: On the Purpose of Infoviz
http://urbanhonking.com/ideasfordozens/2012/07/01/in-screen-graphics-as-religious-experience-on-the-purpose-of-invoviz/
Sun, 01 Jul 2012 17:14:57 +0000

In a recent Domus piece on In-Screen Sports Graphics, Max Gadney reports on a talk given by Ryan Ismert at Strata. Ismert works at Sportsvision, which makes the graphics that appear in most US TV sports telecasts.

In the piece, Gadney sets out Sportsvision’s work as a model for other infoviz practices, especially those aimed at business decision makers and designers of public space. He offers some compelling evidence of Sportsvision’s success at integrating rich sensor-derived data into concise and comprehensible on-screen graphics.

Then Gadney takes things a little further. He asks: “how do we go about getting the freeflowing, more subjective data that might better communicate life in buildings, football and business?” In sport, he’s thinking about problems like getting the system to capture the subtleties of the fluid “whirling patterns of play” demonstrated in an FC Barcelona football match. He wants systems like Sportsvision’s on-screen infoviz to be able to understand and represent such seemingly ineffable properties (and, by analogy, for our business and building modeling systems to be able to do the same), not just hard numbers.

And Gadney has a suggestion for an answer: “The answer, for both football and buildings, will emerge from a more holistic, performative sophistication in collection and visualisation, as a ‘total design’”. He suggests that this “total design” could be based on “parametric models describing the interdependency between performative elements in buildings”:

“Parametric models indicate how a change to one component of a structure causes ripples of changes through all the other connected elements, mapped across structural loads but also environmental characteristics, financial models and construction sequencing. FC Barcelona’s activity is also clearly parametric in this sense. It cannot be understood through sensors tracking individuals but only through assembling the whole into one harmonious, interdependent system.”

Here, I think, is where Gadney gets into trouble. The problem with this idea of “total design” is that it bleeds into Cybernetics on the one side and AI on the other and hence falls into the problems that haunt both of those disciplines.

What is the victory condition for an advanced parametric model of the subtle strands of inter-relations between players on a football pitch?

Start with the Cybernetics option. Is the goal predicting the future of the game? Predicting when and whether individual goals will be scored? If so, that falls into the classic modeling problem that doomed large-scale cybernetics efforts such as World3 and George Van Dyne’s work in the Colorado grasslands. Capturing increasing amounts of data doesn’t cause your model of a complex system to converge on real-world behavior. Instead it causes the model either to act stochastically (as in Van Dyne’s case) or to fail catastrophically because of flawed assumptions (as in the case of World3’s failure to predict the green revolution and the liberalization of global trade).

The failures of these statistics-based complex systems models led directly to the rise of chaos theory and a new modesty across disciplines like ecology. The scientists involved learned respect for how little rich data actually helps in the problem of modeling complex systems. However, many other industries did not learn such respect. One of the most prominent was the financial industry, which built probably the most sophisticated and detailed data-driven systems modeling tools in the world. As is now emerging, these tools were a major contributor to the overdeveloped sense of confidence and control that plagued financial industry operators, leading directly to the 2008 crisis.

On the AI side, we have another proposal for the goal of such a “total design”. Maybe the victory condition is that the system shares our aesthetic appreciation of the game? The traditional aim of AI is to reproduce human capacities in the machine. Towards that aim we might use increased tracking of players and increasingly sophisticated models of game dynamics to make our computational systems into passionate fans of games, fans that can appreciate the complex shifting patterns of FC Barcelona’s tiki-taka passing the way we do.

However, how you would measure (or even clearly define) machine “appreciation” is a philosophical problem that has plagued hard AI proponents since the beginning of the discipline. And, I would argue, a problem on which they’ve made basically no progress in that time. The reason for that lack of progress, to my mind, is that the core of AI itself is a bad metaphor. As Bruce Sterling argued compellingly in his Long Now talk, The Singularity: Your Future as a Black Hole: “we don’t know what cognition is and we don’t even really know what computation is”. So how can we expect to jump straight into subtle problems like building aesthetic appreciation into computation? Even if you’re not convinced by the Searle arguments against hard AI, I think you’d have to see this as something of an obstacle to setting something like this as the goal of a sport-, business-, or building-infoviz system, despite the potential poetic beauty of doing so. (I wrote more about this problem and how I think it’s evolving in the context of present day technology in my post, AI Unbundled.)

All of that said, what is the goal of such a “total design” system if it cannot be complex systems management or AI aesthetic appreciation? An alternate goal for such a system was articulated by Doug Engelbart at SRI in the 60s: that of human augmentation. Computational systems should seek to augment human experiences and abilities: learning, recall, access, communication.

What would an infoviz system look like that was aimed at the Augment goals? Rather than offering an illusory sense of control or the pathetic fallacy of some machinic aesthetic understanding, such a system would aim to enhance what human beings gain from sport: a sense of the beauty of perfected human movement, the thrill of competition, especially when rooted in the emotion-sharing and amplification herd-behaviour of crowds, etc. I don’t know how you’d go about using data to augment these human experiences, but I know that you’d be much better off with David Foster Wallace’s NY Times essay on Roger Federer as Religious Experience as a starting point than Cybernetics or AI.

Designing for and Against the Manufactured Normalcy Field
http://urbanhonking.com/ideasfordozens/2012/06/24/designing-for-and-against-the-manufactured-normalcy-field/
Sun, 24 Jun 2012 22:30:29 +0000

This post tells the story of the session at FOO camp this year that I co-ran with Matt Webb on the Manufactured Normalcy Field. It explains the background of the idea, describes the structure of the brainstorming session, outlines its results, and then tracks some of the uptake of the idea since FOO, specifically in a recent episode of A Show with Ze Frank.

A few months back, Nick Pinkston turned me on to Ribbonfarm, the blog of Venkatesh Rao, a researcher and entrepreneur. Ever since, it’s been a reliable source of mind-grenades for me: explosive ideas that carve up reality in ways I’d never imagined and stimulate new thinking. Ideas you don’t just think about, but think with.

The most productive of these ideas for me so far has been the Manufactured Normalcy Field. The Field is Rao’s attempt to explain the process of technology adoption. Rao argues that when presented with new technological experiences, people work hard to maintain a “familiar sense of a static, continuous present”. In fact, he claims that we change our mental models and behaviors the minimum amount necessary to work productively with the results of any change.

In cultural practice this process of minimal change takes two primary forms. First, we create stories and metaphors that map strange new experiences back to something we already understand. Rao gives a number of examples of this: the smartphone uses a phone metaphor to make mobile computing comprehensible; the web uses a document metaphor, which has persisted in our user interfaces even as the underlying technology has changed; and “we understand Facebook in terms of school yearbooks”.

Secondly, we make intentional design choices aimed at de-emphasizing the strangeness of new technologies. Here, Rao explains via the example of air travel (a field in which he was educated as an engineer):

"A great deal of effort goes into making sure passengers never realize just how unnatural their state of motion is, on a commercial airplane. Climb rates, bank angles and acceleration profiles are maintained within strict limits. Airline passengers don’t fly. They travel in a manufactured normalcy field.

When you are sitting on a typical modern jetliner, you are traveling at 500 mph in an aluminum tube that is actually capable of some pretty scary acrobatics. Including generating brief periods of zero-g. Yet a typical air traveler never experiences anything that one of our ancestors could not experience on a fast chariot or a boat."

Given this framework, much of the way we currently market new technology is misguided. Geeks, especially, are prone to praise an innovation as disruptively, radically new. But if we believe Rao, that’s the worst way we could advocate on its behalf. What we should do instead is try to normalize the new technology by figuring out the smallest stretch needed to get the Manufactured Normalcy Field to encompass it.

In fact, taking this into account, Rao describes a new role for user experience design:

“Successful products are precisely those that do not attempt to move user experiences significantly, even if the underlying technology has shifted radically. In fact the whole point of user experience design is to manufacture the necessary normalcy for a product to succeed and get integrated into the Field. In this sense user experience design is reductive with respect to technological potential.”

The Manufactured Normalcy Field and Design (at FOO)

Rao’s essay proceeds to examine the threats he currently sees to the MNF and the anxiety that produces in us. It’s a fascinating (and important) line of thought and I recommend you read the full article.

For my part, though, Rao’s account of the MNF got me thinking about how it might be useful to me as a designer. It occurred to me that, when making, marketing, or designing products, there are two different relationships to the Field you might want to forge.

First, as already hinted at, you might have a new technology whose adoption you want to encourage. In this case, you would design the product to disturb the existing state of the Field as little as possible. You’d search for existing well-understood products and experiences to analogize it to. You’d try to make it familiar. Think of Apple’s advertising for the iPad, which depicts the device as a totally natural and harmless part of normal domestic life, basically a “glass magazine”.

Second, you might have the opposite situation: a product that’s become boring to the point of invisibility. Air travel. Routers. Refrigerators. If you wanted to make these seem more exciting or innovative, you’d want to “denormalize” or defamiliarize them: push them to the edge of the Manufactured Normalcy Field so that we notice them again and they feel new. For example, imagine an airplane with as much visibility for the passengers as was feasible: huge windows that really let you feel and see the speed and angle of the plane’s flight.

So, I came to FOO with this broad structure in mind for a brainstorming session based on Rao’s Manufactured Normalcy Field. I was feeling nervous about the idea because it was new and I’d barely talked to other people about it, let alone led a brainstorming session on it with people of the incredible caliber that O’Reilly gathers for FOO.

Despite my trepidation, I reserved a session time: “Designing for and Against the Manufactured Normalcy Field”. And to shore up my confidence, I recruited Matt Webb, CEO of the excellent BERG London, to co-lead the session with me. Webb is an experienced invention-workshop leader and I thought this idea would be right up his alley. He was generous enough to agree immediately, with just a short, semi-fevered pitch from me to go on.

In the run-up to the session, I explained a little more of what I was thinking to Matt (basically a short verbal version of the above). He then boiled that down into a structure for a brainstorming session. After a short introduction from me, Matt divided the white board into three sections, labeled respectively “Things That Feel Weird” (i.e. things that need to be pushed further inside the Field), “Things That Feel Normal” (boring things that need de-normalization), and “Things That We Use To Feel About Things” (strategies for normalizing and de-normalizing).

The results of the session

Much to my surprise and delight, what ensued was a fantastic brainstorming session. Part of that was the incredible creativity of the FOO audience. You couldn’t hope for a better group for this kind of exercise than one that contains the likes of Ze Frank, Tom Coates, Tim O’Reilly, etc. etc. And another part of that was Matt’s expert execution of our structure.

Here’s a photo of the white board with the results:

Manufactured Normalcy Field board

The first category was Things That Feel Weird. Unsurprisingly, given the audience, these tended towards cutting-edge technologies:

  • chips that can see smiles
  • Mechanical Turk
  • self-driving cars
  • smart prosthetics
  • Google Glass
  • smart drugs
  • brain reading

The Things That Feel Normal were, interestingly, more diverse, stretching from long-mundane parts of domestic life to bits of technology only recently incorporated into the Field:

  • keeping pets
  • earth
  • refrigerators
  • crowd-sourcing
  • screens
  • phones
  • centralized banking
  • producing things in China
  • self GPS-tracking
  • yeast

The last category, Things That We Use To Feel About Things, may have been the most fascinating and useful. It ended up eliciting existing cultural techniques that we use to normalize weird things or to allow us to defamiliarize the mundane.

  • personification / anthropomorphism
  • repetition / routine
  • empathy
  • desktop metaphor
  • skeuomorphs
  • gamification
  • domestication
  • medicine / pathologizing (treating something as an illness)
  • sport / play
  • treating as a moral failing

It’s an amazing list, both conceptually and practically. I don’t think I would have seen anything in common between these practices before seeing them emerge in this context. Also, they’re all things I can now actively imagine using in a design process.

After we’d filled in these three areas, Matt suggested a final step of the process that would lead towards actionable design concepts. He asked people to call out Things That Need Weirding and Things That Need Normaling and, for each thing, he asked the rest of the group to think of ways to make that thing either weirder or more normal, as appropriate.

Here were the candidates (time was getting short at this point so we only got to do a few):

Things That Need Weirding

  • advertising
  • money
  • driving

Things That Need Normaling

  • refrigerators
  • flying

(there were others of these called out, but I didn’t capture them)

And here were the concepts that emerged by trying to weird the normal things and normal the weird ones:

  • Everyone starts the plane together (passengers have placebo controls)
  • Pathologize driving (communicable?)
  • Fridge as Narnia
  • CCTV in toilets
  • AR that lets you see CCTV fields-of-view
  • Advertising in cemeteries
  • Advertising made just for you
  • Grinning Currency

This is a partial list I’m reconstructing from the white board photo and my own memories. It doesn’t do a great job capturing the thrill and playfulness of the ideas and the energy and excitement of the participants.

It was an incredibly fun session. I was surprised and very pleased by how well it came out. I can imagine running a similar brainstorming session with other groups in more targeted environments with productive results.

Ze Frank and Object-Oriented Ontology

Last week, after FOO camp, Ze Frank (who was in the audience at the session and was a major contributor to the brainstorming) made an episode of his show, breaking normal, in which he talked about the session. Ze focused on the making-normal-things-weird side of the spectrum. He gave the example of re-imagining how he pictures himself standing on the earth:

Breaking Normal by Ze Frank

Instead of always imagining himself standing on the top of the earth, he started imagining himself standing on the side of it looking down:

I started imagining that I was facing down when I was standing and looking forward when I was lying down and suddenly I got dizzy. So I lied down, but now lying down had the same feeling as this (dangling feet off the edge of a building), like my back was stuck to a ball and below me was just space.

At the end of the episode, Ze asked his audience to play along, inviting them to describe a normal thing in a way that reveals its inherent weirdness. Ze’s viewers did an amazing job of it. Here are some of my favorites from the comments on that video:

Fishspawned described a thermos:

a thermos is a container that contains a container inside of it surrounded by nothing because if you put stuff into something and surround it with nothing it will keep on being what it is and can’t change into something else. so a thermos acts as a sort of mobile suspended animation device

Ark86 on computers:

In reality, I’m staring at a flat panel made from superheated sand that is connected via strips of ores and really heavily processed dinosaur remains to a thing that we all pretend to understand called “the internet”. Also, I’m sitting on a cow skin painted black and stapled onto some more processed dinosaur remains. I think it’s weird how much ancient animal matter is still being used to make everything we do possible. Thanks, Stegosaurus!

Grendelkhan on work and money:

Five out of seven days, a significant proportion of people go to a small, confined space and sit still for roughly eight hours, staring at a screen and typing. They do not physically move or construct anything.

Later on, they go to other buildings, and take food and other necessities. These two activities are related in an entirely conceptual way–no physical tokens are moved, and the providers of physical goods don’t know anything about the small, confined space.

NephilimMuse on clapping:

Applauding a performance is weird. More specifically, clapping is weird. We just smack our hands together to make a noise that expresses some sort of satisfaction or adoration. It makes the receiving person(s) feel validated. I don’t get it. the motion of clapping is weird. Smack smack smack.

In reading these descriptions, it struck me that they are very resonant with Object-Oriented Ontology (which I’ve written about before here and here). Breaking the abstraction of some behavior or acculturated object (or “opening the black box” as Graham Harman describes it in Prince of Networks) lets us see all the objects and materials that actually constitute these concepts and abstractions. This weirding process puts the material of superheated sand, the air inside a thermos, and cow skin painted black on the same footing as computers, thermoses, and jobs – culturally important categories we routinely consider.

In Object-Oriented Ontology terms, this weirding process is pushing us towards a “flat ontology” where everything exists equally. It’s great that Ze and his viewers have found this game that vividly flattens their personal ontologies and that the result is wonder.

Object-Oriented Sci-Fi: Harman’s Four Methods
http://urbanhonking.com/ideasfordozens/2012/04/17/object-oriented-sci-fi-harmans-four-methods/
Tue, 17 Apr 2012 05:13:25 +0000

The following is an excerpt from a talk by Graham Harman at the “Hello Everything” symposium. In it, Harman describes four methods for reversing common errors in failing to see objects. These methods are: counter-factuals, the hyperbolic method, simulation, and falsification. Each of them is an imaginative strategy for revealing the withdrawn core of objects, the aspect of them that makes them real for Harman’s Object-Oriented Ontology.

As philosophical techniques these four methods are quite striking. Together they constitute a kind of science fictional approach to philosophical thinking; each advocates imagining the world as different from reality in order to explore the limit and meaning of that reality.

I reproduce these methods here because I think they are promising ingredients in a recipe for something like an Object-Oriented Aesthetics or artistic methodology. Like much good SF I find them to be rich compost for my own imaginings, in this case of a set of procedures for generating multimedia art that inhabits an Object-Oriented perspective.

Here’s Harman:

"How do we reverse the error of seeing objects as events? We do that through counter-factuals. This is already a known method. You can imagine objects in different situations and imagine what the effects would be.[…]

"Imagining Lincoln in ancient Rome. How might he have played out there? Imagine a Middle East with an Iranian atomic bomb or imagine an invaded Iraq instead. What are the possible things that would have happened in either of those cases? These help us allude to the thing as a style. Lincoln isn’t something that was confined to that historical period and that country but is something over and above that, something that could be translated.

"There are computers that do this. They take On Top of Old Smokey and turn it into a Bach fugue.

"Counter-factuals would be the first method for getting at the reality of things. The second would be what I call hyperbolic analysis, which I’ve used in three publications. This is reversing the error of impact, the tendency to see things in terms of the effects they have. Instead of critique, also. I did this in the article on DeLanda; I did this in the book on Latour; and I did this in the book on Meillassoux that hasn’t been published yet.

"In order to look at the impact of these philosophers what I did is not critique mistakes that they’ve made, but imagine that they have total success. Imagine that they become the dominant philosopher on the planet 20, 30 years from now. And then you imagine what would still be missing. What would still be missing if Meillassoux was the dominant world philosopher in 2050. Don’t fuss around with detailed mistakes that he makes but grant him everything and then see what’s still missing.

"If a philosophy cannot survive the hyperbolic test then it’s less of a real philosophy, I would say. If you take some perfectly respectable minor article about some detailed point and then try to imagine that this is the most important philosophical text of the 21st century, it can’t survive that test, obviously. It needs to be a work of a certain level, a certain comprehensiveness, and that’s a more real philosophy. The more it can pass that sort of imaginative test, the more real it is.

"The other two are a little harder. What we’re trying to do is talk about the mutual independence of a thing and its pieces where the thing is not reducible to its pieces and the pieces are not reducible to the thing. And we actually do this all the time: we call this simulation – where you’re removing a thing from its pieces and simply trying to treat it as a formal model. You’re testing the behavior of a tornado or the 1976 Cincinnati Reds – drawing on my sports writing career – without having to reassemble all the physical pieces that made them those things, of course. You’re simply testing them to see what will happen.

"And what I’ve realized while thinking about this is that, paradoxically, a thing is more real the more it can be simulated, the more it can be parodied. You can parody good poets better than bad ones, can’t you? If imitation is the sincerest form of flattery then simulation and parody are an even more sincere form. The less real something is, the harder it is to simulate. It’s harder to simulate a bad writer or a bad philosopher than a good one.

"In other words the style of a thing is not just an aggregate of all of the deeds it has done. The style of a thing is something over and above those that can be simulated. And so here I would say, against some Luddite principles, if there were truly a computer that was able to write new Shakespeare plays I think that would be outstanding. I think this would be a tribute to Shakespeare, not some kind of cheapening of his greatness. It would show that the style there is perhaps something more real than the mass of works that one person wrote.

"And that leaves one last feature of pseudo-objects which is reducing them to sets, reducing them to pointing at an extensive number of things and saying that’s just a set it’s not a real thing with a unifying principle. We already saw that Rilke or earthquakes are substantial forms independent of their material components that can be removed and put on a computer and generate effects. What about the reverse? Is there a reverse situation where we can show those material components are real beneath all simulation?

"Actually yes. The answer to this is accidents: when things happen that weren’t expected. In what sense are accidents a method? Well, all the time. This is what falsification is about in science. You’re finding accidental things that happen to a theory that weren’t expected, things that point to the independence of the material components from the model that you had of them. So that would be the fourth method to use.

“So now there are four methods to use: counter-factuals, the hyperbolic method, simulation, and falsification. And you could say that the humanities tend to benefit more from the first two and the sciences from the latter two, but that’s not necessarily the case. There are significant exceptions. And what this suggests to me is that if this way of setting out the different methods is valid, the division between the human and natural sciences is actually an imperfect approximation to the real fissure running through human knowledge, which has to do with the kind of knowledge that shows the independence of a thing from its pieces and the kind that shows its difference from its outer effects, which are not strictly identifiable with either the sciences or the humanities.”

A Menagerie of CV Markers
http://urbanhonking.com/ideasfordozens/2012/04/08/a-menagerie-of-cv-markers/
Sun, 08 Apr 2012 22:30:38 +0000

This week, the net’s been exploding with responses to James Bridle’s work on the New Aesthetic. Bruce Sterling set the fuse for this particular conflagration with his Essay on the New Aesthetic in Wired.

My own response, published in The Creators Project on Friday, was called What It’s Like to be a 21st Century Thing. I tried to put NA in the context of Object-Oriented Ontology, arguing that NA “consists of visual artifacts we make to help us imagine the inner lives of our digital objects and also of the visual representations produced by our digital objects as a kind of pidgin language between their inaccessible inner lives and ours.” This is an approach I’m excited about and plan to flesh out more here soon.

Today, though, I want to engage in a bit of OOO ontography and close-looking as a way of responding to what I thought was one of the more interesting takes on Sterling’s essay.

In his post Why the New Aesthetic isn’t about 8bit retro, the Robot Readable World, computer vision and pirates, Rev Dan Catt tries to address the 8-bit quality of much New Aesthetic visual work. Specifically, he’s trying to answer a criticism of NA as retro, a throwback to “the colors and 8 bit graphics of the 80s” as Tom Coates put it.

For Catt that resemblance comes from the primitive state of computer vision today. “Computer vision isn’t very advanced, to exist with machines in the real world we need to mark up the world to help them see”, he says. In other words, the current limitations of computer vision algorithms require intentionally designed bold blocky 8-bit graphics for them to function. And therefore the markers we design to meet this requirement end up looking like primitive computer graphics, which resulted from similar technical limitations in the systems that produced them. As Catt says, “put another way, current computer vision can probably ‘see’ computer graphics from around 20–30 years ago.”

In a conversation about this idea, Kyle McDonald argued that Catt’s taking the comparison too far. While there is a functional comparison between the current state of computer vision and the state of computer graphics in the 80s, the actual markers we’re using in CV work today don’t much resemble 8-bit graphics aesthetically.

To explore this idea, Kyle and I decided to put together a collection of as many different kinds of markers as we could think of, along with links to the algorithms and processes that create and track them (though I’m sure there are many we’ve missed – more contributions are welcome in the comments). It was our hope that such a collection might widen the New Aesthetic visual vocabulary by adding additional ingredients, as well as focusing some attention on the actual computational techniques used to create and track these images. Since so many of us were raised looking at 8-bit video games and graphics, I think it helps to look at the actual markers themselves, in their surprising variety, rather than just filing them away with Pitfall Harry’s rope, Mario’s mushroom, and Donkey Kong’s barrel, which we already know so well.

So, what do real CV markers actually look like? Browse the images and links below to see for yourself, but I’ll make a few quick general characterizations. There is a lot of high-contrast black and white, as well as stark geometry that emphasizes edges. However, the grid that characterizes 8-bit images and games is almost never kept fully intact. Most of the marker designs specifically try to defeat repetition in favor of identifying a few specific features. Curves and circles are nearly as common as squares and grids.

I’d love to collect more technical links about the tracking techniques associated with each of these kinds of markers. So jump in with the comments if you’ve got suggestions.

OpenCV calibration checker pattern for homography

opencv checkerboard
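
These checkerboard patterns work because they hand the calibration pipeline a set of corner correspondences that are easy to detect reliably; from four or more such correspondences, a homography (the 3x3 projective map between the flat pattern and its image) can be estimated. As a rough illustration of the underlying math, here is a minimal NumPy sketch of the Direct Linear Transform (DLT) for homography estimation. The corner coordinates below are made up for the example; real calibration code would use OpenCV’s own `cv2.findChessboardCorners` and `cv2.findHomography` rather than this hand-rolled version.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points
    via the Direct Linear Transform, from >= 4 correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on the
        # 9 entries of H (flattened to a vector h, with A h = 0).
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # h is the null-space vector of A: the right singular vector
    # belonging to the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so the bottom-right entry is 1

# Hypothetical example: the four outer corners of a unit checkerboard
# square and the pixel positions where a camera might have seen them.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(10, 20), (110, 25), (115, 130), (12, 125)]
H = estimate_homography(src, dst)
```

Mapping any pattern point through H as a homogeneous vector (then dividing by the last coordinate) recovers its pixel position, which is exactly the relationship marker-based tracking exploits to anchor graphics to the pattern.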

Reactivision

reactivision

(Original paper: Improved Topological Fiducial Tracking in the reacTIVision System)

Graphtracker

Graphtracker

(Original paper: Graphtracker: A topology projection invariant optical tracker)

Rune Tags

Rune Tags

(Original paper: RUNE-Tag: a High Accuracy Fiducial Marker with Strong Occlusion Resilience)

Corner detection for calibration

corner detection

Dot tracking markers

dot trackers

Traditional bar codes

bar codes

Stacked bar code

stacked bar code

Data Matrix 2D

Data Matrix 2D

Text EZCode

Text EZCode

Data Glyphs

Data Glyphs

QR codes

qr code

Custom QR codes

custom qr codes
custom qr rabbit

Microsoft tags aka High Capacity Color Barcodes

microsoft tags

Maxi Codes

Maxi Codes

Short Codes

Short codes

Different flavors of Fiducial Markers

fiducial markers
fiducial marker
Ftag
fiducial

9-Point Landmark

9-point landmark

Cantags

Cantag

AR tracking marker for After Effects

AR tracking marker

ARTag markers

ar toolkit tracking markers

Retro-reflective motion capture markers

motion capture markers

Hybrid marker approaches

hybrid
26 Books in 2011
http://urbanhonking.com/ideasfordozens/2011/12/30/26-books-in-2011/
Fri, 30 Dec 2011 20:34:30 +0000

Last year, I read 43 books, a relatively high annual total for me. This was largely due to spending so much time that year working on a stop-motion animated music video, which led to a huge amount of audiobook listening. This year, I read much less. The two main factors in this falloff were my busy last semester at ITP and the fact that I spent much of the second half of the year writing a book. The total for this year came out to 26 books, plus an additional eight comics, an area I’ve started dabbling in due to the influence of Matt Jones and Jack Schulze of BERG London, whom I had the pleasure to meet this year.

Looking at the list, the topics of this year’s books much resemble last year’s, with sci-fi and special-effects behind-the-scenes books making up the lion’s share. Of these, I wanted to specially point out The Gone-Away World by Nick Harkaway, which I just finished recently. It’s a great weird mix of post-apocalyptic sci-fi, coming-of-age college novel, and Tarantino-esque madcap kung-fu, but somehow darker and more moving than that description makes it sound. There are also a few tech/business history books: the Steve Jobs bio, Steven Levy on Google, The Toyota Way, and The Gun by C.J. Chivers, which is an excellent history of the AK-47 and one of the best books on design I’ve ever read.

Here are the comics I read this year (I would link to these too, but, weirdly enough, I have no idea of the best place to acquire them online, having, amazingly, actually bought nearly all of them from in-person “stores” such as Forbidden Planet and St. Marks Comics):

  • SVK by Warren Ellis
  • Invincible Iron Man: The Five Nightmares by Matt Fraction
  • Invincible Iron Man: Extremis by Warren Ellis
  • Transmetropolitan Vol 1 by Warren Ellis
  • Planetary Vol 1 by Warren Ellis
  • The Punisher: Born by Garth Ennis
  • The Punisher MAX, Vol 1 by Garth Ennis
  • Usagi Yojimbo Book 2: Samurai by Stan Sakai
A Personal Fabrication Nightmare
http://urbanhonking.com/ideasfordozens/2011/11/21/a-personal-fabrication-nightmare/
Mon, 21 Nov 2011 19:38:26 +0000

I just received the following story from my friend Devin Chalmers. I asked for his permission to publish it because I think it is telling and disturbingly likely to come true.

I had a personal fabrication nightmare last night. I’d just gotten off a roller coaster, and at the photo booth where you can get commemorative prints of your shit-your-pants face they had just gotten a whole 3D printing/lasercutter workflow set up. I was overwhelmed by the choices of materials and patterns: the sample book was like 40 pages long. They could do steins, shot glasses, brass plaques, 3D and 2.5D scene reconstructions, six different sorts of wood, marquetry, choices of how to define figure and ground—it was all very confusing. I came back after an hour to let the crowd die down and I still couldn’t decide what the best way to physicalize my roller coaster adventure would be. I awoke still anxious.

Announcing ofxaddons.com, a directory of OpenFrameworks extensions
http://urbanhonking.com/ideasfordozens/2011/11/14/announcing-ofxaddons-com-a-directory-of-openframeworks-extensions/
Mon, 14 Nov 2011 21:52:09 +0000

At Art && Code 3D a few weeks back I met James George. We immediately found we had a lot in common, kicking off a wide-ranging conversation about everything from miniature worlds to Portland food carts to ways of making the OpenFrameworks community more accessible. On this last topic, we even conceived a project: a website that searches GitHub for OpenFrameworks addons written by the community and indexes them for easier discovery. Today, I’m proud to announce the launch of exactly that site: ofxaddons.com.

The site features nearly 300 addons that we’ve divided into 13 categories: Animation, Bridges, Computer Vision, Graphics, GUI, Hardware Interface, iOS, Physics, Sound, Typography, Utilities, Video/Camera, and Web/Networking. We’ve also put together a how-to guide on creating your own addons. That guide includes standards for how to structure an addon so it is easy to install and will work smoothly for all users of OpenFrameworks. It’s based on the emerging standards coming out of the community of addon authors.

While categorizing them, James and I came across a bunch of really remarkable addons. In the rest of this post, I want to highlight a few of the addons that most struck us.

ofxGrabCam

ofxGrabCam by Elliot Woods provides an intuitive interactive camera for 3D apps. It was inspired by the camera in Google SketchUp: when you click, it reads the z-buffer to pick the point under your mouse and uses that as the center of your translations and rotations. Here’s a video Elliot made showing it in action:

And here’s Elliot’s full write-up. Word on the street is that this might make it into the OF core in a future version, so check it out now.
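The trick of using the scene depth under the cursor to choose a 3D pivot can be illustrated with a simple pinhole back-projection. This is a hedged sketch of the idea, not ofxGrabCam's actual code; the real addon reads the OpenGL depth buffer and inverts the full view-projection matrix, and all the numbers below are illustrative defaults.

```python
def unproject(u, v, depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Back-project a screen pixel (u, v) at a given depth into camera space.

    fx/fy are focal lengths in pixels and (cx, cy) is the principal point.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# The point under the mouse becomes the pivot for rotation and translation:
pivot = unproject(320, 240, 2.0)  # center of the screen, 2 units away
```

Because the pivot sits on the surface you clicked, rotating the camera around it feels like grabbing the model itself, which is what makes the SketchUp-style camera so intuitive.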

ofxGifEncoder and ofxGifDecoder

Both by Jesus Gollonet, this pair of libraries lets you create and parse animated GIFs. ofxGifEncoder does the creating and ofxGifDecoder does the parsing. You can create GIFs programmatically to look however you want. The animated GIF above shows an awesome glitch I achieved recently while screwing up some pixel math on one of the sample OF videos.
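To get a feel for what the decoding side involves, here is a minimal standard-library sketch that parses just a GIF's signature and logical screen size. It is an illustration of the file format, not ofxGifDecoder's code; the addon handles full frame data, palettes, and timing.

```python
import struct

def parse_gif_header(data):
    """Parse a GIF's signature and logical screen size from raw bytes."""
    signature = data[:6]
    if signature not in (b"GIF87a", b"GIF89a"):
        raise ValueError("not a GIF file")
    # Logical screen width and height: two little-endian uint16s after the signature.
    width, height = struct.unpack("<HH", data[6:10])
    return signature.decode("ascii"), width, height

# A fabricated 10-byte header for a 320x240 GIF89a image:
header = b"GIF89a" + struct.pack("<HH", 320, 240)
```

Everything after those ten bytes is palettes, extension blocks, and LZW-compressed frames, which is exactly the bookkeeping these addons save you from.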

FUGIFs is an app that uses ofxGifEncoder to automatically turn video files into animated GIFs. It sounds like it was made by a frustrated designer of animated Flash banners. Useful.

ofxGts

ofxGts is an addon from Karl D.D. Willis that wraps the GNU Triangulated Surface Library (GTS), a useful set of tools for working with 3D surfaces. GTS can add vertices to meshes to make them smoother (as shown in the horse model illustrated above), simplify models, decompose models into triangle strips, and more.

Karl’s version of the addon seems to have some compatibility issues with OF 007, so James put together a fork that fixes those: obviousjim/ofxGts. Merge that pull request, Karl!

ofxKyonyu: Kinect Breast Enlarger

This addon by novogrammer was too absurd not to share. It seems to use the Kinect to enlarge the breasts of the people it detects (the site, and most of the documentation and code comments, are in Japanese). I’m sure this will get reused in tons of projects.

ofxSoftKeyboard

Here’s a great addon that could see a lot of use in accessibility and kiosk work: ofxSoftKeyboard by Lensley. It provides an onscreen software keyboard that generates key events when the user clicks (or taps, etc.) on a key. It works well, and they’ve already accepted James’ pull request updating it to full OF 007 compatibility!

ofxUeye

Last, but not least, we’ve got this addon, which provides an interface to the GigE uEye SE, a small form-factor Gigabit Ethernet camera that looks really useful. It’s Windows-only at the moment, so we haven’t been able to actually run it, but it seems quite well put together.

That’s just a sampling of all the great addons that are available. If you browse around the site for just a few minutes, I’ll bet you’ll be amazed at what you find. In fact, I bet that, like me, you’ll immediately think of three project ideas just from seeing what kinds of cool things are possible.

]]>
http://urbanhonking.com/ideasfordozens/2011/11/14/announcing-ofxaddons-com-a-directory-of-openframeworks-extensions/feed/ 0
Techniques of the Observer http://urbanhonking.com/ideasfordozens/2011/09/30/techniques-of-the-observer/ http://urbanhonking.com/ideasfordozens/2011/09/30/techniques-of-the-observer/#respond Fri, 30 Sep 2011 05:08:40 +0000 http://urbanhonking.com/ideasfordozens/?p=500 Continue reading ]]> Last night at ITP’s Theory Club (a group that meets bi-weekly to discourse on abstract topics of interest), I gave a presentation on Jonathan Crary’s Techniques of the Observer. I called the talk Techniques of the Observer: Vision and Technology from the Camera Obscura to OpenGL. It was based on one portion of the proposal for a Platform Studies book on OpenGL I wrote over the summer. In Techniques of the Observer, Crary proposes a technique for characterizing a historical period’s ideas about vision by looking at its optical technologies and the metaphors they embody. The Camera Obscura tells you a lot about the Renaissance’s objective and universal geometric world view. Stereographs, phenakistoscopes, and film, all from the Modern era, couldn’t be more different from the Camera Obscura: they build the image inside the user’s mind using tricks of perception, hacks of the user’s sensorium. These resonate with a Modern world view of a series of independent subjectivities bound together into a consensual democracy.

In that earlier blog post and in this talk I set out to extend this way of thinking to cover contemporary computer-generated imagery. For the last 20 or so years, our most contemporary images have been the product of computer simulations designed to emulate an objective Renaissance perspective, but convert it into something fungible enough to become interactive and, when we want it, fantastical. And now, right now, we’re beginning to connect a new set of powerful artificial eyes to this simulation. We’re introducing something like the Reality Effect, but for an inhuman mind’s eye. I think this combination explains some of the new Sensor Vernacular aesthetic that many of us have been struggling to put our fingers on. It is composed of the first works of a new regime of vision struggling to be born.

I’ve uploaded my slides to Speaker Deck, a great new service that actually makes the process of uploading and viewing slide decks online simple and pleasurable. Here they are:

]]>
http://urbanhonking.com/ideasfordozens/2011/09/30/techniques-of-the-observer/feed/ 0