Clearleft presents

dConstruct 2009

Designing for Tomorrow

04 September 2009 · Brighton Dome UK

Let's See What We Can See (Everybody Online And Looking Good)

Mike Migurski and Ben Cerveny


Podcast transcripts brought to you by the Opera Developer Network


I’m Mike, this is Ben, and thanks for the introduction and hello Brighton! I’m here from Stamen Design in San Francisco, California and as you’ve heard, we’re kind of a design technology mapping data visualisation company, and I’m here to talk about a few of the projects that we’ve done, kind of in the deep past and then also more recent stuff and to talk a little bit about kind of trends as we’ve seen, sort of, things informationalising as Ben likes to say, this idea that information that might historically have been offline is moving online, you know, a lot of echoes into what Adam was saying earlier this morning. So that’s basically the entire company right there, there’s just a few of us, we’re all kind of in a tiny office in San Francisco and we’re growing, I think, in the kind of work that we’re doing while at the same time, trying to maintain sort of a small studio, small office environment. I’m going to start off talking about two fairly old projects of ours from 2004 and 2006 respectively, to kind of give a little bit of a background context to the newer stuff and why we think it’s interesting. This is a piece that we did back in the 2004 Presidential Election for a political group in the US called MoveOn.org. If you’re not familiar with them, they’re kind of a left-wing online advocacy group that really made their name, kind of in the run-up to the 2004 Election, and they did a lot of, you know, heavily online, heavily networked, heavily kind of email and web based advocacy during this kind of really contentious Bush/Kerry election around that time. And one of the hallmarks of their organising activities was these kind of self organised house parties that they would encourage their membership to do. So in particular, they used to do these kind of conference calls that would involve, you know, tens of thousands of people around the country and they weren’t really happy with how the conference calls were going.
So they contacted us to build an application that sort of wrapped the basic conference call functionality, the kind of real audio player plug-in that you see up in the upper right hand corner of the screen there, in this kind of map based environment where, as an audience member or as a participant, you would enter in information about where you are—in this case, your post-code—and a couple of details about, you know, how many people you had with you and you immediately got this kind of visceral feedback, visual sense that showed you exactly, you know, where you were, where other people in the nation were, and a little bit of kind of basic interactivity kind of layered on top of this radio call-in show type format. So you know, as you zoom out of this map, as you progress through the piece, as you move through this two hour presentation format that would typically have guests like, you know, Michael Moore, Howard Dean or other kind of Democratic party luminaries, you would over time kind of get this sense of there being other people around the country.

So MoveOn was really happy with this project and they were really happy with this idea that rather than having the kind of dead zone of the conference call or the call-in where, you know, you would basically have a bunch of people over to your house for chip and dip and you would call into this thing, and you would hear but you wouldn’t be able to talk back. And I think more crucially, you wouldn’t be able to hear that there were other people out there in the country kind of, you know, agreeing with you or sympathising with you at the time. This application had this kind of nice outcome for a lot of people, where if you were from, you know, kind of a leftie coastal town, you got this sense that there was like a critical mass of people there with you, and if you were from the opposite of the leftie coastal town, you at least felt a little bit less alone when you were driving on the freeway with the, you know, George W bumper stickers around you! So it was kind of.. it had this, like, dual function of helping people feel less alone around the country and interact with this application. So what you can see here is a couple of layers of information that would be presented to you as you were listening to this thing, you could see these kind of ghosted dots on the East Coast which showed the previous incarnation of this piece, so they might do, you know, kind of an early evening and a late evening presentation in order to deal with the two time zone differences on the East Coast and the West Coast. You got the kind of coloured dots and the little pie charts in there that show what happens when the application presents you with poll questions about, you know, your intent, your ability to help, your opinion on certain issues. 
In later presentations, this application morphed into kind of a straw polling application for MoveOn to be able to get some kind of understanding of what their membership felt about things, as opposed to just, you know, getting whipped up in this sort of frenzy. And you could also zoom in a little bit deeper and get some sense of, you know, a direct kind of surround around you. This happens to be the San Francisco bay area. And this also pre-dates Google Maps, so I think it was later in 2004 that Google Maps was released, so we were still dealing with these kind of, you know, proto-map applications, the entire thing would run in the client, the entire thing would run in the browser and really it was about sort of minimising the amount of visual work in order to push forward the amount of social work and the amount of social interactivity that was going on around in this project.

One of the rules of thumb that we took away from this was that it was really important to kind of show the next most obvious thing here. So in this case, you know, it’s basically just coloured dots, it’s size of dot to indicate how many people are in these places. Really kind of simple visual presentation that isn’t strictly speaking new to this project, but because it was the most basic thing, a lot of the feedback that MoveOn got around this stuff was extremely positive, people felt like it was just that little bit of tech above the conference call application that helped them understand, like, OK: people, interactivity, kind of movement online, other people that feel the same way that I do about these things. And you know, like I said, this is not new stuff, these are scans from an Alberta, Canada atlas that we have kicking around the office that really shows a lot of that same sort of visual decision from 1969 in particular, but when you bring it into this network environment and when you start to connect it to people live rather than, you know, the collating and publishing process, you can take this theme that’s sort of a familiar visual presentation and turn it into something that’s, like, reactive and kind of personal for a lot of people.
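That size-of-dot encoding is usually done by scaling dot area rather than radius, so a place with twice the people reads as twice the ink. A minimal sketch of the idea (the function name and base radius are illustrative assumptions, not the MoveOn application's actual code):

```python
import math

def dot_radius(count, base_radius=3.0):
    """Return a dot radius such that dot *area* is proportional to
    the number of people it represents (radius grows with sqrt)."""
    return base_radius * math.sqrt(count)

# A house party of four people gets a dot with twice the radius,
# and therefore four times the area, of a single attendee's dot.
```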

A second sort of historical project of ours is something that we did in 2006, 2007, 2008. It’s called Digg Labs, it’s available at labs.digg.com and I’m not sure how many people are familiar with Digg, so I’ll explain a little bit: it’s a kind of online voting website where people who are members of the site can submit stories for inclusion on the front page, and other people who are members of the site can vote on those stories. Really simple, thumbs up, thumbs down, good story, bad story and the things that are popular or, you know, well received, make it onto the front page, receive torrents of traffic, act as a sort of focal point of online activity around, you know, the news or product announcements or celebrity death of the day. And Digg Labs in particular is sort of… the way that it was expressed was a number of these kind of interactive visual Zeitgeist pieces. This is a screen grab of one that was one of the first ones that we launched called Digg Swarm. So what you see here is kind of stories represented by these white circles, users represented by the kind of yellow dots, and then a kind of animated process that lets you move between the kind of overall Zeitgeist and zooming into an individual story. So here you’ve got a screenshot that was taken from the recent Presidential Election in 2008. Here you’ve got a screenshot that was taken from 2006 when Saddam Hussein was executed, and you can immediately see that you’ve got this kind of focal swarming activity around stories where, kind of, depending on what’s going on in the world, the visualisation is giving you a very different point of view or a very different feeling of what you’re looking at.
So this in particular is kind of a… almost a pathological example where you’ve got, you know, this story about Saddam Hussein that’s kind of like being inflated like a balloon by all the activity going on around it, and it’s got all these little yellow users kind of swarming around it and like, blowing it up into these outsize proportions and at the same time, those white lines connected to other sort of related things that are going on. Those are kind of a visual artifact of the idea of people moving from one story to another. You can see what that looks like in the kind of animated form. This video was recorded right around the time that the Apple iPhone was announced in early 2007, so you can see that there’s a few stories that are about Apple in some way, you’ve got other stuff going around in the periphery but the focal point of the activity that day was obviously around, you know, iPhone Apple release, Stevenote, this kind of, you know, product launch sort of stuff, and you can see that there’s this cluster of stories in the middle that’s getting like swarmed around and grabbed on by people. You know, Swarm is one of the public facing entities that we produce but there was also a lot of back-end work that went into this kind of project. So, you know, in particular the creative brief from Digg wasn’t so much about producing this or that interactive project, but it was more about taking essentially the first holistic look at their database and the first holistic look at their user base, and trying to see if there was something there that was worth kind of mining for a visual interface, visual activity. So this stuff was, you know, essentially the very first things that we produced for Digg, you know, basically day two we have access to your database, let’s see what these stories look like over time.
And you know, right away we’re starting to kind of create these very basic visual signatures for stories to get a sense of what temporal activity looks like on that site and only later, after this stuff has kind of bubbled and flowed for a while, does it get turned into like a proper public facing, screensaver, Zeitgeist application. This is another of the Digg Labs pieces, it’s called Stack. It’s roughly analogous to something like an animated Tetris bar chart, you’ve got users kind of coming in from the top, falling onto stories, making them grow, animating from the right to the left as new stuff moves in and old stuff falls off. And it also has that kind of query-ability where you can dive into any one of those stories and kind of pick it apart. And so you can see these sort of echoes here of the initial work that was being done to show Digg itself what its database looked like and then how that kind of translates into something that’s showing the rest of the world what the Digg database looks like. We saw a lot of use of this stuff that was focused around kind of screensaver activities, people that might have, you know, a second monitor at their work that they could look over and say, ‘Oh my God, you know, Britney Spears or Saddam Hussein’ or kind of whatever the exciting news of that moment was. They had this kind of peripheral sense of what was going on in the world through the lens of the Digg community.

So those are two kind of contextual projects that I think, you know, initially made our name. You know, you’ve got like data visualisation in the second project, and there’s map making in the first project. And I want to talk about a few newer projects and how, kind of, the lessons of those things moved into some of our newer work that we’re doing, in particular the lessons of looking at things that are kind of Internet native and Internet reactive where you’ve got kind of data that originates online and ends up online, moving into a world where you have data that originates offline and slowly informationalises and moves online. So one example of this is something that we did for MSNBC this past year called the Hurricane Tracker. It’s actually two applications, one of which is kind of a current hurricane viewer that makes sense during hurricane season in the late summer months and then you’ve got a second piece that’s a historical viewer of essentially all hurricanes that have ever happened in the Atlantic since they started measuring in the nineteenth century. So this is the current storm view—very similarly to the Alberta atlas, again here we’re borrowing kind of standard meteorological ways of communicating information about weather. In this particular case you’ve got, you know, a particular storm, it’s called Hurricane Bill—they’re named alphabetically—this is the second one of the 2009 season and this screenshot was taken on about August 19th I think, so just a couple of weeks ago, so it’s a fairly new thing. And you’ve got a couple of things going on in a view like this. You’ve got kind of this storm probability cone that shows you where it’s been in the past so you can click on any one of these points and see, you know, where it’s been up until now. You can see where it’s going to, so this is kind of like the current moment of the hurricane, this is its last observed position. And then you’ve also got what its five day forecast is. 
So you’ve got past/present/future kind of laid out in this single visual interface. You can zoom into this thing, it’s a fairly standard kind of slippy map UI, and when you get into the current position of the storm, you can really begin to investigate these kind of meteorological ways of communicating storm information in a more networked visual reactive way. So what you’ve got here is a current position of a hurricane right there in the centre, you’ve got wind speeds around it, so you can imagine that if you were looking at this on kind of a, you know, radar or satellite view, you would see the sort of, you know, spinning storm right there in the middle, but in this case it’s sort of organised as a kind of concentric rings of wind speeds with the historical wind speeds moving off to the south east and in the future predicted wind speeds moving up into the north west. But because we have access to all this information, we can do a lot more kind of almost widgetry around it, ways of, you know, turning off and on the different wind cones using those radio buttons in the upper right hand corner. And in particular, something that I’m really happy with is what’s going on in the left hand side of the screen which gives you this sense of past, present and future in this, like, wind speed diagram. So I’m just going to show this animated in a second here, and what you can see as this moves forward and backward is something that an ordinary visitor to the site wouldn’t normally see, but what you can see if you take screen grabs of a storm over time. So what this is showing is Hurricane Gustav from August 2008, it’s about a year old, we’re moving forward and backward in time and we’re seeing how predictions around storm movement change over time. So as it moves forward, you can see this kind of probability cone, you know, extend… point towards Louisiana and then slowly shrink as the storm loses speed once it moves over land.
As it moves back, you can see exactly how the meteorological predictions and then the later, you know, actuality of the storm kind of play out. So you can watch the graph in the lower left hand corner there basically change shape as the now point moves forward and accretes information about what the storm actually did compared to information about what they think the storm will do in the future. So, you know, as you can imagine, they’ve got this very broad possibility of where that storm might go when it’s just passing Cuba, but then as it actually hits land, the cone sort of narrows as it begins to get closer.

There’s a second half to this project which addresses the seasonality of hurricanes and that has to do with this idea of being able to look at hurricanes over the past. So you’ve got this kind of current view which I just showed that gives you a sense of, you know—is my house in danger? Where is Hurricane Gustav heading right now? Do I have to leave town?—as compared to this thing that’s more based on the idea of hurricanes over time. So this is, you know, every single storm overlapped and you can see pretty much every hurricane that’s ever been recorded and mapped in the Atlantic ocean. So you can see that they all have a very similar wind pattern, they all come from Senegal, pretty much, they move through the Caribbean, they kind of arc back across the US and then if we’re unlucky, they hit, you know, Florida, Texas, do millions of dollars’ worth of damage and then ultimately slow down and turn, you know, green—slower wind speed—over the continental US. This interface is sort of an example of how we deal with these enormous data sets where you’ve got, you know, literally thousands and thousands of hurricane measurement points over time, and then how do you kind of narrow that down? Or what kind of interface do you provide in order to make it possible to query this stuff? So we’ve been thinking about this in a lot of different ways. We’ve got this kind of way to essentially query it by wind speed, so this is just the, you know, top ten or so kind of most destructive, fastest wind speed hurricanes of the past hundred and fifty years. Katrina and Andrew, fairly well known storms, are in this list and it’s controlled by a very basic kind of slider UI on the left, you can essentially say like, ‘Give me, you know, weaker hurricanes that are just tropical depressions, or give me stronger hurricanes that are actually, you know, these like hundred, hundred and fifty mile an hour crazy storms that come around every couple of decades.’

Another thing that we’ve done there is we’ve thought about how you represent the location of these things. So obviously all the storms are kind of nested and overlaid on top of one another and we thought about, you know, how do you deal with the fact that they hit different points of land and how might you kind of express that as a, you know, web interface widget that people might be familiar with. So what we did was we took the kind of Eastern seaboard of the United States and pretended that it was a line that was stretched out, and made that a vertical slider on the right hand side of the screen, something that you could basically narrow. So what we’ve done here is we’ve narrowed it down to just parts of Texas. Here it is showing only hurricanes that hit land in Florida, obviously there’s a lot more. And then here it is showing hurricanes that have hit land in New England. So you can kind of see that Florida really bears the brunt of a lot of these storms, while places up in New England like, you know, Connecticut, Maine and so forth, aren’t really known as hurricane territory because you’ve got this very tiny minority of storms that actually, you know, arc up and hit those places. This is the slider kind of zoomed in a little bit, and you can see that this whole UI has been instrumented with all these different ways of querying and interacting and dealing with the data. Any one of those lines you can roll over and figure out which storm it is, any time period like a year you can zoom into. So if you’re somebody that remembers a storm from your childhood or sees a storm on the news, this is a way for you to understand how representative that is of other kinds of storm activity that are going on. There’s also output that’s going on here that’s, you know, a little bit less flashy than the main kind of visual browser but it also gives you this plain English sentence that tells you what it is that you’re actually seeing.
So this interface is really about kind of reiterating the basic ways of looking at these storms, and then providing feedback in terms of, you know, showing them, letting you animate, zoom in, pan around, but then also giving you this kind of really basic way to kind of put into language what it is that you’re looking at on all these different sliders. So it’s a way to sort of reiterate what you’ve done by reading back to you what those sliders’ effect is.
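Under the hood, both of those sliders reduce to simple predicate filters over the storm records. A hedged sketch of what that kind of query might look like (the record fields, units and thresholds here are my own illustrative assumptions, not MSNBC's actual schema):

```python
def filter_storms(storms, min_wind=0, lat_range=(-90.0, 90.0)):
    """Keep storms whose peak wind speed meets the slider threshold
    and whose landfall latitude falls inside the selected band."""
    lo, hi = lat_range
    return [
        s for s in storms
        if s["max_wind_mph"] >= min_wind
        and s["landfall_lat"] is not None
        and lo <= s["landfall_lat"] <= hi
    ]

storms = [
    {"name": "Katrina", "max_wind_mph": 175, "landfall_lat": 29.3},
    {"name": "Bob",     "max_wind_mph": 115, "landfall_lat": 41.4},  # New England
    {"name": "AtSea",   "max_wind_mph": 140, "landfall_lat": None},  # never hit land
]

# Strong storms that made landfall in Gulf Coast / Florida latitudes:
gulf = filter_storms(storms, min_wind=130, lat_range=(24.0, 31.0))
```

Dragging either slider just re-runs the filter and redraws the matching tracks.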

The Hurricane Tracker is kind of one of many examples of map based applications that we’ve done, and as we’ve been doing a lot of these over the years, we’ve been finding that we kind of get into these interesting almost like self assisting modes of designing these things, so as we learn something from something like, you know, the MoveOn virtual town hall or Digg Labs or whatever, we take lessons from those projects and transfer them into new ones. This is a Photoshop file that our designer, Geraldine, uses any time we do a new map application. Or we’ve basically developed something like this tool box full of, you know, brushes or tools or crutches, you might say, essentially ways of, you know, instantaneously jumping into a new project that deals with geography, or deals with data or deals with online Internet flows in some ways and kind of bringing these habits that we’ve developed and these lessons that we’ve learned from previous works, and instantaneously kind of dropping into a new data set with old tools as we develop new tools to work with that data set. So one project that kind of exemplifies this is something called Crimespotting which we’ve been working on for the past two years or so. What Crimespotting is, is a visual interactive browser for crime reports, initially in Oakland and then just recently with the release of the DataSF.org data set from the city of San Francisco, we’ve got kind of both sides of the Bay or both major cities in the Bay area. So what you’ve got here is a view of West Oakland—my house is not quite within this view, but it’s pretty close! And you can see that there’s this kind of division between crimes that are happening in the more populated bits, no crime that’s happening out in the industrial waterfront on the lower left hand side there.
You’ve got CloudMade OpenStreetMap-based maps in the background that have been explicitly designed to sort of fade back to allow you to see the interactive, you know, clickable, draggable, interrogatable crime icons in the foreground, and it really is this whole interface that’s tuned towards taking this temporal data, this really basic kind of, you know, points in time data and making it as interrogatable as possible. So we’ve borrowed a couple of tricks from other designers to show this stuff. One thing that we’ve been interested in is the way that Apple does this kind of Spotlight effect in the System Preferences dialogue. If you have a Mac, if you have OS X, you’ll know that when you kind of do a search in System Preferences, Apple has this nice way of kind of darkening the whole screen and then highlighting only certain things that are in the foreground. So we borrowed a little bit of that to think about, how do you pull, you know, from a view like this where you might have like a confusion of icons all overlaid on top of one another, how do you pull out very specific kinds of crime, like if you want to look at just, you know, just robberies? So there’s this way of, like, essentially hovering your mouse over a thing or clicking on a thing and getting this kind of foreground/background effect going on.

We’ve also been thinking a lot about temporal navigation on a much finer scale. So you saw the sort of century wide temporal navigation of the Hurricane Tracker, now we’ve got this much more twenty four hour base kind of pie of time interface that deals with things like 6am, 6pm as opposed to, you know, 1850, 2000. This is something that… you can think of this almost as like a radial collection of check boxes. It’s kind of the simplest thing that could possibly work but it gives you a lot of flexibility in terms of how you want to think about time. You’ve got a bunch of little hot links up at the top that let you, for example, only look at crimes that are happening at night, so you’ve got that hint in the background, the light/dark halves of the pie that show you the current time of year’s day/night division, and here you’ve got just the dark crimes, just the night time crimes selected. Here you’ve got the daytime crimes. You can do things like look at specific times of day that you might be interested in, so like nightlife for example, this would be like from when you knock off work at 6pm to when all the bars close at 2am. You might be interested in just that time of crime in the city. And because it’s kind of check box based, you also end up being able to do a little bit more flexible stuff than just a slider would give you. So you can, for example, say you know, just give me the commute hours, so you’ve got evening commute on the right, morning commute on the left, so it’s the twenty four hour clock and you can really be specific about which times do I care about—you know, when am I asleep, when am I going to work, when am I at work and when am I actually likely to be, you know, potentially victimised in these places or, you know, have some potential risk come to me.
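Because the pie is really a ring of check boxes, a selection like "nightlife" or "commute hours" is just a set of hours, and filtering crime reports is a set-membership test, which is what gives it that extra flexibility over a slider. A minimal sketch, with wrap-around past midnight (the field names are illustrative, not Crimespotting's actual data model):

```python
def hours(*ranges):
    """Expand (start, end) hour ranges on a 24-hour clock into a set
    of hours, wrapping past midnight: (18, 2) covers 18..23 and 0..1."""
    selected = set()
    for start, end in ranges:
        h = start
        while h != end:
            selected.add(h)
            h = (h + 1) % 24
    return selected

def filter_by_hour(reports, selected_hours):
    """Keep only reports whose hour-of-day is in the selected set."""
    return [r for r in reports if r["hour"] in selected_hours]

nightlife = hours((18, 2))            # knocking off work to last call
commute   = hours((7, 10), (16, 19))  # morning and evening commutes
```

Note that the commute selection is two disjoint ranges at once, which is exactly the case a single slider can't express.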

The last recent project that I want to talk about is the SFMOMA Artscope. This is something that we did just this past year for the San Francisco Museum of Modern Art and what we were exploring here is this idea of using geographic browsing metaphors for data that isn’t actually explicitly geographic. So you can imagine, you know, the brief for this project is essentially a six thousand strong art collection from the SFMOMA, you know, it’s all the stuff that’s in their Museum as well as like tons more stuff that’s sitting in a warehouse in the back of the Museum that they have these mechanical reproduction rights for. And the brief is essentially—take all of these pieces, all of this artwork that they’ve been collecting for the past eighty odd years and put it into some sort of interactive browsing environment on the Web. They obviously have a website already, this stuff has been published, these pieces have kind of permanent URLs attached to them. But what they don’t have is a way to very quickly get like a zeitgeist or large scale sort of Digg Labs like sense of what’s going on with this artwork. So one of the places where we decided to go with this project was to borrow some ideas from online mapping. So you know, this is a screenshot from Microsoft TerraServer from about 2003, you know, probably like a year and a half before Google Maps was released. And this represented I think, at a certain point in time, the standard in how you dealt with geographic information online. You know, if you remember how MapQuest used to be, you would enter an address or enter directions and they would kind of cook up a map specifically for your page and give you, like, the jpeg representation of just where you were and it wasn’t a really supple and interactive and reactive kind of thing.
And what happened was, you know, Google created Google Maps as we all know, and the main thing that that changed, if you can kind of picture yourself back in 2004 seeing this for the first time, was that you could now pan and scroll and zoom infinitely. When this came out, I must have spent, you know, kind of a solid four or five hours just kind of following the freeways around my house, just totally blown away by this idea that, you know, the entire country and then later the world was now available to interrogate and pan and zoom in this interactive way. And the particular sort of sleight of hand trick that Google does in order to make this possible is to pre-render the entire planet in these little 256-pixel image tiles. So these things exist all over the place, there’s a kind of Ajax/JavaScript-based application that on the fly picks out which ones you need to look at, and then assembles them together into this final map. So it’s kind of an interesting trick. It had been done before. Google did it on a scale that no-one had really seen before and it became this kind of, almost like a standard way of navigating this stuff. I remember talking to a potential client, the New York Times, around 2005 when Google Maps was first released and we had this suggestion, like—oh, what if you use, you know, this browsing metaphor to think about your travel section? And they said, ‘Well, we tried Google Maps and nobody knew that you could drag the map, everybody was just clicking on the little T’s for travel and they didn’t know that you could interact with this thing in this kind of supple way.’ I think, you know, given the kind of fullness of time, the four years that have intervened, that’s probably the biggest change that’s happened in online cartography: now people drag and swipe the thing.
It’s the same way that, you know, once somebody’s been kind of conditioned with an iPhone for a week, they just want to kind of swipe at every single screen because now they’re used to this idea that it reacts to them in a way that they hadn’t been accustomed to before.
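The core of that tile trick is very compact: at zoom level z the world is a 2^z by 2^z grid of 256-pixel tiles, and a Web Mercator formula maps any longitude/latitude to the tile containing it. A sketch of the standard slippy-map tile calculation (this is the common OpenStreetMap-style scheme, not necessarily Google's exact internal code):

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Return the (x, y) tile coordinate containing a point, where
    the world at `zoom` is a 2**zoom x 2**zoom grid of 256px tiles."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    # Web Mercator: y runs from 0 at the top (near the north pole).
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# The client works out which tiles cover the current viewport,
# fetches just those images, and stitches them into the visible map.
```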

So, thinking about art, thinking about kind of how you represent this massive landscape of art, we essentially said—ok, well what if we just do kind of the same thing but with SFMOMA’s collection. Split it up into tiles, put it back together and then present it in this kind of map-like interface that borrows a lot of ideas and tricks from Google Maps and presents it to people in a way that suggests this idea of kind of an infinite landscape. So what you can see here is the final product. You’ve got this kind of round magnifying glass or loupe in the foreground that you can move around the application. You’ve got this infinite—well, not infinite, but you’ve got a very large field of artwork in the back and you have the possibility of moving it around and zooming in and out, and being able to see the SFMOMA’s entire collection. The way that we particularly decided to spatialise this collection was to organise everything according to the accession date, or the date that it made it into the Museum. So if you go all the way up into the upper left hand corner of this piece, you get like the first Diego Rivera that they bought, and if you go all the way into the lower right hand corner, you get whatever it is that they bought two weeks ago. And a lot of interesting patterns come out of this, so you can see just below Luke there, there’s kind of a line of white drawings. It really lays bare some of the curatorial decisions that get made at a museum like the SFMOMA, or they might come into, you know, a particular architect or photographer’s collection, buy it all in one shot and then you can see, like, well there’s like one Diego Rivera, there’s maybe one Picasso and then there’s like, you know, a hundred photographs from one particular photographer or a whole bunch of drawings from a particular architect that just pop in an interface like this. It gives you a sense of, you know, where their changing tastes are over time.
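Spatialising by accession date reduces to sorting the works and filling a fixed-width grid, reading left to right and top to bottom, so the first purchase lands in the upper left and the newest in the lower right. A minimal sketch (the column count and field names are my own assumptions, not SFMOMA's actual layout code):

```python
def layout_by_accession(works, columns):
    """Sort artworks by accession date and assign each a (row, col)
    grid cell, oldest acquisition first."""
    ordered = sorted(works, key=lambda w: w["accession_date"])
    return {
        w["title"]: (i // columns, i % columns)
        for i, w in enumerate(ordered)
    }

works = [
    {"title": "Rivera mural study", "accession_date": "1935-06-01"},
    {"title": "Photograph A",       "accession_date": "1972-03-14"},
    {"title": "Recent acquisition", "accession_date": "2009-08-20"},
]
grid = layout_by_accession(works, columns=2)
```

Because the ordering is purely chronological, a bulk purchase of one photographer's archive shows up as a contiguous run of cells, which is exactly the "line of white drawings" pattern described above.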
Part of the brief involved a huge data set involving, like, where these artists were from, what their birth dates were. We explored a lot of different ways of looking at this data, but we found that simplifying it into this funnel was both easy for people to understand and also avoided a lot of kind of political minefields that a museum like the SFMOMA runs into. In particular, their curators are really stressed about the idea that they used to not buy women’s artwork, and then they started buying women’s artwork, and there was this point in time at which they decided that they were going to be more inclusive of the kind of artists that they were dealing with, which was, you know, basically capping off fifty years of buying art by dudes. And they didn’t want people to be able to look at this interface and say ‘Oh my God, you have, you know, eighty per cent, you know, only men’s artwork’, when the reality of the situation was that they’re now ameliorating this kind of historical inadequacy, but the data still shows a kind of past predilection that they had for a particular kind of artwork. So they were really sensitive about these things, and I think what they really wanted was something that let people sort of slip around and zoom around this stuff, and get a sense of the entire data set, whilst still allowing a kind of per-item digging in, but not allowing, you know, a complete spreadsheet interactivity with the data. The idea wasn’t so much ‘we’re publishing our entire collection and you can see every single detail about it’, but rather ‘we’re giving you this kind of creative or directed interface to this stuff that lets you understand what’s going on while at the same time kind of skipping into and out of it, and essentially playing with it’.

So that’s a few projects that I wanted to talk about and now, I think Ben has some meta words to say about them! Thank you.


Thanks Mike. My role at Stamen is that of the conceptual strategist and adviser, and in the design discourse around the office, I guess my function would be sort of the hyperspace button, which we will press now! So I am going to re-approach a lot of the same material that Mike has talked through, but now from a little bit of a meta perspective, in terms of the types of cultural studies and the conceptual artifacts that inform how we approach these materials. The largest framing of this movement into the culture of visualisation, I would say, comes from our growing literacy in two things. One is in the culture of interaction with computation: the arc of how we’ve dealt with computing devices has become a more personal and a more fluid relationship as we go. There’s sort of a refresh rate to the way that we can understand frames of data coming at us. And the other has to do with our growing literacy in complexity, which comes a lot from the culture of video gaming, the understanding of urban spaces Adam was speaking about, and the understanding of relationships and social networks. So we’re sort of soaking in information now, and we’re becoming more and more self aware about that situation, and the fluidity with which we deal with the composition of information is now changing. And so we’re going from an era where we used to describe very complex models through language and potentially diagrammatic illustration, to an era where we can explore models interactively, and we’re comfortable in that space.
I’d say that, you know, there is a literacy that’s been growing in the last generation of people who can sit down, you know, in a relationship with data and not be given explicit instruction in how to interact with it, but rather are pulled into the experience by trial and error, by watching how investigation yields results, so that they can actually comprehend the relationship between their action and the result. And so as we begin to intuit the ways that we can make that connection, the way that we can actually draw people in, it means that we can begin to sort of turn up the complexity dial without losing them as we go. And, you know, visualisation is definitely founded on the premise that you’re actually turning up the complexity without turning off people’s ability to understand it. And that’s really part of the mission of Stamen: to begin to, or continue to, popularise and make available these sort of tropes for ways into complexity and ways into understanding systems and data sets that were otherwise illegible. So I’d say that, in a huge sort of way, it’s a redefinition of the relationship between data and computation.

So the other thing I think that’s really interesting about the way that our design process works and the way that visualisations work is that they are basically small worlds. In other words they are sort of holistic physicses—I don’t know if that’s the right plural—physii?—that describe rule sets mapping variables to dynamic presentation, whether that’s, in the Swarm case, the actual behaviour of different agents in the environment, or, you know, a rule for expression in a visual language, or a way that data is related to a geographic situation and map. But the interesting part about that, which goes back to the same systems literacy issue that I was just talking about, is that that physics is coherent, and because of the coherence, as humans we are modelling minds (I mean, that is what we do, we build models of the world), and so these small worlds we also build models of. Therefore, as our relationship with one of these visualisation pieces grows, our literacy in its rule set becomes internalised. We become sort of friends with the visualisation space, and therefore we have a very different relationship to it and we can actually explore it more casually, and we begin to see patterns in the information in a very different way because, now that we have internalised the rule set, the expression begins to take on a different type of meaning.

The other thing that’s, I would say, probably the hardest thing to wrap one’s mind around in the design of these things is the data itself, before we get to it in visualisation. Mike was showing earlier some of the studies that we did as part of the creative brief with Digg, where the people from Digg and the people from Stamen together were kind of driving visually through the relationships in the data set, before we locked down onto a specific instance of visualisation relationships to illustrate. And that’s, I think, part of what’s sort of mind blowing about what visualisation represents: the multi-dimensionality of the data set, the fact that we have only a limited set of capabilities to perceive. We have, you know, our senses, which in visualisation usually rely on our ability to perceive space, and that spatial model is then projected into a space that can have an incredible amount of dimensionality. Most data sets are filled with variability—n-dimensional models, lots of different variables that you have to cook down into a representation that creates relationships in 2D or 3D space. And that is actually one of the hardest first steps to make in this type of data modelling process: you have to decide, out of all of these spaces, what is it that I’m trying to represent. But one of the tricks in still leaving the door open to the n-dimensionality is interaction, the fact that you can actually create handles onto the way that you’re pivoting the 2D or 3D view into the larger dimensionality of the data space, through the use of sliders and buttons that form the controls that take you into that space.
So at Stamen, often what we’re doing, in terms of our ability to pivot the view into the sort of hyper-dimensionality of the data, is moving along a timeline. But here, in this particular visualisation, there are a couple of axes of variability that we’re actually allowing people to transit, and you therefore end up with this resultant—what I’m calling here—sculpture in possibility space: we have the total data set, which is all of hurricane activity in the United States, and then we have a resultant sculpture, which is the winnowing down based on the selections that we’re making. You end up with something that is legible as an object of sort of composite data, an object that’s sort of interrogatable, and you can actually begin to understand the incidences that compose it. But you can also see that, again with that system literacy, you begin to understand the overall composition, and that begins to make sense on a larger scale.
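The winnowing described here—where slider-style selections carve a legible “sculpture” out of the full data set—amounts to range filtering along a couple of axes of variability. A minimal sketch, assuming hypothetical hurricane records as `(year, category, wind_speed_mph)` tuples; none of these names or values come from the actual Stamen piece:

```python
# Each record is a hypothetical (year, category, wind_speed_mph) tuple.
hurricanes = [
    (1992, 5, 175),
    (1999, 2, 105),
    (2004, 4, 145),
    (2005, 5, 175),
]

def select(records, year_range, min_category):
    """Winnow the full data set down to the 'sculpture' the user has tuned in:
    only storms inside the selected time window, at or above the chosen category."""
    lo, hi = year_range
    return [r for r in records if lo <= r[0] <= hi and r[1] >= min_category]

# The sliders' current positions become the arguments of the query.
subset = select(hurricanes, year_range=(2000, 2009), min_category=4)
```

Each slider movement re-runs the query and redraws only the surviving records, so the on-screen object is always the composite of the current selections.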

So in a lot of ways, the action of the user in these spaces—or what you hope for as a designer when you’re designing visualisations—is that you’re empowering the user to tune in patterns. Again, in huge data sets, like in the Instrumented City for instance, or in other contexts where enormous amounts of real time data are flooding into a database, a lot of that data actually is not very interesting. And therefore you don’t really want to build a visualisation that spends a lot of time driving around this multi-dimensional data space into the hinterlands of, you know, there being just a flat line on the screen describing a relationship. That’s not what’s really compelling. As humans, one of the things that we’re tuned to do ourselves is to find complexity, because, you know, science and philosophical investigation are always zooming in on the surface between order and chaos, and that’s where a lot of our understanding of the world comes from: investigating that threshold. And so what you want to be able to do in building the handles in these tools is, in the first place, realise: ok, we’re going to make a mapping in this possibility space, this hyper-dimensional super set of data; we’re going to take a chunk out of it; we’re going to map it onto this sub set of dimensions; and now we’re going to design the handles so that someone can steer this little vehicle through the space and cover the terrain that bears the most information. Make sure that the tourism in the overall data set is tourism into the most beautiful parts of it.
And the design of the handles means that you want to make sure that the vehicle’s going to go in the right direction to take you to those places.

So, you know, probably the best way to understand how these mappings work, from the super set of data dimensionalities onto these small worlds, the little physics that each of these visualisations represents, is through dimensional synesthesia. I don’t know how many people are familiar with synesthesia, but it’s a condition that some people experience where maybe they hear a sound and that makes them smell a specific smell, or they see words and different words appear to them as different colours. And for some people it’s not really a disability necessarily, because a lot of people actually find it quite useful, the fact that they have these strange neural mappings between one sort of sensory input and the sort of cognitive effect the input gives them. And that in a way is what we’re building: sort of a cybernetic synesthesia. We’re building, you know, extensions of the ability to map one set of senses onto another set of inputs. For instance, in this case with Artscope, it was about the idea that we were taking something that was ultimately temporal and turning it into a spatial representation.
And the fact that that artifact of the entire timeline of the collection has now been made into sort of a physical artifact, in the sense that it’s actually spatially mapped, is very much something that we’re becoming familiar with as people that consume these types of experiences. Popular culture is giving us ways to understand what it means: a lot of times music videos now use dynamic visualisation to represent audio, and there are these crossover spaces where we’re seeing a lot of real time translation between one type of stimulation and another type of cognition. And I think that visualisation is sort of a culmination of that from a very utilitarian perspective: the idea that we’re actually using an aesthetic system, and kind of a piece of art in and of itself, which is the system of relating one axis of variability to another. And I think that that’s actually part of the opportunity for designing these things. Part of our goal at Stamen is that, in the design process, we actually like that the medium is also somewhat the message, to the extent that how we choose to frame up the data sets is also a craft in and of itself. It’s not just the data that’s sitting out there beyond the lens that we’re applying to it; the actual instrumentation has an aesthetic quality.
And I think that’s actually something that is going to begin to propagate out as the democratisation of tools arrives, which in recent years seems to always accompany media arts innovations: those innovations become popularised. And so I think we’re looking forward to a time in which this type of art spreads, because when you’re taking a super set of data and cooking it down into a small world, there are decisions involved. There is an editorial process, there is a subjectivity there, and there is also an aesthetic there, and those two things mean that there’s plenty of room for conversation around pivoting on the same data sets through thousands of these lenses. And so I think what we’re just coming into now is, because of the popular literacy that’s growing in visualisation, there will actually be popular expression through visualisation as well. We’re entering a time when tools are arriving where we won’t be the only people talking about this on stage to you guys; you guys are going to be talking back to us with the examples that you’re making with this type of experience. And I think that’s something that we should all be looking forward to.


Clearleft is a user experience design consultancy based in Brighton, UK.

We make websites, and in our spare time we like to give something back to the web design community by running dConstruct. It's a grass-roots conference that gathers some of the brightest minds in the industry from around the world, and brings them to our little home by the sea for a cup of tea and a slice of cake.

Previous years: 2005 | 2006 | 2007 | 2008