Ben Shneiderman and Data Visualization

18 Oct

Last Wednesday I attended a talk by Ben Shneiderman at Wellesley.  He mostly spoke about his research in the field of data visualization – basically exploring new ways to present large sets of data.  Not surprisingly, the talk was *very* well attended…I even saw a few professors from the art department, which seemed random (or so I thought).

Ben showed a few examples of how powerful alternate methods of visualization can be.  He presented a set of data in tabular form and then showed the same data plotted on a graph.  It was immediately obvious that although the data sets had the same means, the underlying functions that described the data were completely different!  It was pretty amazing to be able to find relationships between data points in under a second.

In addition to convincing the audience that data visualization is indeed a powerful tool, Ben let us explore some of his specific projects and the tools he uses to find different patterns.  Ben developed treemaps, which are pretty awesome:

He used these treemaps to show how we can find anomalies and trends in things like stock patterns very quickly.  For example, he showed what the treemap looks like when only one group of companies (for example, the tech industry) is succeeding while others are experiencing a fall in their stock prices.
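Just for fun, here's a toy sketch of the classic "slice-and-dice" treemap layout (my own simplified version, not Ben's actual implementation): each node gets a rectangle with area proportional to its size, and the split direction alternates at each level of the hierarchy.

```python
def size(node):
    """Total value of a node: a leaf is a number, an inner node is a list."""
    return node if isinstance(node, (int, float)) else sum(size(c) for c in node)

def treemap(node, x, y, w, h, depth=0):
    """Slice-and-dice layout: alternate the split axis at each level.
    Returns a flat list of (x, y, w, h) tiles, one per leaf."""
    if isinstance(node, (int, float)):
        return [(x, y, w, h)]
    tiles, total, offset = [], size(node), 0.0
    for child in node:
        frac = size(child) / total
        if depth % 2 == 0:  # even depth: slice vertically (split along x)
            tiles += treemap(child, x + offset * w, y, w * frac, h, depth + 1)
        else:               # odd depth: slice horizontally (split along y)
            tiles += treemap(child, x, y + offset * h, w, h * frac, depth + 1)
        offset += frac
    return tiles

# Two made-up sectors: tech (three companies) and energy (two)
layout = treemap([[50, 30, 20], [60, 40]], 0, 0, 100, 100)
```

The bigger a company's value, the bigger its tile – so a whole sector tanking shows up as a large dark region at a glance.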

Along these lines, I really like the idea of taking a nebula of unstructured data points and producing meaning.  Since I’ve been working on a thesis to create large interactive spaces, I’ve been doing a lot of thinking lately about how I can use very simple sensor data to produce BIG, inspiring results within a space.  I like the idea of having an installation that can leverage a simple sensor network and work in conjunction with users to capture and expand upon simple patterns in data.  Much like Ben’s talk, I definitely think the way you choose to present data is directly related to how much attention the data will garner.  For example, if I keep track of traffic patterns in an academic building, I could choose to post a list of statistics on a wall.  Alternately, I could use light and color installations to ambiently highlight areas of high movement.  By exploring more innovative ways of looking at information, we can leverage the kinds of input that humans are naturally attracted to in order to draw attention to interesting occurrences.

Lights to music

11 Oct

Woohooo!  So at our last meeting, BPE made a few changes to our costume design.  One major change is the function of the hat.  Since we’re already storing bubbles elsewhere on the body, it seemed redundant to have a special bubble hat to hold bubbles in.  Truthfully, I just wanted to work in some kind of headpiece.  We’ve now decided to transform the hat into a “crowd enthusiasm” detector.  We’d have a sound sensor somewhere in the audience.  When the crowd gets louder, the hat will light up a new ring of LEDs.  The hat would have a spectrum of colors (kind of like the terror alert level, but not as freaky), so when the crowd’s at excitement level 0, only a blue ring of light would illuminate, etc., until all rings of LEDs are lit up.  When the crowd reaches its max excitement level, the rings of light could all flash.  I really wanted the “max excitement level” to trigger a bubble blower, but I’m not sure if the team members are up for that.
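Here's roughly how I picture the hat logic working – the five-ring count and the 10-bit sensor range are just my assumptions, not settled design decisions:

```python
NUM_RINGS = 5  # assumed: blue ring at level 1 up through the top ring at level 5

def rings_to_light(sound_level, max_level=1023):
    """Map a raw sound-sensor reading (0..max_level) to the number of
    LED rings that should be lit.  Returns (rings_on, flash), where
    flash is True only at max crowd excitement."""
    level = min(max(sound_level, 0), max_level)  # clamp noisy readings
    rings = round(level * NUM_RINGS / max_level)
    return rings, rings == NUM_RINGS

rings_to_light(0)     # quiet crowd -> (0, False)
rings_to_light(1023)  # max excitement -> (5, True): flash everything!
```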

Another LED actuator we’ve been thinking about is associating the “intensity” of different music bubbles with the light intensity of their LED.  The user could place active bubbles on their sash, twist the bubble to turn its intensity up or down…then the LED inside of the bubble would automatically change brightness according to the level.  This could be done with a potentiometer or something…
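The mapping itself would be dead simple – basically scaling a 10-bit analog reading from the potentiometer down to an 8-bit PWM duty cycle for the LED (the 1023/255 ranges are assumptions based on typical hobby microcontrollers):

```python
def pot_to_pwm(adc_value, adc_max=1023, pwm_max=255):
    """Scale a raw potentiometer reading to an LED PWM duty cycle.
    Twist the bubble -> adc_value changes -> LED brightness follows."""
    adc_value = min(max(adc_value, 0), adc_max)  # clamp out-of-range readings
    return adc_value * pwm_max // adc_max

pot_to_pwm(0)     # bubble turned all the way down -> LED off (0)
pot_to_pwm(1023)  # turned all the way up -> full brightness (255)
```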

Here’s an example of LEDs flashing to music.  These LEDs are wired directly to speakers with speaker wire.  Check it out, dance party!

Soft Rock

8 Oct

Part of Bubble Pop Electric’s goal is to create an organic, “soft” way to create music.  Although we are using tokens to mix and match music samples, we are also using our musical outfit to control music via body movement.  Whether the user is stomping with their shoes or changing the intensity of a musical element by stretching a Lycra suit, we’re providing a way for the user to control music intuitively.

The Opera of the Future group at the MIT Media Lab has been exploring “soft” musical creation for some time now.  Their Toy Symphony project from ~2002 focuses on allowing children to make their own sounds using non-traditional instruments.

Here’s an example of a “shaper”, an instrument children can squeeze and manipulate with their hands to shape music.  This particular type of shaper is an “embroidered musical ball”.  In this case, music can be generated by squishing a pillow-like object.  Eight pressure sensors made from conductive thread are embroidered onto the ball.  The sensor data is processed by a microcontroller and sent to a computer, where music is produced by generating MIDI tones.  This musical ball is a nice example of an instrument that doesn’t use sliders, buttons, or other hard, bulky sensors.  All sensors can be used at once, without a second thought.
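If I were to sketch the sensor-to-MIDI step, it might look something like this – the note assignments, threshold, and ranges are my own guesses, not what the Media Lab actually did:

```python
# Hypothetical note assignment: one MIDI note per embroidered sensor
SENSOR_NOTES = [60, 62, 64, 65, 67, 69, 71, 72]  # a C major scale

def pressure_to_midi(readings, threshold=50, max_reading=1023):
    """Turn eight raw pressure readings into (note, velocity) MIDI events.
    Sensors below the threshold are treated as untouched; harder squeezes
    produce higher velocities, so all eight can sound at once."""
    events = []
    for note, reading in zip(SENSOR_NOTES, readings):
        if reading >= threshold:
            velocity = min(127, reading * 127 // max_reading)
            events.append((note, velocity))
    return events

pressure_to_midi([1023, 0, 0, 0, 0, 0, 0, 0])  # one hard squeeze -> [(60, 127)]
```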

More on embroidered musical balls can be found here: http://www.cc.gatech.edu/~gilwein/images/ballchifinal.pdf

Bubble Pop Electric

30 Sep

Bubble Pop Electric is Ali J. McKenna, Lorraine Shim, and Alex Olivier.  Bubble Pop Electric is a bubble-covered electronic pop mixing station.  Bubble Pop Electric is the future of performance.  (And yes, Bubble Pop Electric is a Gwen Stefani song, please don’t sue us, Gwen).

Bubble Pop Electric combines musical performance, lighting design, and fashion into one wearable, portable package.  Instead of delegating aspects of an artist’s performance to costume designers, light and sound technicians, and the editing studio, Bubble Pop Electric returns all control to the artist.

Using bubble tokens stored in a beautiful headpiece, the artist can decorate her outfit and mix music.  As each token is attached to her bodysuit, it lights up and is automatically assigned a selection of sound clips that the artist can choose to play.  At this point, we are still considering different options for how to play each clip.  The artist may tap the bubble to play part of a clip, or the bubble may cause the clip to continuously play while it is connected.  In order to differentiate the musical bubbles from the decorative ones, each type of bubble will have a separate color of LED.

Bubble Pop Electric will use conductive strips of a Lycra-like fabric to transform the artist’s body into a variable resistor.  As the artist moves and dances to her musical creation, the conductive pseudo-Lycra will subtly modify portions of her music, and potentially the lights in her bubbles.
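Electrically, this is basically a voltage divider: put the stretch fabric in series with a fixed resistor and read the midpoint with an analog pin.  Here's the math as I understand it – the wiring order and the 10 kΩ fixed resistor are assumptions on my part:

```python
def fabric_resistance(adc_value, r_fixed=10_000, adc_max=1023):
    """Estimate the conductive fabric's resistance from a voltage-divider
    reading.  Assumed wiring: Vcc -> fabric -> ADC pin -> fixed resistor
    -> ground, so adc_value/adc_max = r_fixed / (r_fabric + r_fixed)."""
    if adc_value <= 0:
        return float('inf')  # open circuit: fabric disconnected or torn
    v_frac = adc_value / adc_max
    return r_fixed * (1 - v_frac) / v_frac
```

As the dancer stretches the fabric, its resistance changes, the divider voltage shifts, and the software can map that back to a filter sweep, a volume change, or the bubble LEDs.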

Here’s a picture of a silver conductive stretch fabric from http://www.lessemf.com:

The last portion of our project is a pair of drum shoes.  As the artist moves about the stage, she can walk, stomp her feet, or dance, causing vibration sensors in her shoes to produce percussion sounds.  We hope that by dancing to the beats she is playing, the artist can create a sense of unity between the various sound clips in the bubbles.

Concerns:

Keeping in mind that this is just a conceptual design, we want to address the following potential issues:

1) Not overwhelming the user with an excess of options.  Interaction should feel natural, yet expressive.

2) How can we allow the artist to play different sound clips without producing a cacophony of horrible music?

3) What other controls can we add to the suit?  How will we control how the bubble’s music is played?

4) What functionality can the hat serve besides a “holder”?

Technical Details:

We’d like to use a Bluetooth chip to send sensor information to a controller computer.  This computer will then send musical data back to Bluetooth speakers located in the outfit’s shoulderpads.  This will allow us to process and store musical data without taping a computer to our outfit.

Bubbles will be connected to the suit via conductive Velcro.  This will allow bubbles to turn on only when connected to the suit.  We’re thinking of embedding a magnet and using a magnetic sensor to detect when a bubble is present.  We’ll then use event-based programming to manage when songs are playing and not.
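The event-based part might look something like this – a little Python sketch where the `print` calls stand in for whatever actually starts and stops audio (none of these names are a real API):

```python
class BubbleSuit:
    """Event-driven sketch: the magnetic sensor reports attach/detach
    events, and we start or stop that bubble's clip accordingly."""

    def __init__(self):
        self.playing = set()  # ids of bubbles currently on the suit

    def on_attach(self, bubble_id):
        if bubble_id not in self.playing:
            self.playing.add(bubble_id)
            print(f"play clip for bubble {bubble_id}")  # stand-in for audio

    def on_detach(self, bubble_id):
        if bubble_id in self.playing:
            self.playing.discard(bubble_id)
            print(f"stop clip for bubble {bubble_id}")  # stand-in for audio
```

The nice thing about keeping it event-based is that the controller computer never polls the suit; it just reacts whenever a magnet appears or disappears.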

Dance, Stretch, Distort

29 Sep

A potential inspiration for a stretchy bodysuit that controls music and lights?

Fashion bubbles

25 Sep

Ali, Lorraine, and I met tonight to discuss plans for our TUI project…we’re trying to think about what boring types of humans we can transform into superheroes!  So far we’ve been tossing around ideas involving crowd-sourcing and social media.  For instance, could our “superhero” actually be a neutral being whose goodness is controlled by tweets?  I kind of appreciate the commentary on social media – it can be used for good or for evil.  While Twitter, blogs, etc. can be a great way to spread news and information, they can also deteriorate any real social contact and spread false rumors (hello, twitterbombs…).

The only issue we had with social media is that we all had mixed feelings on its merits.  I personally own a crappy, electrical-taped phone and DON’T have a Twitter account, so I am not an expert on tweeting about my life.  I appreciate the way Twitter can be used for mass input, but I also kind of resent the superficiality of it all.

Moving on from the social media sphere, our group all agreed on one thing – fashion is important to us.  Instead of trying to come up with a superhero from the technological standpoint onward, we decided to let fashion guide our brainstorming session.

We start with Gaga.

Bubble Gaga to be exact.

Looking at Lady Gaga’s bubble costume, we absolutely love the 3-D nature and the modularity of the outfit.  If Lady G’s costume were a TUI, her bubbles would undoubtedly be perfect for multiple tokens…which reminded us of tangible bubble messages!

Friday in class, Ali did a presentation on “Tangible Message Bubbles”, a project at UC Berkeley.  Basically you have bubble-shaped tokens (AKA speech bubbles) that you can speak into and record messages.  You can then dump out these messages on an interactive board and send them to people.

So the message bubbles are very cool, as are Lady Gaga’s bubbles.  How can we use tokens to make a gorgeous bubble creation?  This leads me to our next fashion inspiration, the hat.  Forget capes, I move that futuristic superheroes wear fantastic headgear.  Case in point:

Eventually we came up with the idea of using a hat to store a more updated version of the message bubbles.  Bubbles will be queued in the hat, then come out of a tube.  The wearer can take a bubble from the queue and attach it to her dress.  When attached to the dress, the bubble will light up in response to the sound that it contains.  Incorporating our original social media idea, we’re considering letting people “tweet in” a song sample, which would be stored in the next bubble in the queue.
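The bubble queue is really just a FIFO: tweeted-in samples wait in line, and the next bubble dispensed from the tube takes the oldest one.  A rough sketch (the class and method names are mine, not a real API):

```python
from collections import deque

class BubbleQueue:
    """First-in, first-out queue of tweeted-in song samples.  Each
    bubble dispensed from the hat is loaded with the oldest sample."""

    def __init__(self):
        self.samples = deque()

    def tweet_in(self, sample):
        """A fan tweets a song sample; it joins the back of the line."""
        self.samples.append(sample)

    def dispense_bubble(self):
        """Pop the oldest sample into the next bubble, or None if empty."""
        return self.samples.popleft() if self.samples else None
```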

Sandscape and Illuminating Clay in the Context of TUI Frameworks

24 Sep

Sandscape and Illuminating Clay, projects by the Tangible Media Group at the MIT Media Lab, are TUIs for landscaping and architectural modeling that allow users to mold organic materials to actively construct a model of their design.  A camera or laser detects the changes in the physical environment and models the changes on a computer display.  In addition, information such as wind flow is projected directly onto the physical model, allowing the user to dynamically see how their model will interact with the environment.

Like most TUIs, Sandscape and Illuminating Clay aspire to move a user’s experience beyond the tried and true “WIMP” (window, icon, menu, pointing device) interface.  While there is indeed a display screen involved, these projects are more akin to playing in a sandbox than playing a computer game.  They contain many of the elements of the “Reality-Based Interaction” framework, which is described in “Reality-based Interaction: A Framework for Post-WIMP Interfaces”.

Reality-Based Interaction (RBI) Themes:

Naive Physics (NP):

These featured TUIs heavily take advantage of a user’s sense of naive physics – a user will be able to sense when a landscape structure is precarious or unstable rather than relying on a computer’s computation.  This is much more intuitive than a CAD program: the user is able to mold, build up, and depress the material instinctively instead of searching through a library of complex extruding, sweeping, or filleting options.

Body Awareness and Skills (BAS):

The dexterity present in the human hand is much greater than the dexterity possessed by a mouse pointer.  When we mold sand and clay with our fingertips, we are able to delicately press and mold exact features.  We are much more in-tune with our fingers than features in a software program.

Environmental Awareness and Skills (EAS):

In both of these TUIs, the user is able to model environmental conditions, projecting additional information onto the organic form.  With both TUIs, users can view information about elevation, water flow, etc., indicated by the color gradient projected onto the model’s surface.  As the user manipulates the material, they can see how the changes they make directly influence how the landscape interacts with the environment.  In this way, users are not limited by being unable to physically model all factors they would want to consider while landscaping a space.
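A simple version of that color ramp might just map elevation linearly between two colors – the blue-to-red gradient here is my own assumption; the actual projects may use a fancier palette:

```python
def elevation_to_color(elev, lo, hi):
    """Map an elevation value to an RGB color (0-255 per channel) on a
    blue (low) -> red (high) gradient, the kind of ramp that could be
    projected onto the sand or clay surface."""
    if hi == lo:
        t = 0.0  # flat terrain: everything gets the low-end color
    else:
        t = min(max((elev - lo) / (hi - lo), 0.0), 1.0)  # clamp to [0, 1]
    return (int(255 * t), 0, int(255 * (1 - t)))

elevation_to_color(0, 0, 10)   # valley floor -> pure blue (0, 0, 255)
elevation_to_color(10, 0, 10)  # peak -> pure red (255, 0, 0)
```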

Social Awareness and Skills (SAS):

Rather than using one input (a mouse), multiple users are able to create a model together.  A large rectangular container allows at least 4 people to stand on each side and make their own respective changes to a model.  One user can literally “make suggestions” to a model by molding another person’s creation.  These TUIs would also allow persons unfamiliar with computer modeling software to design a landscape.  For instance, a sculptor who has no previous experience with landscaping but creates beautiful forms may be able to translate their art into a designed landscape.

Tradeoffs:

1)  There is no “undo” button.  While a digital representation of a model may be preserved, it may be difficult to return to a previous version of the physical model once changes have been made.  Users have to be judicious when deciding to make major changes.

2)  The physical model is most likely not permanent.  Clay dries out.  Sand degrades.  Not to mention that once you’ve created a physical model, you cannot make a different model until you destroy the first one.  Both of these TUIs are likely intended for exploratory models.  I assume permanent digital models will have to be created in any case.

3) Textures and complex shapes may be difficult to mold.  Some shapes are more easily created with a computer.  When attempting to repeat similar shapes or structures, users interacting with these TUIs would have to hand-mold the same shape over and over if they do not already possess multiple tokens resembling their design.  In addition, when it comes to “perfect” shapes, computers are better than humans.  We leave fingerprints and may frequently mold uneven shapes.