Physical Computing Project 1: Sensors and LEDs

Although I’m in the very beginning of my thesis research, my interests currently center around interactive technology, meditation, and breath. Last spring, I started attending biofeedback meditation training sessions offered on campus by Student Counseling Services to help learn how to better manage my stress. These sessions guided me through various breathing exercises to help reach a relaxed state of awareness.

For my first project for Physical Computing, I envisioned a large lighting sculpture of a lotus flower, which symbolizes enlightenment, divinity, fertility, wealth, and knowledge. The lights would help guide the breath during meditation.


I imagine the lights starting from the center and extending outwards during an inhalation, and contracting inwards during an exhalation.

To help guide the breath, there would be a pre-determined lighting pattern (driving the blue channel of the LEDs). The user’s breath would also influence the lighting sculpture (driving the red channel). When the user’s breath matches the pre-determined timing, the resulting LED color would be purple.
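As a toy illustration (my own sketch, not the project code), the two channels would combine like this: breath drives red, the guide pattern drives blue, and when both peak together the LED reads as purple.

```java
// Hypothetical sketch of the two-channel scheme: the guide animation drives the
// blue channel, the user's breath drives the red channel. When both peak
// together, red + blue reads as purple/magenta.
public class LotusColor {
    // Returns {r, g, b} for one LED given the two 0-255 channel levels.
    public static int[] mix(int breathRed, int guideBlue) {
        return new int[] { breathRed, 0, guideBlue };
    }
}
```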

I imagine the lotus frame to be made out of wood and the petals out of rice paper. The petals would be backlit by the RGB LEDs.

To measure the breath, I used a stretch sensor band wrapped around the abdomen. When the user inhales, the abdomen rises, causing the band to stretch and increasing its resistance. Exhaling causes the abdomen (and the band) to contract, decreasing the resistance. I fed the output voltage of a voltage divider circuit incorporating the sensor into one of my Arduino’s analog inputs. This value was used to map the user’s breathing to the lighting animation of the red LED channels.
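The sensing chain can be sketched as follows (a minimal illustration in Java; the 10 kΩ fixed resistor, 5 V supply, and 10-bit ADC range are my assumptions for illustration, not values from the build):

```java
// Sketch of the sensing math: a voltage divider turns the stretch sensor's
// changing resistance into a voltage, and the ADC reading is re-mapped to a
// 0-255 LED brightness. Component values here are assumptions.
public class BreathSensor {
    // Divider midpoint voltage: Vout = Vcc * Rfixed / (Rfixed + Rsensor).
    // As the band stretches and Rsensor rises, Vout falls.
    public static double dividerVoltage(double vcc, double rFixed, double rSensor) {
        return vcc * rFixed / (rFixed + rSensor);
    }

    // Linear re-map of a raw ADC reading into a 0-255 red-channel level,
    // mirroring the behavior of Arduino's map() clamped with constrain().
    public static int toRedChannel(int adc, int adcMin, int adcMax) {
        int v = (adc - adcMin) * 255 / (adcMax - adcMin);
        return Math.max(0, Math.min(255, v));
    }
}
```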

Here are some pictures of my prototype. I didn’t get a chance to make a video of it in action, but the circuit is pretty simple to recreate, so I might just have to go back and do that later.


Gallery Layout Plan and Final Thoughts

This floor plan is just to illustrate that ideally I would like to have two sections to my exhibit: video and print. The divided sections would allow for different lighting settings, since video is best viewed in the dark, while prints need to be lit.

Here is an example of the planned print layout. Each column features songs from the same musical artist. I would love to have as many prints as the space allowed, wrapping around the walls. I think what is most interesting about this project is the differences in song prints across musical artists, as well as similarities within each musical artist set.


Here are some interesting qualities I have noted in a previous post:

(from left to right)

Kishi Bashi: Many white strokes. With highlights of blues, teals, purples, and yellows. Extremely wavy/stringy.
Absolutely makes me think of Kishi Bashi. He plays the violin and utilizes a loop pedal. (Amazing artist if you haven’t listened to him yet).

ZZ Top: Lots of pink, purple, and red! Strokes/colors are pretty even throughout the song.
These selections of songs have a steady rhythm and tonality, which could probably explain the consistency of strokes.

Nujabes: Boxy strokes. Bold colors. Black accent lines.
The most interesting thing about Nujabes is the boxy strokes. I have not encountered another artist with this pattern yet.

Snoop Dogg: Distinct vertical bands of color (mainly blue, green, red, and yellow). Very jagged strokes.
This is probably due to the rhythmic variation throughout the song. Also, the loops used to make the beats can come from a variety of selections.

Emiliana Torrini: Mostly vertical strokes. Lots of white, with highlights of pink and yellow.
She probably produced the most consistent outputs. The songs on her album Fisherman’s Woman mostly consist of soft, gentle beats, so I feel like these images are very fitting.

I would still really like to play around with my color algorithm to include the full range of the frequency spectrum. Throughout the past few weeks I shifted my focus to creating my prints, and making sure they work at three viewing distances. This was also my first time working with prints, so I was unaware of how much is lost in translation from the screen to print. This was definitely a good learning experience though! And now I know a lot more of what to look out for in the future.

Final Song Painting Print Experiments

So one comment that I received during the previous critique was that the lines were pretty jagged. I realized that this would be easily solved by using curveVertex() instead of vertex() in Processing.
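curveVertex() fits a Catmull-Rom spline through the points instead of connecting them with straight segments, which is why the jaggedness disappears. A minimal sketch of the underlying interpolation (my own illustration of the spline family, not Processing’s internals):

```java
// Catmull-Rom spline evaluation in one dimension: the curve passes smoothly
// through p1 (at t = 0) and p2 (at t = 1), with p0 and p3 acting as the
// neighboring control points. This is the spline family behind curveVertex().
public class Smooth {
    public static double catmullRom(double t, double p0, double p1, double p2, double p3) {
        double t2 = t * t, t3 = t2 * t;
        return 0.5 * ((2 * p1)
                + (-p0 + p2) * t
                + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                + (-p0 + 3 * p1 - 3 * p2 + p3) * t3);
    }
}
```

Evaluating at t = 0 and t = 1 returns the two interior points exactly, so consecutive segments join without corners.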

Here was the resulting image:


Another suggestion I received was that the image did get a bit busy. So I experimented a bit, having the program draw only every other beat with a smaller line in the middle (double stroke).


I did like this a little better, but I disliked how thick the single stroke (the one without the overlaid middle line) looked. So I decided to do a few tests varying this thickness. The above thickness is 4.

Single stroke (width = 1):


Single stroke (width = 2):


Single stroke (width = 3):


Ultimately, I ended up settling on the single stroke line with a width of 2, and a double stroke line with widths of 1 and 4. I liked the new variations in strokes. The effect was subtle, but added just another detail that can only be seen at a closer level.

After reviewing my print tests, I realized that a lot of the purple in my original image was no longer showing up. After struggling a bit, I learned about the gamut warning in Photoshop.

Here is my previous brightness test print. The yellow displays all unprintable colors. So once I raised the brightness to account for how dark the printer prints, a lot of color information was lost.


This was a pretty easy fix: I just converted my image to use a CMYK color profile.

I did a few more test prints with varying brightness/contrast adjustments to find the optimal image settings for the four songs I picked from different musical artists.

Along the x-axis I have varying values of adjustment for contrast levels in increments of 20. Along the y-axis I have varying values of adjustment for brightness levels in increments of 10.





Here are the printed results (apologies for the bad cell phone picture):


The differences were very subtle (even harder to see in the photograph; I’ll try to put a better quality photo up in the future), but I ended up settling on a 40 brightness level increase and a 40 contrast level increase (fourth row from the top, third column). It will be nice to keep these for future reference.

After making the appropriate adjustments, I was ready for final prints!

Here are the four final images (I would take pictures of them, but without a proper camera, the quality wouldn’t do it justice):

Kishi Bashi – It All Began With a Burst
Nujabes – Feather
ZZ Top – Sharp Dressed Man
Snoop Doggy Dogg – Who Am I

Mountings at Hobby Lobby ended up being MUCH more expensive than what others had mentioned in class. So, I only had one done. There has been talk that Copy Center is cheaper, so I plan on either checking their prices or trying to do it myself.

Song Painting Initial Printing Tests

Alright, so I know I’m a bit behind on my updates for this past semester.

Here is some copy pasta from my Generative Art specific blog, documenting my test prints/algorithm adjustments for my generative art final.

First, I set up some test prints. I had heard from Catherine that the printer in the print lab tends to print a bit darker.

I set up a Photoshop file with different layers. Each image had a varying brightness level, adjusted by the amount indicated to the right.


Here is a scan of the printed result:


I know that the scan has probably lost some of the actual color information, but the prints were noticeably darker than the digital version. I continued to use this as a key for future prints.

My first print: I printed “Sharp Dressed Man” at full resolution, which in retrospect was rather small (18×10 inches at 300 pixels/inch).

Feedback I received was to print at a larger scale, and research various ways to accomplish this in Processing.

So here is some information I found on saving high resolution images from Processing:

PDF Export: “The PDF library makes it possible to write PDF files directly from Processing. These vector graphics files can be scaled to any size and output at very high resolutions.”

PGraphics (#16 on 25 Life Saving Tips for Processing by Amnon): “…create[s] a high resolution copy of the regular draw() loop.”

I ended up sticking with the PDF Export, that way I could scale the image to whatever resolution I needed for printing. But the PGraphics “hack” does seem like a good alternative once I settle on the exact resolution I want for my prints.

I ended up making my print as big as the Print Lab in our department allowed me (approximately 43×24 in).
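For future reference, the raster size a print needs is just physical size times resolution; a trivial check, assuming the 300 pixels/inch used earlier:

```java
// Pixel dimensions required to print a given physical size at a given DPI.
public class PrintSize {
    public static int[] pixels(double widthIn, double heightIn, int dpi) {
        return new int[] { (int) Math.round(widthIn * dpi), (int) Math.round(heightIn * dpi) };
    }
}
```

At 300 dpi, 43×24 in works out to 12900×7200 px, which is part of why exporting to a scalable PDF was convenient.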

The print did result in some valid feedback from my review in class.

Unfortunately, I currently do not have a great camera, so I cannot take a decent picture of the actual print.

But here is a snippet of my print that shows a good indication of the problem.


Although the individual lines are actually quite crisp, due to the quantity and transparency of the strokes, the resulting printed image looks blurry when viewed at a closer distance.

Phil gave very good advice that a print should work at three distances: across the room, a few feet away, and up close. He said that I accomplished the first two; now I just have to find a way to reward the viewer when they view my piece at a super close distance.

So I got some feedback to try overlaying a smaller line over the stroke to perhaps regain a sense of depth in the print, and add more visual interest at a closer viewing point.

Here are some tests:

I printed one line (strokeWidth = 4) and a second line (strokeWidth = 1), both at an alpha level equal to half of my original; this way, the overlapping middle line would add up to the full original alpha value.



I printed one line (strokeWidth = 4) and a second line (strokeWidth = 1), both at an alpha level equal to my original; this way, the overlapping middle line would add up to twice the original alpha value.



Yay! This actually made a huge difference! The sound waves were much better defined. I decided to print this version of my line strokes in a full print to get more feedback.
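The double-stroke behavior can be sanity-checked with standard source-over compositing: stacking n strokes of opacity α gives combined coverage 1 − (1 − α)ⁿ, so two half-alpha strokes overlap to a bit less than the original alpha (and approach it as α gets small). A quick sketch:

```java
// Source-over alpha compositing: resulting coverage after stacking n strokes
// of opacity alpha on top of each other.
public class AlphaStack {
    public static double coverage(double alpha, int n) {
        return 1.0 - Math.pow(1.0 - alpha, n);
    }
}
```

For example, two strokes at α = 0.25 give coverage 0.4375 where they overlap, versus 0.25 for a single stroke, which is the extra definition the middle line adds.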


As I had stated in a previous post, I realized the best first step to adjusting my color algorithm was to look at the actual input I was feeding into it! This is something I really should have done when I originally started running into issues.

Using Processing, I produced full spectrograms of three contrasting songs.

Spectrogram, Nujabes, “Feather”
Spectrogram, Emiliana Torrini, “Heartstopper”
Spectrogram, The Avett Brothers, “Paranoia in B Flat Major”

After viewing these outputs, it is obvious that I was getting muddied results because I was using a linear distribution when averaging.

Close-up of Spectrogram, Nujabes, “Feather”

These results also pointed out how much my current color algorithm excludes out of the frequency spectrum. I was only using the first three bands, which correspond to the bottom three rows (easier to distinguish in the close-up).

I read through the Minim documentation more closely, and found a function called logAverages that will group frequency bands by octaves. The logarithmically spaced averages correlate more closely to how humans perceive sound than the linearly spaced averages (that I was using initially).
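In the same spirit as Minim’s logAverages, octave-band averaging can be sketched like this (a simplified Java illustration; the band edges and bin layout are my assumptions, and Minim additionally supports multiple bands per octave):

```java
// Octave-band averaging over a linear FFT spectrum: band k spans
// [fMin * 2^k, fMin * 2^(k+1)), and each FFT bin i sits at frequency
// i * binWidth. Doubling band widths mirror how we perceive pitch.
public class OctaveAverages {
    public static double[] octaveAverages(double[] spectrum, double binWidth,
                                          double fMin, int numOctaves) {
        double[] avg = new double[numOctaves];
        for (int k = 0; k < numOctaves; k++) {
            double lo = fMin * Math.pow(2, k), hi = lo * 2;
            double sum = 0;
            int count = 0;
            for (int i = 0; i < spectrum.length; i++) {
                double f = i * binWidth;
                if (f >= lo && f < hi) { sum += spectrum[i]; count++; }
            }
            avg[k] = count > 0 ? sum / count : 0;
        }
        return avg;
    }
}
```

Each successive band covers twice the frequency range of the previous one, so the high octaves average many more linear bins than the low ones, which is exactly what un-muddies the top of the spectrogram.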

Spectrogram with Logarithmically Spaced Averages, Nujabes, “Feather”
Spectrogram with Logarithmically Spaced Averages, Emiliana Torrini, “Heartstopper”
Spectrogram with Logarithmically Spaced Averages, The Avett Brothers, “Paranoia in B Flat Major”

Using logarithmically spaced averages shows a clearer difference between the songs, though the latter third of the octaves are still similar. I will take this into consideration when I start composing a new color algorithm.

Final Project Proposal: Technical Details and Projected Schedule

So, for my final project I’d like to revisit my music paintings from the chance-based system project.

My color algorithm was a happy accident. However, it only incorporated a small section of the frequency spectrum.

My initial experiments of utilizing the full frequency spectrum resulted in very muddy images. And unfortunately, I ran out of time, so I went back to my previous algorithm.

For my final project, I would like the resulting colors in the paintings to carry more meaning. I would like to be able to answer questions concerning the color. Like, why are ZZ Top’s music paintings primarily purple? What characteristics in the song cause it to be so?

To do this, my first step will be taking a closer look at the FFT graphs of songs over time. This will let me see the actual input I’m using in my color function, giving me a better understanding of what is happening. I also have only a very loose understanding of sound and FFT analysis, so I feel like studying this would give me a greater comprehension of the technical aspect of music.

I found a paper entitled “Time-frequency Analysis of Musical Rhythm” by Xiaowen Cheng, Jarod V. Hart, and James S. Walker. I haven’t really read through it quite yet, but one of the diagrams seemed very applicable.

Time-frequency Analysis of Musical Rhythm, Xiaowen Cheng, Jarod V. Hart, and James S. Walker

I would also like to go through Diego Bañuelos’s Beyond the Spectrum of Music: An Exploration through Spectral Analysis of Sound Color in the Alban Berg Violin Concerto.

Diego Bañuelos, Beyond the Spectrum of Music: An Exploration through Spectral Analysis of Sound Color in the Alban Berg Violin Concerto

To help keep myself on track, here is a projected schedule of what I’m planning on doing each week till the final project presentation.

Week 11:

  • Create various spectrograms of contrasting/similar songs (See former post for song choices).
  • Begin analysis of spectrograms
  • Test prints of existing color algorithms (considering varying scales and types of paper).

Week 12:

  • Start composing new color algorithm based on correlations found in spectrograms.
  • Tweak line density/thickness/opacity depending on test prints.
  • More test prints using new line distribution.

Week 13:

  • Finalize color algorithm/line distributions from previous week testing.
  • More test prints for color matching.

Week 14:

  • Create ideal gallery floor plan.
  • Final prints.
  • Final video.

Flocking Cellular Automata with Color/Chaos (plus final thoughts)

Apologies for not keeping up with my blog posts! This post will contain a bunch of information on what I had been working on.

So after the first critique, I started thinking a lot about playing with chaos.

First I utilized the Hénon system equations to add chaotic dynamics to my system.
x[t+1] = 1.29 - x[t]^2 + 0.3 * y[t]
y[t+1] = x[t]

When the system was initialized, it would choose a random float starting value of y between -100 and 100. This value would be a global multiplier for the amplitudes of the forces applied to the boids (separation, alignment, and cohesion).
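The per-frame update is just an iteration of the Hénon equations; a minimal sketch (using the sketch’s constants a = 1.29, b = 0.3; the classic chaotic parameters are a = 1.4, b = 0.3):

```java
// One iteration of the Henon map with the constants from this sketch.
// Returns {x[t+1], y[t+1]} given {x[t], y[t]}.
public class Henon {
    public static double[] step(double x, double y) {
        return new double[] { 1.29 - x * x + 0.3 * y, x };
    }
}
```

Calling step repeatedly yields the orbit whose value was used as the global force multiplier.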

Previously, a boid would bounce off the edges of the screen if it went out of bounds. However, this created some sharp edges that I was not happy with. So I decided to include steering (making the edges of the screen collision objects).

My first few images ended up bland, as the overlapping black lines created wide opaque areas and a lot of detail was lost. So, I decided to experiment with color by adding a hue increment at each timestep. This way, individual lines would still be distinguishable.


Here are some final image stills of my stable system with color. This system would stabilize once the boids reached a state of contentment; at that point their velocity/acceleration would be set to zero.





I really liked these images. The added chaos introduced some variation in the spacing of the lines. I felt a sense of movement and dancing.

I was also interested in what Phil suggested during the first critique about having a continuously moving animation, rather than stills. So in my state of contentment, I did not force the velocity and acceleration to be zero. Instead, I set the amplitude of the forces to an extremely low value (0.005). This way, the system would blow out of equilibrium whenever the global chaotic constant (given by the Hénon equations) grew large enough.

Since the lines would eventually run over each other and become too opaque, I decided to add an overlay every 7 frames.

Here is a video of that outcome.

I was actually quite intrigued by these results. I thought it was interesting how the system would come into a seemingly stable state, creating an interesting design if only for a brief moment before exploding out again. I felt the overall effect of the visual was very calming. I also liked how the animation looked hand-drawn.

A suggestion was made in class to keep to my original color scheme of black and gray. I initially added color because the black lines overlaid each other so much that wide areas of black developed. However, since I added the overlay, this no longer happens.

Here is a video with black lines on gray, which is very reminiscent of a pencil sketch. I really like these results.

I know my gallery presentation was weak. And I know that the majority of that stemmed from the fact that I was unsure about what I wanted to show.

After more thought, I would ideally like to show off both the projection and prints of my still images (on a moderate scale, about 2–3 ft across). A smaller display would show the split screen video, to provide an explanation of how these images were created.

Flocking Cellular Automata Video and Future Direction

Here’s a video of my program from the previous post in action! (Btw Apowersoft Free Online Screen Recorder is awesome).

The red lines shown on the left panel are lines that haven’t moved from their previous position. These lines are rendered with a lower opacity to avoid quick buildup of opaque black lines (as shown in Iteration III of my program; see previous post).

Here’s a few thoughts on where I can take this within the next two weeks:

  • Give boids different tolerances for each state.
    This would mirror how people have different preferences for “contentment.” Some people prefer to be left alone, others don’t mind huge crowds. It would be interesting to see how this affects the image output.
  • Enable steering for edges.
    Right now the boids bounce off the edges. This causes problems when many boids get stuck in the corner. It is difficult for them to escape.
  • Play with color.
    I honestly do like the monochrome outputs. It feels very elegant to me, but it would be interesting to see how I can incorporate color into these images.

Flocking Cellular Automata

Alright, so I decided to experiment a bit with my flocking cellular automata system inspired by Jared Tarbell. And I am definitely liking the results!! I guess I have a thing for moving line exposures.

My system has a small number of boids (for these iterations I played around with 10–15). Each boid has one of the following states:

  • “Loneliness” (0 neighbors): alignment and cohesion forces applied
  • “Contentment” (1-2 neighbors): no motion
  • “Anxiety” (>2 neighbors): separation forces applied
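The state assignment above boils down to a threshold on neighbor count; a minimal sketch:

```java
// Maps a boid's neighbor count to its behavioral state, matching the
// thresholds described above.
public class BoidState {
    public static String state(int neighbors) {
        if (neighbors == 0) return "loneliness";   // alignment + cohesion applied
        if (neighbors <= 2) return "contentment";  // no motion
        return "anxiety";                          // separation applied
    }
}
```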

Here is the basic structure of my program:

while (boids are not all content) {
    update boid positions
    draw
}

Iteration I:

  • Boids generated from the center of the image
  • Linear link drawn between each boid




Iteration II:

  • Boids generated from random point
    float fx = (0.1*randomGaussian()+ 0.5) * width;
    float fy = (0.1*randomGaussian()+ 0.5) * height;
  • Linear link drawn between each boid




Iteration III:

  • Boids generated from random point
  • Curvilinear link drawn between each boid




Iteration IV:

  • Boids generated from random point
  • Curvilinear link drawn between each boid
  • If line has not moved from previous frame, line is drawn with a lower opacity (this is to help get rid of those opaque black lines that develop when line hasn’t moved for a while)
