Gallery Layout Plan and Final Thoughts

This floor plan illustrates my ideal setup: two sections in the exhibit, one for video and one for print. Dividing the space would allow for different lighting, since video is best viewed in the dark while prints need to be lit.

layout3d

Here is an example of the planned print layout. Each column features songs from the same musical artist. I would love to have as many prints as the space allowed, wrapping around the walls. I think what is most interesting about this project is the differences in song prints across musical artists, as well as similarities within each musical artist set.

wallLayout

Here are some interesting qualities I have noted in a previous post:

(from left to right)

Kishi Bashi: Many white strokes. With highlights of blues, teals, purples, and yellows. Extremely wavy/stringy.
Absolutely makes me think of Kishi Bashi. He plays the violin and utilizes a loop pedal. (Amazing artist if you haven’t listened to him yet).

ZZ Top: Lots of pink, purple, and red! Strokes/colors are pretty even throughout the song.
These selections of songs have a steady rhythm and tonality, which could probably explain the consistency of strokes.

Nujabes: Boxy strokes. Bold colors. Black accent lines.
The most interesting thing about Nujabes is the boxy strokes. I have not encountered another artist with this pattern yet.

Snoop Dogg: Distinct vertical bands of color (mainly blue, green, red, and yellow). Very jagged strokes.
This is probably due to the rhythmic variation throughout the song. Also, the loops used to make the beats can come from a variety of selections.

Emiliana Torrini: Mostly vertical strokes. Lots of white, with highlights of pink and yellow.
She probably produced the most consistent outputs. The songs on her album Fisherman’s Woman mostly consist of soft, gentle beats, so I feel like these images are very fitting.

I would still really like to play around with my color algorithm to include the full range of the frequency spectrum. Throughout the past few weeks I shifted my focus to creating my prints, and making sure they work at three viewing distances. This was also my first time working with prints, so I was unaware of how much is lost in translation from the screen to print. This was definitely a good learning experience though! And now I know a lot more of what to look out for in the future.

Final Song Painting Print Experiments

One comment that I received during the previous critique was that the lines were pretty jagged. I realized that this would be easily solved by using curveVertex() instead of vertex() in Processing.
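In a sketch, the change is essentially a one-line swap inside the shape loop. Here is a minimal Processing fragment (the helper name and point list are my own, not the original code); note that curveVertex() treats the first and last calls as control points, so the endpoints are repeated:

```java
// Draw one smoothed stroke through a list of points using curveVertex().
// curveVertex() interprets the first and last calls as control points,
// so the endpoints are duplicated to make the curve pass through them.
void drawSmoothStroke(ArrayList<PVector> pts) {
  noFill();
  beginShape();
  curveVertex(pts.get(0).x, pts.get(0).y);   // leading control point
  for (PVector p : pts) {
    curveVertex(p.x, p.y);                   // was: vertex(p.x, p.y)
  }
  PVector last = pts.get(pts.size() - 1);
  curveVertex(last.x, last.y);               // trailing control point
  endShape();
}
```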

Here was the resulting image:

sdm_closeupCurves

Another suggestion I received was that the image did get a bit busy. So I experimented a bit, having the program draw only every other beat with a smaller line in the middle (double stroke).

sdm_closeupCurvesEO4

I did like this a little better, but I disliked how thick the single stroke (the one without the overlaid middle line) looked. So I decided to run a few tests varying this thickness. The thickness above is 4.

Single stroke (width = 1):

sdm_closeupCurvesEO1

Single stroke (width = 2):

sdm_closeupCurvesEO2

Single stroke (width = 3):

sdm_closeupCurvesEO3

Ultimately, I settled on a single-stroke line with a width of 2, and a double-stroke line with widths of 1 and 4. I liked the new variation in strokes. The effect was subtle, but it added just one more detail that can only be seen up close.
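Roughly, the final stroke scheme could be sketched like this in Processing (variable names and structure are illustrative, not the original code):

```java
// Illustrative sketch of the chosen stroke scheme: every other beat gets a
// double stroke (width 4 with a width-1 line overlaid down the middle),
// the rest a single width-2 stroke.
void drawBeatStroke(int beatIndex, float x1, float y1, float x2, float y2) {
  if (beatIndex % 2 == 0) {
    strokeWeight(4);
    line(x1, y1, x2, y2);   // outer stroke
    strokeWeight(1);
    line(x1, y1, x2, y2);   // thin overlay down the middle
  } else {
    strokeWeight(2);
    line(x1, y1, x2, y2);   // single stroke
  }
}
```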

After reviewing my print tests, I realized that a lot of the purple in my original image was no longer showing up. After struggling a bit, I learned about the gamut warning in Photoshop.

Here is my previous brightness test print. The yellow highlights all unprintable colors. So once I raised the brightness to account for how dark the printer outputs, a lot of color information was lost.

gamutWarning

This was a pretty easy fix: I just converted my image to a CMYK color profile.

I did a few more test prints with varying brightness and contrast to find the optimal image adjustments for the four songs I picked from different musical artists.

Along the x-axis I have varying values of adjustment for contrast levels in increments of 20. Along the y-axis I have varying values of adjustment for brightness levels in increments of 10.

bcTest_burst

bcTest_feather

bcTest_sharpdressedman

bcTest_whoami

Here are the printed results (apologies for the bad cell phone picture):

IMAG0507

The differences were very subtle (even harder to see in the photograph; I’ll try to put up a better-quality photo in the future), but I settled on a brightness increase of 40 and a contrast increase of 40 (fourth row from the top, third column). It will be nice to keep these for future reference.

After making the appropriate adjustments, I was ready for final prints!

Here are the four final images (I would take pictures of them, but without a proper camera, the quality wouldn’t do it justice):

burst_final
Kishi Bashi – It All Began With a Burst
feather_final
Nujabes – Feather
sharpdressedman_final
ZZ Top – Sharp Dressed Man
whoami_final
Snoop Doggy Dogg – Who Am I

Mounting at Hobby Lobby ended up being MUCH more expensive than what others had mentioned in class, so I only had one print mounted. There has been talk that the Copy Center is cheaper, so I plan on either checking their prices or trying to do it myself.

Song Painting Initial Printing Tests

Alright, so I know I’m a bit behind on my updates for this past semester.

Here is some copy pasta from my Generative Art specific blog, documenting my test prints/algorithm adjustments for my generative art final.

First, I set up some test prints. I had heard from Catherine that the printer in the print lab tends to print a bit darker.

I set up a Photoshop file with different layers. Each layer had its brightness adjusted by the amount indicated to the right.

brightnessTest01

Here is a scan of the printed result:

brightnessTestScan

I know that the scan has probably lost some of the actual color information, but the prints were noticeably darker than the digital version. I continued to use this as a key for future prints.

My first print: I printed “Sharp Dressed Man” at full resolution, which, in retrospect, was rather small (18×10 inches at 300 pixels/inch).

Feedback I received was to print at a larger scale, and research various ways to accomplish this in Processing.

So here is some information I found about saving high-resolution images from Processing:

PDF Export: “The PDF library makes it possible to write PDF files directly from Processing. These vector graphics files can be scaled to any size and output at very high resolutions.”

PGraphics (#16 on 25 Life Saving Tips for Processing by Amnon): “…create[s] a high resolution copy of the regular draw() loop.”

I ended up sticking with the PDF export, so that I could scale the image to whatever resolution I needed for printing. But the PGraphics “hack” does seem like a good alternative once I settle on the exact resolution I want for my prints.
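For reference, a minimal Processing pattern for the PDF export route looks like this (the filename and single-frame handling are just examples, not my actual sketch):

```java
// Processing's PDF library records draw calls as vector output,
// so the file can later be rasterized at any print resolution.
import processing.pdf.*;

void setup() {
  size(1290, 720);
  beginRecord(PDF, "songPainting.pdf");  // start capturing vector output
}

void draw() {
  // ... existing painting code ...
  endRecord();  // finish writing the PDF
  exit();
}
```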

I ended up making my print as big as the Print Lab in our department allowed me (which was approximately 43×24 inches).

The print did result in some valid feedback from my review in class.

Unfortunately, I currently do not have a great camera, so I cannot take a decent picture of the actual print.

But here is a snippet of my print that shows a good indication of the problem.

sdm_closeup01

Although the individual lines are actually quite crisp, due to the quantity and transparency of the strokes, the resulting printed image looks blurry when viewed at a closer distance.

Phil gave very good advice that a print should work at three distances: across the room, a few feet away, and up close. He said that I accomplished the first two; now I just have to find a way to reward the viewer when they view my piece at a super close distance.

So I got some feedback to try overlaying a smaller line over the stroke to perhaps regain a sense of depth in the print, and add more visual interest at a closer viewing point.

Here are some tests:

I printed one line (strokeWidth = 4) and a second line (strokeWidth = 1), both at an alpha level equal to half of my original; this way, the overlapping middle line would add up to the full original alpha value.

sharpdressedman_pi00

I printed one line (strokeWidth = 4) and a second line (strokeWidth = 1), both at an alpha level equal to my original; this way, the middle line would add up to twice the original alpha value.

sdm_closeup02

Yay! This actually made a huge difference! The sound waves were much better defined. I decided to print this version of my line strokes in a full print to get more feedback.

Spectrograms!

As I had stated in a previous post, I realized the best first step to adjusting my color algorithm was to look at the actual input I was feeding into it! This is something I really should have done when I originally started running into issues.

Using Processing, I produced full spectrograms of three contrasting songs.

feather_freqFull
Spectrogram, Nujabes, “Feather”
heartstopper_freqFull
Spectrogram, Emiliana Torrini, “Heartstopper”
paranoia_freqFull
Spectrogram, The Avett Brothers, “Paranoia in B Flat Major”

After viewing these outputs, it is obvious that I was getting muddied results because I was averaging the frequency bands over a linear distribution.

Screenshot from 2013-11-10 13:44:07
Close-up of Spectrogram, Nujabes, “Feather”

These results also pointed out how much of the frequency spectrum my current color algorithm excludes. I was only using the first three bands, which correspond to the bottom three rows (easier to distinguish in the close-up).

I read through the Minim documentation more closely and found a function called logAverages that groups frequency bands by octaves. Logarithmically spaced averages correlate more closely with how humans perceive sound than the linearly spaced averages I was using initially.
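For reference, the relevant Minim calls look roughly like this (the file name and the logAverages parameter values are just examples, not necessarily what I used):

```java
// Minim FFT with octave-based averaging: logAverages(minBandwidth,
// bandsPerOctave) replaces the linear band averaging.
import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioPlayer song;
FFT fft;

void setup() {
  minim = new Minim(this);
  song = minim.loadFile("feather.mp3", 1024);
  fft = new FFT(song.bufferSize(), song.sampleRate());
  fft.logAverages(22, 3);  // octaves starting at 22 Hz, 3 averages per octave
  song.play();
}

void draw() {
  fft.forward(song.mix);
  for (int i = 0; i < fft.avgSize(); i++) {
    float a = fft.getAvg(i);  // one logarithmically spaced average
    // ... map a to a spectrogram column / color ...
  }
}
```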

feather_logFreq
Spectrogram with Logarithmically Spaced Averages, Nujabes, “Feather”
heartstopper_logFreq
Spectrogram with Logarithmically Spaced Averages, Emiliana Torrini, “Heartstopper”
paranoia_logFreq
Spectrogram with Logarithmically Spaced Averages, The Avett Brothers, “Paranoia in B Flat Major”

Using logarithmically spaced averages shows a clearer difference between the songs, though the latter third of the octaves are still similar. I will take this into consideration when I start composing a new color algorithm.

Final Project Proposal: Technical Details and Projected Schedule

So, for my final project I’d like to revisit my music paintings from the chance-based system project.

My color algorithm was a happy accident. However, it only incorporated a small section of the frequency spectrum.

My initial experiments of utilizing the full frequency spectrum resulted in very muddy images. And unfortunately, I ran out of time, so I went back to my previous algorithm.

For my final project, I would like the resulting colors in the paintings to carry more meaning. I would like to be able to answer questions concerning the color. Like, why are ZZ Top’s music paintings primarily purple? What characteristics in the song cause it to be so?

To do this, my first step will be taking a closer look at the FFT graphs of songs over time. This will let me see the actual input I’m feeding into my color function, giving me a better understanding of what is happening. I currently have only a loose understanding of sound and FFT analysis, so studying this should also give me a greater comprehension of the technical side of music.

I found a paper entitled “Time-frequency Analysis of Musical Rhythm” by Xiaowen Cheng, Jarod V. Hart, and James S. Walker. I haven’t really read through it quite yet, but one of the diagrams seemed very applicable.

spectrograms
Time-frequency Analysis of Musical Rhythm, Xiaowen Cheng, Jarod V. Hart, and James S. Walker

I would also like to go through Diego Bañuelos’s Beyond the Spectrum of Music: An Exploration through Spectral Analysis of Sound Color in the Alban Berg Violin Concerto.

banuelos
Diego Bañuelos, Beyond the Spectrum of Music: An Exploration through Spectral Analysis of Sound Color in the Alban Berg Violin Concerto

To help keep myself on track, here is a projected schedule of what I’m planning on doing each week till the final project presentation.

Week 11:

  • Create various spectrograms of contrasting/similar songs (See former post for song choices).
  • Begin analysis of spectrograms
  • Test prints of existing color algorithms (considering varying scales and types of paper).

Week 12:

  • Start composing new color algorithm based on correlations found in spectrograms.
  • Tweak line density/thickness/opacity depending on test prints.
  • More test prints using new line distribution.

Week 13:

  • Finalize color algorithm/line distributions from previous week testing.
  • More test prints for color matching.

Week 14:

  • Create ideal gallery floor plan.
  • Final prints.
  • Final video.

Flocking Cellular Automata with Color/Chaos (plus final thoughts)

Apologies for not keeping up with my blog posts! This post will contain a bunch of information on what I had been working on.

So after the first critique, I started thinking a lot about playing with chaos.

First, I utilized the Hénon system equations to add chaotic dynamics to my system:

x(t+1) = 1.29 - x(t)^2 + 0.3 * y(t)
y(t+1) = x(t)

When the system was initialized, it would choose a random float starting value of y between -100 and 100. This value would be a global multiplier for the amplitudes of the forces applied to the boids (separating, alignment, and centering).
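The map itself is simple to iterate. Here is a minimal, self-contained Java version (the seed values here are just examples; the original picked a random starting y between -100 and 100):

```java
// Minimal iteration of the Hénon-style map used as the chaotic force
// multiplier: x' = 1.29 - x^2 + 0.3*y, y' = x.
public class HenonChaos {

    // One step of the map; returns {xNext, yNext}.
    static double[] step(double x, double y) {
        return new double[] { 1.29 - x * x + 0.3 * y, x };
    }

    public static void main(String[] args) {
        double x = 0.0, y = 0.0;  // example seed; the sketch randomized y
        for (int t = 0; t < 5; t++) {
            double[] next = step(x, y);
            x = next[0];
            y = next[1];
            System.out.println("t=" + (t + 1) + "  x=" + x + "  y=" + y);
        }
    }
}
```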

Previously, boids would bounce off the edges of the screen if they went out of bounds. However, this created some sharp edges that I was not happy about, so I decided to include steering (treating the edges of the screen as collision objects).

My first few images came out bland: the overlapping black lines created wide opaque areas, and a lot of detail was lost. So I decided to experiment with color by adding a hue increment each timestep. This way, individual lines would remain distinguishable.

v06_black

Here are some final image stills of my stable system with color. This system would stabilize once the boids reached a state of contentment: their velocity and acceleration would be set to zero.

v06_001

v06_003

v06_004

v06_005

I really liked these images. The chaos introduced some variation in the spacing of the lines; I felt a sense of movement and dancing.

I was also interested in Phil’s suggestion during the first critique of having a continuously moving animation rather than stills. So in the state of contentment, I did not force the velocity and acceleration to zero. Instead, I set the amplitude of the forces to an extremely low value (0.005). This way, the system would blow out of equilibrium if the global chaotic constant (given by the Hénon equations) grew large enough.

Since the lines would eventually run over each other and become too opaque, I decided to add an overlay every 7 frames.

Here is a video of that outcome.

I was actually quite intrigued by these results. I thought it was interesting how the system would come into a seemingly stable state, creating an interesting design if only for a brief moment before exploding out again. I felt the overall effect of the visual was very calming. I also liked how the animation looked hand-drawn.

A suggestion was made in class to keep to my original color scheme of black and gray. I initially added color because the black lines overlaid each other so much that they produced wide areas of black. However, since I added the periodic overlay, this no longer happens.

Here is a video with black lines on gray, which is very reminiscent of a pencil sketch. I really like these results.

I know my gallery presentation was weak. And I know that the majority of that stemmed from the fact that I was unsure about what I wanted to show.

After more thought, I would ideally like to show both the projection and prints of my still images (on a moderate scale, about 2-3 ft across). A smaller display would show the split-screen video, to provide an explanation of how these images were created.

Flocking Cellular Automata Video and Future Direction

Here’s a video of my program from the previous post in action! (Btw Apowersoft Free Online Screen Recorder is awesome).

The red lines shown in the left panel are lines that haven’t moved from their previous position. These lines are rendered with a lower opacity to avoid a quick buildup of opaque black lines (as shown in Iteration III of my program; see previous post).

Here’s a few thoughts on where I can take this within the next two weeks:

  • Give boids different tolerances for each state.
    This would mirror how people have different preferences for “contentment.” Some people prefer to be left alone, others don’t mind huge crowds. It would be interesting to see how this affects the image output.
  • Enable steering for edges.
    Right now the boids bounce off the edges. This causes problems when many boids get stuck in the corner. It is difficult for them to escape.
  • Play with color.
    I honestly do like the monochrome outputs. It feels very elegant to me, but it would be interesting to see how I can incorporate color into these images.

Flocking Cellular Automata

Alright, so I decided to experiment a bit with my flocking cellular automata system inspired by Jared Tarbell. And I am definitely liking the results!! I guess I have a thing for moving line exposures.

My system has a small number of boids (for these iterations I used around 10-15). Each boid has one of the following states:

  • “Loneliness” (0 neighbors): alignment and cohesion forces applied
  • “Contentment” (1-2 neighbors): no motion
  • “Anxiety” (>2 neighbors): separation forces applied

Here is the basic structure of my program:

while (boids are not all content){

  • update boid positions
  • draw

}
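The state rules above boil down to a tiny classification function. Here is a self-contained Java sketch of that logic (names are illustrative, not from the original code):

```java
// Map a boid's neighbor count to its behavioral state, per the rules above.
public class BoidState {

    enum State { LONELINESS, CONTENTMENT, ANXIETY }

    static State classify(int neighbors) {
        if (neighbors == 0) return State.LONELINESS;   // alignment + cohesion
        if (neighbors <= 2) return State.CONTENTMENT;  // no motion
        return State.ANXIETY;                          // separation
    }

    public static void main(String[] args) {
        for (int n = 0; n <= 4; n++) {
            System.out.println(n + " neighbors -> " + classify(n));
        }
    }
}
```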

Iteration I:

  • Boids generated from center image
  • Linear link drawn between each boid

v01_000

v01_001

v01_002

Iteration II:

  • Boids generated from random point
    float fx = (0.1*randomGaussian()+ 0.5) * width;
    float fy = (0.1*randomGaussian()+ 0.5) * height;
  • Linear link drawn between each boid

v02_000

v02_001

v02_002

Iteration III:

  • Boids generated from random point
  • Curvilinear link drawn between each boid

v03_000

v03_001

v03_002

Iteration IV:

  • Boids generated from random point
  • Curvilinear link drawn between each boid
  • If line has not moved from previous frame, line is drawn with a lower opacity (this is to help get rid of those opaque black lines that develop when line hasn’t moved for a while)

v04_000 v04_001 v04_002

v04_003

v04_004

v04_005

Complex Systems Inspiration

Alright, so I spent a good chunk of today catching up on readings and searching for inspiration for my complex systems project. And to be completely honest, I’m still not entirely sure what I want to do.

Cellular automata definitely piques my interest. So I’m trying to find a way to create something visually interesting.

I came across Nervous System, “a generative design studio that works at the intersection of science, art, and technology. [They] create using a novel process that employs computer simulation to generate designs and digital fabrication to realize products. Drawing inspiration from natural phenomena, [they] write computer programs based on processes and patterns found in nature and use those programs to create unique and affordable art, jewelry, and housewares.”

nervousEarrings nervousEarrings2 nervousEarrings3

This company also offers an OBJ export library for Processing! Definitely worth checking out.

I also came across the work of Jared Tarbell. His series “Happy Place” caught my attention.

happyPlace0000 happyPlace1000

This directed my thoughts toward creating a flocking system using cellular automata rules. Boids could hold a particular state determined by their neighbors. Unlike my previous flocking system, the rendered image would be an exposure of the connections between the boids, rather than the boids themselves.

Further research will be needed. I’m not entirely sure which direction I want to go in. I might try a few tests tomorrow.

Painting Boids Retrospective

So, I took a lot of the feedback from today to heart. And even though today was the final presentation for biologically inspired systems, I felt super motivated to improve my program when I got home.

Anywho, here is a video of my program as it stands right now.

The biggest change I incorporated was the color gene. I was struggling this weekend with how to incorporate color: mainly, how I was going to incorporate a gene that would allow the boids to differ from each other while still remaining cohesive. However, today during VIZA616 (Rendering and Shading), while Ergun was lecturing on interpolating colors with different hues for our shaders, I realized that I could interpolate between two colors to create color palettes for the boids. I incorporated that into a color picker for the user to manually choose the two colors. However, I completely agree with Phil’s comment about incorporating this into a gene, especially since this is a biologically inspired system. Plus, taking away those extra sliders made my interface significantly simpler.

I converted my previous color picker idea into six new color genes: hue, hue variation, saturation, saturation variation, value, and value variation. The HSV values define the mid-color; the variation values indicate how much each respective value can vary within the boid system. I set limits to ensure the values stayed between 0 and 1.
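Conceptually, sampling one boid’s color from these genes works something like this (a minimal self-contained Java sketch; the names and the uniform-offset choice are illustrative, not the exact Processing code):

```java
// Sample a boid's HSV color from a mid-color plus per-channel variation
// genes, clamping each channel into [0, 1].
public class ColorGenes {

    // Clamp a gene value into [0, 1].
    static double clamp01(double v) {
        return Math.max(0.0, Math.min(1.0, v));
    }

    // Each channel is the mid value plus a uniform offset in
    // [-variation, +variation], clamped to [0, 1].
    static double[] sampleHSV(double h, double s, double v,
                              double hVar, double sVar, double vVar,
                              java.util.Random rng) {
        return new double[] {
            clamp01(h + (rng.nextDouble() * 2 - 1) * hVar),
            clamp01(s + (rng.nextDouble() * 2 - 1) * sVar),
            clamp01(v + (rng.nextDouble() * 2 - 1) * vVar)
        };
    }

    public static void main(String[] args) {
        java.util.Random rng = new java.util.Random(42);
        double[] hsv = sampleHSV(0.6, 0.8, 0.9, 0.1, 0.2, 0.1, rng);
        System.out.printf("h=%.3f s=%.3f v=%.3f%n", hsv[0], hsv[1], hsv[2]);
    }
}
```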

So here is a list of my final boid genes:

  • Behavior: separation, alignment, and cohesion force coefficients
  • Shape: height and width of the triangle
  • Color: hue, saturation, value, hue variation, saturation variation, and value variation

Each category has its own mutation coefficient slider, to allow the user more control over the mutation in each generation. I feel this is appropriate considering the number of genes.

I also added some minor changes. The mutation sliders are hidden in Canvas Mode. The “No Paint Trail” mode boids are now drawn with a thicker stroke, so they are more readable. The boid simulation also doesn’t restart when this mode is toggled.

I know there is much more that can be done, especially to make the GUI itself more aesthetically pleasing. I already plan on putting the control panel either in a separate window or perhaps on a semi-transparent background, so there won’t be that awkward line at the top of the boids’ canvas.

Overall, especially with the most recent changes, I am quite pleased with how this project turned out. It is incredibly fun to play with. And I am loving the resulting paintings.

I will try to get a web app up, once I can figure out the best way to do so.

Here are some more screenshots!

v7_mutationMode

v7_canvasMode

v7_noPaintTrail

v7_canvasMode2

v7_noPaintTrail2

(Forgot to show “No Paint Trail” in Mutation Mode in the video above, so here it is below!)
mutationMode_noTrail

Just thought I would add my source code if anyone was curious.

As I had stated in a previous post, I used the flocking example on Processing.org as a starting point.

Unfortunately, WordPress restricts the file types that can be uploaded. I was able to upload my .pde and .csv file as a .doc:

paintingboids.pde

genePool.csv