Further Research and Inspirations

I had a discussion with my advisor, Jinsil Hwaryoung Seo, today about my research direction and a potential project for the USA Science & Engineering Festival in April. After little success with my pulse sensor experimentation, I had been struggling with what kind of biosensor to utilize in my handheld device, as well as what kind of feedback I would be giving the user. Hwaryoung suggested looking into a GSR (Galvanic Skin Response) sensor, so that will be my next exploration on the hardware side.

As for feedback for the user, we talked about potentially having haptic and visual feedback. My device could also connect to a computer or mobile device to provide additional visual feedback.

So tonight, I have been reflecting on visualizations for relaxation.

I did a quick literature search (will definitely dive deeper when I have the time) for studies tying together visual imagery with relaxation.

However, most of what I found did not directly address this connection. There appear to be many studies with an audio/visual feedback component, but the few papers that I skimmed did not explicitly describe what the visual component consisted of. There were a few that did, but those visuals tended toward video narratives of nature and life.

I am curious about generative art and relaxation. I find mathematical systems to be beautifully intriguing.

I contemplated what I personally found visually relaxing. One of the first things that came to mind was Casey Reas’ Process works. If you are unfamiliar, this video gives an excellent overview of the system’s evolution:

Here is a sample of one piece in the series:

Here are some more pieces I came across:

John Whitney, “Arabesque”

Subliminal Phoenix, “Lucid Surrender”

Glenn Marshall, “The Jewel in the Heart of the Lotus”

Physical Computing Project 1: Sensors and LEDs

Although I’m in the very beginnings of my thesis research, my interests currently center around interactive technology, meditation, and breath. Last spring, I started attending biofeedback meditation training sessions offered on campus by Student Counseling Services to help learn how to better manage my stress. These sessions guided me through various breathing exercises to help reach a relaxed state of awareness.

For my first project for Physical Computing, I envisioned a large lighting sculpture of a lotus flower, which symbolizes enlightenment, divinity, fertility, wealth, and knowledge. The lights would help guide the breath during meditation.


I imagine the lights starting from the center and extending outwards during an inhalation, and contracting inwards during an exhalation.

To help guide the breath, there would be pre-determined lighting (dictating the blue channel of the LED). The user’s breath would influence the lighting sculpture as well (dictating the red channel of the LED).  When the user’s breath matches the pre-determined timing, the resulting LED color would be purple.
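The red/blue/purple blending logic above can be sketched in a few lines. This is illustrative Python rather than the actual Arduino sketch, and the tolerance value for what counts as a "match" is my own assumption:

```python
# Hypothetical sketch of the LED blending logic: the pre-determined
# guide drives the blue channel, the user's breath drives the red
# channel, and when the two are close the mix reads as purple.

def led_color(guide_level, breath_level, tolerance=10):
    """guide_level and breath_level are 0-255 channel intensities.

    Returns the (R, G, B) tuple for the LED and whether the breath
    is currently matching the guide (i.e. the color reads as purple).
    """
    red = max(0, min(255, breath_level))
    blue = max(0, min(255, guide_level))
    matched = abs(red - blue) <= tolerance
    return (red, 0, blue), matched

# When the breath tracks the guide, red and blue are equal -> purple.
color, matched = led_color(200, 200)
```

On actual hardware this would run once per loop() iteration, writing the tuple out to the RGB LED pins with PWM.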

I imagine the lotus frame to be made out of wood and the petals out of rice paper. The petals would be backlit by the RGB LEDs.

To measure the breath, I used a stretch sensor band wrapped around the abdomen. When the user inhales, the abdomen rises, causing the band to stretch and increasing its resistance. Exhaling causes the abdomen (and the band) to contract, decreasing the resistance. I fed the output voltage of a voltage divider circuit incorporating the sensor into one of my Arduino’s analog inputs. This value was used to map the user’s breathing to the lighting animation of the red LED channels.
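As a rough illustration of that signal chain, here is the divider and mapping math in Python. The resistor values, ADC range, and mapping bounds below are placeholder assumptions, not the actual circuit values:

```python
# Illustrative sketch of the stretch-sensor signal chain:
# divider voltage -> 10-bit ADC reading -> red LED channel.

def divider_vout(vin, r_fixed, r_sensor):
    # Sensor on the high side, fixed resistor to ground:
    # more stretch -> higher sensor resistance -> lower Vout.
    return vin * r_fixed / (r_sensor + r_fixed)

def to_adc(vout, vin=5.0, resolution=1024):
    # Arduino's 10-bit ADC maps 0..Vin onto 0..1023.
    return int(vout / vin * (resolution - 1))

def adc_to_red(adc, lo=200, hi=800):
    # Map the usable ADC range onto the 0-255 red channel,
    # like Arduino's map() followed by constrain().
    scaled = (adc - lo) * 255 // (hi - lo)
    return max(0, min(255, scaled))
```

With a 10 kΩ fixed resistor and a relaxed sensor also near 10 kΩ, the divider sits around 2.5 V mid-scale, and inhaling pulls the reading (and the red channel) down.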

Here are some pictures of my prototype… I didn’t get a chance to make a video of it in action, but the circuit is pretty simple to recreate, so I might just have to go back and do that later.


Initial Thesis Research: Meditation, Breath, and Technology

Image from freedigitalphotos.net

My primary motivation for exploring meditation, breath, and technology is my interest in stress and anxiety. Stress is something everyone experiences in varying degrees throughout their lifetime. Personally, I have a difficult time dealing with stress and anxiety. In order to learn how to manage these issues, I came across some biofeedback meditation training sessions offered on my campus by Student Counseling Services. I also have been practicing yoga on and off for the past few years. I am committed to making both of these practices more regular in my life.

Benefits of Meditation

I came across an excellent TED talk about meditation:

Here are some of the notable benefits of meditation:

  • Improved attention span
  • Sharpened focus
  • Improved memory
  • Reduced stress, pain, depression, and anxiety

Current Relevant Technologies

Transforming Pain Research Group (SFU)

AIST: Paro Therapeutic Robot

  • Motivation: to allow the benefits of animal therapy for patients in environments where live animals present difficulties
    Video Demo

Yohanan and McLean: The Haptic Creature

  • Motivation: to investigate the display, recognition, and emotional influence of affective touch
    Video Demo

Initial Thesis Brainstorming

So here are my current ideas for my thesis topic!

Create a portable object to guide breath in meditation.

  • Size: currently playing with the idea of a small handheld object, a larger huggable object, or a pendant
  • Material: silicone material Insert that could be warmed up (reminiscent of therapeutic spa masks)
  • Incorporate use of biofeedback sensors

Until Next Time…

My main goals for the upcoming month:

  • Reading scientific papers about benefits of meditation
  • Researching more projects/studies dealing with breath, touch, and therapeutic technologies
  • Experimenting with Arduino and biofeedback sensors
  • Developing a more thorough thesis idea

Wearable Computing and Dance

For class, we had to find an example of a dance piece incorporating wearable computing. I found this particular piece interesting.

Shadows of aikia – documentation cut from stratofyzika on Vimeo.

For this performance art piece, we designed an environment where the stage performers are not bound to static expressions of sound and visuals. We are working with behavior-based approaches in combination with interactive software settings for sound and visuals. These approaches and technologies affect the environment: adjusting how shadows appear, how they are recognized as mirror effects and projections, how they are accepted into the complex of the personality, and how the position of the ego shifts toward those elements. We want to achieve an emotional and physical sensation for the audience where the sound and visuals are in interaction with the dancer embodying the space we will create. This will be done with original visual and audio content, software, and hardware interaction. This dynamic interactivity engages both the stage performers and the audience at a deeper level of interest and identification.
Another possible interpretation: Shadows of aiKia is a research and examination of the relationship between the body, its biological existence, and a system of elements, light and sound, both vibrations existing in space along a timeline. What is the relationship between human consciousness, the space where it is placed, and its perception of time? We reproduce a metaphorical creation of the relativity of those dimensions, according to the theory of special relativity, testing the limits of speed and trying to win over the equation defining energy and mass.

The visual and audio elements will be connected to the dancer’s movement speed and rotation using an Arduino board, one accelerometer, and one gyroscope, translating mind and body reactions into the audiovisual environment.

The dance and movement will focus on duality and the juxtaposition of one’s ego of light and/or dark shadow in the space defined by and in relationship to the visuals and sound.

The space and time, light and sound will become other shadows of the person in the labyrinth of its existence.

We used Max/Jitter and Ableton Live, and built a prototype with an Arduino, one accelerometer, and one gyroscope. Both sound and visuals react to the movement of the dancer.

official website:
sound: https://soundcloud.com/spiritantipode/love_u_ty_tu
StratoFyzika members: Akkamiau / audio (CZ) Alessandra Leone / video (IT) Hen Lovely Bird / movement (USA)
special guests: Giovanni Marco Zaccaria / interactive technology (IT)
Nicolas Berger / creative coding (CH)
curator and production: Laura Biagioni (IT)

Interactive Performance and Technology Introduction

This specific post serves as an introduction to my fellow peers in VIZA 689: Interactive Performance and Technology.

Hello everyone! This is Antoinette Bumatay. I am a 2nd year MS Visualization graduate student.

Here is a semi-recent picture of me:

And a bonus of me as a little ballet dancer (I don’t quite remember what performance this was for…):

So I have seen many dance performances throughout the years. One performance that was particularly memorable was Bad Unkl Sista’s “The Study of Soft” in San Francisco, CA in the spring of 2010.

From their company site:
“Bad Unkl Sista is a performance installation ensemble that fuses Butoh-inspired choreography, improvised music, couture costuming, and physical theater elements while producing site-specific experiences that seek to move each witness to a state of extraordinary and memorable being.”

Here are some images I found of the performance (also from their website).



It was almost four years ago, so the details are quite hazy. But from what I remember, I was completely entranced by the entire performance. It was morbid, yet beautiful, and absolutely mesmerizing.

Edit: Here is a video of the same company, but different performance.

Alright… now here are a few samples of my work.

Music Paintings:

Kishi Bashi – It All Began With a Burst
Snoop Doggy Dogg – Who Am I

Here is a video example of how the above works were made. This was an earlier version of the system, but will explain the main functionality:

Complex System Inspired:



Again, here is a video example of how the above works were made (also an earlier version of the system):

Gallery Layout Plan and Final Thoughts

This floor plan is just to illustrate that ideally I would like to have two sections to my exhibit: video and print. The divided sections would allow for different lighting settings, since video likes to be in the dark while prints need to be lit.

Here is an example of the planned print layout. Each column features songs from the same musical artist. I would love to have as many prints as the space allowed, wrapping around the walls. I think what is most interesting about this project is the differences in song prints across musical artists, as well as similarities within each musical artist set.


Here are some interesting qualities I have noted in a previous post:

(from left to right)

Kishi Bashi: Many white strokes. With highlights of blues, teals, purples, and yellows. Extremely wavy/stringy.
Absolutely makes me think of Kishi Bashi. He plays the violin and utilizes a loop pedal. (Amazing artist if you haven’t listened to him yet).

ZZ Top: Lots of pink, purple, and red! Strokes/colors are pretty even throughout the song.
These selections of songs have a steady rhythm and tonality, which could probably explain the consistency of strokes.

Nujabes: Boxy strokes. Bold colors. Black accent lines.
The most interesting thing about Nujabes is the boxy strokes. I have not encountered another artist with this pattern yet.

Snoop Dogg: Distinct vertical bands of color (mainly blue, green, red, and yellow). Very jagged strokes.
This is probably due to the rhythmic variation throughout the song. Also, the loops used to make the beats can come from a variety of selections.

Emiliana Torrini: Mostly vertical strokes. Lots of white, with highlights of pink and yellow.
She probably produced the most consistent outputs. The songs on her album Fisherman’s Woman mostly consist of soft, gentle beats, so I feel like these images are very fitting.

I would still really like to play around with my color algorithm to include more of the frequency spectrum. Throughout the past few weeks I shifted my focus to creating my prints and making sure they work at three viewing distances. This was also my first time working with prints, so I was unaware of how much is lost in translation from screen to print. This was definitely a good learning experience though! And now I know a lot more of what to look out for in the future.

Final Song Painting Print Experiments

So one comment that I received during the previous critique was that the lines were pretty jagged. I realized that this would be easily solved by using curveVertex() instead of vertex() in Processing.

Here was the resulting image:


Another suggestion I received was that the image did get a bit busy. So I experimented a bit, having the program draw only every other beat with a smaller line in the middle (double stroke).


I did like this a little better, but I disliked how thick the single stroke (the one without the overlaid middle line) looked. So I decided to do a few tests varying this thickness. The above thickness is 4.

Single stroke (width = 1):


Single stroke (width = 2):


Single stroke (width = 3):


Ultimately, I ended up settling on the single stroke line with a width of 2, and a double stroke line with widths of 1 and 4. I liked the new variations in strokes. The effect was subtle, but it added just another detail that can only be seen at a closer level.

After reviewing my print tests, I realized that a lot of the purple in my original image was no longer showing up. After struggling a bit, I learned about the gamut warning in Photoshop.

Here is my previous brightness test print. The yellow displays all unprintable colors. So once I raised the brightness to account for how dark the printer prints, a lot of color information was lost.


This was a pretty easy fix: I just converted my image to a CMYK color profile.

I did a few more test prints using varying brightness/contrast for optimal image adjustment information for the four songs I picked from different musical artists.

Along the x-axis I have varying values of adjustment for contrast levels in increments of 20. Along the y-axis I have varying values of adjustment for brightness levels in increments of 10.





Here are the printed results (apologies for the bad cell phone picture):


The differences were very subtle (even harder to see in the photograph; I’ll try to put a better quality photo up in the future), but I ended up settling on a 40-level brightness increase and a 40-level contrast increase (fourth row from the top, third column). It will be nice to keep these for future reference.

After making the appropriate adjustments, I was ready for final prints!

Here are the four final images (I would take pictures of them, but without a proper camera, the quality wouldn’t do it justice):

Kishi Bashi – It All Began With a Burst
Nujabes – Feather
ZZ Top – Sharp Dressed Man
Snoop Doggy Dogg – Who Am I

Mounting at Hobby Lobby ended up being MUCH more expensive than what others had mentioned in class, so I only had one print mounted. There has been talk that the Copy Center is cheaper, so I plan on either checking their prices or trying to do it myself.

Song Painting Initial Printing Tests

Alright, so I know I’m a bit behind on my updates for this past semester.

Here is some copy pasta from my Generative Art specific blog, documenting my test prints/algorithm adjustments for my generative art final.

First, I set up some test prints. I had heard from Catherine that the printer in the print lab tends to print a bit darker.

I set up a Photoshop file with different layers. Each layer had a varying brightness level, adjusted by the amount indicated to the right.


Here is a scan of the printed result:


I know that the scan has probably lost some of the actual color information, but the prints were noticeably darker than the digital version. I continued to use this as a key for future prints.

My first print: I printed “Sharp Dressed Man” at full resolution, which in retrospect was rather small (18×10 inches at 300 pixels/inch).

Feedback I received was to print at a larger scale, and research various ways to accomplish this in Processing.

So here is some information I found on saving high resolution images from Processing:

PDF Export: “The PDF library makes it possible to write PDF files directly from Processing. These vector graphics files can be scaled to any size and output at very high resolutions.”

PGraphics (#16 on 25 Life Saving Tips for Processing by Amnon): “…create[s] a high resolution copy of the regular draw() loop.”

I ended up sticking with the PDF Export, that way I could scale the image to whatever resolution I needed for printing. But the PGraphics “hack” does seem like a good alternative once I settle on the exact resolution I want for my prints.

I ended up making my print as big as the Print Lab in our department allowed me (approximately 43×24 in).

The print drew some valid feedback during my review in class.

Unfortunately, I currently do not have a great camera, so I cannot take a decent picture of the actual print.

But here is a snippet of my print that shows a good indication of the problem.


Although the individual lines are actually quite crisp, due to the quantity and transparency of the strokes, the resulting printed image looks blurry when viewed at a closer distance.

Phil gave very good advice that a print should work at three distances: across the room, a few feet away, and up close. He said that I accomplished the first two; now I just have to find a way to reward the viewer when they view my piece at a super close distance.

So I got some feedback to try overlaying a smaller line over the stroke to perhaps regain a sense of depth in the print, and add more visual interest at a closer viewing point.

Here are some tests:

I printed one line (strokeWidth = 4) and a second line (strokeWidth = 1), both at an alpha level equal to half of my original, so that the overlapping middle line would result in the full original alpha value.



I printed one line (strokeWidth = 4) and a second line (strokeWidth = 1), both at an alpha level equal to my original, so that the overlapping middle line would result in twice the full original alpha value.



Yay! This actually made a huge difference! The sound waves were much better defined. I decided to print this version of my line strokes in a full print to get more feedback.


As I had stated in a previous post, I realized the best first step to adjusting my color algorithm was to look at the actual input I was feeding into it! This is something I really should have done when I originally started running into issues.

Using Processing, I produced full spectrograms of three contrasting songs.

Spectrogram, Nujabes, “Feather”
Spectrogram, Emiliana Torrini, “Heartstopper”
Spectrogram, The Avett Brothers, “Paranoia in B Flat Major”

After viewing these outputs, it is obvious that I was getting muddied results because I was using a linear distribution when averaging.

Close-up of Spectrogram, Nujabes, “Feather”

These results also pointed out how much of the frequency spectrum my current color algorithm excludes. I was only using the first three bands, which correspond to the bottom three rows (easier to distinguish in the close-up).

I read through the Minim documentation more closely, and found a function called logAverages that will group frequency bands by octaves. The logarithmically spaced averages correlate more closely to how humans perceive sound than the linearly spaced averages (that I was using initially).
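To make the difference concrete, here is an illustrative Python version of the two averaging schemes. This only approximates the behavior of Minim’s logAverages(minBandwidth, bandsPerOctave); the parameters here are my own assumptions, not the values from my sketch:

```python
def linear_averages(spectrum, n_bands):
    """Average FFT bins into n_bands equal-width groups (what I was
    doing initially): every band spans the same number of Hz."""
    size = len(spectrum) // n_bands
    return [sum(spectrum[i * size:(i + 1) * size]) / size
            for i in range(n_bands)]

def log_averages(spectrum, sample_rate, min_bandwidth=60, bands_per_octave=1):
    """Group FFT bins into octave-spaced bands, roughly like Minim's
    FFT.logAverages(): each successive band covers twice the Hz of
    the previous one, closer to how humans perceive pitch."""
    nyquist = sample_rate / 2
    bin_width = nyquist / len(spectrum)  # Hz covered by one FFT bin
    averages = []
    lo, hi = 0.0, float(min_bandwidth)
    while lo < nyquist:
        step = (hi - lo) / bands_per_octave
        for b in range(bands_per_octave):
            start = int((lo + b * step) / bin_width)
            end = min(int((lo + (b + 1) * step) / bin_width), len(spectrum))
            if end > start:
                averages.append(sum(spectrum[start:end]) / (end - start))
        lo, hi = hi, hi * 2  # next octave doubles the bandwidth
    return averages
```

With a 512-bin spectrum at 44.1 kHz, the linear scheme packs most of the musically interesting low frequencies into the first band or two, while the octave scheme gives the low end many more bands, which is why the log-averaged spectrograms separate the songs so much better.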

Spectrogram with Logarithmically Spaced Averages, Nujabes, “Feather”
Spectrogram with Logarithmically Spaced Averages, Emiliana Torrini, “Heartstopper”
Spectrogram with Logarithmically Spaced Averages, The Avett Brothers, “Paranoia in B Flat Major”

Using logarithmically spaced averages shows a clearer difference between the songs, though the latter third of the octaves are still similar. I will take this into consideration when I start composing a new color algorithm.