Introducing choir.io

by tomazmuraus on 8/15/2013, 9:32 PM with 40 comments

by goodmachine on 8/16/2013, 8:16 AM

This is an excellent project, congrats.

However, it is in no sense "new or unique", contrary to what the authors suggest. There is an extensive (20+ years) research literature on data sonification out there, so...

http://www.icad.org/knowledgebase

Note also the (very many) art-led sonification projects carried out since the early 90s, turning everything from live IP traffic to gene-sequence or X-ray astronomy datasets into sound. The Prix Ars Electronica may be a good place to look for these.

My summary of the field in general, FWIW, is this - it's trivial to turn a realtime data stream into sound. It's slightly harder to turn the stream into either a) music or b) non-dissonant sound streams, and it's very hard indeed to create a legible (ie useful, reversible) general purpose sonification framework, because auditory discrimination abilities vary so widely from individual to individual and are highly context-dependent.

Of course, because sound exists in time, not space, there's no simple way to compare the data back against itself, as there is when one looks at a visual graph. Listeners rely on shaky old human memory: did I hear that before? Was it lower, louder? And so on.

That said, I remain fascinated by the area and propose that a sonic markup language for the web would be interesting.

Sneaky plug: My current project (http://chirp.io) began by looking at 'ambient alerts' until we reached the point above, and decided to put machine-readable data into sound, instead of attaching sound to data for humans.

Good luck, and I very much look forward to hearing more!

by ramanujan on 8/16/2013, 5:45 AM

Very interesting and highly creative. A few thoughts.

1) If a graphical plot turns data into something visual, an audio "plot" turns data into something audible. Your output is an audio file rather than an image or video file. The typical applications of this are to turn a boolean flag into a chime (e.g. text message received). Your important insight is that this can be extended to longer-form audio outputs.

2) When is audio more advantageous than image or video?

  - When you cannot look at a screen (driving, working out)
  - When there are too many screens (control room)
  - In a very dark environment where visibility is impeded
  - If you are blind or vision-impaired

This could find real application in cockpits/control rooms, to ensure that a pilot is perceiving data even if they aren't looking at a particular dial. It could also be useful for various fitness and health apps that don't need you to look at the screen all the time.

Perhaps the most interesting application would be in a car, which is where people spend a great deal of time and have their ears and brains (but not their eyes) free. Some ideas:

a) Could you generate different sounds based on the importance of a text message (doing something like Gmail's importance filtering), signaling that you don't really need to respond to this particular message right now while driving?

b) Could you have audio feedback for important things along the road? For example, the problem with the Trapster app (trapster.com) is that I need to look at the phone to see where the speed traps are. You can imagine an integrated audio feed that could give information like this and also tell you your constantly updated ETA (via a Google Maps API call; a rough sketch of this appears at the end of this comment). Or you could listen to the pulse of your company on the road to do something semi-useful, and drill down into notable events via voice.

c) The really interesting thing is if you could pair this with a set of defined voice control commands. As motivation: an audible plot can't be backtracked like a visual plot. With a visual plot your eyes can just scan back to the left; to scan back and re-hear the sound you just heard requires rewinding and replaying. But it could be interesting to set up a small set of voice commands that allow not just rewinding, but rewinding and zooming. So you hear an important "BEEP", you say something like "STOP. ZOOM", and the heuristics identify the right BEEP and then give an audio drill-down of exactly what that BEEP represented.

d) Done right, you might be able to turn a subset of webservices into a sort of voice-controlled data radio for the road. People spend thousands of hours in their cars so it's a real opportunity.
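
A minimal sketch of idea (b), assuming the public Google Maps Directions API and a hypothetical play_sound() helper; the thresholds, cue names, and the helper itself are illustrative assumptions, not anything choir.io actually ships:

  # Sketch: poll a route's ETA and emit an audio cue when it changes noticeably,
  # so the driver never has to look at a screen. play_sound() is a hypothetical
  # callback that hands a named cue to whatever sound service you use.
  import os
  import time
  import requests

  DIRECTIONS_URL = "https://maps.googleapis.com/maps/api/directions/json"

  def current_eta_seconds(origin, destination, api_key):
      """Ask the Directions API for the current driving time in seconds."""
      params = {
          "origin": origin,
          "destination": destination,
          "departure_time": "now",  # needed for traffic-aware durations
          "key": api_key,
      }
      leg = requests.get(DIRECTIONS_URL, params=params).json()["routes"][0]["legs"][0]
      # Prefer the traffic-aware figure when the API returns one.
      return leg.get("duration_in_traffic", leg["duration"])["value"]

  def watch_eta(origin, destination, play_sound, poll_seconds=60):
      """Poll the ETA and map its changes to sound cues instead of a display."""
      api_key = os.environ["GOOGLE_API_KEY"]
      last = current_eta_seconds(origin, destination, api_key)
      while True:
          time.sleep(poll_seconds)
          eta = current_eta_seconds(origin, destination, api_key)
          if eta > last + 300:        # more than five minutes slower
              play_sound("alert")     # intrusive cue: traffic building ahead
          elif eta < last - 300:
              play_sound("chime")     # gentle cue: the route cleared up
          last = eta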

by jpalomaki on 8/16/2013, 2:11 AM

Very interesting.

Watching log files scroll by, I have noticed that once you have stared at them for long enough, you start recognizing the patterns. There's not enough time to read everything that scrolls by, but quite often you just know that something is out of place.

Maybe these soundscapes could provide something similar in a non-obtrusive way. Just by listening, your brain would be wired to expect certain sounds as a consequence of certain actions. If something goes wrong, you would just know it.

I think one challenge is how to take something like this into use. Setting up the triggers and configuring the sounds feels like too much trouble ("What is the correct sound for this event?"). It might be better to just take some ready-made set and learn the sounds.
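
One way the "ready-made set" approach could look, as a minimal sketch; the event names, sound names, and fallback rules are made up for illustration and are not part of any real API:

  # Sketch: a fixed, ready-made mapping from event kinds to a small sound palette,
  # so nobody has to answer "what is the correct sound for this event?" per project.
  DEFAULT_SOUND_MAP = {
      "deploy.success": "soft_chime",
      "deploy.failure": "low_horn",
      "error":          "wood_block",
      "signup":         "water_drop",
      "payment":        "bell",
  }

  def sound_for(event_name, overrides=None):
      """Return a sound name for an event, falling back to a neutral default."""
      mapping = dict(DEFAULT_SOUND_MAP, **(overrides or {}))
      if event_name in mapping:               # exact match first
          return mapping[event_name]
      prefix = event_name.split(".", 1)[0]    # then the part before the first dot
      return mapping.get(prefix, "soft_click")

  # Example: sound_for("deploy.failure") -> "low_horn"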

by nmcfarl on 8/15/2013, 11:49 PM

So I’ve been listening to the demo in the background for a bit now. And I think it does convey info in a non-intrusive way, though I’d imagine it’ll take a long while to know exactly what’s going on just by listening.

It seems like the big trick when implementing an app on top of this is appropriately assigning the "level" of the event. Every time the Alarm or Horn goes off it’s fairly intrusive.

Regardless, an awesome, uniqueº and useful service.

--

º In my experience.

by schwambrania on 8/16/2013, 3:38 AM

This is great stuff, congrats on launching.

Instead of simply generating a fixed sound for each event, have you considered synthesizing a continuous multi-track score? Like a baseline piece of orchestra music being modulated by the events. Or something like Brian Eno's http://en.wikipedia.org/wiki/Generative_music

Also, perhaps consider streams of data other than discrete events: perhaps continuous metrics like CPU utilization, or stack traces from profiles, or percentiles of latency, or ...
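
As a rough sketch of the continuous-metric suggestion, here is one way to map CPU utilization onto pitch; psutil, the sample rate, and the frequency mapping are my assumptions, not anything choir.io does today:

  # Sketch: sonify a continuous metric (CPU utilization) rather than discrete events.
  # Each sample becomes a short sine tone whose pitch tracks the metric; the result
  # is written to a WAV file you can listen back to. Requires the psutil package.
  import math
  import struct
  import wave

  import psutil

  RATE = 44100          # audio samples per second
  TONE_SECONDS = 0.25   # length of each tone
  SAMPLES = 120         # number of metric samples (~30 seconds of audio)

  def tone(freq_hz, seconds, rate=RATE):
      """Generate one sine tone as 16-bit little-endian PCM frames."""
      frames = bytearray()
      for i in range(int(rate * seconds)):
          value = int(20000 * math.sin(2 * math.pi * freq_hz * i / rate))
          frames += struct.pack("<h", value)
      return bytes(frames)

  with wave.open("cpu_sonification.wav", "wb") as wav:
      wav.setnchannels(1)
      wav.setsampwidth(2)   # 16-bit samples
      wav.setframerate(RATE)
      for _ in range(SAMPLES):
          cpu = psutil.cpu_percent(interval=TONE_SECONDS)  # blocks, returns 0..100
          # Map 0-100% utilization onto roughly two octaves (220 Hz to 880 Hz).
          freq = 220.0 * (2.0 ** (cpu / 50.0))
          wav.writeframes(tone(freq, TONE_SECONDS))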

by dotBen on 8/16/2013, 6:50 AM

Ok, I'm going to say it -- I don't get it.

The problem I see from the GitHub demo and the discussion here is that you zone out of the "background noise" and focus on the important/out-of-bound/etc. sounds. Great, so why not just remove the background sounds and alert the user only to urgent notifications? There's nothing new here; this is just audible notification alerts.

If you are going to run the sounds in the background, your brain is going to process out the on-going "normal" sound anyway.

by kawera on 8/16/2013, 12:38 AM

Book on the subject: Gregory Kramer, "Auditory Display: Sonification, Audification, and Auditory Interfaces"

by ajhit406 on 8/16/2013, 3:21 AM

I was just thinking about this today: how nice it would be to be able to listen to the pulse of our analytics.

I had the pleasure of meeting the Mailbox app crew at Dropbox's offices a few months ago. They had a really cool light show on what looked like a table tennis net strung up with networked LEDs and pasted to the wall. When a user signed up, it would create a blue pattern across the net. When a message was sent, the screen flashed red. You can imagine the screen was a dancing symphony of visually encoded events -- it was really remarkable and quite beautiful to watch. Chaotic at first, but once you memorized the patterns you could glance at the screen and immediately feel the pulse of the application. After a few hours I think you'd be so in touch with the application that you could recognize errors without even having to check your logs / analytics / etc...

So @cortesi, definitely build in a hook for the Mixpanel API. It'd be great to get a sound every time a user signs up, signs in, or triggers certain events.

I can imagine all the SF startup folks walking around the Mission with boomboxes on their shoulders networked to pick up their audio feed from Choir.io, broadcasting their own encoded analytics melody to the world. Or PMs with headphones on at their spin class, keeping up with their engineers' progress on the new sprint. Ok yes, I'm mocking the movement now, but it's still pretty cool, congrats =)
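
A minimal sketch of the hook idea above: record the analytics event as usual, then forward it to a sound service. The URL, payload fields, and sound names below are hypothetical placeholders, not choir.io's or Mixpanel's documented API:

  # Sketch: whenever the app records an analytics event (a signup, a sale, an error),
  # also forward it to a sound service so it becomes part of the office soundscape.
  # SOUND_SERVICE_URL and the payload fields are hypothetical placeholders.
  import requests

  SOUND_SERVICE_URL = "https://example.invalid/sound-feed/YOUR_FEED_KEY"

  EVENT_SOUNDS = {
      "signup": "chime",
      "signin": "click",
      "error":  "horn",
  }

  def track(event_name, properties=None):
      """Record the event wherever you already do, then emit a sound for it."""
      # ... send to Mixpanel / your analytics pipeline here ...
      payload = {
          "label": event_name,
          "sound": EVENT_SOUNDS.get(event_name, "click"),
          "text": str(properties or {}),
      }
      try:
          requests.post(SOUND_SERVICE_URL, data=payload, timeout=2)
      except requests.RequestException:
          pass  # the soundscape is best-effort; never break the signup flow

  # Example: track("signup", {"plan": "free"})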

by DigitalSea on 8/16/2013, 4:56 AM

I really dig the idea and can see the coolness in hearing analytical data, but is it just me or is the Github real-time demo super annoying? The first couple of minutes were okay, but in this instance, where there is a constant flow of data playing sounds, it gets really old, really quickly.

No doubt a super cool and out-of-the-box idea, but personally I would go crazy if I had to hear water-droplet sounds any longer than an hour.

by ulisesrmzroche on 8/16/2013, 5:59 AM

I think it's badass! Can I hear what it would sound like in a production environment? Maybe just record like an hour or something so I can get a good feel for what it would sound like. The Github demo got too annoying after a little bit. But yeah, I really, really want to hear what it sounds like for real.

by tripzilch on 8/18/2013, 10:10 AM

> How do we construct soundscapes that blend into the background like natural sounds do?

Wetter reverbs: in particular, the late reflections are pretty strong with far-away background noises, maybe even stronger than the original sound itself (though I'm not sure if that makes physical sense, it's easy to do with a regular reverb effect, and it really muffles the sound into the background).

Also, something with the stereo image.

If funds allow, maybe ask a professional sound mastering studio? There are people who might know just the right tricks.

Oh, and if you want to place the sound in the room and bury it in the other ambient sounds, tell the users they really need somewhat decent speakers: not plastic desktop speakers, and definitely not headphones (even if they're really good headphones).
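
A minimal sketch of the wet-reverb suggestion above: convolve the dry sound with a synthetic decaying-noise impulse response and let the wet signal dominate the mix. The decay time and wet/dry balance are arbitrary choices, and a measured room impulse response would sound far better:

  # Sketch: push a one-shot sound "into the background" by giving it a very wet
  # reverb -- convolve it with decaying noise (a crude stand-in for a room's late
  # reflections) and mix the reverberated signal louder than the dry one.
  import numpy as np

  RATE = 44100

  def synthetic_impulse_response(seconds=2.0, rate=RATE):
      """Exponentially decaying noise, approximating dense late reflections."""
      t = np.arange(int(rate * seconds)) / rate
      return np.random.randn(t.size) * np.exp(-3.0 * t)

  def background_ify(dry, wet_mix=0.85):
      """Return a mostly-wet reverb of the sound, muffling it into the background."""
      ir = synthetic_impulse_response()
      wet = np.convolve(dry, ir)                           # dry sound plus reverb tail
      wet /= np.max(np.abs(wet))                           # normalize the wet signal
      dry_padded = np.pad(dry, (0, wet.size - dry.size))   # align lengths with the tail
      mixed = (1.0 - wet_mix) * dry_padded + wet_mix * wet
      return mixed / np.max(np.abs(mixed))                 # avoid clipping

  # Example: a short 660 Hz blip, pushed into the background.
  t = np.arange(int(0.2 * RATE)) / RATE
  blip = np.sin(2 * np.pi * 660 * t) * np.exp(-10 * t)
  ambient_blip = background_ify(blip)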

by bionsuba on 8/16/2013, 2:16 AM

Broken in Firefox 23 on OSX

Error log:

  Blocked loading mixed active content "http://api.choir.io/stream/f9c750f2bedb0c0f" @ https://choir.io/static/media/lib.967f1395.js:8671

by nadaviv on 8/16/2013, 12:58 AM

The sound in the demo doesn't seem to work for me, on Chromium 28.0.1500.71 running on Ubuntu 13.10.

This looks awesome - I've been wanting to set up something similar in our office for some time now, something that makes a sound every time a sale is made, so this could be pretty handy.

by shreeshga on 8/16/2013, 2:20 AM

Awesome application of sonification[1]. Generating ambient sound based on data is hard; I hope they can nail it.

[1] http://en.wikipedia.org/wiki/Sonification

by kragniz on 8/15/2013, 11:49 PM

Watching the github realtime activity with sound was mesmerising. I spent at least fifteen minutes listening to it.

You mentioned there will be Windows and OSX standalone clients coming soon. Will there be an API for writing clients?

by nabeards on 8/16/2013, 2:13 AM

I just get a flat tone in Safari 6.1 on OS X 10.8.4. Looks interesting though!

by j2d3 on 8/16/2013, 3:41 AM

I love this and know what I'll be doing all day tomorrow at work!

by kgogolek on 8/16/2013, 1:06 PM

I really like it, however the demo makes me wanna pee a little bit ;)

by shmerl on 8/16/2013, 4:36 AM

Is it going to be open source?

by rfnslyr on 8/16/2013, 12:18 AM

THIS IS SO AWESOME!

https://choir.io/player/f9c750f2bedb0c0f

Been listening to this for a while now. Love it. Can't wait for a standalone client. Do you have a mailing list? I'd love to keep track of an ongoing feature list of sorts.