The digital humanities privileges the visual. But there is no necessary reason why something like a topic model should be represented by a graph. In this project, I am exploring what different grammars of sonification might look like for representing historical data aurally. NB: I’m not talking about archaeoacoustics here, or about recreating past soundscapes or soundsheds. Rather, I want to take the born-digital results of our data machinations and listen to what they might be telling us.
What if, instead of visualizing the past, we tried to listen to it? Not in an archaeo-acoustic sense; rather, let’s forget the screen for a moment and try to develop a grammar, a framework, some compositional rules, to enable us to hear the meaningful patterns in our data. This necessarily moves us along a spectrum from ‘mere’ dataviz to actual performance, which takes us into interesting public history territory.
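To make the idea of a "compositional rule" concrete, here is a minimal sketch of one such rule: linearly mapping a series of data values (say, a topic's weight per decade) onto MIDI pitch numbers, so that peaks in the data become high notes. This is my own illustration under assumed inputs, not the project's actual code; the function name, the pitch range, and the sample weights are all hypothetical.

```python
def to_pitches(values, low=48, high=84):
    """Linearly scale a list of numbers to integer MIDI pitches in [low, high].

    One simple 'grammar rule' for sonification: the datum's magnitude
    becomes pitch height. low=48 is C3, high=84 is C6 (hypothetical defaults).
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero on constant input
    return [round(low + (v - lo) / span * (high - low)) for v in values]

# Hypothetical topic-model weights for one topic across five decades
weights = [0.02, 0.10, 0.35, 0.20, 0.05]
print(to_pitches(weights))  # the spike in the third decade becomes the highest note
```

Other rules could map a second variable to note duration or loudness; the point is that each mapping decision is a compositional choice, and making those choices explicit is what a grammar of sonification would formalize.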
Notes, organized by research archive:
| Date | Note |
| --- | --- |
| 16 May 2016 | Experiment - Determining Bad OCR via Automated Spellcheck |
| 11 May 2016 | Experiment - Bad Equity |
| 21 Apr 2016 | Items to Read concerning Glitch |
| 27 Mar 2016 | a twitter conversation re soundbashing |
| 21 Mar 2016 | clement2012 |
| 16 Mar 2016 | sonification of john adams |