Tuesday, November 28, 2006
Helping Annotate Music
For my PhD thesis, I'm working on a computer system that learns to associate music with appropriate and descriptive words. The machine listens to some music and reads the corresponding text description. After doing this for long enough, our clever little computer learns what words ought to be used to describe new music or, if you give it some words, it can play you songs that match your description.
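For the technically curious, the learning step can be caricatured in a few lines of Python. This is only a toy sketch with made-up numbers, not the actual system (the real one works on features extracted from the audio itself and on richer statistical models), but it shows the shape of the idea: build a model for each word from the songs people tagged with it, then use those models to annotate new songs or retrieve songs for a word.

```python
import numpy as np

# Toy sketch only -- hypothetical songs and feature vectors.
# In practice the vectors would be audio features computed from recordings.
songs = {
    "mellow_jazz": np.array([0.9, 0.1, 0.2]),
    "loud_rock":   np.array([0.1, 0.9, 0.8]),
    "quiet_piano": np.array([0.8, 0.0, 0.1]),
}
# Human annotations collected from the website: song -> set of words.
labels = {
    "mellow_jazz": {"calm", "jazzy"},
    "loud_rock":   {"loud"},
    "quiet_piano": {"calm"},
}

# Learn one model per word: the centroid of the songs tagged with it.
words = sorted(set().union(*labels.values()))
centroid = {
    w: np.mean([songs[s] for s in songs if w in labels[s]], axis=0)
    for w in words
}

def annotate(x, k=2):
    """Return the k words whose models are closest to feature vector x."""
    return sorted(words, key=lambda w: np.linalg.norm(x - centroid[w]))[:k]

def retrieve(word):
    """Rank songs by closeness to the word's model."""
    return sorted(songs, key=lambda s: np.linalg.norm(songs[s] - centroid[word]))

new_song = np.array([0.85, 0.05, 0.15])   # sounds like quiet piano
print(annotate(new_song))                  # "calm" should rank first
print(retrieve("loud")[0])                 # "loud_rock"
```

Annotation and retrieval are two directions through the same learned models, which is why one set of clicked descriptions trains both.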
While I have lots of music, I'm lacking the corresponding text descriptions. This is where you come in. I've built a website where you can listen to songs and fill out a form that records what you hear in the music. It involves clicking some boxes and buttons - no typing - and it's very straightforward. Every click that you make is recorded and then I can use your descriptions to train my computer system.
So I ask you, please:
1) Go to http://cosmal.ucsd.edu/cal/friends/
2) Register with a username (e.g. DougTurnbull)
3) Answer a few questions about your musical background
4) Listen to some music!
Once you've registered, you can log on again at any time and pick up where you left off. If you can annotate even 2 or 3 songs, it would be a great help to me and you might discover some cool new tunes. If you have any more questions about what it all means, how it really works and what, exactly, I do... please get in touch.
Thanks,
Doug Turnbull & Luke Barrington
ps - Please forward this to anyone you think might be interested in helping out.
--
-- --- ----- ------- ----------- ------------- -----------------
Douglas Turnbull
Graduate Student
Computer Science
UC San Diego
Wednesday, November 30, 2005
New Musical Interface: Air Guitar
I am guessing some of you have seen the website for "Project: Air Guitar", but if not, here is some light reading:
http://www.newscientist.com/article.ns?id=dn8383
http://airguitar.tml.hut.fi/tech.html
The authors discuss gesture recognition and musical style. It seems like a pretty cool, though somewhat kitschy project to me. I bet it would have made waves at the Cal-IT2 opening.
Doug
Friday, November 18, 2005
Re: branch prediction and multithreading
Here is the class website!
http://www-cse.ucsd.edu/classes/fa05/cse240a/lecture.html
> Here is a list from Dave on issues of computer architecture.
>
>
>> Multithreading
>>(papers found in following website)
>>http://www.cs.washington.edu/research/smt/
>>1) Simultaneous Multithreading: Maximizing On-Chip Parallelism
>>2) Compilation Issues for a Simultaneous Multithreading Processor
>>3) Simultaneous Multithreading: A Platform for Next-generation Processors
>>
>>Branch Predictors
>>
>>Trading Conflict and Capacity Aliasing in Conditional Branch Predictors.
>>www.lems.brown.edu/~iris/en291s9-04/papers/seznec-branch-isca97.pdf
>>
>>(papers below found in following website)
>>(http://www-cse.ucsd.edu/classes/fa05/cse240a/proj.html)
>>
>>McFarling, Combining Branch Predictors, WRL TN-36. Good description of
>>local, correlating, gshare, and combining predictors.
>>
>>Kessler, The Alpha 21264 Microprocessor, IEEE Micro, 1999. Uses variant
>>of McFarling's combining (tournament) predictor.
>>
>>A. Eden, and T. Mudge. The YAGS branch predictor. 31st Ann. IEEE/ACM Symp.
>>Microarchitecture (MICRO-31), Dec. 1998. Good description of a variety of
>>anti-aliasing predictors (predictors that still work with large working
>>sets of branches). Don't take their word that theirs is best without
>>testing it out...
>>
>>C.-C. Lee, I.-C. Chen, and T. Mudge. The bi-mode branch predictor. 30th
>>Ann. IEEE/ACM Symp. Microarchitecture (MICRO-30), Dec. 1997. Although the
>>previous had a good description of bi-mode, this is the original.
>>
>>A. Seznec, S. Felix, V. Krishnan, Y. Sazeides. "Design trade-offs for the
>>Alpha EV8 conditional branch predictor", in Proceedings of the 29th
>>International Symposium on Computer Architecture, May, 2002. Take a deep
>>breath before you enter...
>>
>>
>>Thanks,
>>David Camargo
>
Thursday, November 17, 2005
branch prediction and multithreading
Here is a list from Dave on issues of computer architecture.
> Multithreading
>(papers found in following website)
>http://www.cs.washington.edu/research/smt/
>1) Simultaneous Multithreading: Maximizing On-Chip Parallelism
>2) Compilation Issues for a Simultaneous Multithreading Processor
>3) Simultaneous Multithreading: A Platform for Next-generation Processors
>
>Branch Predictors
>
>Trading Conflict and Capacity Aliasing in Conditional Branch Predictors.
>www.lems.brown.edu/~iris/en291s9-04/papers/seznec-branch-isca97.pdf
>
>(papers below found in following website)
>(http://www-cse.ucsd.edu/classes/fa05/cse240a/proj.html)
>
>McFarling, Combining Branch Predictors, WRL TN-36. Good description of
>local, correlating, gshare, and combining predictors.
>
>Kessler, The Alpha 21264 Microprocessor, IEEE Micro, 1999. Uses variant
>of McFarling's combining (tournament) predictor.
>
>A. Eden, and T. Mudge. The YAGS branch predictor. 31st Ann. IEEE/ACM Symp.
>Microarchitecture (MICRO-31), Dec. 1998. Good description of a variety of
>anti-aliasing predictors (predictors that still work with large working
>sets of branches). Don't take their word that theirs is best without
>testing it out...
>
>C.-C. Lee, I.-C. Chen, and T. Mudge. The bi-mode branch predictor. 30th
>Ann. IEEE/ACM Symp. Microarchitecture (MICRO-30), Dec. 1997. Although the
>previous had a good description of bi-mode, this is the original.
>
>A. Seznec, S. Felix, V. Krishnan, Y. Sazeides. "Design trade-offs for the
>Alpha EV8 conditional branch predictor", in Proceedings of the 29th
>International Symposium on Computer Architecture, May, 2002. Take a deep
>breath before you enter...
>
>
>Thanks,
>David Camargo
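As a concrete reference point for the reading, here is a toy sketch of the gshare scheme from McFarling's tech note (my own illustration, not code from any of the papers above): a table of 2-bit saturating counters is indexed by the XOR of the branch PC and a global history register, and both the counter and the history are updated as each branch resolves.

```python
class GShare:
    """Toy gshare predictor (after McFarling's "Combining Branch
    Predictors"): 2-bit saturating counters indexed by PC XOR history."""

    def __init__(self, bits=10):
        self.mask = (1 << bits) - 1
        self.table = [1] * (1 << bits)    # 0/1 = not taken, 2/3 = taken
        self.history = 0                  # global branch-history register

    def _index(self, pc):
        return (pc ^ self.history) & self.mask

    def predict(self, pc):
        return self.table[self._index(pc)] >= 2

    def update(self, pc, taken):
        i = self._index(pc)
        if taken:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)
        # shift the outcome into the global history register
        self.history = ((self.history << 1) | int(taken)) & self.mask

# An alternating branch defeats a plain 2-bit counter, but gshare
# learns it: the history register steers the taken and not-taken
# occurrences to different counters.
bp = GShare()
outcomes = [True, False] * 200
hits = 0
for o in outcomes:
    hits += bp.predict(0x40) == o
    bp.update(0x40, o)
accuracy = hits / len(outcomes)
print(accuracy)   # near-perfect after a short warm-up
```

McFarling's combining (tournament) predictor then adds a second table of counters that learns, per branch, whether gshare or a local predictor has been doing better, which is the variant the Alpha 21264 paper describes in hardware.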
Friday, November 04, 2005
ICMC Miami paper - network performance
Friday, October 21, 2005
Follow up: Moving with or without phase delay
One of the conclusions of our little test today was that moving sounds with
phase delay seem to create flanging (it is not clear why this does not
happen with panning, or if it does, why we are less sensitive to it). An
interesting idea that Miller suggested was to see if phase helps the brain
better localize non-moving sources at different locations in space. This
means running the experiment with actual recordings and constant
localization using the two methods.
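To make the comparison concrete, here is a toy sketch (my own illustration in Python, not the code from today's test): amplitude panning puts scaled copies of the same signal in the two channels, while phase-delay panning time-shifts one channel. Summing a signal with a delayed copy of itself (roughly what happens when the two channels mix in the air) carves comb-filter notches into the spectrum; with a moving source the delay, and hence the notches, sweep, which is one plausible account of the flanging we heard.

```python
import numpy as np

sr = 44100
rng = np.random.default_rng(0)
mono = rng.standard_normal(sr)        # 1 s of noise as a broadband test signal

def pan_amplitude(x, pos):
    """Constant-power pan: pos in [0, 1], 0 = hard left, 1 = hard right."""
    theta = pos * np.pi / 2
    return np.cos(theta) * x, np.sin(theta) * x

def pan_delay(x, delay_samples):
    """Phase/time-delay 'pan': the far channel is a delayed copy."""
    right = np.concatenate([np.zeros(delay_samples), x[:-delay_samples]])
    return x, right

# Mixing the dry signal with a 20-sample (~0.45 ms) delayed copy makes a
# comb filter: deep notches at odd multiples of sr / (2 * delay).
# When the delay varies (a moving source), the notches sweep -- flanging.
left, right = pan_delay(mono, delay_samples=20)
spectrum = np.abs(np.fft.rfft(left + right))
first_notch_hz = sr / (2 * 20)        # 1102.5 Hz for a 20-sample delay
```

Amplitude panning, by contrast, only rescales the two channels, so their sum has the same spectral shape as the source, which may be part of why we don't hear the same artifact with panning.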
There will be no meeting next Friday due to the CALIT2 building opening. Please come to the presentations!
Our general plans for future meetings are:
1. Follow up on the "phase for localization" idea.
2. Windowing / instantaneous freq. within one FFT (Miller / Shlomo?)
3. Grace's paper from ICMC04 and a survey of spatialization techniques.
4. David: awaiting your bibliography
5. Joe: ADIOS ? (algorithm)
6. Doug: talk about his recent work on annotation.
Moving with or without phase delay
I think it is possible to tell the difference in sounds moving with and
without a phase delay. I'll play some examples today.
The code is at http://music.ucsd.edu/~sdubnov/Mu270d
Sunday, October 16, 2005
Please ignore my last post. I've decided to make a blog all for myself so I won't clog this one. Here it is.
Thanks, Grace
Spatialization techniques
Spatial Perception
- D. G. Malham, “Approaches to spatialization”, Organised Sound 3(2): 167–77.
- J. Chowning, "The simulation of moving sound sources", Journal of the Audio Engineering Society, vol. 19, no. 1, pp. 2-6, 1971.
- F. R. Moore, "A general model for spatial processing of sounds", Computer Music Journal, vol. 7, no. 6, pp. 6-15, 1983.
- M. Kleiner, B.-I. Dalenback, P. Svensson, "Auralization - An overview", Journal of the Audio Engineering Society, vol. 41, no. 11, pp. 861-875, 1993
- Thiele, G. and G. Plenge, “Localization of Lateral Phantom Sources”, JAES 25(4): 196-200.
HRTF-based binaural
- Véronique Larcher and Jean-Marc Jot, “Techniques d'interpolation de filtres audio-numériques: Application à la reproduction spatiale des sons sur écouteurs” [Interpolation techniques for digital audio filters: application to spatial sound reproduction over headphones], Congrès Français d'Acoustique, Marseille, France, April 1997
- Psychophysical calibration of auditory range control in binaural synthesis with independent adjustment of virtual source loudness William L. Martens PDF (JASA)
- Practical system for recording spatially lifelike 5.1 surround sound and 3D fully periphonic reproduction Robert E. (Robin) Miller III PDF (JASA)
- Individualized HRTFs using computer vision and computational acoustics, JASA Volume 108, Issue 5, p. 2597 PDF
- Cooper, D. H., and Bauck, J. L. 1989. "Prospects for transaural recording". J. Audio Eng. Soc. 37(1/2).
Cinema-style loudspeaker arrays (5.1, 8-channel)
- VBAP (Vector-Based Amplitude Panning)
- Jot, Jean-Marc and Olivier Warusfel: Spat~: A Spatial Processor for Musicians and Sound Engineers, CIARM: International Conference on Acoustics and Musical Research, May 1995.
- Jot, J.-M., and Warusfel, O. 1995. "A real-time spatial sound processor for music and virtual reality applications". Proc. 1995 ICMC
- Dérogis, Philippe, René Caussé and Olivier Warusfel: On the Reproduction of Directivity Patterns Using Multi-Loudspeaker Sources, ISMA: International Symposium of Music Acoustics 1995
Ambisonics
- Peter Fellgett, “Ambisonics. Part One: General System Description”, Studio Sound, 1:20–22,40, August 1975.
- Michael A. Gerzon, “Periphony: With-Height Sound Reproduction”, Journal of the Audio Engineering Society, 21(1):2–10, 1973.
- Michael A. Gerzon, “Ambisonics. Part Two: Studio Techniques”, Studio Sound, pages 24–30, October 1975
- Malham, David 'Higher order Ambisonic systems for the spatialisation of sound' Proceedings, ICMC99, Beijing, October 1999
- Malham, D and Myatt, A “3-D Sound Spatialization using Ambisonic Techniques” CMJ 19(4) 1995
- D.G. Malham “Homogeneous And Nonhomogeneous Surround Sound Systems”.
- Ambisonics Encoding of Other Audio Formats for Multiple Listening Conditions Jérôme Daniel, Jean-Bernard Rault, and Jean-Dominique Polack, AES Convention 1998
- David Malham at University of York
Holophonics and Wave-field synthesis
Multiple speakers are used to represent secondary sources of the wave front (see Huygens' Principle). Because the actual sound wave is reproduced, this technique creates an acceptable image over a larger area, but spatial aliasing is a potential artifact.
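The aliasing limit comes from a simple sampling argument: the loudspeaker array samples the wave front in space, so a driver spacing of Δx reproduces it faithfully only up to roughly c / (2Δx) (the exact limit also depends on the angle of incidence). A quick sanity check, with assumed numbers:

```python
SPEED_OF_SOUND = 343.0   # m/s, dry air at ~20 C

def wfs_aliasing_frequency(spacing_m: float) -> float:
    """Approximate spatial-aliasing limit for a WFS loudspeaker array:
    half a wavelength must fit between adjacent drivers.  The true
    limit also depends on the wave front's angle of incidence."""
    return SPEED_OF_SOUND / (2.0 * spacing_m)

# Example: drivers every 20 cm alias above ~857 Hz, which is why
# practical WFS arrays pack their speakers so tightly.
print(round(wfs_aliasing_frequency(0.20), 1))   # 857.5
```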
- Acoustic rendering with wave field synthesis, Marinus M. Boone
- Wave field synthesis: A promising spatial audio rendering concept, Gunther Theile and Helmut Wittek, Acoust. Sci. & Tech. 25, 6 (2004)
- Wave Field Synthesis And Analysis Using Array Technology, Diemer de Vries and Marinus M.Boone, Proc. 1999 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, New Paltz, New York, Oct. 17-20, 1999
- Nicol, R and M. Emerit 1999
Miscellaneous
- Hyper-dense transducer array (Malham)
- Hybrid speaker-headphone approach?
- Huopaniemi, J. 1999. Virtual acoustics and 3-D sound in multimedia signal processing
- Organized Sound Issue 3(2)
- Computer Music Journal Issue 19(4)
- Jean-Marc Jot, Synthesizing Three-Dimensional Sound Scenes in Audio or Multimedia Production and Interactive Human-Computer Interfaces, 5th International Conference: Interface to Real & Virtual Worlds, Montpellier, France, May 1996
- Belin, Pascal, Bennett Smith, L. Thivard, Sophie Savel, Séverine Samson and Yves Samson: The functional anatomy of sound intensity change detection, Society for Neuroscience, 1997.
- Belin, Pascal, Stephen McAdams, Bennett K. Smith, Sophie Savel, Lionel Thivard and Séverine Samson: The functional anatomy of sound intensity discrimination, Journal of Neuroscience, 1998.
- Jean-Pascal Jullien, Olivier Warusfel, Technologies et perception auditive de l'espace [Technologies and auditory perception of space], Cahiers de l'Ircam (5), March 1994
- Jot, Jean-Marc: Efficient models for reverberation and distance rendering in computer music and virtual audio reality, ICMC: International Computer Music Conference, September 1997.