[Sorry for cross-posting, please distribute]
We are happy to announce the next edition of the Linux Audio Conference
(LAC), May 1-4, 2014 @ ZKM | Institute for Music and Acoustics, in
Karlsruhe, Germany.
http://lac.linuxaudio.org/2014/
The Linux Audio Conference is an international conference that brings
together musicians, sound artists, software developers and researchers,
working with Linux as an open, stable, professional platform for audio
and media research and music production. LAC includes paper sessions,
workshops, and a diverse program of electronic music.
*Call for Papers, Workshops, Music and Installations*
We invite submissions of papers addressing all areas of audio processing
and media creation based on Linux. Papers can focus on technical,
artistic and scientific issues and should target developers or users. In
our call for music, we are looking for works that have been produced or
composed entirely/mostly using Linux.
The online submission of papers, workshops, music and installations is
now open at http://lac.linuxaudio.org/2014/participation
The deadline for all submissions is January 27th, 2014 (23:59 HAST).
You are invited to register for participation on our conference website.
There you will find up-to-date instructions, as well as important
information about dates, travel, lodging, and so on.
This year's conference is hosted by the ZKM | Institute for Music and
Acoustics (IMA). The IMA is a forum for international discourse and
exchange and combines artistic work with research and development in the
context of electroacoustic music. By holding concerts, symposia and
festivals on a regular basis it brings together composers, musicians,
musicologists, music software developers and listeners interested in
contemporary music. Artists in Residence and software developers work on
their productions in studios at the institute. Their creations range from
digital sound synthesis, algorithmic composition and live electronics to
radio plays, interactive sound installations and audiovisual productions,
covering the broad spectrum of music that digital technology can inspire.
The ZKM is proud to host the LAC for the fifth time, having initiated the
conference in 2003.
http://www.zkm.de/musik
We look forward to seeing you in Karlsruhe in May!
Sincerely,
The LAC 2014 Organizing Team
(apologies for any x-post)
Dear all,
If you happen to be in Dresden this weekend: on Thursday and Saturday night
I'll be at the CYNETART Festival performing a new version of Hypo Chrysos, an
action art piece for Xth Sense biophysical media, multiple loudspeakers and
subwoofers, and real-time visuals. Info below.
wishing you well,
M
\\\\\\\\\\\\\\\\\\\\\\\\\\
on the occasion of
Metabody Performance Night @ CYNETART Festival
Hypo Chrysos v.2
action art for vexed body and biophysical media (Xth Sense)
"During this twenty minutes action I pull two concrete blocks in a circle.
My motion is oppressively constant. I have to force myself into accepting
the pain until the action is ended. The increasing strain of my corporeal
tissues produces continuous bioacoustic signals. The sound of the blood
flow, muscle contraction bursts, and bone crackling are amplified,
accumulated, distorted, and played back through 4 loudspeakers and 4
subwoofers using the biophysical instrument Xth Sense (Pd-based), developed
by the author...
When the performer’s muscle vibration becomes tangible sound breaching into
the outer world, it invades the audience members’ bodies through their
ears, skin, and muscle sensory receptors. The sound makes their muscles
resonate, establishing a nexus between player and audience. The listeners’
bodies, the player’s body, and the performance space resonate synchronously"
VIDEO: http://marcodonnarumma.com/works/hypo-chrysos/
WHEN: Thursday 14th November / Saturday 16th November
WHERE: Dresden, Great Hall of the Hellerau Festspielhaus, Germany
COST: Thursday £3, Saturday £5/8
PUBLICATION: http://cec.sonus.ca/econtact/14_2/donnarumma_hypochrysos.html
Metabody Conference is part of the Metahuman/Metaformance Studies programme
of the European Community-funded Metabody Project, which embraces a series
of presentations taking place in more than 25 events across more than 15
cities in 11 countries. Coordinated by Reverso.
--
Marco Donnarumma
New Media + Sonic Arts Practitioner, Performer, Teacher, Director.
Embodied Audio-Visual Interaction Research Team.
Department of Computing, Goldsmiths University of London
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Portfolio: http://marcodonnarumma.com
Research: http://res.marcodonnarumma.com
Director: http://www.liveperformersmeeting.net
Hello,
I am happy to announce version 0.15.0 of PuREST JSON, code name: The API
they are a-changing.
PuREST JSON is a library for working with RESTful HTTP webservices, and
JSON data.
Authentication and authorization for webservices are available with
basic HTTP auth, cookie authentication, and OAuth. As an example of an
OAuth-authenticated webservice, a Twitter client is included.
Changes in the new version:
- Cancellation is now faster
- Switch to json-c 0.11
- Refactoring of code
- Breaking changes:
-- [oauth] and [rest]:
* [write( method is now called [file(
* [url( method is now called [init(
* init errors only output to console
* changes to status outlet:
** on success output bang
** on HTTP error output numerical HTTP status
** on cURL error output list: error code and message
-- [rest-json] has been removed
-- [json-decode]:
* string values are no longer checked for numbers or booleans
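For illustration, here is a minimal, untested Pd sketch of the renamed
messages. It assumes that [rest]'s left inlet accepts the new [init( message
and plain HTTP-verb messages such as [GET /users(, that its first outlet
carries the response data and its second outlet the status described above,
and that https://example.com/api and /users are placeholder endpoints; the
help patches shipped with the library remain the authoritative reference.

#N canvas 0 22 520 340 10;
#X msg 30 40 init https://example.com/api;
#X msg 200 40 GET /users;
#X obj 30 110 rest;
#X obj 30 170 json-decode;
#X obj 30 230 print data;
#X obj 150 170 print status;
#X text 30 280 sketch only - outlet layout and message syntax assumed;
#X connect 0 0 2 0;
#X connect 1 0 2 0;
#X connect 2 0 3 0;
#X connect 2 1 5 0;
#X connect 3 0 4 0;

If those assumptions hold, clicking the init message and then the GET
message should, on success, produce a bang on the status outlet and print
the decoded response in the Pd console.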
If your build tool or platform does not have json-c 0.11 available, use
the branch json-c-0.10 for compilation.
Github repository:
https://github.com/residuum/PuRestJson
Source code packages:
https://github.com/residuum/PuRestJson/releases
Binaries for Windows and Debian i386 and amd64:
http://ix.residuum.org/pd/purest_json.html
Build instructions for all platforms:
https://github.com/residuum/PuRestJson/wiki/Compilation
Have fun,
Thomas
--
"Chaney was aware that anything, however small, can get the eye of the
media if it's repulsive enough." (Robert Anton Wilson - The Universe
Next Door)
http://www.residuum.org/
Hi,
The CICM is pleased to share the first release of HoaLibrary for Pure Data.
HoaLibrary is a collection of C++ classes, FAUST functions and objects for
Pure Data, Max and VST dedicated to high order Ambisonics sound reproduction.
HoaLibrary allows musicians and composers to synthesize, transform and
render sound fields in a creative and artistic way. This library
facilitates the understanding and the appropriation of key concepts of
Ambisonics. Thanks to original graphical interfaces, it enables many new
kinds of signal processing, such as diffuse sound field synthesis,
perspective distortion and spatial filtering.
HoaLibrary is free, open-source and made available by the CICM, the centre
for research on musical composition and computer science at Paris 8
University.
Objects:
hoa.decoder~ : An ambisonic decoder (ambisonic, binaural, irregular
configurations).
hoa.encoder~ : An ambisonic encoder.
hoa.map~ : An ambisonic sources spatializer.
hoa.delay~ : An ambisonic sound field delay.
hoa.freeverb~ : An implementation of the freeverb algorithm for Ambisonics.
hoa.grain~ : An ambisonic granular synthesizer.
hoa.map : A GUI to spatialize sources on a map.
hoa.meter~ : A circular meter with sound field descriptor.
hoa.optim~ : An ambisonic sound field optimization.
hoa.pi : A good pi number.
hoa.projector~ : A plane wave decomposer.
hoa.recomposer~ : A plane wave recomposer to harmonics domain.
hoa.ringmod~ : An ambisonic sound field ring modulation.
hoa.rotate~ : An ambisonic sound field rotation external.
hoa.scope~ : An ambisonic harmonic scope.
hoa.space : A GUI to design ambisonic space.
hoa.space~ : A plane wave spatial filter.
hoa.wider~ : An ambisonic fractional orders simulator.
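As a rough, untested sketch of how these objects chain together (a
first-order 2D encode/decode example; the azimuth signal inlet in radians on
hoa.encoder~, the 2*order+1 harmonic outlets, and the order/loudspeaker
arguments of hoa.decoder~ are assumptions on my part, so please check the
help patches for the actual signatures):

#N canvas 0 22 540 360 10;
#X obj 40 40 noise~;
#X obj 220 40 phasor~ 0.2;
#X obj 220 80 *~ 6.2831853;
#X obj 40 140 hoa.encoder~ 1;
#X obj 40 200 hoa.decoder~ 1 4;
#X obj 40 260 dac~ 1 2 3 4;
#X text 40 310 sketch only - arguments and inlet meanings assumed;
#X connect 0 0 3 0;
#X connect 1 0 2 0;
#X connect 2 0 3 1;
#X connect 3 0 4 0;
#X connect 3 1 4 1;
#X connect 3 2 4 2;
#X connect 4 0 5 0;
#X connect 4 1 5 1;
#X connect 4 2 5 2;
#X connect 4 3 5 3;

If those assumptions hold, the noise source should circle the four
loudspeakers once every five seconds.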
Release:
http://www.mshparisnord.fr/hoalibrary/en
Sources:
https://github.com/CICM/HoaLibrary
Feedback is welcome.
Thanks.
Pierre
apologies for x-post
~~~ DMT live ~~~
FREE TONIGHT, Saturday 2 Nov, Bar 19.00
Watermans Arts Center, Downstairs Bar
Brentford, London
DMT is Marco Donnarumma, Christos Michalakos, and Atau Tanaka, a trio of
visceral electronic musicians who interface corporeal and physical gesture
with pulsing electronic noise. Donnarumma plays the Xth Sense
biophysical muscle sensor system to sonify the performer's body. Tanaka
runs granular synthesis algorithms on the iPhone, and Michalakos plays the
Augmented Drum-Kit, a bespoke electroacoustic instrument based on the
acoustic drum-kit. Together they create a wall of sound that is live
technological thrill.
http://www.watermans.org.uk/exhibitions/exhibitions/digital-art--performanc…
Hope to see some of you there,
--
Marco Donnarumma
New Media + Sonic Arts Practitioner, Performer, Teacher, Director.
Embodied Audio-Visual Interaction Research Team.
Department of Computing, Goldsmiths University of London
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Portfolio: http://marcodonnarumma.com
Research: http://res.marcodonnarumma.com
Director: http://www.liveperformersmeeting.net
Georgia Tech is now accepting applications for the MS and PhD programs in music technology for matriculation in August 2014. All PhD students, and a limited number of MS students, receive graduate research assistantships that cover tuition and pay a competitive monthly stipend. The deadline for applications is January 31, 2014.
The MS in Music Technology is a two-year program that instills in students the theoretical foundation, technical skills, and creative aptitude to design the disruptive technologies that will enable new modes of music creation and consumption in a changing industry. Students take courses in areas such as music information retrieval, music perception and cognition, signal processing, interactive music, the history of electronic music, and technology ensemble. They also work closely with faculty on collaborative research projects and on their own MS project or thesis. Recent students in the program have worked and/or interned at companies such as Apple, Avid, Dolby, Bose, Gracenote, Rdio, Sennheiser, Ableton, Echo Nest, and Smule, and gone on to PhD studies. Applicants are expected to have an undergraduate degree in music, computing, engineering, or a related discipline, and they should possess both strong musical and technical skills.
Students in the PhD program in Music Technology pursue individualized research agendas in close collaboration with faculty in areas such as interactive music, robotic musicianship, music information retrieval, digital signal processing, mobile music, network music, and music education, focusing on conducting and disseminating novel research with a broad impact. PhD students are also trained in research methods, teaching pedagogy, and an interdisciplinary minor field as they prepare for careers in academia, at industry research labs, or in their own startup companies. PhD applicants are expected to hold a master's degree in music technology or an allied field, such as computing, music, engineering, or media arts and sciences. All applicants must demonstrate mastery of core master's-level material in music technology, including music theory, performance, composition, and/or analysis; music information retrieval; digital signal processing and synthesis; interactive music systems design; and music cognition.
Both the MS and PhD programs are housed within the School of Music at Georgia Tech, in close collaboration with the Georgia Tech Center for Music Technology (GTCMT). The GTCMT is an international center for creative and technological research in music, focusing on the development and deployment of innovative musical technologies that transform the ways in which we create and experience music. Its mission is to provide a collaborative framework for committed students, faculty, and researchers from all across campus to apply their musical, technological, and scientific creativity to the development of innovative artistic and technological artifacts.
Core faculty in the music technology program include Gil Weinberg (robotic musicianship, mobile music, and sonification), Jason Freeman (participatory and collaborative systems, education, and composition), Alexander Lerch (music information retrieval and digital signal processing), Timothy Hsu (acoustics), Frank Clark (multimedia and network music), and Chris Moore (recording and production).
More information on the MS program is at: http://www.music.gatech.edu/academics/graduate/overview
More information on the PhD program is at: http://www.music.gatech.edu/academics/phd/overview
More information on the GTCMT is at: http://www.gtcmt.gatech.edu
To apply, please visit: http://www.gradadmiss.gatech.edu/apply/
To contact us, please visit: http://www.gtcmt.gatech.edu/contact_us