apologies for doubles...
Anyone with problems with the deadlines, please email me,
-David
The 22nd International Conference on Auditory Display (ICAD 2016)
Australian National University, Canberra, July 2-8, 2016
CALL FOR PAPERS, POSTERS, SONIFICATIONS, INSTALLATIONS,
COMPOSITIONS, WORKSHOPS, PANELS AND DEMONSTRATIONS
Co-chairs: Dr David Worrall, Australian National University, and
Dr Stephen Barrass, University of Canberra
Please check the conference website for updates:
http://icad.org/icad2016/
ICAD is a highly interdisciplinary academic conference with relevance to
researchers, practitioners, musicians, and students interested in the
design of sounds to support tasks, improve performance, guide decisions,
augment awareness, and enhance experiences. It is unique in its singular
focus on auditory displays and the array of perception, technology, and
application areas that this encompasses. Like its predecessors, ICAD
2016 will be a single-track conference, open to all, with no membership
or affiliation requirements.
ICAD 2016, the 22nd International Conference on Auditory Display, will be
held at the Australian National University in Canberra, Australia, from
July 2 to 8, 2016. The conference venue is the ANU School of Music, in
the downtown centre of Canberra. Workshops and the graduate student
ThinkTank (doctoral consortium) will be on the weekend of July 2 and 3,
before the main conference.
Note that ICAD is back-to-back with the conference on New Interfaces for
Musical Expression (NIME) which will be held in Brisbane the following
week, so international attendees can attend two international
conferences for the one trip to Australia!
THEME: SONIC INFORMATION DESIGN
The designed world is rapidly replacing the natural world. Design has
been called the "third culture" and has been distinguished from the
Sciences and Arts by Nigel Cross in terms of
* /things to know/: the natural world in science, human experience in
art, and the artificial world in design.
* /ways of knowing/: rationality and objectivity in science,
reflection and subjectivity in art, and imagination and practicality
in design.
* /ways of finding out/: experiment and analysis in science, criticism
and evaluation in art, and modelling and synthesis in design.
This year's theme - Sonic Information Design - has the aspiration that
artificial sounds may be designed to make the world a better place. Like
other design disciplines, Sonic Information Design takes a synergetic
user-centred view of the relationship between artefacts, those who are
affected by them, and the social contexts in which they occur. A Design
orientation pays particular attention to the phenomenology of user
experience - including physical, cognitive, emotional, and aesthetic
issues; the relationship between form, function, and content; and
emerging concepts such as fun, playfulness and design futures.
Practice-based research is considered a generative process of
exploration, speculation and discovery, with outcomes that can be
provisional, contingent and aspirational, while aiming for richer, more
situated understandings that lead to the advancement of knowledge and
the proliferation of new realities.
Sonic Information Design draws on theoretical approaches from multiple
disciplines to guide hypothesis testing at multiple points during an
iterative process - what Bill Gaver calls "humble theory". Sonic
Information Design recognises usefulness as critical for evaluating
artefacts, and the perceptual alignment with data characteristics as
critical for effective designs.
ICAD 2016 invites contributions that take a design approach, introduce
design theory and apply design methods to Auditory Display and Data
Sonification, with a view to building a conceptually robust foundation
for Sonic Information Design.
TOPICS
Topics for ICAD 2016 include new and emerging themes, as well as more
traditional ICAD ones. Themes include but are not limited to:
* Sonic Information Design
* Stream-based Sonification and Auditory Scene Design
* Acoustic Sonification
* Small Data (personal, intimate) sonification and the quantified self
* Sonification, soundscape and screensound
* Sonification in Health and Environmental Data (soniHED)
* Musification - sonifications and music
* Sonification, personal fabrication and maker culture
* Sonification in the Internet of Things
* Auditory Data Mining and Big Data sonification
* 3D and Spatial Audio
* Aesthetics, Philosophy, and Culture of Auditory Displays
* Accessibility
* Applications
* Design Theory and Methods
* Evaluation and Usability
* Human Factors and Interaction
* Mappings from Data to Sound
* Psychology, Cognition, Perception, and Psychoacoustics
* Sonification and Exploration of Data through Sound
* Sound as Art
* Technologies and Tools
*Presentations will be organised according to four major themes:*
* Auditory Data Mining
* Interactive Sonification, including for sports and health
* Musification and Aesthetics
* Auditory Perception, including streaming, spatialisation and
inter/polymodality
KEY DATES (2016)
29 February Submission Deadline for Full Papers, Posters and
Extended Abstracts
14 March Submission Deadline for Workshop proposals
28 March Acceptance Notification of Papers, Posters and Extended
Abstracts
04 April Submission Deadline for Sonifications / Installations /
Compositions / Extended Abstracts
11 April Acceptance Notification of Workshop proposals
09 May Submission Deadline for Camera-Ready materials
16 May Acceptance Notification of Sonifications / Installations /
Compositions
2-3 July Conference ThinkTank and Workshops
4-8 July ICAD 2016 Conference Proper (Programme details TBA)
PUBLICATION
We are aiming to select papers for a special issue of a leading
journal. Details to follow.
WORKSHOPS
Proposals are invited for half-day and full-day workshops.
*Deadline for Submission of Workshop proposals: 14 March 2016*
INSTALLATIONS
Installations at ICAD 2016 will be afforded their own individual
space and, depending on the number of submissions, will likely be
featured for an entire day. Spaces available include
o A public but relatively quiet space
o An entrance foyer space
o A pub and a café space (with table-top Bluetooth speakers if
applicable)
*Deadline for submission of Installation proposals: 4 April 2016*
EXTRA-CURRICULAR ACTIVITIES
We have organised a rich array of natural and cultural activities to
ensure your trip down under is not all work and no play!
MORE INFORMATION
Visit the conference website (currently in development) for updates
and other information:
http://icad.org/icad2016/
CORRESPONDENCE
Please address correspondence to: icad2016chair _at_ icad.org
WELCOME!
We look forward to you joining us in making a wonderful conference!
--
------------------------------------------------------------------------
Prof. Dr. David Worrall
International Audio Laboratories Erlangen
Fraunhofer-Institut für Integrierte Schaltungen IIS
Email: david.worrall(a)iis.fraunhofer.de
Adjunct Senior Research Fellow
School of Music, Australian National University
david.worrall(a)anu.edu.au
personal website: avatar.com.au | NetSon: http://avatar.com.au/netson
Co-Chair, ICAD 2016
Canberra 2–8 July
icad.org/icad2016/
======================
2nd CALL FOR SUBMISSION
======================
((( MUME 2016 )))
4th International Workshop on Musical Metacreation
http://www.musicalmetacreation.org
June 27, 2016
MUME 2016 is to be held in Paris at Université Pierre et Marie Curie
(UPMC), in conjunction with the Seventh International Conference on
Computational Creativity, ICCC 2016.
=== Important Dates ===
Workshop submission deadline: May 1, 2016
Notification date: June 1, 2016
Camera-ready version: June 10, 2016
Workshop date: June 27, 2016
======================
We are delighted to announce the 4th International Workshop on Musical
Metacreation (MUME 2016) to be held June 27, 2016, in conjunction with the
Seventh International Conference on Computational Creativity, ICCC 2016.
MUME 2016 builds on the enthusiastic response and participation we received
for past editions of the MUME series:
- MUME 2012 (held in conjunction with AIIDE 2012 at Stanford):
http://musicalmetacreation.org/index.php/mume-2012/
- MUME 2013 (held in conjunction with AIIDE 2013 at Northeastern):
http://musicalmetacreation.org/index.php/mume-2013/
- MUME 2014 (held in conjunction with AIIDE 2014 in North Carolina):
http://musicalmetacreation.org/index.php/mume-2014/
Metacreation involves using tools and techniques from artificial
intelligence, artificial life, and machine learning, themselves often
inspired by cognitive and life sciences, for creative tasks. Musical
Metacreation explores the design and use of these tools for music making:
discovery and exploration of novel musical styles and content,
collaboration between human performers and creative software “partners”,
and design of systems in gaming and entertainment that dynamically generate
or modify music.
MUME aims to bring together artists, practitioners, and researchers
interested in developing systems that autonomously (or interactively)
recognize, learn, represent, compose, generate, complete, accompany, or
interpret music. As such, we welcome contributions to the theory or
practice of generative music systems and their applications in new media,
digital art, and entertainment at large.
Topics
======
We encourage paper and demo submissions on MUME-related topics, including
the following:
-- Models, Representation and Algorithms for MUME
---- Novel representations of musical information
---- Advances or applications of AI, machine learning, and statistical
techniques for generative music
---- Advances of A-Life, evolutionary computing or agent and multi-agent
based systems for generative music
---- Computational models of human musical creativity
-- Systems and Applications of MUME
---- Systems for autonomous or interactive music composition
---- Systems for automatic generation of expressive musical interpretation
---- Systems for learning or modeling music style and structure
---- Systems for intelligently remixing or recombining musical material
---- Online musical systems (i.e. systems with a real-time element)
---- Adaptive and generative music in video games
---- Techniques and systems for supporting human musical creativity
---- Emerging musical styles and approaches to music production and
performance involving the use of AI systems
---- Applications of musical metacreation for digital entertainment: sound
design, soundtracks, interactive art, etc.
-- Evaluation of MUME
---- Methodologies for qualitative or quantitative evaluation of MUME
systems
---- Studies reporting on the evaluation of MUME
---- Socio-economical Impact of MUME
---- Philosophical implication of MUME
---- Authorship and legal implications of MUME
Submission Format and Requirements
=================================
Please make submissions via the EasyChair system at:
https://easychair.org/conferences/?conf=mume2016 .
The workshop is a full day event that includes:
- Presentations of FULL TECHNICAL PAPERS (8 pages maximum)
- Presentations of POSITION PAPERS and WORK-IN-PROGRESS PAPERS (5 pages
maximum)
- Presentations of DEMONSTRATIONS (3 pages maximum) which present outputs
of systems (working live or offline).
All papers should be submitted as complete works. Demo systems should be
tested and working by the time of submission, rather than be speculative.
We encourage audio and video material to accompany and illustrate the
papers (especially for demos). We ask that authors arrange their own web
hosting of audio and video files, and give URL links to all such files
within the text of the submitted paper.
Submissions do not have to be anonymized, as we use single-blind reviewing.
Each submission will be reviewed by at least three program committee
members.
Workshop papers will be published as the MUME 2016 Proceedings and will be
archived with an ISBN. Submissions should be formatted using the
AAAI 2-column format; see instructions and templates here:
http://www.aaai.org/Publications/Author/author.php
Submissions should be uploaded via the MUME 2016 EasyChair portal:
https://www.easychair.org/conferences/?conf=mume2016
For complete details on attendance, submissions and formatting, please
visit the workshop website: http://www.musicalmetacreation.org
Presentation and Multimedia Equipment:
==========================================
We will provide a video projection system as well as a stereo audio system
for use by presenters at the venue. Additional equipment required for
presentations and demonstrations should be supplied by the presenters.
Contact the Workshop Chair to discuss any special equipment and setup
needs/concerns.
Attendance
=======================================
It is expected that at least one author of each accepted submission will
attend the workshop to present their contribution. We also welcome those
who would like to attend the workshop without presenting. Workshop
registration will be available through the ICCC2016 conference system.
Questions & Requests
======================================
Please direct any inquiries/suggestions/special requests to the Workshop
Chair, Philippe Pasquier (pasquier(a)sfu.ca).
Workshop Organizers
===================
Prof. Philippe Pasquier (Workshop Chair)
School of Interactive Arts and Technology (SIAT)
Simon Fraser University, Canada
Prof. Arne Eigenfeldt
School for the Contemporary Arts
Simon Fraser University, Canada
Dr. Oliver Bown
Design Lab, Faculty of Architecture, Design and Planning
The University of Sydney, Australia
Kıvanç Tatar (MUME Administration and Publicity Assistant)
School of Interactive Arts and Technology,
Simon Fraser University, Vancouver, Canada.
----------------------
http://www.musicalmetacreation.org
======================
--
Kıvanç Tatar
----------------------------------
Researcher, Metacreation Lab
Interactive Arts and Technology
Simon Fraser University, Vancouver, Canada
GSM : 1 778 858 6073
Email: kivanctatar(a)gmail.com
Website: https://kivanctatar.wordpress.com/
Howdy all,
I’m pleased to announce that the PdParty BETA has started!
After almost two months of work, there have been quite a number of improvements and I feel the app is now close to version 1.0 status.
I just need your help to find bugs, suggest improvements, and create demo scenes.
How this works
1. Send me your name & email*
2. I add you to the tester list
3. You should receive a notification email
4. Download the free TestFlight app from the App Store
5. Open TestFlight and install the latest PdParty build
*Those of you who participated in the alpha testing should already have received an email.
Info
PdParty User Guide <https://github.com/danomatika/PdParty/blob/master/doc/guide/PdParty_User_Gu…>
PdParty Composer Pack <http://docs.danomatika.com/PdParty_composerpack.zip>
Happy Patching!
--------
Dan Wilcox
@danomatika <https://twitter.com/danomatika>
danomatika.com <http://danomatika.com/>
robotcowboy.com <http://robotcowboy.com/>
Hi everyone,
I'd like to announce that we've released version 1.0.0 of Pd for Android on
a Maven repository on JCenter.
No changes were made to the API of pd-for-android, only to the way in
which it can be used. It is now much simpler to integrate the library into
Android apps, and the project can be used with Android Studio, which has
replaced Eclipse as the standard tool for developing Android apps.
For further information please see the project's page:
https://github.com/libpd/pd-for-android
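For those wondering what the new workflow looks like, here is a minimal,
untested sketch of an Activity that loads and runs a patch with
pd-for-android. The Gradle coordinates in the comment, the patch name
"test.pd", and the "freq" receiver are placeholders and assumptions on my
part; check the project page above for the actual setup instructions.

// Hypothetical Gradle dependency; verify the coordinates on the project page:
//   compile 'org.puredata.android:pd-core:1.0.0'
import android.app.Activity;
import android.os.Bundle;
import java.io.File;
import java.io.IOException;
import org.puredata.android.io.AudioParameters;
import org.puredata.android.io.PdAudio;
import org.puredata.core.PdBase;

public class PdDemoActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        try {
            AudioParameters.init(this);               // query device audio capabilities
            int srate = AudioParameters.suggestSampleRate();
            PdAudio.initAudio(srate, 0, 2, 8, true);  // no input, stereo out, 8 ticks/buffer
            // "test.pd" is a placeholder; copy your patch into local storage first.
            PdBase.openPatch(new File(getFilesDir(), "test.pd"));
            PdBase.sendFloat("freq", 440);            // arrives at [r freq] in the patch
        } catch (IOException e) {
            finish();                                 // audio or patch init failed
        }
    }

    @Override protected void onResume() { super.onResume(); PdAudio.startAudio(this); }
    @Override protected void onPause()  { super.onPause();  PdAudio.stopAudio(); }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        PdAudio.release();
        PdBase.release();
    }
}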
Best wishes,
Tal Kirshboim
Hi everyone,
I'm looking for a creative pd composer/coder for a big project in Rio de
Janeiro, Brazil. Remote working is not a problem.
It would be awesome to find someone who is experienced in transforming
multiple sources of input into something musical / pleasant to hear,
ranging from calm and soothing to something fast and furious.
If you're available for this challenge, I'd love to see your portfolio.
Please get in touch!
All the best,
Tassio Knop
--
Tássio Knop | Software Developer | tassio.knop(a)ydreams.com.br |
+55 21 98327-7562
YDreams Brasil | ydreams.com.br <http://www.ydreams.com.br/> | YouTube
<https://youtube.com/ydreams> | Facebook <https://fb.me/ydreams> | Instagram
<https://instagram.com/ydreamsbrasil/> | Twitter
<https://twitter.com/ydreams> | +55 21 2225-7029
======================
CALL FOR PARTICIPATION
======================
((( MUME 2016 )))
4th International Workshop on Musical Metacreation
http://www.musicalmetacreation.org
June 27, 2016
MUME 2016 is to be held in Paris at Université Pierre et Marie Curie
(UPMC), in conjunction with the Seventh International Conference on
Computational Creativity, ICCC 2016.
=== Important Dates ===
Workshop submission deadline: May 1, 2016
Notification date: June 1, 2016
Camera-ready version: June 10, 2016
Workshop date: June 27, 2016
======================
(The announcement text, topics, submission format and requirements,
presentation equipment, attendance details, and organizers are identical
to the 2nd Call for Submission above.)
Best regards,
--
Kıvanç Tatar
----------------------------------
Researcher, Metacreation Lab
Interactive Arts and Technology
Simon Fraser University, Vancouver, Canada
GSM : 1 778 858 6073
Email: kivanctatar(a)gmail.com
Website: https://kivanctatar.wordpress.com/
Hi Everyone,
Here is a fun project based on libpd, and thus on Pd: ppp.mgsx.net
This work is based on a fork of droidparty by Chris McCormick: it helps
you build binary apps for Android, and it includes a standard WiFi MIDI
clock in your patch. This means that every app published with this
framework can be synced with any other, or even with any other kind of
MIDI-compliant music gear.
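As a rough illustration of why a shared MIDI clock is enough to keep apps
in step (a generic sketch, not code from this framework): the MIDI beat
clock sends the real-time byte 0xF8 twenty-four times per quarter note, so
any receiver can recover the tempo from the tick stream alone.

// Generic illustration, not part of ppp: deriving tempo from MIDI beat clock.
// 0xF8 arrives at 24 ticks per quarter note (PPQN), so
// BPM = 60 / (secondsPerTick * 24).
public class MidiClockFollower {
    private static final int PPQN = 24;    // MIDI clock resolution
    private long lastTickNanos = -1;
    private double bpm = 0;

    /** Feed each incoming MIDI status byte; only clock ticks (0xF8) matter here. */
    public void onMidiByte(int statusByte) {
        if (statusByte != 0xF8) return;
        long now = System.nanoTime();
        if (lastTickNanos >= 0) {
            double secondsPerTick = (now - lastTickNanos) / 1e9;
            bpm = 60.0 / (secondsPerTick * PPQN);  // ~20.8 ms per tick -> 120 BPM
        }
        lastTickNanos = now;
    }

    public double getBpm() { return bpm; }
}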
We still have some documentation to write up, but we have a good start, as
well as a video and a few sample apps available for free in the Android
Play Store.
Feedback is welcome :)
I hope you like it as much as we enjoyed making it. Have fun!
Apologies for x-posting,
This holiday release brings you:
* -legacy flag that provides 100% backwards compatibility with iemgui
objects
* gfsm library
* added support for $0 functionality in messages
* support for Intel Haswell and Skylake CPUs
* ability to use # in labels
* ability to use multiple $n arguments in labels
* fixed bug in keyboard autorepeat and cleaned up [key] object to support
autorepeat filtering
* added autotune~ external and its K12 module
* synced cyclone and iem libraries
* other small fixes and cosmetic improvements
For a raw (unedited) changelog and a more detailed overview, please visit:
https://puredata.info/downloads/Pd-L2Ork/releases/20151219
To download pd-l2ork:
http://l2ork.music.vt.edu/main/make-your-own-l2ork/software/
NB: Currently only an Ubuntu 15.10 64-bit build is available, with 32-bit
and Raspberry Pi builds forthcoming.
About Pd-L2Ork
Pd-L2Ork is a fork of the ubiquitous Pure Data, focusing on an improved
user interface, an expanded collection of externals, and an advanced
SVG-enabled graphical front-end. It was originally introduced as the core
infrastructure for the Linux Laptop Orchestra (L2Ork,
http://l2ork.icat.vt.edu), and has since expanded to include a K-12
learning module with a unique learning environment offering adaptable
granularity, which has been utilized in over a dozen maker workshops and
initiatives, including the Raspberry Pi Orchestra program for middle
school children introduced in the summer of 2014. Today, pd-l2ork is being
developed by a growing number of international collaborators and
contributors.
For additional info on L2Ork and pd-l2ork:
http://l2ork.music.vt.edu
Best,
--
Ivica Ico Bukvic, D.M.A.
Associate Professor
Computer Music
ICAT Senior Fellow
Director -- DISIS, L2Ork
Virginia Tech
School of Performing Arts – 0141
Blacksburg, VA 24061
(540) 231-6139
ico(a)vt.edu
www.performingarts.vt.edu
disis.icat.vt.edu
l2ork.icat.vt.edu
ico.bukvic.net
[please forward]
Georgia Tech is now accepting applications for the MS and PhD programs in music technology for matriculation in August 2016. All PhD students, and a limited number of MS students, receive graduate research assistantships that cover tuition and pay a competitive monthly stipend. The deadline for applications is January 10, 2016.
The MS in Music Technology is a two-year program that instills in students the theoretical foundation, technical skills, and creative aptitude to design the disruptive technologies that will enable new modes of music creation and consumption in a changing industry. Students take courses in areas such as music information retrieval, music perception and cognition, signal processing, interactive music, the history of electronic music, and technology ensemble. They also work closely with faculty on collaborative research projects and on their own MS project or thesis. Recent students in the program have worked and/or interned at companies such as Apple, Avid, Dolby, Bose, Gracenote, Rdio, Sennheiser, Ableton, Echo Nest, Pandora, Moog, and Smule, and gone on to PhD studies. Applicants are expected to have an undergraduate degree in music, computing, engineering, or a related discipline, and they should possess both strong musical and technical skills.
Students in the PhD program in Music Technology pursue individualized research agendas in close collaboration with faculty in areas such as interactive music, robotic musicianship, music information retrieval, digital signal processing, mobile music, network music, and music education, focusing on conducting and disseminating novel research with a broad impact. PhD students are also trained in research methods, teaching pedagogy, and an interdisciplinary minor field as they prepare for careers in academia, at industry research labs, or in their own startup companies. PhD applicants are expected to hold a Master's degree in music technology or an allied field, such as computing, music, engineering, or media arts and sciences. All PhD applicants must demonstrate mastery of core master's-level material in music technology, including music theory, performance, composition, and/or analysis; music information retrieval; digital signal processing and synthesis; interactive music systems design; and music cognition.
Both the MS and PhD programs are housed within the School of Music at Georgia Tech, in close collaboration with the Georgia Tech Center for Music Technology (GTCMT). The GTCMT is an international center for creative and technological research in music, focusing on the development and deployment of innovative musical technologies that transform the ways in which we create and experience music. Its mission is to provide a collaborative framework for committed students, faculty, and researchers from all across campus to apply their musical, technological, and scientific creativity to the development of innovative artistic and technological artifacts.
Core faculty in the music technology program include Gil Weinberg (robotic musicianship, mobile music, and sonification), Jason Freeman (participatory and collaborative systems, education, and composition), Alexander Lerch (music information retrieval and digital signal processing), Timothy Hsu (acoustics), Frank Clark (multimedia and network music), and Chris Moore (recording and production).
More information on the MS program is at: http://www.music.gatech.edu/academics/graduate/overview
More information on the PhD program is at: http://www.music.gatech.edu/academics/phd/overview
More information on the GTCMT is at: http://www.gtcmt.gatech.edu
To apply, please visit: http://www.gradadmiss.gatech.edu/apply/
To contact us, please visit: http://www.gtcmt.gatech.edu/contact-us