ICAD 2019 — Call for Student ThinkTank Applications (Student Research
Consortium)
25th International Conference on Auditory Display
Northumbria University, Newcastle-upon-Tyne, UK
June 23–27, 2019
https://icad2019.icad.org/
Date: Sunday June 23rd, 2019
Time: 9:00 AM - 6:00 PM
Applications due: Friday March 29th, 2019
The Student ThinkTank (Student Research Consortium) is a full-day
meeting for students doing graduate or undergraduate work on auditory
display. It will be held on June 23rd. Selected students will give
formal presentations and have the opportunity to discuss research ideas
and problems with fellow researchers in the field. The program will also
include career-related activities.
The ThinkTank is your chance to set a whole roomful of auditory display
researchers to work on your particular research issue, helping you
choose which method, tool, technique, or principle to use and saving you
from heading down a dead end. Besides providing thoughtful insights into
your particular project, the ThinkTank will foster the friendships and
networks among fellow students and researchers that are essential to an
international community for auditory display.
ICAD and the NSF will provide financial assistance for selected
applicants from US universities. Any registered ICAD participant can
join as an observer free of charge. If you’d like to attend as an
observer, please send an email to icad2019thinktank@icad.org.
How to apply
To apply, please submit the following four items:
1. Cover Letter
Your cover letter should include the following information:
* Statement of interest in participating in the ThinkTank.
* Full name of the School and Department in which you are studying.
* Current stage in your academic program (e.g., completed MS, 2 years
into PhD).
* Name of the supervising professor.
* Your full contact information: address, phone number, and email
address.
* Title of the research and keywords pertinent to the research.
* The URL of your web page (if any).
2. Two-page Research Interest Summary
The body of the research summary should provide a clear overview of
the research that you have already conducted, are in the process of
completing, or have planned, as well as ideas for research that you
would like to pursue. The statement should discuss the relevance and
potential impact of the research on the field, as well as its broader
impact in the world. You are encouraged to include the following
sections:
* Introduction and problem description.
* Brief background and overview of the existing literature.
* Goal of the research.
* Current status of the research.
* Preliminary results accomplished, if any.
* Broader impact of this research.
* Open issues, topics to be discussed at the ThinkTank, and expected
outcomes from participating in the ThinkTank. For example, you might
want to discuss anything from choosing the right tools and techniques,
to choosing research topics, to how to organize your thesis, to
philosophical or aesthetic issues – anything where you could benefit
from the perspectives and experience of other students and experts.
3. Letter of Recommendation
Enclose a letter of recommendation written by your graduate
advisor/thesis advisor/supervisor. Your advisor/supervisor is asked
to verify that you are a graduate student working in the area of
sonification or auditory display. (For undergraduate submissions, the
letter should verify enrolled student status and include information
about the undergraduate research project.) Advisors are also
encouraged to include an assessment of the current status of your
research and an indication of the expected date of its completion. In
addition, your advisor is encouraged to indicate what they hope you
would both gain and contribute by participating in the ThinkTank.
4. Curriculum Vitae (CV)
Please prepare a one-page CV that describes your background, relevant
experience, and research accomplishments.
Selection and Presentation:
Up to 15 proposals will be selected for formal presentation and
“think-tanking”. The selected problems will be representative examples
of widespread problems, or will be particularly interesting or
challenging (as determined by the expert Panel). Each participant will
prepare a 15-minute presentation for all ThinkTank attendees. The Panel
will present a report on the ThinkTank in the ICAD 2019 conference program.
All those who submit a problem may participate in the ThinkTank to watch
the presentations and join in the discussions; however, only authors of
selected submissions will be able to make a formal oral presentation.
Even if your problem is not selected, you will leave with a sense of what
other students are doing and how they are approaching problems in
auditory display, as well as new friends to talk with about your project
during the rest of the ICAD 2019 conference and in the future. If your
problem is selected, you may also leave with the breakthrough you need!
ThinkTank Panel:
The ThinkTank Panel comprises several international researchers who work
across a range of disciplines covered by auditory displays. The
confirmed Panel members are:
* Dr. Areti Andreopoulou (chair), University of Athens
* Dr. Bruce Walker, Georgia Institute of Technology
* Dr. Matti Gröhn, Stereoscape, Finland
* Derek Brock, United States Naval Research Laboratory
* Dr. Myounghoon Jeon, Virginia Tech
* TBC
* TBC
How to Submit:
Please email your proposal by March 29th, 2019 to the ThinkTank chair at
icad2019thinktank@icad.org. If you have any questions, please feel free
to email us.
--
Areti Andreopoulou, PhD
Assistant Professor in Music Technology
Laboratory of Music Acoustics and Technology (LabMAT)
Department of Music Studies
National and Kapodistrian University of Athens
labmat.music.uoa.gr
===================
MUME 2019 - EXTENDED DEADLINE: MARCH 24, 2019
===================
======================
3rd CALL FOR SUBMISSIONS
======================
((( MUME 2019 )))
The 7th International Workshop on Musical Metacreation
http://musicalmetacreation.org
June 17-18, 2019, Charlotte, North Carolina
MUME 2019 is to be held at the University of North Carolina Charlotte in
conjunction with the 10th International Conference on Computational
Creativity, ICCC 2019 (http://computationalcreativity.net/iccc2019).
=== Important Dates ===
EXTENDED Workshop submission deadline: March 24, 2019
Notification date: April 28, 2019
Camera-ready version: May 19, 2019
Workshop dates: June 17-18, 2019
======================
Metacreation applies tools and techniques from artificial intelligence,
artificial life, and machine learning, themselves often inspired by
cognitive and natural science, to creative tasks. Musical Metacreation
studies the design and use of these generative tools and theories for music
making: discovery and exploration of novel musical styles and content,
collaboration between human performers and creative software “partners”,
and design of systems in gaming and entertainment that dynamically generate
or modify music.
MUME intends to bring together artists, practitioners, and researchers
interested in developing systems that autonomously (or interactively)
recognize, learn, represent, compose, generate, complete, accompany, or
interpret music. As such, we welcome contributions to the theory or
practice of generative music systems and their applications in new media,
digital art, and entertainment at large.
Topics
===========================
We encourage paper and demo submissions on MUME-related topics, including
the following:
-- Models, Representation and Algorithms for MUME
---- Novel representations of musical information
---- Advances or applications of AI, machine learning, and statistical
techniques for generative music
---- Advances of A-Life, evolutionary computing or agent and multi-agent
based systems for generative music
---- Computational models of human musical creativity
-- Systems and Applications of MUME
---- Systems for autonomous or interactive music composition
---- Systems for automatic generation of expressive musical interpretation
---- Systems for learning or modeling music style and structure
---- Systems for intelligently remixing or recombining musical material
---- Online musical systems (i.e. systems with a real-time element)
---- Adaptive and generative music in video games
---- Generative systems in sound synthesis, or automatic synthesizer design
---- Techniques and systems for supporting human musical creativity
---- Emerging musical styles and approaches to music production and
performance involving the use of AI systems
---- Applications of musical metacreation for digital entertainment: sound
design, soundtracks, interactive art, etc.
-- Evaluation of MUME
---- Methodologies for qualitative or quantitative evaluation of MUME
systems
---- Studies reporting on the evaluation of MUME
---- Socio-economic impact of MUME
---- Philosophical implications of MUME
---- Authorship and legal implications of MUME
Submission Format and Requirements
=================================
Please make submissions via the EasyChair system at:
https://easychair.org/conferences/?conf=mume2019
The workshop is a day-and-a-half event that includes:
- Presentations of FULL TECHNICAL PAPERS (8 pages maximum)
- Presentations of POSITION PAPERS and WORK-IN-PROGRESS PAPERS (5 pages
maximum)
- Presentations of DEMONSTRATIONS (3 pages maximum), which present outputs
of systems (working live or offline).
All papers should be submitted as complete works. Demo systems should be
tested and working by the time of submission, rather than speculative.
We encourage audio and video material to accompany and illustrate the
papers (especially for demos). We ask that authors arrange their own web
hosting of audio and video files and give URL links to all such files
within the text of the submitted paper.
Submissions do not have to be anonymized, as we use single-blind reviewing.
Each submission will be reviewed by at least three program committee
members.
Workshop papers will be published as the MUME 2019 Proceedings and will be
archived with an ISBN. We created a new MUME 2019 template based on the
AAAI template; please use it to format your paper, and feel free to edit
the licence entry (at the bottom left of the first page). The MUME 2019
LaTeX and Word templates are available at:
http://musicalmetacreation.org/buddydrive/file/templates_mume2019/
Submissions should be uploaded via the MUME 2019 EasyChair portal:
https://easychair.org/conferences/?conf=mume2019
For complete details on attendance, submissions and formatting, please
visit the workshop website: http://musicalmetacreation.org
Presentation and Multimedia Equipment:
==========================================
We will provide a video projection system as well as a stereo audio system
for use by presenters at the venue. Additional equipment required for
presentations and demonstrations should be supplied by the presenters.
Contact the Workshop Chair to discuss any special equipment and setup
needs/concerns.
Attendance
=======================================
It is expected that at least one author of each accepted submission will
attend the workshop to present their contribution. We also welcome those
who would like to attend the workshop without presenting. Workshop
registration will be available through the ICCC 2019 conference system.
History
=======================================
MUME 2019 builds on the enthusiastic response and participation we received
in past editions of the MUME series:
- MUME 2012 (held in conjunction with AIIDE 2012 at Stanford):
  http://musicalmetacreation.org/mume-2012/
- MUME 2013 (held in conjunction with AIIDE 2013 at Northeastern):
  http://musicalmetacreation.org/mume-2013/
- MUME 2014 (held in conjunction with AIIDE 2014 at North Carolina):
  http://musicalmetacreation.org/mume-2014/
- MUME 2016 (held in conjunction with ICCC 2016 at Université Pierre et
  Marie Curie):
  http://musicalmetacreation.org/mume-2016/
- MUME 2017 (held in conjunction with ICCC 2017 at Georgia Institute of
  Technology):
  http://musicalmetacreation.org/mume-2017/
- MUME 2018 (held in conjunction with ICCC 2018 at Salamanca University):
  http://musicalmetacreation.org/mume-2018/
Questions & Requests
======================================
Please direct any inquiries/suggestions/special requests to one of the
Workshop Chairs, Bob (keller@cs.hmc.edu) or Bob (bobs@kth.se).
Workshop Organizers
===================
Program Co-Chair
Robert M. Keller, Professor
Computer Science Department
Harvey Mudd College
301 Platt Blvd
Claremont, CA 91711 USA
https://www.cs.hmc.edu/~keller/
keller@cs.hmc.edu
Program Co-Chair
Bob L. Sturm, Associate Professor
Tal, Musik och Hörsel (Speech, Music and Hearing)
Lindstedtsvägen 24
School of Electronic Engineering and Computer Science
Royal Institute of Technology KTH, Sweden
https://www.kth.se/profile/bobs
bobs@kth.se
Concert Chair
Gus Xia, Assistant Professor
Computer Science
NYU Shanghai
gxia@nyu.edu
Publicity Chair
Dr. Oliver Bown
Senior Lecturer
Faculty of Art & Design, The University of New South Wales
Room AG12, Cnr Oxford St & Greens Rd,
Paddington, NSW, 2021, Australia
o.bown@unsw.edu.au
----------------------
http://musicalmetacreation.org
======================
MUME Steering Committee
Andrew Brown, Griffith University, Australia
Michael Casey, Dartmouth College, US
Arne Eigenfeldt, Simon Fraser University, Canada
Anna Jordanous, University of Kent, UK
Bob Keller, Harvey Mudd College, US
Róisín Loughran, University College Dublin, Ireland
Philippe Pasquier, Simon Fraser University, Canada
Benjamin Smith, Purdue University Indianapolis, USA
--
Kıvanç Tatar
----------------------------------
PhD Candidate
Interactive Arts + Technology
Simon Fraser University, Vancouver, Canada
Email: kivanctatar@gmail.com
Website: https://kivanctatar.com/
Howdy, please allow me to share a rather personal and detailed report with
this dear list about this release: the Cyclone library (a set of Pure Data
objects cloned from Max/MSP) has finally been upgraded to version 0.3! The
main goal of Cyclone 0.3 was to update it to Max 7. Max 8 is out now, and
there are minor updates that could be included in Cyclone, which may be
ported in a possible future 0.4 release. Cyclone 0.3 also provides numerous
fixes, several new objects, and newly written documentation!
In the last release (Cyclone 0.3 release candidate 1), we noted how we had
finished updating our last object to Max 7! The catch was that we still
needed to update [comment], which won't be updated to Max 6+. Now it's
been updated, but it's not yet fully compliant with Max 5 - nonetheless it's
"acceptable", and further work can be taken care of in future 0.3.x releases.
Anyhow, there were also other updates and fixes, and the thing is, with what
we have now, I just feel comfortable and happy to state: "*Mission
completed: the Cyclone 0.3 stable release is out!*".
This took three years. In fact, this release happens on the exact day of the
3rd anniversary of our repository. I was pretty clueless about how to code
externals when we started, and I still struggle a lot - even though I was
able to learn a thing or two in the meantime :) - which is to say that if
it weren't for my colleagues Derek and Matt, none of this would have been
possible! I feel I have to praise and thank them (which I can't do enough),
and I don't want to outshine them (since I'm the usual spokesman of the
project). We made a great team! My limitations actually came in handy, as I
could just deal with what I could, which was the most tedious and manual
labor that this project needed: revising every object, looking for bugs,
cleaning things up, and dealing with the less complicated stuff while I
learned how to code, etc. Then, after a lot of attention, I could deliver
a good briefing so they could help me fix the more hardcore stuff without
all that hassle.
After this long period, it's also worth saying that we lost steam. I can
only speak for myself, and I'm now taking care of my own library (thanks to
all I've learned dealing with Cyclone, I must say). I don't know much and
can't promise that Cyclone will keep receiving the same attention from now
on, but don't expect it to be abandoned ;) The project is obviously open
for collaboration and any help is welcome. We actually had a very good and
recent contribution from Diego Barrios Romero, who made it possible for
anyone to compile Cyclone as a single library! Apparently he needed this to
load Cyclone with libpd more conveniently. Further Cyclone releases may
also bring the option of a single compiled binary. Instructions on how to
compile Cyclone as a single library, and more about the project in general,
can be found here:
https://github.com/porres/pd-cyclone/blob/cyclone0.3/README.md Also check
the changelog for a full list of changes:
https://github.com/porres/pd-cyclone/blob/cyclone0.3/documentation/extra_fi…
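For anyone curious what "loading Cyclone with libpd" looks like in
practice, here is a minimal C sketch. It is only an illustration under two
assumptions: that you link against the single-library build, and that this
build exposes a cyclone_setup() entry point registering all the classes at
once (the actual symbol name and build flags are documented in the README
linked above). The libpd calls themselves are the standard libpd C API.

/* Hypothetical sketch: embedding Pd via libpd and loading the
 * single-binary cyclone build. cyclone_setup() is an assumption --
 * check the pd-cyclone README for the actual entry point. */
#include <stdio.h>
#include "z_libpd.h"

void cyclone_setup(void); /* assumed: provided by the single-library build */

int main(void) {
    libpd_init();                   /* initialise the embedded Pd instance */
    cyclone_setup();                /* register all cyclone classes at once */
    libpd_init_audio(1, 2, 44100);  /* 1 input channel, 2 outputs, 44.1 kHz */

    /* open a patch that uses cyclone objects */
    void *patch = libpd_openfile("test.pd", ".");
    if (!patch) { fprintf(stderr, "could not open patch\n"); return 1; }

    /* turn on DSP: equivalent to the message [; pd dsp 1( */
    libpd_start_message(1);
    libpd_add_float(1.0f);
    libpd_finish_message("pd", "dsp");

    /* run one DSP tick: 64 frames per channel, interleaved buffers */
    float inbuf[64] = {0}, outbuf[128] = {0};
    libpd_process_float(1, inbuf, outbuf);

    libpd_closefile(patch);
    return 0;
}

From vanilla Pd, by contrast, no code is needed: a single-binary build
would be loaded with "-lib cyclone" on the command line or in the startup
preferences, like any other multi-object library.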
Cyclone is now available for the main architectures via Pd (Help => Find
Externals) - it might take a while until it shows up in the system. You can
also get it from here:
https://github.com/porres/pd-cyclone/releases/tag/cyclone0.3
Lastly, I can't finish this message without thanking the Pd community in
general (especially Dan and IOhannes for being big players and great
leaders), but mostly Miller Puckette, of course, without whom there'd be no
Max or Pd (and therefore no Cyclone). We obviously also need to thank the
original author of Cyclone, Krzysztof Czaja, who created this important
library for Pd! Hans-Christoph Steiner needs to be honored for maintaining
this library and keeping it in Pd-extended for a long time, and later Fred
Jan Kraan was also very important in starting to fix and update this
library after so long, in the Cyclone 0.2 releases.
Cheers
17th Beta release of ELSE 1.0 - now with a total of 294 objects! This needs
Pd 0.49-0 or above! My Live Electronics Tutorial has also been updated!
So, my last update from a few days ago had a buggy [conv~] object (which
performs partitioned convolution). This release fixes it, and there are
other things too; see
https://github.com/porres/pd-else/releases/tag/v1.0-beta17 for more
details. You can also get binaries directly from Pd (Help => Find Externals).
Anyway, I can now finally say that my Live Electronics Tutorial depends
solely on the ELSE library. I've also made important revisions to it, and
I'm now reorganizing things in a new and unfinished volume 3! Check it
out:
https://github.com/porres/Live-Electronics-Tutorial/releases/tag/v-1.0beta-7
cheers
16th Beta release of ELSE 1.0 - now with a total of 293 objects! This needs
Pd 0.49-0 or above! There's not much new in this release. The highlights are
three new objects: [conv~], [rec~] and [sample~]. See
https://github.com/porres/pd-else/releases/tag/v1.0-beta16 for more
details. You can also get binaries directly from Pd (Help => Find Externals).
The [conv~] object is a partitioned-convolution abstraction, but a compiled
object should come up sooner or later. One way or another, this object
removes the last dependency of my Live Electronics Tutorial, which now
depends solely on the ELSE library - and that's what makes this release
special! Check it out:
https://github.com/porres/Live-Electronics-Tutorial/releases/tag/v-1.0beta-6
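As an aside for readers new to the technique: partitioned convolution
splits a long impulse response into short blocks so the convolution can be
computed block by block, with latency bounded by the partition size rather
than the whole impulse-response length. Below is a rough C sketch of the
idea only - it is not ELSE's actual code, and it works in the time domain
for clarity, whereas real implementations like [conv~] multiply spectra per
partition with FFTs; the partitioning and overlap bookkeeping are the same
either way.

/* Illustrative sketch of uniform partitioned convolution (time domain).
 * NOT the actual implementation of ELSE's [conv~]. */
#include <stdlib.h>
#include <string.h>

#define B 64  /* partition size, matching Pd's block size */

typedef struct {
    int nparts;   /* number of impulse-response partitions */
    float *h;     /* impulse response, zero-padded to nparts * B samples */
    float *tail;  /* pending output: (nparts + 1) * B samples */
} pconv;

pconv *pconv_new(const float *ir, int irlen) {
    pconv *c = malloc(sizeof(pconv));
    c->nparts = (irlen + B - 1) / B;
    c->h = calloc(c->nparts * B, sizeof(float));
    memcpy(c->h, ir, irlen * sizeof(float));
    c->tail = calloc((c->nparts + 1) * B, sizeof(float));
    return c;
}

/* Process one B-sample input block into one B-sample output block.
 * Partition p is delayed by p blocks, so its product with the input
 * is accumulated at offset p * B in the pending-output buffer. */
void pconv_block(pconv *c, const float *in, float *out) {
    int tlen = (c->nparts + 1) * B;
    for (int p = 0; p < c->nparts; p++) {
        const float *hp = c->h + p * B;
        for (int n = 0; n < B; n++)
            for (int k = 0; k < B; k++)
                c->tail[p * B + n + k] += in[n] * hp[k];
    }
    memcpy(out, c->tail, B * sizeof(float));                   /* emit block */
    memmove(c->tail, c->tail + B, (tlen - B) * sizeof(float)); /* shift left */
    memset(c->tail + tlen - B, 0, B * sizeof(float));          /* clear end */
}

Replacing the inner time-domain products with per-partition spectrum
multiplications is what makes the method cheap enough for long impulse
responses at low latency.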
cheers
(Apologies for cross-postings)
ICAD 2019 — Call for Submission of Papers, Extended Abstracts, Workshops, and Tutorials
25th International Conference on Auditory Display
Northumbria University, Newcastle-upon-Tyne, UK
23–27 June, 2019
https://icad2019.icad.org
https://twitter.com/ICAD2019
The theme/special focus of ICAD 2019 is "Digital Living: Sonification for Everyday Life".
Digital technology and artificial intelligence are becoming embedded in the objects all around us, from consumer products to the built environment. Everyday life happens where People, Technology, and Place intersect. Our activities and movements are increasingly sensed, digitised, and tracked. Of course, the data generated by modern life is a hugely important resource, not just for the companies that use it for commercial purposes: it can also be harnessed for the benefit of the individuals it concerns. Sonification research that has hit the news headlines in recent times has often been related to big science done at large publicly funded labs, with little impact on the day-to-day lives of people. At ICAD 2019 we want to explore how auditory display technologies and techniques may be used to enhance our everyday lives. From giving people access to what’s going on inside their own bodies, to the human concerns of living in a modern networked and technological city, the range of opportunities for auditory display is wide.
PAPERS AND EXTENDED ABSTRACTS
The ICAD 2019 committee is seeking papers and extended abstracts that will contribute to knowledge of how sonification can support everyday life.
For details on topics of interest, proposal format, submission instructions, and additional conference information please visit https://icad2019.icad.org/call-for-participation
HYUNDAI MOTOR COMPANY DESIGN CHALLENGE ON AUDITORY UX DESIGN FOR AUTONOMOUS VEHICLES AND FUTURE MOBILITY
In addition to the general papers call, the organizing committee is pleased to announce a special call for papers, videos and demonstrations of prototypes for the Hyundai Motor Company-sponsored design challenge on "Auditory User eXperience Design for Autonomous Vehicles and Future Mobility". Submissions to this special category could cover:
- Auditory applications for infotainment for autonomous vehicles
- Auditory interactions between connected vehicles
- Auditory situation awareness in autonomous vehicles
- Auditory interactions between autonomous vehicles and pedestrians
- Auditory user interfaces for blind drivers in autonomous vehicles
- Auditory displays for hand-over/take-over request in semi-autonomous vehicles
- Sonification for emotion regulation/meditation of drivers in autonomous vehicles
- Sonic interaction to build the feeling of trust as well as safety
- Sonification for the use cases of electric/hybrid vehicles (e.g., charging, low battery, etc.)
- Driver (or occupant) & vehicle agent dialogue design
- Driving as playing instruments (driving sonification)
- In-vehicle application of spatial (or 3D) sounds for new experiences
WORKSHOPS AND TUTORIALS
ICAD workshops and tutorials provide in-depth opportunities for conference attendees to discuss and explore important aspects of the field of auditory display with like-minded researchers and practitioners. Sessions can range from applications and programming methodologies to interdisciplinary research skills, emerging research areas, and challenge problems, to sonification/compositional Maker sessions.
Space, facilities, technical support, and the number of workshop and tutorial sessions that can be accepted are limited, so early submission of proposals is encouraged. Workshop and tutorial organizers are expected to collaborate with the conference committee, issue calls for participation, gather and review contributed materials (if appropriate), and decide upon the final programme for their session.
IMPORTANT DATES:
- Friday 15 March 2019 — Deadline for submission of full papers and the HMC design challenge
- Friday 29 March 2019 — Workshop/tutorial proposals due
- Friday 26 April 2019 — Notification of decisions
- Friday 3 May 2019 — Extended abstract submission (some full paper submissions may be recommended for the extended abstract category)
Papers Chair:
Tony Stockman
icad2019papers@icad.org
Workshop/Tutorial Chair:
Derek Brock
icad2019workshops@icad.org
Conference Chairs:
Paul Vickers and Matti Gröhn
icad2019chairs@icad.org
ABOUT ICAD
First held in 1992, ICAD is a highly interdisciplinary conference with relevance to researchers, practitioners, artists, and graduate students working with sound to convey and explore information. The conference is unique in its specific focus on auditory displays and the range of interdisciplinary issues related to their use. Like its predecessors, ICAD 2019 will be a single-track conference, open to all, with no membership or affiliation requirements.
--
Paul Vickers
Co-chair of ICAD 2019: https://icad2019.icad.org
Ofelia v2.0.4 has been released and is already uploaded to Deken.
It mostly fixes minor bugs. Please update if you haven’t.
Merry Christmas!
Changes:
- Fixed an Array:setTable() bug on Windows.
- Renamed Array:getTable() and Array:setTable() to Array:get() and
Array:set().
- Added an additional argument to Array:get() and Array:set() to set the
onset value.
- Fixed a crashing issue when returning a large table as a list.
- Disabled printing of the bug-fix version on the Pd console.
- Added an “examples/pd/misc” example to show emulation of various Pd objects.
For more info: https://github.com/cuinjune/ofxOfelia
Zack
The ELSE library has been updated and ELSE 1.0 Beta 15 has been released!
Binaries are up in Deken.
There are many new objects, for a total of 286 - more details about the
release at: https://github.com/porres/pd-else/releases/tag/v1.0-beta15
I also have a Live Electronics Tutorial that depends on ELSE; it gets
updated at every new ELSE release to reflect what's new or to stay
compatible with any breaking changes. Here's the release that depends on
ELSE 1.0 Beta 15:
https://github.com/porres/Live-Electronics-Tutorial/releases/tag/v-1.0beta-5
This might be the last update of the year, so merry Xmas and happy holidays.
cheers