An update to the ELSE library is out, mostly with updates to existing
objects and just a few new ones: three objects for managing lists
([reverse], [rotate], and [iterate]) and a new drum sequencer object
([drum.seq]). Please note that [else/drum.seq] and [else/mtx.ctl] have
issues in Pd 0.49 (and only in 0.49), but this is already fixed for the
next 0.50 release. See
https://github.com/porres/pd-else/releases/tag/v1.0-beta21 for a complete
changelog of what's new in ELSE 1.0 beta 21, and wait a bit for all
binaries to become available in the repository and via deken (only the
Raspberry Pi binaries are missing so far). The Live Electronics Tutorial
that relies on the ELSE library was also updated - see:
https://github.com/porres/Live-Electronic-Music-Tutorial/releases/tag/v1.0-…
cheers
Dear Pure Data community,
I am currently running a publication fundraiser on Kickstarter for my
book VECTOR SYNTHESIS: a Media Archaeological Investigation into
Sound-Modulated Light.
The book describes my research into the military and techno-scientific
legacies at the birth of modern computing, and how early computer,
electronic media, and video artists of the 1950’s, 60’s, and 70’s
attempted to decouple these tools from their destructive origins.
It also presents my Pure Data software library and live performance
project which employs audio synthesis and vector graphics techniques to
investigate direct relationships between sound and image using analog
CRT displays.
The conclusion of the book reflects on how the project and the research
surrounding it have contributed to the larger experimental audiovisual
arts community through events such as the Vector Hack Festival.
Artists discussed include Mary Ellen Bute, Ben Laposky, Len Lye, Norman
McLaren, Desmond Paul Henry, James Whitney, John Whitney Sr., Dan
Sandin, Steina Vasulka, Woody Vasulka, Larry Cuba, Bill Etra, Mitchell
Waite, Rosa Menkman, Cracked Ray Tube, Andrew Duff, Benton C.
Bainbridge, Philip Baljeu, Jonas Bers, Robin Fox, Robert Henke, Ivan
Marušić Klif, Jerobeam Fenderson, Hansi Raber, Ted Davis, Roland
Lioni, Bernhard Rasinger, and the Kikimore group.
The book is 122 pages long, has 21 illustrations, links to several video
examples online, and was fabulously designed by Claire Matthews.
I’ve launched this Kickstarter to print and distribute the book to
people who have expressed interest, to get copies to people who have
assisted in its creation, and, just as importantly, to send copies to
organizations and schools that deserve one. Your support of this
publishing project will help make that possible.
Details can be found here:
https://www.kickstarter.com/projects/macumbista/vector-synthesis-book
Thank you for your kind attention!
Derek Holzer
--
derek holzer
noise.art.technology
http://macumbista.net
*Apologies for cross-postings*
# Fifth Annual Web Audio Conference - 3rd Call for Submissions
https://www.ntnu.edu/wac2019
The fifth Web Audio Conference (WAC) will be held 4-6 December, 2019 at the
Norwegian University of Science and Technology (NTNU) in Trondheim, Norway.
WAC is an international conference dedicated to web audio technologies and
applications. The conference addresses academic research, artistic
research, development, design, evaluation and standards concerned with
emerging audio-related web technologies such as the Web Audio API, WebRTC,
WebSockets, and JavaScript. The conference welcomes web developers, music
technologists, computer musicians, application designers, industry
engineers, R&D scientists, academic researchers, artists, students and
people interested in the fields of web development, music technology,
computer music, audio applications and web standards. The previous Web
Audio Conferences were held in 2015 at IRCAM and Mozilla in Paris, in 2016
at Georgia Tech in Atlanta, in 2017 at the Centre for Digital Music, Queen
Mary University of London in London, and in 2018 at TU Berlin in Berlin.
The internet has become much more than a simple storage and delivery
network for audio files, as modern web browsers on desktop and mobile
devices bring new user experiences and interaction opportunities. New and
emerging web technologies and standards now allow applications to create
and manipulate sound in real-time at near-native speeds, enabling the
creation of a new generation of web-based applications that mimic the
capabilities of desktop software while leveraging unique opportunities
afforded by the web in areas such as social collaboration, user experience,
cloud computing, and portability. The Web Audio Conference focuses on
innovative work by artists, researchers, students, and engineers in
industry and academia, highlighting new standards, tools, APIs, and
practices as well as innovative web audio applications for musical
performance, education, research, collaboration, and production, with an
emphasis on bringing more diversity into audio.
## Pre-submission Draft Feedback
Given that this year's conference theme is Diversity in Web Audio, we would
like to welcome everyone to submit. We have launched a pre-submission draft
feedback process to help people feel more confident, supported, and welcome
when submitting. This call is addressed to individuals who are
independent of academic institutions (deadline: June 9, 2019). More info is
available at:
https://www.ntnu.edu/wac2019/pre-submission-draft-feedback
## Theme and Topics
The theme for the fifth edition of the Web Audio Conference is Diversity in
Web Audio. We particularly encourage submissions focusing on inclusive
computing, cultural computing, postcolonial computing, and collaborative
and participatory interfaces across the web in the context of generation,
production, distribution, consumption and delivery of audio material that
especially promote diversity and inclusion.
Further areas of interest include:
* Web Audio API, Web MIDI, WebRTC and other existing or emerging web
standards for audio and music.
* Development tools, practices, and strategies of web audio applications.
* Innovative audio-based web applications.
* Web-based music composition, production, delivery, and experience.
* Client-side audio engines and audio processing/rendering (real-time or
non real-time).
* Cloud/HPC for music production and live performances.
* Audio data and metadata formats and network delivery.
* Server-side audio processing and client access.
* Frameworks for audio synthesis, processing, and transformation.
* Web-based audio visualization and/or sonification.
* Multimedia integration.
* Web-based live coding and collaborative environments for audio and music
generation.
* Web standards and use of standards within audio-based web projects.
* Hardware and tangible interfaces and human-computer interaction in web
applications.
* Codecs and standards for remote audio transmission.
* Any other innovative work related to web audio that does not fall into
the above categories.
## Submission Tracks
We welcome submissions in the following tracks: papers, talks, posters,
demos, performances, and artworks. All submissions will be single-blind
peer reviewed. The conference proceedings, which will include both papers
(for papers and posters) and extended abstracts (for talks, demos,
performances, and artworks), will be published open-access online with
a Creative Commons Attribution license and an ISSN number. The authors of a
selection of the best papers, as determined by a specialized jury, will be
offered the opportunity to publish an extended version in the Journal of
the Audio Engineering Society.
**Papers**: Submit a 4-6 page paper to be given as an oral presentation.
**Talks**: Submit a 1-2 page extended abstract to be given as an oral
presentation.
**Posters**: Submit a 2-4 page paper to be presented at a poster session.
**Demos**: Submit a work to be presented at a hands-on demo session. Demo
submissions should consist of a 1-2 page extended abstract including
diagrams or images, and a complete list of technical requirements
(including anything expected to be provided by the conference organizers).
**Performances**: Submit a performance making creative use of web-based
audio applications. Performances can include elements such as audience
device participation and collaboration, web-based interfaces, Web MIDI,
WebSockets, and/or other imaginative approaches to web technology.
Submissions must include a title, a 1-2 page description of the
performance, links to audio/video/image documentation of the work, a
complete list of technical requirements (including anything expected to be
provided by conference organizers), and names and one-paragraph biographies
of all performers.
**Artworks**: Submit a sonic web artwork or interactive application which
makes significant use of web audio standards such as Web Audio API or Web
MIDI in conjunction with other technologies such as HTML5 graphics, WebGL,
and Virtual Reality frameworks. Works must be suitable for presentation on
a computer kiosk with headphones. They will be featured at the conference
venue throughout the conference and on the conference web site. Submissions
must include a title, 1-2 page description of the work, a link to access
the work, and names and one-paragraph biographies of the authors.
**Tutorials**: If you are interested in running a tutorial session at the
conference, please contact the organizers directly.
## Important Dates
March 26, 2019: Open call for submissions starts.
June 16, 2019: Submissions deadline.
September 2, 2019: Notification of acceptances and rejections.
September 15, 2019: Early-bird registration deadline.
October 6, 2019: Camera ready submission and presenter registration
deadline.
December 4-6, 2019: The conference.
At least one author of each accepted submission must register for and
attend the conference in order to present their work. A limited number of
diversity tickets will be available.
## Templates and Submission System
Templates and information about the submission system are available on the
official conference website: https://www.ntnu.edu/wac2019
Best wishes,
The WAC 2019 Committee
Hi all,
I want to share a special audio limiter that I've been using for years
in my recordings.
It's "special" because it *can't* be used live or with shorts delays. :(
It's for use with already recorded material and a special side-chain
file must be generated. For this I made an external. My first C thing.
Prior to this external I used to generate the side-chain file on an
[array] but I encounter the single precision limitation for files bigger
than 16777216 samples.
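(For the curious: 16777216 is 2^24, and a 32-bit float has a 24-bit
mantissa, so it can only represent every integer exactly up to 2^24 -
past that, consecutive sample indices collide. A minimal C sketch of the
problem, assuming nothing beyond standard C:)

    #include <stdio.h>

    int main(void) {
        /* 2^24 = 16777216 is where exact integer representation
           in a 32-bit float ends */
        float index = 16777216.0f;
        float next = index + 1.0f; /* rounds back to 16777216.0f */
        printf("%.1f\n", next);    /* prints 16777216.0, not 16777217.0 */
        return 0;
    }

So a single-precision [array] simply can't address samples beyond that
point, which is why the side-chain lives in a file.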
It has no attack/release time settings, that magic happens on the
side-chain file.
Files and a brief explanation at:
https://github.com/Lucarda/pd-pulqui
Or get it from Deken. Search "pulqui".
:)
--
Telepathic message assisted by machines.
Howdy, I have big updates to the ELSE library and my Live Electronics
Tutorial!
Raspberry Pi binaries for ELSE are still missing, but they'll be up soon.
Find the releases on GitHub until they show up in deken:
https://github.com/porres/pd-else/releases/tag/v1.0-beta18
There are 30 new objects in ELSE. This is the first release with over 300
objects - there are already 324 of them! I'm shocked at how much it's
growing... I realize now how unique this library is in its number of
objects devoted to DSP and audio processing. There are 219 tilde objects -
more than Pd itself has, or all of cyclone... So, what's the
most exciting new stuff?
- There's a bunch of reverb objects - [mono.rev~], [stereo.rev~],
[echo.rev~], [giga.rev~], [free.rev~] (check also [fdn.rev~], which isn't
quite ready yet).
- There are a few dynamic processors as well: [compress~], [duck~] and
[expand~].
- A couple of FX like [chorus~] and [phaser~].
- A [spectrograph~] object.
- Some objects that deal with lists ([regroup], [pick], [sort],
[scramble], [merge], [unmerge] & [rand.seq]).
- A [trigger2] object that behaves like [trigger] in Purr Data and Max,
which people request for Pd from time to time (if Pd's trigger ever
includes such functionality, I'll delete this object).
Check the complete changelog here:
https://github.com/porres/pd-else/releases/tag/v1.0-beta18
As usual, there are breaking changes while we're at the beta stage.
As for my Live Electronics Tutorial, which is based on this library, most
of the changes reflect the new objects from ELSE - most importantly, the
reverb section just got a major upgrade. I'm also now including an
appendix with a quick start on Data Structures! Check it out at
https://github.com/porres/Live-Electronic-Music-Tutorial/releases/tag/v1.0-…
cheers
ICAD 2019 — Call for Submissions to the ICAD 2019 Algorave.
25th International Conference on Auditory Display
Northumbria University, Newcastle-upon-Tyne, UK
23–27 June, 2019
https://icad2019.icad.org
https://twitter.com/ICAD2019
Theme/Special Focus of ICAD 2019 is "Digital Living: Sonification for Everyday Life".
Digital technology and artificial intelligence are becoming embedded in the objects all around us, from consumer products to the built environment. Everyday life happens where People, Technology, and Place intersect. Our activities and movements are increasingly sensed, digitised and tracked. Of course, the data generated by modern life is a hugely important resource not just for companies who use it for commercial purposes, but it can also be harnessed for the benefit of the individuals it concerns. Sonification research that has hit the news headlines in recent times has often been related to big science done at large publicly funded labs with little impact on the day-to-day lives of people. At ICAD 2019 we want to explore how auditory display technologies and techniques may be used to enhance our everyday lives. From giving people access to what’s going on inside their own bodies, to the human concerns of living in a modern networked and technological city, the range of opportunities for auditory display is wide.
ALGORAVE
For the first time at an ICAD meeting the conference programme includes an Algorave event. "Algoraves focus on humans making and dancing to music. Algorave musicians don’t pretend their software is being creative, they take responsibility for the music they make, shaping it using whatever means they have. More importantly the focus is not on what the musician is doing, but on the music, and people dancing to it. Algoraves embrace the alien sounds of raves from the past, and introduce alien, futuristic rhythms and beats made through strange, algorithm-aided processes. It’s up to the good people on the dancefloor to help the musicians make sense of this and do the real creative work in making a great party.” (https://algorave.com/about).
The ICAD 2019 committee seeks submissions to this event which will provide an exciting complement to the other conference categories. For details on proposal format, submission instructions, and additional conference information please visit https://icad2019.icad.org/call-for-participation.
IMPORTANT DATES:
Thursday 12th May 2019 — Deadline for submissions to the Algorave tracks.
Algorave Chair:
Shelly Knotts
icad2019algorave(a)icad.org
Conference Chairs:
Paul Vickers and Matti Gröhn
icad2019chairs(a)icad.org
ABOUT ICAD
First held in 1992, ICAD is a highly interdisciplinary conference with relevance to researchers, practitioners, artists, and graduate students working with sound to convey and explore information. The conference is unique in its specific focus on auditory displays and the range of interdisciplinary issues related to their use. Like its predecessors, ICAD 2019 will be a single-track conference, open to all, with no membership or affiliation requirements.
--
Dr Paul Vickers BSc PhD CEng MIEE FHEA
Co-chair of ICAD 2019: https://icad2019.icad.org
---------- Forwarded message ---------
From: 'Monty Adkins' via CEC-Conference <cec-conference(a)googlegroups.com>
Date: Wed, 3 Apr 2019 at 10:42
Subject: [cec-c] Marie curie
To: cec-conference(a)googlegroups.com <cec-conference(a)googlegroups.com>
The School of Music, Humanities and Media invites expressions of interest
for the Marie Curie Post-Doctoral Fellowship at the University of
Huddersfield.
We have world-leading resources for Music and Music Technology focused
around the Centre for Research in New Music.
We are interested in receiving applications in all areas of Music and Music
Technology.
Please make sure to read the weblinks to check your eligibility for the two
schemes.
Many thanks
Prof. Monty Adkins
University of Huddersfield
UK
University of Huddersfield inspiring tomorrow's professionals.
Dear all,
Andrew McPherson and I are writing a paper on the relationships between music programming languages and the digital instruments, compositions and performances they support. As part of that, we'd like to invite your participation in an online survey about your digital tools and musical practices. The survey contains 27 questions, and we expect it will take around 20 minutes of your time.
https://sopi.aalto.fi/dmiip/idiomaticity/survey/
Thanks for your participation, apologies for cross-posting, and please feel free to share the link with any other designers and musicians you think might be interested.
best,
Koray
-------------------------------------
M.Koray Tahiroğlu
Department of Media,
Aalto University
School of Arts, Design and Architecture
http://sopi.aalto.fi/
http://mlab.taik.fi/~korayt
tel. +358 50 4088441
ICAD 2019 — Call for Submission of Concert pieces and Installations/Demos
25th International Conference on Auditory Display
Northumbria University, Newcastle-upon-Tyne, UK
23–27 June, 2019
https://icad2019.icad.org
https://twitter.com/ICAD2019
Theme/Special Focus of ICAD 2019 is "Digital Living: Sonification for Everyday Life".
Digital technology and artificial intelligence are becoming embedded in the objects all around us, from consumer products to the built environment. Everyday life happens where People, Technology, and Place intersect. Our activities and movements are increasingly sensed, digitised and tracked. Of course, the data generated by modern life is a hugely important resource not just for companies who use it for commercial purposes, but it can also be harnessed for the benefit of the individuals it concerns. Sonification research that has hit the news headlines in recent times has often been related to big science done at large publicly funded labs with little impact on the day-to-day lives of people. At ICAD 2019 we want to explore how auditory display technologies and techniques may be used to enhance our everyday lives. From giving people access to what’s going on inside their own bodies, to the human concerns of living in a modern networked and technological city, the range of opportunities for auditory display is wide.
CONCERT AND INSTALLATIONS
The ICAD 2019 committee is seeking submissions to the ICAD 2019 Sonification Concert and proposals for Sonification Installations/Demos that will contribute to knowledge of how sonification can support everyday life.
For details on topics of interest, proposal format, submission instructions, and additional conference information please visit https://icad2019.icad.org/call-for-participation
IMPORTANT DATES:
Thursday 18th April 2019 — Deadline for submissions to the Concert and Installations/Demos tracks.
Concert Chairs:
Bennett Hogg with John Bowers, Tim Shaw, and David Worrall
icad2019concert(a)icad.org
Conference Chairs and Installation Chairs:
Paul Vickers and Matti Gröhn
icad2019chairs(a)icad.org
icad2019installations(a)icad.org
ABOUT ICAD
First held in 1992, ICAD is a highly interdisciplinary conference with relevance to researchers, practitioners, artists, and graduate students working with sound to convey and explore information. The conference is unique in its specific focus on auditory displays and the range of interdisciplinary issues related to their use. Like its predecessors, ICAD 2019 will be a single-track conference, open to all, with no membership or affiliation requirements.
--
Dr Paul Vickers BSc PhD CEng MIEE FHEA
Co-chair of ICAD 2019: https://icad2019.icad.org
(Apologies for cross-postings, please re-distribute at will)
_______________________________________________________________________
Audio Mostly 2019: A Journey in Sound
18th to 20th September 2019
University of Nottingham, Nottingham, UK
www.audiomostly.com
Facebook: https://www.facebook.com/AudioMostly/
Twitter: https://twitter.com/AudioMostly @AudioMostly
_______________________________________________________________________
AUDIO MOSTLY 2019
Audio Mostly is an audio-focused interdisciplinary conference on design and interaction with sound and technology, embracing applied theory and practice-based research.
It is an annual conference which brings together thinkers and doers from academia and industry who share an interest in sonic interaction and the use of audio for interface design. This remit covers product design, auditory displays, computer games and virtual environments, new digital musical instruments, educational applications and workplace tools, as well as the topics listed below. It further includes fields such as the psychology of sound and music, cultural studies, systems engineering, and everything in between in which sonic Human-Computer Interaction plays a role.
Audio Mostly 2019 will be an inclusive event for all, bringing together a whole range of people and communities. It will be a lively and sociable mix of oral and poster paper presentations, demos, and workshops. We welcome submissions from industry, academia and interested parties in each of these categories.
As in previous years, the Audio Mostly 2019 proceedings will be published by the Association for Computing Machinery (ACM) (to be confirmed) and made available through their digital library. Regular papers, posters and demos/installations will be double-blind peer reviewed. It is envisaged that there will be a special issue of a journal relating to the conference, as with previous years.
CONFERENCE THEME - A Journey in Sound
The special theme for the conference this year is A Journey in Sound, and we would particularly welcome papers relating to this theme. We have different experiences of sound and music throughout our lives: there are sounds that remind us of different places and people, and playlists and songs that take us back and remind us of certain times and events. Throughout our lives we are interacting with sounds and music; we are on a journey in sound. This year the theme of the conference is open to interpretation, but people might think about the following in relation to the theme:
* Sonic aspects of digital stories, documentaries and archives
* The soundtrack to our lives. Archiving and sharing sound
* The emotional potential of a sound, how might this be used to support interaction
* The different uses of music across different settings
* The re-use of recollections and memories by composers & sound designers
* The development of musical tools that can let us express our experiences over time
* Socio-technical uses of AI to create highly personalised soundtracks that respond to one's context
* Adaptive music use in journeys, time and the creative use of data
Audio Mostly 2019 encourages the submission of regular papers (oral/poster presentation) addressing such questions and others related to the conference theme and the topics presented below.
LIST OF TOPICS
The Audio Mostly conference series is interested in sound Interaction Design & Human-Computer Interaction (HCI) in general. The conference provides a space to reflect on the role of sound/music in our lives and how to understand, develop and design systems which relate to sound and music - we are particularly interested in this from a broad HCI perspective. We encourage original regular papers (oral/poster presentation) addressing the conference theme or other topics from the list provided below. We welcome multidisciplinary approaches involving fields such as music informatics, information and communication technologies, sound design, music performance, visualisation, composition, perception/cognition and aesthetics.
* Accessibility
* Aesthetics
* Affective computing applied to sound/music
* AI, HCI and Music
* Acoustics and Psychoacoustics
* Auditory display and sonification
* Augmented and virtual reality with or for sound and music
* Computational musicology
* Critical approaches to interaction, design and sound
* Digital augmentation (e.g. musical instruments, stage, studio, audiences, performers, objects)
* Digital music libraries
* Ethnographic studies
* Game audio and music
* Gestural interaction with sound or music
* Immersive and spatial audio
* Interactive sonic arts and artworks
* Intelligent music tutoring systems
* Interfaces for audio engineering and post-production
* Interfaces or synthesis models for sound design
* Live performing arts
* Music information retrieval & Interaction
* Musical Human-Computer Interaction
* New methods for the evaluation of user experiences of sound and music
* Participatory and co-design methodologies with or for audio
* Philosophical or sociological reflections on Audio Mostly related topics
* Psychology, cognition, perception
* Semantic web music technologies
* Spatial audio, interaction design and ambisonics
* Sonic interaction design
* Sound and image interaction: from production to perception
* Sound and soundscape studies
SUBMISSION INSTRUCTIONS
Regular paper, poster, demo and workshop contributions must be submitted via the EasyChair Audio Mostly 2019 submission portal (https://easychair.org/conferences/?conf=am2019).
All Audio Mostly 2019 papers should be submitted using the 2017 ACM Master Article Template specified below for your contribution. Authors should use the ACM Computing Classification System (CCS) to provide the proper indexing information in their papers (see instructions on the 2017 ACM Master Article Template page). All papers must be submitted in PDF format.
IMPORTANT DATES
(Papers & Posters)
Deadline for Submissions: 24th May 2019
Notification of Acceptance: 14th June 2019
Camera-ready submissions: 9th August 2019
Early Registration Deadline: 10th August 2019
Conference: 18 - 20 September 2019
Call for Workshops
IMPORTANT DATES (Workshops)
Deadline for Submissions: 24th May 2019
Notification of Acceptance: 14th June 2019
Workshops: 17th September 2019
Call for Demos
IMPORTANT DATES (Demos & Installations)
Deadline for Submissions: 1st July 2019
Notification of Acceptance: 15th July 2019
Submission Deadline: 22nd July 2019
Submission Site
https://easychair.org/conferences/?conf=am2019
LOCATION
This year, the conference is hosted by the Mixed Reality Lab (in the School of Computer Science) and the Department of Music at the University of Nottingham. The conference will be located on the University Park campus.
University Park is The University of Nottingham's largest campus at 300 acres. Part of the University since 1929, the campus is widely regarded as one of the largest and most attractive in the country. Set in extensive greenery and around a lake, University Park is the focus of life for students, staff and visitors, and is conveniently located only two miles from the city centre. The campus is well connected: the nearest airport is East Midlands Airport, and the local train stations are Nottingham and Beeston.
-
-
Zürcher Hochschule der Künste
Zurich University of the Arts
-
Dr. Daniel Hug (Dipl.-Des.)
Co-Director Master Sound Design
__
Toni-Areal, Pfingstweidstrasse 96 P.O. Box, CH-8031 Zurich
-
Phone +41 78 768 59 49
daniel.hug(a)zhdk.ch
-
www.zhdk.ch/sounddesign
-
Hi all,
I have an exhibition opening soon in London, UK.
https://sonicelectronicsfestival.org/exhibition/
Featuring
- raytracing in curved space
- badly-played tetris
- generative techno
- audiovisual sliding tile puzzle
- interactive graph-directed IFS
- zooming hybrid fractals
Opening 11th April 2019 6pm, until 27th April, at Chalton Gallery,
96 Chalton Street, Camden, London NW1 1HJ, UK. Check website for times.
Two of the works are realized with Pure Data, also using pdlua and Gem.
Thanks also to Andy Farnell for his CheeseBox/AnalogueDrumEngine patch.
Curated by Laura Netz.
Supported using public funding by the National Lottery through Arts
Council England.
Claude
--
https://mathr.co.uk
*C++ developer of interactive audio applications using the JUCE framework
(W/M)*
*Fixed-term contract of 6 months starting in May 2019, with the possibility
of conversion into a permanent position*
*INTRODUCTION TO IRCAM:*
IRCAM is a non-profit organization associated with the Centre Pompidou
(Centre national d’art et de culture Georges Pompidou). Its
missions comprise research, production, and education related to
contemporary music and its relation to science and technology. The
department for Innovation and Means for Research (IMR) handles, among
other things, the various aspects of tech transfer of the research results
produced by the research teams. IRCAM is located in the centre of Paris
near the Centre Pompidou, at 1, Place Igor Stravinsky, 75004 Paris.
*POSITION DESCRIPTION:*
You will join the IMR department and will be responsible for the
conceptual design and development of user interfaces and application layers
for AudioSculpt (version 4), a standalone audio application for sound
visualisation, analysis, and transformation built with the C++ JUCE
framework. The different variants will be designed to adapt to different
segments of the professional audio production market, and will
target macOS and Windows environments. The development activities will be
target Mac OS and Windows environments. The development activities will be
performed under the responsibility of the head of the Tech transfer service
(part of IMR), and in close collaboration with the research teams that are
contributing the signal processing algorithms, and the commercial and
industrial partners. Your responsibilities will cover all aspects of
building and delivering commercial applications, starting with software
architecture, and interface design, development of cross-platform
applications, packaging, testing, and communication with end users at IRCAM
and in the IRCAM Forum.
*Links to related software:*
http://forumnet.ircam.fr/fr/produit/audiosculpt/
<http://forumnet.ircam.fr/fr/produit/audiosculpt/>
https://www.plugivery.com/products/p1680-TS/
*REQUIRED EXPERIENCE AND SKILLS:*
- You are a C++ developer/designer experienced in designing
interactive audio applications with the JUCE framework, with a strong
interest in music and sound applications.
- You have worked on one or more major projects involving the creation
of a multimodal and interactive application (ideally on an already
commercialized project).
- You have a good knowledge of software design patterns.
- Sensitive to visual creation and human-machine interaction, you
have mastered graphical interface design (UI/UX) and are able to quickly
build prototypes and components from a style guide and/or simple
specifications.
- You have a good knowledge of the JUCE framework (ideally you have
already developed several projects with this framework).
- You are familiar with editing and sound creation tools such as
ProTools, Ableton Live, MaxMSP, and/or similar tools.
- You have excellent knowledge of C++.
- You produce robust, reusable, and portable code.
- You have demonstrated rigor, autonomy, and reliability, as well as good
listening and interpersonal skills, in your professional activity.
*SALARY:*
According to education and professional experience.
*APPLICATIONS:*
Deadline for application: April 25th, 2019
Please send an application letter together with your resume and any
suitable information addressing the above issues preferably by email to:
vinet_at_ircam.fr, roebel_at_ircam_dot_fr and rousseau_at_ircam_dot_fr, or
via postal mail to Hugues Vinet, IRCAM, 1 Place Stravinsky, 75004 Paris.
Hello everyone,
The jmmmp library of abstractions has been updated.
A small bug in Granulator, a Pure Data port of Robert Henke's Granulator,
has been fixed (along with some aesthetic details in the mat~ object
collection).
You can download it soon through deken, or follow the link below.
For more information and a tarball, visit http://bit.ly/jmmmp_052
Best,
João Pais
I'm happy to announce the first stable release of vstplugin~! Binaries are already available on Deken.
Source code: https://git.iem.at/pd/vstplugin/tree/master
The repository also includes a version for SuperCollider called VSTPlugin (which works very differently but achieves the same thing).
The overall design and functionality of vstplugin~ v0.1.0 is the same as in vstplugin~ v0.1-alpha, but there are some breaking changes and new features! Have a look at the change log in https://git.iem.at/pd/vstplugin/releases
Have fun!
Christof
(Apologies for cross-posting)
ICAD 2019 — Extension to Call for Submission of Papers: 29 March 2019
25th International Conference on Auditory Display
Northumbria University, Newcastle-upon-Tyne, UK
23–27 June, 2019
https://icad2019.icad.org
https://twitter.com/ICAD2019
The deadlines for general papers and for the Hyundai Motor Co. Design Challenge have been extended to 29 March 2019.
The full CfP with submission instructions can be found at https://icad2019.icad.org/call-for-participation/
Conference registration details are available at https://icad2019.icad.org/registration/
Papers Chair:
Tony Stockman
icad2019papers(a)icad.org
Workshop/Tutorial Chair:
Derek Brock
icad2019workshops(a)icad.org
Conference Chairs:
Paul Vickers and Matti Gröhn
icad2019chairs(a)icad.org
ABOUT ICAD
First held in 1992, ICAD is a highly interdisciplinary conference with relevance to researchers, practitioners, artists, and graduate students working with sound to convey and explore information. The conference is unique in its specific focus on auditory displays and the range of interdisciplinary issues related to their use. Like its predecessors, ICAD 2019 will be a single-track conference, open to all, with no membership or affiliation requirements.
--
Paul Vickers
Co-chair of ICAD 2019: https://icad2019.icad.org
===================
*MUME 2019 Concert - 2nd CALL FOR WORKS*
===================
The 7th International Workshop on Musical Metacreation
http://musicalmetacreation.org
June 17-18, 2019, Charlotte, North Carolina
=== Important Dates ===
submission deadline: March 24, 2019
notification date: April 28, 2019
concert dates: June 17 evening, 2019
======================
*Overview*
Creators of musically metacreative systems are invited to submit works for
a concert of musical metacreation, as part of the 10th International
Conference on Computational Creativity, ICCC 2019, at the University of
North Carolina Charlotte. Submitted works can be of any musical style, but
must involve a performative element and must involve the use of
computational creativity techniques in the creation of the work. Examples
from previous MuMe concerts include, but are not limited to: reproduction
of musical style using machine learning; the evolution of musical
structures using interactive genetic algorithms; rule-based systems;
systems based on emergence or self-organization; systems that perform music
data mining to create remixes, and so on.
*Types of Work*
Any work related to MuMe that can be presented in a performance context
will be considered. This includes:
- Improvising agents that perform alongside live musicians.
- Automated composition systems that generate music in real-time or
pre-compose music for performers, as well as automated DJ systems.
- Systems that use AI methods for other creative objectives, such as
harmonizing, intelligent looping, or finding suitable matches for a target
phrase.
- Systems that generate lyrics.
- Music that uses evolution, emergence or ecosystemic concepts.
*Submission Process*
Please submit a two page A4-portrait PDF describing your work before the
submission deadline. Email your submission to chairs(a)musicalmetacreation.org.
Submissions should *not* be *anonymized.* Your submission should:
- Describe the work.
- Explain how the work relates to the MuMe theme.
- Give a relatively detailed technical description of the system*.
- Give detailed performance requirements. As necessary, this may include
a stage plan, equipment needs, technical support needs, and performing
musicians (you can use additional PDF pages for any supporting items).
- Provide a link to any additional documentary material such as audio or
video recordings, or online software (highly recommended).
- Give credits and biography (name, affiliation, short biography up to
150 words).
* It is common to receive submissions that are highly ambiguous about what
the proposed system actually does. Works will be rejected if this is the
case. The description does not need to be highly technically detailed but
it should remove any fundamental ambiguity.
*Selection Process*
All submissions will be reviewed by three independent members of the
selection committee. Once these reviews have been completed the reviewers
and the chairs will discuss these works and reviews. The chairs will
provide a meta-review, the three original reviews, and a final decision.
Reviewers will be anonymous to the authors, but not to each other. The
final decision lies with the chairs, in consultation with the committee.
*Upon Acceptance*
Artists are required to be in attendance and set up their own equipment. At
least one artist must register for the MuMe workshop.
A schedule for performances, rehearsals and soundchecks will be made
available in the week before the concert.
Follow us on Twitter @MetaMusical.
Workshop Organizers
===================
Program Co-Chair
Robert M. Keller, Professor
Computer Science Department
Harvey Mudd College
301 Platt Blvd
Claremont, CA 91711 USA
https://www.cs.hmc.edu/~keller/
keller(a)cs.hmc.edu
Program Co-Chair
Bob L. Sturm, Associate Professor
Tal, Musik och Hörsel (Speech, Music and Hearing)
Lindstedtsvägen 24
School of Electronic Engineering and Computer Science
Royal Institute of Technology KTH, Sweden
bobs(a)kth.se
Concert Chair
Gus Xia, Assistant Professor
Computer Science
NYU Shanghai
gxia(a)nyu.edu
Publicity Chair
Dr. Oliver Bown
Senior Lecturer
Faculty of Art & Design, The University of New South Wales
Room AG12, Cnr Oxford St & Greens Rd,
Paddington, NSW, 2021, Australia
o.bown(a)unsw.edu.au
----------------------
http://musicalmetacreation.org
======================
MUME Steering Committee
Andrew Brown
Griffith University, Australia
Anna Jordanous
University of Kent, UK
Róisín Loughran
University College Dublin, Ireland
Michael Casey
Dartmouth College, US
Benjamin Smith
Purdue University Indianapolis, US
Philippe Pasquier
Simon Fraser University
Arne Eigenfeldt
Simon Fraser University
--
Kıvanç Tatar
----------------------------------
PhD Candidate
Interactive Arts + Technology
Simon Fraser University, Vancouver, Canada
Email: kivanctatar(a)gmail.com
Website: https://kivanctatar.com/
hi all.
15 years ago, some folks in graz (including yours truly) founded a
non-profit organisation "Pd~graz", mainly to organize the 1st
Pd~convention in 2004 (those were the days!)
at the end of last year the austrian police wrote me a letter, asking
whether our association is still alive. and indeed we found that we (as
a club) hadn't had any real activity in the last 9 years or so, and
therefore we decided to terminate the association for good. (no need to
feel sorry. Pd~graz's raison d'être was to make Pd rule the world. this
mission is practically accomplished).
in any case, we are celebrating with a final concert.
the concert will be in two weeks (20.3; right, that's pretty soon), at the
postgarage[1]. those of you who were at the very 1st Pd~convention
will remember it fondly.
anybody who happens to be in the vicinity should of course come and
party with us.
if you are prepared to show your screen, there might even be an open mic
session.
gfmdsar
IOhannes
[1] http://postgarage.at/
Greetings fellow Pd enthusiasts,
As some of you may already be aware, last year Purr-Data (a.k.a.
Pd-L2Ork v2) was adapted to support native 64-bit operations. We are
pleased to report that Purr-Data was once again selected this year as one
of the GSoC projects. This means more opportunities to engage and help
further the platform. Ideas for this year are plentiful, ranging from core
infrastructure (C) development to patching and front-end development,
including porting Pd-L2Ork's K12 learning module, which has been used
in dozens of Maker camps over the past 7 years.
If interested, please contact Jonathan and/or me to explore potential
projects and to discuss the next steps in the application process.
Best,
Ico
--
Ivica Ico Bukvic, D.M.A.
Director, Creativity + Innovation
Institute for Creativity, Arts, and Technology
Virginia Tech
Creative Technologies in Music
School of Performing Arts – 0141
Blacksburg, VA 24061
(540) 231-6139
ico(a)vt.edu
www.icat.vt.edu
www.performingarts.vt.edu
l2ork.icat.vt.edu
ico.bukvic.net
ICAD 2019 — Call for Student ThinkTank Applications (Student Research
Consortium)
25th International Conference on Auditory Display
Northumbria University, Newcastle-upon-Tyne, UK
June 23–27, 2019
https://icad2019.icad.org/
Date: Sunday June 23rd, 2019
Time: 9:00 AM - 6:00 PM
Applications due: Friday March 29th, 2019
The Student ThinkTank (Student Research Consortium) is a full day
meeting for students doing Graduate or Undergraduate work on auditory
display. It will be held on June 23rd. Selected students will give
formal presentations and have the opportunity to discuss research ideas
and problems with fellow researchers in the field. The program will also
include career-related activities.
The ThinkTank is your chance to set a whole roomful of auditory
researchers to work on your particular research issue, to help you
choose which method, tool, technique, or principle to use, and to save you
from heading down a dead end. Besides providing thoughtful insights into
your particular project, the ThinkTank will foster friendships and
networks among fellow students and researchers that are essential in an
international community for auditory display.
Financial assistance from ICAD and the NSF will be available for selected
applicants from US universities. Any registered ICAD participant can
join as an observer free of charge. If you’d like to attend as an
observer, please send an email to icad2019thinktank(a)icad.org.
How to apply
To apply please submit the following 4 items:
1. Cover Letter
Your cover letter should include the following information:
* Statement of interest in participating in the ThinkTank.
* Full name of the School and Department in which you are studying.
* Current stage in your academic program (e.g., completed MS, 2 years into PhD).
* Name of the supervising professor.
* Your full contact information: address, phone number, and email address.
* Title of the research and keywords pertinent to the research.
* The URL of your web page (if any).
2. Two-page Research Interest Summary
The body of the research summary should provide a clear overview of
the research that you have already conducted, are in the process of
completing, or have planned, or of ideas for research that you would like
to pursue. The statement should discuss the relevance and potential
impact of the research on the field and discuss its broader impact
in the world. You are encouraged to include the following sections:
* Introduction and problem description.
* Brief background and overview of the existing literature.
* Goal of the research.
* Current status of the research.
* Preliminary results accomplished, if any.
* Broader impact of this research.
* Open issues, topics to be discussed at the ThinkTank, and expected
outcomes from participating in the ThinkTank. For example, you might
want to discuss anything from choosing the right tools and techniques,
to choosing research topics, to how to organize your thesis, to
philosophical or aesthetic issues - anything where you could benefit
from the perspectives and experience of other students and experts.
3. Letter of Recommendation
Enclose a letter of recommendation written by your Graduate
Advisor/Thesis Advisor/Supervisor. Your advisor/supervisor is asked
to verify that you are a graduate student, working in the area of
sonification or auditory display. (In the case of an undergraduate
student’s submission, please verify the student’s enrolled status and
include information about the undergraduate research project.)
Advisors are also encouraged to include an assessment of the current
status of your research and an indication of the expected date of
its completion. In addition, your advisor is encouraged to indicate
what she/he hopes you would both gain and contribute by
participating in the ThinkTank.
4. Curriculum Vitae (CV)
Please prepare a 1-page CV that relates your background, relevant
experience, and research accomplishments.
Selection and Presentation:
Up to 15 proposals will be selected for formal presentation and
“think-tanking”. The selected problems will be representative examples
of a widespread problem or will be particularly interesting or
challenging (as determined by the expert Panel). Each participant will
prepare a 15-minute presentation for all ThinkTank attendees. The Panel
will present a report on the ThinkTank in the ICAD 2019 conference program.
All those who submit a problem may participate in the ThinkTank to watch
the presentations and join in the discussions; however, only the
selected submissions will be able to make a formal oral presentation.
Even if your problem is not selected you will leave with a sense of what
other students are doing and how they are approaching problems in
auditory display, as well as new friends to talk about your project with
during the rest of the ICAD 2019 conference and in the future. If your
problem is selected you may also leave with the breakthrough you need!
ThinkTank Panel:
The ThinkTank Panel comprises several international researchers who work
across a range of disciplines covered by auditory displays. The
confirmed Panel members are:
* Dr. Areti Andreopoulou (chair), University of Athens
* Dr. Bruce Walker, Georgia Institute of Technology
* Dr. Matti Gröhn, Stereoscape, Finland
* Derek Brock, United States Naval Research Lab
* Dr. Myounghoon Jeon, Virginia Tech
* TBC
* TBC
How to Submit:
Please email your proposal by March 29th, 2019 to the ThinkTank chair at
icad2019thinktank(a)icad.org. If you have any questions, please feel free
to email us.
--
Areti Andreopoulou, PhD
Assistant Professor in Music Technology
Laboratory of Music Acoustics and Technology (LabMAT)
Department of Music Studies
National and Kapodistrian University of Athens
labmat.music.uoa.gr
===================
MUME 2019 - EXTENDED DEADLINE: MARCH 24, 2019
===================
======================
3rd CALL FOR SUBMISSIONS
======================
((( MUME 2019 )))
The 7th International Workshop on Musical Metacreation
http://musicalmetacreation.org
June 17-18, 2019, Charlotte, North Carolina
MUME 2019 is to be held at the University of North Carolina Charlotte in
conjunction with the 10th International Conference on Computational
Creativity, ICCC 2019 (http://computationalcreativity.net/iccc2019).
=== Important Dates ===
EXTENDED Workshop submission deadline: March 24, 2019
Notification date: April 28, 2019
Camera-ready version: May 19, 2019
Workshop dates: June 17-18, 2019
======================
Metacreation applies tools and techniques from artificial intelligence,
artificial life, and machine learning, themselves often inspired by
cognitive and natural science, for creative tasks. Musical Metacreation
studies the design and use of these generative tools and theories for music
making: discovery and exploration of novel musical styles and content,
collaboration between human performers and creative software “partners”,
and design of systems in gaming and entertainment that dynamically generate
or modify music.
MUME intends to bring together artists, practitioners, and researchers
interested in developing systems that autonomously (or interactively)
recognize, learn, represent, compose, generate, complete, accompany, or
interpret music. As such, we welcome contributions to the theory or
practice of generative music systems and their applications in new media,
digital art, and entertainment at large.
Topics
===========================
We encourage paper and demo submissions on MUME-related topics, including
the following:
-- Models, Representation and Algorithms for MUME
---- Novel representations of musical information
---- Advances or applications of AI, machine learning, and statistical
techniques for generative music
---- Advances of A-Life, evolutionary computing or agent and multi-agent
based systems for generative music
---- Computational models of human musical creativity
-- Systems and Applications of MUME
---- Systems for autonomous or interactive music composition
---- Systems for automatic generation of expressive musical interpretation
---- Systems for learning or modeling music style and structure
---- Systems for intelligently remixing or recombining musical material
---- Online musical systems (i.e. systems with a real-time element)
---- Adaptive and generative music in video games
---- Generative systems in sound synthesis, or automatic synthesizer design
---- Techniques and systems for supporting human musical creativity
---- Emerging musical styles and approaches to music production and
performance involving the use of AI systems
---- Applications of musical metacreation for digital entertainment: sound
design, soundtracks, interactive art, etc.
-- Evaluation of MUME
---- Methodologies for qualitative or quantitative evaluation of MUME
systems
---- Studies reporting on the evaluation of MUME
---- Socio-economical Impact of MUME
---- Philosophical implication of MUME
---- Authorship and legal implications of MUME
Submission Format and Requirements
=================================
Please make submissions via the EasyChair system at:
https://easychair.org/conferences/?conf=mume2019
The workshop is a day and a half event that includes:
-Presentations of FULL TECHNICAL PAPERS (8 pages maximum)
-Presentations of POSITION PAPERS and WORK-IN-PROGRESS PAPERS (5 pages
maximum)
-Presentations of DEMONSTRATIONS (3 pages maximum) which present outputs of
systems (working live or offline).
All papers should be submitted as complete works. Demo systems should be
tested and working by the time of submission, rather than speculative.
We encourage audio and video material to accompany and illustrate the
papers (especially for demos). We ask that authors arrange web hosting
for their audio and video files, and give URL links to all such files
within the text of the submitted paper.
Submissions do not have to be anonymized, as we use single-blind reviewing.
Each submission will be reviewed by at least three program committee
members.
Workshop papers will be published as the MUME 2019 Proceedings and will be
archived with an ISBN number. Please use the updated MuMe paper template to
format your paper. Also, please feel free to edit the licence entry (at the
bottom left of the first page of the new template). We created a new MUME
2019 template based on the AAAI template. The MUME 2019 LaTeX and Word
templates are available at:
http://musicalmetacreation.org/buddydrive/file/templates_mume2019/
Submissions should be uploaded using the MUME 2019 EasyChair portal:
https://easychair.org/conferences/?conf=mume2019
For complete details on attendance, submissions and formatting, please
visit the workshop website: http://musicalmetacreation.org
Presentation and Multimedia Equipment:
==========================================
We will provide a video projection system as well as a stereo audio system
for use by presenters at the venue. Additional equipment required for
presentations and demonstrations should be supplied by the presenters.
Contact the Workshop Chair to discuss any special equipment and setup
needs/concerns.
Attendance
=======================================
It is expected that at least one author of each accepted submission will
attend the workshop to present their contribution. We also welcome those
who would like to attend the workshop without presenting. Workshop
registration will be available through the ICCC 2019 conference system.
History
=======================================
MUME 2019 builds on the enthusiastic response and participation we received
for the past editions of the MUME series:
- MUME 2012 (held in conjunction with AIIDE 2012 at Stanford):
  http://musicalmetacreation.org/mume-2012/
- MUME 2013 (held in conjunction with AIIDE 2013 at NorthEastern):
  http://musicalmetacreation.org/mume-2013/
- MUME 2014 (held in conjunction with AIIDE 2014 at North Carolina):
  http://musicalmetacreation.org/mume-2014/
- MUME 2016 (held in conjunction with ICCC 2016 at Université Pierre et
  Marie Curie):
  http://musicalmetacreation.org/mume-2016/
- MUME 2017 (held in conjunction with ICCC 2017 at Georgia Institute of
  Technology):
  http://musicalmetacreation.org/mume-2017/
- MUME 2018 (held in conjunction with ICCC 2018 at Salamanca University):
  http://musicalmetacreation.org/mume-2018/
Questions & Requests
======================================
Please direct any inquiries/suggestions/special requests to one of the
Workshop Chairs, Bob Keller (keller(a)cs.hmc.edu) or Bob Sturm (bobs(a)kth.se).
Workshop Organizers
===================
Program Co-Chair
Robert M. Keller, Professor
Computer Science Department
Harvey Mudd College
301 Platt Blvd
Claremont, CA 91711 USA
https://www.cs.hmc.edu/~keller/
keller(a)cs.hmc.edu
Program Co-Chair
Bob L. Sturm, Associate Professor
Tal, Musik och Hörsel (Speech, Music and Hearing)
Lindstedtsvägen 24
School of Electronic Engineering and Computer Science
Royal Institute of Technology KTH, Sweden
https://www.kth.se/profile/bobs
bobs(a)kth.se
Concert Chair
Gus Xia, Assistant Professor
Computer Science
NYU Shanghai
gxia(a)nyu.edu
Publicity Chair
Dr. Oliver Bown
Senior Lecturer
Faculty of Art & Design, The University of New South Wales
Room AG12, Cnr Oxford St & Greens Rd,
Paddington, NSW, 2021, Australia
o.bown(a)unsw.edu.au
----------------------
http://musicalmetacreation.org
======================
MUME Steering Committee
Andrew Brown, Griffith University, Australia
Michael Casey, Dartmouth College, US
Arne Eigenfeldt, Simon Fraser University, Canada
Anna Jordanous, University of Kent, UK
Bob Keller, Harvey Mudd College, US
Róisín Loughran, University College Dublin, Ireland
Philippe Pasquier, Simon Fraser University, Canada
Benjamin Smith, Purdue University Indianapolis, USA
--
Kıvanç Tatar
----------------------------------
PhD Candidate
Interactive Arts + Technology
Simon Fraser University, Vancouver, Canada
Email: kivanctatar(a)gmail.com
Website: https://kivanctatar.com/
Howdy, please allow me to share a rather personal and detailed report with
this dear list about this release: the Cyclone library (a set of Pure Data
objects cloned from Max/MSP) is finally upgraded to version 0.3! The main
goal of Cyclone 0.3 was to update it to Max 7! Max 8 is out now and there
are minor updates that could be included in cyclone, which may be ported in
a possible future 0.4 release. Cyclone 0.3 also provides numerous fixes,
several new objects, and newly written documentation!
In the last release (cyclone 0.3 release candidate 1), we noted that we had
finished updating our last object to Max 7! The catch was that we still
needed to update [comment], which won't be updated to Max 6+. Now it has
been updated, though it's not yet fully compliant with Max 5 - nonetheless
it's "acceptable", and further work can be taken care of in future 0.3.x
releases. Anyhow, there were also other updates and fixes, and, with what
we have now, I just feel comfortable and happy to state: "*Mission
Completed: Cyclone 0.3 stable release is out!*".
This took 3 years. In fact, this release happens on the exact day of the
3rd anniversary of our repository. I was pretty clueless about how to code
externals when we started, and I still struggle a lot - even though I was
able to learn a thing or two in the meantime :) - which is to say that if
it weren't for my colleagues Derek and Matt, none of this would have been
possible! I feel I have to praise and thank them (which I can't do enough),
and I don't want to outshine them (since I'm the usual spokesman of the
project). We made a great team! My limitations actually came in handy, as I
could just deal with what I could, which was the most tedious and manual
labor that this project needed: revising every object, looking for
bugs, cleaning things up, and dealing with the less complicated stuff while
I learned how to code. Then, after a lot of attention, I could deliver
a good briefing so they could help me fix the more hardcore stuff without
all that hassle.
After this long period, it's also worth noting that we've lost steam. I can
only speak for myself, and I'm now taking care of my own library (thanks to
all I've learned dealing with cyclone, I must say). I can't promise that
cyclone will keep receiving the same attention from now
on, but don't expect it to be abandoned ;) The project is obviously open
for collaboration and any help is welcome. We actually had a very good
recent contribution from Diego Barrios Romero, who made it possible to
compile cyclone as a single library! Apparently he needed
this to load cyclone with libpd more conveniently. Further cyclone releases
may also bring the option of a single compiled binary. Instructions on how
to compile cyclone as a single library, and more about the project in
general, can be found here:
https://github.com/porres/pd-cyclone/blob/cyclone0.3/README.md Also check
the changelog for a full list of changes:
https://github.com/porres/pd-cyclone/blob/cyclone0.3/documentation/extra_fi…
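As an aside, here is roughly what loading a single-binary cyclone from
libpd could look like - a minimal C sketch. The libpd calls are the
standard ones from z_libpd.h, but the cyclone_setup() entry point is my
assumption (named after Pd's usual convention for library setup
routines); check the README above for the actual details:

    #include <stdio.h>
    #include "z_libpd.h"

    /* assumed setup symbol of a single-binary cyclone build;
       the actual name may differ - see the cyclone README */
    void cyclone_setup(void);

    int main(void) {
        libpd_init();                  /* initialize libpd */
        cyclone_setup();               /* hypothetical: registers cyclone's classes */
        libpd_init_audio(2, 2, 44100); /* stereo in/out at 44.1 kHz */

        /* turn DSP on, i.e. send [; pd dsp 1( */
        libpd_start_message(1);
        libpd_add_float(1);
        libpd_finish_message("pd", "dsp");

        /* open a patch that uses cyclone objects */
        if (!libpd_openfile("patch.pd", ".")) {
            fprintf(stderr, "could not open patch\n");
            return 1;
        }

        /* run one 64-sample block of silence through the patch */
        float in[2 * 64] = {0}, out[2 * 64];
        libpd_process_float(1, in, out);
        return 0;
    }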
Cyclone is now available for the main architectures via Pd (Help => Find
Externals) - it might take a while until it shows up in the system. You can
also get it from here:
https://github.com/porres/pd-cyclone/releases/tag/cyclone0.3
Lastly, I can't finish this message without thanking the Pd community in
general (especially Dan and IOhannes for being big players and great
leaders), but mostly Miller Puckette, of course, without whom there'd be no
Max or Pd (and therefore no cyclone). We obviously also need to thank the
original author of Cyclone, Krzysztof Czaja, who created this important
library for Pd! Hans-Christoph Steiner needs to be honored for maintaining
this library and keeping it in Pd-extended for a long time, and later Fred
Jan Kraan was also very important in starting to fix and update this
library after so long, in the cyclone 0.2 releases.
Cheers
17th Beta release of ELSE 1.0 - now with a total of 294 objects! This needs
Pd 0.49-0 or above! My Live Electronics Tutorial was also updated!
So, my last update from a few days ago had a bugged [conv~] object (which
performs partitioned convolution). This release fixes it and there are
other things too, see:
https://github.com/porres/pd-else/releases/tag/v1.0-beta17 for more
details. Get binaries also directly from Pd (Help => Find Externals).
Anyway, I can now finally say my Live Electronics Tutorial depends 100% on
the ELSE library alone. I've also made important revisions to it, and
I'm now reorganizing things in a new and unfinished volume 3! Check it
out:
https://github.com/porres/Live-Electronics-Tutorial/releases/tag/v-1.0beta-7
cheers
16th Beta release of ELSE 1.0 - now with a total of 293 objects! This needs
Pd 0.49-0 or above! Not much new in this release. The highlights are 3 new
objects: [conv~], [rec~] and [sample~]. See
https://github.com/porres/pd-else/releases/tag/v1.0-beta16 for more
details. Get binaries also directly from Pd (Help => Find Externals).
The [conv~] object is a partitioned convolution abstraction, but a compiled
object should come up sooner or later. One way or another, this object
removes the last external dependency from my Live Electronics Tutorial,
which now depends 100% on the ELSE library alone - that's what makes this
release special! Check it out:
https://github.com/porres/Live-Electronics-Tutorial/releases/tag/v-1.0beta-6
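By the way, for anyone wondering what "partitioned convolution" means
here: the impulse response is split into equal-size blocks, each block is
convolved with a correspondingly delayed stream of input blocks, and the
partial results are overlap-added, which keeps latency down to a single
block instead of the whole impulse response. Below is a minimal
time-domain C sketch of that idea (the names and block size are mine, not
[conv~]'s; a practical implementation - presumably [conv~] too - does the
per-partition convolutions in the frequency domain with FFTs for
efficiency):

    #include <string.h>

    #define B 64 /* partition (block) size in samples */

    /* One block of uniformly partitioned convolution.
       in:   B new input samples
       out:  B output samples
       ir:   impulse response, nparts * B samples
       hist: circular history of the last nparts input blocks
             (nparts * B floats, zeroed before the first call)
       pos:  index of the newest block in hist (starts at 0)
       tail: B-1 floats carrying the overlap into the next call
             (zeroed before the first call) */
    void pconv_block(const float *in, float *out,
                     const float *ir, int nparts,
                     float *hist, int *pos, float *tail)
    {
        float acc[2 * B - 1] = {0};

        /* store the newest input block in the circular history */
        memcpy(hist + (*pos) * B, in, B * sizeof(float));

        /* partition p convolves IR block p with the input
           block from p blocks ago */
        for (int p = 0; p < nparts; p++) {
            const float *x = hist + ((*pos - p + nparts) % nparts) * B;
            const float *h = ir + p * B;
            for (int i = 0; i < B; i++)
                for (int j = 0; j < B; j++)
                    acc[i + j] += x[i] * h[j];
        }

        /* overlap-add: output the first B samples now (plus the
           previous call's tail), save the remaining B-1 samples */
        for (int i = 0; i < B; i++)
            out[i] = acc[i] + (i < B - 1 ? tail[i] : 0.0f);
        memcpy(tail, acc + B, (B - 1) * sizeof(float));

        *pos = (*pos + 1) % nparts;
    }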
cheers