Dear List,
Today the Click Tracker Library reached 100 works from 51 different
composers. These resources can be turned into music by any interested
musicians.
In the newest additions you can find works by Oscar Bianchi, Aaron
Cassidy, Sara Glojnarić, Rama Gottfried, Bernhard Lang, Emmanuel Nunes,
Gerard Pesson, Enno Poppe, Yann Robin, Mauricio Sotelo, Hans Thomalla
and Ming Tsao.
If there are any works you might need or want to add to this collection,
let me know.
All the details are at https://bit.ly/ClickTrackerLibrary
With best regards,
João Pais
--
Click Tracker Mobile - https://bit.ly/click-tracker-mob
Click Tracker - http://j.mp/click-tracker
Facebook - http://j.mp/clicktrackerfb
2023 Call for Papers
The Korea Electro-Acoustic Music Society (KEAMS) announces a call for papers for the journal Emille.
Emille is an open-access, peer-reviewed journal formed to promote active research on computer music and to share its results. Submitted papers are selected through a review process and published in Vol. 21. Authors are not charged any submission, processing, or publication fees, thanks to support from Arts Council Korea.
If you want your paper to be considered for publication in 2023, please submit a manuscript (both a text file and a PDF document) and a curriculum vitae to emille[at]keams.org by the due date.
Please visit the link below for further information: http://keams.org/emille/call.html
Hi folks, here's a pre-test release of an update of ELSE. I need to test
whether the builds are working correctly, especially the Apple Silicon
builds. I especially need help testing [sfont~], [plaits~] and the new
[sfz~] object. I also need help compiling these 3 objects for Raspberry Pi
for the final release, and figuring out which Raspberry Pi architectures are
the sane and "best" ones to offer. Check the prior discussion in the
latest threads of the Pd list.
About the release: there's a bunch of cool new stuff, many new objects,
lots of them for multichannel fun. See the changelog below. No uploads to
deken yet; get the binaries from
https://github.com/porres/pd-else/releases/tag/v1.0-rc9-pre-test
cheers
--------------------------------------
CHANGELOG:
--------------------------------------
**LIBRARY:**
Breaking changes:
- [quantizer~], [rescale~], [fold~] and [wrap2~] no longer accept signal input
on their secondary inlets, only floats.
- [out1~] removed; use [out.mc~] instead.
- [sin~] doesn't take float input anymore.
- [pluck~] float input is now in the 0-127 range.
- [voices] "-list" flag removed, as this is now the default mode; use the
"-split" flag for the old default behavior.
Enhancements/fixes/other changes:
- [knob] and [numbox~]: use alt+click instead of double click to restore the
initial value.
- Added MC (multichannel connection) support for some objects (very few
gained it out of the box for being abstractions, like [gain~]). They are:
[ceil~], [floor~], [rint~], [trunc~], [sin~], [cents2ratio~],
[ratio2cents~], [db2lin~], [lin2db~] (now a compiled object), [gain~],
[samps2ms~], [ms2samps~], [mag~], [pol2car~]/[car2pol~] (now compiled
objects), [bitnormal~], [op~], [sig2float~], [float2sig~] (now an
abstraction), [rescale~], [fold~], [wrap2~] and [quantizer~]. (The math
behind the conversion objects is sketched below, after the object count.)
- Added "not" operation (!) to [op] and [op~].
- [adsr~] and [asr~]: fixed 2nd outlet in log mode.
- [stepnoise~], [rampnoise~] and [lfnoise~]: fixed the seed flag.
- [stepnoise~] and [rampnoise~]: fixed loading the Hz argument when there's a
"seed" flag.
- [rescale] and [rescale~]: fixed a bug when the first argument is higher than
the second.
- [slider2d] and [circle]: fixed "init" and shipping for Macs.
- 16 new objects: [nchs~], [get~], [pick~], [sum~], [sigs~], [out.mc~],
[osc.mc~], [imp.mc~], [rampnoise.mc~], [stepnoise.mc~], [select~],
[xselect.mc~], [merge~], [phaseseq~], [oscnoise~] and [sfz~].
Object count: a total of 492 (264 signal objects and 228 control objects)!
- 280 coded objects (179 signal objects / 101 control objects)
- 212 abstractions (85 signal objects / 127 control objects)
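For readers who don't know the conversion objects listed above: the math they
wrap is standard, and a minimal Python sketch of the usual definitions (just
an illustration, not ELSE's actual code) looks like this:

import math

def cents2ratio(cents):
    # interval in cents -> frequency ratio, e.g. 1200 cents -> 2.0 (one octave)
    return 2 ** (cents / 1200)

def db2lin(db):
    # decibels -> linear amplitude, e.g. -6 dB -> ~0.5
    return 10 ** (db / 20)

def ms2samps(ms, sr=44100):
    # milliseconds -> number of samples at sample rate sr
    return ms * sr / 1000

def pol2car(mag, angle):
    # polar (magnitude, angle in radians) -> cartesian (real, imaginary)
    return mag * math.cos(angle), mag * math.sin(angle)

def rescale(x, inlo, inhi, outlo, outhi):
    # map x from the range [inlo, inhi] to [outlo, outhi];
    # also works when a range is inverted (e.g. inlo > inhi)
    return outlo + (x - inlo) * (outhi - outlo) / (inhi - inlo)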
--------------------------------------
**TUTORIAL:**
- Revision of Quickstart chapter with new info on MC connections.
- New examples for new objects in ELSE. Total number of examples is 506
--------------------------------------
I couldn't load any of the externals on my MacBook Air (Mojave).
On Wed, 19 Jul 2023 at 15:12, Matthias Geier <matthias.geier(a)gmail.com> wrote:
> On Mon, Jul 10, 2023 at 10:13 PM Alexandre Torres Porres
> <porres(a)gmail.com> wrote:
> >
> > great stuff, hope I can look and give suggestions
>
> Thanks, please do!
>
> > and now that pd supports multichannel signals, do you think you can work
> > on objects that are multichannel aware?
>
> I don't have experience with multichannel connections, and AFAIU they
> are quite new, right?
>
> It would be great to extend the externals to support that, but they
> are implemented with the flext library, which isn't really maintained
> anymore, so I guess it won't get multichannel support.
>
> One option would be to drop the flext implementation and rewrite them as
> "pure" Pd externals.
> However, I don't think that I'll have the time to do that ... but if
> somebody else wants to contribute it, that would be great!
>
> cheers,
> Matthias
>
> >
> > congrats
> >
Hi all.
We have recently released version 0.6.1 of the SoundScape Renderer
(SSR), which is a tool for real-time spatial audio reproduction
providing a variety of rendering algorithms, e.g. Wave Field
Synthesis, Higher-Order Ambisonics and binaural techniques.
Most of the rendering back-ends of the SSR have been available as Pd
externals for many years; the Binaural Room Synthesis (BRS) renderer
was added recently.
However, the externals had to be compiled manually, which might have
kept some potential users from trying them.
But now, the SSR externals are also available with Deken!
I don't know if I have to explain it on this list, but just in case:
go to "Help" -> "Find externals", search for "ssr" and click
"install"!
Then, put an object [declare -path ssr] into your patch, and now you
can use the externals ssr_binaural~, ssr_brs~, ssr_dca~, ssr_aap~,
ssr_wfs~ and ssr_vbap~.
You can feed audio signals to the externals and use messages to change
their positions (and volume etc.). For more details, see the help
patches.
Alternatively, you can use the brand-new asdf~ external to load a
dynamic audio scene using the Audio Scene Description Format (ASDF,
https://audioscenedescriptionformat.readthedocs.io/). This will
provide the source signals and the control messages for you.
Again, just search for "asdf", click "install" and put an object
[declare -path asdf] into your patch.
Currently, there are not many example scenes available, but for
starters you can have a look here:
https://github.com/AudioSceneDescriptionFormat/asdf-example-scenes/
This is our first generation of Pd externals via Deken. If you find
any problems or if you have suggestions for improvements, please let
us know via the issue trackers at
https://github.com/SoundScapeRenderer/ssr or
https://github.com/AudioSceneDescriptionFormat/asdf-rust.
cheers,
Matthias
To Pd-announce:
Pd version 0.54-0test1 is available from http://msp.ucsd.edu/software.htm
or (source only) via github: https://github.com/pure-data/pure-data
New features include multi-channel signal connections and updated
Windows audio. The new Windows version uses an audio subsystem that
became available in Windows 7 (so it won't work on Windows XP anymore, but
should be fine on Windows 7 or newer). There are many other
improvements and updates.
cheers
Miller
ELSE 1.0 RC8 with Live Electronics tutorial is out and available from deken
or here https://github.com/porres/pd-else/releases/tag/v1.0-rc8 where you
can also find all changelog details...
We got 2 nice new objects. One is [knob]!!! Finally a rotary knob in ELSE, and
one that I think will cover all purposes. The other is the [filterdelay~]
object, a nice high-level delay unit. ELSE now comes with an object browser
plugin; in order to load it you have to add the 'else' folder to Pd's search
path.
A not fully up-to-date rc8 version is available in the last stable release
of PlugData, but it will catch up soon and you can get it from the nightly
builds. I'm still missing Raspberry Pi binaries; I hope someone can help me
compile [plaits~] and [sfont~] for it.
Please support me on patreon if you use my stuff a lot
https://www.patreon.com/porres
thanks
Alex
"Stop Listening. Start Jamming!
This is a journey into Sound....for Ninja Tune founders Coldcut it’s been a
30 year trip: equal parts dreaming, scheming, designing, remixing, djing,
producing, gigging and Jamming. The aim of Jamm Pro: to create our ultimate
electronic music instrument and share it. Welcome to the journey!" (from
https://jammpro.net/)
JammPro is a live-remixing app: load a sound set, then play one of 8
possible loops on each of the 4 channels. You can then apply tons of
effects and modifiers, and manually trigger 9 other samples; the
multitouch capability of mobile devices is fully exploited here, which makes
JammPro a real musical instrument. If you purchase the "Jamm Creator"
addon, you can also import new samples, or even record your own loops from
the phone's mic (or from an external device), and create your own sound
set. You can also purchase new sound sets from plenty of excellent artists.
JammPro is almost entirely written in Pd. It originates from the prior
NinjaJamm app, whose Pd audio patch had initially been written by Ed Kelly.
At that time, the GUI was written in a mix of openFrameworks C++ and iOS
Objective-C; when I was asked to port it to Android (in 2014), I had the
idea to build a Pd external toolkit, written in OF, which was then used to
rewrite the GUI as a Pd patch. This patch can be edited on the desktop (using
a secondary window to display the GUI, just like with Gem) and can be
interpreted by a libpd/ofxPd (thanks Dan Wilcox!) app on any mobile
platform. I called this toolkit Pof, for Pd-OpenFrameworks.
All this to say that JammPro owes everything to Pd, libpd and ofxPd! Thanks
a lot.
This app is fun.
It's free (*) and many sound sets are available for free; we worked really
hard to make it work as well as possible (believe me, the quest for
low-latency full-duplex audio on Android is a lot of fun!). We really hope
you'll enjoy it!
Antoine Rousseau - and the JammPro team
(*) p.s: about open-source
Unfortunately, we have not had time until now to publicly open the JammPro
patch.
We sincerely regret that, and it is in our plans to change this. This is
not trivial, because some parts must remain secret, being related to
selling sound sets and the Creator option.
In the meantime, Pof has been a free project from the beginning. It's
available from Deken for Intel-Mac and Linux (including rPi) platforms;
Windows and Arm-Mac to come, hopefully. The code is there:
https://github.com/Ant1r/ofxPof
Dear all,
This is a kind reminder of the open position at Aalto University in Creative and Expressive Sonification of Human Movement. The application deadline is in less than two weeks, on 17 April 2023.
We are looking for a full time Doctoral Researcher / Project Employee to work with us on the Sonic Move - Creative and Expressive Sonification of Human Movement project. The project investigates the use of human body movements in the synthesis of sonification for music and its use in dance and augmented game applications.
Your role and goals
You will be developing interactive sonification models that are more directly linked to supporting creative exploration of dance movements and seek to understand/explore how these models impact dance practices. You will develop and implement algorithms for real-time sonification of human body movements as well as collaborate with choreographers and dancers to create dance and game applications that use the sonification system. You will also publish research papers in journals and present research findings at international conferences and workshops.
Your experience and ambitions
You should have a firm interest in and understanding of data sonification applied to arts and music, as well as advanced programming skills and knowledge of audio processing and synthesis. Familiarity with creative AI methods and deep learning models for data analysis and sonification is a plus.
How to apply
To apply for this position please submit an application for a “Doctoral Researcher / Project Employee in Sonification of Dance Movements” at the following link by the deadline of 17th April:
https://aalto.wd3.myworkdayjobs.com/en-US/aalto/details/Doctoral-Researcher…
The Sonic Move - Creative and Expressive Sonification of Human Movement is a joint research project in collaboration with the SOPI research group at Aalto University School of ARTS, Sensing Solutions at VTT Technical Research Centre of Finland, the HUMEA Lab at the University of Eastern Finland, Genelec, and Minimi Dance Company. The project is supported by Business Finland funding.
Best,
Koray
-------------------------------------
M.Koray Tahiroğlu
Department of Art and Media,
Aalto University
School of Arts, Design and Architecture
http://sopi.aalto.fi/
http://dmi.aalto.fi/
http://dmi.aalto.fi/koraytahiroglu/
tel. +358 50 4088441
Announcing the first alpha release of WebPd 1.0.0 !
WebPd is a highly modular compiler for Pure Data, allowing you to run .pd
patches on web pages. It converts the audio graph and processing
objects from a patch into plain, human-readable JavaScript or
WebAssembly, which can then be integrated directly into any web
application.
So far:
- 120 objects implemented (https://github.com/sebpiq/WebPd/blob/main/ROADMAP.md)
- An online patch player and compiler (https://sebpiq.github.io/WebPd_website)
- Several compilation outputs: JavaScript, WebAssembly, WAV, etc.
Test the new WebPd with the online compiler
(https://sebpiq.github.io/WebPd_website) or using the CLI
(https://github.com/sebpiq/WebPd/#using-the-cli). Many patches won't
work yet, as many features are still missing and many bugs are still
waiting to be found. I'm counting on your input and your bug reports
to move this release closer to 1.0.0! The project also welcomes
motivated contributors (docs are still mostly missing, but I can help
you get started).
More info about WebPd here (https://github.com/sebpiq/WebPd).
Many thanks to Guillaume Pellerin & Guillaume Piccarreta from IRCAM
WAM team, and to Thor Magnusson & Francisco Bernardo from University
of Sussex for their invaluable support in pushing this through.
Many thanks also to the sponsors: IRCAM & the DAFNE+ project
(https://dafneplus.eu/), University of Sussex, and all the generous
donors at Open Collective (opencollective.com/webpd) 💜.
Please use and share 💖 !
--
Sébastien Piquemal
----- @sebpiq
----- https://github.com/sebpiq
----- https://second-hander.com
hi list,
releasing artnetlib:
---------readme-----------
turn your Pd patch into an Art-Net controller.
Art-Net is an Ethernet protocol based on the TCP/IP protocol suite. Its
purpose is to allow transfer of large amounts of DMX512 data over a wide area
using standard networking technology.
https://en.wikipedia.org/wiki/Art-Net
https://www.artisticlicence.com/WebSiteMaster/User%20Guides/art-net.pdf
artnetlib is a Pd library with 5 objects:
- [artnetfromarray]
  - polls a Pd array and converts the values to a list of DMX 1-byte ints
- [artnetsend]
  - formats a Pd list of ints with an _ArtDMX_ header where you specify
    "physical" and "universe".
- [artnetudp]
  - sends the _ArtDMX_ packet to a specified IP (see the packet sketch after
    the readme)
  - sends _ArtPoll_ and receives _ArtPollReply_ (used to discover the
    presence of other Controllers, Nodes and Media Servers)
  - receives data from other Art-Net compatible devices
- [artnetroute]
  - routes received _ArtDMX_ packets according to their "physical" and
    "universe".
- [artnettoarray]
  - converts _ArtDMX_ packets to a Pd list.
--------------------
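To make the _ArtDMX_ packet mentioned above concrete, here is a rough Python
sketch (not artnetlib code, which handles all of this for you inside Pd) that
builds such a packet by hand and sends it over UDP; the node IP below is a
placeholder:

import socket
import struct

def artdmx_packet(dmx, universe=0, physical=0, sequence=0):
    # ArtDMX layout: "Art-Net" ID, OpDmx opcode, protocol version 14,
    # sequence, physical, 15-bit universe (port-address), data length,
    # then up to 512 bytes of DMX channel data (one byte per channel).
    data = bytes(dmx[:512])
    if len(data) % 2:                       # payload length must be even
        data += b'\x00'
    return (b'Art-Net\x00'
            + struct.pack('<H', 0x5000)     # OpCode OpDmx, low byte first
            + struct.pack('>H', 14)         # protocol version
            + bytes([sequence, physical])
            + struct.pack('<H', universe)   # SubUni + Net bytes
            + struct.pack('>H', len(data))  # data length
            + data)

# Set channels 1-3 to full on universe 0 (Art-Net nodes listen on UDP 6454).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(artdmx_packet([255, 255, 255]), ('192.168.1.50', 6454))

This is roughly the packet that [artnetsend] and [artnetudp] assemble and
transmit for you.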
repository: https://github.com/Lucarda/pd-artnetlib
bug reports: https://github.com/Lucarda/pd-artnetlib/issues
happy lighting!!!
:)
Lucarda
--
Mensaje telepatico asistido por maquinas.
New Master in Composition with Focus on AI - HfM Trossingen
Application period March 1 - April 1, 2023
Starting in October 2023, the University of Music (HfM) Trossingen
(Germany) will offer a master's degree program that is unique in Europe
and aims to train a new generation of artistically and technically
competent composers, sound artists, and music designers, especially by
critically exploring the creative possibilities of AI-based technologies.
This program (Master of Music in Composition) builds on the music
technology teaching foundations of the HfM Trossingen and takes place in
close cooperation with Furtwangen University. The students are taught by
an internationally renowned team of researchers and experienced artistic
practitioners, first and foremost Prof. Dr. Luc Döbereiner and Prof. Dr.
Joachim Goßmann. In addition, this course is being developed within the
framework of a cross-university project funded by the BMBF (KISS -
Artificial Intelligence Service and Systems), whose long-term goal is to
establish a center of excellence for the sustainable development of AI.
As part of the master's program, interested students with prior musical
and/or technical experience at the bachelor's level can choose one of
three concentrations: Music Design, Instrumental Composition, or
Electroacoustic Composition. Available modules of study range from
Digital Lutherie, Experimental Sound Synthesis, and Interface Design to
Sound Ecology and Digital Ethics. In addition, a new space for the
conception, experimentation and realisation of artistic projects
("Latent Space" - Space for Artistic Research and Design in Music and
AI) is being created.
The entrance examination consists of two phases. In the first phase, in
addition to the general application documents, the following documents
must be submitted (by post, on a USB stick with a max. file size of 1 GB):
Motivation letter (max. 1 page)
Curriculum vitae including chronological overview of
musical/artistic development
Media documentation of own works with explanation (min. 2 pages)
Exposé - description of the planned artistic project (max. 3 pages)
After reviewing the documents, the examination committee selects
candidates for an in-person examination in Trossingen. This second phase
consists of an interview about the submitted work and the exposé
(approx. 30 min).
Further information about the program and the application requirements can be
found on the following page: https://www.hfm-trossingen.de/ai-in-music
contact: l.doebereiner(a)doz.hfm-trossingen.de
ELSE 1.0-0 rc7 with the Live Electronics Tutorial is out. These ELSE updates
are only happening this often to pair up with PlugData releases, which is
rocking hard (expect a PlugData release announcement soon) - btw, this
means I'm not really following my release plans...
On breaking changes, I'm highlighting that I'm removing the recently added
support for pd-lua. It is still available in PlugData, though! I couldn't
really get into it, and Albert Graef is really active in its development. So
now you have to download it separately for vanilla, and yes, I uploaded the
latest version (0.11.6) to deken as well. I thought I could maybe create my
own pd-lua variant, but I gave up on it. I'd still like to offer something
like ofelia does, but it's way beyond my limits and way down in my
priorities. By the way, Albert also started porting ELSE to Purr Data, check
it out ==>
https://github.com/agraef/purr-data/releases/tag/2.19.2+ELSE
There are also many bug fixes and new features. Let me highlight that most
of my oscillators now have built-in "Soft Sync" capabilities! They can also
optionally take pitch in MIDI, which helps with exponential FM. As for new
objects, I'm including [beats~], a BPM detection object based on aubio. Seb
Shader also contributed a new [keycode] object that responds to computer
keyboard keys with layout independence, and I'm using it for another object
([keymap]) that turns your computer keyboard into a MIDI keyboard input.
Lastly, I'm also including [plaits~], which is based on the Plaits module
from Mutable Instruments. More clones from Mutable Instruments are coming,
and I have to say I have big plans in 2023 to also design eurorack-inspired
abstractions (like MAX's BEAP and Automatonism). This should be a submodule
in ELSE and available in PlugData too, but it will have its own name and
repository (it is called *Modular EuroRacks Dancing Along* *[M.E.R.D.A.]*).
I also have a new chapter in my tutorial about 'CV' (Control Voltage). So,
yeah, I'm going modular...
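A side note on the "pitch in MIDI" option mentioned above: MIDI pitch is
logarithmic in frequency, so modulating the pitch input linearly gives
exponential FM, i.e. the modulation spans the same musical interval up and
down. A tiny Python illustration of the conversion (not ELSE code):

def mtof(m):
    # MIDI pitch -> frequency in Hz (equal temperament, A4 = MIDI 69 = 440 Hz)
    return 440.0 * 2 ** ((m - 69) / 12)

# +/- 1 semitone around middle C (MIDI 60):
print(mtof(59), mtof(60), mtof(61))  # ~246.94, ~261.63, ~277.18 Hz
# The ratio between neighbouring values is a constant 2**(1/12), so equal
# pitch offsets give equal musical intervals, not equal Hz offsets.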
ELSE 1.0-0 RC7 is up on deken for Linux/Mac/Windows 64-bit versions. I'm
having issues building for Raspberry Pi (same for pd-lua) but I can upload it
if someone helps me with that. The total number of objects is now 474 and the
total number of examples in the tutorial is 498 (expect me to go beyond 475
objects and 500 examples in the next update). Here's the full changelog:
https://github.com/porres/pd-else/releases/tag/v1.0-rc7
If you've read this far, you probably care about this project of mine. May
I ask you then to consider supporting me on
PATREON --> https://www.patreon.com/porres?l=en
I promise to focus on issues and requests from subscribers, and this already
happened for this release, where I was able to improve [else/bicoeff]
thanks to 'Esa Ruoho'. I also promise to take motivation from patreon
support to keep contributing to Pd Vanilla as well :) as in my tedious and
long revisions of the documentation and other stuff.
Cheers
Celebrating the 7th anniversary of our takeover, Cyclone 0.7-0 has been
released today, February 21st, 2023.
The main goal for these 7 years has been updating cyclone to MAX 7, the major
release version at the time of the takeover in 2016. I thought we had taken
care of all updates, but we missed [mtr], an object that got updates sometime
after 7.0, and ironically it is one that got too much new stuff. Amongst new
messages and attributes, there are almost 30 new things, no kidding, many of
which can't even be included in cyclone because we miss some infrastructure
like dictionaries and the transport.
We included some of the main things and started dealing with it... we now
have 'speed', 'trackspeed', 'embed' and 'loop', but the thing is that the
design of this object is questionable and things don't work quite well,
like 'looping'. This is a usual problem in Cyclone, implementing things as
they are originally instead of how they should be if designed in a
sane way... [mtr] is also a feature creep nightmare and some things will be
really hard to implement, but let's see.
Worst case scenario we'll stop at one point and say we couldn't make it and
only reached like 99% of our goal :) I guess I cannot help but give the
impression that I am not so happy and excited about keeping up with this; I
am really busy with ELSE and also Vanilla stuff. We may be reaching the
point where Cyclone's development of new stuff will stop and we'll keep it
in maintenance mode.
I am not sure we'll pursue MAX 8 updates, but maybe that can happen if
nothing crazy like [mtr] happens. For now, not much really happened in MAX
8 so I don't wanna discard it yet. Let's see if we can do it when MAX 9
comes out. The thing is that cyclone contains a limited set of MAX/MSP
objects and MAX has been adding new things all the time, which makes it a bit
pointless to keep a limited set of things updated to the latest version. The
other thing
is not caring much about this anymore after 7 years :)
Another thing in this release is that [comment] is basically done; before
realizing the [mtr] updates were missing, this was the last thing that needed
finishing, and we also said we couldn't include all features from MAX 7 in
it. This object now also has extra features just for cyclone, something
that rarely happens in Cyclone. To be honest, this is now a clone of
[else/note], an object that originally started based on cyclone/comment.
Anyway, this thing finally has a properties window, thanks to Tim Schoen, and
is now acceptable!
I did another funny thing: I expanded [pink~], which now also has extra
features; it is now the same object as [else/pink~] and is backwards
compatible with the old cyclone/pink~. I hope people don't mind this.
Full changelog at
https://github.com/porres/pd-cyclone/releases/tag/cyclone_0.7-0
It should be up on deken.
Cheers
IEM Music Residency Programs 2023 - Call for Applications
University of Music and Performing Arts Graz (KUG)
https://iem.at/
(please distribute)
The IEM – Institute of Electronic Music and Acoustics – in Graz, Austria
is happy to announce the calls for its 2023 residency programs.
You can apply for two different residencies: the Artistic Residency (1)
and the Artistic Research Residency (2).
(1) Artistic Residency
The residency is aimed at individuals wishing to pursue projects in
performance, composition, installation, sound art, development of tools
for art production, and related areas. Individuals are asked to submit a
project proposal that is related to the following research fields of the
IEM:
* Algorithmic Composition
* Algorithmic Experimentation
* Audio-Visuality
* Dynamical Systems
* Experimental Game Design
* Live Coding
* Sonic Interaction Design
* Spatialization/higher-order Ambisonics
* Standard and non-standard Sound Synthesis
Duration of residency: 3 months
Start date: June 1st 2023 (negotiable)
APPLICATION DEADLINE: February 28th 2023
Please reply to the official call by KUG for a University Assistantship
(in German and English):
<https://www.kug.ac.at/fileadmin/01_Kunstuniversitaet_Graz/05_News/Mitteilun…>
(2) Artistic Research Residency
The residency is aimed at individuals wishing to pursue an artistic
research project in close collaboration with an IEM staff member and
related to the research fields of the IEM (see list above under (1)).
Duration of residency: 3 months
Start date: September 1st 2023 (negotiable)
APPLICATION DEADLINE: April 30th 2023
Please reply to the official call by KUG for a University Assistantship
(in German and English):
<https://www.kug.ac.at/fileadmin/01_Kunstuniversitaet_Graz/05_News/Mitteilun…>
The Institute:
The Institute of Electronic Music and Acoustics is a department of the
University of Music and Performing Arts Graz founded in 1965. It is a
leading institution in its field, with a staff of more than 35 researchers
and artists. IEM offers education to students in composition
and computer music, sound engineering, sound design, contemporary music
performance, and musicology. It is well connected to the University of
Technology, the University of Graz as well as to the University of
Applied Sciences Joanneum through three joint study programs.
The project results will be released through the Institute's own Open
CUBE and Signale concert series, as well as through various
collaborations with international artists and institutions.
What we expect from applicants:
- A project proposal that adds new perspectives to the Institute's
activities and resonates well with the interests of IEM.
- Willingness to work on-site in Graz for the most part of the Residency.
- Willingness to exchange and share ideas, knowledge and results with
IEM staff members and students, and engage in scholarly discussions.
- The ability to work independently within the Institute.
- A dissemination strategy as part of the project proposal that ensures
the publication of the work, or documentation thereof, in a suitable
format. This could be achieved for example through the release of media,
journal or conference publication, a project website, or other means
that help to preserve the knowledge gained through the Residency and
make it available to the public.
- A public presentation, e.g. a concert or installation, presenting the
results of the Residency.
What we offer:
- 24/7 access to the facilities of the IEM.
- Exchange with competent and experienced staff members.
- A desk in a shared office space for the entire period and access to
studios including the CUBE [1], according to availability.
- Extensive access to the studios of the IEM during the period from July
1st until the end of September.
- Access to the IKOsahedron loudspeaker [2].
- Access to the “Autoklavierspieler” [3].
- Infrared motion tracking systems.
- Regular possibilities for contact and exchange with peers from similar
or other disciplines.
- Concert and presentation facilities (CUBE 30 channel loudspeaker
concert space).
What we cannot offer to the successful applicant:
- We cannot provide any housing.
- We also cannot provide continuous assistance and support, although the
staff is generally willing to help where possible.
- We cannot host artist duos or groups, because of spatial limitations.
- We cannot offer any additional financial support for travel or
material expenses.
Feel free to contact residency(a)iem.at if you have any questions.
[1] The Cube has a 30-channel loudspeaker system
[2] https://iko.sonible.com/
[3] https://algo.mur.at/projects/autoklavierspieler
(Apologies for cross-postings, please distribute widely)
ICAD 2023
2nd Call for Submission of Papers, Extended Abstracts,
Workshops/Tutorials, and Music/Installations
28th International Conference on Auditory Display
Norrköping, Sweden
26 June – 01 July 2023
https://icad2023.icad.org
THEME: “SONIFICATION FOR THE MASSES”
After decades of research dedicated to sonification in terms of audio
signal processing, aesthetics, art and design, perception, awareness and
human factors, accessibility, auditory augmentation and mixed reality,
auditory displays have reached a state with tools and principles that
work. It is about time for the breakthrough of sonification in terms of
implementation in mass media and consumer markets: Sonification for the
masses! The ICAD 2023 committee is seeking full papers, abstracts,
concert pieces, demos, installations, workshops, and tutorials from
across the full spectrum of auditory display research and practice. This
year, we are particularly interested in use cases from everyday life,
implementations in mass media, sonification evaluation by wider
populations, concepts of sonification in education, learnability and
pleasantness of sonifications, and other ways to tackle the theme of
sonification for the general public. How informative, aesthetic,
universally applicable or specified do sonifications need to be in order
to be beneficial for users outside the audio community? During the
student think tank, students and young researchers get the chance to
exchange ideas and learn from established members of the auditory
display community. As a satellite event, we are organizing the
Sonic-Tilt competition, where researchers, students and artists submit
their own sound design for Tiltification, a bullseye spirit level
available in the Google Play Store and Apple App Store.
SUBMIT HERE:
https://easychair.org/conferences/?conf=icad2023
SUBMISSION DEADLINES:
Full Papers (6 to 8 pages) - March 1, 2023
Extended Abstracts (2 to 4 pages) - March 15
Workshops / Tutorials / Demos - March 15
Live Performances / Installations - March 15
Doctoral Consortium / Think Tank - March 15
For details on topics of interest, proposal format, submission
instructions, and additional conference information, please visit
https://icad2023.icad.org/cfp/
COMMITTEES:
Papers: Tim Ziemer and Marian Weger – papers(a)icad2023.icad.org
Extended Abstracts: Jonas Löwgren and Michael Nees –
abstracts(a)icad2023.icad.org
Music: Michael Iber – music(a)icad2023.icad.org
Workshops & Tutorials: Emma Frid – workshops(a)icad2023.icad.org
Installations & Demos: Jordan Wirfs-Brock – installations(a)icad2023.icad.org
Student Think Tank: Sara Lenzi – thinktank(a)icad2023.icad.org
Sonic-Tilt Competition: Tim Ziemer – sonictilt(a)icad2023.icad.org
Accessibility & Webmaster: Katie Wolf – accessibility(a)icad2023.icad.org
About ICAD:
First held in 1992, ICAD is a highly interdisciplinary conference with
relevance to researchers, practitioners, artists, and graduate students
working with sound to convey and explore information. The conference is
unique in its specific focus on auditory displays and the range of
interdisciplinary issues related to their use. Like its predecessors,
ICAD 2023 will be a single-track conference, open to all, with no
membership or affiliation requirements.
Niklas Rönnberg
Chair of ICAD 2023
chair(a)icad2023.icad.org
Hi everyone!
I’ve never officially posted here about plugdata yet, so here we go! plugdata v0.6.4 is out now; you can get it here: https://github.com/plugdata-team/plugdata/releases/tag/v0.6.4
Alexandre Porres informed you a few days ago about v0.6.3; this version fixes some of the most important bugs we encountered there.
plugdata is a Pd flavour with a new GUI made with JUCE and the ability to run either standalone or as a VST3/AU/LV2 plugin in any DAW. It comes with the ELSE and cyclone libraries built in. You can set up your own themes and keyboard shortcuts, it has a built-in package manager (similar to Deken), and most objects/inlets/outlets have tooltips that tell you how to use that object. Recent versions also integrate with the Heavy compiler, allowing you to export patches directly to Electro-Smith Daisy, audio plugins, or C++ code.
Tim


To Pd announce -
The pd~ Max object (which allows you to embed Pd patches in Max/MSP) is now
available for the M1 architecture in a test version (0.55test1). You can
get it at the usual spot: http://msp.ucsd.edu/software.html
It _should_ be possible to run it in Max version 8 for either intel or "M1"
(apple) architecture, using Pd compiled for either architecture - but I haven't
tried out all the permutations yet.
Bug reports are most welcome, either to Pd list or to me personally.
cheers
Miller
PlugData 0.6.3 is out and it includes the latest ELSE 1.0 rc6!
You know PlugData, this amazing fork of Pure Data with a MAX-like interface
coded in JUCE, which also runs as a plugin in any DAW (VST/AU/LADSPA) on any
OS (Mac/Win/Linux)! Version 0.6.3 is just out, see
https://github.com/plugdata-team/plugdata/releases/tag/v0.6.3
This software supports and includes my ELSE library for Pure Data, which
now has 473 objects and counting! It is not fully supported yet, with a
few GUI objects still missing, but the plan is to fully support it by the
time PlugData 1.0 is out. It also includes the Cyclone library, which clones
objects from MAX (so it's an entry point for MAX users) and is another
library I take care of.
Cheers