In an effort to understand a bit more about neural networks, I wrote a
Pd external by translating the Python code from the book "Neural
Networks from Scratch in Python" to C. After a couple of months of
work, I ended up with [neuralnet].
This is an object written in pure C, without any dependencies, for
creating densely connected neural networks for classification,
regression, and binary logistic regression. You can choose among
different activation and loss functions, optimizers, and other settable
features. I've created some examples, some replicating examples from
the aforementioned book and some of my own.
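To give a flavour of what that translation involves, here is a minimal C
sketch of the dense-layer forward pass at the heart of such networks (my
own illustration with hypothetical names, not the actual [neuralnet]
source):

    #include <stddef.h>

    /* Forward pass of one densely connected layer with ReLU activation.
     * weights is n_out x n_in, row-major. Illustrative only. */
    static void dense_forward(const float *in, size_t n_in,
                              const float *weights, const float *bias,
                              float *out, size_t n_out)
    {
        for (size_t i = 0; i < n_out; i++) {
            float sum = bias[i];
            for (size_t j = 0; j < n_in; j++)
                sum += weights[i * n_in + j] * in[j];
            out[i] = (sum > 0.f) ? sum : 0.f; /* ReLU */
        }
    }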
The code is on GitHub (https://github.com/alexdrymonitis/neuralnet), and
Linux amd64 and armv7-32 (Raspberry Pi) binaries have been uploaded to Deken.
I don't have a Mac or Windows machine, and I don't know how to compile
for these architectures on a Linux machine (or if that is even
possible). I would be grateful if anyone could compile for these
architectures and upload the binaries to Deken.
I would also love to get feedback both on how the object performs, and
on the source code itself.
Hi all,
flite (a text-to-speech external) is good enough for a release. It includes:
- single binary without dependencies.
- 5 built-in voices
- can load .flitevox voice files (English only)
- read from a text file
- threaded functions for "synthesis" and "read file"
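For the curious: the external wraps the flite C library, whose API is
quite small. A rough sketch of a plain synthesis call (assuming the
cmu_us_kal voice is compiled in; error handling omitted):

    #include <flite/flite.h>

    /* registration function provided by the compiled-in voice */
    cst_voice *register_cmu_us_kal(const char *voxdir);

    int main(void)
    {
        flite_init();
        cst_voice *v = register_cmu_us_kal(NULL);
        cst_wave *w = flite_text_to_wave("hello from Pd", v);
        cst_wave_save_riff(w, "hello.wav"); /* write the result as WAV */
        delete_wave(w);
        return 0;
    }

The external adds the Pd-specific parts (threading, voice handling, etc.)
on top of calls like these.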
You can get it from Deken or at
https://github.com/Lucarda/pd-flite/releases/tag/0.3.2
There are no Mac M1 or Linux ARM builds, but compiling (and uploading to
Deken) shouldn't be a problem for those who want to.
:)
--
Telepathic message assisted by machines.
Please pardon the cross-posting. My COMPEL project collaborators and I
would appreciate it if you would please distribute this email widely among
various communities whose work is rooted in computer music, starting with
composers, performers, and instrument and installation designers.
Dear all,
As part of the preparations for the Workshop on NIME Archiving
<https://nime.pubpub.org/pub/oyi0po4b>, to be held on 28 June, we are looking for
volunteers to fill out one or more records of *artifact*s (defined broadly)
in this survey:
https://forms.gle/A8zNrFVxs9N4aBcp9
The idea is to check whether categories developed for the COMPEL archive
make sense from the community's perspective. We ask that you please
consider filling out the survey *before 24 June* so that we have a couple
of days to look at the results before the workshop. Feel free to make
entries even if you cannot make it to the workshop!
Given that this effort may benefit the broader computer music community,
please note that *both the survey and the workshop are open to anyone
interested in participating*, regardless of whether they are registered
for the conference. Since NIME is an online-only conference this year, *the
Zoom link will be shared with all survey contributors and conference
participants soon*.
The workshop will continue discussions in the community on how to best
preserve information from the NIME conferences, the NIME community, and the
computer music community at large. The workshop will follow up on threads
from the NIME publication ecosystem workshop
<https://nime2020.bcu.ac.uk/nime-publication-ecosystem-workshop/> (NIME
2020, Birmingham), ICMC 2018 paper
<https://dblp.org/rec/conf/icmc/BukvicO18.html>, SEAMUS 2018 conference
presentation, and the NIMEhub workshop
<https://www.duo.uio.no/handle/10852/50604> (NIME 2016, Brisbane). As we
rebuild the COMPEL platform to sidestep the technological limitations of
the old infrastructure, the main task is to find an open, future-oriented,
engaging, and institutionally recognized archiving solution for the
activities of the community, one that ensures *reproducibility* of
archived artifacts. While NIME publications are archived according to the FAIR
principles <https://www.go-fair.org/fair-principles/>, currently no
solutions exist for archiving information about instruments/interfaces and
other hardware/software-based artifacts produced in the community. Neither
do we have a system for describing and preserving compositions/pieces,
installations, performances, and workshops. We believe that this challenge
affects the computer music community at large. The goal of this workshop
and forum discussions
<https://forum.nime.org/t/survey-and-workshop-on-nime-archiving/306> is to
propel the project forward and expand community engagement.
Thank you for your consideration and participation. Should you have any
questions, please do not hesitate to contact one of the workshop organizers
<https://nime.pubpub.org/pub/oyi0po4b>.
Best,
Ico
--
Ivica Ico Bukvic, D.M.A.
Director, Creativity + Innovation
Director, Human-Centered Design iPhD
Institute for Creativity, Arts, and Technology
Virginia Tech
Creative Technologies in Music
School of Performing Arts – 0141
Blacksburg, VA 24061
(540) 231-6139
ico(a)vt.edu
ci.icat.vt.edu | l2ork.icat.vt.edu | ico.bukvic.net
TL;DR: Faster, better, stronger, WebPd 1.0 is coming (featuring
WebAssembly, AudioWorklet, and more) ... but it needs your support!
WebPd is a highly modular web audio programming toolkit inspired by Pure Data.
→ it allows Pure Data patches to run in web pages, enabling
non-programmers (artists, musicians, etc.) to design live and
interactive audio for the web.
→ it provides experienced web programmers with a complete audio
toolkit that is production-ready, and enables efficient audio
synthesis and processing in the browser.
🌱 You can try a demo of the upcoming version here:
https://sebpiq.github.io/WebPd_demos/the-graph/www/.
🌱 You can donate money to help make it real:
https://opencollective.com/webpd#category-CONTRIBUTE.
🌱 Your money will help move the following roadmap forward:
https://github.com/sebpiq/WebPd/blob/master/README.md#roadmap.
----------------------
FULL VERSION
For the past few weeks, development of WebPd (1) has picked up a good
pace, and the project has reached a state where I now feel confident
sharing a demo (2), a public update on what's going on, and a call
for crowdfunding (3).
🔊 1. How WebPd Got Here
The project was started in 2010 by Chris McCormick when Firefox
released the first implementation of an API that enabled live audio
synthesis in the web browser. In 2012, I took over and ported WebPd to
a different API called Web Audio API (4), which has since become the
web standard for live audio.
The Web Audio API implementation of WebPd is still the current version
(v0.4) and hasn't seen any significant update for many years. It
works, but it is hackish and limited, because the Web Audio API
imposes its own synthesis and processing functions (oscillators,
filters, etc.) and is therefore nearly impossible to customize.
Luckily, the situation is now very different than it was 10 years ago.
New APIs and standards have been proposed and adopted by all major
browser vendors, finally making it possible to build serious custom
audio apps for the web browser:
- The AudioWorklet (5): a recent fix to the Web Audio API that allows
it to run custom audio code with good performance.
- WebAssembly (6): a binary instruction format that allows code to be
compiled so that it runs in the browser with performance close to that
of native applications.
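To make the WebAssembly point concrete, here is a toy C kernel of the
kind that can be compiled to wasm (e.g. with emscripten) and driven from
an AudioWorklet. Purely illustrative, not WebPd's actual engine code:

    #include <math.h>
    #include <emscripten.h>

    #define TWO_PI 6.283185307f

    /* a tiny sine oscillator filling one block of samples; the
     * exported function is callable from JavaScript */
    EMSCRIPTEN_KEEPALIVE
    void osc_process(float *out, int nframes, float freq, float sr)
    {
        static float phase = 0.f;
        for (int n = 0; n < nframes; n++) {
            out[n] = sinf(phase);
            phase += TWO_PI * freq / sr;
            if (phase >= TWO_PI) phase -= TWO_PI;
        }
    }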
With these in mind, I've been meaning to rebuild WebPd from scratch
for several years already, but it's a daunting task: it requires an
entire redesign of the audio engine and a full rewrite of the code
to make the project future-proof. Another problem is that there isn't
yet a good ecosystem of libraries for writing code with the new web
audio technologies mentioned above. Therefore, some generic web audio
packages need to be built as part of the project (adding to the
complexity and the amount of work).
🔊 2. What's planned for this new version, WebPd 1.0
First, let me say that WebPd does not intend to be a fully-fledged
application like Pure Data is on the desktop, but rather a library for
developing web applications. In that sense, conceptually, it is closer
to libpd (7) than to Pure Data. Of course, you could build a
fully-featured user interface, a Pure Data on the web, using WebPd,
but that is out of the scope of the project.
WebPd's goals are:
→ to allow artists to take their Pure Data patches and run them in
web pages, enabling non-programmers (sound designers,
musicians, etc.) to design live and interactive audio for the web.
→ to provide experienced web programmers with a complete audio toolkit
that is production-ready, and enables efficient audio synthesis and
processing in the browser.
For this to be possible, WebPd needs a small community of users and
developers. In fact, I have received many messages inquiring about the
status of the project and many have offered help with development, so
I know that this community exists.
The first milestone is therefore to build a minimum viable product,
write good docs and resources for beginners, find a good
platform for questions and discussion, and so on, making it easier
for others to contribute. In the long run, I would like the development
work to be as collaborative as possible, and progressively hand over
ownership of the project to the community.
🔊 3. Crowdfunding campaign
In order to help reach this first milestone, I am starting a small
crowdfunding campaign so I can spend more time on the project in the
coming months. You can find here
(https://github.com/sebpiq/WebPd/blob/master/README.md#roadmap) a list
of what I plan to achieve for WebPd 1.0 with the help of that money.
If this is something you'd like to help with, you can donate on the
OpenCollective page of the project
(https://opencollective.com/webpd#category-CONTRIBUTE). Any amount is
welcome.
You can also ask questions, or come share ideas here
(https://github.com/sebpiq/WebPd/issues/113) or here
(https://opencollective.com/webpd/conversations/announcing-webpd-complete-re…).
(1) https://github.com/sebpiq/WebPd
(2) https://sebpiq.github.io/WebPd_demos/the-graph/www/
(3) https://opencollective.com/webpd/contribute/backer-42114/checkout
(4) https://webaudio.github.io/web-audio-api/
(5) https://webaudio.github.io/web-audio-api/#AudioWorklet
(6) https://webassembly.org/
(7) https://github.com/libpd/libpd
Dave Smith, MIDI pioneer, has passed away at 72. As a homage, I'd like to
announce the release of ELSE 1.0 Release Candidate 2, whose main attraction
is the SoundFont player object [sfont~]; more details at
https://github.com/porres/pd-else/releases/tag/v1.0-rc2
By the way, if any of you are on Instagram, I'm posting Pd/ELSE related
stuff over there; check this post with an example using [sfont~] to play
some MIDI sequences you can easily find online -->
https://www.instagram.com/p/CeTwd4VNal9/ General MIDI SoundFonts are also
easy to get, have fun!
Of course, the nice thing about Pd is that we can do crazier stuff than just
playing these; tell me what you feel like doing. Also, note that [sfont~] has
nice microtonal capabilities; check its help file for more.
*And please test to see if it's all working fine.* It took me over 2.5
years to get this SoundFont player into ELSE; it was quite challenging, and
lots of people helped me, especially Roman and Lucarda with the compilation
issues. I hope it's all good!
You can download the latest ELSE directly from Pd. Heads up: the next
release of ELSE will include a major overhaul of the [midi] object (perhaps
it will even have a new name). Compatibility will break, but we'll be able
to do lots more with it, like writing multi-track MIDI files and dealing
with "meta" information.
Have fun with MIDI and Sound Fonts, Happy patching.
RIP Dave Smith
Hi all,
"flite" is and old "text to speech" external.
I had made some updates which includes:
- single binary without dependencies.
- support for the 5 voices
- read from a text file
- threaded functions for "synthesis" and "read file"
There are amd64 builds for testing at
https://github.com/Lucarda/pd-flite/releases
click on "Assets" and get the version for your OS.
Let me know if this is good enough for a Deken release or whatever.
Lucarda.
--
Telepathic message assisted by machines.
hi
i'd like to announce the release of deken-v0.9.2
https://github.com/pure-data/deken/releases/v0.9.2
the GUI-plugin can be installed via deken.
just go to "Help -> Find externals" and enter
"deken-plugin" as the search string.
this is the first bugfix release of deken-v0.9, which solves an
important regression (aka: crash) on some older versions of Pd on macOS.
i hope this is now ready for inclusion in the next Pd release as the
standard "find externals" implementation.
in any case: please test!
changes since 0.9:
deken-plugin ("Find externals" within Pd)
=========================================
- fix crash on some macOS versions of Pd (basically reverting to the old
list view in selected cases)
- double-click on the "library" heading now will open/close all library
nodes
- double-click on any other heading (e.g. "version") will also re-sort
the libraries
- (single-click will only re-sort packages within each library node,
but leave the library-sorting itself intact)
deken-xtra-apt-plugin
=====================
- fix the deken-extension to also include packages installed via
`apt-get` (on Debian and derivatives)
deken cmdline (create and upload packages)
==========================================
- fix overriding of destination URL with `--destination`
- minor fixes in the GPG/SHA256 verification code
- make flags for enabling/disabling certain features consistent
- make wrapper script POSIX compliant
- fixes to help Debian packaging of the cmdline tool
- update Docker image
- reduce size
- prepare Docker image for GPG signing
binaries and a Docker image for the cmdline tool are available from the
releases page.
happy patching
gfmdasr
IOhannes
hi
i'd like to announce the release of deken-v0.9
https://github.com/pure-data/deken/releases/v0.9
the GUI-plugin can be installed via deken.
just go to "Help -> Find externals" and enter
"deken-plugin" as the search string.
this is the version i'd like to get into the next Pd release as the
standard "find externals" implementation.
so please test!
deken-plugin ("Find externals" within Pd)
=========================================
- brand-new excel-like interface
- sort by library/version/uploader/date
- quick link from a package to its deken homepage (which features
object lists and more)
- simple selection of multiple packages
- INSTALL button
- watch the installation as it unfolds
- proper menu for installing pre-downloaded package files
- fast SHA256 verification on all platforms
- option to uninstall packages
- proper menu for properties
- advertise https://deken.puredata.info
- better feedback if search returned no result
- tooltip
- balloons
- improve guessing of deken architecture
- fix version sorting
- fix all bugs
- fix some more bugs
- implement many bugs
deken cmdline (create and upload packages)
==========================================
- ported to hy-1.0a4
- improve detection of single/double precision externals
- fix detection of FreeBSD/NetBSD/OpenBSD externals
- fix `==` version comparator (for finding packages of an exact version)
- add `~=` version comparator (for finding packages of a compatible version)
- allow mixing libraries with version constraints and libraries without
constraints in search terms
- allow the user to specify a target directory when downloading
- more output at higher verbosities
- drop Python2 support
- allow the user to switch off GPG-signing with an envvar
- fix some more bugs
- add brand new bugs
binaries and a Docker image for the deken cmdline tool are available
from the releases page.
with this release, the repository switched to using `main` as the
primary release branch.
happy patching
dsrax
IOhannes
** sorry for xx-posting **
--
PikselXX is scheduled for November 17–20, 2022
Piksel 20 years Anniversary
Dear friends,
We are glad to announce the call for projects for the 20-year
anniversary edition of Piksel!
To celebrate the anniversary, we are opening a new track for texts: if
you are a previous Piksel participant and want to share your experience
with us, this section is yours. Selected articles from the open call,
together with some curated texts from Piksel artists and colleagues,
will be included in the Piksel 20 years book.
Piksel will go hybrid again. Screen-based artworks and PikselSavers are
primarily intended for the Piksel XX Cyber Salon. Ideas for
collaborative online/physical activities are welcome. This year we want
to return to physicality: in line with our green strategy, we encourage
you to present art installations that can be built in Bergen, to
minimize international transport.
Please feel free to submit your projects to any one of the open tracks:
Presentations, workshops, concerts, installations, and the texts call.
Deadline is 31st of July!!
Please use the online submit form at: https://pretalx.com/piksel22/
Piksel22 is supported by the Municipality of Bergen, Arts Council
Norway, Vestland fylkeskommune and others.
more info: https://piksel.no/
**Piksel is an international festival for electronic art and
technological freedom. Part workshop, part festival, it is organised in
Bergen, Norway, and involves participants from more than a dozen
countries exchanging ideas, coding, presenting art and software
projects, doing workshops, performances and discussions on the
aesthetics and politics of art and free technologies.**
--
**open CALL for PROJECTS**
For the exhibition and other parts of the program we currently seek
projects in the following categories:
**1. Installations**
Projects to be included in the exhibitions.
The works must be realized by the use of free and open source
technologies.
**2. Audiovisual performance**
Live art realized by the use of free software and/or open/DIY
hardware. We encourage audio-visual projects, online “orchestra”
collaborations with local actors,...
**3. Presentations**
Innovative DIY/open hardware and audiovisual software tools or software
art released under a free/open license. (Also includes presentations of
artistic projects realized using free/open technologies.)
**4. Workshops**
Hands-on workshops utilizing free software and/or open/DIY hardware for
artistic use. Workshops can also be held virtually.
**5. PikselSavers**
Video and software art based on the screensaver format – short
audiovisual (non)narratives made for endless looping. Possible thematic
fields include, but are not limited to: sustainable resource
allocation, renewable technologies, energy harvesting, fair trade
hardware, free content, open access, open data, DIY economy, shared
development. The works must be realized by the use of free/open source
technologies.
**6. Texts**
Anecdotes and reflections from the 20-year history of Piksel for the
anniversary book. We are especially interested in hearing about
collaborations and projects that were initiated as a result of artists
meeting at the festival.
**Deadline: July 31, 2022**
Please use the online submit form at: https://pretalx.com/piksel22/
--
Gisle Frøysland
Piksel Produksjoner
Strandgaten 207
5004 Bergen, Norway
+47 90665018
www.piksel.no
Hey, the first update of ELSE and the tutorial in 2022 has been released.
The total number of objects is now 446; the total number of examples in
the tutorial is now 464.
I never take this long, but there are lots and lots of changes here. Many
breaking changes on the one hand, but on the other, I'm finally moving on
to the next development phase of "Release Candidates", aiming towards more
stability. One big change was having flags come first, as they always
should have.
Many many fixes and many new objects.
*Highlights:*
- [metronome]: this object was added in the last update; I made many
changes and included several new high-level functionalities. In
particular, I added support for quite crazy time signatures.
- [tabplayer~]: can now trigger start and stop at audio rate via signal
input, with gates and impulses.
- new [score]/[score2] objects to write and play musical score sequences
with a friendly syntax supporting bars, time signatures, and fractional
note durations.
- [polymetro]/[polymetro~]: polymetric metronomes at control and audio
rates.
The Live Electronics Tutorial has also been updated; check the detailed
changelog at: https://github.com/porres/pd-else/releases/tag/v1.0-rc1
I also added to the readme a list of ELSE alternatives to cyclone.
Find binaries for the main 64-bit systems on Deken (Linux, Windows, and
macOS Intel/ARM); more to come soon, and extra binaries will be available
only in the release downloads from the repository.
Cheers
Hi
I added the current version of Pure Data to my personal package archive
(PPA), so that it is finally possible to get a recent version of Pd on
Ubuntu without compiling.
Kudos goes to IOhannes who maintains the puredata package on Debian and
who makes sure that the most recent release of Pd is available in the
backports repository. His work enabled me to just¹ get his package
sources and push them to my PPA.
I also rewrote the "How to install on Debian/Ubuntu" FAQ page on
puredata.info, which was referring to the retired apt.puredata.info repo
and Pd-extended.
You can find instructions on how to get the most recent version of Pd on
Debian and on Ubuntu here:
https://puredata.info/docs/faq/debian
Roman
¹well, a little work was necessary...
Dear List,
Version 2.3 of the Click Tracker is out.
This version was generously supported by the Quatuor Bozzini
(https://www.quatuorbozzini.ca/), and mainly reflects improvements
suggested from the users' perspective.
The new features are divided into 3 categories:
Syntax features:
- added meters with mixed denominators
- added "x Y" command to repeat inputed events
- added fermatas
GUI features:
- new GUI layout
- removed "record" button
- added reset button for pickup bar
GUI features for the application and Max patch:
- added file drop to open a score
- resizing the window now scales the contents
- added new control keys g l t u, also combined with shift for reset
As in the previous version, you can use it in any of the following ways:
- as an android app (https://bit.ly/click-tracker-mob or
https://bit.ly/clicktracker-playstore)
- as a closed desktop app on Windows (http://bit.ly/ClickTracker2-3Win)
or Apple (http://bit.ly/ClickTracker2-3Apple)
Due to Apple's recent security settings, you'll need to allow Pd and the
other externals to run on your system.
WARNING: M1 users will need to run the program with Rosetta.
- as the traditional Pure Data patch (https://bit.ly/ClickTracker2-3)
- as a Max/MSP patch on Windows (http://bit.ly/ClickTracker2-3MaxWin) or
Apple (http://bit.ly/ClickTracker2-3MaxApple)
For more information, refer to the Click Tracker's website at
http://j.mp/click-tracker.
You can also visit the Click Tracker on Facebook -
http://j.mp/clicktrackerfb, or check out the click track library at
http://jmmmp.github.io/clicktracker/index-library.
With best regards,
João Pais
--
Click Tracker Mobile - https://bit.ly/click-tracker-mob
Click Tracker Website - http://j.mp/click-tracker
Click Tracker Library - https://bit.ly/ClickTrackerLibrary
Facebook - http://j.mp/clicktrackerfb
Hey all,
If anyone is anywhere near Margate (England, on the north-eastern tip of the south-east peninsula), I will be exhibiting a quadraphonic interactive sound installation at the Margate School from the 18th to the 22nd of March.
This is an artist-in-residence program, and there are three artists exhibiting on those dates: myself with a sensor-driven Pd sound artwork, a 3D CGI designer, and another artist who works with light sculpture.
The Margate School is an art college housed in a very large (deserted by Woolworths) department store in Margate, England, which is also a great seaside resort. I am capturing sound clips from the building and creating a set of Pd patches that will interact with the audience through infra-red sensors.
Please come if you are in the area.
Best wishes,
Ed Kelly
Driftwood - the latest album by Lone Shark, now available at https://synchroma.bandcamp.com/releases
For Lone Shark releases, Pure Data software and published Research, go to http://sharktracks.co.uk
Dear all,
the iem is co-hosting the 1st international conference on "Data Art and
Climate Action" (together with the Hong Kong School of Creative Media).
Today we have a matinee concert that includes performances in Pd.
If you like, you are happily invited to join our live stream at
https://go.iem.at/daca22
The concert starts at 10:45 CET (in about 45 minutes), so be quick to
make up your mind.
For the program notes, see the attached PDF.
fsamd
IOhannes
Hi,
I am happy to announce a new bug fix release for [vstplugin~] - a Pd
external for hosting VST2 and VST3 plugins on Windows, macOS and Linux.
It is available on Deken (search for "vstplugin~"). Please upgrade!
Here is the full change log: https://git.iem.at/pd/vstplugin/-/releases
Please report any issues at https://git.iem.at/pd/vstplugin/-/issues
Have fun!
Christof
Please pardon cross-posting,
The following announcement may be of particular interest to graduate
students seeking PhD opportunities. Please share widely, as appropriate.
Virginia Tech Human-Centered Design (HCD), a unique individualized
interdisciplinary PhD program, in collaboration with the Linux Laptop
Orchestra (L2Ork) has a new fully funded graduate research assistantship
available. Supported as part of a grant from the Office of Naval Research,
the assistantship seeks a PhD-level graduate student who will be
fully funded for up to 4 years (contingent on adequate progress) to work on
a project that combines knowledge in sound and music with that of
cybersecurity and K-12 education. Desired expertise includes:
- Solid C and JS programming skills
- Familiarity with emscripten and building Web apps
- Solid knowledge of acoustics, psychoacoustics, and of digital signal
processing using visual dataflow programming languages, such as Pd-L2Ork/Pd
or Max
While the student will be required to work on the funded project, they
will also have an opportunity to develop facets of the project into their
own dissertation, as well as explore their own unique research
trajectories. Students who possess only a subset of the desired expertise
are also welcome to inquire and apply. Before applying, students are
strongly encouraged to contact me to ensure they have the right skillset.
Virginia Tech offers top tier research facilities in areas of spatial
sound, immersion, and telematics, including a $30M Moss Arts Center and the
Institute for Creativity, Arts, and Technology's Cube with its 140+
loudspeaker array, multi-projection surfaces, and a high-resolution motion
capture system. This space is supported by a constellation of other
labs, including the Perform studio with its 24.4 Genelec and Mocap system,
DISIS with its 24.2 system, Create Studio with access to cutting edge
fabrication tools, and many more. Most of the audio facilities on campus
are networked using the Dante protocol.
If you would like to learn more about the HCD iPhD program please visit
https://hcd.icat.vt.edu
For additional info on related opportunities, visit:
https://www.icat.vt.edu | http://ci.icat.vt.edu | https://l2ork.icat.vt.edu
Questions? Please feel free to email me at <ico(a)vt.edu>
--
Ivica Ico Bukvic, D.M.A.
Director, Creativity + Innovation
Director, Human-Centered Design iPhD
Institute for Creativity, Arts, and Technology
Virginia Tech
Creative Technologies in Music
School of Performing Arts – 0141
Blacksburg, VA 24061
(540) 231-6139
ico(a)vt.edu
ci.icat.vt.edu | l2ork.icat.vt.edu | ico.bukvic.net
Dear all,
We are hiring three early career researchers for the project AMBIENT:
Bodily Entrainment to Audiovisual Rhythms
<https://www.uio.no/ritmo/english/projects/ambient/index.html>. The
project will study how the sonic and visual "background" of indoor
environments influences people's bodily behaviour.
* 1-2 Doctoral Research Fellowships in Audiovisual Rhythms
<https://www.jobbnorge.no/en/available-jobs/job/217521/1-2-doctoral-research…>
* 1-2 Post-Doctoral Research Fellowships in Audiovisual Rhythms
<https://www.jobbnorge.no/en/available-jobs/job/217519/1-2-post-doctoral-res…>
Application deadline: *15 March 2022*.
We are looking for people with backgrounds in musicology, music
technology, sound studies, psychology, sound and music computing,
computer science, human movement science, or other relevant fields. The
aim is to put together a cross-disciplinary team that together covers
the following methods: sound analysis, video analysis, interviews,
questionnaires, motion capture, physiological sensing, statistics,
signal processing, machine learning, interactive (sound/music) systems.
Please forward to relevant candidates. Do not hesitate to get in touch
if you want to know more about the positions or the project.
--
Alexander Refsum Jensenius [he/him]
Professor, Department of Musicology, University of Oslo
https://people.uio.no/alexanje
Deputy Director, RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion
https://www.uio.no/ritmo/english/
Director, fourMs Lab
https://fourms.uio.no
Chair, NIME Steering Committee
https://www.nime.org
New master's programme: "Music, Communication & Technology"
http://www.uio.no/mct-master
Dear Pd community,
in July this year, we, the Center for Haptic Audio Interaction Research
(CHAIR for short), released a VST3i plug-in: EXC!TE SNARE DRUM. It's an
exciter-resonator physical modelling snare drum synth. The exciter can be
triggered via MIDI. The plug-in is a free download.
In the PRO version of the plug-in (20€), on top of the MIDI triggers
there is an audio sidechain which allows direct excitation of the
waveguide resonator.
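The exciter-resonator idea is simple at its core: an excitation signal is
fed into a damped delay-line loop. A Karplus-Strong-style toy sketch in C
(an illustration of the principle only, not the actual CHAIR algorithm):

    #include <stdlib.h>

    #define DLEN 1024  /* delay line size; period must stay below this */

    /* a short noise burst (exciter) feeds a damped delay-line loop
     * (the waveguide resonator) */
    void render(float *out, int nframes, int period)
    {
        static float delay[DLEN];
        static int w = 0;
        for (int n = 0; n < nframes; n++) {
            float exc = (n < 64) ? rand() / (float)RAND_MAX - 0.5f : 0.f;
            int r  = (w + DLEN - period) % DLEN;
            int r2 = (r + 1) % DLEN;
            /* averaging two taps lowpasses and damps the loop */
            float y = exc + 0.499f * (delay[r] + delay[r2]);
            delay[w] = y;
            w = (w + 1) % DLEN;
            out[n] = y;
        }
    }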
Video here:
https://www.chair.audio/product/excte-snare-drum-pro/
The VST3/AU is built using Steinberg's VST3 SDK and libpd. Yes, the
audio synthesis is done in Pd.
We are quite happy with the overall performance and negligible CPU
overhead. We were able to run 70 instances of the plug-in before we
started to hear dropouts. Nobody needs 70 parallel snares :)
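If you are wondering what "libpd inside a plug-in" amounts to: the
embedding boilerplate is pleasantly small. A stripped-down sketch using
libpd's C API (the patch name here is hypothetical, and the real plug-in
code is of course more involved):

    #include "z_libpd.h"

    int main(void)
    {
        libpd_init();
        libpd_init_audio(0, 2, 44100);   /* 0 in, 2 out channels */

        /* equivalent of [; pd dsp 1( -- switch audio computation on */
        libpd_start_message(1);
        libpd_add_float(1.0f);
        libpd_finish_message("pd", "dsp");

        if (!libpd_openfile("snare.pd", "."))  /* hypothetical patch */
            return 1;

        /* a host's audio callback calls this per buffer;
         * 1 tick = one Pd block of 64 frames */
        float inbuf[64], outbuf[64 * 2];  /* interleaved stereo out */
        libpd_process_float(1, inbuf, outbuf);
        return 0;
    }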
The core of the audio synthesis is open source, in fact you can open Pd,
search for "CHAIR" in Deken and get the example patches with the snare
drum sound.
There is also a paper about the algorithm: "Efficient Snare Drum Model
for Acoustic Interfaces with Piezoelectric Sensors"
A longer story of the plug-in's development can be read in our journal
article: "EXC!TE SNARE DRUM — Making an Audio Plugin with Pure Data
inside". Both are to be found here:
https://discourse.chair.audio/t/its-science-papers-about-our-work/44
We are thankful that we were able to build upon the work which has gone
into Pd (and libpd on top of that). If you have contributed to Pd or libpd
in any form and would like to try the PRO version of the plug-in, please
send me a quick email.
Thank you Miller, Dan, IOhannes, Peter, Pierre, Christof, Antoine and
everyone else.
Max (+ Clemens, Philipp and Sebastian)