In an effort to understand a bit more about neural networks, I wrote a
Pd external by translating Python code from the book "Neural Networks
from Scratch in Python" to C. After a couple of months of working on
this, I ended up with [neuralnet].
This is an object written in pure C, without any dependencies, for
creating densely connected neural networks for classification,
regression, and binary logistic regression. You can choose among
different activation and loss functions, optimizers, and other settable
features. I've created some example patches, some replicating examples
from the aforementioned book and some of my own.
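As a rough illustration of what a densely connected layer computes (a
conceptual sketch in TypeScript, purely for illustration; the external
itself is plain C and none of the names below are taken from its source):
each output neuron applies an activation function to a weighted sum of
all inputs plus a bias.

// Conceptual forward pass of one densely connected layer.
type Activation = (x: number) => number;

function denseForward(
  input: number[],      // activations from the previous layer
  weights: number[][],  // weights[i][j]: weight from input j to output i
  biases: number[],     // one bias per output neuron
  activation: Activation
): number[] {
  return weights.map((row, i) =>
    activation(row.reduce((sum, w, j) => sum + w * input[j], biases[i]))
  );
}

// Example: a layer with 2 inputs, 3 outputs and a ReLU activation.
const relu: Activation = (x) => Math.max(0, x);
const out = denseForward(
  [0.5, -1.2],
  [[0.1, 0.4], [-0.3, 0.8], [0.7, 0.2]],
  [0.0, 0.1, -0.1],
  relu
); // three values, one per output neuron

[neuralnet] chains layers like this and adds the settable loss functions
and optimizers needed to train them.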
The code is on GitHub (https://github.com/alexdrymonitis/neuralnet), and
Linux amd64 and armv7-32 (Raspberry Pi) binaries are uploaded to deken.
I don't have a Mac or Windows machine, and I don't know how to compile
for these platforms on a Linux machine (or whether that is even
possible). I would be grateful if anyone could compile for either of
them and upload the binaries to deken.
I would also love to get feedback both on how the object performs, and
on the source code itself.
Hi all,
flite (a text-to-speech external) is good enough for a release. It includes:
- single binary without dependencies.
- 5 built-in voices
- can load .flitevox voice files (English only)
- read from a text file
- threaded functions for "synthesis" and "read file"
You can get it from Deken or at
https://github.com/Lucarda/pd-flite/releases/tag/0.3.2
There are no Mac M1 or Linux ARM builds, but compiling (and uploading
to Deken) shouldn't be a problem for those who want to.
:)
--
Telepathic message assisted by machines.
Please pardon the cross-posting. My COMPEL project collaborators and I
would appreciate it if you would distribute this email widely among the
various communities whose work is rooted in computer music, starting with
composers, performers, and instrument and installation designers.
Dear all,
As part of the preparations for the Workshop on NIME Archiving
<https://nime.pubpub.org/pub/oyi0po4b> to be held on 28 June, we are
looking for volunteers to fill out one or more records of *artifacts*
(defined broadly) in this survey:
https://forms.gle/A8zNrFVxs9N4aBcp9
The idea is to check whether the categories developed for the COMPEL
archive make sense from the community's perspective. We ask that you
please consider filling out the survey *before 24 June* so that we have
a couple of days to look at the results before the workshop. Feel free
to make entries even if you cannot make it to the workshop!
Given that this effort may benefit the broader computer music community,
please note that *both the survey and the workshop are open to any person
who is interested in participating*, regardless of whether they are
registered for the conference. Since NIME is an online-only conference
this year, *the Zoom link is forthcoming and will be shared with all
survey contributors and conference participants soon*.
The workshop will continue discussions in the community on how to best
preserve information from the NIME conferences, the NIME community, and the
computer music community at large. The workshop will follow up on threads
from the NIME publication ecosystem workshop
<https://nime2020.bcu.ac.uk/nime-publication-ecosystem-workshop/> (NIME
2020, Birmingham), the ICMC 2018 paper
<https://dblp.org/rec/conf/icmc/BukvicO18.html>, a SEAMUS 2018 conference
presentation, and the NIMEhub workshop
<https://www.duo.uio.no/handle/10852/50604> (NIME 2016, Brisbane). As we
rebuild the COMPEL platform to sidestep the technological limitations of
the old infrastructure, the main task is to find an open, future-oriented,
engaging, and institutionally recognized archiving solution for the
activities of the community that ensures *reproducibility* of
archived artifacts. While NIME publications are archived according to the FAIR
principles <https://www.go-fair.org/fair-principles/>, currently no
solutions exist for archiving information about instruments/interfaces and
other hardware/software-based artifacts produced in the community. Neither
do we have a system for describing and preserving compositions/pieces,
installations, performances, and workshops. We believe that this challenge
affects the computer music community at large. The goal of this workshop
and forum discussions
<https://forum.nime.org/t/survey-and-workshop-on-nime-archiving/306> is to
propel the project forward and expand community engagement.
Thank you for your consideration and participation. Should you have any
questions, please do not hesitate to contact one of the workshop organizers
<https://nime.pubpub.org/pub/oyi0po4b>.
Best,
Ico
--
Ivica Ico Bukvic, D.M.A.
Director, Creativity + Innovation
Director, Human-Centered Design iPhD
Institute for Creativity, Arts, and Technology
Virginia Tech
Creative Technologies in Music
School of Performing Arts – 0141
Blacksburg, VA 24061
(540) 231-6139
ico(a)vt.edu
ci.icat.vt.edu | l2ork.icat.vt.edu | ico.bukvic.net
TL;DR: Faster, better, stronger, WebPd 1.0 is coming (featuring
WebAssembly, AudioWorklet and more) ... but it needs your support!
WebPd is a highly modular web audio programming toolkit inspired by Pure Data.
→ it allows Pure Data patches to run in web pages, enabling
non-programmers (artists, musicians, etc.) to design live and
interactive audio for the web.
→ it provides experienced web programmers with a complete audio
toolkit that is production-ready and enables efficient audio
synthesis and processing in the browser.
🌱 You can try a demo of the upcoming version here:
https://sebpiq.github.io/WebPd_demos/the-graph/www/.
🌱 You can donate money to help make it real:
https://opencollective.com/webpd#category-CONTRIBUTE.
🌱 Your money will help move the following roadmap forward:
https://github.com/sebpiq/WebPd/blob/master/README.md#roadmap.
----------------------
FULL VERSION
For the past few weeks, development of WebPd (1) has picked up a good
pace, and the project has reached a state where I now feel confident
sharing a demo (2), a public update on what's going on, and a call
for crowdfunding (3).
🔊 1. How WebPd Got Here
The project was started in 2010 by Chris McCormick, when Firefox
released the first implementation of an API that enabled live audio
synthesis in the web browser. In 2012, I took over and ported WebPd to
a different API, the Web Audio API (4), which has since become the
web standard for live audio.
The Web Audio API implementation of WebPd is still the current version
(v0.4) and hasn't seen any significant update in many years. It
works, but it is hackish and limited. This is because the Web Audio API
imposes its own synthesis and processing building blocks (oscillators,
filters, etc.) and is therefore nearly impossible to customize.
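To give an idea of this constraint, here is a generic sketch of the plain
Web Audio API (not WebPd code; the node choices are arbitrary): you wire
together the browser's built-in processing nodes, with no practical hook
for running your own per-sample DSP.

// Plain Web Audio API: assembling browser-provided nodes.
const ctx = new AudioContext();
const osc = ctx.createOscillator();      // fixed, built-in oscillator
osc.frequency.value = 440;
const filter = ctx.createBiquadFilter(); // fixed, built-in filter
filter.type = "lowpass";
filter.frequency.value = 1000;
osc.connect(filter);
filter.connect(ctx.destination);
osc.start();

Anything a Pd patch computes sample by sample has to be approximated with
whatever nodes the browser happens to provide, which is where the
"hackish and limited" part comes from.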
Luckily, the situation is now very different from what it was 10 years
ago. New APIs and standards have been proposed and adopted by all major
browser vendors, finally making it possible to build serious custom
audio apps for the web browser:
- AudioWorklet (5): a recent addition to the Web Audio API that allows
it to run custom audio code with good performance (see the sketch below).
- WebAssembly (6): a binary instruction format that allows code to be
compiled so that it can run in the browser with performance close to
that of native applications.
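For a taste of what this enables, here is a minimal generic AudioWorklet
sketch in TypeScript (not WebPd's actual engine; the processor and file
names are made up). The processor fills each output block with quiet
white noise, i.e. arbitrary per-sample code running on the audio thread:

// noise-processor.ts: runs inside the AudioWorklet scope.
// These globals exist there but are missing from the default DOM typings:
declare class AudioWorkletProcessor { readonly port: MessagePort; }
declare function registerProcessor(
  name: string, ctor: new () => AudioWorkletProcessor): void;

class NoiseProcessor extends AudioWorkletProcessor {
  // Called by the browser for each 128-sample render block.
  process(_inputs: Float32Array[][], outputs: Float32Array[][]): boolean {
    for (const channel of outputs[0]) {
      for (let i = 0; i < channel.length; i++) {
        channel[i] = (Math.random() * 2 - 1) * 0.1; // custom per-sample DSP
      }
    }
    return true; // keep the processor alive
  }
}
registerProcessor("noise-processor", NoiseProcessor);

// main.ts: runs on the main thread.
async function start(): Promise<void> {
  const ctx = new AudioContext();
  await ctx.audioWorklet.addModule("noise-processor.js"); // compiled file
  new AudioWorkletNode(ctx, "noise-processor").connect(ctx.destination);
}

A WebAssembly module (compiled from C, Rust, AssemblyScript, and so on)
can then be instantiated inside such a processor and called for the actual
DSP, which is how the two technologies complement each other.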
With these in mind, I've been meaning to rebuild WebPd from scratch
for several years already, but it's a daunting task: it requires an
entire redesign of the audio engine and a full rewrite of the code
to make the project future-proof. Another problem is that there isn't
yet a good ecosystem of libraries for writing code with the new web
audio technologies mentioned above. Therefore, some generic web audio
packages need to be built as part of the project (adding to the
complexity and the amount of work).
🔊 2. What's planned for this new version, WebPd 1.0
First, let me say that WebPd is not intended to be a fully-fledged
application like Pure Data is on the desktop, but rather a library for
developing web applications. In that sense, conceptually, it is closer
to libpd (7) than to Pure Data. Of course, you could build a
fully-featured user interface, a Pure Data on the web, using WebPd,
but that is outside the scope of the project.
WebPd's goals are:
→ to allow artists to take their Pure Data patches and run them in
web pages, enabling non-programmers (sound designers,
musicians, etc.) to design live and interactive audio for the web.
→ to provide experienced web programmers with a complete audio toolkit
that is production-ready and enables efficient audio synthesis and
processing in the browser.
For this to be possible, WebPd needs a small community of users and
developers. In fact, I have received many messages inquiring about the
status of the project and many have offered help with development, so
I know that this community exists.
The first milestone is therefore to build a minimum viable product,
write good docs and resources for beginners to find help, find a good
platform for questions and discussion, etc., making it easier for
others to contribute. In the long run, I would like the development
work to be as collaborative as possible, and progressively hand over
ownership of the project to the community.
🔊 3. Crowdfunding campaign
In order to help reach this first milestone, I am starting a small
crowdfunding campaign so I can spend more time on the project in the
coming months. You can find a list of what I plan to achieve for WebPd
1.0 with the help of that money here
(https://github.com/sebpiq/WebPd/blob/master/README.md#roadmap).
If this is something you'd like to help with, you can donate on the
opencollective page of the project
(https://opencollective.com/webpd#category-CONTRIBUTE). Any amount is
welcome.
You can also ask questions, or come share ideas here
(https://github.com/sebpiq/WebPd/issues/113) or here
(https://opencollective.com/webpd/conversations/announcing-webpd-complete-re…).
(1) https://github.com/sebpiq/WebPd
(2) https://sebpiq.github.io/WebPd_demos/the-graph/www/
(3) https://opencollective.com/webpd/contribute/backer-42114/checkout
(4) https://webaudio.github.io/web-audio-api/
(5) https://webaudio.github.io/web-audio-api/#AudioWorklet
(6) https://webassembly.org/
(7) https://github.com/libpd/libpd
Dave Smith, MIDI pioneer, has passed away at 72. As a homage, I'd like to
announce the release of ELSE 1.0 Release Candidate 2, whose main attraction
is the SoundFont player object [sfont~]; more details at
https://github.com/porres/pd-else/releases/tag/v1.0-rc2
By the way, if any of you are on Instagram, I'm posting Pd/ELSE-related
stuff over there. Check this post with an example using [sfont~] to play
some MIDI sequences you can easily find online -->
https://www.instagram.com/p/CeTwd4VNal9/
General MIDI SoundFonts are also easy to get, have fun!
Of course, the nice thing about Pd is that we can do crazier stuff than
just playing these; tell me what you feel like doing. Also, note that
[sfont~] has nice microtonal capabilities; check its help file for more.
*And please test to see if it's all working fine.* It took me over 2.5
years to get this SoundFont player into ELSE; it was quite challenging,
and lots of people helped me, especially Roman and Lucarda with the
compilation issues. I hope it's all good!
You can download the latest ELSE directly from Pd. Heads up: the next
release of ELSE will include a major overhaul of the [midi] object (perhaps
it will even get a new name). Compatibility will break, but we'll be able
to do lots more with it, like writing multi-track MIDI files and dealing
with "meta" information.
Have fun with MIDI and SoundFonts. Happy patching.
RIP Dave Smith
Hi all,
"flite" is and old "text to speech" external.
I had made some updates which includes:
- single binary without dependencies.
- support for the 5 voices
- read from a text file
- threaded functions for "synthesis" and "read file"
There are amd64 builds for testing at
https://github.com/Lucarda/pd-flite/releases
Click on "Assets" and get the version for your OS.
Let me know if this is good enough for a Deken release or whatever.
Lucarda.
--
Telepathic message assisted by machines.
hi
i'd like to announce the release of deken-v0.9.2
https://github.com/pure-data/deken/releases/v0.9.2
the GUI-plugin can be installed via deken.
just go to "Help -> Find externals" and enter
"deken-plugin" as the search string.
this is the first bugfix release of deken-v0.9, which solves an
important regression (aka: crash) on some older versions of Pd on macOS.
i hope this is now ready for inclusion in the next Pd release as the
standard "find externals" implementation.
in any case: please test!
changes since 0.9:
deken-plugin ("Find externals" within Pd)
=========================================
- fix crash on some macOS versions of Pd (basically reverting to the old
list view in selected cases)
- double-click on the "library" heading now will open/close all library
nodes
- double-click on any other heading (e.g. "version") will also re-sort
the libraries
- (single-click will only re-sort packages within each library node,
but leave the library-sorting itself intact)
deken-xtra-apt-plugin
=====================
- fix the deken-extension to also include packages installed via
`apt-get` (on Debian and derivatives)
deken cmdline (create and upload packages)
==========================================
- fix overriding of destination URL with `--destination`
- minor fixes in the GPG/SHA256 verification code
- make flags for enabling/disabling certain features consistent
- make wrapper script POSIX compliant
- fixes to help Debian packaging of the cmdline tool
- update Docker image
  - reduce size
  - prepare Docker image for GPG signing
binaries and a Docker image for the cmdline tool are available from the
releases page.
happy patching
gfmdasr
IOhannes