Hi list,
ofelia v1.0.6 is now available.
This version includes [pdgui] abstractions which emulate Pd's built-in GUI objects.
Please try out "ofelia/examples/gui/pdguiExample.pd".
Screenshot:
https://github.com/cuinjune/ofxOfelia/blob/master/doc/pdgui_screenshot.png
More GUI abstractions will be added in the near future.
You can also customize the look/behavior of the GUI if you know the basics
of ofelia.
Changes:
* [ofCreateFbo] automatic MSAA scaling is disabled
* fixed a bug in the mesh editor and getter objects
* [ofReceive], [ofValue] can now change their names dynamically
* the float inlet is removed from [ofGetCanvasName], [ofGetDollarZero],
[ofGetDollarArgs], [ofGetPatchDirectory] as it was problematic when used in
cloned abstractions
* [ofGetPos], [ofGetScale] are renamed to [ofGetWindowPos],
[ofGetWindowScale]
* [ofGetTranslate], [ofGetRotate], [ofGetScale] are added
* [pdgui] abstractions are added to the "examples/gui" directory
* [ofMap] now has a 5th argument which enables/disables clamping (sketched
below)
* [ofGetElapsedTime], [ofGetLastFrameTime] now return time in seconds
* [ofGetElapsedTimeMillis], [ofGetLastFrameTimeMillis] are added
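A minimal Python sketch of what the clamping argument changes, assuming
the [ofMap] object mirrors openFrameworks' ofMap(value, inputMin,
inputMax, outputMin, outputMax, clamp):

def of_map(value, in_min, in_max, out_min, out_max, clamp=False):
    # Linearly map value from [in_min, in_max] to [out_min, out_max];
    # with clamp=True the result never leaves the output range.
    out = out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)
    if clamp:
        lo, hi = min(out_min, out_max), max(out_min, out_max)
        out = max(lo, min(hi, out))
    return out

print(of_map(150, 0, 100, 0.0, 1.0))              # 1.5 -- overshoots
print(of_map(150, 0, 100, 0.0, 1.0, clamp=True))  # 1.0 -- clamped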
Upcoming features:
* More GUI abstractions
* GLSL shader loader
* Video player and grabber
More info about ofelia: https://github.com/cuinjune/ofxOfelia
Cheers
Hi all,
In an effort to get organized and share work more effectively, I made git
repos for some ongoing projects and some new ones. I've gotten to a
stopping place for now, and uploaded the following items to deken. Note
that these were packaged with deken 4.0, so you may need to update to find
them. Here's a quick rundown:
********
[convolve~]: a partitioned impulse response convolution reverb
- version 0.11 uses FFTW, allowing non-power-of-2 window sizes and
therefore finer control over delay. It includes an "eq" method for shaping
the spectrum of the reverb in 25 Bark-frequency bands, and accepts a second
argument to specify an IR array for analysis at creation.
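To sketch the idea of partitioned convolution (the general technique, not
[convolve~]'s actual code): the IR is split into equal-size blocks that
are each transformed once, and a frequency-domain delay line accumulates
the block-wise products, so latency stays at one partition rather than
the full IR length. A minimal numpy version:

import numpy as np

def partitioned_convolve(x, ir, part=64):
    npart = int(np.ceil(len(ir) / float(part)))
    fft_len = 2 * part
    # transform each IR partition once, zero-padded to 2*part
    H = [np.fft.rfft(ir[i*part:(i+1)*part], fft_len) for i in range(npart)]
    X = [np.zeros(fft_len//2 + 1, complex) for _ in range(npart)]
    out = np.zeros(len(x) + len(ir))
    for n in range(0, len(x), part):
        X.insert(0, np.fft.rfft(x[n:n+part], fft_len))  # newest block first
        X.pop()
        acc = sum(Xk * Hk for Xk, Hk in zip(X, H))      # spectral delay line
        y = np.fft.irfft(acc)                           # 2*part samples
        out[n:n+fft_len] += y[:len(out[n:n+fft_len])]   # overlap-add
    return out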
https://github.com/wbrent/convolve_tilde.git
********
********
[DRFX]: a dynamic routing system for DSP effects. (Pd-vanilla abstraction)
- [DRFX] automatically creates a signal routing system and associated
controls (routing matrix) based on inputs and effect modules that you
specify. This allows you to make any type of series or parallel connection
chain between your inputs and effects, and change routing on the fly. It
also saves/loads complex routing presets, including effect parameter
settings.
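At its core, a routing matrix is a gain matrix applied per signal block:
rows are sources, columns are destination buses, and rewriting entries
re-routes on the fly. A toy numpy illustration of the concept ([DRFX]
itself is pure Pd patching, so this is only the idea, not its code):

import numpy as np

def matrix_route(sources, gains):
    # sources: (n_sources, blocksize); gains: (n_sources, n_buses)
    # returns (n_buses, blocksize); series/parallel chains are just
    # particular patterns of nonzero entries in `gains`
    return gains.T @ sources

sources = np.random.randn(2, 64)      # two inputs, one signal block
gains = np.array([[1.0, 0.5],         # input 0 feeds both buses
                  [0.0, 1.0]])        # input 1 feeds only bus 1
buses = matrix_route(sources, gains)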
https://github.com/wbrent/DRFX.git
********
********
[martha~]: an oscillator bank designed to accept output from [sigmund~]'s
sinusoidal tracking function. (Pd-vanilla abstraction)
- Version 0.6 adds vibrato functionality and an option to toggle between
oscillators and band-pass filtered white noise.
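The principle of such a resynthesis bank, as a minimal numpy sketch
(assuming (frequency, amplitude) partial pairs like those in [sigmund~]'s
tracks output; a real bank would also ramp values between updates):

import numpy as np

def osc_bank(partials, sr=44100, dur=0.1):
    # sum one sinusoid per (freq_hz, amp) partial
    t = np.arange(int(sr * dur)) / sr
    return sum(a * np.sin(2 * np.pi * f * t) for f, a in partials)

y = osc_bank([(440.0, 0.5), (880.0, 0.25), (1320.0, 0.125)])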
https://github.com/wbrent/martha_tilde.git
********
********
[missive~]: a vector synth object. (Pd-vanilla abstraction)
- [missive~] is a vector synth object that uses a wavetable index to
crossfade between neighboring wavetables in a set. Wavetable sets can have
an arbitrary number of wavetables, and are composed of individual .wav
files that each contain one wavetable cycle. The length of each wavetable
can be arbitrary because [missive~] adds the extra guard points required
for Pd's 4-point interpolation scheme.
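The guard-point layout is easy to show concretely. A numpy sketch using
the cubic from Pd's tabread4~: one wrapped sample before the cycle and
two after, so the interpolator can always read indices i-1..i+2:

import numpy as np

def add_guard_points(cycle):
    # [last sample, full cycle, first two samples]
    return np.concatenate(([cycle[-1]], cycle, cycle[:2]))

def read4(table, idx):
    # fractional lookup at idx in [0, N); table includes guard points
    i = int(np.floor(idx))
    f = idx - i
    a, b, c, d = table[i], table[i+1], table[i+2], table[i+3]
    cmb = c - b
    return b + f * (cmb - (1.0/6.0) * (1.0 - f) *
                    ((d - a - 3.0*cmb) * f + (d + 2.0*a - 3.0*b)))

cycle = np.sin(2 * np.pi * np.arange(64) / 64)
table = add_guard_points(cycle)
print(read4(table, 31.5))  # interpolated read between samples 31 and 32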
https://github.com/wbrent/missive_tilde.git
********
********
[streamStretch~]: a time-stretching/pitch-shifting/layering pastiche
effect. (Pd-vanilla abstraction)
- [streamStretch~] buffers multiple copies of incoming live audio and
creates overlapping streams of time-stretched & transposed output that
trail the input to achieve a variety of results.
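A rough sketch of one such stream -- a generic windowed granular
time-stretch with linear-interpolation transposition, not
[streamStretch~]'s actual algorithm:

import numpy as np

def trailing_stream(x, stretch=2.0, transpose=1.0, grain=1024):
    hop = grain // 2
    win = np.hanning(grain)
    n_out = int(len(x) * stretch)
    out = np.zeros(n_out + grain)
    for n in range(0, n_out, hop):
        src = n / stretch                     # read pointer trails input
        idx = src + np.arange(grain) * transpose
        idx = np.clip(idx, 0, len(x) - 2)
        i = idx.astype(int)
        f = idx - i
        g = (1 - f) * x[i] + f * x[i + 1]     # resample = transpose
        out[n:n+grain] += win * g             # 50%-overlap Hann OLA
    return out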
https://github.com/wbrent/streamStretch_tilde.git
********
********
[timbreID]: an audio analysis and classification library
- version 0.7.3 adds a few methods to the [tabletool] object: NRT
overlap-add, permutations, and sequential output of table contents (like
[list-drip] for tables). As of version 0.7, timbreID uses FFTW to allow for
large and non-power-of-2 window sizes. Several basic time-domain objects
were also added at 0.7.
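Overlap-add itself is a small operation -- sum windowed frames back
together at a given hop size. A generic numpy sketch (the [tabletool]
method presumably operates on frames stored in Pd arrays):

import numpy as np

def overlap_add(frames, hop):
    # frames: (n_frames, frame_len) -> 1-D signal
    n, flen = frames.shape
    out = np.zeros((n - 1) * hop + flen)
    for k in range(n):
        out[k*hop:k*hop + flen] += frames[k]
    return out

frames = np.hanning(512) * np.random.randn(10, 512)  # 10 windowed frames
y = overlap_add(frames, hop=256)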
https://github.com/wbrent/timbreID.git
********
********
[timeStretch~]: a polyphonic time compression/expansion sample player.
(Pd-vanilla abstraction)
- [timeStretch~] is built around the I07.phase.vocoder.pd example patch
from Pd's built-in documentation, and adds functionality for changing
sample arrays on the fly, 16-voice polyphony, predetermined playback
duration, and indefinitely suspending time post-transient until a "release"
command is issued.
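For reference, here is the textbook phase-vocoder time stretch that the
I07 example demonstrates, in compact numpy (the standard algorithm, not
[timeStretch~]'s patch logic; amplitude normalization omitted for
brevity):

import numpy as np

def phase_vocoder(x, stretch=2.0, n_fft=1024, hop=256):
    win = np.hanning(n_fft)
    a_hop = hop / stretch                  # analysis hop < synthesis hop
    bins = np.arange(n_fft // 2 + 1)
    out = np.zeros(int(len(x) * stretch) + n_fft)
    phase, acc = None, None
    pos, n = 0.0, 0
    while pos + n_fft < len(x) and n + n_fft < len(out):
        spec = np.fft.rfft(x[int(pos):int(pos) + n_fft] * win)
        if phase is None:
            acc = np.angle(spec)           # first frame: copy phase
        else:
            expected = 2 * np.pi * bins * a_hop / n_fft
            delta = np.angle(spec) - phase - expected
            delta -= 2 * np.pi * np.round(delta / (2 * np.pi))  # wrap +-pi
            acc = acc + (expected + delta) * stretch
        phase = np.angle(spec)
        out[n:n + n_fft] += np.fft.irfft(np.abs(spec) * np.exp(1j * acc)) * win
        pos += a_hop
        n += hop
    return out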
https://github.com/wbrent/timeStretch_tilde.git
********
********
[tune~]: a real-time pitch correction object. (Pd-vanilla abstraction)
- [tune~] tunes an input signal to any desired MIDI pitch while keeping
formant structure relatively intact. It is an adaptation of the built-in
Pure Data documentation patch I10.phase.bash.pd. While the original patch
demonstrates the technique in separate analysis and playback stages,
[tune~] is designed for real-time pitch correction.
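The core arithmetic of any retuner is a transposition ratio between the
detected pitch and the target note; a quick sketch:

def midi_to_hz(m):
    # equal-tempered MIDI note number -> frequency in Hz
    return 440.0 * 2.0 ** ((m - 69) / 12.0)

def correction_ratio(detected_hz, target_midi):
    # playback-rate ratio that moves detected_hz onto the target pitch
    return midi_to_hz(target_midi) / detected_hz

print(correction_ratio(452.0, 69))  # ~0.973: pull a sharp A down to 440 Hz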
https://github.com/wbrent/tune_tilde.git
********
--
William Brent
www.williambrent.com
“Great minds flock together”
Conflations: conversational idiom for the 21st century
www.conflations.com
Hey, these have finally been uploaded to deken (the old format, so no
upgrade to the new deken plugin is needed) - we're only missing Linux
32-bit binaries.
> Date: Sat, 10 Mar 2018 12:54:47 -0300
> From: Alexandre Torres Porres <porres@gmail.com>
> To: pd-announce@lists.iem.at
> Subject: [PD-announce] ELSE 1.0 Beta 8 released
>
> Hi, it's here: https://github.com/porres/pd-else/releases/tag/v1.0-beta8
> (and will hopefully make its way to deken eventually). This is still
> experimental, meaning (besides instability) I'll still change things and
> break backwards compatibility (I'm trying to get all such planned
> changes done, but I still have about two more releases to go before
> moving to the next/final stage).
>
> cheers
jit_expr is a clone of the Pure Data expr/expr~/fexpr~ objects. It
just-in-time compiles its expressions, so they should be much better
optimized than the originals. If all works as designed, they should use
less CPU than the equivalent vanilla (non-expr) patching, and have a
significant CPU advantage over the original expr objects.
I've put the external, compiled for 64-bit macOS and 64-bit Linux, up on
deken: in Pd, go to the Help menu, choose "Find externals", and search
for "jit_expr".
After installing the external, you should be able to switch any of your
expr-family objects to just-in-time compilation by loading the library
with [declare -lib jit_expr] and then prefixing the object name with
"jit/", for example [jit/fexpr~ $x1[0] + $y1[-1]].
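For readers who don't use the expr family: in that example, $x1[0] is the
current sample on the first signal inlet and $y1[-1] is the previous
output sample, so the object computes a running sum. The same recurrence
in Python:

def fexpr_running_sum(x):
    # y[n] = x[n] + y[n-1], i.e. [fexpr~ $x1[0] + $y1[-1]]
    y, prev = [], 0.0
    for s in x:
        prev = s + prev
        y.append(prev)
    return y

print(fexpr_running_sum([1, 1, 1, 1]))  # [1.0, 2.0, 3.0, 4.0]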
I believe they are feature-complete with the originals, but I'd love to
know if there is anything I'm missing or any bugs you discover.
I'm not exactly sure how to profile Pure Data patches. If anyone has a
good approach, or original expr~/fexpr~ patches that use a lot of CPU
that you can share, let me know.
FYI, compiling the expression takes a little time, so the initial
instantiation of the object will be a bit slower than the original.
Please report any issues here:
https://github.com/x37v/jit-expr/issues
BTW, if you're curious to see the LLVM assembly produced by your
expression, send the [print( message to the leftmost inlet of your object,
then check the Pd console.
I would love help building Windows and 32-bit Linux versions of the
externals. I'm guessing we could also do raspi/ARM builds, but we'd need
some changes to the source code, as it uses LLVM and explicitly generates
code for x86 right now.
The source code can be found in the git repo:
https://github.com/x37v/jit-expr
-Alex Norman
Dear all,
A Doctoral Research Fellowship in Sound and Music Computing is available
at the University of Oslo:
https://www.jobbnorge.no/en/available-jobs/job/148855/
* Deadline: 04.04.2018
* Start date: September 2018
* Duration: 3+1 years
* Salary: 436,900-490,900 NOK
The fellowship is connected to the new Nordic Sound and Music Computing
Network <https://nordicsmc.create.aau.dk/> (NordicSMC) funded by the
Nordic Research Council. This network brings together a group of
internationally leading sound and music computing researchers from
institutions in five Nordic countries: Aalborg University, Aalto
University, KTH Royal Institute of Technology, University of Iceland,
and University of Oslo. The network covers the field of sound and music
from the “soft” to the “hard,” including the arts and humanities, and
the social and natural sciences, as well as engineering, and involves a
high level of technological competency.
We invite PhD proposals that focus on sound/music interaction with
periodic/rhythmic human body motion (walking, running, training, etc.).
The appointed candidate is expected to carry out observation studies of
human body motion in real-life settings, using different types of mobile
motion capture systems (full-body suit and individual trackers). Results
from the analysis of these observation studies should form the basis for
the development of prototype systems for using such periodic/rhythmic
motion in musical interaction.
The appointed candidate will benefit from the combined expertise within
the NordicSMC network, and is expected to carry out one or more
short-term scientific missions to the other partners. At UiO, the
candidate will be affiliated with RITMO Centre for Interdisciplinary
Studies in Rhythm, Time and Motion
<http://www.hf.uio.no/ritmo/english/>. This interdisciplinary centre
focuses on rhythm as a structuring mechanism for the temporal dimensions
of human life. RITMO researchers span the fields of musicology,
psychology and informatics, and have access to state-of-the-art
facilities in sound/video recording, motion capture, eye tracking,
physiological measurements, various types of brain imaging (EEG, fMRI),
and rapid prototyping and robotics laboratories.
Qualification requirements
* A Master's Degree or equivalent in sound and music computing, music
technology, computer science, or other relevant field. The applicant
is required to document that the degree corresponds to the profile
for the post. The Master's Degree must have been obtained by the
time of application.
* Experience with development of real-time sound and music systems
(Max, PD, SuperCollider, or similar)
* Excellent skills in written and oral English
* Personal suitability and motivation for the position
Please circulate through your networks and forward to relevant
candidates. Apologies for cross posting.
--
Alexander Refsum Jensenius, Ph.D.
Assoc. Professor, Department of Musicology, University of Oslo
<http://people.uio.no/alexanje/>
Deputy Director, RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion
<http://www.uio.no/ritmo/>
New book: "A NIME Reader"
<http://link.springer.com/book/10.1007/978-3-319-47214-0>
New master's programme: "Music, Communication & Technology"
<http://www.uio.no/mct-master/>
Hi, it's here: https://github.com/porres/pd-else/releases/tag/v1.0-beta8
(and will hopefully make its way to deken eventually). This is still
experimental, meaning (besides instability) I'll still change things and
break backwards compatibility (I'm trying to get all such planned changes
done, but I still have about two more releases to go before moving to the
next/final stage).
cheers
we are proud to announce a major overhaul of the deken fileformat.
TL;DR: to download new deken packages, you will need at least v0.3.0 of
the deken-plugin (Pd-0.48-1 includes deken-v0.2.4).
When searching for externals with an old (incompatible) version of the
plugin, you will always find a suggestion to install the new plugin,
showing up as something like:
> deken-plugin/0.3.0
> (Deken externals downloader - REQUIRED FOR NEW PACKAGES)
once you have installed the new version (and/or Pd includes a compatible
version), the nagging will stop :-)
# fileformat
the new fileformat fixes a number of issues we had with the existing one.
things that are better:
- cool filename extension '.dek'
- consistent archival options for all platforms (ZIP!)
- proper detection of the version string for all libraries (even if the
library has a weird name like "a-v12")
- support for double-precision externals (once Pd supports that!)
- extensible
# Pd-integration
the updated "deken-plugin" has a number of nifty features, you don't
want to miss:
- support for the new fileformat
- support for the old (legacy) fileformat
- (optionally) uninstall libraries before re-installing them
- (optionally) show a README that is included in the package on
successful installation
- set installation path (suggesting Pd's default search paths, and
offering to create those that are missing)
- (optionally) hide search results for incompatible platforms
- (optionally) override the "compatible" Pd-architecture
- nice preferences
- bugfixes
- much more
# Developer tools
if you are a developer of libraries, we have also updated our 'deken'
cmdline tool.
things you always wanted to have and which are now free:
- support for the new fileformat (default)
- support for the old (legacy) fileformat
- better detection of binary architectures
- better detection of included sources
- (experimental) support for double-precision externals
- pre-compiled & self-contained Windows binaries
- bugfixes
- much more
if you want to know more, see
- http://puredata.info/downloads/deken/releases/0.3.0/
- https://lists.puredata.info/pipermail/pd-dev/2018-02/021513.html
- https://github.com/pure-data/deken/issues/161
i have also submitted a Pull-Request to include the new
deken-plugin into Pd proper:
- https://github.com/pure-data/pure-data/pull/320
and another Pull-Request for Pd that is required to make the
double-precision detection work:
- https://github.com/pure-data/pure-data/pull/300
happy patching.
fgm,asdr
IOhannes
IEM Music Residency Program 2019 - Call for Applications
https://iem.at
(sorry for cross-posting; please distribute)
The IEM - Institute of Electronic Music and Acoustics - in Graz, Austria
is happy to announce its call for the 2019 Artist-in-Residence program.
The residency is aimed at individuals wishing to pursue projects in
performance, composition, installation, sound art, development of tools
for art production, and related areas. Individuals are asked to submit a
project proposal that is related to the fields of artistic research of
the IEM, such as:
* Algorithmic Composition
* Algorithmic Experimentation
* Audio-Visuality
* Dynamical Systems
* Experimental Game Design
* Live Coding
* Sonic Interaction Design
* Spatialization/higher-order Ambisonics
* Standard and non-standard Sound Synthesis
Duration of residency: 5 months
Start date: June 1st 2019 (negotiable)
Monthly salary: approx. EUR 1100 (net)
APPLICATION DEADLINE: 1st of May 2018
The Institute:
The Institute of Electronic Music and Acoustics is a department of the
University of Music and Performing Arts Graz founded in 1965. It is a
leading institution in its field, with a staff of more than 35
researchers and artists. IEM offers education to students in composition
and computer music, sound engineering, sound design, contemporary music
performance, and musicology. It is well connected to the University of
Technology, the University of Graz as well as to the University of
Applied Sciences Joanneum through three joint study programs.
The artwork produced at IEM is released through the Institute's own Open
CUBE and Signale concert series, as well as through various
collaborations with international artists and institutions.
What we expect from applicants:
- A project proposal that adds new perspectives to the Institute's
activities and resonates well with the interests of IEM.
- Willingness to work on-site in Graz for most of the Residency.
- Willingness to exchange and share ideas, knowledge and results with
IEM staff members and students, and engage in scholarly discussions.
- The ability to work independently within the Institute.
- A dissemination strategy as part of the project proposal that ensures
the publication of the work, or documentation thereof, in a suitable
format. This could be achieved for example through the release of media,
journal or conference publication, a project website, or other means
that help to preserve the knowledge gained through the Music Residency
and make it available to the public.
- A public presentation, e.g. a concert or installation, presenting the
results of the Artist Residency.
What we offer:
- 24/7 access to the facilities of the IEM.
- Exchange with competent and experienced staff members.
- A desk in a shared office space for the entire period and access to
studios including the CUBE [1], according to availability.
- Extensive access to the studios of the IEM from July 1st until the end
of September.
- Access to the IKOsahedron loudspeaker [2]
- Access to the “Autoklavierspieler” [3]
- Infrared motion tracking systems
- Regular possibilities for contact and exchange with peers from similar
or other disciplines.
- Concert and presentation facilities (CUBE 24 channel loudspeaker
concert space).
- A monthly salary of approx. EUR 1100 net per month in addition to
health and accident insurance.
What we cannot offer to the successful applicant:
- We cannot provide any housing.
- We also cannot provide continuous assistance and support, although the
staff is generally willing to help where possible.
- We cannot host artist duos or groups, because of spatial limitations.
- We cannot offer any additional financial support for travel or
material expenses.
An application form providing more information is available at
https://residency.iem.at/
Feel free to contact residency@iem.at if you have any questions.
[1] The Cube has a 24-channel loudspeaker system
[2] http://iko.sonible.com/
[3] http://algo.mur.at/projects/autoklavierspieler
===================
MUME 2018 - EXTENDED DEADLINE: MARCH 11, 2018
===================
((( MUME 2018 )))
The 6th International Workshop on Musical Metacreation
http://musicalmetacreation.org
June 25-26, 2018, Salamanca, Spain.
MUME 2018 is to be held at the University of Salamanca in conjunction with
the Ninth International Conference on Computational Creativity, ICCC 2018.
=== Important Dates ===
Workshop submission deadline: March 11, 2018 (extended from March 2nd, 2018)
Notification date: April 30th, 2018
Camera-ready version: May 25th, 2018
Workshop dates: June 25-26, 2018
======================
MUME brings together artists, practitioners, and researchers interested in
developing systems that autonomously (or interactively) recognize, learn,
represent, compose, generate, complete, accompany, or interpret music. As
such, we welcome contributions to the theory or practice of generative
music systems and their applications in new media, digital art, and
entertainment at large.
Topics
===========================
We encourage paper and demo submissions on MUME-related topics, including
the following:
-- Models, Representation and Algorithms for MUME
-- Systems and Applications of MUME
-- Evaluation of MUME
More information:
----------------------
http://musicalmetacreation.org
Please direct any inquiries/suggestions/special requests to the Workshop
Chair, Philippe Pasquier (pasquier@sfu.ca).
Workshop Organizers
===================
Prof. Philippe Pasquier (Workshop Chair)
School of Interactive Arts and Technology (SIAT)
Simon Fraser University, Canada
Prof. Arne Eigenfeldt
School for the Contemporary Arts
Simon Fraser University, Canada
Dr. Oliver Bown
Faculty of Art & Design, The University of New South Wales
Kıvanç Tatar (MUME Administration and Publicity Assistant)
PhD Candidate, School of Interactive Arts and Technology,
Simon Fraser University, Vancouver, Canada.
MUME Steering Committee
======================
Andrew Brown
Griffith University, Australia
Anna Jordanous
University of Kent, UK
Bob Keller
Harvey Mudd College, US
Róisín Loughran
University College Dublin, Ireland
Michael Casey
Dartmouth College, US
Benjamin Smith
Purdue University Indianapolis, US
--
Kıvanç Tatar
----------------------------------
PhD Candidate
Interactive Arts and Technology
Simon Fraser University, Vancouver, Canada
Email: kivanctatar@gmail.com
Website: https://kivanctatar.com/