Dear Piksel friends, here is the PIKSEL17 -- We Take EmoCoin! workshops
announcement.
Please feel free to spread the word to your friends. Have a nice day!
Piksel Team.
-----------------------------------------------------------------------
PIKSEL17 We Take EmoCoin!
The 15th annual Piksel Festival for Electronic Art and Free Technologies
- Workshops
- November 16th-18th, Bergen (NO)
- http://17.piksel.no
The 15th edition of the Piksel Festival takes place in Bergen (NO),
November 16th-18th 2017. We Take EmoCoin! The Piksel17 festival slogan
points to the new capital: our emotions. Human attention can be
captured through emotions, and therefore it can be monetized. Emotions
have become the new coin: they can be measured, monitored and monetized
in almost real time. Alongside our use of social networks, technology
is also investing in bio-sensing the body; using small components and
microcontrollers, we can collect our own bio-data. So we ourselves, with
our public online behaviour and our stored bio-signals, visualized and
interfaced, create a direct link between emotions and money.
PIKSEL17 – We Take EmoCoin!
-----------------------------------------------------------
Workshops Programme:
All workshops are free to attend.
To sign up, send an email to: prod(at)piksel(dot)no
BioSIGNAL Sensing Workshop by Cristian Delgado
16th Nov
Building: Piksel Studio 207
Time: 14:00-18:00
http://17.piksel.no/?p=72
In this workshop the participants will use small components and
microcontrollers to build biosignal sensors that record pulse, muscle
and cardiac activity, and connect them to instruments and visuals.
Working with plant response sensing, electromyography and blood-oxygen
measurement, they will control both visual and audio software with body
signals and interaction between bodies. The result is a collective
exhibition made by the participants and the invited artist.
Bergen PD Meeting
16th Nov
Building: Piksel Studio 207
Time: 14:00-16:00
http://17.piksel.no/?p=72
Pure Data (PD) is an open-source visual programming language for music
and multimedia creation, often used by composers, performers, software
designers, researchers and artists to create performances and
installations. This first PD meeting will try to gather the Bergen PD
community around regular meetings to discuss and learn about PD, what
it is and how we see or use it, but also to provide a place to discuss
electronic music and open-source culture. The meeting is run by artist
and composer Arthur Hureau with the support of Piksel.
From E-waste to Sound Device by Toni Quiroga
17th Nov
Building: Piksel Studio 207
Time: 11:00-18:00
http://17.piksel.no/?p=230
During this workshop you will learn how to turn parts of e-waste and
trash into functional primitive sound devices. Through the vivisection
of dead media devices you will learn how to extract valuable components
(like motors, VU meters, integrated circuits, transistors and other raw
materials) and reuse them in order to build a primitive and
idiosyncratic instrument. We will build fully recycled electronic
gadgets powered through alternative and ecologically sustainable
methods integrating our own body residuals into the process (if you
want to). The idea is to get a better understanding of new media
through the excavation of the old and obsolete by highlighting the
nonlinear history within those devices.
Praxis LIVE - Hybrid Visual IDE for Live Creative Coding by Neil C. Smith
17th Nov
Building: Piksel Studio 207
Time: 14:00-17:00
http://17.piksel.no/?p=69
Praxis LIVE is an innovative and powerful new way to work with OpenJDK
and tools like Processing. It is a way to create projections,
interactive spaces, custom AV instruments, or live-coding performances.
The workshop will introduce basic project building and patching with
Praxis LIVE. The participants will be able to continue to experiment
with visual patching, or learn how to "drop down" to the built-in code
editor and live recode components using Processing / Java or OpenGL.
They will be able to explore Praxis LIVE's support for physical
computing, including prebuilt integration with TinkerForge open
hardware, or GPIO on the Raspberry Pi. www.praxislive.org |
www.neilcsmith.net
A Recipe for Destruction: Secure Hardware Data Erasure by Nikita Mazurov
18th Nov
Building: Piksel Studio 207
Time: 11:00-13:00
http://17.piksel.no/?p=227
This workshop poses the question of how to securely delete data, which
has become an important concern nowadays. A huge number of software
solutions advocate wholesale drive encryption, but software solutions
are woefully inadequate for the task. This piece therefore proposes a
demonstration of a pragmatic hardware solution: secure device
destruction via open-source recipes. It will demonstrate and walk
attendees through creating homemade recipes to securely get rid of
their devices, whether tablets, laptops, phones, or even desktops. The
ultimate goal of this non-traditional workshop is to illustrate that
for our digital data to truly become 'renewable' it must be liberated
from the prison of the physical form, exorcised from the demon of the
hard drive.
Sonified Textiles by Paola Torres Nuñez del Prado
18th Nov
Building: Piksel Studio 207
Time: 14:00-18:00
http://17.piksel.no/?p=78
The Shipibo-Konibo, from the Peruvian rainforest, openly link their
traditional singing (Ikaros) to the designs they draw on vessels and
their bodies, and the textiles they use as decoration and clothing.
They consider that their designs can be sung.
The workshop includes an introduction to various sonification methods,
ranging from databending to code, using different open-source software
(Audacity, GIMP, a hex editor) and programming platforms for mapping
sounds onto visual data (images and video). It also includes an
explanation of how glitch relates to the designs behind artworks from
the Paracas culture of Perú, the Chincheros textile masters from Cusco,
and the Shipibo-Konibo.
Biotransmissions by Colectivo Electrobiota
18th Nov
Building: Piksel Studio 207
Time: 14:00-18:00
http://17.piksel.no/?p=72
An experiment with electronics and biology seeking to explore different
forms of interspecies communication and relationship with nature. It
offers an introduction to biointeractivity and electronics: we will
build our own biosensors to reveal the latent voices of the different
forms of life that inhabit the rhizosphere.
Vector Synthesis by Derek Holzer
November 27 – November 29
Time: 14:00-20:00
http://17.piksel.no/?p=81
VECTOR SYNTHESIS is an audiovisual, computational art project using
sound synthesis and vector graphics display techniques to investigate
the direct relationship between sound+image. It draws on the historical
work of artists such as Mary Ellen Bute, John Whitney, Nam June Paik,
Ben Laposky, and Steina & Woody Vasulka among many others, as well as
on ideas of media archaeology and the creative re-use of obsolete
technologies. Audio waveforms control the vertical and horizontal
movements as well as the brightness of a single beam of light, tracing
shapes, points and curves with a direct relationship between sound and
image. http://macumbista.net/?page_id=5000
NIME 2018 Call for submissions
Please pardon the cross-posting,
NIME (New Interfaces for Musical Expression) is the premier conference
in human-machine interfaces and interactions for musical performance.
NIME is a gathering of researchers, designers, and musicians who come
together to share knowledge, perform music, and build community through
research presentations, concerts, installations, and workshops.
On behalf of the 2018 NIME Committee I am pleased to announce that the
NIME 2018 "Mirrored Resonances" Conference call for submissions is now
officially open! Co-organized between Virginia Tech and the University
of Virginia, the conference will take place June 3-6, 2018 in
Blacksburg, Virginia. We welcome submissions of papers, posters, panels,
musical performances, installations, demos, and workshops, particularly
those that may respond to the overarching conference theme of “Mirrored
Resonances” and its thematic areas in any of the many ways they might be
interpreted. Likewise, we encourage potential participants to consider
exploring the unique Virginia Tech facilities, including the Institute
for Creativity, Arts, and Technology’s Cube with a massive high density
loudspeaker array. The deadline for the *double-blind peer reviewed
submissions*, including papers, panels, demo papers, music, and
installations is January 20th, 2018. Submissions created by January 20th
will continue to be editable until January 27th, when the submission
process will close. Demos without papers and workshops will be *curated*
and have an extended submission deadline of March 1st, 2018. In
addition to the NIME and academic communities, we also invite industry,
as well as non-academic creatives to consider participating in the
aforesaid categories. For a complete list of important dates visit the
Participate <http://nime2018.icat.vt.edu/Participation/#dates> page.
We are excited to announce that the conference will feature four keynote
artists:
Onyx Ashanti
Benjamin Knapp
Ikue Mori
Pamela Z
If you are interested in sponsorship opportunities, please do not
hesitate to contact us:
<mailto:conference-chairs@nime2018.org?Subject=NIME%202018%20Sponsorship>
On behalf of the entire NIME 2018 Committee, we look forward to
welcoming you in Virginia next June!
NIME2018.ORG <http://nime2018.org>
I did 4 updates this month because of a course I'm teaching, which ends
now, so I'm going to drop this for a while; sorry for polluting the list
with so many updates. Anyway, it's up on deken already, and here are
some details:
https://github.com/porres/pd-else/releases
cheers
[please forward and apologies for cross-posting]
Georgia Tech School of Music
Guthman Musical Instrument Competition
2018 Call for Submissions
Is Now Open!
http://guthman.gatech.edu
Georgia Tech's 2018 Margaret Guthman Musical Instrument Competition is an annual event aimed at identifying the world's next generation of musical instruments and unveiling the best new ideas in musicality, design, and engineering. Wired magazine called the competition an "X-Prize for music," and contestants liken it to a TED Conference for new musical instrument designers. The Guthman Competition will take place March 7-8, 2018, at Georgia Tech's Ferst Center for the Arts, in Atlanta, Georgia.
The deadline for submissions has been extended to November 3, 2017. Approximately twenty semi-finalists will be invited to demonstrate, discuss, and perform with their instruments as they compete for $10,000 in cash prizes.
Submit Your Instrument at: http://guthman.gatech.edu/guthman-submissions
///////////////////////////////////////////////////
:::: This Year's Judges ::::
PERRY COOK - Professor Emeritus, Princeton University
SUZANNE CIANI - Electronic Music Pioneer
JESPER KOUTHOOFD - Founder, Teenage Engineering
///////////////////////////////////////////////////
To learn more about the competition,
visit guthman.gatech.edu
Just a few days later I'm releasing an update: I deleted one object and
added two others, but I also changed some important things. Details at
https://github.com/porres/pd-else/releases/tag/v1.0-beta3
I ended up not uploading beta 2 to deken and have now already uploaded
beta 3, so it's just a matter of time until it shows up. I've deleted
beta 1, so just "else-v1.0beta3" will appear in deken.
cheers
The second beta version is out: 145 objects. The trigate~ object has been
deleted, but we have 12 new objects: display / display~ / ar~ / metro~ /
impseq~ / sequencer~ / trig2bang~ / trigger~ / sampler~ / trighold~ /
randpulse~ / brickwall~
Find it here: https://github.com/porres/pd-else/releases/tag/1.0-beta2
It should be up in deken in around 24h.
Pure Data Patching Circle Hebden Bridge (PdPcHb)
"At Patching Circles, you can work on personal projects, professional
projects, school projects, ask for help, help others, or just patch quietly
to yourself in a room full of other people patching patches and helping
other people patch."
7-9pm Tuesday 3rd October 2017 (meeting bi-weekly) @:
Big Tin Shed
Alternative Technology Centre
Unit 7,
Victoria Works,
Victoria Rd,
Hebden Bridge,
West Yorkshire UK
HX7 8LN
Sessions run in association with:
Noisy Toys*
http://noisytoys.org/event/scavengers-club-starts-3-october-at-the-big-tin-…
https://www.facebook.com/noisytoys
Who are running Scavengers Club for ages 8+ from 5pm
Small donations appreciated (£2) to cover basic costs
Please contact me for further PdPcHb info
Hope to see you there,
Julian
*Many thanks to Steve Summers for making this happen
Georgia Tech School of Music
Guthman Musical Instrument Competition
2018 Call for Submissions
Is Now Open!
http://guthman.gatech.edu
Georgia Tech's 2018 Margaret Guthman Musical Instrument Competition is an annual event aimed at identifying the world's next generation of musical instruments and unveiling the best new ideas in musicality, design, and engineering. Wired magazine called the competition an "X-Prize for music," and contestants liken it to a TED Conference for new musical instrument designers. The Guthman Competition will take place March 7-8, 2018, at Georgia Tech's Ferst Center for the Arts, in Atlanta, Georgia.
The deadline for submissions is October 20, 2017. Approximately twenty semi-finalists will be invited to demonstrate, discuss, and perform with their instruments as they compete for $10,000 in cash prizes.
Submit Your Instrument at: http://guthman.gatech.edu/guthman-submissions
///////////////////////////////////////////////////
:::: This Year's Judges ::::
PERRY COOK - Professor Emeritus, Princeton University
SUZANNE CIANI - Electronic Music Pioneer
JESPER KOUTHOOFD - Founder, Teenage Engineering
///////////////////////////////////////////////////
To learn more about the competition,
visit guthman.gatech.edu or our Facebook page.
We’d love to hear your comments!
Hi, I've been working this year on a new library called "ELSE". This is
its first beta release, and it is up in deken for Windows, Mac and Linux.
Here's my repository: github.com/porres/pd-else
It has 134 objects as of now, and I plan to keep adding more until the
stable release. It is a multi-purpose / swiss-army-knife library, with
basic building blocks for patching, such as oscillators, filters, etc. It
is strongly oriented towards sample accuracy, so you can trigger objects
with impulses and gates. SuperCollider is also like that, so I stole a
bunch of stuff and ideas from its objects.
My idea is to rely heavily on it and on cyclone to patch the examples in
my didactic material, which I've been developing for 9 years.
I'm presenting a paper tomorrow about it here in São Paulo, Brazil, at SBCM
( http://compmus.ime.usp.br/sbcm/2017/ ). Here's the paper:
https://drive.google.com/file/d/0B3AoiT0xk8fnUmxqZ0MtNlptNHM/view?usp=shari…
An object list, organized by category:
https://github.com/porres/pd-else/blob/72ad687ad3b85514be5cf9f6babf102c73b9…
Any feedback is welcome
Thanks
this is just to announce that the latest and greatest Pd is now also
available pre-packaged for Debian 9 ("stretch").
Debian/stretch was released in June 2017, *before* Pd-0.48 was
released. Debian policy does not allow updating packages after a Debian
release, which often means that users end up with out-of-date packages
(after all, Debian does a release only every 2 years or so, which means
that the available packages can be several years old, especially if
people insist on not upgrading to the latest Debian/stable).
in comes Debian/backports, which allows users to install up-to-date
packages on their stable Debian systems, with full Debian support.
For instructions on how to enable this, see:
http://backports.debian.org/
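For example, here is a minimal sketch of the standard backports setup
(an illustration on my part, not official instructions; the Debian
package for Pd-vanilla is named "puredata"):

  # add the stretch-backports archive to /etc/apt/sources.list:
  deb http://deb.debian.org/debian stretch-backports main

  # then update and explicitly install Pd from backports:
  apt-get update
  apt-get -t stretch-backports install puredata

Backported packages are never installed automatically; you always have
to request them explicitly with the "-t stretch-backports" option.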
I intend to provide backports for all Pd related packages (Pd-vanilla
and all those externals available).
I do not intend to provide backports for Debian/jessie (Raspberry Pi
users are advised to upgrade to Debian/stretch; really, anybody is
advised to do that).
mfgsdr
IOhannes
sidenote: i generally recommend using Debian/stable on servers, and
Debian/testing (constantly updated) for anything multimedia related (as
you usually don't want to use multimedia applications that are several
years old). with Debian/backports you can have the best of both worlds:
a rock-solid OS with some select up-to-date applications.
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
MUME Google group - Apologies for cross-posting
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
Dear music enthusiasts,
From automatic composition and live musical agents to sound synthesis
parameter tuning and automatic mixing, Musical Metacreation
(http://musicalmetacreation.org/) is the community that gathers artists
and researchers interested in the partial or complete automation of
musical tasks.
After five successful International Workshops on Musical Metacreation,
we are offering a new communication channel for those interested
specifically in generative music and all that surrounds it. We have
started a Google group / mailing list, and you can subscribe here:
https://groups.google.com/forum/#!forum/musicalmetacreation
or by email using musicalmetacreation+subscribe(a)googlegroups.com
Join the group, and spread the word where appropriate. We are looking
forward to reading your posts.
Best wishes,
Pr. Philippe Pasquier (Workshop Chair)
School of Interactive Arts and Technology (SIAT)
Simon Fraser University, Canada
http://metacreation.net/
https://www.kadenze.com/programs/generative-art-and-computational-creativity
Pr. Arne Eigenfeldt
School for the Contemporary Arts
Simon Fraser University, Canada
Dr. Oliver Bown
Design Lab, Faculty of Architecture, Design and Planning
The University of Sydney, Australia
Kıvanç Tatar
School of Interactive Arts and Technology,
Simon Fraser University, Vancouver, Canada.
----------------------
http://musicalmetacreation.org
To Pd-announce:
Pd version 0.48-0test6 is available on http://msp.ucsd.edu/software.htm
or via git from github:
git clone https://github.com/pure-data/pure-data.git
This fixes the major bugs I'm aware of from "test5" and earlier. Many minor
problems and wish-list items remain, but I think most of them can wait for a
future release.
cheers
Miller
RELEASE ANNOUNCEMENT
VECTOR SYNTHESIS LIBRARY 0.1
The Vector Synthesis library allows the creation and manipulation of 2D
and 3D vector shapes, Lissajous figures, and scan processed image and
video inputs using audio signals sent directly to oscilloscopes, hacked
CRT monitors, Vectrex game consoles, ILDA laser displays, or
oscilloscope emulation software using the Pure Data programming
environment.
Please check the README.md file for a full list of requirements,
capabilities, and acknowledgements to the amazing people who have helped
me along the way.
https://github.com/macumbista/vectorsynthesis
--
derek holzer
noise.art.technology
http://macumbista.net
We have a new release of Cyclone (0.3 beta 2), the library that clones
Max/MSP objects! Not much news here, except that this version will be
the first from cyclone 0.3 to be included in the upcoming Purr Data
release. I know I said this about the release of cyclone 0.3 beta 1,
but this time it's for real ;)
The most important change since beta 1 is that operators from the
cyclone sub-library (like >~, <~, etc.) now accept the "cyclone/"
prefix, for compatibility with Purr Data, which does not require a
library to load these objects since it has the [hexloader] external
built in.
I wish vanilla could have [hexloader] built in too, but this is
unlikely, so the plan is not only to keep using a library named cyclone
to carry these operators, but also to include all objects from cyclone
in a single binary pack, the way cyclone was once originally
distributed.
A couple of other highlights: we also made major fixes to the old
[sprintf] and took care of a bug we had introduced in [funnel].
This should be up in deken in a day or so, but you can download it here if
you like: https://github.com/porres/pd-cyclone/releases
And stay tuned for the next Purr Data release
cheers
"If I told you there was a great way to promote your band all over the net,
for free, you'd probably think, 'Yeah, what's the catch?'"
racolage (French): the practice of forced seduction used to attract a
potential customer (in the context of prostitution).
racolage.xxx is a label that releases experimental music through spam
email. Each release is a single track, sent to hundreds of thousands of
random email addresses collected with the same techniques used by usual
spammers.
If you are interested in releasing a track on racolage.xxx, please send
to contact(a)racolage.xxx:
♦ the title of your release: artist name (or some weird pseudonym),
track title, ... whatever; 30 characters max
♦ a description for your release: 200 characters max
♦ an image
♦ an audio track: mp3 format, 5mb max
We will respond to you shortly.
http://racolage.xxx/#about
Please help distribute this Call -- Apologies for cross-posting
SPECIAL ISSUE - Musica Hodie & The 16th Brazilian Symposium on
Computer Music: “Contributions of sound and music computing to musical
and artistic knowledge”
Guest Editors: Damián Keller, Luiz Naveda, Rogério Costa
SBCM 2017 - The 16th Brazilian Symposium on Computer Music
São Paulo, Brazil, September 3-6, 2017
http://www.sbc.org.br/sbcm2017/
http://compmus.ime.usp.br/sbcm/
We are glad to announce that the journal Musica Hodie will publish a
selection of contributions from SBCM 2017. Authors of the selected
contributions will be invited to submit extended versions of the
original text to a special issue on "Contributions of sound and music
computing to musical and artistic knowledge". We invite all authors to
discuss the mutual influences that reshape the fields of computer
music research and the knowledge of music, sound, and art. All
accepted contributions (full papers, posters, and music submissions)
will be considered for the special issue during the double-blind
peer-review process. It is important to emphasize that this second
selection will require complete articles, extended relative to the
works submitted to SBCM. More details, deadlines, and the schedule
for the special issue will be published on the conference website.
About Musica Hodie
The Musica Hodie Journal is a bi-annual publication hosted by the
School of Music and Drama of the Federal University of Goiás (Brazil)
since 2001. The journal is indexed in the Web of Science, RILM –
International Repertory of Music Literature, Arts & Humanity Index,
EBSCO, and CAPES-Qualis (A1). The publication promotes contributions
in the areas of musical performance and interfaces, composition and
information technologies, music education, music and
interdisciplinarity, music therapy, sonic languages, semiotics and
musicology. More information at
[https://www.revistas.ufg.br/musica/index]
About SBCM 2017
The Brazilian Symposia on Computer Music are thriving and exciting
venues for sharing ideas about recent developments in the fields of
computer music, sound and music computing, music information
retrieval, computational musicology, multimedia performance and many
other things related to art, science, and technology. We would like to
invite you to contribute to and participate in this event, which will
take place at the beautiful campus of the University of São Paulo,
Brazil, from September 3rd to September 6th 2017. This year's Keynote
Speakers are Xavier Serra (Director of the Music Technology Group at
the Universitat Pompeu Fabra in Barcelona), Emilios Cambouropoulos
(Member of the Cognitive and Computational Musicology Group at the
Aristotle University of Thessaloniki) and Damián Keller (Founder of
the Amazon Center for Music Research at the Federal University of
Acre). During the event, participants will also have the opportunity
to attend oral presentations of music and technical papers, poster
discussion sessions, discussion panels and concerts, and will have
plenty of opportunities to interact and discuss collaborations with
other participants.
Contributions may take the form of full papers, posters, and art, as
described in the Instructions for Authors [1]. Full papers, which may
be technically or musically focused, are expected to present original
research, communicated in oral presentations. Music papers are
expected to deal with actual instances of music or sound art pieces,
which shall be included in the submission. Posters, which are
accompanied by extended abstracts and may also be technically or
musically focused (with accompanying music pieces), are expected to
bring ongoing research open to debate and contributions from
participants during specific gatherings. Art can take the form of
compositions, performances, installations or objects, which shall be
presented either in concerts or exhibition rooms; artistic pieces are
also required to be accompanied by extended abstracts, describing how
art, science, and technology come together in the creative process.
All papers and extended abstracts will be included in the electronic
proceedings, which will be freely available on the SBCM homepage [2].
Important Dates:
May 20th: Deadline for Papers, Posters and Music submissions
June 30th: Notifications of accepted submissions
July 7th: Author registration deadline / Early-bird registration deadline
July 14th: Camera-ready accepted submissions deadline
Please feel free to contact us [3] with any questions you may have.
We hope to see you all in São Paulo!
The SBCM 2017 Organizing Committee
Links:
[1] Instructions for Authors:
http://compmus.ime.usp.br/sbcm/2017/instructions-for-authors
[2] SBCM Homepage: http://compmus.ime.usp.br/sbcm/
[3] Contact page: http://compmus.ime.usp.br/sbcm/2017/contact-us
Luiz Naveda
_____________________________________________________
Professor of Musicology
School of Music - State University of Minas Gerais (Brazil)
http://www.naveda.info
http://edu.naveda.info (academic work, in Portuguese)
^v^
^v^
^v^
^~^~^~^~^~^~^~^~~^~~^~~^~^~^~~~^^~^~~~~
^~^~^~^~^~^~^~^~^~^~~^~~^~~^~^~^~~~^~~~
*Cyclone 0.3 beta-1 is out!*
Cyclone: A set of Pure Data objects cloned from Max/MSP
- Cyclone expands Pure Data with objects cloned from Cycling74's
Max/MSP. It thus provides a good level of compatibility between the two
environments, helping users of both systems in the development of
equivalent patches.
On February 21st of 2017 we released the first alpha version; this
second release marks the beginning of the beta phase.
There have been numerous fixes since the last alpha, but worth
mentioning is the addition of two new objects: [listfunnel] and
[unjoin]. Everything that's new since version 0.2 is described here:
https://github.com/porr…/pd-cyclone/…/cyclone-0.3-changlelog
You can check out and download cyclone from our GitHub:
https://github.com/porres/pd-cyclone/releases
You can also just download it from 'deken', directly from Pure Data via
"Help => Find externals": just type 'cyclone' and hit 'search'.
cheers
(Apologies for multiple postings)
--------------------------------------------------------------------------------------------------
13th International Summer Workshop on Multimodal Interfaces (eNTERFACE'17) July 03-28, 2017, Porto, Portugal.
http://artes.ucp.pt/enterface17
Call for Participation - Extended Submission Deadline: May 07, 2017 (firm)
--------------------------------------------------------------------------------------------------
The Digital Creativity Centre (CCD), Universidade Catolica Portuguesa - School of Arts (Porto, Portugal) invites researchers from all over the world to join eNTERFACE'17, the 13th one-month Summer Workshop on Multimodal Interfaces. During this workshop, senior project leaders, researchers, and students gather in one place to work in teams on pre-specified challenges for 4 weeks. Each team has a defined project and will address specific challenges.
Senior researchers, PhD students, and undergraduate students interested in participating in the Workshop should send their application by email before the 7th of May 2017 (extended) to enterface17(a)porto.ucp.pt (Guidelines for researchers applying to a project<http://artes.ucp.pt/enterface17/authors-kit/Guidelines.for.researchers.appl…>).
Participants must cover their own travel and accommodation expenses. Information about the venue location and stay is provided on the eNTERFACE'17 website. Note that although no scholarships are available for PhD students, there are no application fees. The list of projects that participants can choose from follows.
How to Catch A Werewolf, Exploring Multi-Party Game-Situated Human-Robot Interaction
Lead-Organizers: Catharine Oertel, KTH (PI), Samuel Mascarenhas, INESC-ID, Zofia Malisz, KTH, José Lopes, KTH, Joakim Gustafson, KTH
In this project we will focus on the implementation of the roles of the "villager" and the "werewolves" using the IrisTK dialogue framework and the robot head Furhat. To be more precise, the aim of this project is to use multi-modal cues to inform the theory-of-mind model that drives the robot's decision-making process. Theory of mind is a concept related to empathy; it refers to the cognitive ability of modeling and understanding that others have beliefs and intentions different from our own. In lay terms, it can be described as putting oneself in another's shoes, and it is a crucial skill for properly playing a deception game like "Werewolf".
Full Project Description<http://artes.ucp.pt/enterface17/proposals/02.Final%20Proposal_Catharine.Oer…>
KING'S SPEECH Foreign language: pronounce with style!
Principal investigators: Georgios Athanasopoulos*, Céline Lucas* and Benoit Macq* (ICTEAM-ELEN - Université Catholique de Louvain, Belgium)
The principal investigators are developing the GRAAL1 project, which is concerned with developing a set of tools to facilitate self-training in foreign language pronunciation, with the first target being learning French. The goal of KING'S SPEECH is to develop new interaction modalities and evaluate them in combination with existing functionality, aiming to better personalize GRAAL to the taste and specificities of each learner. This personalization will rely on a machine learning approach and an experimental set-up to be developed during eNTERFACE'17. The eNTERFACE'17 developments could be based on a karaoke scenario where the song is replaced by authentic sentences (extracts from news, films, advertisements, etc.). Applications like SingStar (Sony) or JustSing (Ubisoft) could also serve as a source of inspiration, e.g., using a smartphone as a microphone while interacting with avatars.
Full Project Description<http://artes.ucp.pt/enterface17/proposals/02.Final%20Project_King's%20speech.pdf>
The RAPID-MIX API: a toolkit for fostering innovation in the creative industries with Multimodal, Interactive and eXpressive (MIX) technology
Principal Investigators: Francisco Bernardo, Michael Zbyszynski, Rebecca Fiebrink, Mick Grierson (EAVI - Embodied AudioVisual Interaction group, Goldsmiths, University of London, Computing); Team Candidates: Sebastian Mealla, Panos Papiotis (MTG/UPF - Music Technology Group, Universitat Pompeu Fabra), Carles Julia, Frederic Bevilacqua, Joseph Larralde (IRCAM - Institut de Recherche et Coordination Acoustique/Musique)
Members of the RAPID-MIX project are building a toolkit that includes a software API for interactive machine learning (IML), digital signal processing (DSP), sensor hardware, and cloud-based repositories for storing and visualizing audio, visual, and multimodal data. This API provides a comprehensive set of software components for rapid prototyping and integration of new sensor technologies into products, prototypes and performances.
We aim to investigate how developers employ and appropriate this toolkit so we can improve it based on their feedback. We intend to kickstart the online community around this toolkit with eNTERFACE participants as power users and core members, and to integrate their projects as demonstrators for the toolkit. Participants will explore and use the RAPID-MIX toolkit for their creative projects and learn workflows for using embodied interaction with sensors.
Full Project Description<http://artes.ucp.pt/enterface17/proposals/02.Final%20Project_RAPID-MIX.pdf>
Prynth
Principal investigator: Ivan Franco (IDMIL / McGill University)
Prynth is a technical framework for building self-contained programmable synthesizers, developed by Ivan Franco at the Input Devices and Music Interaction Lab (IDMIL) of McGill University. The goal of this new framework is to support the rapid development of a new breed of digital synthesizers and their respective interaction models.
Full Project Description<http://artes.ucp.pt/enterface17/proposals/02.Final%20Proposal_prynth.pdf>
End-to-End Listening Agent for Audio-Visual Emotional and Naturalistic Interactions
Principal Investigators: Kevin El Haddad (TCTS Lab - numediart institute - University of Mons, Belgium), Yelin Kim (Inspire Lab - University at Albany, State University of New York, USA), Hüseyin Çakmak (TCTS Lab - numediart institute - University of Mons, Belgium)
In this project, we aim to build a listening agent that reacts to a user with naturalistic, human-like behavior and nonverbal expressions. The agent's behavior will be modeled by and built on three main components: recognizing and synthesizing emotional and nonverbal expressions, and predicting the next expression to synthesize based on the currently recognized expressions. Its behavior will be rendered on a previously developed avatar, which will also be improved during this workshop. At the end we should obtain functioning and efficient modules, which ideally should work in real time.
Full Project Description<http://artes.ucp.pt/enterface17/proposals/02.Final%20Proposal_listening%20a…>
Cloud-based Toolbox for Computer Vision
Principal investigator: Dr. Sidi Ahmed MAHMOUDI, Faculty of Engineering, University of Mons, Belgium. Candidates: Dr. Fabian LECRON, PhD, Faculty of Engineering, University of Mons, Belgium; Mohammed Amin BELARBI, PhD student, Faculty of Exact Sciences and Mathematics, University of Mostaganem, Algeria; Mohammed EL ADOUI, PhD student, Faculty of Engineering, University of Mons, Belgium; Abdelhamid DERRAR, Master's student, University of Lyon, France; Pr. Mohammed BENJELLOUN, PhD, Faculty of Engineering, University of Mons, Belgium; Pr. Said MAHMOUDI, PhD, Faculty of Engineering, University of Mons, Belgium.
Nowadays, images and videos are present everywhere; they can come directly from cameras and mobile devices, or from other people who share them. They are used to present and illustrate objects in a large number of situations (public areas, airports, hospitals, football games, etc.). This makes image and video processing algorithms an important tool for various domains of computer vision, such as video surveillance, human behavior understanding, medical imaging, and image and video database indexation. The goal of this project is to develop an extension of our cloud platform (MOVACP), developed in the previous edition of the eNTERFACE'16 workshop, which integrated several image and video processing applications. Users of this platform can use these methods without having to download, install and configure the corresponding software: each user selects the required application, loads their data and retrieves results, in an environment similar to a desktop. Within the eNTERFACE'17 workshop, we would like to improve the platform and develop four main tools for it: 1. integration of the major image and video processing algorithms that guests could use to build their own applications; 2. integration of machine learning methods (used for image and video indexation) that exploit users' uploaded data (if they accept, of course) in order to improve result precision; 3. fast treatment of data acquired from distant IoT systems; 4. development of an online 3D viewer for the visualization of 3D reconstructed medical images.
Keywords: cloud computing, image and video processing, video surveillance, medical imaging.
Full Project Description<http://artes.ucp.pt/enterface17/proposals/02.Final%20Project_CMP.pdf>
Across the virtual bridge
Project Coordinators: Thierry RAVET (software design, motion signal processing, machine learning), Fabien GRISARD (software design, human-computer interface), Ambroise MOREAU (computer vision, software design), Pierre-Henri DE DEKEN (software design, game engine) - Numediart Institute, University of Mons, Belgium.
The goal of the project is to explore different ways of creating interactions between people evolving in the real world (local players) and people evolving in a virtual representation of the same world (remote players). The latter will be explored through a virtual reality headset, while local players will be geo-located through an app on a mobile device. Actions executed by remote players will be perceived by local players in the form of sound or visual content, and actions performed by local players will impact the virtual world as well. Local and remote players will be able to exchange information with each other.
Keywords: virtual world, mixed reality, computer-mediated communication.
Full Project Description<http://artes.ucp.pt/enterface17/proposals/02.Final%20Project_AcrossTheVirtu…>
ePHoRt project: A telerehabilitation system for reeducation after hip replacement surgery
Principal investigators: Yves Rybarczyk (Nova University of Lisbon, Portugal), Arián Aladro (Universidad de las Américas, Ecuador), Mario Gonzalez (Health and Sport Science, University of Zaragoza, Spain), Santiago Villarreal (Universidad de las Américas, Quito, Ecuador), Jan Kleine Detersa (University of Twente, Human Media Interaction)
This project aims to develop a web-based system for the remote monitoring of rehabilitation exercises in patients after hip replacement surgery. The tool intends to facilitate and enhance motor recovery, since patients will be able to perform the therapeutic movements at home and at any time. As in any rehabilitation program, the time required to recover is significantly diminished when the individual has the opportunity to practice the exercises regularly and frequently. However, the condition of such patients prohibits transportation to and from medical centres, and many of them cannot afford a private physiotherapist. Thus, low-cost technologies will be used to develop the platform, with the aim of democratizing access to it. For instance, the motion capture system will be based on the Kinect camera, which provides a good compromise between accuracy and price. The project will be divided into four main stages. First, the architecture of the web-based system will be designed. Three different user interfaces will be necessary: (i) one to record quantitative and qualitative data from the patient, (ii) another for the therapist to consult the patient's performance and adapt the exercises accordingly, and (iii) one for the physician to provide medical supervision of the recovery process. Second, it will be essential to develop a module that performs automatic assessment and validation of the rehabilitation activities, in order to provide real-time feedback to the patient regarding the correctness of the executed movements. Third, we also intend to make use of serious-game and affective-computing approaches, with the intention of motivating the user to perform the exercises for a sustained period of time. Finally, an ergonomic study will be carried out in order to evaluate the usability of the system.
Full Project Description<http://artes.ucp.pt/enterface17/proposals/02.Final%20Proposal_Full_proposal…>
Big Brother, can you find, classify, detect and track us?
Principal investigators: Marc Décombas, Jean Benoit Delbrouck (TCTS Lab - University of Mons, Belgium)
In this project, we will build a system that can detect and recognize objects or humans in video and describe them as fully as possible. Objects may be moving, as may the people coming in and out of the visual field of the camera(s). Our project will be split into three main tasks: detection and tracking, people re-identification, and image/video captioning.
The system should work in real time and should be able to detect people and follow them, re-identify them when they come back into the field, and give a textual description of what each person is doing.
Full Project Description<http://artes.ucp.pt/enterface17/proposals/02.Final%20Proposal_BigBrother.pdf>
Networked Creative Coding Environments
Principal investigator: Andrew Blanton, Digital Media Art at San Jose State University
As part of ongoing research, Andrew Blanton will present a workshop on using Amazon Web Services for the creation of networked art. The workshop will demonstrate sending data from Max/MSP to a Unix-based Amazon Web Services server and receiving data into p5.js via WebSockets. It will explore the critical discourse surrounding data as a borderless medium and the ideas and potentials of using a medium that can have global reach.
Full Project Description<http://artes.ucp.pt/enterface17/proposals/02.FinalProposal_NCCE.pdf>
AUDIOVISUAL EXPERIENCE THROUGH IMAGE HOLOGRAPHY
Principal investigators: Maria Isabel Azevedo (ID+ Research Institute for Design, Media and Culture, University of Aveiro), Elizabeth Sandford-Richardson (University of the Arts, Central Saint Martins College of Art and Design)
Today in interactive art there are not only representations that speak of the body, but actions and behaviours that involve the body. In digital holography, the image appears and disappears from the observer's field of vision; because the holographic image is light, we can see multidimensional spaces, shapes and colours existing at the same time, presence and absence of the image on the holographic plate. And the image can float in front of the plate, so that people sometimes try to touch it with their hands.
This means that, for the viewer, these will be interactive events with no beginning or end, perceivable in any direction, forward or backward, depending on the relative position and the time the viewer spends in front of the hologram.
In this workshop we are proposing an audiovisual interactive installation composed of four digital holograms and a spatial soundscape. When viewers move in front of each hologram, different sources of sound are triggered. The outcome will be presented in the last week of July with an invited performer. We are looking for sound designers and interaction programmers.
Keywords: Digital holographic image, holographic performance, sound spatialization, motion capture
Full Project Description<http://artes.ucp.pt/enterface17/proposals/02.FinalProposal_holo.pdf>
Study of the reality level of VR simulations
Principal investigators: Andre Perrotta, UCP/CITAR
We propose to develop a VR simulation, based on 360° video, spatialized audio, and force feedback using fans and motors, of near-collision experiences with large vehicles from a first-person perspective, to be experienced by users wearing head-mounted stereoscopic VR gear in a MOCAP (motion capture) enabled environment that provides a one-to-one relationship between the real and virtual worlds.
Dear PD community
I am happy to announce the beta release of Context, a powerful new sequencer for PD. Context is a modular sequencer that re-imagines musical compositions as networks. It combines traditional step sequencing and timeline playback with non-linear and algorithmic paradigms, all in a small but advanced GUI.
Unlike most other sequencing software, Context is not an environment. It is a single abstraction which may be replicated and interconnected to create an environment in the form of a network. There are literally endless possibilities in creating Context networks, and the user has a great deal of control over how their composition will function.
From a technical perspective, Context features a lot of things that you don't often see in PD, such as click-and-drag canvas resizing, dynamic menus, embeddable timelines, and fully automatic state saving. It also boasts its own language, parsed entirely within PD.
Context is a work in progress--there are still lots of bugs in the software and lots of holes in the documentation. However, I have gotten it to a place where I feel it is coherent enough for others to use, and where it would benefit from wider feedback. I am especially looking for people who can help me with proofreading and bug tracking. Please let me know if you want to join the team! Even if you can't commit to much, pointing out typos or bits of the documentation that are confusing will be very helpful to me.
Some notes on the documentation: I have been putting 90% of my efforts recently into writing the manual, and only 9% into writing the .pd help files (the remaining 1% being sleep). The help files are pretty, but the information in them is not very useful. This will be corrected as soon as I have more time and better perspective. In the meantime, please don't be put off by the confusing help files, and treat the manual as the main resource.
Context is available now at https://github.com/LGoodacre/context-sequencer
A few other links:
* The debut performance of Context at PDCon16~: https://www.youtube.com/watch?list=PL6f_f6nYVx1f1TdAa58418kk5K9bWJ7pm&v=poW…
* An explanation of this performance: http://newblankets.org/liam_context/context-patch.webm
* A small demo video: https://www.youtube.com/watch?v=5_eLOGA7Ado
* My paper from PDCon16~: https://contextsequencer.files.wordpress.com/2016/11/goodacre-context.pdf
Finally, I should say that this project has been my blood, sweat and tears for the past 18 months, and it would mean a great deal to me to see other people using it. Please share your patches with me! And also share your questions--I will always be happy to respond.
I would like to thank the PD community for their support and inspiration, in particular Joe Deken and the organizers of PDCon16.
Regards,
Liam Goodacre
Dear list,
I just finished a tiny library (actually just one abstraction and one external) which allows you to stream audio to disk in zero logical time (= as fast as possible). This is basically the same as running Pd in batch mode (i.e. with the -batch flag), but you can do it within the usual interactive mode!
My first approach was single-stepping DSP with [switch~]. Unfortunately, that doesn't work with clock objects like [delay], [metro] and [line], because clock timeouts are tied to the global scheduler. Replacing the [until] driving [switch~] with a very fast [metro] at least allowed clocks with delay 0 to time out (e.g. [bang~], [env~], etc.). Finally, I went for the most naive solution: I wrote a simple external which calls sched_tick() whenever it receives a bang. What shall I say: it works :-D
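To illustrate the idea, here is a minimal sketch of what such a bang-to-sched_tick() external can look like (my own sketch, not the actual batchrecord source; it assumes Pd exports sched_tick() with no arguments, as recent Pd versions do):

  /* schedtick.c - advance Pd's scheduler by one tick per bang,
     so logical time runs as fast as the CPU allows.
     Sketch only: assumes sched_tick(void) is exported by Pd. */
  #include "m_pd.h"

  void sched_tick(void);          /* Pd's internal scheduler tick */

  static t_class *schedtick_class;

  typedef struct _schedtick {
      t_object x_obj;
  } t_schedtick;

  static void schedtick_bang(t_schedtick *x)
  {
      (void)x;                    /* no per-object state needed */
      sched_tick();               /* run one DSP block plus any due clocks */
  }

  static void *schedtick_new(void)
  {
      return pd_new(schedtick_class);
  }

  void schedtick_setup(void)
  {
      schedtick_class = class_new(gensym("schedtick"),
          (t_newmethod)schedtick_new, 0, sizeof(t_schedtick),
          CLASS_DEFAULT, 0);
      class_addbang(schedtick_class, (t_method)schedtick_bang);
  }

Banging such an object from a tight [until] loop then advances any number of DSP blocks in zero logical time.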
this library made me very happy, because it allows me to render any number of seconds/minutes/hours of a patch to disk without leaving the interactive mode. This way I can have a nice big chaotic patch, tweak it till I have the right settings, and then just write, say, an hour of sound material in a few minutes. I've just rendered 15 minutes of super-timestretched Henry Mancini.
I'm pretty sure someone has already had this idea but I didn't come across anything...
here you can download the alpha release: https://git.iem.at/ressi/batchrecord
for Windows, it should work out of the box. Linux and OSX users just have to compile [schedtick] with "build.sh" (you're of course warmly invited to send me the binaries :-)
Any kind of feedback is highly welcome! I tested Pd 0.47.1 on a Lenovo ThinkPad running Windows 7 and couldn't find an issue yet. Once I know that it is stable and safe (which I'm not totally sure about...) I will make a proper release and upload it to Deken.
Have fun!
Christof