Pd's constraints make that kind of automatic allocation unlikely. Read this:
'Since at least 1990, users and critics of Max/FTS have observed that it would be desirable for objects to be automatically allocated to processors in a way that would minimize the bandwidth of interconnections between the objects. This would free the user from the cumbersome task of understanding the actual flow of data between objects in the patch; the software would automatically assess that.

This didn't prove practical, for two reasons. First, as has long been well known, one can't compute the quantity of data that will flow between any given pair of objects in a patch (at least, not if the patching language is able to solve arbitrary computing problems). Predicting how much data will flow where is hopeless.

The second problem is that nobody has been able to make an expressive patching language that doesn't depend on objects sharing data. In Max/FTS (and in Pd as well) this takes the form of "named" objects such as arrays. Any automatic distribution of patches that allows accessing arrays would have to place every object that accesses any particular array on the same processor, or else use some kind of locking mechanism that would be unlikely to work in real time. Also, any situation in which there is recombination of message fanout would require that both message paths be synchronized, i.e., that both message paths go through the same itinerary of processors or be otherwise delay-equalized. In combination, these constraints would require that, for complete transparency, almost any interesting patch would have to reside on a single processor. It appears to be an inescapable fact that multiprocessing has non-hideable effects on the execution of "patches" and can't effectively be carried out without the user's active participation.'
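To make the locking point above concrete, here's a minimal C sketch (hypothetical names, nothing from Pd's actual sources) of two threads sharing a "named array" the naive way. The mutex keeps the data consistent, but the audio thread can now block behind the message thread for an unbounded time, which is exactly what a real-time deadline can't absorb:

    #include <pthread.h>
    #include <string.h>

    #define ARRAY_SIZE 1024

    /* A shared "named array", as two Pd objects might share a table. */
    static float shared_array[ARRAY_SIZE];
    static pthread_mutex_t array_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Message thread: rewrites the whole table while holding the lock. */
    void message_thread_write(const float *new_contents)
    {
        pthread_mutex_lock(&array_lock);
        memcpy(shared_array, new_contents, sizeof(shared_array));
        pthread_mutex_unlock(&array_lock);
    }

    /* Audio thread: reads one sample. Correct, but if the writer holds
       the lock when the audio callback runs, the callback blocks and
       the result is a missed deadline, i.e. an audible dropout. */
    float audio_thread_read(int index)
    {
        float sample;
        pthread_mutex_lock(&array_lock);   /* may wait an unbounded time */
        sample = shared_array[index];
        pthread_mutex_unlock(&array_lock);
        return sample;
    }

Try-locks and priority inheritance soften this but don't remove it, which is why the quote dismisses locking as "unlikely to work in real time."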
It might be easier if Pd used a system of buses for routing rather than arbitrarily drawn patch connections, or if a graphical patching environment had a good way of implementing something like SuperCollider's Synth, which works with a flexible node order but has relatively limited means of input and output. Here's what I see as the bare minimum we'd need to address to make your wish a reality:
1) Unit generators instantiated in Pd have to exist somewhere in a running patch in order to produce output. This is in contrast to SC3 and csound, where instances of Synths or instrument templates are created and destroyed on the fly. In those two, the order of creation and destruction of instruments (and in csound, the order they're defined in the orchestra) matters a lot in the DSP graph, which makes the graph more predictable; SC3 also gives the user explicit methods for ordering nodes (see the first sketch after this list).
2) Connections in Pd are free-form and arbitrarily complex, which makes the DSP graph far more ramified than a mixer model with buses, inserts, aux sends, etc., and therefore much less predictable in the abstract.
3) Pd is deterministic, which means (as noted in the quote above) that any memory shared across threads would need locking, which can be a killer in real time, not to mention hard to scale and to prove thread-safe. [pd~] communicates via a FIFO because it needs to keep messages and audio in sync, block by block (see the second sketch after this list).
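For point 1, here's a rough C sketch of the ordered-node-list idea (invented types, not SC3's or Pd's real code): the server simply walks a list once per block, so execution order is exactly list order, and head/tail insertion is the user's ordering handle.

    #include <stddef.h>

    /* A hypothetical DSP node: one perform routine per audio block. */
    typedef struct node {
        struct node *next;
        void (*perform)(struct node *self, int blocksize);
    } node_t;

    static node_t *chain_head = NULL;

    /* addToHead-style: this node now runs before everything else. */
    void node_add_to_head(node_t *n)
    {
        n->next = chain_head;
        chain_head = n;
    }

    /* addToTail-style: this node now runs after everything else. */
    void node_add_to_tail(node_t *n)
    {
        node_t **p = &chain_head;
        while (*p)
            p = &(*p)->next;
        n->next = NULL;
        *p = n;
    }

    /* One audio block: the user's creation/ordering calls fully
       determine execution order, so the graph stays predictable. */
    void run_block(int blocksize)
    {
        for (node_t *n = chain_head; n; n = n->next)
            n->perform(n, blocksize);
    }

In Pd, by contrast, execution order falls out of sorting the drawn signal connections, so there's no comparably simple, user-visible ordering handle.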
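And for point 3, here's the shape of a lock-free single-producer/single-consumer FIFO, the general technique behind FIFO communication like [pd~]'s (the code itself is a sketch with invented names, not Pd's). Because exactly one thread writes and exactly one thread reads, a pair of atomic counters is enough, and no locks come anywhere near the audio thread:

    #include <stdatomic.h>
    #include <stdbool.h>

    #define FIFO_SIZE 4096             /* must be a power of two */

    typedef struct {
        float buf[FIFO_SIZE];
        _Atomic unsigned head;         /* advanced only by the producer */
        _Atomic unsigned tail;         /* advanced only by the consumer */
    } spsc_fifo_t;

    /* Producer side (e.g. the subprocess writing a block of output). */
    bool fifo_push(spsc_fifo_t *f, float x)
    {
        unsigned h = atomic_load_explicit(&f->head, memory_order_relaxed);
        unsigned t = atomic_load_explicit(&f->tail, memory_order_acquire);
        if (h - t == FIFO_SIZE)
            return false;              /* full: wait for the next block */
        f->buf[h & (FIFO_SIZE - 1)] = x;
        atomic_store_explicit(&f->head, h + 1, memory_order_release);
        return true;
    }

    /* Consumer side (e.g. the parent patch's audio thread). */
    bool fifo_pop(spsc_fifo_t *f, float *out)
    {
        unsigned t = atomic_load_explicit(&f->tail, memory_order_relaxed);
        unsigned h = atomic_load_explicit(&f->head, memory_order_acquire);
        if (h == t)
            return false;              /* empty: underrun for this block */
        *out = f->buf[t & (FIFO_SIZE - 1)];
        atomic_store_explicit(&f->tail, t + 1, memory_order_release);
        return true;
    }

Moving whole blocks at a time (samples along with that block's messages) is what keeps the two processes deterministic and in sync, at the cost of some added latency.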
I'm sure there's more.