On Friday, October 14, 2011 2:48 PM, "katja" katjavetter@gmail.com wrote:
On Fri, Oct 14, 2011 at 5:27 AM, Hans-Christoph Steiner hans@at.or.at wrote:
On Oct 13, 2011, at 8:37 PM, katja wrote:
Indeed, I stumbled upon that extra 'extra' when trying to find the puzzling cause of errors. They are not in sync: externals/extra has the old code, which is why it fails to build. The extras in pd-double.git are double-ready.
Hmm, are you sure? I just re-copied the files from pure-data.git to externals/extra and svn tells me there is no difference. They do have different build systems, but the .pd, .c, and .h files are, I think, all the same.
Yes, externals/extra is the same as the extras in pure-data.git, with all the plain floats still there instead of t_float. If externals/extra can be synced with the extras in pd-double.git, we're fine.
I've added a daily type-punning log that just grabs the gcc warnings from the build logs and consolidates them; it's posted here every day: http://autobuild.puredata.info/auto-build/2011-10-13/logs/type-punning.log
Very useful. It's quite a list, but you can be sure it's not complete!
When looking at the logs, I also found that today's macosx106-x86_64 pd-double autobuild failed. That was to be expected: because of the precision-to-bitness coupling in m_pd.h, it was now building the externals in double precision for the first time (the earlier builds all had single-precision externals). gcc exited with an error when compiling creb, just as I've seen on my own computer.
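
For context, here is a minimal sketch of what such a precision-to-bitness coupling in m_pd.h could look like; this is an illustration only, not the actual pd-double source:

#ifdef __LP64__                 /* 64-bit build: use double precision */
typedef double t_float;
#else                           /* 32-bit build: keep single precision */
typedef float t_float;
#endif
typedef t_float t_sample;

With a definition along these lines, a 64-bit builder such as macosx106-x86_64 automatically ends up compiling the externals at double precision.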
The point is, I have no clue how to create a dev setup for even the simplest commit. I was mimicking the nightly build on my computer just to see where the problems are, which libs do not compile, etc. But that's not a useful route for development. On the other hand, in a local pd-svn checkout, externals can only be built against pd-double when the old pd is overwritten with pd-double.git. But then it's no longer a working copy of pure-data SVN, and it doesn't allow updating or committing. And from within the pd-double git directory, I could not get an external lib to build with the symlink method described on puredata.info.
Just replacing the pd/ folder with pd-double.git checked out as pd/ does not affect the rest of the code in the SVN tree; it is all still fully functional SVN. That's a big difference between how git and SVN work, and it's one advantage of SVN: it allows you to create whack setups like this and still have everything functional ;). This is how I've been working on Pd-extended for years.
That said, you can still treat most libraries as standalone units, certainly any lib that is built on the Library Template. So if you prefer, you can work that way. You will probably then need to set the PD_INCLUDE variable to point to your pd-double.git/src, something like:
make PD_INCLUDE=-I$HOME/code/pd-double.git/src
You can also build libraries individually using the Pd-extended build system:
cd pd-double/externals/
make DESTDIR=/tmp creb_clean creb creb_install
Pd-double-extended has a very weird status: partly it's a branch or fork, partly it's a form of bug fixing. So you can commit untested changes which are safe, like changing float into t_float, and then check in the nightly build whether it worked out well with pd-double. But some modifications, for example the replacement of Hoeldrich-style routines and other type-punning stuff, must be tested thoroughly for functionality, robustness and performance before committing.

For lack of a formal branch where all the double-ready code is unified, this work can only be done locally, with a dev setup yet to be figured out. In any case, such substantial rewrites can be submitted to the patch tracker, so others can test them before they're committed. It won't concern too many cases: for example, in the core, fewer than ten classes out of a total of ~200 needed a fundamental redefinition to make them double-ready. (Even so, I've spent over a month on that, due to unfamiliarity with Pd's core code.)
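
To make the distinction concrete, here is a minimal sketch of the two kinds of change being discussed; this is my own illustration, not code from pure-data.git or pd-double.git, and the function names are made up:

#include <stdint.h>
#include <string.h>
#include "m_pd.h"   /* t_int, t_float, t_sample */

/* The simple, safe kind of change: a perform routine that declared
   'float' locals just switches to t_sample / t_float, so it compiles
   correctly at either precision. */
static t_int *gain_tilde_perform(t_int *w)
{
    t_sample *in  = (t_sample *)(w[1]);
    t_sample *out = (t_sample *)(w[2]);
    int n = (int)(w[3]);
    while (n--)
    {
        t_sample f = *in++;     /* was: float f = *in++; */
        *out++ = f * 0.5;
    }
    return (w + 4);
}

/* The hard kind of change: code that reads the bit pattern of a
   32-bit float, e.g. for fast wraparound tricks.  Replacing the
   pointer cast *(int32_t *)&f with a memcpy silences the
   type-punning warning, but the logic still assumes a 4-byte
   sample, so for a double t_float the routine has to be rethought
   and then retested for correctness and performance. */
static int32_t float_bits_32(float f)
{
    int32_t i;
    memcpy(&i, &f, sizeof i);   /* well-defined alternative to the
                                   aliasing-violating pointer cast */
    return i;
}

Changes of the first kind could go straight into SVN; anything like the second kind would go through the patch tracker, as proposed below.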
Would this be an idea?
- simple changes like float -> t_float: untested, committed directly via SVN
- substantial function redefinitions: submitted via the patch tracker
In this way, the fact that one can't build and test for pd-double in pd-svn is less of a problem. Then we can think about a dev setup from which you'd only produce patch files, and not commit to SVN. That should give more options for how to organize it. At the same time, it gives some guarantee that redefinitions with possible performance or functionality consequences are not committed too hastily.
That works for me, as long as you ask if it's OK before you commit to a certain library. I don't think you need to ask per file; per-library is probably fine-grained enough. To start with, while I'm not the original author, I am the maintainer of many libs in pure-data SVN. Here are some you can commit to directly now for the pd-double project (other features should be discussed beforehand):
bassemu~ boids cxc creb ext13 freeverb~ ggee hcs markex maxlib mjlib moonlib motex pdogg plugin~ sigpack smlib windowing
It sounds like you have a good process already for working on this: code, test, commit, test. It doesn't need to be perfect before committing, as long as you test before committing and follow up on any issues. This is also a great opportunity to write some automated test patches. I've been working on the infrastructure for running an automated nightly test suite. The first part is working: it's the 'load every help' log that gets posted on the auto-build site each day. The next step is to start writing Pd patches that test Pd itself; then I can start plugging those in.
I'm thinking it'll automatically run any patch called *-regressiontest.pd, then check if that patch outputs 'SUCCESS' within a timeout period.
.hc