Ok, too bad it didn't happen. But it's still something that would be cool to have, so here are some ideas in case this ever ends up on someone's project list.
I suppose one of the challenges isn't so much which tests to write, or even writing the tests, but building an infrastructure that makes it easy to add new tests.
I guess it's not so much infrastructure - each test probably has a name, a description, some parameters and a run function. It probably also needs access to the Irrlicht device. And maybe some functions for output (for example OnStart, OnUpdate, OnFinish, which could be overloaded). Not sure about results - maybe a common result format is needed. At least once the tests have run, we should have all the information in a format that allows comparing the results.
I suspect tests can have parameters. I would propose using the Irrlicht attribute system for passing those, as it makes it easy to read the parameters from XML and maybe also to write them out together with the results (and it's also flexible enough to allow any kind of parameter).
Probably we also need a bunch of common tool functions, most certainly ones to measure time. I've already done some speed tests in the past, maybe those can give some basic ideas: http://www.michaelzeilfelder.de/irrlich ... IrrStl.zip
(the strange defines in there are mostly for comparing two kinds of code while reducing effects like caching, but the Profiler itself might be useful). Also our "tests" folder in Irrlicht may already contain some useful code for this.
And then some application main-screen with buttons for run-tests, watch-last-results, maybe compare-to-other-tests, quit .... anything missing?
Also important is collecting as much information as possible about the environment the tests run in. Fortunately Irrlicht already has most of that information (in IrrlichtDevice and IOSOperator):
- version of the engine (it would be interesting to even have the svn revision, if available).
- operating system
All test results should be written to a file. Maybe together with the parameters used for the tests, as those might change over time.
I would again propose using the Irrlicht attribute system for that - and writing the results out as XML. That allows easy further processing with other tools.
The result file should probably have the date as part of the filename, so each run produces a new one.
Some functions to compare test results would be cool (but that can be done in a second step, as comparing the XML files by hand works for a start).