I finally have the changes to the regression scripts sufficiently complete that I've managed to do a few minimal cycles of regression testing from the git repos. I now need some help in various ways:

1. General testing..

I need some of the regular testers to do runs with their setups using the new scripts. To test, I suggest starting from a fresh testing location and following the usual instructions, except that you get the run.py script from <https://raw.github.com/boostorg/boost/develop/tools/regression/src/run.py>. The options are exactly the same as in the SVN version of the scripts (the --tag is mapped to master or develop as needed; see the example invocation after this message). If you run patch scripts you will have to adjust paths for the new arrangement; in particular, the boost tree is now checked out to <testing-root>/boost_root.

2. Non-nix testing..

I don't have the time or resources available to check whether the scripts and git invocations work on Windows. I need one person to test, and likely make changes to, the regression script so that it does the right things on Windows. Testing should be the same as above, but making changes locally and testing them is challenging. The procedure I use is (as shell commands):
mkdir testing-dev
cd testing-dev
ln -s <BOOST_DEVELOP>/tools/regression/src/run.py run.py
ln -s <BOOST_DEVELOP>/tools/regression/xsl_reports xsl_reports
ln -s <BOOST_DEVELOP>/tools/regression/src tools_regression_src
./run.py --runner=<your-runner-id> --skip-script-download --use-local
That lets you edit the regression.py and run.py scripts in your local develop checkout. The options make it skip most of the run.py code, but otherwise behave like a regular testing run, except that your local changes are used. If you need to test changes to run.py in the normal use case you will have to commit and push the changes and then test.

3. Report generation..

It seems that the reports are failing to get generated for some reason. I suspect that the new scripts' use of git commit SHAs as the tested version number is messing up the results generation. I need someone familiar with the results generation to fix this.

--
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org - grafik/redshift-software.com
-- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo
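[For reference, a minimal sketch of the sort of invocation point 1 describes, using only the options mentioned in this thread; <your-runner-id> is a placeholder for the tester's registered runner name, and any other options you normally pass are unchanged from the SVN-era instructions and omitted here.]

# Hypothetical example invocation; placeholder shown in angle brackets
python run.py --runner=<your-runner-id> --tag=develop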
On Sun, Dec 29, 2013 at 2:41 PM, Rene Rivera wrote:
PS. Eventually..

4. Performance..

I will need help from git experts to optimize the git checkouts the scripts are currently doing, as it takes about 5 minutes just to do an update on the boost tree.

--
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org - grafik/redshift-software.com
-- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo
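[As a starting point for item 4, a hedged sketch of generic git techniques that can shrink clone and update times via shallow, single-branch fetches; this is not what the scripts currently do, the super-project URL is shown only for illustration, and flag availability depends on the git version installed on the test machine.]

# Shallow, single-branch clone of the Boost super-project
git clone --depth 1 --branch develop https://github.com/boostorg/boost.git boost_root
cd boost_root
# Bring in the library submodules at their recorded commits
# (newer git versions also accept --depth 1 here to keep them shallow)
git submodule update --init
# Subsequent updates: fetch only the branch tip and move the working tree to it
git fetch --depth 1 origin develop
git checkout -B develop FETCH_HEAD
git submodule update --init

[Whether this actually beats the current full checkout would need measuring on a real test machine.]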
On Sun, Dec 29, 2013 at 3:41 PM, Rene Rivera wrote:
I finally have the changes to the regression scripts sufficiently complete that I've managed to do a few minimal cycles of regression testing from the git repos.
Great!

2. Non-nix testing..

I don't have the time or resources available to check if the scripts and git invocations work on Windows. I need one person to test and likely make changes to the regression script to do the right things on Windows. Testing should be the same as above.

I'll give it a try, but others should try too, particularly those who usually run Windows tests.

Would it be helpful if we temporarily commented out most test-suites in status until the scripts are working?

--Beman
On December 29, 2013 02:41:54 PM Rene Rivera wrote:
I need some of the regular testers to do runs with their setups using the new scripts. To test, I suggest starting from a fresh testing location and following the usual instructions, except that you get the run.py script from <https://raw.github.com/boostorg/boost/develop/tools/regression/src/run.py>. The options are exactly the same as in the SVN version of the scripts (the --tag is mapped to master or develop as needed). If you run patch scripts you will have to adjust paths for the new arrangement; in particular, the boost tree is now checked out to <testing-root>/boost_root.
Ran fine for me (tester "Debian-Sid"). It even claims to upload the results. -Steve
On Sun, Dec 29, 2013 at 2:41 PM, Rene Rivera wrote:
3. Report generation..
It seems that the reports are failing to get generated for some reason. I suspect that the new scripts' use of git commit SHAs as the tested version number is messing up the results generation. I need someone familiar with the results generation to fix this.
Is anyone out there willing to investigate this?

--
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org - grafik/redshift-software.com
-- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo
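[For whoever picks this up, a hedged sketch of how a friendlier tested-version string could be derived from the checkout instead of a raw 40-character SHA; the assumption that the report tools expect a short tag-like value such as develop or master is mine, not something confirmed in this thread.]

cd <testing-root>/boost_root
# Branch name, e.g. develop or master, as the human-readable "tested version"
git rev-parse --abbrev-ref HEAD
# Abbreviated commit SHA, kept alongside for traceability
git rev-parse --short HEAD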
On Tue, Dec 31, 2013 at 10:39 AM, Rene Rivera wrote:
On Sun, Dec 29, 2013 at 2:41 PM, Rene Rivera wrote:

3. Report generation..
It seems that the reports are failing to get generated for some reason. I suspect that the new scripts' use of git commit SHAs as the tested version number is messing up the results generation. I need someone familiar with the results generation to fix this.
Is anyone out there willing to investigate this?
The error message is:

Error extracting file: The specified zipfile was not found

Too bad it didn't say what file it was looking for :-(

I looked at the ftp directory, and there was one suspicious-looking file: 3.12.5-302.fc20.x86_64. That was it; no .zip. I deleted it to see if that might work around the problem.

Does anyone know how often the trunk reports are generated?

--Beman
On 31 December 2013 16:53, Beman Dawes wrote:
The error message is: Error extracting file: The specified zipfile was not found
Too bad it didn't say what file it was looking for:-(
A file on the web server (/home/grafik/www.boost.org/testing/live/trunk.zip). The error message isn't correct - the file exists but is empty.
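[A quick hedged way to confirm that from a shell on the server, using the path Daniel gives above; these commands only report the problem, they don't fix the generation step.]

# Check that the results archive exists, is non-empty, and passes a zip integrity test
f=/home/grafik/www.boost.org/testing/live/trunk.zip
test -s "$f" || echo "$f is missing or empty"
unzip -tq "$f" || echo "$f is not a valid zip archive"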
participants (4)

- Beman Dawes
- Daniel James
- Rene Rivera
- Steve M. Robbins