Hello all,
The Debug Buildslave will be down for maintenance throughout the next week. We hope to have it fully working again by next weekend.
In the process of this maintenance, the OS will be reinstalled to match the one we already use on our other servers. In addition, more of our administrators will be given access to the machine, so that future problems can be debugged more easily.
- Colin
Hi,
What about fixing the release buildbot first? And is there a chance, maybe in the future, of switching buildbots more easily, so that we have a backup solution running? One week without debug builds is a serious thing.
Timo
Inlined.
Timo Kreuzer <timo.kreuzer@web.de> wrote on Sat, August 20th, 2011, 10:25 AM:
Hi,
Hi
What about fixing the release buildbot first?
AFAIK, this is not doable yet.
And is there a chance, maybe in the future, of switching buildbots more easily, so that we have a backup solution running? One week without debug builds is a serious thing.
Given the new way the debug build slave will be administered, losing a whole week of debug builds will be impossible: simple configuration, a monitored host & services, and several administrators.
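To give an idea of what I mean by monitoring (the hostname and ports below are placeholders, not our actual infrastructure; in practice a dedicated tool such as Nagios would run such checks), a trivial availability probe could look like this:

import socket

# Placeholder host and ports -- not the project's real infrastructure.
CHECKS = [
    ("buildslave.example.org", 22),    # SSH access for the administrators
    ("buildslave.example.org", 9989),  # conventional Buildbot master/slave port
]

def is_reachable(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        sock = socket.create_connection((host, port), timeout)
        sock.close()
        return True
    except socket.error:
        return False

for host, port in CHECKS:
    status = "OK" if is_reachable(host, port) else "DOWN"
    print("%s:%d %s" % (host, port, status))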
Regards, Pierre
What about the Testslave? The lack of KVM is currently the worst problem.
Best regards
Timo Kreuzer <timo.kreuzer@web.de> wrote:
What about fixing the release buildbot first?
It's still a private machine owned and administered solely by Christoph. As long as I can't even reach him by phone, we can only wait.
And is there a chance, maybe in the future, of switching buildbots more easily, so that we have a backup solution running?
First of all, we hope to achieve the same reliability as on our other servers by reinstalling the server OS, giving more ReactOS admins access to the machine, and adding monitoring. Reinstalling the current OS had been planned for quite a long time, but until now, nobody with physical access to the machine had time to do it.
If such problems persist afterwards, we can try to set up a fallback system, but that would first require an equally configured and equally powerful Linux machine.
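Just to illustrate what a fallback could look like on the Buildbot side (a rough sketch against a Buildbot 0.8-style master.cfg; the slave names, passwords and SVN URL are placeholders, not our real setup), a builder can simply list several slaves and run on whichever one is available:

# master.cfg fragment (Buildbot 0.8-style); all names below are placeholders.
from buildbot.buildslave import BuildSlave
from buildbot.config import BuilderConfig
from buildbot.process.factory import BuildFactory
from buildbot.steps.source import SVN
from buildbot.steps.shell import Compile

c = BuildmasterConfig = {}

# Primary slave plus an equally configured fallback machine.
c['slaves'] = [
    BuildSlave("debug-slave", "secret1"),
    BuildSlave("debug-slave-fallback", "secret2"),
]
c['slavePortnum'] = 9989

f = BuildFactory()
f.addStep(SVN(svnurl="https://svn.example.org/reactos/trunk"))
f.addStep(Compile(command=["make"]))

# With several slavenames, the master picks whichever slave is attached,
# so losing one machine does not stop the debug builds.
c['builders'] = [
    BuilderConfig(name="Build Debug",
                  slavenames=["debug-slave", "debug-slave-fallback"],
                  factory=f),
]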
One week without debug builds is a serious thing.
I hope you're aware that builds are still being uploaded properly; it's only the testing step that fails.
- Colin
Hi,
Finally! Colin and I are pleased to announce that the ReactOS Linux KVM tests are back online and working. You can find the first test results (for r53383), sent tonight by the testbot, on testman: http://www.reactos.org/testman/.
Regards, Pierre.
@Eric
http://reactos.org/testman/compare.php?ids=7429,7579
http://reactos.org/testman/detail.php?id=2639952
As you can see, the crash is not CMake-related, and the CMake build is not inherently broken.
With best regards Caemyr
Hi Olaf,
I am already investigating the services test issues. One of my first findings is that the current widl seems to mess up value ranges in some cases. This is the cause of errors like the following:

err:(dll/win32/rpcrt4/ndr_marshall.c:6496) value exceeded bounds: 918, low: 0, high: 514
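For those not familiar with it: a [range(low, high)] attribute in the IDL makes the NDR marshaller reject any value outside those bounds, so if widl generates the wrong bounds, perfectly valid values get refused. Roughly like this (a Python model of the check, not the actual C code in ndr_marshall.c; the function name is made up):

# Python model of the NDR range validation, not the real rpcrt4 code.
def validate_ndr_range(value, low, high):
    if not (low <= value <= high):
        raise ValueError("value exceeded bounds: %d, low: %d, high: %d"
                         % (value, low, high))

# 918 falls outside the generated bounds [0, 514], matching the error above.
validate_ndr_range(918, 0, 514)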
<rant> It is pretty annoying that jgardou updated rpcrt4, widl and other components without proper testing. At least he should have posted a bug list BEFORE he committed the new stuff. </rant>
Regards Eric
On Aug 26, 2011, at 3:01 AM, Eric Kohl wrote:
<rant> It is pretty annoying that jgardou updated rpcrt4, widl and other components without proper testing. At least he should have posted a bug list BEFORE he committed the new stuff. </rant>
I also ranted about that. In fact, that's why I stopped syncing rpcrt4 some time ago: every new version gave problems, so I wanted to solve them first and only then commit.
But OK, as Olaf said: if something needs to be done, let's simply do it and fix everything that broke ;).
WBR, Aleksey.