The improvement is about readability / coding style. I think the
comment already explained the reason for it, but I can explain
again in more detail:
The old code created a random value for a page offset inside a 64k
region. The PEB was supposed to land inside that region, preferably
not below it.
Obviously, not all 16 of these possible 4k page locations are
available if we want to go neither above the upper margin nor below
the lower one, and still fit n pages between the two.
The old code "fixed" the random value to account for that
limitation by hardcoding a 2 (for 2 pages), making sure that there
are 2 pages available to put the PEB in.
The new code checks the size of the PEB instead of relying on the
hardcoded magic value of 2, which wasn't even explained anywhere.
So all I did here was remove a hack.
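As a rough sketch of what I mean (the names and the exact clamping
are illustrative, not the actual code):

```c
#include <stddef.h>

#define PAGE_SIZE         0x1000   /* 4k page */
#define ALLOC_GRANULARITY 0x10000  /* 64k region = 16 pages */

/* Pick a page-aligned offset inside the 64k region such that a PEB
 * of PebSize bytes still fits below the upper margin. The old code
 * hardcoded "2" pages here; this version derives the page count
 * from the actual PEB size instead. */
static unsigned long
RandomPebOffset(unsigned long RandomValue, size_t PebSize)
{
    unsigned long TotalPages = ALLOC_GRANULARITY / PAGE_SIZE; /* 16 */
    unsigned long PebPages =
        (unsigned long)((PebSize + PAGE_SIZE - 1) / PAGE_SIZE);

    /* Only (TotalPages - PebPages + 1) start pages keep the whole
     * PEB inside the region. */
    return (RandomValue % (TotalPages - PebPages + 1)) * PAGE_SIZE;
}
```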
The rest of the change affects the path we follow on failure.
Instead of trying to allocate the PEB at a constant, given address
and, if that fails, trying again from the top down, we use the
upper margin and try to allocate at the highest address below it.
So it cannot fail unless the whole address space is blocked. The
commit message might not have described this properly, but the
claim that there is no change in Windows behaviour still stands,
since there is no way to predict the address of the PEB anyway.
It's random, from the top down, and it stays that way. Under normal
circumstances only the NLS section would block the address range,
and in that case the allocation will go below it, so no change at
all. When it comes to cloned processes, things might be more
complicated, but then there is no chance to predict the location of
the PEB anyway; it could go anywhere, even below the 64k range.
Again, no change.
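The failure path can be modeled with a toy allocator to show why it
only fails when the whole range is blocked (slot indices and names
are made up for illustration, this is not the real allocation code):

```c
#define SLOTS 16

/* Toy model of the new failure path: instead of retrying at one
 * fixed address, walk top-down from the upper margin and take the
 * highest free slot. Space[i] != 0 means slot i is occupied. */
static int
AllocateTopDown(const int Space[SLOTS], int UpperMargin)
{
    for (int i = UpperMargin; i >= 0; i--)
        if (!Space[i])
            return i;  /* highest free slot at or below the margin */
    return -1;         /* only if the whole range is blocked */
}
```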
If you still have doubts, please let me know what exactly you
think could be wrong here, so I can address that accordingly.
Timo
PS: we are talking about randomized behavior here, which is done
for security reasons, so doing it differently, without breaking
assumptions that user mode applications can make, would most likely
be beneficial. When you write security software that has prevention
capabilities, you also change Windows behaviour. If that had a
negative effect on *legitimate* software running on the system, it
would be bad. If it doesn't have any negative effect, or only
affects "bad" software, it's good. If you can reasonably argue why
this change could possibly affect legitimate software in a negative
way, then I can write a test based on that.
On 11.10.2014 18:37, Alex Ionescu wrote:
> Where's the unit test proving your 'improved'
> algorithm matches XP SP2/SRV03 SP1?
_______________________________________________
Ros-dev mailing list
Ros-dev@reactos.org
http://www.reactos.org/mailman/listinfo/ros-dev