Hello,
I'd like to provide a small VS.NET project file with 5 different tests.
- 46ffffe9 indicates the function calculation is correct
- times are measured with QueryPerformanceCounter
- loop run count: 0x2ffffff
result orig function           46ffffe9  it took 1491052
result orig function inlined   46ffffe9  it took 1035547
result second proposal inlined 46ffffe9  it took 1244434
result optimized asm           46ffffe9  it took 1338367
result debug asm               46ffffe9  it took 8774815
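For reference, a minimal sketch of how such a loop can be timed with QueryPerformanceCounter; the names and the checksum convention are illustrative, not taken from the actual project:

#include <windows.h>
#include <stdio.h>

#define LOOP_COUNT 0x2ffffff

/* Illustrative harness only, not the project's code: times one candidate
   function over LOOP_COUNT iterations and prints an accumulated checksum
   so the compiler cannot optimize the calls away. */
static void TimeIt(const char *name, unsigned (*fn)(unsigned))
{
    LARGE_INTEGER start, stop;
    unsigned i, sum = 0;

    QueryPerformanceCounter(&start);
    for (i = 1; i <= LOOP_COUNT; i++)
        sum += fn(i);
    QueryPerformanceCounter(&stop);

    printf("result %s %08x it took %I64d\n",
           name, sum, (__int64)(stop.QuadPart - start.QuadPart));
}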
The second proposal is the original proposal but with more shifts - still slower, though.
The inlined version generated by MSVC is interesting, shaving almost a third off the overall time. It also appears as "optimized asm" - no guarantee on register safety, though ;)
For portability's and performance's sake, a compiler macro should be considered. This function is so small that any optimisation inside it is outweighed by the calling overhead. The most impressive result is the original function inlined, although the ASM would only work on x86.
Please do not think about using 64k tables - that's what, half of a Sempron's L2 cache? It would really trash performance.
Available at: http://hackersquest.org/kerneltest.html (Kernel Test)
<a href="http://wohngebaeudeversicherung.einsurance.de/">http://wohngebaeudeversicherung.einsurance.de</a>
Hi Ash,
Thanks a lot for your tests. I don't have much time tonight, but if you can, could you add two more tests?
One using "bsr", the intel opcode. It does all the work for you and returns the index.
I think it's as simple as "bsr eax, ecx", where ecx is the mask and eax receives the index - in Intel syntax the destination comes first.
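For concreteness, a hypothetical MSVC inline-assembly version of this test could look like the following (not from the project; note that BSR leaves the destination undefined when the source is zero):

/* Hypothetical sketch: index of the highest set bit via BSR (MSVC inline asm).
   The result is undefined if Mask == 0, so the caller must handle that case. */
unsigned long HighBitBsr(unsigned long Mask)
{
    unsigned long Index;
    __asm {
        bsr eax, Mask    // eax = index of the highest set bit in Mask
        mov Index, eax
    }
    return Index;
}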
The second test uses a 256-byte log2 table:
const char LogTable256[] = {
    0, 0, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3,
    4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
    5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
    5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
    6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
    6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
    6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
    6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
    7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
    7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
    7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
    7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
    7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
    7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
    7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
    7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7
};
and lookup code like this:
Addon = 16;
if ((IntMask = Mask >> 16)) { Addon = 0; IntMask = Mask; }
if (IntMask && 0xFFFFFF00) { Addon += 8; }
HighBit = LogTable256[(Mask >> Addon)] + Addon;
Alex Ionescu wrote:
Addon = 16;
if ((IntMask = Mask >> 16)) { Addon = 0; IntMask = Mask; }
if (IntMask && 0xFFFFFF00) { Addon += 8; }
HighBit = LogTable256[(Mask >> Addon)] + Addon;
methinks there's bugs there, use this instead:
int highest_bit_tabled(unsigned int i)
{
    int ret = 0;
    if (i > 0xffff) i >>= 16, ret = 16;
    if (i > 0xff)   i >>= 8,  ret += 8;
    return ret + LogTable256[i];
}
also, FWIW, I've tried the following three tests:
( i > 0xffff )
( i & 0xffff0000 )
( i >> 16 )
and the first is the fastest on my A64.
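A quick sanity check of the corrected function (illustrative only, assuming the LogTable256 and highest_bit_tabled definitions above):

#include <assert.h>

int main(void)
{
    assert(highest_bit_tabled(1) == 0);            /* bit 0 */
    assert(highest_bit_tabled(0x80) == 7);         /* bit 7 */
    assert(highest_bit_tabled(0x10000) == 16);     /* bit 16 */
    assert(highest_bit_tabled(0x46ffffe9) == 30);  /* bit 30 is the highest */
    return 0;
}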
BSR has a latency of 8-12 cycles on Athlon/P3 but can be pipelined. It is worse (up to ~80 cycles) on the Pentium and other older CPUs. http://www.amd.com/us-en/Processors/TechnicalResources/0,,30_182_739_3748,00...
Maximum latency on a Pentium 4 is eight clock cycles. But its throughput is one, which means it is fully pipelined. So when you start this instruction eight clock cycles before you need the result, it behaves as if it only takes one clock cycle. You're not going to be able to beat that with any other code. The closest you can get is to convert the integer to float, then extract the exponent. That could be done in less than eight clock cycles but throughput will be lower.
http://www.flipcode.com/cgi-bin/fcmsg.cgi?thread_show=16986&msg=113105
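The convert-to-float trick mentioned there can be sketched roughly like this (hypothetical code, assuming IEEE-754 single precision; only reliable for nonzero inputs below 2^24, since float rounding can bump the exponent for larger values):

#include <string.h>

/* Hypothetical sketch of the float-exponent trick: the biased exponent
   field of (float)i encodes floor(log2(i)). Only exact for
   0 < i < 2^24; above that, round-to-nearest can overshoot by one. */
unsigned highest_bit_float(unsigned i)
{
    float f = (float)i;
    unsigned bits;
    memcpy(&bits, &f, sizeof(bits));  /* type-pun without aliasing issues */
    return (bits >> 23) - 127;        /* remove the IEEE-754 exponent bias */
}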
Don't know about the A64 - maybe someone can test BSR on an A64? I'm very disappointed by its performance on my K7 system - though I might have messed something up. You could save on the pushing/popping, but that would be kind of cheating, and if it's done dirty/lazy I won't get the right returns either.
It doesn't make much sense to put the optimized ASM in there, nor is there much hope of GCC having a good day and doing a lot of optimisation. So far the best option would be the macro with a lookup table (only one global kernel table, though).
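The "macro" variant itself is only in the linked project; as a rough idea, a lookup-based macro might look something like this hypothetical sketch (reusing LogTable256 from earlier; Mask is evaluated several times, and Mask == 0 yields 0):

/* Hypothetical sketch, not the project's actual macro. */
#define HIGHEST_BIT(Mask)                                        \
    ((Mask) > 0xffff                                             \
        ? ((Mask) > 0xffffff ? 24 + LogTable256[(Mask) >> 24]    \
                             : 16 + LogTable256[(Mask) >> 16])   \
        : ((Mask) > 0xff     ?  8 + LogTable256[(Mask) >>  8]    \
                             :      LogTable256[(Mask)]))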
Here are the updated stats, also available at http://hackersquest.org/kerneltest.html:
result orig function           46ffffe9  it took 1526862  18%
result orig function inlined   46ffffe9  it took 1041460  12%
result second proposal inlined 46ffffe9  it took 1248990  15%
result optimized asm           46ffffe9  it took 1321532  16%
result lookup inlined          46ffffe9  it took 682264    8%
result bsr inlined             46ffffe9  it took 1751088  21%
result macro                   46ffffe9  it took 653692    7%
Ash wrote:
It doesn't make much sense to put the optimized ASM in there, nor is there much hope of GCC having a good day and doing a lot of optimisation. So far the best option would be the macro with a lookup table (only one global kernel table, though).
What about using inline assembler in a macro - only as a platform-specific optimization, of course?
result bsr inlined 46ffffe9 it took 1751088 21%
Using GCC, this should be much faster with the following:
#define get_bits(value) \
({ \
    int bits = -1; \
    __asm("bsr %1, %0\n" \
        : "+r" (bits) \
        : "rm" (value)); \
    bits; \
})
This macro returns -1 when no bits are set. I tested it and it works as expected. If -1 as the "error" value isn't suitable, you might want to change it to 0 ... line 3 of the macro.
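A small illustrative use (0x46ffffe9 is the thread's checksum constant; its highest set bit is bit 30):

int idx = get_bits(0x46ffffe9);  /* idx == 30: bit 30 (0x40000000) is the highest set bit */
int err = get_bits(0);           /* err == -1: no bits set, bits keeps its initial value */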
result macro 46ffffe9 it took 653692 7%
The table approach doesn't seem to be bad either <g>.
Regards, Mark
Ash wrote:
BSR has a latency of 8-12 cycles on Athlon/P3 but can be pipelined. It is worse (up to ~80 cycles) on the Pentium and other older CPUs. http://www.amd.com/us-en/Processors/TechnicalResources/0,,30_182_739_3748,00...
My tests have shown that you're right and BSR is much too slow.
Don't know about the A64 - maybe someone can test BSR on an A64?
I have an AMD64 here, but it doesn't run in 64-bit mode.
It doesn't make much sense to put the optimized ASM in there, nor is there much hope of GCC having a good day and doing a lot of optimisation. So far the best option would be the macro with a lookup table (only one global kernel table, though).
I've converted your sources to be compilable with GCC (MinGW). The sources are attached.
Here are the updated stats, also available at http://hackersquest.org/kerneltest.html:
result orig function           46ffffe9  it took 1526862  18%
result orig function inlined   46ffffe9  it took 1041460  12%
result second proposal inlined 46ffffe9  it took 1248990  15%
result optimized asm           46ffffe9  it took 1321532  16%
result lookup inlined          46ffffe9  it took 682264    8%
result bsr inlined             46ffffe9  it took 1751088  21%
result macro                   46ffffe9  it took 653692    7%
These are my results on the AMD64, using your release EXE:
STATS
result orig function           46ffffe9  it took 1272638  18%
result orig function inlined   46ffffe9  it took 875751   12%
result second proposal inlined 46ffffe9  it took 1051861  15%
result optimized asm           46ffffe9  it took 1225282  17%
result lookup inlined          46ffffe9  it took 549861    7%
result bsr inlined             46ffffe9  it took 1410179  20%
result macro                   46ffffe9  it took 607638    8%
These are my results using the GCC EXE (-O2):
STATS
result orig function           46ffffe9  it took 1321663  24%
result orig function inlined   46ffffe9  it took 879318   16%
result second proposal inlined 46ffffe9  it took 940285   17%
result lookup inlined          46ffffe9  it took 615267   11%
result bsr inlined             46ffffe9  it took 1103432  20%
result macro                   46ffffe9  it took 484450    9%
BTW: I had to remove all functions using the __asm() statement. The "result bsr inlined" entry uses my GCC BSR macro. You can see that using BSR seems to be much too slow ...
Regards, Mark
Testing on an Athlon 1.2 GHz and a K6 233 MHz, both on Windows 2000:

T:\cvs_cd>gcc --version
gcc (GCC) 3.4.2 (mingw-special)
Copyright (C) 2004 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Athlon 1.2 GHz:

T:\cvs_cd>speedtest
STATS
result orig function           46ffffe9  it took 944301  23%
result orig function inlined   46ffffe9  it took 697547  17%
result second proposal inlined 46ffffe9  it took 774974  19%
result lookup inlined          46ffffe9  it took 603607  15%
result bsr inlined             46ffffe9  it took 656956  16%
result macro                   46ffffe9  it took 336330   8%
K6 233 MHz:

C:\>speedtest
STATS
result orig function           46ffffe9  it took 5819845  23%
result orig function inlined   46ffffe9  it took 3533468  14%
result second proposal inlined 46ffffe9  it took 4043743  16%
result lookup inlined          46ffffe9  it took 2290520   9%
result bsr inlined             46ffffe9  it took 5961779  24%
result macro                   46ffffe9  it took 3001376  12%
Ash wrote:
BSR has a latency of 8-12 cycles on Athlon/P3 but can be pipelined. It is worse (up to ~80 cycles) on the Pentium and other older CPUs.
^^^ I think this might be our "silver bullet".
I don't want to waste 256 bytes of L1 cache (assuming we even get a cache hit), or spend hundreds of cycles once per interrupt waiting for a cache-miss lookup to go through, so the table-based approach is bad in this scenario.
In some more testing, BSR is never significantly faster than the C code, but in many scenarios it is equivalent in speed. However, the C code trashes too many registers to run in parallel with anything else.
All that being said, I think BSR's ability to be pipelined will make it our big winner after all. I will work on verifying whether BSR pipelines well on AMD products, too.