Analyzing Geekbench 6 under Intel's BOT
by hajile on 4/1/2026, 3:27:38 AM
https://www.geekbench.com/blog/2026/03/analyzing-geekbench-6-under-intels-bot/
Comments
by: userbinator
<i>This suggests the checksum is used to identify whether the binary is known to BOT, and thus whether BOT can optimize the binary.</i><p>I do wonder what this "optimize" step actually entails; does it just replace the binary with one that Intel themselves carefully decompiled and then hand-optimised? If it's a general "decompile-analyse-optimise-recompile" pass (perhaps something similar to what the <a href="https://en.wikipedia.org/wiki/Transmeta_Crusoe" rel="nofollow">Transmeta Crusoe</a> did), why restrict it to known binaries?
4/1/2026, 4:43:27 AM
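[Editor's note] The checksum-based identification step described in the quoted post could work roughly like the sketch below. The hash choice, the allowlist table, and every name here are assumptions for illustration only; Intel has not documented BOT's internals.

```python
import hashlib

# Hypothetical allowlist mapping binary checksums to optimization profiles.
# BOT's real lookup mechanism and hash algorithm are undocumented.
KNOWN_BINARIES = {
    "0f1e2d3c...": "geekbench6-profile",  # placeholder digest
}

def checksum(path: str) -> str:
    """Return the SHA-256 hex digest of a binary on disk, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def lookup_profile(path: str):
    """Return an optimization profile if the binary is known, else None."""
    return KNOWN_BINARIES.get(checksum(path))
```

An unknown binary falls through to `None`, i.e. no special treatment, which matches the behaviour the article infers.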
by: boomanaiden154
Post-link optimization (PLO) tools have been around for quite a while. In particular, Meta’s BOLT (fully upstream in LLVM) and Google’s Propeller (partially upstream in LLVM, but fully open source) have been available for 5+ years at this point.<p>Intel’s BOT doesn’t appear to deliver larger performance gains than these tools, and it is closed source.
4/1/2026, 4:40:55 AM
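[Editor's note] For readers unfamiliar with PLO, the open-source BOLT workflow mentioned above looks roughly like the commands below. The binary name and benchmark input are placeholders, and flag spellings vary between BOLT versions; check the LLVM BOLT documentation before copying.

```shell
# 1. Collect a branch profile with Linux perf (LBR sampling, user space only).
perf record -e cycles:u -j any,u -o perf.data -- ./myapp benchmark-input

# 2. Convert the perf profile into BOLT's profile format.
perf2bolt -p perf.data -o perf.fdata ./myapp

# 3. Rewrite the binary: reorder basic blocks and functions using the profile.
llvm-bolt ./myapp -o ./myapp.bolt -data=perf.fdata \
    -reorder-blocks=ext-tsp -reorder-functions=hfsort -split-functions
```

The key difference from BOT is that this pipeline is profile-driven and applies to any binary you feed it, rather than activating only for checksummed, pre-approved executables.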
by: tyushk
Quack3.exe again, in a way. If this has been done for years on GPU shaders, why not on CPU code?
4/1/2026, 4:51:56 AM
by: refulgentis
> BOT optimizations are poorly documented, aggressive in scope, and damage comparability with other CPUs. For example, BOT allows Intel processors to run vector instructions while other processors continue to run scalar instructions. This provides an unfair advantage to Intel<p>Wait until they hear about branch predictors.
4/1/2026, 4:21:55 AM