Good article, but for profiling you are missing the big daddy of them all, Tracy.
To the list of profiling tools I would like to add KDAB Hotspot and KDE Heaptrack.
The former, hotspot, is a visualiser for perf data, and it deals OK with truly massive files that made perfetto and similar just bog down. It also supports visualising off-CPU profiles ("why is my program slow but not CPU bound?").
The latter, heaptrack, is a tool with a very similar UI to hotspot (I think the two tools even share some code) for profiling malloc/free (or new/delete). Sometimes the performance issue is as simple as not reusing a buffer but reallocating it over and over inside a loop. And sometimes you wonder where all the memory is going.
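The buffer case looks something like this minimal C# sketch (names and sizes made up; in native code it'd be a malloc or new inside the loop, which heaptrack makes immediately obvious):

    using System;
    using System.IO;

    class BufferReuseDemo
    {
        const int ChunkSize = 64 * 1024;

        static long SumBytes(Stream stream)
        {
            long total = 0;
            // Allocate the scratch buffer once, outside the loop...
            byte[] buffer = new byte[ChunkSize];
            int read;
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                // ...instead of "var buffer = new byte[ChunkSize];" here,
                // which allocates a fresh buffer every single iteration.
                for (int i = 0; i < read; i++) total += buffer[i];
            }
            return total;
        }

        static void Main()
        {
            using var ms = new MemoryStream(new byte[256 * 1024]);
            Console.WriteLine(SumBytes(ms));
        }
    }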
Great article. Can confirm, writing performance-focused C# is fun. It's great having the convenience of async, LINQ, and GC for writing non-hot-path "control plane" code, then pulling out Vector<T>, Span<T>, and so on for the hot path.
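For a flavour of the hot-path side, a sketch of the usual vector-body-plus-scalar-tail shape (my own toy example, not from the article):

    using System;
    using System.Numerics;

    static class HotPath
    {
        // Sum a span: Vector<float> for the bulk, a scalar loop for the tail.
        public static float Sum(ReadOnlySpan<float> values)
        {
            var acc = Vector<float>.Zero;
            int i = 0;
            // Consume Vector<float>.Count lanes per iteration.
            for (; i <= values.Length - Vector<float>.Count; i += Vector<float>.Count)
                acc += new Vector<float>(values.Slice(i, Vector<float>.Count));

            float sum = Vector.Dot(acc, Vector<float>.One); // horizontal sum
            for (; i < values.Length; i++)                  // scalar remainder
                sum += values[i];
            return sum;
        }
    }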
One question: how portable are performance benefits from tweaks to memory alignment? Is this something where going beyond rough heuristics (sequential access = good, order-of-magnitude cache sizes, etc.) requires knowing exactly what platform you're targeting?
Author here. First of all, thanks for the compliment! It’s tough to get myself to write these days, so any motivation is appreciated.
And yes, once all the usual tricks have been exhausted, the next step is looking at the cache/cache line sizes of the exact CPU you’re targeting and dividing the workload into units that fit inside the (lowest level possible) cache, so it’s always hot. And if you’re into this stuff, then you’re probably aware of cache-oblivious algorithms[0] as well :)
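To make that concrete, the classic example is a blocked transpose, something like this sketch (the tile size is a placeholder you’d tune against the target’s L1):

    using System;

    static class Blocked
    {
        // Walk the matrix in B x B tiles so each tile of src and dst
        // stays resident in L1 while it's being touched. Two 64x64 float
        // tiles are 2 * 16 KB, which fits a typical 32 KB L1.
        public static void Transpose(float[] src, float[] dst, int n)
        {
            const int B = 64; // tile side; tune per target CPU
            for (int ii = 0; ii < n; ii += B)
                for (int jj = 0; jj < n; jj += B)
                    for (int i = ii; i < Math.Min(ii + B, n); i++)
                        for (int j = jj; j < Math.Min(jj + B, n); j++)
                            dst[j * n + i] = src[i * n + j];
        }
    }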
Personally, I’ve almost never had the need to go too far into platform-specific code (except SIMD, of course); doing all the stuff in the post gets you 99% of the way there.
And yeah, C# is criminally underrated; I might write a post comparing high-perf code in C++ and C# in the future.
[0]: https://en.wikipedia.org/wiki/Cache-oblivious_algorithm
One other trick I use reasonably often is a layout more complicated than plain AoS or SoA. Reasons vary (the false-sharing padding in your article is one example), but cache lines are another good one. You might, e.g., want an AoSoA structure to keep the SoA portion of things on a single cache line if you know you'll always need both data elements (the entire struct), want to pack as much data into a cache line as possible, and also want that data to be aligned.
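Roughly what I mean, sketched in C# (hypothetical names; fixed buffers need unsafe/AllowUnsafeBlocks, and true 64-byte alignment would additionally need an aligned allocation):

    using System;

    // 8 lanes * 4 bytes * 2 fields = 64 bytes: one block per cache line,
    // SoA inside the block, AoS across blocks.
    unsafe struct ParticleBlock
    {
        public const int Lanes = 8;
        public fixed float X[8]; // fixed buffers need a literal length,
        public fixed float Y[8]; // hence the repeated 8
    }

    static unsafe class AoSoADemo
    {
        // Iterate blocks like AoS, iterate lanes like SoA; both fields of
        // a lane arrive on the same cache line.
        public static void Integrate(ParticleBlock[] blocks, float dt)
        {
            foreach (ref var b in blocks.AsSpan())
                for (int l = 0; l < ParticleBlock.Lanes; l++)
                    b.X[l] += b.Y[l] * dt; // e.g. position += velocity * dt
        }
    }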
Great article by the way.
> worst case scenario being the flat profile where program time is roughly evenly distributed
It sounds like the “worst case” here is that the program is already optimized.
Author here, kinda sorta. I should've been a bit more specific than that. You can have a profile showing a function taking up 99% of the time, but when you dive into it, there's no clear bottleneck. But just because there's no bottleneck doesn't mean it's optimized; and vice versa, a well-optimized program can have a bottleneck that's already been cycle-squeezed to hell and back.
What I wanted to say was that a spiky profile provides a clear path to optimizing a piece of code, whereas a flat profile usually means there are more fundamental issues (inefficient memory management, pointer chasing all over the place, convoluted object system, etc.).
It sounds like a flat profile essentially is a local optimum, compared to cases where there's a path "upwards" along a hill to some place more optimal that doesn't require completely changing your strategy.
That's actually a good observation, yeah. It's often the case that you dig deeper and deeper and find some incomprehensible spaghetti and just say "fuck it, I'll just do what I can here, should be enough".
I've seen a few of these in my career, if I understand the author correctly. You have a big ball of mud that can theoretically be 10x or 100x faster, but the costs are diffuse and can't be solved by just finding a hotspot and optimizing it.
It often happens for good reasons. Features get added over time, there are some scars from a mocking framework, simpler (faster) solutions don't quite work because they're supporting X, which supports Y, which supports Z (dead code, but nobody noticed), people use full datetime handling when they mean to read a performance counter, the complexity of the thing blows your branch-prediction budget, and so on.
The solution is to deeply understand the problem (lots of techniques, but this comment isn't a blog post) and then act on that understanding, up to and including a ground-up rewrite of some or all of the offending section.
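The datetime one in particular is easy to show; in .NET terms, the fix is something like this sketch (Stopwatch reads the OS high-resolution counter):

    using System;
    using System.Diagnostics;

    class TimingDemo
    {
        static void Main()
        {
            // The scar: DateTime.UtcNow reads the wall clock, which can
            // jump, so it's the wrong tool for measuring intervals:
            //   var t0 = DateTime.UtcNow; ... var dt = DateTime.UtcNow - t0;

            // What "read a performance counter" usually means here:
            long start = Stopwatch.GetTimestamp();
            DoWork();
            double ms = (Stopwatch.GetTimestamp() - start) * 1000.0
                        / Stopwatch.Frequency;
            Console.WriteLine($"{ms:F3} ms");
        }

        static void DoWork() => System.Threading.Thread.Sleep(10);
    }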
Not necessarily. It could just be uniformly slow with no particular bottleneck.
This is a narrative commonly heard from profiler skeptics, but I've never seen a real example.