In this post we investigate how we can improve the performance of our memory-intensive codes by changing the memory layout of our performance-critical data structures.
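One common form of such a layout change is the array-of-structures to structure-of-arrays transformation. The sketch below only illustrates the idea; the type names and fields are invented for this example.

```cpp
#include <cstddef>
#include <vector>

// Array-of-structures: each particle's fields sit next to each other,
// so a loop that only reads x drags y, z and mass into the cache as well.
struct ParticleAoS {
    float x, y, z;
    float mass;
};

// Structure-of-arrays: each field lives in its own contiguous array,
// so a loop over x touches only the cache lines it actually needs.
struct ParticlesSoA {
    std::vector<float> x, y, z;
    std::vector<float> mass;
};

// Sums the x coordinates; with the SoA layout this streams through
// one dense array instead of skipping over unused fields.
float sum_x(const ParticlesSoA& p) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < p.x.size(); ++i) {
        sum += p.x[i];
    }
    return sum;
}
```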
In this post we introduce a few of the most common tools used for memory subsystem performance debugging.
We investigate the secret connection between class layout and software performance. And of course, how to make your software faster.
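To make the connection a bit more concrete: one way class layout affects performance is padding. Reordering members can shrink an object so that more of them fit into a cache line. A minimal sketch, with invented struct names and the typical sizes on a 64-bit platform noted in the comments:

```cpp
#include <cstdint>

// Members ordered carelessly: the compiler inserts padding after each
// small field to keep the 8-byte `id` and `price` properly aligned.
struct OrderPadded {
    std::uint8_t  flag;   // 1 byte + 7 bytes padding
    std::uint64_t id;     // 8 bytes
    std::uint16_t count;  // 2 bytes + 6 bytes padding
    double        price;  // 8 bytes
};                        // typically 32 bytes

// Same members, largest first: the small fields share one 8-byte slot.
struct OrderPacked {
    std::uint64_t id;     // 8 bytes
    double        price;  // 8 bytes
    std::uint16_t count;  // 2 bytes
    std::uint8_t  flag;   // 1 byte + 5 bytes padding
};                        // typically 24 bytes

static_assert(sizeof(OrderPacked) <= sizeof(OrderPadded),
              "reordering members should not make the struct larger");
```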
A short tale of how horrible code yields clean performance.
We investigate techniques of frugal programming: how to program so you don’t waste the limited memory resources in your computer system.
We continue the investigation from the previous post, trying to measure how the memory subsystem affects software performance. We write small programs (kernels) to quantify the effects of cache line size, memory latency, the TLB cache, cache conflicts, vectorization and branch prediction.
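One such kernel might look like the sketch below: it walks a large array with a configurable stride, so the cost per access jumps once the stride grows past the cache line size. The array size and stride range are arbitrary choices for illustration.

```cpp
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    // Large enough that the data does not fit in the last-level cache.
    const std::size_t size = 64 * 1024 * 1024;
    std::vector<char> data(size, 1);

    // Touch one byte every `stride` bytes; once the stride exceeds the
    // cache line size, every access lands on a fresh cache line.
    for (std::size_t stride = 1; stride <= 256; stride *= 2) {
        auto start = std::chrono::steady_clock::now();
        long long sum = 0;
        for (std::size_t i = 0; i < size; i += stride) {
            sum += data[i];
        }
        auto end = std::chrono::steady_clock::now();
        double ns = std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count();
        std::printf("stride %3zu: %.2f ns per access (checksum %lld)\n",
                    stride, ns / (size / stride), sum);
    }
}
```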
We investigate how memory consumption, dataset size and software performance correlate…
For all the engineers who like to tinker with software performance, vectorization is the holy grail: if it vectorizes, it runs faster. Unfortunately, this is often not the case, and forcing vectorization by any means available can actually lower performance. This happens when vectorization hits the memory wall: although…
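The kind of loop where vectorization hits the memory wall might look like the following sketch: the arithmetic is trivial and every element is touched exactly once, so the loop is limited by memory bandwidth rather than by how many additions the CPU can do per cycle. The function and array names are invented for this example.

```cpp
#include <cstddef>
#include <vector>

// A memory-bound kernel: one addition per three memory operations
// (two loads, one store). Even if the compiler vectorizes the add,
// the loop still has to wait for data to arrive from DRAM when the
// arrays are much larger than the caches.
void add_arrays(const std::vector<float>& a,
                const std::vector<float>& b,
                std::vector<float>& out) {
    for (std::size_t i = 0; i < out.size(); ++i) {
        out[i] = a[i] + b[i];
    }
}
```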
We investigate why software gets slower as new features are added or the data set grows, and what you can do about it.
Linked lists are the celebrity data structures of software development. They are celebrities because every engineer has had to deal with them at some point in their career. They are used in many places: from low-level memory management in operating systems to data wrangling and data filtering in machine learning. They promise a lot:…