In this post we investigate how to improve the performance of memory-intensive code by changing the memory layout of our performance-critical data structures.
In this post we introduce a few of the most common tools for debugging memory subsystem performance.
We investigate techniques for hiding memory latency on in-order CPU cores: the same techniques that compilers themselves employ.
We investigate the secret connection between class layout and software performance. And of course, how to make your software faster.
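As a taste of that connection, here is a minimal sketch (the types and member names are illustrative, not from the post): reordering members so that larger, more strictly aligned ones come first eliminates padding, shrinking the class and letting more objects fit in each cache line.

```cpp
#include <cstdint>

// Members ordered small-large-small: the compiler must insert padding
// after each small member to keep the uint64_t aligned.
struct BadLayout {
    uint8_t  a;   // 1 byte, then padding before b
    uint64_t b;   // 8 bytes
    uint8_t  c;   // 1 byte, then tail padding
};                // typically 24 bytes on a 64-bit ABI

// Same members, largest first: padding collapses to a small tail.
struct GoodLayout {
    uint64_t b;   // 8 bytes
    uint8_t  a;   // 1 byte
    uint8_t  c;   // 1 byte, then tail padding
};                // typically 16 bytes on a 64-bit ABI

// The reordered layout is never larger.
static_assert(sizeof(GoodLayout) < sizeof(BadLayout),
              "reordering removed padding");
```

With a smaller class, the same array of objects occupies fewer cache lines, which is where the speedup comes from.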
A short tale of how horrible code can yield great performance.
We investigate memory loads and stores that the compiler inserts for us without our knowledge: “the compiler’s secret life”. We show that these loads and stores, although necessary for the compiler, are not necessary for the correct functioning of our program. And finally, we explain how you can improve the performance of your program by removing them.
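A hypothetical sketch of one such case (not code from the post): when a member variable is updated inside a loop, the compiler often cannot prove that the input pointer doesn’t alias the member, so it conservatively reloads and re-stores it on every iteration. Caching the member in a local variable lets the compiler keep it in a register, leaving just one load and one store in total.

```cpp
#include <cstddef>

struct Accumulator {
    long sum = 0;

    // Possible aliasing between `data` and `sum` can force the compiler
    // to emit a load and a store of `sum` on every iteration.
    void add_all(const long* data, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            sum += data[i];
    }

    // Same result, but the running total lives in a local variable:
    // the compiler keeps it in a register, and `sum` is touched only
    // once on entry and once on exit.
    void add_all_fast(const long* data, std::size_t n) {
        long local = sum;
        for (std::size_t i = 0; i < n; ++i)
            local += data[i];
        sum = local;
    }
};
```

Both versions compute the same sum; only the number of memory accesses differs.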
In this post, we investigate a few common ways to decrease the number of memory accesses in your program.
We investigate techniques for frugal programming: how to program so you don’t waste the limited memory resources of your computer system.
We introduce the compiler optimization report, a useful tool if you wish to speed up your program by looking at what the compiler failed to optimize.
In our experiments with memory access patterns, we have seen that good data locality is key to good software performance. Accessing memory sequentially and splitting the data set into small pieces that are processed individually improves both data locality and software speed. In this post, we will present a few techniques to improve the…
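One classic form of that splitting is loop tiling. A minimal sketch (the tile size, matrix layout, and function name are assumptions for illustration): a matrix transpose processed in small square tiles, so that both the source and destination regions of each tile fit in cache at once.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Tile size is an assumption; tune it to your cache in practice.
constexpr std::size_t TILE = 32;

// Transpose an n x n row-major matrix tile by tile. A naive transpose
// strides through `dst` with stride n, missing cache on nearly every
// write; tiling keeps each small block of src and dst cache-resident.
void transpose_tiled(const std::vector<double>& src,
                     std::vector<double>& dst, std::size_t n) {
    for (std::size_t ii = 0; ii < n; ii += TILE)
        for (std::size_t jj = 0; jj < n; jj += TILE)
            for (std::size_t i = ii; i < std::min(ii + TILE, n); ++i)
                for (std::size_t j = jj; j < std::min(jj + TILE, n); ++j)
                    dst[j * n + i] = src[i * n + j];
}
```

The result is identical to the naive transpose; only the order in which elements are visited changes, and with it the cache behavior.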