In our experiments with memory access patterns, we have seen that good data locality is key to good software performance. Accessing memory sequentially and splitting the data set into small pieces that are processed individually improves data locality and software speed. In this post, we will present a few techniques to improve the…
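As a quick illustration of the second idea (a sketch of ours, not code from the post itself), here is what processing a large vector in small, cache-sized blocks can look like; the block size and the two passes (scale, then clamp) are assumptions made for the example:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Instead of running two full passes over a large vector (the second pass
// re-reads data that has long been evicted from cache), split the data into
// small blocks and run both passes on each block while it is still cache-resident.
void process_blocked(std::vector<float>& data) {
    constexpr std::size_t BLOCK = 4096;   // roughly an L1/L2-sized chunk of floats
    for (std::size_t start = 0; start < data.size(); start += BLOCK) {
        const std::size_t end = std::min(start + BLOCK, data.size());
        for (std::size_t i = start; i < end; ++i) data[i] *= 2.0f;                   // pass 1: scale
        for (std::size_t i = start; i < end; ++i) data[i] = std::min(data[i], 1.0f); // pass 2: clamp
    }
}
```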
What is faster: vec.emplace_back(x) or vec[x]?
When we need to fill a std::vector with values and the size of the vector is known in advance, there are two possibilities: using emplace_back() or using operator[]. With emplace_back(), we should first reserve the necessary amount of space with reserve() before emplacing into the vector; this avoids unnecessary vector regrowth and benefits performance. Alternatively, if we…
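The two alternatives look roughly like this (a minimal sketch; the value generator make_value() is a placeholder, not something from the post):

```cpp
#include <cstddef>
#include <vector>

int make_value(std::size_t i) { return static_cast<int>(i) * 2; }  // placeholder generator

std::vector<int> fill_with_emplace_back(std::size_t n) {
    std::vector<int> v;
    v.reserve(n);                       // reserve up front to avoid regrowth
    for (std::size_t i = 0; i < n; ++i)
        v.emplace_back(make_value(i));
    return v;
}

std::vector<int> fill_with_subscript(std::size_t n) {
    std::vector<int> v(n);              // value-initializes n elements up front
    for (std::size_t i = 0; i < n; ++i)
        v[i] = make_value(i);           // then overwrite via operator[]
    return v;
}
```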
When an instruction depends on the previous instruction depends on the previous instruction…: long instruction dependency chains and performance
This post has a second part, in which the same problem is solved differently. Read more. In this post we investigate long dependency chains: when an instruction depends on the previous instruction depends on the previous instruction… We want to see how long dependency chains lower CPU performance, and we want to measure the effect of interleaving…
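To make the idea concrete, here is a small sketch of ours (not the post's benchmark): a sum with a single accumulator forms one long dependency chain, while splitting the work across four independent accumulators interleaves the chains and lets the CPU execute them in parallel:

```cpp
#include <cstddef>
#include <vector>

double sum_single_chain(const std::vector<double>& v) {
    double s = 0.0;
    for (double x : v) s += x;            // every addition waits for the previous one
    return s;
}

double sum_interleaved(const std::vector<double>& v) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    std::size_t i = 0;
    for (; i + 4 <= v.size(); i += 4) {   // four independent dependency chains
        s0 += v[i];
        s1 += v[i + 1];
        s2 += v[i + 2];
        s3 += v[i + 3];
    }
    for (; i < v.size(); ++i) s0 += v[i]; // leftover elements
    return (s0 + s1) + (s2 + s3);
}
```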
The memory subsystem from the viewpoint of software: how the memory subsystem affects software performance 2/3
We continue the investigation from the previous post, trying to measure how the memory subsystem affects software performance. We write small programs (kernels) to quantify the effects of the cache line, memory latency, the TLB cache, cache conflicts, vectorization and branch prediction.
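One of the simplest kernels of this kind (an illustrative sketch of ours, not the post's actual code) touches a buffer with a growing stride: once the stride exceeds a cache line, every access misses, so the time per touched element rises sharply even though the work per access is unchanged:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

std::uint64_t strided_sum(const std::vector<std::uint8_t>& buf, std::size_t stride) {
    std::uint64_t sum = 0;
    for (std::size_t i = 0; i < buf.size(); i += stride)
        sum += buf[i];   // with stride >= 64 bytes, each access touches a new cache line
    return sum;
}
```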
The memory subsystem from the viewpoint of software: how the memory subsystem affects software performance 1/3
In this post we investigate the memory subsystem of a desktop, a server and an embedded system from the software viewpoint. We use small kernels to illustrate various aspects of the memory subsystem and how it affects performance and runtime.
Instruction-level parallelism in practice: speeding up memory-bound programs with low ILP
We talk about instruction-level parallelism: what instruction-level parallelism is, why it is important for your code’s performance, and how you can add instruction-level parallelism to improve the performance of your memory-bound program.
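A classic memory-bound, low-ILP case is pointer chasing: each load depends on the previous one, so the CPU mostly waits on memory. A sketch of the idea (the Node type and the two-list setup are our assumptions for the example) is to walk two independent lists in the same loop, which keeps two loads in flight at a time:

```cpp
struct Node { int value; Node* next; };

long sum_one_list(const Node* a) {
    long sum = 0;
    while (a) { sum += a->value; a = a->next; }   // one long chain of dependent loads
    return sum;
}

long sum_two_lists(const Node* a, const Node* b) {
    long sum = 0;
    while (a && b) {                              // two independent dependency chains
        sum += a->value + b->value;
        a = a->next;
        b = b->next;
    }
    while (a) { sum += a->value; a = a->next; }   // drain whichever list is longer
    while (b) { sum += b->value; b = b->next; }
    return sum;
}
```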
Memory consumption, dataset size and performance: how does it all relate?
We investigate how memory consumption, dataset size and software performance correlate…
Crash course introduction to parallelism: Multithreading
In this post we introduce the essentials of programming for systems with several CPU cores. We start with an explanation of software threads and synchronization, two fundamental building blocks of multithreaded programming. We explain how these are implemented in hardware, and finally, we present several multithreading APIs you can use for parallel programming.
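As a minimal taste of those building blocks (our example, using std::thread and std::mutex, which is only one of the APIs discussed), two threads each do private work and then synchronize the update of a shared total:

```cpp
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    long total = 0;
    std::mutex total_mutex;

    auto worker = [&](long from, long to) {
        long local = 0;
        for (long i = from; i < to; ++i) local += i;   // private work, no sharing
        std::lock_guard<std::mutex> lock(total_mutex); // synchronize the shared update
        total += local;
    };

    std::vector<std::thread> threads;
    threads.emplace_back(worker, 0L, 500000L);
    threads.emplace_back(worker, 500000L, 1000000L);
    for (auto& t : threads) t.join();

    std::printf("total = %ld\n", total);
}
```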
Vectorization, dependencies and outer loop vectorization: if you can’t beat them, join them
As I already mentioned in earlier posts, vectorization is the holy grail of software optimizations: if your hot loop is efficiently vectorized, it is pretty much running at the fastest possible speed. So, it is definitely a goal worth pursuing, under two assumptions: (1) that your code has a hardware-friendly memory access pattern and (2) that…
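To sketch what “join them” can mean (an illustrative example of ours with an assumed row-major layout, not code from the post): a prefix sum along each row cannot be vectorized, because every element depends on the previous one, but the rows are independent, so the outer loop over rows can be vectorized by processing a small group of rows together:

```cpp
#include <cstddef>
#include <vector>

// m is a rows x cols matrix stored row-major; each row gets a running prefix sum.
void prefix_sum_rows(std::vector<float>& m, std::size_t rows, std::size_t cols) {
    constexpr std::size_t LANES = 8;                    // rows processed together
    std::size_t r = 0;
    for (; r + LANES <= rows; r += LANES) {             // outer loop, stepped by LANES
        for (std::size_t i = 1; i < cols; ++i)
            for (std::size_t l = 0; l < LANES; ++l)     // independent across rows,
                m[(r + l) * cols + i] += m[(r + l) * cols + i - 1]; // vectorizable (strided access)
    }
    for (; r < rows; ++r)                               // leftover rows, scalar
        for (std::size_t i = 1; i < cols; ++i)
            m[r * cols + i] += m[r * cols + i - 1];
}
```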
Making your program run faster: the key concepts of software performance
In this post we present key concepts of software performance engineering.