For Software Performance, the Way Data is Accessed Matters!

In our experiments with memory access patterns, we have seen that good data locality is key to good software performance. Accessing memory sequentially and splitting the data set into small pieces that are processed individually improves data locality and software speed. In this post, we will present a few techniques to improve the…
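
To make the idea concrete, here is a minimal sketch of my own (not code from the post) of processing a large array in cache-sized blocks, so that data brought into the cache is reused before it gets evicted:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative only: process a large array in small blocks so that each block
// stays resident in the cache while both passes over it run.
void process_in_blocks(std::vector<double>& data) {
    constexpr std::size_t BLOCK = 4096;  // block size chosen to fit comfortably in cache
    for (std::size_t start = 0; start < data.size(); start += BLOCK) {
        const std::size_t end = std::min(start + BLOCK, data.size());
        // First pass over the block: the data is loaded into the cache here.
        for (std::size_t i = start; i < end; ++i)
            data[i] *= 2.0;
        // Second pass over the same block: the data is most likely still cached.
        for (std::size_t i = start; i < end; ++i)
            data[i] += 1.0;
    }
}
```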

What is faster: vec.emplace_back(x) or vec[x] ?

When we need to fill a std::vector with values and the size of the vector is known in advance, there are two possibilities: using emplace_back() or using operator[]. For emplace_back(), we should reserve the necessary amount of space with reserve() before emplacing into the vector. This avoids unnecessary vector regrowth and benefits performance. Alternatively, if we…
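
As a quick illustration of the two alternatives (my own sketch, not the post's benchmark code):

```cpp
#include <cstddef>
#include <vector>

// Variant 1: reserve() up front, then append with emplace_back().
std::vector<int> fill_with_emplace_back(std::size_t n) {
    std::vector<int> v;
    v.reserve(n);                                // single allocation, no regrowth
    for (std::size_t i = 0; i < n; ++i)
        v.emplace_back(static_cast<int>(i));     // appends and bumps the size each time
    return v;
}

// Variant 2: construct with the final size, then write through operator[].
std::vector<int> fill_with_subscript(std::size_t n) {
    std::vector<int> v(n);                       // allocates and value-initializes n elements
    for (std::size_t i = 0; i < n; ++i)
        v[i] = static_cast<int>(i);              // plain store, no size bookkeeping
    return v;
}
```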

When an instruction depends on the previous instruction depends on the previous instructions… : long instruction dependency chains and performance

This post has a second part, where the same problem is solved differently. Read more. In this post we investigate long dependency chains: when an instruction depends on the previous instruction, which depends on the previous instruction… We want to see how long dependency chains lower CPU performance, and we want to measure the effect of interleaving…
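
To show what such a chain looks like, here is a small sketch of my own: a summation where every addition depends on the previous one, and a variant that interleaves two independent accumulators so the CPU has independent work to execute.

```cpp
#include <cstddef>
#include <vector>

// A serial dependency chain: every addition needs the result of the previous one.
double sum_chained(const std::vector<double>& a) {
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        sum += a[i];                 // each iteration waits for the previous iteration
    return sum;
}

// Two interleaved, independent chains (illustrative; the post measures the effect).
double sum_interleaved(const std::vector<double>& a) {
    double sum0 = 0.0, sum1 = 0.0;
    std::size_t i = 0;
    for (; i + 1 < a.size(); i += 2) {
        sum0 += a[i];                // the two accumulators do not depend on each other,
        sum1 += a[i + 1];            // so the CPU can execute them in parallel
    }
    for (; i < a.size(); ++i)        // remainder
        sum0 += a[i];
    return sum0 + sum1;
}
```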

The memory subsystem from the viewpoint of software: how memory subsystem affects software performance 2/3

We continue the investigation from the previous post, trying to measure how the memory subsystem affects software performance. We write small programs (kernels) to quantify the effects of the cache line, memory latency, the TLB cache, cache conflicts, vectorization, and branch prediction.
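
As an example of what such a kernel might look like (my own sketch, not one of the post's kernels), a strided read exposes the cache-line effect: with stride 1 every byte of a fetched line is used, while with a stride of a full cache line every access pulls in a new line.

```cpp
#include <cstddef>
#include <vector>

// Sum every stride-th element; stride must be >= 1.
// On typical hardware with 64-byte cache lines and 4-byte ints, a stride of 16
// touches one element per cache line, making the kernel memory-bound.
long strided_sum(const std::vector<int>& data, std::size_t stride) {
    long sum = 0;
    for (std::size_t i = 0; i < data.size(); i += stride)
        sum += data[i];
    return sum;
}
```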

Crash course introduction to parallelism: Multithreading

In this post we introduce the essentials of programming for systems with several CPU cores. We start with an explanation of software threads and synchronization, two fundamental building blocks of multithreaded programming. We explain how these are implemented in hardware, and finally, we present several multithreading APIs you can use for parallel programming.
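
A minimal sketch of those two building blocks using the C++ standard library (my example, not taken from the post): several threads increment a shared counter, and a mutex synchronizes the increments.

```cpp
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    long counter = 0;
    std::mutex counter_mutex;

    // Launch four software threads that all update the same counter.
    std::vector<std::thread> workers;
    for (int t = 0; t < 4; ++t) {
        workers.emplace_back([&] {
            for (int i = 0; i < 100000; ++i) {
                std::lock_guard<std::mutex> lock(counter_mutex);  // synchronization
                ++counter;
            }
        });
    }
    for (auto& w : workers)
        w.join();                   // wait for all threads to finish

    std::cout << counter << '\n';   // always 400000 thanks to the mutex
    return 0;
}
```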

Vectorization, dependencies and outer loop vectorization: if you can’t beat them, join them

As I already mentioned in earlier posts, vectorization is the holy grail of software optimizations: if your hot loop is efficiently vectorized, it is pretty much running at the fastest possible speed. So, it is definitely a goal worth pursuing, under two assumptions: (1) that your code has a hardware-friendly memory access pattern and (2) that…
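
As a rough illustration of the outer loop vectorization idea (my own sketch, assuming a compiler that honors OpenMP SIMD pragmas; not the post's code): each row of a matrix carries a dependency along its columns, so the loop along a row cannot be vectorized, but the rows are independent of each other, and the work can be vectorized across rows instead.

```cpp
#include <cstddef>
#include <vector>

// Row-wise prefix sums of a rows x cols matrix stored row-major:
// element (r, c) lives at m[r * cols + c].
// The dependency runs along each row (column index c), so that direction is
// not vectorizable; the loop over rows is dependency-free and can be vectorized.
void row_prefix_sums(std::vector<float>& m, std::size_t rows, std::size_t cols) {
    for (std::size_t c = 1; c < cols; ++c) {       // the dependency runs along this loop
#pragma omp simd
        for (std::size_t r = 0; r < rows; ++r)     // independent across rows: vectorizable
            m[r * cols + c] += m[r * cols + c - 1];
    }
}
```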