In this post we investigate the memory subsystem of a desktop, a server, and an embedded system from the software viewpoint. We use small kernels to illustrate various aspects of the memory subsystem and how it affects performance and runtime.

We talk about instruction-level parallelism: what it is, why it is important for your code’s performance, and how you can add it to improve the performance of your memory-bound program.
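As a minimal sketch of the idea (the function names are illustrative, not taken from the post): summing an array with a single accumulator creates one long dependency chain, while splitting the work across several independent accumulators lets the CPU keep multiple additions in flight at once.

```cpp
#include <cstddef>

// One accumulator: every addition depends on the previous one,
// so the loop forms a single dependency chain.
double sum_single(const double* a, std::size_t n) {
    double s = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        s += a[i];
    }
    return s;
}

// Four independent accumulators: the additions no longer depend on
// each other, exposing instruction-level parallelism to the CPU.
double sum_unrolled(const double* a, std::size_t n) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; ++i) {
        s0 += a[i];  // scalar tail for the remaining elements
    }
    return (s0 + s1) + (s2 + s3);
}
```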
We investigate how memory consumption, dataset size and software performance correlate…
As I already mentioned in earlier posts, vectorization is the holy grail of software optimizations: if your hot loop is efficiently vectorized, it is pretty much running at the fastest possible speed. So it is definitely a goal worth pursuing, under two assumptions: (1) that your code has a hardware-friendly memory access pattern and (2) that…
We try to answer the question of why quicksort is faster than heapsort, and then we dig deeper into these algorithms’ hardware efficiency. The goal: making them faster.
For all the engineers who like to tinker with software performance, vectorization is the holy grail: if it vectorizes, it runs faster. Unfortunately, this is often not the case, and forcing vectorization by any means can result in lower performance. This happens when vectorization hits the memory wall: although…
We use the matrix multiplication example to investigate loop interchange and loop tiling as techniques to speed up programs that work with matrices.
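Here is a hedged sketch of the two transformations named above, applied to a plain square matrix multiplication with row-major storage; the function names and the tile size are illustrative assumptions, not code from the post. All three variants assume `C` is zero-initialized.

```cpp
#include <cstddef>

// Naive i-j-k order: the innermost loop walks B column-wise,
// a cache-unfriendly stride for row-major storage.
void matmul_naive(const float* A, const float* B, float* C, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            for (std::size_t k = 0; k < n; ++k)
                C[i * n + j] += A[i * n + k] * B[k * n + j];
}

// Loop interchange (i-k-j order): the innermost loop now walks both
// B and C row-wise, so consecutive iterations touch consecutive memory.
void matmul_interchanged(const float* A, const float* B, float* C, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t k = 0; k < n; ++k)
            for (std::size_t j = 0; j < n; ++j)
                C[i * n + j] += A[i * n + k] * B[k * n + j];
}

// Loop tiling: process the matrices in TILE x TILE blocks so the
// working set of each block stays resident in the cache.
constexpr std::size_t TILE = 64;  // illustrative tile size

void matmul_tiled(const float* A, const float* B, float* C, std::size_t n) {
    for (std::size_t ii = 0; ii < n; ii += TILE)
        for (std::size_t kk = 0; kk < n; kk += TILE)
            for (std::size_t jj = 0; jj < n; jj += TILE)
                for (std::size_t i = ii; i < ii + TILE && i < n; ++i)
                    for (std::size_t k = kk; k < kk + TILE && k < n; ++k)
                        for (std::size_t j = jj; j < jj + TILE && j < n; ++j)
                            C[i * n + j] += A[i * n + k] * B[k * n + j];
}
```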
A post explaining how a few small changes in the right places can have drastic effects on the performance of the Canny image processing algorithm.
We explore how a class’s size and the layout of its data members affect your program’s speed.
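A minimal sketch of one effect in this area (the struct names and members are made up for illustration): ordering data members carelessly forces the compiler to insert alignment padding, while ordering them from largest to smallest shrinks the object so more instances fit in each cache line.

```cpp
#include <cstdint>
#include <iostream>

// Members ordered carelessly: padding is inserted so that each
// larger member that follows stays naturally aligned.
struct Padded {
    std::uint8_t  flag;    // 1 byte + 7 bytes padding
    std::uint64_t id;      // 8 bytes
    std::uint8_t  kind;    // 1 byte + 3 bytes padding
    std::uint32_t count;   // 4 bytes
};                         // typically 24 bytes on 64-bit ABIs

// Same members ordered from largest to smallest: less padding.
struct Packed {
    std::uint64_t id;      // 8 bytes
    std::uint32_t count;   // 4 bytes
    std::uint8_t  flag;    // 1 byte
    std::uint8_t  kind;    // 1 byte + 2 bytes tail padding
};                         // typically 16 bytes on 64-bit ABIs

int main() {
    std::cout << "Padded: " << sizeof(Padded) << " bytes\n";
    std::cout << "Packed: " << sizeof(Packed) << " bytes\n";
}
```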
This is the first article about hardware support for parallelization. We talk about SIMD, an extension that almost every modern processor has and that lets you speed up your program.
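To make the idea concrete, here is a hedged sketch using standard x86 SSE intrinsics to add two float arrays four elements at a time; the function itself is an illustrative example, not code from the post.

```cpp
#include <immintrin.h>  // x86 SSE intrinsics
#include <cstddef>

// Add two float arrays element-wise, processing four floats per
// instruction with 128-bit SSE registers.
void add_arrays_simd(const float* a, const float* b, float* out, std::size_t n) {
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);   // load 4 floats from a
        __m128 vb = _mm_loadu_ps(b + i);   // load 4 floats from b
        __m128 vc = _mm_add_ps(va, vb);    // 4 additions in one instruction
        _mm_storeu_ps(out + i, vc);        // store 4 results
    }
    for (; i < n; ++i) {
        out[i] = a[i] + b[i];              // scalar tail
    }
}
```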