We investigate memory loads and stores that the compiler inserts for us without our knowledge: “the compiler’s secret life”. We show that these loads and stores, although necessary for the compiler, are not necessary for the correct functioning of our program. Finally, we explain how you can improve your program’s performance by removing them.
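As a small illustration of the idea (a hypothetical sketch, not code from the post itself): when a loop reads a value through a pointer while also storing through another pointer, the compiler often cannot prove the two do not alias, so it reloads the value from memory on every iteration. Caching it in a local variable removes those hidden loads.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Reads *limit through a pointer on every iteration. Because the store to
// *sum_out might alias *limit, the compiler may reload *limit each pass.
long sum_reload(const std::vector<long>& v, const long* limit, long* sum_out) {
    *sum_out = 0;
    for (std::size_t i = 0; i < v.size() && v[i] < *limit; ++i)
        *sum_out += v[i];  // this store forces the reload of *limit
    return *sum_out;
}

// Same computation, but the limit and the running sum live in locals,
// which the compiler can keep in registers: one load, one final store.
long sum_local(const std::vector<long>& v, const long* limit, long* sum_out) {
    const long lim = *limit;  // loaded once
    long sum = 0;             // kept in a register
    for (std::size_t i = 0; i < v.size() && v[i] < lim; ++i)
        sum += v[i];
    *sum_out = sum;           // stored once
    return sum;
}
```

Both functions compute the same result; only the number of memory accesses the compiler must emit differs.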
All posts tagged memory access
Decreasing the Number of Memory Accesses 1/2
In this post, we are investigating a few common ways to decrease the number of memory accesses in your program.
Memory consumption, dataset size and performance: how does it all relate?
We investigate how memory consumption, dataset size and software performance correlate…
Memory Access Pattern and Performance: the Example of Matrix Multiplication
We use the example of matrix multiplication to investigate loop interchange and loop tiling as techniques to speed up programs that work with matrices.
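A minimal sketch of loop interchange (hypothetical names, not code from the post): in the naive i-j-k multiplication the innermost loop walks one matrix column-wise with a stride of n elements, which is cache-unfriendly. Interchanging the two inner loops to i-k-j makes every inner iteration touch consecutive memory.

```cpp
#include <cassert>
#include <vector>

// Matrix multiplication with the i-k-j loop order: in the innermost loop
// both b and c are accessed row-wise (unit stride), so consecutive
// iterations hit consecutive memory locations. Matrices are stored
// row-major in flat vectors; c must be zero-initialized by the caller.
void matmul_ikj(const std::vector<double>& a,
                const std::vector<double>& b,
                std::vector<double>& c, int n) {
    for (int i = 0; i < n; ++i)
        for (int k = 0; k < n; ++k) {
            const double aik = a[i * n + k];  // loaded once per inner loop
            for (int j = 0; j < n; ++j)
                c[i * n + j] += aik * b[k * n + j];
        }
}
```

The result is identical to the naive order; only the memory access pattern changes.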
Speeding up an Image Processing Algorithm
A post explaining how a few small changes in the right places can have a drastic effect on the performance of an image processing algorithm named Canny.
2-minute read: Class Size, Member Layout and Speed
We explore how class size and the layout of its data members affect your program’s speed.
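One way member layout affects class size (a hypothetical sketch, not code from the post): the compiler inserts padding to keep each member aligned, so the same members in a different order can produce a larger struct, and fewer objects fit in a cache line.

```cpp
#include <cassert>
#include <cstdint>

// Members ordered large-to-small: no padding is needed between them.
struct Packed {
    double  d;  // 8 bytes
    int32_t i;  // 4 bytes
    int16_t s;  // 2 bytes
    int8_t  c;  // 1 byte (+ tail padding to the struct's alignment)
};

// The same members interleaved: padding is inserted before d and i
// to keep them aligned, growing the struct.
struct Padded {
    int8_t  c;  // 1 byte + padding
    double  d;  // 8 bytes
    int16_t s;  // 2 bytes + padding
    int32_t i;  // 4 bytes
};
```

On a typical 64-bit platform `Packed` occupies 16 bytes and `Padded` 24, even though both hold the same data.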
The price of dynamic memory: Memory Access
If your program uses dynamic memory, its speed will depend not only on allocation time but also on memory access time. Here we investigate how memory access time depends on the memory layout of your data structure, and we look at ways to speed up your program by laying out your data structure optimally.