This is the last memory optimization that we are covering in this blog. You can see the full list of all the memory subsystem optimizations we covered earlier here. Definitely worth a read for anyone trying to improve the performance of memory-intensive software. In this post, we are covering a few remaining optimization techniques…
All posts in Low Level Performance
Speeding Up Convergence Loops. Or, on Vectorization and Precision Control
In this post we investigate methods to speed up convergence loops – while loops that slowly converge to the correct result.
Latency-Sensitive Applications and the Memory Subsystem Part 2: Memory Management Mechanisms
In this post we talk about memory management mechanisms that increase memory access latency, and we explore techniques to avoid them in latency-sensitive systems.
Latency-Sensitive Applications and the Memory Subsystem: Keeping the Data in the Cache
We explore the performance of latency-sensitive applications – more specifically, how to avoid evicting your critical data from the data cache.
The pros and cons of explicit software prefetching
We investigate explicit software prefetching, a mechanism software developers can use to fetch data into the cache in advance so it is ready by the time the program needs it.
A story of a very large loop with a long instruction dependency chain
An investigation of a very large loop whose performance is limited by a long instruction dependency chain.
On Avoiding Register Spills in Vectorized Code with Many Constants
How to avoid register spills in vectorized code with many constants.
Unexpected Ways Memory Subsystem Interacts with Branch Prediction
We investigate the unexpected ways the memory subsystem interacts with branch prediction, and how this interaction shapes software performance.
Multithreading and the Memory Subsystem
In this post we investigate how the memory subsystem behaves when several threads compete for its resources. We also investigate techniques to improve the performance of multithreaded programs – programs that split their workload across several CPU cores so that they finish faster.
Speeding Up Translation of Virtual To Physical Memory Addresses: TLB and Huge Pages
In this post we explore how to speed up memory-intensive programs by decreasing the number of TLB misses.