In this post we talk about how to write code that is both flexible and fast!

In this post we introduce Speedscope, a useful tool that helps you visualize what your program is doing and where it is spending its time.
A post explaining how a few small changes in the right places can have a drastic effect on the performance of an image processing algorithm, the Canny edge detector.
We investigate the performance impact of multithreading.
When processing (searching, inserting, etc.) your data structure, accessing it in a random-access fashion hurts performance because of the many data cache misses. Read on to learn how to use explicit data prefetching to speed up access to your data structure.
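
To give a flavor of the technique, here is a minimal sketch that uses the GCC/Clang builtin `__builtin_prefetch` while summing values gathered through an index array; the function, the data, and the look-ahead distance are illustrative assumptions, not code from the article:

```cpp
#include <cstddef>
#include <vector>

// Sketch only: sum values gathered through an index array. The access pattern
// values[idx[i]] is effectively random, so we ask the CPU to start loading the
// element we will need a few iterations from now.
double sum_gathered(const std::vector<double>& values,
                    const std::vector<std::size_t>& idx) {
    constexpr std::size_t kLookahead = 8;  // tuning parameter, chosen arbitrarily here
    double sum = 0.0;
    for (std::size_t i = 0; i < idx.size(); ++i) {
        if (i + kLookahead < idx.size()) {
            __builtin_prefetch(&values[idx[i + kLookahead]]);  // hint only, no effect on correctness
        }
        sum += values[idx[i]];
    }
    return sum;
}
```

Because the prefetch is only a hint, the code stays correct even if the look-ahead distance is mistuned; the right distance has to be found by measurement.
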
We will talk about expensive instructions in modern CPUs and how to avoid them to speed up your program.
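
One common example of such an instruction is integer division (and its modulo cousin), which costs an order of magnitude more cycles than addition or comparison on most CPUs. The sketch below, with made-up function names, shows a ring-buffer index update rewritten to avoid the modulo:

```cpp
#include <cstddef>

// Wraparound with modulo: emits a DIV instruction when capacity is not a
// compile-time constant.
std::size_t next_index_mod(std::size_t head, std::size_t capacity) {
    return (head + 1) % capacity;
}

// Wraparound with compare-and-reset: an addition, a comparison and a
// conditional move, all of them cheap.
std::size_t next_index_cheap(std::size_t head, std::size_t capacity) {
    std::size_t next = head + 1;
    return next == capacity ? 0 : next;
}
```
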
Profile-guided optimization is a compiler-supported optimization technique that is easy to use and can make your program run faster with little effort. Here you will learn how to enable it on your project and what kind of improvements to expect.
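
As a rough sketch of the workflow (assuming GCC or Clang; the file and binary names are placeholders), enabling PGO is a three-step build:

```sh
g++ -O2 -fprofile-generate -o myapp main.cpp   # 1. build an instrumented binary
./myapp typical_workload.dat                   # 2. run it on a representative workload
g++ -O2 -fprofile-use -o myapp main.cpp        # 3. rebuild using the collected profile
```
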
If your program uses dynamic memory, its speed depends not only on allocation time but also on memory access time. Here we investigate how memory access time depends on the memory layout of your data structures, and we explore ways to speed up your program by laying them out optimally.
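
A minimal sketch of the idea, with illustrative type and field names: if a hot loop touches only one field, an array-of-structs layout drags the unused fields through the cache with every element, while a struct-of-arrays layout keeps the needed values contiguous:

```cpp
#include <vector>

// Array of structs: consecutive x values are 32 bytes apart, so a loop that
// reads only x uses a quarter of every cache line it loads.
struct Particle { double x, y, z, mass; };

double sum_x_aos(const std::vector<Particle>& particles) {
    double sum = 0.0;
    for (const Particle& p : particles) sum += p.x;
    return sum;
}

// Struct of arrays: the x values are packed back to back, so every loaded
// cache line is fully used.
struct Particles { std::vector<double> x, y, z, mass; };

double sum_x_soa(const Particles& particles) {
    double sum = 0.0;
    for (double v : particles.x) sum += v;
    return sum;
}
```
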
Function calls are not cheap, and in time-critical code it is better to avoid them. This article explores techniques you can use to avoid function calls, thus speeding up your code.
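
One hedged illustration of the kind of technique involved: making the callee visible to the compiler so the call can be inlined away. The names below are made up for this example:

```cpp
#include <vector>

// Callee reached through a function pointer: the compiler usually has to emit
// a real (indirect) call in every iteration.
double apply_ptr(const std::vector<double>& v, double (*f)(double)) {
    double sum = 0.0;
    for (double x : v) sum += f(x);
    return sum;
}

// Callee baked into the type as a template parameter: the call is typically
// inlined and the per-element call overhead disappears.
template <typename F>
double apply_inlined(const std::vector<double>& v, F f) {
    double sum = 0.0;
    for (double x : v) sum += f(x);
    return sum;
}

// Usage: apply_inlined(values, [](double x) { return x * x; });
```
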
The traditional compile-and-link cycle generates binaries that work fine, but if you need more speed, you should learn about link time optimizations. Here we talk about what link time optimizations are, how to enable them, and what improvements to expect.
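
As a rough sketch (assuming GCC or Clang; file names are placeholders), link time optimization is enabled by passing `-flto` both when compiling and when linking, so the optimizer can see across translation units:

```sh
g++ -O2 -flto -c a.cpp
g++ -O2 -flto -c b.cpp
g++ -O2 -flto a.o b.o -o myapp   # -flto must also appear on the link line
```
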