Having spent some time optimizing concurrent code to be lock-free a while ago, I have recently been reading up on advanced concurrent programming techniques again. If anyone is interested, here are a couple of links with some helpful information.
[2016-08-12 Update] I’ve taken this article, updated a few of the links, added some more and reposted it as Reading up on Concurrent Programming – Reloaded. I recommend reading that instead of this post.
A good book to get you started is Concurrent Programming on Windows by Joe Duffy. It describes various aspects of locking and lock-free programming, the synchronization primitives available on Windows, and much more. It’s certainly not easy reading, but well worth it if you really want to learn these things in detail. The book is for .NET as well as Win32 programmers, as everything is explained in terms of both C# and Visual C++.
If your target platform is .NET, you must check out Threading in C# by Joseph Albahari, which contains everything one could possibly want to know about, well, threading in C#. My personal advice when doing multithreading in C#: in 99% of all cases, don’t bother creating your own threads; use the Task Parallel Library (TPL) instead. Don’t get me wrong: it’s still important to know the lower-level mechanisms. And if you are really serious about optimizing your code, you will have to resort to some of the advanced techniques described there.
Another good resource for applications targeting Windows is the Parallel Computing page on MSDN, which links to a bunch of documentation, sample code, tutorials and videos. While there is some information there about the Async technology preview, you might also want to check out the separate Visual Studio Asynchronous Programming page. I particularly liked Anders Hejlsberg’s introduction video. The PFX team’s blog has some interesting posts about various parallel programming topics as well, and earlier this year Raymond Chen did a series on lock-free algorithms on his blog The Old New Thing.
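To give a flavor of the kind of lock-free technique such a series covers, here is a minimal compare-and-swap retry loop in Java (my own illustrative sketch using java.util.concurrent.atomic, not code from any of the linked articles):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LockFreeCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Lock-free increment: retry the compare-and-swap until no other
    // thread has modified the value between our read and our write.
    public int increment() {
        while (true) {
            int current = value.get();
            int next = current + 1;
            if (value.compareAndSet(current, next)) {
                return next;
            }
        }
    }

    public int get() {
        return value.get();
    }

    public static void main(String[] args) throws InterruptedException {
        LockFreeCounter counter = new LockFreeCounter();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) counter.increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(counter.get()); // 40000, with no lock taken
    }
}
```

No thread ever blocks holding a lock here; a thread that loses the CAS race simply retries, which is the defining property of a lock-free algorithm.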
If your focus is Java, you have to take a look at Disruptor, a “hard-core” (their words, not mine) concurrent programming framework for the JVM. To get started, check out this video of a presentation by the developers and/or read their technical paper. There is also a description of the larger architecture that Disruptor is a part of on Martin Fowler’s website. Even if you are not programming in Java, it is worth a look, as it contains some interesting pointers to what one could do in other languages to improve the performance of concurrent code. The Mechanical Sympathy blog by one of the developers contains a bunch of additional background information on Disruptor’s implementation.
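Disruptor’s central idea — a pre-allocated ring buffer whose slots are claimed through monotonically increasing sequence counters rather than locks — can be sketched roughly as follows. This is a deliberately simplified single-producer/single-consumer toy of my own; the class and method names are invented and do not reflect Disruptor’s actual API:

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy ring buffer in the spirit of Disruptor: slots are pre-allocated
// and coordinated via sequence numbers instead of locks.
public class ToyRingBuffer {
    private final long[] slots;
    private final int mask;                                  // capacity must be a power of two
    private final AtomicLong published = new AtomicLong(-1); // last slot written
    private final AtomicLong consumed = new AtomicLong(-1);  // last slot read

    public ToyRingBuffer(int capacityPowerOfTwo) {
        slots = new long[capacityPowerOfTwo];
        mask = capacityPowerOfTwo - 1;
    }

    public void publish(long value) {
        long next = published.get() + 1;
        // Spin until the consumer has freed the slot we want to overwrite.
        while (next - consumed.get() > slots.length) Thread.onSpinWait();
        slots[(int) (next & mask)] = value;
        published.set(next); // volatile write: makes the slot visible to the consumer
    }

    public long take() {
        long next = consumed.get() + 1;
        // Spin until the producer has published this slot.
        while (published.get() < next) Thread.onSpinWait();
        long value = slots[(int) (next & mask)];
        consumed.set(next);
        return value;
    }

    public static void main(String[] args) throws InterruptedException {
        ToyRingBuffer buffer = new ToyRingBuffer(1024);
        Thread producer = new Thread(() -> {
            for (long i = 1; i <= 100_000; i++) buffer.publish(i);
        });
        producer.start();
        long sum = 0;
        for (long i = 1; i <= 100_000; i++) sum += buffer.take();
        producer.join();
        System.out.println(sum); // 5000050000
    }
}
```

The point of the design is that producer and consumer each only ever write their own sequence counter, so there is no contended lock, no allocation per message, and the power-of-two mask keeps the index computation branch-free. The real Disruptor adds multi-producer support, batching, cache-line padding and pluggable wait strategies on top of this basic scheme.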
Finally, here are a few related questions and discussions I found interesting:

- Why does shared state degrade performance?
- Is object pooling a deprecated technique?
- Optimizing for the CPU cache (with a link to What every programmer should know about memory, which I highly recommend checking out)
- How important is multithreading in the current software industry?
- How is lazySet in Java’s Atomic* classes implemented? (lazySet is used by Disruptor and came up a couple of times in the presentation, which is why I was asking)
- What are some of the core principles needed to master Multi threading using Delphi?
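Since the lazySet question above may be puzzling without context, here is a minimal illustration of the API. lazySet is a cheaper alternative to a full volatile write: it orders prior writes before it (release semantics) but its visibility to other threads may be slightly delayed.

```java
import java.util.concurrent.atomic.AtomicLong;

public class LazySetDemo {
    public static void main(String[] args) {
        AtomicLong sequence = new AtomicLong(0);

        // A plain set() is a full volatile write: immediately visible
        // to other threads, at the cost of a full memory fence.
        sequence.set(1);

        // lazySet() is a release-ordered write: earlier writes cannot be
        // reordered past it, but the new value may become visible to other
        // threads a little later. This is cheaper on most hardware, which
        // is why Disruptor uses it to publish its sequence counters.
        sequence.lazySet(2);

        // From the writing thread's own point of view the value is,
        // of course, already updated.
        System.out.println(sequence.get()); // 2
    }
}
```

(As an aside: in current Java the same operation is exposed more explicitly as setRelease on the Atomic* classes and VarHandle.)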