Reading up on Apache Kafka

Even though it is a lot of fun to write your own high-performance middleware and fine-tune its multicast performance, it is important to every now and then look outside your team and your organization at what’s available in the market. Even if it isn’t a perfect fit, it might give you some ideas for your own implementation (like PGM did for me).

Hence I attended Data-driven Day at codecentric, a day of talks about various aspects of Apache Kafka. Keynote speaker was Tim Berglund from Confluent, a company building a platform on top of the open-source Kafka. Many of the basic concepts from his talk can be found in the Kafka Wikipedia article, but it was helpful to have Tim explain them in a way that made sense even for someone like me who had never heard of Kafka before.

Videos from that day’s talks are available on YouTube: [1], [2], [3].

Afterwards, I got a chance to talk to Tim about some of the specific requirements we have in terms of performance, throughput and protocol design, and he pointed out some articles on the Confluent blog. I also found the following resources very helpful:


The Case of Multicast Message Loss (again)

I have written about troubleshooting multicast issues several times before, but multicast is a gift that keeps on giving.

The Problem

The application in question would miss a substantial number of messages. A trace on the connected switch showed that all packets had been put on the wire. Tracing with Microsoft Message Analyzer on the machine showed these same messages missing, so our application was probably not at fault. Additionally, the application worked just fine on other machines.

The Analysis

So I went back to the drawing board, reviewed and double-checked everything I had learned about high-throughput multicast messaging and

  • set appropriately large socket receive buffer sizes in the multicast message receiving application,
  • activated all TCP/UDP Rx/Tx offloads in the NIC configuration,
  • activated receive side scaling (RSS) and picked the maximum number of RSS queues,
  • set the NICs’ receive buffers to their maximum values,
  • disabled flow-control,
  • turned off all power-saving features in NIC and operating system,
  • used the most aggressive interrupt moderation setting, and
  • updated the NIC driver to the latest version.
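The first item on that list can be sketched in Python. The multicast group, port, and buffer size below are made-up examples, and the OS may clamp or adjust whatever you request, so it pays to read the effective value back:

```python
import socket
import struct

# Hypothetical multicast group/port -- substitute your feed's values.
GROUP, PORT = "239.1.2.3", 5000
REQUESTED_RCVBUF = 8 * 1024 * 1024  # 8 MiB; size to your burst rate

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

# Request a large receive buffer *before* traffic starts flowing.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, REQUESTED_RCVBUF)

# The OS may clamp (or, on Linux, double) the request; verify what you got.
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"requested {REQUESTED_RCVBUF}, granted {granted}")

sock.bind(("", PORT))

# Join the multicast group on the default interface; this can fail in
# restricted environments without multicast routing.
try:
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
except OSError as e:
    print(f"could not join {GROUP}: {e}")
```

If the granted value is far below what you asked for, the kernel's upper limit (e.g. `net.core.rmem_max` on Linux) is the next knob to look at.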

In order to check NIC settings, I keep the following PowerShell snippet handy. It gives me the current value, all valid values, and the maximum value for each parameter of each NIC in the NIC team. And it doesn’t even require admin privileges.

(Get-NetLbfoTeam "MyNicTeam").Members | Get-NetAdapterAdvancedProperty | ft DisplayName,DisplayValue,ValidDisplayValues,NumericParameterMaxValue

Other useful sources:

But even after ensuring all parameters were at their optimal values, the problems persisted. So I spent some time setting up perfmon with these network-related performance counters.

One counter immediately jumped out: Packets Received Discarded stayed pretty much constant on the machines where our application worked fine. But on the machines where we noticed packet loss, this number was growing fast.

This Technet blog post has a good explanation of that performance counter and tips on how to gather it from multiple machines remotely using PowerShell.

The Cause

It turns out the machines experiencing multicast message loss had a substantially smaller maximum receive buffer setting (512) than the machines that were working fine (2048 and 4096). Even though our setup script had correctly configured this parameter to its maximum value, that maximum was apparently still insufficient.

So we ended up upgrading the NICs in the cluster experiencing the problems, and the multicast message loss went away.

Upon closer examination, we also noticed TCP packet loss while our multicast application was running. But because resends were mostly successful, introducing only a small delay, this had gone unnoticed before.

Thoughts on Investing in Bitcoin and other Crypto-Currencies

About two years ago I became interested in Bitcoin and the underlying blockchain technology. At the time, it was mostly tech and finance circles discussing their potential.

Today, it seems everybody is talking about it. As the price of Bitcoin has soared, even mainstream media feel they need to report on it.

Is it a bubble?

Earlier this year, I figured that simply being interested in crypto-currencies wasn’t enough. In order to talk about it credibly, I wanted to be more invested in it (literally). So I decided to take a few hundred Euros and buy small amounts of Bitcoin, Ether and Ripple, even though they had already increased quite substantially in previous months. I reasoned that if the bubble burst, the financial loss would be bearable, but if it increased ten-fold, I would turn a nice profit. Either outcome seemed equally likely then.

Of all the crypto-currencies out there, I chose these three, because

  1. Bitcoin was (and still is) the most popular and thus in my mind has the largest chance of being driven up by speculators.
  2. Ether is the value token of the Ethereum smart-contract platform, which is a technology I thought had a lot of promise as the building blocks of many digital innovations yet to come.
  3. Ripple is different from other crypto-currencies in that it was not really competing with existing currencies and the banking system, but rather complementing them. In my opinion, it also neatly solves a few of the issues in other real-time gross settlement systems, such as the complexity of managing standard settlement instructions (SSIs).

When to get out?

Initially, my investments languished. But particularly in these last few weeks, interest in crypto-currencies has spiked, and I suppose that, as the rise in Bitcoin has people looking for alternatives, Ether and Ripple have been driven up as well.

Thus, my investments in Bitcoin and Ripple have now increased four- to five-fold (Ether a little less). Naturally, no one knows when this is going to end (but end it must, I believe). So I decided to limit my downside risk and sell about a fourth or a fifth of my stake in Bitcoin and Ripple.

This way, I’ve fully recouped my initial investment, but still have hundreds of Euros worth of crypto-currency left. Now, a couple of days later, Bitcoin has continued to appreciate, but I am not worried about the money I did not make by selling early. I can now sit back relaxed, knowing that I cannot lose anything, but still have a lot to gain as I watch the crypto-currency story unfold and hopefully realize its full potential.

[11-Jan-2018 Update]: Looks like I’m not the only one taking money off the table.

Should I invest?

I don’t give investment advice. Not in real life and particularly not on the internet.

But whenever you need to ask someone whether something is a good idea, you obviously don’t know enough about it to make an informed decision for yourself.

The Case of a Delphi Application hanging in ExitProcess

A colleague came to me with a Delphi application that would not shut down, but just hang. The application in question had been refactored such that one module was extracted into a DLL to be reused in another application. When this extracted module was loaded into the application and the application was closed, it would hang. If the module was not loaded, the application would shut down normally.


Debugging the application was initially unsuccessful. Stepping through our code, we verified that the shutdown logic executed normally, with destructors running as expected. Interestingly, it was not possible to break into the application once it had become unresponsive. Trying to pause the hung program from within the IDE would simply cause the IDE to hang as well.

Thus we used Process Explorer instead to look at the application’s threads and their callstacks.

There we saw that there was one thread stuck on a call to WaitForSingleObject which originated in our DLL code. Higher up the callstack was ExitProcess. I looked at the documentation for ExitProcess to look for ways in which it could deadlock. One sentence looked promising: “If one of the terminated threads in the process holds a lock and the DLL detach code in one of the loaded DLLs attempts to acquire the same lock, then calling ExitProcess results in a deadlock.” But since there was only one thread, this could not be it.

Looking next at what exactly happens inside ExitProcess, two other things jumped out at me:

  • All threads are terminated (except for the one calling ExitProcess).
  • All DLLs are unloaded.


It turns out the initial analysis that “the shutdown logic executed normally” was wrong. One of the shared units compiled into the DLL (through several layers of indirection) had a finalization section. In this finalization section, a background thread that had been created in the corresponding initialization section was being destroyed. As part of its destructor, the thread class waited for an event that was set when the thread had stopped executing.


This finalization section was running as part of the “all DLLs are unloaded” step of ExitProcess. Unfortunately, all threads (including the one created in the initialization section) had already been terminated. I am not quite sure how that is accomplished, but it apparently circumvents the normal thread termination logic that would have set the event signaling that the thread had stopped executing.

This is different for code in the main application, where finalization sections are run while the application is still in working order.


Instead of waiting for the thread to set its “stopped executing” event, I now first wait on the thread handle to check whether the thread is even still there to set the event. When run from the DLL’s finalization section, this detects the thread’s absence and simply returns.
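The fix can be sketched as follows. This is a Python analogy, not the actual Delphi code: joining the thread with a timeout plays the role of waiting on the Win32 thread handle, and the worker class is invented for illustration.

```python
import threading

class Worker:
    """Background worker that signals an event when it stops running."""

    def __init__(self):
        self.stopped = threading.Event()  # set by the thread itself on exit
        self._quit = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        self._quit.wait()   # stand-in for the real work loop
        self.stopped.set()  # normal termination path sets the event

    def shutdown(self):
        self._quit.set()
        # First check whether the thread is even still there to set the
        # event; if it was killed behind our back (as ExitProcess does
        # during DLL unload), waiting on `stopped` would block forever.
        self._thread.join(timeout=5)
        if not self._thread.is_alive():
            return           # thread is gone; don't wait on the event
        self.stopped.wait()  # thread still alive; wait for it as before
```

The key point mirrors the Delphi fix: the destructor checks for the thread’s liveness first and only then waits on the event it is supposed to set.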

Business Intelligence with PowerQuery: First steps getting JSON into Microsoft Excel

There are many business intelligence solutions large and small for knowledge workers to choose from. But due to its ubiquity, I assume many (myself included) just use Microsoft Excel to interactively analyze data.

A lot of data on the web is available via RESTful APIs returning JSON, e.g. the REST countries service I’ll use in this example.

With the 2016 release, Microsoft vastly expanded the data analysis capabilities of Excel compared to previous versions. And with the free PowerQuery plugin, these capabilities are available to Excel 2013 users as well.

Getting JSON into Excel

One of these capabilities is to retrieve JSON data from the web and turn it into an Excel table. Just go to Data > From Web and enter the URL, e.g.


Excel will figure out whether the given URL serves a regular web page (and offer to extract HTML tables from it) or JSON. In the latter case, a list of records is displayed in the Query Editor.

There is a Convert To Table button conveniently placed at the top left. But with every JSON document I’ve come across, this has required several additional steps to create a proper table.


Making a table

Instead, I recommend going to View > Advanced Editor and adding a manual conversion step by changing the query to this:

    let
        Source = Json.Document(Web.Contents("")),
        Table = Table.FromRecords(Source)
    in
        Table

The same can be achieved by adding a manual step in the Query Settings sidebar and entering the Table.FromRecords(Source) conversion as the function.

Voilà, you now have that JSON as a table with one record per country.

You may want to add some more steps, such as right-clicking the topLevelDomain column and choosing Extract Values… to get the list of domains as a comma-separated string in a single cell.
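Outside Excel, the same two transformations (a list of records becomes table rows; a list-valued column becomes comma-separated text) can be sketched in plain Python. The sample records below merely mimic the shape of the REST countries response:

```python
import json

# Sample records shaped like (a tiny subset of) the REST countries payload.
raw = json.loads("""[
    {"name": "Germany", "topLevelDomain": [".de"]},
    {"name": "United Kingdom", "topLevelDomain": [".gb", ".uk"]}
]""")

# Step 1: the list of records becomes the rows of a table (list of dicts),
# analogous to Table.FromRecords.
rows = [dict(record) for record in raw]

# Step 2: "Extract Values" -- join each list-valued cell into one string.
for row in rows:
    row["topLevelDomain"] = ", ".join(row["topLevelDomain"])

for row in rows:
    print(row["name"], "->", row["topLevelDomain"])
# → Germany -> .de
# → United Kingdom -> .gb, .uk
```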


Great usability for non-programmers

This is just one of the many conversions and transformations available via easy-to-use (but numerous) commands in the context menu and ribbon.

The great thing about Excel’s handling of all of these conversion steps is that they are applied non-destructively. In the Query Settings sidebar, you can see each one and click it to see the intermediate results it produces.

Not so great: M language

There is one thing, however, that is a bit disappointing: the programming language behind all of this is the M language, inspired by F#. This is unfortunate because it means any previous skills you have from working with data in Excel macros are useless; even basic things such as if statements are syntactically so different that I had to look them up.

Data Analytics and Minimalist Art

For 31 days starting at the beginning of July, I posted one new picture each afternoon. All of these pictures were created using the same basic shapes, following the rules set out here.

It was interesting to see how, despite their similarities, some pictures scored substantially more likes than others. One thing that was abundantly clear from the start was that using many popular hashtags significantly increased my chances of getting likes. Every Instagram guide will tell you that.

What I am interested in is whether there is a way to forecast how many likes a given picture might get. I’m planning to write a little application to analyze the dataset gathered in July for correlations. This would be an interesting opportunity to try out TensorFlow or Azure Machine Learning and Cognitive Services.
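As a first step toward that analysis, a simple correlation check could look like this. The numbers are invented for illustration, not my actual July data:

```python
from math import sqrt

# Hypothetical July data: hashtags used per post vs. likes received.
hashtags = [3, 5, 8, 10, 15, 20, 25, 30]
likes = [12, 15, 22, 25, 34, 41, 47, 58]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(hashtags, likes)
print(f"Pearson correlation between hashtag count and likes: {r:.2f}")
```

A coefficient near 1 would confirm what every Instagram guide says about hashtags; the interesting part is which of the other picture features correlate with likes at all.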

Reading up on Blockchain and Distributed Ledger Technology

Last year, I was reading up on Bitcoin, blockchain and beyond. Since then, there have been several interesting developments in distributed ledger technology (DLT).

If you’re new to the technology, I highly recommend this introductory, plain English guide to blockchain.

I also came across this great article defining criteria to avoid pointless blockchain projects, and its follow-up on four genuine blockchain use cases.

R3 Corda

For one, R3, which I thought then and still think today shows a lot of promise, has released the code for Corda, its distributed ledger project. They also published a non-technical whitepaper as an introduction and two webinar videos: Introduction to Corda and Corda Developers’ Tutorial. There is also this excellent non-technical 18-second definition of DLT by Richard Gendal Brown, CTO of R3.

R3 also offered its code to the Hyperledger project.


Hyperledger isn’t a distributed ledger per se, but contains multiple DLT projects, e.g. Fabric, which is backed by IBM. While you can run Hyperledger Fabric on your own machines, IBM also gives developers an opportunity to play with the technology in its Bluemix cloud.

Unlike Corda, which was built from the ground up for the financial services industry, finance is only one of the industries Hyperledger is targeting. There are, however, a number of projects underway in financial services that use Hyperledger, as their proof-of-concept tracker shows.

One of those projects was undertaken by Germany’s central bank, Deutsche Bundesbank, and the country’s largest exchange operator, Deutsche Börse. A November 2016 speech by Carl-Ludwig Thiele, member of the executive board of Deutsche Bundesbank, contained mostly questions about the new technology. His speech from January 2017 already presented a prototype to handle simple settlement, payment and corporate actions.


There are a number of interesting projects underway to apply distributed ledger technology to finance.

Still, there are a lot of questions to be addressed regarding distributed ledger technology, as this position paper by SWIFT and Accenture from last year points out.

The German IT industry association Bitkom looks at some of these, e.g. the legal ramifications of distributed ledger technology in banking (in German).

It is interesting to see though that regulators and central banks are already actively involved even though distributed ledger technology is still in its infancy in the financial services industry.