If you're using KahaDB as the data store for ActiveMQ, this article may help you analyze and optimize ActiveMQ's footprint on the disk volumes used for persistent messaging. I'll go through some techniques that helped in one circumstance where KahaDB was exceeding its 240G allocation. After some tuning, we managed to thin down the footprint to less than 60G while minimizing fluctuations, which ultimately stabilized an environment that was quickly running out of resources.
A note about consumers...
Not all consumers behave the same, even within a single environment. We tend to talk about consumers as "fast" or "slow", but these terms are relative, of course. Consumer speed is generally measured as the time between when a message is received and when an acknowledgement is sent back to the message broker. Speed is one characteristic by which consumers can vary, and an important one to consider in the context of persistent messaging, because only fully acknowledged messages are candidates for garbage collection on disk.
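To make the receive-to-acknowledge window concrete, here is a minimal sketch of a deliberately slow JMS consumer. The broker URL and queue name are hypothetical, and CLIENT_ACKNOWLEDGE mode is used so the acknowledgement is explicit in the code:

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class SlowConsumer {
    public static void main(String[] args) throws Exception {
        // Hypothetical broker URL and queue name.
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        // CLIENT_ACKNOWLEDGE makes the acknowledgement an explicit step.
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageConsumer consumer =
                session.createConsumer(session.createQueue("EXAMPLE.QUEUE"));

        Message message = consumer.receive();
        // "Consumer speed" is effectively the time spent here, between
        // receive() and acknowledge(). Until acknowledge() is called, the
        // journal entry for this message cannot be garbage collected.
        Thread.sleep(5_000); // stand-in for slow processing
        message.acknowledge();

        connection.close();
    }
}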
One bad apple...
It is common to see KahaDB log files accumulate on disk when there is a particularly slow or malfunctioning consumer in the mix. One thing I often see is a complete lack of consumers on ActiveMQ's dead letter destination. Whether slow consumer or no consumer, if messages are enqueued for extended periods of time, KahaDB will keep log files on disk to track them.
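A quick way to confirm that messages are piling up on the dead letter destination (ActiveMQ.DLQ by default) is to browse it rather than consume it. As a minimal sketch, again assuming a hypothetical broker at tcp://localhost:61616, a QueueBrowser peeks at the queue without consuming or acknowledging anything:

import java.util.Enumeration;
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DlqPeek {
    public static void main(String[] args) throws Exception {
        // Hypothetical broker URL; ActiveMQ.DLQ is the default shared DLQ name.
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Browsing does not consume or acknowledge messages.
        QueueBrowser browser = session.createBrowser(session.createQueue("ActiveMQ.DLQ"));
        int count = 0;
        for (Enumeration<?> e = browser.getEnumeration(); e.hasMoreElements(); e.nextElement()) {
            count++;
        }
        System.out.println("Messages sitting on ActiveMQ.DLQ: " + count);

        connection.close();
    }
}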
ActiveMQ creates KahaDB log files that are 32MB on disk by default. So even if messages are only a few KBs each, a single unacknowledged message referenced somewhere in a log file will keep the entire 32MB file from being garbage collected. It is possible to set the size of the KahaDB log files in the activemq.xml configuration, like so:
<persistenceAdapter>
    <kahaDB journalMaxFileLength="8mb" />
</persistenceAdapter>
Smaller log files may mitigate some of the disk accumulation, but note that they also mean more disk I/O, which may have some impact on performance. One bad apple can still spoil the barrel, but at least you have smaller barrels.
Eggs in the same basket...
Simply adjusting the size of KahaDB log files may improve garbage collection a little, but it does not address the underlying issue: ActiveMQ destinations are consumed at different rates and have different expectations for queue depth and message time in-queue, often by design.
When destinations (queues and topics) vary drastically in terms of consumer profile and the nature of message accumulation, it can be advantageous to dedicate an isolated KahaDB store to each. This does not necessarily mean that you need to create specific brokers for specific purposes, although that can be an effective strategy from many angles, including risk mitigation and loose coupling. Even within the same broker, KahaDB can be configured to maintain an isolated log store for each destination, like this:
<persistenceAdapter>
    <mKahaDB directory="${data}/kahadb">
        <filteredPersistenceAdapters>
            <filteredKahaDB perDestination="true">
                <persistenceAdapter>
                    <kahaDB />
                </persistenceAdapter>
            </filteredKahaDB>
        </filteredPersistenceAdapters>
    </mKahaDB>
</persistenceAdapter>
Returning to the example where the DLQ has enqueued messages but no consumers: the above configuration drastically reduces the overall impact on disk. Even with the default 32MB log file size, the DLQ's message footprint is now concentrated in its own isolated store, instead of preventing garbage collection for every destination in the broker.
It can be convenient for many reasons to avoid having all of your eggs (messages) in the same basket. With isolated stores, you can:
- quickly identify which destinations are not being garbage collected in a timely manner due to slow acknowledgements (see the sketch after this list)
- recover from data corruption on a single destination without affecting others
- optimize garbage collection when destinations have significant variation in consumer speed and therefore different expectations for queue depth and message time in-queue
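With the perDestination layout above, each destination gets its own subdirectory under ${data}/kahadb, with the directory name encoding the destination type and name. That makes the first point easy to act on: a quick scan of store sizes points directly at the offender. A minimal sketch, assuming a hypothetical data directory of /opt/activemq/data/kahadb:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class StoreSizes {
    public static void main(String[] args) throws IOException {
        // Hypothetical path; point this at your broker's kahadb root.
        Path root = Paths.get("/opt/activemq/data/kahadb");
        try (Stream<Path> stores = Files.list(root)) {
            stores.filter(Files::isDirectory)
                  .forEach(dir -> System.out.printf("%6d MB  %s%n",
                          sizeOf(dir) / (1024 * 1024), dir.getFileName()));
        }
    }

    // Total bytes in one destination's store (mostly db-*.log journal files).
    private static long sizeOf(Path dir) {
        try (Stream<Path> files = Files.walk(dir)) {
            return files.filter(Files::isRegularFile)
                        .mapToLong(p -> p.toFile().length())
                        .sum();
        } catch (IOException e) {
            return 0L;
        }
    }
}

A store that stays large run after run, while its neighbors shrink, is exactly the slow-acknowledgement signature described above.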