Apache Storm is used to process real-time data streams, and when we talk about performance tuning in Storm, latency, throughput, and resource consumption are the major areas we need to look into.
The following factors should be considered during Apache Storm tuning.
- What type of data sources do we have?
- At what frequency are these data sources sending messages?
- What is the size of the messages each data source sends?
- Which sink in the cluster is processing messages slowly?
An Apache Storm cluster has a Nimbus node and multiple Supervisor and Worker nodes. The Nimbus node carries less load than the Supervisor and Worker nodes, since they handle all the computation work; it is therefore recommended to reserve hardware resources for the Supervisor and Worker nodes.
Apache Storm provides the following suggestions for performance tuning.
1. Buffer Size of Message Queue
Buffer size refers to the message queues used by Spouts and Bolts. A Spout reads messages from an external source and parks them in the message queue until they are consumed by the receiver. Typically, an Apache Storm message queue has multiple producers and a single receiver.
Apache Storm provides two buffer size settings that can be configurable.
- topology.executor.receive.buffer.size: This parameter sets the size of the receive message queue of each Spout and Bolt executor.
- topology.transfer.buffer.size: This parameter sets the size of each Worker's outbound (transfer) message queue.
The basic guideline for message queue size is that it should be neither very small nor very big: if the size is too small it will hamper throughput, and if it is too big, more memory is required to hold those messages in the in-memory buffer.
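As a sketch, these two settings can be placed in the topology configuration or storm.yaml. The values below are illustrative examples, not recommendations; in Storm 2.x the executor receive buffer size must be a power of 2.

```yaml
# Illustrative values only; tune against your own workload.
# Receive queue of every Spout/Bolt executor (must be a power of 2 in Storm 2.x)
topology.executor.receive.buffer.size: 32768
# Outbound (transfer) queue of each worker process
topology.transfer.buffer.size: 1000
```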
2. Batch Size of Messages
Messages generated by producers can be written to the consumer either in batch mode or one message at a time. In some cases it takes a while to fill the batch buffer; during that time the downstream process waits for the buffer to be flushed, which increases the delivery latency of those messages.
The guideline is to set the batch size to 1 for low latency; for high throughput, we can set the batch size to larger values such as 10, 100, or 1000.
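In Storm 2.x, batching is controlled by the following settings; the batch sizes shown here are illustrative examples for a throughput-oriented configuration.

```yaml
# Illustrative values only.
# Messages buffered before an executor writes a batch downstream (default 1 = no batching)
topology.producer.batch.size: 100
# Batch size used when draining a worker's transfer queue (default 1)
topology.transfer.batch.size: 100
```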
3. Flush Tuple Frequency
Constant flushing of messages is important when batching is enabled and batches take a long time to fill up for downstream components. Flushing is achieved by inserting flush tuples into the receive queues of Spout and Bolt executors.
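In Storm 2.x the flush interval is set in milliseconds; the value below is illustrative. A shorter interval bounds the extra latency that batching can add.

```yaml
# How often (in ms) flush tuples are inserted so partially filled batches get emitted
topology.flush.tuple.freq.millis: 5000
```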
4. Sample Rate Control
The sampling rate defines how often metrics computations are performed on the executors of Spouts and Bolts. The control parameter that defines the sample rate is topology.stats.sample.rate. If we set this parameter to 1, stats are computed for every message; lower values reduce the metrics overhead at the cost of less precise stats.
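For illustration, the setting can be placed in the topology configuration; the value shown is Storm's documented default, meaning stats are computed for roughly 1 in 20 tuples.

```yaml
# Fraction of tuples for which stats are computed (1 = every tuple)
topology.stats.sample.rate: 0.05
```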
5. Scale-out with Single Worker mode
Instead of running multiple worker processes for a topology, configuring a single worker can provide very good throughput and better performance on the same cluster of nodes, because it avoids inter-worker message transfer; the trade-off is the added burden of managing more topology instances when scaling out.
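Assuming the topology fits into one worker process, single-worker mode can be sketched with the following topology configuration setting:

```yaml
# Run the whole topology inside a single worker process to avoid inter-worker transfer overhead
topology.workers: 1
```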