Chapter 11. Tuning Mule

 

This chapter covers

  • Identifying performance bottlenecks
  • Staged event-driven architectures
  • Configuring processing strategies

Whether you have predetermined performance goals and want to be sure you reach them, or you've run into issues with your existing configuration and want to solve them, the question of tuning Mule will come up sooner or later in the lifetime of your projects. Like any middleware application, Mule is constrained by the limits of memory size, CPU performance, storage, and network throughput. Tuning Mule is about finding the sweet spot where your business needs meet the reality of these software and hardware constraints.

Just as a race car needs tuning to adapt to the altitude of the track or the weather it will race in, Mule may require configuration changes to deliver its best performance in the particular context of your project. Up to this point in the book, we've relied on the default configuration of Mule's internal thread pools and haven't questioned the performance of its different moving parts, whether standard or custom. We'll now tackle these tough questions.
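To make this concrete, here is a minimal sketch of the kind of configuration change this chapter explores, assuming a Mule 3 XML configuration: a named queued-asynchronous processing strategy caps the thread pool that a flow draws from. The flow name, strategy name, and thread count are illustrative placeholders, not values taken from this book.

    <?xml version="1.0" encoding="UTF-8"?>
    <mule xmlns="http://www.mulesoft.org/schema/mule/core"
          xmlns:vm="http://www.mulesoft.org/schema/mule/vm"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="
            http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
            http://www.mulesoft.org/schema/mule/vm http://www.mulesoft.org/schema/mule/vm/current/mule-vm.xsd">

        <!-- Hypothetical tuning example: declare a named processing strategy
             that caps the number of threads available to the flows using it. -->
        <queued-asynchronous-processing-strategy name="tunedStrategy" maxThreads="25"/>

        <!-- Reference the named strategy instead of relying on the default one. -->
        <flow name="orderProcessingFlow" processingStrategy="tunedStrategy">
            <vm:inbound-endpoint path="orders" exchange-pattern="one-way"/>
            <logger level="INFO" message="Processing order #[message.id]"/>
        </flow>
    </mule>

Sections 11.1 and 11.2 explain the staged event-driven architecture behind this model and how processing strategies and their thread pools behave, and section 11.3 shows how to determine whether a change like this addresses an actual bottleneck.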

11.1. Staged event-driven architecture

11.2. Understanding thread pools and processing strategies

11.3. Identifying performance bottlenecks

11.4. Summary