Chapter 5. Location transparency

 

The previous chapter introduced message passing as a way to decouple collaborating objects. Making communication asynchronous and nonblocking instead of calling synchronous methods enables the receiver to perform its work in a different execution context, such as a different thread. But why stop at interactions within one computer? Message passing works the same way in both local and remote interactions: there is no fundamental difference between scheduling a task to run later on the local machine and sending a network packet to a different host to trigger execution there. In this chapter, we will explore the possibilities this perspective opens up, as well as its consequences for quantitative aspects such as latency, throughput, and the probability of message loss.
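
To make the similarity concrete, here is a minimal sketch using Akka classic actors in Scala. The system name, the remote host and port, and the Worker actor are illustrative assumptions, not part of this chapter's examples, and resolving the remote path presumes that Akka remoting has been enabled in the configuration. The point is only that the send site looks exactly the same whether the recipient lives in the same JVM or on another machine.

```scala
import akka.actor.{Actor, ActorSystem, Props}

// A trivial recipient; its location is irrelevant to the sender.
class Worker extends Actor {
  def receive: Receive = {
    case job: String => println(s"processing: $job")
  }
}

object LocationTransparencyDemo extends App {
  val system = ActorSystem("demo")

  // A local recipient, created in this JVM.
  val local = system.actorOf(Props(new Worker), "worker")

  // A remote recipient, looked up by a (hypothetical) address on another host.
  // This resolves only if remoting is configured and such a node exists.
  val remote = system.actorSelection("akka://demo@other-host:25520/user/worker")

  // The send site is identical in both cases: an asynchronous, nonblocking send.
  local  ! "encode video segment 1"
  remote ! "encode video segment 2"
}
```

If no such remote node is actually running, the second message is quietly dropped as a dead letter rather than failing at the send site, which is precisely the kind of consequence, message loss, that this chapter examines later on.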

5.1. What is location transparency?

5.2. The fallacy of transparent remoting

5.3. Explicit message passing to the rescue

5.4. Optimization of local message passing

5.5. Message loss

5.6. Horizontal scalability

5.7. Location transparency makes testing simpler

5.8. Dynamic composition

5.9. Summary