When you start developing a service or system, it is much easier to make it work as a single thread in a single process. Single threading allows rapid development of a concept at the expense of concurrency. Single threading just makes things so much more predictable.
But eventually a service needs the ability to handle more than a single user at a time. Typically, the process either forks itself or uses a threading library (like pthreads). Multithreading an application introduces new complexities: IPC, mutexes, semaphores, race conditions, and atomicity all become troublesome concepts. For some applications, multithreading is the only way to go. Applications we use every day would be unusable as single-threaded programs. Complex AJAX websites would be horrible -- the A in AJAX stands for Asynchronous, after all.
Most things accomplished with threading can also be accomplished with multiple processes. IPC can be done through a resource other than shared memory. Creating a socket connection between two processes is a very simple way to implement IPC, and it avoids some of the complications inherent in semaphores. So why don't developers leverage this approach more often? It is typically viewed as wasteful of resources, and it typically performs more slowly than two threads communicating inside the same process. For these reasons I have avoided multiple processes unless the functionality is intended for physically different hosts.
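As a rough sketch of what I mean (illustrative Python; none of the names come from a real project, and the same fork-plus-socketpair pattern works just as well in C), two processes can trade messages over a connected socket pair instead of shared memory:

```python
import os
import socket

# The parent forks a child; each process keeps one end of a
# connected socket pair and they exchange messages over it.
parent_end, child_end = socket.socketpair()

pid = os.fork()
if pid == 0:
    # Child: read a request, send a reply, and exit.
    parent_end.close()
    request = child_end.recv(1024)
    child_end.sendall(b"ack: " + request)
    child_end.close()
    os._exit(0)

# Parent: send a request and wait for the reply.
child_end.close()
parent_end.sendall(b"ping")
print(parent_end.recv(1024))  # b'ack: ping'
parent_end.close()
os.waitpid(pid, 0)
```

No locks, no shared state -- just two processes and a byte stream.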
Then Google Chrome came along and I had to partially rethink the beliefs above. Chrome is not a single process running multiple threads; it is a collection of processes, each running multiple threads. Google decided that more upfront memory consumption was an acceptable trade for better crash isolation. Check out the Chrome comic book for a crash course on Chrome's architecture in comic-book form.
So now I am trying to write a database service. Like all modern database servers, it needs to support replication, and in particular replication between hosts. In development, I normally run two processes on the same host and pretend those processes are running on different machines. It was at this point that the lightbulb lit up and I started thinking about Chrome's architecture. What if a database server were a collection of single-threaded processes running on a single host? The database equivalent of the prefork Apache MPM. The model is not new, and it isn't even unique, but it isn't common in the DB world. kdb+ is one of the few databases I have heard of that tries this approach.
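To make the idea concrete, here is a hedged sketch of that prefork-style layout (the address, port, worker count, and toy request handling are all made up for illustration): one listening socket is created up front, then a handful of single-threaded workers are forked, each accepting connections on it.

```python
import os
import socket

NUM_WORKERS = 4  # illustrative; not a recommendation

# One shared listening socket, inherited by every forked worker.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", 5433))  # placeholder address and port
listener.listen(64)

def worker_loop(sock):
    # Each worker stays single threaded: one client at a time.
    while True:
        conn, _ = sock.accept()
        conn.recv(1024)  # read the (toy) request
        conn.sendall(b"handled by pid %d\n" % os.getpid())
        conn.close()

for _ in range(NUM_WORKERS):
    if os.fork() == 0:  # child: run the accept loop forever
        worker_loop(listener)
        os._exit(0)

for _ in range(NUM_WORKERS):  # parent: just reap the children
    os.wait()
```

Each worker keeps the single-threaded simplicity, and a crash in one worker doesn't take the others down with it -- the same trade Chrome makes.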
Unlike the prefork model, I am thinking of leaving them completely unrelated processes, essentially leveraging the replication functionality as my IPC mechanism and some other technology to do the load balancing. The simplicity of the approach looks appealing at first glance, and the theoretical redundancy and fault tolerance in the model are intriguing.