Regarding the concurrency model, I would like multiple clients to access the same application simultaneously. Of course, failures of certain nodes (either clients or servers) must also be considered. Lastly, I want to be able to run computations on both the client and the server side.
Setting the Web architecture aside for a moment, I can identify three approaches to the concurrency aspect:
- Shared space: by using a memory space shared by all interested nodes, one can build applications out of (complex) contraptions of locks and semaphores. This is tedious, error-prone, and generally considered something to avoid in medium-to-large applications.
- Message passing: the Actors approach, in which everything is an object that is operated on solely by exchanging messages with it. Most of the time this is simplified to a model where only processes (or nodes, or threads) send messages to each other (as in Erlang).
- Declarative data flow: this is the model used by Termite, and I've described it extensively in the previous post. In short, each process can send messages, data, and execution flow to another process.
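To make the first two approaches concrete, here is a minimal Python sketch (the names, such as `counter_actor`, are mine and purely illustrative): first a shared counter guarded by a lock, then the same counter owned by a single actor that other threads talk to only through messages.

```python
import threading
import queue

# --- Shared-space style: two threads mutate one counter under a lock. ---
counter = 0
lock = threading.Lock()

def inc_shared(times):
    global counter
    for _ in range(times):
        with lock:          # forget this lock and updates can be lost
            counter += 1

threads = [threading.Thread(target=inc_shared, args=(1000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 2000

# --- Message-passing style: one actor owns the state; no locks needed. ---
inbox = queue.Queue()

def counter_actor():
    total = 0
    while True:
        msg, reply = inbox.get()
        if msg == "inc":
            total += 1
        elif msg == "get":
            reply.put(total)
        elif msg == "stop":
            return

actor = threading.Thread(target=counter_actor)
actor.start()
for _ in range(2000):
    inbox.put(("inc", None))
reply = queue.Queue()
inbox.put(("get", reply))
actor_total = reply.get()
print(actor_total)  # 2000
inbox.put(("stop", None))
actor.join()
```

The actor version trades the lock for a mailbox: state is never shared, so there is nothing to guard.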
The declarative data-flow model has several advantages:
- It avoids the coding complexity that shared-data management requires;
- It does not demand separate programming of parallel processes;
- Declarative DSLs save many lines of code, with abstractions based on replicating continuations between nodes.
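The idea of sending execution flow, not just data, can be sketched in Python within a single OS process (so no serialization is needed yet): a worker "node" executes whatever closures arrive in its mailbox and posts results back. This is only an analogy for what Termite does across real nodes; all the names here are hypothetical.

```python
import threading
import queue

def node(mailbox):
    """A worker 'node': run each computation that arrives, reply with its result."""
    while True:
        item = mailbox.get()
        if item is None:          # shutdown signal
            return
        thunk, reply = item
        reply.put(thunk())        # execute the received computation

mailbox, reply = queue.Queue(), queue.Queue()
worker = threading.Thread(target=node, args=(mailbox,))
worker.start()

# "Migrate" a computation: the closure captures local data and is shipped whole.
base = 40
mailbox.put((lambda: base + 2, reply))
result = reply.get()
print(result)  # 42

mailbox.put(None)
worker.join()
```

Between genuinely distributed nodes, the closure could not simply be put on a queue; it would have to be serialized, which is exactly the hard part discussed below.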
Bringing this model to a Web setting raises several questions:
- How are computations going to be transferred to a given client, or back to the server?
- How can a computation be transferred at any point of the application's execution?
- Given current bandwidth limitations, how are computations going to be serialized: as processes, continuations, closures? What is the minimal relevant data set that needs to be passed along the wire?
- It is a fact that the client must have an engine (in ECMAScript, I suppose) to process some part of the application, be it the user interface or the application's domain logic. This engine must be prepared to receive this sort of computation from other nodes (as well as to send some back to them).
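As a very rough sketch of the serialization question, Python's standard library can already turn a self-contained function into bytes and rebuild a callable from them on the other side. This is nowhere near what Termite-style continuation serialization involves (no free variables, no stack, no environment travel here), but it shows the shape of the problem:

```python
import marshal
import types

def compute(x):
    # A self-contained computation: no free variables, no global lookups.
    return x * x + 1

# "Sender" side: marshal the function's code object into a byte string,
# the kind of payload that could go over the wire.
payload = marshal.dumps(compute.__code__)

# "Receiver" side: rebuild a callable from the received bytes.
code = marshal.loads(payload)
restored = types.FunctionType(code, {})

print(restored(5))  # 26
```

The minimal data set here is just the compiled code; the moment the function closes over data or a continuation, that environment must be captured and shipped too, which is precisely where the bandwidth question bites.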