
Heterogeneous/Homogeneous deployment and server cluster


1. Deployment scenarios

In an enterprise development environment the classic client/server architecture is the standard architectural model. The server is hosted in a single process and all client applications access this server process remotely (in .NET via WCF etc.).

Under certain circumstances we need more than one server, and we distinguish two models:
  1. Homogeneous deployment - redundant servers
  2. Heterogeneous deployment - separation of concerns

1.1. Homogeneous deployment

Let's take the example of a server application containing a couple of stateless (!) services. Now the number of client calls per second rises beyond what a single server can bear. This is the moment the development team starts thinking about scaling up.

Normally this means a second, identical server is added. This server carries the same deployment configuration as the first one - a redundant copy.

Now the mass of clients faces two servers, each of which can fulfill any request equally well. This is called load balancing. The strategy for choosing the server for the next call can be round-robin, fastest first, etc. The selection itself takes place within the client applications. It is also possible to use a hardware-based load balancer, which acts as the requested service facade and delegates each request to the appropriate server.
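Client-side round-robin selection can be sketched in a few lines. The following is a minimal, language-agnostic illustration in Python (the endpoint strings and class name are invented for the example), not the .NET/WCF implementation itself:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Client-side load balancer: hands out server endpoints in turn."""

    def __init__(self, endpoints):
        self._endpoints = cycle(endpoints)

    def next_endpoint(self):
        return next(self._endpoints)

# Two redundant, identically deployed servers.
balancer = RoundRobinBalancer(["server1:8080", "server2:8080"])
calls = [balancer.next_endpoint() for _ in range(4)]
```

A fastest-first strategy would replace the `cycle` with a selection based on measured response times, but the client-side placement of the decision stays the same.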



This scenario works without any changes to the container one application server.

1.2. Heterogeneous deployment

A bit trickier is the heterogeneous deployment, where the intersection of the services deployed on the servers is not empty. If the intersection is empty, the servers divide the work by concern and each client knows which server to ask.

But if the client is to execute server-side methods without knowing which services are deployed where, the target server has to find out whether it can handle the request itself or must delegate it to another server.

1.2.1. Cluster awareness

Let's take an example:



Component A resides on both servers, but B and C live only on server 1 and Y and Z only on server 2.

If a client call addressing A arrives at server 2 and A needs C to complete the task --> big problem!

The solution can look like the following:
  1. The call arrives at Component A in server 2
  2. Component A owns a proxy to the interface of the needed Component C
  3. When the proxy is called, the infrastructure delivers a remote proxy as a virtual counterpart of the created Spring proxy (without Component A knowing this)
  4. The call to the remote proxy results in a WCF-based remote call to the real implementation on server 1
  5. And then back to the sender...

What is needed for this sequence:
  1. If a component declares a dependency on a component that does not exist locally but is accessible globally, the infrastructure must not throw an exception
  2. For this, the local registry must be aware of the registry components existing in other server processes
  3. This means a newly started server must learn about
    • already running servers
    • stopping servers
    • starting servers
  4. Since a server does not know the addresses of the other servers in advance, it cannot establish a TCP-based connection to them. Therefore multicast is the only way to discover them

1.2.2. Multicast detection

Think about the following: a container one application server starts with its core components, its functional components, and a multicast detector and sender component. Whenever the multicast sender component starts working, it sends a greet message to the multicast group. Each already running application server detects this greeting via its multicast detector and records the newcomer, with its IP address, as a new partner. After this recognition, all existing servers respond to the newly arrived server with a cheers message, and the initial sender records all responders as new partners with their IP addresses.

In the end each running server has a list of all other existing servers.
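The greet/cheers bookkeeping above can be sketched as follows. This is a minimal Python model in which message delivery is simulated in-process; a real server would send the greet, cheers and goodbye messages over UDP multicast, and the class name and addresses are invented for the example:

```python
class ClusterMember:
    """One application server's partner bookkeeping (delivery simulated in-process)."""

    def __init__(self, address, group):
        self.address = address
        self.group = group          # stands in for the multicast group
        self.partners = set()

    def start(self):
        # Greet: announce ourselves to everyone already in the group.
        for member in self.group:
            member.on_greet(self)
        self.group.append(self)

    def on_greet(self, newcomer):
        self.partners.add(newcomer.address)
        newcomer.on_cheers(self)    # cheers: respond to the newcomer

    def on_cheers(self, responder):
        self.partners.add(responder.address)


group = []
s1 = ClusterMember("10.0.0.1", group); s1.start()
s2 = ClusterMember("10.0.0.2", group); s2.start()
s3 = ClusterMember("10.0.0.3", group); s3.start()
# Each server now has a list of all other existing servers.
```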

If a server quits work in the cluster, it sends a multicast goodbye message and all receivers delete the server's entry from their lists.

If a server crashes, it has no possibility to say goodbye to the others. Therefore a keep-alive mechanism must exist as well, e.g.: every second, each server calls the remote registry of every other server via TCP. If the call cannot be completed, that server is assumed to be dead. As an optimization, the first server detecting a dead partner reports this via multicast to all remaining servers.
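One sweep of such a keep-alive check might look like this sketch (Python, with an injected `is_alive` callback standing in for the TCP call to the partner's remote registry; names are illustrative):

```python
def keep_alive_sweep(is_alive, partners):
    """Ping every partner once; drop and report the first dead one found.

    is_alive(partner) stands in for the TCP call to the partner's
    remote registry; iteration is sorted only to keep the sketch
    deterministic.
    """
    for partner in sorted(partners):
        if not is_alive(partner):
            partners.discard(partner)
            return partner  # would now be announced via multicast to the rest
    return None


partners = {"server1", "server2", "server3"}
# Simulate server2 having crashed.
dead = keep_alive_sweep(lambda p: p != "server2", partners)
```

In the optimized scheme, the returned dead partner would be broadcast so the remaining servers can prune their lists without probing it themselves.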

1.2.3. Multiplexing remote service calls

Okay, now we can detect new servers, existing servers, departing servers and dead servers. If a component wants to call a service of a component that is not deployed locally, it can check all other registries in order to find a suitable communication partner. Fine so far. But: the call to the real implementation is a remote call via WCF. Therefore the service must be a WCF service (operation declaration, data contract etc.). What if I (as a component developer) do not want to be bothered with WCF details?

This leads to the fancy idea of a multiplexing gateway service in the infrastructure. The created Spring proxy serializes all parameters, calls the multiplexing service (only one method needed: Execute(string[] serializedParams)) and tells it which service (contract) and which method is meant... The multiplexer then deserializes the parameters, takes the real implementation out of the registry and calls the appropriate method. The result is serialized as well and given back to the initially calling Spring proxy, which deserializes it and returns it - that's it ;).
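The gateway idea can be sketched like this. Python and JSON stand in for the WCF service and its data-contract serialization; `Multiplexer`, `Calculator` and the contract name are invented for the example:

```python
import json

class Multiplexer:
    """Generic gateway: one Execute-style method dispatches to any registered service.

    JSON stands in for the WCF data-contract serialization; the real
    gateway would be the single WCF-exposed Execute(string[]) operation.
    """

    def __init__(self, registry):
        self._registry = registry   # contract name -> real implementation

    def execute(self, contract, method, serialized_params):
        params = [json.loads(p) for p in serialized_params]
        impl = self._registry[contract]     # look up the real implementation
        return json.dumps(getattr(impl, method)(*params))


class Calculator:                   # hypothetical service implementation
    def add(self, a, b):
        return a + b


mux = Multiplexer({"ICalculator": Calculator()})
# The calling Spring proxy would serialize the parameters like this...
result = mux.execute("ICalculator", "add", [json.dumps(2), json.dumps(3)])
# ...and deserialize the returned string on the way back.
```

Only the multiplexer needs to be a WCF service; the component services themselves stay plain objects behind the registry, which is exactly the point of the idea.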

Last edited Jun 8, 2010 at 6:37 PM by harkon, version 4
