EcoSys Application Server Clustering


Overview

The multi-server clustering feature in EcoSys provides application-level synchronization among application server instances sharing a single database.

This clustering is independent from Java application server clustering, operating system-level clustering, and database clustering. Java application server clustering is not recommended where EcoSys application clustering is active.

EcoSys application clustering is not intended to address fault tolerance or intra-session load balancing.

Clustering functionality

When you configure two or more EcoSys servers in a cluster, each server is configured to communicate with the others. In this mode, any change to the global caches (enterprise data) on one EcoSys server is automatically announced to the other servers in the cluster via inter-server messaging. This messaging is throttled by a default five-second delay (configurable), so a change on one server appears on the others in under five seconds; a conceptual sketch of this throttling pattern follows the list below. This data synchronization applies to globally cached data, including:

  • Global types: Custom field types and assignments, Cost object types, Organization types, Rate types, Rate tables, Version types, Category types, Cost Accounts, and Funding types

  • Other global system settings

  • Organizations

  • Versions

  • Cost objects
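The throttled messaging can be pictured as a simple debounce: a change marks the cache as dirty, and at most one notification is sent to the other members per throttle window. The following Java sketch is illustrative only and is not EcoSys source code; the class and method names are hypothetical.

import java.util.Set;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Conceptual sketch only (not EcoSys source code): batch cache-change
// notifications and send at most one message per throttle window to each
// of the other cluster members.
public class ThrottledClusterNotifier {
    private final Set<String> peerUrls;       // direct URLs of the other cluster members
    private final long throttleMillis;        // e.g. 5000 ms, matching the default delay
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private boolean notifyPending = false;

    public ThrottledClusterNotifier(Set<String> peerUrls, long throttleMillis) {
        this.peerUrls = peerUrls;
        this.throttleMillis = throttleMillis;
    }

    // Called whenever globally cached (enterprise) data changes on this server.
    public synchronized void onGlobalCacheChanged() {
        if (!notifyPending) {
            notifyPending = true;
            scheduler.schedule(this::broadcast, throttleMillis, TimeUnit.MILLISECONDS);
        }
    }

    private synchronized void broadcast() {
        notifyPending = false;
        for (String url : peerUrls) {
            // A real implementation would make an HTTP call asking the peer
            // to refresh the affected global caches.
            System.out.println("notify " + url);
        }
    }
}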

You configure multi-server clustering in FMServerSettings.properties. For more information, see the Administration and Installation documentation delivered with the software.

You can view the health and status of the cluster in System Info > Cluster. Command line batch jobs can participate in the cluster in a notify-only mode, so updates made there appear in the cluster immediately. 

Cluster requirements

EcoSys supports load balancing across clustered application servers, subject to the following requirements:

  • The load balancer or proxy in front of the application servers must be configured for sticky sessions.

  • There must not be an artificial HTTP timeout enforced for connections between the load balancer and application instances.

  • The application servers in a single cluster must be able to make HTTP requests among each other (a simple reachability check is sketched after this list).

  • The application servers in a single cluster must all be running the same version of EcoSys.
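To verify the inter-server HTTP requirement above, you can confirm from each application server that the direct URLs of the other cluster members respond over HTTP. The following Java sketch is illustrative only; the hostnames are taken from the example configuration in the next section and should be replaced with your own.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.List;

// Illustrative check: confirm that this node can reach the other cluster
// members over HTTP using their direct URLs (not the load balancer).
public class ClusterReachabilityCheck {
    public static void main(String[] args) throws Exception {
        // Example hostnames; substitute the direct URLs of your own servers.
        List<String> peers = List.of(
                "http://prod_b.mycorp.local:8080/ecosys",
                "http://prod_c.mycorp.local:8080/ecosys");

        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();

        for (String peer : peers) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(peer))
                    .timeout(Duration.ofSeconds(10))
                    .GET()
                    .build();
            HttpResponse<Void> response =
                    client.send(request, HttpResponse.BodyHandlers.discarding());
            System.out.println(peer + " -> HTTP " + response.statusCode());
        }
    }
}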

Configuration and related settings

An example configuration in FMServerSettings.properties is shown below. Each server participating in the cluster must:

  • Share the same cluster name settings.

  • Share the same cluster security token value (any string).

  • Be configured with the same list of cluster members and their application URLs (direct links, not via load balancer or proxy).

  • Have a unique value for cluster.thisServerId, identifying which member of the cluster it is. This value must match one of the members of the cluster list.

Example cluster settings:

#
# Example cluster configuration
#

# Common settings (match across all members):
cluster.name=PRODUCTION
cluster.securityToken=2468ACEGXZ
cluster.server.PROD_A=http://prod_a.mycorp.local:8080/ecosys
cluster.server.PROD_B=http://prod_b.mycorp.local:8080/ecosys
cluster.server.PROD_C=http://prod_c.mycorp.local:8080/ecosys

# Unique setting (identifies this server):
cluster.thisServerId=PROD_A
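As a quick sanity check of the rules above, you can load FMServerSettings.properties and confirm that cluster.thisServerId matches one of the cluster.server.* entries. The following Java sketch is illustrative only; the file path is an assumption and should point to the actual location of your settings file.

import java.io.FileInputStream;
import java.util.Properties;
import java.util.Set;
import java.util.TreeSet;

// Illustrative sanity check: list the configured cluster members and confirm
// whether cluster.thisServerId is one of them.
public class ClusterConfigCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Example path; use the actual location of FMServerSettings.properties.
        try (FileInputStream in = new FileInputStream("FMServerSettings.properties")) {
            props.load(in);
        }

        String thisServerId = props.getProperty("cluster.thisServerId");
        Set<String> memberIds = new TreeSet<>();
        for (String key : props.stringPropertyNames()) {
            if (key.startsWith("cluster.server.")) {
                memberIds.add(key.substring("cluster.server.".length()));
            }
        }

        System.out.println("Configured members: " + memberIds);
        if (thisServerId != null && memberIds.contains(thisServerId)) {
            System.out.println(thisServerId + " is a listed member (application server mode).");
        } else {
            System.out.println(thisServerId + " is not in the member list "
                    + "(notify-only mode, as used for command line batch jobs).");
        }
    }
}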

We do not recommend adjusting the cluster message throttles or other tuning parameters without a specific recommendation from EcoSys support staff.

Command line batch job and API calls

Since command line batch jobs run in their own JVMs, they should be configured so that they notify the application server nodes in the cluster when they complete. To do so, use the same "common" cluster settings in the FMServerSettings.properties files that the application servers are using, but set the cluster.thisServerId value to something unique that is not listed as being a member of the cluster. Using the example cluster settings above, a command line batch job on the PROD_A server could use the following line:

# Unique setting (identifies this server)
cluster.thisServerId=PROD_A-batch

This enables one-way communication from the command line batch JVM to the cluster nodes; because the command line batch job is transient, it does not receive or process notifications from other nodes in the cluster. We recommend configuring cluster sync for command line batch jobs that modify global type data so that the live instances on the same database can update when the job completes.

No additional configuration is needed for API calls to a clustered environment. The EcoSys web service API is processed through the same engine and caching layers as the web application. Cluster sync messages are processed in the same way.

Query cache configuration

EcoSys has a lower-level cache, beneath the enterprise data cache, that is used for short-lived caching of database query results. When using a single application server, it is suitable to set the query cache expiry to five or ten minutes. However, when multiple application servers share a database, the query cache can result in stale data being displayed.

For this reason, reduce the query cache expiry time in a clustered environment. A setting of 300 seconds is recommended:

# query cache expiry seconds (how long queries should be cached)
# recommended value = 600 seconds, or 300 for clustered mode
database.querycache.expirationSeconds=300

Because the query cache covers only certain subject areas (not transactional data), it falls out of sync infrequently and only for short periods. In practice, this does not have a significant impact on users accessing a clustered environment.

Troubleshooting

Three areas of the EcoSys application are useful when troubleshooting a cluster configuration:

  1. The application log under Admin > Display Log includes messages about cluster configuration and issues with inter-server communication. Filter on the term cluster to match related events.

  2. The System Info > System Configuration report in the EcoSys user interface displays the details of the active cluster configuration.

  3. The System Info > Cluster screen displays a live status of the cluster configuration. The screen shows which servers are joined to the cluster and provides basic version and status information about each server.

If multiple application servers share a database but are not properly configured in an EcoSys cluster, cached data types (see the list above) can be out of sync across servers, with some servers presenting data that is stale with respect to the database and to other servers.