Cluster Setup Guide

This guide covers the technical configuration of a high-availability and load-balancing cluster, which involves setting up several components: redundant storage, a redundant web load-balancer, a MySQL database cluster, and the PowerFolder Server nodes themselves. You may also read more general information about clustering.

Providing redundant storage

A high-availability and load-balancing cluster requires the same storage path for the user folders to be mounted on every PowerFolder Server node, because every node needs to be able to serve all folders to the users at any time, in case another node goes down as a result of a crash or maintenance work. It is therefore recommended to choose a highly-available storage solution as the storage server for the PowerFolder Server cluster.

There are different methods to provide the PowerFolder Server cluster with storage:

  • Create a storage space on the storage system, export it via NFS (recommended and tested) and mount it into the operating system under the same path on every PowerFolder Server node.
  • Create a storage space on the storage system, format it with a cluster filesystem (e.g. GFS or GlusterFS) and mount it into the operating system under the same path on every PowerFolder Server node.

When mounting the storage into the operating system, please make sure to grant access permissions on the share to the same dedicated user that you used on the other nodes.
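
As a minimal sketch, assuming the storage server is storage.example.com with an NFS export /export/powerfolder and the folder base path is /opt/powerfolder/folders (all placeholders, adjust to your environment), the mount could look like this:

# Run on every PowerFolder Server node; host name and paths are placeholders.
mkdir -p /opt/powerfolder/folders
mount -t nfs storage.example.com:/export/powerfolder /opt/powerfolder/folders

# Make the mount persistent across reboots:
echo "storage.example.com:/export/powerfolder /opt/powerfolder/folders nfs defaults 0 0" >> /etc/fstab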

Providing a redundant web load-balancer

Users of the PowerFolder Server cluster will not only use desktop clients to access their folders, but also mobile apps or simply the web interface. The web interface (which is also used by the mobile apps) has a unique URL under which it is provided to the users (e.g. https://cloud.example.com). Incoming requests on that URL therefore need to be distributed equally to each PowerFolder Server node, unless a node is unavailable at that time.

To provide a redundant web load-balancer, you can use one of the three options below:

  • Apache: Module mod_proxy_balancer (recommended)
  • Nginx: Module ngx_http_upstream_module
  • A hardware appliance which supports reverse proxy functionality and SSL certificates

Please note: A request from a client needs to stick to the same backend server for at least 5 minutes. This is the time frame within which the session cookie is written to the database, so the user doesn't need to log in again as soon as a request is forwarded to and served by another node.

There are two ports which need to be forwarded to the web port of the PowerFolder Server nodes (8080 by default):

  • 443 TCP - for direct web access
  • 80 TCP - for HTTP tunneling in case communication via a desktop client on the data port isn't available (e.g. if the client resides in a network protected by a firewall)
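
For illustration, assuming a Linux load-balancer host where ufw is the active firewall (adapt the commands to your firewall), the two ports can be opened like this:

# Allow the load-balancer ports; the backend web port (8080) stays internal.
ufw allow 80/tcp
ufw allow 443/tcp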

To provide redundancy for the web load-balancer itself, you can use common techniques like DNS round-robin or Heartbeat to have at least two machines running an identical Apache or Nginx configuration.

Example configuration for Apache

The following example shows how to create a web load-balancer based on the Apache mod_proxy_balancer and mod_lbmethod_bybusyness modules. Both modules need to be installed on the server.
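
On Debian or Ubuntu, for example, the required modules could be enabled as follows (a sketch; module handling differs on other distributions). mod_proxy_http and mod_proxy_wstunnel are needed for the http:// and ws:// backends used below:

# a2enmod resolves dependencies such as slotmem_shm automatically.
a2enmod proxy proxy_http proxy_wstunnel proxy_balancer lbmethod_bybusyness
systemctl restart apache2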

To get a full picture of what the final virtual host configuration should look like, please review our documentation on using Apache with mod_proxy and mod_ssl. The following is an extended configuration for a cluster, to be placed within the <VirtualHost 10.0.0.1:443> section:

<Proxy balancer://httpcluster>
    BalancerMember http://10.0.0.1:8080 route=nodeID1
    BalancerMember http://10.0.0.2:8080 route=nodeID2
    BalancerMember http://10.0.0.3:8080 route=nodeID3
    ProxySet stickysession=rpcid|JSESSIONID|jsessionid scolonpathdelim=On lbmethod=bybusyness
</Proxy>

<Proxy balancer://websocketcluster>
    BalancerMember ws://10.0.0.1:8080
    BalancerMember ws://10.0.0.2:8080 
    BalancerMember ws://10.0.0.3:8080
    ProxySet lbmethod=bybusyness
</Proxy>

ProxyRequests Off

ProxyPass               /websocket/nodeID1          ws://10.0.0.1:8080/websocket
ProxyPass               /websocket/nodeID2          ws://10.0.0.2:8080/websocket
ProxyPass               /websocket/nodeID3          ws://10.0.0.3:8080/websocket
ProxyPass               /websocket                  balancer://websocketcluster/websocket

ProxyPass               /websocket_client/nodeID1   ws://10.0.0.1:8080/websocket_client
ProxyPass               /websocket_client/nodeID2   ws://10.0.0.2:8080/websocket_client
ProxyPass               /websocket_client/nodeID3   ws://10.0.0.3:8080/websocket_client
ProxyPass               /websocket_client           balancer://websocketcluster/websocket_client

ProxyPass               /rpc                        balancer://httpcluster/rpc    nocanon
ProxyPassReverse        /rpc                        balancer://httpcluster/rpc

ProxyPass               /                           balancer://httpcluster/       nocanon
ProxyPassReverse        /                           balancer://httpcluster/

Replace nodeID1, nodeID2 and nodeID3 with the nodeid value of the matching backend server, which can be found in the Server Configuration File.
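
For example, assuming the default configuration file location (which may differ in your installation), the nodeid can be read on each backend server like this:

grep ^nodeid= /opt/powerfolder/PowerFolder.config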

Example configuration for Nginx

The following example shows how to create a web load-balancer based on the Nginx ngx_http_upstream_module.

To get a full picture of what the final virtual host configuration should look like, please review our documentation on using Nginx first. The following is a full configuration for a cluster:

upstream powerfolder.example.com {
    ip_hash;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
    keepalive 120;
}

server {
    listen 80;
    server_name powerfolder.example.com;

    location / {
        rewrite ^ https://$server_name$request_uri? permanent;
    }

    location /rpc {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://powerfolder.example.com/rpc;
    }
}

server {
    listen 443 ssl;
    server_name powerfolder.example.com;
    client_max_body_size 100G;

    ssl_certificate certificates/cluster.crt.pem;
    ssl_certificate_key certificates/cluster.key.pem;

    location / {
        proxy_pass http://powerfolder.example.com;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Connection "";
    }

    location /websocket {
        proxy_pass http://powerfolder.example.com/websocket;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    location /websocket/nodeID1 {
        proxy_pass http://10.0.0.1:8080/websocket;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    location /websocket_client/nodeID1 {
        proxy_pass http://10.0.0.1:8080/websocket_client;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    location /websocket/nodeID2 {
        proxy_pass http://10.0.0.2:8080/websocket;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    location /websocket_client/nodeID2 {
        proxy_pass http://10.0.0.2:8080/websocket_client;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    location /websocket/nodeID3 {
        proxy_pass http://10.0.0.3:8080/websocket;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    location /websocket_client/nodeID3 {
        proxy_pass http://10.0.0.3:8080/websocket_client;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
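
After adjusting the configuration, it can be validated and activated, for example like this (assuming a systemd-based system):

nginx -t
systemctl reload nginx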

Providing a MySQL database cluster

Since all PowerFolder Server nodes need to read and write the same database, there should be a highly-available database cluster as well. There are several ways to provide high availability with MySQL; the Galera-based approach described below is one of them.

A Galera cluster can be implemented using the Galera Cluster from Codership; however, we have had good experience with the Percona XtraDB Cluster in combination with Percona XtraBackup (as a method for state snapshot transfers instead of mysqldump or rsync).

For automatic failover in case a MySQL cluster node goes down (and you are not using a load-balancer in front of the MySQL cluster nodes), you can use the following line when configuring MySQL support on each PowerFolder Server node:

database.url=jdbc:mysql://sqlnode1.example.com:3306,sqlnode2.example.com:3306/powerfolder?failOverReadOnly=false&connectTimeout=30000&socketTimeout=30000&autoReconnect=true&autoReconnectForPools=true
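
To verify that all Galera nodes have joined the cluster, you can check the wsrep status on any MySQL node, for example:

mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"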

Configuring and starting the PowerFolder Server cluster

Installation and configuration of the first node

The simplest way to configure a PowerFolder Server cluster is to start with the installation and configuration of the first node.

After the installation and configuration:

  1. Stop PowerFolder Server.
  2. Add the following entries to the Server Configuration File:

    folders.mount.dynamic=true
    folder.watcher.enabled=false
    provider.url.httptunnel=http://cloud.example.com/rpc
    web.base.url=https://cloud.example.com

     Replace cloud.example.com with the URL of the web interface, which you configured as part of the web load-balancer configuration.

  3. Start PowerFolder Server.
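
Put together, steps 1 to 3 could look like this on the shell (the service name and configuration file path are assumptions; adjust them to your installation):

# Stop the service, append the cluster entries, start it again.
systemctl stop powerfolder-server
cat >> /opt/powerfolder/PowerFolder.config <<'EOF'
folders.mount.dynamic=true
folder.watcher.enabled=false
provider.url.httptunnel=http://cloud.example.com/rpc
web.base.url=https://cloud.example.com
EOF
systemctl start powerfolder-server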

Installation and configuration of further nodes

To install and configure the second and any further nodes:

  1. Install another node on a different server. Don't start it yet!
  2. Copy the Server Configuration File from the first node and put it into the same location on the second node.
  3. Replace the values of the following entries in the Server Configuration File with the correct values matching the second node: hostname, nick
  4. Delete the following entry in the Server Configuration File on the second node: nodeid
  5. Add the following entries to the Server Configuration File on the second node:

    folders.mount.dynamic=true
    folder.watcher.enabled=false
    provider.url.httptunnel=http://cloud.example.com/rpc
    web.base.url=https://cloud.example.com

    Replace cloud.example.com with the URL of the web interface, which you configured as part of the web load-balancer configuration.

  6. Start PowerFolder Server.
  7. Check that every server has its own server_maintenance folder path.

Repeat the process for all further PowerFolder Server nodes you are going to install.
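
As a sketch, preparing the configuration for a further node (steps 2 to 4) could look like this (host names and paths are placeholders):

# Copy the configuration from the first node into the same location:
scp node1.example.com:/opt/powerfolder/PowerFolder.config /opt/powerfolder/PowerFolder.config

# Remove the node-specific ID so this node generates its own on first start;
# remember to also adjust hostname and nick before starting the service:
sed -i '/^nodeid=/d' /opt/powerfolder/PowerFolder.config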

Synchronization of server settings to all cluster nodes

If you change settings via the web interface as an admin, the settings will be synced across all server nodes. Some settings, however, will not be synchronized, since they are node-specific.

Settings which will not be synchronized, in order to keep the cluster functional:

  • ajp.port
  • config.url
  • hostname
  • net.bindaddress
  • nick
  • nodeid
  • plugin.webinterface.port
  • port
  • ssl.port

Settings which will not be synchronized, because they might be different depending on the usage scenario:

  • database.url
  • database.username
  • database.password
  • downloadlimit
  • foldersbase
  • landownloadlimit
  • lanuploadlimit
  • license.email
  • license.key
  • provider.url.httptunnel
  • uploadlimit

Most of the settings and their respective names in the web interface are explained in the article about the Server Configuration File.

Please note that a new license has to be activated on each individual server node. The login information for the licensing service, however, will be synchronized across all server nodes, so they automatically pick up the new license when it is renewed.