How to bootstrap a highly available infrastructure with round-robin DNS and HAproxy | Scaleway


#1

How to bootstrap a highly available infrastructure with round-robin DNS and HAproxy

Requirements

  • You have an account and are logged into cloud.scaleway.com
  • You have configured your SSH Key
  • You have generated your API token

In this documentation you will see how to bootstrap a highly available infrastructure with round-robin DNS and HAproxy on Scaleway. We will use the Scaleway CLI to manage our resources.

For this use case we will need the following resources:

  • 2 HAproxy servers acting as TCP load balancers (layer 4) to absorb incoming TCP requests.
  • 4 HAproxy servers performing SSL offloading and load balancing traffic to our web servers.
  • 8 Nginx servers, which we will consider as the web application.

This documentation is composed of six steps:

  • Install Scaleway CLI
  • Create and start new C1 servers
  • Install and configure HAproxy in TCP mode
  • Install and configure HAproxy to run SSL offloading and load balance traffic to our web servers
  • Install Nginx
  • Setup a round-robin DNS

Step 1 - Install Scaleway CLI

The Scaleway CLI helps you manage your Scaleway cloud environment.

To install Scaleway CLI 1.1.0, run the following command in a terminal:

curl -L https://github.com/scaleway/scaleway-cli/releases/download/v1.1.0/scw-`uname -s`-`uname -m` > /usr/local/bin/scw
chmod +x /usr/local/bin/scw

With the Scaleway CLI installed, you need to generate a configuration file in <userhome>/.scwrc containing the credentials used to interact with the Scaleway API. This configuration file is automatically used by the scw commands.

To create the .scwrc file, run the scw login command:

scw login --token=<your_token> --organization=<your_organization>

Scaleway CLI is now configured and ready to use.
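
To check that the credentials were saved correctly, you can already run scw ps; at this point it should only print the column headers, since no server exists yet, rather than an authentication error.

# Sanity check: lists running servers (empty for now) and fails if the token is invalid.
scw ps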

Step 2 - Create and start new C1 servers

Create and start HAproxy servers

Create and start two HAproxy servers named HAproxy-tcp-$ID, with the LB, TCP and PRODUCTION tags, that will act as layer 4 load balancers.

for i in {1..2}; do
    scw start $(scw create --name HAproxy-tcp-$i  -e "LB TCP PRODUCTION"  Ubuntu_Utopic_14_10)
done

Create and start four HAproxy servers named HAproxy-ssl-$ID, with the LB, SSL and PRODUCTION tags, to perform SSL offloading and load balance traffic to our web servers.

for i in {1..4}; do
    scw start $(scw create --name HAproxy-ssl-$i  -e "LB SSL PRODUCTION"  Ubuntu_Utopic_14_10)
done

Create and start application servers

Create and start the eight web application servers named www-$ID, with the WWW and PRODUCTION tags.

for i in {1..8}; do
    scw start $(scw create --name www-$i -e "WWW PRODUCTION" Ubuntu_Utopic_14_10)
done

Once all the servers are started, you can list them with the scw ps command:

$ scw ps
SERVER ID           IMAGE                       COMMAND             CREATED             STATUS              PORTS               NAME
2c423441            Ubuntu_Utopic_14_10_EOL                         2 minutes           running             212.47.251.60       HAproxy-tcp-2
25519d92            Ubuntu_Utopic_14_10_EOL                         2 minutes           running             212.47.234.60       HAproxy-tcp-1
95ac7316            Ubuntu_Utopic_14_10_EOL                         2 minutes           running             212.47.227.198      www-7
3feef431            Ubuntu_Utopic_14_10_EOL                         2 minutes           running             212.47.229.238      www-8
6daf12f7            Ubuntu_Utopic_14_10_EOL                         2 minutes           running             212.47.244.104      www-6
1369c127            Ubuntu_Utopic_14_10_EOL                         2 minutes           running             212.47.232.31       www-5
fb1b0802            Ubuntu_Utopic_14_10_EOL                         2 minutes           running             212.47.244.107      www-4
a4c73457            Ubuntu_Utopic_14_10_EOL                         2 minutes           running             212.47.235.106      www-3
ed424bf3            Ubuntu_Utopic_14_10_EOL                         2 minutes           running             212.47.232.240      www-2
15d14987            Ubuntu_Utopic_14_10_EOL                         2 minutes           running             212.47.241.19       www-1
09c1fc94            Ubuntu_Utopic_14_10_EOL                         5 minutes           running             212.47.228.139      HAproxy-ssl-4
2cdad279            Ubuntu_Utopic_14_10_EOL                         5 minutes           running             212.47.240.247      HAproxy-ssl-3
40c5e0ed            Ubuntu_Utopic_14_10_EOL                         5 minutes           running             212.47.241.241      HAproxy-ssl-2
88fc500f            Ubuntu_Utopic_14_10_EOL                         5 minutes           running             212.47.238.31       HAproxy-ssl-1
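
The HAproxy configurations in the next steps reference the backend servers by their private 10.x addresses. One way to collect them, using only commands already shown in this documentation, is to ask each server for its own address (hostname -I prints the interface addresses; since the public IP of a C1 server is NATed, only the private address shows up):

# Print the private address of each SSL node and each web server.
for name in HAproxy-ssl-{1..4} www-{1..8}; do
    echo -n "$name: "
    scw exec $name 'hostname -I' < /dev/null
done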

Step 3 - Install and configure TCP load balancing

To install HAproxy on the servers, we use scw exec, which executes commands on the remote servers. The following snippet runs apt-get update and apt-get install haproxy on the HAproxy-tcp-{1,2} servers:

for i in {1..2}; do
    scw exec HAproxy-tcp-$i 'apt-get update && apt-get install -q -y haproxy' < /dev/null &
done

Once installed, configure the HAproxy servers to perform TCP load balancing. Create a configuration file haproxy-tcp.cfg with the following settings:

global
  log /dev/log    local0
  log /dev/log    local1 notice
  chroot /var/lib/haproxy
  stats socket /run/haproxy/admin.sock mode 660 level admin
  stats timeout 30s
  user haproxy
  group haproxy
  daemon
  nbproc 4
  maxconn 100000
defaults
  log     global
  mode    tcp
  option  tcplog
  option  dontlognull
  timeout connect 5000
  timeout client  50000
  timeout server  50000
listen stats 0.0.0.0:8080
  mode http
  stats enable
  stats uri /HAproxy?stats
  stats realm Strictly\ Private
  stats auth username:password
listen tcp-proxy-1-ssl
  balance roundrobin
  bind 0.0.0.0:443
  server HAproxy-ssl-1 10.1.32.25:443 check
  server HAproxy-ssl-2 10.1.32.26:443 check
  server HAproxy-ssl-3 10.1.32.27:443 check
  server HAproxy-ssl-4 10.1.32.15:443 check

We create two listen blocks:

  • stats: listens on port 8080 and lets us access the HAproxy statistics from a web browser.
  • tcp-proxy-1-ssl: listens on port 443, absorbs incoming TCP connections and forwards the traffic to the SSL offloading load balancers (layer 4 load balancing).
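
If HAproxy is also installed on your local machine, you can sanity check the file before deploying it; haproxy exits with an error message if the configuration is invalid:

# Parse the configuration in check-only mode without starting a proxy.
haproxy -c -f haproxy-tcp.cfg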

To deploy the configuration with the Scaleway CLI, we use the following commands:

for i in {1..2}; do
    cat haproxy-tcp.cfg | scw exec HAproxy-tcp-$i 'cat > /etc/haproxy/haproxy.cfg'
done

The command above writes haproxy-tcp.cfg to /etc/haproxy/haproxy.cfg on the remote servers HAproxy-tcp-1 and HAproxy-tcp-2. To restart the HAproxy service on the servers, run the following:

for i in {1..2}; do
    scw exec HAproxy-tcp-$i "server HAproxy restart"
done
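
To confirm that the new configuration was loaded, you can query the stats page defined above. Replace the address with one of your HAproxy-tcp public IPs from scw ps, and username:password with the credentials you put on the stats auth line:

# The stats endpoint answers on port 8080 with an HTML report of the frontends and backends.
curl -su username:password 'http://212.47.234.60:8080/HAproxy?stats' | head -n 5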

Step 4 - Install and configure HAproxy to run SSL offloading and load balance traffic to our web servers

To install HAproxy on the four SSL offloading nodes, we use the same method as before:

for i in {1..4}; do
    scw exec HAproxy-ssl-$i 'apt-get update && apt-get install -q -y haproxy' < /dev/null &
done

To configure the HAproxy servers to perform SSL offloading and load balance traffic to the web application nodes, we create a configuration file haproxy-ssl.cfg with the following settings:

global
  log /dev/log    local0
  log /dev/log    local1 notice
  chroot /var/lib/haproxy
  stats socket /run/haproxy/admin.sock mode 660 level admin
  stats timeout 30s
  user haproxy
  group haproxy
  daemon
  maxconn 100000
  nbproc 4
defaults
  log     global
  mode    http
  option  httplog
  option  dontlognull
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  errorfile 400 /etc/haproxy/errors/400.http
  errorfile 403 /etc/haproxy/errors/403.http
  errorfile 408 /etc/haproxy/errors/408.http
  errorfile 500 /etc/haproxy/errors/500.http
  errorfile 502 /etc/haproxy/errors/502.http
  errorfile 503 /etc/haproxy/errors/503.http
  errorfile 504 /etc/haproxy/errors/504.http
listen stats 0.0.0.0:8080
  stats enable
  stats uri /HAproxy?stats
  stats realm Strictly\ Private
  stats auth username:password
listen ssl-proxy-1
  mode http
  bind 0.0.0.0:443 ssl crt /tmp/selfsigned01.pem
  balance roundrobin
  server www-1 10.1.32.232:80 check
  server www-2 10.1.32.235:80 check
  server ...

We create two listen blocks:

  • stats: listens on port 8080 and lets us access the HAproxy statistics from a web browser.
  • ssl-proxy-1: listens on port 443, performs SSL offloading using /tmp/selfsigned01.pem (generated below) and forwards the traffic to the web application servers.
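
The bind line above expects the certificate and its private key concatenated in a single PEM file at /tmp/selfsigned01.pem. As a minimal sketch, you can generate a self-signed certificate locally (the subject CN below is a placeholder) and push it to the four SSL nodes with the same cat trick used for the configuration files:

# Generate a throwaway self-signed certificate and key, then bundle them
# into the single PEM file referenced by the bind line.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout selfsigned01.key -out selfsigned01.crt \
    -subj '/CN=www.example.com'
cat selfsigned01.crt selfsigned01.key > selfsigned01.pem

# Copy the bundle to every SSL offloading node.
for i in {1..4}; do
    cat selfsigned01.pem | scw exec HAproxy-ssl-$i 'cat > /tmp/selfsigned01.pem'
done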
To deploy the configuration to the four SSL servers, run:

for i in {1..4}; do
    cat haproxy-ssl.cfg | scw exec HAproxy-ssl-$i 'cat > /etc/haproxy/haproxy.cfg'
done

The command above writes haproxy-ssl.cfg to /etc/haproxy/haproxy.cfg on the remote servers HAproxy-ssl-1 to HAproxy-ssl-4. Restart the HAproxy service on the remote servers to reload their configuration:

for i in {1..4}; do
    scw exec HAproxy-ssl-$i "server HAproxy restart"
done

Step 5 - Install Nginx on the web servers

The Nginx servers will act as our web application servers. To install Nginx on the servers, execute the following command:

for i in {1..8}; do
    scw exec www-$i 'apt-get update && apt-get install -q -y nginx' < /dev/null &
done

The Nginx servers are now running and ready to render our pages.
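
To see the whole chain in action, you can give each web server a page that identifies it (assuming the Ubuntu Nginx package serves /usr/share/nginx/html by default) and then send a few HTTPS requests through one of the TCP load balancers; the -k flag is needed because the certificate is self-signed:

# Write a distinct index page on every web node.
for i in {1..8}; do
    scw exec www-$i "echo www-$i > /usr/share/nginx/html/index.html" < /dev/null
done

# Requests through HAproxy-tcp-1 (212.47.234.60 in the listing above) should
# rotate across the backends thanks to the round-robin balancing.
for i in {1..4}; do
    curl -sk https://212.47.234.60/
done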

Step 6 - Setup a round-robin DNS

Round-robin DNS lets you spread incoming traffic across the HAproxy-tcp-1 and HAproxy-tcp-2 servers. In your DNS provider's console, simply create two A records pointing to the HAproxy-tcp-1 and HAproxy-tcp-2 servers.

Once set up, your domain name will resolve to either the HAproxy-tcp-1 or the HAproxy-tcp-2 server.
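
For example, with a hypothetical domain example.com and the public IPs from the scw ps listing above, the zone would contain two A records; once they have propagated, dig should return both addresses:

# A records to create at your DNS provider (zone-file style, values are examples):
#   example.com.  300  IN  A  212.47.234.60   ; HAproxy-tcp-1
#   example.com.  300  IN  A  212.47.251.60   ; HAproxy-tcp-2

# Check that both load balancer addresses are returned:
dig +short example.com A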

Conclusion

By following this documentation, you have set up a highly available and scalable architecture. From here it is easy to add more nodes at any level: TCP load balancers, SSL load balancers or web servers. This architecture allows you to scale your application.



This is a companion discussion topic for the original entry at https://www.scaleway.com/docs/how-to-configure-round-robin-dns-for-highly-available-infrastructure

#2

This is very interesting, but it seems that the described infrastructure is not highly available.

I think the problem is the input stage: if one of the two TCP HAProxy load balancers fails, one of the two IPs set in the DNS RR will not answer (except if we use a GSLB-like DNS).

Is there a way to build a fully highly available infrastructure with Scaleway (like keepalived and IP failover)?

Thanks
Thierry


#3

Exactly, but you can move the reserved IP of the impacted server to a new TCP HAProxy server and avoid the failure.


#4

Thanks for your response.

Is there an automatic process for:

  • detecting the failure
  • starting a new server
  • assigning it the right IP address

What is the time for this operation?
Maybe this time can be reduced by keeping a “sleeping” server ready to take over the IP?


#5

Any reason why you chose HAProxy rather than Nginx for load balancing?
Nginx is very simple to install and configure.
I want to make sure I made the right choice.

Thanks


#6

HAproxy is also easy to use and provides more flexibility.


#7

This is a joke, isn’t it?
You can’t say that you have high availability when the failure of one machine will take half the traffic down.
If one fails:

  • you have to install and configure a new instance: but servers are not always available.
  • you can’t toggle an “elastic IP” to the remaining available machine because a machine can have only one public IP.
  • you have to reconfigure the firewall on all your web servers because you can’t recover the old private IP on the new machine.
  • and the part about moving the reserved IP to a new TCP HAProxy server

is really not enough, because if you add or restart a server you have to update every HAProxy configuration (or you will forward your traffic to another Scaleway customer :frowning: )

I’m sure Scaleway will be a fantastic service in a few years.
The service will get better and better, just as Dedibox’s service became the best.
But for now, IMHO, it’s for small projects with 1 or 2 machines.

What’s really missing is:

  • a private LAN
  • firewalling (and not just the existing security groups)
  • an equivalent to Amazon’s ELB