How to use eth1 on C2M/C2L


#1

C2M and C2L come with two 2.5Gbit/s interfaces. By default, all the traffic is routed through eth0.

This tutorial explains how to configure routing to use eth1 for internal traffic, and eth0 for internet traffic (and NBD connections).

Route your NBD connections through eth0

To prevent your NBD devices from being disconnected later in the tutorial, we must explicitly route them through eth0.

Get the nbd server IP address:

$> ps auxf | grep xnbd-client
root      1830  0.0  0.0   1772    64 ?        S    09:01   0:00 @xnbd-client --blocksize 4096 --retry=900 10.1.133.45 4896 /dev/nbd0
root      4265  0.0  0.0  12956   964 pts/0    S+   09:08   0:00          \_ grep --color=auto xnbd-client

Here, the nbd server is 10.1.133.45.

And show your default routes:

$> ip route
default via 10.1.169.92 dev eth0
10.1.169.92/31 dev eth0  proto kernel  scope link  src 10.1.169.93
10.1.169.94/31 dev eth1  proto kernel  scope link  src 10.1.169.95

The eth0 gateway is 10.1.169.92 (second line), the eth1 gateway is 10.1.169.94 (third line).

Now, let’s explicitly route the NBD connection through eth0:

# replace 10.1.133.45 with nbd server IP
# replace 10.1.169.92 with eth0 gateway IP
$> ip route add 10.1.133.45/32 via 10.1.169.92
$> ip route
default via 10.1.169.92 dev eth0
10.1.133.45 via 10.1.169.92 dev eth0 # the new route
10.1.169.92/31 dev eth0  proto kernel  scope link  src 10.1.169.93
10.1.169.94/31 dev eth1  proto kernel  scope link  src 10.1.169.95

If you have several NBD connections, you need to follow this procedure for each of them.
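
If you prefer not to do this by hand, here is a minimal sketch that finds every running xnbd-client and adds a route for its server IP. It assumes the server IP appears in each xnbd-client command line, as in the ps output above, and that 10.1.169.92 is your eth0 gateway:

# sketch: route every NBD server through eth0
ETH0_GW=10.1.169.92   # replace with your eth0 gateway
for nbd_ip in $(pgrep -a xnbd-client | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | sort -u); do
    ip route add "$nbd_ip/32" via "$ETH0_GW"
done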

Route all the internal traffic through eth1

The internal traffic is in the subnets 10.0.0.0/8 and 169.254.0.0/16.

Add the routes:

# replace 10.1.169.94 with eth1 gateway IP
$> ip route add 10.0.0.0/8 via 10.1.169.94
$> ip route add 169.254.0.0/16 via 10.1.169.94

Verification

  • To ensure your NBD connection is still valid, try to create a file:
$> touch whatever

If the file is created without error, your NBD connection is fine; you can remove the file with rm whatever.

  • Check you can still reach the metadata API:
$> curl http://169.254.42.42
{"api": "api-metadata", "description": "Metadata API, just query http://169.254.42.42/conf or http://169.254.42.42/conf?format=json to get info about yourself"}
  • Ensure you are really using eth1 for internal traffic:
$> ip route get 169.254.42.42
169.254.42.42 via 10.1.169.94 dev eth1  src 10.1.169.95
    cache
$> ip route get 10.0.1.2 # 10.0.1.2 is a random IP address in 10.0.0.0/8
10.0.1.2 via 10.1.169.94 dev eth1  src 10.1.169.95
    cache

What else?

The routes added with ip route add are not preserved across reboots. If you are interested, I can edit this topic to explain how to configure your network so that the routes persist after a reboot.
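
In the meantime, a rough sketch of one way to do it on a Debian-style image using ifupdown is shown below. This is untested on the stock images: adapt the stanza to whatever your /etc/network/interfaces already declares for eth1, and replace 10.1.169.94 with your eth1 gateway.

# sketch only: post-up hooks in the eth1 stanza of /etc/network/interfaces (assumes ifupdown)
iface eth1 inet dhcp
    # re-add the internal routes every time eth1 comes up
    post-up ip route add 10.0.0.0/8 via 10.1.169.94
    post-up ip route add 169.254.0.0/16 via 10.1.169.94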


#2

Great tutorial @jcastets, thanks. Could you expand on how to use a C2M/C2L as a GW for an internal network?

The first problem I can see is: how do you route to a server with an internal IP in another network? E.g.:

C2M GW:

  • eth0: 10.6.2.90
  • eth1: 10.6.2.92

C1 webapp:

  • eth0: 10.6.3.94

#3

That is a different subject, beyond the scope of this tutorial. Since private networks are not (yet) possible with Scaleway, you need a tunneling protocol between your nodes. For instance, you can create a vxlan.

If you are interested, I can try to find some time this week to create a topic about how to set up a vxlan tunnel. But it will be hackish.
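
For reference, a bare-bones point-to-point vxlan between two nodes could look roughly like the sketch below. The interface name, VNI, peer address and overlay subnet are all made up for the example:

# sketch: point-to-point vxlan over eth1 (all names, IDs and IPs are examples)
# on node A (eth1 = 10.1.169.95), with node B's eth1 at 10.1.170.11:
ip link add vxlan0 type vxlan id 42 dev eth1 remote 10.1.170.11 dstport 4789
ip addr add 192.168.42.1/24 dev vxlan0
ip link set vxlan0 up
# on node B, swap the remote address (10.1.169.95) and use 192.168.42.2/24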


#4

@jcastets thanks!

Anyway, almost everything in Scaleway is very hackish :slight_smile:

IMHO this topic is related to this tutorial, because the only reason to configure a GW server is so that other machines can route traffic through it.

My current solution is to use a mesh VPN (peervpn/tinc) and use that iface for the routing.
Every solution I can think of requires unnecessary complexity or extra steps, e.g. I need a temporary public IP on each node to prepare it.

Do you know if there is work being done to fix the private network? Right now it is a big security hole.


#5

What would an “rc.local” look like for something like this?

If we could get a script that runs at boot and sets this up, that would be amazing.

We have to remember that the nbd-server could be running on more than one IP.


#6

FWIW, I believe you also need to route any traffic to the DNS servers via eth0 (same as the NBD routes already described).

#!/bin/bash
# get default gw IP (going via eth0)
gw=$(ip route | grep default | cut -d ' ' -f 3)
# make sure we route all nbd traffic through eth0
for s in $(scw-metadata --cached | grep EXPORT_URI | cut -d \/ -f 3 | cut -d \: -f 1 | grep -v VOLUMES); do
    ip route add $s/32 via $gw
done
# make sure we route all DNS traffic through eth0
for s in $(cat /etc/resolv.conf |grep nameserver | cut -d ' ' -f 2); do
    ip route add $s/32 via $gw
done
# get eth1 ip
eth1=$(ifconfig eth1 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}')
# route all private traffic via eth1
ip route add 10.0.0.0/8 via $eth1
ip route add 169.254.0.0/16 via $eth1

#8

Found an error in @sundbp's script.
On line 13, it should be the GW IP address, not the device IP address.

#!/bin/bash
# get default gw IP (going via eth0)
default_gw=$(ip -4 route list 0/0 | awk '{ print $3}') 
# make sure we route all nbd traffic through eth0
for s in $(scw-metadata --cached | grep EXPORT_URI | cut -d \/ -f 3 | cut -d \: -f 1 | grep -v VOLUMES); do
    ip route add $s/32 via $default_gw
done
# make sure we route all DNS traffic through eth0
for s in $(cat /etc/resolv.conf | grep nameserver | cut -d ' ' -f 2); do
    ip route add $s/32 via $default_gw
done
# get gw eth1 ip
eth1gw=$(scw-metadata --cached EXTRA_NETWORKS_0_GATEWAY)
# route all private traffic via eth1 gw
ip route add 10.0.0.0/8 via $eth1gw
ip route add 169.254.0.0/16 via $eth1gw
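
To run this at boot, one option is /etc/rc.local, assuming your image still executes it and you save the script somewhere like /usr/local/sbin/eth1-routes.sh (both are assumptions, adjust to your setup):

# sketch: run the script above at each boot via /etc/rc.local
chmod +x /usr/local/sbin/eth1-routes.sh
# then add this line to /etc/rc.local, before the final "exit 0":
#   /usr/local/sbin/eth1-routes.sh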

#9

@jcastets So how do we make the routes persistent, especially when they depend on scw-metadata queries and other lookups?


#10

Can you add to the topic how to configure this to survive a reboot?

I am using your method, but after each reboot it is lost and I have to configure it manually.

thanks