Disappointed by the network design :(


#1

Well, I wouldn’t have written this topic if it wasn’t clearly stated that feedback is welcome :smile:

It is an interesting and, in some ways, unique service. However, I’m very disappointed by the network design.
It was very unexpected to discover a routed L3 network.

I’m used to constructing the cloud topology as I want and assigning the desired IP addresses without sharing the address space with all the other customers of the service.

I was surprised by strange ARP requests arriving on my server’s network interface. Most probably this has no impact on security. Nevertheless, it’s not very encouraging and, being suspicious by nature, I find it strange.

Damn, when I want to connect a “private” instance without a public IP to the internet via a “public” instance, I can’t even do it without a “dirty” workaround like a proxy or a GRE/IPIP tunnel. It’s silly for a “cloud” platform…
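For what it’s worth, the tunnel workaround I’m complaining about can be sketched with plain iproute2. All addresses and interface names below are hypothetical, and this assumes the two instances can already reach each other on their internal addresses:

```shell
# On the "public" instance (internal address 10.1.0.1, assumed):
ip tunnel add gre1 mode gre local 10.1.0.1 remote 10.1.0.2 ttl 255
ip addr add 192.168.100.1/30 dev gre1
ip link set gre1 up
# Forward and NAT the tunneled traffic out of the public interface:
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 192.168.100.0/30 -o eth0 -j MASQUERADE

# On the "private" instance (internal address 10.1.0.2, assumed):
ip tunnel add gre1 mode gre local 10.1.0.2 remote 10.1.0.1 ttl 255
ip addr add 192.168.100.2/30 dev gre1
ip link set gre1 up
# Send the default route through the tunnel:
ip route replace default via 192.168.100.1 dev gre1
```

IPIP (`mode ipip`) works the same way for pure IPv4. It functions, but it is exactly the kind of manual plumbing a cloud platform should make unnecessary.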

I don’t even understand the reasons behind this choice. Why not give the customer the right to design his own L2 network? If it is not a commercial secret, can you please tell the community why there is a routed L3 network shared among all the customers instead of a “beautiful”, “classical”, private L2 achieved via MPLS or VXLAN?

Sincerely,
Just an angry customer.


#2

I would probably agree that the biggest area where critical cloud features are missing is networking. I too have had problems implementing a secure infrastructure on the platform with the shared, multi-tenant layer 2 network. There isn’t even any guarantee that machines will be allocated within the same L2 network when provisioning, and that prevents using L2 network discovery protocols to automatically build clusters.

Even if this were an additional service, I think the people who really care about security and want to deploy clusters would pay for this feature on top of the standard compute price. I know that if it were reasonably priced, our company would jump on board right away, as it would certainly speed things up for simple R&D deployments where security by isolation would be sufficient.

I feel the complexity of implementation may be what is holding this back, and the focus may (at the moment) be on the basic infrastructure, but I would definitely like to see some implementation of private networks in the future, alongside improvements to the security groups.


#3

@Core team - Any update on private networks?


#4

The sad thing is we wouldn’t be having any of these discussions if the Linux networking maintainers hadn’t been so totally uninterested in proper QinQ support :slight_smile:
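To be fair, recent kernels do expose 802.1ad stacked VLANs through iproute2; a rough sketch, with the interface names and VLAN IDs invented for illustration:

```shell
# Outer (service provider) tag, 802.1ad ethertype, on an assumed eth0 uplink:
ip link add link eth0 name eth0.100 type vlan proto 802.1ad id 100
# Inner (customer) tag, standard 802.1Q, stacked on top of the outer VLAN:
ip link add link eth0.100 name eth0.100.200 type vlan proto 802.1q id 200
ip link set eth0.100 up
ip link set eth0.100.200 up
```

Whether that is “proper” support at scale is a different question, of course.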


#5

I am surprised that no one from the Scaleway team has deemed it worth taking the time to contribute to this very important topic…


#6

A good solution for this would be VXLAN, if Scaleway has switches that support it. I would be happy to provide information and advice about VXLAN or other solutions to this limitation; anything to get better network infrastructure. I’d also love to see the ability to get additional IP addresses.
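To give other readers an idea of what this looks like on plain Linux (independent of what Scaleway’s hardware would do), here is a minimal point-to-point VXLAN sketch; the VNI, addresses, and interface names are made up for the example:

```shell
# Create a VXLAN segment with VNI 42 over the underlay interface eth0
# (a real multi-node deployment would use multicast on the underlay
# or a control plane to populate the forwarding database):
ip link add vxlan42 type vxlan id 42 dev eth0 dstport 4789 local 10.1.0.1
# Point-to-point: add the single peer VTEP to the forwarding database
bridge fdb append 00:00:00:00:00:00 dev vxlan42 dst 10.1.0.2
ip addr add 172.16.42.1/24 dev vxlan42
ip link set vxlan42 up
```

Each tenant gets a 24-bit VNI, so isolation scales far beyond the 4096 IDs of plain VLANs.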

Please reach out to me if there is anything I can do to help move this forward.

Regards,

Emma


#7

Hi @radu,

The analysis of @fidelity is correct: we’re currently mainly focusing on the base infrastructure.
Private networking and additional IPs are highly requested features that we will implement.
Our rapid growth, with a massive production and deployment ramp-up, has been quite a challenge lately and has slowed down these new developments, but enhanced networking is one of our top priorities.

@emma we’re developing our own routers/switches for our specific needs and we’ve designed our network to support these features.

Yann


#8

Hej!

I don’t understand why each customer cannot have their own small private network. Then everything would work as normal!

/Mattias


#9

Hi Yann,

It would be nice to have some kind of roadmap for the future functionality. I know you are busy, but it’s difficult as a user to know what we can expect from Scaleway and the time frame for it. Currently we are flying blind ^^ .

Thx.


#10

Well, I must agree that the networking part is neglected in terms of functionality.
But people, let’s keep in mind :slight_smile: that Scaleway started as a bare-metal ARM server provider and then things just took off!!!
The people at Online.net (@yann, correct me if I am wrong) didn’t expect such growth on such short notice (but in essence they made it happen by reducing prices to the absolute minimum :wink:).

So, they are dealing here with 4 main issues:

  1. have enough resources for the current customers
  2. accommodate new customers and have even more resources for them also
  3. provision the availability of resources on an ever changing network platform
  4. nearly all systems are built in-house!!!

That’s why they opened Amsterdam: to be able to service point 2 and hope for the best on point 1.
Then, when all this has normalized a bit (give or take 6-24 months, @sebastien), they will be able to start thinking about point 3.

So, if my calculations are correct, people, be patient: we need another year or so before we see something new in the networking area!


#11

As I understand it, the network hardware is ready, so it’s the software part that remains. I’m patient (I’ve been playing with Scaleway for some months now), but I would like to see more visibility on the upcoming features so I can plan what I can and cannot expect for my projects.


#12

This is really affecting our decision to go with Scaleway for our non-profit student organization’s transition to cloud-based hosting.

The combination of how public IPs are assigned (NAT) and the fact that other tenants are able to reach my machines is a really big headache.

I can’t simply migrate our Consul cluster setup, as the RFC1918 address is accessible from the internet if the instance has a public IP assigned.
I have to use ufw to deny access from everything but, say, 10.3.0.0/16, but I don’t get consistent subnets when I spawn instances; I can randomly get a 10.2.0.0 address. And even that doesn’t really help, since I could log into a different Scaleway account and reach those machines anyway by spinning up a server in the same DC.

I’m down to whitelisting individual IPs in the firewall on each machine, a terribly cumbersome setup, but the only one I am comfortable with handling without requiring assistance.
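Concretely, the per-machine whitelist ends up looking roughly like this (the peer addresses are placeholders; 8300-8302 are the usual Consul server ports):

```shell
# Default-deny, then whitelist each cluster peer explicitly
# (10.2.1.10 and 10.3.2.20 stand in for whatever addresses
#  the instances happened to be assigned):
ufw default deny incoming
ufw allow from 10.2.1.10 to any port 8300:8302 proto tcp   # Consul RPC + serf
ufw allow from 10.3.2.20 to any port 8300:8302 proto tcp
ufw allow 22/tcp                                           # keep SSH reachable
ufw enable
```

And this has to be repeated, and kept in sync, on every machine whenever an instance is added or replaced.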

So what I’m saying is that my organization really needs progress on the private networking / VPC-like features! A milestone of some sort, or an anticipated introduction date for truly private networking, would be really helpful for our planning.

  • Aleks

#13

Hi @aleks, have you considered using a tool like tinc to create a secure overlay network between your instances? I think there’s a tinc Ansible playbook to simplify bootstrapping it.
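To give an idea of the bootstrapping involved, a minimal tinc 1.0-style setup on one node looks something like this (the net name `scw0`, node names, and all addresses are placeholders):

```shell
# Create the net directory and the daemon config:
mkdir -p /etc/tinc/scw0/hosts
cat > /etc/tinc/scw0/tinc.conf <<'EOF'
Name = node1
ConnectTo = node2
EOF

# tinc-up runs when the tunnel interface comes up; it assigns
# the overlay address for this node:
cat > /etc/tinc/scw0/tinc-up <<'EOF'
#!/bin/sh
ip addr add 192.168.50.1/24 dev $INTERFACE
ip link set $INTERFACE up
EOF
chmod +x /etc/tinc/scw0/tinc-up

# Host file: public endpoint plus the overlay subnet this node owns:
cat > /etc/tinc/scw0/hosts/node1 <<'EOF'
Address = 203.0.113.10
Subnet = 192.168.50.1/32
EOF

# Generate the keypair (the public key is appended to hosts/node1),
# then exchange hosts/* files between all nodes and start the daemon:
tincd -n scw0 -K
tincd -n scw0
```

Traffic between nodes is then encrypted end to end, so the shared underlay stops mattering for confidentiality, though it adds an extra moving part to every instance.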


#14

Thanks for the suggestion, I will look into it.


#15

I think this is a major feature for the platform.
As the core team said, this is being addressed, but it is 2018 now and it is taking a long time. Better not to rush and to do a great job in this area, but the platform has been running since 2015, so I think a long time has passed already.

My use case is a Kubernetes cluster, and I have to think twice about some security features that would simply be solved by VLANs and proper isolation.