A couple of months ago, Chris Marino, CEO at vCider, stopped by the Mirantis office and gave a very interesting presentation on the vCider networking solution for clouds. A few days later, he kindly provided me with beta access to their product.
A few days ago, vCider announced public availability of the product, so now is a good time to blog about my experience with it.
About vCider Virtual Switch
To make a long story short, vCider Virtual Switch allows you to build a virtual Layer 2 network across several Linux boxes; these boxes might be virtual machines (VMs) in a cloud (or even in different clouds), or they might be physical servers.
The flow is pretty simple: you download a package (DEBs and RPMs are available on the site) and install it on all of the boxes that will form a network. No configuration is required except for creating a file with your account token.
After that, all you have to do is visit the vCider Dashboard, create networks, and assign nodes to them.
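On a Debian-based node the setup boils down to something like the following. This is only a rough sketch: the package name, token file location, and service name here are my assumptions, not necessarily what vCider actually ships, so check their documentation for the real details.

# install the node package (a DEB here; RPMs are available as well)
$ sudo dpkg -i vcider-node.deb

# store the account token; the path and file name are assumptions
$ echo "<your account token>" | sudo tee /etc/vcider/account_token

# restart the agent so the node shows up in the Dashboard
$ sudo service vcider restart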
To start playing with it, I created two nodes on Rackspace and set up a virtual network for them, using the 192.168.87.0/24 address space.
A new network interface appeared on each of the boxes:
On the first box:
5: vcider-net0: mtu 1442 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether ee:cb:0b:93:34:45 brd ff:ff:ff:ff:ff:ff
    inet 192.168.87.1/24 brd 192.168.87.255 scope global vcider-net0
    inet6 fe80::eccb:bff:fe93:3445/64 scope link
       valid_lft forever preferred_lft forever
and on the second one:
7: vcider-net0: mtu 1442 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 6e:8e:a0:e9:a0:72 brd ff:ff:ff:ff:ff:ff
    inet 192.168.87.4/24 brd 192.168.87.255 scope global vcider-net0
    inet6 fe80::6c8e:a0ff:fee9:a072/64 scope link
       valid_lft forever preferred_lft forever
tracepath output looks like this:
root@alice:~# tracepath 192.168.87.4
 1:  192.168.87.1 (192.168.87.1)    0.169ms pmtu 1442
 1:  192.168.87.4 (192.168.87.4)    6.677ms reached
 1:  192.168.87.4 (192.168.87.4)    0.338ms reached
     Resume: pmtu 1442 hops 1 back 64
root@alice:~#
arping also works fine:
novel@bob:~ %> sudo arping -I vcider-net0 192.168.87.1
ARPING 192.168.87.1 from 192.168.87.4 vcider-net0
Unicast reply from 192.168.87.1 [EE:CB:0B:93:34:45]  0.866ms
Unicast reply from 192.168.87.1 [EE:CB:0B:93:34:45]  1.030ms
Unicast reply from 192.168.87.1 [EE:CB:0B:93:34:45]  0.901ms
^CSent 3 probes (1 broadcast(s))
Received 3 response(s)
novel@bob:~ %>
Performance
One of the most important questions is performance. First, I used iperf to measure bandwidth on the public interfaces:
novel@bob:~ %> iperf -s -B xx.yy.94.250
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address xx.yy.94.250
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local xx.yy.94.250 port 5001 connected with xx.yy.84.110 port 34231
[ ID] Interval       Transfer     Bandwidth
[ 4]  0.0-10.3 sec  12.3 MBytes  9.94 Mbits/sec
[ 5] local xx.yy.94.250 port 5001 connected with xx.yy.84.110 port 34232
[ 5]  0.0-20.9 sec  12.5 MBytes  5.02 Mbits/sec
[SUM]  0.0-20.9 sec  24.8 MBytes  9.93 Mbits/sec
[ 6] local xx.yy.94.250 port 5001 connected with xx.yy.84.110 port 34233
[ 6]  0.0-10.6 sec  12.5 MBytes  9.92 Mbits/sec
[ 4] local xx.yy.94.250 port 5001 connected with xx.yy.84.110 port 34234
[ 4]  0.0-10.6 sec  12.5 MBytes  9.94 Mbits/sec
[ 5] local xx.yy.94.250 port 5001 connected with xx.yy.84.110 port 34235
[ 5]  0.0-10.5 sec  12.4 MBytes  9.94 Mbits/sec
[ 6] local xx.yy.94.250 port 5001 connected with xx.yy.84.110 port 34236
[ 6]  0.0-10.6 sec  12.6 MBytes  9.94 Mbits/sec
[ 4] local xx.yy.94.250 port 5001 connected with xx.yy.84.110 port 34237
[ 4]  0.0-10.7 sec  12.6 MBytes  9.94 Mbits/sec
[ 5] local xx.yy.94.250 port 5001 connected with xx.yy.84.110 port 34238
[ 5]  0.0-10.6 sec  12.6 MBytes  9.93 Mbits/sec
So the average bandwidth is about 9.3 Mbit/sec.
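For reference, the client side of these runs was just a plain iperf client pointed at the server's address (the default 10-second TCP test); the vCider and openvpn runs below used the same form, only with the corresponding 192.168.x.x addresses:

iperf -c xx.yy.94.250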
And here's the same test via vCider network:
novel@bob:~ %> iperf -s -B 192.168.87.4
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 192.168.87.4
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.87.4 port 5001 connected with 192.168.87.1 port 60977
[ ID] Interval       Transfer     Bandwidth
[ 4]  0.0-10.5 sec  11.4 MBytes  9.10 Mbits/sec
[ 5] local 192.168.87.4 port 5001 connected with 192.168.87.1 port 60978
[ 5]  0.0-10.5 sec  11.4 MBytes  9.05 Mbits/sec
[ 6] local 192.168.87.4 port 5001 connected with 192.168.87.1 port 60979
[ 6]  0.0-10.6 sec  11.4 MBytes  9.03 Mbits/sec
[ 4] local 192.168.87.4 port 5001 connected with 192.168.87.1 port 60980
[ 4]  0.0-10.4 sec  11.2 MBytes  9.03 Mbits/sec
[ 5] local 192.168.87.4 port 5001 connected with 192.168.87.1 port 60981
[ 5]  0.0-10.5 sec  11.4 MBytes  9.06 Mbits/sec
[ 6] local 192.168.87.4 port 5001 connected with 192.168.87.1 port 60982
[ 6]  0.0-10.4 sec  11.3 MBytes  9.05 Mbits/sec
[ 4] local 192.168.87.4 port 5001 connected with 192.168.87.1 port 60983
[ 4]  0.0-20.8 sec  11.2 MBytes  4.51 Mbits/sec
[SUM]  0.0-20.8 sec  22.4 MBytes  9.05 Mbits/sec
[ 5] local 192.168.87.4 port 5001 connected with 192.168.87.1 port 60984
[ 5]  0.0-10.5 sec  11.3 MBytes  9.03 Mbits/sec
It gives an average bandwidth of 8.5 Mbit/sec, which is about 91% of the original bandwidth; not bad, I believe.
For the sake of experimenting, I tried to emulate similar TAP networking using openvpn. I chose the quickest configuration possible and just ran openvpn on the server this way (no encryption, no extra options):

# openvpn --dev tap0
and on the client:
# openvpn --remote xx.yy.94.250 --dev tap0
As you might guess, openvpn runs in user space, and it tunnels traffic over the public interfaces on the boxes I use for tests.
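Note that such a minimal invocation does not assign any addresses to the tap interfaces, so they have to be configured by hand; a rough sketch of that step (the exact commands may differ from what I actually ran):

# on the server box
ip addr add 192.168.37.1/24 dev tap0
ip link set tap0 up

# on the client box
ip addr add 192.168.37.4/24 dev tap0
ip link set tap0 up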
And I conducted another iperf test:
novel@bob:~ %> iperf -s -B 192.168.37.4
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address 192.168.37.4
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.37.4 port 5001 connected with 192.168.37.1 port 53923
[ ID] Interval       Transfer     Bandwidth
[ 4]  0.0-10.5 sec  11.2 MBytes  8.97 Mbits/sec
[ 5] local 192.168.37.4 port 5001 connected with 192.168.37.1 port 53924
[ 5]  0.0-10.5 sec  11.1 MBytes  8.88 Mbits/sec
[ 6] local 192.168.37.4 port 5001 connected with 192.168.37.1 port 53925
[ 4] local 192.168.37.4 port 5001 connected with 192.168.37.1 port 53926
[ 6]  0.0-10.4 sec  11.1 MBytes  8.90 Mbits/sec
[ 4]  0.0-20.6 sec  10.8 MBytes  4.38 Mbits/sec
[SUM]  0.0-20.6 sec  21.8 MBytes  8.90 Mbits/sec
[ 5] local 192.168.37.4 port 5001 connected with 192.168.37.1 port 53927
[ 5]  0.0-10.4 sec  11.0 MBytes  8.87 Mbits/sec
[ 6] local 192.168.37.4 port 5001 connected with 192.168.37.1 port 53928
[ 6]  0.0-10.3 sec  10.9 MBytes  8.90 Mbits/sec
[ 4] local 192.168.37.4 port 5001 connected with 192.168.37.1 port 53929
[ 4]  0.0-10.5 sec  11.1 MBytes  8.88 Mbits/sec
[ 5] local 192.168.37.4 port 5001 connected with 192.168.37.1 port 53930
[ 5]  0.0-10.3 sec  10.9 MBytes  8.88 Mbits/sec
It gives an average bandwidth of 8.3 Mbit/sec, which is 89% of the original bandwidth. That's just a little slower than vCider Virtual Switch, which is very good for openvpn, but I have to note it's not quite a fair comparison:
- I don't use encryption in my openvpn setup
- A real-world openvpn configuration will be much more complex (see the config sketch after this list)
- I believe openvpn will scale significantly worse as the number of machines in the network grows, since openvpn works in client/server mode, while vCider works in peer-to-peer mode and uses its central service only to fetch metadata such as routing information
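To give an idea of the second point, here is a rough sketch of what a more realistic bridged openvpn server config might look like; the file paths, subnet, address pool, and cipher below are placeholders, not something taken from my test setup:

# server.conf (sketch)
dev tap0
server-bridge 192.168.37.1 255.255.255.0 192.168.37.50 192.168.37.100
ca ca.crt
cert server.crt
key server.key
dh dh1024.pem
cipher AES-256-CBC
keepalive 10 120
persist-key
persist-tun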
Also, it seems the vCider team considers the comparison with openvpn important, as they have a note on it in their FAQ -- be sure to check it out.
Support
It's a pleasure to note that the vCider team is very responsive. As I started testing the product at quite an early stage, I spotted some issues, even though none of them were critical, and it was great to see them all fixed in the next version.
Conclusion
vCider Virtual Switch is a product that behaves as expected, performs well, is completely documented, and is easy to use. The vCider team provides good support as well.
It seems that for relatively small setups within a single trusted environment, e.g. about 5-8 VMs within a single cloud provider, where traffic encryption and performance are not that critical, one could go with an openvpn setup. However, when either security or performance becomes important, or the size of the setup grows, vCider Virtual Switch would be a good choice.
I am looking forward to new releases, and I'm specifically curious about multicast support and an exposed API for managing networks.
Further reading
* vCider Home Page
* vCider Virtual Switch FAQ
* Wikipedia article on OSI model
* OpenVPN Home Page