Poor Man’s Proxmox Cluster


From: http://myatus.com/p/poor-mans-proxmox-cluster/


 

I had written this elsewhere before, but thought I would share it on my own site as well. The idea here is to create a Proxmox VE cluster with limited resources, in particular a lack of a private network / VLAN. We address this by creating a virtual private network using lightweight VPN software, namely Tinc.

You could use something else, such as OpenVPN or IPsec. The former is a bit on the heavy side for the task, whilst the latter may not have all the features we need. Specifically, Tinc gives us an auto-meshing network, packet switching, and multicast. Multicast is needed to create a Proxmox VE cluster, whilst the virtual switching ensures packets are eventually routed to the right server and VM.

 

Create an additional vmbr

By default there should already be a vmbr0 bridge for Proxmox. We will need to create (or modify) an additional bridge, which in this example we name vmbr1.

 

Warning: on many systems, the vmbr0 bridge is what makes your server accessible over the public network, so do not edit it unless absolutely required!

You also need to think about which private IP block you would like to use, and assign each Proxmox VE server an IP from within that block. For example, I use the range 192.168.14.0/23 (which is 192.168.14.1-192.168.15.254, with a netmask of 255.255.254.0). The 192.168.15.x addresses I assign to the Proxmox VE servers, whereas the 192.168.14.x addresses I assign to containers / VMs. Using that range, you would change the /etc/network/interfaces file as follows:
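
For example, a vmbr1 stanza like the following (shown for the first server; this is a sketch, so adjust the address on each server, and note the bridge has no physical ports since Tinc will attach to it later):

    auto vmbr1
    iface vmbr1 inet static
            address 192.168.15.1
            netmask 255.255.254.0
            bridge_ports none
            bridge_stp off
            bridge_fd 0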

You can force the changes using:
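
With the standard ifupdown tooling, bringing the new bridge up should be enough:

    ifup vmbr1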

You will need to do this on each server, taking care to select a different IP address. Keep it simple, start at 192.168.15.1, and increment the last number for each new server.

Tinc

The next step is to install Tinc and configure it in such a way that Proxmox VE can use multicast over the virtual private network.

So on the server, install Tinc with:
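
On a Debian-based Proxmox install:

    apt-get update
    apt-get install tinc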

Next, create a directory where the configuration for this VPN will reside (you can have multiple VPN configurations side by side this way):
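
Here the VPN is named “vpn”, so its configuration lives under /etc/tinc/vpn:

    mkdir -p /etc/tinc/vpn/hosts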

Next, we create a basic configuration, which tells Tinc to use a “switch” mode and what this server’s “name” is. For the sake of simplicity, use the hostname for the “name” (use uname -n to determine it):
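
A minimal /etc/tinc/vpn/tinc.conf would look something like this, with “server1” standing in for your actual hostname:

    Name = server1
    Mode = switch
    # ConnectTo is deliberately left out for now; see below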

The “ConnectTo” setting is currently left blank, but will be important once you have set up the other servers. More on this later.

Then we create a server-specific configuration. Note that the filename is the same as specified in “Name =” above.
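
So for a server named server1, create /etc/tinc/vpn/hosts/server1 containing something like this (123.4.5.6 is a placeholder):

    Address = 123.4.5.6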

Obviously you should replace the “Address” line with the actual public IP address of your server.

Now we need to create a public/private key. The private key will remain exactly that: private. The public key will be appended to the file we just created (/etc/tinc/vpn/hosts/server1), which will eventually be distributed to the other servers.
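
Tinc generates the key pair for the “vpn” network with:

    tincd -n vpn -K4096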

It will ask you to confirm two file locations. The defaults should be correct (the second one being the hosts file mentioned above, to which the public key is appended).

Now we need an up/down script, to do some post-configuration of the network when the VPN comes up (or goes away). This is a simple copy & paste, provided you have set up vmbr1 as outlined earlier:
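
The scripts below are a sketch based on the 192.168.14.0/23 example range; adjust them to your own setup. Create /etc/tinc/vpn/tinc-up with:

    #!/bin/sh
    # Bring the tunnel interface up and attach it to the vmbr1 bridge
    /sbin/ifconfig $INTERFACE up
    /sbin/brctl addif vmbr1 $INTERFACE
    # Do not filter multicast on the bridge (the cluster relies on it)
    echo 0 > /sys/class/net/vmbr1/bridge/multicast_snooping
    # Masquerade the private range behind the public interface (vmbr0)
    iptables -t nat -A POSTROUTING -s 192.168.14.0/23 -o vmbr0 -j MASQUERADE

And a matching /etc/tinc/vpn/tinc-down to undo this when the VPN goes away:

    #!/bin/sh
    iptables -t nat -D POSTROUTING -s 192.168.14.0/23 -o vmbr0 -j MASQUERADE
    /sbin/brctl delif vmbr1 $INTERFACE
    /sbin/ifconfig $INTERFACE down

Make both scripts executable:

    chmod +x /etc/tinc/vpn/tinc-up /etc/tinc/vpn/tinc-down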

What the above does is add the VPN tunnel interface to the vmbr1 bridge. Furthermore, it allows multicast messages over vmbr1. It also sets up masquerading, to allow a VM on a private IP to communicate successfully with the outside world; it will use the IP address of vmbr0 to do so.

Then, you need to tell Tinc that the contents in the “vpn” sub-directory should be started whenever it starts:
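
On Debian, Tinc reads the networks to start from /etc/tinc/nets.boot:

    echo "vpn" >> /etc/tinc/nets.boot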

You will need to do this on each server that needs to be part of the VPN. In addition, the files within the directory /etc/tinc/vpn/hosts/ need to be distributed to all servers (so that all servers have the files from the other servers). It’s simple enough to script this, if you want to go that route, but that’s beyond the scope here.

As mentioned earlier, you will need to edit /etc/tinc/vpn/tinc.conf and provide the name of another server in the “ConnectTo” setting that was previously left blank. Which server you choose is entirely up to you, and you could choose a different one for each server; remember that Tinc is auto-meshing, so it will connect all servers over time.

Note: without making that change to /etc/tinc/vpn/tinc.conf, Tinc will not know what to do so you will not have a working VPN as a result.

Once you have edited the configuration as outlined, (re)start Tinc using the following command:
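
On a sysvinit-based install that is:

    service tinc restart

On newer, systemd-based installs, restarting the per-network unit (e.g. systemctl restart tinc@vpn) should achieve the same.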

And test your network by pinging another node on its private IP, e.g.:
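
For example, from the first server to the second (using the example addresses above):

    ping -c3 192.168.15.2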

Note I use “-c3” here to limit the number of pings. If the VPN was not configured correctly, or a firewall is interfering, you may otherwise end up with a large number of “Destination Host Unreachable” errors.

Forcing the private IP address

We need to force Proxmox VE, or more specifically Corosync, to use the private IP addresses rather than the public IP addresses. This is because the multicast traffic needs to go over our virtual private network.

The easiest, but also the “dirtiest” method is to simply change the /etc/hosts, which I will outline here.

The first step is to ensure that the /etc/hosts file is read before attempting to do a full DNS lookup:
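
On Debian this is controlled by /etc/nsswitch.conf (and the older /etc/host.conf); “files” should appear before “dns”, which is normally already the default:

    # /etc/nsswitch.conf
    hosts: files dns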

Next, edit the /etc/hosts file by commenting out the original line and adding our own:
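
On server1 the result would look something like this, with 123.4.5.6 and server1.example.com as placeholders for your real public IP and hostname (keep any extra aliases that were on the original line):

    #123.4.5.6    server1.example.com server1
    192.168.15.1  server1.example.com server1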

Make sure that the private IP address matches the one you assigned to vmbr1 (double check with ifconfig vmbr1).

Again, this is a “dirty” method and you may want to use your own DNS server instead that resolves IPs for a local network (say, “server1.servers.localnet”).

At this stage, reboot the server to ensure the changes get picked up and everything works as expected (that is, your server comes back up online – hmm!).

Create the cluster

If you do not yet have a cluster configured, you need to create one first. So pick a particular server that you would like to consider as a “main server” and perform the following:
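
    pvecm create <arbitrary-name>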

Where <arbitrary-name> is something of your own choosing. Keep the name short and simple, without spaces or other funny characters.

The “main server” is a loose term really, as any server within the cluster can manage other servers. But use it as a consistent starting point for adding other servers to the cluster.

You can check if things are working correctly with:
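
    pvecm status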

In particular, you’d want to make sure that the “Node addresses:” portion shows the private IP address assigned to vmbr1.

Adding servers to the cluster

Adding a server (node) to the cluster will need a little preparation. Specifically, because we use private IP addresses for the cluster, we need to force other nodes to do the same when trying to contact another node. In other words, if server1 wants to contact server2, it should use the 192.x range instead of the public IP address.

So, based on the above example, on server1 we need to add a line to the /etc/hosts like this:
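
With 192.168.15.2 and server2.example.com as placeholders for the second server’s private IP and hostname:

    echo "192.168.15.2 server2.example.com server2" >> /etc/hosts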

Note the double “>>” brackets. If you use a single “>” one, you overwrite the entire file with just that line. You’ve been warned.

And on server2, we need to make sure server1 can be contacted using its private IP as well, so on that server, we perform:
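
Again with placeholder names:

    echo "192.168.15.1 server1.example.com server1" >> /etc/hosts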

All of this can be made much fancier with your own DNS server and bindings, but again, this is beyond the scope and goes on the assumption you don’t mind doing this for the 2, 5 or 10 servers or so you may have. If you have a few hundred, then I wouldn’t expect you to be looking at a “Poor Man’s” setup.

On the server that you will be adding to the cluster, make sure that you can successfully ping that private IP address of the “main server”.

If tested OK, then still on that server (that is, the one that isn’t yet part of the cluster), type:
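
    pvecm add server1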

Where “server1” is the “main server” (the one on which you first created the cluster). It will ask you for the root SSH password for server1, and then does its thing with the configuration.

Note: If you have disabled password-based root logins using SSH, you may have to temporarily enable it. Using SSH keys would be a preferred method over passwords.

After this has been done, the node should automatically appear in your web-based GUI, and this can be verified from the CLI using:
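
    pvecm nodes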

If the nodes show up in the “pvecm nodes” command and GUI, then you have successfully created the cluster.

Note: A note about a 2-node cluster and quorum can be found here.

Containers and VMs

You can now create containers and VMs that can be migrated between the nodes.

You can either assign the private IP address directly (venet, only on OpenVZ containers) or as a network device (veth) attached to vmbr1.

The private IP address should be within the range of your specified netmask on vmbr1. So going by the above example of using 192.168.14.0/23, that’s anything between 192.168.14.1 and 192.168.15.254. Make sure the IP isn’t already used by another VM or a node (see initial notes, re 192.168.14.x for VMs).

If you fire up the VM, its private IP address should be ping-able from any server, and from within the container / VM, you can ping any private as well as public IP address (the latter thanks to masquerading configured with the tinc-up script). If this is not the case, the network configuration was not done correctly.

Final notes

You should now have at least one container / VM with a private IP address. That is all well and good if the VM doesn’t need to be accessed from the outside world, but if you want to give it such access, you will need to use NAT on the server. This instructs the node that incoming traffic on a particular port needs to be forwarded to a particular VM.

For example, TCP port 25 on 123.4.5.6 is forwarded to VM on IP 192.168.14.1:
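
A sketch of such a rule using iptables DNAT, run on the node hosting the VM (123.4.5.6 again being that node’s public IP):

    iptables -t nat -A PREROUTING -p tcp -d 123.4.5.6 --dport 25 -j DNAT --to-destination 192.168.14.1:25

To make the rule persistent, you could add it to the tinc-up script or your firewall configuration.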

Note that this is just a simple guide to help you get started. More importantly, it doesn’t include any basic security measures such as a firewall (there are other articles about a firewall for Proxmox on this site [here and here], which I will update when I can).

 


Two Brothers


Once upon a time, two brothers who lived on adjoining farms
fell into conflict. It was the first serious rift in 40 years of
farming side-by-side, sharing machinery, and trading labor and goods as
needed, without a hitch. Then the long collaboration fell apart. It
began with a small misunderstanding, and it grew into a major
difference, and finally, it exploded into an exchange of bitter
words, followed by weeks of silence. One morning, there was a knock
on John’s door. He opened it to find a man with a carpenter’s toolbox.
“I’m looking for a few days’ work,” he said. “Perhaps you would have
a few small jobs here and there I could help with?”

“Yes,” said the older brother. “I do have a job for you. Look across the creek at that farm. That’s my neighbor. In fact, it’s my younger brother!
Last week, there was a meadow between us. He recently took his
bulldozer to the river levee, and now there is a creek between us.
Well, he may have done this to spite me, but I’ll do him one better.
See that pile of lumber by the barn? I want you to build me a fence,
an 8-foot fence, so I won’t need to see his place, or his face,
anymore.”
The carpenter said, “I think I understand the situation. Show me the
nails, and the post-hole digger, and I’ll be able to do a job that
pleases you.” The older brother had to go to town, so he helped the
carpenter get the materials ready and then he was off for the day. The
carpenter worked hard all that day — measuring, sawing, and nailing.
About sunset, when the farmer returned, the carpenter had just
finished his job. The farmer’s eyes opened wide, his jaw dropped.
There was no fence there at all! It was a bridge… a bridge that
stretched from one side of the creek to the other! A fine piece of
work, with handrails,
and all! And, the neighbor, his younger brother, was coming toward them, his
hand outstretched… “You are quite a fellow to build this bridge, after all I’ve said and done.”
The two brothers stood at each end of the bridge, and then they met in
the middle, taking each other’s hand. They turned to see the carpenter
hoist his toolbox onto his shoulder. “No, wait! Stay a few days. I’ve
a lot of other projects for you,” said the older brother.

“I’d love to stay on,” the carpenter said, “but I have many more bridges
to build. Just remember this…”

1. God won’t ask what kind of car you drove, but He’ll ask how many
people you helped get where they needed to go.

2. God won’t ask the square footage of your house, but He’ll ask how
many people you welcomed into your home.

3. God won’t ask about the clothes you had in your closet, but He’ll
ask how many you helped to clothe.

5. God won’t ask how many friends you had, but He’ll ask how many
people you were a friend to.

5. God won’t ask in what neighborhood you lived, but He’ll ask how you
treated your neighbors.

6. God won’t ask about the color of your skin, but He’ll ask about the
content of your character.

7. God won’t ask why it took you so long to seek Salvation, but He’ll
lovingly take you to your mansion in heaven, and not to the gates of
Hell.

8. God won’t ask how many people you forwarded this to, but He’ll ask
why you hesitated to pass it on to your friends.
