Poor Man’s Proxmox Cluster


From: http://myatus.com/p/poor-mans-proxmox-cluster/


 

I had written this elsewhere before, but thought I would share it on my own site as well. The idea here is to create a Proxmox VE cluster with limited resources, in particular a lack of a private network / VLAN. We address this by creating a virtual private network using a lightweight VPN provider, namely Tinc.

You could use something else, like OpenVPN or IPsec. The former is a bit on the heavy side for the task, whilst the latter may not have all the features we need. Specifically, Tinc allows us to create an auto-meshing network, switch packets between nodes, and use multicast. Multicast is needed to create a Proxmox VE cluster, whilst the virtual switching ensures packets will eventually be routed to the right server and VM.

 

Create an additional vmbr

By default there should already be a vmbr0 bridge for Proxmox.  We will need to create – or modify – an additional vmbr, which in this example we name vmbr1.

 

Warning: on many systems, vmbr0 bridge is used to make your server accessible over the public network – so do not edit that unless absolutely required!

You also need to think of what private IP block you would like to use, and assign each Proxmox VE server an IP from within that private IP block. For example, I use the IP range 192.168.14.0/23 (which is 192.168.14.1-192.168.15.254, with a netmask of 255.255.254.0). The 192.168.15.x range I assign to the Proxmox VE servers, whereas the 192.168.14.x range I assign to containers / VMs. Using that IP range, you would change the /etc/network/interfaces file as follows:
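
A sketch of the vmbr1 stanza for the first server (leave your existing vmbr0 stanza untouched, and adjust the address on each server):

    # /etc/network/interfaces (excerpt) - private bridge used for the VPN
    auto vmbr1
    iface vmbr1 inet static
        address 192.168.15.1
        netmask 255.255.254.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0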

You can force the changes using:
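
    ifup vmbr1    # or restart networking / reboot if you prefer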

You will need to do this on each server, taking care to select a different IP address. Keep it simple, start at 192.168.15.1, and increment the last number for each new server.

Tinc

The next step would be installing Tinc and configuring it in such a way that Proxmox VE can use multicast over that virtual private network.

So on the server, install Tinc with:
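
    apt-get install tinc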

Next, create a directory where the configuration for the VPN will reside (you can have multiple such configurations):
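
    mkdir -p /etc/tinc/vpn/hosts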

Next, we create a basic configuration, which tells Tinc to use a “switch” mode and what this server’s “name” is. For sake of simplicity, use the hostname for the “name” (use uname -n to determine it):
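
A minimal sketch of /etc/tinc/vpn/tinc.conf, assuming this server's hostname (and therefore its Tinc name) is server1:

    # /etc/tinc/vpn/tinc.conf
    Name = server1
    Mode = switch
    #ConnectTo = <name-of-another-server>   # filled in later, see below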

The “ConnectTo” line is left out (commented out) for now, but will be important once you have set up the other servers. More on this later.

Then we create a server-specific configuration. Note that the filename is the same as specified in “Name =” above.
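
A sketch of /etc/tinc/vpn/hosts/server1, with 123.4.5.6 as a placeholder public IP:

    # /etc/tinc/vpn/hosts/server1
    Address = 123.4.5.6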

Obviously you should replace the “Address” line with the actual public IP address of your server.

Now we need to create a public/private key. The private key will remain exactly that: private. The public key will be appended to the file we just created (/etc/tinc/vpn/hosts/server1), which will eventually be distributed to the other servers.
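
    tincd -n vpn -K4096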

It will ask you to confirm two file locations. The defaults should be correct (the second one being the hosts file mentioned above, /etc/tinc/vpn/hosts/server1).

Now we need an up/down script, to do some post configuration of the network when the VPN comes up (or goes away). This is a simple copy & paste, provided you have setup vmbr1 as outlined earlier:
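
A sketch of the two scripts, assuming the 192.168.14.0/23 range from earlier. Tinc sets $INTERFACE when it runs these scripts; the exact tool locations and the multicast_querier knob may differ per system, so treat this as a starting point.

/etc/tinc/vpn/tinc-up:

    #!/bin/sh
    # Bring the VPN interface up and attach it to the vmbr1 bridge
    ifconfig $INTERFACE 0.0.0.0 up
    brctl addif vmbr1 $INTERFACE
    # Allow multicast (needed by corosync) across the bridge
    echo 1 > /sys/class/net/vmbr1/bridge/multicast_querier
    # Masquerade the private range so VMs can reach the outside world via vmbr0
    iptables -t nat -A POSTROUTING -s 192.168.14.0/23 -o vmbr0 -j MASQUERADE

/etc/tinc/vpn/tinc-down:

    #!/bin/sh
    iptables -t nat -D POSTROUTING -s 192.168.14.0/23 -o vmbr0 -j MASQUERADE
    brctl delif vmbr1 $INTERFACE
    ifconfig $INTERFACE down

Make both scripts executable:

    chmod +x /etc/tinc/vpn/tinc-up /etc/tinc/vpn/tinc-down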

What the above does is add the VPN tunnel interface to the vmbr1 bridge. Furthermore, it allows multicast messages over vmbr1. It also sets up masquerading, so that a VM on a private IP can communicate successfully with the outside world – it will use the IP address of vmbr0 to do so.

Then, you need to tell Tinc that the contents in the “vpn” sub-directory should be started whenever it starts:
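
    echo "vpn" >> /etc/tinc/nets.boot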

You will need to do this on each server that needs to be part of the VPN. In addition, the files within the directory /etc/tinc/vpn/hosts/ need to be distributed to all servers (so that all servers have the files from the other servers). It's simple enough to script this, if you want to go that route, but that's beyond the scope here.

As mentioned earlier, you will need to edit /etc/tinc/vpn/tinc.conf and provide the name of another server in the “ConnectTo” setting that was previously left out. Which server you choose is entirely up to you, and you could choose a different one for each server – remember that Tinc is auto-meshing, so it will connect all servers over time.

Note: without making that change to /etc/tinc/vpn/tinc.conf, Tinc will not know what to do so you will not have a working VPN as a result.

Once you have edited the configuration as outlined, (re)start Tinc using the following command:
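
    service tinc restart    # on Debian-based systems; /etc/init.d/tinc restart also works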

And test your network by pinging another node on its private IP, ie:
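
    ping -c3 192.168.15.2    # assuming the second server was given 192.168.15.2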

Note I use the “-c3” here, to limit the number of pings. If the VPN was not configured correctly, or a firewall is interfering, you may otherwise end up with a large number of “Destination Host Unreachable” errors.

Forcing the private IP address

We need to force Proxmox VE, or more specifically Corosync, to use the private IP addresses rather than the public IP address. This is because the multicast traffic needs to travel over our virtual private network.

The easiest, but also the “dirtiest” method is to simply change the /etc/hosts, which I will outline here.

The first step is to ensure that the /etc/hosts file is read before attempting to do a full DNS lookup:
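
On Debian-based systems this is controlled by the hosts line in /etc/nsswitch.conf; make sure "files" (i.e. /etc/hosts) is listed before "dns" (it usually already is):

    # /etc/nsswitch.conf (excerpt)
    hosts: files dns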

Next edit the /etc/hosts file, by commenting out the original line, and adding our own:
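
A sketch for server1, where server1.example.com and the public IP 123.4.5.6 are placeholders for your own values:

    # /etc/hosts on server1 (excerpt)
    127.0.0.1      localhost
    #123.4.5.6     server1.example.com server1
    192.168.15.1   server1.example.com server1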

Make sure that the private IP address matches the one you assigned to vmbr1 (double check with ifconfig vmbr1).

Again, this is a “dirty” method and you may want to use your own DNS server instead that resolves IPs for a local network (say, “server1.servers.localnet”).

At this stage, reboot the server to ensure the changes get picked up and everything works as expected (that is, your server comes back up online – hmm!).

Create the cluster

If you do not yet have a cluster configured, you need to create one first. So pick a particular server that you would like to consider as a “main server” and perform the following:
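
    pvecm create <arbitrary-name>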

Where <arbitrary-name> is something of your own choosing. Keep the name short and simple, without spaces or other funny characters.

The “main server” is a loose term really, as any server within the cluster can manage other servers. But use it as a consistent starting point for adding other servers to the cluster.

You can check if things are working correctly with:
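
    pvecm status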

In particular, you’d want to make sure that the “Node addresses:” portion is the private IP address as on vmbr1.

Adding servers to the cluster

Adding a server (node) to the cluster will need a little preparation. Specifically, because we use private IP addresses for the cluster, we need to force other nodes to do the same when trying to contact another node. In other words, if server1 wants to contact server2, it should use the 192.x range instead of the public IP address.

So, based on the above example, on server1 we need to add a line to the /etc/hosts like this:
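
For example, assuming the second server is named server2 with private IP 192.168.15.2:

    echo "192.168.15.2 server2" >> /etc/hosts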

Note the double “>>” brackets. If you use a single “>” one, you overwrite the entire file with just that line. You’ve been warned.

And on server2, we need to make sure server1 can be contacted using its private IP as well, so on that server, we perform:
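
    echo "192.168.15.1 server1" >> /etc/hosts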

All of this can be made much fancier with your own DNS server and bindings, but again, this is beyond the scope and goes on the assumption you don’t mind doing this for the 2, 5 or 10 servers or so you may have. If you have a few hundred, then I wouldn’t expect you to be looking at a “Poor Man’s” setup.

On the server that you will be adding to the cluster, make sure that you can successfully ping that private IP address of the “main server”.

If tested OK, then still on that server (thus the one that isn’t yet part of the cluster), type:
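
    pvecm add server1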

Where “server1” is the “main server” (the one on which you first created the cluster). It will ask you for the root password for SSH for server1, and then does its thing with configuration.

Note: If you have disabled password-based root logins using SSH, you may have to temporarily enable it. Using SSH keys would be a preferred method over passwords.

After this has been done, the node should automatically appear in your web-based GUI, and can be verified from the CLI using:
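
    pvecm nodes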

If the nodes show up in the “pvecm nodes” command and GUI, then you have successfully created the cluster.

Note: A note about a 2-node cluster and quorum can be found here.

Containers and VMs

You can now create containers and VMs that can be migrated between the nodes.

You can either assign the private IP address directly (venet, only on OpenVZ containers) or as a network device (veth) attached to vmbr1.

The private IP address should be within the range of your specified netmask on vmbr1. So going by the above example of using 192.168.14.0/23, that’s anything between 192.168.14.1 and 192.168.15.254. Make sure the IP isn’t already used by another VM or a node (see initial notes, re 192.168.14.x for VMs).

If you fire up the VM, its private IP address should be ping-able from any server, and from within the container / VM, you can ping any private as well as public IP address (the latter thanks to masquerading configured with the tinc-up script). If this is not the case, the network configuration was not done correctly.

Final notes

You should now have at least one container / VM with a private IP address. That's all well and good if this VM doesn't need to be accessed from the outside world, but if you want to give it such access, you will need to use NAT on the server. This will instruct the node that incoming traffic on a particular port needs to be forwarded to a particular VM.

For example, TCP port 25 on 123.4.5.6 is forwarded to VM on IP 192.168.14.1:
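
A sketch using iptables on the node that holds the public IP (123.4.5.6 being the placeholder public address used earlier):

    iptables -t nat -A PREROUTING -d 123.4.5.6 -p tcp --dport 25 -j DNAT --to-destination 192.168.14.1:25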

Note that this is just a simple guide to help you get started. More importantly, it doesn’t include any basic security measures such as a firewall (there are other articles about a firewall for Proxmox on this site [here and here], which I will update when I can).

 


Poor man’s VPN with SSH | Setting up an SSH tunnel with PuTTY


Article #1 From: http://fnord.no/sysadmin/security/vpn-with-ssh
Article #2 From: http://realprogrammers.com/how_to/set_up_an_ssh_tunnel_with_putty.html


Poor man’s VPN with SSH

SSH has port forwarding, dynamic forwarding, and now also IP forwarding. This allows you to create connections out through a firewall, and allow other connections in and out through your SSH-connection, originating at your SSH server. Read on for a few examples of use, and make sure you have the blessing of your security team.

Local forwarding

With local forwarding, you open a local port, and forward it to another host and port from the remote server.

Often used with forwarding to single webservers, proxies, Citrix ICA servers, VNC servers, and Windows Remote Desktop (RDP).

Example with local forwarding

Connect to a server at work, forwarding a connection from port 10080 on my laptop to important.server.example.org.
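
A sketch of the command, where user and work.example.org are placeholders and port 80 is assumed to be the web server port on the target:

    ssh -L 10080:important.server.example.org:80 user@work.example.org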

I can then open my browser to http://localhost:10080, and do my stuff. Some web applications, though, can be tricky enough to expect a hostname, and for that you need to edit /etc/hosts or equivalent, or you can read on for dynamic forwarding.

Remote forwarding

With remote forwarding, you open a listening port on the remote side, and forward it to another host and port from the local server.

Example with remote forwarding

One useful scenario is to help family members who have PC trouble. For instance: Mom has a problem, calls me, and wonders if I can help, and then clicks an icon on her desktop that does the following thing:

  • Starts Remote Desktop or VNC
  • Connects to my SSH server, with remote forwarding from <vncport1> on the SSH server, to localhost:<vncport1> on her PC.

What I do (the two commands are sketched after these steps) is:

  • Connect to my SSH server, with local forwarding from <vncport1> on my laptop, to <vncport1> on the SSH server, which again connects through the remote forwarding to localhost:<vncport1> on mom’s PC.
  • Start a VNC client, and connect to my localhost:5801 on my laptop. This port is now connected through my ssh session, to mom’s ssh session, to her PC.
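
A sketch of the two commands, assuming the SSH server is sshserver.example.org and the VNC port is 5801 (substitute your own <vncport1>):

    # Mom's side: expose her local VNC port on the SSH server
    ssh -R 5801:localhost:5801 mom@sshserver.example.org

    # My side: pull that port from the SSH server down to my laptop
    ssh -L 5801:localhost:5801 me@sshserver.example.org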

Dynamic forwarding with SOCKS

OpenSSH’s client has the ability to do dynamic forwarding to act as a local SOCKS server, for both SOCKS4 and SOCKS5.

Many programs have built-in SOCKS support, so you can enable this and configure them to use localhost:<socksport> as a SOCKS proxy.

For programs with no built-in SOCKS support, you can use “tsocks”, to intercept networking calls, and work through the SOCKS server.

Example with dynamic forwarding
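
A sketch of the command, where the username is a placeholder and 1080 is the conventional SOCKS port used in the Firefox example below:

    ssh -D 1080 user@myserver.example.com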

Then I configure Firefox, for instance, to use the SOCKS server at localhost port 1080, and all my web connections will go through the SSH connection, and appear to be initiated from myserver.example.com. Much easier than with local forwarding, and works great for remote administration of things from home where you use different hostnames and ports, and perhaps also unroutable IP addresses.

IP forwarding with TUN

Now we’re talking. This is the real thing, we get IP forwarding through a point-to-point interface. This exists only in newer versions of OpenSSH, and is not very well documented yet. Unfortunately, this also includes this document until I have more time to research further.

Example with IP forwarding
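
A sketch of the command, assuming "PermitTunnel yes" on the server and root (or suitably privileged) logins on both ends; the hostname is a placeholder:

    ssh -w 0:1 root@myserver.example.com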

Where ‘0’ is the local device tun0, and ‘1’ refers to the remote device tun1. On each side, one needs to set an IP address for host-to-host contact, and add routing and perhaps also NAT for network access.
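
For example, giving the tunnel endpoints a pair of point-to-point addresses (10.0.0.1 and 10.0.0.2 are arbitrary choices):

    # On the local side:
    ifconfig tun0 10.0.0.1 pointopoint 10.0.0.2
    # On the remote side:
    ifconfig tun1 10.0.0.2 pointopoint 10.0.0.1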

Beware, as careless use of IP forwarding between sites may have a serious impact on network security, and may make others very angry if used without permission.


realprogrammers.com

Setting up an SSH tunnel with PuTTY

What follows is how to set up an SSH tunnel using PuTTY, with the MySQL port (3306) forwarded as an example. After completing this how-to you’ll have port 3306 on your local machine listening and forwarding to your remote server’s localhost on port 3306. Thus, effectively, you can connect to the remote server’s MySQL database as though it were running on your local box.

Prerequisites

This how-to assumes your MySQL installation has enabled listening on a TCP/IP connection. Listening only on 127.0.0.1 is sufficient (and the default as of MySQL 4.1). Although beyond the scope of this how-to, you can verify that the server is listening by running netstat on the server, and by checking the bind-address and port settings in your my.cnf.

To achieve the same with PostgreSQL, simply use PostgreSQL's default port, 5432; you can test with psql against 127.0.0.1, and see the PostgreSQL manual for configuration pointers.
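
A quick check on the server might look like this (assumes netstat is available):

    # Confirm MySQL is listening on 127.0.0.1:3306
    netstat -tln | grep 3306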

Set up the tunnel

Create a session in PuTTY and then select the Tunnels tab in the SSH section. In the Source port text box enter 3306. This is the port PuTTY will listen on, on your local machine. It can be any standard Windows-permitted port. In the Destination field immediately below Source port, enter 127.0.0.1:3306. This means, from the server, forward the connection to IP 127.0.0.1 port 3306. MySQL by default listens on port 3306 and we’re connecting directly back to the server itself, i.e. 127.0.0.1. Another common scenario is to connect with PuTTY to an outward-facing firewall, and then your Destination might be the private IP address of the database server.
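
For reference, the same tunnel can also be created from the command line with PuTTY’s companion tool plink (a sketch; the username and hostname are placeholders):

    plink -ssh -L 3306:127.0.0.1:3306 user@your.server.example.com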

Putty Tunnel

Add the tunnel

Click the Add button and the screen should look like this,

Putty Tunnel Added

Save the session

Unfortunately PuTTY does not provide a handy ubiquitous Save button on all tabs so you have to return to the Session tab and click Save,

Putty Session

Open the session

Click Open (or press Enter), login, and enjoy!

Here for reference is an example connection using MySQL Administrator going to localhost: note the Server Host address of 127.0.0.1, which will be transparently forwarded.

Mysql Administrator Login
