How to mount a block device over network via nbd (Re: Alternatives to sshfs?)

Originally Posted by alaios
Could you please help me offload a bit my cpu by pushing more my network?
The following procedure mounts the filesystem on the block device “/dev/sdz” of the machine “server” onto the directory “/mnt” of the machine “client”. This is done via nbd (network block device). As a result, the filesystem logic runs on the machine “client”. I doubt that there will be a considerable speed advantage, but if your “server” is very slow and your client and network are quite fast, it’s possible. You will have to replace strings like “/dev/sdz”, “/mnt” and “server” with the actual names.

Your ssh daemon should already be running on the server; otherwise you couldn’t have used sshfs. Install the package “nbd” on both machines. Log in on “server” as root, then:
Code:
$ nbd-server 127.0.0.1:60000 /dev/sdz
Add “-r” to the command line to enforce read-only access (with this, you can later only mount the device read-only). Caution: this command allows any local user on “server” to access this block device.
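For example, with the same device and port as above, a read-only export would be started like this:
Code:
$ nbd-server -r 127.0.0.1:60000 /dev/sdz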

On the client, connect to the server with ssh (as ordinary user, if you want):
Code:
$ ssh -c blowfish-cbc -L 127.0.0.1:60000:127.0.0.1:60000 server
The blowfish cipher is said to be quite fast, hence not eating too much CPU power on the server and client. Caution: This command allows any local user on “client” to access the block device on “server”. Don’t close this ssh-connection while the file system is mounted!

Now login as root on “client”, and mount the device with:
Code:
$ modprobe nbd
$ nbd-client localhost 60000 /dev/nbd0
$ mount /dev/nbd0 /mnt
Your filesystem should now appear in /mnt, subject to access restrictions and privileges based on the user accounts on the machine “client”. In a multi-user environment, this might lead to unexpected results and even security risks! On the other hand, if you’re the only user of “client” and “server”, there shouldn’t be any serious problems. And if your machines are connected via a secure link, you can skip the ssh tunnel entirely (thus saving more CPU power) and connect the nbd-server and nbd-client directly.
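For instance, a direct connection over a trusted link might look like this (same device, port and hostnames as above; note that the block device is then exposed unencrypted to anyone who can reach that port). On “server”:
Code:
$ nbd-server 60000 /dev/sdz
And on “client”:
Code:
$ modprobe nbd
$ nbd-client server 60000 /dev/nbd0
$ mount /dev/nbd0 /mnt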

To unmount everything:
Code:
$ umount /mnt
$ nbd-client -d /dev/nbd0
Now you can close the ssh connection.

Yarny


Setting up the HekaFS on Fedora


 

Install:

Use the following command to install on all server nodes:
yum -y install glusterfs glusterfs-server glusterfs-fuse hekafs

On the client, use the following command to install:
yum -y install glusterfs glusterfs-fuse hekafs

Start the glusterd and hekafsd daemons on each server node with the following commands:
service glusterd start
service hekafsd start


 


Before setup:

You should use a storage drive separate from the one holding the OS. This keeps the brick fast when it is heavily accessed, and if a drive does wear out you can just pop another one in.
If that cannot be done, create a backing file with dd (dd if=/dev/zero of=hekafs_loop1.iso bs=1024M count=32 creates a nice 32 GB empty file), format it with a filesystem for use (I recommend XFS: mkfs.xfs /mnt/hekafs_loop_file/hekafs_loop1.iso), add a loop mount entry to fstab (/mnt/hekafs_loop_file/hekafs_loop1.iso /mnt/heka_brick1 xfs loop 0 0) and mount it. HekaFS should then be able to use it; see the sketch below.
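A minimal sketch of that loop-file setup, run as root, assuming the directories /mnt/hekafs_loop_file and /mnt/heka_brick1 used above:

# directories for the backing file and the brick mount point
mkdir -p /mnt/hekafs_loop_file /mnt/heka_brick1
# create the 32 GB zero-filled backing file
dd if=/dev/zero of=/mnt/hekafs_loop_file/hekafs_loop1.iso bs=1024M count=32
# put an XFS filesystem on it before the first mount
mkfs.xfs /mnt/hekafs_loop_file/hekafs_loop1.iso
# fstab entry so the loop mount comes back after a reboot
echo "/mnt/hekafs_loop_file/hekafs_loop1.iso /mnt/heka_brick1 xfs loop 0 0" >> /etc/fstab
# mount the brick now
mount /mnt/heka_brick1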

The /etc/ssh/sshd_config file needs to allow root ssh access for HekaFS to work.
Set “PermitRootLogin” to “yes”.
Key-based logins must also work: “PubkeyAuthentication yes”.
At least one of the storage bricks (call it the main access machine) needs password-less root access to ALL other storage bricks via SSH keys. This is why the storage bricks normally form one standalone group and the clients another. I use one machine whose key is in the authorized_keys file on all the other bricks, and I use only that machine to set up the system. A better, but more laborious, setup (time consuming until scripted) is one where EVERY machine can access every other machine.
After all that, you must make a one-time connection from the main machine to each of the other bricks so that the SSH host key is confirmed at the yes/no prompt; see the sketch below.
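A rough sketch of that key setup, run as root on the main access machine; the brick hostnames heka2 and heka3 are only placeholders:

# generate a key pair for root (accept the default path; an empty passphrase allows unattended use)
ssh-keygen -t rsa
# install the public key in root's authorized_keys on every other brick
ssh-copy-id root@heka2
ssh-copy-id root@heka3
# one-time connections so each host key gets accepted at the yes/no prompt
ssh root@heka2 true
ssh root@heka3 true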


 

Setup:

HekaFS can be partially configured through the web console, which is reached on port 8080 of the machine where HekaFS is installed.

Under the Manage Servers link, you can type in the other servers holding storage “bricks” that you want to combine into the storage cluster.

Under the Manage Volumes link, you can either A: check the mounts that were found, or B: specify the mounts under the “Add Directories” header. Check the ones you want and specify the Volume Type.
Types:
Plain, Replicated, Striped, SSL
As of right now, this interface does not allow a combined Replicated+Striped type; it should in the future.
Choose Replicated.

In the next box, type in the number of replicas. Type 2 for the minimum.
This means that two copies of the data will exist on different machines in the cluster, in case one machine fails.

Give a name to the new Volume in the Volume ID.
“General_Use”, “Office Docs”, “IT Programs”, “Backups”, whatever fits.

Click Provision

Your volume is created. Now onto WHO can use it.

Tenants are logins to the storage cluster. Each Tenant can have different permissions to access different Volumes.
Names and passwords are straightforward.
The UIDs and GIDs are up to you; I recommend a range like 10000 to 10500 for each.

Once the Tenants are set up, you must click the Volumes link next to each one and tell HekaFS which volumes can be accessed by that Tenant.

Client usage of the newly setup volumes:

Pop this in a script or in a start-up file: “sudo hfs_mount heka1 General_Use ph ph /mnt/heka_client_storage/”
It reads as follows:
mount command | server | Volume | UserName | Password | mount point on the client system
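As a sketch, a tiny start-up script built around that exact command (the server name, volume, credentials and mount point are the example values from above) could look like:

#!/bin/sh
# mount the HekaFS volume if it is not already mounted
MOUNTPOINT=/mnt/heka_client_storage/
if ! mountpoint -q "$MOUNTPOINT"; then
    sudo hfs_mount heka1 General_Use ph ph "$MOUNTPOINT"
fi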

Expand Volume:

To expand, add two new bricks to this configuration and install them as described above, stopping at the end of the “Add bricks in cluster” section. Then open a terminal on one of the bricks you configured; we now add the two new bricks to our volume volumeTest.
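Roughly, assuming the addresses and brick paths shown in the volume info below, and that the new nodes are not yet part of the trusted pool, the commands would be:

# make the new nodes known to the cluster (skip if already added through the GUI)
gluster peer probe 10.0.0.3
gluster peer probe 10.0.0.4
# add the new brick pair to the replica-2 volume
gluster volume add-brick volumeTest 10.0.0.3:/hekafs-exports 10.0.0.4:/hekafs-exports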

Check the bricks and volume with gluster volume info (sample output below).

After expanding or shrinking a volume (using the add-brick and remove-brick commands respectively), you need to rebalance the data among the servers.
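For this volume the rebalance would look something like:

# move existing data onto the new bricks
gluster volume rebalance volumeTest start
# check how far it has gotten
gluster volume rebalance volumeTest status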

Now we have a Distributed-Replicate volume.

gluster volume info

Volume Name: volumeTest
Type: Distributed-Replicate
Status: Created
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.0.0.1:/hekafs-exports
Brick2: 10.0.0.2:/hekafs-exports
Brick3: 10.0.0.3:/hekafs-exports
Brick4: 10.0.0.4:/hekafs-exports
