BIPEDU

my ideas in action

incremental backup of Freenas ZFS volume on external drive

When you have a FreeNAS box, you sometimes need to make a backup of an entire volume.

I have an external HDD (connected over USB). I want an exact replica of the entire FreeNAS dataset so that I can store it in a remote location. The FreeNAS GUI lets you do this, but only as a scheduled replication task. That is nice if you have another NAS connected to the network; in my case I have just an external HDD (1TB) and I want to make a backup from time to time.

First connect the external HDD to the FreeNAS USB ports. Plug in the cable and then, from the GUI, go to Storage –> Volumes –> Import Volume.

In the new pop-up select the ZFS volume from your HDD. If you do not have a volume on it yet, you can create one from the GUI.

Now you have to open a terminal, because the next commands are not possible from the GUI.

You can open the Shell from the FreeNAS GUI (left side menu) or use SSH to access the FreeNAS terminal directly.

In the FreeNAS terminal, first check that you can see the NAS volumes correctly, including the newly inserted external HDD volume.

zfs list –> to see all volumes available

then type :

zfs list -t snapshot –> to see what snapshots you have on your system.

The first time you make this transfer you will need to copy a lot of data, since the external HDD is empty. Do not worry: the next time you make a backup only the changed data will be copied, so it will be much faster.

So the first time, do:

zfs snapshot -r data1/bsdshare@backup # create a snapshot
zfs send -R data1/bsdshare@backup | zfs receive -vF bkp_1T # transfer it over

where:

zfs snapshot = create a snapshot of the data

-r = snapshot is recursive

data1/bsdshare = my volume is "data1" and the dataset is "bsdshare"

backup = an arbitrary name for my snapshot (you can also see it in the GUI under Storage –> Snapshots)

zfs send = copy the dataset

zfs receive = paste the dataset

bkp_1T = the name I chose for my external HDD volume (choose any name you like)

The zfs send/receive command takes a very long time (hours!) since it now copies the entire dataset. It may be longer or shorter depending on the total amount of data, HDD speed, USB interface, etc.
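Once it finishes, you can optionally confirm from the terminal that the data and the snapshot really landed on the external pool (a quick check, using the names from this example):

zfs list -r bkp_1T               # the copied dataset should show up under bkp_1T
zfs list -t snapshot -r bkp_1T   # the @backup snapshot should be listed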

Then you can detach the external volume from the GUI. Go to Storage –> Volumes, click on "bkp_1T" and select "Detach". Now you can unplug the external HDD and you are done.

 

The next time you want to update the data on the external HDD, the idea is to make an incremental update so that you do not spend hours copying the entire dataset again. The steps are:

First plug the external HDD into a USB port. (Notice that FreeNAS does not require any reboot; everything is live.)
From the GUI import the volume: go to Storage –> Volumes –> Import Volume and select the volume from your HDD (mine is called "bkp_1T").

Now open a terminal in FreeNAS again (from the GUI menu or over SSH) and type:

zfs rename -r data1/bsdshare@backup   data1/bsdshare@backup_old

# rename the “old” snapshot made last time from “backup” into “backup_old”.

zfs snapshot -r data1/bsdshare@backup # take a new snapshot (recursive, like the first one)
zfs send -Ri data1/bsdshare@backup_old data1/bsdshare@backup | zfs receive -v bkp_1T # incremental replication

Notice that now we use -i and give both snapshots (old and new) to the send command. Only the delta between the old and the new snapshot is copied to the HDD, so it should take seconds or minutes, not hours or days.

At the end, check one more time (from the GUI or the terminal) that the sizes, disks and snapshots are OK. If everything looks fine, you can optionally clean up the storage by removing the old snapshots from FreeNAS and from the external HDD:

zfs destroy -r data1/bsdshare@backup_old # get rid of the previous snapshot from Freenas
zfs destroy -r bkp_1T@backup_old # get rid of the previous snapshot from external HDD

Finally, detach the external HDD from the GUI (Storage –> Volumes, click on the volume and select Detach).

That’s it !

 

use PAC file for automatic proxy selection

I will explain how to use automatic proxy selection for a local network.

For example, let's say that you have a proxy server but it is not available all the time. In that case you need to check each time whether the proxy is alive (available) and, if so, use it. If not, the browser should fall back to a direct connection.

The easiest way to do this is to create a PAC file and add it to Firefox as the automatic proxy configuration.

Go to Preferences –> Advanced –> Network –> Settings and choose "Automatic proxy configuration URL".

Then type the path to a local PAC file there. Normally this would be a web address, but a local file works as well (no web server needed).
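For example, the URL field can simply point at a local file (the path below is only an illustration, use your own):

file:///home/user/autoproxy.pac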

(screenshot: Firefox network settings dialog)

To create the PAC file, use any text editor to create a file called "autoproxy.pac" and put this content in it:

function FindProxyForURL(url, host)
{
return "PROXY 192.168.1.29:3128; DIRECT";
}

The proxy in this case is on the local network at 192.168.1.29 (a Squid proxy on port 3128) and Firefox tries it first. If it is not responding, Firefox falls back to a direct connection.

You can list multiple proxy servers there; the order is important.

In the example below there are two proxies. If the first one (192.168.1.29) is not responding, the second one (192.168.1.42) is selected, and if the second one does not respond either, the direct network connection is used.

function FindProxyForURL(url, host)
{
return "PROXY 192.168.1.29:3128; PROXY 192.168.1.42:3128; DIRECT";
}

The name of the PAC file is not important ("autoproxy.pac" is just the name I used); any name will do.

More details regarding the PAC file, examples and more advanced functions can be found here: http://findproxyforurl.com/

 

 

 

Fixing "RPC: AUTH_GSS upcall timed out." from dmesg (Debian and other Linux distros)

If you see this message in your dmesg, read on. The same issue also causes a slow first connection to an NFS share (it takes ~30 seconds).

There is a bug (!?) in the NFS client configuration and it loads a module called rpcsec_gss_krb5.

You can check whether this module is loaded with "lsmod".
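For example:

lsmod | grep rpcsec_gss_krb5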

Solution: do not load the module.

As root, type:

echo "blacklist rpcsec_gss_krb5" >> /etc/modprobe.d/dist-blacklist.conf

then reboot

Problem solved: fast connection to the NFS share and no more dmesg error messages.

 

 

email me the result of a cronjob/script in Freenas

This is the simplest method to email the result of a command in FreeNAS.
If you run certain scripts with Cron, you can use it for those as well.

Personally, I use it to get the SMART report for an HDD that may fail soon. So this is what I put in Cron (all on one line):

smartctl -a /dev/ada1 | /usr/bin/mail -s "MyFREENAS HDD /ada1 report " my.email@address.com

For the user you can put "root", and any output redirection should be set to off.

Of course, to make the email work you have to configure the email server and the related settings; fill them in under System –> Email.

for all spammers in the world !

how to add/replace disk to FreeNAS mirror

I recently had to add new disks to my FreeNAS storage. I mounted the disks but I was not able to add them as a mirror from the GUI. Then I found a Russian website with a very simple and easy to follow tutorial. I copied it here mainly as a reminder for myself, but it may also be useful for others.

I hope the original author will not be upset.

The original post is here :
http://ukhov.ru/node/431


The high-level steps are:

Add the new disk to the system (i.e. connect the cables)
Partition the disk with gpart (from the FreeNAS terminal)
Attach the new partition to ZFS as a mirror

Create the GPT

Use the GUI to find the device ID of the new drive or use camcontrol.

# camcontrol devlist
<device model>   at scbus2 target 0 lun 0 (ada0,pass0)
<device model>   at scbus3 target 0 lun 0 (ada1,pass1)
<device model>   at scbus4 target 0 lun 0 (ada2,pass2)
<device model>   at scbus5 target 0 lun 0 (ada3,pass3)
<device model>   at scbus7 target 0 lun 0 (da0,pass4)

Let's assume that our target is ada1. Create the GUID partition table for ada1:

# gpart create -s gpt ada1
ada1 created


Add the Swap Partition

Create a swap partition matching what FreeNAS created on the original drive. FreeNAS puts a swap partition on every data drive by default, stripes them together, and encrypts with a temporary key each boot. I’m not sure how that works when a drive fails, but it’s the recommended configuration.

# gpart add -b 128 -i 1 -t freebsd-swap -s 2G ada1
ada1p1 added

Add the Data Partition

Use the remaining space for the data partition.
# gpart add -i 2 -t freebsd-zfs ada1
ada1p2 added

Get the GPTID for the Partition

A device may change names depending on the connected port but the GPTID doesn’t change. FreeNAS uses the GPTID to track disks and so we want the rawuuid field of ada1p2.


# gpart list ada1
Geom name: ada1
scheme: GPT
1. Name: ada1p1
Mediasize: 2147483648 (2.0G)
...
rawuuid: 38d6835c-4794-11e4-b95b-08606e6e53d5
2. Name: ada1p2
Mediasize: 1998251364352 (1.8T)
...
rawuuid: 40380205-4794-11e4-b95b-08606e6e53d5

Attach to ZFS as mirror

Attach the partition using zpool, which will begin the resilvering process. You will need the GPTID of the original disk partition (the existing member of the pool).

# zpool attach <pool> <existing partition> <new partition>
# zpool attach storage /dev/gptid/1c5238f9-5e2d-11e3-b7e0-08606e6e53d5 /dev/gptid/40380205-4794-11e4-b95b-08606e6e53d5
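The resilvering runs in the background; you can follow its progress and see when the mirror is healthy with zpool status (using the pool name "storage" from the example above):

# zpool status storage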

auto mount your NAS share

I have discovered a better way of mounting my NAS shares.

Previously I was using /etc/fstab but this was not very convenient.

So I found "autofs". It takes care of automounting CD/DVD and external shares (NFS/CIFS, etc.) based on usage: when you open the folder it mounts the share, and when the mount is not used for a predefined timeout it unmounts it again. This is a good feature for me, since I wanted the share to be unmounted automatically when I am not using it.

 

After installing autofs (from the repository) you have to configure two files.
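On Debian and derivatives, for example, installing it is simply:

apt-get install autofs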

 

The first is /etc/auto.master. Put the timeout setting and the mount directory here.

If you have more mount paths you can give them different timeout settings.

My example below:

file:   /etc/auto.master

+auto.master
/mnt/nfsfreenas /etc/auto.misc --timeout 60 --ghost

The second file is /etc/auto.misc. This contains the mount settings (somewhat similar to the /etc/fstab file).

My example file here:

file : /etc/auto.misc

bsddataset    -fstype=nfs,soft,sync,rsize=6000,wsize=6000    192.168.1.20:/mnt/data1/bsdshare

So what I have here is a FreeNAS share on 192.168.1.20 (the "bsdshare" dataset) exported over NFS.

You can also put CIFS/Samba, sshfs or other shares in this file. CD/DVD disks work as well.

At the end, after updating these two files with your setup, restart the autofs service with:

service autofs restart
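Once autofs is running there is nothing to mount by hand: simply accessing the configured path triggers the mount, and after the 60-second timeout of inactivity the share is unmounted again. For example:

ls /mnt/nfsfreenas/bsddataset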

 

 

 

 

Firefox and Backspace and AutoScroll

From time to time I notice that Firefox changes some settings after an upgrade. Maybe there are bugs, or maybe the Firefox guys simply decide that we simple users are stupid and do not need certain features.

The ones that I use the most are the middle-mouse/wheel autoscroll and backspace to go to the previous page.

So in case they are missing from your Firefox, you can activate them from about:config by setting the autoscroll preference (general.autoScroll) to true:

(screenshot: about:config, autoscroll preference set to true)

 

 

and setting the backspace preference (browser.backspace_action) to 0:

(screenshot: about:config, backspace_action preference set to 0)

samba server on tinycore linux – howto

A simple setup for a Samba server on TinyCore Linux. Many tutorials are available on the internet, but this one is tested and works the way I want.

First install the samba package on TinyCore Linux.

From the "tc" user, start tce-ab and install the samba extension.
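If you prefer a non-interactive install, tce-load can also fetch it; the exact extension name may differ between TinyCore versions, so treat this as an illustration:

tce-load -wi samba4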

Then edit the smb.conf file at /usr/local/etc/samba/smb.conf.
Inside, add something like this:

[global]
workgroup = WORKGROUP
netbios name = box
security = user

[atom_1t]
comment = Data
path = /mnt/sda1
read only = no
guest ok = no

Explanations:

security = user
this will create a share that is based on user/password

netbios name = box
this will be the name of your server (an alias, to avoid typing the IP address)

read only = no
to have write access

guest ok = no
do not allow guest users (no anonymous connections)

Then, as root, run this command:

 smbpasswd -a <tinycore user>

Then type the samba password for that user. You will use this password from the client machine when you connect to the samba share.
Then save the samba config files so the changes persist after a reboot.

add in /opt/.filetool.lst

usr/local/etc/samba/smb.conf            <-- this contain samba setup
usr/local/etc/samba/private/passdb.tdb  <-- this contain your password
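Optionally, before saving, you can check the smb.conf syntax with testparm (assuming the tool is shipped with the samba extension):

testparm /usr/local/etc/samba/smb.conf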

Then back up the configuration with the "backup" command and restart the server.

Next, go to the client machine and in the file manager type:
smb://box/
You should get a popup window asking for a user and a password. Enter the user and the password you set with smbpasswd.
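If you prefer the command line on the client instead of a file manager, the same share can also be mounted over CIFS (this assumes the cifs-utils package is installed on the client; the mount point is just an example):

mount -t cifs //box/atom_1t /mnt/samba -o username=<tinycore user>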

easy backup system with rsync – like Time Machine

Backup systems are good for recovering accidentally lost data. An even more useful feature is incremental backup, where you have access to various snapshots in time, like Apple's Time Machine does. Doing this on Linux (or any Unix-like system) is actually very easy.

For example, we make a backup every day (or at whatever interval you want). We want the amount of data transferred to be small: imagine transferring a few TB every day! If our important data changes only a little, then we should back up only the modified parts. For this, rsync is the best tool; everybody knows that. But there is a problem: how can we keep daily snapshots of the data without filling the disk? For this we will use soft links, hard links and rsync options.

So we have to create a script file like this:

#!/bin/bash
# timestamp used in the name of this snapshot folder
date=`date "+%Y-%m-%dT%H-%M-%S"`
# copy the source; unchanged files become hard links into the previous snapshot ("current")
rsync -aP --delete --log-file=/tmp/log_backup.log --exclude=lost+found --link-dest=/mnt/sdb2/Backups/current /mnt/sda1/ /mnt/sdb2/Backups/back-$date
# repoint the "current" symlink to the snapshot we just made
rm -f /mnt/sdb2/Backups/current
ln -s /mnt/sdb2/Backups/back-$date /mnt/sdb2/Backups/current

So here I first make a "date" variable that is used in the name of the backup folder, so it is easy to tell when that backup/snapshot was made.

Then use the rsync with some parameters (see man rsync for more details):

-a = archive mode (recursive copy that preserves permissions, times, owners, links, etc.)

-P = show progress info (optional)

--delete = delete files from the backup when they have been removed from the source

--log-file = save the log into a file (optional)

--exclude = exclude some folders/files from the backup; these are relative to the source path, do not use absolute paths here!

--link-dest = link unchanged files to the latest backup snapshot instead of copying them again

/mnt/sda1/ = source path (here I back up a whole drive)

/mnt/sdb2/Backups/back-$date = destination folder; it will contain all the content from the source

Then with rm I remove the old "current" soft link, and with ln I replace it with a new soft link pointing to the newly created snapshot.

So now whenever I open "current" I in fact get the latest backup.

And because the date is different every time the backup runs, the old snapshots are kept. So for every day I will have a snapshot.

To automate this you have to create a cron job to execute the above script at the convenient time.

Example to run at 4:01AM every day:

1  4 * * * /path/to/script

Please note that only the first run takes a long time, since it copies the full data. From the second run onward the script transfers only the changed files.

Now in the destination folder you will see a "back-xxx" folder for every time you ran the script. You can open and read the files in all these folders as if they were completely independent copies. In fact, if you run df and du you will see something interesting.

For example, if the backup is 600GB and the script runs every day, df will show about 600GB used of the disk space, but "du -sh ./*" will report that each "back-xxx" folder is 600GB. This is possible because unchanged files are only hard links to the same copied data. Do not worry, the disk is not full; trust the df results and not the du results.

user@box:/mnt/sdb2/Backups$ du  -sh ./*
623.8G    ./back-2014-02-24T17:47:12
623.8G    ./back-2014-02-24T21-46-41
623.8G    ./back-2014-02-25T17-05-02
623.8G    ./back-2014-02-25T18-45-34
0    ./current
user@box:/mnt/sdb2/Backups$ df /mnt/sdb2
Filesystem                Size      Used Available Use% Mounted on
/dev/sdb2                 2.7T    623.9G      1.9T  24% /mnt/sdb2

So the Time Machine is in fact just a few lines of code in a script plus a cron job! Easy, and everybody can do it.

Adapt the script to your needs. Run it when you want with cron jobs.

At any point in time you can delete old backups (for example backups older than a few weeks). This can also be done with cron plus a small script.
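For example, a minimal cleanup sketch that keeps only the newest 14 snapshots; it assumes the back-... naming from above (which sorts chronologically) and GNU head/xargs, so test it before pointing it at real backups:

cd /mnt/sdb2/Backups
ls -d back-* | head -n -14 | xargs -r rm -rf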
