BIPEDU

my ideas in action

Category Archives: backup

email me the result of a cronjob/script in Freenas

This is the simplest method to email the result of a command in FreeNAS. You can also use it, for example, for scripts that you run from Cron.

Personally I use this to get the SMART report for a HDD that may fail soon. What I put in Cron is this (all on one line):

smartctl -a /dev/ada1 | /usr/bin/mail -s "MyFREENAS HDD /ada1 report " my.email@address.com

For the user you can put “root”, and any output redirection should be off.

Of course, to make the email work you first have to configure the mail server, etc. in the system config. Fill in the settings here: System –> Email.
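For reference, the same command expressed as a plain crontab entry (a sketch; in FreeNAS you normally type the command and schedule into the GUI cron form rather than editing crontab by hand, and the schedule below is only an example):

```shell
# Hypothetical schedule: email the SMART report every Monday at 08:00.
0 8 * * 1  smartctl -a /dev/ada1 | /usr/bin/mail -s "MyFREENAS HDD /ada1 report" my.email@address.com
```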

how to add/replace a disk in a FreeNAS mirror

I recently had to add new disks to my FreeNAS storage. I mounted the disks, but I was not able to add them as a mirror from the GUI. Then I found a Russian website with a very simple and easy-to-follow tutorial. I copied it here mainly as a reminder for myself, but it may also be useful for others.

I hope the original author will not be upset.

The original post is here :
http://ukhov.ru/node/431


The high-level steps are:

Add the new disk to the system – (means connect the cables)
Partition the disk with gpart – (from FreeNAS terminal)
Attach the new partition to ZFS as a mirror

Create the GPT

Use the GUI to find the device ID of the new drive or use camcontrol.

# camcontrol devlist
<...>  at scbus2 target 0 lun 0 (ada0,pass0)
<...>  at scbus3 target 0 lun 0 (ada1,pass1)
<...>  at scbus4 target 0 lun 0 (ada2,pass2)
<...>  at scbus5 target 0 lun 0 (ada3,pass3)
<...>  at scbus7 target 0 lun 0 (da0,pass4)

Let's assume that our target is ada1. Create the GUID partition table for ada1.

# gpart create -s gpt ada1
ada1 created


Add the Swap Partition

Create a swap partition matching what FreeNAS created on the original drive. FreeNAS puts a swap partition on every data drive by default, stripes them together, and encrypts with a temporary key each boot. I’m not sure how that works when a drive fails, but it’s the recommended configuration.

# gpart add -b 128 -i 1 -t freebsd-swap -s 2G ada1
ada1p1 added

Add the Data Partition

Use the remaining space for the data partition.
# gpart add -i 2 -t freebsd-zfs ada1
ada1p2 added

Get the GPTID for the Partition

A device may change its name depending on the port it is connected to, but the GPTID does not change. FreeNAS uses the GPTID to track disks, so we want the rawuuid field of ada1p2.


# gpart list ada1
Geom name: ada1
scheme: GPT
1. Name: ada1p1
Mediasize: 2147483648 (2.0G)
...
rawuuid: 38d6835c-4794-11e4-b95b-08606e6e53d5
2. Name: ada1p2
Mediasize: 1998251364352 (1.8T)
...
rawuuid: 40380205-4794-11e4-b95b-08606e6e53d5

Attach to ZFS as mirror

Attach the partition using zpool, which will begin the resilvering process. You will need the GPTID of the original (encrypted) disk partition.

# zpool attach <pool> <existing partition> <new partition>
# zpool attach storage /dev/gptid/1c5238f9-5e2d-11e3-b7e0-08606e6e53d5 /dev/gptid/40380205-4794-11e4-b95b-08606e6e53d5

FTP backup with BASH script ( FTP scripting)

The FTP protocol is used to transfer data between computers. You can also combine bash scripts with FTP to automate file backups, for example to back up some files from a local machine to a remote server.
The way to do this is to make an executable script that is run from time to time by a cron job. This way the backup is made automatically in the background and does not require user intervention. But there is a problem: if you use FTP on the command line, you have to type the user name and password in clear text. So how do you do it?
The solution I suggest is like in the example below:

The first step is to determine whether a backup is needed: we check if the file has changed since the last backup, by comparing the current size of the file with its size at the previous backup time. The previous size was saved in a file. To check this we use an “if” test in BASH:

### check if the file was changed, and if YES then make FTP transfer, if not exit
 if [ `cat ~/.size_myfile.txt` -ne `du -s /path/to/file | cut -f1` ] ; then
 echo 'File changed !'
 fi
 ###########################
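A note on the first run: the test above fails if the saved-size file does not exist yet. Below is a minimal sketch of the same check with a first-run guard; the /tmp paths are demo placeholders, in the real script they would be ~/.size_myfile.txt and the file you back up:

```shell
#!/bin/sh
# Sketch: the same size-compare check, with a guard for the very first
# run, when the saved-size file does not exist yet. Demo paths in /tmp.
SIZEFILE=/tmp/demo_size.txt
TARGET=/tmp/demo_file.txt
printf 'some data\n' > "$TARGET"            # demo file to measure
[ -f "$SIZEFILE" ] || echo 0 > "$SIZEFILE"  # first run: force a backup
if [ "$(cat "$SIZEFILE")" -ne "$(du -s "$TARGET" | cut -f1)" ]; then
    echo 'File changed !'
    du -s "$TARGET" | cut -f1 > "$SIZEFILE" # remember size for next run
fi
```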

Then we define some parameters for the FTP command. This definition can be made in a separate (hidden) file. For non-critical situations (home users) I recommend keeping the user details in the script itself, but removing all permissions for other users (use the chmod command). So the backup script should look like this:

-rwx------ 1 myuser mygroup 32 Jun 12 14:52 backupFTP.script

Notice that only the “myuser” user has the right to read/write/execute the file.

So for FTP command you need:

##############################
HOST='ftp.myserver.com'
USER='myusername'
PASSWD='mypassword'
FILE='/path/to/filesource filedestination'
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
cd /path/to/destination
put $FILE
quit
END_SCRIPT
#############################

Since the source path and the destination path may differ, you can “cd /path/to/destination” before the put. The copied file can also be renamed, as shown above (see “filedestination“).

Notice that the commands between the “END_SCRIPT” tags are executed inside the FTP session. These are FTP commands, not BASH/Linux commands. You can put here whatever FTP commands you need. For a full list of FTP commands, type “help” in an FTP session. Also note that the closing “END_SCRIPT” must start at the very beginning of its line, otherwise the shell never finds the end of the here-document.
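As an alternative to keeping the password in the script itself, the classic BSD/Linux ftp clients can read credentials from ~/.netrc; a sketch, with the same placeholder host/user/password as above:

```shell
#!/bin/sh
# Sketch: move the FTP credentials into ~/.netrc so the script itself
# holds no password. Host/user/password are placeholders.
cat > "$HOME/.netrc" <<'EOF'
machine ftp.myserver.com
login myusername
password mypassword
EOF
chmod 600 "$HOME/.netrc"  # most clients refuse a .netrc readable by others
# With this in place the two "quote ..." login lines can be dropped and
# "ftp -n $HOST" becomes simply "ftp $HOST" (auto-login from .netrc).
```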

The third step is to recalculate and save the new size of the file, so that the next time the backup script runs the saved size is up to date. For this we do:

################################
 ## recalculate the new size of the file, for next backup
 du -s /path/to/file | cut -f1 > ~/.size_myfile.txt
 env DISPLAY=:0 notify-send 'FTP backup done !' 'File : /path/to/file'
 ###############################

Optionally you can show a desktop notification that the backup was made. If you do not have a GUI, just leave this line out.

Next I show the full script in only one file:

#!/bin/bash
### check if the file was changed; if YES make the FTP transfer, if not exit
if [ `cat ~/.size_myfile.txt` -ne `du -s /path/to/file | cut -f1` ] ; then
#    echo 'File changed !'
sleep 1
#
HOST='ftp.myserver.com'
USER='myusername'
PASSWD='mypassword'
FILE='/path/to/filesource filedestination'
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
cd /path/to/destination
put $FILE
quit
END_SCRIPT
sleep 1
## recalculate the new size of the file, for the next backup
du -s /path/to/file | cut -f1 > ~/.size_myfile.txt
env DISPLAY=:0 notify-send 'FTP backup done !' 'File : /path/to/file'
fi
###############

WD30EZRX Caviar Green 3TB, correct disk size and how to partition in Linux/Unix

I have a new HDD from Western Digital. It is a WD30EZRX Caviar Green. It has 3TB and the price is almost ok. The disk works fine for now so I will detail the way the partition can be done in Linux/Unix.

Why do we care about this? Because 3TB is bigger than the normal HDDs on the market. The forums are full of people asking why they do not see the full 3TB size. There are some issues with the way the data is addressed: nothing is wrong with the disk, only with the software used to see or access the partitions.

In my case I used an old external HDD enclosure to test the drive. The drive was fresh out of the box, so my plan was to partition it first and then screw it into its final rack.

Wrong! Since my HDD enclosure (a cheap and old case with a JMicron SATA-to-USB bridge) addresses the data with 32-bit sector numbers, the maximum visible disk size was 746GB. I searched the web and found many people complaining that they see the same 746GB size on a 3TB disk. But other people were saying that they see 2.1-2.2TB!?!

Well… after approximately one hour of reading the web, I discovered that the problem is due to software and not to hardware.
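The arithmetic behind the odd sizes can be sketched in the shell. The byte count below is the standard capacity of a 3TB drive (5,860,533,168 sectors of 512 bytes); treat the rest as an illustration:

```shell
#!/bin/sh
# Sketch: why a 3TB disk looks tiny behind a 32-bit SATA-to-USB bridge.
TOTAL=3000592982016           # standard 3TB capacity: 5860533168 * 512B
LIMIT=$((4294967296 * 512))   # 2^32 sectors * 512B = ~2.2TB addressable
WRAP=$((TOTAL % LIMIT))       # what is left after the 32-bit counter wraps
echo "32-bit limit : $LIMIT bytes"
echo "visible size : $WRAP bytes"
```

WRAP comes out to 801,569,726,464 bytes, which is about 746.5 GiB: the mysterious “746” size. Enclosures or tools that clip the sector count (rather than wrap it) report the ~2.2TB limit instead, which matches the other reports.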

Those of you using fdisk to view (and/or partition) this disk will run into the same issue. So I used “parted” on the command line. I did not test with GParted (from a live Ubuntu distro) but I assume it will also work. With the latest Parted version the disk size was correct: 3.001TB.

WARNING  !!! : Parted writes the changes directly on the disk !!! (no undo option , and no cancel !! )

So in order to partition with parted (the command-line version) you have to do the following steps (assuming the disk is sdb; if not, change it to the proper value):

1: change the partition table from msdos to gpt.

# parted /dev/sdb
GNU Parted 3.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.

(parted) print
Error: /dev/sdb: unrecognized disk label

(parted) mklabel gpt

(parted) print
Model: Unknown (unknown)
Disk /dev/sdb: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start  End  Size  File system  Name  Flags

2: partition the disk

# parted /dev/sdb

(parted) mkpart primary 0GB 3001GB

(parted) print
Model: Unknown (unknown)
Disk /dev/sdb: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  3001GB  3001GB               primary

3: format the new partition as ext2, ext3, ext4 or whatever you want. This step may take a few minutes. Wait…

# mkfs.ext2 /dev/sdb1

4: mount the newly created partition

# sudo mount /dev/sdb1 /mnt/somefolder

There are probably other (easier or cleverer) methods, but this one worked for me.

BTW: in my case I formatted it as ext2; in the end I have approx 2.88TB, of which I can use at most 2.7TB…

So it appears that over 100GB is lost to filesystem overhead. Keep this in mind for big HDDs, since it may create disappointment (“Look, I have a 3TB disk… but I can only use 2.5TB!!”).
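Much of the apparent loss is simply decimal-vs-binary units, plus the 5% of blocks that ext2/3/4 reserve for root by default. A sketch of the unit arithmetic (the byte count is the nominal 3TB capacity; the tune2fs device name is an example):

```shell
#!/bin/sh
# Sketch: the same 3TB byte count in decimal units (as printed on the
# box) and in binary units (as many tools report it).
BYTES=3000592982016
awk -v b="$BYTES" 'BEGIN {
    printf "decimal: %.2f TB\n",  b / 1000000000000
    printf "binary : %.2f TiB\n", b / 1099511627776
}'
# ext2/3/4 additionally reserve 5% of the blocks for root by default;
# "tune2fs -m 1 /dev/sdb1" would shrink that reserve to 1%.
```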

 

For more information on this issues please read :

http://www.unixgods.org/~tilo/linux_larger_2TB.html

http://www.thegeekstuff.com/2012/08/2tb-gtp-parted/

http://blogging.dragon.org.uk/index.php/mini-howtos/using-parted-to-add-and-manipulate-disk?page=2

http://www.gnu.org/software/parted/manual/

 

backup script – bash + ssh + rsync

These days I was searching for a more intelligent way to back up my files.

My latest configuration uses an Atom board as a home Linux server and a client (laptop) that copies files to the server.

The strategy is quite simple and clever, I would say. Every hour I check that I am on the right network and that the server is reachable. Then I check if the total size of the folder differs from the previous value; this way I can decide very quickly whether something in my folder changed.

If yes, then I copy the files with rsync over SSH. Rsync performs its own verification and decides which files need to be copied.

At the end I save the new size of the folder to a file, so that next time (next hour) the check can run again.

The main idea here is that my server keeps its HDD in standby, to minimize power consumption (and also noise!).

The full core of my bash script is below:

#!/bin/bash

notok=0

## find your IP
set `/sbin/ifconfig | grep 'inet addr:' | grep -v '127.0.0.1' | cut -d: -f2 | awk '{ print $1 }'`
#echo $1
## put your IP in string format
myip="$1"
## put the IP of the server in string format
targetip="192.168.1.2"

## check if your IP is 192.168.1.6. If YES, then you are not on the correct network
## so it is impossible to make the backup
if [ $myip = "192.168.1.6" ]; then
    env DISPLAY=:0 notify-send 'Your IP is 192.168.1.6' 'not possible to make backup'
    notok=1
fi

## If your IP was not 192.168.1.6, check if the network is alive.
## If yes, then you can make the backup
if [ $notok -eq 0 ]; then

    # check if the router is alive
    ping -c 1 192.168.1.1 > /dev/null
    if [ $? -eq 0 ]; then
##        echo IP is up.
        sleep 1
        ### check the size of the personal folder; if changed --> make backup.
        if [ `cat ~/.size_personal.txt` -ne `du -s ~/personal | cut -f1` ]; then
#            echo something changed !!

            env DISPLAY=:0 notify-send 'begin backup'

            # change permissions:
            find /home/user/personal/ -type f -exec chmod 644 {} \;
            find /home/user/personal/ -type d -exec chmod 755 {} \;

            ## add the v option to rsync for verbose output
            rsync -rah --log-file=/tmp/backup_rsync.log -e "ssh -i /home/user/scripts/sshkey/tc_atom" /home/user/personal/ user@192.168.1.2:/mnt/sda1/personal
            sleep 1
            ## put the size of the personal folder in a file, for the next backup.
            du -s ~/personal | cut -f1 > ~/.size_personal.txt

            env DISPLAY=:0 notify-send 'backup done' 'made backup for : Personal'

        fi
    fi

fi
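The rsync line relies on a passphrase-less SSH key (/home/user/scripts/sshkey/tc_atom) so that cron can log in unattended. A sketch of creating and installing such a key; the /tmp path is a demo placeholder, and ssh-copy-id is assumed to be available on the client:

```shell
#!/bin/sh
# Sketch: create the passphrase-less key used by the rsync/ssh line.
# Demo path; the script above expects /home/user/scripts/sshkey/tc_atom.
KEY=/tmp/demo_tc_atom
ssh-keygen -q -t rsa -b 4096 -N "" -f "$KEY"
ls -l "$KEY" "$KEY.pub"
# One-time install of the public key on the server (asks the password once):
#   ssh-copy-id -i "$KEY.pub" user@192.168.1.2
# Then schedule the backup script hourly from crontab, e.g.:
#   0 * * * * /home/user/scripts/backup.sh
```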