my ideas in action


email me the result of a cronjob/script in Freenas

This is the simplest method to email the result of a command in FreeNAS. It also works for any script you run via Cron.

Personally I use this to get the SMART report for an HDD that may fail soon. So what I put in Cron is this (all on one line; "root" here is the local mail recipient, whose address comes from the user config):

smartctl -a /dev/ada1 | /usr/bin/mail -s "MyFREENAS HDD /ada1 report" root

For the user you can put "root", and any output redirect should be off.

Of course, to make the email work you have to configure the mail server etc.: fill in the settings under System -> Email, and set the email address in the user config.


easy backup system with rsync – like Time Machine

Backup systems are good for recovering accidentally lost data. But a more useful feature is the incremental backup, where you have access to various snapshots in time, like Apple's Time Machine does. Doing this on Linux (or any Unix-like) system is actually very easy.

For example, say we make a backup every day (or at any interval you want). We need the amount of transferred data to be small: imagine transferring a few TB every day! If our important data changes only a little, then we should back up only the modified parts. For this, rsync is the best tool. Everybody knows that. But there is a problem: how can we keep daily snapshots of the data without filling the disk? For this we will use symlinks, hardlinks and rsync options.

So we have to create a script file like this:

#!/bin/bash
# timestamp used in the snapshot folder name
date=`date "+%Y-%m-%dT%H-%M-%S"`
# copy from the source; unchanged files become hardlinks into the previous snapshot
rsync -aP --delete --log-file=/tmp/log_backup.log --exclude=lost+found --link-dest=/mnt/sdb2/Backups/current /mnt/sda1/ /mnt/sdb2/Backups/back-$date
# repoint the "current" symlink at the newly created snapshot
rm -f /mnt/sdb2/Backups/current
ln -s /mnt/sdb2/Backups/back-$date /mnt/sdb2/Backups/current

So here I make first a “date” variable that will be used in the name of the backup folder to easily know when that backup/snapshot was made.

Then use the rsync with some parameters (see man rsync for more details):

-a = archive mode (recursive copy, preserves permissions, times, owners, etc.)

-P = to give progress info (optional)

--delete = to delete files from the backup when they have been removed from the source

--log-file = to save the log into a file (optional)

--exclude = to exclude some folders/files from the backup. These are relative to the source path!!! Do not use absolute paths here!

--link-dest = link against the latest backup snapshot (unchanged files become hardlinks instead of new copies)

/mnt/sda1/ = source path (here I backup a whole drive)

/mnt/sdb2/Backups/back-$date = destination folder; it will contain all the content from the source.

Then, using rm, I remove the old "current" link and replace it with a new symlink pointing to the newly created snapshot.

So now whenever I open "current" I am in fact in the latest backup.

And because the date is different every time I make the backup, the old snapshots are kept. So I will have a snapshot for every day.

To automate this you have to create a cron job to execute the above script at the convenient time.

Example to run at 4:01AM every day:

1  4 * * * /path/to/script

Please notice that only the first backup will take a long time, since it copies the full data. From the second run on, the script transfers only the changed files.

Now in the destination folder you will see a "back-xxx" folder for every time the script ran. You can open/read the files in all these folders as if they were completely independent copies. In fact, if you run df and du you will see something interesting.

For example, if the backup is 600GB and the script runs every day, df will show the same 600GB of used disk space, but "du -sh ./*" will report each "back-xxx" folder as 600GB. This is possible because unchanged files are just hardlinks to the same data, which is stored only once. Do not worry, the disk is not full; trust the df results and not the du results.

user@box:/mnt/sdb2/Backups$ du  -sh ./*
623.8G    ./back-2014-02-24T17-47-12
623.8G    ./back-2014-02-24T21-46-41
623.8G    ./back-2014-02-25T17-05-02
623.8G    ./back-2014-02-25T18-45-34
0    ./current
user@box:/mnt/sdb2/Backups$ df /mnt/sdb2
Filesystem                Size      Used Available Use% Mounted on
/dev/sdb2                 2.7T    623.9G      1.9T  24% /mnt/sdb2
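The du/df mismatch above comes from hardlinks; you can verify the mechanism yourself with a tiny sketch (assuming GNU coreutils for stat):

```shell
# create a file and a second hardlink to it in a temporary folder
dir=$(mktemp -d)
echo "hello" > "$dir/a"
ln "$dir/a" "$dir/b"           # hardlink: a second name for the same data
links=$(stat -c %h "$dir/a")   # link count of the inode, now 2
echo "$links"
rm -r "$dir"
```

This is exactly what --link-dest does for every unchanged file, which is why du over-reports while df stays correct.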

So the Time Machine is in fact only 3 lines of code in a script plus a cron job! Easy, and everybody can do it!

Adapt the script to your needs. Run it when you want with cron jobs.

At any point in time you can delete old backups (for example, backups older than a few weeks). This can also be done with cron plus some scripts.
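A sketch of such a cleanup script (the path and the 30-day retention are assumptions, adjust them to your setup):

```shell
#!/bin/bash
# remove "back-*" snapshot folders older than the given number of days
prune_old_backups() {
    # $1 = backup folder, $2 = days to keep
    find "$1" -maxdepth 1 -type d -name 'back-*' -mtime "+$2" -exec rm -rf {} +
}
# example: keep only the last 30 days of snapshots
# prune_old_backups /mnt/sdb2/Backups 30
```

Run it from cron, for example once a week, after the backup itself.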

FTP backup with BASH script ( FTP scripting)

The FTP protocol is used to transfer data between computers. You can also combine bash scripts with FTP to automate file backups. This can be used, for example, to back up some files from a local machine to a remote server.
The way to do this is to make an executable script that is run from time to time by a cron job. This way the backup is made automatically in the background and does not require user intervention. But there is a problem: if you use FTP on the command line, you have to type the user name and password in clear text. So how do we do it?
The solution I suggest is in the example below:

The first step is to determine if a backup is needed. We check if the file has changed since the last backup, by comparing the current size of the file with its size at the previous backup time (the previous size was saved in a file). To check this we use an "if" comparison in BASH:

### check if the file was changed, and if YES then make FTP transfer, if not exit
 if [ `cat ~/.size_myfile.txt` -ne `du -s /path/to/file | cut -f1` ] ; then
 echo 'File changed !'

Then we define some parameters for the FTP command. This definition can be made in a separate (hidden) file. For non-critical situations (home users) I recommend keeping the user details in the same file (in the script), but removing all permissions for the other users (use the chmod command). So the backup script should look like this:

-rwx------ myuser mygroup 32 Jun 12 14:52 backupFTP.script

Notice that only the “myuser” user has the right to read/write/execute the file
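The permissions above can be set with chmod; a small sketch you can try in a temporary folder (stat -c assumes GNU coreutils):

```shell
# lock the script down so only the owner can read/write/execute it
cd "$(mktemp -d)"
touch backupFTP.script
chmod 700 backupFTP.script             # same as u=rwx,go=
perms=$(stat -c %A backupFTP.script)   # prints -rwx------
echo "$perms"
```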

So for the FTP command you need:

FILE='/path/to/filesource filedestination'
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASS
cd /path/to/destination
put $FILE
bye
END_SCRIPT

Since the source path and the destination path may be different, you can use "cd /path/to/destination" inside the FTP session. The copied file can also be renamed, as shown above (see "filedestination").

Notice that the commands between the "END_SCRIPT" tags are executed inside the FTP session. These are FTP commands, not BASH/Linux commands. You can put here whatever FTP commands you want, based on your needs. For a full list of FTP commands, type "help" in the FTP terminal.

The third step is to recalculate and save the new size of the file, so that the next time the backup script runs, the recorded size is up to date. For this we do:

 ## recalculate the new size of the file, for next backup
 du -s /path/to/file | cut -f1 > ~/.size_myfile.txt
 env DISPLAY=:0 notify-send 'FTP backup done !' 'File : /path/to/file'

Optionally, you can show a desktop notification that the backup was made. If you do not have a GUI, leave it out.

Next, the full script in a single file (HOST, USER and PASS were not defined in the snippets above; the values below are placeholders, set your own):

#!/bin/bash
HOST='ftp.example.com'   ## FTP server (placeholder)
USER='myuser'            ## FTP user (placeholder)
PASS='mypassword'        ## FTP password (placeholder)

### check if the file was changed; if YES then make the FTP transfer, if not exit
if [ `cat ~/.size_myfile.txt` -ne `du -s /path/to/file | cut -f1` ] ; then
    echo 'File changed !'
    FILE='/path/to/filesource filedestination'
    ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASS
cd /path/to/destination
put $FILE
bye
END_SCRIPT
    ## recalculate the new size of the file, for the next backup
    du -s /path/to/file | cut -f1 > ~/.size_myfile.txt
    env DISPLAY=:0 notify-send 'FTP backup done !' 'File : /path/to/file'
fi

WD30EZRX Caviar Green 3TB, correct disk size and how to partition in Linux/Unix

I have a new HDD from Western Digital, a WD30EZRX Caviar Green. It has 3TB and the price is almost ok. The disk works fine for now, so I will detail how the partitioning can be done in Linux/Unix.

Why do we care about this? Well... because 3TB is bigger than the normal HDDs on the market. The forums are full of people asking why they do not see the full 3TB size. There are some issues with the way the data is addressed: nothing is wrong with the disk, but with the software used to see or access the partitions.

In my case I used an old external HDD enclosure to test the drive. It was out of the box, so the plan was to partition it and then screw it into the final rack.

Wrong! Since my HDD enclosure (a cheap and old case with a JMicron SATA-to-USB interface) addresses the data in 32-bit format, the maximum visible disk size was 746GB. I searched the web and found many people complaining that they see the same 746GB on this 3TB disk. But other people were saying that they see 2.1-2.2TB!?!

Well... after approx 1 hour of reading the web, I discovered that the problem is due to the software and not the hardware.

Those of you using fdisk to view (and/or partition) this disk will run into the same issue. So I used "parted" on the command line. I did not test Gparted (from a live Ubuntu distro) but I assume it will also work. With the latest parted version the disk size was correct: 3.001TB.

WARNING!!!: Parted writes the changes directly to the disk!!! (no undo option, and no cancel!!)

So, in order to partition with parted (the command line version), you have to do the following steps (assuming the disk is sdb; if not, please change it to the proper value):

1: you have to change the partition table from msdos to gpt.

# parted /dev/sdb
GNU Parted 3.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.

(parted) print
Error: /dev/sdb: unrecognized disk label

(parted) mklabel gpt

(parted) print
Model: Unknown (unknown)
Disk /dev/sdb: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start  End  Size  File system  Name  Flags

2: partition the disk

# parted /dev/sdb

(parted) mkpart primary 0GB 3001GB

(parted) print
Model: Unknown (unknown)
Disk /dev/sdb: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  3001GB  3001GB               primary

3: then you must format the new partition as ext2, ext3, ext4 or whatever you want. This step may take a few minutes. Wait....

# mkfs.ext2 /dev/sdb1

4: mount the newly created partition

# sudo mount /dev/sdb1 /mnt/somefolder
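To make the mount survive a reboot you can add it to /etc/fstab. A sketch (the UUID below is a placeholder; use the one blkid prints for your partition):

```shell
# find the partition's UUID
sudo blkid /dev/sdb1
# then add a line like this to /etc/fstab (placeholder UUID):
# UUID=0a1b2c3d-...  /mnt/somefolder  ext2  defaults  0  2
```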

Probably there are other (easier or cleverer) methods, but this one worked for me.

BTW: in my case I formatted it as ext2, and at the end I have approx 2.88TB, of which I can use a max of 2.7TB...

So it appears that over 100GB is lost to filesystem overhead. Please keep this in mind for big HDDs, since it may create disappointment... ("Look, I have a 3TB disk... but I can use only 2.5TB!!")
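Part of this loss is the 5% of blocks that ext2/3/4 reserve for the root user by default. A sketch of the numbers, and how the reservation can be lowered on a data-only disk (the device name is an example):

```shell
# 5% of a 2.88TB filesystem is reserved for root by default:
reserved_gb=$(( 2880 * 5 / 100 ))   # = 144 (GB)
echo "$reserved_gb"
# lower the reservation to 1% on a data-only disk (run against your own device):
# sudo tune2fs -m 1 /dev/sdb1
```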




backup script – bash + ssh + rsync

These days I was searching for a more intelligent way to back up my files.

My latest configuration uses an Atom board as a home Linux server and a client (laptop) that copies files to the server.

The strategy is quite simple and clever, I'd say. Every hour I check if the IP of my server is reachable. Then I check if the total size of the folder is different from what it was before. This way I can decide very fast whether something changed in my folder.

If yes, then I copy the files with rsync over SSH. rsync makes another verification and decides which files need to be copied.

At the end I store the new size of the folder in a file, so that next time (next hour) I can back up again.

The main idea here is that my server keeps its HDD in standby, to minimize the power consumption (and also the noise!).
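The standby itself can be configured with hdparm; a minimal sketch (the device name and the timeout are assumptions, adjust to your server):

```shell
# ask the drive to enter standby after 10 minutes of inactivity
# (-S 120 means 120 * 5-second units; /dev/sda is a placeholder device name)
sudo hdparm -S 120 /dev/sda
```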

The core of my bash script is below:



#!/bin/bash

## find your IP (first address reported by ifconfig, excluding loopback)
set `/sbin/ifconfig | grep 'inet addr:' | grep -v '' | cut -d: -f2 | awk '{ print $1 }'`
## your IP in string format
myip="$1"
## IP of the server in string format (the real values were stripped from this post; set your own)
serverip=''
## the "wrong network" IP, e.g. a self-assigned address (set your own)
badip=''

## check if your IP is $badip. If YES, then you are not on the correct network,
## so it is impossible to make the backup
if [ "$myip" = "$badip" ]; then
    env DISPLAY=:0 notify-send "Your IP is $myip" 'not possible to make backup'
else
    ## your IP is ok, so check if the server is alive; if yes, make the backup
    ping -c 1 $serverip > /dev/null
    if [ $? -eq 0 ]; then
        ## server IP is up
        ### check the size of the personal folder; if it changed --> make backup
        if [ `cat ~/.size_personal.txt` -ne `du -s ~/personal | cut -f1` ]; then
            ## something changed !!
            env DISPLAY=:0 notify-send 'begin backup'

            ## change permissions:
            find /home/user/personal/ -type f -exec chmod 644 {} \;
            find /home/user/personal/ -type d -exec chmod 755 {} \;

            ## add the v option to rsync for verbose output
            ## (the destination path after "user@" was stripped from the post; placeholder used)
            rsync -rah --log-file=/tmp/backup_rsync.log -e "ssh -i /home/user/scripts/sshkey/tc_atom" /home/user/personal/ user@$serverip:/path/to/backup/
            ## store the new size of the personal folder in a file, for the next backup
            du -s ~/personal | cut -f1 > ~/.size_personal.txt

            env DISPLAY=:0 notify-send 'backup done' 'made backup for : Personal'
        fi
    fi
fi