
Category Archives: debian

Fixing “RPC: AUTH_GSS upcall timed out.” from dmesg (Debian and other Linux distros)

In case you see this message in your dmesg, please read further. The same issue also causes a slow first connection to an NFS share (it takes ~30 seconds).

There is a bug (!?) in the NFS client configuration that loads a module called rpcsec_gss_krb5.

You can check whether this module is loaded with “lsmod”.

Solution: do not load the module.

As root, type:

echo "blacklist rpcsec_gss_krb5" >> /etc/modprobe.d/dist-blacklist.conf

(">>" appends to the file, so an existing blacklist configuration is not overwritten)

then reboot

Problem solved : fast connection on NFS share and no dmesg error message.
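To confirm the fix after rebooting, a quick check like this can help (a sketch; it simply looks for the module in /proc/modules, the same list lsmod reads):

```shell
# Check whether the rpcsec_gss_krb5 module is currently loaded.
if grep -q rpcsec_gss_krb5 /proc/modules; then
    echo "rpcsec_gss_krb5 is loaded"
else
    echo "rpcsec_gss_krb5 is not loaded"
fi
```

After the blacklist and a reboot, it should report the module as not loaded.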




auto mount your NAS share

I have discovered a better way of mounting NAS shares.

Previously I was using /etc/fstab, but that was not very convenient.

So I found “autofs”. It takes care of automounting CD/DVD/external shares (NFS, CIFS, etc.) based on usage: when you open the folder it mounts the share, and after a predefined idle timeout it unmounts it again. This is a great feature for me, since I wanted shares unmounted automatically when I am not using them.


So after installing autofs (from the repository) you have to configure two files.


The first is /etc/auto.master. Here you put the timeout setting and the mount directory.

If you have several mount paths you can give each one its own timeout setting.

My example below:

file:   /etc/auto.master

/mnt/nfsfreenas /etc/auto.misc --timeout 60 --ghost

The second file is /etc/auto.misc. This contains the mount settings (somewhat similar to the /etc/fstab file).

My example file here:

file : /etc/auto.misc

bsddataset    -fstype=nfs,soft,sync,rsize=6000,wsize=6000    freenas:/mnt/bsdshare

So what I have here is an NFS share (bsdshare) exported by a FreeNAS box. The location at the end of the line is server:/path; the name “freenas” above is just an example, so put your own server and export path there.

You can put in this file the CIFS/Samba also or sshfs or other shares. CD/DVD disks work also.
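For example, a CIFS entry in /etc/auto.misc could look like the line below (the server name, share name and credentials file are hypothetical; adjust them to your NAS):

```
winshare   -fstype=cifs,credentials=/etc/samba/cred.txt   ://mynas/share
```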

At the end, after updating these two files with your setup, restart the autofs service with:

service autofs restart





Firefox and Backspace and AutoScroll

From time to time I notice that Firefox changes some settings after an upgrade. Maybe there are bugs, or maybe the Firefox guys simply decided that we simple users are stupid and do not need certain features.

The ones I use the most are the middle-mouse/wheel autoscroll and Backspace to go to the previous page.

So in case they are missing from your Firefox you can reactivate them from about:config: set general.autoScroll to true for the middle-mouse autoscroll, and set browser.backspace_action to 0 so that Backspace takes you to the previous page.


easy backup system with rsync – like Time Machine

Backup systems are good for recovering accidentally lost data. An even more useful feature is incremental backup, where you have access to various snapshots in time, like Apple's Time Machine does. Doing this on Linux (or any Unix-like system) is actually very easy.

For example, we make a backup every day (or at whatever interval you want). We need the amount of transferred data to be small, not big: imagine transferring a few TB every day! If our important data changes only a little, we want to back up only the modified parts. For this rsync is the best tool; everybody knows that. But there is a problem: how can we keep daily snapshots of the data without filling the disk? For this we will use soft links, hard links and rsync options.

So we have to create a script file like this:

#!/bin/bash
# name each snapshot after the moment it was taken
date=`date "+%Y-%m-%dT%H-%M-%S"`
rsync -aP --delete --log-file=/tmp/log_backup.log --exclude=lost+found --link-dest=/mnt/sdb2/Backups/current /mnt/sda1/ /mnt/sdb2/Backups/back-$date
# repoint the "current" symlink at the snapshot we just made
rm -f /mnt/sdb2/Backups/current
ln -s /mnt/sdb2/Backups/back-$date /mnt/sdb2/Backups/current

So here I first make a “date” variable that is used in the name of the backup folder, to easily know when that backup/snapshot was made.

Then use the rsync with some parameters (see man rsync for more details):

-a = archive mode (recursive copy that preserves permissions, times, owners and links)

-P = show progress information (optional)

--delete = delete files from the backup when they have been removed from the source

--log-file = save the log into a file (optional)

--exclude = exclude some folders/files from the backup. These are relative to the source path!!! Do not use absolute paths here!

--link-dest = link unchanged files to the latest backup snapshot instead of copying them again

/mnt/sda1 = source path (here I back up a whole drive)

/mnt/sdb2/Backups/back-$date = destination folder; it will contain all the content from the source

Then with rm I remove the old link to the previous backup (the “current” link) and replace it with a new soft link to the newly created snapshot.

So now whenever I open “current” I go in fact to the latest backup.

And because the date is different every time I make a backup, the old snapshots are kept. So I will have one snapshot for every day.

To automate this you have to create a cron job to execute the above script at the convenient time.

Example to run at 4:01AM every day:

1  4 * * * /path/to/script

Please note that only the first run will take a long time, since it copies all the data. From the second run on, the script transfers only the changed files/bits.

Now in the destination folder you will see a “back-xxx” folder for every time you ran the script. You can open/read the files from all these folders as if they were completely independent copies. In fact, if you run df and du you will see something interesting.

For example, if the backup is 600GB and the script runs every day, df will show the same ~600GB of used disk space, but if you run “du -sh ./*” you will see that each “back-xxx” folder is 600GB. This is possible because unchanged files are only hard links to the same data. Do not worry, the disk is not full: trust the df result, not the du result.

user@box:/mnt/sdb2/Backups$ du  -sh ./*
623.8G    ./back-2014-02-24T17-47-12
623.8G    ./back-2014-02-24T21-46-41
623.8G    ./back-2014-02-25T17-05-02
623.8G    ./back-2014-02-25T18-45-34
0    ./current
user@box:/mnt/sdb2/Backups$ df /mnt/sdb2
Filesystem                Size      Used Available Use% Mounted on
/dev/sdb2                 2.7T    623.9G      1.9T  24% /mnt/sdb2
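The hard-link effect can be reproduced in miniature without rsync. This small sketch (using a temporary directory) shows two “snapshot” folders sharing one real copy of the data:

```shell
# Two snapshot folders, one real copy of the data on disk:
tmp=$(mktemp -d)
mkdir "$tmp/snap1" "$tmp/snap2"
echo "some data" > "$tmp/snap1/file"
ln "$tmp/snap1/file" "$tmp/snap2/file"   # hard link: no extra disk space used
# Both paths show the same inode number and a link count of 2:
ls -li "$tmp/snap1/file" "$tmp/snap2/file"
rm -rf "$tmp"
```

du counts the file once per directory it appears in, while the filesystem stores it only once, which is exactly the df/du difference above.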

So the Time Machine is in fact only 3 lines of code in a script plus a cron job ! Easy and everybody can do it !

Adapt the script to your needs. Run it when you want with cron jobs.

At any point you can delete old backups (for example, backups older than a few weeks). This can also be done with cron plus some scripts.
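As a sketch of such a cleanup (the backup path is the one used above; -print is used instead of deleting, so you can review the list first):

```shell
# List snapshots older than 30 days under the backup root.
BACKUP_ROOT=${BACKUP_ROOT:-/mnt/sdb2/Backups}
if [ -d "$BACKUP_ROOT" ]; then
    # Once the printed list looks right, change -print to: -exec rm -rf {} +
    find "$BACKUP_ROOT" -maxdepth 1 -type d -name 'back-*' -mtime +30 -print
else
    echo "no backup dir: $BACKUP_ROOT"
fi
```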

install Debian on a USB stick for an ATOM-board based server

I have an Intel ATOM board D525MW that I use for a small home server. Normally Intel's Linux support is not great, but the ATOM boards are special: they are forced (by Intel) to boot only from disks that have a bootable FAT partition. So how to install Linux (Debian Wheezy in my case)? First I have to trick the ATOM board BIOS into thinking that the USB stick boots from a FAT partition. To do this I create a small (10MB) partition formatted with FAT16 and set it bootable. This partition stays empty and is never used. Then I create a second partition with ext2 (or ext3, ext4…) and install my Debian on it. So the Debian system is installed on the second partition.
How to create a USB stick with this layout? Answer: with fdisk.

Insert the USB stick into a USB port and from a Linux terminal type (you must be root!):

# fdisk /dev/sdb (or sdc or sdd or whatever is the device name in your case)

then delete (d) all the partitions, create (n) a new 10MB FAT16 partition, and set it bootable (a)

Then write (w) the changes to the disk and unplug the USB stick.

Then plug it in again so that the OS reads the new partition table.

Then create (n) a second partition in the available space. For this use a Linux compatible format like ext2, ext3 or ext4. You can create now any partition scheme you want.

Then write (w) the changes to the disk and unplug the USB stick.

Then plug it again.

Now you are ready to format the partitions.

Use mkfs.msdos to format the FAT16 partition

example of the command used :

# mkfs.msdos -v -F 16  -n LABEL /dev/sdb1

Use mkfs.ext2 to format the second partition with command:

# mkfs.ext2 -v /dev/sdb2

Then umount the USB stick and unplug and plug it again.

Now you are ready to install Debian (or whatever distro you want). As a reminder, the special arrangement with the FAT16 partition is only needed for Intel ATOM boards, because the manufacturer has not yet fixed this BIOS bug.

The Debian installation uses the debootstrap method, in which the Debian OS is installed directly onto the USB stick. There is no live install, no CD/DVD involved, no ISO burning. You need just a Linux machine, an internet connection and a USB stick formatted as described above.

First create an empty folder to mount the USB stick (preferably /tmp/usb). I will assume here that the USB stick is sdb, with sdb1 the FAT16 partition and sdb2 the ext2 partition, and that we install Debian Wheezy 32bit (i386) with GRUB, no GUI, standard system. The only server installed will be SSH, but from there you can add whatever you want (FTP, Samba, web, …). All the commands below are run as root.

# mount /dev/sdb2 /tmp/usb/
# debootstrap --arch=i386 wheezy /tmp/usb/

Adapt the command to your needs; read “man debootstrap” if you are not sure. This step will take some time (20-30 minutes depending on your network speed).

Then comes the configuration of the new Debian image.

# mount -o bind /proc /tmp/usb/proc
# mount -o bind /sys /tmp/usb/sys
# mount -o bind /dev /tmp/usb/dev
# LANG=C chroot /tmp/usb/ /bin/bash

Now you are in the chroot. The commands you execute from here on run inside the new Debian Wheezy OS.

root@debian # mount -t devpts devpts /dev/pts
root@debian # blkid

The purpose is to find the UUID identifier of the /dev/sdb2 partition, because we will need the UUID code later.
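As a small sketch, the UUID can also be pulled out directly and printed in the form fstab expects (the device name /dev/sdb2 is the one assumed in this walkthrough; on a machine without that partition the placeholder is printed instead):

```shell
# Print the fstab root line for the new partition.
UUID=$(blkid -s UUID -o value /dev/sdb2 2>/dev/null)
echo "UUID=${UUID:-insert-here-your-UUID} / ext2 defaults,noatime 0 1"
```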

To edit the configuration files you can use various editors: nano, vi, etc. (personally I prefer nano).

next edit the /etc/fstab file

root@debian # nano /etc/fstab

and add there the following:

proc /proc proc defaults 0 0
UUID=insert-here-your-UUID / ext2 defaults,noatime 0 1
#/dev/sda1 / ext3 defaults,noatime 0 1
tmpfs /tmp tmpfs defaults,noatime 0 0
tmpfs /var/tmp tmpfs defaults,noatime 0 0
tmpfs /var/run tmpfs defaults 0 0
tmpfs /var/log tmpfs defaults 0 0
tmpfs /var/lock tmpfs defaults 0 0

Then edit your network card config file: nano /etc/network/interfaces

auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp

Then configure your local timezone with

root@debian # dpkg-reconfigure tzdata

then edit your server's network name (the machine name on the local network):

root@debian # nano /etc/hostname

Then edit the sources.list for the apt-get/aptitude:

root@debian # nano /etc/apt/sources.list

add there the following (edit them to match your needs):

deb http://ftp.debian.org/debian wheezy main
deb-src http://ftp.debian.org/debian wheezy main
deb http://security.debian.org/ wheezy/updates main

Then refresh the package lists with:

root@debian # aptitude update

next install the locales :

root@debian # aptitude install locales
root@debian # dpkg-reconfigure locales

Next install console-data, linux image and grub:

root@debian # aptitude install console-data
root@debian # aptitude install linux-image-486
root@debian # aptitude install grub

next edit the grub config file like this:

root@debian # nano /etc/default/grub

and inside add something like this :

# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.

GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
#GRUB_CMDLINE_LINUX_DEFAULT="verbose console=ttyS0,38400n8 reboot=bios"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=38400"

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)

# Uncomment to disable graphical terminal (grub-pc only)

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux

# Uncomment to disable generation of recovery mode menu entries

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"

Then open a new terminal on your Linux machine (outside the chroot!) and add this to the stick's /boot/grub/device.map (seen from the host it is /tmp/usb/boot/grub/device.map):

(hd0) /dev/sdb

Then update the grub configuration (back in the chroot environment):

root@debian # grub-install /dev/sdb
root@debian # update-grub

Please check and edit /boot/grub/device.map so that your USB stick is hd0 and not hd1.

Then check whether hd1 appears in /boot/grub/grub.cfg. If yes, replace it with hd0.

Now change the root password for your new Debian Wheezy:

root@debian # passwd root

Next install the utility packages like sudo, the SSH server, the standard Debian packages… or whatever you want to install:

root@debian # aptitude install rsyslog sudo
root@debian # tasksel install standard
root@debian # tasksel install ssh-server

Now the installation is finished and you need to exit the chroot environment. Be careful: if you do not exit the correct way you can damage the newly installed OS.

So do :

root@debian # umount /dev/pts
root@debian # exit

So now you are not in chroot anymore and you are back at your linux terminal.

Now umount :

# umount /tmp/usb/proc/
# umount /tmp/usb/sys
# umount /tmp/usb/dev
# umount /tmp/usb

Now you are ready to unplug the USB stick and to use it on the ATOM board !

FTP backup with a BASH script (FTP scripting)

The FTP protocol is used to transfer data between computers. You can also combine bash scripts with FTP to automate file backups, for example to back up some files from a local machine to a remote server.
The way to do this is to make an executable script that is run from time to time by a cron job. This way the backup is made automatically in the background and does not require user intervention. But there is a problem: if you use FTP on the command line you have to type the user name and password in clear text. So how to do it?
The solution I suggest is in the example below:

The first step is to determine whether a backup is needed, i.e. whether the file has changed since the last backup. For this we compare the current size of the file with its size at the previous backup time; the previous size was saved in a file. We check this with an “if” comparison in BASH:

### check if the file was changed, and if YES then make FTP transfer, if not exit
 if [ `cat ~/.size_myfile.txt` -ne `du -s /path/to/file | cut -f1` ] ; then
 echo 'File changed !'

Then we define some parameters for the FTP command. This definition can be made in a different (hidden) file. For non-critical situations (home users) I recommend keeping the user details in the same file (in the script) but removing all permissions for the other users (use the chmod command). So the backup script should look like this:

-rwx------ 1 myuser mygroup 32 Jun 12 14:52 backupFTP.script

Notice that only the “myuser” user has the right to read/write/execute the file
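A quick sketch of locking the script down (the file name is the one from the listing above):

```shell
# Create the script file and make it readable/writable/executable
# by its owner only:
touch backupFTP.script
chmod 700 backupFTP.script
ls -l backupFTP.script   # should show -rwx------
```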

So for FTP command you need:

 FILE='/path/to/filesource filedestination'
 ftp -n $HOST <<END_SCRIPT
 quote USER $USER
 quote PASS $PASS
 cd /path/to/destination
 put $FILE
 quit
END_SCRIPT

Since the source path and the destination path may differ, you can use “cd /path/to/destination” before the put. The copied file can also be renamed, as shown above (see “filedestination”).

Notice that the commands between the “END_SCRIPT” tags are executed inside the FTP session. These are FTP commands, not BASH/Linux commands. You can put here whatever FTP commands you need. For a full list of the FTP commands, type “help” in the FTP terminal.

The third step is to recalculate and save the new size of the file, so that the size file is up to date the next time the backup script runs. For this we do:

 ## recalculate the new size of the file, for next backup
 du -s /path/to/file | cut -f1 > ~/.size_myfile.txt
 env DISPLAY=:0 notify-send 'FTP backup done !' 'File : /path/to/file'

Optionally you can show a desktop notification that the backup was made. If you do not have a GUI, leave it out.

Here is the full script in one file:

 #!/bin/bash
 ### the server and credentials below are placeholders - fill in your own
 HOST='ftp.example.com'
 USER='myuser'
 PASS='mypassword'
 ### check if the file was changed; if YES make the FTP transfer, if not exit
 if [ `cat ~/.size_myfile.txt` -ne `du -s /path/to/file | cut -f1` ] ; then
 #    echo 'File changed !'
 sleep 1
 FILE='/path/to/filesource filedestination'
 ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASS
cd /path/to/destination
put $FILE
quit
END_SCRIPT
 sleep 1
 ## recalculate the new size of the file, for the next backup
 du -s /path/to/file | cut -f1 > ~/.size_myfile.txt
 env DISPLAY=:0 notify-send 'FTP backup done !' 'File : /path/to/file'
 fi

live USB linux with dd command

I use Linux most of the time, and from time to time I want to test a new distro. The easy way to try a new distro is from a USB stick: the CD is outdated and the DVD is expensive.

There are many ways to make a live bootable USB, but I want to show the easiest one. All you need is the ISO file and any Linux machine. Any Unix-like OS should be fine too, but I have not tested that.

So no special software is required.

First you have to download the ISO for the distro you want to try. For example Mint, Debian , Ubuntu, etc…etc…

Then open a terminal and use dd command to transfer the ISO image to the USB stick.

For this you need the root privileges. You can become root with su or you can use sudo in front of the command.

So in terminal type (first read this post till end !):

sudo dd if=/path/to/linuxdistro.iso of=/dev/sdb bs=4M ; sync

That’s it ! Now the ISO image is copied on the USB stick and you can boot from it.

Details about the command:

sudo = to get super-user privileges; without it you have to become root first with the su command

dd = low-level copy command

if=/path/file = the input file, in this case the ISO file

of= = the destination of the copy

/dev/sdb = your USB device. First check what the name of your USB stick is; you can plug it in and check with the df command. Do not use any number after sdX, since dd must transfer the ISO to the whole disk and not to a partition!

bs=4M = copy up to 4MB at a time; optional, but it speeds up the transfer

sync = optional; tells the OS to flush the buffers

For more info about dd or sync commands please use man from terminal:

man dd

and / or:

man sync

Screencast with FFMPEG with sound (Ubuntu 12.04 / Mint 13)

This is how to make a high quality screencast with sound from the soundcard.
Great if you want to record for example screencast tutorial or  a webpage that contains sound.
system: Ubuntu 12.04/ Mint 13

  • Record without sound, with the capture size matching your screen dimensions (my case is 1600x900):

ffmpeg -f x11grab -s 1600x900 -r 25 -i :0.0 -sameq output.mkv

  • Record with sound, from your microphone:

ffmpeg -f alsa -i pulse -ab 192 -acodec pcm_s16le -f x11grab -s 1600x900 -r 25 -i :0.0 -sameq output.mkv

To record the system sound (what you hear) instead of the microphone, you also have to do the following steps (only the first time):

* Install PulseAudio Volume Control (through the Synaptic package manager).
* Start the second command; recording begins.
* In PulseAudio Volume Control, go to the “Recording” tab; it shows ffmpeg recording the sound. Change the pull-down menu to “Monitor of Built-in Analog Stereo”.
Now it records the sound stream instead of your microphone. Set the sound levels to the desired values.
That is basically it.

After you save the file you will need to cut the first few seconds and the last few seconds, to remove the parts where you start/stop the ffmpeg command.

This can be done with the below command.

In the example below I extract from “output.mkv” the video starting at second 10; the length of the output file (clip-output-file.mkv) is set to 120 seconds.

ffmpeg -ss 10 -t 120 -i output.mkv -acodec copy -vcodec copy clip-output-file.mkv

print some columns from a CSV file

If you have a CSV (comma-separated values) file and you need to print only some columns, you can use sed, awk or cut. Today I will show the “cut” command. Do not confuse it with “cat”!!

cut is quite easy to use and simpler than awk or sed. For example, say you have a file like this:

$ cat datafile.csv

device1 device2 device2a device4b device5a device8 device9

1 56 8 99 5 41 36 8

22 5 99 89 56 56 2

1 0 2 5 9 63 5

As you can see, the delimiter here is not a comma but a space. If you want to print only columns 2, 3 and 5, then “cut” is the best tool.

$ cut -d" " -f2,3,5 datafile.csv

device2 device2a device5a

56 8 5

5 99  56

0 2 9

-d" " = means to use " " (space) as the delimiter in the input file

-f2,3,5 = means to print only the fields 2, 3 and 5
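The same selection can be tried without creating a file, by piping sample lines straight into cut:

```shell
# Pick fields 2, 3 and 5 from space-separated input:
printf 'a b c d e\n1 2 3 4 5\n' | cut -d' ' -f2,3,5
```

This prints “b c e” and “2 3 5”.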

So by default the output delimiter is the same as the input one (here, space). If you want a specific output delimiter, like tab, you can use this:

$ cut -d" " -f2,3,5 --output-delimiter=$'\t' datafile.csv

device2         device2a        device5a

56                 8                       5

5                   99                     56

0                  2                        9

The $'\t' syntax is special, because cut does not accept TAB written as "\t".

For another delimiter you can use the option like: --output-delimiter=".:."

$ cut -d" " -f2,3,5 --output-delimiter=".:." datafile.csv

device2.:.device2a.:.device5a

56.:.8.:.5

5.:.99.:.56

0.:.2.:.9
Of course all of this can also be done with an awk or sed command, but I think that for simple column selection cut is easier.

More than that, cut also gives the possibility to select only some bytes or characters from the file. For example, to print only characters 2 to 5 of each line you use:

$ cut -c2-5 datafile.csv

evic

 56

2 5

 0 2
Not very useful in my case, but nice to have when the input file format does not have a delimiter.

see more info with “man cut”

Take all columns that have a specific header

Let's say that you have CSV files named like results_-45C_deviceA.csv, results_0C_deviceA.csv, results_30C_deviceA.csv, results_125C_deviceA.csv (the temperature is the second "_"-separated field), and each file contains columns of data. Each column has a unique header, like this:

> cat results_30C_deviceA.csv
Vport1 Vport2 Vport3
11 123 4545
22 123 4545
33 123 4545
44 123 4545

And you want to extract all the columns whose header contains the string "Vport2" from all files, while keeping the files in natural temperature order (like -45C, 0C, 30C, 125C…).

The solution is to sort the file names in order and then to use pr and awk. pr will concatenate all files into one big table with all the columns; awk will print only the columns that have the wanted string on the first row.

 pr -m -t -s `ls | sort -n -t _ -k 2` | awk 'NR==1{for(i=1;i<=NF;i++)if($i~/Vport2/)f[n++]=i}{for(i=0;i<n;i++)printf"%s%s",i?" ":"",$f[i];print" "}'

It is only one line, but it is hard to understand.

Please notice the following:

ls = list the files from current folder

sort -n option = treat the strings as numbers

sort -t _ option = consider the names as fields separated by the "_" delimiter

sort -k 2 option = order the files by the 2nd field, treated as a number (-n option); so first comes -45C, then 0C, then 30C, then 125C, then 150C. Without this option the order would be 0, 125, 150, 30, -45.

The ls and sort commands are inside backticks (`). This is the symbol on the same key as ~ (top left corner)!!! It is not the ' apostrophe!

For the awk options please read the manual or search the web.

The result will be

Vport2 Vport2 Vport2 ....
123 123 123
123 123 123
123 123 123
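To try just the awk part on its own, sample data can be fed in directly (three columns, of which only Vport2 is kept; print"" is used here instead of print" " to avoid trailing spaces):

```shell
# Keep only the columns whose header matches Vport2:
printf 'Vport1 Vport2 Vport3\n11 123 4545\n22 123 4545\n' |
awk 'NR==1{for(i=1;i<=NF;i++)if($i~/Vport2/)f[n++]=i}
{for(i=0;i<n;i++)printf"%s%s",i?" ":"",$f[i];print""}'
```

This prints the Vport2 header followed by 123 on each data row.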

The new file contains no info about the temperature order, but you know it is in the right order because of the "sort" command.

Other solutions may be possible, but this worked for me.

The awk part was inspired by a post I found online.