BIPEDU

my ideas in action

Category Archives: linux

use PAC file for automatic proxy selection

I will explain how to use automatic proxy selection for a local network.

For example, let’s say that you have a proxy server but that proxy is not available all the time. In this case you need to check each time whether the proxy is alive (available) and, if yes, use it. If not, the browser should fall back to a direct connection.

The easiest way to do this is to create a PAC file and add it in Firefox as the automatic proxy configuration.

Go to Preferences -> Advanced -> Network -> Settings and choose “Automatic proxy configuration URL”.

Then type the path to a local PAC file there. Normally this field expects a web address, but a local file works as well (no web server needed).
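For example, assuming the file was saved in your home directory (the path below is only an illustration, adapt it to wherever you keep the file), the URL field could contain:

file:///home/youruser/autoproxy.pac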


To create the PAC file, use any text editor, create a file called “autoproxy.pac” and put this content in it:

function FindProxyForURL(url, host)
{
return "PROXY 192.168.1.29:3128; DIRECT";
}

The proxy in this case is on the local network at 192.168.1.29 (Squid proxy on port 3128) and Firefox will try to use it first. In case it is not responding, it will use the direct connection.

You can set there multiple proxy servers. The order is important.

In the example below there are two proxies. If the first one (192.168.1.29) is not responding, the second one (192.168.1.42) will be selected, and if the second one does not respond either, the direct network connection will be used.

function FindProxyForURL(url, host)
{
return "PROXY 192.168.1.29:3128; PROXY 192.168.1.42:3128; DIRECT";
}
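As another illustration (a sketch using the standard isInNet() helper), the PAC file can also make decisions based on the destination, for example sending local addresses directly and everything else through the proxy:

function FindProxyForURL(url, host)
{
// hosts on the local subnet go direct, everything else via the proxy
if (isInNet(host, "192.168.1.0", "255.255.255.0"))
return "DIRECT";
return "PROXY 192.168.1.29:3128; DIRECT";
}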

The name of the PAC file is not important (“autoproxy.pac” is just the name I used); any name will do.

More details regarding the PAC file, examples and more advanced functions can be found here: http://findproxyforurl.com/



Fixing “RPC: AUTH_GSS upcall timed out.” from dmesg (Debian and other Linux distros)

In case you see this in your dmesg, read further. The issue also causes a slow (first-time) connection to an NFS share (it takes ~30 seconds).

There is a bug (!?) in the NFS client configuration: it loads a module called rpcsec_gss_krb5.

You can check if this module is loaded with “lsmod”.
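For example, to filter the output directly:

lsmod | grep rpcsec_gss_krb5

If the command prints a line, the module is currently loaded.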

Solution: do not load the module.

As root, type:

echo blacklist rpcsec_gss_krb5 > /etc/modprobe.d/dist-blacklist.conf

then reboot

Problem solved: fast connection on the NFS share and no more dmesg error messages.

auto mount your NAS share

I have discovered a better way of mounting NAS shares.

Previously I was using /etc/fstab but this was not very convenient.

So I found “autofs”. It takes care of automounting CD/DVD and external shares (NFS, CIFS, etc.) based on usage. I mean that when you click on the folder it gets mounted, but when the mount folder is not in use it gets unmounted after a predefined timeout. This is a good feature for me, since I wanted the share to be unmounted automatically when I am not using it.

So after installing autofs (from the repository) you have to configure two files.

The first is /etc/auto.master. You should put the timeout setting and the mount directory here.

If you have more mount paths you can use different timeout settings.

My example below:

file:   /etc/auto.master

+auto.master
/mnt/nfsfreenas /etc/auto.misc --timeout 60 --ghost

The second file is /etc/auto.misc. This contains the mount settings (somewhat similar to the /etc/fstab file).

My example file here:

file : /etc/auto.misc

bsddataset    -fstype=nfs,soft,sync,rsize=6000,wsize=6000    192.168.1.20:/mnt/data1/bsdshare

So what I have here is a FreeNAS share on 192.168.1.20 (bsdshare), exported over NFS.

You can also put CIFS/Samba shares, sshfs or other share types in this file; CD/DVD disks work too.
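As an illustration, a CIFS/Samba share could be added to the same file with a line like this (the server address, share name and credentials file below are placeholders, adapt them to your setup):

winshare    -fstype=cifs,rw,credentials=/etc/auto.smb.cred    ://192.168.1.21/share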

At the end, after updating these two files with your setup, restart the autofs service with:

service autofs restart

Firefox and Backspace and AutoScroll

From time to time I notice that Firefox changes some settings after an upgrade. Maybe there are bugs, or maybe the Firefox guys simply decide that we ordinary users are stupid and do not need certain features.

The ones that I use the most are the middle-mouse/wheel autoscroll and backspace to go to the previous page.

So in case they are missing from your Firefox you can activate them again from about:config: set general.autoScroll to true, and set browser.backspace_action to 0.

samba server on tinycore linux – howto

A simple setup for a Samba server on Tiny Core Linux. Many tutorials are available on the internet, but this one is tested and works the way I want.

First install the samba package on Tiny Core Linux:

From the “tc” user, start tce-ab and install the samba extension from there.

Then edit the smb.conf file at /usr/local/etc/samba/smb.conf.
Inside, add something like this:

[global]
workgroup = WORKGROUP
netbios name = box
security = user

[atom_1t]
comment = Data
path = /mnt/sda1
read only = no
guest ok = no

Explanations:

security = user
this will create a share that is based on user/password

netbios name = box
this will be the name of your server (an alias to avoid typing the IP address)

read only = no
to have write access

guest ok = no
guest users are not allowed (no anonymous connections)

Then, as root, run this command:

 smbpasswd -a <tinycore user>

Then type the Samba password for that user. You will use this password from the client machine when you connect to the Samba share.
Then save the Samba config files to make the changes persistent after a reboot.

Add these lines to /opt/.filetool.lst:

usr/local/etc/samba/smb.conf            <-- this contains the samba setup
usr/local/etc/samba/private/passdb.tdb  <-- this contains your password

Then back up with “backup” and restart the server.

Next, go to the client machine and in the file manager type:
smb://box/
and you should get a popup window asking for a user and password. Enter the user and the password you set in Samba.
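If you prefer the command line over a file manager, the same share can also be mounted manually, roughly like this (a sketch; it assumes the mount point /mnt/atom_1t already exists and that cifs-utils is installed):

sudo mount -t cifs //box/atom_1t /mnt/atom_1t -o username=<tinycore user>

You will be asked for the same Samba password.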

easy backup system with rsync – like Time Machine

Backup systems are good for recovering accidentally lost data. But a more useful feature is the incremental backup, where you have access to various snapshots in time, like Apple’s Time Machine does. Doing this on Linux (or any Unix-like system) is actually very easy.

For example, we make a backup every day (or at whatever interval you want). We need the amount of transferred data to be small. Imagine transferring a few TB every day! If our important data changes only a little, then we back up only the modified parts. For this rsync is the best tool; everybody knows that. But there is a problem: how can we keep daily snapshots of the data without filling the disk? For this we will use soft links, hard links and rsync options.

So we have to create a script file like this:

#!/bin/bash
date=`date "+%Y-%m-%dT%H-%M-%S"`
rsync -aP --delete --log-file=/tmp/log_backup.log --exclude=lost+found --link-dest=/mnt/sdb2/Backups/current /mnt/sda1/ /mnt/sdb2/Backups/back-$date
rm -f /mnt/sdb2/Backups/current
ln -s /mnt/sdb2/Backups/back-$date /mnt/sdb2/Backups/current

So here I first create a “date” variable that will be used in the name of the backup folder, to easily know when that backup/snapshot was made.

Then rsync is used with some parameters (see man rsync for more details):

-a = archive mode (recursive, preserves permissions, timestamps, symlinks, etc.)

-P = show progress information (optional)

--delete = delete files from the backup when they have been removed from the source

--log-file = save the log to a file (optional)

--exclude = exclude some folders/files from the backup. These are relative to the source path; do not use absolute paths here!

--link-dest = hard-link unchanged files against the latest backup snapshot

/mnt/sda1/ = source path (here I back up a whole drive)

/mnt/sdb2/Backups/back-$date = destination folder; it will contain all the content of the source.

Then, using rm, I remove the old “current” link and replace it with a new soft link pointing to the newly created snapshot.

So now whenever I open “current” I am in fact looking at the latest backup.

And because the date is different every time the backup runs, the old snapshots are kept. So I will have one snapshot per day.

To automate this, create a cron job that executes the above script at a convenient time.

Example to run at 4:01AM every day:

1  4 * * * /path/to/script

Note that only the first backup takes a long time, since it copies all the data. From the second run onwards the script transfers only the changed files/bits.

Now in the destination folder you will see a “back-xxx” folder for every run of the script. You can open/read the files from all these folders as if they were completely independent copies. In fact, if you run df and du you will see something interesting.

For example, if the backup is 600GB and the script runs every day, df will show the same 600GB of used disk space, but “du -sh ./*” will report roughly 600GB for each “back-xxx” folder. This is possible because unchanged files are only hard links to the same data. Do not worry, the disk is not full; trust the df results, not the du results.

user@box:/mnt/sdb2/Backups$ du  -sh ./*
623.8G    ./back-2014-02-24T17:47:12
623.8G    ./back-2014-02-24T21-46-41
623.8G    ./back-2014-02-25T17-05-02
623.8G    ./back-2014-02-25T18-45-34
0    ./current
user@box:/mnt/sdb2/Backups$ df /mnt/sdb2
Filesystem                Size      Used Available Use% Mounted on
/dev/sdb2                 2.7T    623.9G      1.9T  24% /mnt/sdb2

So the Time Machine is in fact just a few lines of script plus a cron job! Easy, and everybody can do it!

Adapt the script to your needs. Run it when you want with cron jobs.

At any point you can delete old backups (for example, backups older than a few weeks). This can also be automated with cron plus a small script, as sketched below.
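A minimal sketch of such a cleanup, assuming the layout used above and that you want to keep only the 30 newest snapshots (the folder names contain the date, so plain alphabetical order is already chronological):

ls -d /mnt/sdb2/Backups/back-* | head -n -30 | xargs -r rm -rf

Check the path carefully before putting anything with rm -rf into cron.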

Tiny core linux no autologin

On the Tiny Core Linux distribution the tc user is automatically logged in at boot time.

If you do not want this to happen, you have to edit the /opt/bootsync.sh file and add this line:

echo "booting" > /etc/sysconfig/noautologin

and then back up with the backup command. Of course, all of this must be run from the root account.
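As a sketch, one way to do the whole change from a root shell (it simply appends the line to the boot script and then saves the backup):

echo 'echo "booting" > /etc/sysconfig/noautologin' >> /opt/bootsync.sh
backup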

install Debian on a USB stick for an ATOM board based server

I have an Intel ATOM D525MW board and I use it for a small home server. Normally Intel’s Linux support is not great, but the ATOM boards are special: they are forced (by the Intel BIOS) to boot only from disks that have a bootable FAT partition. So how do you install Linux (Debian Wheezy in my case)?

First I have to trick the ATOM board BIOS into thinking that the USB stick boots from a FAT partition. To do this I create a small (10MB) partition formatted with FAT16 and set it as bootable. This partition stays empty and is never used. Then I create a second partition with ext2 (or ext3, ext4, ...) and install Debian on it. So the Debian system ends up on the second partition.

How do you create a USB stick with this arrangement? Answer: with fdisk.

Insert the USB stick into a USB port and from a Linux terminal type (you must be root!):

# fdisk /dev/sdb (or sdc or sdd or whatever is the device name in your case)

Then delete (d) all the partitions, create (n) a new 10MB FAT16 partition and set it as bootable.

Then write (w) the changes to the disk and unplug the USB stick.

Then plug it in again so that the OS reads the new partition table.

Then create (n) a second partition in the available space. For this, use a Linux-compatible filesystem like ext2, ext3 or ext4. You can create any partition scheme you want here.

Then write (w) the changes to the disk and unplug the USB stick.

Then plug it in again.
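If you prefer a non-interactive tool, roughly the same layout can be created with parted instead of fdisk (a sketch; adjust the device name and sizes to your stick):

parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary fat16 1MiB 11MiB
parted -s /dev/sdb set 1 boot on
parted -s /dev/sdb mkpart primary ext2 11MiB 100%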

Now you are ready to format the partitions.

Use mkfs.msdos to format the FAT16 partition

Example of the command used:

# mkfs.msdos -v -F 16  -n LABEL /dev/sdb1

Use mkfs.ext2 to format the second partition with command:

# mkfs.ext2 -v /dev/sdb2

Then unmount the USB stick, unplug it and plug it in again.

Now you are ready to install Debian (or whatever distro you want). As a reminder, the special arrangement with the FAT16 partition is only needed for Intel ATOM boards, because the manufacturer has not yet fixed this BIOS bug.

The Debian installation uses the debootstrap method. With this method the Debian OS is installed directly onto the USB stick. There is no live install, no CD/DVD involved and no ISO burning. You just need a Linux machine, an internet connection and a USB stick formatted as described above.

First create an empty folder to mount the USB stick (preferably /tmp/usb). I will assume here that the USB stick is sdb, sdb1 is the FAT16 partition and sdb2 is the ext2 partition. I will also assume that we install Debian Wheezy 32-bit (i386) with GRUB, no GUI and the standard system. The only server installed will be SSH, but from there you can add whatever you want (FTP, Samba, web, ...). All the commands below are run from the root account.

# mount /dev/sdb2 /tmp/usb/
# debootstrap --arch=i386 wheezy /tmp/usb/ http://ftp.be.debian.org/debian

Adapt the command to your needs; read “man debootstrap” if you are not sure. This step will take some time (20-30 minutes depending on your network speed).

Then configure the new Debian image:

# mount -t proc none /tmp/usb/proc
# mount -t sysfs none /tmp/usb/sys
# mount -o bind /proc /tmp/usb/proc
# mount -o bind /dev /tmp/usb/dev
# mount -o bind /sys /tmp/usb/sys
# LANG=C chroot /tmp/usb/ /bin/bash

Now you are inside the chroot environment. The commands that you execute from here run inside the new Debian Wheezy OS.

root@debian # mount devpts /dev/pts  -t devpts
root@debian # blkid

The purpose is to find the UUID of the /dev/sdb2 partition, because we will need it later.
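To limit the output to the relevant partition you can also pass the device directly:

root@debian # blkid /dev/sdb2

and copy the UUID value from there.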

To edit the configuration files you can use various editors: nano, vi, etc. (personally I prefer nano).

Next edit the /etc/fstab file:

root@debian # nano /etc/fstab

and add the following there:

# UNCONFIGURED FSTAB FOR BASE SYSTEM
proc /proc proc defaults 0 0
UUID=insert-here-your-UUID / ext2 defaults,noatime 0 1
#/dev/sda1 / ext3 defaults,noatime 0 1
tmpfs /tmp tmpfs defaults,noatime 0 0
tmpfs /var/tmp tmpfs defaults,noatime 0 0
tmpfs /var/run tmpfs defaults 0 0
tmpfs /var/log tmpfs defaults 0 0
tmpfs /var/lock tmpfs defaults 0 0

Then edit your network card config file: nano /etc/network/interfaces

auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp

Then configure your local timezone with

root@debian # dpkg-reconfigure tzdata

Then edit your server’s network name (the machine name on the local network):

root@debian # nano /etc/hostname

Then edit the sources.list for the apt-get/aptitude:

root@debian # nano /etc/apt/sources.list

Add the following there (edit to match your needs):

deb http://ftp.be.debian.org/debian wheezy main
deb-src http://security.debian.org/ wheezy/updates main
deb http://security.debian.org/ wheezy/updates main

Then refresh the aptitude package lists with:

root@debian # aptitude update

Next install the locales:

root@debian # aptitude install locales
root@debian # dpkg-reconfigure locales

Next install console-data, the Linux kernel image and GRUB:

root@debian # aptitude install console-data
root@debian # aptitude install linux-image-486
root@debian # aptitude install grub

Next edit the GRUB config file:

root@debian # nano /etc/default/grub

and inside add something like this :

# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
#GRUB_CMDLINE_LINUX_DEFAULT="verbose console=ttyS0,38400n8 reboot=bios"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=38400"
GRUB_CMDLINE_LINUX=""

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal (grub-pc only)
GRUB_TERMINAL=console
#GRUB_TERMINAL=serial

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true

# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_LINUX_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"

Then open a new terminal on your Linux machine (outside the chroot!) and add this to the /boot/grub/device.map:

(hd0) /dev/sdb

Then install GRUB and update its configuration (you are now back in the chroot environment):

root@debian # grub-install /dev/sdb
root@debian # update-grub

Please check and edit the /boot/grub/device.map so that your USB stick is on hd0 and not hd1.

Then check whether you have hd1 in /boot/grub/grub.cfg. If yes, replace it with hd0.
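A quick way to check is (review the matches before editing anything):

root@debian # grep -n hd1 /boot/grub/grub.cfg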

Now change the root password for your new Debian Wheezy:

root@debian # passwd root

Next install the utility packages like sudo, the SSH server and the standard Debian packages... or whatever else you want:

root@debian # aptitude install rsyslog sudo
root@debian # tasksel install standard
root@debian # tasksel install ssh-server

Now the installation is finished and you need to exit the chroot environment. Be careful: if you do not use the correct exit procedure you can damage the newly installed OS.

So do :

root@debian # umount /dev/pts
root@debian # exit

Now you are out of the chroot and back at your own Linux terminal.

Now umount :

# umount /tmp/usb/proc/
# umount /tmp/usb/sys
# umount /tmp/usb/dev
# umount /tmp/usb
# umount /dev/sdb2

Now you are ready to unplug the USB stick and use it on the ATOM board!

FTP backup with BASH script ( FTP scripting)

The FTP protocol is used to transfer data between computers. You can also combine bash scripts with FTP to automate file backups. This can be used, for example, to back up some files from a local machine to a remote server.
The way to do this is to make an executable script that is run from time to time by a cron job. In this way the backup happens automatically in the background and does not require user intervention. But there is a problem: if you use FTP on the command line you have to type the user name and password in clear text. So how to do it?
The solution I suggest is shown in the example below.

The first step is to determine whether the backup is needed. We check if the file has changed since the last backup, by comparing the current size of the file with its size at the previous backup time (the previous size was saved in a file). For this check we use an “if” comparison in BASH:

### check if the file was changed, and if YES then make FTP transfer, if not exit
 if [ `cat ~/.size_myfile.txt` -ne `du -s /path/to/file | cut -f1` ] ; then
 echo 'File changed !'
 fi
 ###########################

Then we define some parameters for the FTP command. This definition can be made in a separate (hidden) file. For non-critical situations (home users) I recommend keeping the user details in the same file (in the script) but removing all permissions for other users (use the chmod command). So the backup script should look like this:

-rwx------ myuser mygroup 32 june 12 14:52 backupFTP.script

Notice that only the “myuser” user has the right to read/write/execute the file.

So for the FTP command you need:

##############################
HOST='ftp.myserver.com'
USER='myusername'
PASSWD='mypassword'
FILE='/path/to/filesource filedestination'
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
cd /path/to/destination
put $FILE
quit
END_SCRIPT
#############################

Since the source path and the destination path may be different, you can use “cd /path/to/destination” before transferring the file. The copied file can also be renamed, as shown above (see “filedestination”).

Notice that the commands between the “END_SCRIPT” markers are executed inside the FTP client. These are FTP commands, not BASH/Linux commands. You can put whatever FTP commands you need here. For a full list of FTP commands, type “help” in the FTP client.

The third step is to recalculate and save the new size of the file, so that the stored size is up to date the next time the backup script runs. For this we do:

################################
 ## recalculate the new size of the file, for next backup
 du -s /path/to/file | cut -f1 > ~/.size_myfile.txt
 env DISPLAY=:0 notify-send 'FTP backup done !' 'File : /path/to/file'
 ###############################

Optionally you can show a desktop notification that the backup was made. If you do not have a GUI, leave this line out.

Here is the full script in a single file:

#!/bin/bash
### check if the file was changed; if YES then make the FTP transfer, if not do nothing
if [ `cat ~/.size_myfile.txt` -ne `du -s /path/to/file | cut -f1` ] ; then
#    echo 'File changed !'
sleep 1
#
HOST='ftp.myserver.com'
USER='myusername'
PASSWD='mypassword'
FILE='/path/to/filesource filedestination'
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
cd /path/to/destination
put $FILE
quit
END_SCRIPT
sleep 1
## recalculate the new size of the file, for the next backup
du -s /path/to/file | cut -f1 > ~/.size_myfile.txt
env DISPLAY=:0 notify-send 'FTP backup done !' 'File : /path/to/file'
fi
###############
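To run the script automatically, add a cron entry similar to the one used for the rsync backup above, for example every night at 3:00 (the path is just an example):

0 3 * * * /path/to/backupFTP.script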

live USB linux with dd command

I use Linux most of the time, and from time to time I want to test a new distro. The easy way to try a new distro is to use a USB stick; CDs are outdated and DVDs are expensive.

There are many ways to make a live bootable USB stick, but I want to show the easiest one. All you need is the ISO file and any Linux machine. Any Unix OS should be fine, but I have not tested that.

So no special software is required.

First download the ISO for the distro you want to try, for example Mint, Debian, Ubuntu, etc.

Then open a terminal and use the dd command to transfer the ISO image to the USB stick.

For this you need root privileges. You can become root with su, or you can put sudo in front of the command.

So in the terminal type (but first read this post to the end!):

sudo dd if=/path/to/linuxdistro.iso of=/dev/sdb bs=4M ; sync

That’s it! Now the ISO image is copied to the USB stick and you can boot from it.

Details about the command:

sudo = run with super-user privileges; otherwise you have to become root first with the su command

dd = the command that copies the raw data

if=/path/file = the input file, in this case the ISO file

of = the destination of the copy

/dev/sdb = your USB device. First check what the name of your USB stick is: plug it in and check with the df command (see the example after this list). Do not use any number after sdX, since dd must write the ISO to the whole disk, not to a partition!

bs=4M = copy up to 4MB at a time; it is optional but it speeds up the transfer

sync = optional; tells the Linux OS to flush the buffers
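To double-check the device name before running dd, lsblk (available on most distros) gives a compact overview:

lsblk -o NAME,SIZE,MODEL

Pick the disk that matches your stick’s size and use it without a partition number.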

For more info about dd or sync commands please use man from terminal:

man dd

and / or:

man sync