my ideas in action

Category Archives: linux

change the date of JPG photos using the EXIF date

Sometimes you get various photos, from a vacation for example, that are not in the right order. The file name or the file date is wrong, so when you browse the pictures some of them appear out of chronological order.

To fix that, I found it really useful to combine the jhead command with a small self-made bash script.

Steps to do:

  1. make a backup of the photos
  2. install jhead and use it to copy the date from the EXIF info to the file date
    1. install jhead
    2. run from a terminal:
    3. jhead -ft *.jpg

  3. make a script like this and run it
    1. create the script as ASCII text with nano and then chmod 755 rename_by_date.script
    2. content of the script:
      1. #!/bin/bash

        echo "renaming files based on date:"

        n=0; ls -tr | while read i; do n=$((n+1)); mv -- "$i" "$(printf 'NewName_%04d.jpg' "$n")"; done
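To sanity-check the rename loop before pointing it at real photos, here is a self-contained dry run on dummy files (the file names and dates are invented for the demo):

```shell
# Create three dummy "photos" with different mtimes in a scratch
# directory, then rename them oldest-first, as the script does.
demo=$(mktemp -d)
cd "$demo"
touch -d '2020-01-01' c.jpg
touch -d '2020-01-02' a.jpg
touch -d '2020-01-03' b.jpg
n=0
ls -tr | while read -r i; do
    n=$((n+1))
    mv -- "$i" "$(printf 'NewName_%04d.jpg' "$n")"
done
ls    # NewName_0001.jpg .. NewName_0003.jpg, in date order
```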



That is all.



update PiHole (inside VM on Freenas)

Pi-hole needs some updates from time to time, so it is best to set up a cron job to do them automatically. At this time the Pi-hole updates cannot be made from the web interface. In my case Pi-hole runs inside a VM (Ubuntu Server) inside Freenas.

So you need to enter the VM shell inside your Freenas machine. Open the Freenas web interface, then go to VMs -> Pihole -> VNC via web.

Then, in the new terminal after login, edit root's crontab:

sudo crontab -e

Then in crontab editor write this :

5 1 * * * pihole -g

15 1 * * * pihole -up

This means that at 1:05am every day the Pi-hole list of ad-serving domains (gravity) is updated. Then at 1:15am Pi-hole itself is updated.
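If you prefer not to type in the interactive editor, the two entries can also be prepared in a file and installed in one shot. A sketch (note that `crontab <file>` replaces root's entire crontab, so review it first):

```shell
# Write the two update jobs to a file; on the Pi-hole VM install it with:
#   sudo crontab /tmp/pihole.cron
# NOTE: "crontab <file>" replaces the existing crontab entirely.
cat > /tmp/pihole.cron <<'EOF'
5 1 * * * pihole -g
15 1 * * * pihole -up
EOF
```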

If you want to see the Pi-hole status in real time, use

pihole -c






recover and boot a VM inside Freenas 11

I recently made a VM inside Freenas with Ubuntu and Pi-hole to filter out advertisements from my network.

All was configured as described here :


The main problem that remains is that after a restart of the VM (Ubuntu), the system does not reboot correctly. I have not looked in detail at the boot process of a VM inside Freenas, but apparently it is an issue with the EFI file location. The longer explanation is given here:

So… how to fix it?

First you need to open the VNC console to the VM (Ubuntu in this case). Then type "exit" to get from the shell to the EFI menu system, navigate to "Boot Maintenance Manager", and select "Boot from file" to locate and select your grubx64.efi file.

After booting, execute this command as root (use sudo!):

grub-install --efi-directory=/boot/efi --boot-directory=/boot --removable

Then, after a reboot of the VM, you get the VNC terminal back.

If your VM restarted correctly then it is fine.

If not, then you have to copy some files to make sure the reboot will also work next time.

More specifically you have to copy the grubx64.efi from /boot/efi/EFI/ubuntu to /boot/efi/EFI/BOOT.

Do this as root (use sudo!):

cp /boot/efi/EFI/ubuntu/grubx64.efi /boot/efi/EFI/BOOT/grubx64.efi

cp /boot/efi/EFI/ubuntu/grubx64.efi /boot/efi/EFI/BOOT/BOOTX64.EFI


NB: If grubx64.efi gets updated you will need to re-create BOOTX64.EFI.

So if the Ubuntu VM makes an automatic update of the GRUB utility, you may end up with the VM (Ubuntu + Pi-hole) not restarting correctly. If you never reboot Freenas and the VM you are fine, but if you have regular restarts (updates or maintenance) then please remember to do all these steps again.

Unfortunately I do not yet have a permanent fix.
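Until there is one, the re-copy can be scripted so it is safe to run repeatedly (for example from a cron `@reboot` entry). This is only a sketch; the default paths are the ones from this post:

```shell
# Re-copy grub's EFI binary to the removable-media path the VM firmware
# falls back to. Does nothing if the source file is absent.
sync_boot_efi() {
    src=${1:-/boot/efi/EFI/ubuntu/grubx64.efi}
    dst=${2:-/boot/efi/EFI/BOOT}
    [ -f "$src" ] || return 0
    mkdir -p "$dst"
    cp "$src" "$dst/grubx64.efi"
    cp "$src" "$dst/BOOTX64.EFI"
}
sync_boot_efi    # on the real VM this refreshes BOOTX64.EFI
```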


For updates of the VM machine (Ubuntu, in my case) it is better to apply only the security patches, and for that it is better to use

sudo unattended-upgrade

and not the classical apt-get

sudo apt-get update

sudo apt-get upgrade






use ffmpeg to edit videos

I want to show here how to produce a video and what the post-production steps are.

All of this uses free and open source software. No need to be a computer expert, only a little bit of courage.

You can use this recipe for homemade videos that you post online, for your vlog/blog or whatever.

I use ffmpeg and Inkscape, which are available for both Linux and Windows.

As a general rule, on Linux/Unix use RAM since it is faster: read/write all temporary files in the /tmp folder (on most distributions it is a RAM-backed tmpfs).


These are the steps :

1= record the video + audio

use a decent camera (preferably HD or 4K) with decent audio. A DSLR is OK too.

If your camera does not record good audio from the built-in mic, record the audio separately and clap in front of the camera so that you can synchronize video/audio in post-production. Preferably use an external microphone (e.g. a lavalier) as close as possible to the sound source. A cheap and good lav mic is the Boya BM-M1 (see Amazon).

You can also record with a smartphone, but be aware that its sample rate may be 44100 Hz instead of the 48000 Hz the camera records. This may produce audio that is not in sync with the video.

During recording, keep 5-10 seconds of silence; you may need it later for noise reduction (see next step).

2= extract the audio separated and do some processing (if necessary)

Personally I use Audacity (free and open source).

I import the audio and mainly do two operations: Noise Reduction and Compression. The default Audacity settings are OK for most cases. If the audio sample rate is 44100, it is preferable to change it to 48000 to match the video.

Save the audio as WAV (16-bit) at 48000.

3= then replace the old audio in the video with the new (noise-reduced) audio

3.1 = strip the audio from the video

ffmpeg -i /tmp/DSC_0009.MOV -an -c:v copy /tmp/

3.2 = add new audio to the video

ffmpeg -i /tmp/ -i /tmp/DSC_0009.wav -vcodec copy -acodec copy /tmp/


4= cut the video to the correct size

First play the new video, check the quality, and note down the moments you want to keep. For example: from 00:01:12 till 00:25:48 (hh:mm:ss format).

usage example: cut from second 20 until 9 min 36 s:

ffmpeg -i /tmp/ -vcodec copy -acodec copy -ss 00:00:20 -to 00:09:36 /tmp/
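Steps 3 and 4 can be collected into one small reusable script. This is only a sketch: the file names (input.MOV, clean.wav, final.mp4) and the cut times are placeholders to replace with your own.

```shell
# Write the post-production steps (strip audio, mux clean audio, cut)
# into a script; run it later on the real machine.
cat > /tmp/postprod.sh <<'EOF'
#!/bin/sh
set -e
cd /tmp
# 3.1 strip the original audio track (video stream copied untouched)
ffmpeg -i input.MOV -an -c:v copy video_only.MOV
# 3.2 mux in the cleaned-up audio exported from Audacity
ffmpeg -i video_only.MOV -i clean.wav -c:v copy -c:a copy muxed.MOV
# 4   keep only the part between 0:20 and 9:36, without re-encoding
ffmpeg -i muxed.MOV -ss 00:00:20 -to 00:09:36 -c copy final.mp4
EOF
chmod +x /tmp/postprod.sh
```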

5= (optional) add some logo/title/watermark

5.1 = Create the logo

With Inkscape (free and open source) create an image in the same format as the video. For example, if the video is HD 1920x1080, create a page that is 1920x1080. Place your logo/watermark/artwork on it, preferably using alpha (transparency) for better effect. Save the work and export it to PNG, keeping the transparency.

5.2 = Overlay the PNG image to the video with ffmpeg

ffmpeg -i /tmp/ -loop 1 -i /tmp/TitleFile.png -filter_complex "[1:v]fade=in:st=0:d=0.1:alpha=1,fade=out:st=9:d=1:alpha=1[png];[0:v][png]overlay=x=0:enable='between(t,0,10)'" -b:v 2000k -bufsize 2000k /tmp/

Here I fade the PNG image in starting at second 0 (st=0) over 0.1 s (d=0.1), then fade it out starting at second 9 (st=9) over 1 second (d=1). The overlay is enabled from second 0 to 10 (see between(t,0,10)). I also force the bitrate to 2 Mb/s (-b:v 2000k -bufsize 2000k) to avoid losing too much quality; otherwise ffmpeg may reduce the bitrate even more and it may end up too low.


6= (optional) add a static picture at the final with credits/ links/Thanks or other announcements

6.1 = use Inkscape to create a closing image. Same as before; you can use transparency (alpha).

6.2= use ffmpeg to create and concatenate the image at the end of the video:

ffmpeg -i /tmp/ -loop 1 -framerate 30 -t 5 -i /tmp/Closing.png -f lavfi -t 0.1 -i anullsrc=channel_layout=stereo:sample_rate=48000 -filter_complex "[0:v][0:a][1:v][2:a]concat=n=2:v=1:a=1" -b:v 2000k -bufsize 2000k /tmp/

options :

-framerate 30 : must match the original framerate of your video (if you shot in 60 fps then use 60 here too)

-t 5 : generate 5 seconds of video that will show your Closing.png image

stereo : adds a silent stereo track if your video's audio is stereo (if not, you can put mono)

sample_rate=48000 : use the same sample rate as your audio (the other common option is 44100)


7= (optional) post the video online (e.g. YouTube, Vimeo, etc.)



use PAC file for automatic proxy selection

I will explain how to use automatic proxy selection for a local network.

For example, let's say you have a proxy server but it is not available all the time. In this case the browser needs to find out each time whether the proxy is alive (available) and, if yes, use it. If not, the browser falls back to a direct connection.

The easiest way is to create a PAC file and add it to Firefox as the automatic proxy configuration.

Go to Preferences -> Advanced -> Network -> Settings and choose "Automatic proxy configuration URL".

Then type the path to a local PAC file there. Normally this would be a web page address, but a local file works too (no web server needed).


To create the PAC file, use any text editor to create a file called "autoproxy.pac" with this content:

function FindProxyForURL(url, host)
{
    return "PROXY; DIRECT";
}

The proxy in this case is a Squid proxy on port 3128 on the local network, and Firefox tries to use it first. If it is not responding, Firefox uses a direct connection.

You can set there multiple proxy servers. The order is important.

In the example below you have two proxies. If the first one is not responding, the second one is selected, and if the second one does not respond either, the direct network connection is used.

function FindProxyForURL(url, host)
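A complete version of that two-proxy example could look like the following sketch, which writes the PAC file to /tmp; the host:port pairs (192.168.0.10:3128 and 192.168.0.11:3128) are invented placeholders to replace with your own proxies:

```shell
# Write an example two-proxy PAC file; both proxy addresses below are
# illustrative placeholders, not real servers from this post.
cat > /tmp/autoproxy.pac <<'EOF'
function FindProxyForURL(url, host) {
    // first proxy, then fallback proxy, then direct connection
    return "PROXY; PROXY; DIRECT";
}
EOF
```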

The name of the PAC file is not important ("autoproxy.pac" is just the name I used); any name will do.

More details regarding the PAC file, examples, and more advanced functions can be found here:




Fixing "RPC: AUTH_GSS upcall timed out." from dmesg (Debian and other Linux distros)

If you see this message in your dmesg, read on. The issue also causes a slow (first-time) connection to an NFS share (it takes ~30 seconds).

There is a bug (!?) in the NFS client configuration: it loads a module called rpcsec_gss_krb5.

You can check whether this module is loaded with "lsmod".
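For example, by reading /proc/modules directly (which is what lsmod reads):

```shell
# Count how many times the module appears in the kernel module list;
# 0 means it is not loaded (which is what you want after the fix below).
loaded=$(grep -c '^rpcsec_gss_krb5' /proc/modules 2>/dev/null)
loaded=${loaded:-0}
if [ "$loaded" -eq 0 ]; then
    echo "rpcsec_gss_krb5 not loaded"
else
    echo "rpcsec_gss_krb5 is loaded"
fi
```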

Solution: do not load the module.

As root, type:

echo blacklist rpcsec_gss_krb5 > /etc/modprobe.d/dist-blacklist.conf

then reboot

Problem solved: fast connection on the NFS share and no dmesg error message.



auto mount your NAS share

I have discovered a better way of mounting NAS shares.

Previously I was using /etc/fstab, but that was not very convenient.

So I found "autofs". It takes care of automounting CD/DVD/external shares (NFS/CIFS, etc.) based on usage: when you click on the folder it mounts it, and when the mount folder has not been used for a predefined timeout it unmounts it. This is a good feature for me, since I wanted shares unmounted automatically when I do not use them.


So after installing autofs (from the repository), you have to configure two files.


The first is /etc/auto.master. Here you set the timeout and the mount directory.

If you have more mount paths, you can give them different timeout settings.

My example below:

file:   /etc/auto.master

/mnt/nfsfreenas /etc/auto.misc --timeout 60 --ghost

The second file is /etc/auto.misc. It contains the mount settings (somewhat similar to the /etc/fstab file).

My example file here:

file : /etc/auto.misc

bsddataset    -fstype=nfs,soft,sync,rsize=6000,wsize=6000

So what I have here is a Freenas NFS share (bsdshare).
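For reference, a complete auto.misc line carries the remote server and export path after the options. In the sketch below the entry is written to a scratch file, and "freenas.local:/mnt/pool/bsdshare" is an invented server:path placeholder for your own NAS export:

```shell
# Example of a complete entry: key, mount options, then server:/export.
# "freenas.local:/mnt/pool/bsdshare" is a placeholder, not a real host.
cat > /tmp/auto.misc.example <<'EOF'
bsddataset    -fstype=nfs,soft,sync,rsize=6000,wsize=6000    freenas.local:/mnt/pool/bsdshare
EOF
```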

You can also put CIFS/Samba, sshfs, or other shares in this file. CD/DVD disks work too.

At the end, after updating these two files with your setup, restart the autofs service with

service autofs restart





Firefox and Backspace and AutoScroll

From time to time I notice that Firefox changes some settings after an upgrade. Maybe there are bugs, or maybe the Firefox folks simply decided that we simple users are stupid and do not need certain features.

The ones I use the most are the middle-mouse/wheel autoscroll and backspace to go to the previous page.

So in case they are missing from your Firefox, you can activate them in about:config by setting general.autoscroll to true



and setting browser.backspace_action to 0:


samba server on tinycore linux – howto

A simple setup for a Samba server on Tinycore Linux. Many tutorials are available on the internet, but this one is tested and works the way I want.

First install the samba package on Tinycore Linux.

From the "tc" user, start tce-ab and install the samba extension.

Then edit the smb.conf file at /usr/local/etc/samba/smb.conf.
Inside, add something like this:

[global]
workgroup = WORKGROUP
netbios name = box
security = user

[data]
comment = Data
path = /mnt/sda1
read only = no
guest ok = no


security = user
this will create a share that is based on user/password

netbios name = box
this will be the name of your server (an alias to avoid typing the IP address)

read only = no
to have write access

guest ok = no
disallow guest users (no anonymous connections)
Then, as root, run this command:

 smbpasswd -a <tinycore user>

then type the samba password for that user. You will use this password from the client machine when you connect to the samba share.
Then save the samba config files to make the changes persistent after reboot.

add to /opt/.filetool.lst:

usr/local/etc/samba/smb.conf            <-- this contains the samba setup
usr/local/etc/samba/private/passdb.tdb  <-- this contains your password

then back up with "backup"
and then restart the server.

Next, go to the client machine and type the server address in the file manager;
you should get a popup window asking for user and password. Enter the user and the password you set in samba.

easy backup system with rsync – like Time Machine

Backup systems are good for recovering accidentally lost data. An even more useful feature is incremental backup, where you have access to various snapshots in time, like Apple's Time Machine. Doing this on Linux (or any Unix-like system) is actually very easy.

For example, say we make a backup every day (or at whatever interval you want). We want the amount of data transferred to be small, not big. Imagine transferring a few TB every day! If our important data changes only a little, we should back up only the modified parts. For this, rsync is the best tool; everybody knows that. But there is a problem: how can we keep daily snapshots of the data without filling the disk? For this we will use soft links, hard links, and rsync options.

So we create a script file like this:

date=`date "+%Y-%m-%dT%H-%M-%S"`
rsync -aP --delete --log-file=/tmp/log_backup.log --exclude=lost+found --link-dest=/mnt/sdb2/Backups/current /mnt/sda1/ /mnt/sdb2/Backups/back-$date
rm -f /mnt/sdb2/Backups/current
ln -s /mnt/sdb2/Backups/back-$date /mnt/sdb2/Backups/current

First I create a "date" variable that is used in the name of the backup folder, to easily know when that backup/snapshot was made.

Then rsync runs with some parameters (see man rsync for more details):

-a = archive mode (recursive; preserves permissions, times, links, etc.)

-P = show progress info (optional)

--delete = delete files from the backup if they have been removed from the source

--log-file = save the log into a file (optional)

--exclude = exclude some folders/files from the backup. These are relative to the source path!!! Do not use absolute paths here!

--link-dest = hard-link unchanged files against the latest backup snapshot

/mnt/sda1 = source path (here I back up a whole drive)

/mnt/sdb2/Backups/back-$date = destination folder; it will contain all the content from the source.

Then with rm I remove the old "current" soft link, and I replace it with a new soft link to the newly created snapshot.

So now whenever I open "current" I in fact go to the latest backup.

And because the date is different every time I make the backup, the old snapshots are kept. So I have a snapshot for every day.

To automate this, create a cron job to execute the above script at a convenient time.

Example to run at 4:01AM every day:

1  4 * * * /path/to/script

Note that only the first backup will take a long time, since it copies the full data. From the second run on, the script transfers only the changed files/bits.

Now in the destination folder you will see a "back-xxx" folder for every time you ran the script. You can open/read the files from all these folders as if they were completely independent copies. In fact, if you run df and du you will see something interesting.

For example, if the backup is 600GB and the script runs every day, df will show the same ~600GB of used disk space, but "du -sh ./*" will report each "back-xxx" folder as 600GB. This is possible because unchanged files are only hard links to the same copied data. Do not worry, the disk is not full; trust the df results, not the du results.

user@box:/mnt/sdb2/Backups$ du  -sh ./*
623.8G    ./back-2014-02-24T17:47:12
623.8G    ./back-2014-02-24T21-46-41
623.8G    ./back-2014-02-25T17-05-02
623.8G    ./back-2014-02-25T18-45-34
0    ./current
user@box:/mnt/sdb2/Backups$ df /mnt/sdb2
Filesystem                Size      Used Available Use% Mounted on
/dev/sdb2                 2.7T    623.9G      1.9T  24% /mnt/sdb2

So the Time Machine is in fact only a few lines of code in a script plus a cron job! Easy, and everybody can do it!

Adapt the script to your needs. Run it when you want with cron jobs.

At any point in time you can delete old backups (for example, backups older than a few weeks). This can also be done with cron plus a small script.
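A sketch of such a pruning step: since the snapshot names end in an ISO date, sorting them by name is sorting them by age, so keeping the newest N is one pipeline. The demo below runs in a throwaway directory with fake snapshot names; point BACKUP_DIR at your real destination (e.g. /mnt/sdb2/Backups) in actual use.

```shell
# Demo: prune old snapshots, keeping only the newest 5.
BACKUP_DIR=$(mktemp -d)          # throwaway demo dir; use your real path
for d in 01 02 03 04 05 06 07; do
    mkdir "$BACKUP_DIR/back-2014-03-${d}T04-01-00"   # fake snapshots
done
cd "$BACKUP_DIR"
keep=5
# names sort chronologically; drop everything except the last $keep
ls -d back-* | head -n -"$keep" | xargs -r rm -rf
ls -d back-*
```

Be careful not to delete the snapshot that the "current" soft link points to, or recreate the link afterwards.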