my ideas in action

Category Archives: Freenas

update PiHole (inside VM on Freenas)

Pi-hole needs updates from time to time, so it is best to set up a cron job to do them automatically. At this time, Pi-hole updates cannot be made from the web interface. In my case Pi-hole runs inside a VM (Ubuntu Server) on FreeNAS.

So you need to enter the VM shell on your FreeNAS machine. Open the FreeNAS web interface, then go to VMs –> Pihole –> VNC via web.

Then, in the new terminal, after logging in, edit root's crontab:

sudo crontab -e

Then in the crontab editor add these lines:

5 1 * * * pihole -g -up

15 1 * * * pihole -up

This means that at 1:05 am every day the Pi-hole list of ad-serving domains is updated. Then at 1:15 am Pi-hole itself is updated.
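If you also want a record of what the nightly jobs did, you can redirect their output to a log file. This is only a sketch of the same two crontab entries: the log path /var/log/pihole-cron.log is my own arbitrary choice, so adjust it to taste.

```shell
# Sketch of the crontab entries with logging added (edit via: sudo crontab -e).
# The log file path is an assumption - pick any location you like.

# 01:05 - update the gravity list of ad-serving domains, keep the output
5 1 * * * pihole -g -up >> /var/log/pihole-cron.log 2>&1

# 01:15 - update Pi-hole itself, keep the output
15 1 * * * pihole -up >> /var/log/pihole-cron.log 2>&1
```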

If you want to see the Pi-hole status in real time, use

pihole -c







recover and boot a VM inside Freenas 11

I recently made a VM inside FreeNAS with Ubuntu and Pi-hole to filter advertisements out of my network.

Everything was configured as described here :


The main problem that remains is that after a restart of the VM (Ubuntu), the system does not boot correctly. I have not looked into the details of the boot process of a VM inside FreeNAS, but apparently it is an issue with the EFI file location. A longer explanation is given here :

So … how to fix it ?

First you need to open the VNC console to the VM (Ubuntu in this case). Then you type “exit” to get from the shell to the EFI menu system, navigate to “Boot Maintenance Manager”, and select “Boot from file” to locate and select your grubx64.efi file.

After booting, execute this command as root (use sudo !!):

grub-install --efi-directory=/boot/efi --boot-directory=/boot --removable

Then, after a reboot of the VM, you get the VNC terminal back.

If your VM restarted correctly, you are done.

If not, then you have to copy some files to make sure the reboot will also work next time.

More specifically you have to copy the grubx64.efi from /boot/efi/EFI/ubuntu to /boot/efi/EFI/BOOT.

Do this as root (use sudo !!!) :

cp /boot/efi/EFI/ubuntu/grubx64.efi  /boot/efi/EFI/BOOT/grubx64.efi

cp /boot/efi/EFI/ubuntu/grubx64.efi  /boot/efi/EFI/BOOT/BOOTX64.EFI


NB: If grubx64.efi gets updated you will need to re-create bootx64.efi.

So if Ubuntu makes an automatic update of GRUB inside the VM, the VM (Ubuntu + Pi-hole) may again fail to restart correctly. If you never reboot FreeNAS and the VM you are fine, but if you reboot regularly (updates or maintenance), please remember to do all these steps again.

Unfortunately I do not have yet a permanent fix.
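As a stop-gap, the two copy commands can be wrapped in a small script that you re-run (as root) after every GRUB update. This is only a sketch under my assumptions: restore_removable_boot is a name I made up, and it takes the ESP mount point as a parameter (normally /boot/efi) so nothing is hard-coded.

```shell
#!/bin/sh
# Sketch: copy the Ubuntu GRUB EFI binary to the "removable media" fallback
# path so the VM firmware can find it after a reboot. Run as root after any
# GRUB update. The ESP mount point is a parameter (usually /boot/efi).
restore_removable_boot() {
    esp="$1"                              # ESP mount point, e.g. /boot/efi
    src="$esp/EFI/ubuntu/grubx64.efi"
    dstdir="$esp/EFI/BOOT"
    [ -f "$src" ] || { echo "missing $src" >&2; return 1; }
    mkdir -p "$dstdir"
    cp "$src" "$dstdir/grubx64.efi"       # fallback copy of the loader
    cp "$src" "$dstdir/BOOTX64.EFI"       # default removable-media boot path
}
```

Usage would simply be `sudo sh -c '. ./restore.sh; restore_removable_boot /boot/efi'` or pasting the function into a root shell.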


For updates inside the VM (Ubuntu, in my case) it is better to apply only the security patches, and for that it is better to use

sudo unattended-upgrade

and not the classical apt-get:

sudo apt-get update

sudo apt-get upgrade






incremental backup of Freenas ZFS volume on external drive

When you have a FreeNAS, you sometimes need to make a backup of an entire volume.

I have an external HDD (connected over USB). I want an exact replica of the entire FreeNAS dataset so that I can store it in a remote location. The FreeNAS GUI lets you do this, but only as a scheduled replication task. That is nice if you have another NAS connected to the network; in my case I just have an external HDD (1 TB) and I want to make a backup from time to time.

First connect the external HDD to one of the FreeNAS USB ports. Plug in the cable, then from the GUI go to Storage –> Volume –> Import Volume.

In the new pop-up select the ZFS volume on your HDD. If you do not have a volume yet, you can create one from the GUI.

Now you have to open a terminal, because the next commands are not possible from the GUI.

You can open the Shell from the FreeNAS GUI (left side menu) or use SSH to access the FreeNAS terminal directly.

In the FreeNAS terminal, first check that you can see the NAS volumes correctly, as well as the newly inserted external HDD volume:

zfs list # see all volumes available

then type :

zfs list -t snapshot # see what snapshots you have on your system

The first time you make this transfer you will need to copy a lot of data, since the external HDD is empty. Do not worry: the next time you make a backup, only the changed data will be copied, so it will be much faster.

So the first time, do:

zfs snapshot -r data1/bsdshare@backup # create a snapshot
zfs send -R data1/bsdshare@backup | zfs receive -vF bkp_1T # transfer it over


zfs snapshot = create a snapshot of the data

-r = snapshot is recursive

data1/bsdshare = my volume is “data1” and my dataset is “bsdshare”

backup = an arbitrary name for my snapshot (you can also see it in the GUI under Storage –> Snapshots)

zfs send = copy the dataset

zfs receive = paste the dataset

bkp_1T = the name I chose for my external HDD volume (choose any name you like)

The zfs send/receive command takes a very long time (hours!), since it now copies the entire dataset. It may be longer or shorter depending on the total amount of data, HDD speed, USB interface, etc.

At the end, after send/receive has finished, you can detach the external volume from the GUI. Go to Storage –> Volumes, click on “bkp_1T” and select “Detach”. Now you can unplug the external HDD and you are done.
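The first-time procedure can be collected into a small script. The sketch below is deliberately a dry run: it only prints the zfs commands it would execute, so you can review them before pasting them into the FreeNAS shell. The function name and the parameterization are my own; the default names match the setup above.

```shell
#!/bin/sh
# Sketch: print the zfs commands for a first-time full replication to an
# external pool. Dry run only - review the output, then run the commands
# yourself in the FreeNAS shell.
full_backup_cmds() {
    src="$1"    # source dataset, e.g. data1/bsdshare
    dst="$2"    # destination pool (external HDD), e.g. bkp_1T
    snap="$3"   # snapshot name, e.g. backup
    echo "zfs snapshot -r $src@$snap"
    echo "zfs send -R $src@$snap | zfs receive -vF $dst"
}

# Example with the names used in this post:
full_backup_cmds data1/bsdshare bkp_1T backup
```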


The next time you want to update the data on the external HDD, do the following. The idea is to make an incremental update so that you do not spend hours copying the entire dataset again:

First plug the external HDD into a USB port. (Notice that FreeNAS does not require any reboot; everything is live.)
From the GUI import the volume: go to Storage –> Volumes –> Import Volume and select the volume from your HDD (mine is called “bkp_1T”).

Now open a terminal in FreeNAS again (from the GUI menu or over SSH) and type:

zfs rename -r data1/bsdshare@backup   data1/bsdshare@backup_old

# rename the “old” snapshot made last time from “backup” into “backup_old”.

zfs snapshot data1/bsdshare@backup # take a new snapshot
zfs send -Ri data1/bsdshare@backup_old data1/bsdshare@backup | zfs receive -v bkp_1T # incremental replication

Notice that now we use -i and pass both snapshots (old and new) to the send command. Only the delta between old and new will be copied to the HDD, so it should take seconds or minutes, not hours or days.

At the end, check one more time (from the GUI or the terminal) that the sizes, disks and snapshots are OK. If so, you can optionally clean up the storage by removing the old snapshots from FreeNAS and from the external HDD:

zfs destroy -r data1/bsdshare@backup_old # get rid of the previous snapshot from Freenas
zfs destroy -r bkp_1T@backup_old # get rid of the previous snapshot from external HDD
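The whole incremental round trip can be sketched the same way as the first-time script: a dry run that only prints the zfs commands for review. The function name is made up for illustration; the command sequence mirrors the steps above exactly.

```shell
#!/bin/sh
# Sketch: print the zfs commands for an incremental update of the external
# copy, including the final cleanup. Dry run only - review, then run the
# printed commands in the FreeNAS shell.
incr_backup_cmds() {
    src="$1"    # source dataset, e.g. data1/bsdshare
    dst="$2"    # destination pool (external HDD), e.g. bkp_1T
    snap="$3"   # snapshot name, e.g. backup
    echo "zfs rename -r $src@$snap $src@${snap}_old"
    echo "zfs snapshot $src@$snap"
    echo "zfs send -Ri $src@${snap}_old $src@$snap | zfs receive -v $dst"
    echo "zfs destroy -r $src@${snap}_old"
    echo "zfs destroy -r $dst@${snap}_old"
}

# Example with the names used in this post:
incr_backup_cmds data1/bsdshare bkp_1T backup
```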

At the end, from the GUI, you can detach the external HDD (Storage –> Volumes, click on the volume and select “Detach”).

That’s it !


Fixing “RPC: AUTH_GSS upcall timed out.” from dmesg ( Debian and other linux distro)

If you see this in your dmesg, read on. The issue also causes a slow first connection to an NFS share (it takes ~30 seconds).

There is a bug (!?) in the NFS client configuration: it loads a module called rpcsec_gss_krb5.

You can check whether this module is loaded with “lsmod”.

Solution: do not load the module.

As root, type:

echo blacklist rpcsec_gss_krb5 > /etc/modprobe.d/dist-blacklist.conf

then reboot

Problem solved: fast connection to the NFS share and no dmesg error message.
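For reference, the fix boils down to dropping one line into a modprobe blacklist file. The sketch below wraps that in a function (the name is mine) that takes the modprobe.d directory as a parameter, so it can be tried in a scratch directory first; run it against /etc/modprobe.d as root, then reboot.

```shell
#!/bin/sh
# Sketch: blacklist the rpcsec_gss_krb5 module so it is not loaded at boot.
# The target directory is a parameter (normally /etc/modprobe.d); run as root.
blacklist_gss_krb5() {
    dir="$1"
    echo "blacklist rpcsec_gss_krb5" > "$dir/dist-blacklist.conf"
}
```

Usage as root would be `blacklist_gss_krb5 /etc/modprobe.d` followed by a reboot.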



email me the result of a cronjob/script in Freenas

This is the simplest method to email the result of a command in FreeNAS.
For example, if you run certain scripts with Cron, you can use it there as well.

Personally I use this to get the SMART report about an HDD that may fail soon. So this is what I put in Cron (all on one line):

smartctl -a /dev/ada1 | /usr/bin/mail -s "MyFREENAS HDD /ada1 report "

For the user you can put “root”, and any output redirection should be off.

Of course, to make email work you have to configure the mail server, etc., in the settings. Fill them in under System –> Email.
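If you have several disks, the same one-liner can be generated per device. The sketch below only prints one command line per disk, ready to paste into separate Cron jobs; the helper name and the example device list are assumptions for illustration.

```shell
#!/bin/sh
# Sketch: print one smartctl-to-mail command per disk, ready to paste into
# FreeNAS Cron jobs. The device list below is only an example - adjust it
# to the disks in your own system.
smart_report_cmds() {
    for dev in "$@"; do
        echo "smartctl -a /dev/$dev | /usr/bin/mail -s \"MyFREENAS HDD /$dev report\""
    done
}

# Example for two disks:
smart_report_cmds ada0 ada1
```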

how to add/replace disk to FreeNAS mirror

I recently had to add new disks to my FreeNAS storage. I mounted the disks but was not able to add them as a mirror from the GUI. Then I found a Russian website with a very simple and easy-to-follow tutorial. I copied it here mainly as a reminder for myself, but it may also be useful for others.

I hope the original author will not be upset.

The original post is here :

The high-level steps are:

Add the new disk to the system (connect the cables)
Partition the disk with gpart (from the FreeNAS terminal)
Attach the new partition to ZFS as a mirror

Create the GPT

Use the GUI to find the device ID of the new drive, or use camcontrol:

# camcontrol devlist
at scbus2 target 0 lun 0 (ada0,pass0)
at scbus3 target 0 lun 0 (ada1,pass1)
at scbus4 target 0 lun 0 (ada2,pass2)
at scbus5 target 0 lun 0 (ada3,pass3)
at scbus7 target 0 lun 0 (da0,pass4)

Let's assume that our target is ada1. Create the GUID partition table for ada1:

# gpart create -s gpt ada1
ada1 created

Add the Swap Partition

Create a swap partition matching what FreeNAS created on the original drive. FreeNAS puts a swap partition on every data drive by default, stripes them together, and encrypts them with a temporary key on each boot. I'm not sure how that works when a drive fails, but it's the recommended configuration.

# gpart add -b 128 -i 1 -t freebsd-swap -s 2G ada1
ada1p1 added

Add the Data Partition

Use the remaining space for the data partition.
# gpart add -i 2 -t freebsd-zfs ada1
ada1p2 added

Get the GPTID for the Partition

A device may change names depending on the port it is connected to, but the GPTID does not change. FreeNAS uses the GPTID to track disks, so we want the rawuuid field of ada1p2.

# gpart list ada1
Geom name: ada1
scheme: GPT
1. Name: ada1p1
Mediasize: 2147483648 (2.0G)
rawuuid: 38d6835c-4794-11e4-b95b-08606e6e53d5
2. Name: ada1p2
Mediasize: 1998251364352 (1.8T)
rawuuid: 40380205-4794-11e4-b95b-08606e6e53d5

Attach to ZFS as mirror

Attach the partition using zpool, which will begin the resilvering process. You will need the GPTID of the encrypted original disk partition.

# zpool attach <pool> <existing-device> <new-device>
# zpool attach storage /dev/gptid/1c5238f9-5e2d-11e3-b7e0-08606e6e53d5 /dev/gptid/40380205-4794-11e4-b95b-08606e6e53d5
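Picking the rawuuid out of the gpart listing by eye is error-prone; a small awk filter can do it reliably. This is a sketch under my assumptions: part2_rawuuid is a name I made up, it reads `gpart list` output on stdin and prints the rawuuid of partition 2 (the freebsd-zfs data partition created above).

```shell
#!/bin/sh
# Sketch: extract the rawuuid of the second (data) partition from the
# output of "gpart list <disk>".
# Usage: gpart list ada1 | part2_rawuuid
part2_rawuuid() {
    awk '/2\. Name:/ { in2 = 1 }              # start at the "2. Name:" entry
         in2 && /rawuuid:/ { print $2; exit } # print its rawuuid and stop'
}
```

You could then build the attach command as `zpool attach storage /dev/gptid/<old-uuid> /dev/gptid/$(gpart list ada1 | part2_rawuuid)`.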