my ideas in action
November 13, 2015
I will explain how to use automatic proxy selection for a local network.
For example, let's say that you have a proxy server, but that proxy is not available all the time. In this case you need to check each time whether the proxy is alive (available) and, if yes, use it. If not, the browser should fall back to a direct connection.
The easiest way to do this is to create a PAC file and add it in Firefox as the automatic proxy configuration.
Go to Preferences –> Advanced –> Network –> Settings and choose "Automatic proxy configuration URL".
Then type the path to a local PAC file there. Normally this would be a web address, but a local file works too (no web server needed).
To create the PAC file, use any text editor, create a file called "autoproxy.pac" and put in this content:
function FindProxyForURL(url, host) {
    return "PROXY 192.168.1.29:3128; DIRECT";
}
The proxy in this case is on the local network at 192.168.1.29 (a Squid proxy on port 3128) and Firefox will try to use it first. If it is not responding, Firefox will use a direct connection.
You can list multiple proxy servers there; the order is important.
In the example below there are two proxies. If the first one (192.168.1.29) is not responding, then the second one (192.168.1.42) will be selected, and if the second one does not respond either, the direct network connection will be used.
function FindProxyForURL(url, host) {
    return "PROXY 192.168.1.29:3128; PROXY 192.168.1.42:3128; DIRECT";
}
The name of the PAC file is not important ("autoproxy.pac" is just the name I used); any name will do.
More details regarding PAC files, examples and more advanced functions can be found here: http://findproxyforurl.com/
January 30, 2015
This is the simplest method to email the result of a command in FreeNAS.
For example, if you run certain scripts with cron, you can use it there as well.
Personally I use this to get the SMART report for an HDD that may fail soon. So this is what I put in cron (all on one line):
smartctl -a /dev/ada1 | /usr/bin/mail -s "MyFREENAS HDD /ada1 report " email@example.com
For the user you can put "root", and any redirect should be off.
Of course, to make the email work you first have to configure the email server etc. in the config. Fill in the settings under System –> Email.
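If you watch several disks, the same idea extends to a small loop. This is only a sketch of a cron script, not something from the original post: the drive names and the e-mail address are examples and must be adjusted to your system.

```shell
#!/bin/sh
# Hypothetical cron script: mail one SMART report per drive.
# The drive list and the address below are examples -- adjust to your system.
for disk in ada1 ada2 ada3; do
    smartctl -a /dev/$disk | /usr/bin/mail -s "MyFREENAS HDD /$disk report" email@example.com
done
```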
November 3, 2014
I recently had to add new disks to my FreeNAS storage. I mounted the disks, but I was not able to add them as a mirror from the GUI. Then I found a Russian website with a very simple and easy-to-follow tutorial. I copied it here mainly as a reminder for myself, but it may also be useful for others.
I hope the original author will not be upset.
The original post is here:
The high-level steps are:
Add the new disk to the system – (means connect the cables)
Partition the disk with gpart – (from FreeNAS terminal)
Attach the new partition to ZFS as a mirror
Create the GPT
Use the GUI to find the device ID of the new drive or use camcontrol.
# camcontrol devlist
at scbus2 target 0 lun 0 (ada0,pass0)
at scbus3 target 0 lun 0 (ada1,pass1)
at scbus4 target 0 lun 0 (ada2,pass2)
at scbus5 target 0 lun 0 (ada3,pass3)
at scbus7 target 0 lun 0 (da0,pass4)
Let's assume that our target is ada1. Create the GUID partition table for ada1.
# gpart create -s gpt ada1
Add the Swap Partition
Create a swap partition matching what FreeNAS created on the original drive. FreeNAS puts a swap partition on every data drive by default, stripes them together, and encrypts with a temporary key each boot. I’m not sure how that works when a drive fails, but it’s the recommended configuration.
# gpart add -b 128 -i 1 -t freebsd-swap -s 2G ada1
Add the Data Partition
Use the remaining space for the data partition.
# gpart add -i 2 -t freebsd-zfs ada1
Get the GPTID for the Partition
A device may change names depending on the connected port but the GPTID doesn’t change. FreeNAS uses the GPTID to track disks and so we want the rawuuid field of ada1p2.
# gpart list ada1
Geom name: ada1
1. Name: ada1p1
Mediasize: 2147483648 (2.0G)
2. Name: ada1p2
Mediasize: 1998251364352 (1.8T)
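The listing above is abbreviated; since the full output of gpart list is long, the field we need can be filtered out directly. A FreeBSD command sketch, to be run on the FreeNAS console itself:

```shell
# Print only the rawuuid lines from the gpart output for ada1;
# the second line shown corresponds to the data partition ada1p2
gpart list ada1 | grep rawuuid
```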
Attach to ZFS as mirror
Attach the partition using zpool, which will begin the resilvering process. You will need the GPTID of the original disk partition.
# zpool attach <pool> <existing device> <new device>
# zpool attach storage /dev/gptid/1c5238f9-5e2d-11e3-b7e0-08606e6e53d5 /dev/gptid/40380205-4794-11e4-b95b-08606e6e53d5
May 27, 2014
A simple setup for a Samba server on a Tiny Core Linux server. Many tutorials are available on the internet, but mine is tested and working the way I want.
First install the Samba package on Tiny Core Linux:
from the "tc" user start tce-ab
Then edit the smb.conf file from /usr/local/etc/samba/smb.conf.
Inside add something like this:
[global]
workgroup = WORKGROUP
netbios name = box
security = user

[atom_1t]
comment = Data
path = /mnt/sda1
read only = no
guest ok = no
security = user
this will create a share that is based on user/password
netbios name = box
this will be the name of your server (an alias to avoid typing the IP address)
read only = no
to have write access
guest ok = no
do not allow guest users (no anonymous connections)
Then, as root, run this command:
smbpasswd -a <tinycore user>
Then type the Samba password for that user. You will use this password from the client machine when you connect to the Samba share.
Then save the Samba config files to make the changes persistent after a reboot:
add in /opt/.filetool.lst
usr/local/etc/samba/smb.conf            <-- this contains the Samba setup
usr/local/etc/samba/private/passdb.tdb  <-- this contains your password
Then back up with "backup"
and then restart the server.
Next go to the client machine and in the file manager type the share address (with the netbios name from the config above this would be smb://box).
You should get a popup window asking for a user and password. Enter the user and the password you set for Samba.
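From a Linux client without a file manager, the same share can be reached with smbclient. A sketch, assuming the example config above (server name "box", share "atom_1t"):

```shell
# List the shares offered by the server, then open the [atom_1t] share;
# you will be prompted for the smbpasswd password set earlier
smbclient -L //box -U <tinycore user>
smbclient //box/atom_1t -U <tinycore user>
```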
February 25, 2014
Backup systems are good for recovering accidentally lost data. But an even more useful feature is incremental backup, where you have access to various snapshots in time, like Apple's Time Machine does. Doing this on Linux (or any Unix-like) system is actually very easy.
For example, say we make a backup every day (or at any interval you want). We need the amount of transferred data to be small, not big – imagine transferring a few TB every day! Since our important data changes only a little, we want to back up only the modified parts. For this, rsync is the best tool; everybody knows that. But there is a problem: how can we keep daily snapshots of the data without filling the disk? For this we will use soft links, hard links and rsync options.
So we have to create a script file like this:
#!/bin/bash
date=`date "+%Y-%m-%dT%H-%M-%S"`
rsync -aP --delete --log-file=/tmp/log_backup.log --exclude=lost+found --link-dest=/mnt/sdb2/Backups/current /mnt/sda1/ /mnt/sdb2/Backups/back-$date
rm -f /mnt/sdb2/Backups/current
ln -s /mnt/sdb2/Backups/back-$date /mnt/sdb2/Backups/current
So here I first make a "date" variable that will be used in the name of the backup folder, to easily know when that backup/snapshot was made.
Then use the rsync with some parameters (see man rsync for more details):
-a = archive mode (preserves permissions, ownership, timestamps, symlinks, etc.)
-P = to show progress info (optional)
--delete = to delete files from the backup if they have been removed from the source
--log-file = to save the log into a file (optional)
--exclude = to exclude some folders/files from the backup. These paths are relative to the source path! Do not use absolute paths here!
--link-dest = link against the latest backup snapshot; unchanged files are hard-linked from it instead of being copied again
/mnt/sda1 = source path (here I back up a whole drive)
/mnt/sdb2/Backups/back-$date = destination folder; it will contain all the content from the source
Then, using rm, I remove the old link to the previous backup (the "current" link) and replace it with a new soft link to the newly created snapshot.
So now whenever I open "current" I in fact go to the latest backup.
And because the date is different every time I make a backup, the old snapshots are kept. So for every day I will have a snapshot.
To automate this you have to create a cron job to execute the above script at the convenient time.
Example to run at 4:01AM every day:
1 4 * * * /path/to/script
Please notice that only the first run will take a long time, since it copies all the data. The second time you run the script it will transfer only the changed files/bits.
Now in the destination folder you will see a "back-xxx" folder for every time you ran the script. You can open/read the files from all these folders as if they were completely independent copies. In fact, if you run df and du you will see something interesting.
For example, if the backup is 600GB and the script runs every day, df will show the same 600GB of used disk space, but "du -sh /*" will show that each "back-xxx" folder is 600GB. This is possible because unchanged files exist only as hard links to the same data. Do not worry, the disk is not full; trust the df results, not the du results.
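A quick, self-contained way to see this hard-link effect for yourself (the paths below are throwaway examples created in a temp directory, not part of the backup setup):

```shell
# Two "snapshot" folders sharing one file's data through a hard link,
# mimicking what rsync --link-dest does for unchanged files.
tmp=$(mktemp -d)
mkdir "$tmp/back-day1" "$tmp/back-day2"
echo "important data" > "$tmp/back-day1/file.txt"
# A hard link: a second directory entry pointing at the same data blocks
ln "$tmp/back-day1/file.txt" "$tmp/back-day2/file.txt"
# The link count shows the data is stored only once, yet du counts it
# in both snapshot folders
stat -c %h "$tmp/back-day1/file.txt"
du -sh "$tmp"/back-*
rm -rf "$tmp"
```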
user@box:/mnt/sdb2/Backups$ du -sh ./*
623.8G  ./back-2014-02-24T17:47:12
623.8G  ./back-2014-02-24T21-46-41
623.8G  ./back-2014-02-25T17-05-02
623.8G  ./back-2014-02-25T18-45-34
0       ./current
user@box:/mnt/sdb2/Backups$ df /mnt/sdb2
Filesystem                Size      Used Available Use% Mounted on
/dev/sdb2                 2.7T    623.9G      1.9T  24% /mnt/sdb2
So the Time Machine is in fact only 3 lines of code in a script plus a cron job! Easy, and everybody can do it!
Adapt the script to your needs and run it whenever you want with cron jobs.
At any point in time you can delete old backups (for example, backups older than a few weeks). This can also be done with cron plus some scripts.
February 15, 2014
On the Tiny Core Linux distribution the tc user is automatically logged in at boot time.
If you do not want this to happen, you have to edit the /opt/bootsync.sh file and add this line:
echo "booting" > /etc/sysconfig/noautologin
Then back up with the backup command. Of course, all of this must be done from the root account.
August 12, 2013
I want to describe here some schematics for a power switch. The soft power switch is in fact an electronic switch (no relay, no moving parts) that can be used as an ON/OFF switch for a certain device.
The classical way to switch a device ON/OFF is to use a flip switch like this:
But this mechanical switch is expensive and can break after a while.
The other big disadvantage is that it requires the mains AC line (supply line) to physically pass through the switch. So if, for example, you want to put the ON/OFF switch on the front panel of a device, you have to route the AC 110/220V mains there. This can create problems (noise, interference…).
Besides this, some people want a "fancy" ON/OFF function with only a simple push button, like this:
Such a switch is very small and can be integrated directly on the PCB. Besides this, only very small voltages/currents pass through it, so there is no risk of electric shock.
Usually this type of application uses DC voltages in the 3-24V range. They are often used to start a board with a microcontroller, or a small electronic device.
The following possibilities are shown below:
June 29, 2013
The FTP protocol is used to transfer data between computers. You can also combine bash scripts with FTP to automate file backups. This concept can be used, for example, to back up some files from a local machine to a remote server.
The way to do this is by making an executable script that is run from time to time by a cron job. In this way the backup is made automatically in the background and does not require user intervention. But there is a problem: if you use FTP on the command line, you have to type the user name and password in clear text. So how do you do it?
The solution I suggest is like in the example below.
The first step is to determine whether the backup is needed. We check if the file was changed since the last backup by comparing the current size of the file with its size at the previous backup time, which was saved in a file. To check this we use an if statement in bash:
### check if the file was changed, and if YES then make FTP transfer, if not exit
if [ `cat ~/.size_myfile.txt` -ne `du -s /path/to/file | cut -f1` ] ; then
echo 'File changed !'
fi
###########################
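As a self-contained sketch of this check that can be run anywhere, using a temporary file instead of the real backup paths (all names below are throwaway examples):

```shell
# Demo of the size-compare logic with throwaway temp paths
tmp=$(mktemp -d)
echo "version 1" > "$tmp/file"
# remember the current size, as the real script does in ~/.size_myfile.txt
du -s "$tmp/file" | cut -f1 > "$tmp/size.txt"
# grow the file well past its old disk usage
head -c 200000 /dev/zero > "$tmp/file"
if [ "$(cat "$tmp/size.txt")" -ne "$(du -s "$tmp/file" | cut -f1)" ] ; then
    echo 'File changed !'
fi
rm -rf "$tmp"
```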
Then we define some parameters for the FTP command. This definition can be made in a different (hidden) file. For non-critical situations (home users) I recommend keeping the user details in the same file (in the script) but removing all permissions for other users (use the chmod command). So the backup script should look like this:
-rwx------ myuser mygroup 32 june 12 14:52 backupFTP.script
Notice that only the “myuser” user has the right to read/write/execute the file
So for FTP command you need:
##############################
HOST='ftp.myserver.com'
USER='myusername'
PASSWD='mypassword'
FILE='/path/to/filesource filedestination'
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
cd /path/to/destination
put $FILE
quit
END_SCRIPT
#############################
Since the source path and the destination path may be different, you can use "cd /path/to/destination" before putting the file. The copied file can also be renamed, as shown above (see "filedestination").
Notice that the commands between the "END_SCRIPT" tags are executed inside the FTP session. These are FTP commands, not bash/Linux commands. You can put here whatever FTP commands you need. For a full list of FTP commands, type "help" at the FTP prompt.
The third step is to recalculate and save the new size of the file, so that the next time the backup script runs the size file is up to date. For this we do:
################################
## recalculate the new size of the file, for next backup
du -s /path/to/file | cut -f1 > ~/.size_myfile.txt
env DISPLAY=:0 notify-send 'FTP backup done !' 'File : /path/to/file'
###############################
Optionally you can show a desktop notification that the backup was made. If you do not have a GUI then do not use it.
Finally, here is the full script in a single file:
#!/bin/bash
### check if the file was changed, and if YES then make FTP transfer, if not exit
if [ `cat ~/.size_myfile.txt` -ne `du -s /path/to/file | cut -f1` ] ; then
# echo 'File changed !'
sleep 1
#
HOST='ftp.myserver.com'
USER='myusername'
PASSWD='mypassword'
FILE='/path/to/filesource filedestination'
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
cd /path/to/destination
put $FILE
quit
END_SCRIPT
sleep 1
## recalculate the new size of the file, for next backup
du -s /path/to/file | cut -f1 > ~/.size_myfile.txt
env DISPLAY=:0 notify-send 'FTP backup done !' 'File : /path/to/file'
fi
###############
June 22, 2013
I use Linux most of the time, and from time to time I want to test a new distro. The easiest way to try a new distro is to use a USB stick. The CD is outdated and the DVD is expensive.
There are many ways to make a live bootable USB, but I want to show the easiest one. All you need is the ISO file and any Linux machine. Any Unix OS should be fine as well, but I have not tested that.
So no special software is required.
First you have to download the ISO for the distro you want to try, for example Mint, Debian, Ubuntu, etc.
Then open a terminal and use the dd command to transfer the ISO image to the USB stick.
For this you need root privileges. You can become root with su, or you can use sudo in front of the command.
So in the terminal type (but first read this post to the end!):
sudo dd if=/path/to/linuxdistro.iso of=/dev/sdb bs=4M ; sync
That’s it ! Now the ISO image is copied on the USB stick and you can boot from it.
Details about the command:
sudo = to get super user privileges; otherwise you have to become root first with the su command
dd = the disk copy command
if=/path/file = selects the input file, in this case the ISO file
of = the destination of the copy
/dev/sdb = your USB device. First check what the name of your USB stick is: plug in the stick and check with the df command. Do not use any number after sdX, since dd must write the ISO to the whole disk, not to a partition!
bs=4M = copy up to 4MB at a time – optional, but it optimizes the transfer
sync = optional; tells Linux to flush the write buffers
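To double-check which device name the stick received before running dd, you can compare the block device list before and after plugging it in. A sketch (the name sdb is only an example; yours may differ):

```shell
# Run before and after plugging in the stick; the entry that appears
# (e.g. sdb, with a partition sdb1) is the USB device
lsblk
# df also works if the stick auto-mounted a partition
df -h
```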
For more info about dd or sync commands please use man from terminal:
and / or:
May 10, 2013
Tiny Core Linux is a distro that is very small in size, and I personally use it for my home server.
Updating/upgrading this distro is actually very easy, but the Tiny Core website is chaotic in presenting the information. Some pages are outdated (from the 2.x or 3.x versions).
So in my case I had version 4.5.5 and I wanted to upgrade to 4.7.6.
First you have to find what version you have.
Open terminal (any user) and type : version
Then the real upgrade begins.
The upgrade is made in 2 steps: first upgrade the TCL (Tiny Core Linux) core, and then the extensions (tcz).
To upgrade TCL itself you do not need to burn a new CD or use USB sticks; it is enough to download the latest ISO file. In my case I downloaded "Core-current.iso" on another Linux machine (a client of the TCL server). It can also be done directly on the TCL server, but in my case I did not have software to uncompress ISO files.
Then open the ISO file with an archive manager (or a similar program that is able to extract files from ISO images). Inside there is a boot folder.
Then copy the content of the unpacked ISO to the TCL disk. In my case I copied it from a client machine to the TCL server, to a location where I have write access. On my TCL server I use a USB stick as the boot partition "sdc1", so I saved the files to the sda1 disk.
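If the unpacking is done on a Linux machine, no archive manager is actually needed: the ISO can be loop-mounted. A sketch, run as root, with example paths that must be adjusted:

```shell
# Mount the ISO read-only and copy out its boot folder
mkdir -p /mnt/iso
mount -o loop -r Core-current.iso /mnt/iso
cp -r /mnt/iso/boot /path/to/writable/location
umount /mnt/iso
```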
Now login as "root" (on the TCL server, from the terminal, use "su root").
Now check which files from the old TCL distro you had changed and which ones are identical to the files from the newly unpacked ISO. In my case only /boot/syslinux/syslinux.cfg was customized by me. So I copied all the other files, one by one, and overwrote the old files from TCL 4.5.5 with the new ones (from the ISO).
Then optionally back up the personal settings, if necessary.
Then reboot the system.
After the reboot, login again as a normal user or as root and run the "version" command to see the new TCL distro version.
The second part is to upgrade the TCL extensions.
These can be upgraded independently of the TCL core/kernel upgrade described above. Personally I prefer to upgrade them together.
So you have to login as root (from the terminal use "su root"). Then type "tce-update". Follow the instructions on the screen (you have to press ENTER) and at the end reboot again.
And this is all you have to do: no CD/USB, only a file copy and two reboots. Everything can be done remotely.
As an alternative to unpacking the ISO you can use the TCL mirrors, which provide the upgrade files already unpacked. In my case I was not sure which files I had to copy, so I used the ISO archive as a reference.