BIPEDU

my ideas in action

Tag Archives: tricks

use PAC file for automatic proxy selection

I will explain how to use automatic proxy selection for a local network.

For example, let's say you have a proxy server but it is not available all the time. In this case you need to check each time whether the proxy is alive (available) and, if yes, use it. If not, the browser should fall back to a direct connection.

The easiest way to do this is to create a PAC file and add it in Firefox as the automatic proxy configuration.

Go to Preferences –> Advanced –> Network –> Settings and choose “Automatic proxy configuration URL”.

Then type the path to a local PAC file there. Normally this would be a web address, but a local file works as well (no web server needed).

[Screenshot: Firefox network settings dialog]

To create the PAC file, open any text editor, create a file called “autoproxy.pac” and put this content in it:

function FindProxyForURL(url, host)
{
return "PROXY 192.168.1.29:3128; DIRECT";
}

The proxy in this case is on the local network at 192.168.1.29 (a Squid proxy on port 3128) and Firefox will try to use it first. In case it is not responding, Firefox will use a direct connection.

You can list multiple proxy servers there. The order is important.

In the example below there are two proxies. If the first one (192.168.1.29) is not responding, the second one (192.168.1.42) will be selected, and if the second one does not respond either, the direct network connection will be used.

function FindProxyForURL(url, host)
{
return "PROXY 192.168.1.29:3128; PROXY 192.168.1.42:3128; DIRECT";
}

The name of the PAC file is not important (“autoproxy.pac” is just the name I used); any name will do.

More details regarding PAC files, examples and more advanced functions can be found here: http://findproxyforurl.com/

 

 

 


how to add/replace disk to FreeNAS mirror

I recently had to add new disks to my FreeNAS storage. I mounted the disks but I was not able to add them as a mirror from the GUI. Then I found a Russian website with a very simple and easy-to-follow tutorial. I copied it here mainly as a reminder for myself, but it may also be useful for others.

I hope the original author will not be upset.

The original post is here :
http://ukhov.ru/node/431


The high-level steps are:

Add the new disk to the system (i.e. connect the cables)
Partition the disk with gpart (from the FreeNAS terminal)
Attach the new partition to ZFS as a mirror

Create the GPT

Use the GUI to find the device ID of the new drive or use camcontrol.

# camcontrol devlist
<...>  at scbus2 target 0 lun 0 (ada0,pass0)
<...>  at scbus3 target 0 lun 0 (ada1,pass1)
<...>  at scbus4 target 0 lun 0 (ada2,pass2)
<...>  at scbus5 target 0 lun 0 (ada3,pass3)
<...>  at scbus7 target 0 lun 0 (da0,pass4)

Let's assume that our target is ada1. Create the GUID partition table on ada1:

# gpart create -s gpt ada1
ada1 created


Add the Swap Partition

Create a swap partition matching what FreeNAS created on the original drive. FreeNAS puts a swap partition on every data drive by default, stripes them together, and encrypts them with a temporary key at each boot. I'm not sure how that works when a drive fails, but it's the recommended configuration.

# gpart add -b 128 -i 1 -t freebsd-swap -s 2G ada1
ada1p1 added

Add the Data Partition

Use the remaining space for the data partition.
# gpart add -i 2 -t freebsd-zfs ada1
ada1p2 added

Get the GPTID for the Partition

A device may change names depending on the port it is connected to, but the GPTID doesn't change. FreeNAS uses the GPTID to track disks, so we want the rawuuid field of ada1p2.


# gpart list ada1
Geom name: ada1
scheme: GPT
1. Name: ada1p1
Mediasize: 2147483648 (2.0G)
...
rawuuid: 38d6835c-4794-11e4-b95b-08606e6e53d5
2. Name: ada1p2
Mediasize: 1998251364352 (1.8T)
...
rawuuid: 40380205-4794-11e4-b95b-08606e6e53d5
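
If you only want those values, a quick way to show just the rawuuid of each partition (this is only a convenience, not a required step):

# gpart list ada1 | grep rawuuid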

Attach to ZFS as mirror

Attach the partition using zpool, which will begin the resilvering process. You will need the GPTID of the encrypted original disk partition.

# zpool attach <pool> <existing device> <new device>
# zpool attach storage /dev/gptid/1c5238f9-5e2d-11e3-b7e0-08606e6e53d5 /dev/gptid/40380205-4794-11e4-b95b-08606e6e53d5
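
After the attach you can follow the resilvering progress with zpool status (assuming the pool is named “storage” as in the example above):

# zpool status storage

While it runs, the output should show both gptid devices in the mirror and a “resilver in progress” line.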

samba server on tinycore linux – howto

A simple setup for a Samba server on a Tiny Core Linux server. Many tutorials are available on the internet, but this one is tested and works the way I want.

First install the Samba package on Tiny Core Linux.

As the “tc” user, start tce-ab and install the Samba extension.
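
If you prefer a non-interactive install, tce-load can download and install the extension directly (a sketch; the exact extension name, for example samba4.tcz, depends on what your Tiny Core repository provides):

tce-load -wi samba4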

Then edit the smb.conf file at /usr/local/etc/samba/smb.conf.
Inside, add something like this:

[global]
workgroup = WORKGROUP
netbios name = box
security = user

[atom_1t]
comment = Data
path = /mnt/sda1
read only = no
guest ok = no

Explanations:

security = user
this creates a share that is based on user/password authentication

netbios name = box
this will be the name of your server (an alias, to avoid typing the IP address)

read only = no
to have write access

guest ok = no
do not allow guest users (no anonymous connections)

Then, as root, run this command:

 smbpasswd -a <tinycore user>

Then type the Samba password for that user. You will use this password from the client machine when you connect to the Samba share.
Then save the Samba config files to make the changes persistent after a reboot.

Add in /opt/.filetool.lst:

usr/local/etc/samba/smb.conf            <-- this contains the samba setup
usr/local/etc/samba/private/passdb.tdb  <-- this contains your password

Then back up with the “backup” command and restart the server.

Next, go to the client machine and in the file manager type:
smb://box/
You should get a popup window asking for a user and password. Enter the user and the password you set in Samba.
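
If the client has no file manager, you can also mount the share from the command line (a sketch, assuming the cifs-utils package is installed on the client and using the share name from the config above):

sudo mkdir -p /mnt/box
sudo mount -t cifs //box/atom_1t /mnt/box -o username=<tinycore user>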

easy backup system with rsync – like Time Machine

Backup systems are good for recovering from accidental data loss. But a more useful feature is incremental backup, where you have access to various snapshots in time, like Apple's Time Machine does. Doing this on Linux (or any Unix-like system) is actually very easy.

For example, we make a backup every day (or at whatever interval you want). We need the amount of data transferred to be small. Imagine transferring a few TB every day! If our important data changes only a little, then we want to back up only the modified parts. For this, rsync is the best tool; everybody knows that. But there is a problem: how can we keep daily snapshots of the data without filling the disk? For this we will use soft links, hard links and rsync options.

So we have to create a script file like this:

#!/bin/bash
date=`date "+%Y-%m-%dT%H-%M-%S"`
rsync -aP --delete --log-file=/tmp/log_backup.log --exclude=lost+found --link-dest=/mnt/sdb2/Backups/current /mnt/sda1/ /mnt/sdb2/Backups/back-$date
rm -f /mnt/sdb2/Backups/current
ln -s /mnt/sdb2/Backups/back-$date /mnt/sdb2/Backups/current

So here I first make a “date” variable that will be used in the name of the backup folder, to easily know when that backup/snapshot was made.

Then use the rsync with some parameters (see man rsync for more details):

-a = archive mode (recursive, preserves permissions, timestamps, links, etc.)

-P = to give progress info (optional)

--delete = to delete files from the backup when they are removed from the source

--log-file = to save the log into a file (optional)

--exclude = to exclude some folders/files from the backup. These are relative to the source path !!! Do not use absolute paths here !

--link-dest = link to the latest backup snapshot

/mnt/sda1/ = source path (here I back up a whole drive)

/mnt/sdb2/Backups/back-$date = destination folder; it will contain all the content from the source.

Then with rm I remove the old “current” link and replace it with a new soft link to the newly created snapshot.

So now whenever I open “current” I am in fact looking at the latest backup.

And because the date is different every time I make a backup, the old snapshots are kept. So for every day I will have a snapshot.

To automate this, create a cron job that executes the above script at a convenient time.

Example to run at 4:01AM every day:

1  4 * * * /path/to/script

Please note that only the first backup will take a long time, since it copies all the data. The next time you run the script it will transfer only the changed files/bits.
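
Before relying on the cron job, you can make the script executable and do the first run by hand (a quick sanity check; /path/to/script is of course whatever name you gave the file):

chmod +x /path/to/script
/path/to/script
ls -l /mnt/sdb2/Backups    ## you should now see a back-<date> folder and the "current" link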

Now, in the destination folder you will see a “back-xxx” folder for every time you ran the script. You can open/read the files from all these folders as if they were completely independent copies. In fact, if you run df and du you will see something interesting.

For example, if the backup is 600GB and the script is run every day, df will show the same 600GB used from disk space, but “du -sh ./*” will show each “back-xxx” folder as 600GB. This is possible because unchanged files are just hard links to the same data. Do not worry, the disk is not full; trust the df results and not the du results.

user@box:/mnt/sdb2/Backups$ du  -sh ./*
623.8G    ./back-2014-02-24T17:47:12
623.8G    ./back-2014-02-24T21-46-41
623.8G    ./back-2014-02-25T17-05-02
623.8G    ./back-2014-02-25T18-45-34
0    ./current
user@box:/mnt/sdb2/Backups$ df /mnt/sdb2
Filesystem                Size      Used Available Use% Mounted on
/dev/sdb2                 2.7T    623.9G      1.9T  24% /mnt/sdb2

So the Time Machine is in fact only 3 lines of code in a script plus a cron job ! Easy and everybody can do it !

Adapt the script to your needs. Run it when you want with cron jobs.

At any point in time you can delete old backups (for example backups older than a few weeks). This can also be done with cron plus a small script; a sketch is shown below.
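
A minimal sketch (assuming the backups live in /mnt/sdb2/Backups as above and you want to keep roughly one month of snapshots; check what find selects before adding the rm part):

## delete snapshot folders older than 30 days
find /mnt/sdb2/Backups -maxdepth 1 -name 'back-*' -mtime +30 -exec rm -rf {} +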

Tiny core linux no autologin

On the Tiny Core Linux distribution, the “tc” user is automatically logged in at boot time.

If you do not want this to happen, you have to edit the /opt/bootsync.sh file and add this line:

echo "booting" > /etc/sysconfig/noautologin

Then back up with the “backup” command. Of course, all this must be done from the root account.
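
A minimal sketch of the whole change, run as root (on Tiny Core the “backup” command normally just calls filetool.sh -b):

echo 'echo "booting" > /etc/sysconfig/noautologin' >> /opt/bootsync.sh
filetool.sh -b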

Power soft switch

I want to describe here some schematics for a power switch. The soft power switch is in fact an electronic switch (no relay, no moving parts) that can be used as an ON/OFF control for a device.

The classical way to switch a device ON/OFF is to use a flip switch like this:

But this mechanical switch is expensive and can break after a while.

The other big disadvantage is that it requires the mains AC line (supply line) to physically pass through the switch. So if, for example, you want to put the ON/OFF switch on the front panel of a device, you have to route the 110/220V AC mains there. This can create problems (noise, interference…).

Besides this, some people want a “fancy” ON/OFF function with only a simple push button, like this:

Such a switch is very small and can be integrated directly on the PCB. Besides this, only small voltages/currents pass through it, so there is no risk of electric shock.

Usually this type of application uses DC voltages in the 3-24V range. They are often used to start a board that has a micro-controller, or some other small electronic device.

The following possibilities are shown below:

[Schematics 1–14: soft power switch circuit variants (images)]

FTP backup with BASH script ( FTP scripting)

The FTP protocol is used to transfer data between computers. You can also combine bash scripts with FTP to automate file backups. This concept can be used, for example, to back up some files from a local machine to a remote server.
The way to do this is to make an executable script that is run from time to time by a cron job. In this way the backup is made automatically in the background and does not require user intervention. But there is a problem: if you use FTP on the command line, you have to type the user name and password in clear text. So how to do it?
The solution I suggest is like in the example below:

The first step is to determine whether a backup is needed. We check if the file has changed since the last backup by comparing the current size of the file with its size at the previous backup; the previous size was saved in a file. To check this we use an “if” comparison in BASH:

### check if the file was changed, and if YES then make FTP transfer, if not exit
 if [ `cat ~/.size_myfile.txt` -ne `du -s /path/to/file | cut -f1` ] ; then
 echo 'File changed !'
 fi
 ###########################

Then we define some parameters for the FTP command. This could be done in a separate (hidden) file, but for non-critical situations (home users) I recommend keeping the user details in the same file (in the script) and removing all permissions for other users (use the chmod command). So the backup script should look like this:

-rwx------ myuser mygroup 32 june 12 14:52 backupFTP.script

Notice that only the “myuser” user has the right to read/write/execute the file.

So for the FTP command you need:

##############################
HOST='ftp.myserver.com'
USER='myusername'
PASSWD='mypassword'
FILE='/path/to/filesource filedestination'
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
cd /path/to/destination
put $FILE
quit
END_SCRIPT
#############################

Since the source path and the destination path may be different, you can use “cd /path/to/destination” inside the FTP session. The copied file can also be renamed, as shown above (see “filedestination”).

Notice that the commands between the “END_SCRIPT” tags are executed inside the FTP session. These are FTP commands, not BASH/Linux commands. You can put here whatever FTP commands you need. For a full list of FTP commands, type “help” in the FTP terminal.

The third step is to recalculate and save the new size of the file, so that the next time the backup script runs the saved size is up to date. For this we do:

################################
 ## recalculate the new size of the file, for next backup
 du -s /path/to/file | cut -f1 > ~/.size_myfile.txt
 env DISPLAY=:0 notify-send 'FTP backup done !' 'File : /path/to/file'
 ###############################

Optionally you can show a desktop notification that the backup was made. If you do not have a GUI then do not use it.

Next, here is the full script in a single file:

#!/bin/bash
### check if the file was changed; if YES then make the FTP transfer, if not exit
### note: ~/.size_myfile.txt must already exist (create it once by hand)
if [ `cat ~/.size_myfile.txt` -ne `du -s /path/to/file | cut -f1` ] ; then
#    echo 'File changed !'
    sleep 1
    ## FTP connection details
    HOST='ftp.myserver.com'
    USER='myusername'
    PASSWD='mypassword'
    FILE='/path/to/filesource filedestination'
    ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
cd /path/to/destination
put $FILE
quit
END_SCRIPT
    sleep 1
    ## recalculate the new size of the file, for the next backup
    du -s /path/to/file | cut -f1 > ~/.size_myfile.txt
    env DISPLAY=:0 notify-send 'FTP backup done !' 'File : /path/to/file'
fi
###############
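
To run it automatically, add a cron entry for it, similar to the rsync example in the other post (a sketch, assuming the script is saved as /home/myuser/backupFTP.script and should run every night at 3:30):

30 3 * * * /home/myuser/backupFTP.script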

live USB linux with dd command

I use linux most of the time and from time to time I want to test a new distro. The easy way to try a new distro is to use the USB stick. The CD is outdated and the DVD is expensive.

There are many ways to make a live bootable USB, but I want to show the easiest one. All you need is the ISO file and any Linux machine. Any Unix-like OS should be fine, but I have not tested that.

So no special software is required.

First you have to download the ISO for the distro you want to try. For example Mint, Debian , Ubuntu, etc…etc…

Then open a terminal and use dd command to transfer the ISO image to the USB stick.

For this you need root privileges. You can become root with su, or you can put sudo in front of the command.

So in a terminal type (but first read this post to the end!):

sudo dd if=/path/to/linuxdistro.iso of=/dev/sdb bs=4M ; sync

That’s it ! Now the ISO image is copied on the USB stick and you can boot from it.

Details about the command:

sudo = to get super-user privileges; without it you have to become root first with the su command

dd = command that copies raw data block by block

if=/path/file = selects the input file, in this case the ISO file

of = the destination of the copy

/dev/sdb = your USB device. First check what the name of your USB stick is; you can plug in the stick and check with the df command (see the quick check after this list). Do not use any number after sdx, since dd must write the ISO to the whole disk and not to a partition !

bs=4M = copy up to 4MB at a time; it is optional but it speeds up the transfer

sync = optional; it tells the Linux OS to flush the buffers to the stick
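
A quick way to double-check which device is really the USB stick before running dd (a small sketch; lsblk is available on most modern distros):

lsblk -o NAME,SIZE,MODEL     ## identify the stick (e.g. sdb) by its size/model
sudo umount /dev/sdb1        ## unmount any auto-mounted partition before writing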

For more info about dd or sync commands please use man from terminal:

man dd

and / or:

man sync

easy way to upgrade Tiny Core Linux 4.x

The Tiny Core Linux is a distro that is very small in size and I personally use it for my home server.

Updating/upgrading this distro is actually very easy, but the Tiny Core website is chaotic about it. Some pages are outdated (from the 2.x or 3.x versions).

So in my case I had version 4.5.5 and I wanted to upgrade to 4.7.6.

First you have to find out what version you have.

Open a terminal (any user) and type: version

 

Then the real upgrade begins.

The upgrade consists of 2 steps: first upgrade the TCL (Tiny Core Linux) core, and then the extensions (.tcz).

 

To upgrade the TCL core itself you do not need to burn a new CD or use a USB stick; it is enough to download the latest ISO file. In my case I downloaded “Core-current.iso” on another Linux machine (a client of the TCL server). It can also be done directly on the TCL server, but in my case I did not have software there to unpack ISO files.

Then open the ISO file with an archive manager (or a similar program that can extract files from ISO images). Inside there is a boot folder.

Then copy the content of the unpacked ISO to the TCL disk. In my case I copied it from a client machine to a location on the TCL server where I had write access. On my TCL server I use a USB stick as the boot partition (“sdc1”), so I first saved the files to the sda1 disk.

Now log in as “root” (on the TCL server, from a terminal use “su root”).

Now check which files of the old TCL install you had changed and which ones are identical to the files from the newly unpacked ISO. In my case only /boot/syslinux/syslinux.cfg was customized by me, so I copied all the other files, one by one, and overwrote the old TCL 4.5.5 files with the new ones from the ISO; the sketch below shows the idea.
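
As a sketch of that copy step, done on a Linux machine with the ISO at hand (assuming the Core 4.x layout, where the kernel and initrd are /boot/vmlinuz and /boot/core.gz, and assuming the TCL boot partition is mounted at /mnt/sdc1):

sudo mkdir -p /mnt/iso
sudo mount -o loop Core-current.iso /mnt/iso
sudo cp /mnt/iso/boot/vmlinuz /mnt/iso/boot/core.gz /mnt/sdc1/boot/
## keep your own customized /mnt/sdc1/boot/syslinux/syslinux.cfg as it is
sudo umount /mnt/iso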

Then, optionally, back up your personal settings if necessary.

Then reboot the system.

After the reboot, log in again as a normal user or as root and run the “version” command to see the new TCL version.

 

The second part is to upgrade the TCL extensions.

The extensions can be upgraded independently from the TCL core/kernel upgrade described above. Personally, I prefer to upgrade them together.

So log in as root (from a terminal use “su root”), then type “tce-update”. Follow the instructions on the screen (you have to press ENTER) and at the end reboot again.

 

And this is all you have to do: no CD/USB, only a file copy and two reboots. Everything can be done remotely.

As an alternative to unpacking the ISO, you can use the TCL mirrors, which provide the upgrade files already unpacked. In my case I was not sure which files I had to copy, so I used the ISO as a reference.

 

 

Asus A72J / K72Jr cooling – replace thermal grease for CPU and GPU

A few days ago I opened up my laptop and cleaned it. I made a small tutorial to help other people do something similar if they have this laptop.

I have an Asus A72J / K72Jr. This is a 17.3-inch / i5 / ATI HD5470 1GB / 4GB DDR3 / 500GB laptop.

My problem was that the temperature was quite high (50-55°C) and the fan was running almost continuously. From my past experience with laptops and PCs, I know that usually when this happens the cooling is no longer efficient: either the fan is not blowing enough air, or the heatsink is dirty, or simply the thermal grease that transfers the heat has dried out.

In my case it was the thermal transfer to the heatsink that was no longer efficient. My laptop is approx. 3 years old and this is somewhat expected to happen.

So what I describe here are the steps to replace the thermal grease on the CPU and GPU.

Please do not try this if you are not familiar with PC repairs. I show it here just to help other people that may need to know how to do it. The operations are easy, but you need patience, calm and care. For me the entire process took approx. 3 hours.

So first open the HDD+RAM compartment: 4 screws on the back.

Remove only the HDD. The RAM does not need to be removed.

Then remove the keyboard.

To remove the keyboard you have to push the 5 small pins along its top edge, above the ESC, F5, F9, Prt Sc and END keys.

Be careful with the keyboard flat cable.

The keyboard:

[Photos: the removed keyboard]

Then you have to remove the 5 screws that are under the keyboard. They are numbered 1 to 5.

Then remove the screws from the back side of the laptop.

There are many screws:

– 2 of them hold the DVD unit – marked with CD/DVD (disk) symbol.

– 5 screws are in the HDD/RAM compartment

– The others are in the holes on the back of the laptop.

Then remove the 4 small screws marked with “A” that are under the battery compartment. I forgot to mention, but it is evident, that you have to remove the battery first!

Now separate the front side of the laptop from the bottom side.

With your fingers create a gap and be careful not to scratch the front side. The front is held by simple plastic clips. Do not pull too hard, since on the back of the front side there is a small flat cable for the touchpad.

After you remove the front side you will have something like this:

[Photos: laptop with the front cover removed]

Now you should see the motherboard and the fan.

The screen stays attached to the back side and does not need to be removed. It is better to cover it with something to avoid scratches.

One thing that I noticed immediately is that the mainboard controller did not have good thermal contact with its heatsink, which is the big aluminium plate under the keyboard. I fixed this by adding thermal grease (see the pictures below).

Now you have access to the fan and its radiator.

In my case the fan and the radiator were clean.

If you only need to fix the fan, you can remove its 2 small screws. Then with a knife separate the metal plate under the fan from the black plastic top side. On the side of the fan you will notice small teeth; be careful not to bend them too much, since this is thin aluminium. If the fan does not fit well in its housing it may make noise or vibrate.

In my case I continued by removing the mainboard from the case.

To do this you have to remove ONLY the screws marked with a white triangle. There are 2 on the small board with the SD card reader and another 2-3 on the main board.

Then remove the 2 screws from the fan.

Then remove the GND/earth screw from the top-right corner, near the screen connector.

The mainboard is easy to lift out; just be careful with the screen connector (top right side) and the speaker connector (right middle side – twisted pair cable).

At the end this is what you should obtain:

[Photo: mainboard removed from the case]

The fan and the small SD card board stay connected to the mainboard. Do not remove them! Not yet, anyway.

This is the back side of the mainboard and the CPU and GPU are visible.

[Photos: back side of the mainboard, CPU and GPU visible]

The next step is to remove the big copper heatsink and the fan from the CPU and GPU.

First remove the fan connector, then remove the 4 screws from the CPU and the 2 screws from the GPU.

Be careful not to bend the thick copper heatsink or to put too much pressure on the mainboard or the CPU/GPU.

The next steps are cleaning off the old thermal grease and replacing it with new grease.

I cleaned the CPU and the copper heatsink with kitchen paper towels and alcohol. This process takes a lot of time and attention. Do it in a well-lit place with no dust.

Do not use chemicals and do not put your (dirty) fingers on the CPU or GPU.

When you clean the CPU/GPU do NOT use sharp objects!

Do not use metal objects! You can scratch the CPU/GPU die and then it is kaput!

If the old thermal grease is hard to remove you can use wooden sticks (matches) and alcohol. Do not use fabrics/cloths that can leave dust/lint/debris!

Do not remove the CPU from its socket!

I noticed that the GPU and CPU were covered by a red plastic film (it looks like adhesive insulation), probably to prevent a possible contact between the metal heatsink and the metal parts around the CPU/GPU.

Some pictures from cleaning ( in chronological order):

[Photo gallery: cleaning the CPU, GPU and heatsink]

After the cleaning I added new thermal grease on the CPU and GPU and also on the copper heatsink:

[Photo: fresh thermal grease applied to the CPU, GPU and heatsink]

Then I mounted everything back in the reverse order of disassembly (see the order above). Pay attention to the CPU screws: follow the 1-2-3-4 order marked on the heatsink! Apply even pressure and do not over-tighten them!

Everything works fine now: the CPU/GPU are at approx. 40-45°C and the fan runs only from time to time. Of course, this is at idle, but even when playing HD movies the GPU is only around 50°C and the CPU about 50°C as well. So I'm happy with the results.

As you can see from the pictures above, the GPU (ATI) and CPU (i5) share a common heatsink, which is why their temperatures are almost identical.

 

As a side note, I also observed that the mainboard has a secondary SATA connector and additional space for a second HDD.

This is interesting since a second HDD can be added. In my case I do not need it, but it may be useful info for people who want to use two HDDs or a combination of SSD and HDD.

I did not test whether the second SATA connector is functional.

The disadvantage of using this hidden connector is that you have to remove the front side of the laptop to get access to it.

[Photo: the secondary SATA connector area]