Pages

Sunday 25 August 2013

Ubuntu - change dns server for proxychains

This is really a note for myself, so that I don't forget it. If you are using proxychains to access some servers through a proxy, you might want to use that system's DNS server. In that case, presuming that you are running Ubuntu, modify the file:
/usr/lib/proxychains3/proxyresolv
When you open the file, it will be obvious what needs to be changed.
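For reference, a minimal sketch of the edit, simulated on a local copy (the stock script's DNS_SERVER default of 4.2.2.2 and the replacement address 10.0.0.1 are assumptions; substitute your remote system's DNS server):

```shell
# Simulate the change on a local copy; the real file is
# /usr/lib/proxychains3/proxyresolv (edit that one with sudo).
printf 'DNS_SERVER=4.2.2.2\n' > proxyresolv.local
sed -i 's/^DNS_SERVER=.*/DNS_SERVER=10.0.0.1/' proxyresolv.local
cat proxyresolv.local
```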

Thursday 8 August 2013

Cross compiling uwsgi with buildroot

I really like uwsgi, and would like to see it on my Raspberry Pi, so I decided to create a buildroot environment and add uwsgi as a package. That means cross-compiling uwsgi. In this blog, I will log the process.
One thing to note: the paths in the output below were shortened with:
s,/data/matelakat/repa/.workspace/buildroot-2013.05,$BUILDROOT,g
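To see that substitution in action on a sample path:

```shell
# One sample line run through the same substitution:
echo '/data/matelakat/repa/.workspace/buildroot-2013.05/output/host/usr/bin/gcc' \
  | sed 's,/data/matelakat/repa/.workspace/buildroot-2013.05,$BUILDROOT,g'
# prints: $BUILDROOT/output/host/usr/bin/gcc
```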
Let's check out a tagged version
git checkout tags/1.9.9 -B build-fixes
And make sure, it builds on the host system:
make
...
############## end of uWSGI configuration #############
*** uWSGI is ready, launch it with ./uwsgi ***

Cross compilation

First, I set the compiler and the precompiler, as suggested by Roberto:
CC="$BUILDROOT/output/host/usr/bin/arm-unknown-linux-gnueabi-gcc" \
CPP="$BUILDROOT/output/host/usr/bin/arm-unknown-linux-gnueabi-cpp" make
I got this error (newlines inserted for readability)
*** uWSGI linking ***
$BUILDROOT/output/host/usr/bin/arm-unknown-linux-gnueabi-gcc -o uwsgi ...
...
plugins/transformation_chunked/chunked.o -lpthread -lm -rdynamic -ldl -lz 
-L/usr/lib/x86_64-linux-gnu -lpcre -luuid -lssl -lcrypto 
-L/usr/lib/x86_64-linux-gnu -lxml2 -lpthread -ldl -lutil 
-lm /usr/lib/python2.7/config/libpython2.7.so -lutil -lcrypt
/data/matelakat/repa/.crosstool/x-tools/arm-unknown-linux-gnueabi/lib/gcc/
arm-unknown-linux-gnueabi/4.8.2/../../../../arm-unknown-linux-gnueabi/bin/
ld: skipping incompatible /usr/lib/x86_64-linux-gnu/libpthread.so when searching for -lpthread
/usr/lib/x86_64-linux-gnu/libpthread.a: could not read symbols: File format not recognized
collect2: error: ld returned 1 exit status
*** error linking uWSGI ***
make: *** [all] Error 1
That is clearly the issue: my host system's x86-64 libpthread will definitely fail to link into an ARM binary. To see where the library list comes from, I added a print statement and an assert False to uwsgiconfig.py.
*** uWSGI linking ***
['-lpthread', '-lm', '-rdynamic', '-ldl', '-lz', '-L/usr/lib/x86_64-linux-gnu -lpcre',
 '-luuid', '-lssl', '-lcrypto', '-L/usr/lib/x86_64-linux-gnu -lxml2', '-lpthread',
 '-ldl', '-lutil', '-lm', '/usr/lib/python2.7/config/libpython2.7.so', '-lutil', '-lcrypt']
Traceback (most recent call last):
  File "uwsgiconfig.py", line 1220, in <module>
    build_uwsgi(uConf(bconf))
  File "uwsgiconfig.py", line 401, in build_uwsgi
    assert False
AssertionError
make: *** [all] Error 1
It seems to be using my host's libpython2.7.so, but that's not the point right now. The real point is the '-L/usr/lib/x86_64-linux-gnu -lxml2' part: I need to find out where it comes from, which means adding some additional breakpoints. As my computer has 2 CPU cores, the uwsgi build employs a compile queue. While debugging, I want to run the build sequentially, so that it is easier to follow what's happening. So I cleaned the build, and set the CPUCOUNT environment variable:
CC="$BUILDROOT/output/host/usr/bin/arm-unknown-linux-gnueabi-gcc" \
CPUCOUNT=1 \
CPP="$BUILDROOT/output/host/usr/bin/arm-unknown-linux-gnueabi-cpp" make
I created a shell script (doit.sh) with these contents. In uwsgiconfig.py, the libs variable collects the libraries during plugin compilation, so let's add a print statement to see when that array is modified:
./doit.sh  | grep ADDING
...
[ADDING LIBRARY] ['-lpthread', '-ldl', '-lutil', '-lm',
 '/usr/lib/python2.7/config/libpython2.7.so', '-lutil']
...
Oh, that doesn't look good. I bet that's the python plugin. Let's print out the plugin's name as well:
./doit.sh  | grep ADDING
...
[ADDING LIBRARY - python] ['-lpthread', '-ldl', '-lutil',
 '-lm', '/usr/lib/python2.7/config/libpython2.7.so', '-lutil']
...
Okay, let's shelve that for now and go back to the original problem: where does the '-L/usr/lib/x86_64-linux-gnu -lxml2' entry come from? That is libxml2. It turns out that xml2-config --libs is called to detect the library path, and this time the host's binary was called instead of the cross tool's. So let's add the crosstool bin path to the PATH:
#!/bin/bash

set -eux
PATH="$BUILDROOT/output/host/usr/arm-buildroot-linux-gnueabi/sysroot/usr/bin:${PATH}" \
CC="$BUILDROOT/output/host/usr/bin/arm-unknown-linux-gnueabi-gcc" \
CPUCOUNT=1 \
CPP="$BUILDROOT/output/host/usr/bin/arm-unknown-linux-gnueabi-cpp" make
With the modified script, I ended up with this error:
python uwsgiconfig.py --build
$BUILDROOT/output/
host/usr/arm-buildroot-linux-gnueabi/sysroot/usr/bin/
python: 1: $BUILDROOT/
output/host/usr/arm-buildroot-linux-gnueabi/sysroot/usr/bin/python: 
Syntax error: word unexpected (expecting ")")
make: *** [all] Error 2
It's not an x86 executable:
file $BUILDROOT/output/host/usr/arm-buildroot-linux-gnueabi/sysroot/usr/bin/python
$BUILDROOT/output/host/usr/arm-buildroot-linux-gnueabi/sysroot/usr/bin/python: symbolic link to `python2'
file $BUILDROOT/output/host/usr/arm-buildroot-linux-gnueabi/sysroot/usr/bin/python2.7
$BUILDROOT/output/host/usr/arm-buildroot-linux-gnueabi/sysroot/usr/bin/python2.7:
 ELF 32-bit LSB executable, ARM, version 1 (SYSV), dynamically linked (uses shared libs),
 for GNU/Linux 3.2.48, not stripped
That's why I had issues with that python: the first python on the new PATH is an ARM binary. So I modified my shell script to start the host python built by buildroot:
#!/bin/bash

set -eux
PATH="$BUILDROOT/output/host/usr/arm-buildroot-linux-gnueabi/sysroot/usr/bin:${PATH}" \
CC="$BUILDROOT/output/host/usr/bin/arm-unknown-linux-gnueabi-gcc" \
CPUCOUNT=1 \
CPP="$BUILDROOT/output/host/usr/bin/arm-unknown-linux-gnueabi-cpp" \
$BUILDROOT/output/build/host-python-2.7.3/python uwsgiconfig.py --build
Note that I also left out the make, and am calling uwsgiconfig.py directly. This is the next error I got:
*** uWSGI linking ***
...
$BUILDROOT/output/host/usr/lib/libz.so: file not recognized: File format not recognized
collect2: error: ld returned 1 exit status
*** error linking uWSGI ***
file $BUILDROOT/output/host/usr/lib/libz.so.1.2.7 
$BUILDROOT/output/host/usr/lib/libz.so.1.2.7:
 ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked,
 BuildID[sha1]=0x5bbf6ecd4880bbbbb73c57e048dba2d36aa299e0, not stripped
Well, that's interesting. That file is in the host directory, so of course it is a host (x86-64) library. Maybe the library search path is wrong?
*** uWSGI linking ***
['-lpthread', '-lm', '-rdynamic', '-ldl', '-lz',
 '-L$BUILDROOT/output/host/usr/arm-buildroot-linux-gnueabi/sysroot/usr/lib -lpcre',
 '-luuid', '-lssl', '-lcrypto', '-lxml2 -lz -lm -ldl', '-lpthread', '-ldl',
 '-lutil', '-lm', '-lpython2.7', '-lcrypt']
There is the pcre library, which looks suspicious. So I added another entry to the PATH, so that the target system's pcre-config script is found:
#!/bin/bash

set -eux
PATH="$BUILDROOT/output/build/pcre-8.32:$BUILDROOT/output/host/usr/arm-buildroot-linux-gnueabi/sysroot/usr/bin:${PATH}" \
CC="$BUILDROOT/output/host/usr/bin/arm-unknown-linux-gnueabi-gcc" \
CPUCOUNT=1 \
CPP="$BUILDROOT/output/host/usr/bin/arm-unknown-linux-gnueabi-cpp" \
$BUILDROOT/output/build/host-python-2.7.3/python uwsgiconfig.py --build
The problem is still there. Let's look at the build output again:
*** uWSGI linking ***
['-lpthread', '-lm', '-rdynamic', '-ldl', '-lz', '-lpcre', '-luuid',
 '-lssl', '-lcrypto', '-lxml2 -lz -lm -ldl', '-lpthread', '-ldl',
 '-lutil', '-lm', '-lpython2.7', '-lcrypt']
...
$BUILDROOT/output/host/usr/bin/arm-unknown-linux-gnueabi-gcc -o uwsgi -L$BUILDROOT/output/host/usr/lib
...
I need to find out where that comes from. As a first guess, I printed out ldflags:
LDFLAGS ['-L$BUILDROOT/output/host/usr/lib']
That could be an issue. Let's try to find out which plugin is doing this.
./doit.sh | grep LDFLAG
...
[ADDING LDFLAG - python] ['-L$BUILDROOT/output/host/usr/lib']
...
Looking at plugins/python/uwsgiplugin.py, I found the include issue and the libs issue. So I decided to do a really dirty hack to override libs (it's getting late here). That left me with another issue:
plugins/python/python_plugin.o: In function `init_uwsgi_embedded_module':
python_plugin.c:(.text+0x1d6c): undefined reference to `Py_InitModule4_64'
collect2: error: ld returned 1 exit status
Let's hack the include directory as well; the undefined Py_InitModule4_64 reference suggests that the host's 64-bit Python headers were used. While I was there, I also hacked the library paths. And this is my reward:
make clean
./doit.sh
...
############## end of uWSGI configuration #############
*** uWSGI is ready, launch it with ./uwsgi ***

Sunday 9 June 2013

ICS - mexamine

First, start xboard as an ics client:
xboard -ics -icshost freechess.org
I am assuming that both players, a and b, have started xboard and logged in. In order to examine a chess game together with a friend, issue the following commands (the a: prefix indicates which player enters the command):
a: examine
This will print out the game id; in my case, it is 194. Now the other player needs to observe that game:
b: observe 194
Now player b is observing the game. The next step is to enable mexamine:
a: mexamine b
After this, both players are examining the same game. To end the examination:
a: unexamine

Tuesday 4 June 2013

Direct PCB prints + etch

I printed some sample patterns onto PCB with my solid ink printer, and etched them. I am pleased with the results. I think I will need a better etching fluid (more fluid, or a more concentrated one). Here you go:
source for the patterns

Friday 31 May 2013

Direct PCB printing with Xerox Phaser 8400

Almost a year ago, I purchased a beautiful Xerox Phaser 8400 with 250000 pages already printed, because I wanted to try out the direct PCB printing method. I purchased some "pyralux" blank PCBs from here: Tech-place. Now, after all this time, it is time to try out what is possible.
  • Paper/PCB Size
    The size of the PCB is around 150mm x 115mm. The Xerox manual says the minimum paper size is 75mm x 127mm - I am within range, so that's good. My PCB is bigger than an A6.
  • Which Side Gets the Ink:
    This called for an experiment: load a sheet with writing on it, and print something on it. The ink ends up on the back of the sheet, so I will need to load my PCB with the copper facing down.
  • Paper/PCB Thickness
    In theory, the thickness of the PCB is 150um. So I looked up the supported papers in the manual; the thickest one was: Phaser Professional Solid Ink Business Cards, 225 g/m2 (80 lb. Cover). Now I needed to map this grammage to a thickness. Unfortunately, I failed to find a datasheet for that paper. However, I found a chart which basically tells me that 150 um corresponds to around 110 g/m2. I am convinced my PCB will fit: this beast can deal with 225 g/m2 paper.
  • Print
    The print was successful, see this photo:
    And, if you want to see a video of the steps above, be my guest:
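The thickness-to-grammage mapping above can be sanity-checked with a back-of-the-envelope formula: grammage (g/m2) equals thickness (um) times density (g/cm3). Assuming a typical uncoated-paper density of about 0.75 g/cm3 (an assumption, not a datasheet value):

```shell
# 150 um thick "paper" at an assumed ~0.75 g/cm^3 density:
awk 'BEGIN { printf "%.1f g/m2\n", 150 * 0.75 }'
# prints: 112.5 g/m2
```

That lands close to the ~110 g/m2 figure from the chart.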

Sunday 21 April 2013

Badblocks run time (Dreamplug + USB)

So I got my hands on two Seagate external hard drives (2TB), and I want to use them as a RAID device. Prior to using the drives, I like to run some checks on them, to see if they deserve to hold my data. I ran badblocks on both drives at the same time; I don't think that has any impact on the speed. To record how much time it took to run badblocks on 2TB drives connected through USB to my Dreamplug, I captured some outputs:
root@plugged:~# badblocks -b 1048576 -w -o badblocks.W1E2JW11 -s -v /dev/sdc
Checking for bad blocks in read-write mode
From block 0 to 1907728
Testing with pattern 0xaa: done                                
Reading and comparing: done                                
Testing with pattern 0x55:  77.39% done, 92:31:15 elapsed
root@plugged:~# badblocks -b 1048576 -w -o badblocks.W1E2GMMM -s -v /dev/sdd
Checking for bad blocks in read-write mode
From block 0 to 1907728
Testing with pattern 0xaa: done                                
Reading and comparing: done                                
Testing with pattern 0x55: done                                
Reading and comparing: done                                
Testing with pattern 0xff: done                                
Reading and comparing:   2.04% done, 172:41:55 elapsed
Badblocks with these parameters tests 4 patterns: 0xaa, 0x55, 0xff, 0x00. The latter output shows the time required to get to the read-back phase of the third pattern. So let's say a read and a write take about the same time, meaning we have the time for 5 phases:
5x = 172
x = 34.4
And for doing all 8 phases, you will need 275.2 hours: around 11.5 days, let's say 12 days. I am not sure that this is the most economic way of doing it, because the drives consume a lot of power, so it might make sense to drive them as fast as possible with a more powerful machine.
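The arithmetic above can be sketched in one awk call (the 5-of-8-phases reading of the output, and treating write and read-back phases as equally long, are my assumptions):

```shell
# 172 hours covered roughly 5 of the 8 write/read phases:
awk 'BEGIN {
  per = 172 / 5
  printf "per phase: %.1f h, total: %.1f h, %.1f days\n", per, per * 8, per * 8 / 24
}'
# prints: per phase: 34.4 h, total: 275.2 h, 11.5 days
```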

Sunday 7 April 2013

Backup Device - power save

I would like to have a backup device to safely store my data. I also want this solution to consume a small amount of power, so I will build it with a Raspberry Pi. The problem that I am facing is that I don't want the hard drives to run 24/7, so I need a way to power them down from my Raspberry, and I looked at a few options. This could be useful: raspberry leaf

Thursday 4 April 2013

DHCP issues with Arch on Raspberry PI

I had some issues with my Raspberry. It did not always come up after a power-on, and as I don't have a monitor attached to this small fruit, I was completely locked out. The solution was to switch to dhclient. This can be achieved by installing dhclient and editing the profile file for that interface:
cat /etc/network.d/interfaces/eth0
DHCLIENT=yes
The small fruit seems to be happy now, and so do I.

Wednesday 20 March 2013

Starting the Raspberry PI

I arrived at the point of starting my Raspberry Pi, which I got for Christmas from my beautiful fiancée. Let's see. Download the image from the Arch Linux / Raspberry Pi page:
wget -q http://raspberry.mythic-beasts.com/raspberry/images/archlinuxarm/archlinux-hf-2013-02-11/archlinux-hf-2013-02-11.zip
unzip archlinux-hf-2013-02-11.zip 
sudo dd bs=1M if=archlinux-hf-2013-02-11.img of=/dev/mmcblk0 oflag=dsync
Please note the use of the dsync flag. Without it, I got some nasty dmesg logs complaining:
INFO: task blkid:3812 blocked for more than 120 seconds.
...
If you want to know how far dd has progressed, send it a USR1 signal:
sudo kill -USR1 pid_of_dd
And now, plug it into the Raspberry, and see what happens! Of course, I did not plug in a keyboard or a screen, as I don't have such devices at home. I expect Arch to come up nicely with an ssh server. Wow: in the DHCP leases, I suddenly noticed a new device, alarmpi. Okay, let's ssh into that! The user/password is root/root. Update the system:
pacman -Syu
It downloaded a new kernel as well, which is a bit scary at first sight. I'll reboot and see whether the kernel update happened or not. The update process was really slow; I guess it is something to be done offline / in an emulator, not on the real device. The actual kernel:
Linux alarmpi 3.6.11-6-ARCH+ #1 PREEMPT Mon Feb 11 02:33:03 UTC 2013 armv6l GNU/Linux
Okay, the update finished, so I rebooted the small fruit. After that, I looked at the kernel version:
Linux alarmpi 3.6.11-8-ARCH+ #1 PREEMPT Sat Mar 9 00:38:58 UTC 2013 armv6l GNU/Linux
So it managed to upgrade the kernel successfully. Nice! Now, let's use the Raspberry to check a disk for errors:
badblocks -b 1048576 -n -o badblocks.sam120 -s -v /dev/sda
Further reading for me: it might be interesting to look into how to emulate an ARM device, so that the initial update could happen in an emulator.

Saturday 16 March 2013

Discovering rsync - TDD approach

rsyncdiscovery

A test script to find out rsync parameters for backup.

Conclusion

To copy src to tgt:

rsync -r -H -l -g -o -t -D -p --del src tgt

And you will end up with

tgt/src

being a mirror of the original src directory.

Test usage

The shell script's source and this document lives here

To discover the tests:

./test_rsync.sh list_tests

To run the tests:

sudo ./test_rsync.sh

Background

I would like to create a file-level backup of some files. My goal is to use rsync to do the job. As rsync has a lot of command line switches, first, I would need to know, which ones to use. For this, I have some expectations:

  1. Backup is recursive
  2. Extra files removed
  3. Links are preserved
  4. Symlinks are preserved
  5. Symlinks are not rewritten
  6. Preserve permissions
  7. Modification times are preserved
  8. Special files are preserved

And of course, a test-driven approach will be used. Bash does not prevent you from writing tests, so go ahead.

Ending slash: The ending slash is important on the source side, if you wish to say to copy the contents of that directory, not the directory itself.

The archive mode includes:

  • r - recursive
  • l - copy symlinks as symlinks
  • p - preserve permissions
  • t - preserve modification times
  • g - preserve group
  • o - preserve owner
  • D - preserve device and special files

And the manual also states that archive mode does not include:

  • H - preserve hard links
  • A - preserve ACLs
  • X - preserve extended attributes

For me, preserving hard links seems important.

Sunday 10 March 2013

Make an md array writable

I usually set my arrays to read-only after assembling them. This way I can avoid unattended rebuilds. But sometimes you want to update some data on the disk. Before doing this, let's see what we have:
root@plugged:~# mdadm --detail /dev/md1 | grep Update 
    Update Time : Sun Dec  2 12:58:58 2012
And what is the short info?
root@plugged:~# cat /proc/mdstat 
Personalities : [raid1] 
md1 : active (read-only) raid1 sdc2[0] sdd2[1]
      83884984 blocks super 1.2 [2/2] [UU]
Now, let's put it into r/w mode:
mdadm --readwrite /dev/md1
Let's see what happened:
cat /proc/mdstat
md1 : active raid1 sdc2[0] sdd2[1]
      83884984 blocks super 1.2 [2/2] [UU]
Did the superblock date change?
root@plugged:~# mdadm --detail /dev/md1 | grep Update
    Update Time : Sun Mar 10 18:23:18 2013
So it changed as soon as I set the array to read-write mode.

Sunday 3 March 2013

Meet with buildroot - Custom Linux

I was thinking about having a really small Linux system for cloud images. I already knew about the concept of JeOS (Just enough OS), but I still can't accept using more than 100MB of disk; actually, I would like to get down to around 50MB. I knew about CirrOS, which we use for functional testing of OpenStack. CirrOS is small: the qcow image is 10 megs. So I looked at the launchpad site, and discovered that CirrOS uses buildroot to create the root filesystem. That was enough inspiration for me to try to create a very minimal Linux. First build:
$ mkdir myl
$ cd myl
$ wget -qO - http://buildroot.uclibc.org/downloads/buildroot-2013.02.tar.gz | tar -xzf -
$ cd buildroot*
$ make qemu_x86_defconfig
$ time make
Before starting make, I looked into the configuration, and it turned out that qemu_x86_defconfig includes building a kernel, so I expected the build to last at least an hour. In the meanwhile, it makes sense to read up on how I will start the machine:
$ cat board/qemu/x86/readme.txt
In the end, it was quicker than I expected:
real 26m5.626s
user 32m15.753s
sys  3m55.447s
And, let's see how it boots!
qemu-system-i386 -M pc \
-kernel output/images/bzImage \
-drive file=output/images/rootfs.ext2,if=ide \
-append root=/dev/sda
Wow, that was fast! To get a feeling for how fast it is, see this video. And you might be interested in the size of the whole thing:
$ du -shc output/images/
3.2M output/images/
3.2M total

TDD framework with inotifywait and tox

So let's say you want to do TDD. That means that whenever you change something, you want to run the tests. That sounds like a boring task, so let's automate it. A user-space utility called
inotifywait
will be used. It blocks until some event happens to the monitored files. As my source code is in git, I will use:
git ls-files
to get the files to be watched. Okay, let's put it together:
$ git ls-files | inotifywait -q --fromfile -
This blocks until something happens to a watched file. Let's save a file that is under version control with vim, and see that inotifywait unblocks and outputs:
fs/__init__.py MOVE_SELF 
Okay, that works, so let's use it as a framework to run our tests:
$ while true; do git ls-files | inotifywait -q --fromfile -; tox; done
When you add a new file to git, git ls-files will include it on the next loop iteration, so it is picked up automatically by our framework. Tox gives you pretty colours as well!