2017-04-01 09:38 am
Entry tags:


The reason for using dd to write to USB flash sticks is historical.

On some other UNIX-like operating systems only "raw" block devices present the partition table, boot sector and unpartitioned space. Specifically, whereas Linux presents a /dev/sda block device containing all the bytes of a disk, in those other operating systems the equivalent would be a /dev/rsda raw block device.

In those other UNIX-like operating systems you must write to raw block devices in multiples of the sector size of the device. dd can do this; cp and cat cannot. How you discover a device's sector size is left as an exercise for the reader: it is traditionally 0.5KB, more recently 4KB, and three orders of magnitude larger again for flash devices.

Linux doesn't have raw devices, so using dd isn't needed to write an image to a disk. You can wget -O /dev/sd𝐱 … a Fedora .iso file directly onto the USB flash drive.

Note that some devices perform better when handed data in particular block sizes. Most USB sticks perform best if handed data in 4MB chunks. dd is useful if you want that optimisation: wget -O - … | dd of=/dev/sd𝐱 bs=4M status=progress. Note that if you do not set the bs blocksize then the default of 0.5KB is going to make writing a USB flash stick very slow.
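The effect of the block size is easy to try on ordinary files, with no USB stick at risk. A small sketch (the file names are my invention):

```shell
# Create an 8 MiB test image, copy it in 4 MiB blocks, then verify.
head -c 8388608 /dev/zero > image.bin
dd if=image.bin of=copy.bin bs=4M 2>/dev/null
cmp image.bin copy.bin && echo identical
```

Against a real device the same shape becomes wget -O - … | dd of=/dev/sd𝐱 bs=4M status=progress.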
2016-10-13 11:03 am
Entry tags:

Activating IPv6 stable privacy addressing from RFC7217

Understand stable privacy addressing

In Three new things to know about deploying IPv6 I described the new IPv6 Interface Identifier creation scheme in RFC7217. This scheme results in an IPv6 address which is stable, and yet has no relationship to the device's MAC address, nor can an address generated by the scheme be used to track the machine as it moves to other subnets.

This isn't the same as RFC4941 IP privacy addressing. RFC4941 addresses are more private, as they change regularly. But that instability makes attaching to a service on the host very painful. It's also not a great scheme for support staff: an unstable address complicates network fault finding. RFC7217 seeks a compromise position which provides an address which is difficult to use for host tracking, whilst retaining a stable address within a subnet to simplify fault finding and make for easy hosting of services such as SSH.

The older RFC4291 EUI-64 Interface Identifier scheme is being deprecated in favour of RFC7217 stable privacy addressing.

For servers you probably want to continue to use static addressing with a unique address per service. That is, a server running multiple services will hold multiple IPv6 addresses, and each service on the server bind()s to its address.

Configure stable privacy addressing

To activate the RFC7217 stable privacy addressing scheme on a Linux distribution which uses Network Manager (Fedora, Ubuntu, etc) create a file /etc/NetworkManager/conf.d/99-local.conf containing:
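A minimal sketch of that file, assuming NetworkManager 1.2 or later (the version where the addr-gen-mode property appeared):

```ini
[connection]
# Use RFC7217 stable privacy interface identifiers for all connections.
ipv6.addr-gen-mode=stable-privacy
```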


Then restart Network Manager, so that the configuration file is read, and restart the interface. You can restart an interface by physically unplugging it or by:

systemctl restart NetworkManager
ip link set dev eth0 down && ip link set dev eth0 up

This may drop your SSH session if you are accessing the host remotely.

Verify stable privacy addressing

Check the results with:

ip --family inet6 addr show dev eth0 scope global
1: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 2001:db8:1:2:b03a:86e8:e163:2714/64 scope global noprefixroute dynamic 
       valid_lft 2591932sec preferred_lft 604732sec

The Interface Identifier part of the IPv6 address (the lower 64 bits) should have changed from the EUI-64 Interface Identifier; that is, the Interface Identifier should not contain any bytes of the interface's MAC address. The other parts of the IPv6 address — the Network Prefix, Subnet Identifier and Prefix Length — should not have changed.

If you repeat the test on a different subnet then the Interface Identifier should change. Upon returning to the original subnet the Interface Identifier should return to the original value.
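One way to check is to compute what the EUI-64 Interface Identifier would have been and confirm it appears nowhere in your address. A small sketch (the MAC address is invented):

```shell
# Derive the EUI-64 interface identifier from a MAC address:
# flip the universal/local bit of the first octet and insert ff:fe
# between the third and fourth octets.
eui64() {
    local IFS=:
    set -- $1
    printf '%02x%s:%sff:fe%s:%s%s\n' $(( 0x$1 ^ 0x02 )) "$2" "$3" "$4" "$5" "$6"
}
eui64 52:54:00:8a:39:02    # prints 5054:00ff:fe8a:3902
```

If that string turns up inside your global address then you are still using EUI-64 identifiers, not RFC7217.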

2016-05-20 06:06 pm
Entry tags:

Heatsink for RPi3

I ordered a passive heatsink for the system-on-chip of the Raspberry Pi 3 model B. Since it fits well, I'll share the details:


  • Fischer Elektronik ICK S 14 X 14 X 10 heatsink (Element 14 catalogue 1850054, AUD3.70).

  • Fischer Elektronik WLFT 404 23X23 thermally conductive foil, adhesive (Element 14 catalogue 1211707, AUD2.42).


To install you need these parts: two lint-free isopropyl alcohol swabs; and these tools: a sharp craft knife and an anti-static wrist strap.

Prepare the heatsink: Swab the base of the heatsink. Wait for it to dry. Remove the firm clear plastic from the thermal foil, taking care not to get fingerprints in the centre of the exposed sticky side. Put the foil on the bench, sticky side up. Plonk the heatsink base onto the sticky side, rolling slightly to avoid air bubbles and then pressing hard. Trim around the edges of the heatsink with the craft knife.

Prepare the Raspberry Pi 3 system-on-chip: Unplug everything from the RPi3, turn off the power, wait a bit, then plug the USB power lead back in but don't reapply power (this gives us a ground reference). If the RPi3 is in a case, just remove the lid. Attach the wrist strap and clamp it to the ethernet port surround or some other convenient ground. Swab the largest of the chips on the board, ensuring no lint remains.

Attach heatsink: Remove the plastic protection from the thermal foil, exposing the other sticky side. Do not touch the sticky side. With care place the heatsink squarely and snugly on the chip. Press down firmly with a finger of your grounded hand for a few seconds. Don't press too hard: we're just ensuring the glue binds.

Is it worth it?

This little passive heatsink won't stop the RPi3 from throttling under sustained full load, despite this being one of the more effective passive heatsinks on the market. You'll need a fan blowing air across the heatsink to prevent that happening, and you might well need a heatsink on the RAM too.

But the days of CPUs being able to run at full rate continuously are numbered. Throttling the CPU performance under load is common in phones and tablets, and is not rare in laptops.

What the heatsink buys is a delay before the moment of throttling, so a peaky load has a better chance of not triggering it. Since we're only talking AUD7.12 in parts, a passive heatsink is worth it if you are going to use the RPi3 for serious purposes.

Of course the heatsink is also a more effective radiator. When running cpuburn-a53 the CPU core temperature stabilises at 80C with a CPU clock of 700MHz (out of 1200MHz). It's plain that 80C is the target core temperature for this version of the RPi3's firmware. That's some 400MHz higher than without the heatsink. But if your task needs sustained raw CPU performance then you are much better off with even the cheapest of desktops, let alone a server.

2016-05-05 01:47 am

Using Atmel-ICE JTAG/USB dongle and OpenOCD with ZodiacFX OpenFlow switch

The Atmel-ICE in-circuit debugging hardware

The Atmel-ICE is an in-circuit debugger for the Atmel SAM and AVR systems-on-chip. Depending upon the device it uses the JTAG protocol or its Serial Wire Debug extension.

I bought the Atmel-ICE offered by Northbound Networks as it included a pre-made ribbon cable matching their Zodiac FX OpenFlow switch.

Plugging the ICE into the ZodiacFX is straightforward. The small keyed insulation displacement connector on the ribbon cable goes into the ICE's "SAM" port. It will only go one way. Power down the ICE and the ZodiacFX by unplugging their USB connectors. Plug the other end of the ribbon cable onto the header marked "JTAG". Note that Pin 1 on the board is furthest from the "JTAG" silkscreen printing and Pin 1 on the ribbon cable is marked with a different colour.

The OpenOCD software

OpenOCD is free software for on-chip debugging. Install at least version 0.9.0. This is available in Jessie Backports.

Allow non-root users to use the debugger. Add the following to /etc/udev/rules.d/77-northbound-networks.rules:

# Atmel-ICE JTAG/SWD in-circuit debugger
ATTRS{idVendor}=="03eb", ATTRS{idProduct}=="2141", MODE="664", GROUP="plugdev"
Then reload the rules:

$ sudo udevadm control --reload-rules

You might want to add yourself to the plugdev group with usermod -a -G plugdev vk5tu.

Attach Atmel-ICE to USB port. It will power up, lighting the middle red LED.

usb 1-1.5.3: new high-speed USB device number 12 using dwc_otg
usb 1-1.5.3: New USB device found, idVendor=03eb, idProduct=2141
usb 1-1.5.3: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 1-1.5.3: Product: Atmel-ICE CMSIS-DAP
usb 1-1.5.3: Manufacturer: Atmel Corp.
usb 1-1.5.3: SerialNumber: J12300012345
hid-generic 0003:03EB:2141.0004: hiddev0,hidraw2: USB HID v1.11 Device
[Atmel Corp. Atmel-ICE CMSIS-DAP] on usb-3f980000.usb-1.5.3/input0

Create openocd.cfg file in current directory containing:

# Atmel-ICE JTAG/SWD in-circuit debugger.
interface cmsis-dap
cmsis_dap_vid_pid 0x03eb 0x2141
cmsis_dap_serial J12300012345
# Northbound Networks Zodiac FX board
# contains Atmel SAM4E8C system-on-chip.
source /usr/share/openocd/scripts/target/at91sam4sXX.cfg

Plug in the Zodiac FX's USB port. It will start and light its green LED.

If you were doing this as part of a development team you'd write a systemd unit to automatically start OpenOCD when the ICE is powered. But it's just us so we will start the daemon by hand:

$ openocd
Open On-Chip Debugger 0.9.0 (2016-05-04-19:11)
Licensed under GNU GPL v2
For bug reports, read
Info : only one transport option; autoselect 'swd'
adapter speed: 500 kHz
adapter_nsrst_delay: 100
cortex_m reset_config sysresetreq
Info : CMSIS-DAP: SWD  Supported
Info : CMSIS-DAP: JTAG Supported
Info : CMSIS-DAP: Interface Initialised (SWD)
Info : CMSIS-DAP: FW Version = 01.16.0041
Info : SWCLK/TCK = 1 SWDIO/TMS = 1 TDI = 1 TDO = 1 nTRST = 0 nRESET = 1
Info : CMSIS-DAP: Interface ready
Info : clock speed 500 kHz
Info : SWD IDCODE 0x2ba01477
Info : SAM4E8C.cpu: hardware has 6 breakpoints, 4 watchpoints

The line "CMSIS-DAP: Interface ready" indicates the ICE has been reached. The line "SAM4E8C.cpu: hardware has…" indicates that the CPU has been reached. You'll have noticed that the green LED on the Atmel ICE is lit.

The command line is available via telnet to port 4444. The following shows a telnet connection, a test that the ICE is available, and a test that the ZodiacFX is available:

$ telnet localhost 4444
Open On-Chip Debugger
> cmsis-dap info
CMSIS-DAP: FW Version = 01.16.0041
SWCLK/TCK = 1 SWDIO/TMS = 1 TDI = 1 TDO = 0 nTRST = 0 nRESET = 1
> targets
    TargetName         Type       Endian TapName            State       
--  ------------------ ---------- ------ ------------------ ------------
 0* SAM4E8C.cpu        cortex_m   little SAM4E8C.cpu        running

Some other commands you might want to try:

> flash banks
#0 : SAM4E8C.flash (at91sam4) at 0x00400000, size 0x00000000, buswidth 1, chipwidth 1
> at91sam4 gpnvm
sam4-gpnvm0: 0
sam4-gpnvm1: 1
> reg

You can also use gdb to port 3333 for debugging. But remember that it has to be able to debug the architecture and the linkage of the SAM4 whilst being able to run on the architecture and linkage of your workstation. See if your machine can install ARM's port of GNU cc and related tools:

$ sudo apt-get install gcc-arm-none-eabi gdb-arm-none-eabi
$ arm-none-eabi-gdb --eval-command="target remote localhost:3333"

For gdb to be useful you will want the symbol table for the binary file running on the board, which you then pull into gdb using the symbol-file option or the -s parameter. You need to create the symbol file when you do the final linkage. There's ample discussion of this on the internet, such as at the StackExchange sites.

2016-04-29 11:36 pm
Entry tags:

Using network namespaces for testing

It's useful when experimenting with networking to have several machines for testing — the development machine; a system under test; a machine running a packet capture. But that's a lot of hardware. Another approach is to buy a USB hub and a handful of USB/ethernet dongles. Usually we're much more interested in mere connectivity rather than performance, so the shared USB bus back to the computer doesn't worry us.

Let's say a dongle appears as eth1. We can configure that separately from the main set of routing tables by using network namespaces. Users of networking platforms might know this as VRF — virtual routing and forwarding — but Linux's namespace approach applies more widely throughout the operating system than merely the networking components.

Begin by creating the network namespace:

$ sudo ip netns add TEST
$ ip netns show

[Aside: For descriptive clarity I am using a network namespace name which is all upper case. In the real world we will use lower case.]

Now move the eth1 interface into that namespace:

$ ip link show dev eth1
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 11:22:33:44:55:66 brd ff:ff:ff:ff:ff:ff
$ sudo ip link set dev eth1 netns TEST
$ ip link show dev eth1
Device "eth1" does not exist.

The magic arrives when we can execute commands within that namespace, in this case ip link show dev eth1 to display the ethernet-layer details of eth1:

$ sudo ip netns exec TEST ip link show dev eth1
2: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 11:22:33:44:55:66 brd ff:ff:ff:ff:ff:ff

The link is down. Let's bring it up:

$ sudo ip netns exec TEST ip link set dev eth1 up
$ sudo ip netns exec TEST ip link show dev eth1
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000

Let's apply an IP address.

$ sudo ip netns exec TEST ip addr add dev eth1

Note carefully that every namespace has its own routing table. Sometimes merely being connected to the subnet is enough to do what we need to do. But if we do need a default route then it can be added manually:

$ sudo ip netns exec TEST ip route add via dev eth1
$ sudo ip netns exec TEST ip route show
default via dev eth1 dev eth1  proto kernel  scope link  src
$ sudo ip netns exec TEST ping www.fsf.org
64 bytes from www.fsf.org ( icmp_seq=1 ttl=43 time=266 ms

If we are connecting to a production network then DHCP works too:

$ sudo ip netns exec TEST dhclient -v eth1
Listening on LPF/eth1/11:22:33:44:55:66
Sending on   LPF/eth1/11:22:33:44:55:66
Sending on   Socket/fallback
DHCPREQUEST on eth1 to port 67
bound to -- renewal in 10000 seconds.

You'll recall that the concept of "namespaces" is broader than the concept of "VRFs". Although the "dhclient" program appears in ps's process list, the program is executing within the TEST network namespace. It is best to manipulate the program from within that network namespace by using the command ip netns exec …. We can see what network namespace a process is in with:

$ sudo ip netns pids TEST
$ sudo ip netns identify 12345

As is usual, IPv6 just works. If there is subnet connectivity then the interface has a link local address. If there is global connectivity then the interface also has a global address. You can use IPv6 and mDNS (via Linux's "Avahi" package) to use symbolic names for systems under test.

We needn't be limited to just one namespace. Let's say we're testing a new SDN switch. We could put interface eth1 into a PORT1 namespace and cable it to Port 1 of the switch. We could put interface eth2 into a PORT2 namespace and cable it to Port 2 of the switch. By using namespaces we can be sure that a ping attempt from eth1 to eth2 isn't using the Linux machine's usual routing table, but is using the namespaces' routing tables. That routing can send the packets out eth1, and — if the switch is working — through switch ports Port 1 and Port 2, before appearing at eth2.

When we are done we can list all the process IDs in the namespace, kill them all, then delete the namespace with:

$ sudo ip netns delete TEST

If we just want to move the interface out of the TEST namespace and into the default namespace that's made tricky by the default namespace having no name. Here's how to give that namespace a name and move the interface:

$ sudo touch /run/netns/default
$ sudo mount --bind /proc/1/ns/net /run/netns/default
$ sudo ip netns exec TEST ip link set dev eth1 down
$ sudo ip netns exec TEST ip link set dev eth1 netns default
$ sudo umount /run/netns/default
$ sudo rm /run/netns/default

We set the interface to "down" to forestall the interface from conflicting with the addressing of another running interface in the destination namespace.

2016-04-14 12:07 pm
Entry tags:

There are only two ethernet settings

I can't believe I have to write this in 2016, more than twenty years after the bug in the DEC "Tulip" ethernet controller chip which created this mess.

There are only two ethernet speed and autonegotiation settings you should configure on a switch port or host:


  • Auto negotiation = on

  • Auto negotiation = off, Speed = 10Mbps, Duplex = half

These are the only two settings which work when the partner interface is set to autonegotiation = on.

If you are considering other settings then buy new hardware. It will work out cheaper.

That is all.


Oh, so you know what you are doing. You know that explicitly setting a speed or duplex implicitly disables autonegotiation and therefore you need to explicitly set the partner interface's speed and duplex as well.

But if you know all that then you also know the world is not a perfect place. Equipment breaks. Operating systems get reinstalled. And you've left a landmine there, waiting for an opportunity...

A goal of modern network and systems administration is to push down the cost of overhead. That means being ruthless with exceptions which store away trouble for the future.

2016-04-09 01:50 pm
Entry tags:

Embedding files into the executable

Say you've got a file you want to put into an executable. Some help text, a copyright notice. Putting these into the source code is painful:

#include <stdio.h>

static const char *copyright_notice[] = {
 "This program is free software; you can redistribute it and/or modify",
 "it under the terms of the GNU General Public License as published by",
 "the Free Software Foundation; either version 2 of the License, or (at",
 "your option) any later version.",
 NULL   /* Marks end of text. */
};

const char **line_p;
for (line_p = copyright_notice; *line_p != NULL; line_p++) {
    puts(*line_p);
}

If the file is binary, such as an image, then the pain rises exponentially. If you must take this approach then you'll want to know about VIM's xxd hexdump tool:

$ xxd -i copyright.txt > copyright.i

which gives a file which can be included into a C program:

unsigned char copyright_txt[] = {
  0x54, 0x68, 0x69, 0x73, 0x20, 0x70, 0x72, 0x6f, 0x67, 0x72, 0x61, 0x6d,
  0x20, 0x69, 0x73, 0x20, 0x66, 0x72, 0x65, 0x65, 0x20, 0x73, 0x6f, 0x66,
  /* … */
  0x30, 0x31, 0x2c, 0x20, 0x55, 0x53, 0x41, 0x2e, 0x0a
};
unsigned int copyright_txt_len = 681;

That program looks like so:

#include <stdio.h>
#include "copyright.i"

unsigned char *p;
unsigned int len;
for (p = copyright_txt, len = 0;
     len < copyright_txt_len;
     p++, len++) {
    putchar(*p);
}

If you are going to use this in anger then modify the generated .i file to declare a static const unsigned char …[]. A sed command can do that easily enough; that way the Makefile can re-create the .i file upon any change to the input binary file.
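The sed edit alluded to above can be as simple as prefixing the generated declarations. A sketch run against a cut-down .i file (the file contents here are a stand-in):

```shell
# Make xxd's generated array and length static const, so the data is
# read-only and file-local.
printf 'unsigned char copyright_txt[] = {\n  0x47\n};\nunsigned int copyright_txt_len = 1;\n' > copyright.i
sed -i -e 's/^unsigned/static const unsigned/' copyright.i
head -n 1 copyright.i    # prints: static const unsigned char copyright_txt[] = {
```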

It is much easier to insert a binary file using the linker, and the rest of this blog post explores how that is done. Again the example file will be copyright.txt, but the technique applies to any file, not just text.

Fortunately the GNU linker supports a binary object format, so using the typical linkage tools a binary file can be transformed into an object file simply with:

$ ld --relocatable --format=binary --output=copyright.o copyright.txt
$ cc -c helloworld.c
$ cc -o helloworld helloworld.o copyright.o

The GNU linker's --relocatable indicates that this object file is to be linked with other object files, and therefore addresses in this object file will need to be relocated at the final linkage.

The final cc in the example doesn't compile anything: it runs ld to link the object files of C programs on this particular architecture and operating system.

The linker defines some symbols in the object file marking the start, end and size of the copied copyright.txt:

$ nm copyright.o
000003bb D _binary_copyright_txt_end
000003bb A _binary_copyright_txt_size
00000000 D _binary_copyright_txt_start

Ignore the address of 00000000: this is a relocatable object file and the final linkage will assign a final address and clean up references to it.

A C program can access these symbols with:

extern const unsigned char _binary_copyright_txt_start[];
extern const unsigned char _binary_copyright_txt_end[];
extern const unsigned char _binary_copyright_txt_size[];  /* the size is the symbol's address: (size_t)_binary_copyright_txt_size */

Don't rush ahead and puts() this variable. The copyright.txt file has no final ASCII NUL character which C uses to mark the end of strings. Perhaps use the old-fashioned UNIX write():

#include <stdio.h>
#include <unistd.h>
fflush(stdout);  /* Synchronise C's stdio and UNIX's I/O. */
write(STDOUT_FILENO, _binary_copyright_txt_start,
      _binary_copyright_txt_end - _binary_copyright_txt_start);

Alternatively, add a final NUL to the copyright.txt file:

$ echo -e -n "\x00" >> copyright.txt

and program:

#include <stdio.h>
extern const unsigned char _binary_copyright_txt_start[];
fputs((const char *)_binary_copyright_txt_start, stdout);

There's one small wrinkle:

$ objdump -s copyright.o
copyright.o:   file format elf32-littlearm
Contents of section .data:
 0000 54686973 2070726f 6772616d 20697320  This program is 
 0010 66726565 20736f66 74776172 653b2079  free software; y
 0020 6f752063 616e2072 65646973 74726962  ou can redistrib
 0030 75746520 69742061 6e642f6f 72206d6f  ute it and/or mo

The .data section is copied into memory for all running instances of the executable. We really want the contents of the copyright.txt file to be in the .rodata section so that there is only ever one copy in memory no matter how many copies are running.

objcopy could have copied an input ‘binary’ copyright.txt file to a particular section in an output object file, and that particular section could have been .rodata. But objcopy's options require us to state the architecture of the output object file. We really don't want a different command for compiling on x86, AMD64, ARM and so on.

So here's a hack: let ld set the architecture details when it generates its default output and then use objcopy to rename the section from .data to .rodata. Remember that .data contains only the three _binary_… symbols and so they are the only symbols which will move from .data to .rodata:

$ ld --relocatable --format=binary --output=copyright.tmp.o copyright.txt
$ objcopy --rename-section .data=.rodata,alloc,load,readonly,data,contents copyright.tmp.o copyright.o
$ objdump -s copyright.o
copyright.o:   file format elf32-littlearm
Contents of section .rodata:
 0000 54686973 2070726f 6772616d 20697320  This program is 
 0010 66726565 20736f66 74776172 653b2079  free software; y
 0020 6f752063 616e2072 65646973 74726962  ou can redistrib
 0030 75746520 69742061 6e642f6f 72206d6f  ute it and/or mo

Link this copyright.o with the remainder of the program as before:

$ cc -c helloworld.c
$ cc -o helloworld helloworld.o copyright.o

2016-04-02 12:02 pm
Entry tags:

Getting started with Northbound Networks' Zodiac FX OpenFlow switch

Yesterday I received a Zodiac FX, a four-port 100Base-TX OpenFlow switch, as a result of Northbound Networks' KickStarter. Today I put the Zodiac FX through its paces.

Plug the supplied USB cable into the Zodiac FX and into a PC. The Zodiac FX will appear in Debian as the serial device /dev/ttyACM0. The kernel log says:

debian:~ $ dmesg
usb 1-1.1.1: new full-speed USB device number 1 using dwc_otg
usb 1-1.1.1: New USB device found, idVendor=03eb, idProduct=2404
usb 1-1.1.1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
usb 1-1.1.1: Product: Zodiac
usb 1-1.1.1: Manufacturer: Northbound Networks
cdc_acm 1-1.1.1:1.0: ttyACM0: USB ACM device

You can use Minicom (obtained with sudo apt-get install minicom) to speak to that serial port by starting it with minicom --device /dev/ttyACM0. You'll want to be in the "dialout" group; you can add yourself with sudo usermod --append --groups dialout $USER, but you'll need to log in again for that to take effect. The serial parameters are: speed = 115,200bps, data bits = 8, parity = none, stop bits = 1, CTS/RTS = off, XON/XOFF = off.
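Those parameters can be kept in a Minicom profile rather than set by hand each time. A sketch, assuming a file ~/.minirc.zodiac (the profile name is my invention), which you would start with minicom zodiac:

```
# Serial settings for the Zodiac FX console.
pu port        /dev/ttyACM0
pu baudrate    115200
pu bits        8
pu parity      N
pu stopbits    1
pu rtscts      No
pu xonxoff     No
```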

The entry text is:

 _____             ___               _______  __
/__  /  ____  ____/ (_)___ ______   / ____/ |/ /
  / /  / __ \/ __  / / __ `/ ___/  / /_   |   /
 / /__/ /_/ / /_/ / / /_/ / /__   / __/  /   |
/____/\____/\__,_/_/\__,_/\___/  /_/    /_/|_|
            by Northbound Networks
Type 'help' for a list of available commands

Typing "help" gives:

The following commands are currently available:
 show ports
 show status
 show version
 show config
 show vlans
 set name <name>
 set mac-address <mac address>
 set ip-address <ip address>
 set netmask <netmasks>
 set gateway <gateway ip address>
 set of-controller <openflow controller ip address>
 set of-port <openflow controller tcp port>
 set failstate <secure|safe>
 add vlan <vlan id> <vlan name>
 delete vlan <vlan id>
 set vlan-type <openflow|native>
 add vlan-port <vlan id> <port>
 delete vlan-port <port>
 factory reset
 set of-version <version(0|1|4)>
 show status
 show flows
 clear flows
 read <register>
 write <register> <value>

Some baseline messing about:

Zodiac_FX# show ports
Port 1
 Status: DOWN
 VLAN type: OpenFlow
 VLAN ID: 100
Port 2
 Status: DOWN
 VLAN type: OpenFlow
 VLAN ID: 100
Port 3
 Status: DOWN
 VLAN type: OpenFlow
 VLAN ID: 100
Port 4
 Status: DOWN
 VLAN type: Native
 VLAN ID: 200

Zodiac_FX# show status
Device Status
 Firmware Version: 0.57
 CPU Temp: 37 C
 Uptime: 00:00:01

Zodiac_FX# show version
Firmware version: 0.57

Zodiac_FX# config

Zodiac_FX(config)# show config
 Name: Zodiac_FX
 MAC Address: 70:B3:D5:00:00:00
 IP Address:
 OpenFlow Controller:
 OpenFlow Port: 6633
 Openflow Status: Enabled
 Failstate: Secure
 Force OpenFlow version: Disabled
 Stacking Select: MASTER
 Stacking Status: Unavailable

Zodiac_FX(config)# show vlans
	VLAN ID		Name			Type
	100		'Openflow'		OpenFlow
	200		'Controller'		Native

Zodiac_FX(config)# exit

Zodiac_FX# openflow

Zodiac_FX(openflow)# show status
OpenFlow Status
 Status: Disconnected
 No tables: 1
 No flows: 0
 Table Lookups: 0
 Table Matches: 0

Zodiac_FX(openflow)# show flows
No Flows installed!

Zodiac_FX(openflow)# exit

We want to use the controller address on our PC and connect eth0 on the PC to Port 4 of the switch (probably by plugging them both into the same local area network).

Zodiac_FX# show ports
Port 4
 Status: UP
 VLAN type: Native
 VLAN ID: 200
debian:~ $ sudo ip addr add label eth0:zodiacfx dev eth0
debian:~ $ ip addr show label eth0:zodiacfx
    inet scope global eth0:zodiacfx
       valid_lft forever preferred_lft forever
debian:~ $ ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=255 time=0.287 ms
64 bytes from icmp_seq=2 ttl=255 time=0.296 ms
64 bytes from icmp_seq=3 ttl=255 time=0.271 ms
--- ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.271/0.284/0.296/0.022 ms

Now to check the OpenFlow basics. We'll use the POX controller, which is a simple controller written in Python 2.7.

debian:~ $ git clone https://github.com/noxrepo/pox.git
debian:~ $ cd pox
debian:~ $ ./pox.py openflow.of_01 --address= --port=6633 --verbose
POX 0.2.0 (carp) / Copyright 2011-2013 James McCauley, et al.
DEBUG:core:POX 0.2.0 (carp) going up...
DEBUG:core:Running on CPython (2.7.9/Mar 8 2015 00:52:26)
DEBUG:core:Platform is Linux-4.1.19-v7+-armv7l-with-debian-8.0
INFO:core:POX 0.2.0 (carp) is up.
DEBUG:openflow.of_01:Listening on
INFO:openflow.of_01:[70-b3-d5-00-00-00 1] connected
Zodiac_FX(openflow)# show status
 Status: Connected
 Version: 1.0 (0x01)
 No tables: 1
 No flows: 0
 Table Lookups: 0
 Table Matches: 0

You can then load POX programs to manipulate the network. A popular first choice might be to turn the Zodiac FX into a flooding hub.

debian:~ $ ./pox.py --verbose openflow.of_01 --address= --port=6633 forwarding.hub
POX 0.2.0 (carp) / Copyright 2011-2013 James McCauley, et al.
INFO:forwarding.hub:Hub running.
DEBUG:core:POX 0.2.0 (carp) going up...
DEBUG:core:Running on CPython (2.7.9/Mar 8 2015 00:52:26)
DEBUG:core:Platform is Linux-4.1.19-v7+-armv7l-with-debian-8.0
INFO:core:POX 0.2.0 (carp) is up.
DEBUG:openflow.of_01:Listening on
INFO:openflow.of_01:[70-b3-d5-00-00-00 1] connected
INFO:forwarding.hub:Hubifying 70-b3-d5-00-00-00
Zodiac_FX(openflow)# show flows
Flow 1
  Incoming Port: 0			Ethernet Type: 0x0000
  Source MAC: 00:00:00:00:00:00		Destination MAC: 00:00:00:00:00:00
  VLAN ID: 0				VLAN Priority: 0x0
  IP Protocol: 0			IP ToS Bits: 0x00
  TCP Source Address:
  TCP Destination Address:
  TCP/UDP Source Port: 0		TCP/UDP Destination Port: 0
  Wildcards: 0x0010001f			Cookie: 0x0
  Priority: 32768			Duration: 9 secs
  Hard Timeout: 0 secs			Idle Timeout: 0 secs
  Byte Count: 0			Packet Count: 0
  Action 1:
   Output: FLOOD

If we now send a packet into Port 1 we see it flooded to Port 2 and Port 3.

We also see it flooded to Port 4 (which is in 'native' mode). Flooding the packet up the same port as the OpenFlow controller isn't a great design choice. It would be better if the switch had four possible modes for ports with traffic kept distinct between them: native switch forwarding, OpenFlow forwarding, OpenFlow control, and switch management. The strict separation of forwarding, control and management is one of the benefits of software defined networks (that does lead to questions around how to bootstrap a remote switch, but the Zodiac FX isn't the class of equipment where that is a realistic issue).

VLANs between ports only seem to matter for native mode. An OpenFlow program can — and will — happily ignore the port's VLAN assignment.

The Zodiac FX is currently an OpenFlow 1.0 switch, so it can manipulate MAC addresses but not other packet headers. That still gives a surprising number of applications. Northbound Networks say OpenFlow 1.3, with its manipulation of IP addresses, is imminent.

The Zodiac FX is an interesting bit of kit. It is well worth buying one even at this early stage of development because it is much better at getting your hands dirty (and thus learn) than is the case with software-only simulated OpenFlow networks.

The source code is open source. It is on Github in some Atmel programming workbench format [Errata: these were some Microsoft Visual Studio 'solution' files]. I suppose it's time to unpack that, see if there's a free software Atmel toolchain, and set about fixing this port mode bug. I do hope simple modification of the switch's software is possible: a switch to teach people OpenFlow is great; a switch to teach people embedded network programming would be magnificent.

2016-03-27 05:30 pm
Entry tags:

Moments in Linux history: Pentium II

One neglected moment in Linux history was the arrival of the Pentium II processor with Deschutes core in 1998. Intel had been making capable 32-bit processors since the 80486, but these processors were handily outperformed by Alpha, MIPS and SPARC. The Pentium II 450MHz turned the tables. These high-end PCs easily outperformed the MIPS- and SPARC-based workstations and drew level with the much more expensive Alpha.

UNIX™ users looking to update their expensive workstations looked at a high-end PC and thought "I wonder if that runs Unix?". Inserting a Red Hat Linux 6.0 CD into the drive slot and installing the OS led to the discovery of a capable and mature operating system, a better Unix than the UNIX™ they had been using previously. Within a few years the majority of UNIX™ systems administrators were familiar with Linux, because they were running it on their own workstations, whatever Unixen they were administering over SSH.

This familiarity in turn led to an appreciation for Linux's stability. When it was time to field new small services — such as DNS and DHCP — it was financially attractive to serve these from a Linux platform rather than a UNIX™ platform. Moreover, the Linux distributors did a much better job of packaging the software which people actually used, whereas the traditional Unix manufacturers took a "not invented here" attitude: shipping very old versions of software such as DNS servers, and making users download and compile simple tools rather than having the tools pre-packaged for simple installation.

The Linux distributors did such a good job that it was much easier to run a web site from Linux than from Windows. The relative importance of these 'Internet' applications was missed by a Microsoft keen to dominate the 'enterprise' market. Before 1999 Microsoft's ambition to crush the Unixen looked achievable. After 2000 that ambition was unrealistic hubris.

2016-03-18 05:03 pm
Entry tags:

Raspberry Pi 3 performance, power and heat

When you order a Raspberry Pi 3 then do yourself a favour and also order the matching 5.1VDC 2.5A power supply (eg: STONTRONICS T5875DV, Element 14 item 2520785). The RPi3 is four cores of 64-bit ARM with an impressive GPU -- that's a lot to power. If you present it with too little power the circuitry will make the red "power" LED blink and the software will reduce the CPU's clock rate.

You'll notice the clever use of tolerances which allows the RPi3 power supply to also charge a phone, as you might expect from its Micro USB connector (5.0V + 10% = 5.5V; 5.1V + 5% ≅ 5.4V). The cable on the RPi3 power supply has an impressive amount of copper, so they are serious about avoiding voltage drop due to thin cables.
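A quick check of that tolerance arithmetic, done in millivolts to avoid floating point rounding surprises (the figures are the ones quoted above):

```shell
# Recompute the voltage tolerances quoted above, in millivolts:
# a phone charger may deliver up to 5.0V + 10%; the RPi3 supply 5.1V + 5%.
awk 'BEGIN {
    printf "phone charger max: %d mV\n", 5000 * 110 / 100
    printf "RPi3 supply max:   %d mV\n", 5100 * 105 / 100
}'
```

So the RPi3 supply's maximum of 5355mV sits comfortably under the 5500mV a Micro USB device must already tolerate.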

You can argue that this is poor design, and that the RPi should really use one of the higher power delivery solutions designed for mobile phones. But with Google, Apple and Samsung all choosing different solutions, whatever the RPi's designers chose, most purchasers would have to buy the matching power supply. At least this design makes it simple for makers and hobbyists to power the RPi3: simply provide the specified voltage and current, no USB signalling is needed.

The RPi3 will also slow down when it gets too hot; this is called throttling and is a feature of all modern CPUs. People are currently experimenting with heat sinks. Even a traditional aluminium 10mm heat sink seems to make a worthwhile difference in preventing throttling on CPU-intensive tasks; although how often such tasks occur in practice is another question. The newer ceramic heat sinks are about four times more effective than the classic black aluminium heat sinks, so keep an eye out for someone offering a kit of those for the RPi3. Heat is a further complication when looking at cases, as the airflow through most RPi2 cases is quite poor. I've simply taken a drill to the plastic RPi2 case I am using, although there are ugly industrial cases and expensive attractive cases with good airflow.
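Later RPi firmware can report whether throttling has occurred via vcgencmd get_throttled, which returns a bitmask. A sketch of decoding it; the command name and the bit meanings are assumptions based on the Raspberry Pi firmware documentation:

```shell
# Decode the bit flags of `vcgencmd get_throttled` (bit meanings assumed
# from Raspberry Pi firmware documentation).
decode_throttled() {
    v=$(( $1 ))
    [ $(( v & 0x1 ))     -ne 0 ] && echo "under-voltage now"
    [ $(( v & 0x2 ))     -ne 0 ] && echo "ARM frequency capped now"
    [ $(( v & 0x4 ))     -ne 0 ] && echo "throttled now"
    [ $(( v & 0x10000 )) -ne 0 ] && echo "under-voltage has occurred"
    [ $(( v & 0x20000 )) -ne 0 ] && echo "ARM frequency capping has occurred"
    [ $(( v & 0x40000 )) -ne 0 ] && echo "throttling has occurred"
    return 0
}

# On an RPi3: decode_throttled "$(vcgencmd get_throttled | cut -d= -f2)"
decode_throttled 0x50005
```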

Further reading: Raspberry Pi 3 Cooling / Heat Sink Ideas, Pi3B thermal throttling.

2015-08-28 10:37 am

Academic publishing, now being tried by the bottom-feeders

There are a lot of fake academic journals out there seeking to defraud authors. Not really surprising: academic publishing has such a high profit margin that even established publishers have a whiff of running a scam[1], open access has blurred the edge of what a journal is, and the sharks have moved in.

But now it seems that even scammers who once may have been Nigerian princes are now trying their hand:

From: Kate .M. (editor)
Sent: Friday, 28 August 2015 9:21 AM
Subject: Assist in Peer-Reviewing Research Papers

Dear Professor,

Thank you for your time for reading this mail. Science Publication wishes to invite you to become our Journal Review Board member.

Your professional expertise will be greatly appreciated by us as well as authors who have submitted their research manuscript for peer-review evaluation and publication.

Our journal deals on the following key studies:

Microbiology | Biochemistry | Medicine and Clinical Trials | Biotechnology | Agricultural Research and Management | Physics | Mathematics and Statistics | Pure and Applied Chemistry | Environmental Engineering Research | Electrical and Electronic Engineering | Civil Engineering and Architecture | Chemical Engineering Research | Economics | Business Management | Psychology | Sociology and Anthropology |

Please inform us of your interest to participate in our Review Board. More information will be provided to you upon your reply.

Thank you.
Assistant Editor
Science Journal Publication

NOTE: Simply Send A Blank Message With Unsubscribe As Subject to Remove Your E-mail From Our List.

For the record: not a professor.

[1] Someone else pays for research to be done. Peer review and editorial is done for free. "Page fees" charged to authors cover the publishing costs. The journal's subscription revenue is handsome profit.

2015-07-30 03:13 pm
Entry tags:

Customising a systemd unit file

Once in a while you want to start a daemon with differing parameters from the norm.

For example, the default parameters to Fedora's packaging of ladvd give too much access to unauthenticated remote network units when it allows those units to set the port description on your interfaces[1]. So let's use that as our example.

Systemd unit files in /etc/systemd/system/ shadow those in /usr/lib/systemd/system/. So we could copy the ladvd.service unit file from /usr/lib/... to /etc/..., but we're old, experienced sysadmins and we know that this will lead to trouble in the long run: /usr/lib/systemd/system/ladvd.service will be updated to support some new systemd feature and we'll miss that update in our copy of the file.

What we want is an "include" command which will pull in the text of the distributor's configuration file. Then we can set about changing it. Systemd has a ".include" command. Unfortunately its parser also checks that some commands occur exactly once, so we can't modify those commands as including the file consumes that one definition.

In response, systemd allows a variable to be cleared; when the variable is set again it is counted as being set once.

Thus our modification of ladvd.service occurs by creating a new file /etc/systemd/system/ladvd.service containing:

.include /usr/lib/systemd/system/ladvd.service
# Clear the ExecStart= pulled in by .include, then set it exactly once.
# The included command was: ExecStart=/usr/sbin/ladvd -f -a -z
# but -z allows a string to be passed to the kernel by an unauthenticated external user.
ExecStart=
ExecStart=/usr/sbin/ladvd -f -a
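On newer systemd versions the same override can be expressed as a drop-in directory, which merges the fragment with the distributor's unit rather than naming it with .include; a sketch, see systemd.unit(5) for the .d mechanism:

```ini
# /etc/systemd/system/ladvd.service.d/local.conf
[Service]
# Clear the distributor's ExecStart, then set it exactly once.
ExecStart=
ExecStart=/usr/sbin/ladvd -f -a
```

Either way, run systemctl daemon-reload afterwards so systemd re-reads the unit files.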

[1] At the very least, a security issue equal to the "rude words in SSID lists" problem. At its worst, an overflow attack vector.

2015-07-22 10:26 pm
Entry tags:

Configuring Zotero PDF full text indexing in Debian Jessie


Zotero is an excellent reference and citation manager. It runs within Firefox, making it very easy to record sources that you encounter on the web (and in this age of publication databases almost everything is on the web). There are plugins for LibreOffice and for Word which can then format those citations to meet your paper's requirements. Zotero's Firefox application can also output for other systems, such as Wikipedia and LaTeX. You can keep your references in the Zotero cloud, which is a huge help if you use different computers at home and work or school.

The competing product is EndNote. Frankly, EndNote belongs to a previous era of research methods. If you use Windows, Word and Internet Explorer and have a spare $100 then you might wish to consider it. For me there's a host of showstoppers, such as not running on Linux and not being able to bookmark a reference from my phone when it is mentioned in a seminar.

Anyway, this article isn't a Zotero versus EndNote smackdown, there's plenty of those on the web. This article is to show a how to configure Zotero's full text indexing for the RaspberryPi and other Debian machines.

Installing Zotero

There are two parts to install: a plugin for Firefox, and extensions for Word or LibreOffice. (OpenOffice works too, but to be frank again, LibreOffice is the mainstream project of that application these days.)

Zotero keeps its database as part of your Firefox profile. Now if you're about to embark on a multi-year research project, you may one day have trouble with Firefox, and someone will suggest clearing your Firefox profile, and Firefox will once again work fine. But then you wonder, "where are my years of carefully-collected references?" And then you cry, before carefully trying to re-sync.

So the first task in serious use of Zotero on Linux is to move that database out of Firefox. After installing Zotero in Firefox press the "Z" button, press the gear icon, and select "Preferences" from the drop-down menu. On the resulting panel select "Advanced" and "Files and folders". Press the radio button "Data directory location -- custom" and enter a directory name.

I'd suggest using a directory named "/home/vk5tu/.zotero" or "/home/vk5tu/zotero" (amended for your own userid, of course). The standalone client uses a directory named "/home/vk5tu/.zotero" but there are advantages to not keeping years of precious data in some hidden directory.

After making the change, quit Firefox. Now move the directory in the Firefox profile to wherever you told Zotero to look:

$ cd
$ mv .mozilla/firefox/*.default/zotero .zotero

Full text indexing of PDF files

Zotero can create a full-text index of PDF files. You want that. The directions for configuring the tools are simple.

Too simple. Because downloading a statically-linked binary from the internet which is then run over PDFs from a huge range of sources is not the best of ideas.

The page does have instructions for manual configuration but the page lacks a worked example. Let's do that here.

Manual configuration of PDF full indexing utilities on Debian

Install the pdftotext and pdfinfo programs:

    $ sudo apt-get install poppler-utils

Find the kernel and architecture:

$ uname --kernel-name --machine
Linux armv7l

In the Zotero data directory create symbolic links to the installed programs. The printed kernel-name and machine are part of each link's name:

$ cd ~/.zotero
$ ln -s $(which pdftotext) pdftotext-$(uname -s)-$(uname -m)
$ ln -s $(which pdfinfo) pdfinfo-$(uname -s)-$(uname -m)

Install a small helper script to alter pdftotext parameters:

$ cd ~/.zotero
$ wget -O redirect.sh https://raw.githubusercontent.com/zotero/zotero/4.0/resource/redirect.sh
$ chmod a+x redirect.sh

Create some files named *.version containing the version numbers of the utilities. The version number appears in the third field of the first line on stderr:

$ cd ~/.zotero
$ pdftotext -v 2>&1 | head -1 | cut -d ' ' -f3 > pdftotext-$(uname -s)-$(uname -m).version
$ pdfinfo -v 2>&1 | head -1 | cut -d ' ' -f3 > pdfinfo-$(uname -s)-$(uname -m).version
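Those pipelines can be sanity-checked against a stand-in for the banner which pdftotext prints on stderr:

```shell
# pdftotext -v prints something like "pdftotext version 0.26.5" as the
# first line on stderr; the version is the third space-separated field.
banner="pdftotext version 0.26.5"
echo "$banner" | head -1 | cut -d ' ' -f3
```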

Start Firefox, and Zotero's gear icon, "Preferences", "Search" should report something like:

PDF indexing
  pdftotext version 0.26.5 is installed
  pdfinfo version 0.26.5 is installed

Do not press "check for update". The usual maintenance of the operating system will keep those utilities up to date.

2015-06-11 10:32 pm
Entry tags:

Notes for upgrading RaspberryPi from Raspbian Wheezy to Raspbian Jessie

Debian distributions for the Raspberry Pis

The Raspbian distribution is Debian recompiled and tuned for the ARM instruction set used in the original Raspberry Pi Model A, Model B, and Model B+.

The Raspberry Pi2 has a more recent ARM instruction set. That gives RaspberryPi2 users two paths to Debian Jessie: use the Raspbian distribution or use the stock Debian ARM distribution with a hack for the Raspberry Pi kernel.

This article is about upgrading an existing Raspbian Wheezy distribution to Raspbian Jessie. Some Linux systems administration skill is required to do this.

Alter /etc/apt/sources.list

Edit the files /etc/apt/sources.list and /etc/apt/sources.list.d/*.list, replacing every occurrence of "wheezy" with "jessie".

For example, if /etc/apt/sources.list says:

deb http://mirrordirector.raspbian.org/raspbian/ wheezy main contrib non-free rpi

then alter that to:

deb http://mirrordirector.raspbian.org/raspbian/ jessie main contrib non-free rpi

Similarly /etc/apt/sources.list.d/raspi.list contained:

deb http://archive.raspberrypi.org/debian/ wheezy main

and this becomes:

deb http://archive.raspberrypi.org/debian/ jessie main

The repository described by the file /etc/apt/sources.list.d/collabora.list doesn't yet have a Jessie section.
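Those edits can also be scripted. A sketch which rehearses the substitution on a scratch file; only run the final commented-out command against /etc once you are satisfied:

```shell
# Rehearse the wheezy -> jessie substitution on a scratch copy.
tmp=$(mktemp)
echo 'deb http://mirrordirector.raspbian.org/raspbian/ wheezy main contrib non-free rpi' > "$tmp"
sed -i 's/wheezy/jessie/g' "$tmp"
cat "$tmp"
rm -f "$tmp"

# The real edit, once satisfied:
#   sudo sed -i 's/wheezy/jessie/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
```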


The number of packages to upgrade will depend on how many packages you installed in addition to those which originally arrived with Raspbian Wheezy. In general, expect a download of somewhere around 1GB.

Upgrades are best done from the old-fashioned text console. Press Ctrl-Alt-F1 and log in as root.


# apt-get update
# apt-get dist-upgrade
# apt-get autoremove

Do not reboot when those commands complete. We'll fix a few of the more common issues with the upgrade at this most convenient moment.

Correct errors

udev rules

There are two files containing syntax errors in /lib/udev/rules.d/ which cause udev to fail to start: 60-python-pifacecommon.rules and 60-python3-pifacecommon.rules. These files are not owned by any package, which is a little annoying and naive of their authors. Rename them to stop udev attempting, and failing, to read them:

# cd /lib/udev/rules.d
# mv 60-python-pifacecommon.rules 60-python-pifacecommon.rules.failed
# mv 60-python3-pifacecommon.rules 60-python3-pifacecommon.rules.failed

ifplugd replaced by wicd

Networking of hot-plugged interfaces is done by ifplugd in Wheezy. This is done by wicd in Jessie.

# apt-get purge ifplugd

systemd is the cgroups controller

The init system is done using System V-style scripts in Wheezy. This is done using systemd in Jessie.

Systemd uses control groups so that unexpected process termination is reported to systemd. In Linux a control group can have only one controlling process, and in Jessie that has to be systemd. This isn't a poor outcome, as systemd makes a fine controller.

However if another process attempts to be the control groups controller then systemd can fail when starting processes. So remove any existing controllers:

# apt-get purge cgmanager cgroup-bin cgroup-tools libcgroup1

systemd is the init system

A package called systemd-shim allows other init systems to use logind and other programs, by providing just enough of systemd's function. Jessie uses systemd itself, so we don't need systemd-shim. Unfortunately the dist-upgrade seems to pull it in:

# apt-get purge systemd-shim

[Thanks to ktb on the RaspberryPi.org forums for correcting a typo here.]

Allow logging to the journal

systemd doubles the number of system loggers by adding a new logger called journald. It can provide logs to the usual syslogd. However, when debugging startup issues it can be useful to have journald write the files itself. To do this create the directory /var/log/journal/:

# mkdir -p /var/log/journal

journald keeps its logs in binary form. Use journalctl -xb to see the logs of the current boot; journalctl -b -1 is handy to view the log of the previous boot.

Consider booting in single user mode

You might choose to give yourself a way to debug startup issues by having the kernel start in single user mode. Edit /boot/cmdline.txt, appending the text single to the end of its one line of parameters.

The workflow here is:

  • Boot RPi.

  • Press Ctrl-D when asked for a password to enter single user mode. The boot will continue into multi-user mode.

  • If this hangs then press Ctrl+Alt+Del to shut down.

  • Restart RPi. This time provide the root password at the single user mode password prompt. Use journalctl to view the log of the previous boot and examine what went wrong.

  • Correct the error, and use shutdown -r now to try again from the top.

Once you have sorted out the issues during system init, remove the single parameter from /boot/cmdline.txt so that the system boots into multi-user mode.
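Since /boot/cmdline.txt is a single line of space-separated parameters, both edits are easy to script. A sketch on a scratch copy (the parameters shown are illustrative, not your actual cmdline):

```shell
# Rehearse appending and removing "single" on a scratch copy of cmdline.txt.
tmp=$(mktemp)
echo 'console=ttyAMA0,115200 root=/dev/mmcblk0p2 rootwait' > "$tmp"

sed -i 's/$/ single/' "$tmp"     # append "single" for debugging
grep 'single$' "$tmp"

sed -i 's/ single$//' "$tmp"     # remove it once the system boots cleanly
cat "$tmp"
rm -f "$tmp"
```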


Reboot and work through issues resulting from the upgrade.

Clean up

# apt-get clean

Journald is an enterprise-level logging solution, so it is keen on flushing data to disk. This radically increases the number of flash blocks written, and the resulting reduction in flash card lifetime isn't appreciated on the RPi. So it's probably best to remove /var/log/journal/* and allow journald to log to RAM and syslog instead.

2015-03-29 08:33 am
Entry tags:

Fedora 21: automatic software updates

The way Fedora does automatic software updates has changed with the replacement of yum(8) with dnf(8).

Start by disabling yum's automatic updates, if installed:

# dnf remove yum-cron yum-cron-daily

Then install the dnf automatic update software:

# dnf install dnf-automatic

Alter /etc/dnf/automatic.conf to change the "apply_updates" line:

apply_updates = yes

Instruct systemd to run the updates periodically:

# systemctl enable dnf-automatic.timer
# systemctl start dnf-automatic.timer

2015-01-19 12:50 pm
Entry tags:

Fedora: easy recovery from corrupt root partition

When you boot Fedora with a corruption which is not automatically repaired when systemd runs fsck -a, you are asked on the console whether to enter single user mode or to continue. If you choose to enter single user mode then you'll find that you can't run fsck /dev/md0, as the root filesystem is mounted.

Dracut has a debugging mode with named breakpoints: it will boot up to the break-point, and then dracut will drop the console into a shell.

This is useful for solving a corrupted root filesystem: we can boot up to just before the disk is mounted, break into the Dracut shell, and then run fsck on the yet-to-be-mounted root filesystem. To do this temporarily add the Dracut breakpoint parameter

rd.break=pre-mount

to the Linux kernel command line.

In Fedora you can temporarily modify the Linux kernel parameters by pressing e at the Grub bootloader prompt, arrowing down to the "linux" command, adding the parameter to the end of that line, and pressing F10 to run the Grub command list you see on the screen.
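The resulting Grub "linux" line might read something like this (the kernel and root paths here are illustrative; rd.break=pre-mount names dracut's breakpoint just before the root filesystem is mounted):

```
linux /vmlinuz-3.17.4-301.fc21.x86_64 root=/dev/md0 ro rd.break=pre-mount
```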

Dracut will load the logical volumes, assemble any RAID, and then present a shell on the console. Say fsck /dev/md0 (or wherever /etc/fstab says your / filesystem lives) and then reboot. This is a world easier than booting from a CD or USB and working out which partitions are on which logical volumes, and which logical volumes are in which RAID devices.

Breakpoints are a very fine feature of Dracut and, as this blog posting shows, very useful for solving problems which appear during the early stages of booting the machine.

2015-01-08 10:42 am
Entry tags:

Hansard, ACT Parliament, 1992-09-16

Mr Stuart Gill

MR STEVENSON (4.29): Madam Speaker, last year I made a number of statements in this Assembly concerning links between organised crime and the pornography industry in Australia. I had received information on these matters from Stuart Gill, who told me that he had been a senior investigator with the Costigan royal commission. He had also said that he was working with the Victoria Police as a consultant and had worked in that capacity for some time. That was later confirmed in a letter of 24 May 1991 by Inspector Cosgriff of the Victoria Police Internal Security Unit.

I hired Gill on staff to assist me in matters relating to pornography and organised crime. During that time Gill told me that a man named Gerald Gold had been named as a leading eastern States crime figure in a confidential segment of the final report of the Costigan royal commission. As a result of that information, I made statements in this Assembly concerning Mr Gold that I now believe were wrong. I later came to understand that Stuart Gill was not a police consultant but was, in fact, an informer for the Victoria Police. Gill left my employ in October last year.

Yesterday the Victorian media reported on allegations about widespread police and political corruption resulting from an investigation named Operation Iceberg. The Victoria Police Commissioner, Mr Kel Glare, stated yesterday that the allegations were not only unsubstantiated but utterly false. The commissioner said that the Operation Iceberg document was not a police report but had been prepared by a police informer. That police informer has been named as Stuart Gill. I am aware that Stuart Gill was born under the name of Paul Dummett and has also used the name Andrew McAuley.

I wish to take the opportunity to apologise to Mr Gerald Gold for any difficulties he may have been caused by statements I made in this house. Gill also stole documents from my office and spread misleading stories about me to the media. I have formed the opinion that Gill is a pathological liar. I have spoken to other people in Victoria - I made a trip to Victoria - and they have told me of certain fraud and other offences which they have said have not been prosecuted. Perhaps this situation in Victoria will give the police an opportunity to put this matter to justice.

MR HUMPHRIES (4.31): Madam Speaker, first of all, I commend Mr Stevenson for that statement. I have had many representations from Mr Gold. I think it took some courage for Mr Stevenson to come into the house and say that he was wrong in things he had said about Mr Gold based on information supplied to him. That is good news for Mr Gold and a tribute to Mr Stevenson.

2014-12-12 11:47 pm
Entry tags:

USB product IDs for documentation - success


In a previous posting I reported a lack of success when enquiring of the USB Implementors' Forum if a Vendor ID had been reserved for documentation.

To recap my motivation, a Vendor ID -- or at least a range of Product IDs -- is desirable to:

  • Avoid defamation, such as using a real VID:PID to illustrate a "workaround", which carries the implication that the product is less than perfect. Furthermore, failing to check whether a VID:PID has actually been used is "reckless defamation".

  • Avoid consumer law issues, such as using a real VID:PID to illustrate a configuration for a video camera when in fact the product is a mouse.

  • Avoid improper operation, as may occur if a user cuts-and-pastes an illustrative example and that affects a real device.

  • Avoid trademark infringement.

For these reasons other registries of numbers often reserve entries for documentation: DNS names, IPv4 addresses, IPv6 addresses.

Allocation of 256 Product IDs, thanks to OpenMoko

OpenMoko has been generous enough to reserve a range of Product IDs for use by documentation:

0x1d50:0x5200 through to 0x1d50:0x52ff

Note carefully that other Product IDs within Vendor ID 0x1d50 are allocated to actual physical USB devices. Only the Product IDs 0x1d50:0x5200 through to 0x1d50:0x52ff are reserved for use by documentation.
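Documentation can then safely use udev examples such as the following, since 0x1d50:0x5200 will never match real hardware (the rule's action here is purely illustrative):

```
# /etc/udev/rules.d/99-example.rules
# 1d50:5200 is a VID:PID reserved for documentation; no real device will match.
SUBSYSTEM=="usb", ATTR{idVendor}=="1d50", ATTR{idProduct}=="5200", MODE="0664"
```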

My deep thanks to OpenMoko and Harald Welte.

Application form

The application form submitted to OpenMoko read:

  • a name and short description of your usb device project

    Documentation concerning the configuration of USB buses and devices.

    For example, documentation showing configuration techniques for Linux's udev rules.

    The meaning of "documentation" shall not extend to actual configuration of an actual device. It is constrained to showing methods for configuration. If a VID:PID for an actual device is required then these can be obtained from elsewhere.

    OpenMoko will not assign these "Documentation PIDs" to any actual device, now or forever.

    Operating systems may refuse to accept devices with these "documentation VID:PIDs". Operating systems may refuse to accept configuration which uses these "documentation VID:PIDs".

  • the license under which you are releasing the hardware and/or software/firmware of the device

    The documentation may use any license. Restricting use to only free documentation is problematic: the definition of "free" for documents is controversial; and it would be better if the PID:VIDs were well known and widely used by all authors of technical documentation.

  • a link to the project website and/or source code repository, if any

    Nil, one can be created if this is felt to be necessary (eg, to publicise the allocation).

  • if you need multiple Product IDs, please indicate + explain this at the first message, rather than applying for a second ID later

    Approximately 10.

2014-12-04 10:07 am
Entry tags:

TFTP server, Fedora 24

The major system management tools have altered in recent Fedora versions, so the long-remembered phrases no longer work. Here is how to install and make available to the world a TFTP server.

$ sudo pkcon install tftp tftp-server
$ echo 'in.tftpd: ALL' | sudo tee -a /etc/hosts.allow
$ sudo firewall-cmd --add-service tftp
$ sudo firewall-cmd --permanent --add-service tftp
$ sudo systemctl enable tftp.socket
$ sudo systemctl start tftp.socket

Test with:

$ sudo cp example.bin /var/lib/tftpboot/
remote$ tftp server.example.com
tftp> get example.bin
tftp> quit

Use cp rather than mv so that SELinux sets the correct attributes on the file. If you did use mv, then restorecon /var/lib/tftpboot/example.bin will fix the label.

To see what is going on, use journalctl -f -l. You don't see much. Here's what a working download from the TFTP server looks like:

Jan 01 00:00:00 tftp-server.example.net in.tftpd[2]: RRQ from ::ffff: filename example.bin
Jan 01 00:00:10 tftp-server.example.net in.tftpd[2]: Client :ffff: finished example.bin

To enable enough messages to see why a particular client is failing, to set a small blocksize to be compatible with a wide range of equipment, and to extend the timeout to allow enough time for routers with slow flash not to encounter confusing retransmissions, add the file /etc/systemd/system/tftp.service containing:

.include /lib/systemd/system/tftp.service
ExecStart=
ExecStart=/usr/sbin/in.tftpd --blocksize 1468 --retransmit 2000000 --verbose --secure /var/lib/tftpboot

Then run systemctl daemon-reload and systemctl restart tftp.socket so the new unit file takes effect.

If you want to use a different directory for the files then make sure you get your SELinux labelling correct. There are two setsebool nerd knobs: tftp_anon_write is needed to allow writing (along with changing flags on the daemon command line and getting the Unix permissions correct); and tftp_home_dir loosens the type matching enough that a user home directory can serve TFTP.

Consider that between Fedora 14 (2010) and Fedora 22 (2015) the package installation command, the firewall configuration, the init system configuration and the log viewing for this common systems administration task all changed. I wonder if that invalidation of years of practice accounts for some of the opposition to those changes.

2014-11-11 04:06 pm
Entry tags:

USB Vendor ID for documentation

If you are writing documentation then you don't want to use an assigned magic number, like a real IP address or a real DNS name. That can readily lead to: misunderstandings; operational difficulties for the vendor's equipment if the number escapes from documentation into production; and difficulties for the author because of the risk of defamation and trademark infringement.

For these reasons standards associations commonly issue a range of their magic numbers for documentation purposes. For example, the IETF issued magic numbers for documentation in RFC2606 for DNS names, in RFC5737 for IPv4 addresses and in RFC3849 for IPv6 addresses.

I was writing some documentation for using udev, and rather than defame some vendor by suggesting that their product may need a workaround, I asked the USB Implementors' Forum if there is a USB Vendor ID for documentation purposes.

Sadly, there is not:

From: USB-IF Administration <redacted>
Subject: RE: Vendor-ID for use in documentation
Date: 11 November 2014 2:34:21 PM ACDT
To: Glen Turner <redacted>

Dear Glen,

Thank you for your message. Vendor IDs (VIDs) are owned by the vendor company and are assigned and maintained by the USB-IF only. We do not have a generic VID for documentation.

Regards, redacted