Ian's Linux Adventure

General Lew Wallace of the U.S. Civil War. I don't look like him. Howdy.

I'm documenting my adventures (and problems) here, so that I remember my mistakes, and that you may learn from my mistakes. I'm not a programmer, nor a computer expert. I'm just a tinkering guy in Milwaukee with a store and three kids to keep me busy.

If you want to respond, or help me out, or you find a way to explain something better, just use the web form. And thanks!

Make movies on the command line

August 9, 2011: The kids made a time-lapse movie on their camera. Here is how I converted it into a real movie.

First, copy the photos into their own local folder.

Next, change the filenames into something easy and sequential, for example from IMG_20508.JPG to 508.jpg
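A short loop handles the renaming. This assumes the photos all share the IMG_20 prefix from my example; adjust the pattern and the sed expression to match your camera:

```shell
# Strip the shared prefix and lowercase the extension:
# IMG_20508.JPG -> 508.jpg
for f in IMG_20*.JPG; do
    [ -e "$f" ] || continue    # skip if nothing matches
    mv "$f" "$(echo "$f" | sed 's/^IMG_20//; s/\.JPG$/.jpg/')"
done
```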

Preview using the qiv package to create a slideshow without converting to video. For example, qiv /path/to/photofolder/ --slide --delay 0.05

Convert the jpgs into a smaller uniform size using imagemagick's mogrify command.
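I didn't record my exact mogrify invocation, but a resize along these lines is the usual approach (the 640x480 geometry is an example; mogrify overwrites files in place, so work on copies):

```shell
# Shrink every jpg in the current folder to fit within 640x480
# (geometry is an example; mogrify overwrites in place, so keep copies)
for f in *.jpg; do
    [ -e "$f" ] || continue
    mogrify -resize 640x480 "$f"
done
```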

Convert the jpgs into a single movie using ffmpeg.

ffmpeg -i %03d.jpg test1.mp4        # Default frame rate (25 fps)
ffmpeg -r 10 -i %03d.jpg test2.mp4  # Slower, 10 fps

IRC and Mailing list digestor

June 29, 2011: A lot of good stuff goes out on IRC and mailing lists that I should follow, but don't because it would fill my inbox and bookmarks. So I'm going to take a shot at creating an aggregator for those.

Option 1 - An html digest-page

Option 2 - An rss-creator, so they fit with existing aggregators.

For now, I think I'll go with Option 1 - an html digest page on my laptop. A quiet way to start.

Resource: Found irclog2html, a python 2.x script that converts irc logs from text to html and colorizes the usernames (very useful!). Usage: python irclog2html textfile htmlfile

The irc channels I want to monitor are:

So the daily IRC-digest flow should look something like:

  1. Generate the date to look for
  2. Download the .txt files and convert to html...or just download the html files? That might be simpler!
  3. Strip the headers and footers off each html file, and replace them to form a single file with breaks between the different chatrooms.
Seems simple enough to do in a shell script.
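A sketch of that flow in shell. The channel names and log-server URL are placeholders, and the sed expressions assume the logs wrap their content in a plain <body> element; a real archive will need tuning:

```shell
#!/bin/sh
# 1. Generate the date to look for (yesterday's logs)
DATE=$(date -d yesterday +%Y-%m-%d)
DIGEST="$HOME/irc-digest-$DATE.html"
: > "$DIGEST"
for chan in channel1 channel2; do
    # 2. Download that day's html log (placeholder URL)
    wget -q -T 5 -t 1 -O "/tmp/$chan-$DATE.html" \
        "http://logs.example.com/$chan/$DATE.html" || true
    # 3. Strip the header and footer, append to the single digest page,
    #    and add a break between the different chatrooms
    sed -e '1,/<body>/d' -e '/<\/body>/,$d' "/tmp/$chan-$DATE.html" >> "$DIGEST"
    echo "<hr>" >> "$DIGEST"
done
```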

Additional configurations for Avahi and Samba

April 26, 2011: Now that the shared drive is functioning, let's look at some of the other samba and avahi features we can configure.

  1. Use samba and avahi to make cups printing more convenient. Cups does not need any configuration beyond the CUPS webpage admin 'use shared printers' checkbox. Similarly, samba will automatically share cups printers. Samba will also broadcast the shared printers to the network...except we happen to have that part of samba disabled, since we are using avahi for broadcasting shares instead.
    • (Optional) Configure samba to share cups printer drivers to windows machines using these instructions.

    • Configure avahi to broadcast cups printers (instructions) by creating a new file /etc/avahi/services/PRINTERNAME.service. Create a new file for *each* shared printer:
      <?xml version="1.0" standalone='no'?>
      <!DOCTYPE service-group SYSTEM "avahi-service.dtd">
      <service-group>
        <name replace-wildcards="yes">Printer SCX-4725 via USB port on %h</name>
        <service>
          <type>_ipp._tcp</type>
          <port>631</port>
          <txt-record>ty=Samsung SCX-4725</txt-record>
          <txt-record>product=(GPL Ghostscript)</txt-record>
        </service>
      </service-group>
  2. Start samba using xinetd. (Source) There are three steps to this: Add an xinetd service file, remove the samba symlinks from /etc/init.d, and restart both services to let xinetd take the port.

    • Create a new file /etc/xinetd.d/samba with the following contents:
      service netbios-ssn
      {
              id              =       netbios-ssn      # /etc/services will convert this to port 139
              socket_type     =       stream
              protocol        =       tcp
              wait            =       no
              only_from       =
              user            =       root
              server          =       /usr/sbin/smbd
              disable         =       no
              instances       =       1
      }
    • Configure samba to be started by xinetd. All we need to do is prevent samba from starting at boot; xinetd now has the job of starting it. Since Debian 6 standard uses sysvinit instead of Upstart, run the following command to stop samba from starting at boot. It removes just the symlinks to /etc/init.d/samba, so you can always recreate them: update-rc.d -f samba remove
    • (Optional) Have xinetd take over the ports right away. Stop samba (to free the ports), then restart xinetd so it takes over the job. This is only needed to avoid rebooting during the current session. Do the following commands:
      service samba stop
      service xinetd restart
      netstat -tulp                 # Check that xinetd is using the port
  3. (EXPERIMENTAL) Start debtorrent using xinetd. There are four steps to this: Add an xinetd service file, remove the debtorrent symlinks from /etc/init.d, add debtorrent to /etc/services, and restart both services to let xinetd take the port.

    • Append these lines to /etc/services:
      debtorrent      9899/tcp            # Debtorrent web access
      debtorrent      9990/tcp            # Debtorrent 
    • Create a new file /etc/xinetd.d/debtorrent with the following contents [doesn't work yet]:
      service debtorrent
      {
      	id		=	debtorrent      # /etc/services will convert this to ports 9899 and 9990
      	socket_type	=	stream
      	wait		=	no
      	user		=	root
      	server		=	/usr/sbin/debtorrent
      	disable		=	no
      	mdns		=	yes
      	instances	=	3
      }
    • Configure debtorrent to be started by xinetd. All we need to do is prevent debtorrent from starting at boot; xinetd now has the job of starting it. Since Debian 6 standard uses sysvinit instead of Upstart, run the following command to stop debtorrent from starting at boot. These are just symlinks to /etc/init.d/debtorrent, you can always recreate them: update-rc.d -f debtorrent remove
    • (Optional) Have xinetd take over the ports right away. Stop debtorrent (to free the ports), then restart xinetd so it takes over the job. This is only needed to avoid rebooting during the current session. Do the following commands:
      service debtorrent stop
      service xinetd restart
      netstat -tulp                 # Check that xinetd is using the port
  4. TODO Install ftp server
  5. TODO Install http server

Connecting to the server from a MacBook

June 6, 2011: I have a MacBook on my home network. I need to connect it to the shared drive partition. It also needs to connect regularly to the backup partition and run Time Machine.

Here are the problems, in order:

  1. Configure the Mac to manually connect to the shared partition and the backup partition. This should be easy, since the mac should be listening for Samba shares on the network. Here are THREE ways to manually mount a network share. Try them in order until something works. All can be easily unmounted through the GUI. Let's do the shared partition, then you can repeat on your own for the backup partition:
    1. Open the Finder and look in Devices for the network share. Double click on the network share icon. If the connection fails, then 'Connect As' Guest without a password.
    2. Press CMD+K in the Finder to bring up the 'Connect To Server' dialog, and enter smb // for the common drive. If it works, make sure to return to that dialog box and add it to the bookmarks!
    3. Manually create a mount point and mount the share. Open a terminal, and use
      mkdir /Users/macusername/Desktop/testfolder
      mount -t smbfs //Guest@ /Users/macusername/Desktop/testfolder
    • (Optional) In the menu Finder --> Preferences --> General, select the box 'Connected servers' to show them on the desktop. You can still see them in the Finder window.
    • Problem: Sometimes the finder would work if I double-click on the server icon...sometimes it wouldn't. The 'Connect To Server' dialog was much more reliable.
    • Problem: After a recent power outage, the Mac partition would only mount read-only. Checking dmesg, it had recorded an unclean unmount. Running fsck fixed it. (fsck can also test for the condition)
    • Problem: Samba didn't respond to any network requests for about 18 hours. Not the Mac's fault, all machines on the net were affected. Started working again just as mysteriously.
  2. Set up the Mac to autoconnect to the shared partition and the backup partition. In System Settings --> Accounts --> Login Items, add the Shared drive and the backup drive. Click the 'Hide' checkbox...even though the window will show up anyway.

    Alternately, we could adapt the plist script at http://tech.inhelsinki.nl/locationchanger/ to do it, but that's too much work for the limited benefit. We have a pretty-good solution already.

  3. Set up the backup partition on the server. There are a couple of steps to accomplish this. This assumes you already have an hfs+ partition, a mount point (/mnt/MacBook-backup) on the server, an entry for the partition in /etc/fstab, that the partition mounts at startup, and that the partition is already mounted.
    • Set file permissions and ownership
      chown nobody /mnt/MacBook-backup
      chmod 777 /mnt/MacBook-backup    # Directories need the execute bit to be entered - 666 would lock everyone out
    • Tell Samba about the backup partition by appending a new service to /etc/samba/smb.conf
    • # Added by Ian to create the MacBook-backup Drive
      [MacBook-backup]
         comment = MacBook backup
         path = /mnt/MacBook-backup
         read only = no
         create mask = 0666
         guest ok = yes
         hosts allow =    # Samba will ONLY connect this machine to the backup partition
      Restart Samba with service samba restart
    • On the Mac, test mounting each partition. Try writing and deleting a test file on each partition. If any problems, adjust the server file permissions.
    • Create a sparsebundle package
    • Configure Time Machine to recognize the server backup
    • Higher-level geekery. If I were truly high-speed, I would figure out a way to mount/unmount the backup partition when the Mac appears/disappears on the network. I'm not doing that today, but hmmmm....

Bash if/then conditionals

June 25, 2011: You know how to do bash if/then conditionals:

#! /bin/bash
if [ conditional ]; then
  action1; action2
else
  action3; action4
fi
It's possible to do the same thing in just a line (or two):
#! /bin/bash
[ conditional ] \
  && { action1; action2; } \
  || { action3; action4; }
So here's an example:
# grep -q and the test flags are your friend!
# (Keep comments off the continued lines - a comment after a backslash breaks the continuation.)
grep -q diskname /etc/mtab \
  && logger -i -f /var/log/syslog diskname is already mounted \
  || { mount diskname \
       && logger -i -f /var/log/syslog mounted diskname \
       || logger -i -f /var/log/syslog failed to mount diskname; }
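One caveat worth remembering about the && || shorthand: it is not a true if/else. If any action in the && branch fails, the || branch runs too, something a real if/else would never do:

```shell
# 'true' plays the conditional; 'false' is an action that fails
true && false || echo "the || branch ran even though the conditional was true"
```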

Building a Debian 6 router from an old Dell Optiplex GX60

June 25, 2011: For the learning experience, I want to try replacing my DSL modem and wireless router with my seven-year-old Dell desktop. The modem and router still work - this is for fun and learning.

Networks: I have three network interfaces:

  1. Remove unused packages. This reduces resource consumption and improves security. Debian standard includes an X server, so let's get rid of that. It also includes an unused internal mailserver and NFS server that open ports to listen on, so get rid of them, too. Do the following commands:
    apt-get unmarkauto dkms virtualbox-ose-guest-dkms  # VM Only. Skip this line if a real physical machine.
    apt-get remove x11-common xserver-xorg-core xfonts-base  # Get rid of X to save space
    apt-get remove exim4-base nfs-common portmap             # Get rid of port-opening packages we won't use
    apt-get remove exim4-config bsd-mailx mutt procmail telnet openssh-client geoip-database busybox eject   # Get rid of other excess packages
    apt-get autoremove                                       # Remove a dozen or so automatically-installed packages that are now orphaned
  2. Bridge eth0 and wlan0 to create a single LAN.
    • Installing the package:
      apt-get install bridge-utils
    • Manual Bridge (optional)
      ifdown eth0
      ifdown wlan0
      brctl addbr br0               # Create the bridge br0
      brctl addif br0 eth0 wlan0    # Add interfaces to the bridge
      ifconfig br0 up               # Use ifconfig - ifup won't work for br0 because it's not in the interfaces file
      ifup eth0
      ifup wlan0
      dhclient -v br0               # Should take the same IP as either eth0 or wlan0                   
    • An Automatic Bridge, set up at each boot, requires editing the /etc/network/interfaces file to something like the following. There is a difference in how hostapd gets started, since the eth0-wlan0 bridge seems to interfere with it:
      # This file describes the network interfaces available on your system
      # and how to activate them. For more information, see interfaces(5).
      # The loopback network interface
      auto lo
      iface lo inet loopback
      # The ethernet jack in client mode (not used)
      #allow-hotplug eth0
      #     iface eth0 inet dhcp
      # The unbridged ethernet jack in unbridged server mode (not used)
      #allow-hotplug eth0
      #iface eth0 inet static
      #     address
      #     network
      #     netmask
      # The wi-fi antenna in client mode (not used)
      #auto wlan0
      #iface wlan0 inet dhcp
      #     pre-up iwconfig wlan0 essid MyOldNetworkName
      # The unbridged wi-fi antenna in unbridged server mode (not used)
      #auto wlan0
      #iface wlan0 inet static
      #     address
      #     network
      #     netmask
      #     up hostapd -B /etc/hostapd/hostapd.conf
      #     down ifconfig mon.wlan0 down
      #     down ifconfig wlan0 down
      #     down pkill hostapd
      # The ethernet jack and wi-fi antenna
      # in bridged server mode
      iface eth0 inet manual
      iface wlan0 inet manual
           # Brings up mon.wlan0 upon 'ifup wlan0'
           up hostapd -B /etc/hostapd/hostapd.conf
           # Brings down mon.wlan0 upon 'ifdown wlan0'
           down ifconfig mon.wlan0 down
           # hostapd becomes a zombie when the wlan0 interface closes - kill it.
           down pkill hostapd
      auto br0
      iface br0 inet static
           bridge_ports wlan0 eth0
           # The netmask has changed- we've collapsed down to 
           # one subnet since dnsmasq can track a lot for us
           # Brings up mon.wlan0 upon startup or 'ifup br0'
           up hostapd -B /etc/hostapd/hostapd.conf
           # Brings down mon.wlan0 upon shutdown of 'ifdown br0'
           down ifconfig mon.wlan0 down
           # hostapd becomes a zombie when the interface closes - kill it.
           down pkill hostapd
      # The following lines are auto-generated for the dsl connection
      # Um, probably shouldn't touch them
      iface dsl-provider inet ppp
      pre-up /sbin/ifconfig dsl0 up # line maintained by pppoeconf
      provider dsl-provider
      auto dsl0
      iface dsl0 inet manual
    • Bridging a wireless interface and a wired interface. According to many websites, this can be a tough problem to solve...but here are three ways to do it.
      1. hostapd: We already use hostapd to put the wireless card into master mode. So a simple change to the /etc/hostapd/hostapd.conf file should resolve it:
        # I commented this out when installing the card. I restored it.
        bridge=br0
        This is the one I used.

      2. ebtables: ebtables is a debian package that changes the mac address of packets (instructions).

      3. iptables: iptables, otherwise known as firewall rules, can apparently also do this (example).
  3. DHCP and DNS using dnsmasq. Source. We need to install the packages, then edit two config files.

    • Installing the package. Do the following commands:
      apt-get install dnsmasq  # Install
      service dnsmasq stop     # Stop the autostarted service - it's not configured yet
    • Edit the end of the /etc/dnsmasq.conf file:
      # Include another file of configuration options.
      conf-dir=/etc/dnsmasq.d   # Uncomment this line
                                # This should be the only uncommented line in the file
    • Create a new configuration file at /etc/dnsmasq.d/dnsmasq-conf with something like the following contents:
      # This is the *real* configuration file for dnsmasq
      # The original one was full of great information, so we left it alone
      # Items that make us better netizens
      # Never forward plain names (without a dot or domain part)
      # Never forward addresses in the non-routed address spaces.
      # Listen only on this interface
      # Run an executable when a DHCP lease is created or destroyed.
      # The arguments sent to the script are "add" or "del",
      # then the MAC address, the IP address and finally the hostname
      # if there is one.
      ## Put the freeloader-detector script here
      # DNS Nameservers
      # First two are from the ISP, second two are Google public
      # DHCP Ranges
      # Network name is
      # .00 Network name
      # Static/Whitelisted Available .1-.62    The /26 netmask
      # .01 Server/Gateway
      # .02 - .09 Network hardware, printers, servers
      # .10 - .19 Laptops and desktops
      # .20 - .29
      # .30 - .39 Virtualbox machines
      # .40 - .49 VPN
      # .50 - .59
      # .60 - .62
      # .63 Broadcast
      # .64 - .99 not used
      # .100 - .150 Greylisted/Transients       The /24 submask
      # .151 - .254 not used
      # .255 Broadcast
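The listing above kept my comments but not the option lines themselves. For reference, the comments map onto standard dnsmasq options roughly like this; a hedged sketch with made-up placeholder addresses, not my real config:

```shell
# Write a minimal stand-in config (placeholder values throughout)
cat > /tmp/dnsmasq-conf.example <<'EOF'
# Never forward plain names (without a dot or domain part)
domain-needed
# Never forward addresses in the non-routed address spaces
bogus-priv
# Listen only on this interface
interface=br0
# Upstream nameserver (one of the Google public pair)
server=8.8.8.8
# Greylisted/transient DHCP range
dhcp-range=192.168.1.100,192.168.1.150,24h
EOF
```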
    • (Optional) - create a set of IP reservations for whitelisted machines. This is handy so dhcp doesn't screw up port forwarding. Your machines will always have the correct address. Create a new configuration file at /etc/dnsmasq.d/whitelist with something like the following contents:
      # This is the keep-the-same-IP file for dnsmasq
      # Dnsmasq will always assign the same IP to a MAC address if
      # it is listed here.
      # MAC Address, issued name, issued IP, lease time
      # If you add a second MAC address (laptop), put wireless
      # first and wired second. If both show up on the network,
      # dnsmasq will kick off the first one and keep the second.
    • (Optional) - create a set of blacklisted machines, filtered by MAC address. Create a new configuration file at /etc/dnsmasq.d/blacklist with something like the following contents:
      # This is the blacklist file for dnsmasq
    • Restart dnsmasq to reload the new configs:
      service dnsmasq restart
    • Test the dsl connection: Try the following commands:
      route      # Test the kernel IP routing table
      Kernel IP routing table
      Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
      adsl-76-229-211 *                               UH    0      0        0 ppp0
                      *                               U     0      0        0 br0
      default         *                               U     0      0        0 ppp0
      # See the upstream connection using ppp0?
      ping -c3 www.google.com  # Test connectivity
      PING www.l.google.com ( 56(84) bytes of data.
      64 bytes from icmp_req=1 ttl=56 time=12.0 ms
      64 bytes from icmp_req=2 ttl=56 time=12.0 ms
      64 bytes from icmp_req=3 ttl=52 time=12.0 ms
      --- www.l.google.com ping statistics ---
      3 packets transmitted, 3 received, 0% packet loss, time 2008ms
      rtt min/avg/max/mdev = 12.000/12.000/12.000/0.000 ms
      # Successful connectivity test. dsl0 clearly works just fine.
  4. Update the kernel settings, including enabling IP Forwarding (source):
    echo 1 > /proc/sys/net/ipv4/ip_forward    # Enable IP forwarding in the kernel
    service dnsmasq restart                   # restart dnsmasq to use the new rules
    But that only works for this session. To make the change persistent across reboots, and to secure the system more completely, edit /etc/sysctl.conf as follows:
    #  Around line 19
    # Turn on Source Address Verification in all interfaces to
    # prevent some spoofing attacks.
    net.ipv4.conf.default.rp_filter=1     # Uncomment the line
    net.ipv4.conf.all.rp_filter=1         # Uncomment the line
    #   Around line 26
    net.ipv4.tcp_syncookies=1             # Uncomment the line
    #   Around line 29
    net.ipv4.ip_forward=1                 # Uncomment the line
    Apply the changes immediately with this command:
    sysctl -p
    service dnsmasq restart    # (Optional) to use the new settings
  5. Disable the dhcp client. Since wanrouter does not use dhcp, and the other interfaces use static IPs, we don't need a dhcp client anymore; the dhcp server side is handled by dnsmasq. Removing it also closes another open port:
    apt-get remove isc-dhcp-client
  6. Security using /etc/hosts.deny and /etc/hosts.allow: Limit the IPs that router daemons will respond to. (source). This is optional now, but will become important if we add more services than basic routing.

    Deny access to any unauthorized machine: Edit /etc/hosts.deny to append the following line:

    ALL: ALL

    Define the authorized machines. As we populate the network with static IPs and reserved dhcp IPs and service daemons, we can add more detail later. Edit /etc/hosts.allow to append the following lines:

    # Anything in the range to .1.62 (static ips/whitelisted)
    # Since dhcp assigns .1.100 and higher to non-whitelisted, they will be denied daemon access

  7. Firewall and NAT using iptables. (Instructions) Save this script as /etc/network/if-up.d/00-firewall.
    # 00-Firewall script.
    # Put it in /etc/network/if-up.d
    # Delete all existing rules
    iptables -F
    iptables -t nat -F
    iptables -t mangle -F
    iptables -X
    # Always accept loopback traffic
    iptables -A INPUT -i lo -j ACCEPT
    # Drop spoofers (not currently used, use with dnsmasq)
    #iptables -A INPUT -m mac --mac-source 00:00:00:00:00:00 -j REJECT
    #iptables -A INPUT -s -j REJECT
    # Allow established connections
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A FORWARD -i ppp0 -o br0 -m state --state ESTABLISHED,RELATED -j ACCEPT
    # Allow outgoing connections from the LAN side
    iptables -A FORWARD -i br0 -o ppp0 -j ACCEPT
    # Masquerade
    iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
    # Reject any non-established connections from the WAN
    #iptables -A INPUT -i ppp0 -m state --state NEW -j REJECT
    #iptables -A FORWARD -i ppp0 -o br0 -j REJECT
    # Or, instead of the above, *limit* new WAN connections to discourage DoS and
    # brute-force attacks. Source: http://ubuntuforums.org/showthread.php?t=1779679
    iptables -A INPUT -i ppp0 -m state --state NEW -m limit --limit 1/minute --limit-burst 5 -j ACCEPT
    iptables -A INPUT -i ppp0 -j REJECT
    iptables -A FORWARD -i ppp0 -o br0 -j REJECT
  8. Test connectivity from the LAN. Hook up a test machine (like a laptop) to the LAN.
    • It should see the wireless network
    • It should be assigned a DHCP address
    • It should be able to ping www.google.com
    • The browser should be able to interact with pages on the internet

  9. Rename the gateway interface (Optional). I called the new router .12 instead of .1 temporarily to prevent confusion. If you want to replace the old router *right now*, then choose the permanent IP address (like - something easy to remember) and change the following:
    • In /etc/network/interfaces: Rename the static IP for br0
    • In /etc/dnsmasq.d/dnsmasq-conf: Update the dhcp-option=3 (router) line
    • If you were using SSH, this will cause a conflict that is easy to fix - the error message will tell you which line of /home/USERNAME/.ssh/known_hosts to delete.

Installing and using Mediatomb to share media on the LAN

May 30, 2011: Installing mediatomb, a UPnP media stream forwarder (just couldn't get gmediaserver to work properly). This way, my media can remain on the common drive, accessible to anybody on the LAN. Source

  1. Install mediatomb with the command:
    apt-get install mediatomb
  2. Change the config file at /etc/mediatomb/config.xml to enable the web-based GUI on a custom port
    # In the server/ui section
    <ui enabled="no" show-tooltips="yes">     # From
    <ui enabled="yes" show-tooltips="yes">    # To
    # In the server section below ui
    <name>My Media Server</name>
  3. TODO Start mediatomb using xinetd
    • Append /etc/services
    • Configure xinetd to start mediatomb
    • Configure mediatomb to be started by xinetd

  4. TODO Configure mediatomb firewall and port settings

  5. TODO Configure mediatomb for Youtube and Hulu

  6. Restart mediatomb to reload the changed conf file(s):
    service mediatomb restart            # If not using xinetd
    service mediatomb stop               # If using xinetd
    service xinetd restart               # If using xinetd
    sh /etc/network/if-up.d/00-firewall  # Reload the firewall, if needed
    netstat -tulp                        # Check that xinetd is handling the port
  7. Test connectivity from a client (in my case, a Xubuntu 10.10 laptop):
    sudo mkdir /media/server_media                       # Create the test mount point on the HOST
    sudo apt-get install djmount                         # Install the UPnP controller
    sudo djmount -o allow_others /media/server_media     # The media files on the server should mount to the test folder
    Try a File manager at /media/server_media - you should see the mediatomb-created database.
    Try your favorite player at /media/server_media - you should be able to play stuff.
  8. Mount automatically upon detection of the correct network.

Mounting a Raid1 Array under Debian 6

May 30, 2011: I installed two 2TB SATA hard drives, intending to build a RAID1 (mirroring) array. These are for dedicated backup and storage, not boot.

Writing to syslog using shell scripts

January 24, 2009: logger is a simple command to write a string to /var/log/syslog. For example:

logger -i "This is a test string"
#   Appends the string to syslog

logger -is -p local0.info -f /var/log/syslog This is another test string
#  -i includes the PID
#  -s sends a copy to stdout (the screen)
#  -p is the priority. See the available priorities at 'man logger'
#  -f is the log to append to
And here's how to log from a crontab:
* * * * * /usr/bin/logger -i crontab test

Shell scripts to modify cron jobs

May 28, 2011: I have a cron job that only needs to run when the network is up. Upstart monitors the network status, so all it needs is a way to turn the cron job on and off.

I used to do this by having a bash script edit the cron job itself: basically crontab -l > tempfile, play with the tempfile, then crontab tempfile to install the edited crontab (DON'T mess with the original crontab in /var/spool). That method has since been superseded by a much better and easier way.

The proper way to edit a crontab is to use the /etc/cron.d/ directory:
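Worth remembering: files in /etc/cron.d use the system crontab format, which has an extra user field before the command; a plain five-field per-user entry will silently fail there. A sketch (the schedule and username are examples):

```
# /etc/cron.d/weather-update -- note the user field before the command
*/15 * * * * me /home/me/.config/weather/weather-update.sh
```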

Tip: Upstart listeners will happily add and remove the crontab file using a bash script. Here's an example of conditional upstart listeners to add and remove a crontab file. Since Upstart runs as root, it doesn't need to use sudo.

# Begin listener to ADD
start on net-device-up
# Upstart listener to ADD a crontab, no conditions
script
/home/me/.config/weather/weather-update.sh                      # Run once at startup
ln /home/me/.config/weather/crontab /etc/cron.d/weather-update  # Add to crontab
logger -i -f /var/log/syslog Adding weather-update crontab to /etc/cron.d
end script

# Alternately,
/path/to/conditional_test_script   # You can put bash conditionals in Upstart, but
                                   # it's probably better to script them separately
# End

# Begin listener to REMOVE
start on net-device-down
# Upstart listener to REMOVE a crontab, no conditions
script
rm /etc/cron.d/weather-update
logger -i -f /var/log/syslog Removing weather-update crontab from /etc/cron.d

# Upstart listener to REMOVE a crontab, with a conditional test if the file exists
if [ -f /etc/cron.d/server-monitor ]; then
  rm /etc/cron.d/server-monitor
  logger -i -f /var/log/syslog Removing server-monitor crontab from /etc/cron.d
fi

# BUT WAIT! Let's use Bash to make that if/then block even more concise:
[ -f /etc/cron.d/server-monitor ] && rm /etc/cron.d/server-monitor && logger -i -f /var/log/syslog Removing server-monitor crontab from /etc/cron.d
end script
# End

Remote server administration - Using sockets to monitor the server

May 28, 2011: Figured out today how to create a socket. I will use a socket to pass monitoring information to my laptop. Socket tutorial

The basic setup is:

  1. Create the new monitor script on the server at /home/USERNAME/monitor. Do this as root so it has root permissions.
    # Show the current CPU temp, Drive temp, and live users
    # Must be run as root (hddtemp and access to /var/lib/misc/dnsmasq.leases require it)
    # Must be run with bash (arrays require it)
    # Section 1 - Temps, Memory, and Processes
    echo ""
    CpuTemp=$( cat /proc/acpi/thermal_zone/THRM/temperature | awk '{ print $2 }' )
    HddTempB=$( hddtemp -n /dev/sdb )   # Boot Disk
    HddTempA=$( hddtemp -n /dev/sda )   # RAID Disk
    HddTempC=$( hddtemp -n /dev/sdc )   # RAID Disk
    TotalRam=$( cat /proc/meminfo | head -n+1 | awk '{ print $2 }' )
    TotRam=$(( $TotalRam / 1000))
    FreeRam=$( cat /proc/meminfo | head -n+2 | tail -n+2 | awk '{ print $2 }' )
    FreRam=$(( $FreeRam / 1000 ))
    PctRam=$( echo "100 * $FreeRam / $TotalRam" | bc )
    echo "Temps:  CPU: $CpuTemp      HDDs: $HddTempB $HddTempA $HddTempC"
    echo "RAM:    $PctRam % Free ( $FreRam M free, $TotRam M total)"
    echo ""
    # Section 2 - System Activity
    echo "System Activity:"
    echo -e " CPU\tMEM\tPID\tCommand"
    cpulist=$( ps -eo %cpu,%mem,pid,comm | sort -r -k1 | head -n+5 | tail -n+2 ) 
    CpuCpuArray=( $( echo "$cpulist" | awk '{ print $1;}' ) )
    CpuMemArray=( $( echo "$cpulist" | awk '{ print $2;}' ) )
    CpuPidArray=( $( echo "$cpulist" | awk '{ print $3;}' ) )
    CpuComArray=( $( echo "$cpulist" | awk '{ print $4;}' ) )
    memlist=$( ps -eo %cpu,%mem,pid,comm | sort -r -k2 | head -n+5 | tail -n+2 ) 
    MemCpuArray=( $( echo "$memlist" | awk '{ print $1;}' ) )
    MemMemArray=( $( echo "$memlist" | awk '{ print $2;}' ) )
    MemPidArray=( $( echo "$memlist" | awk '{ print $3;}' ) )
    MemComArray=( $( echo "$memlist" | awk '{ print $4;}' ) )
    # Merge the duplicates into CPU by matching PID
    for CpuElement in $(seq 0 $((${#CpuCpuArray[@]} - 1 )))
      for MemElement in $(seq 0 $((${#MemCpuArray[@]} - 1 )))
        if [ "${CpuPidArray[CpuElement]}" == "${MemPidArray[MemElement]}" ]; then
    # Merge the rest into CPU
    for MemElement in $(seq 0 $((${#MemCpuArray[@]} - 1 )))
      if [ "${MemCpuArray[MemElement]}" != "" ]; then
    # Display the System Activity table by iterating through the combined array
    for DisplayElement in $(seq 0 $((${#CpuCpuArray[@]} - 1 )))
      for CpuElement in $(seq 0 $((${#CpuCpuArray[@]} - 1 )))
        if [ "${CpuCpuArray[CpuElement]}" != "" ]; then
          if [ $(echo "${CpuCpuArray[CpuElement]} > $MaxValue" | bc) -eq 1 ]; then
          if [ $(echo "${CpuMemArray[CpuElement]} > $MaxValue" | bc) -eq 1 ]; then
      echo -e " ${CpuCpuArray[MaxValueElement]}\t${CpuMemArray[MaxValueElement]}\t${CpuPidArray[MaxValueElement]}\t${CpuComArray[MaxValueElement]}"
    echo ""
    # Section 3 - Who's Online
    IPArray=( $( cat /var/lib/misc/dnsmasq.leases | awk '{ print $3;}' ) )
    NameArray=( $( cat /var/lib/misc/dnsmasq.leases | awk '{ print $4;}' ) )
    ArpArray=( $( cat /proc/net/arp | tail -n+2 | awk '{ print $1;}' ) )
    echo "Current Leases:"
    # Print the active leases, and strip them from the arrays
    for ArpElem in $(seq 0 $((${#ArpArray[@]} - 1))); do
      for IPElem in $(seq 0 $((${#IPArray[@]} - 1))); do
        if [ "${ArpArray[ArpElem]}" != "" ]; then
          if [ "${ArpArray[ArpElem]}" == "${IPArray[IPElem]}" ]; then
            echo -e " ${IPArray[IPElem]}\t${NameArray[IPElem]}\tActive"
            IPArray[IPElem]=""     # Strip the printed lease...
            ArpArray[ArpElem]=""   # ...and its ARP entry from the arrays
          fi
        fi
      done
    done
    # Print any remaining (inactive) leases
    for IPElem in $(seq 0 $((${#IPArray[@]} - 1))); do
      if [ "${IPArray[IPElem]}" != "" ]; then
        echo -e " ${IPArray[IPElem]} \t${NameArray[IPElem]}"
      fi
    done
    # Print any remaining ARP (unleased activity)
    for ArpElem in $(seq 0 $((${#ArpArray[@]} - 1))); do
      if [ "${ArpArray[ArpElem]}" != "" ]; then
        echo -e " ${ArpArray[ArpElem]}\t\t\t**Unleased ARP Activity**"
      fi
    done
    echo ""
  2. Make the new script executable with chmod +x /home/USERNAME/monitor
  3. The socket is operated by xinetd. Create the socket with the following file, /etc/xinetd.d/monitor. Testing the file using xinetd's -d flag is very handy, as errors from cut-and-paste sneak in very easily:
    service monitor
    {
       port = 3333              # Pick a number
       socket_type = stream
       wait = no
       user = root
       only_from =    # My laptop
       server = /home/ian/status
       log_on_success += USERID
       log_on_failure += USERID
       disable = no
    }
  4. Next, xinetd requires you to append the new service to the end of /etc/services:
    # Locally created services
    status		3333/tcp			# onboard status-monitor script at /home/USERNAME/monitor
  5. Finally, restart xinetd and test that port 3333 is being monitored by xinetd. No change to the router firewall is needed:
    service xinetd restart
    netstat -tulpn | grep xinetd
  6. Test the socket from an authorized system (on the 'only_from' line):
    telnet 3333    # Server IP address, port 3333
  7. Next, we need a script on my laptop to connect to the socket and store the results. I'll call this script /home/me/Scripts/server-monitor
    status=$(telnet 11105)
    (date +'%D %R') > /home/ian/.config/server-monitor/timestamp
    echo "$status" | awk 'BEGIN{RS=ORS="\n\n";FS=OFS="\n"}/Temps:/' > /home/ian/.config/server-monitor/temps_and_ram
    echo "$status" | awk 'BEGIN{RS=ORS="\n\n";FS=OFS="\n"}/System Activity:/' > /home/ian/.config/server-monitor/system_activity
    echo "$status" | awk 'BEGIN{RS=ORS="\n\n";FS=OFS="\n"}/Current Leases:/' > /home/ian/.config/server-monitor/current_leases
  8. And a cron job on my laptop to trigger the script periodically:
    */5 * * * * ian /home/ian/.config/server-monitor/server-monitor.sh
  9. But wait! Turns out there's a problem with Telnet - it won't send session output to stdout if it's triggered by cron or under a few other conditions. This is a known limitation of Telnet. You can use netcat or other programs, or here's a straight bash hack:
    exec 3<>/dev/tcp/
    status=$(cat <&3)
    echo "$status" | awk 'BEGIN{RS=ORS="\n\n";FS=OFS="\n"}/Temps:/' > /home/ian/.config/server-monitor/temps_and_ram
    echo "$status" | awk 'BEGIN{RS=ORS="\n\n";FS=OFS="\n"}/System Activity:/' > /home/ian/.config/server-monitor/system_activity
    echo "$status" | awk 'BEGIN{RS=ORS="\n\n";FS=OFS="\n"}/Current Leases:/' > /home/ian/.config/server-monitor/current_leases
  10. (OPTIONAL) And finally, conky fields on my laptop to display the server info:
    $alignc ${head /home/ian/.config/server-monitor/temps_and_ram 2}
    Server Activity      ${head /home/ian/.config/server-monitor/timestamp 1}${tail /home/ian/.config/server-monitor/system_activity 5}
    Current Leases
    ${tail /home/ian/.config/server-monitor/current_leases 5}
  11. It should also work the other way round, for example, a server script should be able to send information to a listening socket on my laptop.
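The awk one-liners in steps 7 and 9 all rely on one trick: setting the record separator (RS) to a blank line makes awk treat each blank-line-separated section of the status report as a single record, so one pattern match grabs a whole section. A stand-alone sketch (the sample status text here is made up):

```shell
#!/bin/sh
# With RS set to a blank line, each paragraph is one awk record,
# so /Current Leases:/ prints that entire section of the report.
status="Temps:  CPU: 45 C  HDDs: 38 37
RAM:    25 % Free

Current Leases:
 192.168.1.20  laptop  Active"

echo "$status" | awk 'BEGIN{RS=ORS="\n\n";FS=OFS="\n"}/Current Leases:/'
```

The same pattern with /Temps:/ would print only the first section - no grep -A guesswork about how many lines each section has.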

Remote desktop administration - Vinagre/VNC and SSH

May 26, 2011: My networks, at home and at work, are reaching a point where I wish to remotely administer (or work on) some machines. This kind of connection turns out to be terribly useful for doing bookkeeping from home. I set up a VNC Reverse Connection through SSH. As installed below, the VNC connection is automatically routed through ssh.

How To Install It

  1. Administrator's Ubuntu: (source) Use the following commands
    apt-get install openssh-server xtightvncviewer
    apt-get remove vinagre    # Last time I checked in 2009, vinagre didn't work with many VNC servers
    sudo update-alternatives --set vncviewer /usr/bin/xtightvncviewer
  2. Administrator's Gateway or Router (optional): Add the following iptables script entry to forward VNC connections to the administrator's Ubuntu system:

    This works on my Debian Router. Since I have SSH and socket access, I can build a socket to enable/disable the port forwarding.

    iptables -A PREROUTING -t nat -i eth1 -p tcp --dport 80 -j DNAT --to
    iptables -A FORWARD -p tcp -i eth1 -o eth0 -d --dport 80 -j ACCEPT
  3. Mac systems: (source) Download Vine Server and put it anyplace convenient.
    I put the original in 'Utilities', and a shortcut on the desktop named 'Get Help'
  4. Windows systems: (source) Use the following commands:
    Download and open the latest 'tightvnc' zip file for windows
    Copy WinVNC.exe and VNCHooks.dll to someplace convenient (desktop works), trash the rest of the .zip
    Download source

How To Use It - from my (helpdesk) point of view

  1. The first thing I need is my own IP address. I can find it easily with the command curl "http://www.whatismyip.com/automation/n09230945.asp"; the response is a plain-text IP address - no further processing required. An alternative using wget to query only my router is wget -O - -o /dev/null --user=MyRouterUserName --password=MyRouterPassword | grep 'var wan_ip' | cut -d'"' -f2 , but that is specific to my router.
  2. Open ports 22, 5500, 5800, and 5900 on my router and port forward them to my computer. In my case, this uses the router's web interface.
  3. Tell my viewer to listen for the connection on port 5500 with xtightvncviewer -listen 0
  4. Tell the IP address to the other side...their server needs it.
    MAC Systems:
    Find Vine Server in Finder --> Applications --> Utilities --> Vine 3.0 --> Vine Server
    Open Vine Server by double-clicking on it.
    Select Server Menu --> Reverse Connection.
    Type in my (me, the helpdesk) IP address. Instant connection! Automated/Scripted Mac connection: echo /Applications/Utilities/Vine3.0/Vine\ Server.app/OSXvnc-server -connectHost $connectionIP -dontdisconnect -nodimming
    WINDOWS Systems:
    Double click WinVNC.exe, type an easy password into the "Password:" box, and press OK. 
    Right click on WinVNC icon in system tray, choose "Add New Client", and type in my (me, the helpdesk) IP address.
  5. When finished, they can quit their VNC server (On Mac: Apple Menu --> System Preferences --> Sharing, disable Remote Desktop). They don't need to close anything else - they didn't open anything else. I need to close port 5500 on my router (via web interface) and quit my VNC client.

This is a simple script to find my internet IP, my local (router-assigned) IP, command the router to begin port forwarding, set up a job to disable port forwarding later, and set the tightvncviewer on my xubuntu box to listen. So when somebody calls, I just run the script...

# This script sets up my computer to receive a reverse-VNC connection.
# It does _NOT_ need to be run as root.
# Step 1: Set up port forwarding on the router: That's beyond the scope of this script, 
# though it should be possible using IPTables:
#    iptables -A PREROUTING -t nat -i eth1 -p tcp --dport 80 -j DNAT --to
#    iptables -A FORWARD -p tcp -i eth1 -o eth0 -d --dport 80 -j ACCEPT
# Step 2: Find my internet IP and echo it to the screen
# The server (person at the other end) needs this to connect.

ip_url="http://www.whatismyip.com/automation/n09230945.asp"   # Returns a plain-text IP address
externalIP=$(curl -s $ip_url)
echo "My IP address is: "$externalIP

# Step 3: Find my local, router-assigned IP
# We need this for Port Forwarding the router

localIP=$(ifconfig | grep "inet addr" | grep -v "" | awk '{ print $2 }' | awk -F: '{ print $2 }')
echo "My Local address is: "$localIP

# Step 4: Create a pop-up or notification with the IP information

zenity --info --title "IP Addresses" --text "Internet IP Address (the server needs this): "$externalIP"\\nLocal IP Address (for Port Forwarding): "$localIP"\\n\\nMac Instructions: Open Vine Server, File --> Reverse Connection\\n\\nWindows Instructions: Open tightvncserver, right-click on icon in system tray, select 'Add New Client'" &
zenity --notification --text "External: "$externalIP"   Local: "$localIP &

# Step 5: Set up the VNC client to listen.

xtightvncviewer -listen 0
echo "End!"

Python3, QuickBooks Pro, and qbfc + COM to make them work together

April 26, 2011: Here are some tips on how to structure a Python script to use qbfc. This takes us beyond 'hello world' and introduces the libraries/modules needed to make it work well.

I have come a long way since January. I can now reliably construct qbfc queries and parse the responses. I'm constructing a Python3 library to automate much of the process.

I really dislike Intuit's poor documentation and support. I expect much better from a commercial product. It's worth the small extra expense to support a vendor that values their customers' time.

qbfc queries consist of several elements:

Many qbfc requests and responses use the same fields over and over, so it's possible to combine multiple types in one function. It reduces duplication, but at the cost of adding complexity. There is also a balance to be made between retaining the familiar variable names -used in all the training examples- and confusing new names used when combining or abstracting. For example:

if query_type in ['InventoryAdjustmentQuery', 'CustomerQuery']:
    response['TimeCreated'] = Ret.TimeCreated.GetValue()

In this example, 'query_type' and 'Ret' (and the purpose of the surrounding try/except) would need to be explained to users. I may take a stab at it...or not.

Custom fields (DataExtRet) and additional line items, if flagged, are parsed a little differently - the additional items are contained within a list object, which must be parsed and iterated through, then added to the response dict. Exactly like the XML. Here are a couple examples:

    for line_number in range(0, Ret.InventoryAdjustmentLineRetList.Count):
        line_item = 'Line {}'.format(line_number + 1)
        response[adjustment][line_item] = {}
        LineRet = Ret.InventoryAdjustmentLineRetList.GetAt(line_number)
        response[adjustment][line_item]['TxnLineID'] = LineRet.TxnLineID.GetValue()
        response[adjustment][line_item]['ItemListID'] = LineRet.ItemRef.ListID.GetValue()
        response[adjustment][line_item]['ItemFullName'] = LineRet.ItemRef.FullName.GetValue()
        response[adjustment][line_item]['QuantityAdj'] = LineRet.QuantityDifference.GetValue()
        response[adjustment][line_item]['ValueAdj'] = LineRet.ValueDifference.GetValue()

    for item in range(0, Ret.DataExtRetList.Count):
        LineRet = Ret.DataExtRetList.GetAt(item)
        if LineRet.DataExtName.GetValue() in ['Instrument Name', 'InstrumentName']:
            response[y]['Instrument Name'] = LineRet.DataExtValue.GetValue()
        elif LineRet.DataExtName.GetValue() in ['Security Deposit', 'SecurityDeposit']:
            response[y]['Security Deposit'] = LineRet.DataExtValue.GetValue()
        elif LineRet.DataExtName.GetValue() in ['Instrument Date', 'InstrumentDate']:
            response[y]['Instrument Date'] = LineRet.DataExtValue.GetValue()                        

The basic structure of a qbfc program written in Python, after a bit of experimentation, boils down to: open the connection, begin a session, build a request set, send it, parse the response list, and end the session.

Add network statistics reporting to the Gateway

June 25, 2011: I want to know about the traffic on my network, so I'm installing the 'darkstat' package.

Darkstat, like cups, serves web pages on port 666 or 667. It can automatically start upon boot.

  1. Install the debian package: apt-get install darkstat .
  2. Edit the file /etc/darkstat/init.cfg to specify the interface, network, and webserver port.
  3. Start darkstat (without a reboot) with the command: darkstat -i ppp0
  4. Start darkstat automatically upon startup by editing /etc/init.d/darkstat. Look for the Interface="" line and add the WAN interface (ppp0). I also set it to start after xinetd - it's not an essential service, and can start late. Then, add darkstat to the runlevels (rc*.d directories) using the command update-rc.d darkstat defaults
  5. Open a port in iptables for the web browser: nano /etc/network/if-up.d/00-firewall (Your firewall method may differ)
    # Allow access to the darkstat webpage from trusted machines on the LAN only (range .1.0 - .1.62)
    iptables -A INPUT -s -p tcp --dport 667 -j ACCEPT
    And then reload the firewall with: sh /etc/network/if-up.d/00-firewall
  6. Fire up a web browser and try it:
Worked great on the first try.

Add shared storage to the Gateway

April 1, 2011: The new router is stable, the new disks are installed, and it's time to add samba to create a convenient single shared storage for authorized users on the network.

There are three elements to this:

  1. Install samba with the command:
    apt-get install samba
  2. Create and configure the shared directory using the following commands. This allows all guests to upload/download:
    mkdir /mnt/share-drive                  # Create the mount point
    chown root:users /mnt/share-drive       # Add to group 'users'
    chown nobody /mnt/share-drive           # So the folder is not owned by root
    chmod -R ug+rw,o+rw /mnt/share-drive    # Set permissions for sharing, universal read/write, no executables
    chmod -R +t /mnt/share-drive/           # Add the sticky bit, so only root can delete items
    mount /dev/md0 /mnt/share-drive         # Mount the shared partition
    touch /mnt/share-drive/testfile0        # Create a test file so the directory isn't empty
    mv /etc/samba/smb.conf /etc/samba/smb.conf.master  # Preserve the original samba conf file
  3. Configure samba by creating a new /etc/samba/smb.conf file: (Source)
    	[global]
    	workgroup = Whatever_Workgroup_Name_You_Want
    	netbios name = MYSERVER
    	server string = %h server
    	security = SHARE          # 'user' if there are user /home directories
    	map to guest = Bad User
    	obey pam restrictions = Yes
    	pam password change = Yes
    	passwd program = /usr/bin/passwd %u
    	passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
    	unix password sync = Yes
    	syslog = 0
    	log file = /var/log/samba/log.%m
    	max log size = 1000
    	disable netbios = Yes     # Disable Samba service broadcasting since we'll use Avahi
    	name resolve order = bcast host lmhosts wins
    	printcap name = cups
    	dns proxy = No
    	panic action = /usr/share/samba/panic-action %d
    	hosts allow =, 127     # Limited to LAN and loopback
    	hosts deny = all
    	[printers]
    	comment = All Printers
    	path = /var/spool/samba
    	create mask = 0700
    	printable = Yes
    	browseable = No
    	[print$]
    	comment = Printer Drivers
    	path = /var/lib/samba/printers
    	# Use whatever share name you like:
    	[share]
    	comment = shared media
    	path = /mnt/Common
    	read only = No
    	guest ok = Yes
    	create mask = 0666
    Test the new conf file with testparm, and then restart samba:
    testparm
    service samba restart
  4. Install avahi with the following command:
    apt-get install avahi-daemon
    mv /etc/avahi/avahi-daemon.conf /etc/avahi/avahi-daemon.conf.orig
  5. Configure avahi by creating a new /etc/avahi/avahi-daemon.conf
  6. Add an Avahi Service file for the shared drive by creating a new file /etc/avahi/services/smb.service:
    <?xml version="1.0" standalone='no'?>
    <!DOCTYPE service-group SYSTEM "avahi-service.dtd">
    <service-group>
       <!-- Or use the wildcard: Samba Shares on %h -->
       <name replace-wildcards="yes">My Home Server</name>
       <service>
          <type>_smb._tcp</type>
          <port>445</port>
       </service>
    </service-group>
  7. Restart samba and avahi to load the new conf files:
    service samba restart
    service avahi-daemon restart
  8. Test connectivity from a Xubuntu machine without Samba mounting. I had to do all the following on the test machine to reach the samba server and mount the shared drive:
    # Edit /etc/samba/smb.conf in the [global] section
    client lanman auth = Yes
    # Test connectivity
    testparm                                    # Because I changed /etc/samba/smb.conf
    sudo mkdir /media/netdrive                  # Do this on the HOST (Ubuntu uses /media for mounting and requires sudo)
    smbclient -L //       # List shares available - the new share should show up
    sudo mount.cifs // /media/netdrive -o guest  # Mount the network drive
    ls /media/netdrive/                         # You should see the testfile
    sudo touch /media/netdrive/testfile1        # Test if the network drive is writable
    sudo umount /media/netdrive                 # Unmount the drive after testing
    Since it works, you can now add the share to any other machines on the network. Go back to the server now.

  9. Test connectivity from OSX:
    • Open the Finder
    • Do Command+K to connect to a server
    • Enter the server: //

  10. Protect the server files from erroneous deletion using the sticky bit. Uploading and downloading files is now global. Root, of course, can delete anything anytime. Here's how to set/unset the sticky bit:
    chmod -R +t /mnt/share-drive/  # Prohibit deletes
    chmod -R -t /mnt/share-drive/  # Allow deletes
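A quick way to see what the sticky bit actually does, using a throwaway directory (the path here is just an example, not the real share):

```shell
#!/bin/sh
# The sticky bit shows up as a trailing 't' (or 'T') in the mode string.
# On a world-writable directory it means only a file's owner (or root)
# may delete or rename files in it - everyone else is refused.
demo=$(mktemp -d)
chmod +t "$demo"
ls -ld "$demo"    # Mode now ends in 't'/'T', e.g. drwx-----T
chmod -t "$demo"
ls -ld "$demo"    # Back to a plain mode string
rmdir "$demo"
```

On the share, where guest uploads are owned by 'nobody', that works out to "only root deletes" - exactly the behavior described above.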

Keep freeloaders off my network

March 26, 2011: Back in October, I whipped up a Python script that logs into the router to check the device table, then compares the MAC addresses against a whitelist. But then I replaced that router, and learned a lot more about bash scripting.

Here's the old python script, which scrapes the router's web page for data:

#!/usr/bin/env python
"""freeload_detector.py is a python2.7 script the looks for unauthorized
   DHCP leases on a local wireless network."""

import urllib2
import base64
import csv
import re
import subprocess
import string

def read_devices_from_router( ):
    # Tutorial on authenticating web pages is at http://www.voidspace.org.uk/python/articles/authentication.shtml
    router_username = "" # Put your ROUTER user name here
    router_password = "" # Put your ROUTER password here
    device_list_url = "" # Put your router device list URL here
    base_64_string = base64.encodestring('%s:%s' % 
        (router_username, router_password))[:-1]
    authorization_header = "Basic %s" % base_64_string
    request = urllib2.Request(device_list_url)
    request.add_header("Authorization", authorization_header)
    handle = urllib2.urlopen(request)
    router_web_page = handle.readlines()
    lease_string = ''
    for row in router_web_page:
        print row
        if row.count("var leases = [") == 1:   # Your equipment may vary!
            lease_string = row
    #print ('lease_string:' + lease_string)    # Debug Tool
    return lease_string

def parse_mac_string(mac_string):
    # Parses a string with a regular expression looking for MAC Addresses. Source: http://xiix.wordpress.com/2008/06/26/python-regex-for-mac-addresses/
    mac_list = []
    mac_string = string.upper(mac_string)
    expression = '([a-fA-F0-9]{2}[:|\-]?){6}'
    c = re.compile(expression).finditer(mac_string)
    for y in c:
        mac_list.append(mac_string[y.start(): y.end()])
    return mac_list

def read_whitelist( ):
    whitelist = []
    local_whitelist_file = "/home/me/Local Networks/Friendly Machines.csv"
    a = open(local_whitelist_file, 'r')
    b = a.read()
    print b
    whitelist = parse_mac_string(b)
    return whitelist

def compare_lists(whitelist, lease_list):
    not_white_list = []
    for item in lease_list:
        if item not in whitelist:
            not_white_list.append(item)
    return not_white_list

def notify_user(unknown_addresses):
    if unknown_addresses:
        text_string = ('The following MAC Addresses are unknown: ' + 
            string.join(unknown_addresses, ', '))
        subprocess.call(['zenity', '--info', '--title', 'Unknown MAC ' + \
            'Address(es) on Router', '--text', text_string])
    else:
        subprocess.call(['zenity', '--info', '--title', 'MAC Address Scan', 
            '--text', 'All Clear.\\nOnly whitelisted MAC Addresses have ' + \
            'leases on the router'])

lease_string = read_devices_from_router( )
lease_list = parse_mac_string(lease_string)
whitelist = read_whitelist( )
unknown_addresses = compare_lists(whitelist, lease_list)
notify_user(unknown_addresses)
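Incidentally, the MAC matching that parse_mac_string() does with Python's re module has a one-line shell equivalent using grep -Eo, handy for quick checks at the terminal (the sample text below is made up):

```shell
#!/bin/sh
# Extract anything that looks like a MAC address from arbitrary text,
# case-insensitively, with either ':' or '-' separators.
sample="laptop 00:1A:2B:3C:4D:5E leased
printer 8c-89-a5-01-02-03 static"
echo "$sample" | grep -Eio '([0-9a-f]{2}[-:]){5}[0-9a-f]{2}'
```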

But now I have a linux router that I can *really* play with, so I can use bash directly. Whitelist/blacklists are kept directly by dnsmasq instead of a separate spreadsheet, and by using bash arrays I can really shrink the code!

#!/bin/bash         # Bash is necessary for the arrays

# Read the list of leases, and the list of active arp pings
IPArray=( $( cat /var/lib/misc/dnsmasq.leases | awk '{ print $3;}' ) )
NameArray=( $( cat /var/lib/misc/dnsmasq.leases | awk '{ print $4;}' ) )
ArpArray=( $( cat /proc/net/arp | tail -n+2 | awk '{ print $1; }' ) )
echo "Current Leases:"

# Print the active leases, and delete them from the arrays
for ArpElement in $(seq 0 $((${#ArpArray[@]} - 1))); do
  for IPElement in $(seq 0 $((${#IPArray[@]} - 1))); do
    if [ "${ArpArray[ArpElement]}" == "${IPArray[IPElement]}" ]; then
      echo -e " ${IPArray[IPElement]}\t${NameArray[IPElement]}\tActive"
      IPArray[IPElement]=""    # Delete the printed lease...
      ArpArray[ArpElement]=""  # ...and its ARP entry from the arrays
    fi
  done
done

# Print any remaining (inactive) leases
for IPElement in $(seq 0 $((${#IPArray[@]} - 1))); do
  if [ "${IPArray[IPElement]}" != "" ]; then
    echo -e " ${IPArray[IPElement]}\t${NameArray[IPElement]}"
  fi
done

# Print any remaining ARP (unleased activity)
for ArpElement in $(seq 0 $((${#ArpArray[@]} - 1))); do
  if [ "${ArpArray[ArpElement]}" != "" ]; then
    echo -e " ${ArpArray[ArpElement]}\t\t\t**Unleased ARP Activity**"
  fi
done
Dnsmasq is configured to issue whitelisted machines a certain IP range, and non-whitelisted machines a different range. Static IPs will show up as unleased. So freeloaders are easy to spot, and easy to add to the dnsmasq blacklist or whitelist, which are simply extra dnsmasq .conf files in /etc/dnsmasq.d/.
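The two-range setup lives entirely in dnsmasq's tag mechanism. Here's a sketch of what such a .conf file might look like - the MAC, ranges, and tag name are all invented examples, and you should check the dnsmasq man page to confirm the tag:! negation works on your version:

```text
# /etc/dnsmasq.d/whitelist.conf (example values only)
# Give each known machine a tag...
dhcp-host=00:1a:2b:3c:4d:5e,set:trusted
# ...then hand out leases from a different pool per tag
dhcp-range=tag:trusted,192.168.1.10,192.168.1.99,12h
dhcp-range=tag:!trusted,192.168.1.200,192.168.1.250,1h
```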

Clone a Virtualbox Machine

March 17, 2011: How to create a clone of an existing VBox. Only works after the vbox has been shut down properly!

cd /home/USERNAME/VirtualBox\
That's it.

Reading the temperature sensors

March 15, 2011: Since the new router is really a headless server, I want to monitor the CPU, the HDDs, and any other temps with sensors. The lm-sensors package monitors the cpu and motherboard, and the hddtemp package monitors hdds (of course).

apt-get install lm-sensors hddtemp

lm-sensors needs to build a configuration file, so run the following command to create one. It will ask you a lot of questions:

sensors-detect

Then, run the following command to get the sensor values:

sensors

But it can be done even easier:

cat /proc/acpi/thermal_zone/THRM/temperature    # Current CPU temp
cat /proc/acpi/thermal_zone/THRM/trip_points    # Current shutdown temp setting (changeable in BIOS)

hddtemp is easier to set up, but it expects you to know your hard drives. Use the following command to locate your hard drives:

cat /etc/mtab
Look for sda or hda entries. Ignore any trailing numbers - those are just partitions, and we don't care about partitions. For example, one line of my mtab was:
/dev/sda1 / ext3 rw,relatime,errors=remount-ro,commit=0 0 0
So the hard drive is located at /dev/sda. To monitor the HDD temp, run the command:
hddtemp /dev/sda

Sample bash script to show the current router status:

# Show the current CPU temp, Drive temp, and live users
# Must be run as root (hddtemp and access to /var/lib/dnsmasq.leases require it)

CpuTemp=$( cat /proc/acpi/thermal_zone/THRM/temperature | awk '{ print $2 }' )
HddTempB=$( hddtemp -n /dev/sdb )   # Boot Disk
HddTempA=$( hddtemp -n /dev/sda )   # RAID Disk
HddTempC=$( hddtemp -n /dev/sdc )   # RAID Disk
TotalRam=$( cat /proc/meminfo | head -n+1 | awk '{ print $2 }' )
TotRam=$(( $TotalRam / 1000))
FreeRam=$( cat /proc/meminfo | head -n+2 | tail -n+2 | awk '{ print $2 }' )
FreRam=$(( $FreeRam / 1000 ))
PctRam=$( echo "100 * $FreeRam / $TotalRam" | bc )

Leases=$( cat /var/lib/misc/dnsmasq.leases | awk '{ print $3,$4; }' )
Arp=$( cat /proc/net/arp | tail -n+2 | awk '{ print $1; }' )   # Skip the header line

echo "Temps:  CPU: $CpuTemp      HDDs: $HddTempB $HddTempA $HddTempC"
echo "RAM:    $PctRam % Free ( $FreRam M free, $TotRam M total)"
echo "Current Leases:"
echo "$Leases"
echo "Currently Active:"
echo "$Arp"

Use debian-live to create a RAM-based Debian system

March 13, 2011: Now that the new router is up and working, let's use the debian-live system to convert it into a ram-based system. This means we can avoid spinning up hard drives most of the time, save a lot of power, and reduce a lot of heat. This is the first step toward creating a bootable ram-based router (just like the router-distros!). Unfortunately, it doesn't work yet due to the Sangoma DSL card software.

The first step is to get a usb-bootable live system that recognizes the three network connections: Ethernet, wireless (PCI card), and DSL (PCI card)

  1. Setup: Install and set up the appropriate tools.
    apt-get install checkinstall                                     # Create .deb packages from everything installed
                                                                     # Also includes build-essential, fakeroot, and other build tools
    apt-get install libncurses5-dev bison libtool                    # Required for sangoma driver install
    apt-get install linux-source-2.6.32 linux-headers-2.6.32-5-686   # Required for kernel and sangoma driver
    apt-get install live-boot live-build live-config                 # Required for creating the debian-live environment
    mkdir /usr/src/live-build-1                                        # Create a location for the live-build environment
    cd /usr/src/live-build-1
    lb config                                                        # Create the chroot environment for the live-build 
  2. Create the sangoma driver .deb package from the tar:
    • Use the same instructions as installation, with one exception:
      ./Setup install                                # Old way - direct install
      checkinstall --install=no -D ./Setup install   # New way - create a .deb
    • After installation is complete, checkinstall will reveal where the .deb is located. For example, mine was at /usr/src/wanpipe-3.5.18/wanpipe_3.5.18-1_i386.deb. In my case, checkinstall created a package that dpkg rejected for three different reasons - and here's how I fixed them:
      1. Backup the deb and extract it into a temp directory:
        cd /usr/src/wanpipe-3.5.18/
        cp wanpipe_3.5.18-1_i386.deb wanpipe_3.5.18-1_i386.deb.0  # Backup
        mkdir temp
        dpkg-deb -x wanpipe_3.5.18-1_i386.deb temp  # Extract the deb
        dpkg-deb --control wanpipe_3.5.18-1_i386.deb temp/DEBIAN
      2. Edit temp/DEBIAN/control to remove empty lines (like 'Depends: '). They cause one set of errors.
      3. Misused conffiles caused another set of errors. Replace conffiles with an empty file:
        rm temp/DEBIAN/conffiles
        touch temp/DEBIAN/conffiles
      4. One sangoma module conflicts with an identical (older) module in the kernel. It creates an install error, so we'll move it out of the package now and install it manually in a later step:
        mv temp/lib/modules/2.6.32-5-686/kernel/net/wanrouter/wanrouter.ko /usr/src
      5. Rebuild the changed deb:
        dpkg -b temp wanpipe_3.5.18-1_i386.deb      # Rebuild the new deb
    • Copy the .deb to the appropriate debian-live install folder:
      cp /usr/src/wanpipe-3.5.18/wanpipe_3.5.18-1_i386.deb /usr/src/live-build-1/config/chroot_local-packages
  3. Edit three live builder config files.
    • Edit the file /usr/src/live-build-1/config/bootstrap to select archives closer to home:
      ## Around Line 34
      #LB_MIRROR_BOOTSTRAP="http://ftp.de.debian.org/debian/" # Old
      LB_MIRROR_BOOTSTRAP="http://ftp.us.debian.org/debian/"  # New - closer mirror
      ## Around Line 38
      #LB_MIRROR_CHROOT="http://ftp.de.debian.org/debian/"    # Old
      LB_MIRROR_CHROOT="http://ftp.us.debian.org/debian/"     # New - closer mirror
      ## Around Line 48
      #LB_MIRROR_CHROOT_VOLATILE="http://ftp.de.debian.org/debian/"  # Old
      LB_MIRROR_CHROOT_VOLATILE="http://ftp.us.debian.org/debian/"   # New - closer mirror
      ## Around Line 72
      #LB_MIRROR_DEBIAN_INSTALLER="http://ftp.de.debian.org/debian/" # Old
      LB_MIRROR_DEBIAN_INSTALLER="http://ftp.us.debian.org/debian/"  # New - closer mirror
      ## Around Line 80
      #LB_ARCHIVE_AREAS="main"          # Old
      LB_ARCHIVE_AREAS="main non-free"  # New - add non-free for the firmware-linux package
    • Edit the /usr/src/live-build-1/config/binary to change important boot parameters:
      ## Around Line 9
      LB_BINARY_IMAGES="iso-hybrid"     # Old
      LB_BINARY_IMAGES="usb-hdd"        # New - build a bootable usb drive
      ## Around Line 17
      LB_BOOTAPPEND_LIVE=""             # Old
      LB_BOOTAPPEND_LIVE="ip=frommedia" # New - tells the system to use the preconfigured network info instead of overwriting it.
      ## Around Line 61
      LB_HOSTNAME="debian"       # Old
      LB_HOSTNAME="MySystem"     # New - assign the system hostname    
      ## Around Line 137
      LB_SYSLINUX_TIMEOUT="0"    # Old, keep the menu up until a human presses a key
      LB_SYSLINUX_TIMEOUT="1"    # One second, then boot to the live environment
    • Edit the file /usr/src/live-build-1/config/chroot to pick the kernel flavour:
      ## Around Line 37
      LB_LINUX_FLAVOURS="486 686"  # Old...maybe. Maybe I screwed this up
      LB_LINUX_FLAVOURS="686"      # New...at least I figured out how to fix it
  4. Create the list of additional packages to install: Create a file /usr/src/live-build-1/config/chroot_local-packageslists/connectivity.list (the suffix is important) with the following contents:
    # Additional packages required to make the network interfaces work properly
    # Minimal-install extras
    # DSL PCI Card
    # (wanpipe_3.5.18-1_i386.deb is already included in chroot_local-packages, 
    # and does not need to be listed here)
    # Wi-Fi Card
    # Ethernet
  5. Create the custom config files (normally in /etc) in config/chroot_local-includes/*. Lots of these can simply be copied from the current setup, but two must be customized because this live setup is just a test, not really a router (yet):
    # Let's fix that problematic wanrouter.ko kernel module now
    # This will overwrite the kernel's older version with the newer
    # version we extracted from the Sangoma wanpipe .deb we created earlier
    mkdir -p /usr/src/live-build-1/config/chroot_local-includes/lib/modules/2.6.32-5-686/kernel/net/wanrouter
    mv /usr/src/wanrouter.ko /usr/src/live-build-1/config/chroot_local-includes/lib/modules/2.6.32-5-686/kernel/net/wanrouter/
    # DSL Card (wanpipe/wanrouter) configs. On a regular system, these are set by the 'wancfg' program
    mkdir /usr/src/live-build-1/config/chroot_local-includes/etc
    cd /usr/src/live-build-1/config/chroot_local-includes/etc
    mkdir -p wanpipe/interfaces wanpipe/scripts
    cp /etc/wanpipe/interfaces/dsl0 wanpipe/interfaces/
    cp /etc/wanpipe/wanpipe1.conf wanpipe/
    cp /etc/wanpipe/scripts/wanpipe1-dsl0-start wanpipe/scripts/
    cp /etc/wanpipe/scripts/wanpipe1-dsl0-stop wanpipe/scripts/
    # More DSL card (wanpipe/wanrouter) configs. This fixes wanrouter to bring the dsl line up upon startup.
    mkdir init.d
    cp /etc/init.d/wanrouter init.d/     # Bugfix
    mkdir rc0.d rc1.d rc2.d rc3.d rc4.d rc5.d rc6.d rc.local
    ln -s init.d/wanrouter rc0.d/K01wanrouter
    ln -s init.d/wanrouter rc1.d/K01wanrouter
    ln -s init.d/wanrouter rc2.d/S19wanrouter
    ln -s init.d/wanrouter rc3.d/S19wanrouter
    ln -s init.d/wanrouter rc4.d/S19wanrouter
    ln -s init.d/wanrouter rc5.d/S19wanrouter
    ln -s init.d/wanrouter rc6.d/K01wanrouter
    # DSL Account (pppoe/pppd) configs. On a normal system, these are set by the 'pppoeconf' program
    mkdir ppp
    mkdir ppp/peers
    cp /etc/ppp/pap-secrets ppp/
    cp /etc/ppp/peers/dsl-provider ppp/peers/
    # Wireless access Point (hostapd) configs
    mkdir hostapd
    # Wireless access point firmware (yours will vary - mine is not included in firmware-linux)
    mkdir /usr/src/live-build-1/config/chroot_local-includes/lib
    mkdir /usr/src/live-build-1/config/chroot_local-includes/lib/firmware
    cp /lib/firmware/rt2561s.bin /usr/src/live-build-1/config/chroot_local-includes/lib/firmware/
    # Networking interface setup
    mkdir network
    Create a file hostapd/hostapd.conf (same contents as the hostapd.conf shown in the wireless access point section below).
    Create a file network/interfaces with the following contents:
    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).
    # The loopback network interface
    auto lo
    iface lo inet loopback
    # The ethernet jack in client mode (not used)
    #allow-hotplug eth0
    #iface eth0 inet dhcp
    # The ethernet jack in unbridged server mode (not used)
    allow-hotplug eth0
    iface eth0 inet static
    # The wi-fi antenna in client mode (not used)
    #auto wlan0
    #iface wlan0 inet dhcp
    #     pre-up iwconfig wlan0 essid Klein-Weisser
    # The wi-fi antenna in unbridged server mode (not used)
    auto wlan0
    iface wlan0 inet static
         up hostapd -B /etc/hostapd/hostapd.conf
         down ifconfig mon.wlan0 down
         down ifconfig wlan0 down
         down pkill hostapd
    # The ethernet jack and wi-fi antenna
    # in bridged server mode
    #iface eth0 inet manual
    #iface wlan0 inet manual
    #     up hostapd -B /etc/hostapd/hostapd.conf
    #     down ifconfig mon.wlan0 down
    #     down pkill hostapd
    #auto br0
    #iface br0 inet static
    #     bridge_ports wlan0 eth0
    #     address
    #     broadcast
    #     netmask
    #     network
    #     up hostapd -B /etc/hostapd/hostapd.conf
    #     up route add -net netmask br0
    #     down ifconfig mon.wlan0 down
    #     down pkill hostapd
    #     down route -net netmask br0
    # The following lines are auto-generated for the dsl connection
    iface dsl-provider inet ppp
         pre-up /sbin/ifconfig dsl0 up # line maintained by pppoeconf
         provider dsl-provider
    auto dsl0
    iface dsl0 inet manual
  6. Create the live environment:
    cd /usr/src/live-build-1
    lb config -k 486 -p minimal --binary-indices false --apt-recommends false --bootappend-live 'nonetworking'
    # (or use --bootappend-live 'ip=frommedia' instead)
    lb build
    lb config warning: This command is persistent - 'lb clean' does not undo previous config flags. To wipe all previous config mistakes, delete the /usr/src/live-build-1/config directory, then run 'lb config' again to recreate it.
  7. Test the live environment: FAILED TEST. Apparently the Sangoma .deb has more than three fatal problems. I still get the following errors:
    $ sudo wanrouter hwprobe
    Loading WAN drivers: wanpipe FATAL: Error inserting wanpipe (/lib/modules/2.6.32-5-686/kernel/drivers/net/wan/wanpipe.ko): Unknown symbol in module, or unknown parameter (see dmesg)
    $ dmesg | tail
    wanpipe_syncppp: Unknown symbol wan_set_ip_address
    wanpipe_syncppp: Unknown symbol wan_add_gateway
    wanpipe_syncppp: Unknown symbol wan_run_wanrouter
    wanpipe_syncppp: Unknown symbol wan_get_ip_address
    I'm going to put it aside for now, since the router does run off disk.
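The seven runlevel symlinks created back in step 5 follow a simple pattern (K01 in the halt/reboot/single runlevels, S19 in runlevels 2-5), so they can be condensed into two loops. A sketch, run from the config/chroot_local-includes/etc directory; the mkdir is repeated here only so the sketch is self-contained, and note the ../ in the target so the links resolve relative to the rc*.d directory they live in:

```shell
# Recreate the rc*.d directories and symlinks from step 5 in two loops
mkdir -p init.d rc0.d rc1.d rc2.d rc3.d rc4.d rc5.d rc6.d
for d in 0 1 6; do ln -s ../init.d/wanrouter "rc$d.d/K01wanrouter"; done
for d in 2 3 4 5; do ln -s ../init.d/wanrouter "rc$d.d/S19wanrouter"; done
```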

Add an ssh server to the router/gateway

June 25, 2011: openssh-server permits incoming ssh connections for maintenance. (Since the ssh client package is not installed, the router cannot make outgoing connections.) This server is headless, so a way to log in and maintain it is essential! This setup assumes xinetd will be monitoring the port - if not, ignore that part of the setup.

  1. Install openssh-server with the following command:
    apt-get install openssh-server
  2. First Login: Log in from a different system. It makes cut-and-paste easier, and it tests both connectivity and that the ssh server is running.
    $ ssh username@             # Reminder: The server has a fixed IP address
    The authenticity of host ' (' can't be established.
    RSA key fingerprint is 2c:11:19:89:5e:c9:91:01:c2:03:a2:55:c0:9e:9d:f0.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '' (RSA) to the list of known hosts.
    username@'s password:
    Okay, we're in!

  3. Configure xinetd to start openssh-server. Source.

    Create a file /etc/xinetd.d/ssh with the following contents:

    service ssh
    {
            socket_type     = stream
            protocol        = tcp
            wait            = no
            user            = root
            server          = /usr/sbin/sshd
            server_args     = -i
    }
  4. Configure openssh-server to be started by xinetd. All we need to do is prevent openssh-server from starting at boot; xinetd now has that job. Since Debian 6 standard uses sysvinit instead of Upstart, run the following command (it only removes symlinks to /etc/init.d/ssh, so you can always recreate them):
    update-rc.d -f ssh remove
  5. Edit the /etc/ssh/sshd_config file to limit the possible logins to just you. You *can* still login as a user, then su to root:
    ## Around Line 26
    PermitRootLogin no      # Change from 'yes' to 'no'
    ## Append to the bottom
    AllowUsers MyUserName   # Use your own username, of course.
  6. Edit /etc/hosts.allow and /etc/hosts.deny. Source.

    Since the /etc/hosts.allow and .deny global ('ALL') rules may change, let's put in some explicit sshd rules to limit access to whitelisted systems on the LAN. Edit the /etc/hosts.allow file to something similar to this:

    # Allow daemon access in the range to .1.62 (static ips/whitelisted)
    # Since my dhcp assigns .1.100 and higher to non-whitelisted, they will be
    # denied daemon access
    Edit the /etc/hosts.deny file to something similar to this:
    # Default, deny all connections not explicitly accepted in /etc/hosts.allow
    sshd: ALL
    ALL:  ALL
  7. Firewall (optional). For ssh access from the LAN, no firewall changes are needed.

  8. Test xinetd and ssh together. Stop ssh (to free the port) and restart xinetd, so it will take over the job of listening on that port. Then try connecting to ssh from another system on the LAN:
    service ssh stop       # If you are using SSH to do this on a headless system, 
                           # this will kill your session instantly; reboot instead
    service xinetd restart
    netstat -tulp          # Let's test and see if xinetd is working
    The result of netstat should look something like this:
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
    tcp        0      0 *:domain                *:*                     LISTEN      1023/dnsmasq    
    tcp        0      0 *:ssh                   *:*                     LISTEN      1106/xinetd      # Aha! It worked!
    tcp6       0      0 [::]:domain             [::]:*                  LISTEN      1023/dnsmasq    
    udp        0      0 *:domain                *:*                                 1023/dnsmasq    
    udp        0      0 *:bootps                *:*                                 1023/dnsmasq    
    udp6       0      0 [::]:domain             [::]:*                              1023/dnsmasq

Add debtorrent to the router/gateway

March 9, 2011: Debtorrent is a server that hosts the packages you already have in /var/cache/apt/archives for others to download via torrent. It's a trivial cost, and a nice way to give back to the community. It also redirects apt downloads to draw from the swarm instead of a server, when the package is available in the swarm.

  1. Install the debtorrent package: (Instructions)
    apt-get install debtorrent
  2. Edit the /etc/apt/sources.list file to point sources at the torrent server instead of the ftp server. Replace 'http://' with 'debtorrent://localhost:9988/'. Well, don't literally replace the lines - comment out the old ones and add new ones:
    #Sample existing line
    deb http://ftp.us.debian.org/debian etch main contrib non-free
    #New line
    #deb http://ftp.us.debian.org/debian etch main contrib non-free    #Commented out
    deb debtorrent://localhost:9988/ftp.us.debian.org/debian etch main contrib non-free
  3. Edit the configuration file at /etc/debtorrent/debtorrent-client.conf. Limit the listening ports to two (down from 50,000). Make the following changes to the file:
    ## Around Line 175
    # minport = 10000
    minport = 9989
    # Around Line 185
    # maxport = 60000
    maxport = 9990
  4. Edit /etc/hosts.allow and /etc/hosts.deny to limit access to the debtorrent-client daemon. Edit /etc/hosts.allow to look more like this:
    And edit /etc/hosts.deny to look more like this:
    sshd:              ALL
    debtorrent-client: ALL
    ALL:               ALL
  5. xinetd and dnsmasq do not interact with debtorrent, and no changes are required.

  6. Firewall rules to open those two listening ports. Edit the file /etc/network/if-up.d/00-firewall to add the following rules:
    # Allow incoming debtorrent requests on TCP ports 9989-9990
    # (the minport/maxport range set in debtorrent-client.conf above)
    iptables -A INPUT -p tcp --dport 9989 -j ACCEPT
    iptables -A INPUT -p tcp --dport 9990 -j ACCEPT
  7. Start the service with the following commands:

    sh /etc/network/if-up.d/00-firewall    # Reload the changed firewall
    apt-get update                         # Reload /etc/apt/sources.list
    service debtorrent-client restart      # Reload /etc/debtorrent/debtorrent-client.conf

    Check the service from another system on the LAN (like a laptop): Open a web browser to and see debtorrent at work. I bookmarked it.

    Check the ports using netstat -tulp:

    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
    tcp        0      0 *:domain                *:*                     LISTEN      1023/dnsmasq
    tcp        0      0 *:ssh                   *:*                     LISTEN      1106/xinetd
    tcp        0      0 *:9988                  *:*                     LISTEN      1744/python      # Debtorrent webserver
    tcp        0      0 *:9989                  *:*                     LISTEN      1744/python      # Debtorrent listening for external connections
    tcp6       0      0 [::]:domain             [::]:*                  LISTEN      1023/dnsmasq
    udp        0      0 *:domain                *:*                                 1023/dnsmasq
    udp        0      0 *:bootps                *:*                                 1023/dnsmasq
    udp6       0      0 [::]:domain             [::]:*                              1023/dnsmasq 
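The sources.list rewrite in step 2 is mechanical, so it can be scripted. A sketch using sed; the mirror name is just the example line from above:

```shell
# Turn a plain http deb line into its debtorrent equivalent
line="deb http://ftp.us.debian.org/debian etch main contrib non-free"
echo "$line" | sed 's|^deb http://|deb debtorrent://localhost:9988/|'
# -> deb debtorrent://localhost:9988/ftp.us.debian.org/debian etch main contrib non-free
```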

Add NTP to the router/gateway

March 9, 2011: NTP is useful to set the time of LAN devices. In addition, the 'adjtimex' package updates the hardware clock. (Slightly out-of-date instructions)

  1. Install NTP with the following command:
    apt-get install ntp adjtimex
  2. Edit the /etc/ntp.conf file to broadcast time to the LAN:
    ## Around Line 48
    # If you want to provide time to your local subnet, change the next line.
    # (Again, the address is an example only.)
  3. Edit /etc/hosts.allow and /etc/hosts.deny to limit access to ntpd. Edit /etc/hosts.allow to look more like this:
    # /26 creates the range .1.0 - .1.62
    And edit /etc/hosts.deny to look more like this:
    sshd:              ALL
    debtorrent-client: ALL
    ntpd:              ALL
    ALL:               ALL
  4. xinetd and dnsmasq do not interact with ntpd, and no changes are required.

  5. Firewall rules to open the NTP port. Edit the file /etc/network/if-up.d/00-firewall so the rules now look like this:
    # Allow incoming debtorrent requests on TCP ports 9989-9990
    iptables -A INPUT -p tcp --dport 9989 -j ACCEPT
    iptables -A INPUT -p tcp --dport 9990 -j ACCEPT
    # Allow NTP syncs on UDP port 123
    iptables -A INPUT -p udp --dport 123 -j ACCEPT 
    iptables -A OUTPUT -p udp --sport 123 -j ACCEPT
  6. Restart ntpd with these commands:
    sh /etc/network/if-up.d/00-firewall  # Reload the firewall
    service ntp restart                  # Reload /etc/ntp.conf (the Debian init script is 'ntp')
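For reference, the edit from step 2 ends up looking something like this - the broadcast address below is an example only; use your own subnet's broadcast address:

```
# /etc/ntp.conf (fragment)
# If you want to provide time to your local subnet, change the next line.
# (Again, the address is an example only.)
broadcast 192.168.123.255
```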

Add a print server to the router/gateway

March 9, 2011: CUPS print server for my always-connected home printers.

  1. Install CUPS with the following command:
    apt-get install cups
    Unfortunately, it drags in about 80MB of dependencies! But it runs, and begins listening on the appropriate ports.
  2. Firewall rules to open those listening ports for cupsd. Edit the file /etc/network/if-up.d/00-firewall to add the following rules:
    # Allow incoming debtorrent requests on TCP ports 9989-9990
    iptables -A INPUT -p tcp --dport 9989 -j ACCEPT
    iptables -A INPUT -p tcp --dport 9990 -j ACCEPT
    # Allow NTP syncs on UDP port 123
    iptables -A INPUT -p udp --dport 123 -j ACCEPT 
    iptables -A OUTPUT -p udp --sport 123 -j ACCEPT
    # Allow UPnP requests from the LAN only (range .1.0 - .1.62)
    iptables -A INPUT -s -p tcp --dport 2869 -j ACCEPT
    iptables -A INPUT -s -p udp --dport 1900 -j ACCEPT
    # Allow CUPS print jobs from the LAN only (range .1.0 - .1.62)
    iptables -A INPUT -s -p tcp --dport 631 -j ACCEPT
    iptables -A INPUT -s -p udp --dport 631 -j ACCEPT
  3. Edit /etc/cups/cupsd.conf to allow remote access to the CUPS web page:
    ## Around Line 19
    # Only listen for connections from the local machine.
    Listen       # Add this line so CUPS listens for network connections
    Listen localhost:631
    Listen /var/run/cups/cups.sock
    ## Around Line 33
    # Restrict access to the server...
      Order allow,deny
      Allow       # Add this line to authorize all LAN users to print
    ## Around Line 39
    # Restrict access to the admin pages...
      Order allow,deny
      Allow       # Add this line to authorize only whitelisted LAN users to use the CUPS admin interface
  4. Since CUPS has its own allow/deny IP ranges, we don't need to edit /etc/hosts.allow or /etc/hosts.deny.

  5. dnsmasq does not interact with CUPS, so no changes are required. xinetd can work with cups in theory, but it does not seem to work in Debian.

  6. Restart cups with:
    sh /etc/network/if-up.d/00-firewall  # Reload the firewall
    service cups restart                 # Reload /etc/cups/cupsd.conf

  7. Test the admin web page: use any machine in the range (the /26 mask), and open a web browser to the CUPS web page. You should see the CUPS web page.

    Adding the printers and doing the test print are, of course, the next steps. Since your printers likely vary from mine, I'll leave those to you.

Add UPnP to the router/gateway

March 9, 2011: UPnP connectivity is available using the 'linux-igd' package. UPnP can be used for router control, media servers, game boxes, and much more. It is designed to be insecure! Documentation can be a bit sparse.

Add a dynamic DNS client to the router/gateway

March 9, 2011: This will allow access to future VPN and other external services.

  1. Register for a dynamic DNS service. Any good search engine can point you to a good service.
  2. Install ddclient using the following command:
    apt-get install ddclient
    The installer will ask questions about the newly-registered dynamic dns account. No further configuration seems needed.
  3. Find the dynamic dns address (the router's public IP address) in any of several ways:
    route | awk '{ print $2 }' | sort | tail -n+4 | head -n+1    # If on the LAN
    dig +short myaccount.dyndns.org                              # Elsewhere on the internet
    nslookup myaccount.dyndns.org ns.dyndns.org                  # Another way from the internet
    http://www.dnscog.com/dig/myaccount.dyndns.org/              # As a web page
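The lookups above can be combined into a quick is-my-record-current check. A sketch - the function name is mine, not part of ddclient, and the live usage shown in the comment is just one way to feed it:

```shell
# Compare the address dyndns is serving against the router's actual address
same_ip() {   # usage: same_ip LOOKED_UP_IP ACTUAL_IP
    [ -n "$1" ] && [ "$1" = "$2" ]
}
# Live usage would be something like:
#   same_ip "$(dig +short myaccount.dyndns.org)" "$ROUTER_IP" && echo "record is current"
same_ip 203.0.113.7 203.0.113.7 && echo "record is current"
```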

Add a VPN (PPTP) server to the router/gateway

March 9, 2011: This will permit easy access to the home network from outside. (Instructions)

  1. Install pptpd with the following command:
    apt-get install pptpd
  2. Edit /etc/pptpd.conf to declare the available IP addresses for VPN clients. Append something like the following to the bottom of the file:
  3. Edit /etc/dnsmasq.d/dnsmasq-conf to prevent an IP address assignment conflict. In this case, .40-.49 are already within the static-assignment range, so there is no conflict and a simple comment in the file should be sufficient.
    # .40 to .49 reserved for VPN. See /etc/pptp.conf
  4. Edit /etc/ppp/pptpd-options to tell the system how to handle incoming VPN connections:
    ## Around Line 55
    ms-dns    # Add this line. We'll let dnsmasq handle dns requests
    ## Around Line 64
    ms-wins   # Add this line. We have not set up Samba yet, but we will....
  5. Edit /etc/ppp/chap-secrets to setup VPN passwords. Append using this format:
    username	 pptpd	 somepassword	 *
  6. Open tcp port 1723 in the firewall. Edit the file /etc/network/if-up.d/00-firewall:
    # Allow incoming debtorrent requests on TCP ports 9989-9990
    iptables -A INPUT -p tcp --dport 9989 -j ACCEPT
    iptables -A INPUT -p tcp --dport 9990 -j ACCEPT
    # Allow NTP syncs on UDP port 123
    iptables -A INPUT -p udp --dport 123 -j ACCEPT 
    iptables -A OUTPUT -p udp --sport 123 -j ACCEPT
    # Allow UPnP requests from trusted machines on the LAN only (range .1.0 - .1.62)
    iptables -A INPUT -s -p tcp --dport 2869 -j ACCEPT
    iptables -A INPUT -s -p udp --dport 1900 -j ACCEPT
    # Allow access to the CUPS webpage from trusted machines on the LAN only (range .1.0 - .1.62)
    iptables -A INPUT -s -p tcp --dport 631 -j ACCEPT
    iptables -A INPUT -s -p udp --dport 631 -j ACCEPT
    # Allow incoming VPN connections from the internet    # Add this line
    iptables -A INPUT -p tcp --dport 1723 -j ACCEPT       # Add this line, too.
  7. Restart pptpd with the new configuration:
    sh /etc/network/if-up.d/00-firewall  # Reload the firewall
    service pptpd restart                # Reload multiple conf files

Add xinetd services to the router/gateway

March 9, 2011: xinetd is a superserver for light-duty use - instead of keeping daemons running for all these services, xinetd launches them as needed. It saves memory and reduces idle resource consumption.

apt-get install xinetd
xinetd needs to be configured for each service it will supervise. See the section for each service added.

Creating a debian-minimal system for VM

March 11, 2011: I need to create small debian installations for VM and the router experiments. I use the Debian business card install image, and here's what I do:

Option 1 (not preferred) - Do a standard install, then the command

aptitude remove ~pstandard
This leaves the existing 'required' and 'important' packages in place.

Option 2 (preferred) - Do an Expert Install

Installing a Sangoma S518 DSL Modem card in a Dell Optiplex GX60 running Debian 6

March 5, 2011: Ebay provided a new-to-me used DSL PCI modem card to replace my 10-year-old DSL modem. The old modem still works; this is purely for fun.

The card is recognized, but no kernel module is associated with it.

lspci -vv
01:07.0 Network controller: Globespan Semiconductor Inc. Pulsar [PCI ADSL Card] (rev 01)
	Subsystem: Globespan Semiconductor Inc. Device d018
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping+ SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=slow >TAbort- SERR- 

Sangoma has a Linux driver, but it must be compiled, so it needs the linux source code to refer to during compilation. The linux kernel itself does not need to be recompiled. It's annoying and time-consuming, but it does work. Sangoma's information and downloads are at their wiki. Details on preparing the kernel source and headers are here.

Compatibility with networking and commands:

  • Do not make any changes to the /etc/network/interfaces file. The wanrouter program defines and brings up/down the dsl interface without using the file.
  • ifup/ifdown do not work, because they rely on the interfaces file.
  • ifconfig *does* work, but only after a wanpipe is already active. ifconfig up/down work without restarting the wanpipe.

The nomenclature and order of events can be confusing:

wanrouter is the command that starts everything. It's just a bash script at /usr/sbin/wanrouter. Don't be fooled by the name - it's not really a router. The wanrouter command turns a wanpipe on/off. A wanpipe is the low-level connection to the PCI card, and it creates/destroys the dsl0 high-level network interface. Wanpipes are configured by the wancfg command.

pon/poff create a pppoe connection from the dsl0 interface to the upstream network provider. The pppoe connection, including the dsl login/password, is configured by the pppoeconf command. pon/poff are actually just part of pppd, the ppp daemon, which creates another high-level interface, ppp0, to represent the actual live dsl link.

The upshot of all this is that wanrouter must create the dsl0 interface before pon can create the ppp0 connection (interface), and poff must terminate the ppp0 interface before wanrouter can destroy the dsl0 interface. Happily, wanpipes include a place to insert these commands so wanrouter appears to handle it all.

Here is how to install the Sangoma wanpipe drivers, configure the card, configure the interface, and configure pppoe. The dsl line does not need to be plugged in until the final steps of configuring pppoe.

# Install the tools needed
apt-get install build-essential linux-source-2.6.32 linux-headers-2.6.32-5-686 libncurses5-dev bison libtool pppoe

# Get the Sangoma Wanpipe package. Unpack the linux-source and wanpipe packages 
cd /usr/src
wget ftp://ftp.sangoma.com/linux/current_wanpipe/wanpipe-3.5.18.tgz
tar zxvf wanpipe-3.5.18.tgz
tar xjf /usr/src/linux-source-2.6.32.tar.bz2

# Prepare the linux source for the wanpipe install script 
cd linux-source-2.6.32
cp /usr/src/linux-headers-2.6.32-5-686/Module.symvers ./
make oldconfig && make prepare && make modules_prepare

# Run the wanpipe install script
cd /usr/src/wanpipe-3.5.18
./Setup install
The script will ask for the linux source directory: /usr/src/linux-source-2.6.32. It will throw a lot of questions about using 2.6.32-5-686 instead; just answer yes and let the installer continue.
# When install is successfully completed
cd /home/USERNAME
wanrouter hwprobe  # Test if the card is detected
wancfg             # Ncurses tool to configure the wanpipe and interface
See the Sangoma Wiki for details; really, all you need to choose is the interface name (for example, 'dsl0')
wanrouter start wanpipe1   # Test - should bring up interface
ifconfig                   # The interface should be on the list
ifconfig dsl0 down         # Test - should bring down interface
ifconfig                   # The interface should *not* be on the list
ifconfig dsl0 up           # Test - should bring up interface
ifconfig                   # The interface should be on the list
wanrouter stop wanpipe1    # Test - should bring down interface
ifconfig                   # The interface should *not* be on the list
Plug in the dsl connection in order to configure pppoe. Then run the PPPoE configuration program (pppoeconf). You need your dsl login and password at this point.
The pppoeconf program will ask two important questions:
  • Do you want to start PPPoE at each startup? NO, because it will fail - dsl0 will not be ready yet
  • Do you want to start PPPoE now? You can, but if there are any problems, the process will be orphaned. Kill it with the command 'poff -a'

You can see the PPPoE configuration (linking it to the dsl0 interface) in /etc/ppp/peers/dsl-provider. You can see your dsl username and password in /etc/ppp/pap-secrets.

To manually open/close the dsl connection:

wanrouter start     # To bring up the dsl0 interface. Doing this at boot is part of the Wanrouter install
pon dsl-provider    # To bring up the ppp0 interface, which is the real PPPoE connection 
                    # (with an IP address). We'll automate this in the next section
plog                # A handy debugging tool. Take a quick look at the log
ifconfig            # The dsl0 interface does not have an IP, and the new ppp0 interface does have an IP
poff                # To close the PPPoE connection, and bring down the ppp0 interface
wanrouter stop      # To bring down the dsl0 interface. Doing this at shutdown is part of the Wanrouter install
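The manual sequence above can be wrapped in a pair of shell functions. A sketch - the function names are mine, not part of wanpipe; the point is the ordering (dsl0 before ppp0 on the way up, the reverse on the way down):

```shell
dsl_up() {
    wanrouter start &&      # bring up the dsl0 interface first...
    pon dsl-provider        # ...then open the PPPoE link (ppp0)
}
dsl_down() {
    poff -a                 # close the PPPoE link (ppp0) first...
    wanrouter stop          # ...then bring down the dsl0 interface
}
```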

To automatically open/close the dsl connection: Go back into wancfg. Edit the wanpipe1 file --> Interface Setup --> Interface Configuration --> Advanced options. Insert a start script and a stop script as follows:

pon dsl-provider    # Append this to the bottom of the START script

poff -a             # Append this to the bottom of the STOP script
Save the wanpipe1 config file, and let's test automatic dsl connection/disconnection:
wanrouter stop      # In case it was on.
ifconfig            # Neither dsl0 nor ppp0 interfaces should be live.
wanrouter start     # Bring up dsl0. The script should then bring up ppp0
ifconfig            # ppp0 should be up, and have an IP address. If not, try again - ppp0 is often missing the first time I try.
wanrouter stop      # Bring down the intefaces
ifconfig            # Should be back to the normal down state. ppp0 and dsl0 should not be showing.
Finally, test with a reboot and a shutdown to see if the interfaces change properly. Success! Time to clean up with the following commands:
apt-get remove build-essential linux-source-2.6.32 linux-headers-2.6.32-5-686 libncurses5-dev bison libtool
apt-get autoremove
rm -r /usr/src/wanpipe-3.5.18
rm -r /usr/src/linux-source-2.6.32

BUG: missing LSB tags and overrides. When I tried to install something else later, I got the following warnings:

insserv: warning: script 'K01wanrouter' missing LSB tags and overrides
insserv: warning: script 'wanrouter' missing LSB tags and overrides
A quick search on the warnings gave an answer. LSB tags are read by insserv/init, and they are easily added near the top of the /etc/init.d/wanrouter script, between '### BEGIN INIT INFO' and '### END INIT INFO' markers. Here is a sample header that eliminated the warning:
#! /bin/sh        # Just to show where we are in the file

### BEGIN INIT INFO
# Provides:             wanrouter
# Required-Start:       $syslog
# Required-Stop:        $syslog
# Default-Start:        2 3 4 5
# Default-Stop:         0 1 6
# Short-Description:    kernel support for the DSL modem
### END INIT INFO

Final notes:
  • Three elements of the Sangoma package failed to compile: LibSangoma API library, LibStelephony API library, and API Development Utilities. I have seen no effect from those failures.
  • To uninstall the WANPIPE package, run ./Setup remove
  • There is additional documentation at /usr/share/doc/wanpipe
  • A firmware update utility is included in /etc/wanpipe/util
  • 'wanpipemon' is an included diagnostic tool. The easiest way to use it is 'wanpipemon -g' for the ncurses gui.
  • Changing the default route to send packets across the dsl connection is beyond the scope of what I wanted to do. I just wanted to see if it worked.

Installing a Rosewill RNX-G300EX Wireless PCI card in a Dell Optiplex GX60 running Debian 6

March 1, 2011: New wireless card for an old desktop system. First, I want to get it working in client mode. But since I plan for the GX60 to become the primary home server, the second step is to get it working as an Access Point.

As a client: The card is live. The system can find it using the lspci command:

lspci -vv
01:08.0 Network controller: RaLink RT2561/RT61 802.11g PCI
	Subsystem: RaLink EW-7108PCg
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=slow >TAbort- SERR- 

I did get a dmesg boot complaint about missing firmware - I fixed that with the command:

apt-get install firmware-linux

I found instructions for the driver at http://wiki.debian.org/rt61pci. And with a bit of experimentation, I figured out the /etc/network/interfaces entry:

# The loopback network interface
auto lo
iface lo inet loopback

# The ethernet jack
allow-hotplug eth0
iface eth0 inet dhcp

# The wi-fi antenna in client mode
auto wlan0
iface wlan0 inet dhcp
    pre-up iwconfig wlan0 essid MyNetwork   # Without wpa-supplicant
    wireless-essid MyNetwork                # With wpa-supplicant

wlan0 is spotty across restarts, but that seems to be a router issue - the router gets grumpy during testing when a dhcp release is followed immediately by a new request. Wait 60 seconds or so, and the router is fine.

Access Point mode: Linux has a very easy way to do this:

iwconfig wlan0 mode master
But the rt61pci driver doesn't support a 'master' mode (source), so we need to install hostapd:
apt-get install hostapd
But that doesn't work either, because hostapd is incompatible with the rt61pci driver...until we lie to hostapd (example) about which driver to use! Create a new file /etc/hostapd/hostapd.conf with the following:
interface=wlan0      # Interface we want hostapd to monitor
#bridge=br0          # Bridge not created yet
driver=nl80211       # We lie about the driver here
ctrl_interface=/var/run/hostapd   # Create a listing in /var/run

Launch hostapd using:
hostapd -dd /etc/hostapd/hostapd.conf
And then look for the ESSID broadcast on a nearby laptop. Do CTRL+C to kill hostapd. Finally, edit /etc/network/interfaces with the following:
# The loopback network interface
auto lo
iface lo inet loopback

# The ethernet jack in server mode
allow-hotplug eth0          # Run at boot, or when plugged in
iface eth0 inet static
     address    # These three are required for static address

# The wi-fi antenna in server mode
auto wlan0                  # Run at boot
iface wlan0 inet static
     address    # These three are required for static address
     # If bringing wlan0 up, start hostapd to run the access point
     up hostapd -B /etc/hostapd/hostapd.conf
     # If bringing wlan0 down, hostapd does not stop gracefully
     down ifconfig mon.wlan0 down     # Take down the hostapd-created interface
     down ifconfig wlan0 down         # Take down wlan0
     down pkill hostapd               # Kill the orphaned hostapd daemon

Manual control of the wireless interface: To bring up or take down any interface manually, use ifup/ifdown. ifup reads the interfaces file, so it automatically assigns the ESSID, automatically requests the IP address from dhcp (if applicable), and starts/stops hostapd:

ifup wlan0     
ifdown eth0
Alternately, do it manually (the harder way):
# Bring up
ifconfig wlan0 up
iwconfig wlan0 essid MyNetwork   # Wireless only - eth0 should skip this
dhclient -v wlan0

# Take down
ifconfig eth0 down

How to get Python to talk to QuickBooks Pro 2008 using QBFC

January 5, 2011: You can import/export and change most QuickBooks data (including most transactions) using the qbXML and QBFC methods. QuickBooks has a built-in COM listener, and it's possible for other applications to log in through this interface...even if QB is not running.


Hello World in QBFC: QBFC is just a thin object-oriented wrapper around qbXML, so the best way to understand QBFC is to look at some qbXML first.

<QBXML>                                   # XML container
   <QBXMLMsgsRq>                          # Container for multiple transaction types
      <InventoryAdjustmentQueryRs>        # List of transactions
         <InventoryAdjustmentRet>         # Container for each transaction
            <TxnID>12345</TxnID>          # Transaction detail
            <Memo>Something to say</Memo> # Transaction detail
As you see, getting transaction information in qbXML is like peeling an onion - layer after layer of containers. Here's how to do qbXML using Python:
import win32com.client
import xml.etree.ElementTree

# Connect to Quickbooks
sessionManager = win32com.client.Dispatch("QBXMLRP2.RequestProcessor")    
sessionManager.OpenConnection('', 'Test qbXML Request')
ticket = sessionManager.BeginSession("", 0)

# Send query and receive response
qbxml_query = """<?xml version="1.0" encoding="utf-8"?>
<?qbxml version="6.0"?>
<QBXMLMsgsRq onError="stopOnError">
   <InventoryAdjustmentQueryRq metaData="MetaDataAndResponseData">
   </InventoryAdjustmentQueryRq>
</QBXMLMsgsRq>"""
response_string = sessionManager.ProcessRequest(ticket, qbxml_query)

# Disconnect from Quickbooks
sessionManager.EndSession(ticket)     # Close the company file
sessionManager.CloseConnection()      # Close the connection

# Parse the response into an Element Tree and peel away the layers of response
QBXML = xml.etree.ElementTree.fromstring(response_string)
QBXMLMsgsRs = QBXML.find('QBXMLMsgsRs')
InventoryAdjustmentQueryRs = QBXMLMsgsRs.find('InventoryAdjustmentQueryRs')
for InvAdjRet in InventoryAdjustmentQueryRs.getiterator('InventoryAdjustmentRet'):
    txnid = InvAdjRet.find('TxnID').text
    memo = InvAdjRet.find('Memo').text   # Tag names are case-sensitive
See how each container needs to be opened to get to the bottommost data? It's about the same in both qbXML and QBFC. Here's the same query in QBFC:
import win32com.client
#No ElementTree needed, since no raw XML

# Open a QB Session
sessionManager = win32com.client.Dispatch("QBFC10.QBSessionManager")    
sessionManager.OpenConnection('', 'Test QBFC Request')
# No ticket needed in QBFC
sessionManager.BeginSession("", 0)

# Send query and receive response
requestMsgSet = sessionManager.CreateMsgSetRequest("US", 6, 0)
requestMsgSet.AppendInventoryAdjustmentQueryRq()   # Add the query to the message set
responseMsgSet = sessionManager.DoRequests(requestMsgSet)

# Disconnect from Quickbooks
sessionManager.EndSession()           # Close the company file (no ticket needed)
sessionManager.CloseConnection()      # Close the connection

# Peel away the layers of response
responseList = responseMsgSet.ResponseList
InventoryAdjustmentQueryRs = responseList.GetAt(0)
for x in range(0, InventoryAdjustmentQueryRs.Detail.Count):
    InventoryAdjustmentRet = InventoryAdjustmentQueryRs.Detail.GetAt(x)
    txnid = InventoryAdjustmentRet.TxnID.GetValue()
    memo = InventoryAdjustmentRet.Memo.GetValue()
The two small advantages of QBFC are:
  1. Each transaction is a single variable with all the data attached as attributes. That seems slightly easier to deal with than using ElementTree to convert each transaction into a dict. But not much.
  2. QBFC transactions seem to be about 20% faster than equivalent qbXML.
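As for advantage 1's alternative, here's a minimal sketch of flattening one qbXML transaction element into a dict with ElementTree (the sample XML just reuses the tags from the qbXML example above):

```python
import xml.etree.ElementTree as ET

# Sample transaction container, using the tags from the qbXML sketch above
sample = """<InventoryAdjustmentRet>
    <TxnID>12345</TxnID>
    <Memo>Something to say</Memo>
</InventoryAdjustmentRet>"""

def ret_to_dict(element):
    """Flatten one transaction element into a plain dict of tag -> text."""
    return {child.tag: child.text for child in element}

txn = ret_to_dict(ET.fromstring(sample))
print(txn['TxnID'])   # -> 12345
```

Not much code, but it is one more step than QBFC's attribute access.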

Python on Windows + OpenOffice + QuickBooks

December 21, 2010: OpenOffice comes with its own Python version built in, which seems like bloat, though they give several good reasons for it. You cannot easily add new packages like PyWin32 (required to connect to QuickBooks) to the OO Python, and other Python installs cannot easily access the Python-UNO bridge needed to drive OpenOffice.

I have some scripts that need *both* access to OpenOffice and PyWin32.

There are a couple ways to approach this

  1. OO listens on COM, so my System Python can communicate via COM using win32com.client. This approach avoids all the pain and frustration (below) of matching Python versions, importing variables, and similar silliness. The API is just a little different, so a Python-UNO script needs a few adjustments; see how I solved it. This is the solution I chose, since it has the best cross-platform compatibility and the COM bridge works with all versions of Python. The API is slightly harder, but the system setup is much simpler.
  2. I can edit the PYTHONPATH of my System Python and a few Windows Environment Variables to import UNO (the Python-UNO bridge for OO). Some have managed: instructions, better instructions. This approach works well, and you can see how I did it. The downside is that both versions of Python must match, so you are stuck at OO's version.
  3. I can edit the PYTHONPATH of my OO Python to import Win32com. instructions.
  4. For a more radical solution, I can completely replace OO's Python, so even OO must use the System's python instead. These instructions are for Windows, not Linux, and they are a bit outdated! source 1, source 2. I'm not doing this unless it really becomes necessary.

Now I can use UNO and win32com.client and any custom modules in the same Python script. I didn't make any permanent changes to Python or OpenOffice or Windows, and I didn't modify any config files. It's a clean solution.

Using Upstart to monitor network events

November 9, 2010: My wonderful weather-update script is great, and it updates every 20 minutes...but of course that's not enough. I want it to update each time the system reconnects to the network. Here's how I did it.

The event to listen for is 'net-device-up'. The event is already emitted by the /etc/network/if-up.d/upstart script whenever an interface comes up. Take a look at the script. You can simulate the event (for testing) on the command line using sudo initctl emit -n net-device-up. Upstart events seem to error out if they are not initiated by root/su/sudo.

Listen for the event by creating a new job file in /etc/init/. All it says is that when event X occurs, do actions Y (and Z). You could do essentially the same thing by putting a link to the action(s) in /etc/network/if-up.d and bypassing Upstart altogether. For example, here is the job I made, /etc/init/weather-update.conf:

# 'weather-update.conf' - desktop weather update trigger
# weather-update.sh is a shell script that runs every 20 minutes, or whenever
# the network interface comes up

description "update weather every time a new network connection comes up"

start on net-device-up

script
    exec /home/me/Scripts/weather-update.sh
end script

Once the new job file is in place, get Upstart to read it with the command sudo initctl reload-configuration. Check Upstart job status using initctl list.

But wait, we can do even more! Let's add a second Upstart script to listen for net-device-down, so I won't have an annoying obsolete weather picture on my desktop when I am offline.

Bonus! We can also enable/disable the cron job, which does not need to run when offline.

# '99-network-up.conf' - System custom upstart events
# When the network comes up, two things happen:
# weather-update.sh changes the desktop and updates conky.
# A cron job that runs weather-update every 20 minutes gets added.

description "My custom upstart events"

# The following events occur on net-device-up
start on net-device-up

script
    exec /home/me/Scripts/weather-update.sh
    ln -s /home/me/Scripts/network-crontabs /etc/cron.d/
end script

This is a foundation to build on. For example, whenever the network comes up or down, scripts can check which network, add/remove network drives, change default printers, look for updates, send backups, etc.

Tkinter tips and concepts without classes

October 31, 2010: I spent an entire day teaching myself more advanced Tkinter and writing a test program with labels and fields and buttons and more. I did not use any new classes, because I just don't understand them yet. Here's a bunch of lessons learned.

  1. Multiple mainloops. This is important for feedback to the user. The problem is that a simple window.quit() to break out of the first mainloop continues to break out of all of them...until I enclosed each step in its own Frame widget.
    w = create_window()           # w is a big dict of widget objects
    info = create_dict_of_data()  # info is a small dict of the data we care about
    info = mainloop_1(w, info)    # Prompt user for information
    info = mainloop_2(w, info)    # Confirm
    info = do_stuff_to_info(info) 
    info = mainloop_3(w, info)    # Show user the result
    Each mainloop looks like this:
    def mainloop_2(w, info):
        widgets_used = []                            # List of widgets in w to use 
        w['b'] = tkinter.Frame(w['a'])               # Create a subframe that exists for this mainloop only 
        w = create_widgets(w, w['b'], widgets_used)  # Populate the subframe
        place_widgets(w, widgets_used)
        tkinter.mainloop()                           # Wait for input
        info = get_info_from_widgets()
        w['b'].destroy()                             # Scrap all the widgets and leave an empty parent window ready for the next loop.
        return info
  2. Passing variables among all these functions is a pain, and so is tracking the data you actually want. So I use two dicts: 'w' (short for widget) contains all the tkinter window and variable and widget data and objects. Passing 'w' back and forth is much cleaner than creating and extracting tuples for everything. Data that I care about, like user input, goes into a separate 'info' dict. So I can do stuff like:
    result = check_info_for_required_data(info)
    if result:
        ...   # Carry on to the next mainloop
    The 'w' dict has a lot of elements:
    w = {}
    w['a'] = ...           # Primary window
    w['b'] = ...           # Frame within the window, parent to all the other widgets.
                           # This way, the frame can be destroyed without killing the window.
    w['widget_dict'] = {   # Widget construction information
        'label_1':  {'type':'label',  'text':'Eat More of these', 'row':1},
        'button_1': {'type':'button', 'label':'Cancel', 'command':'quit', 'row':5},
        }
    w['label_1'] = ...     # The Label widget object itself
    w['button_1'] = ...    # The Button widget object itself
  3. Using so many frames and loops means that widgets are getting created and destroyed all the time. But you can re-use a single widget-maker function since all the data is in 'w'.
  4. Keeping 'info' and 'w' separate speeds up troubleshooting and bugsquashing a lot. It keeps a clear distinction between GUI variables and real program variables. It also keeps you from accidentally screwing up info data with Tkinter functions.
  5. Make positional data an attribute of the widget. It makes placing the item much easier since you only need one variable instead of four:
    def widget_maker(w, parent, tuple_of_widgets, dict_of_widgets):
        for widget in tuple_of_widgets:
            f = dict_of_widgets[widget]  # Make it easier to read
            if f['type'] == 'label':
                w[widget] = tkinter.Label(parent, text=f['text'])
                w[widget].row = f['row']
                w[widget].column = f['column']
                w[widget].columnspan = f['columnspan']
        return w
    def place_widgets(w, tuple_of_widgets):
        for widget in tuple_of_widgets:
            w[widget].grid(row=w[widget].row, column=w[widget].column, columnspan=w[widget].columnspan)
  6. Radiobuttons are weird, since each is a separate widget, yet they all share the same variable. So they are a special case in making and placing.
  7. Entry fields:
    contents = tkinter.StringVar()
    w[widget] = tkinter.Entry(parent, textvariable=contents)
    if default_text:  w[widget].insert(0, f['text'])            
    w[widget].contents = contents
    # Run a function whenever the entry field loses focus!
    w[widget].configure(validate='focusout', validatecommand=entryvalidator)
    result = w[widget].contents.get()     # Put this after mainloop to capture the entered string
  8. OptionMenus are really cool, and have a couple nifty tricks:
    contents = tkinter.StringVar()            
    w[widget] = tkinter.OptionMenu(parent, contents, *tuple(f['list']))
    w[widget].contents = contents
    w[widget].bind('<FocusOut>', entryvalidator)  # Run a function upon losing focus
    w[widget].configure(takefocus=1)              # Make tabbable - Only OptionMenus are not tabbable by default
       # The following three attributes enable scrolling through the list using arrow keys
    w[widget].list = f['list']     # Store the list as an attribute
    w[widget].bind('<Up>', menu_up_button)
    w[widget].bind('<Down>', menu_down_button)
    Using the arrow keys to scroll through is very nifty...but there's a trick necessary. Since the menu_*_button command cannot pass any variable, the function must figure out which widget sent the signal based only on the event ID. So here's how to do that:
    def menu_up_button(event):
        widget_id = event.widget
        for widget in w:      # Remember how w is global? Handy!
            if str(w[widget]) == str(event.widget):   # Match the event back to the widget
                current_item = w[widget].contents.get()
                the_list = w[widget].list             # Hey, look! The list is with the variable. How useful!
                position = the_list.index(current_item)  # Find the location of the current entry in the list
                if position > 0:
                    w[widget].contents.set(the_list[position - 1]) # Change 'contents' to the previous item in the list

Using conky for desktop text information

October 26, 2010: Experimenting with conky to put a lot of geek eye candy on the desktop background. I found some good instructions.

  1. Created a new script ~/.config/conky/startup to launch the conky processes.
  2. New alias in my .bashrc, alias conky='killall conky; ~/.config/conky/startup &', makes manually stopping and starting conky (for testing) a breeze.
  3. New autostart entry in the Login and Sessions control panel to launch conky at login.
  4. Each new conky config file goes in ~/.config/conky/ and gets a separate line to start it in the startup script.
  5. The system information conky is from a forum post.
  6. Developed a script to read National Weather Service data and reformat it for conky display. I certainly learned a lot about sed!
    curl -o $LOCAL_NAME $RADAR_URL
    convert $LOCAL_NAME -background none -splice 0x${RADAR_IMAGE_TOP_PADDING} $LOCAL_NAME
    DISPLAY=:0.0 sudo -u me xfdesktop -reload
    # Format: Observation Time, Sky Conditions, Temperature, Humidity, and Wind
    # These are the 3rd Field of Line 10, The end of Line 5, The first half of the 13th field of line 10, The end of line 8, and the 4th field of line 10.
    OBSERVATION_TIME_ZULU=`awk '$1=="ob:" {print $3}' $LOCAL_NAME | cut -c3-7`
    SKY_CONDITIONS=`grep 'Sky conditions:' $LOCAL_NAME | cut -d: -f2 | sed 's/^[ \t]*//' | sed 's/\<./\u&/'g`
    TEMPERATURE=`grep 'Temperature:' $LOCAL_NAME | cut -d: -f2 | cut -d' ' -f2 `
    HUMIDITY=`grep 'Humidity:' $LOCAL_NAME | cut -d' ' -f3`
    WIND=`grep 'Wind:' $LOCAL_NAME | cut -d' ' -f4,8`
    echo "Observations at ${STATION} as of ${OBSERVATION_TIME}" > ${LOCAL_PATH}conky_observation_1
    SPACER='   '
    # Reformat the forecast
    # 1) Delete the first 11 lines of header
    # 2) Replace the linefeeds with spaces
    sed -e '1,11d' $LOCAL_NAME | tr '\012' ' ' > ${LOCAL_PATH}conky_forecast_1
    # Reformat the forecast. These are the sed commands, in order:
    # 1) Change the whole message to lowercase
    # 2) Replace the '...' with ': '
    # 3) Reduce double-spaces '  ' to single-spaces ' ' 
    # 4) Fix capitalization in the first sentence on new lines (' .thursday' to 
    #    '~Thursday'). The '~' is a placeholder for a newline
    # 5) Fix capitalization in other sentences on each line ('. low' to '. Low')
    # 6) Fix capitalization after colons (': mostly' to ': Mostly')
    # 7) Strip all newlines that aren't the beginning of a new forecast period
    #    (strip '30~mph', keep '~Thursday')
    # 8) Delete any '~' before the first text.
    sed -e 's/\(.*\)/\L\1/' -e 's/\.\.\./: /g' -e 's/\ \ /\ /g' -e 's/\.\([a-z]\)/~~\U\1/g' -e 's/\(\.\ [a-z]\)/\U\1/g'  -e 's/\(\:\ [a-z]\)/\U\1/g' -e 's/\([a-z1-9]\)~\([a-z1-9]\)/\1\ \2/g' -e 's/^~~//'< ${LOCAL_PATH}conky_forecast_1 > ${LOCAL_PATH}conky_forecast_2
    # Reformat the text to the desired width, and put the newlines back in.
    cat ${LOCAL_PATH}conky_forecast_2 | fmt -w $DISPLAY_WIDTH | tr '~' '\012' > ${LOCAL_PATH}conky_forecast
  7. Unused, but cool, weather URLs:
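For the record, the sed pipeline's reformatting steps can be sketched in Python too. This is a rough re-implementation of steps 1-8 above, not tested against real NWS text:

```python
import re

def reformat_forecast(text):
    """Mimic the sed commands above: lowercase, fix punctuation, recapitalize."""
    text = text.lower()                                  # 1) whole message to lowercase
    text = text.replace('...', ': ')                     # 2) '...' becomes ': '
    text = text.replace('  ', ' ')                       # 3) double spaces to single
    text = re.sub(r'\.([a-z])',                          # 4) '.thursday' -> '~~Thursday'
                  lambda m: '~~' + m.group(1).upper(), text)
    text = re.sub(r'(\. )([a-z])',                       # 5) '. low' -> '. Low'
                  lambda m: m.group(1) + m.group(2).upper(), text)
    text = re.sub(r'(: )([a-z])',                        # 6) ': mostly' -> ': Mostly'
                  lambda m: m.group(1) + m.group(2).upper(), text)
    text = re.sub(r'([a-z1-9])~([a-z1-9])', r'\1 \2', text)  # 7) strip stray '~'
    return re.sub(r'^~~', '', text)                      # 8) drop a leading '~~'

print(reformat_forecast('THURSDAY...mostly cloudy. low around 30.'))
# -> thursday: Mostly cloudy. Low around 30.
```

Whether sed or Python is nicer here is a matter of taste; sed wins on brevity, Python on readability.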

Changing the behavior of CTRL+ALT+DELETE in Xubuntu 10.10

October 16, 2010: Currently, using the Control + Alt + Delete keyboard combination sends my system to sleep. I want to change the behavior so it instead brings up the System Monitor.

Attempt #1: Modifying Upstart:

  1. Open the Upstart file for editing as root: sudo nano /etc/init/control-alt-delete.conf
  2. Edit the following line:
    exec /sbin/shutdown -r now "Control-Alt-Delete pressed"            #FROM
    exec /usr/bin/gnome-system-monitor "Control-Alt-Delete pressed"   #TO
I don't actually know how this turned out, because I found a better way and undid the change:

Attempt #2: Use the Keyboard control panel. Fast and simple. May be Xubuntu-specific.

How to get Python to talk to QuickBooks Pro 2008 using qbXML

October 9, 2010: After hitting roadblocks using QBFC, I had success querying information from QB using qbXML. An excellent working example of the python script to send/receive XML is on this guy's blog, so all that's left is to:

  1. Understand the possible queries using qbXML (see the on-screen reference, which also includes useful XML samples)
  2. Understand the Python tools for constructing and parsing out XML. Having learned ElementTree (included in the standard Python install) in the past, I hope to not learn another set of methods.

Here is my first attempt using qbXML, more successful than QBFC. This is an Inventory Adjustment Query, returning a list of 438 past inventory adjustments. Unlike QBFC and its COM instances, I can parse the XML and retrieve the data:

#!c:/Python/python.exe -u

import win32com.client
import xml.etree.ElementTree

# Open a QB Session
sessionManager = win32com.client.Dispatch("QBXMLRP2.RequestProcessor")    
sessionManager.OpenConnection('', 'Test qbXML Request')
print('The connection is open')               # DEBUG LINE
ticket = sessionManager.BeginSession("", 0)   # Empty string is the path/to/company_file, or (empty) the currently open company file

# Construct the XML query
qbxml_query = """<?xml version="1.0" encoding="utf-8"?>
<?qbxml version="6.0"?>
<QBXMLMsgsRq onError="stopOnError">
<InventoryAdjustmentQueryRq metaData="MetaDataAndResponseData">
</InventoryAdjustmentQueryRq>
</QBXMLMsgsRq>"""

print ('The company file is ready for work. Sending query...')

# Send the query to QB and receive the response
response_string = sessionManager.ProcessRequest(ticket, qbxml_query)

# Process the response
print ('Received response.')
QBXML = xml.etree.ElementTree.fromstring(response_string)
print(QBXML)
QBXMLMsgsRs = QBXML.find('QBXMLMsgsRs')
print(QBXMLMsgsRs)
InventoryAdjustmentQueryRs = QBXMLMsgsRs.find('InventoryAdjustmentQueryRs')
print(InventoryAdjustmentQueryRs)
print (str(len(InventoryAdjustmentQueryRs)) + ' Items in the Inventory Adjustment Query response')
InventoryAdjustmentRet = InventoryAdjustmentQueryRs.find('InventoryAdjustmentRet')
print(list(InventoryAdjustmentRet))
TxnID = InventoryAdjustmentRet.find('TxnID')
print ('TxnID = ' + str(TxnID.text))

# End the QB session and close the connection 
sessionManager.EndSession(ticket)     # Close the company file
sessionManager.CloseConnection()      # Close the connection
print('The session is successfully ended and the connection is closed')

Next step is to finish a complete transaction: Pull inventory data, get a customer from it, pull customer data, and change inventory data using a customer. So far looks promising.

How to get OpenOffice (LibreOffice) to read/write a SQLite database

October 4, 2010: SQLite is not natively supported by OpenOffice 3.2. Here is how to install support (source)

  1. Install the functionality and drivers with the following packages: sudo apt-get install unixodbc unixodbc-bin libsqliteodbc
  2. Add the SQLite database to the list of ODBC Databases so OO will see it:
    • Use the command ODBCConfig to open the configuration panel (use sudo ODBCConfig for multi-user access to the same database)
    • Click the User DSN tab
    • Click Add
    • Click the SQLite3 driver (not the old SQLite driver), then click OK
    • Look for the 'Name' field. This is what shows up on your list of databases to choose from
    • Look for the 'Database' field. Click the little arrow on the right edge of the field. Navigate to the SQLite database location.
    • Click the little checkmark to finish. Your database should now show in the User DSN tab.
    • Close the panel
  3. Open OO-Base, and connect to a database using ODBC
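For reference, those ODBCConfig clicks end up writing an entry like this to ~/.odbc.ini (the DSN name and database path below are just examples):

```
; ~/.odbc.ini
[MySQLiteDB]
Driver   = SQLite3
Database = /home/me/databases/mydata.db
```

The name in brackets is the 'Name' field, and it's what OO-Base shows in its ODBC database list.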

It takes just a bit of playing with, and it could certainly use a bit of developer love to make it easier. Since SQLite databases are just one file, it would be lovely if OO recognized them directly. Since SQLite is popular, it would be nice if OO included the drivers. Oh, well, you cannot have everything!

Python-uno: Creating and Changing OpenOffice (LibreOffice) documents using Python

October 4, 2010: I use a Python script to update an ODF (OO Writer) file. Currently, my script opens an existing template file (just a zip file with a bunch of XML files within it), mucks around with the XML, and closes the newly created file.

But I suspect that my need for external control of ODFs will increase soon. For example, I want to create customized invoices and envelopes from QuickBooks data, among other uses. Some stuff I can't seem to do by just mucking about with the XML. So it's time to investigate OpenOffice's Universal Network Objects (UNO).

UNO is the glue that holds OO components together, and it has an external API (Ref: OO Developer Guide) that can be accessed from python using the python-uno package (in Ubuntu).

The Hello World example works. It throws a couple errors, but it does really work...if OpenOffice is started separately. That's weird, because it fails if this simple Python code is used to start OO:

# This code causes the hello_world script to fail
import subprocess
process_one = subprocess.call(["soffice", "-accept=socket,host=localhost,port=2002;urp;"])
Hmmm. Anyone have any ideas how I can have a script start OO, then create and edit a document?
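One guess at the culprit (untested against OO itself): subprocess.call() waits for the launched program to exit, so the script never gets past that line while soffice is running. subprocess.Popen() returns immediately. A sketch, using a stand-in command in place of the soffice command line:

```python
import subprocess
import time

# Popen returns as soon as the child process starts;
# subprocess.call would block until the child exits.
process = subprocess.Popen(["sleep", "2"])   # stand-in for the soffice command line
print('The script continues while the child runs')
time.sleep(0.5)   # give OO a moment to open its listening socket before connecting
```

If that's the cause, swapping call for Popen (plus a short pause, or a retry loop around the UNO connect) should let one script both start OO and drive it.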

Reading a Quickbooks IIF file into a SQLite database

September 28, 2010: QuickBooks Pro 2008 doesn't handle some of our business needs well. For example, we rent items out for months, and while tracking these jobs in QB is possible, it is also laborious and complex. Tracking the inventory status of these items is also difficult. Also, once your data is in QB, it's hard to export...you're locked in.

Exporting from QuickBooks Pro is possible using two methods - an IIF file and a qbXML file. Today we'll do the first - it's easier.

An IIF file is really just a bunch of CSV (spreadsheet or table) tables stacked on top of each other. The first cell in each row tells you which table the row belongs in.

So a python script to simply READ an IIF file looks like:

#!/usr/bin/env python

import csv   #csv is part of the standard python install

filename = '/path/to/your/IIF_File'   #Put your filename here
iif_data = csv.reader(open(filename, 'rb'), delimiter='\t')
for row in iif_data:
    do_something_to_each_row(row)     #Row[0] has the table name

I want to export customer and other data into a SQLite database to see if SQLite offers more flexibility for my needs. By the way, bookkeeping transaction data can be imported into QB using IIF, but cannot be exported. That's part of the frustration.

So a Python script to create a SQLite database looks like:

#!/usr/bin/env python

import sqlite3   #sqlite3 is part of the standard python install

db_connection = sqlite3.connect('/tmp/iif_import.db')
cursor = db_connection.cursor()
cursor.execute('create table TABLE1(Column1 Text, Column2 Integer)')
sample_values = ['Sample Text', 12345]
cursor.execute('insert into TABLE1 values (?,?)', sample_values)
db_connection.commit()   # Don't forget to commit, or the insert is lost
db_connection.close()

Putting it all together, this is a Python 2.7 script that reads an IIF file, and separates the data into a series of SQLite tables. If you have some good ways to improve this script, I'm happy to try!

#!/usr/bin/env python

"""Getting an IIF File into SQLite

USAGE: iif_import.py iif_filename

In theory this should be easy.

1) Create a temporary SQLite database in the desired location
2) Read the IIF File
3) Create 16 tables in the database 
4) Sort each line of the IIF file into the appropriate database table
5) Commit and close the database.
6) Clean up - delete the IIF File

In SQLite, to attach the import database, use:
>attach database '/path/to/temp_database' as iif
"""

import csv, sys, sqlite3

def create_sqlite_db():
    """Create the SQLite database and prepare the tables."""
    db_connection = sqlite3.connect('/tmp/iif_import.db')
    cursor = db_connection.cursor()
    cursor.execute('create table HDR(HDR Text, PROD Text, VER Text, REL Text, \
        IIFVER Integer, DATE Text, TIME Integer, ACCNTNT Text, \
        ACCNTNTSPLITTIME Integer)')
    cursor.execute('create table ACCNT(ACCNT Text, NAME Text, REFNUM Integer, \
        TIMESTAMP Integer, ACCNTTYPE Text, OBAMOUNT Real, DESC Text, \
        ACCNUM Integer, SCD Integer, BANKNUM Integer, EXTRA Text, \
        HIDDEN Text, DELCOUNT Integer, USEID Text)')
    cursor.execute('create table CUSTITEMDICT(CUSTITEMDICT Text, "INDEX" \
        Integer, LABEL Text, INUSE Text)')
    cursor.execute('create table INVITEM(INVITEM Text, NAME Text, \
        REFNUM Integer, TIMESTAMP Integer, INVITEMTYPE Text, DESC Text, \
        QNTY1 Integer, QNTY2 Integer, PRICE Real, COST Real, TAXABLE Text, \
        REORDERPOINT Integer, EXTRA Text, CUSTFLD1 Text, CUSTFLD2 Text, \
        CUSTFLD3 Text, CUSTFLD4 Text, CUSTFLD5 Text, DEP_TYPE Integer, \
        ISPASSEDTHRU Text, HIDDEN Text, DELCOUNT Integer, USEID Text, \
        ISNEW Text, PONUM Text, SERIALNUM Integer, WARRANTY Text, \
        SALEEXPENSE Integer, NOTES Text, ASSETNUM Integer, COSTBASIS Real)')
    # NOTE: INVITEM actually has 46 columns; the rest of the column list was
    # lost here, so restore it before importing INVITEM rows.
    cursor.execute('create table CTYPE(CTYPE Text, NAME Text, \
        REFNUM Integer, TIMESTAMP Integer, HIDDEN Text)')
    cursor.execute('create table CUSTNAMEDICT(CUSTNAMEDICT Text, \
        "INDEX" Integer, LABEL Text, CUSTOMER Text, VENDOR Text, \
        EMPLOYEE Text)')
    cursor.execute('create table JOBTYPE(JOBTYPE Text, NAME Text, \
        REFNUM Integer, TIMESTAMP Integer, HIDDEN Text)')
    cursor.execute('create table CUST(CUST Text, NAME Text, REFNUM Integer, \
        TIMESTAMP Integer, BADDR1 Text, BADDR2 Text, BADDR3 Text, \
        BADDR4 Text, BADDR5 Text, SADDR1 Text, SADDR2 Text, SADDR3 Text, \
        SADDR4 Text, SADDR5 Text, PHONE1 Text, PHONE2 Text, FAXNUM Text, \
        EMAIL Text, NOTE Text, CONT1 Text, CONT2 Text, CTYPE Text, \
        TERMS Text, TAXABLE Text, SALESTAXCODE Text, "LIMIT" Integer, \
        RESALENUM Text, REP Text, TAXITEM Text, NOTEPAD Text, \
        LASTNAME Text, CUSTFLD1 Text, CUSTFLD2 Text, CUSTFLD3 Text, \
        CUSTFLD4 Text, CUSTFLD5 Text, CUSTFLD6 Text, CUSTFLD7 Text, \
        CUSTFLD8 Text, CUSTFLD9 Text, CUSTFLD10 Text, CUSTFLD11 Text, \
        CUSTFLD12 Text, CUSTFLD13 Text, CUSTFLD14 Text, CUSTFLD15 Text, \
        JOBDESC Text, JOBTYPE Text, JOBSTATUS Integer, JOBSTART Text, \
        JOBPROJEND Text, JOBEND Text, HIDDEN Text, DELCOUNT Integer, \
        PRICELEVEL Text)')
    cursor.execute('create table VTYPE(VTYPE Text, NAME Text, \
        REFNUM Integer, TIMESTAMP Integer, HIDDEN Text)')
    cursor.execute('create table VEND(VEND Text, NAME Text, REFNUM Integer, \
        TIMESTAMP Integer, PRINTAS Text, ADDR1 Text, ADDR2 Text, ADDR3 Text, \
        ADDR4 Text, ADDR5 Text, VTYPE Text, CONT1 Text, CONT2 Text, \
        PHONE1 Text, PHONE2 Text, FAXNUM Text, EMAIL Text, NOTE Text, \
        TAXID Text, "LIMIT" Integer, TERMS Text, NOTEPAD Text, \
        LASTNAME Text, CUSTFLD1 Text, CUSTFLD2 Text, CUSTFLD3 Text, \
        CUSTFLD4 Text, CUSTFLD5 Text, CUSTFLD6 Text, CUSTFLD7 Text, \
        CUSTFLD8 Text, CUSTFLD9 Text, CUSTFLD10 Text, CUSTFLD11 Text, \
        CUSTFLD12 Text, CUSTFLD13 Text, CUSTFLD14 Text, CUSTFLD15 Text, \
        "1099" Text, HIDDEN Text, DELCOUNT Integer)')
    cursor.execute('create table EMP(EMP Text, NAME Text, REFNUM Integer, \
        TIMESTAMP Integer, INIT Text, ADDR1 Text, ADDR2 Text, ADDR3 Text, \
        ADDR4 Text, ADDR5 Text, CITY Text, STATE Text, ZIP Integer, \
        SSNO Text, PHONE1 Text, PHONE2 Text, EMAIL Text, NOTE Text, \
        SALUTATION Text, CUSTFLD1 Text, CUSTFLD2 Text, CUSTFLD3 Text, \
        CUSTFLD4 Text, CUSTFLD5 Text, CUSTFLD6 Text, CUSTFLD7 Text, \
        CUSTFLD8 Text, CUSTFLD9 Text, CUSTFLD10 Text, CUSTFLD11 Text, \
        CUSTFLD12 Text, CUSTFLD13 Text, CUSTFLD14 Text, CUSTFLD15 Text, \
        HIDDEN Text, DELCOUNT Integer)')
    cursor.execute('create table SHIPMETH(SHIPMETH Text, NAME Text, \
        REFNUM Integer, TIMESTAMP Integer, HIDDEN Text)')
    cursor.execute('create table PAYMETH(PAYMETH Text, NAME Text, \
        REFNUM Integer, TIMESTAMP Integer, HIDDEN Text)')
    cursor.execute('create table INVMEMO(INVMEMO Text, NAME Text, \
        REFNUM Integer, TIMESTAMP Integer, HIDDEN Text)')
    cursor.execute('create table TERMS(TERMS Text, NAME Text, REFNUM Integer, \
        TIMESTAMP Integer, DISCPER Text, STDDUEDAYS Integer, \
        DATMINDAYS Integer, TERMSTYPE Integer, HIDDEN Text)')
    cursor.execute('create table SALESTAXCODE(SALESTAXCODE Text, CODE Text, \
        REFNUM Integer, TIMESTAMP Integer, HIDDEN Text, DESC Text, \
        TAXABLE Text)')
    return db_connection

def read_iif_data():
    """Read the IIF as a .csv file"""
    filename = sys.argv[1]
    iif_data = csv.reader(open(filename, 'rb'), delimiter='\t')
    return iif_data

def put_data_into_database(iif_data, db_connection):
    """Loop through each data row, and filter each row into the table"""
    cursor = db_connection.cursor()
    for row in iif_data:
        if row[0] == 'HDR':
            cursor.execute('insert into HDR values (?,?,?,?,?,?,?,?,?)', \
                row) #9
        elif row[0] == 'ACCNT':
            cursor.execute('insert into ACCNT values (?,?,?,?,?,?,?,?,?,?,?, \
                ?,?,?)', row) #14        
        elif row[0] == 'CUSTITEMDICT':
            cursor.execute('insert into CUSTITEMDICT values (?,?,?,?)', row) #4
        elif row[0] == 'INVITEM' and len(row) == 46:
            cursor.execute('insert into INVITEM values (?,?,?,?,?,?,?,?,?, \
                ?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?, \
                ?,?,?,?,?,?,?,?)', row) #46
        elif row[0] == 'CTYPE':
            cursor.execute('insert into CTYPE values (?,?,?,?,?)', row) #5
        elif row[0] == 'CUSTNAMEDICT':
            cursor.execute('insert into CUSTNAMEDICT values (?,?,?,?,?,?)', \
                row) #6
        elif row[0] == 'JOBTYPE':
            cursor.execute('insert into JOBTYPE values (?,?,?,?,?)', row) #5
        elif row[0] == 'CUST':
            cursor.execute('insert into CUST values (?,?,?,?,?,?,?,?,?,?,?, \
                ?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?, \
                ?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)', row) #59
        elif row[0] == 'VTYPE':
            cursor.execute('insert into VTYPE values (?,?,?,?,?)', row) #5
        elif row[0] == 'VEND':
            cursor.execute('insert into VEND values (?,?,?,?,?,?,?,?,?,?,?, \
                ?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?, \
                ?,?,?,?)', row) #45
        elif row[0] == 'EMP':
            cursor.execute('insert into EMP values (?,?,?,?,?,?,?,?,?,?,?, \
                ?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)', \
                row) #40
        elif row[0] == 'SHIPMETH':
            cursor.execute('insert into SHIPMETH values (?,?,?,?,?)', row) #5
        elif row[0] == 'PAYMETH':
            cursor.execute('insert into PAYMETH values (?,?,?,?,?)', row) #5
        elif row[0] == 'INVMEMO':
            cursor.execute('insert into INVMEMO values (?,?,?,?,?)', row) #5
        elif row[0] == 'TERMS':
            cursor.execute('insert into TERMS values (?,?,?,?,?,?,?,?,?,?, \
                ?,?)', row) #12
        elif row[0] == 'SALESTAXCODE':
            cursor.execute('insert into SALESTAXCODE values (?,?,?,?,?,?,?)', \
                row) #7


datatypes = ['HDR', 'ACCNT', 'CUSTITEMDICT', 'INVITEM', 'CTYPE',
             'CUSTNAMEDICT', 'JOBTYPE', 'CUST', 'VTYPE', 'VEND', 'EMP',
             'SHIPMETH', 'PAYMETH', 'INVMEMO', 'TERMS', 'SALESTAXCODE']

conn = create_sqlite_db()
data = read_iif_data()
put_data_into_database(data, conn)
conn.commit()   # Step 5: commit and close the database
conn.close()

Helper application for Evolution html links in Xubuntu 10.04

September 2, 2010: One of the little bugs in Xubuntu 10.04 is that Evolution automatically opens websites and other HTML links in Firefox. But I prefer Epiphany instead of Firefox. It's a small problem, and here's the easy workaround.

  1. Install gconf-editor sudo apt-get install gconf-editor
  2. Open gconf-editor. From a Terminal, run: gconf-editor
  3. Navigate to desktop/gnome/url-handlers/http
  4. Right-click on the name of the wrong browser and select 'Edit Key'. Replace the string. For example, I replaced firefox %s with epiphany %s
  5. Repeat this for desktop/gnome/url-handlers/https
  6. Test a link in Evolution. It should open in the correct browser
  7. If you installed gconf-editor, you can remove it if you wish
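If you prefer to stay on the command line, the same keys can be set with gconftool-2 instead of the graphical editor. A sketch, assuming you want Epiphany like I did (swap in your own browser command):

```shell
# Set the http and https URL handlers (same keys as in gconf-editor above).
gconftool-2 --set --type string /desktop/gnome/url-handlers/http/command "epiphany %s"
gconftool-2 --set --type string /desktop/gnome/url-handlers/https/command "epiphany %s"
# Verify the change took:
gconftool-2 --get /desktop/gnome/url-handlers/http/command
```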

Internet Connection Sharing and D-Link NAS-321 between networks with Ubuntu 10.04

June 21, 2010: I'm getting ready to spend a month with a few co-workers in a couple strange places. We currently have a D-link NAS-321 NAS that we use as a common drive for sharing photos and documents. When we travel, we want to continue to use the shared drive, plus share an internet connection to save money.

Internet Connection Sharing. A single connection over wireless, shared to a wired ethernet connection. The wired connection runs to a wireless router so many people can share the same connection wirelessly.

  1. Log into the router and DISABLE DHCP. Addresses will be assigned by the laptop using dnsmasq.
  2. Install the package 'dnsmasq': sudo apt-get install dnsmasq
  3. Test the internet connection from the laptop.
  4. Network Manager #1: Do *nothing* to the existing wireless connection to the internet.
  5. Network Manager #2: Network manager -> Edit Connections -> Wired -> Edit -> IPv4 settings -> Shared to other computers
  6. Test the internet connection from the laptop.
  7. Plug in an ethernet cable from the computer to the wireless router. Plug it into one of the router's LAN ports, *NOT* the WAN (internet) port.
  8. If the wired connection keeps disconnecting every few seconds, use the command sudo pkill dnsmasq to reset it
  9. Test the internet connection from the laptop.
  10. Test a second laptop's ability to connect to the router.
  11. Test the second laptop's internet connection through the router.

Static IP NAS on the subnet: Dnsmasq has many features, but it's not a real dhcp server. So for static-ip services on the network like printers and servers, you need to use the /etc/ethers and /etc/hosts files to let the network know about these services.

  1. Add a line to /etc/hosts: xxx.xxx.xxx.xxx Machine_Name
  2. Create a new line in /etc/ethers (a whole new file if it doesn't exist):
    # Machine_name - Model info
    aa:aa:aa:aa:aa:aa (mac address) xxx.xxx.xxx.xxx (static ip address)
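To confirm that dnsmasq is actually handing out addresses to the other laptops, you can peek at its lease file (this path is the Ubuntu default; it may differ elsewhere):

```shell
# Each line is: expiry-time  MAC-address  IP-address  hostname  client-id
cat /var/lib/misc/dnsmasq.leases
```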

Multiple monitors from Ubuntu 10.04

June 4, 2010: I want to hook my laptop up to a huge HDTV in order to make a giant monitor. It didn't work, probably due to an issue with the hardware or driver - not the concept. I did learn a lot about the xrandr tool.

The problem that I was unable to resolve was fuzziness or waviness or scan mismatch. Some internet posts say to 'play with it' but my tinkering made no difference.

TIP #1: DON'T use the 'Display' control panel to change resolution! As you may know already, it doesn't revert like Windows does, leaving you stuck with a black monitor that will persist after a restart. Instead, open a terminal window and learn to use the xrandr commands. If you make a mistake that blacks your screen, just type xrandr --auto to recover from most problems.

xrandr            # Show the resolution and state of all known monitors
xrandr --auto     # Handy to recover from a settings mistake! You can do it without seeing the screen.
xrandr --output [LVDS or VGA-0] --mode [1280x720]    # Most changes you need to make are on lines like these. In this case, LVDS is my laptop screen, and VGA-0 is my VGA-out port.
xrandr --output VGA-0 --off      # Kill the external VGA port and eliminate all residual settings. Easier than restarting X again after you are finished. You can undo even this using --auto.

TIP #2: To begin, plug in the external screen, then log out/log back in. This will restart X, giving a much greater chance of autodetection. Even if the external monitor is detected without restarting X, my video card wouldn't send signal to it unless I restarted X.

X has changed a lot since 2008. For example, Xorg.conf files are generally no longer used, instead X now autodetects and autoconfigures most equipment. If any online help is over six months old, it might be obsolete.
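Putting the commands above together, a dual-monitor session might look like this sketch. LVDS and VGA-0 are the output names on *my* machine; check xrandr's output first for yours:

```shell
xrandr                                  # Find your output names and supported modes
xrandr --output VGA-0 --mode 1280x720   # Turn on the external screen at a supported mode
xrandr --output VGA-0 --left-of LVDS    # Place it to the left of the laptop panel
xrandr --auto                           # If anything goes black, fall back to defaults
```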

Installing a Logitech QuickCam Chat to work with Skype on Ubuntu 10.04

May 18, 2010: My Logitech QuickCam Chat 046d:092e no longer works after upgrading to Ubuntu 10.04.

# Figuring out which type of QuickCam I have:
$ lsusb
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 002 Device 006: ID 046d:092e Logitech, Inc. QuickCam Chat
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Some (outdated) information at https://wiki.ubuntu.com/SkypeWebCams

# Solving the problem.  Add the following line to .bashrc:
alias skype="bash -c 'LD_PRELOAD=/usr/lib/libv4l/v4l1compat.so skype'"
# Try the command 'skype' from the command line.

# Next, fix it when launching skype from the application menu
$ sudo nano /usr/share/applications/skype.desktop

# Replace the existing Exec= line with:
Exec=bash -c "LD_PRELOAD=/usr/lib/libv4l/v4l1compat.so skype"

# Save and close and logout. Upon new login, try skype from the menu.

Installing a NAS (D-link DNS-321)

April 3, 2010: I added a DNS-321 NAS to my home network. Here's how I installed it using Ubuntu 9.10:

  1. Install the 3.5" SATA drives. They aren't included. I used 2x 1TB drives.
  2. Add it to the local wired network. Turn it on.
  3. Log in to the router to figure out the NAS IP address. Remember the IP address!
  4. Log into the NAS' built-in web server (username ADMIN, no password) to format the drives, configure all the access, and learn the top-level-folder name.
  5. Create user accounts for all access

Here's how I access it using Ubuntu 9.10:

  • Get the Samba packages: sudo apt-get install samba smbfs
  • Create a mount point: sudo mkdir /media/Sharedrive
  • Add the following line to fstab: sudo nano /etc/fstab
    //xxx.xxx.xxx.xxx/Top_Level_Folder /media/Sharedrive cifs nounix,uid=ubuntu_account_username,gid=ubuntu_account_username,file_mode=0777,dir_mode=0777,username=my_NAS_account_name,password=my_NAS_account_password 0 0
    # Your IP address and top-level-folder are very likely to be different.
    # Your uid and gid should be your Ubuntu username/groupname (your Ubuntu machine login name)
    # Your username and password are almost certainly different. 
  • Mount the new share using sudo mount -a
  • Unmount the new share using sudo umount /media/Sharedrive
  • Optional: Create a shortcut in the File Manager to make easy access to the mount point.
  • I haven't investigated automounting upon startup yet.
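Before committing the line to fstab, it can be worth testing the same mount as a one-off command. A sketch; the address, share name, and credentials are placeholders just like in the fstab line above:

```shell
sudo mount -t cifs //xxx.xxx.xxx.xxx/Top_Level_Folder /media/Sharedrive \
    -o username=my_NAS_account_name,password=my_NAS_account_password
```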

    Using a US DOD CAC Card with Ubuntu 9.10

    February 8, 2010: Adding a CAC Card reader and using a CAC card with Ubuntu used to be bloody hard. Getting the hardware recognized, getting the add-ons to Firefox and Evolution, installing the certificates, what a pain!

    Well, I tried again, using the Ubuntu help center instructions at https://help.ubuntu.com/community/CommonAccessCard

    Result: I can log into AKO using my CAC Card! After four years of hoping. Hooray!

    Quick and Easy CD-ripping

    November 27, 2009: Ripping a CD to add to my music collection using the command-line application abcde.

    To install abcde, use sudo apt-get install abcde

    To rip a CD, insert the CD and use the terminal:

    cd Music
    abcde -x -o flac
    # This will rip the CD to FLAC (lossless) format, and eject the CD when complete

    Linux converter for Microsoft .lit files

    November 15, 2009: .lit is a proprietary format, and must be converted using the C-lit application, included in the 'epub-utils' package. It can then be read with the 'fbreader' e-book reader.

    To install epub-utils and fbreader, use sudo apt-get install epub-utils fbreader

    To convert a book file, use lit2epub /path/to/book.lit

    To read a converted book, open fbreader and point it to /path/to/book.epub
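    To convert a whole folder of .lit files at once, a simple shell loop works (a sketch, assuming lit2epub behaves as above):

```shell
cd /path/to/books
for f in *.lit; do
    lit2epub "$f"
done
```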

    Video, photos, and music sharing with my phone

    July 13, 2009: I have a snazzy new geek-phone, a Shuttle from Virgin Mobile. It has a 4GB MicroSD slot so files can be exchanged with my Xubuntu system.

    Installing Xubuntu 9.04 on an emachines E625-5192

    July 7, 2009: Received my new laptop today - I need it for the fall, and got it a little early due to a sale.

    What went well: I created a set of Restore DVDs (in case I want Windows back), then removed Windows and installed Xubuntu 9.04 full-disk. Copied over my old /home directory, and installed all my favorite apps. E-mail, web, games, .bashrc, most desktop settings, etc. transferred without a hiccup. Recreated my crontab. Wireless networking and video work great. Built-in card reader reads all cards from my cameras and phone. Machine is noticeably faster. FN-key brightness control works. FN-key multimedia controls work.

    Solved Problems:

    Problems: No built-in webcam. Keyboard is different, and will take time to get used to - many typos in the meantime. TomTom GPS automatic update doesn't work on Linux. Department of Defense forms and other applications are Windows-specific.

    Lossless formats and converting audio using SoundConverter

    July 8, 2009: FLAC is a lossless audio storage format. So I'm going to try converting my audio collection to FLAC. Converting lossy (.mp3, .wma) files to a lossless format is a bit of a waste - better to rip them directly to FLAC next time. I use .mp3 files on my phone.

    The 'soundconverter' package is an easy and convenient way to convert. From the package description "It reads sound files in any format supported by GStreamer and outputs them in Ogg Vorbis, FLAC, or WAV format, or MP3 format if you have the GStreamer LAME plugin." As far as I can tell, the LAME plugin is included in the ubuntu-restricted-extras package.

    The GUI works, but I haven't figured out the command line yet.

    Creating Custom color palettes using The GIMP

    July 6, 2009: My spouse wants to create art using carefully-arranged Rubik's Cubes. She needs a tool to manipulate images to the right color and pixellation to look cube-ready (solving them to the displayed configuration is another issue).

    So let's set up The GIMP to do it.

    Preparation: GIMP needs to know what colors to use, so let's create a custom color palette. (This only needs to be done the first time)

    1. Find an image or two with cube colors (Google is handy for this). Copy the images to your clipboard.
    2. Import the image to GIMP: File -> Create -> From Clipboard
    3. Create the new color palette: Image -> Mode -> Indexed -> Use Custom Palette -> Open The Palette Selection Dialog -> New Palette
    4. Select the six colors: Use the color-picker tool to change the foreground color, then right-click in the palette-editor to add each color.
    5. Save the new palette with a name you will remember, like 'Cube Colors'

    Changing an image

    1. Import the image into GIMP
    2. Reduce the image to the six cube colors - Image -> Mode -> Indexed -> Use Custom Palette -> Cube Colors
    3. Reduce the image to a manageable size with Image -> Scale Image. Pick a small size that is a multiple of 3 (3 rows/columns on each cube)
    4. NEED a way to split the image into 3x3 (blown up to 9x9) squares for each cube.

    Scripting for batch-resizing: This looks possible using Imagemagick - see "using pre-defined color maps" for an easy way to get Imagemagick to reduce the colors.
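    As a sketch of that ImageMagick approach: save the six cube colors as one small image (cube_palette.png is a made-up name here), then shrink each photo and force it onto that palette. Depending on the ImageMagick version, the operator is -remap (newer) or -map (older):

```shell
# Shrink to exactly 30x30 pixels (10 cubes of 3x3 squares per side), then
# reduce the image to only the colors found in cube_palette.png.
convert input.jpg -resize 30x30\! -remap cube_palette.png output.png
# On older ImageMagick versions, use -map instead of -remap.
```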

    Batch converting photos for a picture frame using Imagemagick

    July 4, 2009: Our store has a 7-inch photo frame, with a 2GB stick in it. So we can just drag-and-drop lots of photos onto the stick.

    But how can we use Imagemagick, the command-line photo editor, to batch-resize a whole bunch of images...perhaps as part of a script?

    # Resize an image to 480 across (keeping in scale for vertical).
    mogrify -sample 480 foldername/filename.jpg
    # Resize a whole directory of files from 4000 x 3000 to 400 x 300.
    mogrify -sample 400 foldername/*.jpg

    Installing Empathy on Xubuntu 9.04

    July 3, 2009: Trying Empathy instead of Pidgin.

    Installing a Samsung SCX-4725FN printer under Xubuntu 9.04

    July 2, 2009: Instead of the installation disks, I used these instructions

    Then go into Settings -> Printer -> Add New Printer and let it autodetect the new printer on the network.

    Xubuntu user-level login scripts

    July 1, 2009: Here is how to get a custom script to run at XFCE startup:

    1. Create a generic startup script and save it as custom_startup.sh. You can put anything in the script; in there now is only a logger so you know it's working.
      # This script is run automatically by xfce4-desktop during system startup.
      logger -i "Running the custom startup script"
    2. Create the following entry as /home/me/.config/autostart/MyStartup.desktop:
      [Desktop Entry]
      Type=Application
      Name=MyStartup
      Exec=bash /home/me/Scripts/custom_startup.sh

    Xubuntu desktop wallpaper from a website (NOAA Radar)

    June 28, 2009: Living in the midwest, I check the weather radar a lot to protect my laundry drying outside from lots of pesky thunderstorms. So I want to make the radar image my desktop picture in Xubuntu, and to have it automatically update.

    To get the image: I'm using http://radar.weather.gov/lite/N0R/MKX_0.png. It is from the United States National Weather Service and updates every 5-6 minutes.

    A shell script to refresh the radar image as the desktop picture:

    #!/bin/sh
    # This is the path of the final output files that get read by other processes.
    # Working files show up here, too. You may wish to create your own directory.
    Local_Dir=/home/me/radar
    # This is the closest weather station for observations. Find your weather 
    # station: http://www.nws.noaa.gov/xml/current_obs/. Format is four letters. 
    # All UPPER CASE. For example, 'KMKE' for Mitchell Field in Milwaukee, WI
    Obs_Station=KMKE
    # This is the closest weather radar. Find your radar: http://radar.weather.gov/
    # Check the link to your weather radar, for example:
    # http://radar.weather.gov/ridge/radar.php?rid=mkx&product=N0R&overlay=11101111&loop=no
    # The radar name is in the 'rid=' field. In this example, mkx is Milwaukee, WI
    # Format is UPPER CASE. For example, 'MKX' for Milwaukee.
    Radar_Station=MKX
    Radar_Url=http://radar.weather.gov/lite/N0R/${Radar_Station}_0.png
    Local_Name=$Local_Dir/radar.png
    # (OPTIONAL) The height of your top menu bar, in pixels.
    # The radar image is padded by this amount on the top edge so the menu doesn't
    # block the timestamp in the image.
    Radar_Image_Top_Padding=25
    # Download the radar image.
    echo "Weather Update: Downloading the most recent radar image available..."
    mkdir -p $Local_Dir
    curl -o $Local_Name $Radar_Url
    # (OPTIONAL) Use imagemagick to pad the image top so the timestamp is not 
    # blocked by the menu bar.
    #convert $Local_Name -background none -splice 0x${Radar_Image_Top_Padding} $Local_Name
    # Refresh desktop background with 'xfdesktop -reload'. NOTE - some versions 
    # of XFCE flicker all the icons brighter when this occurs, providing visual 
    # feedback that the refresh occurred. An alternate method to avoid the 
    # flicker is below.
    # The 'DISPLAY=:0.0' prefix is required so root processes like cron and 
    # Upstart can process it.
    # The 'sudo -u ian' is required because a root process (like an Upstart 
    # trigger) may be trying to change a user's desktop. Sudo changes the command 
    # to run as user instead of root. 
    DISPLAY=:0.0 sudo -u ian xfconf-query -v -c xfce4-desktop -p /backdrop/screen0/monitor0/image-path -s $Local_Name
    DISPLAY=:0.0 sudo -u ian xfdesktop -reload
    # Alternate method to avoid the flicker by changing desktop pictures for 
    # just a moment, then changing it back.
    #DISPLAY=:0.0 sudo -u ian xfconf-query -v -c xfce4-desktop -p /backdrop/screen0/monitor0/image-path -s /usr/share/xfce4/backdrops/xfce4logo.png
    #DISPLAY=:0.0 sudo -u ian xfconf-query -v -c xfce4-desktop -p /backdrop/screen0/monitor0/image-path -s $Local_Name
    I'll save this script as radar_background.sh, and make it executable with chmod +x radar_background.sh.
    1. The DISPLAY=:0.0 element is explained here.
    2. The xfconf-query command, and how to change the background using DBus, are discussed in the XFCE forums

    Updating the desktop picture manually:
    Since we have it in a shell script already, we need only create a .bashrc alias to run the script. nano .bashrc opens the .bashrc for editing. Add the line alias radar='/home/me/radar_background.sh' to the bottom of the file and save it. Open a new terminal window (terminals only read the .bashrc upon starting) and try the command radar.

    Updating the desktop picture automatically: Since we can update the desktop image using bash commands, let's make a crontab entry to update the desktop image automatically. Here's what it looks like - a crontab entry with just the shell command:

    # m h  dom mon dow   command
    */20 * * * * /home/me/radar_background.sh
    Note that the desktop picture will refresh every 20 minutes.

    Explaining the cron instructions:
    # m h dom mon dow command - That's just part of the crontab
    */20 * * * * - tells the machine to run the script every 20 minutes. */5 * * * * will run the script every five minutes.
    > /dev/null at the end (optional) - tells the machine to not e-mail me every time the script runs.
    Make sure the command is all on one line (not wrapped), or you'll get crontab errors.

    Watching a DVD in Xubuntu 9.04

    June 27, 2009: After the previous reinstall two months ago, DVDs stopped working.

    Here's how I got it to work:

    1. Add the medibuntu repository, if you haven't already
      sudo cp /etc/apt/sources.list /etc/apt/sources.list~   # Backup the sources.list file
      sudo mousepad /etc/apt/sources.list &                  # Open the sources.list file in an editing window
        ## In sources.list, append the following two lines at the bottom, then save (don't close it)
        ## Medibuntu
        deb http://packages.medibuntu.org/ jaunty free non-free
        ## If you use debtorrent, use: deb debtorrent://localhost:9988/packages.medibuntu.org/ jaunty free non-free
      sudo apt-get update
      sudo apt-get install medibuntu-keyring
      sudo apt-get update
    2. Use these two terminal commands to add the correct software (source):
      sudo apt-get install totem-xine libxine1-ffmpeg libdvdread4
      sudo /usr/share/doc/libdvdread4/install-css.sh
      # Note - this installs the libdvdcss2 package, which is *not* in the repositories.
      # If you use this list of packages to rebuild your system, for example by using a 
      # Jablicator metapackage, it will fail due to this missing dependency.
    3. I installed ubuntu-restricted-extras for unrelated reasons. So I don't *think* it's necessary.
      sudo apt-get install ubuntu-restricted-extras
    4. Finally, I got an error message when I put in a dvd: "Could not open location; you might not have permission to open the file." This is indeed a permission issue. Fix it with:
      sudo chmod 755 /media/cdrom0
      And then try opening the DVD from within your player application (Totem).

    Reinstalling Xubuntu 9.04

    June 27, 2009: About a month ago, audio suddenly stopped working. Rather than troubleshoot, I decided to reinstall...it might be faster. Unlike last time, this time was a complete reinstall to get rid of certain dependency problems that had also cropped up.

    Creating a metapackage with jablicator

    May 31, 2009: A convenient way to create a metapackage is the jablicator command-line application, included in the package of the same name.

    Jablicator creates a metapackage with all current packages as the dependencies. So you can edit the dependencies to limit the metapackage, or take a snapshot of your packages for easy restoring after an update.

    Launchpad bug #355209 - The Evolution plugin for Mail Notification doesn't work/exist

    April 27, 2009: In mail-notification 5.4 in Ubuntu Jaunty (9.04), the following files are installed to /usr/lib/evolution/2.24/plugins/ instead of /usr/lib/evolution/2.26/plugins/:

    A simple workaround:
    1. Copy or link the files to the correct directory:
      sudo ln -s /usr/lib/evolution/2.24/plugins/org-jylefort-mail-notification.eplug /usr/lib/evolution/2.26/plugins/
      sudo ln -s /usr/lib/evolution/2.24/plugins/liborg-jylefort-mail-notification.so /usr/lib/evolution/2.26/plugins/
    2. Restart evolution and go to Edit -> Plugins -> Jean-Yves Lefort's Mail Notification. Check the box.
    3. Right-click on the Mail-Notification icon -> Properties. Add your evolution e-mailbox.

    Upgrading from Xubuntu 8.04 to 9.04

    April 25, 2009: It's finally time to reinstall/dist-upgrade, which I haven't done in a year.

    Burning the 9.04 CD: Instead of installing Brasero or another burner, I used the command wodim dev=$PATH-TO-DEVICE driveropts=burnfree -v -data $PATH-TO-ISO, so in my case wodim dev=/dev/scd0 driveropts=burnfree -v -data /home/me/Ubuntu\ Images/xubuntu-9.04-desktop-i386.iso. Very easy and fast that way.

    Using the 9.04 LiveCD installer: Very simple. One hiccup when automatic partitioning failed. I chose to reuse my existing partition *without* formatting it first, and (COOL!) my /home directory was untouched. All my preferences and saved files were still there...as if they had been migrated. Networking and sound worked immediately from the default installation.

    Enabling Medibuntu and debtorrent: Medibuntu is for non-free packages like skype. Debtorrent is a method of using torrents instead of mirrors to download. Both require changes to the /etc/apt/sources.list file. debtorrent instructions

    sudo cp /etc/apt/sources.list /etc/apt/sources.list~   # Backup the sources.list file
    sudo mousepad /etc/apt/sources.list &                  # Open the sources.list file in an editing window
      ## In sources.list, append the following two lines at the bottom, then save (don't close it)
      ## Medibuntu
      deb debtorrent://localhost:9988/packages.medibuntu.org/ jaunty free non-free
    sudo apt-get update
    sudo apt-get install medibuntu-keyring debtorrent apt-transport-debtorrent
      ## In sources.list, substitute each occurrence of the string 'deb http://' prefix with 'deb debtorrent://localhost:9988/'
      ## Save and close the sources.list file.
    sudo apt-get update
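    The 'substitute each occurrence' step above can be done with one sed command instead of hand-editing. A sketch, demonstrated here on a scratch copy (for real use, run it against /etc/apt/sources.list, with a backup first):

```shell
# Demo on a scratch copy of a sources.list line:
printf 'deb http://packages.medibuntu.org/ jaunty free non-free\n' > /tmp/sources.demo
# Rewrite every 'deb http://' prefix to go through the local debtorrent proxy:
sed -i 's|deb http://|deb debtorrent://localhost:9988/|' /tmp/sources.demo
cat /tmp/sources.demo
```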

    Bringing back my favorite apps: Using this table, it's pretty easy to figure out what to install and remove. Downloading all this stuff took about 40 minutes.

    Subject           Packages I Removed   Packages I Added
    Remote desktop    vinagre              openssh-server

    Install notes:
    • The droid fonts are nice, but not special.
    • Evolution needed a couple restarts to start working properly.
    • Download-at-first-need for audio also works well.
    • xtightvncviewer required manual config: sudo update-alternatives --set vncviewer /usr/bin/xtightvncviewer
    • bluez-gnome gets rid of bluetooth-applet
    • pmount mounts usb drives as user instead of root

    Several launcher icons were missing - the launchers were still in the right place and fully functional, but the icon image (like bluefish's) was gone. After reinstallation, most images came back automatically. A couple needed to be reassociated with the image by right-clicking on 'properties'.

    Two important shortcuts were missing.

    The crontab was gone and had to be recreated.

    The mail-notification icon couldn't find evolution (Bug 355209). The bug report has the simple workaround.

    Using rmadison and apt-cache

    April 15, 2009: rmadison is part of the devscripts package. It's a fantastic little tool that tells you which version of a package is in which release of Ubuntu or Debian. It's also the quickest way to see if a package is in one of them at all - very handy to check package requests on Launchpad and in Brainstorm.

    apt-cache is part of the default Ubuntu install. It's very handy to find the right package name, dependencies, and other clues when tracking down the correct package for Launchpad or Brainstorm.
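    Typical invocations look like this sketch (the package names are just examples; rmadison needs network access):

```shell
rmadison hello             # Which version of 'hello' is in each Debian/Ubuntu release?
rmadison -u ubuntu hello   # Limit the query to Ubuntu
apt-cache search editor    # Find package names matching a keyword
apt-cache show hello       # Description, version, and other details
apt-cache depends hello    # What the package depends on
```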

    Using debtorrent to contribute to the community

    April 7, 2009: Debtorrent is a way to download packages using a torrent instead of a mirror. It's also a way to contribute to the community by reducing the need for mirrors.

    Installing debtorrent: (instructions)

    1. Install from the command line ( sudo apt-get install debtorrent apt-transport-debtorrent ), or synaptic.
    2. Edit the lines in /etc/apt/sources.list to take advantage of debtorrent
      deb http://ftp.us.debian.org/debian etch main contrib non-free                              #OLD
      deb debtorrent://localhost:9988/ftp.us.debian.org/debian etch main contrib non-free         #NEW
      # Do not modify deb-src lines
    3. Reload the package list with sudo apt-get update

    Using debtorrent: Debtorrent runs in the background automatically. You don't need to start it or stop it. To check on what it's doing, use the web interface at http://localhost:9988/

    One IM application to rule them all...

    April 1, 2009: I use Pidgin, the default IM client on Xubuntu 8.04. But I also use the Department of Defense's AKO IM. And my laptop doesn't have a built-in webcam or microphone, so family uses Skype's IM. I really don't want three IMs open (one tying up a browser window and Java!), so here's how I consolidated them into just Pidgin.

    AKO Instant Messenger uses a standard xmpp protocol, so Pidgin can talk to it. All your contacts migrate like magic! If you have AKO/DKO access, get the details here. If you don't have AKO/DKO access, then safely ignore this paragraph.

    Skype uses a proprietary protocol, so its integration is limited. The skype4pidgin plugin (.deb package) shares Skype contacts and IM with Pidgin...but Skype must still be running alongside Pidgin to work, though many fewer windows must be open. After installation, BOTH Skype and Pidgin must be restarted. I had to edit the Skype options manually to turn off chat notifications - otherwise both apps whine when a new message arrives.

    Scanning for wireless networks

    March 22, 2009: Two methods to scan for wireless networks. One requires sudo/root, the other requires Network Manager.

    #! /usr/bin/env python
    """This python 2.5 script uses iwlist to scan for nearby wireless networks. It must be run as sudo/root to work."""
    import subprocess as SU
    command = ['iwlist', 'eth1', 'scan']
    output = SU.Popen(command, stdout=SU.PIPE).stdout.readlines()
    data = []
    for item in output:
        print item.strip()
        if item.strip().startswith('ESSID:'): 
            data.append(item.split('ESSID:')[1].strip().strip('"'))
        if item.strip().startswith('Quality'): 
            data.append(int(item.split()[0].split('=')[1].split('/')[0]))
        if item.strip().startswith('Encryption key:off'): data.append('OPEN')
        if item.strip().startswith('Encryption key:on'): data.append('encrypted')        
    print data
    #! /usr/bin/env python
    """This python 2.5 script uses dbus to query Network Manager, which scans regularly for wireless networks. It does NOT require root/sudo."""
    import dbus
    item = 'org.freedesktop.NetworkManager'
    path = '/org/freedesktop/NetworkManager/Devices/eth1'
    interface = item + '.Device'
    bus = dbus.SystemBus()
    data = []
    wireless = dbus.Interface(bus.get_object(item, path), interface)
    for network_path in wireless.getNetworks():
        network = dbus.Interface(bus.get_object(item, network_path), interface)
        data.append(network.getName())                        # also network.getProperties[1]
        data.append(network.getStrength())                    # also network.getProperties[3]
        if network.getEncrypted(): data.append('encrypted')
        else: data.append('OPEN')
    print data

    Replacing os.popen() with subprocess.Popen() in Python 2.5

    March 11, 2009: In Python, you can execute shell commands using the os.popen method...but it's been deprecated in favor of the new subprocess module.

    # The old way, which worked great!
    import os
    shell_command = 'date'
    event = os.popen(shell_command)
    stdout = event.readlines()
    print stdout
    # The new way, which is more powerful, but also more cumbersome.
    from subprocess import Popen, PIPE, STDOUT
    shell_command = 'date'
    event = Popen(shell_command, shell=True, stdin=PIPE, stdout=PIPE, 
        stderr=STDOUT, close_fds=True)
    output = event.stdout.read()
    print output
    # The new way, all in one line (a bit uglier). With print(), it works in Python 3, too!
    import subprocess
    output = subprocess.Popen('date', stdout=subprocess.PIPE).stdout.read()
    print(output)
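    One pitfall of reading event.stdout directly is that a command producing a lot of output can fill the pipe and deadlock; communicate() avoids that by reading everything and waiting for the process to finish. A minimal sketch using echo:

```python
from subprocess import Popen, PIPE

# communicate() reads stdout/stderr to the end and waits for the process,
# so the pipe can never fill up and block the child.
event = Popen(['echo', 'hello from subprocess'], stdout=PIPE)
output, errors = event.communicate()
print(output)
```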

    Using DBus on Xubuntu 8.04

    March 8, 2009: DBus is a system that permits different applications to exchange information. Tutorial Reference Other Reference.

    Sometimes, DBus crashes upon restart from a suspend or hibernation. These bash commands will help you figure out if it has crashed, and how to restart it.

    $ ps -e | grep `cat /var/run/dbus/pid` # Confirm if DBus is running by checking for the PID number in the list of live processes.
                                           # If DBus is running, this will return the process number. 
                                           # If not, it will return nothing.
    $ sudo rm /var/run/dbus/pid            # Remove the stale pid file so DBus can be restarted.
    $ sudo dbus-daemon --system            # Start the system bus again.

    A python script uses DBus to see if the network connection is available by asking Network Manager:

    #! /usr/bin/env python
    import dbus
    bus = dbus.SystemBus()
    item = 'org.freedesktop.NetworkManager'
    eth0_path = '/org/freedesktop/NetworkManager/Devices/eth0'
    eth1_path = '/org/freedesktop/NetworkManager/Devices/eth1'
    interface = 'org.freedesktop.NetworkManager.Devices'
    # There are two possible network interfaces: eth0 (wired) and eth1 (wireless).
    eth0 = dbus.Interface(bus.get_object(item, eth0_path), interface)
    if eth0.getLinkActive(): print('The wired network is up') # getLinkActive() is a boolean, TRUE if the network link is active
    eth1 = dbus.Interface(bus.get_object(item, eth1_path), interface)
    if eth1.getLinkActive(): print('The wireless network is up')

    This shell script does exactly the same thing, using the same DBus call:

    # This shell script checks Network Manager if the network is up, using dbus as the communications medium.
    # There are two possible network interfaces: eth0 (wired) and eth1 (wireless). Of course, you may need to alter these to meet your own circumstances.
    # The basic format of dbus-send is: dbus-send --system --dest=org.freedesktop.NetworkManager /org/freedesktop/NetworkManager/Devices/eth0 --print-reply org.freedesktop.NetworkManager.Devices.eth0.getLinkActive
    DEST=org.freedesktop.NetworkManager
    PPATH=/org/freedesktop/NetworkManager/Devices
    DEVICE=org.freedesktop.NetworkManager.Devices
    result_eth0=`dbus-send --system --dest=$DEST $PPATH'/eth0' --print-reply $DEVICE'.eth0.getLinkActive'`
    shaved_eth0=`echo $result_eth0 | cut -d ' ' -f8`
    if [ $shaved_eth0 = 'true' ]; then echo 'The wired network is up'; fi
    result_eth1=`dbus-send --system --dest=$DEST $PPATH'/eth1' --print-reply $DEVICE'.eth1.getLinkActive'`
    shaved_eth1=`echo $result_eth1 | cut -d ' ' -f8`
    if [ $shaved_eth1 = 'true' ]; then echo 'The wireless network is up'; fi

    A Python script that queries Network Manager to get the list of wireless networks.

    import dbus
    item = 'org.freedesktop.NetworkManager'
    path = '/org/freedesktop/NetworkManager'
    interface = item + '.Device'
    network_list = []
    bus = dbus.SystemBus()
    # Create a Network Manager interface and get the list of network devices
    event = dbus.Interface(bus.get_object(item, path), interface)
    # Create an interface for each device
    # Query each interface to see if it's wireless
    # Query each wireless interface for the networks it sees
    for device in event.getDevices():
        device_interface = dbus.Interface(bus.get_object(item, device), interface)
        if device_interface.getType() == 2:  # 0 unknown, 1 wired, 2 wireless
            network_list += device_interface.getNetworks()  # Paths of the networks this wireless device sees
    # Reformat the network names in the list to be more readable
    if network_list:
        for entry in network_list:
            #print entry    # String of path/to/network_name
            entry_list = entry.split('/')
            print entry_list[-1]  # String of network_name

    A Python listener that catches the changes in wireless signal strength using both available methods.

    import dbus, gobject
    from dbus.mainloop.glib import DBusGMainLoop
    def print_device_strength1(*args):  #Use the received signals
        signal_strength = args[1]
        print ('Signal Strength (Method 1): ' + str(signal_strength) + '%')
    def print_device_strength2(*args, **kwargs):  #Use the received signals
        signal_strength = args[1]
        print ('Signal Strength (Method 2): ' + str(signal_strength) + '%')
    DBusGMainLoop(set_as_default=True)    # Set up the event loop before connecting to the bus
    bus_object = dbus.SystemBus()
    # The variables you need. I used the shell command 'dbus-monitor --system' to find this information
    sender = 'org.freedesktop.NetworkManager'
    path = '/org/freedesktop/NetworkManager'
    interface = sender
    member = 'DeviceStrengthChanged'
    # Method 1 - bus_object.proxy_object.connect_to_signal(method, action, filter, message_parts)
    proxy_object = bus_object.get_object(sender, path)
    proxy_object.connect_to_signal(member, print_device_strength1, dbus_interface = interface)
    # Method 2 - bus_object.add_signal_receiver(action, [filters])
    bus_object.add_signal_receiver(print_device_strength2, dbus_interface = interface, signal_name = member)
    # Start the loop
    loop = gobject.MainLoop()
    loop.run()    # Runs until interrupted with CTRL+C

    Thunar responds beautifully to D-Bus. Introspection is fully set up, so it's easy to use with the d-feet application. Useful for launching programs, opening folders and windows, and manipulating the trash. Launching a program by this method means that the window manager launches the program, not the script or terminal, so the program can remain open after the script or terminal terminates.

    #!/usr/bin/env python
    import dbus
    item = ('org.xfce.Thunar')
    path = ('/org/xfce/FileManager')
    interface = ('org.xfce.FileManager')
    event = dbus.Interface(dbus.SessionBus().get_object(item, path), interface)
    # These three lines at the end of the script open the file's 'properties' window
    display = (':0')         # The current session screen
    uri = ('/home/me/dbus_test.py')
    event.DisplayFileProperties(uri, display)
    # These three lines at the end of the script launch a new application
    display = (':0')         # The current session screen
    uri = ('/usr/bin/gftp-gtk')
    event.Launch(uri, display)
    # These four lines at the end of the script open a folder window and optionally select a file
    display = (':0')         # The current session screen
    uri = ('/home/me/.cron')
    filename = ('anacrontab.daily')
    event.DisplayFolderAndSelect(uri, filename, display)

    A sample hal script.

    """This python 2.5 script uses dbus to check if the lid switch is open.
    Based on an original python script at http://schurger.org/wordpress/?p=49"""
    import dbus
    dest = 'org.freedesktop.Hal'
    hal_path = '/org/freedesktop/Hal/Manager'
    hal_interface = 'org.freedesktop.Hal.Manager'
    udi_interface = 'org.freedesktop.Hal.Device'
    # Get the list of possible input switches. The return is a list of paths.
    bus = dbus.SystemBus()
    hal = dbus.Interface(bus.get_object(dest, hal_path), hal_interface)
    list_of_udi_paths = hal.FindDeviceByCapability('input.switch')
    # Filter the list for the word 'lid'. Print the status for each one.
    found_lid_flag = False
    for udi_path in list_of_udi_paths:
        udi = dbus.Interface(bus.get_object(dest, udi_path), udi_interface)
        if udi.GetProperty('button.type') == "lid":
            found_lid_flag = True
            # The button.state.value is FALSE if the lid is open.
            if udi.GetProperty('button.state.value'): print ('Lid is closed')
            else: print ('Lid is open')
    if not found_lid_flag: print ('Problem: I could not find the lid switch. Sorry.')


    Bash and Python scripts to unzip and modify an OpenOffice .odt document

    March 4, 2009: .odt files are actually containers (you can see one using unzip -l document.odt). Within the container, content is in the content.xml file. Script info source

    Here's what I've figured out about opening, modifying, and saving the content of an .odt file:
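Here's a minimal sketch of that unzip-modify-rezip cycle using Python's zipfile module. The file names and the simple text replacement are placeholders for illustration, not my real invoicing code:

```python
import zipfile

def replace_in_odt(odt_path, new_odt_path, old_text, new_text):
    """Copy an .odt container, swapping old_text for new_text in content.xml.
    Note: strictly, an .odt wants its 'mimetype' member stored first and
    uncompressed; this sketch skips that detail."""
    with zipfile.ZipFile(odt_path, 'r') as source, \
         zipfile.ZipFile(new_odt_path, 'w') as target:
        for name in source.namelist():
            data = source.read(name)
            if name == 'content.xml':  # the document body lives in this member
                data = data.replace(old_text.encode('utf-8'),
                                    new_text.encode('utf-8'))
            target.writestr(name, data)
```

The same idea works for styles.xml or meta.xml; only the member name changes.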

    Five ways to make a notification pop-up

    February 27, 2009: There are two notification apps included with Ubuntu - notification-daemon and zenity. I've tried them both.

    Notification-daemon sleeps until it is triggered (like every other daemon). It can be triggered by shell command, from python, or over DBus. It is the preferred notification method.

    1. From the command line using the notify-send command. This command is part of the libnotify-bin package, and I haven't tried it. From descriptions, it seems pretty easy. One example.

    2. From python using the pynotify module:
      #!/usr/bin/env python
      """Creates a System-Tray pop-up bubble"""
      import pynotify
      pynotify.init('Test Application')    # Must be called before creating notifications
      title = ('Test Title')
      text = ('This is the sample text body')
      icon = ('/usr/share/icons/Rodent/48x48/apps/gnome-info.png')
      notification = pynotify.Notification(title, text, icon)
      notification.show()                  # Actually display the bubble
    3. From DBus using the command line. It should be something really close to this, but I can't get it quite right - apparently dbus-send cannot construct the 'a{sv}' hints argument that Notify expects.
      dbus-send --session --dest=org.freedesktop.Notifications --type=method_call --reply-timeout=10000 \
        /org/freedesktop/Notifications org.freedesktop.Notifications.Notify string:'Test Application' uint32:0 \
        string: string:'NOTIFICATION TEST' string:'This is a test of the notification system via DBus.' \
        array:string: dict:string: int32:10000
    4. From DBus using the python dbus module:
      #!/usr/bin/env python
      """This is a python 2.5 script that creates a notification using dbus."""
      import dbus
      item = ('org.freedesktop.Notifications')
      path = ('/org/freedesktop/Notifications')
      interface = ('org.freedesktop.Notifications')
      icon = ('/usr/share/icons/Rodent/48x48/apps/gnome-info.png')
      array = []     # No actions
      hint = {}      # No hints - an empty dict matches the expected a{sv} type
      time = 10000   # Use seconds x 1000
      app_name = ('Test Application')
      title = ('NOTIFICATION TEST')
      body = ('This is a test of the notification system via DBus.')
      bus = dbus.SessionBus()
      notif = bus.get_object(item, path)
      notify = dbus.Interface(notif, interface)
      notify.Notify(app_name, 0, icon, title, body, array, hint, time)
      # The four lines can be consolidated into two:
      notify = dbus.Interface(dbus.SessionBus().get_object(item, path), interface)
      notify.Notify(app_name, 0, icon, title, body, array, hint, time)
      # The 'notify =' line can be made even uglier!
      notify = dbus.Interface(dbus.SessionBus().get_object('org.freedesktop.Notifications', '/org/freedesktop/Notifications'), 'org.freedesktop.Notifications')

    Zenity is an application, not a daemon. It creates a regular window that can be abused for notifications. It can also create icons in the notification area with mouseover information.

    zenity --info --title "Big Title" --text "This is the\\nbody text" &
    zenity --notification --text "This is the mouseover text" &

    You Are Here on a Google Map

    February 24, 2009: The following python script shows your current (GPS-enabled) location on a Google Map. A useful learning experience:

    #!/usr/bin/env python
    """This is a python 2.5 script that plot's a GPS receiver's location on the
    Google Maps website.
    In order to work, you need a network connection, an attached GPS receiver, and
    the GPS daemon (gpsd).
    import os
    #import subprocess as SU
    import gps
    import dbus
    import sys
    def test( ):
        """ Step 1: Test for the existence of a running gpsd, test for the existence of an open network connection, and test for a firefox process.
        If any fail, give an error message, don't try to recover. FUTURE: Could also use DBus to test for firefox and gpsd."""
        process_list = os.popen('ps -e')    # os.popen is deprecated in favor of subprocess.Popen
        #process_list = SU.Popen(['ps','e'], stdout=SU.PIPE).stdout.read()  
        gpsd_existence_flag = 0
        firefox_existence_flag = 0
        for line in process_list.readlines():
            if line.count('gpsd') > 0: gpsd_existence_flag = 1
            if line.count('firefox') > 0: firefox_existence_flag = 1
        if not gpsd_existence_flag:
            print ("gpsd is not running. Use 'gpsd -b /dev/ttyUSB0' to start it, and then try again.")
        else: print ('Checking...found gpsd')
        if not firefox_existence_flag:
            print ("firefox is not running. Please start it and try again.")
        else: print ('Checking...found firefox')
        bus = dbus.SystemBus()
        nm_item = ('org.freedesktop.NetworkManager')  # This string gets used a lot
        nm_path = ('/org/freedesktop/NetworkManager')
        nm_device = ('org.freedesktop.NetworkManager.Device')
        list_of_interface_paths = dbus.Interface(bus.get_object(nm_item, nm_path), nm_device).getDevices()
        found_network_flag = 0
        for interface_path in list_of_interface_paths:
            one_interface = dbus.Interface(bus.get_object(nm_item, interface_path), nm_device)
            if one_interface.getLinkActive():   # True if there is an active network on this interface
                if one_interface.getType() == 2: # 0 unknown, 1 wired, 2 wireless
                    print('Checking...found the wireless network') 
                    found_network_flag = 1
                elif one_interface.getType() == 1: 
                    print('Checking...found the wired network')
                    found_network_flag = 1
        if not found_network_flag:
            print ("Cannot find a network connection. Please connect and try again.")
    def get_position_fix( ):
        """Step 2: Get a position fix from gpsd."""
        session = gps.gps('localhost','2947')  # Open a connection to gpsd
        session.query('p')                     # Get the location fix 
        lat = session.fix.latitude
        lon = session.fix.longitude
        print ('Location is ' + str(lat) + ' latitude and ' + str(lon) + ' longitude.')
        return (lat, lon)
    def show_map(lat_lon_tuple):
        """Step 3: Submit the position fix to Google Maps. Note that the parentheses '()' in the URL must be escaped '\' to work.
        Sample URL format: http://maps.google.com/maps?q=37.771008,+-122.41175+(You+can+insert+your+text+here)&iwloc=A&hl=en"""
        url_string = ('http://maps.google.com/maps?q=' + str(lat_lon_tuple[0]) + ',+' + str(lat_lon_tuple[1]) + '+\(You+Are+Here\)&iwloc=A&hl=en')
        os.popen('firefox ' + url_string)
    # Run this script as a standalone program
    if __name__ == "__main__" :
        location = get_position_fix()

    GPS and Xubuntu 8.04

    February 23, 2009: I'm experimenting with USB GPS receiver (dongle). It's a Canmore GT-730F that I received in January 2009. Here's what I've learned so far.

    Manually getting data using the command line (source):

    1. Check dmesg, the kernel log, to find out which device node the receiver was assigned. In my case, it reliably shows up as /dev/ttyUSB0. If it doesn't appear, try the command sudo modprobe pl2303 to load the correct USB serial driver.
    2. Set the data output rate to 4800 baud using the stty command: stty 4800 > /dev/ttyUSB0
    3. Read the data stream using the cat command: cat /dev/ttyUSB0
    4. You should see a set of data scroll down the screen. Use CTRL+C to end the test.

    The Linux GPS Daemon (gpsd) is the central clearinghouse for receiving GPS data from the receiver, buffering it, and forwarding it to the applications that want it. gpsd has features to broadcast to dbus (system bus), update ntpd, and respond to a multitude of specific queries from clients. References: Project home page, gpsd man page, and a great example program

    $sudo apt-get install gpsd gpsd-clients  # Installs the daemon (gpsd) and test set (gpsd-clients) packages
    $gpsd -b /dev/ttyUSB0                    # Start gpsd, telling it where to find the receiver
    $cgps                                    # Current satellite data - great way to test that the receiver and gpsd are working

    gpsfake is a gpsd simulator. It tricks gpsd into reading from a logfile instead of a real GPS device. Very handy for testing without actually using a GPS dongle. It is included with the gpsd package, and has a man page for additional reference. To make a logfile, and then to use gpsfake:

    $cat /dev/ttyUSB0 > path/to/testlog  # Create the log file. Use CTRL+C to end the capture.
    $gpsfake path/to/testlog             # Run gpsd simulator (not a daemon - it will occupy the terminal)

    Python interface to gpsd (python-gps) is a programming tool to build your own gps-aware application.

    >>>import gps                             # Load the module
    >>>session = gps.gps('localhost','2947')  # Open a connection to gpsd
    >>>session.query('o')                     # See man gpsd(8) for the list of commands
    >>>print session.fix.latitude             # Query responses are attributes of the session
    >>>dir(session)                           # To see the possible responses
    >>>del session                            # Close the connection to gpsd

    In this case, it seems that I need a periodic session.query('p'), which just gives lat/lon and timestamp.
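A sketch of that periodic polling, with the gpsd query stubbed out as a plain function so it can run without a receiver. In the real loop, get_fix would wrap session.query('p') and return session.fix.latitude and session.fix.longitude:

```python
import time

def poll_positions(get_fix, interval, count):
    """Call get_fix() every `interval` seconds, `count` times,
    and collect the (lat, lon) tuples it returns."""
    fixes = []
    for i in range(count):
        fixes.append(get_fix())
        if i < count - 1:
            time.sleep(interval)
    return fixes

def fake_fix():
    """Stand-in for the real gpsd query while testing indoors."""
    return (43.0389, -87.9065)  # Milwaukee, made-up fixed position
```

Swapping fake_fix for a function that talks to gpsd is the only change needed for live use.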

    Time might be an issue, since the system and the GPS may think the time is different. To see if it's an issue, compare them using the python script below. In my tests, they vary from 0.08 to 1.3 seconds apart, not enough to worry about. GPS timestamps use GMT, not localtime.

    #!/usr/bin/env python
    import calendar, time, gps
    system_time = calendar.timegm(time.gmtime())  # System time (in seconds)
    session = gps.gps('localhost','2947')         # Open a connection to gpsd
    session.query('p')                            # See man gpsd(8) for the list of commands
    gps_time = session.timings.c_recv_time        # GPS time (in seconds)
    print ('The time difference is ' + str(system_time - gps_time) + ' seconds.')

    MGRS (military map coordinates) conversion to/from latitude and longitude is not currently available in Ubuntu...that I can find. The dongle documentation doesn't mention MGRS at all. An online converter is available. The proj package looks promising, but I haven't figured it out yet. Perhaps Lat/Lon -> UTM -> MGRS?

    DBus access appears to be undocumented...but there are tantalizing hints on Google that examples are out there. I can watch the DBus traffic by running cgps to generate it, then dbus-monitor --system to see it.

    The best storage format for tracklogs, routes, and waypoints seems to be GPX format, since it's easy to understand and cgpxlogger, included with gpsd, will create an XML track in GPX 1.1 format. Google's KML is more powerful, but also much more complex. GPSbabel universal data translator is a command-line application that translates one file type to another, and does convert GPX <-> KML.

    $cgpxlogger -i 30 > path/to/logfile         # Save data every 30 seconds to an XML file
    $gpsbabel -i gpx -f path/to/gpx_file -x duplicate -o kml -F path/to/kml_file
    $#gpsbabel [options] -i INTYPE -f INFILE -x FILTER -o OUTTYPE -F OUTFILE

    GPSdrive navigation system looks cool, but I couldn't get maps to load, so its utility was limited. However, it seems that online plotting of tracklogs, routes, and waypoints is possible on Google Maps (and Yahoo Maps, and others). One example is the cool GPS Visualizer.

    Gypsy is an alternative daemon, but not currently in Debian or Ubuntu, so I haven't tried it. Last release 0.6 in March 2008.

    GPSman device manager is an app I know nothing about. I couldn't get it to work, so I removed it. The dongle seems small enough and simple enough that it may not need to be 'managed' at all.

    Using Python to compare .odt files

    February 20, 2009: A quick python script to copy, unzip, and reformat an .odt file. It adds indentations and line breaks. Useful to debug my invoicing script, which muddles with the xml files, by making the files diff-able and easier to read and easier to search.

    #!/usr/bin/env python
    import os
    import xml.etree.ElementTree as ET
    odt_path_and_file = 'path/to/file.odt'
    # This function was copied from http://effbot.org/zone/element-lib.htm
    def indent(elem, level=0):
        i = "\n" + level*"  "
        if len(elem):
            if not elem.text or not elem.text.strip():
                elem.text = i + "  "
            if not elem.tail or not elem.tail.strip():
                elem.tail = i
            for elem in elem:
                indent(elem, level+1)
            if not elem.tail or not elem.tail.strip():
                elem.tail = i
        else:
            if level and (not elem.tail or not elem.tail.strip()):
                elem.tail = i
    odt_filename = odt_path_and_file.split('/')[-1]
    folder_name = ('Desktop/' + odt_filename[:-4])  # Drop the '.odt' extension. Note: rstrip('.odt') strips characters, not a suffix
    os.popen('rm -r ' + folder_name) #Delete any old working files
    os.popen('mkdir ' + folder_name)
    os.popen('cp ' + odt_path_and_file + ' ' + folder_name)
    os.popen('unzip ' + folder_name + '/' + odt_filename + ' -d ' + folder_name)
    reply = os.popen('ls ' + folder_name)
    file_list = [filename.rstrip('\n') for filename in reply.readlines() if filename.count('.xml') > 0]
    for file in file_list:
        print ('Parsing ' + folder_name + '/' + file)
        tree = ET.parse(folder_name + '/' + file)
        indent(tree.getroot())   # Add the line breaks and indentation
        tree.write(folder_name + '/' + file)
        print ('Completed ' + file)

    Bean on Mac OS 10.5

    February 10, 2009: Our old copy of Appleworks from a previous Mac is sputtering and becoming unreliable in 10.5, so I'm looking for a replacement. OpenOffice and AbiWord are cluttered and unpleasant (spouse dislikes them), but Bean seems pleasant enough to fit our needs. Happily, it reads and writes simple documents to .rtf, .doc, and .odt.

    Moving Cron jobs to Anacron

    January 23, 2009: Cron is a great way to run recurring jobs. But some jobs need to run weekly...and sometimes the computer is turned off, so the cron job doesn't run. So I'm going to migrate some jobs to anacron. Cron runs once each minute, checking if the time matches anything in the crontab list. Anacron, however, runs once each day and checks if the interval since each item was last run is greater than the listed value (more than 6 days, for example).

    Anacron is a root/sudo-level command. Running it as a user will fail silently.

    Anacron stores the timestamps of each job's last run in /var/spool/anacron/JOBNAME. This is handy to change while testing.

    There are two ways to run a command using anacron. You can place the command directly in the anacrontab (/etc/anacrontab), or you can put a script in one of the periodic folders (/etc/cron.daily, /etc/cron.weekly, or /etc/cron.monthly). Here are some examples:
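Direct anacrontab entries look like this (the job identifiers and script paths below are made up). The fields are period in days, delay in minutes, a unique job identifier, and the command:

```shell
# /etc/anacrontab fragment
# period(days)  delay(min)  job-identifier  command
1    5   daily_backup    /home/me/bin/backup.sh
7   10   weekly_report   /home/me/bin/report.sh
```

The periodic-folder route is even simpler: drop an executable script into /etc/cron.daily, /etc/cron.weekly, or /etc/cron.monthly and anacron's stock entries will run it for you.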

    Working on Launchpad bugs using the python API instead of the web page

    January 22, 2009: Trying to build a simple tracker of the bugs I've worked in Launchpad.

    References: the API installation instructions and reference documentation.

    Web Scraper for string prices

    January 2, 2009: I successfully tested a web scraper in python. It scrapes about 20 web pages for the prices of violin strings, then puts the prices in an OpenOffice document for handy printing. It is structured so I can add other output formats, and I could add an XML file to track prices over time or just print changes.

    I'm installing it on the store iMac, and setting it as a daily recurring job. The finished file just pops onto the desktop, marked with the date.

    A future version may compare prices from multiple sites.

    The script and template live in the standard location for user scripts, /Users/username/Library/Scripts/scriptname/
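The heart of the scraper is just fetch-then-extract. Here's a minimal sketch of the extraction step only; the table layout and price format are invented for illustration, since every vendor's HTML is different:

```python
import re

def extract_prices(html):
    """Pull (item, price) pairs out of a chunk of vendor HTML.
    Assumes a hypothetical layout where each row looks like
    <td>Item name</td><td>$12.34</td>."""
    pattern = re.compile(r'<td>([^<]+)</td><td>\$([0-9]+\.[0-9]{2})</td>')
    return [(name.strip(), float(price)) for name, price in pattern.findall(html)]
```

The real script runs one such extractor per site, then hands the pairs to the OpenOffice output stage.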

    Installing Skype on an XO laptop

    December 21, 2008: I successfully installed Skype with video and did a test call from my XO laptop (build 720) today. I used a version of these instructions. Note: Skype will probably need to be reinstalled each time the xo laptop is upgraded. Warning: Skype isn't really meant for the xo - this install creates several zombie processes each time Skype is started, and the sound quality is definitely inferior.

    1. Install Skype
      #mkdir /home/olpc/skype
      #cd skype
      #wget http://skype.com/go/getskype-linux-fc7
      #yum --nogpgcheck -y localinstall skype-
      #wget ftp://ftp.pbone.net/mirror/atrpms.net/el5-i386/atrpms/testing/libasound2-1.0.15-33.el5.i386.rpm
      #rpm -i libasound2-1.0.15-33.el5.i386.rpm
      #wget http://dev.laptop.org/~ffm/gstfakevideo.zip
      #unzip gstfakevideo.zip
      #chmod +x gstfakevideo
    2. Skype doesn't have a sugar package, so it must be started by root with the command /home/olpc/skype/gstfakevideo. This is cumbersome and hard to remember, so use nano .bashrc to create an alias. Add this line to the 'alias' section of the .bashrc.
      # User specific aliases and functions
      alias skype='sudo /home/olpc/skype/gstfakevideo'
      Now the user (not root) can start Skype from any prompt with the command skype.

    SSH suddenly stops working

    December 21, 2008: I tried to ssh to my XO laptop today, but got the following error:

    me:~$ssh user@(IP addr of xo_laptop)
    Someone could be eavesdropping on you right now (man-in-the-middle attack)!
    It is also possible that the RSA host key has just been changed.
    The fingerprint for the RSA key sent by the remote host is
    Please contact your system administrator.
    Add correct host key in /home/USER/.ssh/known_hosts to get rid of this message.
    Offending key in /home/USER/.ssh/known_hosts:NUMBER
    RSA host key for (IP addr of xo_laptop) has changed and you have requested strict checking.
    Host key verification failed.

    Why? Probably because I upgraded the OLPC laptop.

    How to fix it?

    1. Use ssh-keygen -R (IP addr) to purge the old key.
    2. Connect again like normal ssh user@xo_laptop to install the new (current) key.

    Creating a patch to fix an Ubuntu bug

    October 13, 2008: Today I'm preparing a patch to fix Launchpad Bug #117984, which is also Gnome Bug #451734.

    References: Ubuntu Wiki, Ubuntu Packaging Guide

    It turns out that to do it properly, you need to make two patches. The first one is a .diff file for Debian and upstream. The second creates a .debdiff patch for Ubuntu.

    There are circumstances, of course, where one of the patches is not necessary, and some of these steps can be skipped.

      Note: Use steps 1-9 & 14-15 for just an upstream (.diff) patch. Use steps 1-4,5-7,11-13 & 15 for just an Ubuntu (.debdiff) patch.
    1. Open a terminal window
    2. Create a working directory with the command mkdir working
    3. Move to the working directory with the command cd working
    4. Download the latest source package using the command apt-get source packagename. This method automatically appends an '.orig' suffix, and unpacks the file, too. DON'T download the source from packages.ubuntu.com; instead ADD the repos to your Software Sources control panel using the instructions at the Ubuntu Wiki
    5. Make a copy of the unpacked folder with cp -r package-folder package-folder-orig
    6. Go into the unpacked folder (not the orig) with cd package-folder.
    7. Edit the file using nano path/to/file/to/fix. Fix the file
    8. Return to the working directory with cd ..
    9. Use diff -Nurp package-folder-orig package-folder > upstream-bug#.diff to create the upstream patch.
    10. Go back into the unpacked folder (not the orig) with cd package-folder.
    11. Use the command dch -i to update the changelog. Show the change and list the bug# fixed.
    12. Use debuild -S -us -uc to create the debdiff patch.
    13. Attach the .debdiff patch to the bug in Launchpad.
    14. Attach the upstream .diff patch to the Launchpad bug AND the upstream bug.
    15. Delete the working directory, and all contents.
    Update February 5, 2009: Accepted and fixed!

    Cron to be deprecated in favor of Upstart in Ubuntu

    September 20, 2008: Ubuntu's Upstart is an init daemon replacement, quite analogous to OS X's launchd. Launchd also replaced cron on OS X - and upstart plans to replace cron on Ubuntu. No telling when, but all my cron jobs will need to be reformatted.

    Recurring jobs, launchctl, plist, and shell scripting on OSX

    September 17, 2008: OSX is Unix - similar, and yet dissimilar to linux. I'm setting up a few recurring maintenance scripts.

    First of all, cron is deprecated in favor of launchd, which handles a lot more than just cron jobs.

    Recurring jobs are located in /Users/username/Library/LaunchAgents/ as .plist files.

    Sample syntax of a .plist file:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>Label</key>
        <string>com.example.maintenance</string>      <!-- made-up job name -->
        <key>ProgramArguments</key>
        <array>
            <string>/Users/username/bin/maintenance.sh</string>
        </array>
        <key>StartCalendarInterval</key>
        <dict>
            <key>Hour</key>
            <integer>7</integer>
            <key>Minute</key>
            <integer>40</integer>
        </dict>
    </dict>
    </plist>

    An alternative interval for a daily job is StartInterval, which repeats every N seconds:

    <key>StartInterval</key>
    <integer>86400</integer>

    Activate the job using the command launchctl load /Users/username/Library/LaunchAgents/plistname.plist
    Deactivate the job the same way (except use unload)

    Music Score Program

    September 9, 2008: Trying music notation programs

    mscore - looks great, requires fluid-soundfont-gm for playback (piano)

    denemo/lilypond - haven't tried yet.

    lime might be promising for the Mac - but Lime for Windows doesn't work under Ubuntu Wine.

    Replacing update-notifier with a cron script in Xubuntu

    September 9, 2008: Update-notifier does a lot of good things. It shows you when a package manager is working in the background, it automates security updates, it causes updates to be downloaded in the background, and it provides a convenient icon to click on and launch update-manager. Good stuff.

    But its daemon eats an annoying trickle of ram and resources.

    I have it set to update weekly, so the rest of the time it's just eye candy. Much of its power is wasted on my scheduled update cycle. Time to find a replacement with less overhead.

    The simple solution: Really, all I need is to run update-manager weekly. So add the following to your user crontab (crontab -e):

    #Reminder bot - this e-mail replaces the update-notifier package
    40 7 * * sun /usr/bin/nail -s "Reminder Bot: Run Update-Manager" me@example.com >/dev/null
    or instead of using nail (to e-mail the notice), I can simply use zenity to create a pop-up:
    #Reminder bot - this e-mail replaces the update-notifier package
    40 7 * * sun DISPLAY=:0.0 /usr/bin/zenity --info --title "Reminder!" --text "It's time to run Update-Manager"
    or zenity can also create a notification-area icon:
    #Reminder bot - this e-mail replaces the update-notifier package
    40 7 * * sun DISPLAY=:0.0 /usr/bin/zenity --notification --text "Reminder: It's time to run Update-Manager"
    and I'll get a reminder e-mail every Sunday morning at 7:40 am...if I'm awake. Then I launch update-manager from the menu or command line...if I want.

    The lazy solution: The simple solution has a drawback - none of the packages are downloaded yet, so the update might take a while (zzzzz). An additional cron entry, though, will have everything downloaded and ready.
    Edit the root crontab: sudo mousepad /etc/crontab
    Add the line 42 6 * * sun root apt-get update && apt-get autoclean && apt-get -q -d -y -u upgrade (source)
    This will download the updated and upgraded packages on Sunday mornings about an hour before the reminder e-mail, plenty of time.

    In theory we could try 42 6 * * sun root apt-get update && apt-get autoclean && apt-get -q -d -y -u upgrade && /usr/bin/nail -s "Reminder Bot: Run Update-Manager" me@example.com and have it all in one place, but I'll find that confusing in another 6 months.

    Removing update-notifier and update-notifier-common: Nothing depends on the package in Xubuntu 8.04, so a simple sudo apt-get remove update-notifier update-notifier-common gets rid of the packages, freeing up a whopping 426kb of disk space.

    I find I don't miss the old notifier at all.

    Using the at command in Xubuntu 8.04

    September 9, 2008: Tried using at and batch to compare/complement cron. Unfortunately, I couldn't decipher the syntax completely. I could put in a job, but then I couldn't see the results of the job (so it's not much use to me, eh?)

    Package management in Ubuntu

    September 2, 2008: Package tools and command line tips.

    What's the difference between dpkg, apt-get, aptitude, synaptic, and add/remove? Which one should I use?

    1. dpkg: In the beginning, there were packages. And the community divided the stable packages from the testing, and the dev team cried out, 'Lo, these things need a manager, for how else are we to install and remove them?', and the milestone begat 'dpkg'.

      For dpkg was the first, before all others, and is still the heart of package maintenance. dpkg still does the real work, though the usurpers get the credit.

    2. apt-get: But the people were troubled, and they cried out, 'O mighty devs, dpkg is too hard to use. It heeds only our words, and knows not our heart. It is complicated, and we suffer because it is too easy to bork our system.' And forth, into the light, came the Advanced Package Tool, and they called it apt. And the most favored among the apt was 'apt-get'.

      And apt-get was a glorious step forward, for now the people's package manager heeded their hearts instead of long lists of package names when they wanted to update, and it gave them easier choices to understand. And obeisance to apt-get was still obeisance to dpkg, for apt-get was merely an easier-to-use front end for dpkg.

    3. aptitude: But the people, who are just *never* satisfied, lamented 'O mighty devs, this command-line nonsense has gone far enough. We want a graphical front end tool, because these endless lists of packages are breaking our spirit and making us confused in our ablutions.' And the devs heaved a mighty sigh. And their voice brought forth 'aptitude'.

      And aptitude had the same commands as apt, but in an easier-to-use graphical format. It was called ugly, for it was descended of the command-line, and was despised by the pretty x-server-window adherents, yet it was strong and easy to use. Obeisance to aptitude was strong among the servers and headless devices, for they never listened to the words of the followers of the x-windows-server. And aptitude simply converted the commands into apt commands, and apt converted the commands into dpkg commands, and dpkg still did the real work. And when anyone pointed out that this was a bit convoluted, they were cast out.

    4. synaptic: But the majority of the people, the whiny people, cried out once more 'O mighty devs, who have given us packages and dpkg to rule them, and apt-get to prevent us borking our systems, and aptitude so we don't need to think very hard. And that's nice, for now we can download a lot more pornography. But also, we have heeded the words of the x-windows-server, and we want pretty windows that we control with mice and touchpads and drag around the screen. So chop chop, and get us a version of aptitude for these cool x-windows. And thus was born 'synaptic'.

      And synaptic was strong and robust and popular, and the adherents of GUI were very pleased and multiplied like locusts into legions of users. And like aptitude before it, synaptic simply converted the commands into apt commands, and apt converted the commands into dpkg commands, and dpkg still did the real work.

    5. add/remove: And still the people lamented 'O mighty and wonderful devs, who have found a way to automate updates, who have given me a powerful operating system for free, who have duplicated the miracles of Microsoft and Apple, we think that package management is still too hard, particularly for new users. Your newest followers should be able to install and remove applications with just one click, even more easily than they can now.' And the devs considered casting those chuckleheads into the lake of fire, but instead raised their hand and brought forth 'Add/Remove'.

      And add/remove worked with a limited number of applications, a one-click installer/uninstaller of windowed apps, with a lot of advanced details hidden. Add/remove was a front-end for synaptic, which was itself a front-end for apt-get, which was itself a front-end for dpkg.
    6. But still the people wailed for more and easier, and the tired devs closed their ears and hardened their hearts and went off to work instead on torrent-downloaders to speed up pornography.

    Fast File Sharing from Ubuntu

    August 29, 2008: Command-line Python tool to set up an instant HTTP server:
    Open a terminal

    cd /path/to/directory-I-want-to-share
    python -m SimpleHTTPServer  # Python 2.x only - starts the server on port 8000
    python3 -m http.server      # Python 3.x only - starts the server on port 8000

    Opening a terminal window using cron

    August 22, 2008: Fast experiment to use cron to open a terminal window and execute a command. Success!

    The crontab item is:

    * * * * * DISPLAY=:0.0 /usr/bin/xfce4-terminal -x top
         # * * * * *               - the crontab time codes. Substitute your own.
         # DISPLAY=:0.0            - tells the command which X display (screen) to draw on
         # /usr/bin/xfce4-terminal - the application (opens a terminal window)
         # -x top                  - execute the top command in the application
    Once a minute, a new terminal window spawns running 'top', just as intended.

    AbiWord Spell Checker on Xubuntu 8.04

    August 16, 2008: AbiWord doesn't come with a spell checker installed...but it doesn't tell you that.

    Here are the packages to install the spell checker: (source)

    Use Synaptic or the command line to install. Restart AbiWord, and Presto...now it works.

    Trouble connecting to wi-fi

    August 3, 2008: Trying to connect to our old 802.11b Linksys router. Under Xubuntu 7.10, it connected fine...but this is the first time trying under 8.04. No luck. Just keeps refusing to associate with the ESSID, and returning error 812 (whatever that is).

    But the hardware works - boot into XP, and it connects just fine.

    Installing an SCR201 CAC Card Reader in Xubuntu 8.04

    July 28, 2008: Trying to get an SCR201 PCMCIA smart card reader to work.

    Packages to install: libccid libpcsc-perl pcsc-tools pcscd libpcsclite1 libckyapplet1 coolkey

    Just can't get it to work. Oh well, we'll see if it works on Windows boot.

    Creating an Evolution e-mail from shell and python

    July 24, 2008: Evolution is intended as a GUI e-mail client, so the scripting options are limited. Also, it looks like you cannot autosend e-mail from a script; there's no script command equivalent to the 'send' button.

    But that's a good thing; I can autosend from nail when I want to. So a script will compose the e-mail, which will sit on my desktop until I review it and click to send it. Nice.

    shell command source
    This command creates a window with To:, From:, Subject:, and Body filled in. 'From:' is already known by Evolution, the rest is parsed from the following command:

    evolution mailto:person@example.com?subject=Test\&body=Some%20crazy%20story%20stuff%0D%0Acolumn1%09%09column2%09%09column3%0D%0A%0D%0ASincerely,%0D%0A%0D%0AMe

    Another (easier) way:
    evolution mailto:person@example.com?cc="second_person@example.com"\&subject="This Is The Subject"\&body="This is the body of the message"\&attach=Desktop/test_file.xml

    python command
    Python has its own smtplib module for creating e-mail, and it's far more useful in most circumstances. But in this case, we want the composed Evolution window.

    import os
    body_string = 'This is the body of the e-mail message'
    body_string = body_string.replace(' ', '%20')   # mailto URLs can't contain raw spaces
    os.popen('evolution mailto:person@example.com?subject=Test\&body=' + body_string)

    The same thing interactively, with more fields (note the parameter is 'attach', as in the shell example above):

    >>> import os
    >>> to = 'person@example.com'
    >>> cc = '"second_person@example.com"'
    >>> subject = '"This is the subject"'
    >>> body = '"This is the body"'
    >>> attachment = 'Desktop/rss_test.xml'
    >>> os.popen('evolution mailto:'+to+'?cc='+cc+'\&subject='+subject+'\&body='+body+'\&attach='+attachment)
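A side sketch in more modern Python: urllib.parse.quote does the percent-encoding automatically, and passing subprocess a list avoids the shell entirely (so the \& escapes aren't needed). The address and text below are placeholders, not real accounts:

```python
# Sketch only: build the same mailto URL with automatic percent-encoding.
import subprocess
from urllib.parse import quote

def build_mailto(to, subject, body):
    # quote() turns spaces, newlines, tabs, etc. into %20, %0A, %09...
    return 'mailto:%s?subject=%s&body=%s' % (to, quote(subject), quote(body))

url = build_mailto('person@example.com', 'Test', 'This is the body')
print(url)
# subprocess.Popen(['evolution', url])  # would open the compose window
```

Because subprocess gets a list rather than a command string, the & never reaches a shell and nothing gets backgrounded by accident.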

    Fixing a broken cron job

    July 22, 2008: I am afflicted by Ubuntu Bug #189462, and so I'm getting occasional e-mails from cron that tell me:

    slocate: fatal error: load_file: Could not open file: /etc/updatedb.conf: No such file or directory

    It turns out to have a trivial solution: touch /etc/updatedb.conf

    What are all these processes?

    July 19, 2008: Trying to figure out all the processes I have running, to see what I can kill, and which packages I can remove.

    According to gnome-system-monitor:

    System or Root processes I uninstalled:

    avahi-daemon - the avahi local discovery system.

    System or Root processes I don't use but can't uninstall:

    atd                - the at command daemon.
    console-kit-daemon - a tracking app for multiuser systems. Many GNOME apps depend on it for some reason.
    getty              - console; 6 of them open.

    System or Root processes I should keep:

    bonobo-activation-server - GNOME component tracker. Safer to leave in place.
    gam_server               - gamin tells the window manager about changes in the filesystem.
    thunar-tpa               - Thunar trash can applet
    evolution-data-server    - database including addressbook and other functions to integrate GNOME app data (removing it breaks Evolution)
    evolution-alarm-notify   - alarm clock (removing it breaks Evolution)
    system-tools-backends    - DBUS

    Configuring Wine in Xubuntu

    July 17, 2008: Wine 1.0 bugs and fixes.

    Using DOD-essential software: PureEdge and ApproveIt and under Wine (Xubuntu)

    July 16, 2008: The Army requires leaders to use PureEdge Viewer 6.5 for all forms, and ApproveIt 5.7.3 to digitally sign the forms...and they are only available for windows.

    A Google search 'linux xfdl' and a Synaptic package search turned up no likely substitutes in Linux.

    Under WINE 1.0, PureEdge installs cleanly (right-click on the file, 'Open with Wine Windows Program Loader'), but firefox won't send forms to it, and it chokes on a form...see wine bug #11625 .

    Under WINE 1.0, ApproveIt fails to recognize its serial number, and installation fails.

    Using aria2c for ftp, http, and torrent downloads

    July 16, 2008: Trying aria2c as a command-line replacement for wget, curl, and torrents.

    Installing DOD CLASS 3 CA-7 security certificate into Firefox 3.0

    July 14, 2008: The US Army has a plethora of websites to keep its mighty bureaucracy chugging along. Unfortunately, the security certificate they all require is not included in Firefox 3.0 (under Ubuntu 8.04). Here's how to get it and install it.

    NOTE: The certificate is worthless to non-DOD people. It doesn't give you access, you still need an account. It's really boring, anyway, and none of the cool secret stuff is in these websites. All the certificate really does for most people is prevent the annoying message: This website has a certificate that I don't trust.

    1. Download the following three files to the desktop:
      • http://dodpki.c3pki.chamb.disa.mil/rel3_dodroot_1024.p7b
      • http://dodpki.c3pki.chamb.disa.mil/rel3_dodroot_2048.p7b
      • http://dodpki.c3pki.chamb.disa.mil/dodeca.p7b

      The easy way in linux is to use curl -O http://dodpki.c3pki.chamb.disa.mil/rel3_dodroot_1024.p7b -O http://dodpki.c3pki.chamb.disa.mil/rel3_dodroot_2048.p7b -O http://dodpki.c3pki.chamb.disa.mil/dodeca.p7b

    2. Go to the Firefox Certificate Manager
      • Open Firefox
      • Edit Menu --> Preferences
      • Advanced settings
      • Encryption tab
      • View Certificates button

    3. Import the three new certificates
      (Repeat for each certificate)
      • Authorities tab
      • Click the 'Import' button
      • Show Firefox where the downloaded certificate is and click 'OK'

    4. Fix a bug with the CLASS 3 CA-7 Certificate
      • In the Certificate Manager, Authorities Tab, scroll down to the new 'US Government' entries
      • Select DOD CLASS 3 CA-7, and click the 'Edit' button
      • Two of the certificate boxes should be checked. Check them if they are not:
        This certificate can identify web sites
        This certificate can identify mail users

    5. Whew. You're done. Close the windows, restart Firefox and test it.

    Importing the same certificates to Evolution is a similar method.

    Messing with the history file in Linux

    July 10, 2008: I set the following bash history variables to make the history file more useful to me.

    HISTCONTROL=ignoreboth     #Prevent duplicate history entries
    HISTIGNORE=ls:history      #Don't remember these commands in history
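For reference, a sketch of where these live (~/.bashrc); the HISTSIZE line is my own addition, not part of the original entry:

```shell
# These lines go in ~/.bashrc; HISTSIZE is an extra suggestion, adjust to taste
export HISTCONTROL=ignoreboth   # skip duplicate entries and space-prefixed commands
export HISTIGNORE="ls:history"  # never record bare 'ls' or 'history'
export HISTSIZE=2000            # keep more commands in memory than the default
```

Changes take effect in new shells, or immediately after a `source ~/.bashrc`.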

    DVD ISO to AVI file

    June 18, 2008: I have an old DVD ISO image (I make DVDs of my kids), and I want to make an AVI copy to upload the funniest bits to YouTube. This is on an Ubuntu 8.04 system, your mileage may vary.

    1. Install dvd::rip using Synaptic
    2. Mount the .iso image (source):
      1. Create a mount point with sudo mkdir /mnt/iso
      2. Mount the image with sudo mount -o loop /path/and/file.iso /mnt/iso
        Alternate: Somewhere along the line, my Thunar magically gained this ability when I right-click on the .iso icon, but I cannot remember which package made this possible...
    3. Rip the DVD. Open dvd::rip.
      1. Preferences: I created no special folder for ripped projects; just saved to the home directory. I checked the dependencies, and had to download a mountain of them (xine, mplayer, etc) using Synaptic.
      2. Storage: Click 'Choose DVD image directory', and select /mnt/iso. Select the button for 'Encode DVD on the fly' since it's already on the hard drive.
      3. Rip Title: Click 'Read DVD table of contents' and the chapters appear. I highlighted (CTRL + Click) the chapters I wanted to convert.
      4. Clip and Zoom: Click the preset menu, select 'Autoadjust - Medium Frame Size, HQ Resize'.
      5. Transcode: Video bitrate calculation is the size of the final file, so I can keep it small (~10 MB per minute, or ~200 MB per half hour). Finally, click 'Transcode' and wait an hour or two or five for the machine to work.
      6. Test the finished file to ensure it's really what you want. I had to do it a couple times, twiddling with various controls.
    4. Cleanup.
      1. Unmount the iso: sudo umount /mnt/iso
      2. Remove the empty mount point: sudo rmdir /mnt/iso
      3. Delete the .iso file (optional, obviously)
      4. Open Synaptic and get rid of all the packages you won't use again. Or use:
        sudo apt-get remove dvdrip mplayer xine
        sudo apt-get autoremove
        sudo apt-get autoclean

    The package ripmake didn't work - tcprobe failure.

    Python List Tricks

    June 17, 2008: Poking around looking for ways to clean up my Python code for MWC. It's probably time for me to let go of my 25-year-old BASIC programming experience, and stop making my Python look like BASIC:

    Eliminating duplicates from a list using Sets. Currently I convert a list to a dictionary and back to do this:

    >>> basket = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana']
    >>> fruit = set(basket)              # create a set from a list, removing duplicates
    >>> fruit
    set(['orange', 'pear', 'apple', 'banana'])
    >>> li = []                          # create a list from a set
    >>> for f in fruit:
    ...   li.append(f)
    >>> li
    ['orange', 'pear', 'apple', 'banana']
    >>> fruit = set([])                   # create an empty set
    >>> fruit.add('fred')                 # adding to set
    >>> fruit.remove('fred')              # removing from set
    >>> fruit.clear()                     # clear all data from a set
    >>> 'orange' in fruit                 # fast membership testing
    >>> 'crabgrass' in fruit
    Source: Dive into Python.
    To do: Convert MWC keyword (tag) lists to sets - the duplicates annoy me.
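A side note on the loop above: list() accepts a set directly, so the list-from-set step is a one-liner (same basket as above; remember that sets don't preserve order):

```python
basket = ['apple', 'orange', 'apple', 'pear', 'orange', 'banana']
unique = list(set(basket))   # one-liner dedupe; element order is arbitrary
```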

    Using List Comprehensions as a building block for filters and mapping. What else are they good for?

    Filter data using Filters instead of for loops

    Add data using Map instead of other crazy structures
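The filter and map ideas above can also be sketched as list comprehensions, which cover both jobs at once (the numbers are made-up sample data):

```python
nums = [1, 2, 3, 4, 5, 6]

evens = [n for n in nums if n % 2 == 0]             # filtering, like filter()
doubled = [n * 2 for n in nums]                     # mapping, like map()
even_doubled = [n * 2 for n in nums if n % 2 == 0]  # filter and map together
```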

    Ubuntu Fonts

    June 13, 2008: Looking for cool fonts for violin labels (the paper tag inside the f-hole of the instrument).

    I used the free font 'Adorable' from abstractfonts.com.

    Fonts in Ubuntu are much more straightforward than I expected:

    Legacy Printing

    June 8, 2008: Thirteen years ago, I purchased a laser printer that still works great despite a few wrinkles and liver spots. Here's how I keep it alive currently.


    The printer doesn't have an ethernet port, only a localtalk (serial) port. Long ago, our last serial-printer Mac died, so the only way to talk to the printer is through the network via the EtherWrite adapter.

    The EtherWrite is not detectable on the network if it's directly connected (by ethernet) to a mac or a router. It only works if the connection passes through a hub first. I don't know why. The EtherWrite's measly documentation is unhelpful, and the company that made it died long long ago.

    The printer itself still has clean and sharp output. It's outlasted a couple serial, USB, and multifunction inkjets. Toner is still available. The case is yellowing, and the rear feeder spring has been lost, so it feeds out the top - I taped the rear feed selector level in the position I wanted.

    Since I print at home only rarely, I leave the printer/adapter/hub assembly powered off for months at a time. When I need to print, I turn them all on, then alternate unplugging/replugging the printer and adapter until they show up on the network. I've never figured out what the proper power-up sequence should be - and I've tried.


    The printer only speaks Apple's localtalk. It's built-in on the MacBook, and available as an add-on on Ubuntu.

    Printing from the MacBook is simple. First, it needs to be plugged into the router, since my router's wireless doesn't support Appletalk. Next, print the document. If the printing errors out "cannot connect to printer," then cycle the power on the printer or the adapter. Since the printed document is in queue, when the printer appears on the network, the printer will begin working despite the original error message.

    Printing from linux is a bit more tricky, and Ubuntu doesn't make it easy. But it can be done. These instructions are for Ubuntu 8.04, and your results may vary.

    1. Install the netatalk package with sudo apt-get install netatalk. Due to a bug in the package, you might get a bunch of errors:
      dpkg - trying script from the new package instead ...
      hostname: Unknown host
      invoke-rc.d: initscript netatalk, action "stop" failed.
      dpkg: error processing /var/cache/apt/archives/netatalk_2.0.3-6ubuntu1_i386.deb (--unpack):
       subprocess new pre-removal script returned error exit status 1
      hostname: Unknown host
      invoke-rc.d: initscript netatalk, action "start" failed.
      dpkg: error while cleaning up:
       subprocess post-installation script returned error exit status 1
      Errors were encountered while processing:
      E: Sub-process /usr/bin/dpkg returned an error code (1)
      A package failed to install. Trying to recover:
      dpkg: error processing netatalk (--configure):
       Package is in a very bad inconsistent state - you should
       reinstall it before attempting configuration.
      Errors were encountered while processing:
      Don't sweat if the errors look like these - you'll fix them in the next step. What's happening is that the bug prevents netatalk from figuring out the name of its host.

    2. The netatalk package is broken. Edit these two files to fix them - then netatalk will work. Edit the files even if you didn't get the errors in the last step.
      FROM: ATALK_NAME=`/bin/hostname --short`
      TO: ATALK_NAME=`/bin/hostname`
      FROM: ATALK_NAME=`/bin/hostname --short`
      TO: ATALK_NAME=`/bin/hostname`
    3. Test the printer connection.
        nbplkup PrinterName - Find the printer on the network.

      If the printer's not on the network, there are three possible reasons:

      1. Most wireless routers talk Appletalk only on the wired connections - if you're using wireless, that may be the problem. Obviously, this doesn't apply to Apple wireless routers. For example, our Macbook can't talk wirelessly to the Apple printer because the wireless router can't understand Appletalk.
      2. The netatalk daemon is being flaky. There's a second serious bug in netatalk: the daemon looks for Appletalk devices on the network at startup; if it can't find any, then it shuts down instead of sleeping. For example, if I boot my computer into the eth1 (wireless) connection -and we already know my router can't handle wireless Appletalk- netatalk finds no Appletalk network and the daemon shuts down. If I then plug in the eth0 (wired) connection and nbplkup PrinterName, it will still fail - netatalk never checked the new connection. Use sudo /etc/init.d/netatalk restart (or stop and start) to reset the daemon.
      3. The printer or adapter is being flaky. I make sure the printer is visible to the Macbook (always best to start with a known point, and the Macbook speaks Appletalk like a native). Otherwise, cycle the power on the printer or adapter and try again. With such old hardware, I don't know the correct order to power them up. It can be annoying.

    4. Test command-line printing. Before setting up CUPS, install 'pap' (actually just a script to make pap executable from CUPS) and test it on the command line. CUPS needs pap in the next step.
      • Download pap and save it to your desktop.
      • Make pap executable with sudo chmod +x Desktop/pap
      • Install pap in the correct directory with sudo mv pap /usr/lib/cups/backend
      • Restart CUPS with sudo /etc/init.d/cupsys restart
      • Prepare a test file for printing. The printer seems to only speak Postscript, so your testfile must be in Postscript format. I used this one. In Ubuntu, the a2ps package (included with default install) will create .ps files. Usage: a2ps input_text_file -o output_file.ps
      • Test print using pap -p Printername Testfile.ps

    5. Add the printer to CUPS
    6. Most interesting. I can't get CUPS to print, though the rest is great. Hmmmm.

    Custom Browser URL bar icons

    June 7, 2008: Long ago, I made a favicon.ico custom icon for the Milwaukee Without A Car website, but I don't remember how.... I think I created a .bmp image and simply renamed it...

    So I found a new SVG of the same graphic, but I'm not using it - instead I'll stick with the old favicon until inkscape can export to .ico files.

    Another example showing how SVGs and their tools have great promise, but aren't quite there yet.

    More SVG Shortcomings

    June 6, 2008:

    So I'm back to .png and .gif and .jpg for most HTML images. As before, SVG original for easy changes, but export raster images. Well, that's what experiments are for.

    Hey, this is my first red box in over a month!

    Advantages and shortcomings of SVG images

    June 5, 2008: After a bit of experimenting with SVG graphics, I've reached the following conclusions:

    The upshot is that I'll keep file archives in SVG for future manipulation, resized SVGs for web use, and exported ODGs for print media.


    1. SVG can be scaled by HTML tags
    2. OpenOffice imports SVG natively

    Sending e-mail from the command line

    June 1, 2008: I want to send automatic e-mail as a cron job from my Xubuntu 8.04 laptop. Originally, I tried routing it through the legacy mail spools - it is, after all, on the same laptop as my e-mail - but gave up in despair. Thanks to Ubuntu Forums for the info.

    1. sudo apt-get install ssmtp nail
      or install ssmtp and nail using Synaptic.

    2. sudo mousepad /etc/ssmtp/ssmtp.conf to edit the ssmtp config file. Modify or add the following lines to the config file:
      The nail config file does not need to be changed at all.

    3. Using the nail command to send e-mail:
      Tip: Pay close attention to the <RETURN>s and <CTRL+D>s
      First line: nail recipient@address.com<RETURN>
      Second line: Subject: subject<RETURN>
      Third line: E-mail body text<CTRL+D>

      nail me@virusmagnet.com
      Subject: Lotta viruses coming into this account
      I need to stop responding to the spam.

    4. Generating e-mail as part of a script or cron job:
      The command nail -s "Test 3" recipient@email.com < /tmp/test_email will send the following e-mail.

      From: You (automatic)
      To: recipient@email.com
      Subject: Test 3
      Message Body: (read from file /tmp/test_email)

      nail -s "Test 3" recipient@email.com < /dev/null will send an e-mail with a blank body - good for quick reminders using the subject line only.

      To generate e-mail from cron events, use the command crontab -e to edit your crontab file:
      26 21 * * * nail -s "Test 3" recipient@email.com < /tmp/test_email
      This crontab entry will send the same "Test 3" e-mail every day at 9:26 p.m.

    Many elements can be added or customized - this is simple enough to generate automatic e-mails to myself.

    UPDATE: September 9, 2008: I could do this in evolution, but it's more fun to use cron and nail or zenity

    This crontab entry:
    0 6 28 * * /usr/bin/nail -s "Reminder bot: Pay The Store Rent!" me@example.com </dev/null
    will send the following e-mail at 6:00 am on the 28th of each month,

    From:    me <my computer>
    To:      me@example.com
    Subject: Reminder bot: Pay The Store Rent!
    Date:    Thu, 28 Jul 2008 06:00:01 -0500 
    Text:    None

    So I won't forget to pay the rent.... It's important.

    This sends an e-mail to a known (to nail) e-mail server, so all machines that check the account get the mail.

    Alternately, if I want a pop-up window:
    0 6 28 * * DISPLAY=:0.0 /usr/bin/zenity --info --title "Reminder Bot" --text "Pay The Store Rent"
    will give me a pop-up window instead, and
    0 6 28 * * DISPLAY=:0.0 /usr/bin/zenity --notification --text "Reminder Bot: Pay The Store Rent"
    will give me a tiny notification-area icon.

    Replacing deprecated HTML tags

    May 31, 2008: HTML has changed since I learned it. For example, <style> tags weren't useful, and CSS was just starting out.

    Ew, I'm old.

    So I used to write an image tag as <img src="General_Lew_Wallace.jpg" width="108" height="144" align="left" border="1">.

    But, an alt text is now required, and the border has been deprecated. So after some poking around with CSS, I came up with:

    <img src="General_Lew_Wallace.jpg" alt="General Lew Wallace of the U.S. Civil War. I don't look like him." width="108" height="144" align="left" style="border-style: ridge; margin-right: 10px">

    It does look better. Thanks to the tutorials at www.w3schools.com
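For what it's worth, align is on its way out as well; a CSS-only version of the same tag might look like this (a sketch using float in place of align, same image and sizes as above):

```html
<img src="General_Lew_Wallace.jpg"
     alt="General Lew Wallace of the U.S. Civil War. I don't look like him."
     width="108" height="144"
     style="float: left; border-style: ridge; margin-right: 10px">
```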

    Using Bash aliases to simplify the universe

    May 30, 2008: I update this blog using the curl command to upload to the web server (see my entry back on March 8, 2008, "The Joy of cURL").

    The command, curl -v -T TechBlog.html -u website:password ftp://ftp.freeservers.com/me/, lives in my Bash history; each time I want to upload, I simply up-arrow through the history at the command prompt until it appears, and then hit Enter. Presto! Terribly easy.

    But sometimes I've done a lot of command line work in the meantime (for example, python programming), so I up-arrow seventy or ninety times before the old curl command reappears.

    I'm much too lazy to do that regularly.

    So I created an alias in my .bashrc file: alias blog='curl -v -T TechBlog.html -u website:password ftp://ftp.freeservers.com/me/'. Now I can simply type blog instead of looking for the old curl command.

    Adding python event notifications to the Desktop

    May 28, 2008: Figured out how to get a Python script to pop up a notification bubble on the desktop:

    It uses the pynotify frontend to libnotify. It's not part of the python 2.5 base, but is included with the Ubuntu default install. Here's a test script for my Ubuntu system. It pops up a little bubble from the system tray

    import pygtk
    import pynotify
    pynotify.init( "Some Application or Title" )
    n = pynotify.Notification("Title", "body", "dialog-warning")
    n.show()    # without .show(), the bubble never appears

    Tip: The .init() call is necessary, or you'll get a lot of ugly DBUS and GTK errors.

    Changing Stuff: Simply replace the elements("Title", "body", "dialog-warning") with your desired title, body, and image path. The image display will take .jpg, .png, .svg, and likely others. For example...

    n = pynotify.Notification("Milwaukee Without A Car", "The Python script MWC_Webcrawler has completed a scheduled run. The logfile has been added to your desktop", "/usr/share/icons/Rodent/48x48/apps/gnome-info.png")

    Adding RSS Icon to the browser URL bar

    May 19, 2008: Found out how to add an RSS icon to the URL bar:

    Add the following to the <head> section of the page: <link rel="alternate" type="application/rss+xml" title="Your News Feed" href="http://somewhere.com/news.xml">

    Xubuntu 8.04 Sessions

    May 15, 2008: Had terrible problems with applications autostarting at login that I didn't want. I looked everywhere in /etc and /home/me/.config, but OpenOffice and Thunar just kept appearing.

    Then I figured it out. One lousy time I checked the Save session for future logins option on logout. So it restarted that way every time.

    So the fix was to shut down all applications, logout (saving the session), log back in. Voila! All fixed.

    Upgrading to Ubuntu 8.04

    April 27, 2008: Here's how I upgraded my 7.10 laptop to 8.04. Ubuntu Forums source.

    1. Download the alternate install .iso file using torrent (to avoid the overloaded servers) - all downloads - http://releases.ubuntu.com/8.04/ubuntu-8.04-alternate-i386.iso.torrent (i386 alternate torrent)
    2. Mount the .iso file and run the installer from a terminal: gksu "sh /mnt/xubuntu-8.04-alternate-i386.iso/cdromupgrade"

    But it didn't work. So I upgraded over the network instead.

    Spawning a new Terminal window

    April 9, 2008: For the timetable project, I want to occasionally create (or perhaps spawn) a new terminal window that will track the python script(s).

    Created the following item in my /etc/cron.hourly folder:

    python my/test_script

    Created the following script as test_script.py:

    #! /usr/bin/env python
    import os

    It works on the command line, but not from an xfce4-terminal window. So I removed the code, and I'll log to a file for now instead.

    Converting bitmap images to vector graphics

    April 5, 2008: Our sign contractor for the store created some beautiful graphics - but the disk they gave us was all .jpg and bitmap .pdf and even a bitmap .ai file. Limited usefulness for reuse unless we can convert them to vector graphics - bitmaps are big and pixellated, vectors are small and infinitely scalable smoothly.

    Used Inkscape. Turned out to be incredibly easy. Import the bitmap, then Path -> Trace Bitmap. Next, File -> Document Properties to crop the page area (so it doesn't save as one logo in the corner of an empty sheet of paper). Save the converted file.

    Next problem: Inkscape saves as .svg, but OpenOffice can't open it. Easy to fix - instead of .svg, have Inkscape save as .odg, an OpenOffice drawing file. I love when things work out.

    Monochrome and color laser prints of the converted graphics are great - no pixels, smooth and clean edges. Interesting: The bitmaps look better on screen, but the vector graphics are superior on paper.

    DVD copying

    April 4, 2008: Needed to backup a DVD. On Linux, Brasero failed to finish reading the disk, and never said why. The MacBook Disk Utility errored out with -39. Back to Linux dvdcopy - error reading Title VOB at block 159. No luck - giving up.

    NTFS woes

    April 3, 2008: The 320GB external NTFS drive is faring poorly, but it's Linux's fault. There's a bug in the kernel. As an attempted fix, I enabled the backport repository and updated the driver (from 6 months old to four months old). Minor improvement only. Since the next update of Ubuntu is due in three weeks, I'll wait to see if it's fixed then.

    First python script

    March 31, 2008: Did my first successful python scripts! The first simply imports the second, the second checks the existence of a network connection.
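The scripts themselves aren't reproduced here, but a minimal sketch of a connection check like the second one might look like this (the host and port are placeholder assumptions, not from the original script):

```python
# Sketch of a network-connection check; example.com:80 is a placeholder target.
import socket

def have_connection(host='example.com', port=80, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:   # covers DNS failures, refusals, and timeouts
        return False
```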

    NTFS external drive

    March 20, 2008: For fun, I checked my external drive while booted in Windows. Oh no! It needed to be defragmented! Defrag recommended CHKDSK as well.

    Big mistake: One or both of them scattered a lot of my backup files (including the backups of this blog - quite a few entries gone until they turn up again). I'm afraid to look at my music...or my collection of Futurama episodes. Only fix is to hunt through the 'found' folder and put stuff away manually. Big mess.

    Tesseract OCR

    March 13, 2008: Tesseract is a very powerful OCR package that works only from the command line. Imagemagick is a very powerful image conversion toolkit. To OCR a PDF:

    convert inputfile.pdf convertedfile.tiff
    tesseract convertedfile.tiff textfile

    That's amazingly easy.

    Deluge Woes

    March 13, 2008: I just figured out that Deluge saves a quick-resume file in the same folder as the torrent (/home/me/.config/deluge/torrentfiles). I wondered how often it updates - now I suspect it updates every time Deluge successfully quits without an error.

    If so, then a good idea is to quit and restart Deluge every 6 or 12 hours or so, thereby limiting the loss from a crash.

    My external drive is NTFS, and likes to crash on occasion...so I might start setting my timer.

    New MacBook - File Sharing

    March 13, 2008: File sharing is finally easy. System Preferences -> Sharing -> Turn sharing on. Option to log in as the user for total access. At last. The MacBook and iBook share wonderfully. Haven't been able to get the Linux box or the XO laptop on board yet - they see the file shares in Avahi, but can't connect to them.

    New MacBook - Migrating

    March 9, 2008: The new MacBook arrived a couple days ago, and I finally touched it tonight. The new MacBook had OS 10.5, the old iBook has 10.3.9. The iBook wouldn't run the Migration Assistant, and I don't have a Firewire cable anyway...so we used the house wireless, I logged into the iBook through file sharing on the MacBook, and moved most of Library by hand. Great web page detailed which files to move...but can't find it any more.

    Sharing Files Using NFS

    March 9, 2008: Setting up an NFS server for local file sharing is much easier than I expected. But getting it to work is much harder:

    1. Install package nfs-kernel-server for the service.
    2. Tell NFS which folders to expose to the world with the 'export' file, sudo mousepad /etc/exports (I use Mousepad under xfce). The examples are a bit sparse, but there are a couple good examples here.
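Since the examples really are sparse, here's a hypothetical /etc/exports line (the path, network, and options are illustrative, not from my actual setup):

```
# /etc/exports - hypothetical example: share /home/me/shared read-only
# with every machine on the 192.168.1.x network
/home/me/shared  192.168.1.0/24(ro,sync,no_subtree_check)
```

After editing, sudo exportfs -ra reloads the export table without restarting the service.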

    No mucking about with .config files, no ports or services to modify. The Ubuntu packages are configured properly already.

    On the OLPC (great info here), it's just as easy to set up an NFS client:

    1. In Terminal, do su, then yum install nfs-utils
    2. Reboot, then in Terminal do su, then mount ipaddress:/ServerPath /LocalMountpoint, for example mount serverip:/exportedfolder /mnt
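    To make the mount survive a reboot, the same information can go in /etc/fstab. A sketch - the address and paths are placeholders:

```
# /etc/fstab -- mount the NFS share at every boot (sketch; address
# and paths are placeholders)
ipaddress:/ServerPath  /LocalMountpoint  nfs  defaults  0  0
```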

    The only big sticking point so far is permissions. The mount works, but the files are not accessible due to permissions (gid, uid, etc...). Very helpful overview here, plus a quick list of essential terminal commands in proper order.

    We'll see if it works after a reboot. Still need to figure out permissions, NFS client and server on Mac, NFS server on OLPC.

    Sharing Music Files Using Avahi

    March 8, 2008: Success! I shared my Linux (Ubuntu 7.10) music files with the Mac using Avahi on a wireless network. Now, if the Mac and Linux boxes are on the same network, the Mac can see and play any of the Linux songs.

    How I set it up (surprisingly easy):

    1. Install package mt-daapd for the service, and package avahi-discover as a diagnostic tool.
    2. Open a web browser to localhost:3689, Username: (blank), Password: mt-daapd. The project is called Firefly Server.
    3. On the Configuration tab, put in the directory to music files. On the Server Status tab, click Start Scan.
    4. Open the Mac and launch iTunes. Look in the Shared section for the Firefly Server, and test play a file.

    No mucking about with .config files, no command line work, no ports or services to modify.

    We'll see if it works after a reboot. Still need to figure out how to share iTunes music with Rhythmbox, and how to share both with the OLPC. Still need to figure out how to share video files, and other non-media files.

    Mounting a USB stick drive on an OLPC Laptop

    March 8, 2008: Success! How do you mount a USB stick on an OLPC laptop and get a window of the contents?

    Tip #1: Try several of the USB ports. I was successful in the lower right side port.

    Tip #2: Once inserted, the USB stick will automount under /media/UsbStickName. Just do an ls /media/ to see your stick there.

    Tip #3: For the GUI, look in the Journal. The stick shows as an icon on the bottom. Click it, and the window of contents opens!

    Tip #4: To unmount and eject the drive, use the Journal's USB Drive icon. Trying to use the umount command in Terminal fails, claiming the drive is busy.

    The joy of cURL

    March 8, 2008: curl is a wonderfully useful and scriptable upload/download tool. For example, instead of firing up FTP to upload this page to my website, I figured out that I can open a Terminal and enter: $curl -v -T localpath/TechBlog.html -u username:password ftp://ftp.freeservers.com/me/. And it does the whole thing for me!
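    Since curl is scriptable, that command can live in a tiny shell function. A sketch - the function name and the PREVIEW=echo dry-run trick are my own additions, not curl features:

```shell
# upload: wrap the curl invocation from this entry in a function.
# Set PREVIEW=echo to print the command instead of running it.
upload() {
  ${PREVIEW:-} curl -v -T "$1" -u username:password \
    "ftp://ftp.freeservers.com/me/"
}
# preview first: PREVIEW=echo upload localpath/TechBlog.html
```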

    Cleaned out orphaned packages

    March 8, 2008: Periodically, I use the terminal command deborphan to clean out orphaned packages. Found eight or nine.

    It's easy to use. Deborphan merely lists the items. I use Synaptic to remove them.
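    If you'd rather skip Synaptic, the list can be fed straight to apt-get. A sketch - purge_orphans is my own wrapper, not part of deborphan, and it removes packages without further ceremony, so review the list first:

```shell
# purge_orphans: feed deborphan's list to apt-get in one step.
# (My own wrapper, not part of deborphan; review the list before
# running, since apt-get will remove everything listed.)
purge_orphans() {
  deborphan                                   # show the candidates
  sudo apt-get remove --purge $(deborphan)    # remove them
}
```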

    Fixed Toshiba Sound!

    March 7, 2008: Since the reinstall, Xubuntu has output no sound at all. Found the following solution on the Ubuntu Forums.

    Edit the alsa config file: sudo mousepad /etc/modprobe.d/alsa-base

    Added the following lines at the bottom of the file:
    # Fix for Toshiba sound. From Ubuntu Forums
    options snd-hda-intel model=auto

    We'll see how it works at the next reboot.

    New MacBook on the way!

    March 5, 2008: My wife's old iMac G4 started making an ugly sound last night.

    So I backed it up immediately. We looked at the loose power connector, the buckling casing, and the noise...and decided it had become unreliable. So I ordered a replacement today.

    Looks like the kids get a spiffy toy.

    Reinstallation fallout

    March 5, 2008: So far, only three problems with the reinstallation.

    1. Some data from my shared partition was lost - my backup wasn't as thorough as I thought. I also had to reenter all my e-mail accounts into Evolution (backup copy on the Mac, whew). Nothing critical, just minor annoyances...and a couple Doctor Who episodes.
    2. The trackpad was fine for the first day, then suddenly became jumpy and unusable today. Looking through various forums, it seems I'm not alone. May or may not be related to the reinstall. On Windows, it was intermittent. On Xubuntu, it's always bad.
    3. Sound works great on Windows, no sound at all in Xubuntu. Well, I kind of expected it...

    Dual Boot (Win XP / Linux) Reconstruction

    March 4, 2008: I decided not to wait until April...

    Downloaded and burned an Ubuntu 7.10 Alternate CD. Reinstalled Windows (snore, 1 hour of installing, 2 hours of updating and faddle). Installed Ubuntu 7.10.

    Uh-oh. No, I didn't. The Ubuntu CD was corrupt, and instead hosed everything again.

    Reinstall Windows while using the Mac to download and burn an Ubuntu 7.10 Live CD.

    The live CD passes the self-test, but then is no longer recognized as bootable by the laptop.

    Use the installed Windows to download and burn Xubuntu 7.10 live CD. At the same time, installed Quickbooks and Openoffice, and de-bloated the tiny (10GB) Win partition. Even used the 'compression' feature to squeeze 2.5 GB free space.

    Install Xubuntu 7.10 - Success! Lots of adding and updating, but it's done. The laptop is usable again.

    Quickbooks Repair on the store PC

    March 3, 2008: I located the store's .qbw file on my laptop (the Linux partition is still fine), moved it to a stick, and took it to the store. Goal: Install the latest company data on the store PC, since I had been updating at home.

    QuickBooks has encountered a problem and needs to close. Microsoft Visual C++ Runtime Library Runtime Error! Program: C:\PROGRAM FILES\INTUIT\QUICKBOOKS\QBW32.EXE

    Okay, go to Add/Remove Control Panel and repair it...

    The following error occurs when trying to register the QuickBooks items: Internal Error 2908 {7D4B5591-4C80-42BB-B0E5-F2C0CEE02C1A}

    Argh. Turns out I had weeded unused programs last week. I hate bloat. I had removed Microsoft .NET Framework 2.0. Who knew it was installed by Quickbooks?

    Put in the Quickbooks CD. It recognized that the correct component was missing, and installed it. Ejected the CD and put it away.

    Went back to Add/Remove and the Repair option. It chugged a while (20-30 minutes) and reported success. Tried it with the moved company file....success! We're back in business.

    I Hosed My Windows Partition!

    March 2, 2008: I made a severe mistake in Wine - I set the virtual C: Drive on Wine to match the REAL Windows C: Drive (/media/sda1). BIG MISTAKE. I installed a program, and of course Wine's virtual registry probably overwrote the real Win registry. So now Windows refuses to boot at all...even into safe mode.

    Happily, the Ubuntu partition is just fine. I have a huge drive to backup to. My linux box can read/write to NTFS, and I have plenty of time to backup. So I won't lose any data.

    But it sucks. Maybe I'll wait until April (Ubuntu 8.04), and then fix it. Maybe not:

    1. List the Windows apps and Ubuntu Packages, registrations, usernames, and passwords
    2. Backup the data from the Ubuntu /home and Windows partitions.
    3. Download and burn an Ubuntu 8.04 disc. [7.10 alternate downloaded]
    4. Use the laptop's System Restore disk to reinstall Windows.
    5. Update Windows online.
    6. Reinstall my windows apps.
    7. Use the Ubuntu disk to partition the drive and install Ubuntu and the Grub bootloader.
    8. Reinstall my Ubuntu apps (much easier than the Win apps).
    9. Rebuild mail, browser shortcuts, and more.

    Scripting and CURL

    March 2, 2008: Learning to use curl to download schedule information. Using a tutorial to guide me through the minutiae of faking forms, and another to guide me through scripting it in Bash.

    Milwaukee Without A Car timetable project

    March 2, 2008: Beginning a major new project today. I want to bring intercity timetable information to my website, Milwaukee Without A Car.

    The goal is a set of pages, purely for my enjoyment:

    1. A news RSS feed and news.html page of local transit and company news, service advisories, media stories, and press releases.
    2. European-style yellow & white Arrival and Departure tables for the Milwaukee Intermodal Terminal.
    3. A table of the companies, when their timetables last changed, and when they next change.

    I want to develop a set of scripts that run on my Linux laptop to: Regularly download the information from the various companies (Python), organize it (XML), output it to several HTML pages (Python, XSL), and upload it to the website (cURL)...all automatically (cron).
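    The "all automatically" part would just be a crontab entry chaining the pieces together. A sketch - the script names are placeholders for whatever I end up writing:

```
# crontab -e -- run the pipeline nightly at 3:00 (sketch; the
# script names below are placeholders, not real files yet)
0 3 * * * /home/me/timetables/download.py && /home/me/timetables/build_pages.py && /home/me/timetables/upload.sh
```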

    March 2, 2008: Ambitious, considering how awful I am at scripting, and how bad most of the timetables and websites are. In the end, I should be a *lot* better at curl and awk!

    March 13, 2008: There's no good way to extract the data from PDF timetables online.

    Next step is to hand-collect some sample data, hand-create some XML from the data, nail down the XML format, and write the XSL to convert it to the three HTML table formats I want: Yellow sheet (departures), White sheet (arrivals), and timetable format.

    The final HTML on the website does not need to be dynamic - the data changes only every two weeks or month.

    March 28, 2008: My utterly brilliant brother-in-law, Arno, has had a big influence on the plan.

    May 18, 2008: Success! The RSS Feed is up. It's only the first few companies, but the system works!

    July 2, 2008: The feed had a fatal bug on June 11, but I rewrote it and resumed today. It's cleaner, susceptible to fewer fatal bugs, easier to trace breakage, and includes more companies.

    March 3, 2009: The news feeds for ground transport and airlines are solid. Haven't started the others yet. Parsing (scraping) web pages turns out to be a big pain, especially when some designer changes the format - but parsing RSS feeds is much easier. Haven't figured out yet how to scrape timetable data, but I still want to.

    Tarnation, Chapter 6

    March 2, 2008: I need to try restarting the Mac to retest Skype.

    Ubuntu Brainstorm, Chapter 3

    March 2, 2008: Installed wine-doors, a package manager for Wine. Nothing useful in it, but worth keeping an eye on.

    More fun with Ubuntu Brainstorm

    March 1, 2008: Learned a lot at Ubuntu Brainstorm...mostly that people are not much smarter than cockroaches. Great fun!

    I tried two packages based on comments in Brainstorm:

    mc (midnight commander) is a command-line file manager.

    gnome-do is an adaptive workflow application launcher.

    Both are pretty cool to know about, but they're just not something I'll be using. I'm in more of a learn-to-script-it mood.

    Back to Tracker?

    March 1, 2008: Tracker is a powerful indexing package - that was too buggy in October 07 and killed the system. It hogged the CPU and filled(!) the /home directory disk with a corrupt index file. Bad. Bad. Bad. Uninstalled it back in October after a day of frustration and high dudgeon.

    In April 2008, Ubuntu 8.04 will include the 'fixed' and improved Tracker. It looks cool and promising - the bug reports suggest all the big problems were fixed last fall....

    Okay, I'll try it again in April. I'm a fool for love. But not sooner - the current version on Synaptic is the 7.10 buggy one.

    Update: I never got around to Tracker - I installed Xubuntu instead, which has no great need for it.


    Listing dependencies with apt-rdepends

    March 1, 2008: Cool! Package apt-rdepends installs a shell command (also apt-rdepends) that gives a clean list of all dependencies. Reverse searching is also possible. I needed something like this in Iraq, when I was upgrading by hand due to a lack of internet connection for my laptop.

    Interestingly, Synaptic has grown a script-maker for offline package changes. It generates a script that you use on a connected machine to download the right packages. I could have used that, too...

    Now they are merely of academic interest - unless I go back.

    Instead of installing the apt-rdepends package, you can get a similar reverse listing with the apt-cache rdepends packagename command (and a plain dependency listing with apt-cache depends packagename).

    Ubuntu Brainstorm is addictive

    February 29, 2008: Trying to limit myself to 30 minutes per session at Ubuntu Brainstorm. Reading the silly ideas (and the few really good ones), and responding to many. So many are already implemented! Many people want to bloat up the distro, others to pare it down to the bone. Very geeky fun.

    Update April 10, 2009: Today I became an idea reviewer for Brainstorm.

    Update May 15, 2011: Today I became an Admin for Brainstorm!

    Tarnation Chapter 5: Gizmo and Skype

    February 29, 2008: Friday afternoons I go to the store with the kids. Can't get much done, but I try.

    Test call from store Windows PC to the store ATA next to it using Gizmo Project. Call connected. Inconclusive - no mic on the PC, and it's inbound calls that are the problem.

    Test call from the ATA to an 800 number (free). Successful two-way conversation. Since it's inbound calls that are the problem, it's likely a router setting issue. I didn't get time to poke at the router (darn).

    Test calls from PC to Mac and Mac to PC using Skype. Mac never acknowledged inbound calls, but PC did. Mac never updated PC's online status. Successful connection from Mac to PC only after restarting the Mac's Skype. Couldn't test voice (no mic on PC), but both were behind the same router. Result: Mac's Skype seems to have a problem. Need to check Skype support for Mac.

    Tarnation Chapter 4: Gizmo and Skype

    February 28, 2008: Two failed test calls.

    Test call from laptop to store ATA using Gizmo Project. Phone rang, picked up. She couldn't hear me, but I could hear her beautifully. Likely firewall issue at store.

    Test call from laptop to her Mac using Skype. Her Skype never rang. She didn't see my status as 'online'. Hmmm....

    Ubuntu Bug #158706 Verified

    February 28, 2008: Success! A couple months ago I submitted a bug. Today they confirmed it. (I made a good report)

    When installing the package netatalk 2.0.3-6ubuntu1 on Ubuntu 7.10, the following errors occur:

    hostname: Unknown host
    invoke-rc.d: initscript netatalk, action "stop" failed.
    dpkg: warning - old pre-removal script returned error exit status 1
    dpkg - trying script from the new package instead ...
    hostname: Unknown host
    invoke-rc.d: initscript netatalk, action "stop" failed.
    dpkg: error processing /var/cache/apt/archives/netatalk_2.0.3-6ubuntu1_i386.deb (--unpack):
     subprocess new pre-removal script returned error exit status 1
    hostname: Unknown host
    invoke-rc.d: initscript netatalk, action "start" failed.
    dpkg: error while cleaning up:
     subprocess post-installation script returned error exit status 1
    Errors were encountered while processing:
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    A package failed to install. Trying to recover:
    dpkg: error processing netatalk (--configure):
     Package is in a very bad inconsistent state - you should
     reinstall it before attempting configuration.
    Errors were encountered while processing:
    Press return to continue.

    How to fix it:

    1. edit /etc/default/netatalk
      FROM: ATALK_NAME=`/bin/hostname --short`
      TO: ATALK_NAME=`/bin/hostname`
    2. edit /etc/init.d/netatalk
      FROM: ATALK_NAME=`/bin/hostname --short`
      TO: ATALK_NAME=`/bin/hostname`
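    Both edits are the same one-character-class substitution, so sed can apply them. Here it is tried out on a throwaway copy first; on the real system the targets would be /etc/default/netatalk and /etc/init.d/netatalk, run with sudo:

```shell
# Demonstrate the netatalk fix on a scratch copy before touching /etc.
printf 'ATALK_NAME=`/bin/hostname --short`\n' > /tmp/netatalk.demo
sed -i 's|/bin/hostname --short|/bin/hostname|' /tmp/netatalk.demo
cat /tmp/netatalk.demo   # now: ATALK_NAME=`/bin/hostname`
```

    The same sed line, pointed at the two real files with sudo, makes both edits in one shot.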


    Ubuntu Startup: services-admin prompts for password

    February 27, 2008: Every time I login to my Ubuntu 7.10 laptop, I get the 'happy login' music and the screen begins to populate with my desktop.

    Then it vanishes (before I can click on anything), and services-admin prompts me for my password. Whether I enter the password or simply cancel, the system thinks for a moment, then brings up the network and brings back the desktop.

    The only difference I've noticed is that CUPS seems to be active only when I've entered my password, and isn't if I cancelled instead. If all it's starting is CUPS, then I don't care if services-admin starts or not; I can start it manually. I'm annoyed by services-admin prompting me.

    Result: Nothing in bugs, nothing in forums. Left a post at ubuntuforums.org to see if anyone can help.

    Update April 2009: Upgrading to 8.04 eliminated the problem. The ubuntuforums thread was never answered.

    Tarnation Chapter 3 - Ekiga and Gizmo Project

    February 27, 2008:

    1. Skype: Successful echo test. But Skype on Mac at the store errors out "unknown error" when I try to Skype it. When I was there, I restarted Skype, and it seemed to work normally.
    2. Gizmo: Successful echo test after playing with the router firewall. I turned it off and then on again. But the garble is gone, and calls complete. Next time I have problems, just cycle the router? But calling the ATA at the store failed - after 'Hello?' the call dropped.
    3. Ekiga: Turning the firewall off made no difference in an Ekiga echo test. Still garbled outbound.

    Next step: Check the router at the store.

    Linux Startup: intel_rng: FWH not detected

    February 27, 2008: Success! Every time I boot Linux, one of the startup messages is intel_rng: FWH not detected. Today I fixed it!

    The item shows up in /var/log/dmesg as Feb 27 06:28:03 my_computer kernel: [ 15.128000] intel_rng: FWH not detected

    It turns out to have a simple explanation, and the suggested solution is to 'turn off' the kernel module by blacklisting it.

    I used sudo gedit /etc/modprobe.d/blacklist to edit the blacklist file as root. I added the following at the bottom of the file:

    # prevents minor boot error 'intel_rng: FWH not detected'
    blacklist intel_rng

    Reboot, and success! The error message is gone.

    Tarnation Chapter 2 - Ekiga and Gizmo Project

    February 26, 2008: Ekiga Bad - I double checked the Port Triggering on the home router - it's fine. But Ekiga still garbles and echoes my voice. Inbound audio is great. Not usable. Ekiga mailing list had a fellow with a similar problem in Oct 07 using Win XP, probably a driver issue. Might be here, too - audio, though it works, has always been a bit iffy on the Linux Toshiba.

    Ekiga Good - Make calls from the command line using ekiga -c recipient@proxy. For example, the Gizmo echo service is ekiga -c sip:17474743246@proxy01.sipphone.com or ekiga -c sip:echo@proxy01.sipphone.com

    Gizmo Bad - Several test calls, all disconnected instantly. Not usable. Opened SIPPhone ticket #YBE-375340.

    Skype good - Test call to my awesome Brother-In-Law, Barrett, in Texas. He heard me clearly. Didn't have his mic so he couldn't talk back, but audio in has never been a problem...

    Tarnation - Ekiga and Gizmo Project

    February 25, 2008: Failure! We have an Analog Telephone Adapter (ATA) hooked up to the store phone. That way, it can receive both PSTN and Internet calls. The store SIP URL is sip:17476010543@proxy01.sipphone.com. Er, that's the store, and I'm usually not there, so please don't call it to chat...you'll just annoy my wife.

    In the past, I've used this ATA (attached to Gizmo Project account 'kiwkak5678') to dial out from home and the store, so I know it works.

    Yesterday I phoned it from home using Ekiga 2.0.11 for Linux (Ubuntu 7.10 on Toshiba A105-S4054) on my laptop (connection test only - no mic) - Success! The phone rang four times, the answering machine picked up, and I heard the answering machine message.

    Today I tried a couple Ekiga tests on my laptop:

    1. Ekiga voice tests
      • Gizmo Project's echo service sip:echo@proxy01.sipphone.com:5060. Successful connection using my Gizmo account on Ekiga. Too much jittery echo, and too much recurring echo (echo, echo, echo, echo).
      • Gizmo Project's TellMe service at sip:tellme@proxy01.sipphone.com. Successful connection from both Ekiga and Gizmo accounts on Ekiga. Voice response system came through clearly, but it couldn't hear my voice commands properly.
      • Ekiga's echo service at sip:500@ekiga.net. Created an Ekiga account, and successfully connected. From the Gizmo account, failed with "Security Check Failed". Same sound quality result as the Gizmo echo test.
    2. Ekiga video test. Success! Ekiga detected my camera using the standard V4L driver, and put a video picture in the correct Ekiga window.
    3. Gizmo Project for Linux voice tests
      • Gizmo Project's echo service. Connected, but too much recurring echo (echo, echo, echo, echo).
      • Store phone via SIP/ATA. Three test calls rang the other end, but neither side could hear the other. Caller ID picked up the SIP call as 'Out Of Area'.
      • Gizmo Project's TellMe service. Successful connection. Voice response system came through worse than Ekiga, but understandable. It could correctly hear and interpret most of my voice commands. It tried to interpret other signals (background? echo or garble?) at times.
    4. Tried a voice test of Skype. Success! Good sound quality both ways on their echo service. Tried connecting to my wife's Mac. Success!

    Support forum results: Ekiga has nothing at all about the echo echo echo echo. Gizmo claims most problems are really NAT/Firewall issues (never theirs).

    Next Steps:

    Created My Tech Blog

    February 25, 2008: Success! Today I created and tested this page, plus the feedback webform.

    Success stories are so rare....

    Converting a .tiff to a .pdf in Ubuntu

    February 25, 2008: For some reason, my CUPS under Ubuntu 7.10 refuses to print-to-pdf. I've tried everything I know (not much)...and probably broke a lot of stuff along the way.

    But I found an easy way to do it from the command line.

    1. Scan the item to a .tiff file in a known location (for example, save it to the Desktop)
    2. Open a terminal and navigate to that location (for example cd Desktop)
    3. convert -density 300 -units PixelsPerInch scanned_file.tiff converted_file.ps
    4. ps2pdf13 -sPAPERSIZE=letter converted_file.ps final_output_file.pdf
    5. rm scanned_file.tiff converted_file.ps
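    Steps 3-5 can be collapsed into one helper for the next scan. A sketch - tiff2pdf is my own wrapper name, and it assumes ImageMagick's convert and ghostscript's ps2pdf13 are installed:

```shell
# tiff2pdf: TIFF -> PS -> PDF, then clean up the intermediate file.
# (Sketch, my own wrapper; assumes convert and ps2pdf13 exist.)
tiff2pdf() {
  convert -density 300 -units PixelsPerInch "$1" "${1%.tiff}.ps"
  ps2pdf13 -sPAPERSIZE=letter "${1%.tiff}.ps" "${1%.tiff}.pdf"
  rm "${1%.tiff}.ps"   # remove the intermediate PostScript file
}
# usage: tiff2pdf scanned_file.tiff
```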
    © 2008-2011 Ian Weisser