If you're still using ifconfig, you're living in the past

The world evolves

I regularly see "recommendations" to use ifconfig to get interface information in mailing list posts, bug reports, and other places. I might even be guilty of it myself. Still, the world of networking has evolved quite a lot since ifconfig was the de facto standard for bringing up a device, checking its IP, or setting one.

Following some improvements in the kernel and the gradual move to driving network configuration via netlink, ifconfig has been largely replaced by the ip command.

Running just ip yields the following:

Usage: ip [ OPTIONS ] OBJECT { COMMAND | help }
       ip [ -force ] -batch filename
where  OBJECT := { link | address | addrlabel | route | rule | neigh | ntable |
                   tunnel | tuntap | maddress | mroute | mrule | monitor | xfrm |
                   netns | l2tp | fou | macsec | tcp_metrics | token | netconf | ila }
       OPTIONS := { -V[ersion] | -s[tatistics] | -d[etails] | -r[esolve] |
                    -h[uman-readable] | -iec |
                    -f[amily] { inet | inet6 | ipx | dnet | mpls | bridge | link } |
                    -4 | -6 | -I | -D | -B | -0 |
                    -l[oops] { maximum-addr-flush-attempts } | -br[ief] |
                    -o[neline] | -t[imestamp] | -ts[hort] | -b[atch] [filename] |
                    -rc[vbuf] [size] | -n[etns] name | -a[ll] | -c[olor]}

I understand this may look complicated to some people, but the gist of it is that with ip, you interact with objects and apply some kind of command to them. For example:

ip address show

This is the main command to use in place of ifconfig. It will display the IP addresses assigned to all interfaces. To be precise, it shows you the layer 3 details of each interface: the IPv4 and IPv6 addresses, whether it is up, the various properties related to the addresses...
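
On a typical system, the output looks something like this (the device name and addresses here are examples):

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether aa:bb:cc:dd:ee:ff brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.5/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a8bb:ccff:fedd:eeff/64 scope link
       valid_lft forever preferred_lft forever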

Another command will give you details about the layer 2 properties of the interface: its MAC address (ethernet address), and so on; some of this is also shown by ip address:

ip link show

Furthermore, you can set devices up or down (similar to ifconfig eth0 up or ifconfig eth0 down) simply by using:

ip link set DEVICE up
ip link set DEVICE down
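
For example, assuming a device named eth0, you could bring it up and then verify its state with:

# 'eth0' is an example device name; substitute your own
ip link set eth0 up
ip link show eth0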

As shown above, there are lots of other objects that can be interacted with using the ip command. I'll cover another one, ip route, in a future post.

Why is this important?

As time passes, more and more features are easier to use with the ip command than with ifconfig. We've already stopped installing ifconfig on desktops (it still gets installed on servers for now), and people have been discussing dropping net-tools (the package that ships ifconfig and a few other old commands that have since been replaced) for a while now. It may be time to revisit not installing net-tools by default anywhere.

I want to know about your world

Are you still using one of the following tools?

/bin/netstat    (replaced by ss, to which I'll dedicate another blog post entirely)
/sbin/ifconfig
/sbin/ipmaddr   (replaced by ip maddress)
/sbin/iptunnel
/sbin/mii-tool    (ethtool should appropriately replace it)
/sbin/nameif
/sbin/plipconfig
/sbin/rarp
/sbin/route
/sbin/slattach
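
For reference, here is a rough mapping of some common invocations to their iproute2 equivalents (a quick sketch; the flags shown are illustrative, not exhaustive):

ifconfig -a       ->  ip address show
route -n          ->  ip route show
netstat -tlnp     ->  ss -tlnp
ipmaddr show      ->  ip maddress show
iptunnel show     ->  ip tunnel show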

If so, and there is no alternative from iproute2 (that is, the ip or ss commands) that you can use to do the same thing, I want to know how you are using them. We're always watching for things that might be broken by changes; we want to avoid breaking things whenever possible.

Quick and easy network configuration with Netplan

Earlier this week I uploaded netplan 0.21 to artful, with SRUs in progress for the stable releases. There are still lots of features coming up, but it's already quite useful. You can use it today to describe typical network configurations on desktops and servers, all the way up to interesting, complicated setups like a bond over a bridge over multiple VLANs...

Getting started

The simplest netplan configuration might look like this:

# Let NetworkManager manage all devices on this system
network:
  version: 2
  renderer: NetworkManager

At boot, netplan will see this configuration (which happens to be installed already on all new systems since 16.10) and generate a single, empty file: /run/NetworkManager/conf.d/10-globally-managed-devices.conf. This tells the system that NetworkManager is the only renderer for network configuration, and will manage all devices by default.

Working from there: a simple server

Let's look at a hypothetical web server, such as one serving my favourite test site: www.perdu.com.

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true

This incredibly simple configuration tells the system that the eth0 device is to be brought up using DHCPv4. Netplan also supports DHCPv6, as well as static IPs, setting routes, etc.
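
As a sketch of what a static setup could look like (the addresses here are placeholders):

network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 192.168.1.10/24
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1, 8.8.8.8]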


Building up to something more complex

Let's say I want a team of two NICs, and use them to reach VLAN 108 on my network:

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: no
    eth1:
      mtu: 1280
      dhcp4: no
  bonds:
    bond0:
      interfaces:
        - eth1
        - eth0
      mtu: 9000
  vlans:
    bond0.108:
      link: bond0
      id: 108

I think you can see just how simple it is to configure even pretty complex networks, all in one file. The beauty of it is that you don't need to worry about what will actually set this up for you.
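
To try a configuration like this, drop it in a YAML file under /etc/netplan/ and regenerate the backend configuration (a sketch: the file name is arbitrary, and netplan apply may require a newer netplan than 0.21, in which case a reboot achieves the same):

sudo editor /etc/netplan/01-mynetwork.yaml    # paste the YAML from above
sudo netplan generate                         # render the backend config under /run
sudo netplan apply                            # or reboot if apply isn't available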

A choice of backends

Currently, netplan supports either NetworkManager or systemd-networkd as a backend. The default is systemd-networkd, but given that it does not support wireless networks, we still rely on NetworkManager to handle those.

This is why you don't need to care about what ends up supporting your config: netplan abstracts that for you. It generates the required configuration based on the "renderer" property, so that you don't need to know how to define the special device properties in each backend.
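
For instance, with the systemd-networkd renderer you can peek at what netplan generated (paths as I understand them for current versions):

ls /run/systemd/network/          # .network/.netdev files for systemd-networkd
ls /run/NetworkManager/conf.d/    # configuration handed to NetworkManager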

As I mentioned previously, we are still hard at work adding more features, but the core is there: netplan can set up bonds, bridges, vlans, and standalone network interfaces, and do so for both static and DHCP addresses. It also supports many of the most common bridge and bond parameters used to tweak the precise behaviour of bonded or bridged devices.


Coming up...

I will be adding proper support for setting a "cloned" MAC on a device. The code to do this is already under review, and I'm ironing out the last issues.

There are also plans for better handling of administrative states for devices, along with a few bugs related to supporting MAAS, where having a simple configuration style really shines.

I'm really excited about where netplan is going. It seems to have a lot of potential to address some of the current shortcomings in other tools. I'm also really happy to hear stories of how it is being used in the wild, so if you use it, don't hesitate to let me know!

Contributing

All of the work on netplan happens on Launchpad. Its source code is at https://code.launchpad.net/netplan; we always welcome new contributions.

Nominations wanted for the Developer Membership Board

Hi!

The Ubuntu Developer Membership Board is in need of new blood.

Of the seven members of the board, the terms of five (5) will expire on March 9th. Members of the Developer Membership Board are elected by all Ubuntu developers for a term of two years, and meet in #ubuntu-meeting about once a fortnight. Candidates should be Ubuntu developers themselves, and should be well qualified to evaluate prospective Ubuntu developers.

The DMB is responsible for reviewing developer applicants and deciding when to entrust them with developer privileges or grant them Ubuntu membership status.

Provided at least six valid nominations are received, the new members will be chosen using Condorcet voting. Members of the ubuntu-dev team in Launchpad will be eligible to vote, and will receive voting ballots by email (at the address recorded in Launchpad). A call for nominations has already been sent to the ubuntu-devel-announce mailing list (another should follow soon): https://lists.ubuntu.com/archives/ubuntu-devel-announce/2016-February/001167.html.

Applications should be sent as GPG-signed emails to developer-membership-board at lists.ubuntu.com (which is a private mailing list accessible only by DMB members).

Of course, if you're nominating a developer other than yourself, please ask the person you're about to nominate beforehand, to make sure they're okay with it.

In full tinfoil hat mode: Using GPG with smartcards

Breaking OPSEC for a bit to write a how-to on using GPG keys with smartcards...

I've thought about experimenting with smartcards for a while. Turns out that my Thinkpad has a built-in smartcard reader, but most of my other systems don't. Also, I'd like to use a smartcard to protect my SSH keys, some of which I may use on systems that I do not fully control (e.g. at the university, to push code to GitHub or Bitbucket), or to get to my server. Smartcard readers are great, but they're not much fun to add to the list of stuff to carry everywhere.

There's an alternate option: the Yubikey. Yubico appears to have made a version 4 of the Yubikey, which has CCID (smartcard magic), U2F (2-factor for GitHub and Google, on Chrome), and their usual OTP token, all on the same tiny USB key. What's more, it is documented as supporting 4096-bit RSA keys, and includes some ECC support (more on this later).

Setting up GPG keys for use with smartcards is simple. You have the choice of either creating your keys locally and moving them onto the smartcard, or generating them on the smartcard right away. In order to have a backup of my full key available in a secure location, I've opted to generate the keys off the card and transfer them.

For this, you will need one (or two) Yubikey 4 (or Yubikey 4 Nano; or, if you don't mind being limited to 2048-bit keys, the Yubikey NEO, which can also do NFC), some backup media of your choice, and apparently at least the following packages:

gnupg2 gnupg-agent libpth20 libccid pcscd scdaemon libksba8 opensc

You should do all of this on a trusted system, not connected to any network.

First, set up gnupg2 to a reasonable level of security. Edit ~/.gnupg/gpg.conf to pick the options you want; I've based my config on Jeffrey Clement's blog entry on the subject:

#default-key AABBCC90DEADBEEF
keyserver hkp://keyserver.ubuntu.com
no-emit-version
no-comments
keyid-format 0xlong
with-fingerprint
use-agent
personal-cipher-preferences AES256 AES192 AES CAST5
personal-digest-preferences SHA512 SHA384 SHA256 SHA224
cert-digest-algo SHA512
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed

You'll want to replace default-key later with the key you've created, and uncomment the line.

The downside to all of this is that you'll need to use gpg2 in all cases rather than gpg, which is still the default on Ubuntu and Debian. gpg2 so far seems to work just fine for every use I've had (including debsign, after setting DEBSIGN_PROGRAM=gpg2 in ~/.devscripts).

You can now generate your master key:
gpg2 --gen-key

Then edit the key to add new UIDs (identities) and subkeys, which will each have their own different capabilities:

gpg2 --expert --edit-key 0xAABBCC90DEADBEEF

It's best to follow jclement's blog entry for this; there is no point in reiterating all of it. There's also a pretty complete guide from The Linux Foundation IT here, though it seems to include a lot of stuff that does not appear to be required on my system, in xenial.

Add the subkeys. You should have one for encryption, one for signing, and one for authentication. That works out pretty well, since there are three slots on the Yubikey, one for each of these capabilities.

If you also want your master key on a smartcard, you'll probably need a second Yubikey (that's why I wrote two earlier), which would only get used to sign other people's keys, extend expiration dates, generate new subkeys, etc. That one should be left in a very secure location.

This is a great point to backup all the keys you've just created:

gpg2 -a --export-secret-keys 0xAABBCC90DEADBEEF > 0xAABBCC90DEADBEEF.master.key
gpg2 -a --export-secret-subkeys 0xAABBCC90DEADBEEF > 0xAABBCC90DEADBEEF.sub.key
gpg2 -a --export 0xAABBCC90DEADBEEF > 0xAABBCC90DEADBEEF.pub

Next step is to configure the smartcard/Yubikey to add your name, a URL for the public key, set the PINs, etc. Use the following command for this:
gpg2 --card-edit
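
Inside the card editor, the relevant commands look something like this (a sketch of the gnupg card-edit prompt; names and PINs are yours to pick):

gpg/card> admin      # enable administrative commands
gpg/card> name       # set the cardholder name
gpg/card> url        # set the URL where your public key can be fetched
gpg/card> passwd     # change the User PIN and Admin PIN
gpg/card> quit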

Finally, go back to editing your GPG key:
gpg2 --expert --edit-key 0xAABBCC90DEADBEEF

From this point you can use toggle to select each subkey (using key #), move them to the smartcard (keytocard), and deselect them (key # again). To move the master key to the card, "toggle" out of toggle mode and back in, then immediately run keytocard. GPG will ask if you're certain. There is no way to get a key back out of the card: if you want a local copy, you need to have made a backup first.
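
A sketch of what that session looks like (prompts abbreviated; the exact flow varies a bit between gnupg versions):

gpg> toggle        # switch to the secret key view
gpg> key 1         # select the first subkey
gpg> keytocard     # move it to the card, picking the matching slot
gpg> key 1         # deselect it again
gpg> key 2
gpg> keytocard
gpg> key 2
gpg> key 3
gpg> keytocard
gpg> save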

Now's probably a great time to copy your key to a keyserver, so that people may eventually start to use it to send you encrypted mail, etc.

After transferring the keys, you may want to make a "second backup", which would only contain the "clues" for GPG to know on which smartcard to find the private part of your keys. This will be useful if you need to use the keys on another system.
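
A sketch of that second backup: once the private material has been moved to the card, exporting the secret keys only yields the on-card stubs (the file name is just an example):

gpg2 -a --export-secret-keys 0xAABBCC90DEADBEEF > 0xAABBCC90DEADBEEF.stubs.key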

Another option is to use the public portion of your key (saved somewhere, like on a keyserver), then have gpg2 discover that it's on a smartcard using:

gpg2 --card-status

Unfortunately, it appears to pick up either only the master key or only the subkeys if you use separate smartcards. This may be a blessing in disguise, in that you'd still only use the master key on an offline, very secure system, and only the subkeys in your typical daily-use scenario.

Don't forget to generate a revocation certificate. This is essential if you ever lose your key, if it's compromised, or you're ever in a situation where you want to let the world know quickly not to use your key anymore:

gpg2 --gen-revoke 0xAABBCC90DEADBEEF

Store that data in a safe place.

Finally, more on backing up the GPG keys. It could be argued that keeping your master key on a smartcard might be a bad idea. After all, if the smartcard is lost, while it would be difficult to get the key out of the smartcard, you would probably want to treat it as compromised and get the key revoked. The same applies to keys kept on USB drives or on CD. A strong passphrase will help, but you still lost control of your key and at that point, no longer know whether it is still safe.

What's more, USB drives and CDs tend to eventually fail. CDs rot after a number of years, and USB drives just seem to not want to work correctly when you really need them. Paper is another option for backing up your keys, since there are ways (paperkey, for instance) to represent the data in a way that it could either be retyped or scanned back into digital data to be retrieved. Further securing a backup key could involve using gfshare to split it into multiple bits, in the hope that while one of its locations could be compromised (lost), you'll still have some of the others sufficient to reconstruct the key.

With the subkeys on the Yubikey, and provided gpg2 --card-status reports your card as detected, if you have the gpg-agent running with SSH support enabled you should be able to just run:

ssh-add -l

And have it list your card serial number. You can then use ssh-add -L to get the public key to use to add to authorized_keys files to use your authentication GPG subkey as a SSH key. If it doesn't work, make sure the gpg-agent is running and that ssh-add uses the right socket, and make sure pcscd isn't interfering (it seemed to get stuck in a weird state, and not shutting down automatically as it should after dealing with a request).
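
For reference, this is how I understand SSH support gets enabled (socket paths vary with the GnuPG version):

# ~/.gnupg/gpg-agent.conf
enable-ssh-support

# then point SSH at the agent's socket, e.g. in your shell startup:
export SSH_AUTH_SOCK="$HOME/.gnupg/S.gpg-agent.ssh"
# (on newer GnuPG 2.1, 'gpgconf --list-dirs agent-ssh-socket' gives the path)
ssh-add -l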

Whenever you try to use one of the subkeys (or the master key), rather than being asked for the passphrase for the key (which you should have set as a very difficult, absolutely unguessable string that you and only you could remember and think of), you will be asked to enter the User PIN set for the smartcard.

You've achieved proper two-factor authentication.

Note on ECC on the Yubikey: according to the marketing documentation, the Yubikey knows about ECC p256 and ECC p384. Unfortunately, it looks like safecurves.cr.yp.to considers these unsafe, since they do not meet all the SafeCurves requirements. I'm not especially versed in cryptography, so this means I'll read up more on the subject, and stay away from the ECC implementation on the Yubikey 4 for now. However, it doesn't seem, at first glance, that this ECC implementation is meant for GPG at all. The Yubikey also has PIV magic which would allow it to be used as a pure SSH smartcard (rather than using a GPG authentication subkey for SSH), with a private certificate being generated by the card. These certificates can be created using RSA or ECC. I tried to play a bit with it (using RSA), following the "SSH with PIV and PKCS11" document on developers.yubico.com, but I didn't manage to make it work. It looks like the GPG functions might interfere with PIV in some way, or I could just not be handling the ssh-agent the right way. I'm happy to be shown how to use this correctly.

Installer session at UOS

If you're interested in how Ubuntu gets installed on systems, want to ask about specific features, or have already filed bugs that you'd like to bring to our attention, watch for my session on the calendar:

http://summit.ubuntu.com/uos-1505/meeting/22512/core-1505-installer-healthcheck/

It's currently scheduled for Tuesday, May 5th at 18:00 UTC (that's a little more than 24 hours from now!); but just in case it changes time, make sure you're marked as attending and subscribed to the blueprint.

As stated in the blueprint summary, I can't guarantee we'll get to everything, but it will be the right place to see what has to be done, and for anyone to pitch in time if they're interested in helping out!

Preseeding installations

In early February, I completed a move from Canonical's Phonedations team to the Foundations team. Part of this new work means debugging a lot of different failure cases in the installer, grub, and other early-boot or low-level software, some of which require careful reproduction steps and probably quite a few install runs in VMs or on hardware.

Given the number of installations I do, I've started to keep preseed files around: the text files used to configure automatic installations. I've made them available at http://people.canonical.com/~mtrudel/preseed/ so that they can be reused as necessary. Most of these preseed files make heavy use of the network to get the installation data and packages from the web, so they will need to be tweaked for use in an isolated network. They are annotated well enough that it should be possible for anyone to improve on them to suit their own needs. I will add to these files as I run across things to test and automate. I hope we can soon use some of them in new automated QA tests where appropriate, so that they can help catch regressions.

For those not familiar with preseeding: these files can be referred to on the installation command line when starting from a network PXE boot, a CDROM, or pretty much any other installation medium. They are useful to tell the installer how you want the installation done, without having to answer all of the individual questions one by one in the ubiquity or debian-installer forms. The installer will read the preseed file and use those answers without showing the prompts. This also means some of the files I make available should not be used lightly, as they will happily wipe disks without asking. You've been warned :)

To use this, you'll want to specify "preseed/file=/path/to/file" (or just file=) for a file directly accessible on a file system or through TFTP, or "preseed/url=http://URI/to/file" (or just url=) if it's available over HTTP. On d-i installs, you may also need to add "auto=true priority=critical" to avoid having to fill in language settings and the like (since the preseeds are typically only read after language, country, and network have been configured); and on ubiquity installs (for example, using a CD), you'll want to add "only-ubiquity automatic-ubiquity" to the kernel command line, again to keep the automated, minimal look and feel.
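
Putting it together, a hypothetical pxelinux entry for a fully automated d-i install could look like this (the kernel and initrd paths follow the standard netboot layout; the preseed file name is an example):

label auto-install
  kernel ubuntu-installer/amd64/linux
  append initrd=ubuntu-installer/amd64/initrd.gz auto=true priority=critical url=http://people.canonical.com/~mtrudel/preseed/example.cfg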

I plan on writing another entry soon on how to debug early boot issues in VMs or hardware using serial. Stay tuned.

Call for testing: urfkill / Getting flight mode to rock on Ubuntu and Ubuntu Touch

Last month, I blogged about urfkill, and what it's meant to be used for.

The truth is, flight mode and proper killswitch handling (read: disabling radios on devices) is something that needs to happen on any device that deems itself "mobile". As such, it's one thing we need to look into for Ubuntu Touch.

I spent the last month or so working on improving urfkill. I've implemented better logging, a way to get debugging logs, flight mode (with patches from my friend Tony Espy), persistence, ...

At this point, urfkill seems to be in the proper state to make it into the distro, with the latest changes from the upstream git repository. There is no formal release yet, but one is likely to happen very soon. So, I uploaded a git snapshot of the urfkill upstream repository into trusty. It's now time to ask people to try it out, see how well it works on their systems, and generally get to know how solid it is, and whether it's time to enable it on the desktop.

In time, it would be nice to replace the current implementation of killswitch persistence (saving and restoring the state of the "soft" killswitches), currently handled by two upstart jobs (rfkill-store and rfkill-restore), with urfkill as a first step, for the 14.04 release (and to handle flight mode on Touch, of course). In the end, my goal is to achieve convergence on this particular aspect of the operating system sooner rather than later, since it's a relatively small part of the overall communications/networking picture.

So I call on everyone running trusty to install the urfkill package, try it out, and file bugs or otherwise get me feedback on the software. :)
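
Giving it a spin should be as simple as this (the daemon binary is named urfkilld, if memory serves):

sudo apt-get update
sudo apt-get install urfkill
pidof urfkilld    # check that the daemon is running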

urfkill : a daemon to centrally control RF killswitches

Here's another project of the u-daemon variety, the latest addition to the family of upower, udev, etc. Meet urfkill.

urfkill is meant to be a daemon that centralizes killswitch handling. Rather than having all kinds of different applications and daemons handle Wi-Fi, Bluetooth, WWAN and whatnot separately, and potentially fighting over them, you can have just one system that tracks the states and makes it possible to switch all killswitches of one type at once, or turn everything off should you so desire...

One reason I've taken an interest in urfkill in Ubuntu is that as we build a phone, we have to keep thinking about how users of these devices will be mobile. That's even more the case for a phone or tablet than for a laptop: on a laptop, you may have to think of WiFi and Bluetooth, but you're just about as likely to have your laptop off or not have it with you at all; whereas phones and tablets have become ubiquitous in our way of life.

Like anyone thinking mobile, I'd first think of walking around, driving, or other methods of travel. Granted, nobody needs to turn off Wi-Fi when getting in their car, but what about on planes?

This is the first thing everyone brings up when talking about killswitches: planes. Alright, you really do need to turn the device off for takeoff and landing, but some airlines now allow wifi to be on and offer in-flight service. They still require you to keep cellular and bluetooth off. Also, while I sometimes do take my laptop out of my bag on long flights, it's just cramped. Space is at a premium on a flight (hey, I fly economy...), you'll likely want to have a drink, people beside you may need to get up, spillage could occur if there is turbulence...

I don't really enjoy using my laptop on a flight, even though it's quite small. It's just so much trouble and not very comfortable.

However, I do love to watch saved movies, listen to music, and play games on a tablet. That tablet will most likely need to have its radios turned off. My phone will typically just stay off and stowed away, since I don't change SIM cards until I can do so safely without risking losing the thing.

But then, one can also think of how you should avoid using transmitting equipment in a hospital. Hospitals have rules about radios similar to those on planes, to avoid interference with cardiac stimulators, MRI equipment, etc.

Having all kinds of different applications handle each type of killswitch separately is quite risky and complicated. How can you be certain that things have been turned off? How do you check in the UI whether that's the case? Can you see it quickly by scanning the screen?

What about the actual process of switching things off? Do you need to go through three different interfaces to toggle everything? What do you need to do if you don't have a physical switch to use?

What about persistence after a reboot?

urfkill is meant to, in time, address all such questions. At the moment, it still needs a lot of work though.

I've spent the last day fixing bugs I could find while testing urfkill on my laptop, as well as porting it to logind (still in progress). In time, we should be able to use it efficiently in Ubuntu to handle all the killswitches. With some more work, we will also be able to use it to manage the modem and other systems on Touch.

For the time being, urfkill is available in the archive, for those who want to start experimenting with it.

Hacking with a Samsung ARM Chromebook on Trusty

So I decided it was about time to update / reinstall my Samsung Chromebook (the ARM one...) to Trusty, or at least Saucy. Turns out it's not that simple.

First, you need to know where to get the right stuff. I installed straight on the device, so chrUbuntu was the obvious choice. It's a pretty nice script that allows you to do just about anything necessary.

1) Bring your Chromebook to developer mode.

I'm not going to give the details. It's findable on the Internet, and unsafe enough that you should only do this if you know what you're doing... That counts double for running Trusty on the Samsung Chromebook.

From there, get into crosh (Ctrl+Alt+T), in shell mode (type shell at the crosh prompt).

2) Download the script:

cd ~/Downloads
wget http://goo.gl/s9ryd

3) Run the script:

sudo bash s9ryd xubuntu-desktop dev

This will do the gory install steps: partition your device and format the new partition, download the Ubuntu core tarball, and from there install the metapackage you've passed as the first argument.

Be aware that if you have never repartitioned the device, you'll likely notice the system rebooting during the process -- if that happens, just re-run the same command to pick things back up where they left off. It's clear when the process is done and the system installed -- the script requires you to press Enter to reboot.

This was where things got fun.

Turns out my device booted fine into Trusty, but it would only show a black screen with the mouse cursor. If you moved the mouse, you could see the cursor changing, but still nothing else. Switching to another VT (Ctrl-Alt-arrow, i.e. F1, or the like) would get you a text-mode login, but only if you switched early enough while X was getting ready to load... otherwise, you'd just get a pretty garbled display.

I hacked at the whole thing for a good while. I already knew xf86-video-armsoc was involved in ChromeOS at some point, so I tried to install that.

Still no love. I tried to copy the libs from ChromeOS to the device, in case it was some libmali or EGL/GLES issue... Still nothing better.

I even touched /etc/X11/xorg.conf with some black magic, looking up the details using w3m in a text console...

Turns out the problem was with xf86-video-armsoc itself. I initially clued in when I looked at the dates for upload of the X packages and xf86-video-armsoc itself -- it didn't seem quite right: X was newer by a bit. I knew there could be some issue with the ABI in some cases; but after more careful investigation, that's probably fine too -- armsoc properly depends on -abi-14.

After much more work and trial and error, I updated xf86-video-armsoc to 0.6.0 from the Linaro git tree, also reverted one commit changing flags, and it's now mostly working. X runs, I get lightdm, I can run apps -- "compositing" in Xubuntu works too, with transparency and gradients... all with only minimal display corruption of the window decorations.

So the end of the line is: if you want to run Trusty on your Chromebook, run into similar black screen issues, and feel daring, feel free to try my newly-built xf86-video-armsoc package from my PPA:

https://launchpad.net/~mathieu-tl/+archive/ppa/+sourcepub/3627079/+listing-archive-extra

It's simple; once you're in a text console on the machine (login as user/user):

nmcli dev wifi connect <your wifi network> password <your wifi password>
sudo add-apt-repository ppa:mathieu-tl/ppa
sudo apt-get update
sudo apt-get install xserver-xorg-video-armsoc-exynos
sudo reboot

These updated packages, or at least some kind of permanent fix, should make it into Trusty soon. Stay tuned :)

Bug 1010724: Why doesn't dnsmasq listen on both IPv4 and IPv6?

Dnsmasq currently only listens on 127.0.0.1, and that's done on purpose. If the only nameserver you have is 127.0.0.1, both IPv4 and IPv6 queries will go through it. It doesn't listen on an IPv6 address. We'll likely change the actual address to 127.0.1.1 as soon as this is possible with dnsmasq; there are changes coming upstream that should support this.

Letting dnsmasq listen on IPv6 is definitely something I wouldn't mind seeing work; but it's unfortunately not as simple as adding '--listen-address=::1' to the parameters passed to dnsmasq by NetworkManager. (Actually, it could be; see below.)

I understand some may want to disable all IPv4 on their systems, but that's not advisable for the time being, at least for the loopback interface and dnsmasq specifically. You absolutely can have an IPv6-only system with no IPv4 addresses on any of the physical interfaces, yet retain the use of 127.0.0.1 on the loopback interface for dnsmasq and others -- DNS resolution will still work for both IPv4 and IPv6 without issues, and you will simply not be able to access IPv4 addresses (since it would be an IPv6-only system as far as the physical interfaces are concerned).

The reason '--listen-address' alone can't be used is that we've already had reports of dnsmasq listening on 127.0.0.1 being an issue; it's one we want to address. When installed from the 'dnsmasq' package on Ubuntu/Debian, dnsmasq ships an init script that listens on that loopback IPv4 address as well, causing issues for those who genuinely want to run a system-wide instance of dnsmasq that can be interrogated via loopback (thus serving the local machine), or for users who haven't changed any of the default dnsmasq configuration.

In the case of 127.0.0.1, the fix is relatively simple, because we can switch to using 127.0.0.2 or 127.0.1.1; but for IPv6, there doesn't seem to be anything other than ::1 specifically meant to be used as a loopback address. In IPv4, a whole subnet is available on the loopback interface, while in IPv6 you only have one address (::1/128) (see http://tools.ietf.org/html/draft-smith-v6ops-larger-ipv6-loopback-prefix-00).

I'm very open to suggestions; at this point I'm looking for good ideas on how to best fix this while avoiding concurrency issues with other applications. Given the rather minimal return of enabling it versus the impact on other software running on the machine, and because we have already run into precisely this kind of issue (multiple applications listening on the same address on port 53), I'd want a really good alternative before changing things.

Consider the following two strace outputs for 'ping6 www.google.com'. The first one was run with dnsmasq started (manually, for testing purposes, but with the same parameters as NetworkManager uses) and listening on IPv4:

read(3, "# Dynamic resolv.conf(5) file fo"..., 4096) = 183
read(3, "", 4096)                       = 0
close(3)                                = 0
munmap(0x7f45cba80000, 4096)            = 0
socket(PF_INET, SOCK_DGRAM|SOCK_NONBLOCK, IPPROTO_IP) = 3
connect(3, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
poll([{fd=3, events=POLLOUT}], 1, 0)    = 1 ([{fd=3, revents=POLLOUT}])
sendto(3, "\r\347\1\0\0\1\0\0\0\0\0\0\3www\6google\3com\0\0\34\0\1", 32, MSG_NOSIGNAL, NULL, 0) = 32
poll([{fd=3, events=POLLIN}], 1, 5000)  = 1 ([{fd=3, revents=POLLIN}])
ioctl(3, FIONREAD, [90])                = 0
recvfrom(3, "\r\347\201\200\0\1\0\2\0\0\0\0\3www\6google\3com\0\0\34\0\1"..., 1024, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("127.0.0.1")}, [16]) = 90
close(3)                                = 0
socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 3
connect(3, {sa_family=AF_INET6, sin6_port=htons(1025), inet_pton(AF_INET6, "2001:4860:800a::93", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 ENETUNREACH (Network is unreachable)


The network is unreachable only because I didn't have IPv6 access at the time. You can see that the request was sent and the address was properly discovered as "2001:4860:800a::93". The most important part is the first connect(), using AF_INET as the family and "127.0.0.1" as the address -- that was libc trying to reach the nameserver defined in /etc/resolv.conf.

Now consider the following strace output, which is for the same request sent while dnsmasq was configured to listen only on ::1; and with ::1 defined as the nameserver in /etc/resolv.conf:

socket(PF_INET6, SOCK_DGRAM|SOCK_NONBLOCK, IPPROTO_IP) = 3
connect(3, {sa_family=AF_INET6, sin6_port=htons(53), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0
poll([{fd=3, events=POLLOUT}], 1, 0)    = 1 ([{fd=3, revents=POLLOUT}])
sendto(3, "\220]\1\0\0\1\0\0\0\0\0\0\3www\6google\3com\0\0\34\0\1", 32, MSG_NOSIGNAL, NULL, 0) = 32
poll([{fd=3, events=POLLIN}], 1, 5000)  = 1 ([{fd=3, revents=POLLIN}])
ioctl(3, FIONREAD, [90])                = 0
recvfrom(3, "\220]\201\200\0\1\0\2\0\0\0\0\3www\6google\3com\0\0\34\0\1"..., 1024, 0, {sa_family=AF_INET6, sin6_port=htons(53), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28]) = 90
close(3)                                = 0
socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 3
connect(3, {sa_family=AF_INET6, sin6_port=htons(1025), inet_pton(AF_INET6, "2001:4860:800a::93", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 ENETUNREACH (Network is unreachable)


Very much the same behavior as above. This time, the entry in /etc/resolv.conf was ::1, so that's what was used for the first connect(); and because that's an IPv6 address, AF_INET6 was used as sa_family.

In both cases, the DNS queries ran first and returned pretty much instantly, whether they were sent to the nameserver over IPv4 or IPv6.

One alternative to allow dnsmasq to listen on both IPv4 and IPv6 could be adding a loopback interface (or a tap interface) and using a limited-scope IPv6 address, but there remain gotchas with this particular course of action -- for instance, dnsmasq currently appears to bind to *both* the specified link-local address added to lo as well as the "primary" IPv6 address defined for lo (::1/128).

Furthermore, it seems rather clumsy to me to include both the IPv4 and IPv6 addresses in /etc/resolv.conf when they refer to the same software instance. It's not going to bring much.

If you don't care at all about these details, don't mind whether ::1 shows up in /etc/resolv.conf automatically, don't run other instances of dnsmasq, and want to experiment with custom configurations, then in Quantal you'll be able to add configuration settings in files under /etc/NetworkManager/dnsmasq.d and tweak the settings as required.
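
As a sketch, a drop-in like this (the file name is arbitrary; standard dnsmasq option syntax) would make the NetworkManager-spawned dnsmasq listen on ::1 as well:

# /etc/NetworkManager/dnsmasq.d/listen-ipv6.conf
listen-address=::1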