Wrapping Ansible Vault with gpg

Ansible Vault is a bit limited for my usual workflow, because it requires you to type in a password.

Yeah, it’s a bit of a silly complaint. But I really would like to type as few passwords as possible; vault doesn’t do caching.

Vault also offers the option of a password file to read the password from. That’s stupid. But it does allow you to just put an executable there instead, and then uses that executable’s standard output as the password. Goooood.

So some quick config manipulation:
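Something like this in ~/.ansible.cfg does the trick – a minimal sketch, assuming the wrapper script will live at ~/.ansible/vault-pass.sh (the path is my choice, not canonical):

```ini
# ~/.ansible.cfg -- point Vault at an executable instead of a plain password file
[defaults]
vault_password_file = ~/.ansible/vault-pass.sh
```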

And a little script:
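A sketch along these lines – assuming the gpg-encrypted password file sits at ~/.ansible/vault_pw.gpg and the script is installed as ~/.ansible/vault-pass.sh (both names are mine; match them to your config):

```shell
# Install the wrapper where your config's vault_password_file points:
mkdir -p "$HOME/.ansible"
cat > "$HOME/.ansible/vault-pass.sh" <<'EOF'
#!/bin/sh
# Ansible uses this script's standard output as the vault password;
# gpg-agent takes care of caching the passphrase.
VAULT_PW_FILENAME="$HOME/.ansible/vault_pw.gpg"
exec gpg --batch --quiet --decrypt "$VAULT_PW_FILENAME"
EOF
chmod +x "$HOME/.ansible/vault-pass.sh"
```

Create the encrypted password file once with gpg --encrypt --recipient you@example.org --output ~/.ansible/vault_pw.gpg; add further --recipient flags for your coworkers’ keys.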

Voila! Just use Vault to drop in the passwords either inline (starting with 2.3+) or in Vault files like $site/group_vars/$group/my_auth; store the encryption password for the Vault in a GPG-encrypted file and set that as VAULT_PW_FILENAME in the script. Your GPG agent now handles credential caching for Ansible.

Bonus: just add your coworkers’ keys if you have multiple collaborators on something (or are just archiving the customer configuration for backup). Even allows crude ACLs if you split up the group variables fine enough.

Update: I’ve just found out about the password-store wrapper which is new in Ansible 2.3; you might just want to use that one if you’ve got pass set up anyway. Be aware that as of 2017-03-19, Ansible 2.3 is still release candidate and not stable.

Authenticating sudo with the SSH agent

I recently stumbled upon the rather intriguing idea of using your SSH agent to do… sudo authentication!

Sounds weird, right? But somebody implemented it. I haven’t audited the code, but it mostly does what it’s supposed to and doesn’t appear to be malicious.

What it is, though, is a PAM module that gives you an ‘auth’ module for PAM.1 As we know, an ‘auth’ module does the whole business of validating that a user is who they claim to be by asking for credentials. Usually, we see e.g. sudo asking the user for their password.

The problem with that: remembering all those sudo passwords for remote hosts you’re administering – because, after all, you aren’t logging in as root directly, and you don’t use the same password at the other end all the time, right? Well, except if you’re using LDAP, anyway. But even then, you’d still have to enter the password (but it is the same, and you’re probably feeling fancy with ansible anyway.)

Enter pam_ssh_agent_auth.so – just include it in your PAM configuration and have sudo keep SSH_AUTH_SOCK in its environment. If you now connect with your SSH agent forwarded, PAM will check the public key you specify against your forwarded SSH agent, and if that check succeeds, proceed along the PAM chain, you being happily authed! Entering a password? Only when you unlock the SSH agent.

Now that the concept has been explained, let’s think about consequences.

Security considerations

Is this method inherently insecure?

Well, not per se; if you think using SSH agent is okay, using it to replace a password, in principle, is okay.

Can this authentication be exploited?

There are two possible scenarios I can imagine:

  1. Someone manages to take over the SSH agent.
  2. Someone modifies the specified authorized_keys file.

I personally do not assume that taking over the SSH agent is a significant risk; you’re probably the admin setting this up, so you trust the server and the machine you’re connecting from. The only parties on the remote side that could abuse the auth socket are root, your own user and someone with an 0day – and being afraid of the last won’t get you anywhere. Thus we can safely disregard that.

The only real problem I see is that somebody manages to overwrite the authorized_keys file. pam_ssh_agent_auth allows you to specify where the authorized key files are kept – you can allow them to be in any place you’d like, and there are shorthand macros for the user’s home, the system’s hostname and the user name itself. A setup I personally like is using $HOME/.ssh/authorized_keys, because it requires no changes in place.

BUT.

Anyone who can somehow modify or add to your authorized_keys file can take over your account and its sudo privileges!

Sample attack scenario:

  1. You’re an idiot and ~/.ssh/authorized_keys is world-writable.
  2. Someone else on the system appends their own key to your authorized_keys.
  3. They are connected with their own SSH agent and just do a sudo -l -u $you.
  4. This will now work because PAM asks the attacker’s SSH agent to unlock their key.

Is this an issue? Only if your users are idiots. Or 0day, but see above.

The easy way to work around this is to use a file only root controls, i.e. create something like /etc/security/sudoers/%u.key for each user. Or just a single globally defined one where you pipe new keys in, whatever floats your boat.

But beyond exercising basic care, this isn’t a particularly viable attack scenario in my case either.

If anyone comes up with a good one, please let me know.

How to implement it

Simple! Just run this Puppet manifest if you’re running Debian/Ubuntu and trust me. You probably shouldn’t, but please look at the manifest anyway and improve my Puppetfu by giving clever comments about how I should approach this ‘style’ of sharing configuration.

Essentially, you need to do the following steps:

  1. Install pam_ssh_agent_auth, just use my Debian/Ubuntu repos (deb http://towo.eu/debian/ $your_release main) or go to the official site.
  2. Add SSH_AUTH_SOCK to the env_keep defaults in /etc/sudoers.
  3. Add auth sufficient pam_ssh_agent_auth.so file=%h/.ssh/authorized_keys to /etc/pam.d/sudo, ideally before common-auth.
  4. That’s it. Open a new connection; sudo -k; sudo -l should work without you having to enter a password.2

Simple as that.
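For reference, here’s what those two edits look like in the actual files on Debian/Ubuntu (the sudoers line goes in via visudo):

```
# /etc/sudoers -- keep the agent socket across sudo:
Defaults    env_keep += "SSH_AUTH_SOCK"

# /etc/pam.d/sudo -- above the common-auth include:
auth sufficient pam_ssh_agent_auth.so file=%h/.ssh/authorized_keys
```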

  1. If you really don’t know what PAM is about, read this article to get a bit of an overview.
  2. If not – that’s what you kept that other shell open for, the one you didn’t close or reuse just now!

Allowing your users to manage their DNS zone

You’ve been in this situation before. You’re playing host to a couple of friends (or outright customers) whom you’re giving virtual machines on that blade server you’re likely renting from a hosting provider. You’ve got everything mostly set up right, even wrangled libvirt so that your users can connect remotely to restart and VNC their own machine (article on this is pending).

But then there’s the issue of allowing people to update the DNS. If you give them access to a zone file, that sort of works – but you’ve either got to give them access to the machine running the DNS server, or rig up some rather fuzzy and failure-prone system to transfer the zone files to where they’re actually useful. Neither case is ideal.

So here’s how to do it right – by using TSIG keys and nsupdate. I assume you’re clever enough to replace obvious placeholder variables. If you aren’t, you shouldn’t be fiddling with this anyway.

The goal will be that users can rather simply use nsupdate on their end without ever having to hassle the DNS admin to enter a host into the zone file for them.

Generating TSIG keys

This is a simple process; you need dnssec-keygen, which comes shipped with bind9utils, for example; you can install that without having to install bind itself, for what it’s worth. Then, you run:

# dnssec-keygen -r /dev/urandom -a HMAC-MD5 -b 512 -n HOST $username

for each user $username you want to give a key to. Simple as that. Be careful not to use anything other than HMAC-MD5, sadly enough, since that’s what TSIG wants to see.

You’ll end up with two files, namely K${username}+157+${somenumber}.{key,private}. .key contains the public key, .private contains the private key.
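The base64 blob you’ll need for the server configuration is simply the last field of the .key file. Demonstrated here on a fabricated sample line (a real one comes out of dnssec-keygen; filename and key material are made up):

```shell
# A sample .key record; dnssec-keygen writes one line in this format.
printf 'alice. IN KEY 512 3 157 dGhpc2lzbm90YXJlYWxrZXk=\n' > Kalice.+157+12345.key
# The shared secret is the last whitespace-separated field:
awk '{ print $NF }' Kalice.+157+12345.key   # -> dGhpc2lzbm90YXJlYWxrZXk=
```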

Server configuration

ISC BIND
Simply define or modify the following sections in your named configuration:

  1. Define the key
    key "$username." {
      algorithm hmac-md5;
      secret "$(public key - contents of the .key file)";
    };
    
  2. Allow the key to update the zone
    zone "some.zone.tld" {
            [...]
            allow-update { key "$username."; };
    };
    
PowerDNS
TSIG support is officially experimental in PDNS; I’m only copypasting the instructions here, I haven’t checked them for correctness. All input examples manipulate the SQL backend.

  1. Set experimental-rfc2136=yes. If you do not change allow-2136-from, any IP can push dynamic updates (as with the BIND setup).
  2. Push the TSIG key into your configuration:
    > insert into tsigkeys (name, algorithm, secret) \
      values ('$username', 'hmac-md5', '$(public key)');
    
  3. Allow updates by the key to the zone:
    > select id from domains where name='some.zone.tld';
    X
    > insert into domainmetadata (domain_id, kind, content) \ 
      values (X, 'TSIG-ALLOW-2136', '$username');
    
  4. Optionally, limit updates to a specific IP 1.2.3.4 (X as above):
    > insert into domainmetadata (domain_id, kind, content) \
      values (X, 'ALLOW-2136-FROM', '1.2.3.4/32');
    
djbdns
You’re probably getting ready to berate me anyway, elitist schmuck. Do it yourself.

Client usage

Ensure that you supply the private key file to your user. (They don’t need the public key.)

Using nsupdate on a client is a rather simple (if not entirely trivial) affair. This is an example session:

nsupdate -k $privatekeyfile
> server dns.your.domain.tld
> zone some.zone.tld.
> update add host.some.zone.tld. 86400 A 5.6.7.8
> show
> send

This will add host.some.zone.tld as an A record with IP 5.6.7.8 to some.zone.tld.. You get the drift. The syntax is as you’d expect, and is very well documented in nsupdate(1).

You could also think about handing out pre-written files to your users, or a little script to do it for you, or handing out puppet manifests to get new machines to add themselves to your DNS.

Have fun.

Ubuntu – why it sucks

Earlier this year, I switched from Debian to Ubuntu on both my netbook and my desktop machine, because I was quite pleased with how well it worked. For the netbook, this was sort of appropriate, if you ignore the fact that a netbook is slow on principle; with my desktop, my choice might have been less than wise.

Jaunty, 9.04, left me with occasional random crashing of my X server, and applications sometimes only starting at the second try, if at all. You’d get situations like banshee firing up, drawing the window on the desktop, and then locking up – which my compiz duly acknowledged by shading the window after about fifteen seconds. You kill it, you restart it, everything works.

Add to this some other applications (like Evolution, Nautilus and Tomboy), along with the fact that GNOME Do just seems to randomly evaporate into digital nothingness in the course of my uptime, and voila, you have a system that works mostly well, but just sometimes annoys the hell out of you, especially when the X server crashed the system because you did something like Alt-Tabbing while you had two applications running fullscreen on different monitors. Yep, it happened.

So, lo and behold, comes the saviour: Ubuntu 9.10, Karmic Koala! It shines, it glitters, and it saves kittens from trees! Everything is so much better with it!

… not.

Karmic, in the vain hope of contributing so much more to the common good, tries to optimize and dumb things down for its users. Which, according to others, seems to work splendidly – but it absolutely failed on my end.

My woes with the rare animal

odin (the desktop)

For the record: odin’s specs are something along the lines of a Core2 Duo, a GeForce 260 linked to two screens, a couple of terabytes of hard drive and a SoundBlaster SB Live! 5.1, after the onboard sound card started acting up and generally misbehaving on the gaming OS.

  1. Boot time has gone way … up. Even though it’s supposed to be optimized for quicker boot and whatnot, my previous “less than ten seconds” boot time somewhat diminished in the face of the optimized bootup, which broke my resolvconf (which I haven’t even touched!) for no apparent reason, adding a 30 to 60 second timeout on top.
  2. It solved the crashing problems … not at all. The only thing it actually managed was to get bug-buddy to be all “It looks like nautilus crashed”, with a nice dialog saying I should report a bug to Ubuntu. Which I won’t, since there’s nothing logworthy to submit; it just dies and that’s it.
  3. The sound interface has been made super-easy! And, also, bloody hard to configure correctly. The new sound preferences eschew any kind of knowledge about your sound card and just presume to know better than you – which is exactly why, in four-way stereo mode, it thinks it should fiddle with the Master volume of my SoundBlaster, which controls only two channels, and not the PCM, which actually regulates everything. Jaunty allowed me to change the mixer control to the one I deemed best – no dice in Karmic. I now need to fire up alsamixer for that, and can’t use my keyboard volume wheel without fiddling.
  4. Speaking of sound, it has become even more annoying to find a way to turn off the logon sounds with GDM, since gdmsetup has been replaced by something that does just about nothing at all.
  5. And, of course, hibernate doesn’t work anymore. As if any distribution would ever get that right.

baldr, the netbook

  1. Boot time has gone way … up. Yes, even on the famed “we sooo lurv you” Atom netbooks Karmic pretends to like so much, performance pretty much went down the drain.
  2. Improved external monitor support! Plug in a second screen, get none of the real estate! As soon as I plug in the VGA display while the laptop is still running, screens go irreversibly blank until reboot. Having it plugged in while rebooting allows you to run 800×600 on both displays, cloned, without the ability to change the resolution.
  3. Hibernate doesn’t work. Even though it did before.
  4. And myriads of minor nuisances like stutters and all that jazz.

May I note that all of this even happens on a fresh install on the netbook, so this is no tale of the common upgrade blues.

Conclusion

Well, I’ll probably be changing distributions soonish, yet again. Fedora might be a neat idea for the netbook; I’m not yet sure if I will revert to Debian on odin.

The Karmic Koala is becoming increasingly extinct and fails to reproduce appropriately even with an accepting mindset.