Bash had her iPhone stolen yesterday, in a rather ugly-sounding scam — and then wrote an eloquent and reflective blog post about it. Kudos.
Schneier on terrorist motivation, positing that it’s less about achieving political ends, and more about being part of a social structure [brunns]. Sounds quite reasonable. I was struck a bit by this sentence:
We also need to pay more attention to the socially marginalized than to the politically downtrodden, like unassimilated communities in Western countries.
Now, I won’t argue with that, except: aren’t they often the same people?
However, I found today that using the GUI always asks you for your “ssh password”. I don’t use an ssh password; indeed, all my boxes have password authentication disabled: you can only log in via keys. The GUI didn’t/doesn’t seem to be key-aware.
Thankfully, the command-line version (sshfs-static) mentioned on the wiki page linked above is aware – so I succeeded in mounting my remote fs using:
Applications/sshfs.app/Contents/Resources/sshfs-static om: om \
    -oreconnect,volname=om
I’ve set up an alias for that in my .zshrc and am now in sshfs happiness land.
(Oh yes, perhaps I should have mentioned: I’m using OS X now, for my desk/laptop needs at least. More on this as time progresses, no doubt.)
Unable to hire a production crew for a standard 1980s-era MTV music video, they performed their music in front of 80 of the 13 million CCTV “security” cameras available in England, including one on a bus.
Also good, from the same RISKS digest: How not to use SSL, viz SSL-encrypt the page data, but send the credit card details in cleartext in the URL — win!
Local elections take place in much of England and all of Wales today, so it’s time to vote.
Interestingly, I’ve just realised that I’ve been given two votes this time round, which shouldn’t really happen. In other words, I occur on the electoral register twice. Naturally I won’t exploit this, but I wonder how they’ll react when I show up at the polling station and point this out to them. Will it be a big thing, or just run of the mill? I’d prefer the former; I suspect the latter.
How’s this possible? 2-3 years ago I lived at address X in Mumbles; last year I moved to address Y, also in Mumbles (I was at address Z in between, but that doesn’t feature in our story). I’ve now received, through the post, two polling cards: one for “Andy Martin Gimblett” at address X, and one for “Andrew Gimbleh” at address Y. When I moved in to address Y, I must have filled in a form at some point stating that I live here, and somebody in City Hall has obviously bungled the transcription, reading TT as H somehow. Meanwhile, address X presumably lies empty (the landlords were, not to put too fine a point on it, twats) so nobody’s filled in a form telling the world I no longer live there.
The best bit is that, despite X and Y being within 500m or so of each other, they have different polling stations, so I really could vote twice. Even if they ask for ID, I’m sure I could argue convincingly that “Andrew Gimbleh”’s vote belongs to me, particularly to people who haven’t just seen me vote using a different card. :-)
Thoughtful students of protocol will now be asking the question: Why have I received the card addressed to X? Because I used to have a forward in place, and the postman has apparently learnt my new address. There’s no official forwarding sticker on the card (or on any of my forwarded mail, even the stuff to address Z, where the forward is still in place) – the postie is just being helpful. That’s great, but in this instance is probably not the right thing to do. It wouldn’t surprise me to learn that polling cards are, legally, not supposed to be forwarded — that would go some way to preventing this bug, I guess; however, there’s nothing on the card indicating this.
Security: it’s hard.
… We argue that this is a gross over-estimate and present an attack that recovers secret keys within minutes on a typical desktop PC or within seconds on an FPGA. Our attack exploits statistical weaknesses of the cipher.
USA Democratic Party “global primary” for Democrats Abroad badly run, insecure, untrustworthy — just like almost all (all?) electronic voting systems in use today.
There are well-known risks at every stage of the episode, so I repeat: that whole process was neither secure nor well-run; moreover, its collection of personal information using unsecured Web pages exposed participants to the risk of information theft, and delivering notionally secure information by email is painfully bad judgment. The episode proves nothing except that well-intentioned people continue to make elementary but serious errors in designing and setting up processes that must be safe at every step if they are to be meaningful.
Don’t like getting to sleep at night? Read the RISKS digest avidly.
Darren Nixon had been waiting at a bus stop in Stoke-on-Trent on his way home from work when a woman saw him reach into his pocket and take out a black Phillips MP3 player. The woman thought it was a pistol and called 999.
Police tracked 28-year-old Nixon using CCTV, sending three cars to follow him. When he got off the bus, armed officers surrounded him. He was driven to a police station, kept in a cell and had his fingerprints, photograph and DNA taken.
The Liberal Democrats, who are campaigning to have the DNA records of innocent people destroyed, said the national DNA database now held more than 3m records kept for life, an estimated 125,000 of which belong to people who were neither cautioned nor charged.
Boy falsely jailed for bomb threat for 12 days due to DST changeover. (URL in RISKS story doesn’t seem to work? Lots of google hits though, many of which have the same content and no attribution, sadly.)
Webb gave an insight into the school’s impressive investigative techniques, saying that he was ushered in to see the principal, Kathy Charlton. She asked him what his phone number was, and, according to Webb, when he replied ‘she started waving her hands in the air and saying “we got him, we got him”.’
‘They just started flipping out, saying I made a bomb threat to the school,’ he told local television station KDKA. After he protested his innocence, Webb says that the principal said: ‘Well, why should we believe you? You’re a criminal. Criminals lie all the time.’
Dorks. Dorks in positions of authority, more to the point.
I’ve just set up one of my laptops, running FreeBSD, so that /home is encrypted using GBDE, and is auto-attached/mounted/fsck’d at boot time. The instructions in the FreeBSD handbook aren’t completely clear, so here are some notes on how I did it.
Actually, most of those instructions are very clear, provided you don’t care about attaching at boot time. Indeed, I’ve had an encrypted partition which I attach/detach by hand, for about a year now, and for that those instructions were perfectly adequate. Unfortunately, the section on “Automatically Mounting Encrypted Partitions” leaves out two important details, both of which actually contradict the examples in the rest of the chapter: your lock file’s name needs to end in “.lock”, and you need to be careful where you put it.
Anyway, long story short, I had to dig into /etc/rc.d/gbde to work out what had to be done, and here it is… A summary of how to set up an auto-attaching encrypting home partition on FreeBSD using GBDE:
(I did this as pretty much the first thing after a fresh install, before I’d even created any users. Needless to say, then, all of this happens as root.)
Here’s how /etc/fstab looked before I started:
# cat /etc/fstab
# Device        Mountpoint  FStype  Options    Dump  Pass#
/dev/ad0s1b     none        swap    sw         0     0
/dev/ad0s1a     /           ufs     rw         1     1
/dev/ad0s1e     /tmp        ufs     rw         2     2
/dev/ad0s1f     /usr        ufs     rw         2     2
/dev/ad0s1g     /usr/home   ufs     rw         2     2
/dev/ad0s1d     /var        ufs     rw         2     2
/dev/acd0       /cdrom      cd9660  ro,noauto  0     0
Unmount /home because I’m about to blat it completely:
# umount /usr/home
Create a directory to contain the lock files:
# mkdir /etc/gbde
Initialise the partition for GBDE. I used a sector size of 2048 (which matches the UFS fragment size). Note that the lock file’s name ends in .lock; this is not how the main body of the GBDE instructions in the handbook does it, but it’s necessary for /etc/rc.d/gbde to attach it properly on boot up:
# gbde init /dev/ad0s1g -i -L /etc/gbde/ad0s1g.lock
Enter new passphrase:
Reenter new passphrase:
Attach the encrypted partition to the kernel for the first time (entering the passphrase previously specified), and write the filesystem:
# gbde attach /dev/ad0s1g -l /etc/gbde/ad0s1g.lock
Enter passphrase:
# newfs -U /dev/ad0s1g.bde
/dev/ad0s1g.bde: 6632.5MB (13583360 sectors) block size 16384, fragment size 2048
        using 37 cylinder groups of 183.77MB, 11761 blks, 23552 inodes.
        with soft updates
super-block backups (for fsck -b #) at:
 160, 376512, 752864, 1129216, 1505568, 1881920, 2258272, 2634624, *SNIP*
Test the mount:
# mount /dev/ad0s1g.bde /usr/home
Assuming that looks good (eg in df), unmount it and detach it:
# umount /usr/home
# gbde detach /dev/ad0s1g
Now we’re ready to set it up for auto-attaching. We need to alter /etc/rc.conf and /etc/fstab:
# tail -n 3 /etc/rc.conf
gbde_autoattach_all="YES"
gbde_lockdir="/etc/gbde"
gbde_devices="ad0s1g"
We need the gbde_lockdir line because otherwise it looks for the lock files just in /etc, I think.
# grep home /etc/fstab
/dev/ad0s1g.bde /usr/home ufs rw 2 2
Now when I boot, it asks for the passphrase, attaches the encrypted partition, and mounts it – automatically – then it gets fsck’d with everything else. It doesn’t automatically detach upon halting the system, but I guess that’s no problem. :-)
The only thing I don’t totally like is that if I get the passphrase wrong (3 times), it doesn’t attach or mount the encrypted partition (obviously), but then of course it fails filesystem checks completely and the boot process dumps down to single user mode, messily. Not completely unreasonable, I guess, but still a bit annoying.
Anyway, anyone now stealing my (crappy) laptop will have a much harder time of getting at the data on it (the interesting data, anyway). Whee!
It wasn’t all cryptography, you know; or physics; or thousands of men lobbing hot fast-moving shards of metal at each other repeatedly.
This might explain why the publishers didn’t send me an inspection copy when I asked them for one about six months ago…
One for everyone who took CS-318 last year, or will be taking it next year: Bruce Schneier Facts.
… Iraqis are using fake IDs in light of the recent growth in sectarian killings. The major groups in Iraq are not distinguishable by physical traits, but they are by name. To avoid being killed, people are getting false identification cards: Surnames refer to tribe and clan, while first names are often chosen to honor historical figures revered by one sect but sometimes despised by the other. For about $35, someone with a common Sunni name like Omar could become Abdul-Mahdi, a Shiite name that might provide safe passage through dangerous areas.
Of course, I’m not suggesting this is an argument against having ID cards here in the UK, at least unless Welsh-English tensions get seriously worse. I did once pretend to be Welsh in the face of some extreme hostility, mind – but the drunk Welsh rugby wanker in question didn’t demand my ID card at gunpoint, so…
If the following paragraph doesn’t cause you to nod knowingly, you really should read the whole article. (BTW, I’ve changed the figures to percentages, for enhanced legibility by non-mathematicians.)
Suppose that NSA’s system is really, really, really good, with an accuracy rate of 90% and a misidentification rate of .001%, which means that only 3,000 innocent people are misidentified as terrorists. With these suppositions, the probability that people are terrorists given that NSA’s system of surveillance identifies them as terrorists is only 23%, which is far from 100% and well below flipping a coin. NSA’s domestic monitoring of everyone’s email and phone calls is useless for finding terrorists.
This kind of result is often very surprising and counter-intuitive, and hence important. When reality diverges from “common sense”, we need to understand why, so we can explain it to people who like to trust “common sense” in their decision-making processes (eg Daily Mail readers ;-) ). This kind of result crops up all over the place… I first came across it in the context of medical diagnosis, where it basically explains why misdiagnosis happens so often. Quite simply, the numbers are just stacked against us. There’s nothing we can do about it – we just have to understand what’s happening and get on with it.
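That 23% figure is just Bayes’ theorem in action, and it’s easy to check. Here’s a quick sketch in Python; note that the 300-million population and the 1,000 actual terrorists are my own assumptions, chosen only to be consistent with the 3,000 misidentifications quoted above:

```python
# Base-rate fallacy: compute P(terrorist | flagged) via Bayes' theorem.
# Population and terrorist counts are illustrative assumptions,
# picked to match the "3,000 innocents misidentified" figure.
population = 300_000_000          # assumed: everyone being monitored
terrorists = 1_000                # assumed: actual terrorists among them
innocents = population - terrorists

true_positive_rate = 0.90         # the quoted "accuracy rate of 90%"
false_positive_rate = 0.00001     # the quoted ".001%" misidentification rate

flagged_terrorists = terrorists * true_positive_rate   # 900 true positives
flagged_innocents = innocents * false_positive_rate    # ~3,000 false positives

p_terrorist_given_flagged = flagged_terrorists / (
    flagged_terrorists + flagged_innocents
)
print(f"{p_terrorist_given_flagged:.0%}")  # prints "23%"
```

Tweak the assumed number of actual terrorists and watch the posterior move: the rarer the thing you’re looking for, the more thoroughly the false positives swamp the true ones.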
The feature is called User Account Protection (UAP) and, as you might expect, it prevents even administrative users from performing potentially dangerous tasks without first providing security credentials, thus ensuring that the user understands what they’re doing before making a critical mistake. It sounds like a good system. But this is Microsoft we’re talking about here. They completely botched UAP.
A handy feature in most web browsers is the ability to remember usernames and passwords for sites you visit often, so you don’t have to keep typing them in – the browser just fills it in for you. Some sites don’t like you doing this, however. If the input tag of the password field contains the attribute autocomplete="off", that’s an instruction to the browser not to allow this handy feature for that field, so you have to type in the password by hand every time.
This is arguably quite a good idea, and reduces the chance that a user in an internet cafe will thoughtlessly click “remember” and partially open up their bank account to the next customer. There are some interesting thoughts on the topic here, but that’s not what this post is about.
What this post is about: the intranet at work is one of these security-minded sites that disables autocomplete, which is really really annoying (they also have a brain-dead policy on password expiration, but that’s another story). At certain times of year I have to use this site a lot – it forgets you’re logged in between sessions, and I find myself repeatedly typing the password.
Well, no more. Have I moved to Opera, which ignores autocomplete="off" altogether? No, of course not – I’ve found a Firefox extension which does the job for me.
Introducing ketjap, which can apparently do a number of quite funky things, but which in particular can rewrite tag attributes arbitrarily using a set of prevalue/postvalue rules. So I defined a rule which acts on input tags, on their autocomplete attribute, turning a prevalue of off into a postvalue of on. Et voilà, it works. The next time I visited our intranet and entered my username/password, Firefox offered to remember the password for me, and I gratefully agreed to its welcome proposal.
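To illustrate the kind of prevalue/postvalue rewrite involved, here’s a toy approximation in Python. (Ketjap itself operates inside Firefox, not on flat HTML text, so treat this purely as a sketch of the transformation, not of how the extension is implemented.)

```python
import re

def rewrite_autocomplete(html: str) -> str:
    """Toy version of the rewrite rule: on <input> tags, turn
    autocomplete="off" (prevalue) into autocomplete="on" (postvalue).
    A regex over flat HTML -- good enough for illustration only."""
    return re.sub(
        r'(<input\b[^>]*\bautocomplete=["\']?)off(["\']?)',
        r'\1on\2',
        html,
        flags=re.IGNORECASE,
    )

before = '<input type="password" name="pw" autocomplete="off">'
print(rewrite_autocomplete(before))
# prints: <input type="password" name="pw" autocomplete="on">
```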
Actually, I was a little confused at first, because I looked at the page source, expecting ketjap to have changed that, but that’s not what happens – it seems it alters Firefox’s interpretation of the source on the fly, leaving the source untouched. Neato in extremis.
Now I invite members of the public to point out the page in Firefox’s preferences where I could have just ticked a box to make this happen. ;-)
Lesson one in security: deny by default, allow with care. It is entirely brain dead for your login logic to be “if the logged_in cookie is false, they’re not logged in, otherwise they are”, rather than “if the logged_in cookie is true, they’re logged in, otherwise they’re not”.
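The difference is easy to see in a minimal sketch (Python here, and the cookie name is just illustrative):

```python
# Deny by default: the user is logged out unless the session
# affirmatively proves otherwise.

def is_logged_in(cookies: dict) -> bool:
    # Correct: only an explicit, exact "true" grants access.
    return cookies.get("logged_in") == "true"

def is_logged_in_broken(cookies: dict) -> bool:
    # Broken "allow by default": anything that isn't exactly "false"
    # -- a missing cookie, a typo, a forged value -- grants access.
    return cookies.get("logged_in") != "false"

print(is_logged_in({}))         # False: no cookie, safe default
print(is_logged_in_broken({}))  # True: no cookie at all counts as logged in!
```

The broken version fails open: an attacker doesn’t need to forge a valid value, they just need to avoid presenting the one value that denies access.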
Video eavesdropping demo at CeBIT 2006 – 25 metres, no wires: sweet.
Abstract: In the fall of 2005, problems discovered in two Sony-BMG compact disc copy protection systems, XCP and MediaMax, triggered a public uproar that ultimately led to class-action litigation and the recall of millions of discs. We present an in-depth analysis of these technologies, including their design, implementation, and deployment. The systems are surprisingly complex and suffer from a diverse array of flaws that weaken their content protection and expose users to serious security and privacy risks. Their complexity, and their failure, makes them an interesting case study of digital rights management that carries valuable lessons for content companies, DRM vendors, policymakers, end users, and the security community.
That’s “Sony” DRM technology actually brought to you by a company with offices near here, who came to the department and did a presentation at an event organised by IT Wales last year. They certainly did seem very impressive, and IIRC their CTO spoke highly of his programmers’ abilities. Only goes to show, I guess. (Some retrospectively amusing quotes in this article, I thought.)