Mastery will soon be mine!

I had my MPhil viva last Wednesday, and I’m happy to report that I have been recommended for the degree, subject to making some minor corrections to my thesis. That’s pretty much the best you can hope for: it’s unheard of to “just pass”. I now have four weeks (well, three now!) to complete and submit my corrections, most of which are minor things like typos. In fact, I did all the really minor ones on the same day… I just have a couple more background paragraphs to write, and then I’ll submit it for final rubber-stamping, after which I have to get some copies hardback-bound; once those are submitted to the university, I can write MPhil after my name.

It’s been a long, hard road, mainly because I was trying to do the work part-time, on top of a teaching load which tended to grow a bit faster than I could keep up with. But woo, it’s done. I am now, apparently, the official world expert on the syntax and static semantics of the specification language CSP-CASL. Lucky me.

The viva itself was no picnic… My examiners were David Aspinall (Edinburgh) and Gerald Luettgen (York): two external examiners, because I’m a member of staff. I started with a 20-minute-or-so presentation on my work, which I think answered some of their questions off the bat, and then we got into the dialogue part. Apparently my thesis was very well polished, and as a result there weren’t too many questions on its contents, and I could (I think) deal with those quite well. The harder parts went along the lines of “so, how does such-and-such aspect of your work relate to THIS OTHER THING YOU KNOW NOTHING ABOUT?”. :-) This happened several times, and I think it betrayed a certain lack of breadth in my research: I’d focused on doing what I needed to do, in the way it had to be done given the external constraints on the project, but that meant I didn’t know as much as perhaps I should about how my work fitted into the rest of its milieu. Happily, while an instructive lesson, that doesn’t prevent me getting the degree.

I’ll publish my thesis online, and say a bit more about its contents, when the corrections are complete and the final version is bound.

Sadly, I have to wait until next July to graduate (or rather, to celebrate my achievement, as they say – I get the degree before then), because last week also happened to be graduation week in Swansea. ’Twas lovely to see all the students milling around in their batman gear, especially the ones I’d taught and/or been friends with. I even got to go to the Faculty of Arts ceremony on Monday and see Alexa graduate, which was a real treat.

Late April Keepalive Ping

Heyup.

Lots going on, but blogging sorely neglected. Partly I’ve been busy with work, partly I’ve been busy going away.

Three Big News Items:

  1. I properly completed and submitted my MPhil thesis; more on this later (including a PDF for anyone masochistic enough to want to read it).
  2. I went to Budapest for a week; photos later. They’re on flickr now but awaiting tags and descriptions; I had written those but lost them by not listening to the nagging voice telling me not to use crappy software that doesn’t save its state (yes, I’m talking about you, kflickr). Executive summary: great city, well worth a visit.
  3. I have a new job! Starting in August, I’ll be a Research Assistant working for Harold Thimbleby on a three-year project on formal tools for analysis of and design for usability, particularly on small devices. Again, more on this later.

There’s a fourth Big News Item but I’m keeping it under my hat for now.

So, lots for me to follow up on, but for now I’d just like to draw your attention to the latest clutch of lovely photos from Bash, who’s sadly been laid low by a bad cold for the last little while.

Photos: And stuff; Marigold ninja; Blossom in the sun; How things used to be; The lone seagull flies alone; Adam, my nephew.

A quickly-jotted and probably ill-conceived note on language

For quite a while now I’ve been thinking, and saying, that languages are the central and fundamental modality for computing, i.e. that everything one might want to do in computer science can, and should, be approached from a linguistic standpoint. I think I’ve just realised that this slightly misses the point, or at least doesn’t say much, because language is central and fundamental to any intellectual (or at least scientific, non-explicitly-sublime) endeavour.

Forget computer science for a moment, step back, and take a look at language.

Language is a serialisation mechanism for ideas and meaning. It exists and is beneficial because serialisation allows those things to be persisted, exchanged, and manipulated.

In natural language, the persistence mechanisms take the form of writing and recordings; the exchange mechanisms are speech if persistence isn’t required as a side effect, but otherwise largely use the persistence mechanisms. Manipulation takes the form of editing and rewriting at the syntactic level, argument and debate at the semantic level, and propaganda/sociocultural programming at the political level.

Formal languages are central to computer science not because languages per se have anything much to do with computer science, but because formalisation, which means automation and mechanisation, is the very essence of computer science. It is the science of the mechanistic manipulation of data: “the latest stage in mankind’s ongoing quest to automate everything”, as JVT once said. Languages per se are fundamental to computer science only insofar as they are fundamental to all intelligent human endeavour, in their role as a serialisation mechanism for thought. The point with regard to computer science is not that we use language – that is an unavoidable side effect of thinking; the point is that we have to use formal languages, because of the things we choose to think about. As such, computer science is where the species’ expertise on the formal and formalisable aspects of language mainly resides (colleagues in linguistics may take issue with this, of course, but my personal opinion is that the distinction between natural and formal languages is here very, very deep, or at best that the formalisms behind natural language are intangible). At its heart, computer science is the science of formalisation; the language of such a science must, necessarily, be largely formal.

I guess that’s it. Does this make sense to anyone else? Maybe I’m not saying anything non-obvious. shrug

(This started, by the way, with me thinking about why the textbook on my table, “Languages and Machines”, is subtitled “An Introduction to the Theory of Computer Science”.)

“Who ya gonna vote for? None of the above!”

Here’s one of my favourite little logic problems, courtesy of Faron. It’s about time I got this off my noticeboard and onto my interwebs.

Which of the following is true?

  1. All of the below.
  2. None of the below.
  3. All of the above.
  4. One of the above.
  5. None of the above.
  6. None of the above.

Solutions involving exhaustive analysis of the 2^6 possibilities are forbidden, although elegant code doing so will at least be admired for its cleverness, especially if it’s in Haskell. ;-)
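For the record, here’s roughly what the forbidden brute force looks like, as a minimal Haskell sketch (the encoding of the statements, including reading statement 4 as “exactly one”, is mine): it enumerates all 2^6 assignments and keeps the consistent ones. Running it will spoil the answer, of course.

    import Control.Monad (replicateM)

    -- Keep the assignments where each statement's truth value matches its claim.
    solutions :: [[Bool]]
    solutions = filter consistent (replicateM 6 [False, True])
      where
        consistent [s1, s2, s3, s4, s5, s6] =
             s1 == and [s2, s3, s4, s5, s6]                -- 1. All of the below.
          && s2 == not (or [s3, s4, s5, s6])               -- 2. None of the below.
          && s3 == and [s1, s2]                            -- 3. All of the above.
          && s4 == (length (filter id [s1, s2, s3]) == 1)  -- 4. One of the above.
          && s5 == not (or [s1, s2, s3, s4])               -- 5. None of the above.
          && s6 == not (or [s1, s2, s3, s4, s5])           -- 6. None of the above.
        consistent _ = False

    main :: IO ()
    main = mapM_ print solutions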

Slide40 – presentations with 8-bit style

Slide40 example slide

From the same good people who brought you the lambda calculus in a can [ultimate].

Monads and the meaning of imperative language – also, some musing

Monads and the meaning of imperative language — the delicious alpheccar does a lovely job of introducing denotational semantics without saying enough to scare you off, and shows how exceptions (or even assert) in imperative languages are, at bottom, the Maybe monad. This point generalises (apparently: I know enough to believe it could be true, but not enough to assert it myself) to “any formal description of control flow in an imperative language will be monadic in nature.” Gorgeous.
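To give a flavour of that point (my own toy example, nothing from alpheccar’s post): a run of imperative-looking statements, each of which might blow up, composes directly in the Maybe monad, and the “exception” propagates past the rest for free.

    -- "x := 10 / a; y := x / b; return x + y", where any step may fail
    -- (think assert, or an uncaught division-by-zero exception).
    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing              -- the "throw"
    safeDiv x y = Just (x `div` y)

    program :: Int -> Int -> Maybe Int
    program a b = do
      x <- safeDiv 10 a
      y <- safeDiv x b
      return (x + y)
    -- program 2 1 == Just 10
    -- program 0 1 == Nothing: the failure short-circuits everything after it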

The stuff about defining domains (and that being the hard part) is resonating with me just now; I’ve spent the day nailing down definitions of sets describing a particular aspect of my pet specification language, CspCASL, and it’s not trivial. And this is the easy part: not proper “domains”, just sets without much internal structure. Markus does that, for the model semantics. Anyway, yay language design.

Formally describing languages is hard. That’s why it doesn’t happen much yet, which is one reason our current language situation is so messy. My hand-waving prediction: it’s hard but not intractable, and we’re getting better and better at it; in time it’ll be a sufficiently manageable problem, with sufficiently good tool support, that languages which aren’t formally described will stagnate in comparison to the new languages then appearing. Naturally, from where I’m standing I see the increasing convergence of computer languages (sounds like a dumbing-down term, but I’m really just trying to cover both programming and specification) with full-blown mathematical logic in all its various and colourful forms. Mathematics is the language for describing things formally; a computer program is by necessity a formal description of something; therefore this convergence seems like it will be a good thing – again, from where I’m standing. Whether it appears that way because where I’m standing is a good place to get a view of what’s going on, or just because that’s all you can see from here, remains to be seen. ;-)

Visualizing regular expressions with reAnimator

reAnimator — a very cool tool for visualizing regular expressions. Given an RE, it renders the corresponding NFA and DFA and animates their acceptance (or not) of an input string. Try the “a|ab|abc|abcd” example with input “a” for a neat demonstration. It’s a Flash app, written in Python.
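If you want the unanimated gist of what it’s showing, here’s a tiny Haskell sketch (mine, nothing to do with reAnimator’s internals): a hand-built automaton for a|ab|abc|abcd, and acceptance as “after consuming the input, is any live state an accepting one?”.

    import Data.List (nub)

    type State = Int

    -- Transitions (from, symbol, to): state 0 is the start,
    -- states 1..4 are reached after the prefixes a, ab, abc, abcd.
    trans :: [(State, Char, State)]
    trans = [(0, 'a', 1), (1, 'b', 2), (2, 'c', 3), (3, 'd', 4)]

    accepting :: [State]
    accepting = [1, 2, 3, 4]

    -- One frame of the animation: the states still live after one more character.
    step :: [State] -> Char -> [State]
    step states c = nub [t | s <- states, (f, x, t) <- trans, f == s, x == c]

    accepts :: String -> Bool
    accepts input = any (`elem` accepting) (foldl step [0] input)
    -- accepts "a" == True, accepts "abc" == True, accepts "ax" == False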

The Catsters

The Catsters — twenty category theory lectures on Youtube. Looks like they start with monads. Also: ++cute lecturer ftw! Aha, it’s Eugenia Cheng. Lucky Sheffield.

Idempotence vs Referential Transparency

Idempotence vs Referential Transparency [reddit].

(Spoiler: idempotence wins.)
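In Haskell terms (my illustration, not the linked discussion’s) the distinction is easy to state: every pure expression is referentially transparent, but only some functions are idempotent.

    import Data.Char (toUpper)

    -- Idempotent: applying it twice is the same as applying it once.
    upcase :: String -> String
    upcase = map toUpper

    main :: IO ()
    main = do
      print (upcase (upcase "gimbo") == upcase "gimbo")  -- True: idempotent
      print (((+ 1) . (+ 1)) 1 == (+ 1) (1 :: Int))      -- False: (+1) isn't
      -- Both are referentially transparent: any subexpression can be replaced
      -- by its value without changing the meaning of the program.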

Church’s Thesis and Functional Programming

(I realise it’s gone very “theory of programming languages” round here lately. Don’t worry, I’m sure I’ll have something else to talk about soon.)

Church’s Thesis and Functional Programming, a lovely little paper by David Turner (he of SASL & Miranda fame). Like the Google Tech Talk I mentioned a couple of days ago, it’s big on unifying historical perspective, which is a welcome change from most of the papers I see on these topics, which seem to assume you’ve either read all the papers they reference (recursively), or you’re about to. ;-)

It’s a veritable cornucopian tour of TPL: Church’s thesis; the equivalence of lambda-calculus, Turing machines, and recursive functions; how the halting problem fits into that; the lambda calculus; Church-Rosser; strict vs. non-strict; combinatory logic; typed lambda-calculi; implementation of lazy FP (exemplars Miranda & Haskell); domain theory; FP without general recursion; Curry-Howard; Martin-Löf; and a hint of abstract Stone duality (which always makes me think of Father Stone). Seriously, if you’re at all interested in theoretical computer science, you should definitely take a look at this. I started losing the thread in the final section, but loved the rest and, I think, learnt quite a bit – mainly how various bits fit together; like joining the dots on a map.

(Ooh. Strachey coined the term “syntactic sugar”.)

Parametric Polymorphism and the Girard-Reynolds Isomorphism (video)

Great stuff: Parametric Polymorphism and the Girard-Reynolds Isomorphism [reddit] – a 30 minute Google Tech Talk by Phil Gossett. A really clear presentation covering a number of interesting topics, with a very convincing historical sweep. Certainly cleared a few things up for me, at least. His response to the last question is really more Hindley-Milner than Girard-Reynolds, mind. ;-)
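One concrete crumb of what parametricity buys you (my example, not from the talk): a function of type [a] -> [a] knows nothing about the elements, so it can only rearrange, drop or duplicate them, and the free theorem says it commutes with map.

    -- For any r :: [a] -> [a] and any f, map f . r == r . map f holds for free.
    r :: [a] -> [a]
    r = reverse . take 3

    main :: IO ()
    main = print (map (* 2) (r [1 .. 10]) == r (map (* 2) [1 .. 10 :: Int]))  -- True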

Update 2007-04-23

Phil Wadler comments.

The Joy of Grothendieck

A name I’ve come across in my work on hets is Grothendieck. In a great example of a Wikipedia cascade, someone on #haskell this morning mentioned the programming language Joy (which looks pretty cool), which led to the page on monoids, which led to the page on Grothendieck groups, which led to the page on Grothendieck himself, who a) also looks cool, and b) “gave lectures on category theory in the forests surrounding Hanoi while the city was being bombed, to protest against the Vietnam war”. Respect.

Monad wrangling, and the joy of #haskell

Three days ago, I enthused about the Python community, and what a great help to me they were back in the day.

These days it’s Haskell that has my brain feeling too small, and I’ve just had my first experience of the Haskell community. I installed an IRC client for the first time in four years just so I could log on to the #haskell IRC channel. I’m happy to report that the experience was entirely positive.

For an imperative programmer, learning Haskell is initially hard because you have to stop thinking in terms of issuing instructions which will be performed in order, and start thinking in terms of composing pure functions (which have no side effects) together to Get Things Done. It’s weird at first, but ultimately very powerful, as it lends itself much more nicely to higher-order programming, lazy evaluation, actually composing programs out of reusable parts, etc. I’d say I’m starting to get a reasonable feel for that, although I’ve got a long way to go.

Unfortunately, a little later, you realise that there are cases where you need side effects (e.g. input/output), which leads you into the whole sort-of-imperative side of Haskell, at the heart of which lie the hairy monad things. You totally can’t avoid this, because sooner or later you need to do I/O. Monadic I/O (and monads in general) looks and feels imperative at first, but you soon hit barriers thinking like that, and ultimately you really have to read stuff like the I/O Inside tutorial, Tackling the Awkward Squad, etc. in order to understand what’s really going on, which is actually really clever, decidedly not imperative, and at some point turns into lambda calculus or category theory (take your pick).
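A trivial example of what I mean by “looks imperative but isn’t”: the do block below is nothing but sugar for the nested (>>=) application underneath it; the sequencing lives in the monad, not in some hidden list of statements.

    -- Looks like two statements executed in order...
    greet :: IO ()
    greet = do
      name <- getLine
      putStrLn ("Hello, " ++ name)

    -- ...but it's just this: ordinary function application, with the
    -- "and then" captured by (>>=).
    greet' :: IO ()
    greet' = getLine >>= \name -> putStrLn ("Hello, " ++ name)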

It’s monads that I’ve been wrangling with lately. I’ve been trying to do a fairly simple I/O task: I have some directory containing some files; I want to operate on a subset of the files in that directory, for each of them reading the file and creating some data object representing its contents. The details aren’t important. Doing this in Python (say) is trivial. Doing it in Haskell has had me stuck for nearly a week. :-) I spent all day last Friday working through “I/O Inside”, and now understand monads much better than I did, and maybe half as well as I should (but maybe not even). That was all very informative and educational, but still didn’t answer my problem.

Anyway, long story short, tonight I installed an IRC client, went on #haskell, asked the question, and got an answer immediately and in a wonderfully friendly fashion. Full #haskell chat log for today is here if anyone’s interested, but in essence it turns out that mapM is what I need for this task. Sweet, and actually incredibly simple when you know it. I think a lot of Haskell is like that…
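For the record, here’s the shape of the thing, as a sketch with made-up names and a made-up “.txt” filter rather than my actual code:

    import System.Directory (getDirectoryContents)
    import Data.List (isSuffixOf)

    -- Stand-in for "read the file and build some data object from its contents".
    parseFile :: FilePath -> IO String
    parseFile path = do
      contents <- readFile path
      return (take 40 contents)

    -- The bit that had me stuck: perform an IO action for every file in the
    -- chosen subset, collecting the results -- which is exactly what mapM does.
    loadAll :: FilePath -> IO [String]
    loadAll dir = do
      names <- getDirectoryContents dir
      let wanted = filter (".txt" `isSuffixOf`) names
      mapM (\name -> parseFile (dir ++ "/" ++ name)) wanted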

By the way, the #haskell channel has this neato lambdabot running, which, among other handy functions, remembers and repeats amusing/apposite quotes when instructed to do so. Given the sad theory geek that I am becoming, this quote it kicked up tonight made me chortle:

Binkley: the sex is all in the operational semantics, denotational semantics only deals with love.

vincenz, 14:41:22

Haskell and Category Theory

An introduction to Category Theory from a Haskell point of view. As always, monads are where your brain starts hurting.
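The first rung of the ladder is gentle enough, mind: a Haskell Functor is a functor from Hask to Hask, and the usual instance laws are exactly the categorical ones. A standard example, just sketched here:

    -- A binary tree, and the functor mapping a function over its elements.
    data Tree a = Leaf | Node (Tree a) a (Tree a)

    instance Functor Tree where
      fmap _ Leaf         = Leaf
      fmap f (Node l x r) = Node (fmap f l) (f x) (fmap f r)

    -- The functor laws, i.e. preservation of identity and composition:
    --   fmap id == id
    --   fmap (g . h) == fmap g . fmap h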

Foozles!

Yikes. This is waaaaay too close to home: Foozles – a programming language feature comin’ atcha soon! [wadler]

2013: Francoise Boisbleu proves that under a certain formulation, Foozles are a categorical dual to Aspects, which gets everyone terribly excited.

It certainly would.

Operational Semantics of a Simple Process Algebra in Python and Haskell

As promised, though I’m still working on the shiny LaTeX article which actually explains all this stuff…

From the README:

This is an investigative/learning exercise in transforming process algebraic expressions into labelled transition systems according to their operational semantics, using Python and Haskell.

It started out just as an exercise in operational semantics, using Python because it’s the language I get things done fastest in. That proved interesting enough that I wanted to re-do it in Haskell, which I’m learning and which arguably handles this sort of thing better.

The Python version is more advanced: it includes mu-recursion, which the Haskell version doesn’t, and is quite tidy IMHO. The Haskell version, on the other hand, is less developed, and in particular the functions for actually creating the graphs could, I’m sure, be improved. Always nice to have some future work to do…

I’m publishing this in case anyone is interested in the code. In particular, it is perhaps useful as an example of:

I’m working on a paper describing the problem, the semantics, and the approaches taken in the two languages, but it’s very much a work in progress and isn’t ready to publish yet.

Homepage for this work: http://www.cs.swan.ac.uk/~csandy/research/op_sem_py_hs/
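To give the flavour of the Haskell side, here’s a toy sketch of the idea (not code lifted from the tarball): a minimal process algebra with prefix, choice and Stop, whose one-step transitions are read straight off the usual SOS rules.

    -- A tiny process language...
    data Proc = Stop                  -- no behaviour
              | Prefix String Proc    -- a.P : do 'a', become P
              | Choice Proc Proc      -- P + Q : behave as P or as Q
      deriving (Show, Eq)

    -- ...and its one-step transitions (label, successor process).
    transitions :: Proc -> [(String, Proc)]
    transitions Stop         = []
    transitions (Prefix a p) = [(a, p)]
    transitions (Choice p q) = transitions p ++ transitions q

    -- e.g. transitions (Choice (Prefix "a" Stop) (Prefix "b" (Prefix "c" Stop)))
    --        == [("a", Stop), ("b", Prefix "c" Stop)]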

gimbo gimbo, where art thou?

I’m still here, I’ve just been too busy to blog. However, while I’m waiting for ghc-6.4 to compile (that’s my excuse anyway), I thought I’d do a quick blogdump…

I was going to write about my week in London, and a bit about what I’ve been reading lately, but I started by writing the stuff below instead, and now I think that’s enough for one post, so I’ll publish this and follow up with the rest maybe tomorrow or Sunday. (Short version: London fun except bombs, Quicksilver OK, Accelerando completely barmy but getting a bit dull at 40%). Colin, I should warn you that the rest of this post is of the kind you don’t like. The London diary thing might be better. Sorry!

Work’s been truly excellent this week. No students so no teaching, and no admin for a while either. At some point I need to do some preparation for next year’s teaching, but for the next two months I’m mainly going to be focussing on research at last, aiming to break the back of my MPhil. I made a breakthrough on the parsing side last Sunday (or rather, I cleared up a misconception I’d been harbouring), but have spent this week on a different tack, learning about operational semantics through the magical medium of Python. Specifically, Markus gave me a toy process algebra and its semantics, and outlined the algorithm for turning expressions in the PA into their labelled transition systems; then I went away and programmed it.
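The core of such an algorithm is essentially a worklist exploration of the reachable states; here’s a sketch in Haskell for brevity (hypothetical names and types; the version I actually wrote, as described above, was in Python, and may well differ).

    import qualified Data.Set as Set

    -- Given a one-step 'transitions' function, collect every reachable state and
    -- every labelled edge; together they form the labelled transition system.
    explore :: Ord s => (s -> [(String, s)]) -> s -> (Set.Set s, [(s, String, s)])
    explore trans start = go [start] (Set.singleton start) []
      where
        go []         seen edges = (seen, edges)
        go (p : todo) seen edges =
          let outgoing = trans p
              succs    = Set.fromList [q | (_, q) <- outgoing]
              new      = Set.toList (succs `Set.difference` seen)
              edges'   = edges ++ [(p, a, q) | (a, q) <- outgoing]
          in go (todo ++ new) (foldr Set.insert seen new) edges'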

It’s been excellent fun, and I got it basically done in no time at all, mainly because I chose to do it in Python. It’s quite remarkable, in fact… For months I’ve been struggling to get anywhere with my research, and it’s been very depressing and making me feel like I can’t hack this stuff. Now I realise it’s just that I’m a complete Haskell newbie. If I was doing my research in Python, I’d probably have finished by now. Alas, I have to do it in Haskell, because of the system we’re interacting with, but it’s encouraging to realise my problems relate to the language/paradigm I’m working in, not some basic failing on my part to understand what the heck I’m trying to do.

Anyway, I’m writing up an article explaining what I’ve done, and either later today or early next week I’ll publish it and my code, so anyone who reads the above and says “huh?” can take a look if they want. (JOn? You reading this? ;-))

Next week is graduation week at Swansea University, and I’m acting as a marshal on Monday, which is Science day. So I get to see all this year’s third years do their stuff. With luck and effort, I should be there myself this time next year.

What else is new? I recently made the shift from ion to ion3. Ion’s been my window manager of choice for about three years now, mainly because I can’t be bothered with all that tedious point-and-click move-and-resize nonsense you have to do with windows in most WMs. TR occasionally moans at me that it’s modal, but I don’t see that as a problem: it works for me and is extremely keyboard-friendly and fast, so I’m happy. But I’ve been feeling behind the curve, and in particular some apps (e.g. the Gimp) don’t play well with the tiled model, which is where ion3 comes in: it adds optional “float workspaces” which act more like a conventional tedious point-and-click window manager, if and when that’s what you need. Making the move was non-trivial because my config files had to be ported to Lua, but now it’s done and I’m very happy with my window manager indeed. Once again, I’d recommend looking at Ion if you’re getting dissatisfied with your Gnome/KDE experience and want to strip things down.

Finally, a couple of Python goodies via the Python-URL: try/except vs if/else in terms of speed (an oldie but a goodie, especially when advocating Python to curmudgeons), and Sparklines, which are kinda weird and kinda cool, but I’ve no idea if they’d actually be useful.