Alan Dix on the difference between how humans parse language, and how machines do so — and associated impacts on interaction. Interesting stuff, as ever.
Nice to see that Wittgenstein (or at least, one of the Wittgensteins) is also a Haskell man…
Last one today, I expect: Literate Haskell with Markdown and Syntax Highlighting.
I’m in Newcastle, staying at a hotel in the shadow of the rather impressive Tyne Bridge. I’m up here with Harold and others, visiting Michael Harrison, doing talks, etc. I spoke today, about “generating theorems from user interface automata”: lots of good discussion and nice ideas for what to do with this next. More of the same tomorrow, then back to Swansea on Thursday; it takes all day on the train. Oh yeah, and it’s frickin’ freezin’ up here. It keeps snowing, just a dusting. Thank heaven for warm gloves, warm hats, and Montane.
Lots of gallivanting of late. This time last week I was up in Gregynog for the annual Swansea CompSci shenanigans. For the first time since I joined Swansea, I wasn’t organising the trip (my sixth time there, and seventh in total) — but I didn’t get to relax because I was still doing the pub quiz (very well received), and a talk on my research (also seemed good, and by the way, woo Keynote!), and helping out because illness had knocked a few members of staff out of the proceedings. Oh, and I sang the first verse of a Czech folk song solo, and didn’t completely screw it up – so that was good too.
Then on Friday I went to Cardiff with gorgeous friend Adwoa, stayed overnight at gorgeous friend Kate’s house, then flew to Amsterdam to see Rodrigo y Gabriela at Paradiso. Kate had spotted this gig and suggested a mission, and somehow Adwoa and I were so taken aback by the audacity of the proposal that we had to agree. A 24-hour trip to another country to see a band neither of us knew yet? Fantastic! And it was: with one notable exception I’d say this was the best gig I’ve seen since The Chemical Brothers in Cardiff in 1997. Gabriela is absolutely the star, to my mind (sound quality awful but just watch her go; better quality but less animation), but I would say that because I favour rhythm, and that’s where she’s at. Anyway, yeah, they rocked and I’ll be buying the live album when it’s out.
Then we popped back to our hotel and asked the guy on the front desk if he knew of anywhere nearby doing decent electronic music, and he suggested Melkweg, casually mentioning that Nathan Fake (of my love for whom see here) was playing live. WTF? Oh yes, and Speedy J (hugely influential to my early electronic awakenings, e.g. via The Oil Zone) was DJ’ing. So we went there.
Actually, I found Fake’s live set disappointing: way too much glitch, not enough repetition, just too screwed up — Adwoa seemed to rather enjoy it, however. And I couldn’t fault what I saw of Speedy J, though I confess I didn’t stay for the whole five hours. ;-)
Didn’t get much sleep, but it was pleasant. Awake at 7.00; on a tram at 7.30; at the railway station at 8.00; at the airport at 8.30; at the departure gate at 9.10; liftoff 9.55; land Cardiff just over an hour later; asleep just over an hour later (again, at Kate’s). Since I was getting a train to Newcastle the next day, it seemed more sensible to stay there than to go to Swansea, so that’s what I did. Then I came here. Then stuff happened. Then I started writing this. Right, that’s me up to date. Bedtime!
I had my MPhil viva last Wednesday, and I’m happy to report that I have been recommended for the degree, subject to performing some minor corrections to my thesis. That’s pretty much the best you can hope for — it’s unheard of to “just pass”; I now have four weeks (well, three now!) to complete and submit my corrections, most of which are minor things like typos. In fact, I did all the really minor corrections on the same day… I just have a couple more background paragraphs to write and then I’ll submit it for final rubber stamping, after which I have to get some copies hardback-bound, and when they’re submitted to the university I can write MPhil after my name.
It’s been a long, hard road — mainly because I was trying to do the work part time, on top of my teaching load which tended to grow a bit faster than I could keep up, but woo, it’s done. I am now, apparently, the official world expert on the syntax and static semantics of the specification language CSP-CASL. Lucky me.
The viva itself was no picnic… My examiners were David Aspinall (Edinburgh) and Gerald Luettgen (York) — two external examiners because I’m a member of staff. I started with a 20-minute-or-so presentation on my work, which I think answered some of their questions off the bat, and then we got into the dialogue part. Apparently my thesis was very well polished and as a result there weren’t too many questions on its contents, and I could (I think) deal with them quite well. The harder parts went along the lines of “so, how does such-and-such aspect of your work relate to THIS OTHER THING YOU KNOW NOTHING ABOUT?”. :-) This happened several times, and I think it betrayed a certain lack of breadth in my research: I’d focused on doing what I needed to do, in the way it had to be done given the external constraints on the project, but that meant I didn’t know as much as perhaps I should about how my work fitted into the rest of its milieu. Happily, while an instructive lesson, that doesn’t prevent me getting the degree.
I’ll publish my thesis online, and say a bit more about its contents, when the corrections are complete and the final version is bound.
Sadly, I have to wait until next July to graduate (or rather, to celebrate my achievement, as they say – I get the degree before then), because last week also happened to be graduation week in Swansea. ‘Twas lovely to see all the students milling around in their batman gear, especially the ones I’d taught and/or been friends with. I even got to go to the Faculty of Arts ceremony on Monday, and see Alexa graduate, which was lovely.
So I’m here to say that mindwarp #3 is discovering the function as the basic unit of abstraction. Jaw-droppingly beautiful abstractions and generalizations can be created out of just functions. You can rediscover the usefulness of partial functions and currying, which were techniques created in the 1800s. You can be in the direct lineage of Alan Turing, who used higher order functions in the 1930s to define his theoretical Turing Machine in his paper “On Computable Numbers, with an Application to the Entscheidungsproblem.” And you can finally understand recursion in a deep and intuitive way, and you’ll feel like you’ve looked into the abyss and somehow come back to tell everyone else about it. And maybe, just maybe, you can explain to me what a freakin’ monad is.
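For the uninitiated, here’s a tiny Haskell illustration of that mindwarp (my own sketch, not from the quoted post): every function is curried, so partial application falls out for free, and functions compose like any other value.

```haskell
-- add looks like a two-argument function, but in Haskell it's
-- really a function returning a function: Int -> (Int -> Int).
add :: Int -> Int -> Int
add x y = x + y

-- Partial application: bind the first argument, get a new function.
increment :: Int -> Int
increment = add 1

-- Higher-order: functions as arguments and results, too.
twice :: (a -> a) -> a -> a
twice f = f . f
```

Here `twice increment 3` gives 5, and that’s the whole trick, all the way up.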
Yet people talk about programming as if it were a production process and measure “programmer productivity” in terms of “number of lines of code produced”. In so doing they book that number on the wrong side of the ledger: we should always refer to “the number of lines of code spent”.
Dijkstra, The Fruits of Misunderstanding, and also a similar sentiment (earlier) in “Why is Software so Expensive?” An Explanation to the Hardware Designer — quote spotted on reddit.
… and from the second (first, chronologically) of those essays, Dijkstra hitting the nail on the head with regard to aspects of some recent rumblings about higher education:
To the economic question “Why is software so expensive?” the equally economic answer could be “Because it is tried with cheap labour.” Why is it tried that way? Because its intrinsic difficulties are widely and grossly underestimated.
To any of my students reading this: don’t underestimate the difficulty of the tasks we’ve been educating you to tackle; thus, don’t underestimate your worth if you get good at attacking those tasks; thus, hopefully you’ll appreciate (if not now then one day) the value of a degree in (actual, not pretend) Computer Science.
There are many, many reasons why you should never ever accept a job which involves programming in C++ [via TR].
Lots going on, but blogging sorely neglected. Partially I’ve been busy with work, partially I’ve been busy with going away.
Three Big News Items:
- I properly completed and submitted my MPhil thesis; more on this later (including a PDF for anyone masochistic enough to want to read it).
- I went to Budapest for a week; photos later. They’re on flickr now but awaiting tags and descriptions; I had done those things but lost them by not listening to the nagging voice telling me not to use crappy software that doesn’t save its state (yes I’m talking about you, kflickr). Executive summary: great city, well worth a visit.
- I have a new job! Starting in August, I’ll be a Research Assistant working for Harold Thimbleby on a three-year project on formal tools for analysis of and design for usability, particularly on small devices. Again, more on this later.
There’s a fourth Big News Item but I’m keeping it under my hat for now.
So, lots for me to follow up on, but for now I’d just like to draw your attention to the latest clutch of lovely photos from Bash, who’s sadly been laid low by a bad cold for the last little while.
… We argue that this is a gross over-estimate and present an attack that recovers secret keys within minutes on a typical desktop PC or within seconds on an FPGA. Our attack exploits statistical weaknesses of the cipher.
For quite a while now I’ve been thinking, and saying, that languages are the central and fundamental modality for computing, ie everything one might want to do in computer science can/should be approached from a linguistic standpoint. I think I’ve just realised that this slightly misses the point, or at least doesn’t say much — because language is central and fundamental to any intellectual (or at least scientific/non-explicitly-sublime) endeavour.
Forget computer science for a moment, step back, and take a look at language.
Language is a serialisation mechanism for ideas and meaning. It exists and is beneficial because serialisation allows those things to be persisted, exchanged, and manipulated.
In natural language, the persistence mechanisms take the form of writing and recordings; the exchange mechanisms are speech if persistence isn’t required as a side effect, but otherwise largely use the persistence mechanisms. Manipulation takes the form of editing and rewriting at the syntactic level, and argument and debate at the semantic level (and propaganda/sociocultural programming at the political level).
Formal languages are central to computer science not because languages per se have anything much to do with computer science, but because formalisation, which means automation and mechanisation, is the very essence of computer science. It is the science of the mechanistic manipulation of data — “the latest stage in mankind’s ongoing quest to automate everything”, as JVT once said. Languages per se are fundamental to computer science only insofar as they are fundamental to all intelligent human endeavour, in their role as a serialisation mechanism for thought. The point with regard to computer science is not that we use language – that is an unavoidable side effect of thinking; the point is that we have to use formal languages, because of the things we choose to think about. As such, computer science is where the species’ expertise on the formal and formalisable aspects of language resides, mainly (colleagues in linguistics may take issue at this, of course, but my personal opinion is that the distinction between natural and formal is here very very deep, or at best that the formalisms behind natural language are intangible). At its heart, computer science is the science of formalisation; the language of such a science must, necessarily, be largely formal.
I guess that’s it. Does this make sense to anyone else? Maybe I’m not saying anything non-obvious. shrug
(This started, by the way, with me thinking about why the textbook on my table, “Languages and Machines”, is subtitled “An Introduction to the Theory of Computer Science”).
Here’s one of my favourite little logic problems, courtesy of Faron. It’s about time I got this off my noticeboard and onto my interwebs.
Which of the following is true?
- All of the below.
- None of the below.
- All of the above.
- One of the above.
- None of the above.
- None of the above.
Solutions involving exhaustive analysis of the 2^6 possibilities are forbidden, although elegant code doing so will at least be admired for its cleverness, especially if it’s in Haskell. ;-)
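Since Haskell brute force was explicitly invited (for admiration only, not as a solution), here’s a minimal sketch of the forbidden exhaustive check, under the reading that statement 4’s “one” means “exactly one”:

```haskell
import Control.Monad (replicateM)

-- A truth assignment [s1..s6] is consistent if each statement's
-- truth value matches what it claims about the others.
consistent :: [Bool] -> Bool
consistent [s1, s2, s3, s4, s5, s6] =
     s1 == and [s2, s3, s4, s5, s6]               -- all of the below
  && s2 == not (or [s3, s4, s5, s6])              -- none of the below
  && s3 == (s1 && s2)                             -- all of the above
  && s4 == (length (filter id [s1, s2, s3]) == 1) -- one of the above
  && s5 == not (or [s1, s2, s3, s4])              -- none of the above
  && s6 == not (or [s1, s2, s3, s4, s5])          -- none of the above
consistent _ = False

-- All 2^6 assignments, filtered down to the consistent ones.
solutions :: [[Bool]]
solutions = filter consistent (replicateM 6 [False, True])
```

I won’t spoil which statement comes out true; run it and see.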
Waiting: A Necessary Part of Life — Don Norman on buffers. I think this has possibly the best first sentence of anything I’ve ever read, ever:
Just as dirt collects in crevices, buffers collect in the interfaces between systems.
I’m impressed by the breadth of this. I like the idea that “interface” is a general and fundamental enough concept that the problems we see, and the mental models/tools for thinking about/solving them are essentially similar whether we’re talking about device usability, protocol design, or soft squidgy stuff involving only meat entities.
Problems arise at interface, any interface, be it person and machine, person and person, or organizational unit and organizational unit. Any place where two different entities interact is an interface, and this is where confusions arise, where conflicting assumptions are born and nourished, where synchronization difficulties proliferate as queues form and mismatched entities struggle to engage.
To the analyst, such as me, interfaces are where the fun lies. Interfaces between people, people and machines, machines and machines, people and organizations. Anytime one system or set of activities abuts another, there must be an interface. Interfaces are where problems arise, where miscommunications and conflicting assumptions collide. Mismatched anything: schedules, communication protocols, cultures, conventions, impedances, coding schemes, nomenclature, procedures. It is a designer’s heaven and the practitioner’s hell. And it is where I prefer to be.
Also worth a read: A Fetish for Numbers: Hospital Care.
Quite an interesting five-part series on database version control. I haven’t used this particular approach myself, but it looks well thought through and sensibly structured, at first blush at least.
Quite so: prototypes and real applications.
Your prototype needs to be written quickly and then it needs to change quickly. You’ll only be able to do that with a maintainable, flexible code base. In short, a well-written code base. You’re a proficient software engineer, you know how to do this. You probably do it without even thinking.
And at some level, everyone knows this. That’s why prototypes are created in languages like Python. A language that you can write quickly, but also write well, quickly.
I delivered my first lecture, for CS-228 Operating Systems, 5 years ago yesterday. Today, TR delivered his first lecture — for, er, CS-228 Operating Systems. I’m sure he’ll do it very well, and certainly better than I, as he is both more informed and opinionated on that topic than I am. Well, on all topics, really… ;-) Welcome to the club, TR!
C isn’t hard; programming in C is hard. On the other hand: Haskell is hard, but programming in Haskell is easy.
More infinite list tomfoolery in Haskell — the one for pi’s particularly “WTF?”.
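In the same spirit, the canonical infinite-list example (the classic, not one from the linked post) still makes my head spin pleasantly:

```haskell
-- The list is defined in terms of itself; laziness makes it work,
-- since each element only needs the two already produced before it.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
```

`take 8 fibs` gives you the first eight Fibonacci numbers, from a two-line definition over an infinite structure.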
Monads and the meaning of imperative language — the delicious alpheccar does a lovely job of introducing denotational semantics without saying enough to scare you off, and shows how exceptions (or even assert) in imperative languages are, at bottom, the Maybe monad. This point generalises (apparently – I know enough to believe it could be true, but not enough to assert that it isn’t untrue) to “any formal description of control flow in an imperative language will be monadic in nature.” Gorgeous.
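A tiny sketch of that point (mine, not alpheccar’s): chain some computations in Maybe and a failure anywhere aborts the rest, exactly like an uncaught exception propagating.

```haskell
-- Division that can "throw" by returning Nothing.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- do-notation sequences via Maybe's bind: the first Nothing
-- short-circuits everything after it.
calc :: Int -> Maybe Int
calc n = do
  a <- safeDiv 100 n
  b <- safeDiv a 2
  return (a + b)
```

So `calc 0` is Nothing with no explicit error-handling code anywhere in sight; the plumbing is all in the monad.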
The stuff about defining domains (and that being the hard part) is resonating with me just now; I’ve spent the day nailing down definitions of sets describing a particular aspect of my pet specification language, CspCASL, and it’s not trivial. And this is the easy part: not proper “domains”, just sets without much internal structure. Markus does that, for the model semantics. Anyway, yay language design.
Formally describing languages is hard. That’s why it doesn’t happen much yet, which is one reason our current languages situation is so messy. My hand-waving prediction: it’s hard but not intractable, and we’re getting better and better at it; in time it’ll be a sufficiently manageable problem, with sufficiently good tool support, that languages which aren’t formally described will stagnate in comparison to the new languages then appearing. Naturally, from where I’m standing I see the increasing convergence of computer languages (sounds like a dumbing-down term but I’m really just trying to cover both programming and specification) with full-blown mathematical logic in all its various and colourful forms. Mathematics is the language for describing things formally; a computer program is by necessity a formal description of something, therefore this convergence seems like it will be a good thing – again, from where I’m standing. Whether or not it appears that way because where I’m standing is a good place to get a view of what’s going on, or just because that’s all you can see from here, remains to be seen. ;-)
A couple of years ago I read and greatly enjoyed Bryan Ford’s 2004 paper “Parsing Expression Grammars: A Recognition-Based Syntactic Foundation” [PDF, 134Kb] (wow, it’s still on my desk, in fact); while to my disappointment it proved inapplicable to my research (I’m constrained in the parsing and language description technologies I can use by what I have to integrate with), it was clearly Cool Stuff.
reAnimator — a very cool tool for visualizing regular expressions. Given an RE, renders the corresponding NFA and DFA and animates acceptance (or not) of an input string. Try out the “a|ab|abc|abcd” example with input “a” for a neato example. A Flash app, written using Python.
Why Haskell is Good for Embedded Domain Specific Languages — nice summary, I’d say.
I started working in Swansea University’s Computer Science department five years ago today. Here’s the evidence. It’s been a blast, and the executive summary still holds true, somewhat amazingly. That is all.
Here’s a pretty clear explanation [reddit] of closures, which have hit the mindspace in a big way since Rails made Ruby the hot sauce of the day. When I say “clear”, I mean, perhaps, from a “traditionalist imperative” point of view — the discussion of stack frames and the funarg problems in particular. Of course, lambda-heads and category theorists probably have other criteria for “clear”. ;-)
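From the Haskell side, the whole upward-funarg business is barely visible; a returned function simply closes over its environment. A toy of my own (not from the linked article):

```haskell
-- adder returns a function that captures n; n outlives the call
-- to adder, which is exactly the "upward funarg problem" that
-- stack-frame-based implementations struggle with.
adder :: Int -> (Int -> Int)
adder n = \x -> x + n

add5 :: Int -> Int
add5 = adder 5
```

`add5 3` is 8: the binding for n survives long after `adder` has returned.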
(Within an hour or two of publishing this, it was pointed out to me that this talk is really about the IO monad rather than monads in general, and that in particular the assertion that a monad represents a computation which performs a side-effect is not, in general, true. A nice example is the Maybe monad. So a better title for this talk is “A pragmatic look at monadic I/O in Haskell”.)
A pragmatic look at monads in Haskell (PDF, 293KB) – slides from a talk I gave last Friday in Swansea University’s Computer Science department, as part of our “No Grownups” series of non-research talks given for and by postgraduate students.
The aim of the talk was to explain monads to a non-expert (and, largely, non-Haskell-programming) audience: why do we have them, what problems do they solve, and how are they used? The approach is pragmatic in that the talk explicitly does not go into technical details, instead focusing on a broad understanding, and on some specific useful “rules of thumb” for programming with monads in Haskell. I don’t claim to be an expert on monads or to have produced a talk which is authoritative or even necessarily completely correct. I do hope to have produced something reasonably comprehensible and useful, however. I would welcome any feedback, comments, corrections, clarifications, etc.
The talk was filmed, but I don’t know if my delivery was good enough to warrant putting it online. :-) Let me know if you’re interested. The post-talk discussion was quite useful, so it might be worth it for that. In particular, there was a question about when exactly the “non-purity” happens – when does referential transparency go away? My answer was that it happens when it needs to (ie when the result is required) and that yes, obviously somewhere there is some code which isn’t pure Haskell and which is doing the impure operation – eg an operating system call. Markus opined that a big part of the point of monads is to give us a clear indication, in the type system, that an operation is impure and thus, in some sense, unsafe/not nice. I thought that was a good enough point that I’ve since added a bullet saying that to the talk – but that’s the only addition I’ve made before publishing.
Background/reference material: A History of Haskell: being lazy with class (historical context), monads @ wikibooks (the nuclear waste metaphor), IO inside: down the rabbit’s hole (probably the point where I started understanding monads), rules for Haskell I/O (not an influence, but something with a similar flavour which I saw when I’d nearly finished), do-notation considered harmful (desugaring), monads on the unix shell (just because the dirty string “dramatisation” is so great).
(I realise it’s gone very “theory of programming languages” round here lately. Don’t worry, I’m sure I’ll have something else to talk about soon.)
Church’s Thesis and Functional Programming, a lovely little paper by David Turner (he of SASL & Miranda fame). Like the Google Tech Talk I mentioned a couple of days ago, it’s big on unifying historical perspective, which is a welcome change from most of the papers I see on these topics, which seem to assume you’ve either read all the papers they reference (recursively), or you’re about to. ;-)
It’s a veritable cornucopian tour of TPL: Church’s thesis; the equivalence of lambda-calculus, Turing machines, and recursive functions; how the halting problem fits into that; the lambda calculus; Church-Rosser; strict vs. non-strict; combinatory logic; typed lambda-calculi; implementation of lazy FP (exemplars Miranda & Haskell); domain theory; FP without general recursion; Curry-Howard; Martin-Löf; and a hint of abstract Stone duality (which always makes me think of Father Stone). Seriously, if you’re at all interested in theoretical computer science, you should definitely take a look at this. I started losing the thread in the final section, but loved the rest and, I think, learnt quite a bit – mainly how various bits fit together; like joining the dots on a map.
(Ooh. Strachey coined the term “syntactic sugar”.)
Great stuff: Parametric Polymorphism and the Girard-Reynolds Isomorphism [reddit] – a 30 minute Google Tech Talk by Phil Gossett. A really clear presentation covering a number of interesting topics, with a very convincing historical sweep. Certainly cleared a few things up for me, at least. His response to the last question is really more Hindley-Milner than Girard-Reynolds, mind. ;-)
Last week I wrote about sections in Haskell, and discovered flip. Today I hit upon a problem which got me thinking about them again, leading to one of those wonderful moments when Stuff You Just Learnt turns out to be exactly what you need to solve the problem you’ve got right now. (Then you blog about it and find an Even Better Way, as it turns out.)
I’ve been writing some file-manipulation code, and along the way had written the following function:
-- | Check if a given string (subject) ends with another (target).
endswith :: String -> String -> Bool
endswith sub tgt = drop (length sub - length tgt) sub == tgt
(Actually I’ve forgotten now whether I wrote this myself or found it somewhere, but it’s clearly not a new problem. If anyone knows a source for the above code, please do tell so I can attribute it. Anyway…)
So then you can do:
*Ew> endswith "banana" "na"
True
*Ew> endswith "banana" "moo"
False
So far so good. Later, I had a list of filenames, and I wanted to filter it down to just those ending with a particular suffix. Initially, unthinkingly, I tried:
*Ew> filter (endswith "na") ["banana", "cat", "iguana", "mastodon"]
[]
Spot the cunning use of a section. Actually not so cunning, as it turns out – the empty list is not the result I was hoping for! The problem is that the first parameter of endswith is the subject (the string to be searched) not the target (the string to be searched for) so the section (endswith "na") is not the function which checks if a string ends with “na”. It’s the function which, given a string, checks if “na” ends in that string:
*Ew> (endswith "na") "banana"
False
*Ew> (endswith "na") "a"
True
*Ew> (endswith "na") "na"
True
My first reaction was to just swap the parameters around. Let’s call this version endswith'. It’s exactly the same as endswith except for the parameter order:
endswith' :: String -> String -> Bool
endswith' tgt sub = drop (length sub - length tgt) sub == tgt
Then (endswith' "na") is the section I’m looking for, and in particular works as I want under filter:
*Ew> (endswith' "na") "banana"
True
*Ew> filter (endswith' "na") ["banana", "cat", "iguana", "mastodon"]
["banana","iguana"]
I only use endswith in a couple of places, so this change was totally viable. Lesson learnt (or so I thought): you have to think carefully about parameter order when writing functions, because some sections will be more useful than others. Obviously with endswith, the section with bound target is more generally useful than the section with bound subject.
Later, I realised this lesson is bogus, or at least not very strong. I mean, there’s probably something in it, but I didn’t actually have to swap the parameters to make my filter work. That’s exactly what flip does:
*Ew> ((flip endswith) "na") "banana"
True
*Ew> filter ((flip endswith) "na") ["banana", "cat", "iguana", "mastodon"]
["banana","iguana"]
What’s going on here? (flip endswith) is endswith with the parameters flipped. Thus ((flip endswith) "na") is exactly the same function as (endswith' "na") which, as we saw above, does what I want. So in the end, I didn’t have to replace endswith with endswith' to get my filter to work – I could just flip it. Bearing in mind I didn’t even know flip existed a week ago, I felt pretty smug to realise this. :-)
The next question on my mind is what you might call “generalised sections for multi-parameter functions”… If I have a 4-parameter function and I want to build a section where 2 of the parameters are bound (say), and it’s not the first two, how do I do it? Stupid toy example:
f :: Int -> Int -> Int -> Int -> Int
f a b c d = a + b + c + d
Can I make a section representing f with b fixed at 3, and d fixed at 5, say? I know I can do this:
g :: Int -> Int -> Int
g a c = f a 3 c 5
but can we do it inline, as it were, as with the section approach? Answers on a postcard to the comments page, please! :-) I suspect currying’s going to wind up in there somewhere…
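For what it’s worth, the boring-but-honest inline route (a sketch, not necessarily the elegant postcard answer) is just a lambda, which you can think of as an anonymous generalised section:

```haskell
f :: Int -> Int -> Int -> Int -> Int
f a b c d = a + b + c + d

-- Bind b = 3 and d = 5 inline, no named helper required.
g :: Int -> Int -> Int
g = \a c -> f a 3 c 5
```

It’s not as pretty as a real section, but it handles any combination of bound and free parameters.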
That’s all I originally intended to blog about; when chasing links to support this post, however, I read this and learnt two more interesting things.
First, someone has, of course, solved this problem already, and I can just use isSuffixOf to get this job done (Strings in Haskell are lists of Chars). So now I get to throw endswith away – w00t, less code. A fundamental lesson: once the language starts making sense, learn what’s in the standard libraries!
Second, thanks to Haskell’s ability to deal with functions in infix form, the section (isSuffixOf "na") can also be written as ("na" `isSuffixOf`) which is, I suppose, more natural (provided you’re not scared off by the backticks, which I’m trying not to be). Then we have:
*Ew> filter ("na" `isSuffixOf`) ["banana", "cat", "iguana", "mastodon"]
["banana","iguana"]
A name I’ve come across in my work on hets is Grothendieck. In a great example of Wikipedia cascade, someone on #haskell this morning mentioned the programming language Joy (which looks pretty cool), which led to the page on monoids, which led to the page on Grothendieck groups, which led to the page on Grothendieck himself, who a) also looks cool, and b) “gave lectures on category theory in the forests surrounding Hanoi while the city was being bombed, to protest against the Vietnam war”. Respect.
I just found two super photos on flickr…
More super for computer people, I guess, but anyway…
(Number 2 in an ongoing Thorsten series.)
As promised, though I’m still working on the shiny LaTeX article which actually explains all this stuff…
From the README:
It started out just as an exercise in operational semantics, using Python because it’s the language I get things done fastest in. That proved interesting enough that I wanted to re-do it in Haskell, which I’m learning and which arguably handles this sort of thing better.
The Python version is more advanced. It includes mu-recursion, which the Haskell doesn’t, and is quite tidy IMHO. OTOH the Haskell is less developed, and in particular the functions for actually creating the graphs could, I’m sure, be improved. Always nice to have some future work to do…
I’m publishing this in case anyone is interested in the code. In particular, it is perhaps useful as an example of:
- Graph programming in Python using NetworkX and Pydot.
- Representation and manipulation of abstract syntax trees in Python.
- Graph programming in Haskell using Data.Graph.Inductive and Data.Graph.Inductive.Graphviz.
- Poor newbie Haskell programming style. ;-)
I’m working on a paper describing the problem, the semantics, and the approaches taken in the two languages, but it’s very much an ongoing WIP and isn’t ready to publish yet.
Homepage for this work: http://www.cs.swan.ac.uk/~csandy/research/op_sem_py_hs/
I’m still here, I’ve just been too busy to blog. However, while I’m waiting for ghc-6.4 to compile (that’s my excuse anyway), I thought I’d do a quick blogdump…
I was going to write about my week in London, and a bit about what I’ve been reading lately, but I started by writing the stuff below instead, and now I think that’s enough for one post, so I’ll publish this and follow up with the rest maybe tomorrow or Sunday. (Short version: London fun except bombs, Quicksilver OK, Accelerando completely barmy but getting a bit dull at 40%). Colin, I should warn you that the rest of this post is of the kind you don’t like. The London diary thing might be better. Sorry!
Work’s been truly excellent this week. No students so no teaching, and also no admin for a while too. Some time I need to do some preparation for next year’s teaching, but for the next two months I’m mainly going to be focussing on research at last, aiming to break the back of my MPhil. I made a breakthrough on the parsing side last Sunday (or rather, I cleared up a misconception I’d been harbouring), but have spent this week on a different tack, learning about operational semantics through the magical medium of Python. Specifically, Markus gave me a toy process algebra and its semantics, and outlined the algorithm for turning expressions in the PA into their labelled transition systems, then I went away and programmed it.
It’s been excellent fun, and I got it basically done in no time at all, mainly because I chose to do it in Python. It’s quite remarkable, in fact… For months I’ve been struggling to get anywhere with my research, and it’s been very depressing and making me feel like I can’t hack this stuff. Now I realise it’s just that I’m a complete Haskell newbie. If I was doing my research in Python, I’d probably have finished by now. Alas, I have to do it in Haskell, because of the system we’re interacting with, but it’s encouraging to realise my problems relate to the language/paradigm I’m working in, not some basic failing on my part to understand what the heck I’m trying to do.
Anyway, I’m writing up an article explaining what I’ve done, and either later today or early next week I’ll publish it and my code, so anyone who reads the above and says “huh?” can take a look if they want. (Jon? You reading this? ;-))
Next week is graduation week at Swansea University and I’m acting as a marshal on Monday, which is Science day. So I get to see all this year’s third years do their stuff. With luck and effort, I should be there myself this time next year.
What else is new? I recently made the shift from ion to ion3. Ion’s been my window manager of choice for about three years now, mainly because I can’t be bothered with all that tedious point-and-click move-and-resize nonsense you have to do with windows in most WMs. TR occasionally moans at me that it’s modal but I don’t see it as a problem: it works for me and is extremely keyboard friendly and fast, so I’m happy. But I’ve been feeling behind the curve, and in particular some apps (eg the Gimp) don’t play well with the tiled model – which is why ion3 is nice, because it adds optional “float workspaces” which act more like a conventional tedious point-and-click move-and-resize window manager if and when that’s what you need. Making the move was non-trivial because my config files had to be ported to Lua, but now it’s done and I’m very happy with my window manager indeed. Once again, I’d recommend looking at Ion if you’re getting dissatisfied with your Gnome/KDE experience and want to strip things down.
Finally, a couple of Python goodies via the Python-URL: try/except vs if/else in terms of speed (an oldie but a goodie, especially when advocating Python to curmudgeons), and Sparklines, which are kinda weird and kinda cool, but I’ve no idea if they’d actually be useful.
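The gist of the try/except vs if/else argument, sketched with timeit (this is my own minimal reconstruction, not the linked benchmark): when the exceptional case is rare, try/except can win, because the happy path pays for no test at all.

```python
# Two ways to fetch from a dict that might not have the key.
# With try/except, the common (key present) case does no membership
# test; the cost only shows up when the exception actually fires.
import timeit

d = {"present": 1}

def with_if(key):
    if key in d:          # test first, then look up
        return d[key]
    return None

def with_try(key):
    try:                  # just look up; pay only on a miss
        return d[key]
    except KeyError:
        return None

t_if = timeit.timeit(lambda: with_if("present"), number=100_000)
t_try = timeit.timeit(lambda: with_try("present"), number=100_000)
```

The exact numbers depend on your interpreter and on how often the exception is raised; if misses dominate, the if/else version pulls ahead again.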
More stuff over at tiddly-pom on the whole 10,000 factorial thing. Julian does a good job of pointing at me and laughing (rightly so, rightly so), and there are a couple of imperative approaches to the problem suggested, including one using generators – yay.
Of course, something I forgot to point out about Stephen’s reduce-based solution is that it’s a functional approach, and so has more in common with what’s happening in the Haskell version than these imperative versions. Anyway.
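For the record, one way the generator approach might look (the suggestions over at tiddly-pom aren’t reproduced here, so this is my own sketch): yield the running product imperatively, then keep the last value.

```python
# An imperative, generator-based factorial: the generator yields the
# running product 1!, 2!, 3!, ..., n!, and factorial() just exhausts
# it and returns the final value.

def running_factorials(n):
    acc = 1
    for k in range(1, n + 1):
        acc *= k
        yield acc          # k! so far

def factorial(n):
    result = 1             # 0! = 1 if the generator yields nothing
    for result in running_factorials(n):
        pass               # exhaust the generator; keep the last value
    return result
```

The nice thing is that the intermediate factorials come for free if you want them, which the reduce one-liner doesn’t give you.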
Wow… Potential third year project supervision material for next year: hOp, a microkernel based on the Glasgow Haskell Compiler, in which “experimenting with writing drivers in Haskell should be reasonably easy”. Funky as. [lambda]
Oh, how quickly we forget. Or to put it another way, how stupid I am. Or to put it another way, what a fickle language whore I am.
Further to my recent joy regarding Haskell, Stephen Judd appeared from nowhere and reminded me that of course, Python also handles arbitrary-length integers “out of the box”. How the hell did I forget that?
In fact, he went one better and gave me a single line of Python for calculating 10,000 factorial, viz:
reduce(lambda x,y:x*y, range(1,10001))
Admittedly this is harder to grok than the Haskell version:
bigFac :: Integer -> Integer
bigFac n | n == 0 = 1
         | n > 0  = n * bigFac (n-1)
(at least, without knowing the semantics of reduce) – but it does also seem to run a bit quicker on my box.
Hurrah for Python!
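(Incidentally, the one-liner really does produce the whole number; on a modern Python 3 you just have to import reduce from functools first, since it’s no longer a builtin. A quick size check:)

```python
# The same reduce one-liner, Python 3 flavour: reduce now lives in
# functools rather than the builtins.
from functools import reduce

fac = reduce(lambda x, y: x * y, range(1, 10001))
digits = len(str(fac))   # 10000! has 35660 decimal digits
```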
Of course, this all just reminds me that I’m spending too much time lately thinking about theoretical computer science, and not enough time getting my hands dirty programming… :-/
Update 2007-01-26: Of course, the Python version is no easier to grok than this alternative Haskell version:
bigFac n = product [1..n]
Now that’s beautiful, as is the definition of product.
Why Haskell is cool (again). Over a year ago I was mucking around with Haskell but didn’t get deeply into it. Today I picked it up again, and while playing around with tutorials did something which, in my geeky way, I find cool.
With four lines of code, and with no more knowledge than a beginner has, I was able to calculate 10000 factorial. (For my non-mathsy readers, that is, 10000 times 9999 times 9998 times 9997 … times 3 times 2 times 1).
This is a Very Big Number (it’s here if you don’t believe me), and until today I haven’t come across a programming language which can even represent/work with anything this large “out of the box”. I’m sure there are other languages which can – I just haven’t met them. It’s interesting.
It took about seven seconds to work it out. Nifty.