Wednesday, October 20, 2010

Effect inference

A few people have mentioned they are worried about the type signatures from my previous post.

For a sig like:

fillPrimes :: Array Int -> Int -> Int -> Int -> ()

The fact that there is no 'IO' constructor in that type doesn't mean that side effects aren't being tracked. The type inferencer knows what's going on, and so does the optimiser; you just don't have to write effect information in sigs if you don't want to.

If we add all the information about effects and mutability we actually have:

:: forall %r0 %r1 %r2 %r3 %r4
. Array %r0 (Int %r1) -> Int %r2 -> Int %r3 -> Int %r4 -(!e0)> ()
:- !e0 = !Read %r0 + !Read %r1 + !Write %r1 + !Read %r2 + !Read %r3 + !Read %r4
, Mutable %r1

A term like Array %r0 (Int %r1) means that fillPrimes accepts an array of boxed integers. The pointers (handles) to those integers are in some region named %r0, and the integers themselves are in region %r1. The effect term !Write %r1 reveals that fillPrimes updates those integers, which requires them to be mutable, hence the Mutable %r1 constraint.

Writing sigs this way is a bit verbose, so we also support the following "shorthand" syntax. This sig gets desugared to the previous one using some simple defaulting rules:

fillPrimes :: Array (Int{modify}) -> Int{read} -> Int{read} -> Int{read} -> ()
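To make those desugared effect terms concrete, here's a toy model in plain Haskell. This is my own illustration, not how DDC actually represents effects: it treats an effect as a set of atomic reads and writes on named regions, with the '+' that joins effect terms behaving as set union.

```haskell
import qualified Data.Set as Set

-- Toy model only: regions are just names here, and an effect is a set
-- of atomic read/write actions on regions.
type Region = String
data Atomic = Read Region | Write Region deriving (Eq, Ord, Show)
type Effect = Set.Set Atomic

readE, writeE :: Region -> Effect
readE  = Set.singleton . Read
writeE = Set.singleton . Write

-- The '+' that joins effect terms behaves like set union.
(+!) :: Effect -> Effect -> Effect
(+!) = Set.union

-- The constraint !e0 from the fillPrimes signature, in this model.
e0 :: Effect
e0 = readE "%r0" +! readE "%r1" +! writeE "%r1"
  +! readE "%r2" +! readE "%r3" +! readE "%r4"

-- An effect is acceptable against a signature's effect when it is a
-- subset of it; a pure function has the empty effect, which fits anything.
fits :: Effect -> Effect -> Bool
fits = Set.isSubsetOf
```

In this reading, fits (readE "%r0") e0 holds while fits (writeE "%r2") e0 does not: fillPrimes may read %r2 but never writes it.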

Full information about effects and mutability is exported in the interface files. I'm currently working on a system whereby you can say "These are the effects I think this function has, but tell me if you think it has more." I'll post more about this when it's done (it half works now).

Saturday, October 16, 2010

Code: Three primes programs

This post illustrates some of the programming styles possible in Disciple.

We'll start with ye'olde primes program straight from the nofib benchmark suite. This is the exact code, except that for the sake of example I've hard wired it to show the 1500th prime instead of taking a command line argument.

suCC :: Int -> Int
suCC x = x + 1

isdivs :: Int -> Int -> Bool
isdivs n x = mod x n /= 0

the_filter :: [Int] -> [Int]
the_filter (n:ns) = filter (isdivs n) ns

primes :: [Int]
primes = map head (iterate the_filter (iterate suCC 2))

main = print $ primes !! 1500

This program builds an infinite list of primes, and then uses list indexing to take the 1500'th one. A couple of nice functional programming paradigms are used, namely higher order functions and laziness. Disciple is a default-strict dialect of Haskell, so we can use almost the same code, except we have to be explicit when we want laziness:

suCC :: Int -> Int
suCC x = x + 1

isdivs :: Int -> Int -> Bool
isdivs n x = mod x n /= 0

the_filter :: [Int] -> [Int]
the_filter (n:ns) = filterL (isdivs n) ns

primes :: [Int]
primes = mapL head (iterateL the_filter (iterateL suCC 2))

main () = println $ show $ primes !! 1500

That's it. I've changed map, filter and iterate to mapL, filterL and iterateL, and wibbled the main function slightly, but it's otherwise the same code. The difference between something like map and mapL is like the difference between foldl and foldl' in Haskell: they're implemented almost the same way, but have slightly different behaviours. Check the base libs for details.
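For anyone who hasn't met that foldl / foldl' distinction, here are the textbook definitions of the two folds, renamed so they don't clash with the Prelude. The only difference is that the strict version forces each intermediate accumulator:

```haskell
-- foldl defers (f z x), building a chain of thunks in the accumulator.
myFoldl :: (b -> a -> b) -> b -> [a] -> b
myFoldl _ z []     = z
myFoldl f z (x:xs) = myFoldl f (f z x) xs

-- foldl' forces each intermediate accumulator with seq before recursing,
-- so for strict operators like (+) no thunk chain builds up.
myFoldl' :: (b -> a -> b) -> b -> [a] -> b
myFoldl' _ z []     = z
myFoldl' f z (x:xs) = let z' = f z x in z' `seq` myFoldl' f z' xs
```

Both compute the same results; only the memory behaviour differs, and that's the flavour of difference between map and mapL here.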

Ok, so Disciple can express the dominant paradigms of Haskell, now it's time to subvert them. In the "Beyond Haskell" session at HIW 2010 we talked about how sometimes when optimising a program in a high level language, you reach a point where you just have to drop down to a "lower level" language for performance reasons. This is like when you're writing a Ruby program and it's running so slow there's simply nothing to do but "shell out" to a C routine. This also happens in Haskell, but perhaps not as much.

Anyway, one of the goals of Disciple is to avoid this problem if at all possible. If the lazy, purely functional version of primes just isn't fast enough for you, then let's write the same program using destructive update of a mutable array:

-- | Check if an int is a multiple of any in a given array.
checkPrime :: Array Int -> Int -> Int -> Int -> Bool
checkPrime array high x n
 | n >= high            = True
 | mod x array.(n) == 0 = False
 | otherwise            = checkPrime array high x (n + 1)

-- | Fill an array with primes.
fillPrimes :: Array Int -> Int -> Int -> Int -> ()
fillPrimes primes max high i
 | high > max = ()

 | checkPrime primes high i 0
 = do   primes.(high) := i
        fillPrimes primes max (high + 1) (i + 1)

 | otherwise
 = fillPrimes primes max high (i + 1)

main ()
 = do   -- We want the 1500'th prime.
        max     = 1500

        -- Start with an array containing the first prime as its first element.
        primes  = generate&{Array Int} (max + 1) (\_ -> 0)
        primes.(0) := 2

        -- Fill the array with more primes.
        fillPrimes primes max 1 2

        -- Take the last prime found as the answer.
        println $ show primes.(max)

The syntax primes.(max) means "return the max'th element of the primes array". The syntax primes.(high) := i means "destructively update the high'th element of the primes array to the value i". Updating an array requires the array to be mutable. It's possible to express this mutability constraint in the type signature, but it's not required, so I usually just leave it to the inferencer. The compiler knows that you're using side effects, and will optimise around them, but you usually don't have to say anything about it in source level type sigs. I'll talk about the cases when you do in another post. Note that we don't have to pollute the type sigs with the IO constructor just to use destructive update. After all, checkPrime and fillPrimes aren't doing any IO...
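For contrast, here is a rough transliteration of checkPrime and fillPrimes into plain Haskell using Data.Array.IO (my translation, not part of the DDC test suite). Note how IO shows up in every signature, even though neither function performs any actual input or output, only destructive update:

```haskell
import Data.Array.IO

-- Check if x is a multiple of any prime stored at indices [0, high).
checkPrime :: IOArray Int Int -> Int -> Int -> Int -> IO Bool
checkPrime arr high x n
    | n >= high = return True
    | otherwise = do
        p <- readArray arr n
        if mod x p == 0
            then return False
            else checkPrime arr high x (n + 1)

-- Fill an array with primes.
fillPrimes :: IOArray Int Int -> Int -> Int -> Int -> IO ()
fillPrimes arr maxN high i
    | high > maxN = return ()
    | otherwise   = do
        ok <- checkPrime arr high i 0
        if ok
            then do writeArray arr high i
                    fillPrimes arr maxN (high + 1) (i + 1)
            else fillPrimes arr maxN high (i + 1)

-- Mirror the Disciple main above: seed the array with 2, then fill it.
primeAt :: Int -> IO Int
primeAt maxN = do
    arr <- newArray (0, maxN) 0
    writeArray arr 0 2
    fillPrimes arr maxN 1 2
    readArray arr maxN

main :: IO ()
main = primeAt 10 >>= print    -- prints 31, the prime at index 10
```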

The above code is good, but it still feels like a Haskell program. I'm particularly looking at the tail calls to checkPrime and fillPrimes. This is how you express primitive looping in many functional languages, but it can become tedious when there are lots of state variables to pass back to each iteration. Here is another version of primes written in a more imperative style.

-- | Check if an int is a multiple of any in a given array.
checkPrime :: Array Int -> Int -> Int -> Bool
checkPrime array high x
 = do   n       = 0
        isPrime = 1

        while (n < high)
         do     when (mod x array.(n) == 0)
                 do     isPrime := 0
                        break

                n := n + 1

        isPrime /= 0

main ()
 = do   -- We want the 1500'th prime.
        max     = 1500

        -- Start with an array containing the first prime as its first element.
        primes  = generate&{Array Int} (max + 1) (\_ -> 0)
        primes.(0) := 2

        -- Fill the array with primes.
        high    = 1
        i       = 2
        while (high <= max)
         do     when (checkPrime primes high i)
                 do     primes.(high) := i
                        high := high + 1

                i := i + 1

        -- Take the last prime found as the answer.
        println $ show primes.(max)

Now we've got while loops, and break, even. Of course, the while syntax desugars to a simple higher order function that takes the loop body, and break just throws an exception to end the iteration. Such syntax should help the Python programmers feel at home, but Disciple is still lambdas all the way down.
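As a sketch of how that desugaring can work, here is the same trick in plain Haskell: while is an ordinary higher-order function over an effectful test and body, and break is an exception caught at the loop boundary. The names while, break' and runLoop here are my own, not DDC's:

```haskell
import Control.Exception
import Control.Monad (when)
import Data.IORef

-- Breaking out of a loop is modelled as throwing a private exception.
data LoopBreak = LoopBreak deriving Show
instance Exception LoopBreak

-- 'while' is just a higher-order function: run the body while the test holds.
while :: IO Bool -> IO () -> IO ()
while test body = do
    continue <- test
    when continue (body >> while test body)

-- 'break' throws; the loop wrapper catches it and ends the iteration.
break' :: IO a
break' = throwIO LoopBreak

runLoop :: IO () -> IO ()
runLoop loop = loop `catch` \LoopBreak -> return ()

-- Example: sum the numbers below a limit, leaving the loop via break'.
sumBelow :: Int -> IO Int
sumBelow limit = do
    i     <- newIORef (0 :: Int)
    total <- newIORef (0 :: Int)
    runLoop $ while (return True) $ do
        n <- readIORef i
        when (n >= limit) break'
        modifyIORef' total (+ n)
        modifyIORef' i (+ 1)
    readIORef total

main :: IO ()
main = sumBelow 5 >>= print    -- prints 10 (0 + 1 + 2 + 3 + 4)
```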

Finally, note that we started with a lazy, purely functional program but arrived at an unapologetically imperative program all in the same language. In my dreams people talk about "functional vs imperative" programs instead of "functional vs imperative" languages.

Write the lazy functional program because it's shorter, cleaner, and easier to understand... but when the clock is tickin' and the space is leakin', then wouldn't it be nicer to "shell out" to the same language?

PS: Code examples are part of the DDC test suite here

PPS: Of course the absolute performance of these programs compiled with DDC today is atrocious. This is a development blog after all, and we're still developing. They do work though. See the wiki to find out how you can help.

Hacks: Type error beautification

In this series of blog posts I'll name the ones that describe what I'm currently working on after "Hacks: whatever".

I'm currently working on type error beautification. The problem is that without it you get error messages that look like this:

Inferred type:
:: forall %rDE111 %rDE112 %rDE113
. Int32 %rDE111 -> Int32 %rDE112 -(!eDE484 $cDE487)> Int32 %rDE113
:- !eDE484 = !{!Read %rDE111; !Read %rDE112}
, $cDE487 = Test.x : %rDE111

Is bigger than signature:
:: forall %rDE111 %rDE112 %rDE113
. Int32 %rDE111 -> Int32 %rDE112 -(!eDE484 $cDE487)> Int32 %rDE113
:- !eDE484 = !Read %rDE111
, $cDE487 = xDE485 : %rDE111

Because !{!Read %rDE111; !Read %rDE112} does not match !Read %rDE111

That's all fine, apart from the variable names like %rDE111 and $cDE487. Those are "dummy" variables that have been invented by the type elaborator, which fills in missing effect terms in type sigs provided by the user. For example, the source program that gave the above error message is actually:

add <: Int{read} -> Int -> Int
add x y = x + y

The type signature says that the function is permitted to read its first argument, where {read} is a short-hand way of giving the real region and effect term. The elaborator takes the short-hand sigs provided by the user, then fills in the missing bits using some simple defaulting rules, before passing them to the type checker proper. Of course, this function actually reads both of its arguments, hence the error message.

The problem is that when the elaborator needs a new variable it just takes the next available name in the namespace, so the exact names you get depend on what other signatures the elaborator has already processed. This means that if you add a new signature to an imported module, the dummy variables in all the successive signatures change. That's fine for type checking, but it means that error messages presented to the user aren't consistent. It's become a problem because our regression tests compare error messages to what they were before, so now if you modify any of the base libraries, you have to accept a bunch of wibbles in the test suite.

The solution is to rewrite variables in each set of error messages to use consistent names like t0, t1, %r1, %r2 etc before presenting them to the user. Complicating factors are that if a variable was a "real" variable that came from the source program we don't want to rewrite it, and that generated/elaborated names should not collide with real names from the source program.
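That rewrite is simple to sketch in Haskell. This is my own illustration, not DDC's code, and isDummy's pattern for recognising machine-generated names is invented for the example: substitute canonical names in first-occurrence order, reusing the substitution for repeats and skipping any canonical name the source program already claims.

```haskell
import Data.Char (isDigit)
import qualified Data.Map as Map

-- Invented for this sketch: treat %rDE<digits> as a machine-generated name.
isDummy :: String -> Bool
isDummy ('%':'r':'D':'E':ds) = not (null ds) && all isDigit ds
isDummy _                    = False

-- Rewrite dummy names to %r0, %r1, ... in first-occurrence order.
-- Names in 'reserved' (real names from the source program) are never
-- produced as fresh names, and non-dummy names pass through untouched.
beautify :: [String] -> [String] -> [String]
beautify reserved = go Map.empty 0
  where
    go _ _ [] = []
    go subst n (v : vs)
        | not (isDummy v)               = v  : go subst n vs
        | Just v' <- Map.lookup v subst = v' : go subst n vs
        | otherwise                     = fresh : go (Map.insert v fresh subst) n' vs
        where
          (n', fresh) = nextFresh n
          nextFresh k
            | ("%r" ++ show k) `elem` reserved = nextFresh (k + 1)
            | otherwise                        = (k + 1, "%r" ++ show k)
```

So beautify [] ["%rDE111", "%rDE112", "%rDE111"] yields ["%r0", "%r1", "%r0"], and reserving "%r0" pushes the first fresh name along to "%r1".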

This is one of those PITAs you discover when you sit down and try to write a real compiler. No sexy semantics, no fancy extensions to the type system. I need to format error messages properly so my regression tests don't break after every commit... but we press onward for the glory.

For Human Consumption

Over the last year I've spent a lot of time cleaning up the internals of DDC, and I've reached a point where I'm no longer embarrassed about the implementation. It's still full of bugs and holes, but enough of the structure and design is there that others should be able to find their way around the code base without too much trouble. I could also use some help, so this blog is an attempt to publicise the project and hopefully convince some of you wonderful people that Disciple/DDC is indeed the way of the future, and that spending some time hacking on it would be a fun thing to do...

I've been working on DDC since I started my PhD back in 2004, though there were several false starts and major rewrites along the way. Back then it was called "Trauma", because all the good names like "Clean", "Hope" and "Eden" were already taken, and I wanted a name that reflected what language design / compiler development / doing a PhD was really like.

DDC as a "plan" started to congeal in 2006 when I had done enough reading and research to form a vision of where I wanted to go. The first patch in the repo is from Sunday September 9th 2007. Before then my "version control" consisted of making a tarball every couple of weeks. Prior to starting my PhD I worked as a C++ programmer for the Australian government, and my experience of ClearCase there was enough to put me off version control systems entirely until I met darcs while working on GHC at MSR in 2007.

Anyway, from "Trauma" we now have "Disciplined Disciple". A few people have asked me about the awkwardly religious sounding name, so I'll explain it here for the record. I chose this name to remind myself to spend more time reading papers than dreaming up crazy language features. Writing a compiler is a massive amount of work, and the only way I see myself ending up with a usable system is by sticking with the game plan and not getting side tracked. Disciple is supposed to be Haskell - Laziness + Effect typing + Mutability Polymorphism. The way I see it there are a huge number of ideas and libraries from the Haskell community that would really shine if we just shifted the base language a bit, and if you've seen the "Beyond Haskell" talk from HIW 2010 in Baltimore you'll know what I mean by that. I want to see the next version of Quake written ground-up in a functional language, with no cop-outs about how it's good enough just to influence the design of C#2++. I had a gut-full of pointer based, monomorphic, imperative programming in my previous life and I'm not planning on going back.

I won't live long enough to invent a whole new ecosystem or approach to writing programs myself, but if I stick to the game plan and follow the leaders wherever possible I hope that will be enough to push it through. I am the Disciplined Disciple.