<h1>Disciple Development</h1><i>Feed updated Tue, 30 Dec 2014 05:40:46 +0000.</i><br>More info at the <a href="http://disciple.ouroborus.net">Disciplined Disciple Compiler (DDC) Development Wiki</a>.<br>
Easy tickets to get started with: <a href="http://disciple.ouroborus.net/report/9">on the trac</a>.<br><br><h2>Higher order functions and interface files</h2><i>Ben Lippmeier, Tue, 30 Dec 2014</i><br><br>Work on DDC progresses, though recently it's been straightforward compiler engineering rather than anything novel, so I haven't been blogging. DDC now drops interface files when compiling modules, multi-module compilation works, and the front-end supports unicode string literals. I've also implemented a lambda lifter and support for higher order functions. This part is well baked enough to implement simple stream functions, as used in the Haskell Data.Vector library, though the compiler doesn't do the associated fusion yet. For example: <pre><br />data Stream (s a : Data) where<br /> MkStream : (s -> Step s a) -> s -> Stream s a<br /><br />data Step (s a : Data) where<br /> Yield : a -> s -> Step s a<br /> Skip : s -> Step s a <br /> Done : Step s a<br /><br />-- | Apply a function to every element of a stream.<br />smap (f : a -> b) (ss : Stream s a) : Stream s b<br /> = case ss of <br /> MkStream stepA sA0<br /> -> let stepB q = case stepA q of<br /> Yield x sA1 -> Yield (f x) sA1<br /> Skip sA2 -> Skip sA2<br /> Done -> Done<br /> in MkStream stepB sA0<br /><br />-- | Take the given number of elements from the front of a stream.<br />stake (n : Nat) (ss : Stream s a) : Stream (Tup2 s Nat) a<br /> = case ss of <br /> MkStream fA sA0 <br /> -> let stepB q = case q of<br /> T2 sA ix <br /> -> if ix >= n<br /> then Done<br /> else case fA sA of<br /> Yield x sA2 -> Yield x (T2 sA2 (ix + 1))<br /> Skip sA3 -> Skip (T2 sA3 ix)<br /> Done -> Done<br /> in MkStream stepB (T2 sA0 0)<br /></pre> As we can see, work on larger demos is starting to be hampered by the lack of pattern matching sugar in the source language, so I'm 
going to add that next. After adding pattern matching, I'm going to spend a couple of weeks bug fixing, and should get the DDC 0.5 release out in early February 2015.<br><i>(Permalink: http://disciple-devel.blogspot.com/2014/12/higher-order-functions-and-interface.html)</i><br><br><h2>Civilized error messages</h2><i>Ben Lippmeier, Sun, 02 Mar 2014</i><br><br>I spent the weekend fixing bugs in preparation for the upcoming 0.4 release. I've finished all the necessary compiler hacking, but still need to update the cabal files, tutorial, and wiki before releasing it proper. I've made sure to fix all the bugs that would result in a poor user experience for people who just want to spend an afternoon kicking the tyres. Even though DDC still has lots of missing features (garbage collection, compilation for higher order functions etc), the stuff that is implemented is reasonably well baked. If one tries to do something unsupported it should at least give a civilized error message. <br><br>For example: <br><br>Trying to use an anonymous type function (higher order functions are not supported yet): <pre><br />module Test with letrec<br /> id [a : Data] (x : a) : a <br /> = x<br /><br /> foo (_ : Unit) : Unit<br /> = do id (/\a. \(x : a). x)<br /> ()<br /><br />Fragment violation when converting Tetra module to Salt module.<br /> Cannot convert expression.<br /> Cannot convert type abstraction in this context.<br /> The program must be lambda-lifted before conversion.<br /> <br /> with: /\(a : Data). \(x : a). 
x<br /></pre> <br><br>Trying to use higher kinded type arguments (which need an 'Any' region to implement safely, in general): <pre><br />module Test <br />data List (a : Data) where<br /> Nil : List a<br /> Cons : a -> List a -> List a<br />with letrec<br /> foo [a : Data ~> Data] [b : Data] (x : b) : b<br /> = x<br /><br /> bar (_ : Unit) : Nat#<br /> = foo [List] [Nat#] 5#<br /> <br />Cannot convert expression.<br /> Unsupported type argument to function or constructor.<br /> In particular, we don't yet handle higher kinded type arguments.<br /> <br /> See [Note: Salt conversion for higher kinded type arguments] in<br /> the implementation of the Tetra to Salt conversion.<br /> <br /> with: [List]<br /></pre> <br><br>Trying to partially apply a primitive operator: <pre><br />module Test with letrec<br /> thing (_ : Unit) : Nat# -> Nat#<br /> = add# [Nat#] 5#<br /><br />Fragment violation when converting Tetra module to Salt module.<br /> Cannot convert expression.<br /> Partial application of primitive operators is not supported.<br /> <br /> with: add# [Nat#] 5#<br /></pre> <br> In contrast, the old ddc-alpha compiler (which I wrote for my PhD work) would signal its displeasure with partially applied primops by producing a program that segfaulted at runtime. We turn our backs on the bad old days. <br><br><h3>Nested Data Types</h3>Pleasingly, the new type inferencer does seem to work with some non-standard programs -- even though these don't compile all the way to object code yet. Here is a Core Tetra program using nested data types: <pre><br />module Test<br /> data Tuple2 (a b : Data) where<br /> T2 : a -> b -> Tuple2 a b<br /><br /> data Nest (a : Data) where<br /> NilN : Nest a<br /> ConsN : a -> Nest (Tuple2 a a) -> Nest a<br /><br />with letrec<br /> thing (_ : Unit)<br /> = ConsN 7# (ConsN (T2 1# 2#) (ConsN (T2 (T2 6# 7#) (T2 7# 4#)) NilN))<br /></pre> This example is based on one from the 1998 paper "Nested Datatypes" by Bird and Meertens. 
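The same nested data type can be written directly in Haskell; the catch is that any function which walks its spine needs polymorphic recursion, so its type signature is mandatory. A sketch (my Haskell rendering, not DDC output):

```haskell
-- The Nest type from the Core Tetra example above, in Haskell.
-- ConsN's second argument is a Nest of pairs, so the element type
-- grows at every level: a nested (non-regular) data type.
data Nest a = NilN | ConsN a (Nest (a, a))

-- The recursive call is at type Nest (a, a), not Nest a, so this
-- needs polymorphic recursion; without the signature, plain
-- Hindley-Milner inference cannot handle the definition.
lenN :: Nest a -> Int
lenN NilN         = 0
lenN (ConsN _ ns) = 1 + lenN ns

-- The same value as 'thing' in the example, with Int standing in for Nat#.
example :: Nest Int
example = ConsN 7 (ConsN (1, 2) (ConsN ((6, 7), (7, 4)) NilN))

main :: IO ()
main = print (lenN example)   -- prints 3
```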
Note that the second argument of the ConsN constructor takes a Nest where the element type is more complex than the original parameter. The type inference algorithm in the alpha compiler would have diverged on this program. <br><br><h3>Higher Rank Types</h3>I've also tried out some simple examples with higher ranked types; here is one: <pre><br />module Test with letrec<br /> thing1 (blerk : ([a : Data]. a -> a) -> Nat#) : Nat#<br /> = blerk (/\a. \(x : a). x)<br /><br /> thing2 (u : Unit)<br /> = thing1 (\(f : [a : Data]. a -> a). f 5#)<br /></pre> thing1 has a rank-3 type because it is a function that takes a function that takes a polymorphic function. There is a quantifier that appears at depth 3 contravariantly. Writing the type of thing1 in full makes this easier to see: <pre><br /> thing1 :: (([a : Data]. a -> a) -> Nat#) -> Nat#<br /></pre> Again, I can't compile this example to object code yet because code generation for higher order functions isn't finished. However, type inference is a separate concern, and I don't know of any remaining problems in this part.<br><i>(Permalink: http://disciple-devel.blogspot.com/2014/03/civilized-error-messages.html)</i><br><br><h2>Bidirectional type inference for DDC</h2><i>Ben Lippmeier, Tue, 11 Feb 2014</i><br><br>DDC now includes a bidirectional type inferencer based on Joshua Dunfield and Neelakantan Krishnaswami's ICFP 2013 paper "Complete and Easy Bidirectional Typechecking for Higher-Rank Polymorphism". I've extended the base algorithm to infer the kinds of type parameters, as well as to automatically insert type abstractions and applications into the result. <br><h3>Type inference for Disciple Source Tetra</h3>With both the type inferencer and new coeffect system now working, Disciple source programs are looking significantly less offensive. 
For example, some simple list functions: <pre><br />data List (a : Data) where<br /> Nil : List a<br /> Cons : a -> List a -> List a<br /><br /><br />singleton (x : a) : List a<br /> = Cons x Nil<br /><br /><br />append (xx : List a) (yy : List a) : List a<br /> = case xx of<br /> Nil -> yy<br /> Cons x xs -> Cons x (append xs yy)<br /><br /><br />mapS [a b : Data] [e : Effect]<br /> (f : a -> S e b) (xx : List a) : S e (List b)<br /> = box case xx of<br /> Nil -> Nil<br /> Cons x xs -> Cons (run f x) (run mapS f xs)<br /></pre> etc. etc. The above 'mapS' function is the version using the coeffect system I described in my last post. Using effects currently requires the user to add explicit 'box' and 'run' casts, though these will be automatically inserted in the next DDC release. <br><br>The DDC command-line interface allows one to apply the type inferencer to such a source file, producing a core file with explicit type annotations, like so: <pre><br />$ bin/ddc -infer -load demo/Tetra/Lists/Lists.dst<br />...<br />module Lists <br />data List (a : Data) where {<br /> Nil : List a;<br /> Cons : a -> List a -> List a;<br />}<br />with<br />letrec {<br /> singleton : [a : Data].a -> List a<br /> = /\(a : Data).<br /> \(x : a).<br /> Cons [a] x (Nil [a]);<br /> <br /> append : [a : Data].List a -> List a -> List a<br /> = /\(a : Data).<br /> \(xx yy : List a).<br /> case xx of {<br /> Nil <br /> -> yy;<br /> Cons (x : a) (xs : List a) <br /> -> Cons [a] x (append [a] xs yy)<br /> };<br /><br /> mapS : [a b : Data].[e : Effect].(a -> S e b) -> List a -> S e (List b)<br /> = /\(a b : Data)./\(e : Effect).<br /> \(f : a -> S e b).\(xx : List a).<br /> box<br /> case xx of {<br /> Nil <br /> -> Nil [b];<br /> Cons (x : a) (xs : List a) <br /> -> Cons [b]<br /> (run f x)<br /> (run mapS [a] [b] [e] f xs)<br /> }<br />}<br />...<br /></pre> Such a file can then be converted to C or LLVM for compilation to object code. 
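For comparison, here is a rough Haskell rendering of the same functions (my sketch, not part of the post). The effect parameter 'e' of the Disciple mapS becomes the choice of monad 'm', and the box/run casts correspond to return and bind:

```haskell
-- Haskell analogues of the list functions above.
data List a = Nil | Cons a (List a)

singleton :: a -> List a
singleton x = Cons x Nil

append :: List a -> List a -> List a
append Nil         yy = yy
append (Cons x xs) yy = Cons x (append xs yy)

-- The closest Haskell counterpart of mapS: effect-polymorphic map,
-- with the effect tracked by the monad rather than by an annotation
-- on the function arrow.
mapS :: Monad m => (a -> m b) -> List a -> m (List b)
mapS _ Nil         = return Nil
mapS f (Cons x xs) = do
  y  <- f x
  ys <- mapS f xs
  return (Cons y ys)

-- Helper to observe results.
toList :: List a -> [a]
toList Nil         = []
toList (Cons x xs) = x : toList xs

main :: IO ()
main = do
  r <- mapS (\x -> return (x + 1)) (append (singleton 1) (singleton 2))
  print (toList r)   -- prints [2,3]
```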
In the current implementation higher-order programs will type-check, but cannot be compiled all the way to object code. I need to finish the runtime support for that. <br><h3>Type inference for Disciple Core Salt</h3> In practical terms, work on the runtime system is already being helped by the new type inferencer. The DDC runtime is written in a first-order imperative fragment named 'Disciple Core Salt', which is compiled with DDC itself. Here is the code that allocates a boxed heap object on a 64-bit platform: <pre><br />allocBoxed <br /> [r : Region] (tag : Tag#) (arity : Nat#) : Ptr# r Obj<br /> = do <br /> -- Multiply arity by 8 bytes-per-pointer to get the size of the payload.<br /> bytesPayload = shl# arity (size2# [Addr#])<br /> bytesObj = add# (size# [Word32#])<br /> (add# (size# [Word32#]) bytesPayload)<br /><br /> -- Check there is enough space on the heap.<br /> case check# bytesObj of<br /> True# -> allocBoxed_ok tag arity bytesObj<br /> False# -> fail#<br /><br />allocBoxed_ok<br /> [r : Region] (tag : Tag#) (arity : Nat#) (bytesObj : Nat#) : Ptr# r Obj<br /> = do <br /> addr = alloc# bytesObj<br /><br /> tag32 = promote# [Word32#] [Tag#] tag<br /> format = 0b00100001w32#<br /> header = bor# (shl# tag32 8w32#) format<br /> write# addr 0# header<br /><br /> -- Truncate arity to 32-bits.<br /> arity32 = truncate# [Word32#] [Nat#] arity<br /> write# addr 4# arity32<br /><br /> makePtr# addr<br /></pre> Some type annotations are still required to specify data formats and the like, but all the day-to-day annotations can be inferred. 
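As a cross-check of the header layout that allocBoxed builds, a few lines of Haskell reproduce the arithmetic (my sketch; the field layout is read off the code above). The format byte 0b00100001 is decimal 33, which is why it appears as 33w32# once the literal has been through the compiler:

```haskell
import Data.Bits (shiftL, (.|.))
import Data.Word (Word32)

-- Mirror of the header construction in allocBoxed_ok:
-- header = (tag << 8) | format.
formatBoxed :: Word32
formatBoxed = 0x21   -- 0b00100001 = 33

makeHeader :: Word32 -> Word32
makeHeader tag32 = (tag32 `shiftL` 8) .|. formatBoxed

main :: IO ()
main = print (makeHeader 5)   -- prints 1313, i.e. 0x521
```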
<pre><br />$ bin/ddc -infer -load packages/ddc-code/salt/runtime64/Object.dcs<br />[slightly reformatted]<br />...<br />allocBoxed : [r : Region].Tag# -> Nat# -> Ptr# r Obj<br /> = /\(r : Region).<br /> \(tag : Tag#).\(arity : Nat#).<br /> let bytesPayload : Nat# = shl# [Nat#] arity (size2# [Addr#]) in<br /> let bytesObj : Nat# = add# [Nat#] (size# [Word32#])<br /> (add# [Nat#] (size# [Word32#]) bytesPayload) in<br /> case check# bytesObj of {<br /> True# -> allocBoxed_ok [r] tag arity bytesObj;<br /> False# -> fail# [Ptr# r Obj]<br /> };<br /> <br />allocBoxed_ok : [r : Region].Tag# -> Nat# -> Nat# -> Ptr# r Obj<br /> = /\(r : Region).<br /> \(tag : Tag#).\(arity bytesObj : Nat#).<br /> let addr : Addr# = alloc# bytesObj in<br /> let tag32 : Word32# = promote# [Word32#] [Tag#] tag in<br /> let format : Word32# = 33w32# in<br /> let header : Word32# = bor# [Word32#] (shl# [Word32#] tag32 8w32#) format in<br /> let _ : Void# = write# [Word32#] addr 0# header in<br /> let arity32 : Word32# = truncate# [Word32#] [Nat#] arity in<br /> let _ : Void# = write# [Word32#] addr 4# arity32 in<br /> makePtr# [r] [Obj] addr;<br />...<br /></pre> <br><h3>Upcoming 0.4 Release</h3>There are still some bugs to fix before I can enable type inference by default, but when that is done I will release DDC 0.4, which I'm hoping to get done by the end of the month.<br><i>(Permalink: http://disciple-devel.blogspot.com/2014/02/bidirectional-type-inference-for-ddc.html)</i><br><br><h2>Capabilities and Coeffects</h2><i>Ben Lippmeier, Sun, 08 Dec 2013</i><br><br>Over the past couple of months I've had "a moment of clarity" regarding some problems with Disciple and DDC that have been bothering me for the last six years or so. Itemized, they include: <br /><ul><li>In the source language, the fact that every function type carries an effect and closure annotation upsets Haskell programmers. 
They want a function of type (a -> b) to be just a pure function. In Disciple, the function type constructor actually has two other parameters. We write (a -(e|c)> b) where (a) and (b) are the argument and return types as usual, (e) is the effect type of the function and (c) is the closure type. The effect type describes the side effects that the function might have, and the closure type describes what data might be shared via the function closure. Although DDC provides enough type inference that you don't need to write the effect and closure type if you don't want to, this still isn't cool with some people (for understandable reasons).<br/><br/> <li>The fact that the function type has an added effect and closure parameter means that it has a different kind relative to the Haskell version. We've got (->) :: Data -> Data -> Effect -> Closure -> Data, where 'Data' is the (*) kind from Haskell. This means that Haskell programs which are sensitive to the kind of (->) can't be written directly in Disciple.<br/><br/> <li>Even though Disciple has mutability polymorphism, there isn't a safe way to destructively initialize a mutable structure and then freeze it into an immutable one. In Haskell, the client programmer would use a function like (unsafeFreezeArray :: MutableArray a -> IO (Array a)) at their own peril, but you'd think in a language with mutability polymorphism we'd be able to do better than this. </ul> <h3>Run and Box</h3> As you might have hoped, all these problems are now solved in the head version of DDC (henceforth known as "new DDC"), and you can expect a new release sometime in January when it's properly baked. I've got the typing rules worked out and partially implemented, but the modified core language doesn't yet compile all the way to object code. Here is an example in the core language: <pre><br />/\(r : Region). 
\(ref : Ref# r Nat#).<br /> box do <br /> { x = run readRef# [r] [Nat#] ref<br /> ; run writeRef# [r] [Nat#] ref (add# [Nat#] x x) }<br /></pre> which has type <pre><br /> :: [r : Region].Ref# r Nat# -> S (Read r + Write r) Unit<br /></pre> The [r : Region] in the type signature means "forall", while [..] in the expression itself indicates a type argument. Things ending in # are primitives. The readRef# and writeRef# functions have the following types: <pre><br />readRef# :: [r : Region]. [a : Data]. Ref# r a -> S (Read r) a<br />writeRef# :: [r : Region]. [a : Data]. Ref# r a -> a -> S (Write r) Unit<br /></pre> A type like (S e a) describes a suspended computation with effect (e) and result type (a). The S is a monadic type constructor, though in new DDC it is baked into the type system and not defined directly in the source language. In the above program the 'run' keyword takes a computation type and "runs it" (yeah!), which will unleash the associated effects. The 'box' keyword takes an expression that has some effects and then packages it up into a suspended computation. Critically, the type checker only lets you abstract over pure expressions. This means that if your expression has a side effect then you must 'box' it when using it as a function body. This restriction ensures that all side effect annotations are attached to 'S' computation types, and the functions themselves are pure. The idea of using 'run' and 'box' comes from Andrzej Filinski's "monadic reification and reflection" operators, as well as the paper "A judgmental reconstruction of modal logic" by Frank Pfenning and Rowan Davies, both of which I wish I had known about a long time ago. Note that the mechanism that <i>combines</i> the two atomic effect types (Read r) and (Write r) into the compound effect (Read r + Write r) is part of the ambient type system. You don't need to hack up effect composition in the source language by using type classes or some such. 
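As a point of comparison, here is a toy Haskell rendering of S, run and box, with the effect tracked by a phantom type parameter (entirely my sketch; the "heap" is a single Int cell, so a computation is just a state transformer on it). The crude Union family stands in for DDC's (e1 + e2); a serious encoding needs type-level sets, which is exactly the hackery the ambient DDC type system makes unnecessary:

```haskell
{-# LANGUAGE DataKinds, KindSignatures, TypeFamilies #-}

-- Phantom effect labels, in the style of (Read r) and (Write r).
data Effect = ReadE | WriteE | Pure

-- A crude stand-in for effect union (e1 + e2); mixed effects are
-- collapsed, which a real encoding must not do.
type family Union (a :: Effect) (b :: Effect) :: Effect where
  Union 'Pure b = b
  Union a 'Pure = a
  Union a b     = 'WriteE

-- S e a: a suspended computation with (phantom) effect e;
-- Box and runS play the roles of 'box' and 'run'.
newtype S (e :: Effect) a = Box { runS :: Int -> (a, Int) }

readRefS :: S 'ReadE Int
readRefS = Box (\s -> (s, s))

writeRefS :: Int -> S 'WriteE ()
writeRefS x = Box (\_ -> ((), x))

-- The doubling example from the post: read the cell, write back x + x.
double :: S (Union 'ReadE 'WriteE) ()
double = Box $ \s0 ->
  let (x, s1)  = runS readRefS s0
      ((), s2) = runS (writeRefS (x + x)) s1
  in  ((), s2)

main :: IO ()
main = print (snd (runS double 21))   -- prints 42
```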
In summary, new DDC gives you composable effect indexed state monads, with none of the pain of existing Haskell encodings. <h3>Type safe freezing</h3> The following example demonstrates type safe freezing. <pre><br />/\(r1 : Region). \(x : Unit). box<br />extend r1 using r2 with {Alloc r2; Write r2} in<br />do { x = run allocRef# [r2] [Nat#] 0#;<br /> run writeRef# [r2] [Nat#] x 5#;<br /> x;<br />}<br /></pre> This code creates a new region 'r2' which extends the existing region 'r1', and says that the new one supports allocation and writing. It then allocates a new reference, and destructively updates it before returning it. The overall type of the above expression is: <pre><br /> :: [r1 : Region].Unit -> S (Alloc r1) (Ref# r1 Nat#)<br /></pre> Note that the returned reference is in region 'r1' and NOT region 'r2'. When leaving the body of 'extend', all objects in the inner region (r2) are merged into the outer region (r1), so the returned reference here is in (r1). Also, as far as the caller is concerned, the visible effect of the body is that a new object is allocated into region r1. The fact that the function internally allocates and writes to r2 is not visible. Importantly, we allow objects in the inner region (r2) to be destructively updated, independent of whether it is possible to destructively update objects in the outer region (r1). Once the body of the 'extend' has finished, all writes to the inner region are <i>done</i>, so it's fine to merge freshly initialized objects into the outer region, and treat them as constant from then on. <h3>Coeffects</h3> Of the previous example one might ask: what would happen if we also returned a function that can destructively update objects in region 'r2', even after the 'extend' has finished? Here we go: <pre><br />/\(r1 : Region). \(x : Unit). box<br />extend r1 using r2 with {Alloc r2; Write r2} in<br />do { x = run allocRef# [r2] [Nat#] 0#;<br /> f = \(_ : Unit). 
writeRef# [r2] [Nat#] x 5#;<br /> T2# [Ref# r2 Nat#] [Unit -> S (Write r2) Unit] x f;<br />}<br /></pre> this has type: <pre><br /> :: [r1 : Region]<br /> . Unit <br /> -> S (Alloc r1) <br /> (Tuple2# (Ref# r1 Nat#) <br /> (Unit -> S (Write r1) Unit))<br /></pre> Here, 'T2#' is the data constructor for 'Tuple2#'. The result tuple contains a reference in region r1, as well as a function that produces a computation that, when run, will destructively update that reference. The worry is that if the calling context wants to treat the returned reference as constant, then we can't allow the computation that updates it to ever be executed.<br /> <br />The solution is to say that (Write r1) is NOT an effect type -- it's a coeffect type! Whereas an effect type is a description of the <i>action</i> a computation may have on its calling context when executed, a coeffect type is a description of what <i>capabilities</i> the context must support to be able to execute that computation. It's a subtle but important difference. A boxed computation with an <i>effect</i> can be executed at any time, and the computation does whatever it says on the box. In contrast, a boxed computation with a <i>coeffect</i> can only be executed when the bearer has the capabilities listed on the box. To put this another way, the <i>effect type</i> (Write r) says that the computation might write to objects in region 'r' when executed, but the <i>coeffect type</i> (Write r) says that the context must support writing to region 'r' for the computation to be executed. There is also a duality: "the context of a computation must support writing to region 'r', because when we execute that computation it might write to region 'r'". The most important thing, though, is to CHECK that the context supports some (co)effect before we run a computation with that (co)effect. 
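For reference, Haskell's nearest existing counterpart to this freezing story is the ST monad: runST acts as a region boundary, and its rank-2 type is what stops mutable state escaping. Unlike the DDC example above, though, there is no way to hand back a still-mutable reference alongside an immutable view; escape is simply forbidden. A small sketch of that baseline:

```haskell
import Control.Monad.ST
import Data.STRef

-- Destructively initialize inside the "region" s, then freeze by
-- leaving runST: the rank-2 type of runST guarantees that neither
-- the STRef nor any computation that writes to it can escape.
buildFrozen :: Int -> Int
buildFrozen n = runST $ do
  ref <- newSTRef 0          -- allocate in region s
  writeSTRef ref n           -- destructive initialization
  modifySTRef ref (+ 1)
  readSTRef ref              -- only the pure value escapes

main :: IO ()
main = print (buildFrozen 41)   -- prints 42
```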
<br/> <br />When we take the view of coeffects instead of effects (or as well as effects), the problem of type safe freezing becomes straightforward. If there is no (Write r1) capability in the calling context, then the type system prevents us from executing computations which require that capability. Here is a complete example: <pre><br />let thingFromBefore<br /> = /\(r1 : Region). \(x : Unit). box<br /> extend r1 using r2 with {Alloc r2; Write r2} in<br /> do { x = run allocRef# [r2] [Nat#] 0#; <br /> f = \(_ : Unit). writeRef# [r2] [Nat#] x 5#;<br /> T2# [Ref# r2 Nat#] [Unit -> S (Write r2) Unit] x f;<br /> }<br />in private r with {Alloc r; Read r} in<br /> case run thingFromBefore [r] () of<br /> { T2# r f -> run f () }<br /></pre> In the above example, 'private' introduces a new region that is not an extension of an existing one. In the case-expression we unpack the tuple and get a computation that wants to write to an object in region 'r'. Because we have not given 'r' a (Write r) capability, the type system stops us from running the computation that needs it: <pre><br /> Effect of computation not supported by context.<br /> Effect: Write r<br /> with: run f ()<br /></pre> Effect typing is old news; coeffect typing is where it's at.<br><i>(Permalink: http://disciple-devel.blogspot.com/2013/12/capabilities-and-coeffects.html)</i><br><br><h2>First working program using the Repa plugin</h2><i>Ben Lippmeier, Fri, 03 May 2013</i><br><br><h3>Haskell Source</h3><pre><br />repa_process :: R.Stream k Int -> Int<br />repa_process s<br /> = R.fold (+) 0 s + R.fold (*) 1 s<br /></pre><br> <h3>Actual well formed GHC Core code</h3>... or at least the parts that get printed with -dsuppress-all. 
<pre><br />repa_process<br />repa_process =<br /> \ @ k_aq6 arg_s2Rf -><br /> let { (# x1_s2R6, x1_acc_s2R5 #) ~ _ <- newByteArray# 8 realWorld# } in<br /> let { __DEFAULT ~ x2_s2R8 <- writeIntArray# x1_acc_s2R5 0 0 x1_s2R6 } in<br /> let { (# x3_s2Rd, x3_acc_s2Rc #) ~ _ <- newByteArray# 8 x2_s2R8 } in<br /> let { __DEFAULT ~ x4_s2RS <- writeIntArray# x3_acc_s2Rc 0 1 x3_s2Rd } in<br /> let { Stream ds1_s2Rs ds2_s2Rj ~ _ <- arg_s2Rf } in<br /> let { Vector rb_s2Rz _ rb2_s2Ry ~ _ <- ds2_s2Rj `cast` ... } in<br /> letrec {<br /> go_s2RP<br /> go_s2RP =<br /> \ ix_s2Rr world_s2Ru -><br /> case >=# ix_s2Rr ds1_s2Rs of _ {<br /> False -><br /> let { (# x7_s2RF, x0_s2RC #) ~ _ <- readIntArray# x1_acc_s2R5 0 world_s2Ru } in<br /> let { __DEFAULT ~ sat_s2SV <- +# rb_s2Rz ix_s2Rr } in<br /> let { __DEFAULT ~ wild3_s2RD <- indexIntArray# rb2_s2Ry sat_s2SV } in<br /> let { __DEFAULT ~ sat_s2SU <- +# x0_s2RC wild3_s2RD } in<br /> let { __DEFAULT ~ x8_s2RH <- writeIntArray# x1_acc_s2R5 0 sat_s2SU x7_s2RF } in<br /> let { (# x9_s2RN, x1_s2RL #) ~ _ <- readIntArray# x3_acc_s2Rc 0 x8_s2RH } in<br /> let { __DEFAULT ~ sat_s2ST <- *# x1_s2RL wild3_s2RD } in<br /> let { __DEFAULT ~ x10_s2RR <- writeIntArray# x3_acc_s2Rc 0 sat_s2ST x9_s2RN } in<br /> let { __DEFAULT ~ sat_s2SS <- +# ix_s2Rr 1 } in<br /> go_s2RP sat_s2SS x10_s2RR;<br /> True -> world_s2Ru<br /> }; } in<br /> let { __DEFAULT ~ x11_s2RU <- go_s2RP 0 x4_s2RS } in<br /> let { (# x12_s2RY, x1_s2S2 #) ~ _ <- readIntArray# x1_acc_s2R5 0 x11_s2RU } in<br /> let { (# _, x2_s2S3 #) ~ _ <- readIntArray# x3_acc_s2Rc 0 x12_s2RY } in<br /> let { __DEFAULT ~ sat_s2SW <- +# x1_s2S2 x2_s2S3 } in I# sat_s2SW<br /></pre> I'm using ByteArrays to fake imperative variables. The next job is to convert the arrays x1_acc_s2R5 and x3_acc_s2Rc into proper accumulation variables, and attach them to the go_s2RP loop. <br><br> <h3>x86 assembly code</h3>This is just for the inner loop. 
<pre><br />LBB11_2:<br /> movq (%rcx), %rdi<br /> addq %rdi, 16(%rdx)<br /> imulq 16(%rsi), %rdi<br /> movq %rdi, 16(%rsi)<br /> addq $8, %rcx<br /> incq %r14<br /> cmpq %r14, %rax<br /> jg LBB11_2<br /></pre> The extra memory operations are there because the LLVM optimiser doesn't know that the two ByteArrays I'm using to fake imperative variables don't alias. This should turn into optimal code once it uses pure accumulators.<br><i>(Permalink: http://disciple-devel.blogspot.com/2013/05/first-working-program-using-repa-plugin.html)</i><br><br><h2>Array fusion using the lowering transform</h2><i>Ben Lippmeier, Wed, 01 May 2013</i><br><br><div>I'm getting back into blogging. This is current work in progress.</div><div><br /></div><div>Repa 4 will include a GHC plugin that performs array fusion using a version of Richard Waters's series expressions system, extended to support the segmented operators we need for Data Parallel Haskell.</div><div><br /></div><div>The plugin converts GHC core code to Disciple Core, performs the fusion transform, and then converts back to GHC core. We're using Disciple Core for the main transform because it has a simple (and working) external core format, and the core language AST along with most of the core-to-core transforms are parametric in the type used for names.</div><div><br /></div><div>This second feature is important because we use a version of Disciple Core where the main array combinators (map, fold, filter etc) are primitive operators rather than regular function bindings.</div><div><br /></div><div>The fusion transform is almost (almost) working for some simple examples. 
Here is the compilation sequence:</div><div><br /></div><div><br /></div><b>Haskell Source</b><br /><div><b><br /></b></div><div><div><span style="font-family: Courier New, Courier, monospace;">process :: Stream k Int -> Int</span></div><div><span style="font-family: Courier New, Courier, monospace;">process s</span></div><div><span style="font-family: Courier New, Courier, monospace;"> = fold (+) 0 s + fold (*) 1 s</span></div><div><br /></div></div><div>In this example we're performing two separate reductions over the same stream. None of the existing short-cut fusion approaches can handle this properly. Stream fusion in Data.Vector, Repa-style delayed array fusion, and build/foldr fusion will all read the stream elements from memory twice (if they are already manifest) or compute them twice (if they are delayed). We want to compute both reductions in a single pass.</div><div><br /></div><div><br /></div><b>Raw GHC core converted to DDC core</b><br /><div><br /></div><div><div><span style="font-family: Courier New, Courier, monospace;">repa_process_r2 : [k_aq0 : Data].Stream_r0 k_aq0 Int_3J -> Int_3J</span></div><div><span style="font-family: Courier New, Courier, monospace;"> = \(k_c : *_34d).\(s_aqe : Stream_r0 k_c Int_3J).</span></div><div><span style="font-family: Courier New, Courier, monospace;"> $fNumInt_$c+_rjF</span></div><div><span style="font-family: Courier New, Courier, monospace;"> (fold_r34 [k_c] [Int_3J] [Int_3J] $fNumInt_$c+_rjF </span></div><div><span style="font-family: Courier New, Courier, monospace;"> (I#_6d 0i#) s_aqe)</span></div><div><span style="font-family: Courier New, Courier, monospace;"> (fold_r34 [k_c] [Int_3J] [Int_3J] $fNumInt_$c*_rjE </span></div><div><span style="font-family: Courier New, Courier, monospace;"> (I#_6d 1i#) s_aqe)</span></div></div><div><br /></div><div><br /></div><div><b>Detect array combinators from GHC core, and convert to DDC primops</b></div><div><br /></div><div><div><span style="font-family: Courier New, 
Courier, monospace;">repa_process_r2 : [k_aq0 : Rate].Stream# k_aq0 Int# -> Int#</span></div><div><span style="font-family: Courier New, Courier, monospace;"> = /\(k_c : Rate).</span></div><div><span style="font-family: Courier New, Courier, monospace;"> \(s_aqe : Stream# k_c Int#).</span></div><div><span style="font-family: Courier New, Courier, monospace;"> add# [Int#]</span></div><div><span style="font-family: Courier New, Courier, monospace;"> (fold# [k_c] [Int#] [Int#] (add# [Int#]) 0i# s_aqe)</span></div><div><span style="font-family: Courier New, Courier, monospace;"> (fold# [k_c] [Int#] [Int#] (mul# [Int#]) 1i# s_aqe)</span></div></div><div><br /></div><div><br /></div><div><b>Normalize and shift array combinators to top-level</b></div><div>All array combinators are used in their own binding.</div><div><br /></div><div><div><span style="font-family: Courier New, Courier, monospace;">repa_process_r2 : [k_aq0 : Rate].Stream# k_aq0 Int# -> Int#</span></div><div><span style="font-family: Courier New, Courier, monospace;"> = /\(k_c : Rate).</span></div><div><span style="font-family: Courier New, Courier, monospace;"> \(s_aqe : Stream# k_c Int#).</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let x0 = add# [Int#] in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let x1 = fold# [k_c] [Int#] [Int#] x0 0i# s_aqe in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let x2 = mul# [Int#] in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let x3 = fold# [k_c] [Int#] [Int#] x2 1i# s_aqe in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> add# [Int#] x1 x3</span></div></div><div><br /></div><div><br /></div><div><b>Inline and eta-expand worker functions</b></div><div>This puts the program in the correct form for the next phase.</div><div><div><br /></div><div><span style="font-family: Courier New, Courier, 
monospace;">repa_process_r2 : [k_aq0 : Rate].Stream# k_aq0 Int# -> Int#</span></div><div><span style="font-family: Courier New, Courier, monospace;"> = /\(k_c : Rate).</span></div><div><span style="font-family: Courier New, Courier, monospace;"> \(s_aqe : Stream# k_c Int#).</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let x1</span></div><div><span style="font-family: Courier New, Courier, monospace;"> = fold# [k_c] [Int#] [Int#]</span></div><div><span style="font-family: Courier New, Courier, monospace;"> (\(x0 x1 : Int#). add# [Int#] x0 x1) 0i# s_aqe in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let x3</span></div><div><span style="font-family: Courier New, Courier, monospace;"> = fold# [k_c] [Int#] [Int#]</span></div><div><span style="font-family: Courier New, Courier, monospace;"> (\(x2 x3 : Int#). mul# [Int#] x2 x3) 1i# s_aqe in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> add# [Int#] x1 x3</span></div></div><div><br /></div><div><br /></div><div><b>Do the lowering transform</b></div><div>This is the main pass that performs array fusion. 
Note that we've introduced a single loop# that computes both of the fold# results.</div><div><b><br /></b></div><div><div><span style="font-family: Courier New, Courier, monospace;">repa_process_r2 : [k_c : Rate].Stream# k_c Int# -> Int#</span></div><div><span style="font-family: Courier New, Courier, monospace;"> = /\(k_c : Rate).</span></div><div><span style="font-family: Courier New, Courier, monospace;"> \(s_aqe : Stream# k_c Int#).</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let x1_acc : Ref# Int# = new# [Int#] 0i# in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let x3_acc : Ref# Int# = new# [Int#] 1i# in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let _ : Unit</span></div><div><span style="font-family: Courier New, Courier, monospace;"> = loop# (lengthOfRate# [k_c])</span></div><div><span style="font-family: Courier New, Courier, monospace;"> (\(x0 : Nat#).</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let x1 : Int# = next# [Int#] [k_c] s_aqe x0 in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let x0 : Int# = read# [Int#] x1_acc in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let _ : Void#</span></div><div><span style="font-family: Courier New, Courier, monospace;"> = write# [Int#] x1_acc (add# [Int#] x0 x1) in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let x2 : Int# = read# [Int#] x3_acc in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let _ : Void#</span></div><div><span style="font-family: Courier New, Courier, monospace;"> = write# [Int#] x3_acc (mul# [Int#] x2 x1) in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> ()) in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let x1 : Int# = read# [Int#] x1_acc in</span></div><div><span 
style="font-family: Courier New, Courier, monospace;"> let x3 : Int# = read# [Int#] x3_acc in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> add# [Int#] x1 x3</span></div></div><div><br /></div><div><br /></div><div><b>Assign imperative variable storage to arrays</b></div><div><div>We need to convert the code back to GHC core, but we don't want to use IORefs because they can't hold unboxed values (of types like Int#). Instead, we use some new arrays to hold these values.</div><div><br /></div><div><span style="font-family: Courier New, Courier, monospace;">repa_process_r2 : [k_c : Rate].Stream# k_c Int# -> Int#</span></div><div><span style="font-family: Courier New, Courier, monospace;"> = /\(k_c : Rate).</span></div><div><span style="font-family: Courier New, Courier, monospace;"> \(s_aqe : Stream# k_c Int#).</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let x1_acc : Array# Int# = newArray# [Int#] 8# in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let _ : Void# = writeArray# [Int#] x1_acc 0# 0i# in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let x3_acc : Array# Int# = newArray# [Int#] 8# in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let _ : Void# = writeArray# [Int#] x3_acc 0# 1i# in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let _ : Unit</span></div><div><span style="font-family: Courier New, Courier, monospace;"> = loop# (lengthOfRate# [k_c])</span></div><div><span style="font-family: Courier New, Courier, monospace;"> (\(x0 : Nat#).</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let x1 : Int# = next# [Int#] [k_c] s_aqe x0 in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let x0 : Int# = readArray# [Int#] x1_acc 0# in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> 
let _ : Void#</span></div><div><span style="font-family: Courier New, Courier, monospace;"> = writeArray# [Int#] x1_acc 0# </span></div><div><span style="font-family: Courier New, Courier, monospace;"> (add# [Int#] x0 x1) in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let x2 : Int# = readArray# [Int#] x3_acc 0# in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let _ : Void#</span></div><div><span style="font-family: Courier New, Courier, monospace;"> = writeArray# [Int#] x3_acc 0# </span></div><div><span style="font-family: Courier New, Courier, monospace;"> (mul# [Int#] x2 x1) in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> ()) in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let x1 : Int# = readArray# [Int#] x1_acc 0# in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let x3 : Int# = readArray# [Int#] x3_acc 0# in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> add# [Int#] x1 x3</span></div></div><div><br /></div><div><br /></div><div><b>Thread state token through effectful primops</b></div><div>The lowered code is naturally imperative, and GHC uses state threading to represent this. 
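Before reading the state-threaded core code, it may help to see the idea in miniature. The following is a toy Python model of state threading (purely illustrative; these function names are my own invention, not DDC or GHC primops): every effectful operation takes the current world token and returns a successor token along with its result, so the chain of token dependencies fixes the order of the effects.

```python
# Toy model of state threading: each "primop" takes the current world
# token and returns (new_world, result). The world is just a counter
# here; only the data dependency between tokens matters.

def new_array(world, init):
    arr = [init]                       # a one-element mutable cell
    return world + 1, arr

def read_array(world, arr):
    return world + 1, arr[0]

def write_array(world, arr, value):
    arr[0] = value
    return world + 1, None

def process(stream):
    world = 0
    world, sum_acc  = new_array(world, 0)    # accumulator for fold (+) 0
    world, prod_acc = new_array(world, 1)    # accumulator for fold (*) 1
    for x in stream:                         # the single fused loop
        world, s = read_array(world, sum_acc)
        world, _ = write_array(world, sum_acc, s + x)
        world, p = read_array(world, prod_acc)
        world, _ = write_array(world, prod_acc, p * x)
    world, s = read_array(world, sum_acc)
    world, p = read_array(world, prod_acc)
    return world, s + p                      # add# of the two fold results
```

For the stream [1, 2, 3, 4] this returns the sum plus the product, 10 + 24 = 34, which is the same computation the lowered repa_process code performs.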
</div><div><br /></div><div><div><span style="font-family: Courier New, Courier, monospace;">repa_process_r2 : [k_c : Rate].Stream# k_c Int# -> World# -> Tuple2# World# Int#</span></div><div><span style="font-family: Courier New, Courier, monospace;"> = /\(k_c : Rate).</span></div><div><span style="font-family: Courier New, Courier, monospace;"> \(s_aqe : Stream# k_c Int#).\(x0 : World#).</span></div><div><span style="font-family: Courier New, Courier, monospace;"> caselet T2# (x1 : World#) (x1_acc : Array# Int#) </span></div><div><span style="font-family: Courier New, Courier, monospace;"> = newArray# [Int#] 8# x0 in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> caselet T2# (x2 : World#) (_ : Void#) </span></div><div><span style="font-family: Courier New, Courier, monospace;"> = writeArray# [Int#] x1_acc 0# 0i# x1 in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> caselet T2# (x3 : World#) (x3_acc : Array# Int#) </span></div><div><span style="font-family: Courier New, Courier, monospace;"> = newArray# [Int#] 8# x2 in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> caselet T2# (x4 : World#) (_ : Void#) </span></div><div><span style="font-family: Courier New, Courier, monospace;"> = writeArray# [Int#] x3_acc 0# 1i# x3 in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> caselet T2# (x11 : World#) (_ : Unit) </span></div><div><span style="font-family: Courier New, Courier, monospace;"> = loop# (lengthOfRate# [k_c])</span></div><div><span style="font-family: Courier New, Courier, monospace;"> (\(x0 : Nat#).\(x5 : World#).</span></div><div><span style="font-family: Courier New, Courier, monospace;"> caselet T2# (x6 : World#) (x1 : Int#) </span></div><div><span style="font-family: Courier New, Courier, monospace;"> = next# [Int#] [k_c] s_aqe x0 x5 in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> caselet T2# (x7 : World#) 
(x0 : Int#) </span></div><div><span style="font-family: Courier New, Courier, monospace;"> = readArray# [Int#] x1_acc 0# x6 in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> caselet T2# (x8 : World#) (_ : Void#) </span></div><div><span style="font-family: Courier New, Courier, monospace;"> = writeArray# [Int#] x1_acc 0# (add# [Int#] x0 x1) x7 in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> caselet T2# (x9 : World#) (x2 : Int#) </span></div><div><span style="font-family: Courier New, Courier, monospace;"> = readArray# [Int#] x3_acc 0# x8 in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> caselet T2# (x10 : World#) (_ : Void#) </span></div><div><span style="font-family: Courier New, Courier, monospace;"> = writeArray# [Int#] x3_acc 0# (mul# [Int#] x2 x1) x9 in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> T2# x10 ()) x4 in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> caselet T2# (x12 : World#) (x1 : Int#) </span></div><div><span style="font-family: Courier New, Courier, monospace;"> = readArray# [Int#] x1_acc 0# x11 in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> caselet T2# (x13 : World#) (x3 : Int#) </span></div><div><span style="font-family: Courier New, Courier, monospace;"> = readArray# [Int#] x3_acc 0# x12 in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> T2# x13 (add# [Int#] x1 x3)</span></div></div><div><br /></div><div>Here, "caselet" is just sugar for a case expression with a single alternative.</div><div><br /></div><div><br /></div><div><b>Convert back to GHC core</b></div><div><br /></div><div><div><span style="font-family: Courier New, Courier, monospace;">repa_process_sTX</span></div><div><span style="font-family: Courier New, Courier, monospace;"> :: forall k_c.</span></div><div><span style="font-family: Courier New, Courier, 
monospace;"> Data.Array.Repa.Series.Stream k_c GHC.Types.Int</span></div><div><span style="font-family: Courier New, Courier, monospace;"> -> GHC.Prim.State# GHC.Prim.RealWorld</span></div><div><span style="font-family: Courier New, Courier, monospace;"> -> (# GHC.Prim.State# GHC.Prim.RealWorld, GHC.Prim.Int# #)</span></div><div><span style="font-family: Courier New, Courier, monospace;">[LclId]</span></div><div><span style="font-family: Courier New, Courier, monospace;">lowered_sTX =</span></div><div><span style="font-family: Courier New, Courier, monospace;"> \ (@ k_c)</span></div><div><span style="font-family: Courier New, Courier, monospace;"> (rate_sTY :: GHC.Prim.Int#)</span></div><div><span style="font-family: Courier New, Courier, monospace;"> (x_sTZ :: Data.Array.Repa.Series.Stream k_c GHC.Types.Int)</span></div><div><span style="font-family: Courier New, Courier, monospace;"> (x_sU0 :: GHC.Prim.State# GHC.Prim.RealWorld) -></span></div><div><span style="font-family: Courier New, Courier, monospace;"> let { (# x_sU1, x_sU2 #) ~ scrut_sUv</span></div><div><span style="font-family: Courier New, Courier, monospace;"> <- newByteArray#_sU3 @ GHC.Prim.RealWorld 8 x_sU0</span></div><div><span style="font-family: Courier New, Courier, monospace;"> } in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let { __DEFAULT ~ x_sU8</span></div><div><span style="font-family: Courier New, Courier, monospace;"> <- writeIntArray#_sU4 @ GHC.Prim.RealWorld x_sU2 0 0 x_sU1</span></div><div><span style="font-family: Courier New, Courier, monospace;"> } in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let { (# x_sU5, x_sU6 #) ~ scrut_sUw</span></div><div><span style="font-family: Courier New, Courier, monospace;"> <- newByteArray#_sU7 @ GHC.Prim.RealWorld 8 x_sU8</span></div><div><span style="font-family: Courier New, Courier, monospace;"> } in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> 
let { __DEFAULT ~ x_sUp</span></div><div><span style="font-family: Courier New, Courier, monospace;"> <- writeIntArray#_sU9 @ GHC.Prim.RealWorld x_sU6 0 1 x_sU5</span></div><div><span style="font-family: Courier New, Courier, monospace;"> } in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let { (# x_sUa, x_sUb #) ~ x_sUa</span></div><div><span style="font-family: Courier New, Courier, monospace;"> <- Main.primLoop</span></div><div><span style="font-family: Courier New, Courier, monospace;"> (Main.primLengthOfRate rate_sTY)</span></div><div><span style="font-family: Courier New, Courier, monospace;"> (\ (x_sUc :: GHC.Prim.Int#)</span></div><div><span style="font-family: Courier New, Courier, monospace;"> (x_sUd :: GHC.Prim.State# GHC.Prim.RealWorld) -></span></div><div><span style="font-family: Courier New, Courier, monospace;"> let { (# x_sUe, x_sU1 #) ~ x_sUe</span></div><div><span style="font-family: Courier New, Courier, monospace;"> <- Main.primNext_Int @ k_c x_sTZ x_sUc x_sUd</span></div><div><span style="font-family: Courier New, Courier, monospace;"> } in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let { (# x_sUf, x_sUc #) ~ x_sUf</span></div><div><span style="font-family: Courier New, Courier, monospace;"> <- readIntArray#_sUg x_sU2 0 x_sUe</span></div><div><span style="font-family: Courier New, Courier, monospace;"> } in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let { __DEFAULT ~ x_sUl</span></div><div><span style="font-family: Courier New, Courier, monospace;"> <- writeIntArray#_sUh</span></div><div><span style="font-family: Courier New, Courier, monospace;"> @ GHC.Prim.RealWorld x_sU2 0 (+#_sUi x_sUc x_sU1) x_sUf</span></div><div><span style="font-family: Courier New, Courier, monospace;"> } in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let { (# x_sUj, x_sU8 #) ~ x_sUj</span></div><div><span style="font-family: Courier 
New, Courier, monospace;"> <- readIntArray#_sUk x_sU6 0 x_sUl</span></div><div><span style="font-family: Courier New, Courier, monospace;"> } in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let { __DEFAULT ~ x_sUo</span></div><div><span style="font-family: Courier New, Courier, monospace;"> <- writeIntArray#_sUm</span></div><div><span style="font-family: Courier New, Courier, monospace;"> @ GHC.Prim.RealWorld x_sU6 0 (*#_sUn x_sU8 x_sU1) x_sUj</span></div><div><span style="font-family: Courier New, Courier, monospace;"> } in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> (# x_sUo, GHC.Tuple.() #))</span></div><div><span style="font-family: Courier New, Courier, monospace;"> x_sUp</span></div><div><span style="font-family: Courier New, Courier, monospace;"> } in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let { (# x_sUq, x_sU1 #) ~ x_sUq</span></div><div><span style="font-family: Courier New, Courier, monospace;"> <- readIntArray#_sUr x_sU2 0 x_sUa</span></div><div><span style="font-family: Courier New, Courier, monospace;"> } in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> let { (# x_sUs, x_sU5 #) ~ x_sUs</span></div><div><span style="font-family: Courier New, Courier, monospace;"> <- readIntArray#_sUt x_sU6 0 x_sUq</span></div><div><span style="font-family: Courier New, Courier, monospace;"> } in</span></div><div><span style="font-family: Courier New, Courier, monospace;"> (# x_sUs, +#_sUu x_sU1 x_sU5 #)</span></div></div><div><br /></div><div><br /></div><div>This doesn't work yet because I've forgotten to pass the type arguments to the unboxed tuple constructor <span style="font-family: Courier New, Courier, monospace;">(#,#)</span><span style="font-family: inherit;">, and there may be other problems as well. 
I'll post again when I have an actual program running.</span></div><div><span style="font-family: inherit;"><br /></span></div><div><span style="font-family: inherit;"><br /></span></div>http://disciple-devel.blogspot.com/2013/05/array-fusion-using-lowering-transform.htmlnoreply@blogger.com (Ben Lippmeier)0tag:blogger.com,1999:blog-301893373063372699.post-3333727029780551750Tue, 01 Jan 2013 05:03:00 +00002013-01-01T16:03:39.604+11:00Code Generators, Rewrite Rules, Aliasing and the Coq<div>DDC 0.3.1 was pushed onto <a href="http://hackage.haskell.org/package/ddc-tools">Hackage</a> just before Christmas.<br /><br />The main features in this new release are:<br /><ul><li>Back end code generation via C and LLVM that accepts the new core language. There is enough implemented to compile first order programs, but we'll need to implement a lambda lifter for the new core language before we can compile higher order programs directly.</li><li>Lots more program transformations. You can apply program transformations to core programs on the command line, and check that the code is being optimised the way it should be. There is a <a href="http://disciple.ouroborus.net/wiki/Tutorial/Core/Optimisation">tutorial</a> describing how to do this. </li><li>A rewrite rule framework that understands the region and effect system. Rewrite rules work on effectful expressions, and can be given predicates so they only apply when particular effects are non-interfering. This lets us perform build-fold fusion when the builder and folder have (non-interfering) side effects. (more info in Amos Robinson's <a href="http://code.ouroborus.net/ddc/doc/theses/2012-AmosRobinson-RewriteRules.pdf">honours thesis</a>)</li><li>Propagation of no-aliasing meta-data to the LLVM code generator. We have extended the core language with witnesses of distinctness that prove particular regions of the store are distinct (do not alias). 
When LLVM has this meta-data it can perform more optimisations on the generated code than it would otherwise be able to. (more info in Tran Ma's <a href="http://code.ouroborus.net/ddc/doc/theses/2012-TranMa-AliasingControl.pdf">honours thesis</a>)</li></ul><div>The main missing feature is:</div><div><ul><li>No type inference, so the core programs contain lots of type annotations. We'll need to get the type inferencer back online before the language will be useful to end-users.</li></ul></div><div>Our immediate plans are to get a paper together about the new core language and how it supports the rewrite rules and LLVM meta-data. There are interesting stories to tell about all of these things. After that, we'll work on the type inferencer.</div><div><br /></div><div>For the paper, we'll need a proof that the core language does what it's supposed to. To get this, I've started adding the region and effect system to the Coq formalisation of SystemF2+MutableStore I did about a year ago. I hadn't touched Coq since then, so I eased my way back into it by cleaning up the proofs I had already done, then packaged them up into the <a href="http://iron.ouroborus.net/">Iron Lambda</a> collection.</div><div><br /></div><div><a href="http://iron.ouroborus.net/">Iron Lambda</a> is a collection of Coq formalisations for functional core languages of increasing complexity, with the SystemF2+Regions+Effects language I'm working on now being the latest one. I made a wiki page for Iron Lambda because I think it will be useful in its own right, to help intermediate Coq users bridge the gap between the end of the <a href="http://www.cis.upenn.edu/~bcpierce/sf/">Software Foundations </a>course and what appears in current research papers. The biggest language in Software Foundations is Simply Typed Lambda Calculus (STLC)+Records+SubTyping, but it doesn't include polymorphism or indirect mutual recursion. 
To deal with SystemF style polymorphism we need a better approach to names than unique identifiers (i.e., something like de Bruijn indices). To deal with indirectly mutually recursive language definitions we need to know how to define our own custom induction schemes. Iron Lambda shows how to do these things.</div><div><br />For the impatient:<br /><br /></div><div><b>DDC 0.3.1</b> </div><pre> $ cabal update<br /> $ cabal install ddc-tools</pre><br /><b>Iron Lambda</b><br /><pre> $ darcs get http://code.ouroborus.net/iron/iron-head/</pre></div>http://disciple-devel.blogspot.com/2013/01/code-generators-rewrite-rules-aliasing.htmlnoreply@blogger.com (Ben Lippmeier)0tag:blogger.com,1999:blog-301893373063372699.post-1305137130684749361Thu, 02 Feb 2012 06:50:00 +00002012-02-02T17:59:54.876+11:00Vectorisation without Replication in Data Parallel HaskellHere is a <a href="http://en.wikipedia.org/wiki/Barnes-Hut">Barnes-Hut</a> gravitation simulation written using <a href="http://hackage.haskell.org/package/vector-0.9.1">Data.Vector</a> and <a href="http://hackage.haskell.org/package/gloss-1.6.0.1">Gloss</a>. <br /><br /><div class="separator" style="clear: both; text-align: center;"><iframe width="640" height="360" src="http://www.youtube.com/embed/tDgOBM29ny4?rel=0" frameborder="0" allowfullscreen></iframe><br /></div><br />If done naively, such an n-body simulation has a runtime complexity of O(n^2) because we need to consider the interaction of every body with every other body. Barnes-Hut performs an approximation that reduces this to O(n . log n). At each step in the simulation we insert the bodies (grey) into a quad-tree (green), and compute the centroid for each branch (blue). 
Now, if some other body is sufficiently far away from a centroid, then we use the centroid to approximate the force due to the bodies in the corresponding branch, instead of inspecting each one individually.<br /><br />Now you've seen the video, the following graph sums up my work on Data Parallel Haskell (DPH) for the past six months:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://code.ouroborus.net/dph/blog/2012/02/nbody-graph.png" imageanchor="1" style=""><img border="0" height="451" width="600" src="http://code.ouroborus.net/dph/blog/2012/02/nbody-graph.png" /></a></div><br />This is a plot of runtime vs the number of bodies in the Barnes-Hut simulation. The simulation in the video uses just 1000 bodies, but my understanding is that the physicists need millions or billions to do useful physics work. Back to the graph, the red line is the program from the video, which uses Data.Vector to store its arrays and runs sequentially. The brown line is using the DPH library that shipped with GHC 7.2. The blue line is using the DPH library that will ship with GHC 7.4. Note how the asymptotic complexity of the program with New DPH is the same as with Data.Vector. <br /><br />In Old DPH we were using a naive approach to vectorisation that resulted in the quad-tree being replicated (copied) once for every body in the simulation (bad!). In New DPH we compute the force on each body in parallel while sharing the same quad-tree. There is a story about this, and you can read all about it when we finish the paper.<br /><br />Of course, asymptotic complexity isn't everything. The DPH program is still running about 10x slower than the Data.Vector program for large input sizes, and I'm sure that production C programs would run at least 5x faster than that. There's much low hanging fruit here. DPH misses many standard optimisation opportunities, which results in numerical values being unboxed and reboxed in our inner loops. 
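To make the centroid approximation described above concrete, here is an illustrative Python sketch of the force calculation. This is my own model for exposition, not the Data.Vector or DPH code that was benchmarked; the cell representation and the THETA threshold are assumptions.

```python
import math

# Illustrative sketch (not the benchmarked Data.Vector / DPH code) of
# the Barnes-Hut force approximation. A cell is either a leaf
# (subs = None) or a branch with four sub-cells and a precomputed
# centroid (x, y, mass).

THETA = 0.5  # opening angle: smaller values descend more often

def accel(body, cell, size):
    """Acceleration on body = (x, y) due to cell, a square of side size."""
    (cx, cy, m) = cell["centroid"]
    dx, dy = cx - body[0], cy - body[1]
    dist = math.sqrt(dx * dx + dy * dy) + 1e-9   # avoid division by zero
    if cell["subs"] is None or size / dist < THETA:
        # Leaf, or far enough away: treat the whole cell as one point
        # mass located at its centroid.
        a = m / (dist * dist)
        return (a * dx / dist, a * dy / dist)
    # Too close: descend into the quadrants and sum their contributions.
    ax = ay = 0.0
    for sub in cell["subs"]:
        sx, sy = accel(body, sub, size / 2)
        ax += sx
        ay += sy
    return (ax, ay)
```

The `size / dist < THETA` test is the standard Barnes-Hut opening criterion: a cell that looks small enough from the body's viewpoint is treated as a single point mass, which is what turns the O(n^2) pairwise sum into O(n . log n).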
There are also algorithmic improvements to the library that are just waiting for me to implement them. If I can make the blue line cross the red line in the next six months, then I'm buying everyone a beer.http://disciple-devel.blogspot.com/2012/02/vectorisation-without-replication-in.htmlnoreply@blogger.com (Ben Lippmeier)1tag:blogger.com,1999:blog-301893373063372699.post-4620264616681706587Mon, 23 Jan 2012 06:20:00 +00002012-01-23T17:20:20.150+11:00The new Disciple Core languageIt's been a while between updates. The last one was in August, and after that I got distracted with ICFP. When I got back I spent more time doing Coq proofs, finishing with Progress and Preservation for a version of SystemF2 with mutable algebraic data. The proofs are <a href="http://code.ouroborus.net/ddc/ddc-head/proof/DDC/Language/SystemF2Store/">here</a>. All data is allocated into the store (heap), and there are primitive functions to update it. In this language you can, say, destructively update a list so that the tail points to different physical data. This language doesn't have effect typing, but after this proof I felt like I had crossed the line from "Coq-shy" to "Coq-sure".<br /><br />In early November I read about ciu-equivalence, and how to prove contextual equivalence of program transformations. Thinking ahead, it felt like time to clean up the sponginess in the existing DDC core language, because I'd need to do this before trying to formalise it. Although the plain calculus has a proper semantics and a hand-done soundness proof, the actual core language as used in the compiler also has some half-baked hacks. The reviewers of a previous paper had made suggestions about how to clean up other aspects of the core calculus, and having Coqified all those languages, I now understand why it was good advice.<br /><br />Cutting to the chase, I've redesigned the DDC core language and there is an interpreter that works well enough to evaluate simple recursive functions. 
Here is an example: <br /><pre> letrec {<br /> ack [r1 r2 r3: %] <br /> (m : Int r1) {!0 | Use r1 + Use r2 + Use r3}<br /> (n : Int r2) { Read r1 + Read r2 + Alloc r3<br /> | Use r1 + Use r2 + Use r3}<br /> : Int r3<br /> = letregion r4 in<br /> let zero = 0 [r4] () in<br /> let one = 1 [r4] () in<br /> case eqInt [:r1 r4 r4:] m zero of {<br /> 1 -> addInt [:r2 r4 r3:] n one;<br /> _ -> case eqInt [:r2 r4 r4:] n zero of {<br /> 1 -> ack [:r4 r4 r3:] <br /> (subInt [:r1 r4 r4:] m one) <br /> (1 [r4] ());<br /><br /> _ -> ack [:r4 r4 r3:] <br /> (subInt [:r1 r4 r4:] m one)<br /> (ack [:r1 r4 r4:] m (subInt [:r2 r4 r4:] n one));<br /> }<br /> }<br /> } in ack [:R0# R0# R0#:] (2 [R0#] ()) (3 [R0#] ());;<br /></pre>Here is another example that does destructive update of an integer:<br /><pre> letregion r1 with {w1 : Mutable r1} in<br /> let x : Int r1 = 0 [r1] () in<br /> let _ : Unit = updateInt [:r1 r1:] < w1 > x (2 [r1] ()) in<br /> addInt [:r1 r1 R2#:] x (3 [r1] ());;<br /></pre>and one that suspends the allocation of an integer with lazy evaluation:<br /><pre> letregion r1 with { w1 : Const r1; w2 : Lazy r1; w3 : Global r1 } in<br /> let x : Int r1 lazy < w2 > <br /> = purify < alloc [r1] w1 > in<br /> forget < use [r1] w3 w1 > in<br /> 2 [r1] () <br /> in addInt [:r1 R0# R0#:] x (3 [R0#] ());;<br /></pre><br />Note that this is an intermediate representation for a compiler, and has vastly more type information than a real user would want to write. The compiler will perform type inference on the source language, and automatically translate the user program to this lower level language.<br /><br />The new DDC core language is described on the <a href="http://disciple.ouroborus.net/wiki/Language/Core">wiki</a> and so far I've been reasonably good at keeping the wiki up to date with what's implemented. <br /><br />The main changes are:<br /><ul><li>Witnesses now exist in their own universe, at level 0 next to values. 
Both values and witnesses are classified by types, and types classified by kinds. This removes the dependent-kinding of the old language. The more I thought about it, the less fun formalising a dependently kinded language seemed to be.<br /><br /><li>I've removed the more-than constraints on effect and closure variables. The type inference algorithm I am using returns constraints on effect and closure variables, so I had added similar constraints to the core language. This was a bad idea because managing the additional subsumption judgements during core type checking is a headache. The new core language has no such constraints, and I think I know how to process the output of the inferencer so I won't need them.<br /><br /><li>Introducing lazy evaluation is now done with the <tt>let .. lazy</tt> binding form instead of a primitive suspend operator. This goes hand-in-hand with the <tt>purify</tt> and <tt>forget</tt> keywords that mask the effect and closure of their body respectively. These two keywords are akin to the type casts used by SystemFc and the GHC core language to support GADTs. I think it makes more sense to mask out the effect of an expression before suspending it, than to pass a witness as to why it's pure. The former style will still be used in the source program because that's how the type inferencer works, but the latter should be easier to work with in the core language.<br /></ul>Anyway, if you <tt>darcs get</tt> the DDC repo you can <tt>make bin/ddci-core</tt> to build the interpreter. Run the examples like <tt>bin/ddci-core test/ddci-core/30-Eval/30-Letrec/Test.dcx</tt>. It's not completely finished, but all the examples under the <tt>test/ddci-core</tt> directory run fine.<br /><br />The interpreter should be done in another couple of weeks. 
After that it'll be time to split off the LLVM backend from the existing compiler so that we can compile core programs directly.http://disciple-devel.blogspot.com/2012/01/new-disciple-core-language.htmlnoreply@blogger.com (Ben Lippmeier)0tag:blogger.com,1999:blog-301893373063372699.post-1212596525606215985Sat, 06 Aug 2011 09:39:00 +00002011-08-08T18:10:22.515+10:00How I learned to stop worrying and love de Bruijn indicesWhen I started learning how to formalise languages in Coq, one of my initial stumbling blocks was choosing an approach to name binding. There are at least three fancy-sounding contenders: Locally Nameless, Nominal Reasoning and Parametric Higher Order Abstract Syntax. Of course, you could also use good old de Bruijn indices, but proponents of fancy approaches warn about the large volume of lemmas regarding lifting and substitution you'll need to prove before you can start the "real work". <br /><br />Back in April I scratched around for about two weeks reading up on all the different approaches, trying to identify the "best" one (whatever that means). Then I read the following comment by Xavier Leroy on the <a href="https://alliance.seas.upenn.edu/~plclub/cgi-bin/poplmark/index.php?title=Submission_by_Xavier_Leroy">POPLmark website</a>. He had used the Locally Nameless approach, but had stated: "If the purpose is just to solve the challenge and be done with it, I would rather go for pure de Bruijn". That was enough to convince me to stop worrying and just get into it. I've spent about 250 hours now doing my own proofs, and I can report that using de Bruijn isn't really that bad. Yes, you need to prove some lemmas about lifting and substitution, but my entire response to naysayers is summed up as follows: "The only computational operation that lambda calculus has is substitution, so don't pretend you can get away without proving something about it." 
Corollary: HOAS doesn't count.<br /><br /><b>The Lemmas</b><br /><br />You need to know what the lemmas are though, and I advise cribbing them from someone else's proof. I learned them from Arthur Charguéraud's proof, but I don't know where they are from originally. You're more than welcome to <a href="http://code.ouroborus.net/ddc/ddc-head/proof/DDC/Language/">copy them from mine</a>.<br /><br />Once you know what the lemmas are, and how they work in STLC, extending them to new languages is straightforward. For reference, the following are the definitions of the lifting and substitution operators for STLC. Note that if you just want to prove Progress and Preservation for STLC, you don't actually need the lemmas. They're needed in bigger languages such as SystemF2 to show that type substitution is commutative. Nevertheless, for illustrative purposes we'll just stick with STLC expressions.<br /><br /><pre>Inductive exp : Type :=
<br /> | XVar : nat -> exp
<br /> | XLam : ty -> exp -> exp
<br /> | XApp : exp -> exp -> exp.
<br />
<br />Fixpoint
<br /> liftX (d: nat) (* current binding depth in expression *)
<br /> (xx: exp) (* expression containing references to lift *)
<br /> : exp
<br /> := match xx with
<br /> | XVar ix
<br /> => if le_gt_dec d ix
<br /> (* Index is referencing the env, so lift it across the new elem *)
<br /> then XVar (S ix)
<br />
<br /> (* Index is locally bound in the expression itself, and
<br /> not the environment, so we don't need to change it. *)
<br /> else xx
<br />
<br /> (* Increase the current depth as we move across a lambda. *)
<br /> | XLam t1 x1
<br /> => XLam t1 (liftX (S d) x1)
<br />
<br /> | XApp x1 x2
<br /> => XApp (liftX d x1) (liftX d x2)
<br /> end.
<br />
<br />Fixpoint
<br /> substX (d: nat) (* current binding depth in expression *)
<br /> (u: exp) (* new expression to substitute *)
<br /> (xx: exp) (* expression to substitute into *)
<br /> : exp
<br /> := match xx with
<br /> | XVar ix
<br /> => match nat_compare ix d with
<br /> (* Index matches the one we are substituting for. *)
<br /> | Eq => u
<br />
<br /> (* Index was free in the original expression.
<br /> As we've removed the outermost binder, also decrease this
<br /> index by one. *)
<br /> | Gt => XVar (ix - 1)
<br />
<br /> (* Index was bound in the original expression. *)
<br /> | Lt => XVar ix
<br /> end
<br />
<br /> (* Increase the depth as we move across a lambda. *)
<br /> | XLam t1 x2
<br /> => XLam t1 (substX (S d) (liftX 0 u) x2)
<br />
<br /> | XApp x1 x2
<br /> => XApp (substX d u x1) (substX d u x2)
<br /> end.
<br /></pre><br /><b>Drawing the de Bruijn environment</b><br /><br />Here is a nice lemma to get started. <br /><pre>Lemma lift_lift
<br /> : forall n m t
<br /> , lift n (lift (n + m) t)
<br /> = lift (1 + (n + m)) (lift n t).
<br /></pre>This is one of the lemmas you need for proving commutativity of substitution. Although it looks complicated at first glance, it will make significantly more sense when you know how to draw it. <br /><br />First, consider the standard typing judgement form:<br /><pre>Env |- Exp :: Type
<br /></pre>Assume there are de Bruijn indices in the expression. Some of these indices point to lambdas in the expression itself, corresponding to locally bound variables, while some point to lambdas in the environment, corresponding to free variables. Here is an example:<br /><br /><img border="0" src="http://2.bp.blogspot.com/-ovN0N2suJ9U/Tjzvg78Eu7I/AAAAAAAAADU/HZSZVsu_qsc/s800/debruijnexample.png" /><br /><br />Suppose I apply (lift 2) to the above expression. Although it would be helpful for educational purposes to do this directly using the definition of 'lift' above (try it!), I'll tell you the shortcut instead. Firstly, if we assume that the definition of 'lift' is correct, none of the bound indices will be changed during the application. Ignoring bound indices, the only ones remaining are free indices. These are the indices that point into the environment. <br /><br />Now, although applying 'lift' to an expression may increment these free indices, instead of thinking about indices being incremented, it's easier to think about environment elements being shifted. I'll explain what I mean by that in a moment, but for now let's just do it...<br /><br />Here is the same expression as before, with the environment positions numbered in blue:<br /><br /><img border="0" src="http://3.bp.blogspot.com/-QHLoh_KX6Wk/Tjz0gHKQpfI/AAAAAAAAADc/naH-0Vypd6s/s800/debruijnenvpos.png" /><br />Shifting the elements after position two yields the following:<br /><br /><img border="0" src="http://4.bp.blogspot.com/-ua-UyDDJTVU/Tjz5vjy_oyI/AAAAAAAAADs/ilnIANDn1uI/s800/debruijnenvlifted.png" /><br /><br />In the above diagram I've used '_' to represent a hole in the environment. This is a place where I could insert a new element, and be guaranteed that none of the indices in the expression point to this new element. 
This in turn means that if I apply (lift 2) to the expression and then insert a new element into the hole, the resulting judgement is guaranteed to give me the same type as before (t5 in this case).<br /><br />Of course, applying 'lift' to the expression doesn't actually change the environment. In this example I've moved 't1' and 't2' to the left for illustrative purposes, but this is an operation separate from applying 'lift' to the expression itself. However, applying 'lift' does change the indices, and these indices are <i>pointers</i> into the environment. If we like, we could think about moving the pointers (the arrow heads) rather than the actual environment elements. This is an important change in perspective. Instead of thinking about 'lift' as incrementing indices in the expression, we want to think about 'lift' as shifting environment pointers to the left.<br /><br /><br />Let's look at our lemma again:<br /><pre>Lemma lift_lift
<br /> : forall n m t
<br /> , lift m (lift (n + m) t)
<br /> = lift (1 + (n + m)) (lift m t).
<br /></pre><br />As 't' is quantified, we need to prove this for all possible terms 't'. Using our change in perspective, this also means that we want to prove the lemma for all possible sets of environment pointers. Given the environment pointers for 't', the expression '(lift (n + m) t)' transforms them into a new set (by adding a hole at position n + m). Likewise the expression 'lift (1 + (n + m)) (lift m t)' transforms the environment pointers of 't' by adding a hole at position 'm', and then adding another hole at position '1 + (n + m)'. Here is a picture of the full situation. I've just drawn the contiguous set of environment pointers as rectangles, with the holes marked in grey.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-uAGyoL6az9g/TjzWE3X5vsI/AAAAAAAAADE/mYBg55TTJh4/s800/lemmaliftlift.png" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" src="http://3.bp.blogspot.com/-uAGyoL6az9g/TjzWE3X5vsI/AAAAAAAAADE/mYBg55TTJh4/s800/lemmaliftlift.png" /></a></div><br />In the first line we have the set of environment pointers for 't'. Applying 'lift m' adds a hole at position 'm'. Alternatively, applying 'lift (n + m)' adds a hole at position '(n + m)'. Given 'lift m t', if we add another hole at (1 + n + m) we get the last set shown. Likewise, if we take the set 'lift (n + m) t' and add a hole at position 'm' then we also get the last line.<br /><br />That's it. That complicated-looking lemma above can be drawn with a few rectangles, and provided the definition of 'lift' does what it's supposed to, the lemma is obviously true. For a language like SystemF2 you need about 5 lemmas concerning lifting and substitution. They're all easy to draw, and the proofs are straightforward once you understand the general approach.
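For readers following along outside Coq, here is a rough Haskell sketch of the lifting operation the diagrams describe. The data type and names are mine, purely for illustration; the post's real definition is the Coq one given earlier.

```haskell
-- A sketch of de Bruijn lifting (illustrative only; the post's real
-- definition is in Coq). 'lift d t' increments every index in 't' that
-- is free relative to depth 'd', opening a hole at environment
-- position 'd'.
data Exp = Var Int | Lam Exp | App Exp Exp
  deriving (Eq, Show)

lift :: Int -> Exp -> Exp
lift d (Var n)
  | n >= d       = Var (n + 1)           -- points past the hole: shift it
  | otherwise    = Var n                 -- bound below the hole: unchanged
lift d (Lam b)   = Lam (lift (d + 1) b)  -- one more binder in scope
lift d (App f x) = App (lift d f) (lift d x)
```

Spot-checking the lifting lemmas against a definition like this on small concrete terms is a cheap sanity test before attempting the Coq proofs.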
In the next post I'll show how to draw substitution, which works in a similar way.http://disciple-devel.blogspot.com/2011/08/how-i-learned-to-stop-worrying-and-love.htmlnoreply@blogger.com (Ben Lippmeier)0tag:blogger.com,1999:blog-301893373063372699.post-4069330490138844450Wed, 25 May 2011 06:42:00 +00002011-05-25T17:04:02.258+10:00Proofs and mutual recursionQuick update. I'm still working on mechanising the proofs for the DDC core language in Coq. So far I've made it through Progress, Preservation and the correspondence between Big and Small step semantics for <a href="http://code.ouroborus.net/ddc/ddc-head/proof/Simple/">Simply Typed Lambda Calculus</a> (STLC), <a href="http://code.ouroborus.net/ddc/ddc-head/proof/SystemF/">System-F</a>, <a href="http://code.ouroborus.net/ddc/ddc-head/proof/SystemF2/">System-F2</a>, and <a href="http://code.ouroborus.net/ddc/ddc-head/proof/Simple_PCF/">PCF</a>. Extending the STLC proof with polymorphism and fixpoints etc was surprisingly straightforward. It only took about 4 hours each to create the other versions of the proof.<br /><br />The next qualitative hurdle appears to be handling the mutual recursion that arises when we extend the language in other ways. To add case expressions, I need to represent case alternatives. Case alternatives contain expressions, and expressions contain (lists of) alternatives.<br /><br />Unfortunately, with mutually recursive definitions, it appears that Coq can't automatically generate all the induction schemes I need. I could be wrong on this though, as I'm still a Coq newbie. I've figured out that the Coq command "Combined Scheme" will make <i>some</i> of the schemes I might want, but it doesn't cover all of the use-cases. The problem (I think) is that although expressions are defined in terms of lists of alternatives, the lists themselves are polymorphic and aren't directly defined in terms of alternatives.
For this reason, I can't convince "Combined Scheme" to make me a strong enough induction scheme, so I must write down and prove this myself.<br /><br />At least, that's what I'm inferring from reading the <a href="http://www.ii.uni.wroc.pl/~dabi/Publications/Haskell10/stg-in-coq/stg-in-coq/">proof of soundness for the STG machine</a> described in a <a href="http://www.comlab.ox.ac.uk/files/3858/pirog-biernacki-hs10.pdf">paper</a> by Maciej Pirog and Dariusz Biernacki. I'm still working through it.http://disciple-devel.blogspot.com/2011/05/proofs-and-mutual-recursion.htmlnoreply@blogger.com (Ben Lippmeier)0tag:blogger.com,1999:blog-301893373063372699.post-6354990687442850953Tue, 19 Apr 2011 11:40:00 +00002011-04-19T21:40:04.070+10:00Falling down the naming wellOver the last month or so I've started to formalise the proofs of the DDC core language in Coq. The existing proofs were just getting too big to manage by hand, and typing them all up in Latex was seeming like a less and less fun thing to do.<br /><br />Being a Coq-newbie I started with Benjamin Pierce and co's <a href="http://www.cis.upenn.edu/~bcpierce/sf/">Software Foundations</a> course notes. I had done a couple of pages of Coq tutorial before, but never tried to use it for anything real. I've found the course notes to provide a fairly smooth ride, though I got antsy during the IMP formalisation and skipped straight to the lambda calculus one. After about two weeks of evenings I was able to redo the simply typed lambda calculus (STLC) one by myself, but hit a wall trying to scale up to System-F. To cut a longer story short, I've come to appreciate (along with many others) what a massive hack capture avoiding substitution is.<br /><br />I'm fast moving towards the viewpoint that the deBruijn and/or locally nameless representation is actually the "real" lambda calculus, whereas named binders are just a layer of syntactic sugar that confuses everything (and gives you cavities). 
I've ended up just settling on vanilla deBruijn indices for further work, mainly because the examples I've seen so far using the locally nameless representation rely on a heavy amount of Coq Ltac magic that I don't understand yet. I'm also more than willing to trade time for cleverness. By that I mean I'm willing to commit more time for a less clever solution, provided I know that the approach I take will get the job done and I won't have to start all over again using a different formalisation of binders.<br /><br />Underneath all of this is an itch in my mind saying that I was already writing a perfectly good paper about the DDC core language, and I don't want to start another one right now about "How to formalise DDC Core in Coq". Anyway, I've made it through Progress and Preservation for STLC using the deBruijn representation, including all the list lemmas, and am feeling fairly good about it so far. The proofs are in the main DDC tree at <a href="http://code.ouroborus.net/ddc/ddc-head/proof/">http://code.ouroborus.net/ddc/ddc-head/proof/</a>.http://disciple-devel.blogspot.com/2011/04/falling-down-naming-well.htmlnoreply@blogger.com (Ben Lippmeier)1tag:blogger.com,1999:blog-301893373063372699.post-2550061964781586025Tue, 29 Mar 2011 08:44:00 +00002011-03-29T21:05:56.619+11:00Real Time Edge Detection in Haskell<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://2.bp.blogspot.com/-k6mYbRMrKTM/TZGhiwGTWvI/AAAAAAAAAB4/aX9H50xgY-k/s1600/beholder.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 241px;" src="http://2.bp.blogspot.com/-k6mYbRMrKTM/TZGhiwGTWvI/AAAAAAAAAB4/aX9H50xgY-k/s400/beholder.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5589426230942456562" /></a><br />Last week we submitted a <a href="http://www.cse.unsw.edu.au/~benl/papers/stencil/stencil-icfp2011-sub.pdf">paper</a> to ICFP about how to implement efficient parallel stencil convolutions in Haskell. 
This goes along with a new release of <a href="http://hackage.haskell.org/package/repa">Repa</a> that runs on GHC 7.0.3. We've extended the old array representation to support the partitioning of arrays into several regions, and rewritten the back-end to recover sharing between the computation of adjacent array elements. Coupled with a careful management of aliasing issues, this lets us implement image processing algorithms that have comparable performance to <a href="http://opencv.willowgarage.com/wiki/">OpenCV</a>. <br /><br />Absolute performance depends on what data format we're using. Using Word8s is about 4 times slower than OpenCV, because OpenCV uses SIMD intrinsics that we don't have access to from Haskell (yet!). On the other hand, using Float32s and 2 Haskell threads can run faster than single-threaded OpenCV, as the parallelism makes up for the lack of SIMD instructions.<br /><br />Here is a <a href="http://code.ouroborus.net/beholder/video/Edge480.mov">video</a> of me demonstrating a Canny edge detector applied to a stream captured from my iSight camera. The demo uses Objective C for the GUI, and Haskell for the processing. The <a href="http://code.ouroborus.net/beholder/beholder-head/">source</a> builds with XCode 3.2.1 and GHC 7.0.3, though you'll need to update and run the CONFIGURE.sh script in the root dir to point it to your GHC install. There are also prebuilt versions with the GHC "+RTS -N" option baked in for <a href="http://code.ouroborus.net/beholder/distro/beholder-N2.tgz">two</a>, <a href="http://code.ouroborus.net/beholder/distro/beholder-N4.tgz">four</a> and <a href="http://code.ouroborus.net/beholder/distro/beholder-N6.tgz">six</a> threads. They all do the same thing apart from the number of threads. You can change the RTS options in the main.m module if you want to rebuild it. On my 4-Core i7 desktop it does about 20 frames per second with no problem -- the framerate from the iSight is set dynamically by OSX depending on system load.
I've tested it on 3 separate machines, but let me know if it doesn't work for you. Non-Mac users can run the edge detector over .bmp files, using the repa-canny program from <a href="http://hackage.haskell.org/package/repa-examples">repa-examples</a>.http://disciple-devel.blogspot.com/2011/03/real-time-edge-detection-in-haskell.htmlnoreply@blogger.com (Ben Lippmeier)1tag:blogger.com,1999:blog-301893373063372699.post-3142037159814850638Mon, 03 Jan 2011 11:54:00 +00002011-01-04T00:03:29.475+11:00Papers, Proofs, PrimOps, Allocation and Strengthening<span style="font-weight:bold;">Papers</span><br />General status update: I got the reviews back for the <a href="http://www.cse.unsw.edu.au/~benl/papers/witness/witnessing-jfp-sub.pdf">paper</a> I submitted to JFP on the DDC core language. The reviews were very helpful, and a huge thank you to the reviewers if you're reading this. I will endeavour to make all my own reviews as helpful.<br /><br />The bottom line was the paper looked promising, but needs more motivation for the language design, and more meta-theory. The motivation is easy enough to 'port across from my thesis, so this part shouldn't take too long. For background, the JFP paper is an extension of the APLAS 2009 paper I wrote just after I finished my thesis. The original version of that paper had more motivational material, but I ended up cutting it to fit the page limit. For the JFP paper I tried to stick to the details of the core language, but I suppose not discussing the source language and wider context makes it hard to see the point of it all. There were also suggestions and questions as to the details of the language and typing rules, which I'll either fold in or explain why.<br /><br /><span style="font-weight:bold;">Proofs</span><br />For the meta-theory, the paper needs proofs of observational equivalence for the example optimisations, showing how the witnesses are used during optimisation. 
I had a proof of soundness for the type system in the appendix, and worked examples of the optimisations, but not formal proofs that the optimisations are ok. Said optimisations are just standard ones like let floating and full-laziness, but I agree that if the point of having witnesses of purity and constancy in the core language is to perform optimisations in the presence of side-effects, then we want some formal link between the witnesses and those optimisations.<br /><br />Anyway, I started on the proofs over the Christmas break, and have the structure of one showing that let-floating is sound (changing the order of let-bindings with non-interfering effects), though I still need to fill in some details. I'm estimating about two or three weeks' work (full time equivalent) to finish the proofs and update the paper.<br /><br /><span style="font-weight:bold;">PrimOps</span><br />I went to a hackathon hosted by <a href="http://www.mega-nerd.com/erikd/Blog">Erik</a> last week where we worked on LLVM support for DDC. Erik's written all the code for the backend, though to push it all the way through I needed to change how primops are handled in the rest of the compiler (which is explained in Erik's blog post). I just finished that this afternoon, so we're gunning for full LLVM support in the near future. The primop handling is pretty clean now, so if we wanted to, say, add support for vector operations like MMX and SSE then this should be reasonably straightforward.<br /><br /><span style="font-weight:bold;">Allocation Effects</span> <br />The paper reviews have also convinced me that we want explicit allocation effects in the source and core languages. One of my thesis reviewers also suggested this, though so far I've left them out because they weren't absolutely needed. Allocation effects have a slightly different flavour to world effects (like writing to the Console) and to read/writing mutable data.
When two expressions have world or read/write effects, they can interfere in such a way that you cannot change the <span style="font-style:italic;">order</span> of these two expressions. In contrast, you can always change the order of expressions that just allocate new objects.<br /><br />However, if an expression allocates new <span style="font-style:italic;">mutable</span> objects, then you cannot hoist it out of a lambda (apply the full-laziness transform). Hoisting an expression means it is potentially executed <span style="font-style:italic;">less often</span>, so we're not just changing the order of two actions in time. If you want to do hoisting without information about allocation, then it's sufficient to check that the binding to be hoisted performs no world or read/write effects, and does not return mutable data. However, using allocation effects will make this reasoning clearer, and can enable more optimisations later, so I'll add them in. I started on a <a href="http://code.ouroborus.net/ddc/ddc-alloc/">branch</a> that has them, but now that I've convinced myself they're easy enough to add, I'm going to leave the hacking to later and focus on getting the paper finished first.<br /><br /><span style="font-weight:bold;">Strengthening</span> <br />In the type inferencer, it turns out that strengthening more-than constraints to equality constraints during inference breaks programs that use interesting higher order functions (as with parser combinators). This isn't a problem with standard 2nd order functions like map, fold, filter etc, but only appears with 3rd order functions and higher, or with data containing functions, which is why I hadn't noticed it before. By "break" I mean the inferencer returns a bad type, so we get a panic when we try to type check the corresponding core program.
We still want to do the strengthening because it removes the need for a pile of redundant effect applications in the core programs, but it should be done while converting the desugared language to core, instead of during inference, which is fairly obvious in hindsight. I've started fixing this as well, but need to adjust the type utils (like type simplification and effect masking) so they'll handle the non-strengthened forms. Another job to be left until after the paper is finished...http://disciple-devel.blogspot.com/2011/01/papers-proofs-primops-allocation-and_03.htmlnoreply@blogger.com (Ben Lippmeier)0tag:blogger.com,1999:blog-301893373063372699.post-6915360865043118372Sat, 27 Nov 2010 02:56:00 +00002010-11-27T16:28:42.093+11:00Shootout: Fannkuch reduxA few times after finishing some GHC hacking, I've been talking to Simon Marlow and mentioned that whatever it was I was doing now "works".<br />Usually, his reply is then "for what value of works?". Especially with compilers, there is quite a gap between "sorta works", "seems to work" and "industrial strength". <br /><br />Last night I was taking a stroll through the "sorta" end of town while hacking on a Disciple version of the fannkuch-redux program from the languages benchmarks game. Disciple is a dialect of Haskell, so I started with one of the nicer existing Haskell versions.<br /><br />Here is the version by Miha Vučkovič. I've reformatted it a bit and used the Disciple string pasting operator % instead of the (hated) Haskell ++, but it's otherwise the same program. 
(I've also added a type sig because DDC doesn't do dictionary passing yet, and used iterateL, but more on that later.)<br /><pre><br />flop (2:x1:t) = x1:2:t<br />flop (3:x1:x2:t) = x2:x1:3:t<br />flop (4:x1:x2:x3:t) = x3:x2:x1:4:t<br />flop (5:x1:x2:x3:x4:t) = x4:x3:x2:x1:5:t<br />flop (6:x1:x2:x3:x4:x5:t) = x5:x4:x3:x2:x1:6:t<br />flop (7:x1:x2:x3:x4:x5:x6:t) = x6:x5:x4:x3:x2:x1:7:t<br /><br />flop lst@(h:_) = r<br /> where flop' 0 (t, r) = (t, r)<br /> flop' n ((h:t), r) = flop' (n-1) (t, h:r)<br /> (t, r) = flop' h (lst, t)<br /> <br />flopS (1:_) = 0<br />flopS lst = 1 + flopS (flop lst)<br /><br />rotate n (h:t) = rotate' (n-1) t <br /> where rotate' 0 l = h:l<br /> rotate' n (f:t) = f:(rotate' (n-1) t)<br /><br />checksum :: Int -> Int -> Int<br />checksum i f<br /> | mod i 2 == 0 = f<br /> | True = -f<br /><br />pfold :: (Int, Int) -> [(Int, Int)] -> (Int, Int)<br />pfold r [] = r<br />pfold (ac, af) ((c, f):t) = pfold (sc, sf) t <br /> where sc = ac+c<br /> sf = max af f<br /><br />permut n = foldr perm [[1..n]] [2..n] <br /> where perm x lst = concat [take x $ iterateL (rotate x) l | l <- lst]<br /><br />main () <br /> = do n = 8<br /> (chksm, mflops) <br /> = pfold (0,0) <br /> $ map (\(i, p) -> let flops = flopS p <br /> in (checksum i flops, flops))<br /> $ zip [0..] (permut n)<br /><br /> putStr $ show chksm % "\nPfannkuchen(" % show n % ") = " % show mflops % "\n"<br /></pre>It doesn't matter what it does, or how it does it. Check the shootout page if you're interested, we're just trying to get it running for now. 
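For the curious who would rather not visit the shootout page: stripped of the hand-unrolled clauses (which exist only for speed), the flip-counting core of the benchmark amounts to this Haskell sketch. This rendering is my own, not from the shootout entry.

```haskell
-- Starting from a permutation, repeatedly reverse its first h elements,
-- where h is the current head, until the head becomes 1.
-- 'flopS' counts how many reversals that takes.
flop :: [Int] -> [Int]
flop []       = []
flop xs@(h:_) = let (front, back) = splitAt h xs
                in reverse front ++ back

flopS :: [Int] -> Int
flopS (1:_) = 0
flopS xs    = 1 + flopS (flop xs)

-- e.g. flopS [4,2,1,3] == 3
```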
<br /><br /><h3>Faking value recursion</h3><br />My first compile attempt reported:<pre><br />desire:ddc-head-devel benl$ bin/ddc -O --make test/60-Shootout/Fannkuch-Redex/MainShoot.hs <br />[1 of 1] Compiling MainShoot<br />ddc: ERROR<br />./test/60-Shootout/Fannkuch-Redex/MainShoot.hs:16:29<br /> Conflicting region constraints.<br /> <br /> constraint: Direct %115<br /> from the use of: flop<br /> at: ./test/60-Shootout/Fannkuch-Redex/MainShoot.hs:16:29<br /> <br /> conflicts with,<br /> constraint: Lazy %115<br /> from the use of: permut<br /> at: ./test/60-Shootout/Fannkuch-Redex/MainShoot.hs:42:29<br /></pre><br />Confusing, yes, but at least it's a civilised message. What it's trying to say is that it thought the region named %115 was Direct, meaning it cannot contain thunks, but later code wants to add thunks to it (make it Lazy). Disciple tracks what objects can or cannot be thunks, because if the backend doesn't have to check for them then the code can be a lot faster.<br /><br />The confusing thing is why it thinks this region is supposed to be Direct. Top level data (like global integers) default to Direct because when we create them we know they're fully evaluated, and not thunks. However, flop is a function, so it should be polymorphic in this aspect: it should not require its arguments to be Direct.<br /><br />Ok, looking in the interface file, the inferred type of flop is:<pre><br />flop :: Data.List.List %rTC0 (Int32 %rTC4) -(!e0 $c0)> Data.List.List %rTC0 (Int32 %rTC4)<br /> :- $c0 = ${t : %rTC0} + ${t : %rTC4}<br /> , !e0 = !Read %rTC0 + !Read %rTC4<br /> , Const %rTC0<br /> , Const %rTC4<br /> , Direct %rTC0<br /> , Direct %rTC4<br /></pre><br />What I'm looking at is the line: <pre><br /> $c0 = ${t : %rTC0} + ${t : %rTC4}<br /></pre>This is a closure constraint, and reveals that the inferencer thinks this function is referencing data in regions %rTC0 and %rTC4 from its environment.
This might be true if it was referring to some top level piece of data, but it's not. The closure constraint also reports the variable being used to refer to this data, namely "t".<br /><br />A closer inspection of flop reveals the underlying problem:<pre><br />flop lst@(h:_) = r<br /> where flop' 0 (t, r) = (t, r)<br /> flop' n ((h:t), r) = flop' (n-1) (t, h:r)<br /> (t, r) = flop' h (lst, t)<br /></pre><br />Here, Miha has been tricky and used value recursion in the last line. Notice that "t" appears on both the left and the right of the binding, but the binding isn't defining a recursive function -- it's a recursive value.<br /><br />Disciple doesn't support value recursion yet, as it's tricky to implement in a default-strict language, but I'd like to look into it in the future. Anyway, in Disciple today only function bindings can be recursive, so what <span style="font-style:italic;">should</span> have happened is that the renamer should have said that "t" is not in scope in the right of the binding. Instead, that screwup has gotten through the renamer, and now the inferencer thinks that "t" has been defined outside of "flop". It has then defaulted its region to Direct and Const, as there are no opposing Lazy or Mutable constraints.<br /><br />The work around is to rewrite this as an actual function binding, then use laziness to break the recursive loop:<pre><br />flop lst@(h:_) = snd (thing ())<br /> where flop' 0 (t, r) = (t, r)<br /> flop' n ((h:t), r) = flop' (n-1) (t, h:r)<br /> thing () = flop' h @ (lst, fst @ (thing @ ()))<br /></pre><br />A @ on the right of a binding means "lazy function application". For example, the expression (f @ x) builds a thunk holding pointers to the code for "f" and its argument "x". The programmer decides where to suspend expressions, and they are forced automatically by consumers as needed. 
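For comparison, Miha's value-recursive binding is legal Haskell exactly as written, because lazy evaluation ties the knot automatically. Here is a minimal GHC-runnable rendering of the same trick (the variable names are mine):

```haskell
-- Value recursion: 't' appears on both sides of its own binding.
-- Laziness delays the self-reference until 'flop'' has produced it,
-- so 'r' ends up as the reversed prefix with the untouched tail 't'
-- already attached.
flop :: [Int] -> [Int]
flop lst@(h:_) = r
  where
    flop' 0 (t, r)      = (t, r)
    flop' n ((x:t'), r) = flop' (n - 1) (t', x : r)
    (t, r) = flop' h (lst, t)   -- value recursion: fine in a lazy language

-- e.g. flop [3,1,2,4] == [2,1,3,4]
```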
I should probably use a different operator because it looks weird with respect to the @ on the left of bindings, but no matter.<br /><br />Changing this makes the program run fine, at least with an input size of 7 (the "n" in the main function).<pre><br />desire:ddc-head-devel benl$ bin/ddc -O --make test/60-Shootout/Fannkuch-Redex/MainShoot.hs <br />[1 of 1] Compiling MainShoot<br /><br />desire:ddc-head-devel benl$ ./a.out <br />228<br />Pfannkuchen(7) = 16</pre><br />Sadly, setting n >= 8 gives<pre><br />desire:ddc-head-devel benl$ ./a.out <br />*** DDC RTS PANIC! Slot stack overflow.<br />Abort trap<br /></pre><br /><h3>Explicit Laziness</h3><br />The "slot stack" is the stack that holds pointers to heap objects. This is sometimes called a "shadow stack" and the pointers are used as roots for the garbage collector. The reason it's overflowed is that Disciple is a default strict language, so our friends map, fold, filter etc evaluate the entire list when called. We can see where the stack has overflowed by running the program in GDB, the GNU debugger.<pre><br />desire:ddc-head-devel benl$ gdb a.out <br />GNU gdb 6.3.50-20050815 (Apple version gdb-1346) (Fri Sep 18 20:40:51 UTC 2009)<br />...<br />(gdb) run<br />Starting program: /Users/benl/devel/ddc/ddc-head-devel/a.out <br />Reading symbols for shared libraries ++. done<br />*** DDC RTS PANIC!
Slot stack overflow.<br /><br />Program received signal SIGABRT, Aborted.<br />0x93a74732 in __kill ()<br />(gdb) <br />(gdb) bt<br />#0 0x93a74732 in __kill ()<br />#1 0x93a74724 in kill$UNIX2003 ()<br />#2 0x93b0798d in raise ()<br />#3 0x93b1da44 in abort ()<br />#4 0x00012041 in _panicOutOfSlots ()<br />#5 0x00003a0c in Data_List__symCl ()<br />#6 0x00003c04 in Data_List_iterateL ()<br />#7 0x00014d17 in _forceStep ()<br />#8 0x00014db8 in _force ()<br />#9 0x000048a0 in Data_List_take ()<br />#10 0x000049c5 in Data_List_take ()<br />#11 0x0000363c in Data_Function__symDl ()<br />#12 0x0000679f in _permut_vCL2 ()<br />#13 0x0000434a in Data_List_concatMap ()<br />#14 0x00004362 in Data_List_concatMap ()<br />#15 0x00004362 in Data_List_concatMap ()<br />#16 0x00004362 in Data_List_concatMap ()<br />#17 0x00004362 in Data_List_concatMap ()<br />#18 0x00004362 in Data_List_concatMap ()<br />#19 0x00004362 in Data_List_concatMap ()<br />#20 0x00004362 in Data_List_concatMap ()<br />#21 0x00004362 in Data_List_concatMap ()<br />#22 0x00004362 in Data_List_concatMap ()<br />#23 0x00004362 in Data_List_concatMap ()<br />... 5000 more lines<br /><br /></pre>Usefully, DDC compiles via C and also uses the regular C stack for return addresses and unboxed data. The intermediate C files look like they almost could have been written by a human, and there is no evil mangler butchering the assembly code once it's been compiled. This means we get real stack traces, and I don't have the same sense of impending doom as when a GHC compiled program segfaults on me.<br /><br />From the stack trace, it looks a bit like it's run out of space in concatMap. 
There is no concatMap in the source code, but of course every Haskell hacker knows that list comprehensions are desugared to concatMap, so it's run out of space in the list comprehension.<br /><br />Disciple is default strict, so if you want producer/consumer style list processing to run in constant space using laziness, then you have to say so. Here are new versions of permut and main after rewriting the list comprehension to use a map, and converting to the lazy list functions concatL, mapL and zipL:<pre><br />permut n = foldr perm [[1..n]] [2..n] <br /> where perm x lst = concatL $ mapL (\l -> take x $ iterateL (rotate x) l) lst<br /><br />main () <br /> = do n = 10<br /> (chksm, mflops) <br /> = pfold (0,0) <br /> $ mapL (\(i, p) -> let flops = flopS p <br /> in (checksum i flops, flops))<br /> $ zipL [0..] (permut n)<br /><br /> putStr $ show chksm % "\nPfannkuchen(" % show n % ") = " % show mflops % "\n"<br /></pre><br />The functions concatL, mapL and zipL have similar definitions to the strict versions, but they're lazier. This is like the difference between foldl and foldl' in the Haskell libraries. iterateL builds an infinite list, so there is no strict version called plain "iterate".<br /><br />This makes n=8 work, but n=9 runs out of heap.<pre><br />desire:ddc-head-devel benl$ bin/ddc -O --make test/60-Shootout/Fannkuch-Redex/MainShoot.hs <br />[1 of 1] Compiling MainShoot<br />desire:ddc-head-devel benl$ ./a.out <br />*** DDC RTS PANIC! Out of heap space.<br /> current (full) heap size: 9999999 bytes<br /> could not allocate another: 16 bytes<br />Abort trap<br /></pre>In a real runtime system, when the runtime heap is full it would ask for more from the OS. Unfortunately, the Disciple runtime is less featureful, uses a fixed size heap, and when it's full it's dead. It does have a garbage collector, but when all data in the heap is live there's nothing more the GC can do. However, you can set the size of the heap on the command line.
Giving it 100Megs gets us n=9.<pre><br />desire:ddc-head-devel benl$ ./a.out +RTS -H 100000000<br />8629<br />Pfannkuchen(9) = 30<br /></pre><br /><br /><h3>The Treasure Finder</h3><br />Garbage collection is a misnomer. Garbage collectors don't collect unused data, they copy out live data to a new space. It'd be more accurate to call them "Treasure Finders". Anyway, for interest's sake we can examine the rate treasure is being found in our program. The DDC runtime supports some simple statistics like so:<pre><br />desire:ddc-head-devel benl$ ./a.out +RTS -H 100000000 -profile-gc<br />8629<br />Pfannkuchen(9) = 30<br />desire:ddc-head-devel benl$ cat ddc-rts.prof <br /><br />-- Garbage Collection<br /> collection count = 17<br /><br /> alloc total = 760,836,560 (bytes)<br /><br /> copy total = 1,034,618,572 (bytes)<br /> count = 68,351,051 (objects)<br /> avg object size = 15.137 (bytes)<br /><br /> process time user = 6.120 (s)<br /> system = 0.070 (s)<br /><br /> mutator total = 1.760 (s)<br /> user = 1.700 (s)<br /> system = 0.060 (s)<br /><br /> collector total = 4.430 (s)<br /> user = 4.420 (s)<br /> system = 0.010 (s)<br /><br /> time efficiency = 28.433 (%)<br /></pre><br />Heh, 28% time efficiency isn't the best. The Disciple GC is a simple two-space collector. It's not a generational GC and there is no nursery. The way the code is compiled also makes it hold onto far too much intermediate data. Fixing that would be equivalent to doing "register allocation" on the GC slot stack, so it doesn't treat an object as a root unless it really needs it.<br /><br />Of course, the great thing about having 8GB of RAM in your desktop machine is that you can achieve so much with a naive language implementation.
Let's compare against GHC to see how we're going:<pre><br />desire:ddc-head-devel benl$ /usr/bin/time ./a.out +RTS -H 1000000000<br />73196<br />Pfannkuchen(10) = 38<br /> 83.04 real 82.12 user 0.90 sys<br /><br /><br />desire:ddc-head-devel benl$ ghc -O2 --make test/60-Shootout/Fannkuch-Redex/MainGHC.hs -o Main<br />desire:ddc-head-devel benl$ /usr/bin/time ./Main 10<br />73196<br />Pfannkuchen(10) = 38<br /> 2.10 real 2.08 user 0.01 sys<br /></pre><br />That's when I went to bed.<br /><br><br /><br>http://disciple-devel.blogspot.com/2010/11/shootout-fannkuch-redux.htmlnoreply@blogger.com (Ben Lippmeier)0tag:blogger.com,1999:blog-301893373063372699.post-3310805120073891485Wed, 20 Oct 2010 06:53:00 +00002010-10-20T18:19:54.244+11:00Effect inferenceA few people have mentioned they are worried about the type signatures from my previous post.<br /><br />For a sig like:<pre><br />fillPrimes :: Array Int -> Int -> Int -> Int -> ()<br /></pre><br />The fact that there is no 'IO' constructor in that type doesn't mean that side effects aren't being tracked. The type inferencer knows what's going on, and so does the optimiser, you just don't have to write effect information in sigs if you don't want to.<br /><br />If we add all the information about effects and mutability we actually have:<pre><br />fillPrimes<br /> :: forall %r0 %r1 %r2 %r3 %r4<br /> . Array %r0 (Int %r1) -> Int %r2 -> Int %r3 -> Int %r4 -(!e0)> ()<br /> :- !e0 = !Read %r0 + !Read %r1 + !Write %r1 + !Read %r2 + !Read %r3 + !Read %r4 <br /> , Mutable %r1<br /></pre><br /><br />A term like Array %r0 (Int %r1) means that the fillPrimes accepts an array of boxed integers. The pointers (handles) to those integers are in some region named %r0, and the integers themselves are in region %r1. 
The effect term !Write %r1 reveals that fillPrimes updates those integers, which requires them to be mutable, hence the Mutable %r1 constraint.<br /><br />Writing sigs this way is a bit verbose, so we also support the following "short hand" syntax. This sig gets desugared to the previous one using some simple defaulting rules:<pre><br />fillPrimes :: Array (Int{modify}) -> Int{read} -> Int{read} -> Int{read} -> ()<br /></pre><br />Full information about effects and mutability is exported in the interface files. I'm currently working on a system whereby you can say "These are the effects I think this function has, but tell me if you think it has more." I'll post more about this when it's done (it half works now).http://disciple-devel.blogspot.com/2010/10/effect-inference.htmlnoreply@blogger.com (Ben Lippmeier)6tag:blogger.com,1999:blog-301893373063372699.post-5323508322367862068Sat, 16 Oct 2010 09:24:00 +00002010-10-16T21:39:16.840+11:00Code: Three primes programsThis post illustrates some of the programming styles possible in Disciple.<br /><br />We'll start with ye'olde primes program straight from the nofib benchmark suite. This is the exact code, except that for the sake of example I've hard wired it to show the 1500th prime instead of taking a command line argument.<br /><pre><br />suCC :: Int -> Int<br />suCC x = x + 1<br /><br />isdivs :: Int -> Int -> Bool<br />isdivs n x = mod x n /= 0<br /><br />the_filter :: [Int] -> [Int]<br />the_filter (n:ns) = filter (isdivs n) ns<br /><br />primes :: [Int]<br />primes = map head (iterate the_filter (iterate suCC 2))<br /><br />main = print $ primes !! 1500<br /></pre><br />This program builds an infinite list of primes, and then uses list indexing to take the 1500'th one. A couple of nice functional programming paradigms are used, namely higher order functions and laziness. 
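To make concrete what the laziness is buying here: iterate builds an infinite list, which is only workable because consumers demand finitely much of it. A small Haskell sketch of the contrast a default-strict language faces (the "strict" variant is my own simulation using seq, not a real library function):

```haskell
-- Lazy 'iterate' builds list cells only as they are demanded:
demandsThree :: [Int]
demandsThree = take 3 (iterate (+ 1) 2)   -- [2,3,4], terminates

-- A strict iterate, which forces its tail before returning a cell,
-- diverges on any use. This is why a default-strict language needs a
-- separate, explicitly lazy 'iterateL'.
iterateStrict :: (a -> a) -> a -> [a]
iterateStrict f x =
  let rest = iterateStrict f (f x)
  in rest `seq` (x : rest)   -- forcing an infinite tail never returns
```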
Disciple is a default-strict dialect of Haskell, so we can use almost the same code, except we have to be explicit when we want laziness:<br /><pre><br />suCC :: Int -> Int<br />suCC x = x + 1<br /><br />isdivs :: Int -> Int -> Bool<br />isdivs n x = mod x n /= 0<br /><br />the_filter :: [Int] -> [Int]<br />the_filter (n:ns) = filterL (isdivs n) ns<br /><br />primes :: [Int]<br />primes = mapL head (iterateL the_filter (iterateL suCC 2))<br /><br />main () = println $ show $ primes !! 1500<br /></pre><br />That's it. I've changed map, filter and iterate to mapL, filterL and iterateL, and wibbled the main function slightly, but it's otherwise the same code. The difference between something like map and mapL is like the difference between foldl and foldl' in Haskell: they're implemented almost the same way, but have slightly different behaviours. Check the base libs for details.<br /><br />Ok, so Disciple can express the dominant paradigms of Haskell; now it's time to subvert them. In the "Beyond Haskell" session at HIW 2010 we talked about how sometimes, when optimising a program in a high level language, you reach a point where you just have to drop down to a "lower level" language for performance reasons. This is like when you're writing a Ruby program and it's running so slowly that there's simply nothing to do but "shell out" to a C routine. This also happens in Haskell, but perhaps not as much.<br /><br />Anyway, one of the goals of Disciple is to avoid this problem if at all possible. 
If the lazy, purely functional version of primes just isn't fast enough for you, then let's write the same program using destructive update of a mutable array:<pre><br />-- | Check if an int is a multiple of any in a given array.<br />checkPrime :: Array Int -> Int -> Int -> Int -> Bool<br />checkPrime array high x n<br /> | n >= high = True<br /> | mod x array.(n) == 0 = False<br /> | otherwise = checkPrime array high x (n + 1)<br /><br /><br />-- | Fill an array with primes.<br />fillPrimes :: Array Int -> Int -> Int -> Int -> ()<br />fillPrimes primes max high i<br /> | high > max = ()<br /><br /> | checkPrime primes high i 0 <br /> = do primes.(high) := i<br /> fillPrimes primes max (high + 1) (i + 1)<br /><br /> | otherwise<br /> = fillPrimes primes max high (i + 1)<br /><br /><br />main ()<br /> = do -- We want the 1500'th prime.<br /> max = 1500<br /><br /> -- Start with an array containing the first prime as its first element.<br /> primes = generate&{Array Int} (max + 1) (\_ -> 0)<br /> primes.(0) := 2<br /> <br /> -- Fill the array with more primes.<br /> fillPrimes primes max 1 2<br /><br /> -- Take the last prime found as the answer.<br /> println $ show primes.(max)<br /></pre><br />The syntax primes.(max) means "return the max'th element of the primes array". The syntax primes.(high) := i means "destructively update the high'th element of the primes array to the value i". Updating an array requires the array to be mutable. It's possible to express this mutability constraint in the type signature, but it's not required, so I usually just leave it to the inferencer. The compiler knows that you're using side effects, and will optimise around them, but you usually don't have to say anything about it in source level type sigs. I'll talk about the cases when you do in another post. Note that we don't have to pollute the type sigs with the IO constructor just to use destructive update. 
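For comparison, here's a sketch of the same destructive-update algorithm in plain GHC Haskell, where the ST monad and a mutable unboxed array stand in for Disciple's region and effect machinery. The structure mirrors checkPrime and fillPrimes above; the `nthPrime` wrapper is my own name, not from the post:

```haskell
import Control.Monad.ST
import Data.Array.ST

-- Find the prime at index maxP by filling a mutable unboxed array,
-- mirroring the checkPrime/fillPrimes structure of the Disciple code.
nthPrime :: Int -> Int
nthPrime maxP = runST $ do
  primes <- newArray (0, maxP) 0 :: ST s (STUArray s Int Int)
  writeArray primes 0 2
  let -- Check whether x is a multiple of any prime found so far.
      checkPrime high x n
        | n >= high = return True
        | otherwise = do
            p <- readArray primes n
            if mod x p == 0
              then return False
              else checkPrime high x (n + 1)
      -- Fill the array with primes, trying candidates i in order.
      fillPrimes high i
        | high > maxP = return ()
        | otherwise = do
            ok <- checkPrime high i 0
            if ok
              then writeArray primes high i >> fillPrimes (high + 1) (i + 1)
              else fillPrimes high (i + 1)
  fillPrimes 1 2
  readArray primes maxP
```

Evaluating `nthPrime 1500` should give the same answer as the Disciple program; runST guarantees the mutation stays local, which is roughly the guarantee the effect inferencer provides without any monadic plumbing in the sigs.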
After all, checkPrime and fillPrimes aren't doing any IO...<br /><br />The above code is good, but it still feels like a Haskell program. I'm particularly looking at the tail calls to checkPrime and fillPrimes. This is how you express primitive looping in many functional languages, but it can become tedious when there are lots of state variables to pass back to each iteration. Here is another version of primes written in a more imperative style.<pre><br />-- | Check if an int is a multiple of any in a given array.<br />checkPrime :: Array Int -> Int -> Int -> Bool<br />checkPrime array high x<br /> = do n = 0<br /> isPrime = 1<br /><br /> while (n < high)<br /> do when (mod x array.(n) == 0)<br /> do isPrime := 0<br /> break<br /><br /> n := n + 1<br /><br /> isPrime /= 0<br /><br /><br />main ()<br /> = do -- We want the 1500'th prime.<br /> max = 1500<br /><br /> -- Start with an array containing the first prime as its first element.<br /> primes = generate&{Array Int} (max + 1) (\_ -> 0)<br /> primes.(0) := 2<br /> <br /> -- Fill the array with primes.<br /> high = 1<br /> i = 2<br /> while (high <= max)<br /> do when (checkPrime primes high i)<br /> do primes.(high) := i<br /> high := high + 1<br /><br /> i := i + 1<br /><br /> -- Take the last prime found as the answer.<br /> println $ show primes.(max)<br /></pre><br />Now we've got while loops, and break, even. Of course, the while syntax desugars to a simple higher order function that takes the loop body, and break just throws an exception to end the iteration. Such syntax should help the Python programmers feel at home, but Disciple is still lambdas all the way down.<br /><br />Finally, note that we started with a lazy, purely functional program but arrived at an unapologetically imperative program all in the same language. In my dreams people talk about "functional vs imperative" programs instead of "functional vs imperative" <span style="font-style:italic;">languages</span>. 
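To make the desugaring claim above concrete, here's a minimal Haskell sketch of `while` as an ordinary higher-order function, with `break` modelled as an exception that ends the iteration. The names `while`, `breakLoop` and `runLoop` are mine, not Disciple's actual library:

```haskell
import Control.Exception (Exception, catch, throwIO)
import Data.IORef

-- 'while' is just a higher-order function over a monadic
-- condition and a monadic loop body.
while :: IO Bool -> IO () -> IO ()
while cond body = do
  continue <- cond
  if continue then body >> while cond body else return ()

-- 'break' is modelled as a private exception type...
data Break = Break deriving Show
instance Exception Break

breakLoop :: IO a
breakLoop = throwIO Break

-- ...which the surrounding loop runner catches and discards.
runLoop :: IO () -> IO ()
runLoop loop = loop `catch` \Break -> return ()
```

With these, the imperative primes loop could be written against an IORef or mutable array in much the same shape as the Disciple version, just without the built-in syntax.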
<br /><br />Write the lazy functional program because it's shorter, cleaner, and easier to understand... but when the clock is tickin' and the space is leakin', then wouldn't it be nicer to "shell out" to the same language?<br /><br />PS: Code examples are part of the DDC test suite <a href="http://code.haskell.org/ddc/ddc-head/test/50-Defib/Primes/">here</a><br /><br />PPS: Of course the absolute performance of these programs compiled with DDC today is atrocious. This is a development blog after all, and we're still developing. They do work though. See <a href="http://trac.haskell.org/ddc/">the wiki</a> to find out how you can help.http://disciple-devel.blogspot.com/2010/10/code-three-primes-programs.htmlnoreply@blogger.com (Ben Lippmeier)0tag:blogger.com,1999:blog-301893373063372699.post-1466457928077864965Sat, 16 Oct 2010 03:22:00 +00002010-10-16T17:41:32.894+11:00Hacks: Type error beautification<br /><br />In this series of blog posts I'll name the ones that describe what I'm currently working on after "Hacks: whatever".<br /><br />I'm currently working on type error beautification. The problem is that without it you get error messages that look like this: <pre><br />./test/15-Typing/Signatures/LargerEffect/Test.ds:2:8<br /> Inferred type:<br /> add<br /> :: forall %rDE111 %rDE112 %rDE113<br /> . Int32 %rDE111 -> Int32 %rDE112 -(!eDE484 $cDE487)> Int32 %rDE113<br /> :- !eDE484 = !{!Read %rDE111; !Read %rDE112}<br /> , $cDE487 = Test.x : %rDE111<br /><br /> Is bigger than signature:<br /> add<br /> :: forall %rDE111 %rDE112 %rDE113<br /> . Int32 %rDE111 -> Int32 %rDE112 -(!eDE484 $cDE487)> Int32 %rDE113<br /> :- !eDE484 = !Read %rDE111<br /> , $cDE487 = xDE485 : %rDE111<br /><br /> Because !{!Read %rDE111; !Read %rDE112} does not match !Read %rDE111 <br /></pre><br />That's all fine, apart from the variable names like %rDE111 and $cDE487. 
Those are "dummy" variables that have been invented by the type elaborator, which fills in missing effect terms in type sigs provided by the user. For example, the source program that gave the above error message is actually:<br /><pre><br />add <: Int{read} -> Int -> Int<br />add x y = x + y<br /></pre><br />The type signature says that the function is permitted to read its first argument, where {read} is a shorthand way of giving the real region and effect term. The elaborator takes the shorthand sigs provided by the user, then fills in the missing bits using some simple defaulting rules, before passing them to the type checker proper. Of course, this function actually reads both of its arguments, hence the error message.<br /><br />The problem is that when the elaborator needs a new variable it just takes the next available name in the namespace, so the exact names you get depend on what other signatures the elaborator has already processed. This means that if you add a new signature to an imported module, the dummy variables in all the successive signatures change. That's fine for type checking, but it means that error messages presented to the user aren't consistent. It's become a problem because our regression tests compare error messages to what they were before, so now if you modify any of the base libraries, you have to accept a bunch of wibbles in the test suite.<br /><br />The solution is to rewrite the variables in each set of error messages to use consistent names like t0, t1, %r1, %r2 etc. before presenting them to the user. There are two complicating factors: if a variable is a "real" variable that came from the source program we don't want to rewrite it, and the rewritten names must not collide with real names from the source program.<br /><br />This is one of those PITAs you discover when you sit down and try to write a real compiler. No sexy semantics, no fancy extensions to the type system. 
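The core of such a pass is a walk over the variable occurrences in a message, mapping each invented name to a fresh canonical one in first-occurrence order. Here's a minimal Haskell sketch operating on a pre-tokenised list of variables; `isDummy`, `beautify` and the "DE"-plus-digits naming pattern are my guesses based on the message above, not DDC's actual code:

```haskell
import qualified Data.Map as Map
import Data.Char (isDigit)

-- An elaborator-invented name looks like "%rDE111": a namespace
-- prefix followed by "DE" and a run of digits.
isDummy :: String -> Bool
isDummy v = case break (== 'D') v of
  (_, 'D' : 'E' : ds) -> not (null ds) && all isDigit ds
  _                   -> False

-- Rewrite dummy variables to %r0, %r1, ... in first-occurrence
-- order, renaming repeated occurrences consistently and leaving
-- source-program names untouched. A real pass would also need to
-- check the fresh names against the source program's real names.
beautify :: [String] -> [String]
beautify = go Map.empty (0 :: Int)
  where
    go _ _ [] = []
    go seen n (v : vs)
      | not (isDummy v) = v : go seen n vs
      | otherwise = case Map.lookup v seen of
          Just v' -> v' : go seen n vs
          Nothing ->
            let v' = takeWhile (/= 'D') v ++ show n
            in  v' : go (Map.insert v v' seen) (n + 1) vs
```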
I need to format error messages properly so my regression tests don't break after every commit... but we press onward for the glory.http://disciple-devel.blogspot.com/2010/10/hacks-type-error-beautification.htmlnoreply@blogger.com (Ben Lippmeier)0tag:blogger.com,1999:blog-301893373063372699.post-2262800600424962439Sat, 16 Oct 2010 02:27:00 +00002010-10-16T21:51:14.599+11:00For Human ConsumptionOver the last year I've spent a lot of time cleaning up the internals of DDC, and I've reached a point where I'm no longer embarrassed about the implementation. It's still full of bugs and holes, but enough of the structure and design is there that others should be able to find their way around the code base without too much trouble. I could also use some help, so this blog is an attempt to publicise the project and hopefully convince some of you wonderful people that Disciple/DDC is indeed the way of the future, and that spending some time hacking on it would be a fun thing to do...<br /><br />I've been working on DDC since I started my PhD back in 2004, though there were several false starts and major rewrites along the way. Back then it was called "Trauma", because all the good names like "Clean", "Hope" and "Eden" were already taken, and I wanted a name that reflected what language design / compiler development / doing a PhD was really like. <br /><br />DDC as a "plan" started to congeal in 2006 when I had done enough reading and research to form a vision of where I wanted to go. The first patch in the repo is from Sunday September 9th 2007. Before then my "version control" consisted of making a tarball every couple of weeks. Prior to starting my PhD I worked as a C++ programmer for the Australian government, and my experience of ClearCase there was enough to put me off version control systems entirely until I met darcs while working on GHC at MSR in 2007.<br /><br />Anyway, from "Trauma" we now have "Disciplined Disciple". 
A few people have asked me about the awkwardly religious-sounding name, so I'll explain it here for the record. I chose this name to remind myself to spend more time reading papers than dreaming up crazy language features. Writing a compiler is a massive amount of work, and the only way I see myself ending up with a usable system is by sticking with the game plan and not getting sidetracked. Disciple is supposed to be Haskell - Laziness + Effect typing + Mutability Polymorphism. The way I see it, there are a huge number of ideas and libraries from the Haskell community that would really shine if we just shifted the base language a bit, and if you've seen the <a href="http://vimeo.com/15576718">"Beyond Haskell"</a> talk from HIW 2010 in Baltimore you'll know what I mean by that. I want to see the next version of Quake written ground-up in a functional language, with no cop-outs about how it's good enough just to influence the design of C#2++. I had a gut-full of pointer-based, monomorphic, imperative programming in my previous life and I'm not planning on going back.<br /><br />I won't live long enough to invent a whole new ecosystem or approach to writing programs myself, but if I stick to the game plan and follow the leaders wherever possible I hope that will be enough to push it through. I am the Disciplined Disciple.http://disciple-devel.blogspot.com/2010/10/for-human-consumption.htmlnoreply@blogger.com (Ben Lippmeier)1