Planet Scheme

August 28, 2015

Programming Praxis

Maximum Product Of Two Primes Less Than N

Today we have a fun little exercise based on prime numbers.

Given an integer n > 4, find the maximum product of two prime numbers such that the product is less than n. For instance, when n = 27, the maximum is 2 * 13 = 26, and when n = 50, the maximum is 7 * 7 = 49.

Your task is to write a program to find the maximum product of two primes less than n. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
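For concreteness, here is one possible sketch in Scheme (not necessarily the suggested solution). Since the smaller factor p of a product p*q < n satisfies p*p < n, it is enough to try every p up to the square root of n against every q with p*q < n, keeping the largest product where both are prime:

```scheme
;; A naive trial-division primality test.
(define (prime? n)
  (and (> n 1)
       (let loop ((d 2))
         (cond ((> (* d d) n) #t)
               ((zero? (remainder n d)) #f)
               (else (loop (+ d 1)))))))

;; Maximum p*q < n with p and q both prime.
(define (max-prime-product n)
  (let loop ((p 2) (best 0))
    (if (> (* p p) n)
        best
        (let inner ((q p) (best best))
          (if (>= (* p q) n)
              (loop (+ p 1) best)
              (inner (+ q 1)
                     (if (and (prime? p) (prime? q) (> (* p q) best))
                         (* p q)
                         best)))))))

(max-prime-product 27) ; => 26
(max-prime-product 50) ; => 49
```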

by programmingpraxis at August 28, 2015 09:00 AM

August 27, 2015

Joe Marshall

n-ary functions and argument lists

Suppose we have a binary operation Op2 = (lambda (left right) ...). If it is closed over some type, you can make expression trees.
(Op2 (Op2 <arg1> <arg2>)
     (Op2 <arg3> <arg4>))
If Op2 is associative as well, these are equal:
(Op2 (Op2 <arg1> <arg2>)
     (Op2 <arg3>
          (Op2 <arg4> <arg5>)))

(Op2 <arg1> 
     (Op2 (Op2 <arg2> <arg3>)
          (Op2 <arg4> <arg5>)))

This makes the tree structure irrelevant so long as the fringe of the tree stays in order, so we can flatten the tree by making an N-ary version of the binary operation:
(define (binary->nary Op2)
  (lambda (a1 a2 . an)
    (fold-left Op2 (Op2 a1 a2) an)))

((binary->nary Op2) <arg1> <arg2> <arg3> <arg4> <arg5> <arg6> etc.)
The value of this expression naturally depends upon the values of the arguments. Changing the argument list is highly likely to change the value of the entire expression. However, we can make certain changes to the argument list without changing the value of the entire expression. If we know what those changes are, we can manipulate the argument list before invoking the operator, and still get the same answer. Naturally, most of the changes we can make to the argument list depend on the specifics of Op2, but it turns out that some interesting changes are possible without knowing any specifics, only knowing a few high-level properties of Op2.

Obviously, the first thing we can do is reduce the argument list through evaluation. Simply replace the first two arguments with the value of (Op2 <arg1> <arg2>)
((binary->nary Op2) <arg1> <arg2> <arg3> <arg4> <arg5> <arg6> etc.) =

((binary->nary Op2) (Op2 <arg1> <arg2>) <arg3> <arg4> <arg5> <arg6> etc.) =

((binary->nary Op2) <result> <arg3> <arg4> <arg5> <arg6> etc.)
Since Op2 is associative, we can replace any 2 adjacent arguments with their combination.

Now suppose there is an identity element among the arguments we can give to Op2.
(Op2 <arg> id) = <arg>  and
(Op2 id <arg>) = <arg>
We can do this:
(define (binary->nary Op2)
  (lambda an
    (fold-left Op2 id an)))

(define Op (binary->nary Op2))
Which is cleaner than the original. We also get a new way to manipulate the argument list to Op. We can add the identity element anywhere we wish, or we can delete the identity element wherever we find one.
(Op <arg1> <arg2> <arg3> Id <arg4> <arg5> <arg6> etc.) =

(Op <arg1> <arg2> <arg3> <arg4> <arg5> <arg6> etc.) =

(Op <arg1> Id <arg2> <arg3> <arg4> <arg5> Id <arg6> etc.)

One more restriction. We want Op2 to be invertible. Suppose (Op2 <arg1> <arg2>) = <result>. Op2 is invertible if, given any two of <arg1>, <arg2>, and <result>, the third can be uniquely determined. If you have one <arg> and a <result>, you can run things backwards and get the other <arg>.

Requiring Op2 to be invertible has many consequences, some of them quite non-obvious. An obvious consequence, though, is that we can define inverse elements. If (Op2 <argA> <argB>) = Id, then we say that <argB> is the inverse of <argA> (and vice versa). We find the inverse of an argument by fixing the output as the identity element and running Op2 backwards to find the other argument.

This gives us the final way to manipulate the argument list. If an element appears next to its inverse, both can be removed:
(Op <arg1> <arg2> <arg3> (inverse <arg3>) <arg5> <arg6> etc.) =
(Op <arg1> <arg2> (Op2 <arg3> (inverse <arg3>)) <arg5> <arg6> etc.) =
(Op <arg1> <arg2> Id <arg5> <arg6> etc.) =
(Op <arg1> <arg2> <arg5> <arg6> etc.)

So here are all the restrictions on Op2:
  • Closed over a set of arguments
  • Associative
  • Has an identity argument
  • Invertible
If Op2 has these properties (and a lot of binary operations do), then we can define an n-ary Op and play with its argument list. If you do this, you might notice that it looks kind of familiar:
(op f g (inverse g) j id h) = (op f j id h) = (op f j h)

The argument list sort of looks like a function pipeline. The allowed manipulations of the argument list are compatible with a function pipeline, too. In fact, it could be a function pipeline if Op is the compose operator, and f, g, and h are appropriate invertible unary functions. But whatever it is, the point is that it looks enough like a function pipeline that we can pretend that it is.
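As a concrete (if mundane) instance of all this, take Op2 = +, with 0 as the identity and negation as the inverse. The variant below takes the identity as an explicit argument (my own small twist; the post leaves id free), and assumes MIT/GNU Scheme's fold-left as in the code above:

```scheme
(define (binary->nary Op2 id)
  (lambda an (fold-left Op2 id an)))

(define plus (binary->nary + 0))

(plus 1 2 3)       ; => 6
(plus 1 0 2 0 3)   ; => 6 ; identity elements may be added or deleted freely
(plus 1 2 5 -5 3)  ; => 6 ; adjacent inverses cancel
```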

by Joe Marshall at August 27, 2015 06:18 PM

August 25, 2015

Programming Praxis

Collect Sets Of Ranges

A frequent idiom in data processing is the control-break idiom, where some processing must be done every time there is a change in some value. A simple example comes from collecting ranges, for instance, converting the sequence 0, 1, 2, 7, 21, 22, 108, 109 to the ranges 0-2, 7, 21-22, 108-109, where a break occurs whenever two numbers aren’t consecutive.

Writing control-break programs can be difficult, for two reasons. First, you don’t know there is a break until you see the next record after the break, so you either need to look ahead in the input or keep track of what you have seen. Second, there is an implied break at the end of the input, which occurs when there is no record at all. Depending on the situation, either or both of those can be tricky.

Your task is to write a program that converts a sequence to a set of ranges, as shown above. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
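One possible sketch in Scheme (not necessarily the suggested solution): carry the start and end of the current run through the loop, emitting a range at each break and once more when the input runs out:

```scheme
(define (collect-ranges xs)
  (if (null? xs)
      '()
      (let loop ((xs (cdr xs)) (start (car xs)) (end (car xs)) (acc '()))
        (cond ((null? xs)                 ; the implied break at end of input
               (reverse (cons (cons start end) acc)))
              ((= (car xs) (+ end 1))     ; consecutive: extend current range
               (loop (cdr xs) start (car xs) acc))
              (else                       ; break: emit range, start a new one
               (loop (cdr xs) (car xs) (car xs)
                     (cons (cons start end) acc)))))))

(collect-ranges '(0 1 2 7 21 22 108 109))
; => ((0 . 2) (7 . 7) (21 . 22) (108 . 109))
```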

by programmingpraxis at August 25, 2015 09:00 AM

August 23, 2015

Joe Marshall

Playing with linear fractional transforms

I wanted to play with continued fractions and linear fractional transforms, so I wrote some code to make it easier. A linear fractional transform (also called a homographic function or Möbius transform) is a function of the form x => (Ax + B)/(Cx + D), where A, B, C, and D are constant coefficients.
In MIT/GNU Scheme:
;; x => (3x + 1)/(x + 4)

1 ]=> (define foo (make-linear-fractional-transform 3 1 1 4))

;Value: foo

1 ]=> (foo 2)

;Value: 7/6
I used an entity object so, in addition to invoking it on a number, there are two more ways to manipulate a linear fractional transform:
;; A predicate
1 ]=> (linear-fractional-transform? foo)

;Value: #t

;; And a CPS accessor
1 ]=> (lft/spread-coefficients foo (lambda (A B C D) (list A B C D)))

;Value 307: (3 1 1 4)
I also added a print method:
1 ]=> foo

;Value 308: #[linear-fractional-transform 308 (3x + 1)/(x + 4)]

As I mentioned in a prior post, you can partly apply a linear fractional transform:
1 ]=> foo

;Value 308: #[linear-fractional-transform 308 (3x + 1)/(x + 4)]

1 ]=> (lft/partly-apply foo 2)

;Value 315: #[linear-fractional-transform 315 (7x + 3)/(6x + 1)]
Since I want to reason about applying a linear fractional transform to an argument, I wrote an abstraction for that:
;; Apply LFT foo to continued fraction phi.
1 ]=> (make-lft-application foo phi)

;Value 311: #[lft-application 311 (3x + 1)/(x + 4) {1 ...}]
So now we can write a procedure that takes an application, peels off the first term in the continued fraction, feeds it to the linear fractional transform, and creates a new application:
(define (application/step lft-application)
  (let ((lft (application-function lft-application))
        (cf  (application-continued-fraction lft-application)))
    (make-lft-application
     (lft/partly-apply lft (head cf))
     (tail cf))))

1 ]=> (define appl (make-lft-application lft/identity sqrt-two))

;Value: appl

1 ]=> appl

;Value 317: #[lft-application 317 x {1 2 2 2 ...}]

1 ]=> (application/step appl)

;Value 318: #[lft-application 318 (x + 1)/x {2 2 2 ...}]

1 ]=> (application/step (application/step appl))

;Value 319: #[lft-application 319 (3x + 1)/(2x + 1) {2 2 ...}]

1 ]=> (application/step (application/step (application/step appl)))

;Value 320: #[lft-application 320 (7x + 3)/(5x + 2) {2 ...}]
All these lft-application objects should be (numerically) equal.

In an earlier post I showed how a linear fractional transform can be partly evaluated by determining the integer-part of the transform. The integer-part of an application is the integer-part of the application-function. You can get the fractional part by subtracting the integer-part.

A digression

If you apply a linear fractional transform to zero, it's obvious the answer is B/D. On the other hand, if you apply a transform to a sufficiently large x, you'll get as close as you want to A/C.

If the denominator of a linear fractional transform is zero for some value of x, there should be a vertical asymptote at that point. That's the pole of the transform. The pole is at (- D)/C. The pole will be at zero if D is zero. It will be at a negative number if D and C are the same sign and at a positive number if D and C differ in sign.

If you take a linear fractional transform with a pole at a negative number, and you sweep the input from 0 up to infinity, the output will vary smoothly and monotonically from B/D toward A/C, staying between the two values at all times.
1 ]=> lft1

;Value 675: #[linear-fractional-transform 675 (3x + 1)/(4x + 2)]

1 ]=> (lft1 0)

;Value: 1/2

1 ]=> (lft1 1000000)

;Value: 3000001/4000002

1 ]=> (exact->inexact 3000001/4000002)

;Value: .7499998750000625

(On the other hand, if the pole is at a positive number, as you sweep the input from 0 up to infinity, the output starts at B/D, but flees away from A/C until the input gets to the pole. Then the output approaches A/C, but from the opposite direction. In any case, if the pole is positive, then the output will vary from B/D and eventually approach A/C, but never being between them.)

We can represent intervals as linear fractional transforms. The endpoints of the interval are A/C and B/D.

To get the width of the interval, just subtract the endpoints: A/C - B/D = (A*D - B*C)/(C * D)
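For example, taking the transform (3x + 1)/(4x + 2) shown earlier, the endpoints are 3/4 and 1/2, and the width works out as:

```scheme
;; (A*D - B*C)/(C*D) for coefficients A, B, C, D.
(define (lft-width A B C D)
  (/ (- (* A D) (* B C)) (* C D)))

(lft-width 3 1 4 2) ; => 1/4, i.e. 3/4 - 1/2
```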

Imagine you are performing some calculation with continued fractions. Since there may be an infinite number of terms, the calculation will proceed incrementally, using up terms as needed and generating other terms. So you can think of a more complex calculation as a tree, where a node in the tree is a linear fractional transform and the continued fraction terms flow between the nodes.

When we do an application/step, we move a term from the continued fraction into the linear fractional transform. Now consider a term as an element of information. We've moved this information out of the continued fraction and into the linear fractional transform. The information is apparently "stored" in the linear fractional transform until it is extracted as an output term for the next stage in the computation. But if you think about it, the "format" of the information is different depending upon whether it is flowing between nodes, where it is a series of continued fraction terms, or if it is stored in a linear fractional transform, where it is encoded somehow in the values of the coefficients. The act of partly-evaluating a linear fractional transform is in effect "encoding" some information as a continued fraction term. Partly applying a linear fractional transform is in effect "decoding" the continued fraction term (presumably generated by an earlier computation). Why not change to a more efficient communication channel?

When a node sends information to another node, instead of converting the information to several continued fraction terms to be assembled at the other end, we'll send the information as a single linear fractional transform. It contains the desired information already in the right "format". (See Peter Potts's work.)

A digression

What happens if we compose two linear fractional transforms?
(compose (lambda (x)
           (/ (+ (* A x) B)
              (+ (* C x) D)))
         (lambda (y)
           (/ (+ (* p y) q)
              (+ (* r y) s))))
We get
(lambda (x)
   (/ (+ (* A (/ (+ (* p x) q)
                 (+ (* r x) s))) B)
      (+ (* C (/ (+ (* p x) q)
                 (+ (* r x) s))) D)))
Which, after some algebra, turns into this:
(lambda (x)
   (/ (+ (* (+ (* A p) (* B r)) x) (+ (* A q) (* B s)))
      (+ (* (+ (* C p) (* D r)) x) (+ (* C q) (* D s)))))
Which is equivalent to this:
(lambda (x)
  (let ((E (+ (* A p) (* B r)))
        (F (+ (* A q) (* B s)))
        (G (+ (* C p) (* D r)))
        (H (+ (* C q) (* D s))))

    (/ (+ (* E x) F)
       (+ (* G x) H))))
Which you can see is another linear fractional transform.
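In other words, composing two transforms just combines their coefficients, exactly like multiplying 2x2 matrices. A hypothetical sketch, representing a transform by its raw coefficient list (A B C D) rather than by the entity objects used above:

```scheme
(define (compose-coefficients f g)
  (let ((A (car f)) (B (cadr f)) (C (caddr f)) (D (cadddr f))
        (p (car g)) (q (cadr g)) (r (caddr g)) (s (cadddr g)))
    (list (+ (* A p) (* B r)) (+ (* A q) (* B s))
          (+ (* C p) (* D r)) (+ (* C q) (* D s)))))

;; Composing (3x + 1)/(4x + 2) with (5x + 1)/x:
(compose-coefficients '(3 1 4 2) '(5 1 1 0)) ; => (16 3 22 4)
```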

If we have a linear fractional transform
(lambda (x)
  (/ (+ (* A x) B)
     (+ (* C x) D)))
Its inverse (if it has one) is:
(lambda (x)
  (/ (+ (* D x) (- B))
     (+ (* (- C) x) A)))
Which is yet another linear fractional transform. These things are everywhere.

Let's see, if we have a binary operation binop that is
  1. Closed over some set, i.e. given any two elements of the set, the operation applied to the elements produces another element of the set. In other words, binop takes two arguments, returns one value, and the type of both arguments and return value are the same.
  2. Associative, i.e. (binop a (binop b c)) = (binop (binop a b) c)
  3. Has an identity argument. A "left identity" is an argument such that (binop left-identity x) = x. A "right identity" is an argument such that (binop x right-identity) = x. An "identity" argument works as both a left and a right identity.
  4. Is invertible, i.e. for any objects a and b, there is a unique object x such that (binop a x) = b and a unique object y such that (binop y b) = a

then we have a group.

The compose function is a binary operation. When you compose a linear fractional transform with another, you get a third linear fractional transform.
1 ]=> (define lft1 (make-linear-fractional-transform 3 1 4 2))

;Value: lft1

1 ]=> (define lft2 (make-linear-fractional-transform 5 1 1 0))

;Value: lft2

1 ]=> (lft/compose lft1 lft2)

;Value 662: #[linear-fractional-transform 662 (16x + 3)/(22x + 4)]
Linear fractional transforms are associative.
1 ]=> (define lft3 (make-linear-fractional-transform 7 2 1 3))

;Value: lft3

1 ]=> (lft/compose lft1 (lft/compose lft2 lft3))

;Value 663: #[linear-fractional-transform 663 (115x + 41)/(158x + 56)]

1 ]=> (lft/compose (lft/compose lft1 lft2) lft3)

;Value 664: #[linear-fractional-transform 664 (115x + 41)/(158x + 56)]

The linear fractional transform (make-linear-fractional-transform 1 0 0 1) is both a left and right identity when composed with another linear fractional transform.
1 ]=> (define lft/identity (make-linear-fractional-transform 1 0 0 1))

;Value: lft/identity

1 ]=> (lft/compose lft/identity lft1)

;Value 665: #[linear-fractional-transform 665 (3x + 1)/(4x + 2)]

1 ]=> (lft/compose lft1 lft/identity)

;Value 666: #[linear-fractional-transform 666 (3x + 1)/(4x + 2)]
Given lft1 and lft2, there is a unique linear fractional transform x such that (compose lft1 x) = lft2, and a unique linear fractional transform y such that (compose y lft1) = lft2. x should be (compose (inverse lft1) lft2), and y should be (compose lft2 (inverse lft1))
1 ]=> lft1

;Value 675: #[linear-fractional-transform 675 (3x + 1)/(4x + 2)]

1 ]=> lft2

;Value 687: #[linear-fractional-transform 687 (5x + 1)/x]

1 ]=> (define x (lft/compose (lft/inverse lft1) lft2))

;Value: x

1 ]=> (lft/compose lft1 x)

;Value 690: #[linear-fractional-transform 690 (5x + 1)/x]

1 ]=> (define y (lft/compose lft2 (lft/inverse lft1)))

;Value: y

1 ]=> (lft/compose y lft1)

;Value 691: #[linear-fractional-transform 691 (5x + 1)/x]
It sure looks like linear fractional transforms form a group under function composition.
I guess it's time to learn a little group theory.

by Joe Marshall at August 23, 2015 07:23 PM

August 21, 2015

Programming Praxis

Two Homework Problems

I can see from my statistics that the new academic year is beginning. Again, as in a previous exercise, in the spirit of helping programming students who are just starting a new school year, we have two typical homework problems:

1. Given an array of positive integers, find the inflection point where the total of the integers before the inflection point and the total of the integers after the inflection point are least different. For instance, given the array [3, 7, 9, 8, 2, 5, 6], the inflection point is between the 9 and 8, which leaves a total of 19 before the inflection point and 21 after, a difference of 2.

2. Write a program that reads a file from disk and writes the last n lines of the file, where n is an input parameter.

Your task is to write programs to solve these problems. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
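For the first problem, one possible sketch in Scheme (not necessarily the suggested solution): compute the grand total once, then scan the list comparing the running prefix sum against the remainder; at each split, the difference between the two sides is |2*before - total|.

```scheme
;; Returns the number of elements before the best inflection point.
(define (best-split xs)
  (let ((total (apply + xs)))
    (let loop ((xs xs) (i 0) (before 0) (best-i 0) (best-diff total))
      (if (null? xs)
          best-i
          (let* ((before (+ before (car xs)))
                 (diff (abs (- (* 2 before) total))))
            (if (< diff best-diff)
                (loop (cdr xs) (+ i 1) before (+ i 1) diff)
                (loop (cdr xs) (+ i 1) before best-i best-diff)))))))

(best-split '(3 7 9 8 2 5 6))
; => 3: three elements (3 7 9 -> 19) before the split, (8 2 5 6 -> 21) after
```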

by programmingpraxis at August 21, 2015 09:00 AM

August 20, 2015

Greg Hendershott


If you’ve heard of Racket “at-expressions”, maybe you think they’re “that funny Scribble notation in which you write Racket documentation.”

In fact at-expressions are a general, alternative way to write s-expressions. They can be used in various handy ways.

Let’s look at using at-expressions for a few practical things like:

  • “string interpolation”
  • regular expressions
  • “here” strings

#lang at-exp ...

You can use the at-expression reader with a language by supplying at-exp before the language. Examples:

#lang at-exp racket
#lang at-exp typed/racket

In the examples below, make sure you’re using:

#lang at-exp racket


Before we talk more about at-expressions, note that racket/format provides the function ~a. (~a v) is a kind of shorthand for (format "~a" v), plus it offers many more formatting options.

#lang racket
(format "~a" "hi") ;"hi"
(~a "hi")          ;"hi"
(format "~a" 1)    ;"1"
(~a 1)             ;"1"

We’ll use ~a below.

Basic at-expressions

At-expressions are a very well thought-out system; you can read about the full syntax. For this post, we simply need to know that @func{string} is equivalent to (func "string"). So we can rewrite:

(~a "foo bar") ; "foo bar"


@~a{foo bar}   ; "foo bar"

(Note that ~a is the name of the function we’re calling. The ~ has nothing to do with at-expressions; it’s part of this function’s name.)

Also special characters like \ and " are automatically escaped:

@~a{A back slash \}   ; "A back slash \\"
@~a{"Double quotes"}  ; "\"Double quotes\""

Inside the curly brackets, you may use @ again to “escape” to any Racket expression. For example an expression like (+ 1 1):

@~a{The sum of one and one is @(+ 1 1).} ; "The sum of one and one is 2."

Or simply the name of a variable like x:

(define x 0)
@~a{x is @x} ; "x is 0"

String interpolation

You can use at-exps as the equivalent of “string interpolation” in some other languages:

(define x 0)
(define y "foo")

@~a{x is @x and y is @y} ; "x is 0 and y is foo"

Normally in Racket you’d write that as:

(format "x is ~a and y is ~a" x y)

Which is fine, albeit you have to flip between the ~as on the left and the values on the right, making sure they match up. The string interpolation style is arguably easier to write, to read, and to update later without making a mistake.

How about mixing formats, such as ~a (display) and ~v (print)? For example with format we can write

(format "x is ~a and y is ~v" x y) ; "x is 0 and y is \"foo\""

How can we do this using our at-exp? Well, since ~a is the outer function, it will display the value of any ~v inside. Remember that @ lets us “escape” to any Racket expression, not just a variable; it could be a function application. So:

@~a{x is @x and y is @(~v y)} ; "x is 0 and y is \"foo\""

You can also surround the Racket expression in | characters. This is useful if the expression needs to end next to plain text. You can demarcate the identifier from the text:

@~a{x is @|x| and y is @|y|!} ; "x is 0 and y is foo!"

The | keeps ! from being read as part of the identifier y.

Regular expressions

Do you enjoy writing regular expressions like #px"\\d\\.\\d"? Me neither.

Another useful example is avoiding the need to use \\ to get a \ in string literals. This is especially handy for regular expressions:

@pregexp{\d\.\d}  ; #px"\\d\\.\\d"

If you find pregexp too verbose, you could define a little alias:

(define px pregexp)
@px{\d\.\d}      ; #px"\\d\\.\\d"

“Here” strings

Like shells, Racket has “here” strings:

(define multi-line #<<EOF
Some multi-line
string literal.
EOF
)
multi-line ; "Some multi-line\nstring literal."

Cool. However the indentation is tricky. You get extra spaces if you do this:

(define multi-line #<<EOF
  Some multi-line
  string literal.
EOF
)
multi-line ; "  Some multi-line\n  string literal."


Also the EOF must be alone on a line and in column 0. You can’t let that get indented, and you can’t put the closing paren on the same line.

At-exps are more elegant and survive typical re-indentation:

(define multi-line @~a{Some multi-line
                       string literal})
multi-line ; "Some multi-line\nstring literal"

How to write a literal @

If @ is a magic escape character, how do you write a literal @?

  1. We want a string, "@".

  2. How do we escape to any Racket expression, including (say) a string? Using @.

  3. Therefore prepend a @ to "@": @"@"


So for example:

@~a{The email is foo@"@"} ; "The email is foo@"


This was a quick look at some practical ways to use at-expressions for more than writing Scribble documentation. Again, feel free to read up on the full syntax.

by Greg Hendershott at August 20, 2015 02:15 PM

August 18, 2015

Programming Praxis

K-Factorials And Factorions

We study today a topic from recreational mathematics. Factorions are numbers that are equal to the sum of the factorials of their digits. For instance, 145 is a factorion because 1! + 4! + 5! = 1 + 24 + 120 = 145. There are four factorions to base 10: 1, 2, 145 and 40585.

A double factorial, written n!!, is the product of all integers less than or equal to n that are congruent to n (mod 2). A triple factorial, written n!!!, is the product of all integers less than or equal to n that are congruent to n (mod 3). And so on for higher factorials. Thus, a double factorion is a number that is equal to the sum of the double factorials of its digits, a triple factorion is a number that is equal to the sum of the triple factorials of its digits, and so on. As an example, 81 is a triple factorion because 8!!! + 1!!! = 8*5*2 + 1 = 80 + 1 = 81.

It is also possible to consider factorions to bases other than 10. For instance, there are four factorions to base 6: 1, 2, 25, 26.

Your task is to write functions that allow you to explore the strange world of k-factorials and factorions; use your imagination to think of tasks that interest you. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
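To get started, here is one possible sketch (assuming SRFI-1's filter and iota): a k-factorial steps down by k, and a k-factorion check sums the k-factorials of a number's digits in a given base.

```scheme
(define (k-factorial n k)
  (if (<= n 0) 1 (* n (k-factorial (- n k) k))))

(define (digits n base)
  (if (< n base)
      (list n)
      (cons (remainder n base) (digits (quotient n base) base))))

(define (factorion? n k base)
  (= n (apply + (map (lambda (d) (k-factorial d k)) (digits n base)))))

(factorion? 81 3 10)                                     ; => #t
(filter (lambda (n) (factorion? n 1 10)) (iota 50000 1)) ; => (1 2 145 40585)
```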

by programmingpraxis at August 18, 2015 09:00 AM

August 16, 2015

Peter Bex

CHICKEN internals: data representation

In my earlier post about the garbage collector, I lied a little bit about the data representation that CHICKEN uses. At the end of the post I briefly mentioned how CHICKEN really stores objects. If you want to fully understand the way CHICKEN works, it is important to have a good grasp on how it stores data internally.

Basic idea

CHICKEN attempts to store data in the most "native" way it can. Even though it's written in C, it tries hard to use machine words everywhere. So on a 32-bit machine, the native code that's eventually generated will use 32-bit wide integers and pointers. On a 64-bit machine it will use 64-bit wide integers and pointers.

This is known as a C_word, which is usually defined as an int or a long, depending on the platform. By the way, the C_ prefix stands for CHICKEN, not the C language. Every Scheme value is represented as a C_word internally. To understand how this can work, you need to know that there are roughly two kinds of objects.

Immediate values

First, there are the immediate values. These are the typical "atomic" values that come up a lot in computations. It is important to represent these as efficiently as possible, so they are packed directly in a C_word. This includes booleans, the empty list, small integers (these are called fixnums), characters and a few other special values.

Because these values are represented directly by a C_word, they can be compared in one instruction: eq? in Scheme. These values do not need to be heap-allocated: they fit directly in a register, and can be passed around "by value" in C. This also means they don't need to be tracked by the garbage collector!

At a high enough level, these values simply look like this:

This doesn't really show anything, does it? Well, bear with me...

Block objects

The other kind of value is the block object. This is a value that is represented as a pointer to a structure that contains a header and a variable-length data block.

The data block can conceptually be one of two types. In case of a string or srfi-4 object, the data block is simply an opaque "blob" or byte-vector. In most other cases, the block is a compound value consisting of other Scheme objects. Typical examples are pairs, vectors and records.

Because these values are heap-allocated, two distinct objects are not stored at the same memory address, even if they store the same value. That's why comparing their values is a complex operation. This operation is either equal? for deep structural comparison, or eqv? for value comparisons of numbers and symbols.

The R5RS specification explains that the difference between eq? and eqv? is not necessarily the same across Scheme implementations. For example, in CHICKEN, eq? can be used to compare characters and fixnums, because they are stored as immediate values. Portable programs should not rely on that. If you use eq? on block objects, their pointers will be compared. That means it checks whether they are one and the same object. This can be a useful operation in its own right.
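A quick illustration of the distinction (the first two results are what CHICKEN gives, since it stores fixnums and characters as immediates; as noted, portable code should not rely on them):

```scheme
(eq? 42 42)                ; => #t in CHICKEN: fixnums are immediate values
(eq? #\a #\a)              ; => #t in CHICKEN: characters are immediate values
(eq? (list 1) (list 1))    ; => #f: two distinct heap-allocated block objects
(equal? (list 1) (list 1)) ; => #t: deep structural comparison
```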

Objects represented by data blocks also have to be tracked by the garbage collector: if there are still references to the block, its data must be copied (recursively) to keep it alive across GC events.

Here are some "high-level" examples of block objects:

This picture should look somewhat familiar to students of SICP: it is reminiscent of the box-and-pointer notation used to illustrate the structure of lists. The boxes containing green text represent the object headers. The header indicates the type of object and the object's size. It also determines whether the object's data block is a byte block or a block containing Scheme objects: if it contains Scheme objects, the header tells us how many slots (locations for storing Scheme objects) the object has. Byte blocks, on the other hand, are opaque and can contain any data. Their size is stored as a byte count.

From top to bottom, left to right, these represent the following values:

  • (#\a . #\b) is a pair containing the character "a" in its car and "b" in its cdr.
  • #(#f 123 456 #f 42) is a regular Scheme vector containing fixnums and false values.
  • "hello" is a string consisting of 5 characters (strings are treated as byte vectors in CHICKEN).
  • 12.5 is an inexact representation of the number twelve and a half (a "flonum"). This is a byte block storing the raw byte value of a C double.
  • ("hello" . (12.5 . ())) is the first pair of a proper list which contains a string and a flonum.
  • (12.5 . ()) is the cdr of that list; a pair containing a number and the end-of-list marker.

The final two pair objects show that slots (like any C_word) can hold not only immediate values, but also pointers to block objects. This leads us to the question: how to differentiate between a pointer to an object and an immediate object?

Bit fiddling

Most platforms require pointers to words to be aligned on a word boundary. Thus, on a 32-bit machine, memory addresses will always have zero in the lower 2 bits, because we can only point to multiples of 4 bytes. On a 64-bit machine, word addresses will have zero in the lower 3 bits, because we can only point to multiples of 8 bytes.

Because the lower two bits are never used, we can perform a simple trick: any value that has either of the lower two bits set cannot be a word pointer, so we enforce immediate objects to have either bit set. It may feel like a gross hack to people who are used to working with "clean", high-level C code, but it is a technique which goes back a long way: Orbit, one of the earliest optimising compilers for Scheme, did exactly the same thing. Other modern Schemes like Larceny and Gambit do the same thing. Even Scheme48, which is probably the cleanest Scheme implementation, uses tagged words. Other Lisps use this representation as well. See Steel Bank Common Lisp, for example.

Many other dynamic languages don't use a packed data representation like this. Many prefer the simpler but bulkier struct representation. At the other end of the spectrum, we have statically typed, non-garbage collected languages. They generally don't need to store the type of a value along with it. Instead, they can directly store the "un-boxed" value in memory. This, and the relation to garbage collection, is explained rather well in Appel's 1989 paper "Runtime Tags Aren't Necessary" (sorry, this is in PostScript format).

Representation of objects

We've learned how CHICKEN distinguishes between pointers to (block) objects and immediate values. Now we will look into the nitty-gritty details of the object representation.

We can make the following breakdown of bit patterns (assuming a 32-bit platform): block object pointers end in 00, fixnums end in 1, and other immediate values end in 10.

This shows that the lower two bits can be used to distinguish between block objects (zero) and immediate objects (nonzero). For immediate objects, the low bit can be used to distinguish between fixnum objects and other kinds of immediate objects. The remaining bits are used for representing the object being stored.

Fixnums are distinguished from "other immediate" values because fixnums are so incredibly common: they are used for indexing into strings, loop counters and many calculations. These have to be represented as efficiently as possible while storing the widest possible range of values. Run time type checking for fixnums should use as few CPU instructions as possible.

The "other immediate" types are further differentiated through the top two bits of the lower nibble: 0110 for booleans, 1010 for characters, and 1110 for special objects.

The unused "other immediate" type of 0010 is reserved for future use. To get a good feel for the representation of immediates, let us look at a few example bit patterns. I'll also show you how to construct them in C.

Bit patterns of immediate values


Fixnums

These small integer values are stored in regular old two's complement representation, like the CPU uses. The lowest bit is always 1, due to the fixnum tag bit. The highest bit is used to determine the sign of the number.

The C_fix() macro shifts its argument one bit to the left, and sets the lower bit through a bit-wise OR with 1. To convert a Scheme fixnum back to a C integer, you can use the C_unfix() macro. This shifts its argument one bit to the right.
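Modelled in Scheme (a sketch of the arithmetic only, ignoring the fixed word width), the two macros do this:

```scheme
(define (c-fix n)     ; C_fix: shift left one bit, set the tag bit: (n << 1) | 1
  (+ (* n 2) 1))

(define (c-unfix w)   ; C_unfix: arithmetic shift right one bit: w >> 1
  (quotient (- w 1) 2))

(c-fix 42)            ; => 85
(c-unfix (c-fix -7))  ; => -7
```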

You might wonder what happens when you calculate or enter a very large integer. In CHICKEN 4, it will be coerced to a flonum. In CHICKEN 5, it will be stored as a bignum. Bignums are block objects, not immediates, because they may be arbitrarily large.


Booleans

That's a very large bit space for only two values. However, reserving a special type tag just for booleans simplifies type detection code: we only have to compare the lower four bits with 0110 to check whether an object is a boolean.


Characters do not make full use of the available bits, because the lower byte's high nibble is always 0000. This means that only 24 bits are available for representing the character on 32-bit platforms. Luckily, this is enough for representing the full Unicode range. If Unicode ever starts using up a bigger code space, we can always sneak in 4 more bits.

Special objects

  • the empty list '()
  • the undefined value
  • the unbound variable marker
  • the end-of-file object
This list is exhaustive: currently there are only four special objects. There is a lot of room for adding other special objects, if that ever becomes necessary.

The "unbound variable" representation cannot be captured by a program: when it is evaluated, it immediately raises an exception. This is its intended function.

A closer look at block objects

Now that we know all about immediate values, let's turn to block objects. These are represented by a pointer to a C structure with a header and a data block. Slightly simplified, it looks like this:

#define C_uword  unsigned C_word
#define C_header C_uword

typedef struct
{
  C_header header;
  C_word data[];    /* Variable-length array: header determines length */
} C_SCHEME_BLOCK;

The header's bit pattern is broken up into three parts:

The bottom 24 bits encode the size of the object. On 64-bit machines, the bottom 56 bits are used for the size. The middle 4 bits encode the type of the object. The top 4 bits encode special properties to make the garbage collector's work easier:

  • C_GC_FORWARDING_BIT indicates this object has been forwarded elsewhere. To find the object at its new location, the entire header is shifted to the left (which shifts out this bit). Then, the value is reinterpreted as a pointer. Remember, the lowest two bits of word pointers are always zero, so we can do this with impunity!
  • C_BYTEBLOCK_BIT indicates this is a byte blob (size bits are interpreted in bytes, not words).
  • C_SPECIALBLOCK_BIT indicates that the first slot is special and should be skipped by the GC.
  • C_8ALIGN_BIT indicates that for this object, alignment must be maintained at an 8-byte boundary.

The type bits are assigned incrementally. There is room for 16 types, only 2 of which are currently unused. Let's look at the definitions, which should also help to explain the practical use of the latter 3 GC bits:

#define C_SYMBOL_TYPE            (0x01000000L)
#define C_STRING_TYPE            (0x02000000L | C_BYTEBLOCK_BIT)
#define C_PAIR_TYPE              (0x03000000L)
#define C_CLOSURE_TYPE           (0x04000000L | C_SPECIALBLOCK_BIT)
#define C_FLONUM_TYPE            (0x05000000L | C_BYTEBLOCK_BIT | C_8ALIGN_BIT)
/*      unused                   (0x06000000L ...) */
#define C_PORT_TYPE              (0x07000000L | C_SPECIALBLOCK_BIT)
#define C_STRUCTURE_TYPE         (0x08000000L)
#define C_POINTER_TYPE           (0x09000000L | C_SPECIALBLOCK_BIT)
#define C_LOCATIVE_TYPE          (0x0a000000L | C_SPECIALBLOCK_BIT)
#define C_TAGGED_POINTER_TYPE    (0x0b000000L | C_SPECIALBLOCK_BIT)
#define C_SWIG_POINTER_TYPE      (0x0c000000L | C_SPECIALBLOCK_BIT)
#define C_LAMBDA_INFO_TYPE       (0x0d000000L | C_BYTEBLOCK_BIT)
/*      unused                   (0x0e000000L ...) */
#define C_BUCKET_TYPE            (0x0f000000L)

Most of the types should be self-explanatory to a seasoned Schemer, but a few things deserve further explanation.

You'll note that in the STRING type tag, C_BYTEBLOCK_BIT is also set, for obvious reasons: strings do not consist of slots containing Scheme values, but of bytes, which are opaque. Because the header's size bits store the length in bytes instead of in words, we can spot a very important limitation: CHICKEN strings can only hold 16 MiB of data on a 32-bit machine (on a 64-bit machine, strings are "limited" to 65536 TiB).

The CLOSURE type uses C_SPECIALBLOCK_BIT. This indicates to the garbage collector that the first slot contains a raw non-Scheme value. In the case of a closure, it contains a pointer to a C function. The other slots contain free variables that were closed over ("captured") by the lambda, which are normal Scheme objects. The compiled C function "knows" which variable lives in which slot.

The FLONUM type uses C_BYTEBLOCK_BIT, because an un-boxed C double value is not a Scheme object: we want to treat the data as an opaque blob. On a 32-bit system, the double will take up two machine words, so we can't use C_SPECIALBLOCK_BIT. The header will therefore hold the value 8 as its size. It also has another GC bit: C_8ALIGN_BIT. This ensures that the 64-bit double is aligned on a 8-byte boundary, to avoid unaligned access on 32-bit systems. This adds some complexity to garbage collection and memory allocation.

The STRUCTURE type refers to a SRFI-9 type of record object. Its slots hold the record's fields, and the accessors and constructors "know" which field is stored at which index.

The POINTER type holds a raw C pointer inside a Scheme object. Again, because C pointers are not Scheme objects, the object's first (and only) slot is treated specially, via C_SPECIALBLOCK_BIT.

The LOCATIVE type represents a rather complicated object. It acts a bit like a pointer into a slab of memory. You can use it as a single value which represents a location inside another block object. This can then be used as an argument to a foreign function that expects a pointer. Its first slot holds a raw pointer. The other slots hold the offset, the type of pointer (encoded as fixnum) and the original object, unless it is a weak reference.

The TAGGED_POINTER type is exactly like POINTER, but it has an extra user-defined tag. This can make it easier for code to identify the pointer's type. The tag is a Scheme value held in its second slot.

The SWIG_POINTER type was used for compatibility with SWIG and has been removed in CHICKEN 5. It was basically the same as POINTER, with additional SWIG data added to it.

The LAMBDA_INFO type stores procedure introspection information (mostly for debugging).

The BUCKET type is a special internal pair-like object which is used in the linked list of symbols under a hash table bucket in the symbol table. It does not count as a reference, so that symbols can be garbage collected when only the symbol table still refers to them.

So far, the only numeric types we've seen are fixnums and flonums. What about the other numeric types? After all, CHICKEN 5 will (finally) have a full numeric tower!

In CHICKEN 5, rational and complex numbers are viewed as two simpler numbers stuck together. They're stored as records with a special tag, which the run-time system recognises. Bignums are a different story altogether. When I first implemented them, they used one of the two unused header types in the list above. For various reasons I won't go into now, they are now also represented as a record with a special tag and a slot that refers to the byte blob containing the actual bignum value. Perhaps this is something for a later blog post.

Putting it all together in the garbage collector

So far, all of this perhaps sounds rather arbitrary and complex. The data representation is finely tuned to fit the garbage collector, and vice versa, so it may help to see how this simplifies the garbage collector.

The way the data representation is set up, the garbage collector only has to perform a few very basic checks. It does not need to know about any of the data types at all, it only needs to look at the special GC bits, and the size of an object!

Now we're finally ready to understand the heart of the garbage collector, which scans the live data and marks nested objects. This part of CHICKEN implements the Cheney algorithm. It's only 22 lines of code, without any simplifications. This is taken directly from runtime.c, with comments added for exposition:

/* Mark nested values in already moved (marked) blocks
   in breadth-first manner: */
while(heap_scan_top < (gc_mode == GC_MINOR ? C_fromspace_top : tospace_top)) {
  bp = (C_SCHEME_BLOCK *)heap_scan_top; /* Get next object from queue */

  /* If this word is an alignment hole marker, skip it */
  if(*((C_word *)bp) == ALIGNMENT_HOLE_MARKER)
    bp = (C_SCHEME_BLOCK *)((C_word *)bp + 1);

  n = C_header_size(bp);  /* Extract size bits from header */
  h = bp->header;         /* Remember header for masking other bits */
  bytes = (h & C_BYTEBLOCK_BIT) ? n : n * sizeof(C_word);  /* Size in bytes */
  p = bp->data;           /* Data block (first slot) */

  if(n > 0 && (h & C_BYTEBLOCK_BIT) == 0) { /* Contains slots, not bytes? */
    if(h & C_SPECIALBLOCK_BIT) { /* Skip first word (not a Scheme object) */
      --n;
      ++p;
    }

    while(n--) mark(p++); /* Mark Scheme objects in data slots */
  }

  /* Advance onto next word just after object */
  heap_scan_top = (C_byte *)bp + C_align(bytes) + sizeof(C_word);
}

The comment at the start refers to the fact that the "tip of the iceberg" of live data has already been copied; this code scans that set for nested objects referred to by those live objects. See my post about the garbage collector for more about how the GC and Cheney's algorithm work.

If we're in a minor GC, this code scans over the fromspace, which is the memory area into which the nursery objects will be copied. If we're in a major GC, we're scanning over tospace, which is the other half of the heap, to which the fromspace will be copied.

The code above simply advances the heap_scan_top pointer over the objects we need to look at until we hit the end of this space. It then checks for an ALIGNMENT_HOLE_MARKER, which is a magic value that gets used as a placeholder to indicate that this machine word should be skipped. This placeholder may get inserted when allocating a C_8ALIGN_BIT object, to avoid unaligned access.

Next, the size (in bytes) of the object is determined, based on the C_BYTEBLOCK_BIT. Finally, if it's a data block (C_BYTEBLOCK_BIT is not set), we loop over the data slots. The first word is skipped if it's indicated as "special" via C_SPECIALBLOCK_BIT.

The mark() call hides the hairy part. It performs the following steps:

  • Check that the word contains a block object. Otherwise, return because it's an immediate value.
  • Check that the word points to memory that's being moved, otherwise return. This avoids copying already copied or evicted data.
  • If the object has the C_GC_FORWARDING_BIT set, just update the marked slot with the new location the object was forwarded to, and return.
  • If we're on a 32-bit machine, the object to be copied has the C_8ALIGN_BIT set, and the current top of the target heap area is not aligned, insert an ALIGNMENT_HOLE_MARKER.
  • In case the target area is too small to hold the object, interrupt the current GC and trigger the "next" GC type. This will be a major collection if we're currently doing a minor collection, or a heap reallocating major collection if we're in a regular major collection.
  • Finally, copy the object via a simple memcpy().

Because this is done by mark() and not by the scanning code shown above, all this is only performed if the object in question is a block object which needs to be copied (the mark() macro inlines the first check). Just scanning the live data is extremely fast. We can thank the data representation's simplicity for that speed!

Further reading

There is a lot of information about this stuff, but it can be a little hard to find. Here are a few links I found during my research for this blog post:

by Peter Bex at August 16, 2015 12:19 PM

August 14, 2015

Programming Praxis

File Bundles

It is sometimes convenient to package a group of files into a bundle, for transmission to a different computer or for archiving. Nowadays the most likely method involves tar and gzip, but in the past a “shell archive” was frequently used; the files, which are assumed to include only printable ascii characters, are collected into a single program file that can be executed to self-extract the included files.

Your task is to write a program that creates file bundles. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.

by programmingpraxis at August 14, 2015 09:00 AM

August 11, 2015

The Racket Blog

Modules, Packages and Collections

Racket, the Racket docs and Racketeers use a number of terms to refer to various units of Racket code. Of those, module, package and collection refer to related but distinct concepts. Their exact  relations and distinctions can be confusing for new users. This is an attempt at explaining those concepts, what they are for, and how they relate to each other.

To begin with the smallest of the three, a file that begins with #lang and the name of a language is a module. There are also other ways to construct modules, but let's not worry about those.

A module is the basic unit of functionality for Racket code.

Once your Racket programs get larger, though, you'll want to split them over multiple modules. This allows you to organize your source better, enables separate compilation, and makes it possible for you to mix and match modules written in different Racket languages (Racket, Typed Racket, Datalog, Scribble, etc.).

That's where packages and collections come in. They help you organize your modules.

A package is a group of modules that you can install together, and that usually provides one piece of functionality. To pick a random example, take the pict3d package. That package is a group of modules which together implement a functional 3D engine. You can install it using raco pkg install pict3d, or via the graphical package manager in DrRacket.

So, to sum up, packages are units of code distribution.

A collection is a group of modules whose functionality is related to the same topic, for example data structures (the data collection), or wrapper libraries for use with Typed Racket (the typed collection). Modules are referred to and required using collection paths. For example, when you require racket/class, you're requiring the class module from the racket collection.

Modules within a collection do not necessarily come from the same package, and may not be developed together. For example, some data structures in the data collection are provided as part of the core of Racket, such as the integer sets in data/integer-set. Other data structures are provided by additional packages which you may need to install separately, such as the hash-array-mapped tries in data/hamt, which are provided by the hamt package. Having both of those in the data collection signals that they both provide data structures. If you develop your own data structures, putting them in the data collection is probably the right thing to do.

Many packages, however, provide functionality that does not fall under existing categories, and provide their own, new collection. For example, the pict3d package we discussed above puts its modules in the pict3d collection. For that reason, the distinction between package and collection is sometimes a bit blurred.

So, to sum up, collections are units of code classification.

The term library does not have a technical meaning in Racket. We usually use it to refer to a package, or to a set of packages that are developed together. For example, the Rackunit library is split across multiple packages: rackunit, rackunit-lib, rackunit-gui, rackunit-plugin-lib, rackunit-doc and rackunit-test. This allows packages to only depend on part of Rackunit. For example, a package for a string-processing library probably should not depend on the Racket GUI library (to be deployed on headless servers, for example), and so should depend on the rackunit-lib package for its testing, instead of on the full rackunit package, which brings in GUI support via the rackunit-gui package, and would introduce a dependency to Racket's GUI library.

Hopefully, this clarifies the Racket code organization terminology a bit.

by Vincent St-Amour at August 11, 2015 06:33 PM

Programming Praxis

Bridge Hands

[ Thanks to all who wrote with good wishes after my post last Friday. I am fully recovered and back at work. ]

Newspapers and textbooks often print bridge hands in the format shown below, then discuss the proper playing of the hand:

                    S: A Q J 10 8
                    H: 5 4 2
                    D: 9
                    C: 10 7 6 2
WEST                                    EAST
S: 7                                    S: 6
H: Q J 7 6 3                            H: K 10 8
D: J 10 6 4 3                           D: A K Q 5
C: A 8                                  C: K 9 5 4 3
                    S: K 9 5 4 3 2
                    H: A 9
                    D: 8 7 2
                    C: Q J

Your task is to write a program to generate random bridge hands and print them in the format shown above. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.

by programmingpraxis at August 11, 2015 09:00 AM

August 10, 2015

The Racket Blog

Racket v6.2.1

Racket v6.2.1 is now available from

Version 6.2.1 patches the recent v6.2 release in three small ways:
  • For the How to Design Programs teaching languages, DrRacket offers an option to use the old style for printing the constants true, false, and empty instead of #true, #false, and '().
  • The teaching languages come with some additional functions to match the August 2015 stable release of HtDP 2nd edition.
  • A repair to the compiler avoids an infinite loop at compile time for certain expressions that should loop forever at run time.
Feedback Welcome

by Ryan Culpepper at August 10, 2015 09:52 PM

August 07, 2015

Programming Praxis

Public Service Announcement

Long-time readers of this blog will remember that five years ago I suffered a bi-lateral pulmonary embolism that nearly killed me; my right lung was 100% blocked, my left lung 60%. This past Tuesday evening I suffered a second pulmonary embolism. It was not nearly as serious as the first, I even went to work as normal on Wednesday, but with growing pain during the day I went to the hospital on Wednesday evening, was diagnosed, received medication to break up the clots — two shots in the belly, twelve hours apart, no fun I assure you — and came home Thursday afternoon.

Broadly speaking, there are two contributing factors to pulmonary embolism. The primary factor is blood chemistry, and that’s genetic; there’s nothing you can do about it, though if you know you are predisposed to blood clots, as I am, there is medication that can attenuate the risk — I’ll be talking to a hematologist in about two weeks. The secondary factor is lifestyle: smoking and obesity are both contra-indicated, as is a sedentary lifestyle. Sedentary in this context doesn’t mean sitting in front of a computer monitor for hours a day — recall that Serena Williams, one of the greatest tennis players of all time, had a pulmonary embolism a few years ago — it just means that you spend a few or several hours a day sitting still.

I assume that most of my readers are computer programmers, as I am, and spend much time sitting still. I urge you to get out of your chair every forty-five minutes or so and walk around for five or ten minutes to get your blood moving. It may save your life.

I’ll have another exercise for you next Tuesday.

by programmingpraxis at August 07, 2015 09:00 AM

August 04, 2015

Andy Wingo

developing v8 with guix

a guided descent into hell

It all started off so simply. My primary development machine is a desktop computer that I never turn off. I suspend it when I leave work, and then resume it when I come back. It's always where I left it, as it should be.

I rarely update this machine because it works well enough for me, and anyway my focus isn't the machine, it's the things I do on it. Mostly I work on V8. The setup is so boring that I certainly didn't imagine myself writing an article about it today, but circumstances have forced my hand.

This machine runs Debian. It used to run the testing distribution, but somehow in the past I needed something that wasn't in testing so it runs unstable. I've been using Debian for some 16 years now, though not continuously, so although running unstable can be risky, usually it isn't, and I've unborked it enough times that I felt pretty comfortable.

Perhaps you see where this is going!

I went to install something, I can't even remember what it was now, and the downloads failed because I hadn't updated in a while. So I update, install the thing, and all is well. Except my instant messaging isn't working any more because there are a few moving parts (empathy / telepathy / mission control / gabble / dbus / whatwhat), and the install must have pulled in something that broke one of them. No biggie, this happens. Might as well go ahead and update the rest of the system while I'm at it and get a reboot to make sure I'm not running old software.

Most Debian users know that you probably shouldn't do a dist-upgrade from an old system -- you upgrade and then you dist-upgrade. Or perhaps this isn't even true, it's tribal lore to avoid getting eaten by the wild beasts of bork that roam around the village walls at night. Anyway that's what I did -- an upgrade, let it chunk for a while, then a dist-upgrade, check the list to make sure it didn't decide to remove one of my kidneys to satisfy the priorities of the bearded demon that lives inside apt-get, OK, let it go, all is well, reboot. Swell.

Or not! The computer restarts to a blank screen. Ha ha ha you have been bitten by a bork-beast! Switch to a terminal and try to see what's going on with GDM. It's gone! Ha ha ha! Your organs are being masticated as we speak! How does that feel! Try to figure out which package is causing it, happily with another computer that actually works. Surely this will be fixed in some update coming soon. Oh it's something that's going to take a few weeks!!!! Ninth level, end of the line, all passengers off!

my gods

I know how we got here, I love Debian, but it is just unacceptable and revolting that software development in 2015 is exposed to an upgrade process which (1) can break your system (2) by default and (3) can't be rolled back. The last one is the killer: who would design software this way? If you make a system like this in 2015 I'd say you're committing malpractice.

Well yesterday I resolved that this would be the last time this happens to me. Of course I could just develop in a virtual machine, and save and restore around upgrades, but that's kinda trash. Or I could use btrfs and be able to rewind changes to the file system, but then it would rewind everything, not just the system state.

Fortunately there is a better option in the form of functional package managers, like Nix and Guix. Instead of upgrading your system by mutating /usr, Nix and Guix store all files in a content-addressed store (/nix/store and /gnu/store, respectively). A user accesses the store via a "profile", which is a forest of symlinks into the store.

For example, on my machine with a NixOS system installation, I have:

$ which ls
/run/current-system/sw/bin/ls

$ ls -l /run/current-system/sw/bin/ls
lrwxrwxrwx 1 root nixbld 65 Jan  1  1970
  /run/current-system/sw/bin/ls ->
  /nix/store/wc472nw0kyw0iwgl6352ii5czxd97js2-coreutils-8.23/bin/ls

$ ldd /nix/store/wc472nw0kyw0iwgl6352ii5czxd97js2-coreutils-8.23/bin/ls (0x00007fff5d3c4000) => /nix/store/c2p56z920h4mxw12pjw053sqfhhh0l0y-acl-2.2.52/lib/ (0x00007fce99d5d000) => /nix/store/la5imi1602jxhpds9675n2n2d0683lbq-glibc-2.20/lib/ (0x00007fce999c0000) => /nix/store/jd3gggw5bs3a6sbjnwhjapcqr8g78f5c-attr-2.4.47/lib/ (0x00007fce997bc000)
  /nix/store/la5imi1602jxhpds9675n2n2d0683lbq-glibc-2.20/lib/ (0x00007fce99f65000)

Content-addressed linkage means that files in the store are never mutated: they will never be overwritten by a software upgrade. Never. Never will I again gaze in horror at the frozen beardcicles of a Debian system in the throes of "oops I just deleted all your programs, like that time a few months ago, wasn't that cool, it's really cold down here, how do you like my frozen facial tresses and also the horns".

At the same time, I don't have to give up upgrades. Paradoxically, immutable software facilitates change and gives me the freedom to upgrade my system without anxiety and lost work.

nix and guix

So, there's Nix and there's Guix. Both are great. I'll get to comparing them, but first a digression on the ways they can be installed.

Both Nix and Guix can be installed either as the operating system of your computer, or just as a user-space package manager. I would actually recommend to people to start with the latter way of working, and move on to the OS if you feel comfortable. The fundamental observation here is that because /nix/store doesn't depend on or conflict with /usr, you can run Nix or Guix as a user on a (e.g.) Debian system with no problems. You can have a forest of symlinks in ~/.guix-profile/bin that links to nifty things you've installed in the store and that's cool, you don't have to tell Debian.

and now look at me

In my case I wanted to also have the system managed by Nix or Guix. GuixSD, the name of the Guix OS install, isn't appropriate for me yet because it doesn't do GNOME. I am used to GNOME and don't care to change, so I installed NixOS instead. It works fine. There have been some irritations -- for example it just took me 30 minutes to figure out how to install dict, with a local wordnet dictionary server -- but mostly it has the packages I need. Again, I don't recommend starting with the OS install though.

GuixSD, the OS installation of Guix, is a bit harder even than NixOS. It has fewer packages, though what it does have tends to be more up-to-date than Nix. There are two big things about GuixSD though. One is that it aims to be fully free, including avoiding non-free firmware. Because they build deterministic build products from source, Nix and Guix can offer completely reproducible builds, which is swell for software reliability. Many reliability people also care a lot about software freedom and although Nix does support software freedom very well, it also includes options to turn on the Flash plugin, for example, and of course includes the Linux kernel with all of the firmware. Well GuixSD eschews non-free firmware, and uses the Linux-Libre kernel. For myself I have a local build on another machine that uses the stock Linux kernel with firmware for my Intel wireless device, and I was really discouraged from even sharing the existence of this hack. I guess it makes sense, it takes a world to make software freedom, but that particular part is not my fight.

The other thing about Guix is that it's really GNU-focused. This is great but also affects the product in some negative ways. They use "dmd" as an init system, for example, which is kinda like systemd but not. One consequence of this is that GuixSD doesn't have an implementation of the org.freedesktop.login1 seat management interface, which these days is implemented by part of systemd, which in turn precludes a bunch of other things GNOME-related. At one point I started working on a fork of systemd that pulled logind out to a separate project, which makes sense to me for distros that want seat management but not systemd, but TBH I have no horse in the systemd race and in fact systemd works well for me. But, a system with elogind would also work well for me. Anyway, the upshot is that unless you care a lot about the distro itself or are willing to adapt to e.g. Xfce or Xmonad or something, NixOS is a more pragmatic choice.

i'm on a horse

I actually like Guix's tools better than Nix's, and not just because they are written in Guile. Guix also has all the tools I need for software development, so I prefer it and ended up installing it as a user-space package manager on this NixOS system. Sounds bizarre but it actually works pretty well.

So, the point of this article is to be a little guide of how to build V8 with Guix. Here we go!

up and running with guix

First, check the manual. It's great and well-written and answers many questions and in fact includes all of this.

Now, I assume you're on an x86-64 Linux system, so we're going to use the awesome binary installation mechanism. Check it out: because everything in /gnu/store is linked directly to each other, all you have to do is to copy a reified /gnu/store onto a working system, then copy a sqlite thing into /var, and you've installed Guix. Sweet, eh? And actually you can take a running system and clone it onto other systems in that way, and Guix even provides a tool to generate such a tarball for you. Neat stuff.

cd /tmp
tar xf guix-binary-0.8.3.x86_64-linux.tar.xz
mv var/guix /var/ && mv gnu /

This Guix installation has a built-in profile for the root user, so let's go ahead and add a link from ~root to the store.

ln -sf /var/guix/profiles/per-user/root/guix-profile \
       ~root/.guix-profile
Since we're root, we can add the bin/ part of the Guix profile to our environment.

export PATH="$HOME/.guix-profile/bin:$HOME/.guix-profile/sbin:$PATH"

Perhaps we add that line to our ~root/.bash_profile. Anyway, now we have Guix. Or rather, we almost have Guix -- we need to start the daemon that actually manages the store. Create some users:

groupadd --system guixbuild

for i in `seq -w 1 10`; do
  useradd -g guixbuild -G guixbuild           \
          -d /var/empty -s `which nologin`    \
          -c "Guix build user $i" --system    \
          guixbuilder$i;
done
And now run the daemon:

guix-daemon --build-users-group=guixbuild

If your host distro uses systemd, there's a unit that you can drop into the systemd folder. See the manual.

A few more things. One, usually when you go to install something, you'll want to fetch a pre-built copy of that software if it's available. Although Guix is fundamentally a build-from-source distro, Guix also runs a continuous builder service to make sure that binaries are available, if you trust the machine building the binaries of course. To do that, we tell the daemon to trust

guix archive --authorize < ~root/.guix-profile/share/guix/

as a user

OK now we have Guix installed. Running Guix commands will install things into the store as needed, and populate the forest of symlinks in the current user's $HOME/.guix-profile. So probably what you want to do is to run, as your user:

/var/guix/profiles/per-user/root/guix-profile/bin/guix \
  package --install guix

This will make Guix available in your own user's profile. From here you can begin to install software; for example, if you run

guix package --install emacs

You'll then have an emacs in ~/.guix-profile/bin/emacs which you can run. Pretty cool stuff.

back on the horse

So what does it mean for software development? Well, when I develop software, I usually want to know exactly what the inputs are, and to not have inputs to the build process that I don't control, and not have my build depend on unrelated software upgrades on my system. That's what Guix provides for me. For example, when I develop V8, I just need a few things. In fact I need these things:

;; Save as ~/src/profiles/v8.scm
(use-package-modules gcc llvm base python version-control less ccache)

(packages->manifest
 (list clang
       python-2
       git
       less
       ccache
       (list gcc-4.9 "lib")))
This set of Guix packages is what it took for me to set up a V8 development environment. I can make a development environment containing only these packages and no others by saving the above file as v8.scm and then sourcing this script:

~/.guix-profile/bin/guix package -p ~/src/profiles/v8 -m ~/src/profiles/v8.scm
eval `~/.guix-profile/bin/guix package -p ~/src/profiles/v8 --search-paths`
export GYP_DEFINES='linux_use_bundled_gold=0 linux_use_gold_flags=0 linux_use_bundled_binutils=0'
export CXX='ccache clang++'
export CC='ccache clang'
export LD_LIBRARY_PATH=$HOME/src/profiles/v8/lib

Let's take this one line at a time. The first line takes my manifest -- the set of packages that collectively form my build environment -- and arranges to populate a symlink forest at ~/src/profiles/v8.

$ ls -l ~/src/profiles/v8/
total 44
dr-xr-xr-x  2 root guixbuild  4096 Jan  1  1970 bin
dr-xr-xr-x  2 root guixbuild  4096 Jan  1  1970 etc
dr-xr-xr-x  4 root guixbuild  4096 Jan  1  1970 include
dr-xr-xr-x  2 root guixbuild 12288 Jan  1  1970 lib
dr-xr-xr-x  2 root guixbuild  4096 Jan  1  1970 libexec
-r--r--r--  2 root guixbuild  4138 Jan  1  1970 manifest
lrwxrwxrwx 12 root guixbuild    59 Jan  1  1970 sbin -> /gnu/store/1g78hxc8vn7q7x9wq3iswxqd8lbpfnwj-glibc-2.21/sbin
dr-xr-xr-x  6 root guixbuild  4096 Jan  1  1970 share
lrwxrwxrwx 12 root guixbuild    58 Jan  1  1970 var -> /gnu/store/1g78hxc8vn7q7x9wq3iswxqd8lbpfnwj-glibc-2.21/var
lrwxrwxrwx 12 root guixbuild    82 Jan  1  1970 x86_64-unknown-linux-gnu -> /gnu/store/wq6q6ahqs9rr0chp97h461yj8w9ympvm-binutils-2.25/x86_64-unknown-linux-gnu

So that's totally scrolling off the right for you; that's the thing about Nix and Guix names. What it means is that I have a tree of software, and most directories contain a union of links from various packages. It so happens that sbin, though, just has links from glibc, so it links directly into the store. Anyway. The next line in my script arranges to point my shell into that environment:

$ guix package -p ~/src/profiles/v8 --search-paths
export PATH="/home/wingo/src/profiles/v8/bin:/home/wingo/src/profiles/v8/sbin"
export CPATH="/home/wingo/src/profiles/v8/include"
export LIBRARY_PATH="/home/wingo/src/profiles/v8/lib"
export LOCPATH="/home/wingo/src/profiles/v8/lib/locale"
export PYTHONPATH="/home/wingo/src/profiles/v8/lib/python2.7/site-packages"

Having sourced this into my environment, my shell's ls for example now points into my new profile:

$ which ls

Neat. Next we have some V8 defines. On x86_64 on Linux, v8 wants to use some binutils things that it bundles itself, but oddly enough for months under Debian I was seeing spurious intermittent segfaults while linking with their bundled gold linker binary. I don't want to use their idea of what a linker is anyway, so I set some defines to make v8's build tool use Guix's linker. (Incidentally, figuring out what those defines were took spelunking through makefiles, to gyp files, to the source of gyp itself, to the source of the standard shlex Python module to figure out what delimiters shlex.split actually splits on... yaaarrggh!)
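(For the curious: shlex.split tokenizes the way a POSIX shell would -- it splits on unquoted whitespace, and quoting groups a value into one token. A quick illustration, using the defines from above plus a made-up quoted value:)

```python
import shlex

# The real GYP_DEFINES from the script above: three space-separated tokens.
defines = "linux_use_bundled_gold=0 linux_use_gold_flags=0 linux_use_bundled_binutils=0"
print(shlex.split(defines))
# ['linux_use_bundled_gold=0', 'linux_use_gold_flags=0', 'linux_use_bundled_binutils=0']

# Quotes keep a value containing spaces together, shell-style
# (illustrative define name, not a real v8 flag):
print(shlex.split("cflags='-O2 -g' some_flag=0"))
# ['cflags=-O2 -g', 'some_flag=0']
```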

Then some defines to use ccache, then a strange thing: what's up with that LD_LIBRARY_PATH?

Well. I'm not sure. However the normal thing for dynamic linking under Linux is that you end up with binaries that are just linked against a bare soname (e.g. libfoo.so.1), wherever the system happens to find it at run-time. That's not what we want in Guix -- we want to link against a specific version of every dependency, not just any old version. Guix's builders normally do this when building software for Guix, but somehow in this case I haven't managed to make that happen, so the binaries that are built as part of the build process can end up not specifying the path of the libraries they are linked to. I don't know whether this is an issue with v8's build system, that it doesn't want to work well with Nix / Guix, or if it's something else. Anyway I hack around it by assuming that whatever's in my artisanally assembled symlink forest ("profile") is the right thing, so I set it as the search path for the dynamic linker. Suggestions welcome here.

And from here... well it just works! I've gained the ability to precisely specify a reproducible build environment for the software I am working on, which is entirely separated from the set of software that I have installed on my system, which I can reproduce precisely with a script, and yet which is still part of my system -- I'm not isolated from it by container or VM boundaries (though I can be; see NixOps for more in that direction).

OK I lied a little bit. I had to apply this patch to V8:

$ git diff
diff --git a/build/standalone.gypi b/build/standalone.gypi
index 2bdd39d..941b9d7 100644
--- a/build/standalone.gypi
+++ b/build/standalone.gypi
@@ -98,7 +98,7 @@
         ['OS=="win"', {
           'gomadir': 'c:\\goma\\goma-win',
         }, {
-          'gomadir': '<!(/bin/echo -n ${HOME}/goma)',
+          'gomadir': '<!(/usr/bin/env echo -n ${HOME}/goma)',
         }],
         ['host_arch!="ppc" and host_arch!="ppc64" and host_arch!="ppc64le"', {
           'host_clang%': '1',

See? Because my system is NixOS, there is no /bin/echo. It does helpfully install a /usr/bin/env though, which other shell invocations in this build script use, so I use that instead. I mention this as an example of what works and what workarounds there are.

dpkg --purgatory

So now I have NixOS as my OS, and I mostly use Guix for software development. This is a new setup and we'll see how it works in practice.

Installing NixOS on top of Debian was a bit irritating. I ended up making a bootable USB installation image, then installing over to my Debian partition, happy in the idea that it wouldn't conflict with my system. But in that I forgot about /etc and /var and all that. So I copied /etc to /etc-debian, just as a backup, and NixOS appeared to install fine. However it wouldn't boot, and that's because some systemd state from my old /etc which was still in place conflicted with... something? In the end I redid the install, moving my old /usr, /etc and such directories to backup names and letting NixOS have control. That worked fine.

I have GuixSD on a laptop but I really don't recommend it right now -- not unless you have time and are willing to hack on it. But that's OK, install NixOS and you'll be happy on the system side, and if you want Guix you can install it as a user.

Comments and corrections welcome, and happy hacking!

by Andy Wingo at August 04, 2015 04:23 PM

Programming Praxis

Three Homework Problems

I get lots of emails, and even some comment postings, from students who want help with their homework. I never respond, even to offer a hint or a simple word of encouragement. Sorry, but that’s not what I do. But many of my exercises are based on typical homework problems for programming students, and with a new academic year about to start, I figure now is a good time to write some typical homework exercises.

1. Write a function that takes as input three positive integers and finds the sum of the squares of the two largest of the three.

2. Write a function that takes a positive integer as input and determines if it is a base-10 palindrome.

3. Write a function that takes a positive integer as input and determines the number of trailing zeroes in the output of that number’s factorial.

Your task is to write the three requested functions in the manner of a beginning programming student. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
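In that spirit, here is one possible beginner-style sketch of the three functions, written in Python rather than the Scheme of the suggested solution:

```python
def sum_squares_two_largest(a, b, c):
    # Sum the squares of all three, then subtract the smallest's square.
    smallest = min(a, b, c)
    return a * a + b * b + c * c - smallest * smallest

def is_palindrome(n):
    # A base-10 palindrome reads the same forwards and backwards.
    digits = str(n)
    return digits == digits[::-1]

def factorial_trailing_zeroes(n):
    # Each trailing zero of n! comes from a factor of 10 = 2 * 5, and
    # fives are scarcer than twos, so count the factors of 5 in 1..n:
    # multiples of 5 contribute one each, multiples of 25 one more, etc.
    count, power = 0, 5
    while power <= n:
        count += n // power
        power *= 5
    return count

print(sum_squares_two_largest(1, 2, 3))           # 13
print(is_palindrome(12321), is_palindrome(1231))  # True False
print(factorial_trailing_zeroes(10))              # 2 (10! = 3628800)
```

Note that the last function never computes the factorial itself; counting the factors of 5 directly keeps it fast even for very large inputs.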

by programmingpraxis at August 04, 2015 09:00 AM