Archive for the ‘language’ Category

The MuT music tool

Thursday, February 11th, 2016

For 8088 MPH I wrote a tool to convert Amiga MOD (module) files into the format required by the 4.77MHz 8088 PC speaker 4-channel playback routine. The MOD file approach never felt quite ideal to me: the playback routine has some capabilities (like SID-style ring modulation) which can't be expressed in a MOD file, and there are also a lot of things you can do in a MOD that won't really work with my player. So if I make it easy to try arbitrary MODs with the player, people are likely to try MODs that don't come out very well and conclude that the player is rubbish.

What I really wanted was to write my own tracker designed specifically for the player routines I had written (and some variants that I might want to write). But writing a whole tracker is a big project - particularly the GUI (GUIs take ages and aren't the most interesting things to program).

So I started thinking: what's the simplest piece of software I could write that a musician could use to compose music for this player? What about a command line tool - a sort of compiler which accepts text files as input and generates binary music data in the appropriate format (or indeed various formats) as output? This isn't entirely unprecedented - there have been various tools for processing text files into music such as Grigasoft's "Polyphonic Music" and John Worley's "Clockwork Pianola".

This is the design I came up with - I'm calling it "MuT" ("music tool", pronounced "mute") for now - it can't seem to decide if it's a tracker, a musical instrument or a programming language.

Inside the text files we would probably want to have something similar to a more traditional tracker's pattern grid, with different channels arranged horizontally and time going vertically. Rather than using semantically-significant spaces and newlines (which cause all sorts of trouble) I think a nice way to do it would be for the musician to lay out the grid using the "&" character to separate voices (think "C & E" means a C and an E playing at the same time) and the "|" character to indicate a new time division (think "bar is a measure of time" though the "|" interval would usually be shorter than a musical bar, obviously). So an empty grid would look something like:

output =
     &     &     &     |
     &     &     &     |
     &     &     &     |
     &     &     &     ;

The spaces could then be filled in with notes:

output = sine@(
 C4 & E4 & G4 & C5  |
 C4 & E4 & A4 & C5  |
 C4 & F4 & A4 & C5  |
 D4 & F4 & A4 & D5  |
 D4 & F4 & B4 & D5  |
 D4 & G4 & B4 & D5  |
 E4 & G4 & B4 & E5  |
 E4 & G4 & C5 & E5  );

Leaving a space blank causes the note in the same voice in the previous division to continue, so this could also be written:

output = sine@(
 C4 & E4 & G4 & C5  |
    &    & A4 &     |
    & F4 &    &     |
 D4 &    &    & D5  |
    &    & B4 &     |
    & G4 &    &     |
 E4 &    &    & E5  |
    &    & C5 &     );

If you want to silence a voice, put a 0 in there instead of a blank:

output = sine@(
 C4 & E4 & G4 & C5  |
 0  & 0  & A4 & 0   |
    & F4 & 0  &     |
 D4 & 0  &    & D5  |
 0  &    & B4 & 0   |
    & G4 & 0  &     |
 E4 & 0  &    & E5  |
 0  &    & C5 & 0   );

One can also put different instruments into the grid:

output =
 sine@C4 & square@E4 & triangle@G4 & bell@C5  |
 sine@C4 & square@E4 & triangle@A4 & bell@C5  |
 sine@C4 & square@F4 & triangle@A4 & bell@C5  |
 sine@D4 & square@F4 & triangle@A4 & bell@D5  |
 sine@D4 & square@F4 & triangle@B4 & bell@D5  |
 sine@D4 & square@G4 & triangle@B4 & bell@D5  |
 sine@E4 & square@G4 & triangle@B4 & bell@E5  |
 sine@E4 & square@G4 & triangle@C5 & bell@E5  ;

Instrument names are resolved per voice, and "." is (by convention) a sort of default instrument name, so this can also be written:

output =
 { . = sine; } .@C4 &
      { . = square; } .@E4 &
           { . = triangle; } .@G4 &
                      { . = bell; } .@C5  |
               .@C4 & .@E4 & .@A4 & .@C5  |
               .@C4 & .@F4 & .@A4 & .@C5  |
               .@D4 & .@F4 & .@A4 & .@D5  |
               .@D4 & .@F4 & .@B4 & .@D5  |
               .@D4 & .@G4 & .@B4 & .@D5  |
               .@E4 & .@G4 & .@B4 & .@E5  |
               .@E4 & .@G4 & .@C5 & .@E5  ;

One can make instruments in all sorts of ways. A few simple (chiptune-esque) waveforms are built into the program, or you can load them from a file:

bell = load("BELL.WAV")@s/440;

The MuT syntax is extremely flexible and rich enough to do just about any kind of audio processing, and hopefully it's also intuitive enough that it's not too difficult to figure out how to do just that.

What follows is a more technical and in-depth explanation, so if you're not a programmer or mathematician you might want to stop reading here.

Basically most MuT objects are functions from the real numbers to the complex numbers. Both the range and domain of these functions have units (some integer power of seconds - s) which are tracked by MuT so that it can distinguish between time-domain functions like sine@C4 and pure functions like sine.
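
As a sketch of what such an object amounts to (the C++ names here are hypothetical, not MuT's actual implementation):

#include <cmath>
#include <complex>
#include <functional>

// A sketch of a MuT object: a function from reals to complex numbers,
// tagged with the power of seconds in its domain (0 for a pure function
// like sine, 1 for a time-domain function like sine@C4).
struct MuTObject
{
    std::function<std::complex<double>(double)> f;
    int domainUnit;  // domain is in units of s^domainUnit
};

MuTObject sine{[](double x) { return std::complex<double>(std::sin(x)); }, 0};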

There are various operators for acting on MuT objects:

+ - pointwise addition: output = sine@C4 + square@E4; sounds the same as output = sine@C4 & square@E4; but it has some different behavior in other ways - you can't do the "empty space means continue playing the same instrument in that voice" trick if you use + instead of & in your grid, and you can't use a + b as an l-value.

- - pointwise subtraction: same as + but the function on the right has its phase inverted.

* - pointwise multiplication (aka modulation). Also useful for volume envelopes.

/ and ^ - pointwise division and power respectively. Mostly for completeness' sake, though they do have some uses, especially for scalars.

[] indexes into a function, just like an array in C or Pascal. So sine[1] = 0.841.... When you put a function instead of a scalar inside the brackets, you (naturally) get function composition. So a[b][x] = a[b[x]].
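
For example, sketched in C++ with lambdas standing in for MuT functions (index is a hypothetical helper):

#include <cmath>
#include <cstdio>
#include <functional>

typedef std::function<double(double)> Fn;

// Indexing a function with a function gives composition: a[b][x] = a[b[x]].
Fn index(Fn a, Fn b)
{
    return [=](double x) { return a(b(x)); };
}

int main()
{
    Fn sine = [](double x) { return std::sin(x); };
    Fn twice = [](double x) { return 2*x; };
    std::printf("%f\n", sine(1.0));                // 0.841..., i.e. sine[1]
    std::printf("%f\n", index(sine, twice)(0.5));  // sine[2*0.5] = sine[1]
}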

{} allows execution of statements during the evaluation of an expression. This is handy for redefining instruments while inside the grid (see example above), or changing the tempo, amongst other things. The tempo (i.e. the amount of time covered by one "|") is set by the special variable division. So if MuT sees { division /= 2; } inside an expression, the following grid rows will be played at twice the speed (only durations are affected, not frequencies). The scope of any changes inside {} is the remainder of the statement in which it occurs.

@ - basis scale. This is where it gets really interesting. This is an operator only applicable to functions, not scalars (so unlike +, -, *, / and ^ it isn't found on calculators). Suppose you have a function f and a scalar x. Then (f@k)[x] = f[k*x]. So if f is a waveform then f@2 is the same waveform played twice as fast (and an octave higher). The @ operator also adjusts domain units if the right-hand side has a value which is not dimensionless. So, if f has a dimensionless domain (e.g. sine) then f@110*Hz (or f@110/s) will be a normal time-indexed waveform (i.e. a sine wave with a frequency of 110Hz). As I'm sure you can see this is a very useful operator for an audio program!
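
For real arguments this is just argument scaling; a sketch in C++ (scale is a hypothetical helper):

#include <functional>

typedef std::function<double(double)> Fn;

// Basis scaling by a scalar: (f@k)[x] = f[k*x]. For a waveform,
// scale(f, 2) is the same waveform played twice as fast (an octave up);
// scale(sine, 440*2*pi) is a 440Hz sine indexed by time in seconds.
Fn scale(Fn f, double k)
{
    return [=](double x) { return f(k*x); };
}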

It gets better, though. Suppose we have a complex unit i = sqrt(-1); and we make f@i compute the Fourier transform of f. Surprisingly, I discovered (after defining it that way) that the mathematics of doing so work out very neatly - time scaling and Fourier transformations are closely related, mathematically - they are both Linear Canonical Transformations, and there's a nice mapping from complex numbers to LCTs which gives an (efficiently computable) meaning to f@z for any complex number z. Using f@i in combination with * allows us to do convolutions and therefore any linear filters you care to describe (high-pass, low-pass, band-pass, notch, echos, resonances, you name it). Using other complex numbers on the right-hand side of @ gives us inverse Fourier transforms and fractional Fourier transforms (the musical utility of which I know not, but I'm sure inventive musicians will find some interesting uses for it).
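
I can't reproduce f@i itself in a few lines, but here is a sketch of the classical version of the filtering trick it enables - transform, multiply pointwise (the * operator), transform back - with a naive O(n²) DFT to keep it self-contained:

#include <cmath>
#include <complex>
#include <vector>

typedef std::vector<std::complex<double>> Signal;

const double pi = 3.14159265358979323846;

// Naive unitary DFT (sign = -1) and inverse DFT (sign = +1).
Signal dft(const Signal& f, double sign)
{
    size_t n = f.size();
    Signal out(n);
    for (size_t k = 0; k < n; ++k) {
        std::complex<double> sum = 0;
        for (size_t j = 0; j < n; ++j)
            sum += f[j]*std::polar(1.0, sign*2*pi*j*k/n);
        out[k] = sum/std::sqrt(double(n));
    }
    return out;
}

// Low-pass filter: transform, zero the high-frequency bins, transform back.
Signal lowPass(const Signal& f, size_t keep)
{
    Signal spectrum = dft(f, -1);
    for (size_t k = keep; k + keep < spectrum.size(); ++k)
        spectrum[k] = 0;
    return dft(spectrum, 1);
}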

One can also use a function on the right-hand side of @, which will result in a different basis scaling for each sample in the output - i.e. for scalar x and functions f and w, (f@w)[x] = (f@(w[x]))[x]. That's how the first non-trivial example above works.
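
Taken literally, this gives each output sample its own scale factor; for example (a sketch, with scaleBy a hypothetical helper), a slowly rising w turns a sine into a rising chirp:

#include <functional>

typedef std::function<double(double)> Fn;

// Function-valued basis scaling: (f@w)[x] = f[w(x)*x].
Fn scaleBy(Fn f, Fn w)
{
    return [=](double x) { return f(w(x)*x); };
}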

& - next voice operator. This just moves to the next voice, as we've seen above. It can also be used on the left-hand side of an assignment: (a & b) = (c & d); does the same thing as a = c; b = d;. This is useful for grouping values together into a compound object.

| - time sequence operator. c = a | b yields a for one division followed by b for one division. Functions in MuT repeat once they are complete, so if you evaluate c in a time sequence with a sufficiently long division, it'll sound like a | b | a | b | .... Similarly you can loop a division-long section of just one function by doing c = a | ;.

As with &, | can be used on the left-hand side of an assignment: (a | b) = c; assigns the part of c between 0 and division to a and the part of c from division to division*2 to b. So by using just assignment, the special division variable and the | operator you can make whatever edits to a waveform you like.

The comparison operators ==, !=, <, <=, >, >= work the same way as their C equivalents, yielding (in general) a function whose values are boolean (true, false) elements rather than numbers. These can be used with the ?: operator to combine two functions via a predicate.

The % operator has a couple of uses (which I might change to use different characters). On its own it gives the output from the previous voice in a voice set, which is useful for ring-modulation style effects (such as those that can be done by the SID chip and by my MOD player routine). If it comes immediately after a sequence, it yields a function which is the time-to-index mapping of that sequence. So (a|b|c)% is a function that is 0 from 0 to division, 1 from division to division*2 and 2 from division*2 to division*3. You can also change the time-to-index mapping of an l-value by assigning to this function. If the time-to-index mapping function takes a non-integral value at any point then the corresponding two elements of the sequence are mixed, so you can create fades and glissandos. Time-scaling the time-to-index mapping function will speed up or slow down the playing of that sequence without affecting the frequencies.
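
A sketch of the fractional-index mixing (hypothetical names; MuT would do this internally):

#include <cmath>
#include <functional>
#include <vector>

typedef std::function<double(double)> Fn;

// Playing a sequence through its time-to-index mapping m: a non-integral
// m(t) mixes the two neighbouring elements, giving fades and glissandos;
// time-scaling m itself changes playback speed without affecting the
// frequencies of the elements. Assumes m(t) >= 0.
double playAt(const std::vector<Fn>& sequence, Fn m, double t)
{
    double index = m(t);
    double whole = std::floor(index);
    double frac = index - whole;
    size_t i = size_t(whole) % sequence.size();
    size_t j = (i + 1) % sequence.size();
    return (1 - frac)*sequence[i](t) + frac*sequence[j](t);
}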

If you combine (via multiplication or basis-scaling) two functions which don't have the same domain units you end up with a compound object. When this compound object is basis-scaled, the scaling applies to whichever component's domain units it brings closer to the time domain, and the other elements are left unchanged. So you can create an instrument by combining a waveform (dimensionless domain) and a volume envelope (time domain), and when this is time-scaled the scaling will change the frequency of the waveform part without speeding up or slowing down the volume envelope.

I'm contemplating a built-in function that will convert a waveform into a sequence so that it can be time-scaled without changing the pitch (or vice-versa) in the manner of a phase vocoder, but I haven't quite got all the details ironed out yet.

Arbitrary functions can be defined: e.g. major(x) = x + x@5/4 + x@3/2; will turn any waveform or function into a major chord using its input as the base note. If we allow functions to be recursive, then (by the Church-Turing thesis) MuT becomes a Turing-complete programming language (for better or for worse).
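
The same combinator sketched in C++ (with functions of time standing in for MuT objects):

#include <functional>

typedef std::function<double(double)> Fn;

// major(x) = x + x@5/4 + x@3/2: the input plus copies basis-scaled up
// by a major third (5/4) and a perfect fifth (3/2).
Fn major(Fn x)
{
    return [=](double t) { return x(t) + x(t*5/4) + x(t*3/2); };
}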

It's handy to be able to have libraries of predefined functions, instruments, values (note frequencies, intervals) etc. Executing an include statement will read in a file and insert its contents into the program at that point (much like C's #include directive). Conventionally, a MuT input file will start with include "standard.mut"; to include the predefined variables which come with the program, or include "pc_speaker.mut"; which does the same but also sets up special variables for the PC speaker output mode.

There are some more design documents (which are somewhat out of date but cover a few more dark corners) and the beginnings of an implementation on my GitHub. Being a command-line tool, it's not terribly user-friendly but it probably wouldn't be hard for someone more GUI-oriented than me to slap a user interface on top of it. One fairly simple user interface for this might just be a special text editor, which plays whatever object is under the cursor whenever a particular key combination is pressed (and which highlights the parts of the MuT program which are being used to compute the currently-playing notes). An interesting enhancement would be a little knob that appears whenever the mouse hovers over a numeric value; dragging this knob changes that numeric value in real-time, changing the currently playing sound if that value is involved in it.

A tale of two debuggers

Tuesday, February 9th, 2016

Every non-trivial compiler has an intermediate representation - that is, a form of code that is neither the input source code nor the output binary or assembly code. Apart from the fact that neither of these is a good representation on which to do many optimizations, it's also good for a compiler to be able to accept multiple different input languages and target multiple different output architectures.

A particularly important part of writing a compiler is figuring out a good intermediate representation. And one particularly important part of doing that is deciding on how to store debugging information. I'm not planning to write an ALFE debugger imminently but I expect that one would be useful eventually, so I want to make sure that the infrastructure is sufficient to do so when I do.

I think that some desirable properties to have in a debugger are consistency, predictability and flexibility even in the face of highly optimized code - ideally it shouldn't even be necessary to have separate "debug" and "release" configurations - you'd debug on the same build of your program that ended up on the user's machine. Though one might still make a separate build with some extra checking, logging or other debugging infrastructure in order to investigate a particular bug.

There are two ways to approach writing a consistent, predictable debugger. One is to approach it from the machine code side: the places the debugger can stop are the spaces between machine instructions (let's call these red locations), and when it's stopped you can examine and modify register contents and memory locations. In other words, an old-style low-level debugger like DEBUG in MS-DOS, which understands nothing about the language that the program was written in.

The other way is to work from the source level, and is kind of the holy grail of debuggers. The places the debugger can stop are the spaces between statements or expressions with side effects (let's call these blue locations). Any source-level expression can be evaluated, any statement can be executed and any lvalue modified. What's more, the order in which things happen is predictable.

To understand what this means, consider the following ALFE program:

Int d = 0;
Int e = 0;
Int a() { ++d; return 2; }
Int b() { ++e; return 3; }
Int c = a() + b();

For the object code that results from the compilation of the last line, the compiler is entitled to generate the code that evaluates a() and b() in either order (since the order doesn't make any difference to the output of the program) or even both at once if that's possible in the target architecture's instruction set. However, when debugging we would like the order to be predictable - i.e. we would want the debugger to act as if a() is evaluated before b(). That's obviously a trivial example, but in real code optimizations can cause debugging to be unpredictable in far more severe ways.

One way this could be done is to ignore the binary program completely and interpret the source program (or, equivalently, rebuild the program with no optimizations and debug that). That would make the program run much slower under the debugger than when running normally (which may make it too slow to be practical - I've written programs where trying to debug using the non-optimized version was just too slow, which made debugging very tricky).

So, most debuggers don't try to be that predictable - they implement some hybrid approach. For example, executing things in machine code order rather than trying to simulate anything, and storing in the debug information an instruction address for each source code line (so that, unless you explicitly step through instructions rather than source code lines, you can only stop on instructions corresponding to source code lines).

I think a better approach (from the point of view of predictability) might be for the debugger to run the normal, optimized binary code until a breakpoint is hit, at which point it switches to interpretation, reconstructing all the information it needs about the program state from the stopped program's registers and memory. That might mean that on hitting a breakpoint in b() you would see d unincremented in the above example, but stepping through the same line of code would give a different result. I'm not sure if that causes more confusion than it solves.

Seeing e incremented before d should have no observable consequences (outside of the debugger) to the execution of the program - otherwise the compiler would be incorrect in swapping the calls. Checking that the compiler and debugger are correct is a specific non-goal of the debugger - we really have to work on the assumption that they are correct, otherwise we'd be second-guessing ourselves.

With an incremental compiler sufficiently well integrated with the debugger, interpretation is unnecessary - the debugger can invoke the compiler to add the breakpoint as a function call and recompile. The breakpoint then becomes an observable of the program, so that only the specific instances of the specific optimizations that need to be are disabled. Care needs to be taken to ensure that the format of in-memory data structures doesn't change in order that the program doesn't have to be restarted, but that shouldn't be too difficult.

Anyway, what does this mean for debug information? We need to somehow relate items in the object code to items in the source code and vice-versa. What does this mean in practice?

For one thing, we'll need to find a relationship between red locations and blue locations. One way to represent this would be a linked list of locations which may be either red or blue or purple (for the case where a red location perfectly coincides with a blue one). Given a location and type, you can obtain the next red location, the next blue location, the next purple location or the next location of any colour. This isn't sufficient for all optimization transformations (for example, swapping the inner and outer loops of a nested loop) but it's a start.
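
A sketch of that structure (hypothetical names):

// Sketch of the location list: each node is red (a space between machine
// instructions), blue (a space between source-level side effects) or
// purple (both at once).
enum Colour { Red, Blue, Purple };

struct Location
{
    Colour colour;
    Location* next;
};

// Find the next location of the requested colour; a purple location
// counts as both red and blue.
Location* nextOfColour(Location* l, Colour want)
{
    for (l = l->next; l != nullptr; l = l->next)
        if (l->colour == want || l->colour == Purple)
            return l;
    return nullptr;
}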

The other thing we need is enough information to reconstruct the value of any variable given the register values and memory contents (and vice versa, for going back to the "running" state after interpreting for a while). The naive way of doing that would be to have a big matrix with red locations horizontally and variables vertically, and in each cell an algorithm for computing that variable and another for modifying the machine state when that variable is modified. Obviously that would take far too much memory for all but the most trivial programs, but it might be possible to store it in a highly compressed form, such as just storing changes to this table for each instruction.
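
A sketch of that compressed form (hypothetical names, and a string standing in for a real recovery algorithm):

#include <map>
#include <string>
#include <vector>

// One entry per instruction holding only the table cells that change.
// "Recipe" stands in for an algorithm that computes a variable from
// registers and memory (or updates them when the variable is modified
// in the debugger).
struct Recipe { std::string description; };  // e.g. "in EAX", "at [EBP-8]"

struct TableDelta
{
    std::map<std::string, Recipe> changed;  // variable name -> new recipe
};

// The full table at red location n is the result of applying deltas
// 0..n, in order, to an empty table.
std::vector<TableDelta> debugInfo;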

Going back to the intermediate form, I think all this means is that I want to be able to store two things:

  • Some "pseudo-instructions" (which don't do anything to the output object code itself, but mark the position of a blue location, and have pointers to the previous and next blue locations, which may not be the same as the previous and next blue locations in the object code code). Some of these pseudo-instructions may also coincide with an actual instruction (when the blue location is between two consecutive red locations).
  • For each instruction, a list of changes to the "how to compute a high level variable" table.

Variadic templates in ALFE

Wednesday, October 24th, 2012

Some of the types in ALFE are templates which can accept different numbers of parameters - in other words, variadic templates. How does this fit in with the Kind system I wrote about before?

As well as the Kind constructors "<>" (template) and "" (type), we need a third Kind constructor which represents zero or more Kinds in a sequence suitable for template arguments. Let's call this Kind constructor "...". As with C++ templates, we should allow the members of the sequence to have arbitrary kind. So <...> is the kind of a template that takes zero or more arguments whose kinds are type, while <<>...> is the kind of a template that takes zero or more arguments whose kinds are <> (a single argument template whose argument is of kind type). The kind of Function is <, ...> (the comma is required because Function has one or more arguments rather than zero or more). More complicated kinds like <<...>, ...> are also possible.

To actually create a variadic template, we need to implement "recursive case" and "base case" templates and have the compiler choose between them by pattern matching, just as in C++. So the definition of Tuple might look something like:

Tuple<@T, ...> = Structure { T first; Tuple<...> rest; };
Tuple<> = Structure { };
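
For comparison, the equivalent C++ variadic template, with the base case and recursive case chosen by pattern matching in the same way:

// C++ equivalent: a base case for zero types and a recursive case that
// peels off the first type.
template<typename... Ts> struct Tuple;

template<> struct Tuple<> { };

template<typename T, typename... Rest> struct Tuple<T, Rest...>
{
    T first;
    Tuple<Rest...> rest;
};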

Is that enough? Well, I believe this gives ALFE's kind system parity with C++'s. However, there is one more possible Kind constructor that I can imagine - the Kind constructor which can represent any Kind at all - let's call it $. A template with an argument of this kind (i.e. has kind <$>) can be instantiated with an argument of any kind. So you could have:

Foo<@T> = { ... };
Foo<@T<>> = { ... };
Foo<@T<@R<$>>> = { ... };

The first template is used when Foo is instantiated with a type, the second is used when it's instantiated with a template of kind <> and the third is used when it's instantiated with any other single-argument template, no matter what the kind of the argument is. The third template could then pass R as an argument to another instantiation of Foo and thereby "peel apart" the kind of the argument (as long as none of the kinds involved have multiple arguments).

By combining $ with ... I think we could potentially peel apart completely arbitrary kinds. However, I'm not sure if this is worth implementing since I can't think of any use for such a thing. Still, it's good to have some idea about how to proceed if I do eventually come across such a use!

Edited 31st January 2016: Since writing the above, I have realized that the concept of the $ Kind specifier has a fatal flaw. Just because our Foo template has been instantiated with a template of a particular kind does not mean that we have any way of obtaining type constructors of suitable kinds for instantiating that template. We can't pass R as an argument to another instantiation of Foo because we have no way to find such a thing to pass - all we have is a template that we know takes such a thing. There might be no suitable type constructors in the entire program! So there is really nothing we can do with our T here at all - we can't instantiate it (even partially) since we don't know the kind of its first argument. We could pass it to another template, but that just moves the problem around without making any progress on it.

Classes vs structures in ALFE

Tuesday, October 23rd, 2012

A lot of the time in my C++ code I find myself writing class hierarchies. However, because of the way that inheritance works in C++, you need to use a pointer to point to an object which is of a particular type or some subtype of that type. In my code these pointers tend to be reference-counted smart pointers, to save having to think about lifetime issues, but the principle is the same.

Then, because I don't want to have to keep typing "Reference<Foo> foo" everywhere, I encapsulate these smart pointers into value classes which provide functions that call the (virtual) functions in the implementation class. So I end up with code like this:

class Foo
{
public:
    Foo() : _implementation(new Implementation) { }
    void frob() { _implementation->frob(); }
protected:
    class Implementation : public ReferenceCounted
    {
    public:
        virtual void frob() { ... }
    };
    Foo(Implementation* implementation) : _implementation(implementation) { }
private:
    Reference<Implementation> _implementation;
};
 
class Bar : public Foo
{
public:
    Bar() : Foo(new Implementation) { }
protected:
    class Implementation : public Foo::Implementation
    {
    public:
        virtual void frob() { ... }
    private:
        ...
    };
};

That's really more boilerplate than I want to write for every class. I'd really much rather write code that looks more like C# or Java:

class Foo
{
    public Foo() { }
    public virtual void frob() { ... }
};
 
class Bar : Foo
{
    public Bar() { }
    public override void frob() { ... }
    ...
};

So I'm thinking that in ALFE I should have some syntax that looks close to the latter and behaves like the former. I'm leaning towards making "Structure" behave like C++ struct and class (i.e. a value type), and "Class" behave like a C# class (reference semantics). So:

Foo = Structure : Reference<Implementation> {
    Foo() : base(new Implementation) { }
    Void frob() { referent()->frob(); }
access<Foo>:
    Foo(Implementation* implementation) : base(implementation) { }
    Implementation = Structure : ReferenceCounted {
        virtual Void frob() { ... }
    };
};
 
Bar = Structure : Foo {
    Bar() : Foo(new Implementation) { }
access<Bar>:
    Implementation = Structure : Foo.Implementation {
        virtual Void frob() { ... }
    access<>:
        ...
    };
};

is the same as:

Foo = Class {
    Foo() { }
    virtual Void frob() { ... }
};
 
Bar = Class : Foo {
    Bar() { }
    virtual Void frob() { ... }
access<>:
    ...
};

There is a comparison to be made here between POD (Plain Old Data) and non-POD classes in C++ (though it's not quite the same because the Implementation classes would be non-POD in C++).

Obviously there's a lot of details here which would need to be fleshed out but I think something along these lines would be useful.

The Label type in ALFE

Monday, October 22nd, 2012

This is part of the ALFE types series.

The Label type holds an address which can be passed to the goto statement, and the possible values are defined by the actual labels that are in scope. It's somewhat like a function pointer, except that there are no arguments and no return value. It's based on the GCC labels as values extension, except there's a proper type for it instead of void*.
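
For reference, this is what the GCC extension looks like (it compiles with GCC and Clang, but isn't standard C++):

#include <cstdio>

int main()
{
    void* label = &&after;    // GCC extension: address of a label
    std::printf("before\n");
    goto *label;              // GCC extension: computed goto
    std::printf("never printed\n");
after:
    std::printf("after\n");
    return 0;
}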

Labels in ALFE are local, which means that the scope of a label is the block in which it is defined. So you can't use goto to jump into a block, only out of one or within the current block. Since defining a variable creates a new block with the scope of that variable, this eliminates the problem of goto skipping variable constructors. Unless you stash a label into a Label variable and then execute the goto from outside the label's scope. That's just as evil as taking the address of a local variable and then accessing it via the pointer when the object is out of scope, though - don't do that.

I think these restrictions eliminate the most heinous abuses of goto, while still allowing the useful cases (which are themselves pretty rare).

The Variant type in ALFE

Sunday, October 21st, 2012

This is part of the ALFE types series.

Variant is a type that can hold any value. It's the type of all variables in a dynamically typed language. It's a bit like a sum type consisting of all other types summed together.

In practice it's probably not very useful, since to do anything with a Variant variable you have to cast it to some type, and since there's a finite number of casts that you can write in one program, you might as well just make it a sum type of those. Still, it might come in handy for translating some programs written in other languages into ALFE, or even if you just don't want to have to think too much about what the possible types are.
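
The nearest C++ equivalent is std::any (C++17), which has exactly this property - it can hold anything, but you can't do anything with it until you cast back to a concrete type:

#include <any>
#include <cstdio>
#include <string>

int main()
{
    std::any v = 42;                       // holds an int
    std::printf("%d\n", std::any_cast<int>(v));
    v = std::string("hello");              // now holds a string
    std::printf("%s\n", std::any_cast<std::string>(v).c_str());
}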

The Bottom type in ALFE

Saturday, October 20th, 2012

This is part of the ALFE types series.

Bottom is a special type that has no values associated with it. So declaring a variable or member of type Bottom is forbidden. It's useful as the return type of a function that doesn't ever return normally (i.e. a function that always throws an exception or terminates the process).
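
C++ gets at the same idea with an attribute rather than the type system:

#include <stdexcept>

// The C++11 equivalent: [[noreturn]] promises the function never
// returns normally, which is what a return type of Bottom would say.
[[noreturn]] void fail(const char* message)
{
    throw std::runtime_error(message);
}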

[Bottom] is also another name for Null (or maybe it's a different one-element type, I haven't decided yet).

Another way of writing Bottom is Either<> (i.e. an algebraic data type with no types being summed).

I might end up calling it something different - the name has slightly too many connotations of anatomy and (worse) type theory for my taste. I'm not sure what would be better, though. Empty isn't quite explanatory enough, and NoReturn doesn't quite capture the full range of Bottom's behavior.

Fixed-length arrays in ALFE

Friday, October 19th, 2012

This is part of the ALFE types series.

Foo[2], Foo[3], Foo[4] etc. are arrays whose length is fixed at compile time (not to be confused with [Foo] whose length is unknown and possibly undefined even at runtime). Passing around a value of array type copies the entire array (arrays don't decay to pointers). Operations defined on the elements can also be applied (element-wise) to the array itself, so you can write things like:

Int[2] centre(Int[2] topLeft, Int[2] bottomRight)
{
    return (topLeft + bottomRight)/2;
}
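
A sketch of the same element-wise lifting in C++ terms, using std::array (which, like ALFE's fixed-length arrays, is copied when passed by value):

#include <array>
#include <cstddef>

// Element-wise lifting: operations defined on the elements apply to the
// whole array, one element at a time.
template<typename T, std::size_t n>
std::array<T, n> operator+(std::array<T, n> a, const std::array<T, n>& b)
{
    for (std::size_t i = 0; i < n; ++i)
        a[i] += b[i];
    return a;
}

template<typename T, std::size_t n>
std::array<T, n> operator/(std::array<T, n> a, T divisor)
{
    for (std::size_t i = 0; i < n; ++i)
        a[i] /= divisor;
    return a;
}

std::array<int, 2> centre(std::array<int, 2> topLeft, std::array<int, 2> bottomRight)
{
    return (topLeft + bottomRight)/2;
}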

Foo[2] also has a template form, but since template arguments can only be types (not integers, as they can be in C++), there is a twist: Array<Foo, Class {Int n=2;}>. Any type can be used for the second argument as long as it has a public member named n of type Int whose value is known at compile time. This allows templates to work generically with arrays of different lengths.
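
C++ doesn't need this trick (its templates take integer arguments directly), but the encoding itself is easy to write down in C++ for comparison:

// The same encoding: wrap the length in a type with a public member n,
// so that templates only ever deal in type arguments.
template<int k> struct Length { static const int n = k; };

template<typename Foo, typename N> struct Array
{
    Foo elements[N::n];
};

Array<double, Length<2>> pair;  // the template form of double[2]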

Foo[n] can be indexed by an integer to yield an (RValue or LValue) Foo. It can also be indexed by a sequence (whose values are known at run time) to yield a slice of the array.

An array can be coerced to a sequence, and the compiler will box up the element count and pointer as necessary.

Sequence types in ALFE

Thursday, October 18th, 2012

This is part of the ALFE types series.

[Foo] is the type of an immutable sequence of Foo values (the sequence itself is immutable but the values in it are not necessarily immutable). It is syntactic sugar for Sequence<Foo>. It has methods "Boolean isEmpty()", "Foo first()" and "[Foo] rest()". As usual the actual implementation is up to the compiler (and may even be different for the same sequence in different parts of the same program). Values like [], [foo] and [foo1, foo2] are allowed.

There is also a special operator for creating sequences of consecutive integers: "..". It's closed on the left and open on the right, so 0..5 is equivalent to [0, 1, 2, 3, 4]. Infinite sequences like 0.. are also allowed, since there's no need in the Sequence<> interface to know how many elements there are up front. I'd like to have arithmetic sequences that count up in steps other than 1, but I haven't decided on a good syntax for that yet. Python's extended slicing syntax is quite nice, but I think ".." is more natural than ":", and I already have quite a lot of meanings for the latter.
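
A sketch of the Sequence interface and a ".."-style range in C++ terms (hypothetical names):

#include <cstdio>
#include <memory>

// The Sequence<Int> interface: isEmpty(), first() and rest().
struct IntSequence
{
    virtual bool isEmpty() const = 0;
    virtual int first() const = 0;
    virtual std::shared_ptr<IntSequence> rest() const = 0;
    virtual ~IntSequence() { }
};

// 0..5 would construct Range(0, 5); in this sketch an end of INT_MAX
// could stand in for an unbounded sequence like 0.. .
struct Range : IntSequence
{
    int begin, end;
    Range(int b, int e) : begin(b), end(e) { }
    bool isEmpty() const { return begin >= end; }
    int first() const { return begin; }
    std::shared_ptr<IntSequence> rest() const
    {
        return std::make_shared<Range>(begin + 1, end);
    }
};

int main()
{
    std::shared_ptr<IntSequence> s = std::make_shared<Range>(0, 5);
    for (; !s->isEmpty(); s = s->rest())
        std::printf("%d ", s->first());  // prints: 0 1 2 3 4
}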

The sequence type can also be used as the return type for a generator function:

[Int] primes()
{
    Primes = Class : [Int] {
        construct(Int m = 2) { n = m; }
        Boolean isEmpty() { return false; }
        Int first() { return n; }
        [Int] rest()
        {
            for (Int m in (n+1)..) {
                Int i = 2;
                while (m/i >= i) {
                    if (m % i == 0)
                        break;
                    ++i;
                }
                done
                    return Primes(m);
            }
        }
        Int n;
    };
    return Primes;
}

Disregarding the usage of trial division instead of a more efficient sieve of Eratosthenes, it may seem terribly inefficient to create a whole new Primes object each time we get the next element of the sequence. However, the expectation is that the compiler will inline the entire rest function into the calling loop so that all the object creation and destruction is optimized away.

Product types in ALFE

Wednesday, October 17th, 2012

This is part of the ALFE types series.

(Foo, Bar) is a "product" type, aka Tuple<Foo, Bar> or a structure with unnamed fields. Tuple is another variadic template, so (Foo, Bar, Baz) is syntactic sugar for Tuple<Foo, Bar, Baz>. Values of Tuple type are written in the same way as the type itself, except with values instead of types inside the parentheses: (foo, bar). You can have tuples of LValues as well as RValues, which is handy for functions that return Tuple values:

(foo, bar) = getFooBar();

You can also have a tuple of declarations:

(Foo foo, Bar bar) = getFooBar();
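
For comparison, C++ spells these two forms with std::tie and (in C++17) structured bindings (Foo, Bar and getFooBar here are placeholders):

#include <tuple>

typedef int Foo;
typedef double Bar;

std::tuple<Foo, Bar> getFooBar() { return std::make_tuple(1, 2.5); }

void example()
{
    Foo foo;
    Bar bar;
    std::tie(foo, bar) = getFooBar();  // (foo, bar) = getFooBar();
    auto [foo2, bar2] = getFooBar();   // (Foo foo, Bar bar) = getFooBar();
}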

(Foo) is identical to Foo just as it is in C, but in ALFE it's also a 1-tuple (so 1-tuplifying is really just a no-op). There is also a single 0-tuple type () aka Tuple<> which is another name for Null. There is a single value inhabiting this type, which is also (by slight abuse of notation that took me a while to determine wasn't too ambiguous or confusing) called () and Null (a type specifier can be used as a value if it is default-constructible).