Archive for the ‘computer’ Category

Multiplying faster with squares

Tuesday, October 30th, 2012

The 8088 has a hardware multiplier, but it's quite slow:

MUL with byte arguments: 69 cycles plus 1 for each set bit in AL plus 1 if the high byte of the result is 0
MUL with word arguments: 123 cycles plus 1 for each set bit in AX plus 1 if the high word of the result is 0

Signed multiplies are even slower (taking at least 80 cycles for a byte, 134 cycles for a word), and depend on the number of set bits in the absolute value of the accumulator, the signs of the operands and whether or not the explicit operand is -0x80 (-0x8000 for word multiplies). I also measured some word IMULs apparently taking a half-integer number of cycles to run, suggesting that there's either some very weird behavior going on with the 8088's multiplier or that there's a bug in my timing program (possibly both).

Can we beat the 8088's hardware multiplier with a software routine? There's a clever trick which can sometimes be used to speed up multiplications, based on the following:

(a+b)² = a² + b² + 2ab
(a-b)² = a² + b² - 2ab

Subtracting these gives:
4ab = (a+b)² - (a-b)²

Or:
ab = (a+b)²/4 - (a-b)²/4

So if we keep in memory a table of x²/4 we can do a multiply with an add, two subtractions and two table lookups.

Does this still work if the entries in the table are fractional? Yes, it does, and here's why. x²/4 is an integer for even x, and for odd x it will always be an integer plus a quarter, as a quick induction will show. (a+b) and (a-b) are always both odd or both even (since they differ by 2b which is always even) so the quarters always cancel and we can just ignore them.
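
To make the idea concrete, here's a minimal standalone C++ sketch of the quarter-squares trick (not a translation of the 8088 routines below - the 511-entry table, names and 8-bit operands are just my choices for illustration):

#include <cstdint>
#include <cstdio>
#include <cstdlib>

// table[x] holds floor(x*x/4); then a*b == table[a+b] - table[|a-b|],
// because the discarded quarters always cancel (a+b and a-b have the same parity).
static uint32_t table[511];          // big enough for two 8-bit operands: a+b <= 510

void initTable()
{
    for (int x = 0; x < 511; ++x)
        table[x] = (uint32_t)(x * x) / 4;
}

uint32_t mulBySquares(uint8_t a, uint8_t b)
{
    return table[a + b] - table[std::abs(a - b)];
}

int main()
{
    initTable();
    for (int a = 0; a < 256; ++a)
        for (int b = 0; b < 256; ++b)
            if (mulBySquares((uint8_t)a, (uint8_t)b) != (uint32_t)(a * b))
                printf("mismatch at %d*%d\n", a, b);
    return 0;
}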

In 8088 assembly code, this can be written like this (assuming the operands are in ax and cx and that we don't care about the high word of the result):

  xchg ax,bx            ; bx = a
  mov si,offset table   ; si points at the table of x²/4 values
  shl bx,1              ; scale a to a word offset
  shl cx,1              ; scale b to a word offset
  add bx,cx             ; bx = word offset of (a+b)
  mov ax,[si+bx]        ; ax = (a+b)²/4
  sub bx,cx
  sub bx,cx             ; bx = word offset of (a-b)
  sub ax,[si+bx]        ; ax = (a+b)²/4 - (a-b)²/4 = a*b

That works out to about 88 cycles (93 with DRAM refresh) - already quite an improvement over the hardware multiplier. We can do even better, though - if we place our table of squares at DS:0 and assume that our operands are in si and bx we get:

  shl bx,1              ; scale one operand to a word offset
  shl si,1              ; scale the other operand to a word offset
  mov ax,[si+bx]        ; ax = (a+b)²/4 (table of x²/4 at DS:0)
  neg bx                ; si+bx now points at the (a-b)²/4 entry
  sub ax,[si+bx]        ; ax = (a+b)²/4 - (a-b)²/4 = a*b

Which is just 56 cycles (59 with DRAM refresh) - more than twice as fast as the hardware multiplier!

For byte-sized multiplies we can do something similar in just 34 cycles (36 with DRAM refresh):

  mov al,[si+bx]        ; al = (a+b)²/4
  neg bx                ; si+bx now points at the (a-b)²/4 entry
  sub al,[si+bx]        ; al = (a+b)²/4 - (a-b)²/4 = a*b

However, it's less important for byte-sized multiplies since we can fit an entire 8-bit multiply table in memory and do:

  xchg bh,bl            ; bx now indexes the 64KB table by both operands at once
  mov al,[si+bx]        ; al = the product, read straight out of the table

Which is only about 20 cycles (21 with DRAM refresh). There's obviously a bit of a memory-usage vs time tradeoff there though.
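
For comparison, here's a small C++ sketch of what that full byte-product table might contain (the (a << 8) | b index layout is my assumption - any layout that pairs the two operand bytes works, since multiplication is commutative - and, as the single byte load above suggests, I've assumed the entries hold the low byte of each product):

#include <cstdint>

// 64KB table holding the low byte of every possible 8-bit product.
static uint8_t lowByteProduct[65536];

void initByteTable()
{
    for (int a = 0; a < 256; ++a)
        for (int b = 0; b < 256; ++b)
            lowByteProduct[(a << 8) | b] = (uint8_t)(a * b);
}

uint8_t mulBytes(uint8_t a, uint8_t b)
{
    return lowByteProduct[(a << 8) | b];
}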

There are some more nice things about these algorithms:

  1. There are more choices about which registers to use for the operands and results.
  2. If you're doing a lot of multiplies and want the results of them all scaled by a constant factor, you can build this into the table.
  3. If you're summing the results of several multiplies (e.g. computing a dot product) you can do it right in the accumulator (just change the "mov" to an "add").

The downsides (apart from the memory usage and time to initialize the table) are that you don't get the high byte/word for free and that it only works for signed multiplies.

How to get away with disabling DRAM refresh

Monday, October 29th, 2012

On the original IBM PC 5150 (and its mostly electrically-equivalent derivatives, the 5155 and 5160) the operation of the bus (the data channel between the CPU and its memory and peripherals) is interrupted 2,187,500 times every 33 seconds (a rate of about 66KHz) for 11/13,125,000 of a second each time (i.e. 4 out of every 72 CPU cycles). During that time, very little of the machine can operate - no RAM can be read or written and no peripherals can be accessed (the CPU might be able to continue doing its thing if it's in the middle of a long calculation, and the peripherals will continue to operate - it's just that nothing can communicate with anything else).

Why does this happen? Well, most computers (including the one this blog post is about) use DRAM (Dynamic RAM) chips for their main memory, as it's fast and much cheaper than the slightly faster SRAM (Static RAM) chips. That's because each DRAM bit consists of a single capacitor and transistor as opposed to the 4 or more transistors that make up a bit of SRAM. That capacitor saves a lot of hardware but it has a big disadvantage - it discharges with time. So DRAM cells have to be "refreshed" periodically (every 2ms for the 16 kbit 4116 DRAMs in the original 5150) to maintain their contents. Reading a bit of DRAM involves recharging the capacitor if it's discharged, which refreshes it.

But a computer system won't generally read every bit of RAM in any given interval. In fact, if it's sitting in a tight idle loop it might very well not access most of the memory for many minutes at a time. But we would be justified in complaining if our computers forgot everything in their memories whenever they were left idle! So all general-purpose computers using DRAM will have some kind of circuitry for automatically accessing each memory location periodically to refresh it. In the 5150, this is done by having channel 1 of the 8253 Programmable Interval Timer (PIT) hooked up to the Direct Memory Access (DMA) controller's channel 0. The BIOS ROM programs the PIT for the 66KHz frequency mentioned above, and programs the DMA controller to read a byte each time it's triggered on channel 0. The address it reads increments with each access, counting up from 0 to 65,535 and then wrapping back to 0 again.

If the DRAM needs to be refreshed every 2ms, why does the refresh circuit run at 66KHz and not 500Hz, or for that matter 8.192MHz? To answer that question, we need to know something about how the memory is organized. The original 5150 had banks of 8 chips (plus a 9th for parity checking). Each chip is 16 kbit, so a bank is 16KBytes. If you had a full 640KB of RAM organized this way, that would be 40 banks or 360 separate chips! (By the time that much memory became common, we were mostly using 64 kbit chips though.) Within each chip, the 16 kbits are organized in a grid of 128 "rows" and 128 "columns". To read a bit, you input the "row" address, then the "column" address, then read back the result (hence the chips could have just 16 pins each, as each address pin corresponds to both a "row" bit and a "column" bit). Happily, whenever a row is accessed, all the DRAM cells on that row are refreshed no matter what column address is ultimately accessed. Also, the low 7 bits of the physical byte address correspond to rows and the next 7 bits correspond to columns (the remaining 6 bits correspond to the bank address). So actually you could quite happily get away with just refreshing addresses 0-127 instead of addresses 0-65,535 on this machine (though there was a good reason for cycling through the full 64KB, as we'll see later).

To ensure that they meet tolerances, electronic components (including DRAM chips) are manufactured with certain margins of error, which means that often one could get away with reprogramming the PIT to reduce the DRAM refresh rate a bit in order to squeeze a little bit more performance out of these old machines - it was a common hack to try, and I remember trying it on the family computer (an Amstrad PC1512) after reading a little bit about DRAM refresh in a computer magazine. I seem to recall that I got it up from the standard 1/18 to maybe 1/19 or 1/20 before it became unstable, but the performance improvement was too small to notice, so the little .COM file I made with DEBUG never made it as far as my AUTOEXEC.BAT.
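
The whole hack fits in a couple of port writes to the 8253 (control register at port 0x43, counter 1 data at port 0x41). Here's a hedged sketch of it, assuming the Borland-style outportb function from <dos.h> - the same two OUT instructions are what a little DEBUG-made .COM file would contain:

#include <dos.h>   // outportb(), Borland-style port I/O

// Reprogram 8253 counter 1 (the DRAM refresh request timer) with a larger divisor.
// 0x54 = select counter 1, read/load LSB only, mode 2, binary. The BIOS default
// divisor is 18 (~66KHz), so 19 or 20 slows the refresh requests down slightly.
void slowRefresh(unsigned char divisor)
{
    outportb(0x43, 0x54);     // control word to the PIT
    outportb(0x41, divisor);  // new count for counter 1
}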

For many of the timing experiments and tight loops I've been playing with on my XT, I've been disabling DRAM refresh altogether. This squeezes out a bit more performance which is nice but more importantly it makes the timings much more consistent (which is essential for anything involving lockstep). However, whenever I've told people about this the reaction is "doesn't that make the machine crash?" The answer is "no, it doesn't - if you're careful". If you turn off the refresh circuitry altogether you have to be sure that the program you're running accesses each DRAM row itself, which happens automatically if you're scanning through consecutive areas of RAM at a rate of more than 66KB/s, or for that matter if you've done enough loop unrolling that your inner loop covers more than 127 consecutive bytes of code. Since these old machines don't have caches as such, unrolled loops are almost always faster than rolled up ones anyway, so that's not such a great hardship.

Not all of the machines I'm tinkering with use 4116 DRAM chips. Later (64KB-256KB) 5150 motherboards and XTs use 4164 (64 kbit) chips, and modified machines (and possibly also clones) use 41256 (256 kbit) chips. The principles are exactly the same except that these denser chips are arranged as 256x256 and 512x512 bits respectively, so there are 8 or 9 row bits, and instead of accessing 128 consecutive bytes every 2ms you have to access 256 consecutive bytes every 4ms or 512 consecutive bytes every 8ms respectively (the PIT and DMA settings were kept the same for maximum compatibility - fortunately the higher density DRAMs decay more slowly so this is possible). So when disabling DRAM refresh, one should be sure to access 512 consecutive bytes every 2ms, since that covers every row often enough for all 3 DRAM types.

The cycle-exact emulator I'm writing will be able to keep track of how long it's been since each DRAM row has been refreshed and will emit a warning if a row is unrefreshed for too long and decays. That will catch DRAM refresh problems that are missed due to the margins of error in real hardware, and also problems affecting only 41256 chips (my machine uses 4164s).

Modern PCs still use DRAM, and still have refresh cycles, though the overhead has gone down by an order of magnitude and the exact mechanisms have changed a few times over the years.

Variadic templates in ALFE

Wednesday, October 24th, 2012

Some of the types in ALFE are templates which can accept different numbers of parameters - in other words, variadic templates. How does this fit in with the Kind system I wrote about before?

As well as the Kind constructors "<>" (template) and "" (type), we need a third Kind constructor which represents zero or more Kinds in a sequence suitable for template arguments. Let's call this Kind constructor "...". As with C++ templates, we should allow the members of the sequence to have arbitrary kind. So <...> is the kind of a template that takes zero or more arguments whose kinds are type, while <<>...> is the kind of a template that takes zero or more arguments whose kinds are <> (a single argument template whose argument is of kind type). The kind of Function is <, ...> (the comma is required because Function has one or more arguments rather than zero or more). More complicated kinds like <<...>, ...> are also possible.
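
Roughly, in C++ terms (an analogy on my part, not part of ALFE), these kinds correspond to the following template parameter forms:

template<typename T> struct A;                          // kind <> : one argument of kind type
template<typename... Ts> struct B;                      // kind <...> : zero or more arguments of kind type
template<template<typename> class... Ts> struct C;      // kind <<>...> : zero or more single-argument templates
template<typename T, typename... Ts> struct D;          // kind <, ...> : one or more arguments, like Function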

To actually create a variadic template, we need to implement "recursive case" and "base case" templates and have the compiler choose between them by pattern matching, just as in C++. So the definition of Tuple might look something like:

Tuple<@T, ...> = Structure { T first; Tuple<...> rest; };
Tuple<> = Structure { };
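
For comparison, the C++11 equivalent uses a declared-but-undefined primary template, a partial specialization for the recursive case and an explicit specialization for the base case:

template<typename... Ts> struct Tuple;          // primary template, never defined

template<typename T, typename... Rest>
struct Tuple<T, Rest...>                        // recursive case
{
    T first;
    Tuple<Rest...> rest;
};

template<> struct Tuple<>                       // base case
{
};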

Is that enough? Well, I believe this gives ALFE's kind system parity with C++'s. However, there is one more possible Kind constructor that I can imagine - the Kind constructor which can represent any Kind at all - let's call it $. A template with an argument of this kind (i.e. has kind <$>) can be instantiated with an argument of any kind. So you could have:

Foo<@T> = { ... };
Foo<@T<>> = { ... };
Foo<@T<@R<$>>> = { ... };

The first template is used when Foo is instantiated with a type, the second is used when it's instantiated with a template of kind <> and the third is used when it's instantiated with any other single-argument template, no matter what the kind of the argument is. The third template could then pass R as an argument to another instantiation of Foo and thereby "peel apart" the kind of the argument (as long as none of the kinds involved have multiple arguments).

By combining $ with ... I think we could potentially peel apart completely arbitrary kinds. However, I'm not sure if this is worth implementing since I can't think of any use for such a thing. Still, it's good to have some idea about how to proceed if I do eventually come across such a use!

Classes vs structures in ALFE

Tuesday, October 23rd, 2012

A lot of the time in my C++ code I find myself writing class hierarchies. However, because of the way that inheritance works in C++, you need a pointer in order to refer to an object of a particular type or any of its subtypes. In my code these pointers tend to be reference-counted smart pointers so that I don't have to think about lifetime issues, but the principle is the same.

Then, because I don't want to have to keep typing "Reference<Implementation> foo" everywhere, I encapsulate these smart pointers into value classes which provide functions that call the (virtual) functions in the implementation class. So I end up with code like this:

class Foo
{
public:
    Foo() : _implementation(new Implementation) { }
    void frob() { _implementation->frob(); }
protected:
    class Implementation : public ReferenceCounted
    {
    public:
        virtual void frob() { ... }
    };
    Foo(Implementation* implementation) : _implementation(implementation) { }
private:
    Reference<Implementation> _implementation;
};
 
class Bar : public Foo
{
public:
    Bar() : Foo(new Implementation) { }
protected:
    class Implementation : public Foo::Implementation
    {
    public:
        virtual void frob() { ... }
    private:
        ...
    };
};

That's more boilerplate than I want to write for every class. I'd much rather write code that looks more like C# or Java:

class Foo
{
    public Foo() { }
    public virtual void frob() { ... }
};
 
class Bar : Foo
{
    public Bar() { }
    public override void frob() { ... }
    ...
};

So I'm thinking that in ALFE I should have some syntax that looks close to the latter and behaves like the former. I'm leaning towards making "Structure" behave like a C++ struct or class (i.e. a value type), and "Class" behave like a C# class (reference semantics). So:

Foo = Structure : Reference<Implementation> {
    Foo() : base(new Implementation) { }
    Void frob() { referent()->frob(); }
access<Foo>:
    Foo(Implementation* implementation) : base(implementation) { }
    Implementation = Structure : ReferenceCounted {
        virtual Void frob() { ... }
    };
};
 
Bar = Structure : Foo {
    Bar() : Foo(new Implementation) { }
access<Bar>:
    Implementation = Structure : Foo.Implementation {
        virtual Void frob() { ... }
    access<>:
        ...
    };
};

is the same as:

Foo = Class {
    Foo() { }
    virtual Void frob() { ... }
};
 
Bar = Class : Foo {
    Bar() { }
    virtual Void frob() { ... }
access<>:
    ...
};

There is a comparison to be made here between POD (Plain Old Data) and non-POD classes in C++ (though it's not quite the same because the Implementation classes would be non-POD in C++).

Obviously there's a lot of details here which would need to be fleshed out but I think something along these lines would be useful.

The Label type in ALFE

Monday, October 22nd, 2012

This is part of the ALFE types series.

The Label type holds an address which can be passed to the goto statement, and the possible values are defined by the actual labels that are in scope. It's somewhat like a function pointer, except that there are no arguments and no return value. It's based on the GCC labels as values extension, except there's a proper type for it instead of void*.
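
For reference, this is roughly what the GCC extension being improved upon looks like (a small illustrative sketch; the function and label names are made up):

// GNU C/C++ "labels as values": && takes the address of a label as a void*,
// and "goto *expr" jumps to it.
int dispatch(int opcode, int x)
{
    static void* const handlers[] = { &&add_one, &&negate };
    goto *handlers[opcode];
add_one:
    return x + 1;
negate:
    return -x;
}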

Labels in ALFE are local, which means that the scope of a label is the block in which it is defined. So you can't use goto to jump into a block, only out of one or within the current block. Since defining a variable creates a new block with the scope of that variable, this eliminates the problem of goto skipping variable constructors. Unless you stash a label into a Label variable and then execute the goto from outside the label's scope. That's just as evil as taking the address of a local variable and then accessing it via the pointer when the object is out of scope, though - don't do that.

I think these restrictions eliminate the most heinous abuses of goto, while still allowing the useful cases (which are themselves pretty rare).

The Variant type in ALFE

Sunday, October 21st, 2012

This is part of the ALFE types series.

Variant is a type that can hold any value. It's the type of all variables in a dynamically typed language. It's a bit like a sum type consisting of all other types summed together.

In practice it's probably not very useful, since to do anything with a Variant variable you have to cast it to some type, and since there's a finite number of casts that you can write in one program, you might as well just make it a sum type of those. Still, it might come in handy for translating some programs written in other languages into ALFE, or even if you just don't want to have to think too much about what the possible types are.

The Bottom type in ALFE

Saturday, October 20th, 2012

This is part of the ALFE types series.

Bottom is a special type that has no values associated with it. So declaring a variable or member of type Bottom is forbidden. It's useful as the return type of a function that never returns normally (i.e. a function that always throws an exception or terminates the process).

[Bottom] is also another name for Null (or maybe it's a different one-element type, I haven't decided yet).

Another way of writing Bottom is Either<> (i.e. an algebraic data type with no types being summed).

I might end up calling it something different - the name has slightly too many connotations of anatomy and (worse) type theory for my taste. I'm not sure what would be better, though. Empty isn't quite explanatory enough, and NoReturn doesn't quite capture the full range of Bottom's behavior.

Fixed-length arrays in ALFE

Friday, October 19th, 2012

This is part of the ALFE types series.

Foo[2], Foo[3], Foo[4] etc. are arrays whose length is fixed at compile time (not to be confused with [Foo] whose length is unknown and possibly undefined even at runtime). Passing around a value of array type copies the entire array (arrays don't decay to pointers). Operations defined on the elements can also be applied (element-wise) to the array itself, so you can write things like:

Int[2] centre(Int[2] topLeft, Int[2] bottomRight)
{
    return (topLeft + bottomRight)/2;
}
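
For comparison, here's roughly what it takes to get the same element-wise behaviour in C++ with std::array (a sketch of the semantics described above, nothing ALFE-specific):

#include <array>
#include <cstddef>

// Element-wise addition and scalar division, so the centre() example works on std::array.
template<typename T, std::size_t N>
std::array<T, N> operator+(const std::array<T, N>& a, const std::array<T, N>& b)
{
    std::array<T, N> result = {};
    for (std::size_t i = 0; i < N; ++i)
        result[i] = a[i] + b[i];
    return result;
}

template<typename T, std::size_t N>
std::array<T, N> operator/(const std::array<T, N>& a, T divisor)
{
    std::array<T, N> result = {};
    for (std::size_t i = 0; i < N; ++i)
        result[i] = a[i] / divisor;
    return result;
}

std::array<int, 2> centre(std::array<int, 2> topLeft, std::array<int, 2> bottomRight)
{
    return (topLeft + bottomRight) / 2;
}

(Note that C++ can parameterize these templates directly on the length N; the next paragraph describes how ALFE emulates that.)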

Foo[2] also has a template form, but since template arguments can only be types and not integers like they can in C++, there is a twist: Array<Foo, Class {Int n=2;}>. Any type can be used for the second argument as long as it has a public member named n of type Int with value known at compile time. This allows templates to work generically with arrays of different lengths.

Foo[n] can be indexed by an integer to yield an (RValue or LValue) Foo. It can also be indexed by a sequence (whose values are known at run time) to yield a slice of the array.

An array can be coerced to a sequence, and the compiler will box up the element count and pointer as necessary.

Sequence types in ALFE

Thursday, October 18th, 2012

This is part of the ALFE types series.

[Foo] is the type of an immutable sequence of Foo values (the sequence itself is immutable but the values in it are not necessarily immutable). It is syntactic sugar for Sequence<Foo>. It has methods "Boolean isEmpty()", "Foo first()" and "[Foo] rest()". As usual the actual implementation is up to the compiler (and may even be different for the same sequence in different parts of the same program). Values like [], [foo] and [foo1, foo2] are allowed.
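
A rough C++ sketch of that interface might look like the following (my own guess at a mapping, using a shared pointer for rest() since the ALFE compiler is free to represent a sequence however it likes):

#include <memory>

// Immutable-sequence interface: isEmpty(), first() and rest(), as described above.
template<typename T> class Sequence
{
public:
    virtual ~Sequence() { }
    virtual bool isEmpty() const = 0;
    virtual T first() const = 0;
    virtual std::shared_ptr<Sequence<T>> rest() const = 0;
};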

There is also a special operator for creating sequences of consecutive integers: "..". It's closed on the left and open on the right, so 0..5 is equivalent to [0, 1, 2, 3, 4]. Infinite sequences like 0.. are also allowed, since there's no need in the Sequence<> interface to know how many elements there are up front. I'd like to have sequences that count up in steps other than 1, but I haven't decided on a good syntax for that yet. Python's extended slicing syntax is quite nice, but I think ".." is more natural than ":", and I already have quite a lot of meanings for the latter.

The sequence type can also be used as the return type for a generator function:

[Int] primes()
{
    Primes = Class : [Int] {
        construct(Int m = 2) { n = m; }
        Boolean isEmpty() { return false; }
        Int first() { return n; }
        [Int] rest()
        {
            for (Int m in (n+1)..) {
                Int i = 2;
                while (m/i >= i) {
                    if (m % i == 0)
                        break;
                    ++i;
                }
                done
                    return Primes(m);
            }
        }
        Int n;
    };
    return Primes;
}

Disregarding the use of trial division instead of a more efficient sieve of Eratosthenes, it may seem terribly inefficient to create a whole new Primes object each time we get the next element of the sequence. However, the expectation is that the compiler will inline the entire rest() function into the calling loop so that all the object creation and destruction is optimized away.

Product types in ALFE

Wednesday, October 17th, 2012

This is part of the ALFE types series.

(Foo, Bar) is a "product" type, aka Tuple<Foo, Bar> or a structure with unnamed fields. Tuple is another variadic template, so (Foo, Bar, Baz) is syntactic sugar for Tuple<Foo, Bar, Baz>. Values of Tuple type are written in the same way as the type itself, except with values instead of types inside the parentheses: (foo, bar). You can have tuples of LValues as well as RValues, which is handy for functions that return Tuple values:

(foo, bar) = getFooBar();

You can also have a tuple of declarations:

(Foo foo, Bar bar) = getFooBar();
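
The closest C++11 analogue of those last two examples uses std::tuple and std::tie (shown only for comparison; Foo, Bar and getFooBar are placeholders):

#include <tuple>

struct Foo { };
struct Bar { };

std::tuple<Foo, Bar> getFooBar() { return std::make_tuple(Foo(), Bar()); }

void example()
{
    Foo foo;
    Bar bar;
    std::tie(foo, bar) = getFooBar();   // like ALFE's "(foo, bar) = getFooBar();"
}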

(Foo) is identical to Foo just as it is in C, but in ALFE it's also a 1-tuple (so 1-tuplifying is really just a no-op). There is also a single 0-tuple type () aka Tuple<> which is another name for Null. There is a single value inhabiting this type, which is also (by slight abuse of notation that took me a while to determine wasn't too ambiguous or confusing) called () and Null (a type specifier can be used as a value if it is default-constructible).