Contents | Return to the Tour ==>

The Virtual Machine (cont.)

Experimental Fast Floating Point

One of the classic performance problems in Smalltalk is double-precision (64-bit) floating point arithmetic. All numbers in Smalltalk are objects, and on current machines object references are 32 bits, with 1 or 2 bits reserved for tag bits, which is far too small to encode a double-precision floating point value. This is not a problem for integers, since 30 or 31 bits is enough to represent values up to at least a billion. But a single-precision floating point value with the tag bits subtracted, which is the most that fits in an immediate object reference, has far too few digits of precision (about 7) for most floating point applications. Floating point values therefore have to be allocated as real objects, and since a new object must be allocated for the result of every operation, double-precision calculations pay a massive performance penalty of one to two orders of magnitude.
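
To make the tagging tradeoff concrete, here is a minimal sketch in C++ (purely illustrative, not Strongtalk VM code; the names oop, kSmallIntTag, encodeSmallInt and so on are invented for this example). It shows a 32-bit reference whose low bit distinguishes an immediate SmallInteger from a heap pointer: a 31-bit payload easily holds integers around a billion, but there is no room for the 64 bits of a double, which is why doubles must be boxed.

    // Purely illustrative sketch, not Strongtalk VM code: a 32-bit tagged
    // reference whose low bit marks an immediate SmallInteger (0) or a
    // pointer to a heap object (1), leaving 31 bits of payload.
    #include <cstdint>
    #include <cstdio>

    using oop = uint32_t;

    const oop kSmallIntTag = 0x0;  // low bit 0 => immediate integer
    const oop kPointerTag  = 0x1;  // low bit 1 => heap object pointer

    // Shift the value left to make room for the tag bit; only values that
    // fit in 31 signed bits can be encoded this way.
    oop     encodeSmallInt(int32_t v) { return (static_cast<uint32_t>(v) << 1) | kSmallIntTag; }
    int32_t decodeSmallInt(oop ref)   { return static_cast<int32_t>(ref) >> 1; }

    int main() {
        oop billion = encodeSmallInt(1000000000);      // fits easily in 31 bits
        std::printf("%d\n", decodeSmallInt(billion));  // prints 1000000000
        // A double needs 64 bits, and even a 32-bit float minus the tag bit
        // keeps only about 7 significant digits, so floats are boxed on the
        // heap instead, at the cost of one allocation per arithmetic result.
        return 0;
    }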

The Strongtalk system includes an experimental scheme for performing fast double-precision arithmetic without allocation. The scheme was not developed very far, and it has several drawbacks and limitations.

However, if a significant number of operations are combined in a single method, it can produce huge speedups by eliminating all the allocation for intermediate results. Click here to launch a primitive sample application that uses fast floating point to compute the Mandelbrot set. Use the "Control/Compute" menu item to run the program. Using the options menu, you can compare the performance to both normal Smalltalk "boxed" floats and to a C routine. The compute time in milliseconds is printed in the Transcript.
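
As a hedged illustration of where the speedup comes from, the following C++ sketch (the general idea only, not the actual Strongtalk scheme; BoxedFloat, box, stepBoxed and stepUnboxed are invented names) compares one Mandelbrot-style update written with a boxed result per operation against the same computation keeping its intermediates as raw doubles and boxing only the value that leaves the method.

    // Illustrative sketch only, not the Strongtalk implementation: boxing a
    // Float object per operation versus keeping intermediates unboxed.
    #include <memory>

    struct BoxedFloat { double value; };   // stand-in for a heap-allocated Float
    using FloatRef = std::shared_ptr<BoxedFloat>;
    FloatRef box(double v) { return std::make_shared<BoxedFloat>(BoxedFloat{v}); }

    // Ordinary "boxed" style: every arithmetic result allocates a new object.
    FloatRef stepBoxed(FloatRef zr, FloatRef zi, FloatRef cr) {
        FloatRef zr2  = box(zr->value * zr->value);      // allocation
        FloatRef zi2  = box(zi->value * zi->value);      // allocation
        FloatRef diff = box(zr2->value - zi2->value);    // allocation
        return box(diff->value + cr->value);             // allocation
    }

    // Fast style: intermediates stay in registers or stack slots as raw
    // doubles; only the result that escapes the method is boxed.
    FloatRef stepUnboxed(double zr, double zi, double cr) {
        double zr2  = zr * zr;                           // no allocation
        double zi2  = zi * zi;                           // no allocation
        double diff = zr2 - zi2;                         // no allocation
        return box(diff + cr);                           // single allocation
    }

    int main() {
        FloatRef r = stepUnboxed(0.5, 0.25, -0.75);      // one iteration's real part
        return r->value < 2.0 ? 0 : 1;
    }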

Return to the Tour ==>