Thursday, November 30, 2017

Univac Document Archive: 1107 EXEC II Manual Added

I have added the following document to the Univac 1107 section of the Univac Document Archive. This is a PDF of a scanned paper document in my collection. This document is more than fifty years old and may appear wonky to contemporary eyes: text is sometimes misaligned on the page and multiple fonts are intermixed like a ransom note. These are not artefacts of scanning—it's how the document actually appears. Recall that only around 38 Univac 1107s were sold, so documents describing it were produced in small numbers and didn't, in the eyes of Univac, merit the expense of the high production values of contemporary IBM manuals.

EXEC II was originally developed by Computer Sciences Corporation (CSC) and dubbed the “1107 Monitor System”. When Univac adopted it as an alternative high-performance serial batch system, they renamed it “EXEC II”, redesignating their own multi-tasking system “EXEC I”. This manual was produced from CSC's manual by pasting corrections which changed the name of the system and some of its terminology over the original text. The replacement text was typewritten in a different font, and is obvious. Although EXEC II ran a single user job at a time, it had the ability to operate peripherals such as card readers, card punches, and line printers simultaneously with computation via interrupt-driven background tasks. CSC called these “parasites”, but Univac deemed this terminology infelicitous and renamed them “symbionts”, crudely replacing the text throughout the manual. Well, almost everywhere. Eagle-eyed readers will be amused to discover they missed a few.

EXEC II was also available for the Univac 1108, and as late as 1973 I worked for a site which still ran it. The Univac Document Archive also includes a 1966 1108 EXEC II manual which has been entirely re-set from the 1963 original and includes additional information. There was little difference between the 1107 and 1108 versions of EXEC II: on the 1108 the system did not take advantage of the 1108's memory relocation or guard mode facilities, and remained vulnerable to rogue programs which could destroy it.

Posted at 23:36 Permalink

Friday, November 24, 2017

Floating Point Benchmark: Modula-2 Language Added, Python 3.x Added

I have posted a new edition of the floating point benchmark collection which adds the Modula-2 language to those in which the benchmark has been implemented. In addition, this release adds a version of the benchmark compatible with Python version 3.x (a Python 2.x version of the benchmark has been included in the collection since November 2006).

Modula-2 was developed by Niklaus Wirth between 1977 and 1985. He viewed the language as the successor to Pascal and Modula, and used it for all system and application programming for his Lilith workstation. Modula-2 extended the earlier languages by supporting separate compilation and encapsulation with a module mechanism, coroutines for concurrent processes, and facilities for access to low-level data structures for system programming. Several versions of Modula-2 were released between 1983 and 1988, refining the language specification. In 1996 and 1998, ISO issued three standards under ISO/IEC 10514 for the base language and extensions for object-oriented programming and generics.

Modula-2 is a conventional imperative programming language. Programmers familiar with Pascal will have little difficulty mastering Modula-2. The language has been used for a number of industrial projects, including embedded systems such as General Motors' first engine control computer.

Philippe Guiochon developed a Modula-2 version of the benchmark, starting from the FreeBASIC version of the program. He contributed two versions of the program for different compilers: J.P.I. TopSpeed v3.1 for MS-DOS and Excelsior XDS v2.60 beta for Windows 32. Due to small differences in the libraries supported by these compilers, two different versions of the benchmark source code are supplied as fbench_JPI.mod and fbench_XDS.mod. M. Guiochon ran timing tests on a variety of computers, which are included in the source code for the benchmarks.

Since I do not have a machine which can run these programs (Fourmilab is Microsoft-free), I was unable to run an archival benchmark and comparison against C for the language comparison table. Since there is a GNU Modula-2 compiler, built atop the GCC (GNU Compiler Collection) infrastructure, that supports the ISO dialect of the language, I decided to see if I could get the code working with it.

I used fbench_JPI.mod as the point of departure to develop a version compatible with GNU Modula-2 in ISO mode with its ISO libraries. I restructured the program, removed the INTRIG code (I'm interested in comparing languages, not their library mathematical functions), and tried to make it more in keeping with the style of Modula-2 than its Basic ancestor. This program, fbench.mod, compiled with GNU Modula-2 version gm2-1.6.4, which is built on gcc-6.4.0 with:

    gm2 -O3 -fiso -flibs=iso,pim -Wpedantic fbench.mod -o fbench

is the basis for the timing tests.

After a preliminary run to estimate an iteration count, I made five runs of 175,495,723 iterations on an idle machine with a mean run time of 294.85 seconds, or 1.6801 microseconds per iteration. This compares with the reference C implementation, which on GCC 5.4.0 ran 166,051,660 iterations in a mean time of 296.536 seconds, or 1.7858 microseconds per iteration.

Thus the execution time is 0.941 times that of C (Modula-2 is about 6% faster than C). Note that this is almost precisely the speed ratio of the C++ version of the benchmark (0.939) compiled with GCC compared to the C implementation compiled with the same compiler. This makes sense, since the back-end of the compiler is the same for both the G++ and GM2 compilers, and the structure of the programs they're compiling is similar.

Philippe Guiochon then re-tested the fbench.mod program with the Excelsior XDS Modula-2 compiler on Windows 32, and encountered both compile errors and accuracy problems. The XDS compiler defines the REAL type to be 32-bit single-precision floating point, which is insufficient for the accuracy needs of this algorithm, and requires variables to be declared as LONGREAL to use 64-bit double precision (which then requires importing the trigonometric functions from the LongMath module). In addition, this compiler does not permit enumeration types to be used as array indices or bounds in FOR statements, and requires them to be explicitly converted to CARDINAL with the ORD() function. Using ORD() is ugly but remains compatible with GNU Modula-2; declaring variables as LONGREAL, however, causes that compiler to use the Intel 80-bit floating point type (GCC's “long double”) instead of the standard 64-bit data type it uses for REAL. Since this may affect the timing (and still other compilers may define REAL and LONGREAL in yet other ways), I have kept this version separate as fbench_iso_XDS.mod and used the original fbench.mod for the timing tests with GNU Modula-2.

Source code for all of these programs is included in the modula2 directory of the distribution.

Completely unrelated to Modula-2, this release of the benchmark collection adds an implementation of the benchmark compatible with version 3 of the Python language. The original version of the Python benchmark, developed in November 2006, was based upon version 2 of Python. Python version 3 made some incompatible changes to the syntax and semantics of the language, requiring changes in many programs. The impact of these changes on the benchmark was relatively minor, requiring only a handful of small modifications.

With these changes, the benchmark ran with no errors or warnings on Python 3.5.2 and PyPy 5.9.0-beta0, which is based upon Python 3.5.3. The timings were comparable to those on Python 2.7.6 and PyPy 2.2.1, so I did not bother re-running archival timing tests.

I have restructured the python directory in the distribution to contain two subdirectories, python2 and python3, which contain fbench.py files for their respective versions of the language.

Download floating point benchmark collection

Language            Relative Time   Details
C                   1               GCC 3.2.3 -O3, Linux
JavaScript          0.372           Mozilla Firefox 55.0.2, Linux
                    0.424           Safari 11.0, MacOS X
                    1.334           Brave 0.18.36, Linux
                    1.378           Google Chrome 61.0.3163.91, Linux
                    1.386           Chromium 60.0.3112.113, Linux
                    1.495           Node.js v6.11.3, Linux
Chapel              0.528           Chapel 1.16.0, -fast, Linux
                    0.0314          Parallel, 64 threads
Visual Basic .NET   0.866           All optimisations, Windows XP
C++                 0.939           G++ 5.4.0, -O3, Linux, double
                    0.964           long double (80 bit)
                    31.00           __float128 (128 bit)
                    189.7           MPFR (128 bit)
                    499.9           MPFR (512 bit)
Modula-2            0.941           GNU Modula-2 gm2-1.6.4 -O3, Linux
FORTRAN             1.008           GNU Fortran (g77) 3.2.3 -O3, Linux
Pascal              1.027           Free Pascal 2.2.0 -O3, Linux
                    1.077           GNU Pascal 2.1 (GCC 2.95.2) -O3, Linux
Swift               1.054           Swift 3.0.1, -O, Linux
Rust                1.077           Rust 0.13.0, --release, Linux
Java                1.121           Sun JDK 1.5.0_04-b05, Linux
Visual Basic 6      1.132           All optimisations, Windows XP
Haskell             1.223           GHC 7.4.1 -O2 -funbox-strict-fields, Linux
Scala               1.263           Scala 2.12.3, OpenJDK 9, Linux
FreeBASIC           1.306           FreeBASIC 1.05.0, Linux
Ada                 1.401           GNAT/GCC 3.4.4 -O3, Linux
Go                  1.481           Go version go1.1.1 linux/amd64, Linux
Julia               1.501           Julia version 0.6.1 64-bit -O2 --check-bounds=no, Linux
Simula              2.099           GNU Cim 5.1, GCC 4.8.1 -O2, Linux
Lua                 2.515           LuaJIT 2.0.3, Linux
                    22.7            Lua 5.2.3, Linux
Python              2.633           PyPy 2.2.1 (Python 2.7.3), Linux
                    30.0            Python 2.7.6, Linux
Erlang              3.663           Erlang/OTP 17, emulator 6.0, HiPE [native, {hipe, [o3]}]
                    9.335           Byte code (BEAM), Linux
ALGOL 60            3.951           MARST 2.7, GCC 4.8.1 -O3, Linux
PL/I                5.667           Iron Spring PL/I 0.9.9b beta, Linux
Lisp                7.41            GNU Common Lisp 2.6.7, Compiled, Linux
                    19.8            GNU Common Lisp 2.6.7, Interpreted
Smalltalk           7.59            GNU Smalltalk 2.3.5, Linux
Ruby                7.832           Ruby 2.4.2p198, Linux
Forth               9.92            Gforth 0.7.0, Linux
Prolog              11.72           SWI-Prolog 7.6.0-rc2, Linux
                    5.747           GNU Prolog 1.4.4, Linux (limited iterations)
COBOL               12.5            Micro Focus Visual COBOL 2010, Windows 7
                    46.3            Fixed decimal instead of computational-2
Algol 68            15.2            Algol 68 Genie 2.4.1 -O3, Linux
Perl                23.6            Perl v5.8.0, Linux
BASICA/GW-BASIC     53.42           Bas 2.4, Linux
QBasic              148.3           MS-DOS QBasic 1.1, Windows XP Console
Mathematica         391.6           Mathematica 10.3.1.0, Raspberry Pi 3, Raspbian

Posted at 22:25 Permalink

Tuesday, November 21, 2017

Floating Point Benchmark: Julia Language Added

I have posted a new edition of the floating point benchmark collection which adds the Julia language.

Julia is a language intended for numerical computation in science and engineering. It combines aspects of object orientation, functional programming, and conventional imperative languages. It has a dynamic type system, automatic storage management with garbage collection, macros, and support for parallel processing in both the single instruction multiple data (SIMD) and symmetrical multiprocessing paradigms. An extensive mathematical function library is included, and support for complex numbers, multiple precision integers and floating point, and vector and matrix algebra are built in.

An interactive evaluator is provided, and programs are compiled into machine code with a just-in-time (JIT) compiler based upon the LLVM compiler infrastructure.

Implementation of the benchmark was straightforward, and based upon the C++ version. Some restructuring was required because Julia does not provide the class and method mechanism of C++, nor does it allow passing pointers as function arguments.

As with all ports of the benchmark, I tried to do things in the style of the language. This means I adopted a style much like that of the Haskell and Scala benchmarks, using a trace context to manage the state as a ray is traced through the surfaces of the design. The transitSurface() function, which accounts for the bulk of the benchmark's run time, takes a TraceContext as an argument and returns a TraceContext updated to reflect the propagation of the ray through the current surface. This means on each call to the function a new TraceContext is allocated and one discarded to be cleaned up by garbage collection. You might think this overhead could be reduced by replacing the dynamically allocated TraceContext with global mutable variables, but that makes things dramatically worse. The program fbench_static.jl uses static variables as the original C and Fortran versions do, and runs almost four and a half times slower than fbench.jl.
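To make the pattern concrete, here is a minimal C++ sketch of the same style. The names and the placeholder geometry are illustrative only, not the actual definitions in fbench.jl; the point is that each surface transit builds and returns a fresh context rather than mutating shared state.

    #include <cmath>
    #include <cstdio>

    // Hypothetical stand-in for the trace context described above.
    struct TraceContext {
        double object_distance;    // where the ray crosses the axis
        double ray_height;         // height of the ray at the surface
        double axis_slope_angle;   // slope of the ray relative to the axis
    };

    // Returns a freshly built context; the argument is never mutated.
    // (In the Julia version, it is this per-call allocation which the
    // garbage collector must later reclaim.)
    TraceContext transitSurface(const TraceContext &ctx, double curvature) {
        TraceContext next = ctx;
        // Placeholder for the real ray-transfer geometry.
        next.axis_slope_angle = ctx.axis_slope_angle + curvature * ctx.ray_height;
        next.object_distance = ctx.ray_height / std::tan(next.axis_slope_angle);
        return next;
    }

    int main() {
        TraceContext ctx = { 0.0, 2.0, 0.1 };
        double surfaces[] = { 0.02, -0.01, 0.03 };  // three notional surfaces
        for (double c : surfaces)
            ctx = transitSurface(ctx, c);
        std::printf("Axial intercept: %.6f\n", ctx.object_distance);
        return 0;
    }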

I did some preliminary timing tests to try different compiler options, none of which seemed to make much difference in run time. I ran the timing tests with code compiled with the “-O2 --check-bounds=no” options; the latter suppresses array bounds checking. After initial runs to determine an iteration count resulting in a run time of around five minutes, I ran five timing tests. Julia's @time macro was used to time the inner loop. For an iteration count of 110,327,622, the mean time of five runs on an idle system was 295.681 seconds, or 2.68 microseconds per iteration.

My runs of the reference C benchmark on GCC 5.4.0 had a mean time of 296.536 seconds for 166,051,660 iterations, or 1.7858 microseconds per iteration. Thus, for this benchmark, the run time for Julia was 1.501 times that of C or, equivalently, Julia was about 50% slower than C.

Download floating point benchmark collection

Language            Relative Time   Details
C                   1               GCC 3.2.3 -O3, Linux
JavaScript          0.372           Mozilla Firefox 55.0.2, Linux
                    0.424           Safari 11.0, MacOS X
                    1.334           Brave 0.18.36, Linux
                    1.378           Google Chrome 61.0.3163.91, Linux
                    1.386           Chromium 60.0.3112.113, Linux
                    1.495           Node.js v6.11.3, Linux
Chapel              0.528           Chapel 1.16.0, -fast, Linux
                    0.0314          Parallel, 64 threads
Visual Basic .NET   0.866           All optimisations, Windows XP
C++                 0.939           G++ 5.4.0, -O3, Linux, double
                    0.964           long double (80 bit)
                    31.00           __float128 (128 bit)
                    189.7           MPFR (128 bit)
                    499.9           MPFR (512 bit)
FORTRAN             1.008           GNU Fortran (g77) 3.2.3 -O3, Linux
Pascal              1.027           Free Pascal 2.2.0 -O3, Linux
                    1.077           GNU Pascal 2.1 (GCC 2.95.2) -O3, Linux
Swift               1.054           Swift 3.0.1, -O, Linux
Rust                1.077           Rust 0.13.0, --release, Linux
Java                1.121           Sun JDK 1.5.0_04-b05, Linux
Visual Basic 6      1.132           All optimisations, Windows XP
Haskell             1.223           GHC 7.4.1 -O2 -funbox-strict-fields, Linux
Scala               1.263           Scala 2.12.3, OpenJDK 9, Linux
FreeBASIC           1.306           FreeBASIC 1.05.0, Linux
Ada                 1.401           GNAT/GCC 3.4.4 -O3, Linux
Go                  1.481           Go version go1.1.1 linux/amd64, Linux
Julia               1.501           Julia version 0.6.1 64-bit -O2 --check-bounds=no, Linux
Simula              2.099           GNU Cim 5.1, GCC 4.8.1 -O2, Linux
Lua                 2.515           LuaJIT 2.0.3, Linux
                    22.7            Lua 5.2.3, Linux
Python              2.633           PyPy 2.2.1 (Python 2.7.3), Linux
                    30.0            Python 2.7.6, Linux
Erlang              3.663           Erlang/OTP 17, emulator 6.0, HiPE [native, {hipe, [o3]}]
                    9.335           Byte code (BEAM), Linux
ALGOL 60            3.951           MARST 2.7, GCC 4.8.1 -O3, Linux
PL/I                5.667           Iron Spring PL/I 0.9.9b beta, Linux
Lisp                7.41            GNU Common Lisp 2.6.7, Compiled, Linux
                    19.8            GNU Common Lisp 2.6.7, Interpreted
Smalltalk           7.59            GNU Smalltalk 2.3.5, Linux
Ruby                7.832           Ruby 2.4.2p198, Linux
Forth               9.92            Gforth 0.7.0, Linux
Prolog              11.72           SWI-Prolog 7.6.0-rc2, Linux
                    5.747           GNU Prolog 1.4.4, Linux (limited iterations)
COBOL               12.5            Micro Focus Visual COBOL 2010, Windows 7
                    46.3            Fixed decimal instead of computational-2
Algol 68            15.2            Algol 68 Genie 2.4.1 -O3, Linux
Perl                23.6            Perl v5.8.0, Linux
BASICA/GW-BASIC     53.42           Bas 2.4, Linux
QBasic              148.3           MS-DOS QBasic 1.1, Windows XP Console
Mathematica         391.6           Mathematica 10.3.1.0, Raspberry Pi 3, Raspbian

Posted at 22:35 Permalink

Sunday, November 19, 2017

Univac Document Archive: 1107 SLEUTH II and PROCS Manuals Added

I have added the following documents to the Univac 1107 section of the Univac Document Archive. These are PDFs of scanned paper documents in my collection. These documents are fifty years old and may appear wonky to contemporary eyes: text is sometimes misaligned on the page, multiple fonts are intermixed like a ransom note, and sample code sometimes appears as handwriting on coding forms. These are not artefacts of scanning—it's how the documents actually appeared. Recall that only around 38 Univac 1107s were sold, so documents describing it were produced in small numbers and didn't, in the eyes of Univac, merit the expense of the high production values of contemporary IBM manuals.

The Univac 1107 was originally supplied with a machine-specific assembler called SLEUTH (later renamed SLEUTH I). Computer Sciences Corporation subsequently developed an optimising Fortran compiler, a batch operating system initially called the Monitor System and later EXEC II, and a new assembler, SLEUTH II. SLEUTH II was a “meta-assembler”: within the constraints of a maximum word length of 36 bits, it could assemble code for any machine. The 1107 instruction set was defined by procedure definitions, but by writing new definitions, code for other machines could be easily generated. I used descendants of SLEUTH II to assemble code for a variety of minicomputer and microprocessor systems, including early versions of my Marinchip Systems software. SLEUTH II had a powerful procedure and function definition facility which was, rare for languages of the day, fully recursive. (I recall writing Ackermann's Function as a SLEUTH II FUNC just because I could.) A companion manual explained these procedures (PROCS) and functions in greater detail.

Posted at 15:16 Permalink

Friday, November 10, 2017

New: Univac Document Archive

I have just posted a new section on Univac Memories, the Univac Document Archive, which contains PDF scans of hardware and software manuals, sales brochures, and related documents for the Univac 1100 series from the 1107 through the 1100/80.

This collection includes some classics, among them the original 1966 EXEC-8 manual, whose camera-ready copy appears to have been printed (in all capitals) on a 1004 line printer.

There remain a number of lacunæ. I'd love to add hardware manuals for the FH-432 and FH-1782 drums, the FASTRAND, and the CTMC, and software manuals for the Collector, SECURE, FORTRAN V, and others from the era. If you have any of these gathering dust in your attic, why not dig them out, fire up the scanner, and send them my way? (Please scan with at least 300 dpi resolution. I have a PDF editor which allows me to clean up raw scans and put them in publication-ready form.)

Posted at 21:46 Permalink

Monday, November 6, 2017

Floating Point Benchmark: Back to BASICs

The floating point benchmark was born in BASIC. The progenitor of the benchmark was an interactive optical design and ray tracing application I wrote in 1980 in Marinchip QBASIC [PDF, 19 Mb]. This was, for the time, a computationally intensive process, as analysis of a multi-element lens design required tracing four light rays with different wavelengths and axial incidence through each surface of the assembly, with multiple trigonometric function evaluations for each surface transit. In the days of software floating point, before the advent of math coprocessors or integrated floating point units, this took a while; more than a second for each analysis.

After I became involved in Autodesk, and we began to receive requests from numerous computer manufacturers and resellers to develop versions of AutoCAD for their machines, I perceived a need to be able to evaluate the relative performance of candidate platforms before investing the major effort to do a full port of AutoCAD. It was clear that the main bottleneck in AutoCAD's “REGEN” performance (the time it took to display a view of a drawing file on the screen) was the machine's floating point performance. Unlike many competitors at the time, AutoCAD used floating point numbers (double precision for the 80x86 version of the program) for the coordinates in its database, and mapping these to the screen required a great deal of floating point arithmetic to be performed. It occurred to me that the largely forgotten lens design program might be a good model for AutoCAD's performance, and since it was only a few pages of code, even less when stripped of its interactive user interface and ability to load and save designs, it could be easily ported to new machines. I made a prototype of the benchmark on the Marinchip machine by stripping the lens design program down to its essential inner loop and then, after testing, rewrote the program in C, using the Lattice C compiler we used for the IBM PC and other 8086 versions of AutoCAD.

Other than running a timing test on the Marinchip 9900 to establish that, even in 1986, it was still faster than the IBM PC/AT, the QBASIC version of the benchmark was set aside and is now lost in the mists of time. Throughout the 1980s and '90s, the C version was widely used to test candidate machines and proved unreasonably effective in predicting how fast AutoCAD would run when ported to them. Since by then most machines had compatible C compilers, running the benchmark was simply a matter of recompiling it and running a timing test. From the start, the C version of the benchmark checked the results of the ray trace and analysis to the last (11th) decimal place against the reference results from the original QBASIC program. This was useful in detecting errors in floating point implementations and mathematical function libraries which would be disastrous for AutoCAD and would preclude a port until remedied. The benchmark's accuracy check was shown to be invariant of the underlying floating point format, producing identical results on a variety of implementations.
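The idea of the check is simple, even if the details differ from implementation to implementation: format each computed aberration figure to eleven decimal places and compare it against the stored reference. Here is a trivial C++ sketch of one way such a check can be done; the value shown is illustrative, not one of the actual reference results.

    #include <cstdio>
    #include <cstring>

    // Format a result to 11 decimal places and compare it, as text,
    // against the expected reference string.
    static bool check(double computed, const char *expected) {
        char buf[32];
        std::snprintf(buf, sizeof buf, "%.11f", computed);
        return std::strcmp(buf, expected) == 0;
    }

    int main() {
        // Illustrative value only; the real references come from the
        // original QBASIC lens design program.
        std::printf("%s\n", check(0.08492686, "0.08492686000") ? "pass" : "FAIL");
        return 0;
    }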

Later, I became interested in comparing the performance of different programming languages for scientific computation, so I began to port the C benchmark to various languages, resulting in the comparison table at the end of this article. First was a port of the original QBASIC program to the Microsoft/IBM BASICA included with MS-DOS on the IBM PC and PC/AT. This involved transforming QBASIC's relatively clean (for BASIC, anyway) syntax into the gnarly world of line numbers, two-character variable names, and GOSUBs which was BASICA. This allowed comparing the speed of BASICA with the C compiler we were using and showed that, indeed, C was much faster. The BASICA version of the benchmark was preserved and has been included in distributions of the benchmark collection for years in the mbasic (for Microsoft BASIC) directory, but due to the obsolescence of the language, no timings of it have been done since the original run on the PC/AT in 1984.

I was curious how this archaic version of the benchmark might perform on a modern machine, so when I happened upon Michael Haardt's Bas, a free implementation of the original BASICA/GW-BASIC language written in portable C, I realised the opportunity for such a comparison might be at hand. I downloaded version 2.4 of Bas and built it without any problems. I was delighted to discover that it ran the Microsoft BASIC version of the benchmark, last saved in 1998, without any modifications, and produced results accurate to the last digit.

Bas is a tokenising interpreter, not a compiler, so it could be expected to run much slower than any compiled language. I started with the usual comparison to C. I ran a preliminary timing test to determine an iteration count which would yield a run time of around five minutes, ran five timing runs on an idle machine, and for 3,056,858 iterations obtained a mean run time of 291.64 seconds, or 95.4052 microseconds per iteration. Compared to the C benchmark, which runs in 1.7858 microseconds per iteration on the same machine, this is 53.42 times slower, as shown in the table below in the “BASICA/GW-BASIC” row. This is still 2.78 times faster than Microsoft QBasic (not to be confused with Marinchip QBASIC) which, when I compared it to C on a Windows XP machine in the early 2000s (running in the MS-DOS console window), ran 148.3 times slower than the C version compiled with the Microsoft Visual C compiler on the same machine.

Since I had benchmarked this program with IBM BASICA in the 1980s, it was possible to do a machine performance comparison. The IBM PC/AT, running at 6 MHz, with BASICA version A3.00 and software floating point, ran 1000 iterations of the benchmark in 3290 seconds (yes, almost an hour), for a time of 3.29 seconds per iteration. Dividing this by the present-day time of 95.4052 microseconds per iteration with Bas, we find that the same program, still running in interpreted mode on a modern machine with hardware floating point, runs 34,484 times faster than on 1984's flagship personal computer.

This made me curious how a modern compiled BASIC might stack up. In 2005 Jim White ported the C benchmark to Microsoft Visual BASIC (both version 6 and .NET), and obtained excellent results, with the .NET release actually running faster than Microsoft Visual C on Windows XP. (Well, of course this was a comparison against Monkey C, so maybe I shouldn't be surprised.) These ports of the benchmark are available in the visualbasic directory of the benchmark collection.

FreeBASIC is an open source (GPL) command line compiler for a language which is essentially compatible with Microsoft QuickBasic with some extensions for access to operating system facilities. The compiler produces executable code, using the GNU Binutils suite as its back-end. The compiler runs on multiple platforms including Linux and Microsoft Windows. Since Visual Basic is an extension of QuickBasic, crudded up with Windows-specific junk, I decided to try to port Jim White's Visual Basic version 6 code to FreeBASIC.

This wasn't as easy as I'd hoped, because in addition to stripping out all of the “Form” crap, I had to substantially change the source code due to Microsoft-typical fiddling with the language in their endless quest to torpedo developers foolish enough to invest work in their wasting asset platforms. I restructured the program in the interest of readability and added comments to explain what the program is doing. The “Form” output was rewritten to use conventional “Print using” statements. The internal Microsoft-specific timing code was removed and replaced with external timing. The INTRIG compile option (local math functions written in BASIC) was removed—I'm interested in the performance of the language's math functions, not the language for writing math functions.

After getting the program to compile and verifying that it produced correct output, I once again ran a preliminary timing test, determined an iteration count, and ran the archival timing tests, yielding a mean time of 296.54 seconds for 127,172,531 iterations, or 2.3318 microseconds per iteration. Compared to C's 1.7858 microseconds per iteration, this gives a run time of 1.306 times that of C, or almost exactly 30% slower. This is in the middle of the pack for compiled languages, although slower than the heavily optimised ones. See the “FreeBASIC” line in the table for the relative ranking.

The relative performance of the various language implementations (with C taken as 1) is as follows. All language implementations of the benchmark listed below produced identical results to the last (11th) decimal place.

Language            Relative Time   Details
C                   1               GCC 3.2.3 -O3, Linux
JavaScript          0.372           Mozilla Firefox 55.0.2, Linux
                    0.424           Safari 11.0, MacOS X
                    1.334           Brave 0.18.36, Linux
                    1.378           Google Chrome 61.0.3163.91, Linux
                    1.386           Chromium 60.0.3112.113, Linux
                    1.495           Node.js v6.11.3, Linux
Chapel              0.528           Chapel 1.16.0, -fast, Linux
                    0.0314          Parallel, 64 threads
Visual Basic .NET   0.866           All optimisations, Windows XP
C++                 0.939           G++ 5.4.0, -O3, Linux, double
                    0.964           long double (80 bit)
                    31.00           __float128 (128 bit)
                    189.7           MPFR (128 bit)
                    499.9           MPFR (512 bit)
FORTRAN             1.008           GNU Fortran (g77) 3.2.3 -O3, Linux
Pascal              1.027           Free Pascal 2.2.0 -O3, Linux
                    1.077           GNU Pascal 2.1 (GCC 2.95.2) -O3, Linux
Swift               1.054           Swift 3.0.1, -O, Linux
Rust                1.077           Rust 0.13.0, --release, Linux
Java                1.121           Sun JDK 1.5.0_04-b05, Linux
Visual Basic 6      1.132           All optimisations, Windows XP
Haskell             1.223           GHC 7.4.1 -O2 -funbox-strict-fields, Linux
Scala               1.263           Scala 2.12.3, OpenJDK 9, Linux
FreeBASIC           1.306           FreeBASIC 1.05.0, Linux
Ada                 1.401           GNAT/GCC 3.4.4 -O3, Linux
Go                  1.481           Go version go1.1.1 linux/amd64, Linux
Simula              2.099           GNU Cim 5.1, GCC 4.8.1 -O2, Linux
Lua                 2.515           LuaJIT 2.0.3, Linux
                    22.7            Lua 5.2.3, Linux
Python              2.633           PyPy 2.2.1 (Python 2.7.3), Linux
                    30.0            Python 2.7.6, Linux
Erlang              3.663           Erlang/OTP 17, emulator 6.0, HiPE [native, {hipe, [o3]}]
                    9.335           Byte code (BEAM), Linux
ALGOL 60            3.951           MARST 2.7, GCC 4.8.1 -O3, Linux
PL/I                5.667           Iron Spring PL/I 0.9.9b beta, Linux
Lisp                7.41            GNU Common Lisp 2.6.7, Compiled, Linux
                    19.8            GNU Common Lisp 2.6.7, Interpreted
Smalltalk           7.59            GNU Smalltalk 2.3.5, Linux
Ruby                7.832           Ruby 2.4.2p198, Linux
Forth               9.92            Gforth 0.7.0, Linux
Prolog              11.72           SWI-Prolog 7.6.0-rc2, Linux
                    5.747           GNU Prolog 1.4.4, Linux (limited iterations)
COBOL               12.5            Micro Focus Visual COBOL 2010, Windows 7
                    46.3            Fixed decimal instead of computational-2
Algol 68            15.2            Algol 68 Genie 2.4.1 -O3, Linux
Perl                23.6            Perl v5.8.0, Linux
BASICA/GW-BASIC     53.42           Bas 2.4, Linux
QBasic              148.3           MS-DOS QBasic 1.1, Windows XP Console
Mathematica         391.6           Mathematica 10.3.1.0, Raspberry Pi 3, Raspbian

Posted at 14:04 Permalink

Thursday, November 2, 2017

Floating Point Benchmark: C++ Language Added, Multiple Precision Arithmetic

I have posted a new edition of the floating point benchmark collection which adds the C++ language and compares the performance of four floating point implementations with different precisions: standard double (64 bit), long double (80 bit), GNU libquadmath (__float128, 128 bit), and the GNU MPFR multiple-precision library, tested at both 128 and 512 bit precision.

It is, of course, possible to compile the ANSI C version of the benchmark with a C++ compiler, as almost any ANSI C program is a valid C++ program, but this program is a complete rewrite of the benchmark algorithm in C++, using the features of the language as they were intended to improve the readability, modularity, and generality of the program. As with all versions of the benchmark, identical results are produced, to the last decimal place, and the results are checked against a reference to verify correctness.

This benchmark was developed to explore whether writing a program using the features of C++ imposed a speed penalty compared to the base C language, and also to explore the relative performance of four different implementations of floating point arithmetic and mathematical function libraries, with different precision. The operator overloading features of C++ make it possible to easily port code to multiple precision arithmetic libraries without the cumbersome and error-prone function calls such code requires in C.

The resulting program is object-oriented, with objects representing items such as spectral lines, surface boundaries in an optical assembly, a complete lens design, the trace of a ray of light through the lens, and an evaluation of the aberrations of the design compared to acceptable optical quality standards. Each object has methods which perform computation related to its contents. All floating point quantities in the program are declared as type Real, which is typedef-ed to the precision being tested.
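For example, the precision selection might be expressed along these lines. This is a sketch; the macro names are illustrative rather than the benchmark's actual compile-time switches.

    // Select the floating point representation for the entire program.
    #if defined(USE_LONG_DOUBLE)
        typedef long double Real;          // Intel 80-bit extended
    #elif defined(USE_QUADMATH)
        #include <quadmath.h>
        typedef __float128 Real;           // 128-bit software arithmetic
    #else
        typedef double Real;               // standard 64-bit IEEE double
    #endif

    // Every floating point quantity is declared as Real, so changing
    // precision is a recompile, not a rewrite.
    struct SphericalSurface {
        Real curvature_radius;
        Real index_of_refraction;
    };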

The numbers supported by libquadmath and MPFR cannot be directly converted to strings by snprintf() format phrases, so when using these libraries auxiliary code is generated to use those packages' facilities for conversion to strings. In a run of the benchmark which typically runs hundreds of thousands or millions of executions of the inner loop, this code only executes once, so it has negligible impact on run time.

I first tested the program with standard double arithmetic. As always, I do a preliminary run and time it, then compute an iteration count to yield a run time of around five minutes. I then perform five runs on an idle system, time them, and compute the mean run time. Next, the mean time is divided by the iteration count to compute microseconds per iteration. All tests were done with GCC/G++ 5.4.0.
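In outline, the procedure amounts to something like the following. This is a hypothetical sketch of the methodology, not the actual harness used for the archival runs.

    #include <chrono>
    #include <cstdio>

    // Stand-in for one full ray trace and aberration analysis.
    void benchmark_iteration() { /* ... trace rays, evaluate design ... */ }

    int main() {
        const long iterations = 166051660;   // chosen for a roughly 5 minute run
        auto start = std::chrono::steady_clock::now();
        for (long i = 0; i < iterations; i++)
            benchmark_iteration();
        std::chrono::duration<double> elapsed =
            std::chrono::steady_clock::now() - start;

        // The mean of five such runs, divided by the iteration count,
        // gives microseconds per iteration; dividing that by the C
        // figure gives the relative time reported in the table.
        std::printf("%.4f usec/iteration\n",
                    elapsed.count() * 1.0e6 / iterations);
        return 0;
    }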

Comparing with a run of the ANSI C benchmark, the C++ time was 0.9392 of the C run time. Not only didn't we pay a penalty for using C++, we actually picked up around 6% in speed. Presumably, the cleaner structure of the code allowed the compiler to optimise a bit better whereas the global variables in the original C program might have prevented some optimisations.

Next I tested with a long double data type, which uses the 80 bit internal representation of the Intel floating point unit. I used the same iteration count as with the original double test.

Here, the run time was 0.9636 that of C, still faster, and not that much longer than double. If the extra precision of long double makes a difference for your application, there's little cost in using it. Note that support for long double varies from compiler to compiler and architecture to architecture: whether it's available and, if so, what it means depends upon which compiler and machine you're using. These test results apply only to GCC on the x86 (actually x86_64) architecture.

GCC also provides a nonstandard data type, __float128, which implements 128 bit (quadruple precision) floating point arithmetic in software. The libquadmath library includes its own mathematical functions which end in “q” (for example sinq instead of sin), which must be called instead of the standard library functions, and a quadmath_snprintf function for editing numbers to strings. The benchmark contains conditional code and macro definitions to accommodate these changes.
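Concretely, the quadmath changes look something like this. The sinq function, quadmath_snprintf, and the Q literal suffix are standard libquadmath/GCC features; the surrounding fragment is an illustration, not the benchmark's actual code.

    #include <quadmath.h>
    #include <cstdio>

    int main() {
        __float128 angle = 0.5Q;      // the Q suffix marks a quad literal
        __float128 s = sinq(angle);   // "q" functions replace the libm calls

        // snprintf() cannot format a __float128; quadmath_snprintf() can.
        char buf[64];
        quadmath_snprintf(buf, sizeof buf, "%.33Qg", s);
        std::printf("sin(0.5) = %s\n", buf);
        return 0;
    }
    // Build with, for example:  g++ quad_example.cpp -lquadmath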

The __float128 version was 31.0031 times slower than C. Here, we pay a heavy price for doing every floating point operation in software instead of using the CPU's built-in floating point unit. If you have an algorithm which requires this accuracy, it's important to perform the numerical analysis to determine where the accuracy is actually needed and employ quadruple precision only where necessary.

Finally, I tested the program using the GNU MPFR multiple-precision library which is built atop the GMP package. I used the MPFR C++ bindings developed by Pavel Holoborodko, which overload the arithmetic operators and define versions of the mathematical functions which make integrating MPFR into a C++ program almost seamless. As with __float128, the output editing code must be rewritten to accommodate MPFR's toString() formatting mechanism. MPFR allows a user-selected precision and rounding mode. I always use the default round to nearest mode, but allow specifying the precision in bits by setting MPFR_PRECISION when the program is compiled. I started with a precision of 128 bits, the same as __float128 above. The result was 189.72 times slower than C. The added generality of MPFR over __float128 comes at a steep price. Clearly, if 128 bits suffices for your application, __float128 is the way to go.
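Here is a sketch of what those bindings make possible, using Holoborodko's mpreal.h header; the MPFR_PRECISION handling shown illustrates the approach rather than reproducing the benchmark's exact code.

    #include <cstdio>
    #include "mpreal.h"              // Pavel Holoborodko's MPFR C++ bindings

    #ifndef MPFR_PRECISION
    #define MPFR_PRECISION 128       // precision in bits, set at compile time
    #endif

    typedef mpfr::mpreal Real;

    int main() {
        Real::set_default_prec(MPFR_PRECISION);

        // Thanks to operator overloading, this reads like double arithmetic.
        Real angle = Real(1) / 7;
        Real s = mpfr::sin(angle);

        // Output editing goes through MPFR's own string conversion.
        std::printf("sin(1/7) = %s\n", s.toString().c_str());
        return 0;
    }
    // Build with, for example:
    //   g++ -DMPFR_PRECISION=512 mpfr_example.cpp -lmpfr -lgmp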

Next, I wanted to see how run time scaled with precision. I rebuilt for 512 bit precision and reran the benchmark. Now we're 499.865 times slower than C—almost exactly 1/500 the speed. This is great to have if you really need it, but you'd be wise to use it sparingly.

The program produced identical output for all choices of floating point precision. By experimentation, I determined that I could reduce MPFR_PRECISION to as low as 47 without getting errors in the least significant digits of the results. At 46 bits and below, errors start to creep in.

The relative performance of the various language implementations (with C taken as 1) is as follows. All language implementations of the benchmark listed below produced identical results to the last (11th) decimal place.

Language            Relative Time   Details
C                   1               GCC 3.2.3 -O3, Linux
JavaScript          0.372           Mozilla Firefox 55.0.2, Linux
                    0.424           Safari 11.0, MacOS X
                    1.334           Brave 0.18.36, Linux
                    1.378           Google Chrome 61.0.3163.91, Linux
                    1.386           Chromium 60.0.3112.113, Linux
                    1.495           Node.js v6.11.3, Linux
Chapel              0.528           Chapel 1.16.0, -fast, Linux
                    0.0314          Parallel, 64 threads
Visual Basic .NET   0.866           All optimisations, Windows XP
C++                 0.939           G++ 5.4.0, -O3, Linux, double
                    0.964           long double (80 bit)
                    31.00           __float128 (128 bit)
                    189.7           MPFR (128 bit)
                    499.9           MPFR (512 bit)
FORTRAN             1.008           GNU Fortran (g77) 3.2.3 -O3, Linux
Pascal              1.027           Free Pascal 2.2.0 -O3, Linux
                    1.077           GNU Pascal 2.1 (GCC 2.95.2) -O3, Linux
Swift               1.054           Swift 3.0.1, -O, Linux
Rust                1.077           Rust 0.13.0, --release, Linux
Java                1.121           Sun JDK 1.5.0_04-b05, Linux
Visual Basic 6      1.132           All optimisations, Windows XP
Haskell             1.223           GHC 7.4.1 -O2 -funbox-strict-fields, Linux
Scala               1.263           Scala 2.12.3, OpenJDK 9, Linux
Ada                 1.401           GNAT/GCC 3.4.4 -O3, Linux
Go                  1.481           Go version go1.1.1 linux/amd64, Linux
Simula              2.099           GNU Cim 5.1, GCC 4.8.1 -O2, Linux
Lua                 2.515           LuaJIT 2.0.3, Linux
                    22.7            Lua 5.2.3, Linux
Python              2.633           PyPy 2.2.1 (Python 2.7.3), Linux
                    30.0            Python 2.7.6, Linux
Erlang              3.663           Erlang/OTP 17, emulator 6.0, HiPE [native, {hipe, [o3]}]
                    9.335           Byte code (BEAM), Linux
ALGOL 60            3.951           MARST 2.7, GCC 4.8.1 -O3, Linux
PL/I                5.667           Iron Spring PL/I 0.9.9b beta, Linux
Lisp                7.41            GNU Common Lisp 2.6.7, Compiled, Linux
                    19.8            GNU Common Lisp 2.6.7, Interpreted
Smalltalk           7.59            GNU Smalltalk 2.3.5, Linux
Ruby                7.832           Ruby 2.4.2p198, Linux
Forth               9.92            Gforth 0.7.0, Linux
Prolog              11.72           SWI-Prolog 7.6.0-rc2, Linux
                    5.747           GNU Prolog 1.4.4, Linux (limited iterations)
COBOL               12.5            Micro Focus Visual COBOL 2010, Windows 7
                    46.3            Fixed decimal instead of computational-2
Algol 68            15.2            Algol 68 Genie 2.4.1 -O3, Linux
Perl                23.6            Perl v5.8.0, Linux
QBasic              148.3           MS-DOS QBasic 1.1, Windows XP Console
Mathematica         391.6           Mathematica 10.3.1.0, Raspberry Pi 3, Raspbian

Posted at 22:45 Permalink