fbench

Trigonometry Intense Floating Point Benchmark

by John Walker
December 1980
Last update: October 2nd, 2017

Introduction

Fbench is a complete optical design raytracing algorithm, shorn of its user interface and recast into portable C. It not only measures execution speed on an extremely floating-point-intensive real-world application (one heavy in trigonometric functions), it also checks accuracy on an algorithm that is exquisitely sensitive to errors. The performance of this program is typically far more sensitive to changes in the efficiency of the trigonometric library routines than that of the average floating point program.

The benchmark may be compiled in two modes. If the symbol INTRIG is defined, built-in trigonometric and square root routines will be used for all calculations. Timings made with INTRIG defined reflect the machine's basic floating point performance for the arithmetic operators. If INTRIG is not defined, the system library <math.h> functions are used. Results with INTRIG not defined reflect the system's library performance and/or floating point hardware support for trig functions and square root. Results with INTRIG defined are a good guide to general floating point performance, while results with INTRIG undefined indicate the performance of an application which is math function intensive.

Special note regarding errors in accuracy: this program has generated numbers identical to the last digit it formats and checks on the following machines, floating point architectures, and languages:

Machine                 Language     Floating Point Format
Marinchip 9900          QBASIC       IBM 370 double-precision (REAL*8) format
IBM PC/XT/AT,           Lattice C    IEEE 64 bit, 80 bit temporaries
  Intel x86/Pentium     High C       same, in-line 80x87 code
                        GCC          same, native FPU code
                        BASICA       “Double precision”
                        Quick BASIC  IEEE double precision, software routines
Sun 3                   C            IEEE 64 bit, 80 bit temporaries, in-line
                                     68881 code, in-line FPA code
MicroVAX II             C            VAX “G” format floating point
Macintosh Plus          MPW C        SANE floating point, IEEE 64 bit format
                                     implemented in ROM

Inaccuracies reported by this program should be taken very seriously indeed, as the program has been demonstrated to be invariant under changes in floating point format, as long as the format is a recognised double precision format. If you encounter errors, please remember that they are just as likely to be in the floating point editing library or the trigonometric libraries (unless compiled with INTRIG defined) as in the low level operator code. Now that virtually all computers use IEEE floating point format, differences in results are almost certainly indicative of errors in code generation, optimisation, or library routines.

The benchmark assumes that results are basically reliable, and only tests the last result computed against the reference. If you're running on a suspect system you can compile this program with ACCURACY defined. This will generate a version which executes as an infinite loop, performing the ray trace and checking the results on every pass. All incorrect results will be reported.

Benchmark Results for Various Systems

Representative timings are given below. All have been normalised as if run for 1000 iterations.

  Time in Seconds
Normal      INTRIG    Computer, Compiler, and Notes
3466.00 4031.00 Commodore 128, 2 MHz 8510 with software floating point. Abacus Software/Data-Becker Super-C 128, version 3.00, run in fast (2 MHz) mode. Note: the results generated by this system differed from the reference results in the 8th to 10th decimal place.
3290.00   IBM PC/AT 6 MHz, Microsoft/IBM BASICA version A3.00. Run with the “/d” switch, software floating point.
2131.50   IBM PC/AT 6 MHz, Lattice C version 2.14, small model. This version of Lattice compiles subroutine calls which either do software floating point or use the 80x87. The machine on which I ran this had an 80287, but the results were so bad I wonder if it was being used.
1598.00   Macintosh Plus, MPW C, SANE software floating point.
1582.13   Marinchip 9900 2 MHz, QBASIC compiler with software floating point. This was a QBASIC version of the program which contained the identical algorithm.
404.00   IBM PC/AT 6 MHz, Microsoft QuickBASIC version 2.0. Software floating point.
165.15   IBM PC/AT 6 MHz, Metaware High C version 1.3, small model. This was compiled to call subroutines for floating point, and the machine contained an 80287 which was used by the subroutines.
143.20   Macintosh II, MPW C, SANE calls. I was unable to determine whether SANE was using the 68881 chip or not.
121.80   Sun 3/160 16 MHz, Sun C. Compiled with “-fsoft” switch, which executes floating point in software.
78.78 110.11 IBM RT PC (Model 6150). IBM AIX 1.0 C compiler with “-O” switch.
75.2 254.0 Microsoft Quick C 1.0, in-line 8087 instructions, compiled with 80286 optimisation on. (Switches were “-Ol -FPi87 -G2 -AS”). Small memory model.
69.50   IBM PC/AT 6 MHz, Borland Turbo BASIC 1.0. Compiled in “8087 required” mode to generate in-line code for the math coprocessor.
66.96   IBM PC/AT 6 MHz, Microsoft QuickBASIC 4.0. This release of QuickBASIC compiles code for the 80287 math coprocessor.
66.36 206.35 IBM PC/AT 6 MHz, Metaware High C version 1.3, small model. This was compiled with in-line code for the 80287 math coprocessor. Trig functions still call library routines.
63.07 220.43 IBM PC/AT, 6 MHz, Borland Turbo C, in-line 8087 code, small model, word alignment, no stack checking, 8086 code mode.
17.18   Apollo DN-3000, 12 MHz 68020 with 68881, compiled with in-line code for the 68881 coprocessor. According to Apollo, the library routines are chosen at runtime based on coprocessor presence. Since the coprocessor was present, the library is supposed to use in-line floating point code.
15.55 27.56 VAXstation II GPX. Compiled and executed under VAX/VMS C.
15.14 37.93 Macintosh II, Unix system V. Green Hills 68020 Unix compiler with in-line code for the 68881 coprocessor (“-O -ZI” switches).
12.69   Sun 3/160 16 MHz, Sun C. Compiled with “-fswitch”, which calls a subroutine to select the fastest floating point processor. This was using the 68881.
11.74 26.73 Compaq Deskpro 386, 16 MHz 80386 with 16 MHz 80387. Metaware High C version 1.3, compiled with in-line code for the math coprocessor (but not optimised for the 80386/80387). Trig functions still call library routines.
8.43 30.49 Sun 3/160 16 MHz, Sun C. Compiled with “-f68881”, generating in-line MC68881 instructions. Trig functions still call library routines.
6.29 25.17 Sun 3/260 25 MHz, Sun C. Compiled with “-f68881”, generating in-line MC68881 instructions. Trig functions still call library routines.
4.57   Sun 3/260 25 MHz, Sun FORTRAN 77. Compiled with “-O -f68881”, generating in-line MC68881 instructions. Trig functions are compiled in-line. This used the FORTRAN 77 version of the program, FBFORT77.F.
4.00 14.20 Sun386i/25 MHz model 250, Sun C compiler.
4.00 14.00 Sun386i/25 MHz model 250, Metaware C.
3.10 12.00 Compaq 386/387 25 MHz running SCO Xenix 2. Compiled with Metaware HighC 386, optimised for 386.
3.00 12.00 Compaq 386/387 25 MHz optimised for 386/387.
2.96 5.17 Sun 4/260, Sparc RISC processor. Sun C, compiled with the “-O2” switch for global optimisation.
2.47   COMPAQ 486/25, secondary cache disabled, High C, 486/387, inline f.p., small memory model.
2.20 3.40 Data General Motorola 88000, 16 MHz, Gnu C.
1.56   COMPAQ 486/25, 128K secondary cache, High C, 486/387, inline f.p., small memory model.
0.66 1.50 DEC Pmax, MIPS processor.
0.63 0.91 Sun SparcStation 2, Sun C (SunOS 4.1.1) with “-O4” optimisation and “/usr/lib/libm.il” inline floating point.
0.60 1.07 Intel 860 RISC processor, 33 MHz, Greenhills C compiler.
0.40 0.90 Dec 3MAX, MIPS 3000 processor, “-O4”.
0.31 0.90 IBM RS/6000, “-O”.
0.1129 0.2119 Dell Dimension XPS P133c, Pentium 133 MHz, Windows 95, Microsoft Visual C 5.0.
0.0883 0.2166 Silicon Graphics Indigo², MIPS R4400, 175 MHz, “-O3”.
0.0351 0.0561 Dell Dimension XPS R100, Pentium II 400 MHz, Windows 98, Microsoft Visual C 5.0.
0.0312 0.0542 Sun Ultra 2, UltraSPARC V9, 300 MHz, Solaris 2.5.1.
0.0141 0.0157 Raspberry Pi 3, ARMv8 Cortex-A53, 1.2 GHz, Raspbian, GCC 4.9.2 “-O3”.
0.00862 0.01074 Dell Inspiron 9100, Pentium 4, 3.4 GHz, GCC 3.2.3 “-O3”.

All brand and product names are trademarks or registered trademarks of their respective companies. Results of this benchmark may or may not be representative of the performance of listed systems for other programs and workloads. Lawyers burn spontaneously in an atmosphere of fluorine.

Comparing Languages

This benchmark was created primarily to compare the performance of different computers and implementations of the C language. Over the years, however, the benchmark has been ported to a variety of other programming languages. The following table compares the relative performance of these languages, taking the run time of the C version as 1: a language with a “Relative Time” of 5 takes five times as long as the reference C implementation to complete the benchmark. Each language benchmark was compared to the run time of the C implementation on the same machine, with the iteration count of both adjusted so that each ran for about five minutes; relative performance was then calculated as the ratio of time per iteration. To the best of my knowledge, none of the language implementations tested exploit the thread-level parallelism implemented in modern processors. All of these runs produced precisely the expected results.

Language            Relative Time   Details
C                     1             GCC 3.2.3 -O3, Linux
JavaScript            0.372         Mozilla Firefox 55.0.2, Linux
                      0.424         Safari 11.0, MacOS X
                      1.334         Brave 0.18.36, Linux
                      1.378         Google Chrome 61.0.3163.91, Linux
                      1.386         Chromium 60.0.3112.113, Linux
                      1.495         Node.js v6.11.3, Linux
Visual Basic .NET     0.866         All optimisations, Windows XP
FORTRAN               1.008         GNU Fortran (g77) 3.2.3 -O3, Linux
Pascal                1.027         Free Pascal 2.2.0 -O3, Linux
                      1.077         GNU Pascal 2.1 (GCC 2.95.2) -O3, Linux
Swift                 1.054         Swift 3.0.1, -O, Linux
Rust                  1.077         Rust 0.13.0, --release, Linux
Java                  1.121         Sun JDK 1.5.0_04-b05, Linux
Visual Basic 6        1.132         All optimisations, Windows XP
Haskell               1.223         GHC 7.4.1 -O2 -funbox-strict-fields, Linux
Scala                 1.263         Scala 2.12.3, OpenJDK 9, Linux
Ada                   1.401         GNAT/GCC 3.4.4 -O3, Linux
Go                    1.481         Go version go1.1.1 linux/amd64, Linux
Simula                2.099         GNU Cim 5.1, GCC 4.8.1 -O2, Linux
Lua                   2.515         LuaJIT 2.0.3, Linux
                      22.7          Lua 5.2.3, Linux
Python                2.633         PyPy 2.2.1 (Python 2.7.3), Linux
                      30.0          Python 2.7.6, Linux
Erlang                3.663         Erlang/OTP 17, emulator 6.0, HiPE [native, {hipe, [o3]}]
                      9.335         Byte code (BEAM), Linux
ALGOL 60              3.951         MARST 2.7, GCC 4.8.1 -O3, Linux
PL/I                  5.667         Iron Spring PL/I 0.9.9b beta, Linux
Lisp                  7.41          GNU Common Lisp 2.6.7, Compiled, Linux
                      19.8          GNU Common Lisp 2.6.7, Interpreted, Linux
Smalltalk             7.59          GNU Smalltalk 2.3.5, Linux
Forth                 9.92          Gforth 0.7.0, Linux
Prolog               11.72          SWI-Prolog 7.6.0-rc2, Linux
                      5.747         GNU Prolog 1.4.4, Linux (limited iterations)
COBOL                12.5           Micro Focus Visual COBOL 2010, Windows 7
                     46.3           Fixed decimal instead of computational-2
Algol 68             15.2           Algol 68 Genie 2.4.1 -O3, Linux
Perl                 23.6           Perl v5.8.0, Linux
Ruby                 26.1           Ruby 1.8.3, Linux
QBasic              148.3           MS-DOS QBasic 1.1, Windows XP Console
Mathematica         391.6           Mathematica 10.3.1.0, Raspberry Pi 3, Raspbian

These results should not be interpreted as representative of the overall performance of the various languages for a broad variety of tasks. Each language port is a straightforward translation of the reference C algorithm, and does not exploit additional features (such as vector and matrix operations) which may be present in the target language. However, the nature of the algorithm does not lend itself to such optimisations.

Special thanks to Jim White (“mathimagics”)—approach with extreme caution, known to be in possession of weapons of math instruction—who ported the benchmark to Visual Basic 6, Java, and Scilab; and to John Nagle, who ported the benchmark to Go.

Expected Numerical Results

The C language version of this benchmark contains code which automatically verifies the results of the computation with those expected. Implementations in some other languages may simply print the results and leave it up to you to check that they are correct. The following is the output from a correct execution of fbench in the other languages. There may be slight differences in the annotations and formatting, but the numbers should be absolutely identical.

                       Focal Length          Angle to Axis
Marginal ray         47.09479120920          0.04178472683
Paraxial ray         47.08372160249          0.04177864821

Longitudinal spherical aberration:        -0.01106960671
  (Maximum permissible):                   0.05306749907
                                             Acceptable

Offense against sine condition (coma):     0.00008954761
    (Maximum permissible):                 0.00250000000
                                             Acceptable

Axial chromatic aberration:                0.00448229032
    (Maximum permissible):                 0.05306749907
                                             Acceptable

Don't worry about the terminology—unless you're an optical designer it'll probably make no sense whatsoever. What's important is that the numbers agree to the last decimal place; if they don't, it's a sign something is amiss. If you've compiled the benchmark with aggressive optimisation, you might try more conservative settings to see if that corrects the results.

