
Monday, November 28, 2016

Floating Point Benchmark: Mathematica Language Added

I have posted an update to my trigonometry-intense floating point benchmark which adds Wolfram's Mathematica (or, if you like, “Wolfram Language”) to the list of languages in which the benchmark is implemented. A new release of the benchmark collection including Mathematica is now available for downloading.

The relative performance of the various language implementations (with C taken as 1) is as follows. All language implementations of the benchmark listed below produced identical results to the last (11th) decimal place.

Language            Relative Time   Details
C                   1               GCC 3.2.3 -O3, Linux
Visual Basic .NET   0.866           All optimisations, Windows XP
FORTRAN             1.008           GNU Fortran (g77) 3.2.3 -O3, Linux
Pascal              1.027           Free Pascal 2.2.0 -O3, Linux
                    1.077           GNU Pascal 2.1 (GCC 2.95.2) -O3, Linux
Rust                1.077           Rust 0.13.0, --release, Linux
Java                1.121           Sun JDK 1.5.0_04-b05, Linux
Visual Basic 6      1.132           All optimisations, Windows XP
Haskell             1.223           GHC 7.4.1 -O2 -funbox-strict-fields, Linux
Ada                 1.401           GNAT/GCC 3.4.4 -O3, Linux
Go                  1.481           Go version go1.1.1 linux/amd64, Linux
Simula              2.099           GNU Cim 5.1, GCC 4.8.1 -O2, Linux
Lua                 2.515           LuaJIT 2.0.3, Linux
                    22.7            Lua 5.2.3, Linux
Python              2.633           PyPy 2.2.1 (Python 2.7.3), Linux
                    30.0            Python 2.7.6, Linux
Erlang              3.663           Erlang/OTP 17, emulator 6.0, HiPE [native, {hipe, [o3]}]
                    9.335           Byte code (BEAM), Linux
ALGOL 60            3.951           MARST 2.7, GCC 4.8.1 -O3, Linux
Lisp                7.41            GNU Common Lisp 2.6.7, Compiled, Linux
                    19.8            GNU Common Lisp 2.6.7, Interpreted
Smalltalk           7.59            GNU Smalltalk 2.3.5, Linux
Forth               9.92            Gforth 0.7.0, Linux
COBOL               12.5            Micro Focus Visual COBOL 2010, Windows 7
                    46.3            Fixed decimal instead of computational-2
Algol 68            15.2            Algol 68 Genie 2.4.1 -O3, Linux
Perl                23.6            Perl v5.8.0, Linux
Ruby                26.1            Ruby 1.8.3, Linux
JavaScript          27.6            Opera 8.0, Linux
                    39.1            Internet Explorer 6.0.2900, Windows XP
                    46.9            Mozilla Firefox 1.0.6, Linux
QBasic              148.3           MS-DOS QBasic 1.1, Windows XP Console
Mathematica         391.6           Mathematica 10.3.1.0, Raspberry Pi 3, Raspbian

The implementation of the benchmark program is completely straightforward: no implementation tricks intended to improve performance are used and no optimisations such as compiling heavily-used functions are done. The program is written in functional style, with all assignments immutable. The only iteration is that used to run the benchmark multiple times: tail recursion is used elsewhere. The code which puts together the summary of the computation (evaluationReport[]) is particularly ugly, but is not included in the benchmark timing.
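
The pattern described above, immutable bindings with tail recursion carrying an accumulator in place of a mutating loop, can be sketched as follows. This is an illustrative example in Python rather than the actual Mathematica benchmark code, and `sum_sines` is a hypothetical function invented for the sketch:

```python
import math

def sum_sines(n, acc=0.0):
    # Tail-recursive accumulation: each call rebinds n and acc
    # rather than mutating loop variables in place.
    if n == 0:
        return acc
    return sum_sines(n - 1, acc + math.sin(n))

# Equivalent to summing sin(1) + sin(2) + ... + sin(100) with a loop,
# but expressed without any mutable state.
print(sum_sines(100))
```

In Mathematica the same shape is expressed with pattern-matched definitions, which is why the only explicit iteration in the benchmark is the outer loop that repeats the timing runs.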

To compare performance with native C code, I ran the C language version of the benchmark three times for about five minutes each on the Raspberry Pi 3 platform and measured a mean time per iteration of 14.06 microseconds. I then ran the Mathematica benchmark three times for about five minutes each and computed a mean time per iteration of 5506 microseconds. The C code thus runs around 391.6 times faster than Mathematica.

Note that the Raspberry Pi 3 runs Mathematica very slowly compared to most other desktop platforms. When I ran the identical benchmark in the Wolfram Cloud, it ran at about 681.7 microseconds per iteration, around eight times faster.
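
The speed ratios quoted here follow directly from the measured per-iteration times; a quick check of the arithmetic:

```python
# Per-iteration times in microseconds, as reported above.
c_pi = 14.06        # C on the Raspberry Pi 3
mma_pi = 5506.0     # Mathematica on the Raspberry Pi 3
mma_cloud = 681.7   # Mathematica in the Wolfram Cloud

print(round(mma_pi / c_pi, 1))       # → 391.6  (C vs. Mathematica on the Pi)
print(round(mma_pi / mma_cloud, 2))  # → 8.08   (Pi vs. Wolfram Cloud)
```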

It is, of course, absurd to use a computer mathematics system to perform heavy-duty floating point scientific computation (at least without investing the effort to optimise the most computationally-intense portions of the task), so the performance measured by running this program should not be taken as indicative of the merit of Mathematica when used for the purposes for which it is intended. Like the COBOL implementation of the benchmark, this is mostly an exercise in seeing if it's possible and comparing how easily the algorithm can be expressed in different programming languages.

I have also added timings for the C implementations of the fbench and ffbench programs when run on the Raspberry Pi 3.

Posted at 23:42