Thursday, July 31, 2014

Reading List: Conversations with My Agent (and Set Up, Joke, Set Up, Joke)

Long, Rob. Conversations with My Agent (and Set Up, Joke, Set Up, Joke). London: Bloomsbury Publishing, [1996, 2005] 2014. ISBN 978-1-4088-5583-6.
Hollywood is a strange place, where the normal rules of business, economics, and personal and professional relationships seem to have been suspended. When he arrived in Hollywood in 1930, P. G. Wodehouse found the customs and antics of its denizens so bizarre that he parodied them in a series of hilarious stories. After a year in Hollywood, he'd had enough and never returned. When Rob Long arrived in Hollywood to attend UCLA film school, the television industry was on the threshold of a technology-driven change which would remake it and forever put an end to the domination by three large networks which had existed since its inception. The advent of cable and, later, direct to home satellite broadcasting eliminated the terrestrial bandwidth constraints which had made establishing a television outlet forbiddingly expensive and, at the same time, side-stepped many of the regulatory constraints which forbade “edgy” content on broadcast channels. Long began his television career as a screenwriter for Cheers in 1990, and became an executive producer of the show in 1992. After the end of Cheers, he created and produced other television projects, including Sullivan & Son, which is currently on the air.

Television ratings measure both “rating points” (the absolute number of television sets tuned into the program) and “share points” (the fraction of television sets in use at the time which are viewing the program). In the era of Cheers, a typical episode might have a rating equivalent to more than 22 million viewers and a share of 32%, meaning it pulled in around one third of all television viewers in its time slot. The proliferation of channels makes it unlikely any show will achieve numbers like this again. The extremely popular 24 attracted between 9 and 14 million viewers over its eight seasons, and the critically acclaimed Mad Men never topped a mean viewership of 2.7 million in its best season.
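To make the distinction concrete, here is a tiny C sketch which turns the figures above into the other quantities. The 22 million viewers and the 32 share come from the review; the total number of television households is a hypothetical round number of my own, used only for illustration.

    /* Convert viewers and share into sets in use and a rating percentage.
       The total-households figure is hypothetical, for illustration only. */
    #include <stdio.h>

    int main(void)
    {
        double viewers = 22.0e6;              /* households watching the show */
        double share = 0.32;                  /* fraction of sets in use watching it */
        double total_tv_households = 92.0e6;  /* hypothetical universe size */

        double sets_in_use = viewers / share;          /* about 69 million */
        double rating = viewers / total_tv_households; /* about 24% of all TV homes */

        printf("Sets in use: %.1f million\n", sets_in_use / 1.0e6);
        printf("Rating: %.1f%%   Share: %.1f%%\n", rating * 100.0, share * 100.0);
        return 0;
    }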

It was into this new world of diminishing viewership expectations but voracious thirst for content to fill all the new channels that the author launched his post-Cheers career. The present volume collects two books originally published independently, Conversations with My Agent from 1998, and 2005's Set Up, Joke, Set Up, Joke, written as Hollywood's перестро́йка was well-advanced. The volumes fit together almost seamlessly, and many readers will barely notice the transition.

This is a very funny book, but there is also a great deal of wisdom about the ways of Hollywood, how television projects are created, pitched to a studio, marketed to a network, and the tortuous process leading from concept to script to pilot to series and, all too often, to cancellation. The book is written as a screenplay, complete with scene descriptions, directions, dialogue, transitions, and sound effect call-outs. Most of the scenes are indeed conversations between the author and his agent in various circumstances, but we also get to be a fly on the wall at story pitches, meetings with the network, casting, shooting an episode, focus group testing, and many other milestones in the life cycle of a situation comedy. The circumstances are fictional, but are clearly informed by real-life experience. Anybody contemplating a career in Hollywood, especially as a television screenwriter, would be insane not to read this book. You'll laugh a lot, but also learn something on almost every page.

The reader will also begin to appreciate the curious ways of Hollywood business, what the author calls “HIPE”: the Hollywood Inversion Principle of Economics. “The HIPE, as it will come to be known, postulates that every commonly understood, standard business practice of the outside world has its counterpart in the entertainment industry. Only it's backwards.” And anybody who thinks accounting is not a creative profession has never had experience with a Hollywood project. The culture of the entertainment business is also on display—an intricate pecking order involving writers, producers, actors, agents, studio and network executives, and “below the line” specialists such as camera operators and editors, all of whom have to read the trade papers to know who's up and who's not.

This book provides an insider's perspective on the strange way television programs come to be. In a way, it resembles some aspects of venture capital: most projects come to nothing, and most of those which are funded fail, losing the entire investment. But the few which succeed can generate sufficient money to cover all the losses and still yield a large return. One television show that runs for five years, producing solid ratings and 100+ episodes to go into syndication, can set up its writers and producers for life and cover the studio's losses on all of the dogs and cats.

Posted at 22:15 Permalink

Wednesday, July 30, 2014

Reading List: Robert A. Heinlein: In Dialogue with His Century. Vol 1

Patterson, William H., Jr. Robert A. Heinlein: In Dialogue with His Century. Vol. 1. New York: Tor Books, 2010. ISBN 978-0-765-31960-9.
Robert Heinlein came from a family who had been present in America before there were the United States, and whose members had served in all of the wars of the Republic. Despite being thin, frail, and with dodgy eyesight, he managed to be appointed to the U.S. Naval Academy where, despite demerits for being a hellion, he graduated and was commissioned as a naval officer. He was on the track to a naval career when felled by tuberculosis (which was, in the 1930s, a potential death sentence, with the possibility of recurrence any time in later life).

Heinlein had written while in the Navy, but after his forced medical retirement, turned his attention to writing science fiction for pulp magazines, and after receiving a cheque for US$ 70 for his first short story, “Life-Line”, he exclaimed, “How long has this racket been going on? And why didn't anybody tell me about it sooner?” Heinlein always viewed writing as a business, and kept a thermometer on which he charted his revenue toward paying off the mortgage on his house.

While Heinlein fit in very well with the Navy, and might have been, absent medical problems, a significant commander in the fleet in World War II, he was also, at heart, a bohemian, with a soul almost orthogonal to military tradition and discipline. His first marriage was a fling with a woman who introduced him to physical delights of which he had been unaware. That ended quickly, and he then married Leslyn, who was his muse, copy-editor, and business manager in a marriage which persisted throughout World War II, when both were involved in war work. In this effort, Leslyn worked herself into insanity and alcoholism, and they divorced in 1947.

It was Robert Heinlein who vaulted science fiction from the ghetto of the pulp magazines to the “slicks” such as Collier's and the Saturday Evening Post. This was due to a technological transition in the publishing industry which is comparable to that presently underway in the migration from print to electronic publishing. Rationing of paper during World War II helped to create the “pocket book” or paperback publishing industry. After the end of the war, these new entrants in the publishing market saw a major opportunity in publishing anthologies of stories previously published in the pulps. The pulp publishers viewed this as an existential threat—who would buy a pulp magazine if, for almost the same price, one could buy a collection of the best stories from the last decade in all of those magazines?

Heinlein found his fiction entrapped in this struggle. While today, when you sell a story to a magazine in the U.S., you usually only sell “First North American serial rights”, in the 1930s and 1940s, authors sold all rights, and it was up to the publisher to release their rights for republication of a work in an anthology or adaptation into a screenplay. This is parallel to the contemporary battle between traditional publishers and independent publishing platforms, which have become the heart of science fiction.

Heinlein was a complex man. While an exemplary naval officer, he was also a nudist, was married three times, and was interested in the esoteric (he was a close associate of Jack Parsons and L. Ron Hubbard). He was an enthusiastic supporter of Upton Sinclair's EPIC movement and an advocate of the “Social Credit” economic agenda.

This authorised biography, with major contributions from Heinlein's widow, Virginia, chronicles the master storyteller's life in his first forty years—until he found, or created, an audience receptive to the tales of wonder he spun. If you've read all of Heinlein's fiction, it may be difficult to imagine how much of it was based in Heinlein's own life. If you thought Heinlein's later novels were weird, appreciate how the master was weird before you were born.

I had the privilege of meeting Robert and Virginia Heinlein in 1984. I shall always cherish that moment.

Posted at 02:27 Permalink

Sunday, July 27, 2014

Reading List: The Guns of August

Tuchman, Barbara W. The Guns of August. New York: Presidio Press, [1962, 1988, 1994] 2004. ISBN 978-0-345-47609-8.
One hundred years ago the world was on the brink of a cataclysmic confrontation which would cause casualties numbered in the tens of millions, destroy the pre-existing international order, depose royalty and dissolve empires, and plant the seeds for tyrannical regimes and future conflicts with an even more horrific toll in human suffering. It is no exaggeration to speak of World War I as the pivotal event of the 20th century, since so much that followed can be viewed as sequelæ traced directly to that conflict.

It is thus important to understand how that war came to be, and how, in the first month after its outbreak, the expectations of all parties to the conflict, arrived at through the most exhaustive study by military and political élites, were proven completely wrong, and what was expected to be a short, conclusive war turned instead into a protracted blood-letting which would continue for more than four years of largely static warfare. This magnificent book, which covers the events leading to the war and the first month after its outbreak, provides a highly readable narrative history of the period, with insight both into the grand folly of war plans drawn up in isolation and followed mechanically even after abundant evidence of their faults had caused tragedy, and into how contingency (chance, and the decisions of fallible human beings in positions of authority) can tilt the balance of history.

The author is not an academic historian, and she writes for a popular audience. This has caused some to sniff at her work but, as she noted, Herodotus, Thucydides, Gibbon, and Macaulay did not have Ph.D.s. She immerses the reader in the world before the war, beginning with the 1910 funeral in London of Edward VII, in whose cortège rode nine monarchs, most of whose nations would be at war four years hence. The system of alliances is described in detail, as are the mobilisation plans of the future combatants, all of which contributed to the fatal instability of the system in the face of a small perturbation.

Germany, France, Russia, and Austria-Hungary had all drawn up detailed mobilisation plans for assembling, deploying, and operating their conscript armies in the event of war. (Britain, with an all-volunteer regular army which was tiny by continental standards, had no pre-defined mobilisation plan.) As you might expect, Germany's plan was the most detailed, specifying railroad schedules and the composition of individual trains. Now, the important thing to keep in mind about these plans is that, together, they created a powerful first-mover advantage. If Russia began to mobilise and Germany hesitated in its own mobilisation in the hope of defusing the conflict, Germany might find itself at a grave disadvantage even if Russia gained only a few days' head start in assembling its forces. This meant there was a powerful incentive to issue the mobilisation order first, and a compelling reason for an adversary to begin his own mobilisation as soon as news of that order became known.

Compounding this instability were alliances which compelled parties to them to come to the assistance of others. France had no direct interest in the conflict between Germany and Austria-Hungary and Russia in the Balkans, but it had an alliance with Russia, and was pulled into the conflict. When France began to mobilise, Germany activated its own mobilisation and the Schlieffen plan to invade France through Belgium. Once the Germans violated the neutrality of Belgium, Britain's guarantee of that neutrality required (after the customary ambiguity and dithering) a declaration of war against Germany, and the stage was set for a general war in Europe.

The focus here is on the initial phase of the war, the Battle of the Frontiers, which occupied most of the month of August 1914, when Germany, France, and Russia were all following their pre-war plans, each initially expecting a swift conquest of its opponents. An afterword covers the First Battle of the Marne, where the German offensive on the Western front was halted and the stage was set for the static trench warfare which was to ensue. At the conclusion of that battle, all of the shining pre-war plans were in tatters, many commanders had been disgraced or cashiered, and lessons had been learned through the tragedy “by which God teaches the law to kings” (p. 275).

A century later, the lessons of the outbreak of World War I could not be more relevant. On the eve of the war, many believed that the interconnection of the soon-to-be belligerents through trade was such that war was unthinkable, as it would quickly impoverish them. Today, the world is even more connected and yet there are conflicts all around the margins, with alliances spanning the globe. Unlike 1914, when the world was largely dominated by great powers, now there are rogue states, non-state actors, movements dominated by religion, and neo-barbarism and piracy loose upon the stage, and some of these may lay their hands on weapons whose destructive power dwarfs that of the arms of 1914–1918. This book, published more than fifty years ago, about a conflict a century old, could not be more timely.

Posted at 22:49 Permalink

Thursday, July 24, 2014

Floating Point Benchmark: Lua Language Added

I have posted an update to my trigonometry-intense floating point benchmark which adds Lua to the list of languages in which the benchmark is implemented. A new release of the benchmark collection including Lua is now available for downloading.

Lua was developed with the intention of being a small-footprint scripting language which could be easily embedded in applications. Despite this design goal, which it has achieved superbly, being widely adopted as the means of extensibility for numerous games and applications, it is a remarkably sophisticated language, with support for floating point, complex data structures, object oriented programming, and functional programming. It is a modern realisation of what I attempted to achieve with Atlast in 1990, but with a syntax which most programmers will find familiar and a completely memory-safe architecture (unless compromised by user extensions). If I were developing an application for which I needed scripting or user extensibility, Lua would be my tool of choice, and in porting the benchmark to the language I encountered no problems whatsoever—indeed, it worked the first time.
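To give a sense of how lightweight embedding Lua is, here is a minimal C sketch using the standard Lua C API. The script string and the global variable are placeholders for whatever a real application would expose, and the libraries named in the compile command may need adjustment on your platform.

    /* Minimal host program embedding the Lua interpreter.
       Compile with something like:  cc host.c -llua -lm  */
    #include <stdio.h>
    #include <lua.h>
    #include <lauxlib.h>
    #include <lualib.h>

    int main(void)
    {
        lua_State *L = luaL_newstate();   /* create a fresh interpreter state */
        luaL_openlibs(L);                 /* load the standard Lua libraries */

        /* Run a user-supplied script (a literal string here, for illustration). */
        if (luaL_dostring(L, "answer = 21 * 2") != LUA_OK) {
            fprintf(stderr, "Lua error: %s\n", lua_tostring(L, -1));
            lua_close(L);
            return 1;
        }

        /* Pull a result back out of the Lua global environment. */
        lua_getglobal(L, "answer");
        printf("Lua says: %g\n", lua_tonumber(L, -1));

        lua_close(L);                     /* shut down the interpreter */
        return 0;
    }

That, plus registering a few C functions for scripts to call, is essentially all an application needs in order to make itself extensible in Lua.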

The relative performance of the various language implementations (with C taken as 1) is as follows. All language implementations of the benchmark listed below produced identical results to the last (11th) decimal place.

Language             Relative Time   Details
C                    1               GCC 3.2.3 -O3, Linux
Visual Basic .NET    0.866           All optimisations, Windows XP
FORTRAN              1.008           GNU Fortran (g77) 3.2.3 -O3, Linux
Pascal               1.027           Free Pascal 2.2.0 -O3, Linux
                     1.077           GNU Pascal 2.1 (GCC 2.95.2) -O3, Linux
Java                 1.121           Sun JDK 1.5.0_04-b05, Linux
Visual Basic 6       1.132           All optimisations, Windows XP
Haskell              1.223           GHC 7.4.1 -O2 -funbox-strict-fields, Linux
Ada                  1.401           GNAT/GCC 3.4.4 -O3, Linux
Go                   1.481           Go version go1.1.1 linux/amd64, Linux
Simula               2.099           GNU Cim 5.1, GCC 4.8.1 -O2, Linux
Lua                  2.515           LuaJIT 2.0.3, Linux
                     22.7            Lua 5.2.3, Linux
Python               2.633           PyPy 2.2.1 (Python 2.7.3), Linux
                     30.0            Python 2.7.6, Linux
Erlang               3.663           Erlang/OTP 17, emulator 6.0, HiPE [native, {hipe, [o3]}]
                     9.335           Byte code (BEAM), Linux
ALGOL 60             3.951           MARST 2.7, GCC 4.8.1 -O3, Linux
Lisp                 7.41            GNU Common Lisp 2.6.7, Compiled, Linux
                     19.8            GNU Common Lisp 2.6.7, Interpreted
Smalltalk            7.59            GNU Smalltalk 2.3.5, Linux
Forth                9.92            Gforth 0.7.0, Linux
COBOL                12.5            Micro Focus Visual COBOL 2010, Windows 7
                     46.3            Fixed decimal instead of computational-2
Algol 68             15.2            Algol 68 Genie 2.4.1 -O3, Linux
Perl                 23.6            Perl v5.8.0, Linux
Ruby                 26.1            Ruby 1.8.3, Linux
JavaScript           27.6            Opera 8.0, Linux
                     39.1            Internet Explorer 6.0.2900, Windows XP
                     46.9            Mozilla Firefox 1.0.6, Linux
QBasic               148.3           MS-DOS QBasic 1.1, Windows XP Console

The performance of the reference implementation of Lua is comparable to other scripting languages which compile to and execute byte-codes, such as Perl, Python, and Ruby. Raw CPU performance is rarely important in a scripting language, as it is mostly used as “glue” to invoke facilities of the host application which run at native code speed.

Update: I have added results in the above table for the benchmark run under the LuaJIT just-in-time compiler for Lua, generating code for the x86_64 architecture. This runs almost ten times faster than the standard implementation of Lua and is comparable with other compiled languages. The benchmark ran without any modifications on LuaJIT.

I have also added results from running the Python benchmark under the PyPy just-in-time compiler for Python. Again, there was a dramatic speed increase, vaulting Python into the ranks of compiled languages. Since the Python benchmark had last been run with the standard implementation of Python in 2006, I re-ran it on Python 2.7.6 and found it to be substantially slower relative to C on an x86_64 architecture. I do not know whether this is due to better performance of the C code, worse performance of Python, or changes in machine architecture compared to the 32-bit system on which the benchmark was run in 2006. (2014-07-26 22:25 UTC)

Posted at 22:32 Permalink

Wednesday, July 16, 2014

Atlast 2.0 (64-bit) Released

I have just posted the first update to Atlast since 2007. Atlast is a FORTH-like language toolkit intended to make it easy to open the internal facilities of applications to users, especially on embedded platforms with limited computing and memory resources.

Like FORTH, Atlast provides low-level access to the memory architecture of the machine on which it runs, and is sensitive to the length of data objects. The 1.x releases of Atlast assume integers and pointers are 32-bit quantities and floating point numbers are 64-bit, occupying two stack items. This assumption no longer holds when building programs in native mode on 64-bit systems: integers, pointers, and floating point values are all 64 bits.
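The difference is easy to see with a trivial C probe (a generic illustration, not code from the Atlast distribution): a 32-bit (ILP32) build reports 4-byte longs and pointers with 8-byte doubles, while a 64-bit (LP64) Linux build reports 8 bytes for all three.

    /* Print the sizes of the types a FORTH-like kernel cares about.
       Typical ILP32 output: long = 4, pointer = 4, double = 8
       Typical LP64 output:  long = 8, pointer = 8, double = 8 */
    #include <stdio.h>

    int main(void)
    {
        printf("long    = %zu bytes\n", sizeof(long));
        printf("pointer = %zu bytes\n", sizeof(void *));
        printf("double  = %zu bytes\n", sizeof(double));
        return 0;
    }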

Release 2.0 of Atlast is a dedicated 64-bit implementation of the language. If you are developing on a 64-bit platform and are confident you will only target such platforms, it provides a simpler architecture (no need for double-word operations for floating point), a larger address space, and wider integers. This comes at the cost of losing source code compatibility with the 32-bit 1.x releases, particularly for floating point code. If your target platform is a 32-bit system and your development machine is 64-bit, it's best to use version 1.2 (which is functionally identical to 2.0), cross-compiled as 32-bit code. If you don't use floating point or do low-level memory twiddling, it's likely your programs will work on both the 32- and 64-bit versions.

Although Atlast includes comprehensive pointer and stack limit checking, it is not memory-safe, and consequently I do not encourage its use in modern applications. When it was originally developed in the late 1980s, its ability to fit in a small memory footprint was of surpassing importance. With the extravagant memory and compute power of contemporary machines, this is less important and other scripting languages which are both entirely safe and less obscure in syntax will usually be preferable. Still, some people working with embedded systems very close to the hardware continue to find Atlast useful, and this release updates it for 64-bit architectures.

The distribution archive has been re-organised in 2.0, collecting the regression test, examples from the user manual, and benchmarks in subdirectories. An implementation of my floating point benchmark is included among the examples.

Posted at 23:56 Permalink

Saturday, June 28, 2014

Reading List: The Case for Space Solar Power

Mankins, John C. The Case for Space Solar Power. Houston: Virginia Edition, 2014. ISBN 978-0-9913370-0-2.
As world population continues to grow and people in the developing world improve their standard of living toward the level of residents of industrialised nations, demand for energy will increase enormously. Even taking into account anticipated progress in energy conservation and forecasts that world population will reach a mid-century peak and then stabilise, the demand for electricity alone is forecasted to quadruple in the century from 2000 to 2100. If electric vehicles shift a substantial part of the energy consumed for transportation from hydrocarbon fuels to electricity, the demand for electric power will be greater still.

Providing this electricity in an affordable, sustainable way is a tremendous challenge. Most electricity today is produced by burning fuels such as coal, natural gas, and petroleum; by nuclear fission reactors; and by hydroelectric power generated by dams. Quadrupling electric power generation by any of these means poses serious problems. Fossil fuels may be subject to depletion, pose environmental consequences both in extraction and release of combustion products into the atmosphere, and are distributed unevenly around the world, leading to geopolitical tensions between have and have-not countries. Uranium fission is a technology with few environmental drawbacks, but operating it in a safe manner is very demanding and requires continuous vigilance over the decades-long lifespan of a power station. Further, the risk exists that nuclear material can be diverted for weapons use, especially if nuclear power stations proliferate into areas which are politically unstable. Hydroelectric power is clean, generally reliable (except in the case of extreme droughts), and inexhaustible, but unfortunately most rivers which are suitable for its generation have already been dammed, and potential projects which might be developed are insufficient to meet the demand.

Well, what about those “sustainable energy” projects the environmentalists are always babbling about: solar panels, eagle shredders (wind turbines), and the like? They do generate energy without fuel, but they are not the solution to the problem. In order to understand why, we need to look into the nature of the market for electricity, which is segmented into two components, even though the current flows through the same wires. The first is “base load” power. The demand for electricity varies during the day, from day to day, and seasonally (for example, electricity for air conditioning peaks during the mid-day hours of summer). The base load is the electricity demand which is always present, regardless of these changes in demand. If you look at a long-term plot of electricity demand and draw a line through the troughs in the curve, everything below that line is base load power and everything above it is “peak” power. Base load power is typically provided by the sources discussed in the previous paragraph: hydrocarbon, nuclear, and hydroelectric. Because there is a continuous demand for the power they generate, these plants are designed to run non-stop (with excess capacity to cover stand-downs for maintenance), and may be complicated to start up or shut down. In Switzerland, for example, 56% of base load power is produced from hydroelectric plants and 39% from nuclear fission reactors.
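As a toy illustration of the definition, the following C sketch splits a day of hourly demand into the base load level and the energy above it. The demand figures are invented for the example; they are not Swiss (or any other) grid data.

    /* Split a day's hourly demand curve into base load (the level which is
       always present) and the peak energy above that level.
       The numbers are made up, purely to illustrate the definition. */
    #include <stdio.h>

    int main(void)
    {
        double demand[24] = {   /* gigawatts, hour by hour */
             6.1,  5.8,  5.6,  5.5,  5.6,  6.0,  7.2,  8.5,  9.1,  9.4,  9.8, 10.2,
            10.5, 10.1,  9.9,  9.7,  9.8, 10.0,  9.6,  8.8,  8.0,  7.3,  6.8,  6.3
        };

        double base = demand[0];
        for (int h = 1; h < 24; h++) {
            if (demand[h] < base) {
                base = demand[h];       /* base load = the lowest trough */
            }
        }

        double peak_energy = 0.0;
        for (int h = 0; h < 24; h++) {
            peak_energy += demand[h] - base;   /* GWh above the base line */
        }

        printf("Base load: %.1f GW\n", base);
        printf("Energy above base load: %.1f GWh per day\n", peak_energy);
        return 0;
    }

Everything below the 5.5 GW trough in this made-up curve is base load; everything above it is what peaking plants must supply.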

The balance of electrical demand, peak power, is usually generated by smaller power plants which can be brought on-line and shut down quickly as demand varies. Peaking plants sell their power onto the grid at prices substantially higher than base load plants, which compensates for their less efficient operation and higher capital costs for intermittent operation. In Switzerland, most peak energy is generated by thermal plants which can burn either natural gas or oil.

Now the problem with “alternative energy” sources such as solar panels and windmills becomes apparent: they produce neither base load nor peak power. Solar panels produce electricity only during the day, and when the Sun is not obscured by clouds. Windmills, obviously, only generate when the wind is blowing. Since there is no way to efficiently store large quantities of energy (all existing storage technologies raise the cost of electricity to uneconomic levels), these technologies cannot be used for base load power, since they cannot be relied upon to continuously furnish power to the grid. Neither can they be used for peak power generation, since the times at which they are producing power may not coincide with times of peak demand. That isn't to say these energy sources cannot be useful. For example, solar panels on the roofs of buildings in the American southwest make a tremendous amount of sense since they tend to produce power at precisely the times the demand for air conditioning is greatest. This can smooth out, but not replace, the need for peak power generation on the grid.

If we wish to dramatically expand electricity generation without relying on fossil fuels for base load power, there are remarkably few potential technologies. Geothermal power is reliable and inexpensive, but is only available in a limited number of areas and cannot come close to meeting the demand. Nuclear fission, especially with modern, modular designs, is feasible, but faces formidable opposition from the fear-based community. If nuclear fusion ever becomes practical, we will have a limitless, mostly clean energy source, but after sixty years of research we are still decades away from an operational power plant, and it is entirely possible the entire effort may fail. The liquid fluoride thorium reactor, a technology demonstrated in the 1960s, could provide centuries of energy without the nuclear waste or weapons diversion risks of uranium-based nuclear power, but even if it were developed to industrial scale it's still a “nuclear reactor” and can be expected to stimulate the same hysteria as existing nuclear technology.

This book explores an entirely different alternative. Think about it: once you get above the Earth's atmosphere and sufficiently far from the Earth to avoid its shadow, the Sun provides a steady 1.368 kilowatts per square metre, and will continue to do so, non-stop, for billions of years into the future (actually, the Sun is gradually brightening, so on the scale of hundreds of millions of years this figure will increase). If this energy could be harvested and delivered efficiently to Earth, the electricity needs of a global technological civilisation could be met with a negligible impact on the Earth's environment. With present-day photovoltaic cells, we can convert 40% of incident sunlight to electricity, and wireless power transmission in the microwave band (to which the Earth's atmosphere is transparent, even in the presence of clouds and precipitation) has been demonstrated at 40% efficiency, with 60% end-to-end efficiency expected for future systems.
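Using the figures in the preceding paragraph, a back-of-the-envelope calculation gives the collecting area needed to deliver 2 gigawatts to the grid. The assumption that the photovoltaic efficiency and the 60% end-to-end link efficiency simply multiply is mine, made for illustration; it is not a design number from the book.

    /* Rough sizing of a solar power satellite collector: area needed to
       deliver 2 GW to the grid, given the solar constant and the efficiencies
       quoted above.  Treating the two efficiencies as independent factors is
       an assumption made for illustration.  Compile with -lm. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double solar_constant = 1368.0;   /* watts per square metre in space */
        double eta_pv         = 0.40;     /* photovoltaic conversion */
        double eta_link       = 0.60;     /* hoped-for end-to-end transmission */
        double delivered      = 2.0e9;    /* watts delivered to the grid */

        double area_m2 = delivered / (solar_constant * eta_pv * eta_link);

        printf("Collector area: %.1f square kilometres\n", area_m2 / 1.0e6);
        printf("(a square roughly %.1f km on a side)\n", sqrt(area_m2) / 1000.0);
        return 0;
    }

With these assumptions the collector works out to around six square kilometres; different efficiency assumptions scale the area accordingly.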

Thus, no scientific breakthrough of any kind is required to harvest abundant solar energy which presently streams past the Earth and deliver it to receiving stations on the ground which feed it into the power grid. Since the solar power satellites would generate energy 99.5% of the time (with short outages when passing through the Earth's shadow near the equinoxes, at which time another satellite at a different longitude could pick up the load), this would be base load power, with no fuel source required. It's “just a matter of engineering” to calculate what would be required to build the collector satellite, launch it into geostationary orbit (where it would stay above the same point on Earth), and build the receiver station on the ground to collect the energy beamed down by the satellite. Then, given a proposed design, one can calculate the capital cost to bring such a system into production, its operating cost, the price of power it would deliver to the grid, and the time to recover the investment in the system.

Solar power satellites are not a new idea. In 1968, Peter Glaser published a description of a system with photovoltaic electricity generation and microwave power transmission to an antenna on Earth; in 1973 he was granted U.S. patent 3,781,647 for the system. In the 1970s NASA and the Department of Energy conducted a detailed study of the concept, publishing a reference design in 1979 which envisioned a platform in geostationary orbit with solar arrays measuring 5 by 25 kilometres, requiring a monstrous space shuttle with a payload of 250 metric tons and space factories to assemble the platforms. The design was entirely conventional, using much the same technologies as were later used in the International Space Station (ISS) (but for a structure twenty times its size). Given that the ISS has a cost estimated at US$ 150 billion, NASA's 1979 estimate that a complete, operational solar power satellite system comprising 60 power generation platforms and Earth-based infrastructure would cost (in 2014 dollars) between US$ 2.9 and 8.7 trillion might be considered optimistic. Back then, a trillion dollars was a lot of money, and this study pretty much put an end to serious consideration of solar power satellites in the U.S. for almost two decades. In the late 1990s, NASA, realising that much progress had been made in many of the enabling technologies for space solar power, commissioned a “Fresh Look Study”, which concluded that the state of the art was still insufficiently advanced to make power satellites economically feasible.

In this book, the author, after a 25-year career at NASA, recounts the history of solar power satellites to date and presents a radically new design, SPS-ALPHA (Solar Power Satellite by means of Arbitrarily Large Phased Array), which he argues is congruent with 21st century manufacturing technology. There are two fundamental reasons previous cost estimates for solar power satellites have come up with such forbidding figures. First, space hardware is hideously expensive to develop and manufacture. Measured in US$ per kilogram, a laptop computer is around $200/kg, a Boeing 747 $1400/kg, and a smart phone $1800/kg. By comparison, the Space Shuttle Orbiter cost $86,000/kg and the International Space Station around $110,000/kg. Most of the exorbitant cost of space hardware has little to do with the space environment, but is due to its being essentially hand-built in small numbers, and thus never having the benefit of moving down the learning curve as a product is put into mass production nor of automation in manufacturing (which isn't cost-effective when you're only making a few of a product). Second, once you've paid that enormous cost per kilogram for the space hardware, you have to launch it from the Earth into space and transport it to the orbit in which it will operate. For communication satellites which, like solar power satellites, operate in geostationary orbit, current launchers cost around US$ 50,000 per kilogram delivered there. New entrants into the market may substantially reduce this cost, but without a breakthrough such as full reusability of the launcher, it will stay at an elevated level.

SPS-ALPHA tackles the high cost of space hardware by adopting a “hyper modular” design, in which the power satellite is composed of huge numbers of identical modules of just eight different types. Each of these modules is on a scale which permits prototypes to be fabricated in facilities no more sophisticated than university laboratories, and light enough to fall into the “smallsat” category, permitting inexpensive tests in the space environment as required. A production power satellite, designed to deliver 2 gigawatts of electricity to Earth, will have almost four hundred thousand of each of three types of these modules, assembled in space by 4,888 robot arm modules, using more than two million interconnect modules. These are numbers where mass production economies kick in: once the module design has been tested and certified, you can put it out for bids for serial production. And a factory which invests in making these modules inexpensively can be assured of follow-on business if the initial power satellite is a success, since there will be a demand for dozens or hundreds more once its practicality is demonstrated. None of these modules is remotely as complicated as an iPhone, and once they are made in comparable quantities they shouldn't cost any more. What would an iPhone cost if they only made five of them?

Modularity also requires the design to be distributed and redundant. There is no single-point failure mode in the system. The propulsion and attitude control module is replicated 200 times in the full design. As modules fail, for whatever cause, they will have minimal impact on the performance of the satellite and can be swapped out as part of routine maintenance. The author estimates that, on an ongoing basis, around 3% of modules will be replaced per year.

The problem of launch cost is addressed indirectly by the modular design. Since no module masses more than 600 kg (the propulsion module) and none of the others exceeds 100 kg, they do not require a heavy lift launcher. Modules can simply be apportioned out among a large number of flights of the most economical launchers available. Construction of a full scale solar power satellite will require between 500 and 1000 launches per year of a launcher with a capacity in the 10 to 20 metric ton range. This dwarfs the entire global launch industry, and will provide motivation to fund the development of new, reusable launcher designs and the volume of business to push their cost down the learning curve, with a goal of reducing the cost of launch to low Earth orbit to US$ 300–500 per kilogram. Note that the SpaceX Falcon Heavy, under development with a projected first flight in 2015, is already priced around US$ 1000/kg without reusability of the three core stages, which is expected to be introduced in the future.

The author lays out five “Design Reference Missions” which progress from small-scale tests of a few modules in low Earth orbit to a full production power satellite delivering 2 gigawatts to the electrical grid. He estimates a cost of around US$ 5 billion for the pilot plant demonstrator and US$ 20 billion for the first full-scale power satellite. This is not a small sum of money, but it is comparable to the approximately US$ 26 billion cost of the Three Gorges Dam in China. Once power satellites start to come on line, each feeding power into the grid with no cost for fuel and modest maintenance expenses (comparable to those for a hydroelectric dam), the initial investment does not take long to be recovered. Further, the power satellite effort will bootstrap the infrastructure for routine, inexpensive access to space, and the power satellite modules can also be used in other space applications (for example, very high power communication satellites).
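A crude payback estimate makes the point. The wholesale electricity prices below are pure assumptions, and operating and maintenance costs are ignored, so this is an order-of-magnitude illustration only, not a figure from the book.

    /* Very rough payback time for a 2 GW solar power satellite: annual energy
       delivered times an assumed wholesale price, divided into the US$ 20
       billion first-unit cost.  The prices are assumptions for illustration;
       operating and maintenance costs are ignored. */
    #include <stdio.h>

    int main(void)
    {
        double power_w      = 2.0e9;    /* watts delivered to the grid */
        double availability = 0.995;    /* fraction of the year on line */
        double hours_year   = 8766.0;   /* average hours in a year */
        double capital_usd  = 20.0e9;   /* first full-scale satellite */

        double kwh_year = power_w * hours_year * availability / 1000.0;
        printf("Energy delivered: %.1f billion kWh per year\n", kwh_year / 1.0e9);

        double prices[] = { 0.05, 0.08, 0.12 };   /* assumed US$ per kWh */
        for (int i = 0; i < 3; i++) {
            double revenue = kwh_year * prices[i];
            printf("At $%.2f/kWh: $%.2f billion per year, payback %.1f years\n",
                   prices[i], revenue / 1.0e9, capital_usd / revenue);
        }
        return 0;
    }

Even at the low end of these assumed prices the initial investment is recovered within a couple of decades, consistent with the author's point that the payback does not take long once a satellite is on line.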

The most frequently raised objection when power satellites are mentioned is fear that they could be used as a “death ray”. This is, quite simply, nonsense. The microwave power beam arriving at the Earth's surface will have an intensity between 10–20% of summer sunlight, so a mirror reflecting the Sun would be a more effective death ray. Extensive tests were done to determine if the beam would affect birds, insects, and aircraft flying through it and all concluded there was no risk. A power satellite which beamed down its power with a laser could be weaponised, but nobody is proposing that, since it would have problems with atmospheric conditions and cost more than microwave transmission.

This book provides a comprehensive examination of the history of the concept of solar power from space, the various designs proposed over the years and the studies conducted of them, and an in-depth presentation of the technology and economic rationale for the SPS-ALPHA system. It presents an energy future which is very different from that which most people envision, provides a way to bring the benefits of electrification to developing regions without any environmental consequences whatever, and ensures a secure supply of electricity for the foreseeable future.

This is a rewarding, but rather tedious, read. Perhaps it's due to the author's 25 years at NASA, but the text is cluttered with acronyms—there are fourteen pages of them defined in a glossary at the end of the book—and busy charts, some of which are difficult to read as reproduced in the Kindle edition. Copy editing is so-so: I noted 28 errors, and I wasn't especially looking for them. The index in the Kindle edition lists page numbers in the print edition, which are useless because the electronic edition does not contain page numbers.

Posted at 18:24 Permalink

Thursday, June 26, 2014

Reading List: The Death of Money

Rickards, James. The Death of Money. New York: Portfolio / Penguin, 2014. ISBN 978-1-591-84670-3.
In his 2011 book Currency Wars (November 2011), the author discusses what he sees as an inevitable conflict among fiat currencies for dominance in international trade as the dollar, debased as a result of profligate spending and assumption of debt by the government that issues it, is displaced as the world's preeminent trading and reserve currency. With all currencies backed by nothing more than promises made by those who issue them, the stage is set for a race to the bottom: one government weakens its currency to obtain short-term advantage in international trade, only to have its competitors devalue, setting off a chain of competitive devaluations which disrupt trade, cause investment to be deferred due to uncertainty, and destroy the savings of those holding the currencies in question. In 2011, Rickards wrote that it was still possible to avert an era of currency war, although that was not the way to bet. In this volume, three years later, he surveys the scene and concludes that we are now in the early stages of a collapse of the global monetary system, which will be replaced by something very different from the status quo, but whose details we cannot, at this time, confidently predict. Investors and companies involved in international commerce need to understand what is happening and take steps to protect themselves in the era of turbulence which is ahead.

We often speak of “globalisation” as if it were something new, emerging only in recent years, but in fact it is an ongoing trend which dates from the age of wooden ships and sail. Once ocean commerce became practical in the 18th century, comparative advantage caused production and processing of goods to be concentrated in locations where they could be done most efficiently, linked by the sea lanes. This commerce was enormously facilitated by a global currency—if trading partners all used their own currencies, a plantation owner in the West Indies shipping sugar to Great Britain might see his profit wiped out if the exchange rate between his currency and the British pound changed by the time the ship arrived and he was paid. From the dawn of global trade to the present there has been a global currency. Initially, it was the British pound, backed by gold in the vaults of the Bank of England. Even commerce between, say, Argentina and Italy, was usually denominated in pounds and cleared through banks in London. The impoverishment of Britain in World War I began a shift of the centre of financial power from London to New York, and after World War II the Bretton Woods conference established the U.S. dollar, backed by gold, as the world's reserve and trade currency. The world continued to have a global currency, but now it was issued in Washington, not London. (The communist bloc did not use dollars for trade within itself, but conducted its trade with nations outside the bloc in dollars.) In 1971, the U.S. suspended the convertibility of the dollar to gold, and ever since the dollar has been entirely a fiat currency, backed only by the confidence of those who hold it that they will be able to exchange it for goods in the future.

The international monetary system is now in a most unusual period. The dollar remains the nominal reserve and trade currency, but the fraction of reserves held and trade conducted in dollars continues to fall. All of the major currencies (the dollar, euro, yen, pound, yuan, and rouble) are pure fiat currencies, unbacked by any tangible asset and valued only against one another in ever-shifting foreign exchange markets. Most of these currencies are issued by the central banks of governments which have taken on vast amounts of debt which nobody in their right mind believes can ever be paid off, and which is approaching levels at which even a modest rise in interest rates to their historical mean would make the interest on the debt impossible to service. There is every reason for countries holding large reserves of dollars to be worried, but there isn't any other currency which looks substantially better as an alternative. The dollar is, essentially, the best horse in the glue factory.

The author argues that we are on the threshold of a collapse of the international monetary system, and that the outlines of what will replace it are not yet clear. The phrase “collapse of the international monetary system” sounds apocalyptic, but we're not talking about some kind of Mad Max societal cataclysm. As the author observes, the international monetary system collapsed three times in the last century: in 1914, 1939, and 1971, and life went on (albeit in the first two cases, with disastrous and sanguinary wars), and eventually the financial system was reconstructed. There were, in each case, winners and losers, and investors who failed to protect themselves against these turbulent changes paid dearly for their complacency.

In this book, the author surveys the evolving international financial scene. He comes to conclusions which may surprise observers from a variety of perspectives. He believes the Euro is here to stay, and that its advantages to Germany coupled with Germany's economic power will carry it through its current problems. Ultimately, the countries on the periphery will consider the Euro, whatever its costs to them in unemployment and austerity, better than the instability of their national currencies before joining the Eurozone. China is seen as the victim of its own success, with financial warlords skimming off the prosperity of its rapid growth, aided by an opaque and deeply corrupt political class. The developing world is increasingly forging bilateral agreements which bypass the dollar and trade in their own currencies.

What is an investor to do faced with such uncertainty? Well, that's far from clear. The one thing one shouldn't do is assume the present system will persist until you're ready to retire, and invest your retirement savings entirely on the assumption nothing will change. Fortunately, there are alternative investments (for example, gold and silver, farm land, fine art, funds investing in natural resources, and, yes, cash in a variety of currencies [to enable you to pick up bargains when other assets crater]) which will appreciate enormously when the monetary system collapses. You don't have to (and shouldn't) bet everything on a collapse: a relatively small hedge against it will protect you should it happen.

This is an extensively researched and deep investigation of the present state of the international monetary system. As the author notes, ever since all currencies were severed from gold in 1971 and began to float against one another, the complexity of the system has increased enormously. What were once fixed exchange rates, adjusted only when countries faced financial crisis, have been replaced by exchange rates which change in milliseconds, with a huge superstructure of futures, options, currency swaps, and other derivatives whose notional value dwarfs the actual currencies in circulation. This is an immensely fragile system which even a small perturbation can cause to collapse. Faced with a risk whose probability and consequences are impossible to quantify, the prudent investor takes steps to mitigate it. This book provides background for developing such a plan.

Posted at 23:44 Permalink

Sunday, June 22, 2014

Tom Swift and His Airship updated, EPUB added

All 25 of the public domain Tom Swift novels have been posted in the Tom Swift and His Pocket Library collection. I am now returning to the earlier novels, upgrading them to use the more modern typography of those I've done in the last few years. The third novel in the series, Tom Swift and His Airship, has now been updated. Several typographical errors in the original edition have been corrected, and Unicode text entities are used for special characters such as single and double quotes and dashes.

An EPUB edition of this novel is now available which may be downloaded to compatible reader devices; the details of how to do this differ from device to device—please consult the documentation for your reader for details.

For additional details about this novel, see the review I wrote when it was originally posted in 2005.

Posted at 14:02 Permalink

Monday, June 16, 2014

Floating Point Benchmark: Erlang Language Added

I have posted an update to my trigonometry-intense floating point benchmark which adds Erlang to the list of languages in which the benchmark is implemented. A new release of the benchmark collection including Erlang is now available for downloading.

The Erlang programming language was originally developed by the Swedish telecommunication equipment manufacturer Ericsson. Its name is simultaneously a reference to the unit of circuit load used in circuit-switched communication systems, to the Danish engineer after whom that unit is named, and an abbreviation for “Ericsson Language”. While originally a proprietary Ericsson product for in-house use, in 1998 the language and its software were released as open source and are now distributed by erlang.org.

Erlang is intended to facilitate the implementation of concurrent, scalable, distributed, and high-availability systems of the kinds needed to support large telecommunication networks. Concurrency, based upon a message-passing architecture, is built into the language at a low level, as is support for symmetric multiprocessing, allowing Erlang programs to easily exploit modern multi-core microprocessors. A general port architecture allows moving components of a system among physical hardware without reprogramming, and fault tolerance allows the detection of errors and restarting modules which have failed. It is possible to “hot swap” components in running systems without taking the entire system down for the update.

The language adopts the functional programming model, although it is not as strict in enforcing this paradigm as Haskell. The language is dynamically typed, does not use lazy evaluation, allows input/output within functions not specially marked as having side effects, and has ways a sneaky programmer can store global state or create side effects. While these may seem to be (indeed, they are) compromises with a strict reading of functional programming, Erlang is clearly a tool created by practical programmers intended to be used to build very large production systems which have to keep running no matter what. It is not an academic project made to prove a point. (But then Haskell is very pure, and yet that has not kept large systems from being built using it.)

One advantage of choosing Erlang when building a system which may need to grow at a vertiginous rate from its original implementation is that it's “been there; done that”. Companies such as WhatsApp (messaging service), Facebook (chat service), and Goldman Sachs (high-frequency trading) all use Erlang. You may not be dealing with the most philosophically pure functional language, but one which has earned its chops in the real world.

The relative performance of the various language implementations (with C taken as 1) is as follows. All language implementations of the benchmark listed below produced identical results to the last (11th) decimal place.

Language             Relative Time   Details
C                    1               GCC 3.2.3 -O3, Linux
Visual Basic .NET    0.866           All optimisations, Windows XP
FORTRAN              1.008           GNU Fortran (g77) 3.2.3 -O3, Linux
Pascal               1.027           Free Pascal 2.2.0 -O3, Linux
                     1.077           GNU Pascal 2.1 (GCC 2.95.2) -O3, Linux
Java                 1.121           Sun JDK 1.5.0_04-b05, Linux
Visual Basic 6       1.132           All optimisations, Windows XP
Haskell              1.223           GHC 7.4.1 -O2 -funbox-strict-fields, Linux
Ada                  1.401           GNAT/GCC 3.4.4 -O3, Linux
Go                   1.481           Go version go1.1.1 linux/amd64, Linux
Simula               2.099           GNU Cim 5.1, GCC 4.8.1 -O2, Linux
Erlang               3.663           Erlang/OTP 17, emulator 6.0, HiPE [native, {hipe, [o3]}]
                     9.335           Byte code (BEAM), Linux
ALGOL 60             3.951           MARST 2.7, GCC 4.8.1 -O3, Linux
Lisp                 7.41            GNU Common Lisp 2.6.7, Compiled, Linux
                     19.8            GNU Common Lisp 2.6.7, Interpreted
Smalltalk            7.59            GNU Smalltalk 2.3.5, Linux
Forth                9.92            Gforth 0.7.0, Linux
COBOL                12.5            Micro Focus Visual COBOL 2010, Windows 7
                     46.3            Fixed decimal instead of computational-2
Algol 68             15.2            Algol 68 Genie 2.4.1 -O3, Linux
Python               17.6            Python 2.3.3 -OO, Linux
Perl                 23.6            Perl v5.8.0, Linux
Ruby                 26.1            Ruby 1.8.3, Linux
JavaScript           27.6            Opera 8.0, Linux
                     39.1            Internet Explorer 6.0.2900, Windows XP
                     46.9            Mozilla Firefox 1.0.6, Linux
QBasic               148.3           MS-DOS QBasic 1.1, Windows XP Console

By default, Erlang compiles to a machine-independent byte code which is executed by the runtime system. The developers say this is typically about ten times slower than C compiled to native code for computationally intense numeric and text processing work. My test confirmed this, with the byte code compiled benchmark running 9.3 times slower than C. On some platforms, including the GNU/Linux x86_64 machine on which I ran the benchmark, Erlang supports HiPE (High Performance Erlang), which allows compilation to native machine code. Using this option and specifying the highest level of optimisation produces a program which runs just 3.7 times slower than C. While this may seem a substantial penalty, it's worth noting that telecommunication systems rarely do serious number crunching or text shuffling: they're all about database accesses and intercommunication, and what is paramount is reliability, scalability, and the ability to distribute the application across multiple hardware platforms. Erlang's strengths in these areas may outweigh its greater CPU usage for large-scale distributed systems.

Posted at 23:37 Permalink

Friday, June 13, 2014

Floating Point Benchmark: Simula Language Added

I have posted an update to my trigonometry-intense floating point benchmark which adds Simula to the list of languages in which the benchmark is implemented. A new release of the benchmark collection including Simula is now available for downloading.

Simula may be the most significant computer language of which you've never heard. In the 1960s, it introduced almost all of the essential concepts of object oriented programming (classes, inheritance, and virtual procedures) and included facilities for discrete event simulation. Memory management includes dynamic storage allocation and garbage collection. When programming in Simula, one has the sense of using a computer language of the 1990s which somehow dropped into the 1960s, retaining some of the archaic syntax of that epoch. (What was it about academic language designers of the era that they did not appreciate the value of initialising variables and declaring symbolic constants? COBOL had both from inception.) In fact, although few programmers were aware of Simula, it was well known among the computer science community and language designers, and was the direct inspiration for languages such as C++, Objective C, Smalltalk, and Java. Had it not been relegated to the niche of a “simulation language”, we might have been writing object oriented code since the early 1970s. So it goes.

The relative performance of the various language implementations (with C taken as 1) is as follows. All language implementations of the benchmark listed below produced identical results to the last (11th) decimal place.

Language             Relative Time   Details
C                    1               GCC 3.2.3 -O3, Linux
Visual Basic .NET    0.866           All optimisations, Windows XP
FORTRAN              1.008           GNU Fortran (g77) 3.2.3 -O3, Linux
Pascal               1.027           Free Pascal 2.2.0 -O3, Linux
                     1.077           GNU Pascal 2.1 (GCC 2.95.2) -O3, Linux
Java                 1.121           Sun JDK 1.5.0_04-b05, Linux
Visual Basic 6       1.132           All optimisations, Windows XP
Haskell              1.223           GHC 7.4.1 -O2 -funbox-strict-fields, Linux
Ada                  1.401           GNAT/GCC 3.4.4 -O3, Linux
Go                   1.481           Go version go1.1.1 linux/amd64, Linux
Simula               2.099           GNU Cim 5.1, GCC 4.8.1 -O2, Linux
ALGOL 60             3.951           MARST 2.7, GCC 4.8.1 -O3, Linux
Lisp                 7.41            GNU Common Lisp 2.6.7, Compiled, Linux
                     19.8            GNU Common Lisp 2.6.7, Interpreted
Smalltalk            7.59            GNU Smalltalk 2.3.5, Linux
Forth                9.92            Gforth 0.7.0, Linux
COBOL                12.5            Micro Focus Visual COBOL 2010, Windows 7
                     46.3            Fixed decimal instead of computational-2
Algol 68             15.2            Algol 68 Genie 2.4.1 -O3, Linux
Python               17.6            Python 2.3.3 -OO, Linux
Perl                 23.6            Perl v5.8.0, Linux
Ruby                 26.1            Ruby 1.8.3, Linux
JavaScript           27.6            Opera 8.0, Linux
                     39.1            Internet Explorer 6.0.2900, Windows XP
                     46.9            Mozilla Firefox 1.0.6, Linux
QBasic               148.3           MS-DOS QBasic 1.1, Windows XP Console

The Simula benchmark was developed and run under the GNU Cim 5.1 Simula to C translator. C code was compiled with GCC 4.8.1 with -O2 for the x86_64 architecture. There is no reason to believe a purpose-built optimising Simula compiler could not perform as well as a present-day C++ compiler.

Posted at 14:26 Permalink