Sunday, August 5, 2018

Reading List: Losing the Nobel Prize

Keating, Brian. Losing the Nobel Prize. New York: W. W. Norton, 2018. ISBN 978-1-324-00091-4.
Ever since the time of Galileo, the history of astronomy has been punctuated by a series of “great debates”—disputes between competing theories of the organisation of the universe which observation and experiment using available technology are not yet able to resolve one way or another. In Galileo's time, the great debate was between the Ptolemaic model, which placed the Earth at the centre of the solar system (and universe) and the competing Copernican model which had the planets all revolving around the Sun. Both models worked about as well in predicting astronomical phenomena such as eclipses and the motion of planets, and no observation made so far had been able to distinguish them.

Then, in 1610, Galileo turned his primitive telescope to the sky and observed the bright planets Venus and Jupiter. He found Venus to exhibit phases, just like the Moon, which changed over time. This would not happen in the Ptolemaic system, but is precisely what would be expected in the Copernican model—where Venus circled the Sun in an orbit inside that of Earth. Turning to Jupiter, he found it to be surrounded by four bright satellites (now called the Galilean moons) which orbited the giant planet. This further falsified Ptolemy's model, in which the Earth was the sole source of attraction around which all celestial bodies revolved. Since anybody could build their own telescope and confirm these observations, this effectively resolved the first great debate in favour of the Copernican heliocentric model, although some hold-outs in positions of authority resisted its dethroning of the Earth as the centre of the universe.

This dethroning came to be called the “Copernican principle”, that Earth occupies no special place in the universe: it is one of a number of planets orbiting an ordinary star in a universe filled with a multitude of other stars. Indeed, when Galileo observed the star cluster we call the Pleiades, he saw myriad stars too dim to be visible to the unaided eye. Further, the bright stars were surrounded by a diffuse bluish glow. Applying the Copernican principle again, he argued that the glow was due to innumerably more stars too remote and dim for his telescope to resolve, and then generalised that the glow of the Milky Way was also composed of uncountably many stars. Not only had the Earth been demoted from the centre of the solar system, so had the Sun been dethroned to being just one of a host of stars possibly stretching to infinity.

But Galileo's inference from observing the Pleiades was wrong. The glow that surrounds the bright stars is due to interstellar dust and gas which reflect light from the stars toward Earth. No matter how large or powerful the telescope you point toward such a reflection nebula, all you'll ever see is a smooth glow. Driven by the desire to confirm his Copernican convictions, Galileo had been fooled by dust. He would not be the last.

William Herschel was an eminent musician and composer, but his passion was astronomy. He pioneered the large reflecting telescope, building more than sixty telescopes. In 1789, funded by a grant from King George III, Herschel completed a reflector with a mirror 1.26 metres in diameter, which remained the largest aperture telescope in existence for the next fifty years. In Herschel's day, the great debate was about the Sun's position among the surrounding stars. At the time, there was no way to determine the distance or absolute brightness of stars, but Herschel decided that he could compile a map of the galaxy (then considered to be the entire universe) by surveying the number of stars in different directions. Only if the Sun was at the centre of the galaxy would the counts be equal in all directions.

Aided by his sister Caroline, a talented astronomer herself, he eventually compiled a map which indicated the galaxy was in the shape of a disc, with the Sun at the centre. This seemed to refute the Copernican view that there was nothing special about the Sun's position. Such was Herschel's reputation that this finding, however puzzling, remained unchallenged until 1847 when Wilhelm Struve discovered that Herschel's results had been rendered invalid by his failing to take into account the absorption and scattering of starlight by interstellar dust. Just as you can only see the same distance in all directions while within a patch of fog, regardless of the shape of the patch, Herschel's survey could only see so far before extinction of light by dust cut off his view of stars. Later it was discovered that the Sun is far from the centre of the galaxy. Herschel had been fooled by dust.

In the 1920s, another great debate consumed astronomy. Was the Milky Way the entire universe, or were the “spiral nebulæ” other “island universes”, galaxies in their own right, peers of the Milky Way? With no way to measure their distances and no telescopes able to resolve them into stars, many astronomers believed the spiral nebulæ were nearby objects, perhaps other solar systems in the process of formation. The discovery of a Cepheid variable star in the nearby Andromeda “nebula” by Edwin Hubble in 1923 settled this debate. Andromeda was much farther away than the most distant stars found in the Milky Way. It must, then, be a separate galaxy. Once again, demotion: the Milky Way was not the entire universe, but just one galaxy among a multitude.

But how far away were the galaxies? Hubble continued his search and measurements and found that the more distant the galaxy, the more rapidly it was receding from us. This meant the universe was expanding. Hubble was then able to calculate the age of the universe—the time when all of the galaxies must have been squeezed together into a single point. From his observations, he computed this age at two billion years. This was a major embarrassment: astrophysicists and geologists were confident in dating the Sun and Earth at around five billion years. It didn't make any sense for them to be more than twice as old as the universe of which they were a part. Some years later, it was discovered that Hubble's distance estimates were far understated because he failed to account for extinction of light from the stars he measured due to dust. The universe is now known to be seven times the age Hubble estimated. Hubble had been fooled by dust.
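
The arithmetic behind the embarrassment is simple: the age estimate is roughly the reciprocal of the expansion rate, so overstating the rate understates the age. A quick check (using round numbers assumed for illustration: about 500 km/s/Mpc for Hubble's original figure against the modern value of about 70) reproduces both the two-billion-year result and the factor of seven:

```python
# Rough "Hubble time" t ≈ 1/H0, converting an expansion rate in km/s/Mpc
# into an age in billions of years.  Both H0 values are assumptions for
# illustration: ~500 km/s/Mpc (Hubble's figure) vs. ~70 km/s/Mpc (modern).
KM_PER_MPC = 3.086e19   # kilometres in one megaparsec
SEC_PER_GYR = 3.156e16  # seconds in a billion years

def hubble_time_gyr(h0_km_s_per_mpc):
    """Age estimate in Gyr from an expansion rate in km/s/Mpc."""
    h0_per_sec = h0_km_s_per_mpc / KM_PER_MPC  # convert to 1/seconds
    return 1.0 / h0_per_sec / SEC_PER_GYR

print(round(hubble_time_gyr(500)))  # ~2 Gyr: Hubble's embarrassing estimate
print(round(hubble_time_gyr(70)))   # ~14 Gyr: roughly seven times longer
```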

By the 1950s, the expanding universe was generally accepted and the great debate was whether it had come into being in some cataclysmic event in the past (the “Big Bang”) or was eternal, with new matter spontaneously appearing to form new galaxies and stars as the existing ones receded from one another (the “Steady State” theory). Once again, there were no observational data to falsify either theory. The Steady State theory was attractive to many astronomers because it was the more “Copernican”—the universe would appear overall the same at any time in an infinite past and future, so our position in time is not privileged in any way, while in the Big Bang the distant past and future are very different than the conditions we observe today. (The rate of matter creation required by the Steady State theory was so low that no plausible laboratory experiment could detect it.)

The discovery of the cosmic background radiation in 1965 definitively settled the debate in favour of the Big Bang. It was precisely what was expected if the early universe had been much denser and hotter than conditions today, as predicted by the Big Bang. The Steady State theory made no such prediction and, despite rear-guard actions by some of its defenders (invoking dust to explain the detected radiation!), was considered falsified by most researchers.

But the Big Bang was not without its own problems. In particular, in order to end up with anything like the universe we observe today, the initial conditions at the time of the Big Bang seemed to have been fantastically fine-tuned (for example, an infinitesimal change in the balance between the density and rate of expansion in the early universe would have caused the universe to quickly collapse into a black hole or disperse into the void without forming stars and galaxies). There was no physical reason to explain these fine-tuned values; you had to assume that's just the way things happened to be, or that a Creator had set the dial with a precision of dozens of decimal places.

In 1979, the theory of inflation was proposed. Inflation held that, in an instant after the Big Bang, the size of the universe blew up exponentially, so that everything in the observable universe today was, before inflation, packed into a volume the size of an elementary particle. Thus, it's no surprise that the universe we now observe appears so uniform. Inflation so neatly resolved the tensions between the Big Bang theory and observation that it (and refinements over the years) became widely accepted. But could inflation be observed? That is the ultimate test of a scientific theory.

There have been numerous cases in science where many years elapsed between a theory being proposed and definitive experimental evidence for it being found. After Galileo's observations, the Copernican theory that the Earth orbits the Sun became widely accepted, but there was no direct evidence for the Earth's motion with respect to the distant stars until the discovery of the aberration of light in 1727. Einstein's theory of general relativity predicted gravitational radiation in 1915, but the phenomenon was not directly detected by experiment until a century later. Would inflation have to wait as long or longer?

Things didn't look promising. Almost everything we know about the universe comes from observations of electromagnetic radiation: light, radio waves, X-rays, etc., with a little bit more from particles (cosmic rays and neutrinos). But the cosmic background radiation forms an impenetrable curtain behind which we cannot observe anything via the electromagnetic spectrum, and it dates from around 380,000 years after the Big Bang. The era of inflation was believed to have ended 10⁻³² seconds after the Bang, considerably earlier. The only “messenger” which could possibly have reached us from that era is gravitational radiation. We've just recently become able to detect gravitational radiation from the most violent events in the universe, but no conceivable experiment would be able to detect this signal from the baby universe.

So is it hopeless? Well, not necessarily…. The cosmic background radiation is a snapshot of the universe as it existed 380,000 years after the Big Bang, and only a few years after it was first detected, it was realised that gravitational waves from the very early universe might have left subtle imprints upon the radiation we observe today. In particular, gravitational radiation creates a form of polarisation called B-modes which most other sources cannot create.

If it were possible to detect B-mode polarisation in the cosmic background radiation, it would be a direct detection of inflation. While the experiment would be demanding and eventually result in literally going to the end of the Earth, it would be strong evidence for the process which shaped the universe we inhabit and, in all likelihood, a ticket to Stockholm for those who made the discovery.

This was the quest on which the author embarked in the year 2000, resulting in the deployment of an instrument called BICEP1 (Background Imaging of Cosmic Extragalactic Polarization) in the Dark Sector Laboratory at the South Pole. Here is my picture of that laboratory in January 2013. The BICEP telescope is located in the foreground inside a conical shield which protects it against thermal radiation from the surrounding ice. In the background is the South Pole Telescope, a millimetre wave antenna which was not involved in this research.

BICEP2 and South Pole Telescope, 2013-01-09

BICEP1 was a prototype, intended to test the technologies to be used in the experiment. These included cooling the entire telescope (which was a modest aperture [26 cm] refractor, not unlike Galileo's, but operating at millimetre wavelengths instead of visible light) to the temperature of interstellar space, with its detector cooled to just ¼ degree above absolute zero. In 2010 its successor, BICEP2, began observation at the South Pole, and continued its run into 2012. When I took the photo above, BICEP2 had recently concluded its observations.

On March 17th, 2014, the BICEP2 collaboration announced, at a press conference, the detection of B-mode polarisation in the region of the southern sky they had monitored. Note the swirling pattern of polarisation which is the signature of B-modes, as opposed to the starburst pattern of other kinds of polarisation.

B-mode polarisation in BICEP2 observations, 2014-03-17

But, not so fast, other researchers cautioned. The risk in doing “science by press release” is that the research is not subjected to peer review—criticism by other researchers in the field—before publication and further criticism in subsequent publications. The BICEP2 results went immediately to the front pages of major newspapers. Here was direct evidence of the birth cry of the universe and confirmation of a theory which some argued implied the existence of a multiverse—the latest Copernican demotion—the idea that our universe was just one of an ensemble, possibly infinite, of parallel universes in which every possibility was instantiated somewhere. Amid the frenzy, a few specialists in the field, including researchers on competing projects, raised the question, “What about the dust?” Dust again! As it happens, while gravitational radiation can induce B-mode polarisation, it isn't the only thing which can do so. Our galaxy is filled with dust and magnetic fields which can cause those dust particles to align with them. Aligned dust particles cause polarised reflections which can mimic the B-mode signature of the gravitational radiation sought by BICEP2.

The BICEP2 team was well aware of this potential contamination problem. Unfortunately, their telescope was sensitive only to one wavelength, chosen to be the most sensitive to B-modes due to primordial gravitational radiation. It could not, however, distinguish a signal from that source from one due to foreground dust. At the same time, the European Space Agency's Planck spacecraft was collecting precision data on the cosmic background radiation in a variety of wavelengths, including one sensitive primarily to dust. Those data would have allowed the BICEP2 investigators to quantify the degree to which their signal was due to dust. But there was a problem: BICEP2 and Planck were direct competitors.

Planck had the data, but had not released them to other researchers. However, the BICEP2 team discovered that a member of the Planck collaboration had shown a slide at a conference containing unpublished Planck observations of dust. A member of the BICEP2 team digitised an image of the slide, created a model from it, and concluded that dust contamination of the BICEP2 data would not be significant. This was a highly dubious, if not explicitly unethical, move, and the confidence it provided would prove misplaced.

In September 2014, a preprint from the Planck collaboration (eventually published in 2016) showed that B-modes from foreground dust could account for all of the signal detected by BICEP2. In January 2015, the European Space Agency published an analysis of the Planck and BICEP2 observations which showed the entire BICEP2 detection was consistent with dust in the Milky Way. The epochal detection of inflation had been deflated. The BICEP2 researchers had been deceived by dust.

The author, a founder of the original BICEP project, was so close to a Nobel prize he was already trying to read the minds of the Nobel committee to divine who among the many members of the collaboration they would reward with the gold medal. Then it all went away, seemingly overnight, turned to dust. Some said that the entire episode had injured the public's perception of science, but to me it seems an excellent example of science working precisely as intended. A result is placed before the public; others, with access to the same raw data, are given an opportunity to critique it and set forth their own; and eventually researchers in the field decide whether the original results are correct. Yes, it would probably be better if all of this happened in musty library stacks of journals almost nobody reads before bursting out of the chest of mass media, but in an age where scientific research is funded by agencies spending money taken from hairdressers and cab drivers by coercive governments under implicit threat of violence, it is inevitable they will force researchers into the public arena to trumpet their “achievements”.

In parallel with the saga of BICEP2, the author discusses the Nobel Prizes and what he considers to be their dysfunction in today's scientific research environment. I was surprised to learn that many of the curious restrictions on awards of the Nobel Prize were not, as I had heard and many believe, conditions of Alfred Nobel's will. In fact, the conditions that the prize be shared no more than three ways, not be awarded posthumously, and not awarded to a group (with the exception of the Peace prize) appear nowhere in Nobel's will, but were imposed later by the Nobel Foundation. Further, Nobel's will explicitly states that the prizes shall be awarded to “those who, during the preceding year, shall have conferred the greatest benefit to mankind”. This constraint (emphasis mine) has been ignored since the inception of the prizes.

He decries the lack of “diversity” in Nobel laureates (by which he means, almost entirely, how few women have won prizes). While there have certainly been women who deserved prizes and didn't win (Lise Meitner, Jocelyn Bell Burnell, and Vera Rubin are prime examples), there are many more men who didn't make the three-laureate cut-off (Freeman Dyson being an obvious example for the 1965 Physics Nobel for quantum electrodynamics). The whole Nobel prize concept is capricious, and rewards only those who happen to be in the right place at the right time in the right field that the committee has decided deserves an award this year and are lucky enough not to die before the prize is awarded. To imagine it to be “fair” or representative of scientific merit is, in the estimation of this scribbler, in flying unicorn territory.

In all, this is a candid view of how science is done at the top of the field today, with all of the budget squabbles, maneuvering for recognition, rivalry among competing groups of researchers, balancing the desire to get things right with the compulsion to get there first, and the eye on that prize, given only to a few in a generation, which can change one's life forever.

Personally, I can't imagine being so fixated on winning a prize one has so little chance of gaining. It's like being obsessed with winning the lottery—and about as likely.

In parallel with all of this is an autobiographical account of the career of a scientist with its ups and downs, which is both a cautionary tale and an inspiration to those who choose to pursue that difficult and intensely meritocratic career path.

I recommend this book on all three tracks: a story of scientific discovery, mis-interpretation, and self-correction, the dysfunction of the Nobel Prizes and how they might be remedied, and the candid story of a working scientist in today's deeply corrupt coercively-funded research environment.

Posted at 10:51 Permalink

Tuesday, July 24, 2018

Reading List: Une Fantaisie du Docteur Ox

Verne, Jules. Une Fantaisie du Docteur Ox. Seattle: CreateSpace, [1874] 2017. ISBN 978-1-5470-6408-3.
After reading and reviewing Jules Verne's Hector Servadac last year, I stumbled upon a phenomenal bargain: a Kindle edition of the complete works of Jules Verne—160 titles, with 5400 illustrations—for US$ 2.51 at this writing, published by Arvensa. This is not a cheap public domain knock-off, but a thoroughly professional publication with very few errors. For less than the price of a paperback book, you get just about everything Jules Verne ever wrote in Kindle format which, if you download the free Kindle French dictionary, allows you to quickly look up the obscure terms and jargon of which Verne is so fond without flipping through the Little Bob. That's how I read this work, although I have cited a print edition in the header for those who prefer such.

The strange story of Doctor Ox would be considered a novella in modern publishing terms, coming in at 19,240 words. It is divided into 17 chapters and is written in much the same style as the author's Voyages extraordinaires, with his customary huge vocabulary, fondness for lengthy enumerations, and witty parody of the national character of foreigners.

Here, the foreigners in question are the Flemish, speakers of dialects of the Dutch language who live in the northern part of Belgium. The Flemish are known for being phlegmatic, and nowhere is this more in evidence than the small city of Quiquendone. Its 2,393 residents and their ancestors have lived there since the city was founded in 1197, and very little has happened to disturb their placid lives; they like it that way. Its major industries are the manufacture of whipped cream and barley sugar. Its inhabitants are taciturn and, when they speak, do so slowly. For centuries, what little government they require has been provided by generations of the van Tricasse family, son succeeding father as burgomaster. There is little for the burgomaster to do, and one of the few items on his agenda, inherited from his father twenty years ago, is whether the city should dispense with the services of its sole policeman, who hasn't had anything to do for decades.

Burgomaster van Tricasse exemplifies the moderation in all things of the residents of his city. I cannot resist quoting this quintessentially Jules Verne description in full.

Le bourgmestre était un personnage de cinquante ans, ni gras ni maigre, ni petit ni grand, ni vieux ni jeune, ni coloré ni pâle, ni gai ni triste, ni content ni ennuyé, ni énergique ni mou, ni fier ni humble, ni bon ni méchant, ni généreux ni avare, ni brave ni poltron, ni trop ni trop peu, — ne quid nimis, — un homme modéré en tout ; mais à la lenteur invariable de ses mouvements, à sa mâchoire inférieure un peu pendante, à sa paupière supérieure immuablement relevée, à son front uni comme une plaque de cuivre jaune et sans une ride, à ses muscles peu saillants, un physionomiste eût sans peine reconnu que le bourgmestre van Tricasse était le flegme personnifié.

Imagine how startled this paragon of moderation and peace must have been when the city's policeman—he whose job has been at risk for decades—pounds on the door and, when admitted, reports that the city's doctor and lawyer, visiting the house of scientist Doctor Ox, had gotten into an argument. They had been talking politics! Such a thing had not happened in Quiquendone in over a century. Words were exchanged that might lead to a duel!

Who is this Doctor Ox? A recent arrival in Quiquendone, he is a celebrated scientist, considered a leader in the field of physiology. He stands out against the other inhabitants of the city. Of no well-defined nationality, he is a genuine eccentric, self-confident, ambitious, and known even to smile in public. He and his laboratory assistant Gédéon Ygène work on their experiments and never speak of them to others.

Shortly after arriving in Quiquendone, Dr Ox approached the burgomaster and city council with a proposal: to illuminate the city and its buildings, not with the new-fangled electric lights which other cities were adopting, but with a new invention of his own, oxy-hydric gas. Using powerful electric batteries he invented, water would be decomposed into hydrogen and oxygen gas, stored separately, then delivered in parallel pipes to individual taps where they would be combined and burned, producing a light much brighter and pure than electric lights, not to mention conventional gaslights burning natural or manufactured gas. In storage and distribution, hydrogen and oxygen would be strictly segregated, as any mixing prior to the point of use ran the risk of an explosion. Dr Ox offered to pay all of the expenses of building the gas production plant, storage facilities, and installation of the underground pipes and light fixtures in public buildings and private residences. After a demonstration of oxy-hydric lighting, city fathers gave the go-ahead for the installation, presuming Dr Ox was willing to assume all the costs in order to demonstrate his invention to other potential customers.

Over succeeding days and weeks, things before unimagined, indeed, unimaginable begin to occur. On a visit to Dr Ox, the burgomaster himself and his best friend city council president Niklausse find themselves in—dare it be said—a political argument. At the opera house, where musicians and singers usually so moderate the tempo that works are performed over multiple days, one act per night, a performance of Meyerbeer's Les Huguenots becomes frenetic and incites the audience to what can only be described as a riot. A ball at the house of the banker becomes a whirlwind of sound and motion. And yet, each time, after people go home, they return to normal and find it difficult to believe what they did the night before.

Over time, the phenomenon, at first only seen in large public gatherings, begins to spread into individual homes and private lives. You would think the placid Flemish had been transformed into the hotter tempered denizens of countries to the south. Twenty newspapers spring up, each advocating its own radical agenda. Even plants start growing to enormous size, and cats and dogs, previously as reserved as their masters, begin to bare fangs and claws. Finally, a mass movement rises to avenge the honour of Quiquendone for an injury committed in the year 1185 by a cow from the neighbouring town of Virgamen.

What was happening? Whence the madness? What would be the result when the citizens of Quiquendone, armed with everything they could lay their hands on, marched upon their neighbours?

This is a classic “puzzle story”, seasoned with a mad scientist of whom the author allows us occasional candid glimpses as the story unfolds. You'll probably solve the puzzle yourself long before the big reveal at the end. Jules Verne, always anticipating the future, foresaw this: the penultimate chapter is titled (my translation), “Where the intelligent reader sees that he guessed correctly, despite every precaution by the author”. The enjoyment here is not so much the puzzle but rather Verne's language and delicious description of characters and events, which are up to the standard of his better-known works.

This is “minor Verne”, written originally for a public reading and then published in a newspaper in Amiens, his adopted home. Many believed that in Quiquendone he was satirising Amiens and his placid neighbours.

Doctor Ox would reappear in the work of Jules Verne in his 1882 play Voyage à travers l'impossible (Journey Through the Impossible), a work which, after 97 performances in Paris, was believed lost until a single handwritten manuscript was found in 1978. Dr Ox reprises his role as mad scientist, joining other characters from Verne's novels on their own extraordinary voyages. After that work, Doctor Ox disappears from the world. But when I regard the frenzied serial madness loose today, from “bathroom equality”, tearing down Civil War monuments, masked “Antifa” blackshirts beating up people in the streets, the “refugee” racket, and Russians under every bed, I sometimes wonder if he's taken up residence in today's United States.

An English translation is available. Verne's reputation has often suffered due to poor English translations of his work; I have not read this edition and don't know how good it is. Warning: the description of this book at Amazon contains a huge spoiler for the central puzzle of the story.

Posted at 21:12 Permalink

Sunday, July 22, 2018

Reading List: Sanity

Neovictorian [pseud.] and Neal Van Wahr. Sanity. Seattle: Amazon Digital Services, [2017] 2018. ISBN 978-1-980820-95-6.
Have you sometimes felt, since an early age, that you were an alien, somehow placed on Earth and observing the antics of humans as if they were a different species? Why do they believe such stupid things? Why do they do such dumb things? And why do they keep doing them over and over again, seemingly incapable of learning from the bad outcomes of all the previous attempts?

That is how Cal Adler felt since childhood and, like most people with such feelings, kept them quiet and bottled up while trying to get ahead in a game whose rules often seemed absurd. In his senior year in high school, he encounters a substitute guidance counsellor who tells him, without any preliminary conversation, precisely how he feels. He's assured he is not alone, and that over time he will meet others. He is given an enigmatic contact in case of emergency. He is advised, as any alien in a strange land, to blend in while observing and developing his own talents. And that's the last he sees of the counsellor.

Cal's subsequent life is punctuated by singular events: a terrorist incident in which he spontaneously rises to the occasion, encounters with extraordinary people, and initiation into skills he never imagined he'd possess. He begins to put together a picture of a shadowy…something…of which he may or may not be a part, whose goals are unclear, but whose people are extraordinary.

Meanwhile, a pop religion called ReHumanism, founded by a science fiction writer, is gaining adherents among prominent figures in business, entertainment, and technology. Its “scriptures” advocate escape from the tragic cycle of progress and collapse which has characterised the human experience by turning away from the artificial environment in which we have immersed ourselves and rediscovering our inherent human nature which may, to many in the modern world, seem alien. Is there a connection between ReHumanism (which seems like a flaky scam to Cal) and the mysterious people he is encountering?

All of these threads begin to come together when Cal, working as a private investigator in Reno, Nevada, is retained by the daughter of a recently-deceased billionaire industrialist to find her mother, who has disappeared during a tourist visit to Alaska. The mother is revealed to have become a convert to and supporter of ReHumanism. Are they involved? And how did the daughter find Cal, who, after previous events, has achieved a level of low observability stealth aircraft designers can only dream of?

An adventure begins in which nothing is as it seems and all of Cal's formidable talents are tested to their limits.

This is an engaging and provocative mystery/thriller which will resonate with those who identify with the kind of heroic, independent, and inner-directed characters that populate the fiction of Robert A. Heinlein and other writers of the golden age of science fiction. It speaks directly to those sworn to chart their own course through life regardless of what others may think or say. I'm not sure the shadowy organisation we glimpse here actually exists, but I wish it did…and I wish they'd contacted me. There are many tips of the hat here to works and authors of fiction with similar themes, and I'm sure many more I missed.

This is an example of the efflorescence of independent science fiction which the obsolescence of the traditional gatekeeper publishers has engendered. With the advent of low-cost, high-margin self-publishing and customer reviews and ratings to evaluate quality, an entire new cohort of authors whose work would never before have seen the light of day is now enriching the genre and the lives of their enthusiastic readers. The work is not free of typographical and grammatical errors, but I've read books from major science fiction publishers with more. The Kindle edition is free to Kindle Unlimited subscribers.

Posted at 22:10 Permalink

Saturday, July 21, 2018

Twitterbot is a Bad, Bad Boy

After I migrated the WordPress/BuddyPress site I administer to the Amazon Web Services (AWS) Linux 2 operating system platform on 2018-07-08, I observed intermittent errors in the system log reporting “php-fpm[21865]: [WARNING] [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 3 idle, and 27 total children” or some such. After correlating these with the HTTPD access_log, I found that they were due to the PHP-fpm mechanism (which is new in Linux 2) running out of worker processes or, even worse, launching so many of them that it exhausts system memory and causes worker processes to crash. (And don't tell me to configure a swap file; that will only turn process crashes into system-wide thrashing oblivion.)
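The knobs that warning message mentions live in the PHP-fpm pool configuration file (/etc/php-fpm.d/www.conf on this platform). One way to keep workers from exhausting memory is to cap their number; the values below are purely illustrative and must be tuned to the instance's available RAM and the memory footprint of each PHP worker:

    ; /etc/php-fpm.d/www.conf — illustrative values, not a recommendation
    pm = dynamic
    pm.max_children = 20      ; hard ceiling on worker processes
    pm.start_servers = 5
    pm.min_spare_servers = 3
    pm.max_spare_servers = 8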

And why were all of these PHP processes running around? After all, this is a discussion site with fewer than 120 members and modest traffic. Looking at the log pointed the finger at Twitterbot, a Web crawler operated by the Californian socialist network called Twitter, which claims it's accessing sites to see if they provide “Twitter cards” for URLs posted on its system. Well, it's awfully frenetic in doing so. In the first incident I investigated, it hit my site from four different IP addresses a total of 16 times within one second, all requesting the same page. You may call this a Web crawler. To me it looks like a denial of service attack. These requests all spawn PHP-fpm worker processes and may blow away system memory, and for no reason. We do not support Twitter cards, and there is no conceivable reason for Twitter to make more than one request to determine we don't.
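One way to spot such request storms in the access_log — a sketch in Python (I actually poked at the log by hand), assuming the Apache combined log format; the storm threshold and function name are mine:

```python
# Spot Twitterbot request storms in an Apache access_log
# (combined log format assumed; the storm threshold is arbitrary).
import re
from collections import Counter

LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "(?:GET|HEAD|POST) (\S+)')

def twitterbot_storms(log_path, threshold=4):
    """Count Twitterbot hits per (timestamp, URL).  Log timestamps
    have one-second resolution, so any count above the threshold is
    a burst of identical requests within a single second."""
    hits = Counter()
    with open(log_path) as log:
        for line in log:
            if "Twitterbot" not in line:
                continue
            m = LOG_LINE.match(line)
            if m:
                ip, stamp, url = m.groups()
                hits[(stamp, url)] += 1
    return [(stamp, url, n) for (stamp, url), n in hits.items() if n > threshold]
```

Anything this reports is a group of hits for the same page within the same second, which is exactly the pattern described below.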

Enough is enough. I decided to tell Twitter to buzz (or flippy-flap) off. I added:

    User-agent: Twitterbot
    Disallow: /
to robots.txt and sat back to see what would happen. Result? Essentially nothing: it continued to hit the site as before. All right, time to up the ante. I decided to consign Twitterbot to Stalag 403 with the following in .htaccess:
    # Block rogue user agents
    BrowserMatchNoCase 'Twitterbot' evilbots
    Order Allow,Deny
    Allow from ALL
    Deny from env=evilbots
so that any access from Twitterbot will receive a 403 and be informed that its access is forbidden and should not be retried. That ought to fix it, right?


In the last 24 hours there have been three request storms, all for /index.php, with 16 requests the first time and 18 on the second and third. All of these requests were sent within a period of one second, from four different IP addresses. The second and third storms were 19 seconds apart, for a total of 36 hits within a period of less than 20 seconds.

For any site running PHP-fpm, this amounts to a denial of service attack: it blows up the number of worker processes, possibly exhausting memory or starting page thrashing, and, in any case, delays legitimate user requests. Second, it isn't as though the bot is crawling the site: it's making repeated requests for the same page over and over again, from four different IP addresses. Finally, it's violating the HTTP protocol. A 403 status means the client has been forbidden access by the server, and the HTTP standard reads, “Authorization will not help and the request SHOULD NOT be repeated.” (capitals in the original). And yet in the third storm a single IP address hammered in 8 requests for the same page after having received a 403 on the first one. This is either exceptionally stupid or malicious, and I'm beginning to suspect the latter. I'm getting closer and closer to firewalling this IP range. That may break our announcement of posts on Twitter, but at this point I'm not so sure that would be such a bad thing. Twitter's published outbound IP ranges are much larger, but so far I've only seen Twitterbot coming from the same four addresses.

I guess we shouldn't expect too much from a “social network” headquartered in a city now known for the human feces and discarded addicts' needles on its sidewalks. (Hayek noted that any word in the English language is reduced in value by preceding it with “social”.) But once is happenstance, twice is coincidence, and three times is enemy action (Ian Fleming). Thirty-six times in twenty seconds? Welcome to my firewall.

(And note that these requests came from IPv4 address ranges which Twitter acknowledges are their own and were confirmed by WHOIS. So it's not somebody impersonating Twitterbot.)

By the way, if you're interested in intelligent, civil, and wide-ranging conversation, check out Ratburger.org. It's free; there are no advertisements and no intrusive tracking. All members can post, comment, create and participate in interest groups, and join our weekly audio meet-up.

Posted at 23:27 Permalink

Tuesday, July 17, 2018

Hebrew Bible Updated to Unicode, XHTML Strict

The Web edition of the Hebrew Bible has been available at Fourmilab since 1998. It originally required a browser extension to support downloadable fonts. When this became obsolete, a second edition was released in 2002 which used the ISO 8859-8 character set, which includes the ASCII Latin character set and Hebrew letters (but no vowel signs). Most Web browsers at the time supported this character set, although some required the installation of a “language pack” or font in order to display it.

At the time, I remarked that when Unicode became widely adopted, all of the complexity of special character sets for each language would evaporate, as we'd have a single character encoding which could handle all commonly-used languages (and many obscure ones, as well). Now, in 2018, we have made our landfall on that happy shore. The vast majority of widely-used operating systems and Web browsers support Unicode and provide at least one font with characters for the major languages.

I have just released a third edition of the Fourmilab Hebrew Bible, in which all documents use Unicode for all text, using the UTF-8 representation which now accounts for more than 90% of traffic on the Web. Any browser which supports Unicode and includes a font providing the Hebrew character set will be able to display these documents without any special configuration required—it should just work.
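As a concrete illustration of the UTF-8 representation (a sketch; not part of the Bible documents themselves): every letter in the Unicode Hebrew block, U+0590–U+05FF, occupies two bytes in UTF-8, while ASCII characters still occupy one, so plain-ASCII markup is unchanged:

```python
# UTF-8 encodes code points below U+0080 in one byte and the Hebrew
# block (U+0590-U+05FF) in two bytes each.
aleph = "\u05D0"                              # HEBREW LETTER ALEF
assert aleph.encode("utf-8") == b"\xd7\x90"   # two bytes
assert len("A".encode("utf-8")) == 1          # ASCII: one byte
assert b"\xd7\x90".decode("utf-8") == aleph   # round trip
```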

I have also updated all documents to the XHTML 1.0 Strict standard. I prefer this standard to HTML5 for documents which do not require features of the latter standard (such as embedded audio and video or the canvas element) since, being well-formed XML, XHTML documents can easily be parsed by computer programs which wish to process their content.

You can cite a chapter within a book of the Bible with a URL like:
or an individual verse with:

Previous editions of the Hebrew Bible did not require the “c” or “v” before the chapter or chapter:verse; this is a requirement of XHTML, in which the “id=” attribute must not start with a digit. For compatibility with existing citations, the “c” or “v” may be omitted, but in direct URLs citing the book document itself, they must be supplied.

This edition of the Hebrew Bible, like its predecessors, does not rely upon the so-called “Unicode Bidirectional Algorithm”. Instead, characters appear in the source HTML documents in the order they are presented in the page, with Hebrew text being explicitly reversed so it reads from right to left. In my experience, getting involved with automatic bidirectional text handling is the royal road to madness, and programmers who wish to keep what little hair remains after half a century spent unscrewing the inscrutable trust their instincts about things to avoid. Hebrew text, which would otherwise automatically be rendered right-to-left by the browser, is explicitly surrounded by HTML tags:

<bdo dir="ltr">ת ישארב</bdo>

to override the default direction inferred from the characters; the example is the first word of Genesis. (You can also override the directionality of text by prefixing the Unicode LRO [&#8237;] or RLO [&#8238;] character and appending a PDF [&#8236;] to the string. I chose to use the XHTML override tag since it makes the intent clearer when processing the document with a program.)
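The explicit-reversal scheme can be sketched in a few lines — Python here purely for illustration (the actual processing uses custom Perl programs); the function name is mine:

```python
# Sketch of the explicit-reversal scheme: Hebrew text is stored in
# visual (left-to-right) order and wrapped in <bdo dir="ltr"> so the
# browser's bidirectional algorithm leaves it alone.
def visual_hebrew(logical_text):
    """Reverse logical-order Hebrew into visual order inside <bdo>."""
    return '<bdo dir="ltr">' + logical_text[::-1] + '</bdo>'

# First word of Genesis, in logical (reading) order:
bereshit = "\u05D1\u05E8\u05D0\u05E9\u05D9\u05EA"
markup = visual_hebrew(bereshit)
```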

To fully appreciate the insanity that Unicode bidirectional mode can induce in the minds of authors of multilingual documents, consider the following simplified HTML code for a sentence from the Hebrew Bible help file.

One writes:
100 as &#1511;,
101 as &#1488;&#1511;,
110 as &#1497;&#1511;, and
111 as &#1488;&#1497;&#1511;.

Want to guess how the browser renders this? Go ahead, guess. What you get is:

One writes: 100 as ק, 101 as אק, 110 as יק, and 111 as איק.

What? Why?? This way leads to the asylum. If you wrap the Hebrew with:

One writes:
100 as <bdo dir="ltr">&#1511;</bdo>,
101 as <bdo dir="ltr">&#1488;&#1511;</bdo>,
110 as <bdo dir="ltr">&#1497;&#1511;</bdo>, and
111 as <bdo dir="ltr">&#1488;&#1497;&#1511;</bdo>.

you get the desired:

One writes: 100 as ק, 101 as אק, 110 as יק, and 111 as איק.

In these examples, I have used HTML text entities (such as “&#1488;”) in the interest of comprehensibility. If you use actual Unicode characters and edit with a text editor such as Geany which infers text direction from the characters adjacent to the cursor, things get even more bewildering. The Hebrew Bible files contain Unicode characters, not text entities, but I only process them with custom Perl programs, never with a text editor.
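For the curious, converting raw characters into decimal text entities like those used above is trivial in any language; a Python sketch (the function name is mine — the real work is done by the Perl programs just mentioned):

```python
# Replace every non-ASCII character with its decimal HTML character
# reference; for example, HEBREW LETTER QOF (U+05E7) becomes &#1511;.
def to_entities(text):
    return "".join(c if ord(c) < 128 else "&#%d;" % ord(c) for c in text)

assert to_entities("100 as \u05E7") == "100 as &#1511;"
```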

In case somebody needs it, the ISO 8859-8 edition remains available.

Posted at 13:53 Permalink

Sunday, July 15, 2018

Recipes: Steak with Roquefort Mushroom Sauce

Here is a meal you can make yourself from all natural ingredients in minimal time with little to clean up afterward. It never fails and requires very little of your time. I use one low-tech gizmo to save time and ensure success, but you can use alternative means at the cost of a bit more fussiness and time.

Start with:

  • A good cut of steak, 250 to 350 grams per person
  • Roquefort cheese, 100 g
  • Sliced mushrooms, 200 g before draining
  • Garlic purée, around a tablespoon (15 ml)
  • Cooking oil (olive, etc.)

We're going to cook the steak in a Tefal Actifry. This device is colloquially called an “air fryer”, but the name is misleading: nothing is fried; it cooks by blowing very hot air onto the food. This creates much the same effect as deep frying, but without a bath of hot oil or its tendency to make the food greasy. What I discovered when developing this recipe is that, delightfully, when used on meat, the process triggers the Maillard reaction which makes flame-seared steaks so attractive in appearance and delicious.

Start by drizzling a little oil (about a teaspoon or two, 5–10 ml) in the back part of the Actifry pan, below the hot air input. Now drag the steak through the oil, coating both sides and the edges with a thin film of oil. Ideally, when you're done, there will be hardly any oil left over in the pan. The Actifry stirrer should be removed; the steak will be stationary beneath the air vent (visible at the top of the picture). Close the lid, set the timer for 8 minutes, and press the start button. (There is no temperature setting on the Actifry.)

While the steak is cooking, place the Roquefort cheese, sliced mushrooms (drained), and a squirt of garlic purée in a small saucepan and put on very low heat. You can break the cheese up into chunks with a stirring spoon if you like, but if you don't it will still work fine. As the cheese melts, stir all the ingredients together. Once the cheese is melted and everything is mixed, turn the heat down to the lowest level or off and cover. You don't want to overheat the cheese, which will denature it and make a mess.

When the Actifry beeps at the end of the 8 minutes, open it and turn over the steak, keeping it at the back under the air input. Set the timer for 7 minutes and restart. When it beeps again, the steak is ready. Take it out of the Actifry pan and put it in a bowl. Pour the juice from the pan into the sauce pan and stir it into the sauce, then pour the sauce on top of the steak. You're ready to eat!

While you're enjoying the steak, let's get the Actifry busy making a companion: chips or French fries. Install the stirrer in the pan, and add your desired quantity of store-bought frozen chips. Try to get the kind intended to be prepared by deep-frying, not those made to be cooked in the oven. The latter will work, but may come out oily and less than ideal. Drizzle a very small amount of oil on top of the frozen chips, close the lid, set the timer for 15 minutes, and press start. Don't bother cleaning the pan; the remaining juices from the steak will add flavour to the chips.

When next you hear the beep, dump the chips into a bowl, give them a few sprays of Balsamic vinegar, season with salt and pepper, and bring to the table. Catsup? Catsup! What do you take me for, an American?

After dinner, cleaning up amounts to loading the Actifry pan, stirrer, and filter, the saucepan, and the bowls and silverware into the dishwasher. There's no grill to scrub, charcoal to extinguish and dispose of, frying oil to filter and eventually recycle, or other detritus.

The cooking times given result in a medium rare (à point) steak. If you prefer a different degree of doneness, adjust the time accordingly. This recipe and the Roquefort sauce also work well with boneless chicken breasts. When cooking chicken, you may have to increase the cooking time slightly so the cooked meat isn't pink in the centre—chicken should always be cooked well done to eliminate the risk of Salmonella. The core temperature of cooked chicken should always be at least 75° C.

This recipe is sized for one person. For two, simply double the quantities. Place the two steaks side by side in the back of the Actifry. The cooking times do not change. I have not tried cooking more than two steaks at once in the Actifry; since additional steaks would be farther from the air input, they may not cook as well—you'll have to experiment if you want to do this.

If you consider the sauce a Continental desecration of red meat, don't make it! The steak will be just fine by itself. If you prefer to use fresh mushrooms rather than store-bought prepared ones, start with around 250 g of brown or white mushrooms, cut off and discard the bottoms of the stalks, cut into slices and place in the Actifry pan with the stirrer installed. Drizzle a teaspoon or two of oil on the top and cook for 10 minutes. You can cook the mushrooms first and set aside to add to the sauce while the steak is cooking.

Posted at 20:42 Permalink

Sunday, July 8, 2018

Reading List: Bad Blood

Carreyrou, John. Bad Blood. New York: Alfred A. Knopf, 2018. ISBN 978-1-984833-63-1.
The drawing of blood for laboratory tests is one of my least favourite parts of a routine visit to the doctor's office. Now, I have no fear of needles and hardly notice the stick, but frequently the doctor's assistant who draws the blood (whom I've nicknamed Vampira) has difficulty finding the vein to get a good flow and has to try several times. On one occasion she made an internal puncture which resulted in a huge, ugly bruise that looked like I'd slammed a car door on my arm. I wondered why they need so much blood, and why draw it into so many different containers? (Eventually, I researched this, having been intrigued by the issue during the O. J. Simpson trial; if you're curious, here is the information.) Then, after the blood is drawn, it has to be sent off to the laboratory, which sends back the results days later. If something pops up in the test results, you have to go back for a second visit with the doctor to discuss it.

Wouldn't it be great if they could just stick a fingertip and draw a drop or two of blood, as is done by diabetics to test blood sugar, then run all the tests on it? Further, imagine if, after taking the drop of blood, it could be put into a desktop machine right in the doctor's office which would, in a matter of minutes, produce test results you could discuss immediately with the doctor. And if such a technology existed and followed the history of decline in price with increase in volume which has characterised other high technology products since the 1970s, it might be possible to deploy the machines into the homes of patients being treated with medications so their effects could be monitored and relayed directly to their physicians in case an anomaly was detected. It wouldn't quite be a Star Trek medical tricorder, but it would be one step closer. With the cost of medical care rising steeply, automating diagnostic blood tests and bringing them to the mass market seemed an excellent candidate as the “next big thing” for Silicon Valley to revolutionise.

This was the vision that came to 19-year-old Elizabeth Holmes after completing a summer internship at the Genome Institute of Singapore after her freshman year as a chemical engineering major at Stanford. Holmes had decided on a career in entrepreneurship from an early age and, after her first semester, told her father, “No, Dad, I'm not interested in getting a Ph.D. I want to make money.” And Stanford, in the heart of Silicon Valley, was surrounded by companies started by professors and graduates who had turned inventions into vast fortunes. With only one year of college behind her, she was sure she'd found her opportunity. She showed the patent application she'd drafted for an arm patch that would diagnose medical conditions to Channing Robertson, professor of chemical engineering at Stanford, and Shaunak Roy, the Ph.D. student in whose lab she had worked as an assistant during her freshman year. Robertson was enthusiastic, and when Holmes said she intended to leave Stanford and start a company to commercialise the idea, he encouraged her. When the company was incorporated in 2004, Roy, then a newly-minted Ph.D., became its first employee and Robertson joined the board.

From the outset, the company was funded by other people's money. Holmes persuaded a family friend, Tim Draper, a second-generation venture capitalist who had backed, among other companies, Hotmail, to invest US$ 1 million in first round funding. Draper was soon joined by Victor Palmieri, a corporate turnaround artist and friend of Holmes' father. The company was named Theranos, from “therapy” and “diagnosis”. Elizabeth, unlike this scribbler, had a lifelong aversion to needles, and the invention she described in the business plan pitched to investors was informed by this. A skin patch would draw tiny quantities of blood without pain by means of “micro-needles”, the blood would be analysed by micro-miniaturised sensors in the patch and, if needed, medication could be injected. A wireless data link would send results to the doctor.

This concept, and Elizabeth's enthusiasm and high-energy pitch allowed her to recruit additional investors, raising almost US$ 6 million in 2004. But there were some who failed to be persuaded: MedVentures Associates, a firm that specialised in medical technology, turned her down after discovering she had no answers for the technical questions raised in a meeting with the partners, who had in-depth experience with diagnostic technology. This would be a harbinger of the company's fund-raising in the future: in its entire history, not a single venture fund or investor with experience in medical or diagnostic technology would put money into the company.

Shaunak Roy, who, unlike Holmes, actually knew something about chemistry, quickly realised that Elizabeth's concept, while appealing to the uninformed, was science fiction, not science, and no amount of arm-waving about nanotechnology, microfluidics, or laboratories on a chip would suffice to build something which was far beyond the state of the art. This led to a “de-scoping” of the company's ambition—the first of many which would happen over succeeding years. Instead of Elizabeth's magical patch, a small quantity of blood would be drawn from a finger stick and placed into a cartridge around the size of a credit card. The disposable cartridge would then be placed into a desktop “reader” machine, which would, using the blood and reagents stored in the cartridge, perform a series of analyses and report the results. This was originally called Theranos 1.0, but after a series of painful redesigns, was dubbed the “Edison”. This was the prototype Theranos ultimately showed to potential customers and prospective investors.

This was a far cry from the original ambitious concept. The hundreds of laboratory tests doctors can order are divided into four major categories: immunoassays, general chemistry, hæmatology, and DNA amplification. In immunoassay tests, blood plasma is exposed to an antibody that detects the presence of a substance in the plasma. The antibody contains a marker which can be detected by its effect on light passed through the sample. Immunoassays are used in a number of common blood tests, such as the 25(OH)D assay used to test for vitamin D deficiency, but cannot perform other frequently ordered tests such as blood sugar and red and white blood cell counts. The Edison could only perform what are called “chemiluminescent immunoassays”, and thus could run only a fraction of the tests regularly ordered. The rationale for installing an Edison in the doctor's office was dramatically reduced if it could do only some tests but still required a venous blood draw be sent off to the laboratory for the balance.

This didn't deter Elizabeth, who combined her formidable salesmanship with arm-waving about the capabilities of the company's products. She was working on a deal to sell four hundred Edisons to the Mexican government to cope with an outbreak of swine flu, which would generate immediate revenue. Money was much on the minds of Theranos' senior management. By the end of 2009, the company had burned through the US$ 47 million raised in its first three rounds of funding and, without a viable product or prospects for sales, would have difficulty keeping the lights on.

But the real bonanza loomed on the horizon in 2010. Drugstore giant Walgreens was interested in expanding their retail business into the “wellness market”: providing in-store health services to their mass market clientèle. Theranos pitched them on offering in-store blood testing. Doctors could send their patients to the local Walgreens to have their blood tested from a simple finger stick and eliminate the need to draw blood in the office or deal with laboratories. With more than 8,000 locations in the U.S., if each were to be equipped with one Edison, the revenue to Theranos (including the single-use testing cartridges) would put them on the map as another Silicon Valley disruptor that went from zero to hundreds of millions in revenue overnight. But here, as well, the Elizabeth effect was in evidence. Of the 192 tests she told Walgreens Theranos could perform, fewer than half were immunoassays the Edisons could run. The rest could be done only on conventional laboratory equipment, and certainly not on a while-you-wait basis.

Walgreens wasn't the only potential saviour on the horizon. Grocery godzilla Safeway, struggling with sales and earnings which seemed to have reached a peak, saw in-store blood testing with Theranos machines as a high-margin profit centre. They loaned Theranos US$ 30 million and began to plan for installation of blood testing clinics in their stores.

But there was a problem, and as the months wore on, this became increasingly apparent to people at both Walgreens and Safeway, although dismissed by those in senior management under the spell of Elizabeth's reality distortion field. Deadlines were missed. Simple requests, such as A/B comparison tests run on the Theranos hardware and at conventional labs, were first refused, then postponed, then run but the results not disclosed. The list of tests which could be run, how blood for them would be drawn, and how they would be processed seemed to dissolve into fog whenever specific requests were made for this information, which was essential for planning the in-store clinics.

There was, indeed, a problem, and it was pretty severe, especially for a start-up which had burned through US$ 50 million and sold nothing. The product didn't work. Not only could the Edison only run a fraction of the tests its prospective customers had been led by Theranos to believe it could, for those it did run the results were wildly unreliable. The small quantity of blood used in the test introduced random errors due to dilution of the sample; the small tubes in the cartridge were prone to clogging; and capillary blood collected from a finger stick was prone to errors due to “hemolysis”, the rupture of red blood cells, which is minimal in a venous blood draw but so prevalent in finger stick blood it could lead to some tests producing values which indicated the patient was dead.

Meanwhile, people who came to work at Theranos quickly became aware that it was not a normal company, even by the eccentric standards of Silicon Valley. There was an obsession with security, with doors opened by badge readers; logging of employee movement; information restricted to narrow silos prohibiting collaboration between, say, engineering and marketing which is the norm in technological start-ups; monitoring of employee Internet access, E-mail, and social media presence; a security detail of menacing-looking people in black suits and earpieces (which eventually reached a total of twenty); a propensity of people, even senior executives, to “vanish”, Stalin-era purge-like, overnight; and a climate of fear that anybody, employee or former employee, who spoke about the company or its products to an outsider, especially the media, would be pursued, harassed, and bankrupted by lawsuits. There aren't many start-ups whose senior scientists are summarily demoted and subsequently commit suicide. That happened at Theranos. The company held no memorial for him.

Throughout all of this, a curious presence in the company was Ramesh (“Sunny”) Balwani, a Pakistani-born software engineer who had made a fortune of more than US$ 40 million in the dot-com boom and cashed out before the bust. He joined Theranos in late 2009 as Elizabeth's second in command and rapidly became known as a hatchet man, domineering boss, and clueless when it came to the company's key technologies (on one occasion, an engineer mentioned a robotic arm's “end effector”, after which Sunny would frequently speak of its “endofactor”). Unbeknownst to employees and investors, Elizabeth and Sunny had been living together since 2005. Such an arrangement would be a major scandal in a public company, but even in a private firm, concealing such information from the board and investors is a serious breach of trust.

Let's talk about the board, shall we? Elizabeth was not only persuasive, but well-connected. She would parlay one connection into another, and before long had recruited many prominent figures including:

  • George Shultz (former U.S. Secretary of State)
  • Henry Kissinger (former U.S. Secretary of State)
  • Bill Frist (former U.S. Senator and medical doctor)
  • James Mattis (General, U.S. Marine Corps)
  • Riley Bechtel (Chairman and former CEO, Bechtel Group)
  • Sam Nunn (former U.S. Senator)
  • Richard Kovacevich (former Wells Fargo chairman and CEO)

Later, super-lawyer David Boies would join the board, and lead its attacks against the company's detractors. It is notable that, as with its investors, not a single board member had experience in medical or diagnostic technology. Bill Frist was an M.D., but his speciality was heart and lung transplants, not laboratory tests.

By 2014, Elizabeth Holmes had come onto the media radar. Photogenic, articulate, and with a story of high-tech disruption of an industry much in the news, she began to be featured as the “female Steve Jobs”, which must have pleased her, since she affected black turtlenecks, kale shakes, and even a car with no license plates to emulate her role model. She appeared on the cover of Fortune in January 2014, made the Forbes list of 400 most wealthy shortly thereafter, was featured in puff pieces in business and general market media, and was named by Time as one of the hundred most influential people in the world. The year 2014 closed with another glowing profile in the New Yorker. This would be the beginning of the end, as it happened to be read by somebody who actually knew something about blood testing.

Adam Clapper, a pathologist in Missouri, spent his spare time writing Pathology Blawg, with a readership of practising pathologists. Clapper read what Elizabeth was claiming to do with a couple of drops of blood from a finger stick and it didn't pass the sniff test. He wrote a sceptical piece on his blog and, as it passed from hand to hand, he became a lightning rod for others dubious of Theranos' claims, including those with direct or indirect experience with the company. Earlier, he had helped a Wall Street Journal reporter comprehend the tangled web of medical laboratory billing, and he decided to pass on the tip to the author of this book.

Thus began the unravelling of one of the greatest scams and scandals in the history of high technology, Silicon Valley, and venture investing. At the peak, privately-held Theranos was valued at around US$ 9 billion, with Elizabeth Holmes holding around half of its common stock, and with one of those innovative capital structures of which Silicon Valley is so fond, 99.7% of the voting rights. Altogether, over its history, the company raised around US$ 900 million from investors (including US$ 125 million from Rupert Murdoch in the US$ 430 million final round of funding). Most of the investors' money was ultimately spent on legal fees as the whole fairy castle crumbled.

The story of the decline and fall is gripping, involving the grandson of a Secretary of State, gumshoes following whistleblowers and reporters, what amounts to legal terrorism by the ever-slimy David Boies, courageous people who stood their ground in the interest of scientific integrity against enormous personal and financial pressure, and the saga of one of the most cunning and naturally talented confidence women ever, equipped with only two semesters of freshman chemical engineering, who managed to raise and blow through almost a billion dollars of other people's money without checking off the first box on the conventional start-up check list: “Build the product”.

I have, in my career, met three world-class con men. Three times, I (just barely) managed to pick up the warning signs and beg my associates to walk away. Each time I was ignored. After reading this book, I am absolutely sure that had Elizabeth Holmes pitched me on Theranos (about which I never heard before the fraud began to be exposed), I would have been taken in. Walker's law is “Absent evidence to the contrary, assume everything is a scam”. A corollary is “No matter how cautious you are, there's always a confidence man (or woman) who can scam you if you don't do your homework.”

Here is Elizabeth Holmes at Stanford in 2013, when Theranos was riding high and she was doing her “female Steve Jobs” act.

Elizabeth Holmes at Stanford: 2013

This is a CNN piece, filmed after the Theranos scam had begun to collapse, in which you can still glimpse the Elizabeth Holmes reality distortion field at full intensity directed at CNN medical correspondent Sanjay Gupta. There are several curious things about this video. The machine that Gupta is shown is the “miniLab”, a prototype second-generation machine which never worked acceptably, not the Edison, which was actually used in the Walgreens and Safeway tests. Gupta's blood is drawn and tested, but the process used to perform the test is never shown. The result reported is a cholesterol test, but the Edison cannot perform such tests. In the plans for the Walgreens and Safeway roll-outs, such tests were performed on purchased Siemens analysers which had been secretly hacked by Theranos to work with blood diluted well below their regulatory-approved specifications (the dilution was required due to the small volume of blood from the finger stick). Since the miniLab never really worked, the odds are that Gupta's blood was tested on one of the Siemens machines, not a Theranos product at all.

CNN: Inside the Theranos Lab (2016)

In a June 2018 interview, author John Carreyrou recounts the story of Theranos and his part in revealing the truth.

John Carreyrou on investigating Theranos (2018)

If you are a connoisseur of the art of the con, here is a masterpiece. After the Wall Street Journal exposé had broken, after Theranos had retracted tens of thousands of blood tests and been banned from running a clinical laboratory by its regulators, Holmes got up before an audience of 2500 people at the meeting of the American Association of Clinical Chemistry and turned up the reality distortion field to eleven. Watch a master at work. She comes on the stage at the six-minute mark.

Elizabeth Holmes at the American Association of Clinical Chemistry (2016)

Posted at 21:32 Permalink

Tuesday, July 3, 2018

UNUM 3.0: Updated to Unicode 11

Version 3.0 of UNUM is now available for downloading. Version 3.0 incorporates the Unicode 11.0.0 standard, released on June 5th, 2018. The update to Unicode adds support for seven new scripts, additional CJK (Chinese, Japanese, and Korean) symbols, 66 new emoji, and assorted symbols such as half-stars for rating systems. There are a total of 137,374 characters in 11.0.0, of which 684 are new since 10.0.0. (UNUM also supports an additional 65 ASCII control characters, which are not assigned graphic code points in the Unicode database.)

This is an incremental update to Unicode. There are no structural changes in how characters are defined in the databases, and other than the presence of the new characters, the operation of UNUM is unchanged.

UNUM also contains a database of HTML named character references (the sequences like “&lt;” you use in HTML source code when you need to represent a character which has a syntactic meaning in HTML or which can't be directly included in a file with the character encoding you're using to write it). There have been no changes to this standard since UNUM 2.2 was released in September 2017, so UNUM 3.0 will behave identically when querying these references except, of course, that numerical references to the new Unicode characters will be interpreted correctly. (Is your browser totally with it? See what it does with “&#129465;” in an HTML document! And here we go…“🦹”.)
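For the curious, the correspondence between a decimal numeric character reference and its Unicode code point is easy to check; this little JavaScript sketch (not part of UNUM) does it for the reference above:

```javascript
// The decimal numeric character reference &#129465; denotes
// code point 129465, which is 1F9B9 in hexadecimal, i.e. U+1F9B9,
// one of the emoji added in Unicode 11.0.
const cp = 129465;
console.log("U+" + cp.toString(16).toUpperCase());  // → U+1F9B9
console.log(String.fromCodePoint(cp));              // → 🦹
```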

UNUM Documentation and Download Page

Posted at 20:00 Permalink

Monday, June 25, 2018

Reading List: La Mort de Staline

Nury, Fabien and Thierry Robin. La Mort de Staline. Paris: Dargaud, [2010, 2012] 2014. ISBN 978-2-205-07351-5.
The 2017 film, The Death of Stalin, was based upon this French bande dessinée (BD, graphic novel, or comic). The story centres on the death of Stalin and the events that ensued: the scheming and struggle for power among the members of his inner circle, the reactions and relationships of his daughter Svetlana and wastrel son Vasily, the conflict between the Red Army and NKVD, the maneuvering over the arrangements for Stalin's funeral, and the all-encompassing fear and suspicion that Stalin's paranoia had infused into Soviet society. This is a fictional account, grounded in documented historical events, in which the major characters were real people. But the authors are forthright in saying they invented events and dialogue to tell a story which is intended to give one a sense of the «folie furieuse de Staline et de son entourage» (the raging madness of Stalin and his entourage) rather than provide a historical narrative.

The film adaptation is listed as a comedy and, particularly if you have a taste for black humour, is quite funny. This BD is not explicitly funny, except in an ironic sense, illustrating the pathological behaviour of those surrounding Stalin. Many of the sequences in this work could have been used as storyboards for the movie, but there are significant events here which did not make it into the screenplay. The pervasive strong language which earned the film an R rating is little in evidence here.

The principal characters and their positions are introduced by boxes overlaying the graphics, much as was done in the movie. Readers who aren't familiar with the players in Stalin's Soviet Union, such as Beria, Zhukov, Molotov, Malenkov, Khrushchev, Mikoyan, and Bulganin, may miss some of the nuances of their behaviour here, which is driven by this back-story. Their names are given using the French transliteration of Russian, which is somewhat different from that used in English (for example, “Krouchtchev” instead of “Khrushchev”). The artwork is intricately drawn in a realistic style, with comic idioms used only sparingly to illustrate things like gunshots.

I enjoyed both the movie (which I saw first, not knowing until the end credits that it was based upon this work) and the BD. They're different takes on the same story, and both work on their own terms. This is not the kind of story for which “spoilers” apply, so you'll lose nothing by enjoying both in either order.

The album cited above contains both volumes of the original print edition. The Kindle edition continues to be published in two volumes (Vol. 1, Vol. 2). An English translation of the graphic novel is available. I have not looked at it beyond the few preview pages available on Amazon.

Posted at 21:35 Permalink

Sunday, June 24, 2018

Google Chrome Drops Support for Setting Cookies with "meta http-equiv"

Many Web sites use “HTTP cookies” to follow a user through a session which may involve a series of individual Web pages. While cookies can be used for intrusive tracking and have developed a somewhat dodgy reputation as a result, for some applications, such as providing persistent log-ins to a site from a given computer and browser, they are nearly essential.

The Hacker's Diet Online uses cookies to implement its “Remember me” login feature. If this box is checked when the user logs in, a cookie is stored in the browser which contains an opaque credential that allows the user to access the application from the same machine and browser without logging in for the next ninety days. Recently, this feature stopped working for users of the Google Chrome browser who had updated to version 65 or above.

When cookies were originally introduced in the mid 1990s, they were set by a Web server's sending a “Set-Cookie” HTTP header field specifying the cookie name, value, and optional parameters such as expiration date and scope (source domain and document path). Because many Web applications do not have the ability to directly cause the server to emit header fields, they commonly used an HTML meta element with the “http-equiv” attribute, which causes the browser to treat the element's “content” field as if it had been sent by the server as a header field. For example, to set a cookie, one might use:

<meta http-equiv="Set-Cookie" content="session=6be5123e0" />

to remember a user's session number. (In practice, such cookies would usually contain a scope and expiration date, but these complexities are ignored here.)

Another way of setting a cookie is to use the JavaScript document.cookie property. This, of course, requires that the user's browser support JavaScript and that it be enabled.
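As a minimal sketch of that alternative (the cookie name “remember” is illustrative, and the value reuses the example above; the actual credential format used by The Hacker's Diet Online is not shown in this post):

```javascript
// Build a cookie string with a ninety-day lifetime, matching the
// duration of the “Remember me” feature described above.  The name
// “remember” and the credential value are made-up examples.
function makeCookie(name, value, maxAgeSeconds) {
  return name + "=" + encodeURIComponent(value) +
         "; max-age=" + maxAgeSeconds + "; path=/";
}

const ninetyDays = 90 * 24 * 60 * 60;  // 7776000 seconds
const cookie = makeCookie("remember", "6be5123e0", ninetyDays);
// In a browser, assigning the string to document.cookie stores it:
//   document.cookie = cookie;
console.log(cookie);  // → remember=6be5123e0; max-age=7776000; path=/
```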

The Hacker's Diet Online has been carefully designed not to require JavaScript. Some user interface features, such as dynamic updates for abbreviated data entry and plotting chart items as soon as they are entered in a table will not work without JavaScript, but the full functionality of the application remains available. Consequently, the “Remember me” cookie (the only cookie used by the application, and only if the user requests this feature) was set with an HTML meta element.

Then, the pointy-heads at Google Chrome went and took it away. Why? Who knows—the document linked to by the warning message that appears in the browser debug console is a model of opacity, and the document it cites suggests that the decision to remove support for a feature widely used on the Web for 23 years was more a developers' whim than a carefully considered design decision.

Still, whatever you think of this browser and the company that develops it, it has, depending on who's measuring, somewhere between a little less than half and 60% of the market, and more than that on desktop platforms. The only way to restore the “Remember me” functionality for its users is to eliminate setting the cookie with the meta tag and use JavaScript instead. This has been implemented in Build 5223 of the application. This, of course, means that users whose browsers do not support JavaScript, or who have disabled it in the interests of security and privacy, will no longer have access to this capability and will have to log in every time they open a new session with the application.

Google is known as a champion of “progressive” values and for being a hotbed of “progressives”. Welcome to “progress”.

Posted at 14:17 Permalink