
Sunday, April 28, 2019

Reading List: The Code Hunters

Coppley, Jackson. The Code Hunters. Chevy Chase, MD: Contour Press, 2019. ISBN 978-1-09-107011-0.
A team of expert cavers exploring a challenging cave in New Mexico in search of a possible connection to Carlsbad Caverns tumble into a chamber deep underground containing something which just shouldn't be there: a huge slab of metal, like titanium, twenty-four feet square and eight inches thick, set into the rock of the cave, bearing markings which resemble the pits and lands on an optical storage disc. No evidence for human presence in the cave prior to the discoverers is found, and dating confirms that the slab is at least ten thousand years old. There is no way an object that large could be brought through the cramped and twisting passages of the cave to the chamber where it was found.

Wealthy adventurer Nicholas Foxe, with degrees in archaeology and cryptography, gets wind of the discovery and pulls strings to get access to the cave, putting together a research program to try to understand the origin of the slab and decode its enigmatic inscription. But as news of the discovery reaches others, they begin to pursue their own priorities. A New Mexico senator sends his on-the-make assistant to find out what is going on and see how it might be exploited to his advantage. An ex-Army special forces operator makes stealthy plans. An MIT string theorist with a wide range of interests begins exploring unorthodox ideas about how the inscriptions might be encoded. A televangelist facing hard times sees the Tablet as the way back to the top of the heap. A wealthy Texan sees the potential in the slab for wealth beyond his abundant dreams of avarice. As the adventure unfolds, we encounter a panoply of fascinating characters: a World Health Organization scientist, an Italian violin maker with an eccentric theory of language and his autistic daughter, and a “just the facts” police inspector. As clues are teased from the enigma, we visit exotic locations and experience harrowing adventure, finally grasping the significance of a discovery that bears on the very origin of modern humans.

About now, you might be thinking “This sounds like a Dan Brown novel”, and in a sense you'd be right. But this is the kind of story Dan Brown would craft if he were a lot better author than he is: whereas Dan Brown books have become stereotypes of cardboard characters and fill-in-the-blanks plots with pseudo-scientific bafflegab stirred into the mix (see my review of Origin [May 2018]), this is a gripping tale filled with complex, quirky characters, unexpected plot twists, beautifully sketched locales, and a growing sense of wonder as the significance of the discovery is grasped. If anybody in Hollywood had any sense (yes, I know…) they would make this into a movie instead of doing another tedious Dan Brown sequel. This is subtitled “A Nicholas Foxe Adventure”: I sincerely hope there will be more to come.

The author kindly let me read a pre-publication manuscript of this novel. The Kindle edition is free to Kindle Unlimited subscribers.

Posted at 22:22 Permalink

Tuesday, April 9, 2019

Reading List: The Powers of the Earth

Corcoran, Travis J. I. The Powers of the Earth. New Hampshire: Morlock Publishing, 2017. ISBN 978-1-9733-1114-0.
Corcoran, Travis J. I. Causes of Separation. New Hampshire: Morlock Publishing, 2018. ISBN 978-1-9804-3744-4.
(Note: This novel is the first of an envisioned four-volume series titled Aristillus. It and the second book, Causes of Separation, published in May, 2018, together tell a single story which reaches a decisive moment just as the first book ends. Unusually, this will be a review of both novels, taken as a whole. If you like this kind of story at all, there's no way you'll not immediately plunge into the second book after setting down the first.)

Around the year 2050, collectivists were firmly in power everywhere on Earth. Nations were subordinated to the United Nations, whose force of Peace Keepers (PKs) had absorbed all but elite special forces and was known for being simultaneously brutal, corrupt, and incompetent. (Due to the equality laws, military units had to contain a quota of “Alternatively Abled Soldiers” whom other troops had to wheel into combat.) The United States still existed as a country but, after decades of rule by the two factions of the Democrat party, Populist and Internationalist, was mired in stagnation, bureaucracy, and crumbling infrastructure, and was on the verge of bankruptcy. The U.S. President, Themba Johnson, a former talk show host who combined cluelessness, a volatile temper, and vulpine cunning when it came to manipulating public opinion, is confronted with all of these problems and is looking for a masterstroke to get beyond the next election.

Around 2050, when the collectivists entered the inevitable end game their policies lead to everywhere they are tried, with the Bureau of Sustainable Research (BuSuR) suppressing new technologies in every field and the Construction Jobs Preservation Act and Bureau of Industrial Planning banning anything which might increase productivity, a final grasp to loot the remaining seed corn resulted in the CEO Trials aimed at the few remaining successful companies, with expropriation of their assets and imprisonment of their leaders. CEO Mike Martin manages to escape from prison and link up with renegade physicist Ponnala (“Ponzie”) Srinivas, inventor of an anti-gravity drive he doesn't want the slavers to control. Mike buys a rustbucket oceangoing cargo ship, equips it with the drive, an airtight compartment and life support, and flees Earth with a cargo of tunnel boring machines and water to exile on the Moon, in the crater Aristillus in Mare Imbrium on the lunar near side where, fortuitously, the impact of a metal-rich asteroid millions of years ago enriched the sub-surface with metals rare in the Moon's crust.

Let me say a few words about the anti-gravity drive, which is very unusual and original, and whose properties play a significant role in the story. The drive works by coupling to the gravitational field of a massive body and then pushing against it, expending energy as it rises and gains gravitational potential energy. Momentum is conserved, as an equal and opposite force is exerted on the massive body against which it is pushing. The force vector is always along the line connecting the centre of mass of the massive body and the drive unit, directed away from the centre of mass. The force is proportional to the strength of the gravitational field in which the drive is operating, and hence stronger when pushing against a body like Earth as opposed to a less massive one like the Moon. The drive's force diminishes with distance from the massive body as its gravitational field falls off with the inverse square law, and hence the drive generates essentially no force when in empty space far from a gravitating body. When used to brake a descent toward a massive body, the drive converts gravitational potential energy into electricity like the regenerative braking system of an electric vehicle: energy which can be stored for use when later leaving the body.

Because the drive can only push outward radially, when used to, say, launch from the Earth to the Moon, it is much like Jules Verne's giant cannon—the launch must occur at the latitude and longitude on Earth where the Moon will be directly overhead at the time the ship arrives at the Moon. In practice, the converted ships also carried auxiliary chemical rockets and reaction control thrusters for trajectory corrections and precision maneuvering which could not be accomplished with the anti-gravity drive.

By 2064, the lunar settlement, called Aristillus by its inhabitants, was thriving, with more than a hundred thousand residents, and growing at almost twenty percent a year. (Well, nobody knew for sure, because from the start the outlook shared by the settlers was aligned with Mike Martin's anarcho-capitalist worldview. There was no government, no taxes, no ID cards, no business licenses, no regulations, no zoning [except covenants imposed by property owners on those who sub-leased property from them], no central bank, no paper money [an entrepreneur had found a vein of gold left by the ancient impactor and gone into business providing hard currency], no elections, no politicians, no forms to fill out, no police, and no army.) Some of these “features” of life on grey, regimented Earth were provided by private firms, while many of the others were found to be unnecessary altogether.

The community prospered as it grew. Like many frontier settlements, labour was in chronic short supply, and even augmented by robot rovers and machines (free of the yoke of BuSuR), there was work for anybody who wanted it and job offers awaiting new arrivals. A fleet of privately operated ships maintained a clandestine trade with Earth, bringing goods which couldn't yet be produced on the Moon, atmosphere, water from the oceans (in converted tanker ships), and new immigrants who had sold their Earthly goods and quit the slave planet. Waves of immigrants from blood-soaked Nigeria and chaotic China established their own communities and neighbourhoods in the ever-growing network of tunnels beneath Aristillus.

The Moon has become a refuge not just for humans. When BuSuR put its boot on the neck of technology, it ordered the shutdown of a project to genetically “uplift” dogs to human intelligence and beyond, creating “Dogs” (the capital letter denoting the uplift), and decreed that all existing Dogs be euthanised. Many were, but John (we never learn his last name), a former U.S. Special Forces operator, manages to rescue a colony of Dogs from one of the labs before the killers arrive and escape with them to Aristillus, where they have set up the Den and pursue their own priorities, including role-playing games, software development, and trading on the betting markets. Also rescued by John was Gamma, the first Artificial General Intelligence to be created, whose intelligence is above the human level but not (yet, anyway) runaway, singularity-level transcendent. Gamma has established itself in its own facility in Sinus Lunicus on the other side of Mare Imbrium, and has little contact with the human or Dog settlers.

Inevitably, liberty produces prosperity, and prosperity eventually causes slavers to regard the free with envious eyes, and slowly and surely draw their plans against them.

This is the story of the first interplanetary conflict, and a rousing tale of liberty versus tyranny, frontier innovation against collectivised incompetence, and principles (there is even the intervention of a Vatican diplomat) confronting brutal expedience. There are delicious side-stories about the creation of fake news, scheming politicians, would-be politicians in a libertarian paradise, open source technology, treachery, redemption, and heroism. How do three distinct species (human, Dog, and AI) work together without a top-down structure or subordinating one to another? Can the lunar colony protect itself without becoming what its settlers left Earth to escape?

Woven into the story is a look at how a libertarian society works (and sometimes doesn't work) in practice. Aristillus is in no sense a utopia: it has plenty of rough edges and things to criticise. But people there are free, and they prefer it to the prison planet they escaped.

This is a wonderful, sprawling, action-packed story with interesting characters, complicated conflicts, and realistic treatment of what a small colony faces when confronted by a hostile planet of nine billion slaves. Think of this as Heinlein's The Moon is a Harsh Mistress done better. There are generous tips of the hat to Heinlein and other science fiction in the book, but this is a very different story with an entirely different outcome, and truer to the principles of individualism and liberty. I devoured these books and give them my highest recommendation. The Powers of the Earth won the 2018 Prometheus Award for best libertarian science fiction novel.

Posted at 15:07 Permalink

Sunday, April 7, 2019

Reading List: Connected: The Emergence of Global Consciousness

Nelson, Roger D. Connected: The Emergence of Global Consciousness. Princeton: ICRL Press, 2019. ISBN 978-1-936033-35-5.
In the first half of the twentieth century Pierre Teilhard de Chardin developed the idea that the process of evolution which had produced complex life and eventually human intelligence on Earth was continuing and destined to eventually reach an Omega Point in which, just as individual neurons self-organise to produce the unified consciousness and intelligence of the human brain, eventually individual human minds would coalesce (he was thinking mostly of institutions and technology, not a mystical global mind) into what he called the noosphere—a sphere of unified thought surrounding the globe just like the atmosphere. Could this be possible? Might the Internet be the baby picture of the noosphere? And if a global mind was beginning to emerge, might we be able to detect it with the tools of science? That is the subject of this book about the Global Consciousness Project, which has now been operating for more than two decades, collecting an immense data set which has been, from inception, completely transparent and accessible to anyone inclined to analyse it in any way they can imagine. Written by the founder of the project and operator of the network over its entire history, the book presents the history, technical details, experimental design, formal results, exploratory investigations from the data set, and thoughts about what it all might mean.

Over millennia, many esoteric traditions have held that “all is one”—that all humans and, in some systems of belief, all living things or all of nature are connected in some way and can interact through channels other than the known physical processes (which are ultimately mediated by the electromagnetic force). A common aspect of these philosophies and religions is that individual consciousness is independent of the physical being and may in some way be part of a larger, shared consciousness which we may be able to access through techniques such as meditation and prayer. In this view, consciousness may be thought of as a kind of “field” with the brain acting as a receiver in the same sense that a radio is a receiver of structured information transmitted via the electromagnetic field. Belief in reincarnation, for example, is often based upon the view that death of the brain (the receiver) does not destroy the coherent information in the consciousness field which may later be instantiated in another living brain which may, under some circumstances, access memories and information from previous hosts.

Such beliefs have been common over much of human history and in a wide variety of very diverse cultures around the globe, but in recent centuries these beliefs have been displaced by the view of mechanistic, reductionist science, which argues that the brain is just a kind of (phenomenally complicated) biological computer and that consciousness can be thought of as an emergent phenomenon which arises when the brain computer's software becomes sufficiently complex to be able to examine its own operation. From this perspective, consciousness is confined within the brain, cannot affect the outside world or the consciousness of others except by physical interactions initiated by motor neurons, and perceives the world only through sensory neurons. There is no “consciousness field”, and individual consciousness dies when the brain does.

But while this view is more in tune with the scientific outlook which spawned the technological revolution that has transformed the world and continues to accelerate, it has, so far, made essentially zero progress in understanding consciousness. Although we have built electronic computers which can perform mathematical calculations trillions of times faster than the human brain, and are on track to equal the storage capacity of that brain some time in the next decade or so, we still don't have the slightest idea how to program a computer to be conscious: to be self-aware and act out of a sense of free will (if free will, however defined, actually exists). So, if we adopt a properly scientific and sceptical view, we must conclude that the jury is still out on the question of consciousness. If we don't understand enough about it to program it into a computer, then we can't be entirely confident that it is something we could program into a computer, or that it is just some kind of software running on our brain-computer.

It looks like humans are, dare I say, programmed to believe in consciousness as a force not confined to the brain. Many cultures have developed shamanism, religions, philosophies, and practices which presume the existence of the following kinds of what Dean Radin calls Real Magic, and which I quote from my review of his book with that title.

  • Force of will: mental influence on the physical world, traditionally associated with spell-casting and other forms of “mind over matter”.
  • Divination: perceiving objects or events distant in time and space, traditionally involving such practices as reading the Tarot or projecting consciousness to other places.
  • Theurgy: communicating with non-material consciousness: mediums channelling spirits or communicating with the dead, summoning demons.

Starting in the 19th century, a small number of scientists undertook to investigate whether these phenomena could possibly be real, whether they could be demonstrated under controlled conditions, and what mechanism might explain these kinds of links between consciousness and will and the physical world. In 1882 the Society for Psychical Research was founded in London and continues to operate today, publishing three journals. Psychic research, now more commonly called parapsychology, continues to investigate the interaction of consciousness with the outside world through (unspecified) means other than the known senses, usually in laboratory settings where great care is taken to ensure no conventional transfer of information occurs and with elaborate safeguards against fraud, either by experimenters or test subjects. For a recent review of the state of parapsychology research, I recommend Dean Radin's excellent 2006 book, Entangled Minds.

Parapsychologists such as Radin argue that while phenomena such as telepathy, precognition, and psychokinesis are very weak effects, elusive, and impossible to produce reliably on demand, the statistical evidence for their existence from large numbers of laboratory experiments is overwhelming, with a vanishingly small probability that the observed results are due to chance. Indeed, the measured confidence levels and effect sizes of some categories of parapsychological experiments exceed those of medical clinical trials such as those which resulted in the recommendation of routine aspirin administration to reduce the risk of heart disease in older males.

For more than a quarter of a century, an important centre of parapsychology research was the Princeton Engineering Anomalies Research (PEAR) laboratory, established in 1979 by Princeton University's Dean of Engineering, Robert G. Jahn. (The lab closed in 2007 with Prof. Jahn's retirement, and has now been incorporated into the International Consciousness Research Laboratories, which is the publisher of the present book.) An important part of PEAR's research was with electronic random event generators (REGs) connected to computers in experiments where a subject (or “operator”, in PEAR terminology) would try to influence the generator to produce an excess of one or zero bits. In a large series of experiments [PDF] run over a period of twelve years with multiple operators, it was reported that an influence in the direction of the operator's intention was seen, with a highly significant probability of only one in a trillion that the result was due to chance. The effect size was minuscule, with around one bit in ten thousand flipping in the direction of the operator's stated goal.

If one operator can produce a tiny effect on the random data, what if many people were acting together, not necessarily with active intention, but with their consciousnesses focused on a single thing, for example at a sporting event, musical concert, or religious ceremony? The miniaturisation of electronics and computers eventually made it possible to build a portable REG and computer which could be taken into the field. This led to the FieldREG experiments in which this portable unit was taken to a variety of places and events to monitor its behaviour. The results were suggestive of an effect, but the data set was far too small to be conclusive.

Mindsong random event generator

In 1998, Roger D. Nelson, the author of this book, realised that the rapid development and worldwide deployment of the Internet made it possible to expand the FieldREG concept to a global scale. Random event generators based upon quantum effects (usually shot noise from electrons tunnelling across a reverse-biased Zener diode, or thermal noise in a resistor) had been scaled down to small, inexpensive devices which could be attached to personal computers via an RS-232 serial port. With more and more people gaining access to the Internet (originally mostly via dial-up to commercial Internet Service Providers, then increasingly via persistent broadband connections such as ADSL service over telephone wires or a cable television connection), it might be possible to deploy a network of random event generators at locations all around the world, each of which would constantly collect timestamped data which would be transmitted to a central server, collected there, and made available to researchers for analysis by whatever means they chose to apply.

As Roger Nelson discussed the project with his son Greg (who would go on to be the principal software developer for the project), Greg suggested that what was proposed was essentially an electroencephalogram (EEG) for the hypothetical emerging global mind, an “ElectroGaiaGram” or EGG. Thus was born the “EGG Project” or, as it is now formally called, the Global Consciousness Project. Just as the many probes of an EEG provide a (crude) view into the operation of a single brain, perhaps the wide-flung, always-on network of REGs would pick up evidence of coherence when a large number of the world's minds were focused on a single event or idea. Once the EGG project was named, terminology followed naturally: the individual hosts running the random event generators would be “eggs” and the central data archiving server the “basket”.

In April 1998, Roger Nelson released the original proposal for the project and shortly thereafter Greg Nelson began development of the egg and basket software. I became involved in the project in mid-summer 1998 and contributed code to the egg and basket software, principally to allow it to be portable to other variants of Unix systems (it was originally developed on Linux) and machines with different byte order than the Intel processors on which it ran, and also to reduce the resource requirements on the egg host, making it easier to run on a non-dedicated machine. I also contributed programs for the basket server to assemble daily data summaries from the raw data collected by the basket and to produce a real-time network status report. Evolved versions of these programs remain in use today, more than two decades later. On August 2nd, 1998, I began to run the second egg in the network, originally on a Sun workstation running Solaris; this was the first non-Linux, non-Intel, big-endian egg host in the network. A few days later, I brought up the fourth egg, running on a Sun server in the Hall of the Servers one floor below the second egg; this used a different kind of REG, but was otherwise identical. Both of these eggs have been in continuous operation from 1998 to the present (albeit with brief outages due to power failures, machine crashes, and other assorted disasters over the years), and have migrated from machine to machine over time. The second egg is now connected to a Raspberry Pi running Linux, while the fourth is now hosted on a Dell Intel-based server also running Linux, which was the first egg host to run on a 64-bit machine in native mode.

Here is precisely how the network measures deviation from the expectation for genuinely random data. The egg hosts all run a Network Time Protocol (NTP) client to provide accurate synchronisation with Internet time server hosts which are ultimately synchronised to atomic clocks or GPS. At the start of every second a total of 200 bits are read from the random event generator. Since all the existing generators provide eight bits of random data transmitted as bytes on a 9600 baud serial port, this involves waiting until the start of the second, reading 25 bytes from the serial port (first flushing any potentially buffered data), then breaking the eight bits out of each byte of data. A precision timing loop guarantees that the sampling starts at the beginning of the second-long interval to the accuracy of the computer's clock.

This process produces 200 random bits. These bits, one or zero, are summed to produce a “sample” which counts the number of one bits for that second. This sample is stored in a buffer on the egg host, along with a timestamp (in Unix time() format), which indicates when it was taken.
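To make the per-second sampling concrete, here is a minimal sketch of the loop just described, written in Python. It is not the Project's actual egg software; the serial device name and the use of the pyserial library are assumptions for illustration only.

    # Hedged sketch of the per-second sampling loop described above.
    # Not the actual egg software; device name and pyserial are assumptions.
    import time
    import serial   # pyserial

    PORT = "/dev/ttyS0"   # placeholder device name; varies by host

    def collect_sample(ser):
        """Read 200 random bits at the top of a second; return (timestamp, sample)."""
        time.sleep(1.0 - (time.time() % 1.0))   # wait for the start of the next second
        timestamp = int(time.time())            # Unix time() of this second
        ser.reset_input_buffer()                # flush any previously buffered bytes
        raw = ser.read(25)                      # 25 bytes x 8 bits = 200 bits
        sample = sum(bin(b).count("1") for b in raw)   # number of one bits, 0..200
        return timestamp, sample

    with serial.Serial(PORT, 9600, timeout=2) as ser:
        while True:
            print(*collect_sample(ser))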

Buffers of completed samples are archived in files on the egg host's file system. Periodically, the basket host will contact the egg host over the Internet and request any samples collected after the last packet it received from the egg host. The egg will then transmit any newer buffers it has filled to the basket. All communications are performed over the stateless UDP Internet protocol, and the design of the basket request and egg reply protocol is robust against loss of packets or packets being received out of order.

(This data transfer protocol may seem odd, but recall that the network was designed more than twenty years ago when many people, especially those outside large universities and companies, had dial-up Internet access. The architecture would allow a dial-up egg to collect data continuously and then, when it happened to be connected to the Internet, respond to a poll from the basket and transmit its accumulated data during the time it was connected. It also makes the network immune to random outages in Internet connectivity. Over two decades of operation, we have had exactly zero problems with Internet outages causing loss of data.)

When a buffer from an egg host is received by the basket, it is stored in a database directory for that egg. The buffer contains a time stamp identifying the second at which each sample within it was collected. All times are stored in Universal Time (UTC), so no correction for time zones or summer and winter time is required.

This is the entire collection process of the network. The basket host, which was originally located at Princeton University and now is on a server at global-mind.org, only stores buffers in the database. Buffers, once stored, are never modified by any other program. Bad data, usually long strings of zeroes or ones produced when a hardware random event generator fails electrically, are identified by a “sanity check” program and then manually added to a “rotten egg” database which causes these sequences to be ignored by analysis programs. The random event generators are very simple and rarely fail, so this is a very unusual circumstance.

The raw database format is difficult for analysis programs to process, so every day an automated program (which I wrote) is run which reads the basket database, extracts every sample collected for the previous 24 hour period (or any desired 24 hour window in the history of the project), and creates a day summary file with a record for every second in the day with a column for the samples from each egg which reported that day. Missing data (eggs which did not report for that second) is indicated by a blank in that column. The data are encoded in CSV format which is easy to load into a spreadsheet or read with a program. Because some eggs may not report immediately due to Internet outages or other problems, the summary data report is re-generated two days later to capture late-arriving data. You can request custom data reports for your own analysis from the Custom Data Request page. If you are interested in doing your own exploratory analysis of the Global Consciousness Project data set, you may find my EGGSHELL C++ libraries useful.
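If you would like to experiment with these files yourself, here is a hedged sketch of loading a day summary in Python. The exact header layout of the real summary files may differ from what is assumed here: one row per second, a timestamp column followed by one column per egg, with blanks marking eggs which did not report.

    # Hedged sketch of reading a day summary file: one row per second, one
    # column per egg, blank cells for missing samples.  The real files may
    # carry additional header records; adjust as needed.
    import csv

    def load_day_summary(path):
        """Return a list of (timestamp, {egg_id: sample}) records for one day."""
        records = []
        with open(path, newline="") as f:
            reader = csv.reader(f)
            header = next(reader)            # e.g. ["time", "egg_1", "egg_4", ...]
            egg_ids = header[1:]
            for row in reader:
                samples = {egg: int(v)
                           for egg, v in zip(egg_ids, row[1:]) if v != ""}
                records.append((row[0], samples))
        return records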

The analysis performed by the Project proceeds from these summary files as follows.

First, we observe that each sample (xᵢ) from egg i consists of 200 bits, each with an equal probability of being zero or one. Thus each sample has a mean expectation value (μ) of 100 and a standard deviation (σ) of 7.071 (which is just the square root of half the mean value in the case of events with probability 0.5).

Then, for each sample, we can compute its Z-score as Zᵢ = (xᵢ − μ) / σ. From the Z-score, it is possible to directly compute the probability that the observed deviation from the expected mean value (μ) was due to chance.

It is now possible to compute a network-wide Z-score for all eggs reporting samples in that second using Stouffer's formula:

Z = (Σᵢ Zᵢ) / √k

over all k eggs reporting. From this, one can compute the probability that the result from all k eggs reporting in that second was due to chance.

Squaring this composite Z-score over all k eggs gives a chi-squared distributed value, V = Z², which has one degree of freedom. These values may be summed, yielding a chi-squared distributed number with degrees of freedom equal to the number of values summed. From the chi-squared sum and number of degrees of freedom, the probability of the result over an entire period may be computed. This gives the probability that the deviation observed by all the eggs (the number of which may vary from second to second) over the selected window was due to chance. In most of the analyses of Global Consciousness Project data an analysis window of one second is used, which avoids the need for chi-squared summing of Z-scores across multiple seconds.
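Expressed as code, the per-second statistics look something like the following sketch in Python (using SciPy for the chi-squared tail probability). The function names are mine, not the Project's, and the cumulative deviation function anticipates the plots discussed next; the published plots may use slightly different conventions.

    # Hedged sketch of the analysis just described: per-egg Z-scores, Stouffer
    # combination across eggs, chi-squared accumulation over a window, and the
    # running "cumulative deviation" used in the plots below.  Names are
    # illustrative, not those of the Project's own analysis code.
    import math
    from scipy.stats import chi2

    MU = 100.0       # expected mean of a 200-bit sample
    SIGMA = 7.071    # sqrt(200 * 0.5 * 0.5)

    def second_z(samples):
        """Stouffer Z for one second, given {egg_id: x_i} for the eggs that reported."""
        zs = [(x - MU) / SIGMA for x in samples.values()]
        return sum(zs) / math.sqrt(len(zs))

    def window_probability(per_second_samples):
        """Chance probability of the deviation over a sequence of per-second sample dicts."""
        v_sum, df = 0.0, 0
        for samples in per_second_samples:
            if samples:                      # skip seconds with no eggs reporting
                z = second_z(samples)
                v_sum += z * z               # Z squared is chi-squared with 1 degree of freedom
                df += 1
        return chi2.sf(v_sum, df)            # probability the summed deviation is chance

    def cumulative_deviation(per_second_samples):
        """Running sum of (Z^2 - 1), the quantity traced in cumulative deviation plots."""
        total, curve = 0.0, []
        for samples in per_second_samples:
            if samples:
                z = second_z(samples)
                total += z * z - 1.0         # each Z^2 has expectation 1 under chance
            curve.append(total)
        return curve

Roughly speaking, the P = 0.05 envelope shown in the plots can be drawn from the same machinery: after n seconds it lies at chi2.isf(0.05, n) − n, the cumulative deviation a purely chance process would exceed only one time in twenty.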

The most common way to visualise these data is a “cumulative deviation plot” in which the squared Z-scores are summed to show the cumulative deviation from chance expectation over time. These plots are usually accompanied by a curve which shows the boundary for a chance probability of 0.05, or one in twenty, which is often used as a criterion for significance. Here is such a plot for U.S. president Obama's 2012 State of the Union address, an event of ephemeral significance which few people anticipated and even fewer remember.

Cumulative deviation: State of the Union 2012

What we see here is precisely what you'd expect for purely random data without any divergence from random expectation. The cumulative deviation wanders around the expectation value of zero in a “random walk” without any obvious trend and never approaches the threshold of significance. So do all of our plots look like this (which is what you'd expect)?

Well, not exactly. Now let's look at an event which was unexpected and garnered much more worldwide attention: the death of Muammar Gadaffi (or however you choose to spell it) on 2011-10-20.

Cumulative deviation: Gadaffi killed, 2011-10-20

Now we see the cumulative deviation taking off, blowing right through the criterion of significance, and ending twelve hours later with a Z-score of 2.38 and a probability of the result being due to chance of one in 111.

What's going on here? How could an event which engages the minds of billions of slightly-evolved apes affect the output of random event generators driven by quantum processes believed to be inherently random? Hypotheses non fingo. All right, I'll fingo just a little bit, suggesting that my crackpot theory of paranormal phenomena might be in play here. But the real test is not in potentially cherry-picked events such as I've shown you here, but the accumulation of evidence over almost two decades. Each event has been the subject of a formal prediction, recorded in a Hypothesis Registry before the data were examined. (Some of these events were predicted well in advance [for example, New Year's Day celebrations or solar eclipses], while others could be defined only after the fact, such as terrorist attacks or earthquakes.)

The significance of the entire ensemble of tests can be computed from the 500 formal predictions in the Hypothesis Registry and the network results for the periods in which a non-random effect was predicted. To compute this effect, we take the formal predictions and compute a cumulative Z-score across the events. Here's what you get.

Cumulative deviation: GCP 1998 through 2015

Now this is…interesting. Here, summing over 500 formal predictions, we have a Z-score of 7.31, which implies that the results observed were due to chance with a probability of less than one in a trillion. This is far beyond the criterion usually considered for a discovery in physics. And yet, what we have here is a tiny effect. But could it be expected in truly random data? To check this, we compare the results from the network for the events in the Hypothesis Registry with 500 simulated runs using data from a pseudorandom normal distribution.

Cumulative deviation: GCP results versus pseudorandom simulations
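As a sketch of how such a control comparison can be run, the following snippet generates an ensemble of simulated events from a pseudorandom normal distribution and feeds it through the same Stouffer combination as above, reusing second_z and the MU and SIGMA constants from the earlier sketch. The event durations and egg counts are placeholders of my own; the real control runs use the actual event definitions from the Hypothesis Registry.

    # Hedged sketch of a pseudorandom control ensemble: replace the network data
    # for each formal event with pseudorandom samples of the same shape and
    # re-run the identical analysis.  Durations and egg counts are placeholders.
    import numpy as np

    rng = np.random.default_rng()

    def simulated_event(seconds, eggs):
        """Pseudorandom per-second sample dicts mimicking one event window."""
        data = rng.normal(MU, SIGMA, size=(seconds, eggs))
        return [dict(enumerate(row)) for row in data]

    def simulated_ensemble_z(n_events=500, seconds=3600, eggs=40):
        """Combined Stouffer Z over a simulated ensemble of formal events."""
        event_zs = []
        for _ in range(n_events):
            zs = [second_z(s) for s in simulated_event(seconds, eggs)]
            event_zs.append(sum(zs) / np.sqrt(len(zs)))   # event-level Stouffer Z
        return sum(event_zs) / np.sqrt(len(event_zs))     # ensemble-level Z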

Since the network has been up and running continually since 1998, it was in operation on September 11, 2001, when a mass casualty terrorist attack occurred in the United States. The formally recorded prediction for this event was an elevated network variance in the period starting 10 minutes before the first plane crashed into the World Trade Center and extending for over four hours afterward (from 08:35 through 12:45 Eastern Daylight Time). There were 37 eggs reporting that day (around half the size of the fully built-out network at its largest). Here is a chart of the cumulative deviation of chi-square for that period.

Cumulative deviation of chi-square: terrorist attacks 2001-09-11

The final probability was 0.028, which is equivalent to an odds ratio of 35 to one against chance. This is not a particularly significant result, but it met the pre-specified criterion of significance of probability less than 0.05. An alternative way of looking at the data is to plot the cumulative Z-score, which shows both the direction of the deviations from expectation for randomness as well as their magnitude, and can serve as a measure of correlation among the eggs (which should not exist in genuinely random data). This and subsequent analyses did not contribute to the formal database of results from which the overall significance figures were calculated, but are rather exploratory analyses of the data to see if other interesting patterns might be present.

Cumulative deviation of Z-score: terrorist attacks 2001-09-11

Had this form of analysis and time window been chosen a priori, it would have been calculated to have a chance probability of 0.000075, or less than one in ten thousand. Now let's look at a week-long window of time between September 7 and 13. The time of the September 11 attacks is marked by the black box. We use the cumulative deviation of chi-square from the formal analysis and start the plot of the P=0.05 envelope at that time.

Cumulative deviation of chi-square: seven day window around 2001-09-11

Another analysis looks at a 20 hour period centred on the attacks and smooths the Z-scores by averaging them within a one hour sliding window, then squares the average and converts to odds against chance.

Odds: twenty hour window around 2001-09-11, one hour smoothing
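Here is a hedged sketch of that smoothing step, assuming the per-second Stouffer Z-scores are already in hand: average within a one-hour sliding window, rescale, square, and convert the tail probability to odds. The rescaling before squaring and the exact conversion to odds are my reading of the description above; the published analysis may normalise differently.

    # Hedged sketch of the one-hour sliding-window smoothing described above.
    # The rescaling before squaring and the conversion to odds are assumptions.
    import numpy as np
    from scipy.stats import chi2

    def smoothed_odds(z_per_second, window=3600):
        """Odds against chance for a sliding one-hour mean of per-second Z-scores."""
        z = np.asarray(z_per_second, dtype=float)
        z_mean = np.convolve(z, np.ones(window) / window, mode="valid")
        # The mean of `window` independent unit-variance Z-scores has variance
        # 1/window, so rescale before squaring to get chi-squared(1) under chance.
        stat = (z_mean * np.sqrt(window)) ** 2
        p = chi2.sf(stat, df=1)
        return (1.0 - p) / p      # odds against chance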

Dean Radin performed an independent analysis of the day's data, binning the Z-scores into five-minute intervals over the period from September 6 to 13, then calculating the odds against the result being a random fluctuation. This is plotted on a logarithmic scale of odds against chance, with each 0 on the X axis denoting midnight of each day.

Binned odds: 2001-09-06 to 2001-09-13

The following is the result when the actual GCP data from September 2001 is replaced with pseudorandom data for the same period.

Binned odds: pseudorandom data 2001-09-06 to 2001-09-13

So, what are we to make of all this? That depends upon what you, and I, and everybody else make of this large body of publicly-available, transparently-collected data assembled over more than twenty years from dozens of independently-operated sites all over the world. I don't know about you, but I find it darned intriguing. Having been involved in the project since its very early days and seen all of the software used in data collection and archiving with my own eyes, I have complete confidence in the integrity of the data and the people involved with the project. The individual random event generators pass exhaustive randomness tests. When control runs are made by substituting data for the periods predicted in the formal tests with data collected at other randomly selected intervals from the actual physical network, the observed deviations from randomness go away, and the same happens when network data are replaced by computer-generated pseudorandom data. The statistics used in the formal analysis are all simple matters you'll learn in an introductory stat class and are explained in my “Introduction to Probability and Statistics”.

If you're interested in exploring further, Roger Nelson's book is an excellent introduction to the rationale and history of the project, how it works, and a look at the principal results and what they might mean. There is also non-formal exploration of other possible effects, such as attenuation by distance, day and night sleep cycles, and effect sizes for different categories of events. There's also quite a bit of New Age stuff which makes my engineer's eyes glaze over, but it doesn't detract from the rigorous information elsewhere.

The ultimate resource is the Global Consciousness Project's sprawling and detailed Web site. Although well-designed, the site can be somewhat intimidating due to its sheer size. You can find historical documents, complete access to the full database, analyses of events, and even the complete source code for the egg and basket programs.

A Kindle edition is available.

All graphs in this article are as posted on the Global Consciousness Project Web site.

Posted at 21:10 Permalink