Fair or Foul, Part 4: Weather Analysis and Forecasting


The British set up a storm warning system in 1860. Thirteen coastal telegraph stations were supplied with meteorological equipment and were to report conditions at 9 am; London would then telegraph back storm warnings, and the stations would hoist storm signals on a high mast. These warned of "probable" dangerous winds and their direction (Moore 243). The system was popular, but the authorities became skeptical about the accuracy of the forecasts, and the program was discontinued in 1866 (312ff).

In the new timeline (NTL), having built meteorological instruments (parts 1 and 2), and created an observation network (part 3), we can proceed to analyze the state of the weather and forecast how it will evolve. Indeed, as early as October 1633, the Voice of America, broadcasting from Grantville, features a "local weather forecast" (Hughes, "Turn Your Radio On, Episode Two," Grantville Gazette 20). But canon does not say how forecasts were made or how accurate they typically were.

****

 

Analysis: Creation of Synoptic Charts

 

A synoptic chart is one that shows the state of the weather at several locations at the same time. This is easiest if the various weather stations make their observations at the same (universal) time. However, that may be difficult initially, since observations may be possible at some times and not others, local time will differ from universal time, and mobile stations may not know their longitude with precision. Observations made at divergent times will have to be either ignored or interpolated to arrive at estimated conditions for each site at the target time of the synoptic chart.
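
If a station's observations bracket but miss the synoptic hour, simple linear interpolation is often good enough. Below is a minimal sketch of that idea; the station times and pressures are invented, and the function name is ours, not anything from canon.

```python
# Minimal sketch: linearly interpolate an off-time observation series to the
# synoptic hour. Times are hours past midnight UTC; pressures are in hPa.
# The station data below are purely illustrative.

def interpolate_to_synoptic(times, values, target_time):
    """Linear interpolation of an observed quantity to the synoptic time.

    Returns None if the target time lies outside the observed interval
    (extrapolating is riskier than simply discarding the station).
    """
    if not times or target_time < times[0] or target_time > times[-1]:
        return None
    for (t0, v0), (t1, v1) in zip(zip(times, values), zip(times[1:], values[1:])):
        if t0 <= target_time <= t1:
            frac = (target_time - t0) / (t1 - t0)
            return v0 + frac * (v1 - v0)
    return None

# A hypothetical station that observed at 07:30 and 10:00 UTC;
# we want its pressure at the 09:00 UTC synoptic time.
print(interpolate_to_synoptic([7.5, 10.0], [1012.4, 1009.8], 9.0))  # ~1010.84
```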

Once the observations are plotted, the next step is drawing isobars (contour lines of equal pressure) at standard intervals.  The denser the observational coverage, the less guesswork is involved. However, there is some art to the process (until we are able to write computer programs to do it for us). You look at what pressures (reduced to sea level) were reported where, you consider that winds blow across isobar lines at a slight angle (toward low pressure and away from high pressure) that is dependent on terrain, and you consider that the stronger the wind, the closer the lines should be.
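
Once enough stations report, the "art" of isobar drawing can eventually be mechanized. The sketch below is one plausible approach, not a prescribed method: it reduces station pressures to sea level with the standard barometric-formula approximation, interpolates the scattered values onto a grid with SciPy, and contours every 4 hPa. All station positions and readings are invented.

```python
# Illustrative sketch of machine-drawn isobars: reduce station pressures to
# sea level, interpolate the scattered values onto a regular grid, and contour
# at a standard 4 hPa interval. Station coordinates and values are made up.
import numpy as np
from scipy.interpolate import griddata
import matplotlib.pyplot as plt

def reduce_to_sea_level(p_station_hpa, elevation_m, temp_c):
    """Standard barometric-formula approximation for sea-level reduction."""
    g, M, R = 9.80665, 0.0289644, 8.31432
    temp_k = temp_c + 273.15
    return p_station_hpa * np.exp(g * M * elevation_m / (R * temp_k))

# Hypothetical stations: longitude, latitude, station pressure (hPa),
# elevation (m), temperature (C).
stations = np.array([
    [10.0, 50.0,  990.0, 200.0, 12.0],
    [11.5, 51.0, 1005.0, 100.0, 10.0],
    [ 9.0, 51.5, 1012.0,  50.0,  9.0],
    [12.0, 49.5,  975.0, 400.0, 11.0],
])
lons, lats = stations[:, 0], stations[:, 1]
slp = reduce_to_sea_level(stations[:, 2], stations[:, 3], stations[:, 4])

# Interpolate to a regular grid and draw isobars every 4 hPa.
grid_lon, grid_lat = np.meshgrid(np.linspace(9, 12, 50), np.linspace(49.5, 51.5, 50))
grid_slp = griddata((lons, lats), slp, (grid_lon, grid_lat), method="linear")
levels = np.arange(980, 1044, 4)
cs = plt.contour(grid_lon, grid_lat, grid_slp, levels=levels)
plt.clabel(cs, fmt="%d")
plt.show()
```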

The resulting contour map will show centers of low and high pressure, troughs and ridges, and cols (saddles). The high and low centers may further be classified as cold or warm core.

The third step is frontal analysis; fronts separate unlike air masses. Our characters may locate a front by reference to precipitation, and to the characteristic changes in temperature, dew point, pressure, wind, and cloud type before and after its passage. These signatures differ for cold and warm fronts, and an occluded front combines the two. The recent history of the system may also prove helpful. The isobars should kink where they cross a front, and you may need to fine-tune them once you know where the fronts are. We also classify the air masses on either side of the fronts.
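
As a very rough illustration of how those frontal signatures might be automated, here is a hedged sketch that scans one station's hourly record for the combination of a sharp temperature drop, a wind veer, and a pressure minimum followed by a rise. The thresholds and the sample record are arbitrary choices, not established criteria.

```python
# Hedged sketch: flag hours in a single station's record that look like a
# cold-front passage, using the classic signatures mentioned above. The
# thresholds and the record format are illustrative assumptions.

def veered(dir_before, dir_after):
    """True if the wind direction turned clockwise (e.g., SW -> NW)."""
    return 0 < (dir_after - dir_before) % 360 < 180

def candidate_cold_front_hours(temps_c, pressures_hpa, wind_dirs_deg):
    """Return indices i where the hour i-1 -> i+1 window shows a frontal signature."""
    hits = []
    for i in range(1, len(temps_c) - 1):
        temp_drop = temps_c[i + 1] - temps_c[i - 1] <= -3.0            # sharp cooling
        pressure_kick = (pressures_hpa[i] <= pressures_hpa[i - 1]      # minimum ...
                         and pressures_hpa[i + 1] > pressures_hpa[i])  # ... then rise
        wind_veer = veered(wind_dirs_deg[i - 1], wind_dirs_deg[i + 1])
        if temp_drop and pressure_kick and wind_veer:
            hits.append(i)
    return hits

# Hypothetical six-hour record (one entry per hour).
temps = [14.0, 13.5, 13.0, 9.0, 8.5, 8.0]
press = [1004.0, 1002.5, 1001.0, 1000.5, 1002.0, 1003.5]
winds = [225, 230, 240, 300, 310, 315]   # SW veering to NW
print(candidate_cold_front_hours(temps, press, winds))  # [3]
```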

Once you have an isobaric synoptic weather map, you may go a step further and compare it to the one for the prior period, creating an isallobaric map (with isallobars, lines of equal pressure change). These help determine the direction in which the pressure systems are moving. The isallobars may be drawn on the same chart or perhaps on a transparent overlay.
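
A sketch of the isallobaric overlay, assuming both charts have already been interpolated to the same grid (as in the isobar sketch above); the array names and the 3-hour interval are our own assumptions.

```python
# Sketch of an isallobaric overlay: subtract the previous chart's gridded
# sea-level pressure from the current one and contour the change, e.g. every
# 1 hPa per 3 hours. Assumes both charts share the grid built above.
import numpy as np
import matplotlib.pyplot as plt

def plot_isallobars(grid_lon, grid_lat, grid_slp_now, grid_slp_3h_ago):
    tendency = grid_slp_now - grid_slp_3h_ago            # hPa per 3 h
    levels = np.arange(-10, 11, 1)
    cs = plt.contour(grid_lon, grid_lat, tendency, levels=levels, linestyles="dashed")
    plt.clabel(cs, fmt="%+d")
    plt.show()
```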

The first NTL synoptic charts will be surface charts, but upper air observations will ultimately make upper air charts possible. Nowadays, upper air charts are constant pressure charts; for each station they show the temperature, dewpoint depression, wind direction and speed, and altitude of the specified pressure level. The contours drawn on these charts are lines of equal altitude (isoheights).

Besides the contour maps already mentioned, the analyst may draw isotherms (equal temperature), isodrosotherms (equal dewpoint), and isotachs (equal wind speeds).

Study of the arrangement of wind speeds and directions can reveal areas of inward (convergence) and outward (divergence) mass flow. In turn, vertical motion can be inferred and vertical velocity charts prepared.
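
For example, on a gridded wind analysis the divergence can be estimated with centered finite differences, and the vertical velocity at the top of a shallow layer inferred from mass continuity. The sketch below uses an invented 3x3 grid and an incompressible approximation; it is meant only to show the arithmetic.

```python
# Hedged sketch: estimate horizontal divergence (du/dx + dv/dy) on a gridded
# wind field with centered finite differences, then infer the vertical
# velocity at the top of a shallow layer from mass continuity
# (incompressible approximation: w_top ~= -divergence * layer_depth).
import numpy as np

def horizontal_divergence(u, v, dx_m, dy_m):
    """u, v: 2-D arrays of wind components (m/s) on a regular x/y grid."""
    dudx = np.gradient(u, dx_m, axis=1)   # d(u)/dx
    dvdy = np.gradient(v, dy_m, axis=0)   # d(v)/dy
    return dudx + dvdy                    # 1/s; negative = convergence

def vertical_velocity_at_top(divergence, layer_depth_m):
    """Continuity: convergence in a layer forces ascent at its top."""
    return -divergence * layer_depth_m    # m/s; positive = upward

# Hypothetical 3x3 grid, 100 km spacing: winds converging toward the center.
u = np.array([[5.0, 0.0, -5.0]] * 3)           # eastward component
v = np.array([[5.0] * 3, [0.0] * 3, [-5.0] * 3])   # northward component
div = horizontal_divergence(u, v, 100e3, 100e3)
print(div[1, 1])                                    # negative: convergence
print(vertical_velocity_at_top(div[1, 1], 1000.0))  # positive: ascent (m/s)
```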

Oceanographic charts will show the sea conditions. In the North Atlantic, sea surface temperature observations will outline the warm Gulf Stream and the cold Labrador Current.

****

 

Forecasting

 

Weather forecasts can be qualitative or quantitative; deterministic or probabilistic; local, regional, or global; and short- or long-term.

Aircraft and airship pilots will be most interested in forecasts for three to six hours ahead. Farmers, on the other hand, are more interested in long-term forecasts, guiding what crops to plant this season, or whether to harvest this week or next. Of course, they would also like warnings about short-term threats, like frost or hail. Mariners and army commanders fall somewhere in between.

In the NTL 1630s, we will be making mostly short-term forecasts (under 24 hours) on a local or at best a regional scale. We may occasionally recognize a persistent pattern that permits a longer-term forecast. There is likely to be a lot of uncertainty in the forecast, whether this is expressed as a percent probability or not.

****

 

Empirical Weather Forecasting

 

Numerical weather forecasting (numerical weather prediction, or NWP), which we'll discuss in the next section, relies on physics-based computer models; i.e., on the present state of the atmosphere and a knowledge of fluid dynamics.

In contrast, empirical weather forecasting is based on analysis of past weather states and of the correlations between successive weather states. The analysis may be formal (statistical analysis) or informal (rules of thumb internalized from experience). The underlying concept is that if the present weather duplicates past weather, the prognosis should be what happened next in the past.

Unfortunately, we begin with a paucity of weather statistics, and even if we could wave our hands and create a complete observation network out of thin air, it would still take years to collect a useful body of data (more on that shortly).

There are a variety of statistical tools available, including least squares regression, Markov chains, harmonic analysis, principal component analysis, canonical correlation analysis, and discriminant analysis. For example, from 1964 to 1988, the US National Hurricane Center used a set of regression equations to forecast hurricane movement (Wilks 196; Miller). The main danger is overfitting: the method does a great job of explaining the variation in the reference data but fails when exposed to new data.
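
As a toy illustration of the regression approach, and of why held-out data matter, the sketch below fits a least-squares equation predicting a storm's 24-hour eastward displacement from its current position. The data are fabricated and the predictors are not the National Hurricane Center's actual ones; the point is the train/test split.

```python
# Toy illustration (not the NHC's actual equations): fit a least-squares
# regression predicting a storm's 24-hour eastward displacement from its
# current latitude and longitude, and always check the fit on held-out cases
# to guard against overfitting. All numbers below are fabricated for the demo.
import numpy as np

rng = np.random.default_rng(0)
lat = rng.uniform(10, 35, 40)          # degrees N
lon = rng.uniform(-80, -40, 40)        # degrees E (negative = west)
# Fabricated "truth": higher-latitude storms recurve and move faster eastward.
dx_24h = 2.0 * (lat - 20) + 0.1 * (lon + 60) + rng.normal(0, 5, 40)

X = np.column_stack([np.ones_like(lat), lat, lon])
train, test = slice(0, 30), slice(30, 40)

coef, *_ = np.linalg.lstsq(X[train], dx_24h[train], rcond=None)
pred_train = X[train] @ coef
pred_test = X[test] @ coef

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print("train RMSE:", rmse(pred_train, dx_24h[train]))
print("test  RMSE:", rmse(pred_test, dx_24h[test]))   # the honest number
```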

Historically, the development of empirical weather forecasting was inhibited after the Fifties as NWP became increasingly skillful, but there may be a place for it in the NTL, especially if we suffer a computing power shortfall after we have developed an adequate observation network.

To be useful, empirical weather forecasting must do better than what can be achieved by knowing the climatology of the locality, but it can take that climatology into account.

 

Climatology. The temperature, precipitation, and wind direction and strength that a location receives at a particular time of year tend not to stray too far from the average for that location and time. To give an extreme example, it is a safe bet that it is not going to snow in Death Valley in July (barring a Krakatoa-scale eruption).

While Germany, like West Virginia, is in the northern temperate climate zone, air masses approaching from the west are conditioned by crossing the ocean rather than a continent. And climate extremes for Europe are less pronounced than for the United States. Unfortunately, our characters do not have detailed climatological data for Europe, let alone seventeenth-century Europe.

In tourist guides that came through the RoF, our characters may be able to find monthly averages for temperature and rainfall for particular cities, but of course those will be averages for the twentieth century, and will not take into account the effect of the Little Ice Age. And there will be down-timers who have kept weather diaries, but those typically expressed meteorological data in qualitative terms ("very hot today"), and it will be difficult to make use of such data.

As meteorological instruments are made and distributed, we will begin to accumulate data, and as the years pass, the climatological data will better reflect the natural variability of the weather, and thus be a more useful guide. However, it is only fair to point out that it is customary nowadays to compute climatological normals over a thirty-year period. In 1636 we will have at best five years of statistics; since the sampling error of an average shrinks with the square root of the number of years, five-year normals will have about 2.45 times the variability of thirty-year ones (the square root of 30/5).

 

Day-to-Day Persistence. The simplest of all forecasting methods is to assume that the weather tomorrow will be the same as the weather today. If this were always true, there would be no need for meteorologists! However, persistence forecasting has its role. The less the variability of weather at a particular time and place, the more reliable the persistence forecast will be. Temperature variability is typically smaller if the location is surrounded by water rather than land. As meteorological records accumulate, we should be able to quantify the degree of persistence as a function of location and season.
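
Quantifying persistence is straightforward once a daily log exists: count how often a "tomorrow will be like today" rain/no-rain forecast verifies. A minimal sketch, with an invented ten-day record standing in for a real log:

```python
# Sketch: once a station has a run of daily records, measure how often a
# "tomorrow = today" persistence forecast of rain/no-rain verifies. The
# record below is an invented stand-in for a real observation log.

def persistence_hit_rate(rain_days):
    """rain_days: list of booleans, one per day (True = rain observed)."""
    pairs = list(zip(rain_days, rain_days[1:]))
    hits = sum(1 for today, tomorrow in pairs if today == tomorrow)
    return hits / len(pairs)

record = [False, False, True, True, True, False, False, True, False, False]
print(persistence_hit_rate(record))   # ~0.56 for this invented record
```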

Meteorologists have developed random "weather generators" to simulate weather for use with agricultural productivity or hydrological engineering software (Wilks), but the near-term simulated weather can be thought of as a prediction of real weather. These generators usually simulate rainfall as a first-order Markov process; i.e., you specify the conditional probability of rain today given rain yesterday, and of no rain today given no rain yesterday (if these differ, the average lengths of wet and dry spells will differ). These probabilities can be annual, monthly, or even day-of-the-year averages, depending on the statistics collected (i.e., climatology is considered). There are a few climates for which second-order Markov processes are appropriate; i.e., you need to look two days back. Predicting temperature, wind strength, and the like is more complicated.
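
A minimal sketch of such a first-order Markov rain generator follows. The two transition probabilities are illustrative placeholders; in practice they would be estimated from the accumulated station statistics, per month or per day of year.

```python
# Minimal first-order Markov "weather generator" for rain occurrence, of the
# kind described above. The transition probabilities are illustrative;
# in practice they would come from the accumulated station statistics.
import random

def simulate_rain_days(n_days, p_wet_after_wet, p_wet_after_dry,
                       start_wet=False, seed=None):
    rng = random.Random(seed)
    wet = start_wet
    days = []
    for _ in range(n_days):
        p_wet = p_wet_after_wet if wet else p_wet_after_dry
        wet = rng.random() < p_wet
        days.append(wet)
    return days

# Illustrative parameters: wet days tend to follow wet days (persistence).
sim = simulate_rain_days(30, p_wet_after_wet=0.65, p_wet_after_dry=0.25, seed=1)
print("".join("R" if d else "." for d in sim))
# Mean wet-spell length for these parameters: 1 / (1 - 0.65), about 2.9 days.
```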

 

Single Observer Forecasting Methods. The down-timers spend much more time outdoors than most twenty-first-century Americans do and thus are more aware of changes in the appearance of the sky and the behavior of animals and plants. This has led to "weather wisdom," sayings that make prognostications. Some have a scientific basis—for example, referencing clouds that are associated with an approaching warm front (Lee; Watts).

Some authors have attempted to quantify the guidance provided by clouds. For example, Docekal, Nature Detective (102), says that when cirrostratus appears, "there is an 80 percent chance of rain within 24 hours." To put that in perspective, Chapman Piloting, Seamanship and Boat Handling says that warm fronts move 150-200 miles per day, and presents a cross section of a warm front showing cirrostratus about 400 miles ahead of the surface front and rain beginning about 200 miles ahead (311-3). At that speed, the roughly 200-mile gap between the first cirrostratus and the onset of rain works out to about 24 to 32 hours.

It is not really worth looking for such guidance in Grantville literature because of the differences between American and European weather. Rather, we should be tabulating meteorological observations and calculating the probabilities for the here and now.

Other authors have provided guidance that relies on cloud type, cloud movement (upper wind direction) and surface wind direction. Watts (80) suggests that when the winds at cirrus height are crossed in direction to those at cumulus height, change (for better or worse) is likely.

The barometer, possibly combined with wind direction, has been the basis of several early forecasting methods. A falling barometer, by itself, is ambiguous; it can mean that a nearby low is approaching or intensifying, or that a nearby high is receding or weakening. However, it can be coupled with consideration of whether the wind is strengthening or weakening, constant, veering, or backing. The old Weather Bureau published a table of wind and barometer indications for the United States. For example, a wind from the SW to NW quarter, with the barometer at 30.10-30.20 inches of mercury and rising rapidly, meant "fair, followed within two days by rain."
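
Such a table translates naturally into a lookup keyed on wind quarter, pressure band, and barometric trend. The sketch below fills in only the single entry quoted above; the rest would be transcribed from the Weather Bureau table and, eventually, recalibrated for European conditions.

```python
# Sketch of the old Weather Bureau wind-barometer table as a simple lookup.
# Only the one entry quoted above is filled in; the remaining combinations
# would be transcribed from the table. Keys: (wind quarter,
# pressure band in inches of mercury, barometric trend).
WIND_BAROMETER_TABLE = {
    ("SW-NW", "30.10-30.20", "rising rapidly"):
        "Fair, followed within two days by rain.",
    # ... further entries transcribed from the table ...
}

def indication(wind_quarter, pressure_in_hg, trend):
    band = "30.10-30.20" if 30.10 <= pressure_in_hg <= 30.20 else "other"
    return WIND_BAROMETER_TABLE.get((wind_quarter, band, trend),
                                    "No indication tabulated.")

print(indication("SW-NW", 30.15, "rising rapidly"))
```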

Several forecasting devices were manufactured in the late nineteenth and early twentieth centuries that conceivably could have been in RoF Grantville. They cannot be used "as is" in 1630s Germany, but they can be studied for inspiration.

One of these was the 1915 Negretti & Zambra ("Zambretti") Pocket forecaster (British Patent 6276/15). It had two movable dials; one was set to the barometer reading, the other to the wind direction, and then you read off a code from one of three windows (for a rising, falling, or steady barometer). It was intended to provide 12-hour forecasts, and claims of 90% or greater accuracy are (dubiously) made for it (how does one judge whether the forecast "fine, possibly showers" was right or wrong?).

A more detailed description of the device, with comments on how to implement it algorithmically, appears here:

https://web.archive.org/web/20110610213848/http://www.meteormetrics.com/zambretti.htm

The developers were London-based barometer makers, so plainly the device was designed with the British climate in mind. This website provides a "Zambretti" algorithm-based forecaster, modified for use worldwide (you set the hemisphere and the local barometric weather range, but it ignores barometric trends): http://www.beteljuice.co.uk/zambretti/forecast.html

The other device is the Sager Weathercaster, a spiral-bound booklet with a four-dial circular calculator, published in 1969. Raymond Sager was a meteorologist, and he developed the system during WW2. The four dials are for setting wind (and wind change), barometer, barometer change, and present weather, and the settings yield a code number that is looked up in the booklet. For example, the settings for "wind from SW and veering, barometer 29.45 and steady, presently overcast" yield code S634 and the forecast "Rain or Showers followed by improvement (within 12 hours) and becoming cooler; strong SW or W winds becoming strong W or NW."



About Iver P. Cooper

Iver P. Cooper, an intellectual property law attorney, lives in Arlington, Virginia with his wife and two children. Two cats and a chinchilla rule the household with iron paws. Iver has received legal writing awards from the American Patent Law Association, the U.S. Trademark Association, and the American Society of Composers, Authors and Publishers, and is the sole author of Biotechnology and the Law, now in its twenty-something edition. He has frequently contributed both fiction and nonfiction to The Grantville Gazette.

 

When not writing (or trying to get an “orange blob” off his chair so he can start writing), he has been known to teach swing dancing and folk dancing, or to compete in local photo club competitions. Iver adds, “I can’t get my wife to read my fiction, but she has no trouble cashing the checks.”

Iver’s story “The Chase” is in Ring of Fire II.