Treasures of the Earth: Geophysical and Geochemical Prospecting

Conventional prospecting depended on learning to “read the earth”: spotting pegmatites in granite; recognizing sulfide stains in outcrops, or a limonite mass (gossan, “iron hat”) covering an ore body; examining talus slopes for desirable minerals and then working uphill to find their source; understanding where gold and other heavy metal particles might accumulate in a stream; searching the Gulf Coast for low mounds associated with a sulfurous smell—like the famous Spindletop dome.

Geophysical prospecting takes advantage of the local differences in the earth’s physical properties (magnetism, gravity, seismic response, resistivity, etc.) created by geologic structures and changes in rock types, to find anomalies that are suggestive of gas, oil or mineral deposits.

Geochemical prospecting looks for chemical gradients, tracing these trails back to their source.

Well logging is not conventionally considered a form of prospecting, but the examination of drilling records can be quite revealing of what lies below the earth’s surface.

The Lure of the Lodestone: Magnetometry

In what strange regions ‘neath the polar star

May the great hills of massy lodestone rise,

Virtue imparting to the ambient air

To draw the stubborn iron . . . .

—Guido Guinicelli (d. 1276) (Bauer 16).

Magnetic Anomalies

The basis for magnetic prospecting is that the earth’s powerful magnetic field magnetizes certain crustal materials, causing them to generate their own magnetic fields. These fields constitute a local magnetic anomaly that is superimposed on the general magnetic field created by the Earth’s core.

The amenability of a mineral to magnetization is called its susceptibility; the most magnetizable ones are magnetite (“lodestone,” iron oxide), ilmenite (titanium-iron oxide), and pyrrhotite (iron sulfide). The susceptibilities of rocks are in turn dependent on those of their component minerals. Igneous rocks can have relatively high susceptibilities, whereas those of sedimentary rocks are low.

Magnetic prospecting is typically used to (1) find magnetic minerals, (2) find non-magnetic minerals that are associated with magnetic minerals (e.g., pentlandite, an iron-nickel or iron-nickel-cobalt sulfide, is associated with pyrrhotite, and copper, nickel, lead and zinc sulfides are associated with magnetite), or (3) determine the depth to basement (igneous) rocks and thus the depth of a sedimentary basin (which could in turn contain oilfields).

The earth’s average total magnetic field is 0.5 Oersteds (50,000 gammas or nanoteslas). There are large-scale geographic variations in magnetic field strength; in 1965, it varied from about 68,000 gammas on the Antarctic coast to about 24,000 gammas in southern Brazil (Dobrin 486). (At one time, it was thought that these variations might be regular enough so that navigators could use them to determine longitude.) Of course, over a sufficiently limited area, the variation is smaller; within Sweden, for example, it was about 1500 gammas. Grantville literature includes maps showing worldwide variation in magnetic declination, dip (inclination), horizontal force, and vertical force for 1907 (EB11/Magnetism, Terrestrial, Figs. 1-4).

If a magnetic ore body (30% magnetite) were a sphere of 100 foot radius, with its center at a depth of 200 feet, then it would create a magnetic anomaly with a vertical component of 9,450 gammas (Dobrin 502). The anomaly is proportional to the average susceptibility and to the cube of the radius, and inversely proportional to the cube of the depth, so the ability to find an ore body falls off drastically as the body is buried deeper underground, and large bodies are much easier to find than small ones. But a small body near the surface may direct attention away from a large, deep one.
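
If you want to play with that scaling law, here is a minimal Python sketch of the standard dipole approximation for a uniformly magnetized sphere. The bulk susceptibility used (0.18 cgs units, i.e., 30% magnetite at an assumed susceptibility of about 0.6) is my assumption, chosen because it roughly reproduces Dobrin's figure; Dobrin's own inputs aren't given here.

import math

def sphere_vertical_anomaly_gammas(susceptibility, field_gammas, radius_ft, depth_ft):
    # Dipole approximation for an induced-magnetization sphere, directly overhead:
    # dZ = (8*pi/3) * k * F * (R/z)^3 -- note the cube-law dependence on radius and depth.
    return (8 * math.pi / 3) * susceptibility * field_gammas * (radius_ft / depth_ft) ** 3

# Assumed susceptibility 0.18 (30% magnetite), earth's field 50,000 gammas:
print(sphere_vertical_anomaly_gammas(0.18, 50000, 100, 200))   # ~9,400 gammas, near Dobrin's 9,450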

As an example of a real-life magnetic anomaly, take Pea Ridge, Missouri. Airborne magnetometers (altitude, 1800 feet) detected an anomaly of 3200 gamma. The ore body (good for two million tons annually) was at a depth of 1250 feet, and about 3000 feet in diameter. (557).

There are pitfalls for the unwary in magnetic surveying. Magnetite has a much higher susceptibility than other minerals. Hence, in Wisconsin, Michigan and Minnesota, many anomalies were found that were associated with commercially worthless deposits in which small amounts of magnetite were mixed with nonferrous minerals. It therefore helps to confirm a magnetic anomaly with the gravimeter (557). On the other hand, the survey would probably overlook a rich deposit of hematite, because hematite is non-magnetic. (555)

Magnetic surveys for ore bodies are made on a fine grid, with stations separated by as little as 25 feet. In the twentieth century, the main problem in magnetic surveying was making sure that the stations were sufficiently far away from iron objects. That will be less of a problem in the 1632verse, where there’s a paucity of railroad tracks, power lines, wire fencing, and automobiles. However, there will be more difficulty in transporting the equipment from station to station.

Generally speaking, the magnetic anomalies associated with the ore bodies that are shallow and rich enough to be of interest are also likely to be so big that they will stand out against the regional variation within a given magnetic survey area. (Dobrin 553). Hence, regional correction isn’t necessary.

The basement structures of interest to petroleum geologists are much deeper and their rocks are less magnetic. Hence, they are likely to generate anomalies measured in tens or at most hundreds of gammas. (Dobrin 503). To “see” these anomalies, we would need to correct for more regional features. This is feasible in the early seventeenth century; the first major magnetic survey (of the Atlantic) was conducted by Halley in 1701. (These surveys gradually become outdated, as the intensities change, by as much as 120 gammas/year, in an irregular way.) Regional magnetic trends can be mapped at a grid spacing of, say, ten miles (521).

Since the features are at a larger scale, sedimentary basin-oriented magnetic surveys are usually conducted by air or ship, and if land instruments are used, the stations are typically a mile apart.

A magnetic survey can take days or weeks, so one also has to worry about more rapid changes in magnetism with time. Magnetic storms occur intermittently, as a result of solar activity, and can change the field strength by 1000 gammas (more in polar regions). Hence, surveys must shut down during magnetic storms. There is also a predictable daily variation with an amplitude of about 25 gammas.

Magnetometers

Now let’s talk about how geomagnetism is measured. I will use the term “magnetometer” to refer to any device, however primitive, that can be used to quantify the magnetic field. Other than at the magnetic poles and equator, the magnetic field has both a vertical and a horizontal component. Some magnetometers only measure one component, whereas others measure the total field.

According to Dobrin (19), “The magnetic compass was first used in prospecting for iron ore as early as 1640.” Actually, I wouldn’t be surprised if this use predated the Ring of Fire (RoF). The ability of iron objects to deflect the needle (“deviation”) was known in the sixteenth century.

A traditional compass has a magnetic needle, but it’s constrained to move only horizontally. That limits its utility in detecting large masses of magnetic minerals; if you were standing over the orebody, the needle might stay still, or it might spin around, but it certainly isn’t going to point straight down.

If the compass were free to pivot vertically, it would dip, thereby orienting the needle with the local magnetic field. The needle would be vertical at the earth’s magnetic poles and horizontal on the magnetic equator. The magnetic “dip” (inclination) was discovered by Georg Hartmann in 1544 and further studied by Robert Norman later in the sixteenth century. William Gilbert suggested that the dip could be used to determine latitude when the sky was obscured; Henry Hudson refuted this (and in the process sailed rather close to the north magnetic pole). (Ricker)

While dip-compasses were invented in the sixteenth century, mining historians suggest that they were not used for prospecting until the eighteenth or even the nineteenth century (Brough 309). I suspect that this is too pessimistic. That said, the “Swedish Mining Compass” and innumerable variants certainly became popular in the nineteenth century.

There are, of course, many possible variations in how the needle is suspended, and how its position is gauged. One form was the inclinometer, which only pivoted vertically. A regular compass would be used to find the magnetic meridian (magnetic north-south line) and then the inclinometer needle would be aligned with it.

Modern dip needle magnetometers have a practical sensitivity of 10 gamma and a maximum sensitivity, in temperature-controlled environments, of 1 gamma. (Morrison 3.5).

EB15CD says that the simplest absolute magnetometer (Gauss 1832) was a permanent bar magnet suspended by a gold [silk?] fiber; you had to measure the period of oscillation of the magnet. The problems of timing oscillations are discussed below in the context of pendulum-type gravimeters.

In the Schmidt vertical balance, the magnet was balanced on a knife edge, near but not at the center of mass, in such a manner that it would be turned clockwise (say) by gravity and counterclockwise by geomagnetism. The magnet was oriented perpendicular to the local “magnetic meridian” so the horizontal component of the magnetic field would not affect it. A mirror was attached to the top of the magnet, and a light beam reflected off the mirror to illuminate a graduated scale. It had a sensitivity of ten gamma. (Dobrin 505ff). All that Grantville literature says about this device is that it's a relative magnetometer that “uses a horizontally balanced bar magnet equipped with mirror and knife edges.” (EB15CD).

The earth inductor, invented by Charles Delzenne in 1847, works on a completely different principle. A circular coil is mounted so that it can be rapidly rotated around an axis lying along a diameter of the coil. This axis in turn is mounted in a frame, which is itself mounted on pivots. If the axis isn’t parallel to the local magnetic field, the field produces an alternating current in the coil, which in turn can be detected by a galvanometer. The frame would first be positioned horizontally (to measure the vertical component of the magnetic field with the galvanometer) and then vertically (to measure the horizontal component). (Kenyon) I do not believe that there is any description of the earth inductor in Grantville literature, but it's conceivable that one of the resident electrical engineers is familiar with it. And it could certainly be re-invented.

Like the earth inductor, the aviator's magnetic inductor compass senses the earth's magnetic field by induction. The movement of an airplane causes the turning of a paddlewheel or windmill, which rotates the armature of a generator. The geomagnetic field induces a current in the armature coil, which can be sensed with a galvanometer. There was a controller (roughly equivalent to Delzenne's frame) that could be rotated to indicate the desired heading, so that there would be no current if the plane were on course. The inductor compass was popular in the Twenties and Thirties but has long been obsolete. Still, Jesse Wood may know something about it.

The flux gate magnetometer was developed in World War II (for detecting submarines), the nuclear magnetic resonance (proton precession) magnetometer in 1954, and the optically-pumped magnetometer in the Sixties. The first two instruments have sensitivities of about one gamma (Morrison 3.5). They are briefly described by EB15 and McGHEST/Magnetometer.

While the encyclopedias don’t provide much information about magnetometers that would be practical in the early post-RoF period, they aren’t the only relevant Grantville literature.

The Scientific American Amateur Scientist column covered “how to make a sensitive magnetometer” in February 1968. Imagine my surprise when this turned out to be a differential (gradient-measuring) proton precession magnetometer. “The magnetometer featured sensor coils wound on small bottles of distilled water and an audio amplifier employing germanium transistors and a hand-wound tuned transformer filter.” (Fountain). We certainly aren’t going to be mass producing these “Wadsworth” magnetometers, but we probably have enough up-time transistors around to build a few of them. Or perhaps their integrated circuit equivalents.

Shipborne and Airborne Magnetometry

Magnetic surveys can be conducted from ships or aircraft, if the magnetometer is towed so as to distance it from the metal of the vehicle, and you can calculate the position at which each reading was taken (Dobrin 523ff).

Putting a magnetometer in the air makes it possible to survey a large area quickly. However, you need a more sensitive instrument, because the intensity falls off with the cube of the effective depth (altitude plus depth from surface). Over the Dayton ore body in Nevada, the vertical anomaly was over 30,000 gamma and at an altitude of 500 feet, the total anomaly was about 3,000. (560).

Magnetometry in the 1632verse

I expect that dip-compasses will be used for iron ore prospecting in the USE and Sweden by 1633-1635. Don't sneer at these simple devices; they were used in iron ore exploration until about 1950 (Kennedy, Surface Mining 57). And airships will come in very handy for magnetic surveys of the wilds of Norway, Sweden, Finland and Russia—if the magnetometer is sensitive enough for aerial use.

Newton’s Apple: Gravimetry

Gravitational Anomalies

Now let’s talk about gravity. If the earth were isolated in space, perfectly spherical, and of uniform density—that is, chunks of equal volume had the same mass—then the force of gravity you felt would be constant wherever you walked.

In fact, and fortunately, none of those conditions apply. We feel the gravitational force of the sun and moon as well as the earth—that’s why tides exist. Also, the earth isn’t perfectly spherical, and it isn’t uniform. So even the earth’s gravitational force isn’t constant.

The earth’s force of gravity is the aggregate result of the individual pulls of every drop of water, every grain of sand, and every chip of rock on the planet. Each individual pull is proportional to the density of the “bit” (assuming all bits are equal in volume) and inversely proportional to the square of the distance between the observer and that bit. (For a sphere of uniform density, these “pulls” add up so that the aggregate effect is the same as if all the mass were concentrated at the center, falling off with the square of the distance between the observer and that center.)

So if the Assiti were suddenly to replace a sphere of rock, some distance beneath your feet, with a sphere of water, there would be a reduction in the local force of gravity, because water isn’t as dense as rock (usually) and the “pull” from that sphere would be reduced. This is a negative gravitational anomaly. And if the Assiti instead replaced the sphere of rock with a sphere of solid lead, the density and thus the “pull” would be increased, and we would have a positive gravitational anomaly. The difference in density that creates the anomaly is called the density contrast.

As we walk away from the point that lies directly above this Assiti sphere, the anomaly becomes smaller (less positive or less negative) and eventually becomes undetectable.

Now, here are the two key points that make this of interest to people who want to find oil. First, oil is often trapped above or alongside a geological structure called a salt dome, essentially a big vertical mass of salt extruded upward like geological toothpaste. Secondly, salt is usually less dense than the surrounding rock.

If we can detect these small changes in local gravity that are the result of density contrast, then we can find salt domes.

So, we have two questions to answer before we can design an appropriate instrument. First, how small are the anomalies associated with salt domes (or other geologic structures that we want to find)? Second, how do they compare in magnitude to the average force of gravity and to the other conditions that can affect local gravity?

In the geophysical prospecting business, gravitational force is measured in galileos (Gals). On this scale, the average gravitational force at the earth’s surface is 980 Gals. A milliGal is one-thousandth of a Gal; a microGal, one-millionth.

The average density of rock salt is 2.22; sedimentary rock, 2.50; igneous rock, 2.70; and metamorphic rock, 2.74. So, on average, there is a density contrast of 0.28 between salt and sedimentary rock, creating a negative gravitational anomaly.

For a sphere in a homogeneous country rock, the peak gravitational anomaly (in milligals) is 8.53 * density contrast * radius³ / depth², with the radius and depth in kilofeet. The shape of the gravitational anomaly profile (the falloff as you move away from directly above the center of the sphere) indicates the depth of the sphere.

A vertical cylinder is a better model of a salt dome (or a volcanic plug), but the formula is more complex: 12.77 * density contrast * (length of the cylinder + diagonal distance from the surface point above the axis of the cylinder to the perimeter of the top face – diagonal distance to the perimeter of the bottom face), again with all lengths in kilofeet. Thus, a cylinder of salt with a constant density contrast of 0.2, running from 2,000 feet to 14,000 feet, with a radius of 4,000 feet, would create an anomaly of 4.88 milligals. If the contrast were 0.3, it would be 7.32 milligals, and if the cylinder also ran from 1,000 to 13,000 feet, it would be 9.66. On the other hand, if the contrast were 0.2 and the cylinder ran from 8,000 to 14,000 feet, it would be 0.98.
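
Since this sort of arithmetic is easy to get wrong, here is a short Python sketch of the two formulas just quoted (lengths in kilofeet, density contrast in g/cm³, results in milligals). It reproduces the cylinder examples above; the sphere example in the last line is my own illustration, not from the sources.

import math

def sphere_anomaly_mgal(contrast, radius_kft, depth_kft):
    # Peak anomaly over a buried sphere: 8.53 * contrast * radius^3 / depth^2
    return 8.53 * contrast * radius_kft ** 3 / depth_kft ** 2

def cylinder_anomaly_mgal(contrast, radius_kft, top_kft, bottom_kft):
    # Peak anomaly on the axis of a vertical cylinder:
    # 12.77 * contrast * (length + slant distance to top rim - slant distance to bottom rim)
    length = bottom_kft - top_kft
    return 12.77 * contrast * (length
                               + math.hypot(radius_kft, top_kft)
                               - math.hypot(radius_kft, bottom_kft))

print(cylinder_anomaly_mgal(0.2, 4, 2, 14))    # ~4.88 milligals
print(cylinder_anomaly_mgal(0.3, 4, 1, 13))    # ~9.66 milligals
print(cylinder_anomaly_mgal(0.2, 4, 8, 14))    # ~0.98 milligals
print(sphere_anomaly_mgal(0.28, 1, 2))         # a 1,000 ft radius salt sphere centered 2,000 ft down: ~0.6 milligals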

Given knowledge of Newton's law, potential theory, and calculus, all of which are in Grantville literature, the mathematicians of NTL Europe should be able to calculate the anomaly profiles that would be created by various density contrast geometries.

The peak negative anomaly associated with the deep seated Lovell Lake salt dome (Jefferson County, Texas) was about one milliGal (Dobrin2d, 400). The one over Minden Dome, Louisiana was about 5.5 milligals. (Dobrin 470).

You can have a salt dome present and not detect it by gravity methods because of lack of density contrast. The density range of salt (2.1-2.6) overlaps with that of sandstone, 1.61-2.76; shale, 1.77-3.20; and limestone, 1.93-2.90 (Seigel). Or the dome could be obscured because the reduced gravitational force from the salt is compensated for by increased gravitational force from the cap rock (cap rocks are usually anhydrite, 2.93; calcite, 2.65; or gypsum, 2.35). Or you could see a density contrast, but find that salt isn’t involved. Hence, a favorable gravity survey was usually followed by (more expensive but more informative) seismic studies.

We can also detect the positive gravitational anomaly associated with a large ore body. The metallic minerals include manganite (4.32), chromite (4.36), ilmenite (4.67), magnetite (5.12), malachite (4.0), pyrite (4.6), pyrrhotite (4.65), cassiterite (6.92) and wolframite (7.32). For example, the Mobrun copper-zinc-silver-gold ore body was essentially pyrite in igneous rock. Its peak anomaly was about 1.6 milligals. The Pyramid lead-zinc ore body in Canada peaked at 0.8 milligals, and a Russian chromite deposit at 1.2. (Seigel).

Grantville literature (EB15CD) warns that different structures can produce the same anomalies. For example, a large sphere with a small density contrast and a small sphere with a big density contrast, centered at the same depth, could have the same anomaly profiles.

From the foregoing, it seems that we want to be able to detect anomalies in the 1-10 milliGal range. Moreover, if we want to map the structure, not merely detect the peak, we would probably want sensitivity on the order of 0.1 milligals. So that means that we need to be able to subtract out, not only the average force of gravity, but also any large-scale variations with a magnitude larger than perhaps 0.01 milligals.

EB15CD may scare some would-be prospectors away from gravimetry; it says that for petroleum and mineral prospecting, the necessary accuracy is approaching the microGal (0.001 milligal) level. (Modern petroleum surveys use gravimeters with an accuracy of about 0.005 milligals (Seigel).)

Because the earth bulges at the equator, putting the surface further away, the force is less there (978 Gals) than at the poles (983 Gals). (This includes a slight correction for the centrifugal force caused by the Earth’s rotation, which the gravimeters can’t distinguish from the gravitational force.) The variation is highly nonlinear, but at 45 degrees, it's 980.6, and within a few degrees of that value, the change in gravity with latitude is about 90 milligals per degree (Author's calculation).

The terrain effect is more complicated. If you are on the summit of a mountain, you are moved further from the center of the earth, reducing gravity by about 0.3 milliGals per meter elevation above “sea level” (the “free air” effect), but the additional mass of the mountain is pulling on you, increasing gravity by about 0.1 milliGal per meter if the mountain had average crustal density (2.67)(Bouguer effect), for a net elevation effect of 0.2 milligals/meter.

The accuracy of the elevation data for the “station” limits the achievable accuracy in measuring local gravity. The surveyors in Grantville will be familiar with methods of determining elevation. An elevation difference may be measured trigonometrically, or estimated from the air pressure difference sensed by a barometer. With the anomalies of interest being on the order of 1-10 milligals at peak, we clearly must be able to measure elevation with an accuracy of a meter or two. That's not too difficult on the Hanoverian plain but more problematic in the Carpathians.

The formulae for the latitude, free air and Bouguer effects are in Grantville literature (EB15CD).
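
As a sketch of how those corrections fit together, here is a Python version using the standard textbook forms; I have used the 1930 international gravity formula for the latitude effect and the usual free-air and Bouguer gradients, which should match the EB15CD formulae at least approximately.

import math

def latitude_gravity_gals(lat_deg):
    # 1930 international gravity formula (one common textbook form)
    s = math.sin(math.radians(lat_deg)) ** 2
    s2 = math.sin(math.radians(2 * lat_deg)) ** 2
    return 978.049 * (1 + 0.0052884 * s - 0.0000059 * s2)

def net_elevation_effect_mgal(height_m, density=2.67):
    # Free-air effect (0.3086 mGal/m) partly offset by the Bouguer slab (0.04193 * density mGal/m)
    return (0.3086 - 0.04193 * density) * height_m

print(latitude_gravity_gals(0), latitude_gravity_gals(45), latitude_gravity_gals(90))  # ~978, ~980.6, ~983 Gals
print(net_elevation_effect_mgal(1.0))                                                  # ~0.2 mGal per meter of elevation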

You are affected, of course, not only by the land beneath your feet but also, to a lesser extent, by hills and valleys nearby. Even a two foot “bump,” if less than 55 feet away, would require a correction of 2 microgals, which would have to be taken into account in a high-precision survey (one using gravimeters of ~1-10 microGal sensitivity). The total effect, outside mountain regions, is not likely to exceed 1 milliGal (Wu), but that’s still a lot if you are trying to detect a peak anomaly of 1-10 milligals. Hence, it's a good idea to locate the stations as much as possible on flat terrain, even if that means departing from a mathematically perfect grid arrangement.

In 1939, Hammer developed complex terrain correction zone charts and tables that provided accuracy to 0.1 milligals; for example, a 30′ hill 50 feet from the station, or a 4300′ peak 12 miles away, would each warrant an adjustment of 0.1 milligals. (Dobrin 420ff). These tables didn't pass through the RoF but can certainly be prepared by an appropriately programmed computer, based on knowledge of physics and calculus that Grantville can pass on. The real problem is that to apply these corrections, you need a detailed, accurate topographic survey.

If you are trying to detect a small-scale feature like a salt dome, then you need to subtract out regional trends caused by deep-seated structural features. For example, as you approach the Gulf of Mexico from inland, there is a decrease in regional gravity of about 1 milligal/mile. (Dobrin 437). Accounting for this requires collecting gravimetric data over a sufficiently wide area in order to quantify the regional trend.

There are still other perturbations that affect high-precision surveys. The sun and moon cause tidal variation in local gravity with time, typically on the order of 0.1 milliGals. Changes in atmospheric pressure change the mass of the air column over your head, and these changes are on the order of 0.36 microGals/millibar. Rainfall can raise the water level, increasing gravity by about 0.04 milliGals per meter of retained water.

Gravimeters

Would-be geophysical prospectors may be unduly discouraged by the statement, in Grantville literature, that “gravimeters used in geophysical surveys have an accuracy of about 0.01 milligal” (EB15CD). As shown by the preceding analysis, one that can detect even a 1–20 milliGal anomaly may at least reveal the existence of a salt dome, and one with an accuracy of 0.1 milligals should be able to give some idea of its possible shape.

When you are measuring a tiny effect, you have to worry about instrument errors caused by its own physical limitations or its surroundings. There are essentially three approaches. First, you can attempt to isolate the instrument from the confounding factor. Second, you can construct the apparatus or conduct the experiment in such a way that the factor acts twice, in opposing sense, and thus cancels itself out. (This could be simply averaging out a random variation.) Finally, you can measure or predict the magnitude of the error, and adjust the raw data accordingly.

An absolute gravimeter measures gravity directly. A relative gravimeter—the more common kind—tells us how the gravity at position A compares with that at position B, but must be calibrated by using it alongside an absolute gravimeter.

Gravimeters use one of three principles: timing the oscillation of a swinging or twisting pendulum; measuring the elongation of a spring; or timing the free fall of an object.

Grantville literature reveals that until the 1950s, all absolute measurements were made with pendulum gravimeters; spring gravimeters can't provide an absolute value, and it wasn't possible until then to time a falling body with sufficient accuracy. Likewise, it teaches that until 1930, all relative measurements were made with pendulums, but these were superseded by spring-based instruments. (EB15CD).

Torsion balance. This was the first device used for gravity prospecting. (Pendulums, discussed below, were used previously to measure local gravity, but only for determining the shape of the earth). The balance had a horizontal bar suspended with a silk fiber. If a force was applied to one end of the bar, the bar would rotate, twisting the fiber. The fiber would of course try to untwist, thus supplying an opposing torque. The greater the applied force, the greater the twist angle attained. In 1777, Coulomb used this principle to measure electrostatic forces.

The torsion balance was used to determine the gravitational constant by measuring its deflection (Cavendish, 1798) or change in period of oscillation (Braun 1897) when a neighboring weight was moved from one side to another. (EB15CD/gravitation).

The Coulomb and Cavendish experiments were classic science experiments, and there may be useful descriptions of their apparatus in general science or physics textbooks that passed through the RoF. The physics majors in Grantville may also have duplicated one or both experiments in their college days.

In order to measure local gravity, Coulomb’s device was modified by Eotvos (1901) so that the gravitational force on one end was greater than the force on the other. In essence, that meant placing weights on the ends in such a way that there was a vertical as well as a horizontal separation between them. If there was a localized gravitational anomaly, the “pull” on one weight could be at a slightly different angle and intensity than that on the other, and the component of the net force that acted horizontally and at right angles to the bar would cause the bar to rotate. (Dobrin2d, 201ff). Strictly speaking, the Eotvos torsion balance measured the gradient (the rate of change over distance) of the earth’s gravitational field, not its value at a particular point. (SEG/Timelines).

The torsion balance is briefly described by Grantville literature (EB15CD/torsion balance). The device was light and compact, and accurate to perhaps 2 microgals (SEG). It helped prospectors discover 79 oil fields in the Gulf Coast in the Twenties and Thirties. However, measurements were extremely time-consuming, as, for each gravity determination, readings had to be taken in three different orientations, 120 degrees apart, and then one orientation repeated. Since the device took an hour to stabilize for each reading, that meant that a single gravity determination took four hours (and someone eventually had to solve a system of simultaneous equations to obtain the local gravity from the readings). The typical spacing between “stations” was a quarter or half mile.

The other problem with the torsion balance was that it was so sensitive to surface topography that it performed well only in flat terrain (Louisiana being ideal in this regard).

Spring Gravimeters. If we suspend a weight on a spring, gravity will pull down the weight, fighting against the elastic restoring force of the spring. If the stretch is small, the elongation is proportional to the gravitational force. The spring may be linear or helical. To make the elongation observable, it’s amplified by mechanical (levers) or optical means.

Despite its clear preference for spring gravimeters, Grantville literature (EB15CD) doesn't say much about them. Herschel proposed a spring gravimeter in 1849, but it wasn't sensitive enough. “The difficulties Herschel had encountered were overcome by choosing suitably stable material for the springs, by employing mechanical, electrical or optical devices to amplify the small displacements of the system, and by providing temperature control or compensation.”

That's helpful, but doesn't address several fundamental issues. First, to sense a 1 part in X change in gravity, you need to be able to detect a 1 part in X change in the elongation of the spring. So, for a sensitivity of 0.1 milligals (about 0.1 part per million of the total field), you would need to be able to detect a change of about 0.1 micron in a spring that initially is one meter long. And that also explains why temperature control is so important; a 1° C change in temperature would change the length of even a quartz spring by 5.5 microns.

Secondly, there's the problem of oscillation. Imagine a mass suspended by a spring. Press down on the mass and release it, and it will oscillate up and down until friction and air resistance bring it to a halt. The sensitivity of a simple spring gravimeter is proportional to the square of the period of the oscillation. That of course means that it takes a lot more time to get a reading with a more sensitive unit. But that's not all; for the system to have a period of 20 seconds, the spring length would have to be 100 meters! Plainly, we have to cheat.
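
To put numbers on that, here is a quick Python check of the claims in the last paragraph, treating the instrument as an ideal mass hanging on a spring.

import math

g = 9.81          # m/s^2
period = 20.0     # seconds, the desired natural period

# Displacement per unit change in gravity is m/k = (T / 2*pi)^2, so sensitivity
# grows with the square of the period...
m_over_k = (period / (2 * math.pi)) ** 2       # ~10 s^2

# ...but the static stretch of such a spring under normal gravity is g * (m/k):
print(g * m_over_k)          # ~99 m -- the "100 meter" spring mentioned above

# Deflection produced by a 0.1 milligal change (0.1 mGal = 1e-6 m/s^2):
print(m_over_k * 1e-6)       # ~1e-5 m, i.e. about ten microns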

The “Gulf gravimeter” used a spring wound into a helix; the force of gravity on the weight at the end caused the spring to both elongate and rotate, and the rotation caused the deflection of a light beam.

In the LaCoste-Romberg gravimeter (1934), a cantilevered spring, anchored above the hinge of a hinged beam with a weight at the far end, acts at an angle on a point near the far end; the component of the spring force perpendicular to the beam balances the weight at “normal” gravity so the beam is nearly horizontal. If the local gravity is different from normal, the weight will pivot up or down, changing both the elongation and the angle of action of the spring.

The unstretched length of the spring is as close to zero as possible. A mirror on the beam reflects a light beam that illuminates a scale; this provides a further optical magnification. The mechanism is inside an insulated housing that communicates with a thermostat-controlled, battery-powered electric oven. (Dobrin 388). There may be a schematic diagram of this gravimeter in the 1977McGHEST at the Grantville high school library (cp. 2002McGHEST).

As an example of a modern instrument, EB15CD cites the “Worden gravimeter” (1948), but without providing any construction details, just performance characteristics (it can measure gravity to 0.01 milligals in a few minutes, and weighs just a few pounds). A “zero-length” fused quartz spring acts in opposition to a weighted arm about a torsion fiber. The spring mechanism is inside an evacuated thermos flask, and the spring is connected to differential expansion arms, all to minimize the effect of changes in temperature and pressure. (Dobrin 391).

Fused quartz is used because it has a very low coefficient of thermal expansion; Grantville engineers will know of this property. It is made by subjecting very pure silica to fusing temperatures (3200° F). Fused quartz was originally made by fusing natural Brazilian crystals.

Another low-coefficient material that the engineers will be yearning for is Invar, a nickel-steel alloy invented in 1896. Grantville literature almost certainly contains a reference book that reveals its composition (64% iron, 36% nickel) but if there are any alloying “tricks” they will need to be rediscovered. Local German supplies of nickel are probably adequate to meet the demand for making Invar for precision instruments.

In the seventeenth century, wood was used in low-coefficient applications (like the pendulums of a pendulum clock). Once borosilicate glass is available, it too will be an option.

Pendulum Gravimeters. If the bob of a pendulum is drawn away from under its pivot point, and released, the force of gravity will cause it to swing back and forth. Friction and aerodynamic drag gradually reduce the amplitude (size) of the swings and bring it to rest.

If we imagine an ideal simple pendulum (no forces other than gravity, the bob is a point mass, the cord is massless), then for small amplitudes (angles of swing) the period approximates 2*pi*sqrt(length / local gravitational acceleration). A real pendulum’s behavior is close enough to the ideal so that this relationship was discovered experimentally by Galileo.

The relationship meant that if gravity and length were constant, the period would also be constant, and you would have a means of measuring time. Pendulum clocks were conceived of by Galileo in 1637, and independently invented (and actually built) by Christiaan Huygens in 1656. In 1673 he published a treatise on the pendulum that established that the period was affected by amplitude and that, for timekeeping accuracy, the amplitude had to be kept small. He also calculated the behavior of an ideal compound pendulum (we still ignore friction and air resistance, but the pendulum is a swinging rigid body), which is a better model of a real pendulum.

In 1666, Robert Hooke suggested that a pendulum clock could be used to measure the force of gravity. An ideal compound pendulum has a period which is the same as that of a simple pendulum with a length equal to the distance from the pivot point to the “center of oscillation.” The center of oscillation is not readily determinable by calculation or observation, because its location is dependent on how the mass is distributed along the arm of the pendulum. Hence, in ordinary use, a pendulum is a relative gravimeter. The ratio of its periods at two different locations is the reciprocal of the ratio of the square roots of the local gravities. Initially, the standard surveying pendulum was the meter-long “seconds pendulum,” one taking a second per swing (so period of two seconds) at “standard gravity.” It was replaced around 1880 by the “half-second pendulum,” only one-quarter the length.

Grantville literature (CRC) has a table of “acceleration due to gravity and length of the seconds pendulum,” including the “free air correction for altitude.”

In 1672, Jean Richer discovered that a pendulum clock was 2.5 minutes/day slower in Cayenne, French Guiana than it was in Paris. He thus had detected the difference in gravitational force between the latitudes of Cayenne (4° 55′ N) and Paris (48° 50′ N); about 2.9 Gals.

It’s amazing to me that so many books refer to this discovery without asking the following question: how did Richer tell that there was a difference in the pendulum’s speed? After all, if the best clocks were pendulum clocks, any such clock brought to Cayenne would suffer the same slowdown. It’s necessary to have a clock that tells time (at least over the short term) at least as accurately as a pendulum clock, but which works on a different principle so that it’s unaffected by gravity.

We know that Richer made telescopic observations, and most likely his reference “clock” was an astronomical one; he counted the number of pendulum swings in a solar day (noon to noon), or in a sidereal day (star returns to same position in sky), or perhaps between two particular orientations of the moons of Jupiter. (Matthews 145).

To measure time by an “astronomical clock,” you need to at least account for the effect of the tilt of the earth's axis and the ellipticity of its orbit around the sun (UT0, mean solar time). Desirably, you also correct for the wobble of the earth's axis (UT1) and its annual and semi-annual variations (UT2). There are unpredictable irregularities in the spin rate that give rise to a time prediction error of about 60 milliseconds/year. (Allan). The preferred astronomical clock, by the way, is a photographic zenith tube, a telescope that photographs stars that pass directly overhead and records the time of transit. (Popular Mechanics, January 1948 p. 138).

For accuracy in measuring gravity (or time), there are a few confounding factors one must worry about: Is the rod stretched by the weight of the bob? Does it expand or contract with changes in temperature? Does it absorb moisture? Is it slowed down by friction at the pivot point or by air resistance? Is the density of air it’s passing through changed by changes in atmospheric temperature or pressure?

In 1818, Kater invented the first reversible pendulum. This took advantage of Huygens’ theorem that a pendulum has the same period when hung from its center of oscillation as from its pivot. EB11/Henry Kater refers to this “property of reciprocity,” but doesn’t provide any construction details. Neither does EB15CD.

Kater’s pendulum was a brass bar that could be pivoted around either of two knife blades. These were a fixed distance apart, measured initially with a microscope. There was a screw-driven moveable weight on the bar, and its position was adjusted until the periods of oscillation from the two pivot points were equal. He measured the period with the same precision clock used in the adjustment phase, calculated the local gravity, and applied various corrections.

By Kater's day, of course, high precision mechanical clocks were available; several decades earlier, Harrison had built a marine chronometer accurate to one second per day. That's good enough, by my calculations, for measuring gravity with a seconds pendulum to within about 20 milligals. Good enough for studying the shape of the Earth; not good enough for finding salt domes.

In general, with a pendulum, to achieve a sensitivity for measuring gravity of 1 part in X, we need to time the period to 0.5 parts in X. (Morrison 2.5). Thus, for 1 milliGal accuracy (1 ppm), we need to time the seconds pendulum to 0.5 ppm, or 0.04 seconds/day. And pendulums have been made with an absolute accuracy of 0.1 milligals.
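
A quick Python check of those figures, using nothing beyond the period formula: since g = 4*pi^2*L/T^2, a fractional timing error produces twice that fractional error in gravity.

g0_mgal = 980000.0          # average surface gravity, expressed in milligals
seconds_per_day = 86400.0

def gravity_error_mgal(clock_error_s_per_day):
    # dg/g = 2 * dT/T: the timing error is doubled in the gravity result
    return 2 * (clock_error_s_per_day / seconds_per_day) * g0_mgal

def timing_needed_s_per_day(target_mgal):
    # Inverting the same relation: dT/T = 0.5 * dg/g
    return 0.5 * (target_mgal / g0_mgal) * seconds_per_day

print(gravity_error_mgal(1.0))        # Harrison-class chronometer (1 s/day): ~23 mGal
print(timing_needed_s_per_day(1.0))   # 1 mGal accuracy: ~0.04 s/day of timing accuracy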

The Kater pendulum could also be used to make relative gravity measurements, by just taking into account the change in the period when it was moved to a new location (since the length was constant).

In 1835, the mathematician Friedrich Bessel showed that as long as the two periods were close enough, the moveable weight wasn’t needed, and that if the pendulum was symmetrical but weighted at one end, air drag errors would cancel out. The Repsold pendulum (1864) was based on these discoveries.

Von Sterneck (1887) solved the drag problem in another way, by placing the pendulum in a temperature-controlled vacuum. He also improved the readout. A similar device was constructed by Mendenhall (1890); it was, even in the 1920s, the world’s best clock. (Wikipedia/Pendulum).

Unfortunately, neither EB11 nor EB15CD provide useful information about these pendulum designs. EB15CD/Clock does however briefly discuss the Shortt pendulum clock, which it calls the “most accurate mechanical timekeeper.”

Another source of error was the effect of the swing on the pendulum stand. When the problem was recognized, it was first addressed by simply measuring the sway and mathematically correcting for it. Later, devices were built in which two pendulums swung out of phase to cancel out the effect.

Wikipedia says that Kater’s accuracy was about 7 milliGals. However, EB15CD says that the reversible pendulum-based absolute gravity measurement made in Potsdam, 1906, which was the reference point for all local gravity data up until 1968, was in error by at least 15 milligals. Dobrin says that the later reversible pendulums measured gravity to 1 ppm, which is about 1 milliGal (Dobrin2d 204).

In the 1632verse, we can temporarily sidestep many of the historical problems with the use of pendulum gravimeters because we have access to twentieth century timekeeping technology. First, we have a limited number of highly accurate timepieces that are based on quartz crystals. The surveyor can borrow one, or perhaps can listen to time signals provided by a radio station equipped with such a clock.

Quartz crystal clocks use a quartz crystal as the oscillator. If the oscillation frequency isn't quite right (the standard one is 32,768 Hertz), then this will create a systematic error. If you have a more accurate clock to compare it with, you can measure the frequency error and therefore the appropriate daily correction. For example, a comparison of three cheap ($6 apiece in 1997) LCD stopwatches with the Atomic Clock in Boulder, Colorado found frequency inaccuracies of 0.48 to 1.17 seconds/day—about 10 ppm. If you subtract out this constant rate error, what you are left with are the errors attributable to frequency instabilities; the most important source of instability is temperature variation (but pressure, humidity, shock and vibration may also play a role). Over a period of 145 days, the residual time error varied slowly between -0.7 and 0.4 seconds, but the average day-to-day change was perhaps 50 milliseconds. (Allan) It is possible to devise methods of calibrating a clock to take into account the more common frequency instabilities, and reduce the (adjusted) daily clock error from one second to a tenth of a second or even less. And of course, we can start with one of the better up-time clocks. (An observatory grade quartz crystal clock has a frequency stability such that the maximum error is 0.1 ppb—about one second every 10 years (Anderson Institute). The best clock in Grantville is probably somewhere in between cheap department store and observatory grade.)
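
As a sketch of that calibration idea, here is how one might fit and remove the constant rate error by comparing the watch against a better reference over several days; the offset figures below are invented for illustration, not taken from Allan.

# Daily offsets (seconds) of a quartz watch against a reference clock (invented numbers).
days = [0, 1, 2, 3, 4, 5]
offsets = [0.00, 0.49, 0.97, 1.46, 1.93, 2.44]

n = len(days)
mean_d = sum(days) / n
mean_o = sum(offsets) / n
# Least-squares slope = the constant rate error, in seconds per day:
rate = sum((d - mean_d) * (o - mean_o) for d, o in zip(days, offsets)) \
       / sum((d - mean_d) ** 2 for d in days)
# What remains after removing the rate error is the (much smaller) instability:
residuals = [round(o - (mean_o + rate * (d - mean_d)), 3) for d, o in zip(days, offsets)]

print(rate)        # ~0.49 s/day, the correction to apply to future readings
print(residuals)   # leftover errors of a few hundredths of a second, rather than half a second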

We will need to worry about temperature variation more than surveyors in the late twentieth century, because temperature-controlled buildings will be quite rare outside Grantville. The effect of temperature variation is about 0.1 seconds/day per degree Celsius.

Eventually, of course, all the wristwatch batteries will die, and some surveyors will need to travel beyond radio range. At that point, we will need to be able to make new watches on either spring or quartz crystal principles.

Ballistic Gravimeter. Galileo was able to deduce that the distance fallen is proportional to the square of the time of fall by having the “falling” object cross frets as it rolled down a ramp; Galileo was an accomplished amateur musician and he repositioned the frets until he could hear that the interval between the audible “bumps” was “on beat.” Galileo’s measurement accuracy has been estimated as 1/64th second. (Coelho 12).

Since the time and distance of fall are both observable, a ballistic gravimeter provides absolute measurements. (The object may be just dropped, or it can be tossed up and both its up and down movement observed.)

The principle underlying the ballistic gravimeter is simple, but the time must be measured and the distance known with great accuracy. The time of free fall is proportional to the square root of the drop distance, so if you make the device more compact, you need to measure time more precisely. Non-gravitational forces must be rigorously excluded or corrected. Air resistance can be reduced by evacuating the “drop tube,” but at low pressure, electric charges build up on the object that result in electrostatic forces affecting its motion.
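
To see just how demanding the timing is, here is a small Python sketch; the one-meter drop and the one-milligal target are illustrative assumptions, not figures from EB15CD.

import math

g = 9.81          # m/s^2
drop = 1.0        # meters of free fall (assumed)

t = math.sqrt(2 * drop / g)       # d = g*t^2/2, so t = sqrt(2d/g): ~0.45 s

# Since g = 2*d/t^2, the relative error is dg/g = dd/d + 2*dt/t.
target = 1e-6                     # aim for ~1 milligal, i.e. one part per million
dt = 0.5 * target * t             # timing budget if the distance is known perfectly
print(t, dt)                      # ~0.45 s of fall; timing good to ~2e-7 s (sub-microsecond)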

Grantville literature (EB15CD) briefly describes three ballistic gravimeter designs. Volet (1952) dropped a graduated rule in an evacuated chamber and photographed its movement. Cook (1967) replaced the rule with a sphere; EB15CD didn’t say so, but this was the first gravimeter with a toss-up mechanism, eliminating some systematic errors. (Dehlinger 13)

The first portable (but not field!) system was that of Hammond and Faller (1967); the falling object was a corner cube reflector, laser light was split so that part was reflected by the falling reflector and the rest by an identical fixed cube. The two reflections created an interference pattern that could be measured by a photomultiplier tube and recorded. Accuracy was about 20 microgals.

Shipborne and Airborne Gravimetry

The gravimeter cannot tell the difference between the vertical acceleration caused by the earth's gravity and that caused by the heave of a ship or by the diving or climbing of an aircraft.

For surveys of shallow water, a gravimeter and its operator can be lowered to the bottom in a diving bell. If waters are deeper, the meter is lowered to the bottom from a boom hanging over the side of the ship. In either of these cases, the measurement is taken while the gravimeter is stationary, so the main problem is the motion of the sea floor as a result of water waves.

Gravity can be monitored continuously from a shipborne meter, but then one needs to correct for the Eotvos effect—when the ship is traveling east or west, it works with or against the centrifugal acceleration felt by the meter as a result of the rotation of the earth. To do so, you must know the ship's latitude and speed; at the equator, a one knot error in speed results in a 7.5 milliGal error in gravity. (Dobrin 412). EB15CD doesn’t provide the formula, but it notes that in middle latitudes, the effect is 5 milligals for an east-west speed of just 1.6 km/hour.
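
The standard Eotvos formula (which EB15CD omits) is 2*omega*v*cos(latitude)*sin(heading) plus a small v^2/R term, with omega the earth's rotation rate and R its radius; this Python sketch checks the two figures just quoted.

import math

OMEGA = 7.292e-5       # earth's rotation rate, radians/second
R_EARTH = 6.371e6      # mean radius, meters

def eotvos_mgal(speed_m_s, lat_deg, heading_deg):
    # Heading measured from north, so 90 degrees = due east; 1 mGal = 1e-5 m/s^2
    accel = (2 * OMEGA * speed_m_s * math.cos(math.radians(lat_deg))
             * math.sin(math.radians(heading_deg))
             + speed_m_s ** 2 / R_EARTH)
    return accel / 1e-5

print(eotvos_mgal(0.514, 0, 90))        # one knot due east at the equator: ~7.5 mGal
print(eotvos_mgal(1.6 / 3.6, 45, 90))   # 1.6 km/h due east at 45 degrees: ~4.6 mGal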

Airborne gravimetric surveys are problematic because the non-gravitational accelerations are likely to be 100-1000 times the strength of the gravitational anomalies we are trying to detect. An airship is probably a better platform for gravimetry than an airplane. That said, we would probably need to have a very accurate altimeter (radar or laser-based), so we could separate the effect of the airplane's vertical motion from that of gravity. (Hannah).

Grantville literature says that even on a ship, the vertical acceleration can be thousands of milligals, but that it’s possible to average out the effect of wave action. The average vertical acceleration over a time interval is the change in vertical velocity from beginning to end, divided by the time interval. If you make the time interval long enough (typically several minutes), the average swell-driven acceleration becomes negligible. (EB15CD/Earth). In theory, that works with an aircraft, too, but the plane will move quite a distance over that time interval so you aren’t really sensing local gravity over a single point. Which brings us back to my airship proposal, since an airship can hover by opposing the wind with its engines.

Gravimetry in the 1632verse

Because the gravitational anomalies are so small relative to normal gravity, much more sophisticated instrumentation and methodology are needed to detect them than was the case for detection of magnetic anomalies. Hence, gravimetry is likely to lag behind magnetometry in the 1632verse.

I think that both pendulum and spring based gravimeters will be developed (and I think the LaCoste-Romberg design has the edge). However, the first survey use is likely to come in the 1635-39 period. I would imagine that they would first confirm that the instruments are at least sensitive enough to detect the salt dome at Wietze. If so, then they will probably be used for exploration of the half-dozen areas in Germany that, according to up-time atlases, contain oil fields. Eventually, the methods will be used further afield. The ideal place to use them, of course, is the American Gulf Coast.

Gradiometry

Gradiometry measures the rate of change (slope) of the gravitational or magnetic field over a short vertical or horizontal distance, rather than the value (absolute, or relative to a distant reference point) at a single location.

Gradiometry eliminates the effect of variations over the course of the day, and also favors anomalies with steep gradients over those with shallow ones. To obtain a gradiometric measurement, you need a pair of meters (gravitic or magnetic) that are a known and fixed distance apart. (Dobrin 516).

Surveying

Surveying may be done one-dimensionally (along a profile) or two-dimensionally, along a grid.

Both magnetic and gravimetric surveys provide, at each station, a single value that represents the aggregate magnetic or gravitational contribution of all sensible features, shallow or deep, small or large. Because a small, shallow feature can create an anomaly bigger than a large, deep one, the sampling interval (distance between stations) is very important. If it is too wide, “aliasing” occurs and the deep, big feature is masked. And decreasing the sampling interval of course increases the cost of the survey. (Morrison 3.6).

Aliasing is greatly reduced if the survey is conducted at a height off the ground. There is a greater proportionate increase in the distance to the shallow features than to the deep ones, and this improves the signal-to-noise ratio.

A large survey takes longer than a small one, and it becomes more important to account for temporal variations. This is usually done by periodically looping back to the base station to take a repeat measurement there, or simply by keeping one sensor at the base station and recording the variation over time.
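
Here is a minimal Python sketch of the loop-back correction; the station readings are invented. The apparent drift observed between the two visits to the base station (instrument drift plus tidal and other temporal changes) is distributed linearly in time over the intervening stations.

# (station, time in hours, raw relative-gravimeter reading) -- invented numbers
loop = [("base", 0.0, 1052.30), ("A", 1.0, 1049.80), ("B", 2.5, 1061.00),
        ("C", 4.0, 1055.10), ("base", 5.0, 1053.10)]

t0, g0 = loop[0][1], loop[0][2]
t1, g1 = loop[-1][1], loop[-1][2]
drift_rate = (g1 - g0) / (t1 - t0)     # apparent drift, reading units per hour

for name, t, g in loop:
    corrected = g - drift_rate * (t - t0)
    print(name, round(corrected, 2))   # both base readings now agree; stations are comparable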

For aerial magnetic surveys, EB15CD recommends that the main grid lines be 2-4 kilometers apart, at an altitude of 400 meters, when searching for petroleum, and 0.5-2 kilometers apart, and 200 meters high, when seeking a mineral deposit. It adds that ground stations may be as little as 50 meters apart.

For land-based gravimetric surveys, EB15CD suggests taking readings every kilometer.

That's probably good enough to detect the existence of a salt dome, but won't provide a lot of information about it. The dimensions of a salt dome are such that a mapping survey would have stations 100-500 meters apart. Ore bodies are smaller, so spacing would be more like 25-100 meters.

Artificial Earthquakes: Seismometry

Seismometry potentially provides more information about subsurface structure than any other geophysical technique. The information it provides also requires the most work to interpret.

In essence, you give the earth a big whack (“shot”). This sets it vibrating at various frequencies. The generated seismic “body” (underground) waves are of two kinds, P (primary, compressional, thus akin to sound waves) and S (secondary, shear). Up through the Fifties, at least, only the P waves were used for prospecting. (Dobrin2d, 21). The shot will also create surface waves but those count as “noise.”

The seismic waves radiate outward underground. They travel at a speed that is defined by the nature (density, rigidity, incompressibility) of the rock they are traveling through. When they strike a boundary—strictly speaking, a place where the “acoustic impedance” (the product of density and wave velocity) for waves of that frequency changes—the wave is partially reflected and partially transmitted. If it strikes the boundary obliquely, then the transmitted wave is refracted (bent). Of course, as the wave moves, its energy is gradually absorbed (and turned into heat), which is why you don’t feel an earthquake on the other side of the world. And the intensity of the wave (energy per unit surface) decreases as the result of the spreading out of the wave.

Seismometers and Geophones

For this to be useful, we need to be able to detect the seismic wave when it is returned to the surface. Both seismometers and geophones detect seismic waves. However, seismometers are usually larger and more expensive. A modern geophone might cost $50 and a modern seismometer $10,000–20,000. The seismometer is designed to detect minute earth movements caused by earthquakes hundreds of miles away. That means that it needs to be able to pick up very low frequency waves, because those are the ones that travel the furthest. Geophones typically have poor low frequency response. (Barzilai)

A seismometer may measure the earth’s displacement, velocity or acceleration as a result of a seismic wave, and it may measure them along the vertical axis or either horizontal axis.

All seismometers (and geophones) have two basic elements, one which moves with the earth, and the other of which is inertial (tends to remain in place).

The first seismometer of practical importance was Milne’s pendulum seismograph (1880), see EB11/Seismometer and EB15CD. The pendulum is the inertial element, and it will remain in place (the ground shaking beside it) if its natural period is substantially shorter or longer than the period of the seismic waves it is sensing.

If the pendulum does receive a vibration of a frequency close to its natural one, it will dance about madly. To avoid this, a damper (air, liquid or electromagnetic) is used.

In a horizontal pendulum seismograph, the pendulum is a weighted arm that can swing back-and-forth around a vertical axis. It will sense horizontal earth movements perpendicular to the neutral position of the arm. A “normal” pendulum, swinging in a vertical plane, can sense vertical earth movements.

To convert a seismometer into a seismograph, you need a way of permanently recording the wiggles. In Milne’s device, a mirror was attached to the pendulum arm, and it reflected light (“optical lever”) through a double slit (one fixed, one moving with the arm) onto light-sensitive “bromide” paper. It’s of course possible to instead have the arm connected (probably by levers that magnify the movement) to a stylus, but then there’s friction between the pen and the paper and the instrument has to be heavier to compensate.

There are advantages to connecting an electric coil to the pendulum. If the pendulum is exposed to a magnetic field, then when it moves, an electric current is induced in the coil and can be sensed by a galvanometer. The current can be amplified by standard electronic circuitry and this allows construction of a seismograph that can detect micro-earthquakes.

A strain seismograph (1935) has two piers, with a horizontal rod attached to one pier and pointing at the other. It measures the change in the separation of the free end of the rod from the other pier. Since the two piers are typically 20 meters apart, this is clearly not a field instrument.

The Amateur Scientist column of Scientific American covered seismographs in June 1953, July 1957, August 1970, November 1973, September 1975, July and November 1979.

Okay, now that we have talked about seismographs (which could be used for seismic surveying if we could make them cheaply enough) we turn to geophones. The land geophone works electromagnetically; it is a combination of a wire coil and a magnet. One is fixed to the housing, and thereby to the earth’s surface, so it moves when the earth shakes. The other is connected to the housing by a suspension spring. The relative motion produces an electrical signal; the motion is damped so that the response is “flat” over the frequencies of interest. Pressure phones (hydrophones) are used for seaborne prospecting.

There’s a bit of description of geophones in Grantville literature. EB15CD provides the key point that it “generates a voltage when a seismic wave produces relative motion of a wire coil in the field of a magnet,” and 2002McGHEST has a schematic drawing.

Seismic Imaging

Anyway, we position “geophones” on the surface, and they listen for the reflected (or refracted) waves. As the seismic wave travels downward from the source of the shot, some of the energy will be reflected at the first boundary and return to the surface, and some will be transmitted into the layer below. This first transmitted wave will continue on to the next boundary, where some will be reflected and some transmitted. And so on. Hence, the geophones will hear, successively, the “echo” from the first boundary, then the second, then the third, and so on, until the wave is so attenuated that it can’t be heard. Thus, seismology can potentially differentiate multiple geological structures, while magnetometry and gravimetry simply reveal the net anomaly created by all the structures in the vicinity. A modern reflection survey can obtain reflections from depths as great as 20,000 feet, and determine structure with a precision of 10-20 feet. (Dobrin 4)

The reflected waves follow the law that the angle of incidence equals the angle of reflection. Hence, singly reflected waves will reach the surface. A refracted wave can reach the surface only under special circumstances. If a wave strikes a boundary between an upper, low-speed layer and a lower, high-speed layer at the “critical” angle, the angle of refraction is such that the refracted wave travels along the boundary. If that happens, then as it moves along, it sets up secondary waves which are emitted into the upper layer and, if they reach the surface, can be heard.

Refraction seismology is used mostly to “image” shallow structures. That’s because the geometry requires that the distance between the “shot” and the geophones be four or five times the depth to the boundary where the critical refraction occurred. And of course the further away the geophones are, the more powerful the shot (or the more sensitive the geophone) must be in order to be heard. However, refraction shooting is cheaper and faster than reflection methods. (5)
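
To see why the offsets have to be so large, here is a minimal sketch using the standard two-layer formulas for the critical angle and for the “crossover” distance beyond which the refracted (head) wave starts arriving before the direct wave; the velocities and depth are invented:

import math

# Sketch: two-layer refraction geometry (hypothetical numbers).
v1, v2 = 6000.0, 12000.0   # ft/s; upper (slow) and lower (fast) layers
h = 1000.0                 # ft; depth to the boundary

# The refracted wave travels along the boundary when sin(critical) = v1/v2.
critical = math.degrees(math.asin(v1 / v2))

# Crossover distance for a single horizontal refractor; in practice the
# spread must extend well beyond it to record a usable refraction segment.
x_cross = 2 * h * math.sqrt((v2 + v1) / (v2 - v1))

print(f"critical angle   = {critical:.1f} degrees")
print(f"crossover offset = {x_cross:.0f} ft ({x_cross / h:.1f} times the depth)")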

Refraction shooting was first used in 1923 (in Mexico), and was locating salt domes in the American Gulf Coast by 1924. Reflection shooting began in Oklahoma in 1921, and the technique was successfully used to find the Maud Field in 1927. (11ff).

****

In the old timeline, seismic prospecting went through three phases. First, the prospectors had geophones that recorded their traces on paper. Usually, there was one shothole and one geophone per trace. By the later Thirties, the single geophone was replaced by a geophone group.

Next, from the early Fifties, they used analog recording onto magnetic tape. Either the signal was amplified (and perhaps filtered) and recorded directly, or the geophone signal was used to modulate a carrier signal (similar to the operation of an FM radio station) and so recorded. The tape was on a drum and would be drawn past a panel of heads, one for each trace. Analog recording made possible compositing (combination of signals) (149).

Finally, beginning in 1963, the amplified analog signals were digitized and recorded on magnetic tape, and the digital signals in turn were processed by digital computers and then converted back to analog for display.

By the time of RoF, analog recording on magnetic tape was in decline, but there are certainly some consumer grade tape recorders in up-timer attics and basements. There are of course a large number of digital computers in Grantville, but there are an even larger number of demands for those computers. It remains to be seen how well seismic prospecting can compete for computer time. And it’s beyond the scope of this article (or this author’s expertise) to venture a guess as to how soon we could build new analog recorders or digital computers (electronic or fluidic).

Even if we had ready access to the computers, physicists need to figure out the relevant formulae and programmers need to write the programs to implement them so that we can process, interpret and display the seismic information. It will take years, and I am not sure that much real progress can be made until we have had some practical experience with “old school” seismic surveying so we know what’s important and what isn’t.

****

Originally, the source of the seismic wave was a dynamite explosion in a borehole from thirty to several hundred feet deep, and as late as 1976, explosives were used in over 60% of all land-based seismic work. Explosions provide strong signals, and the explosives are easy to transport across rough terrain. However, there is some hazard in working with dynamite, the noise of the explosion is a nuisance in populated areas, and it is costly to drill boreholes.

An alternative method—but one not likely to be thought of early on in the 1632verse—is to horizontally bury a detonating cord, perhaps several hundred feet long, a few feet below the surface. This has the advantage of directing the explosive energy downward rather than horizontally.

Non-explosive sources simply do not provide enough energy to yield a detectable reflection unless you have some means, whether it be analog magnetic tape recording, or digital recording, to synchronize and combine signals from multiple geophones. (Dobrin 90). The first (1953) non-explosive source used in oil exploration was a special truck with a crane hoist; it held a three ton iron weight nine feet above the ground, and dropped it repeatedly. (93)

****

There are several different shot-detector geometries that were developed over the years. The only one mentioned in Grantville literature is “fan-shooting,” described somewhat cryptically as measuring travel times “along different azimuths from a source.” (EB15CD).

Fan-shooting was a refraction technique. Essentially, the seismic waves fan out from a single source, and are detected by geophones arranged along a circular arc about five to ten miles away. What you are looking for is a “lead,” abnormally early arrival times at some of the detectors, implying the presence of a high-speed material such as salt. If you find a “lead,” you would then shoot a second fan, at roughly right angles to the first one.
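
A minimal sketch of how a “lead” might be recognized, assuming we simply compare each observed arrival time against the time expected at an assumed background velocity (all of the numbers below are invented):

# Sketch: spotting a "lead" in a fan shoot.
background_velocity = 9000.0   # ft/s, assumed for the normal sediments

# (distance from shot in feet, observed arrival time in seconds) per geophone
fan = [
    (30000, 3.35),
    (30000, 3.33),
    (30000, 3.05),   # early arrival -- a fast body (salt?) on this azimuth
    (30000, 3.34),
]

for i, (distance, observed) in enumerate(fan):
    expected = distance / background_velocity
    lead = expected - observed
    flag = "  <-- lead" if lead > 0.1 else ""   # 0.1 s cutoff is arbitrary
    print(f"geophone {i}: expected {expected:.2f} s, observed {observed:.2f} s{flag}")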

A more popular refraction method was profile shooting. The first shot is in line with the detectors, all of which lie down-line of it. You then move the detectors farther down the line and take a new shot, down-line from the first. There were other arrangements, too. (Dobrin2d, 88ff). In reflection work, the most common layout was the “split-spread,” with the detectors in line with the shot, and an equal number on either side. (115ff).

Seismic Signal Processing and Interpretation

One of the problems with seismic work is “noise”: high amplitude, low velocity, low frequency surface waves (“ground roll”); scattering from near-surface irregularities; multiple reflections; etc.

Ground roll can be suppressed by replacing individual geophones with geophone groups (3-6 phones per group initially; 100 or more under extreme “ground roll” conditions) having a geometry that cancels out the surface waves, or by use of low-cut filters. (86, 100). It can also be advantageous to have multiple shotholes per trace, again in a special pattern.

Scattering can be suppressed by adding together signals from multiple shots and receivers for a single trace (88). (Use of multiple shots per station is more economical with a drop truck than with dynamite.) Multiple reflection can be addressed by special shooting patterns or by frequency filtering. The noise reduction methods will need to be relearned the hard way in the new universe.

****

It’s important to appreciate exactly what information is produced by a seismic survey. The geophones pick up the time of arrival of the reflected or refracted seismic waves. Each strong reflection or refraction corresponds to some subsurface boundary. However, you don’t know, initially, the depth of that boundary.

To convert times to depths, you need to know the velocity of the seismic waves in each of the layers through which they traveled, coming and going. This is not something that the prospectors can just look up! Grantville literature (CRC) provides the average velocity of seismic waves as a function of depth, but without differentiating by the nature of the rock. Even if our characters had access to specialist books on geophysical prospecting, the range in velocity for a single rock type is enormous. Limestone can be 5,600-20,000 feet/second, and sandstone 4,600-14,200 (Dobrin 50, Dobrin2d 42).

So, there are two basic methods. One is that you hang a geophone at different depths inside a deep borehole and record the times it takes for a seismic wave to reach it from a nearby explosion. This is called a “velocity log.” (You say, “but I thought the whole point of doing seismic prospecting was so I didn’t have to drill a hole until I knew that there was a possible oil trap underneath my feet.” Sorry.)

The other approach requires that you have multiple geophones (or geophone groups) per shot and plot the square of the travel time against the square of the shot-receiver distance; the slope should be inversely proportional to the square of the velocity. (229). It is also possible to use computers to analyze this data.
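
If computer time can be begged or borrowed, the fit itself is simple. The sketch below assumes the single-flat-reflector relation t² = t0² + x²/v²; the offsets and times were generated from an assumed 10,000 ft/s velocity and 2,000-foot depth so the answer can be checked:

# Sketch: the "x-squared, t-squared" velocity analysis.
offsets = [1000.0, 2000.0, 3000.0, 4000.0, 5000.0]    # ft, shot-receiver
times   = [0.4123, 0.4472, 0.5000, 0.5657, 0.6403]    # s, two-way reflection

xs = [x * x for x in offsets]
ts = [t * t for t in times]

# Ordinary least-squares slope of t^2 against x^2; the slope is 1 / v^2.
n = len(xs)
mean_x = sum(xs) / n
mean_t = sum(ts) / n
slope = sum((x - mean_x) * (t - mean_t) for x, t in zip(xs, ts)) / \
        sum((x - mean_x) ** 2 for x in xs)

velocity = slope ** -0.5
t0 = (mean_t - slope * mean_x) ** 0.5     # zero-offset two-way time
print(f"velocity ~ {velocity:.0f} ft/s, depth ~ {velocity * t0 / 2:.0f} ft")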

****

The wave arrival times also need to be corrected before performing this conversion; the two standard corrections are for elevation of the “shot,” and for the thickness of the (low seismic velocity) “weathered” layer at the surface.
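
A simplified sketch of those two corrections follows; real practice has refinements I am glossing over, and every number below is invented:

# Sketch: static corrections applied to a raw two-way reflection time.
raw_time        = 1.250    # s, observed two-way time
station_elev    = 950.0    # ft above sea level
datum_elev      = 800.0    # ft, reference datum chosen for the survey
weathered_thick = 40.0     # ft, low-velocity "weathered" surface layer
v_weathered     = 2000.0   # ft/s
v_subweathered  = 8000.0   # ft/s, rock just below the weathered layer

# Replace the slow trip through the weathered layer with the same distance
# at sub-weathering velocity ...
weathering_correction = 2 * weathered_thick * (1 / v_weathered - 1 / v_subweathered)
# ... then strip out the column between the surface and the datum, now
# treated as sub-weathered rock.
elevation_correction = 2 * (station_elev - datum_elev) / v_subweathered

corrected = raw_time - weathering_correction - elevation_correction
print(f"corrected two-way time: {corrected:.3f} s")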

The apparent horizontal positions of the reflections also need to be “migrated” to their true positions, and to do that we need to compare multiple traces so we have several different reflection paths corresponding to the same “reflector.”

While the people in Grantville will certainly be aware that they are obtaining “arrival time” data, the appropriate corrections and conversion methods will have to be worked out the hard way.

The fact that we are measuring times also means that we have to worry about the accuracy with which we can tell time. If you are studying what’s 1000′ deep, and you have a timing accuracy of 0.001 second, you have a depth accuracy of perhaps 3-5 feet. But if the timing accuracy is just 0.1 seconds, the depth accuracy is 300-500 feet, which is completely worthless.
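
The arithmetic behind those numbers: depth is roughly velocity times two-way time divided by two, so a timing error dt smears the depth by about v·dt/2. A quick sketch, with the two velocities bracketing typical sedimentary rocks:

# Sketch: depth error produced by a given timing error.
for v in (6000.0, 10000.0):          # ft/s
    for dt in (0.001, 0.1):          # s, timing accuracy
        print(f"v = {v:6.0f} ft/s, dt = {dt:5.3f} s -> depth error ~ {v * dt / 2:5.0f} ft")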

When you consider that the “cutting edge” clocks of even the late seventeenth century tended to lose or gain one to ten minutes a day, it’s clear that we will need to improve timepiece design considerably in order to conduct a useable seismic survey. Of course, that was true for gravity surveys using pendulum gravimeters, too.

Seismometry in the 1632verse

Grantville is able to make dynamite (Evans, “Thunder in the Mountains,” Grantville Gazette 12), so we don’t have any problem with creating seismic waves. And the geophones seem to be less complicated than, say, an electric motor. The real problem is going to be learning to process and interpret the results. And we have to start somewhere.

The obvious first target is the RoF itself. Ferrara hypothesized that the RoF event moved an entire sphere, half air, half rock, three miles in radius. “I’ll be surprised if we don’t discover that we have the same radius beneath our feet. Three miles down at the center—maybe more. Way deeper than any gas or oil beds we’ll be tapping into. Or coal seams.” Flint, 1632, Chapter 8.

So, a seismic survey of the RoF area should reveal a hemispherical discontinuity between the West Virginia intrusion and the surrounding Thuringian rock. If the West Virginian hemisphere were homogeneous, the raw seismic record from a line of detectors would reveal spikes in a circular arrangement. However, it’s certainly not homogeneous, so the circle will be distorted (more so directly beneath the center of the RoF than near the periphery, because more layers will be traversed).

Our first exercise in seismic interpretation will be making a travel time-depth calculation. However, I figure that we will have a lot of information to help us. Even if the coal mine office doesn’t have books on prospecting, it’s likely to have a geological column and perhaps even seismic survey data for the coal mine and its vicinity. We can also put detectors in the coal mines (and in boreholes of abandoned gas and water wells) to make velocity measurements. And that will give us our first correlations of seismic velocity with rock depth and type.

The seismic survey of the RoF isn’t just a training exercise; it has practical value. Coal has a much lower density and seismic velocity than other sedimentary rocks, leading to an acoustic impedance change of 35-50%. So coal beds give a strong reflection. Perhaps we’ll find some more formerly West Virginia coal.

Of course, we have to get density and velocity data for the rocks of Germany. Logically, we should take advantage of every location where we can cheaply put a sensor underground: in mines, or in the dry holes drilled at Wietze. For the latter, we have the advantage that we can compare the data with the coring of the rocks encountered as they drilled down.

Eventually, we’ll have enough data that we can make a reasonable interpretation of a raw seismogram, and we will want to conduct a survey with more immediate benefits. It will be tempting to conduct a seismic survey of Wietze, but I fear that the result will be dismaying; Wietze features a heavily fractured salt dome and will produce a very complex seismogram.

However, there are other salt domes in Germany, and for that matter on the American Gulf Coast. One of these will be the first commercial success for seismic surveying in the new universe.

Time-wise, I think the seismic study of the RoF area will occur in 1633-35. However, we know so little about European geology that the extension of the work outside the RoF will probably progress very slowly. It might not be until the 1640s that we could usefully interpret a seismic survey of a German site.

Caged Lightning: Electrical and Electromagnetic Methods

EB15 warns that “Electrical methods generally do not penetrate far into the Earth, and so do not yield much information about its deeper parts. They do, however, provide a valuable tool of exploring for many metal ores.”

Electrical prospecting methods sense the electrical properties of rocks. First, there is resistivity: the ability of a unit cube of rock to impede the passage of an electric current. Resistivity is a function of the mineral content of the rock; galena, pyrite, magnetite and graphite are among the minerals with unusually low resistivity. As noted by EB15, rocks with high clay content tend to have low resistivity. Also, the more porous the rock, the more water it can hold, and that water (if not fresh) in turn decreases its electrical resistivity. If, instead, it holds petroleum, resistivity will not be lowered. This is the basis of the resistivity logging method used in oil well drilling; a large change in resistivity marks the location of the oil-water contact.

The detected current may be a natural telluric (“earth”) current, or artificially induced. The natural currents are produced by atmospheric disturbances, notably lightning strikes and solar wind bombardment. These currents are of a variety of frequencies, and therefore have different penetration depths (low frequency probes deeper). The telluric currents are detected by planting a pair of electrodes in the ground, connecting them with an insulated wire, and measuring the voltage difference between them. You can then calculate the electric field component (in millivolts/kilometer) along the line connecting the two electrodes.
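
The calculation is nothing fancy; a one-line sketch (the readings are invented):

# Sketch: electric field component along a telluric electrode line.
voltage_difference = 12.0   # millivolts, hypothetical reading
separation         = 0.30   # kilometers between the two electrodes

print(f"telluric field ~ {voltage_difference / separation:.0f} mV/km")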

The alternating currents in turn create electric and magnetic fields that may also be sensed. For example, an alternating magnetic field will induce an eddy current in a conductor (such as a low resistivity ore body), and this in turn will create its own magnetic field. This can be measured with a magnetometer.

You may also stick a pair of electrodes into the ground and apply an external voltage to them. A current will flow between them, on diverse “equipotential” paths. The flow paths are distorted by the presence of bodies of unusually low or high resistivity. In the equipotential methods, the flow paths are detected by moving a second pair of electrodes to various points on a grid pattern and measuring the potential difference between them. The method is usually used just for reconnaissance. In the “mise-a-la-masse” variant, a known conductive mineral body serves as one of the current electrodes and the other is placed far away. The potential electrodes are then used to plot the extent of the ore body.

In profiling methods, the electrode spacing is kept constant and the electrode array is moved along a line. This is used to locate an ore body. In sounding methods, the center of the electrode array remains in one place, but the spacing is varied. This allows estimation of the depth of the ore body.
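
As an illustration (the Grantville sources don’t name specific arrays, so the choice of a Wenner-type spread here is mine, and the readings are invented), the apparent resistivity for four equally spaced electrodes is 2π times the spacing times the measured voltage over the injected current; a sounding simply repeats the measurement at wider and wider spacings:

import math

# Sketch: apparent resistivity from a four-electrode (Wenner-type) spread.
def apparent_resistivity(spacing_m, volts, amps):
    return 2 * math.pi * spacing_m * volts / amps

# "Sounding": keep the array centered on one spot and open up the spacing.
# A change in apparent resistivity with spacing hints at a deeper layer of
# different resistivity.
readings = [(5, 0.80, 0.5), (10, 0.42, 0.5), (20, 0.35, 0.5), (40, 0.44, 0.5)]
for spacing, volts, amps in readings:
    rho = apparent_resistivity(spacing, volts, amps)
    print(f"spacing {spacing:3d} m: apparent resistivity {rho:6.1f} ohm-m")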

A time-varying current also induces a magnetic field that can be measured by hand-held metal detectors or by airborne electromagnetic sensors. In Grantville, there are no doubt some people with metal detectors of the kind used for treasure hunting; these have electrically balanced coils that become unbalanced if metal is brought nearby. Different metals have different phase responses to alternating currents, and this can be used to discriminate metals (at the cost of reduced sensitivity). Bear in mind that these metal detectors are designed to detect coins some inches below the ground, not ore deposits hundreds of feet deep.

Electrical techniques were first used in prospecting for sulfide ore bodies. What was actually measured was the voltage resulting from the induced currents, and to avoid misleading results attributable to the polarization of the electrode, it was necessary to develop a non-polarizing electrode. Barus achieved this in 1880 by filling a porous wood or unglazed clay cup with a metal sulfate solution, and he used his electrodes to trace an extension of the Comstock Lode. Schlumberger used resistivity methods to map the salt domes of Pechelbronn in 1921-26.

****

Rocks may also have electrochemical activity, that is, the ability to act as a natural (but very weak) battery. The top of a metallic sulfide ore body may be heavily oxidized, while the bottom is oxygen-deficient. If groundwater circulates between the two, it acts as an electrolyte, and ions are exchanged between the top and the bottom. EB15 says that “Graphite, magnetite, anthracite, some pyritized rocks, and other phenomena also can generate self-potentials.”

Exposing rocks to an electric field induces polarization, that is, a separation of charges, as in a capacitor. The magnitude of the response is related to the rock’s dielectric constant.

There isn’t much information in Grantville literature about detection of self-potentials or induced polarization.

A Trail of Chemical Crumbs: Geochemical Prospecting

There is very little information about geochemical prospecting in Grantville literature; the only significant source is the McGraw-Hill Encyclopedia of Science and Technology at the high school.

An ore deposit exists because geological processes concentrated desirable minerals in a particular place. The deposit doesn’t usually have a “hard” boundary; rather, there is a core in which the mineral is found in economically useful concentrations, surrounded by a “primary halo” in which the concentration declines to parts per million and then to parts per billion. Depending on the nature of the deposit, the halo can vary in width from a few centimeters to hundreds of meters.

In addition, as a result of weathering and erosion, there is a further dispersion of the minerals, resulting in a “secondary halo.” Rivers and glaciers can extend this secondary halo over a very long distance.

Geochemical prospecting essentially involves detecting an abnormally high (but still economically useless) concentration of a chemical and then following the gradient “upstream” to the source deposit.
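
In practice that is little more than bookkeeping; a toy sketch (the stream layout, the sample values, and the “five times background” cutoff are all invented):

# Sketch: following a geochemical trail upstream.  Values are ppm of a
# pathfinder element (say, arsenic) in stream-sediment samples.
background = 2   # ppm, assumed regional background
samples = [
    ("main stream, 5 km below the fork", 2),
    ("main stream, 1 km below the fork", 8),
    ("west tributary, above the fork",   3),
    ("east tributary, above the fork",  40),
]

for place, ppm in samples:
    note = "anomalous -- follow this branch" if ppm > 5 * background else ""
    print(f"{place:35s} {ppm:4d} ppm  {note}")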

The chemical (usually a chemical element, but it could be natural gas or petroleum) may be the economic chemical, or it may be a “pathfinder” chemical that is closely associated with the valuable one. For example, arsenic is a pathfinder for gold.

Samples may be taken from soil, surface or ground water, stream or lake sediment, glacial debris, rock chip samples, drill cores, or plants. Besides analyzing the chemical constitution of plants growing in the area, a botanist can examine them for plant diseases characteristic of metal poisoning.

We must, of course, have the ability to quantify the chemical of interest even at low concentrations, such as parts per million. (An evaluation of the up-timers’ ability to quantify various elements is outside the scope of this article.)

A useful step beyond simply measuring the quantity of an element is to determine its isotopic distribution, as that provides additional information about, e.g., the formation of the deposit. Unfortunately, it also requires even more sophisticated analytic methods than those necessitated by conventional geochemical prospecting.

Down the Rabbit Hole: Well Logging

Well logs are an important source of geophysical information in regions which have experienced exploration or development by the petroleum industry. The log is essentially a depth-linked record of the geological formations that were drilled through.

Logs can be run in an open hole, or after the hole is lined with casing to prevent water infiltration. Logs can be based on material brought up to the surface (geological), or data collected from a sensor (sonde) lowered into the well (geophysical). The sonde may be lowered at intervals (interrupting the drilling process), or most recently, integrated into the drill string so that data is collected continuously during drilling.

The collection of rock samples dates back to the early days of cable tool drilling. Rotary drilling chops up the rock more finely than cable tool methods, but the cuttings can be separated from the drilling mud.

The collected rocks can be analyzed qualitatively (looks like sandstone) or quantitatively (porosity, permeability, water content, grain size). It is also fairly straightforward to log the penetration rate, which is a crude measure of the hardness of the rock being drilled and can reveal a change in formation.
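
A sketch of what such a penetration-rate log might look like (the depths and drilling times are invented, and the “slowed to less than 60% of the previous rate” flag is an arbitrary choice for this illustration):

# Sketch: a crude penetration-rate log from the driller's time book.
log = [
    (100, 200, 2.0),   # from_ft, to_ft, hours to drill the interval
    (200, 300, 2.1),
    (300, 400, 6.5),   # much slower -- harder rock?
    (400, 500, 6.3),
]

previous_rate = None
for top, bottom, hours in log:
    rate = (bottom - top) / hours          # feet per hour
    flag = ""
    if previous_rate and rate < 0.6 * previous_rate:
        flag = "  <-- possible formation change"
    print(f"{top:4d}-{bottom:4d} ft: {rate:5.1f} ft/hr{flag}")
    previous_rate = rate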

Electric logging was introduced by the Schlumberger brothers in 1927; Schlumberger’s first experiments were conducted in a bathtub filled with various rocks. (Pike). Resistivity logs measure the current flow between two or more electrodes in the borehole. Spontaneous potential logging determines the naturally occurring potential difference between a fixed electrode at the surface and a moveable electrode in the borehole. For both, you must be drilling with a conductive (saltwater-based) drilling mud.

An induction log is compatible with an oil-based drilling mud; you lower both a generating coil and a detector coil into an uncased hole; the former induces eddy currents in the rock that are sensed by the latter.

In general, electric logging distinguishes water-bearing rock from rock that bears hydrocarbons or is impermeable.

A more recent development is logging of gamma radiation, which is emitted by potassium, thorium and uranium isotopes found in rocks, each isotope emitting at characteristic energies. The log can be of total radiation, or energy-specific. Shales are typically more radioactive than sandstones or limestones. The differences in natural radioactivity among different sediments were first recognized in 1909, but gamma ray logging was not commercialized until 1939. (Brannon).

Neutron logging is more sophisticated; a neutron source is lowered into the hole together with a system for detecting the radioactivity induced by neutron bombardment.

Geological logging will probably be practiced right from the beginning of the 1632verse oil industry. Electric logging will come later, perhaps by the late 1630s, but it depends on our developing both rotary drilling equipment, and either a practical saltwater-based drilling mud or induction logging system. Radioactive logging is likely to be a much later development.

Conclusion

George Bernard Shaw quipped that the lack of money is the root of all evil. Over the centuries, prospectors of every land have sought to remedy that problem, at least at a personal level, by finding gold, silver or other precious things buried in the earth.

The methods in use in the seventeenth century are best described as hit-and-miss, and those of even the nineteenth century weren’t much better.

The scientific “sorcery” of Grantville won’t find the pot of gold at the end of the rainbow, but geophysical and geochemical prospecting, supplemented by well logging, do promise to make it easier to locate the treasures of the earth.

****

Appendix: Grantville Literature Relevant to Prospecting

Grantville is based on the town of Mannington, West Virginia. We can look at the online catalogues of the Mannington Public Library and the North Marion High School library to see what pre-RoF books they presently have (and presumably had before the RoF), and we can make educated guesses as to what is reasonably likely to be in the up-timers’ personal libraries, given their Grid hobbies and educational backgrounds.

We can safely assume the presence of

—Encyclopedia Britannica, 11th ed. (EB11)

—pre-RoF versions of the modern Encyclopedia Britannica (EB15), Encyclopedia Americana, Collier’s Encyclopedia and other general encyclopedias

—the 1977 edition of the McGraw-Hill Encyclopedia of Science and Technology (McGHEST) (at NMHS!)

—pre-RoF world atlases

—”CRC” Handbook

—McGraw-Hill Dictionary of Earth Science

—various field guides to rocks and minerals (Eyewitness Handbook, Simon and Schuster, Zim, Pough, Loomis, Fay, Bauer, Lye, Michele)

—“geology 101” college textbooks

—books on earthquakes, notably Hodgson, Earthquakes and Earth Structure, Grady, Plate Tectonics

—books on the American, especially West Virginian, oil industry, notably Mallison, The Great Wildcatter, Toenen, History of the Oil and Gas Industry in West Virginia; Fanning, Men, Money and Oil; Whiteshot, The Oil-Well Driller

—books on the West Virginia coal industry, including Cohen, King Coal: A Pictorial History of West Virginia Coal Mining; Stevenson, Coal Towns of West Virginia; Conley, History of the West Virginia Coal Industry

—mining stories by Bret Harte and Jack London

The high school or the public library might have a geological map of the United States and a “geological column” for “Grantville.”

I would not expect to find any books specifically devoted to geophysical prospecting.

It’s becoming more and more difficult to have easy access to pre-RoF editions of reference works.

As a surrogate for the pre-RoF EB15, I have used the 2002 CD edition of the Encyclopedia Britannica (EB15CD). Most of its content is pre-RoF; if I cite anything from it that isn’t in a pre-RoF print edition, please advise me!

As a surrogate for the 1977McGHEST, I use the 2002 edition that is in my law firm library. Same request applies!

Bibliography

Geophysical Prospecting, Generally

Dobrin, Introduction to Geophysical Prospecting (3d ed. 1976).

Dobrin, Introduction to Geophysical Prospecting (2d ed. 1960).

Exploration Geophysics—Petroleum Industry Timeline

ftp://ftp.cwp.mines.edu/pub/Timelines/timelineB25Jul2006.pdf

[SEG] Society of Exploration Geophysicists Virtual Geoscience Center, “Historical Collection”

http://www.mssu.edu/seg-vm/museum_items_main.html

US Army Corps of Engineers, Engineering and Design—Geophysical Exploration for Engineering and Environmental Investigations, Publ No. EM 1110-1-1802 (1995),

http://140.194.76.129/publications/eng-manuals/em1110-1-1802/

Hartman, SME Mining Engineering Handbook, Vol. 1 (1992).

Magnetometry

[Kenyon]http://physics.kenyon.edu/EarlyApparatus/Electricity/Earth_Inductor_or_Delzennes_Circle/Earth_Inductor_or_Delzennes_Circle.html

[EOST] “Schmidt’s field balance (vertical Z)”

http://eost.u-strasbg.fr/musee/En/magn/balance_schmidt_Z_sch.html

Brough, A treatise on mine surveying.

Ricker III, “The Discovery of the Magnetic Compass and Its Use in Navigation,”

http://www.wbabin.net/science/ricker4.pdf

Morrison, “3.5 Magnetometers” in The Berkeley Course in Applied Geophysics (2004)

http://appliedgeophysics.lbl.gov/magnetic/mag35.pdf

Fountain, “Dan’s Homegrown Proton Precession Magnetometer Page”

http://gerf.org/~jasegler/proton_mag/proton.htm

Gravimetry

Coelho, Music and Science in the Age of Galileo (1992).

Seigel, “A Guide to High Precision Land Gravimeter Surveys” (August 1995).

http://www.scintrexltd.com/downloads/GRAVGUID.pdf

Wu, “Gravity Measurements”, Geophysics 547—Gravity and Magnetics (Winter 2007)

http://www.geo.ucalgary.ca/~wu/Goph547/Gravimeters.pdf

Smith, Introduction to Geodesy (1997).

[SEG] Society of Exploration Geophysicists, Virtual Geoscience Center, “Torsion Balance”

http://www.mssu.edu/seg-vm/pict0349.html

Hannah, “Airborne Gravimetry: A Status Report” (Jan. 2001)

http://www.linz.govt.nz/docs/miscellaneous/airborne-gravimetry.pdf

[OU] Geophysics 5864–Gravimetric and Magnetic Exploration (Ohio University Fall 2007):

—References

http://gravmag.ou.edu/references.html

—Absolute Gravity measurement

http://gravmag.ou.edu/measure/absolute.html

—Relative gravity measurement

http://gravmag.ou.edu/measure/relative.html

—History

http://gravmag.ou.edu/history/history.html

Dehlinger, Marine Gravity

Seismometry

Enviroscan, “Seismic Refraction versus Reflection” (2010)

http://www.enviroscan.com/html/seismic_refraction_versus_refl.html

Barzilai, “Improving a geophone to produce an affordable, broadband seismometer,” (Ph.D Defense., Mechanical Engineering, Stanford U., Jan. 25, 2000).

http://micromachine.stanford.edu/smssl/projects/Geophones/DefenseBarzilaiFinalCopyWeb/DefenseBarzilaiFinalCopy.pdf

Zimmerman, Depth Conversion

http://www.searchanddiscovery.net/documents/geophysical/zimmerman/index.htm

Timekeeping

[Anderson Institute] “History of the Clock”

http://www.andersoninstitute.com/history-of-the-clock.html

Newton, Galileo’s Pendulum 55-6 (2004).

Matthews, Time for Science Education (2000).

Olmsted, The Scientific Expedition of Jean Richer to Cayenne (1672-1673)

ISIS 34: 117-128 (1942)

Allan, The Science of Timekeeping, Hewlett Packard Application Note 1289 (1997)

http://www.allanstime.com/Publications/DWA/Science_Timekeeping/TheScienceOfTimekeeping.pdf

Well Logging

Brannon and Osoba, Spectral Gamma Ray Logging (abstract)

http://www.onepetro.org/mslib/app/Preview.do?paperNumber=SPE-000523-G&societyCode=SPE

Pike, Logging history rich with innovation (2002)

http://www.epmag.com/archives/features/3162.htm

Clark, The Chronological History of the Petroleum and Natural Gas Industries

Electrical Methods

Zonge, Chapter 10, “A Short History of Electrical Techniques in Petroleum Exploration”

http://www.zonge.com/PDF_Papers/IP-Petro_9.pdf


About Iver P. Cooper

Iver P. Cooper, an intellectual property law attorney, lives in Arlington, Virginia with his wife and two children. Two cats and a chinchilla rule the household with iron paws. Iver has received legal writing awards from the American Patent Law Association, the U.S. Trademark Association, and the American Society of Composers, Authors and Publishers, and is the sole author of Biotechnology and the Law, now in its twenty-something edition. He has frequently contributed both fiction and nonfiction to The Grantville Gazette.

 

When not writing (or trying to get an “orange blob” off his chair so he can start writing), he has been known to teach swing dancing and folk dancing, or to compete in local photo club competitions. Iver adds, “I can’t get my wife to read my fiction, but she has no trouble cashing the checks.”

Iver’s story “The Chase” is in Ring of Fire II