Friday, June 26, 2009

End of the Big Beasts

by Peter Tyson

Who or what killed off North America's mammoths
and other megafauna 13,000 years ago?

It takes a certain kind of person to tackle this question in earnest. You have to be itching to know the answer yet patient as a Buddha, for the answer is frustratingly elusive. I know I'm not the type. I'm intrigued by the question but far too anxious to calmly accept, as some experts suggest, that it might be years or decades, if ever, before a definitive, widely accepted solution arrives. (To follow the long-running wrangle over this question, see The Extinction Debate.)

The four people I spoke to about the megafaunal or "large-animal" extinctions possess this sort of edgy sangfroid. While keeping an open mind, they also stand in four decidedly different camps regarding why America's rich complement of big beasts went extinct quite suddenly at the end of the Ice Age. The four camps are known tongue-in-cheek as "overkill," "overchill," "overill," and "overgrill"*:

  • Archeologist Gary Haynes, University of Nevada, Reno, and others think that the continent's first human hunters, fresh from Siberia, killed the megafauna off as they colonized the newly discovered land.

  • Donald Grayson, an archeologist at the University of Washington, Seattle, along with colleague David Meltzer of Southern Methodist University, believes that climate changes at the end of the Pleistocene epoch triggered the collapse.

  • Mammalogist Ross MacPhee of the American Museum of Natural History has advanced the idea, with virologist Preston Marx, that a virulent "hyperdisease" brought by the first Americans might have raced through species with no natural immunity, bringing about their demise.

  • And, in the newest hypothesis advanced, geologist James Kennett, U.C. Santa Barbara, and colleagues propose that a comet impact or airburst over North America did it.

So why is the answer so elusive? As often happens in the paleosciences, it largely comes down to lack of empirical evidence, something all four hypotheses arguably suffer from. (There's a fifth hypothesis, actually—that a combination of overkill and overchill did it.)

Overkill

In the early 1960s, ecologist Paul Martin of the University of Arizona postulated that the first Americans, after crossing into the Americas over the Bering Land Bridge, hunted the megafauna to extinction. For many years, "overkill" became the leading contender. The timing seemed more than coincidental: Humans were thought to have arrived no earlier than about 14,000 years ago, and by roughly 13,000 years ago, most of the megafaunal species abruptly vanish from the fossil record. (See a list of all 35 extinct genera of North American Ice Age mammals.)

But skeptics have asked, Where's the evidence? Grayson and Meltzer (overchill) have noted that the late-Ice Age sites bearing megafaunal remains with unequivocal signs of slaughter by humans number just 14. Moreover, they stress, only two types of giants were killed at those 14 sites, mammoth and mastodon. There's no sign that early hunters preyed on giant ground sloths, short-faced bears, or the massive, armadillo-like glyptodonts, for instance. (Forensic studies of a cache of Clovis tools found in 2008 suggest the Clovis people did hunt now-extinct camels and horses.) That's hardly enough evidence, Grayson and Meltzer argue, to lay blame for a continent's worth of lost megafauna at the foot of the first Americans.

Gary Haynes (overkill) begs to differ. "I don't care what anybody else says, 14 kill sites of mammoth and mastodon in a very short time period is extraordinary," he told me. It's one thing to find a campsite with some animal bones in it, he says, quite another to find the actual spot where an ancient hunter felled and butchered an animal—where, say, a spearpoint turns up still sticking in bone. "It's very, very rare to find a kill site anywhere in the world," he says. And absence of other megafauna in kill sites doesn't mean they weren't hunted. "There is no doubt Native Americans were eating deer and bear and elk," Haynes says, citing several large mammals that pulled through. "But you cannot find a single kill site of them across 10,000 years."

Could what scholars agree must have been a relatively modest initial population of hunters have emptied an entire continent of its megafauna virtually overnight, geologically speaking? (In fact, it's three continents: South America and, to a lesser extent, Northern Eurasia also lost many large species at the end of the Ice Age.) For his part, Ross MacPhee (overill) finds it hard to swallow. "I just don't think it's plausible, especially if we're also talking about collapses for megafauna that didn't actually go extinct." Certain populations of surviving big beasts, including bison in North America and musk oxen in Asia, are known to have fallen precipitously at the end of the Ice Age. "It gets a little bit beyond probability in my view that people could have been so active as to hunt every animal of any body size, in every context, in every possible environment, over three continents."

Overchill

Could climate change have done it? Scholars generally agree that North America witnessed some rapid climate adjustments as it shook off the Ice Age beginning about 17,000 years ago. The most significant swing was a cold snap between about 12,900 and 11,500 years ago. Known as the Younger Dryas, this partial return to ice-age conditions may have stressed the megafauna and their habitats sufficiently to cause widespread die-offs, Grayson and others believe.

Detractors, again, point to the lack of evidence. "There aren't any deposits of starved or frozen or somehow naturally killed animals that are clearly non-cultural in origin that you would expect if there was an unusual climate swing," says Haynes. "I don't think that evidence exists." Another question dissenters have is how the megafauna survived many abrupt glacial and deglacial shifts during the past two million years only to succumb to the one that closed the Pleistocene. "It just doesn't hold water," Jim Kennett (overgrill) told me.

Grayson admits that overchill advocates have failed to develop the kind of records needed to test climate hypotheses in detail. But he focuses on climate change, he says, because he sees absolutely no sign that people were involved. "You can't look at climate and say climate didn't do it for the simple reason that we don't really know what to look for," Grayson told me. "But what you can do fairly easily is look at the evidence that exists for the overkill position. That position would seem to make fairly straightforward predictions about what the past should have been like, and when you look to see if it was that way, you don't find it."

Overill

A lack of data has particularly plagued the "overill" hypothesis. This is the notion that diseases brought unwittingly by newly arriving people, either in their own bodies or in those of their dogs or perhaps rats, could have killed off native species that had no natural immunity. MacPhee devised this hypothesis with Preston Marx after realizing that the link between initial human arrival and subsequent large-animal extinctions was strong not just in North America but in many other parts of the world (see map in sidebar), but that in his opinion, convincing evidence for hunting as the culprit simply did not exist.

Despite what he calls "prodigious effort" using DNA techniques and immunological probes, however, MacPhee and his colleagues have failed to detect clues to any pathogens in megafaunal bones, much less nail down a specific disease, like rabies or rinderpest, that could have jumped from one type of animal to another and wiped out all the big beasts. "There's no evidence, and there's virtually no possibility of getting any evidence," Kennett told me.

"[Overill] doesn't even have circumstantial evidence, because we can't prove there was hyperdisease," Haynes says. "We can prove people were here, and we can prove climates were changing." Fair enough, says MacPhee, though he points out that the burgeoning ability of Asian bird flu to infect across species boundaries seems to suggest that some diseases are ecologically and genetically preordained to, as he puts it, "go hyper."

Overgrill

The most recent hypothesis, advanced by Kennett and 25 other scientists in a 2007 Proceedings of the National Academy of Sciences paper, concerns the proposed cosmic impact. Right about the time the Younger Dryas began—when at least 15 of those 35 extinct mammal genera, and arguably the Clovis culture itself, appear to vanish abruptly from the fossil record, roughly 12,900 years ago—Kennett et al. see markers of a major catastrophe. The markers lie in a thin layer at the base of a "black mat" of soil that archeologists have identified at over 50 Clovis sites across North America.

According to Kennett, fieldworkers have uncovered fossils of the 15 genera of mammals that survived right up to Younger Dryas times just beneath—but neither within nor above—this black mat. (Some fossil bones butt up against this layer so closely that the mat has blackened them, Kennett told me.) Stone-tool remains of the Clovis culture also end just beneath the mat, he says. Moreover, Kennett and the team he works with have identified charcoal, soot, microscopic diamonds, and other trace materials at the base of the mat. These materials indicate, he says, that a comet (not an asteroid—different constituents) exploded in the atmosphere or struck the surface, likely in pieces. This triggered widespread wildfires and extinctions, changed ocean circulation, and coughed up sun-blocking ash and dust, all of which helped unleash the Younger Dryas. Tokens of this cosmic cataclysm have shown up in the Greenland ice sheet as well, Kennett says.

Where then, skeptics ask, is the crater? Unlike the asteroid strike at the end of the Cretaceous, the one thought to have ended the reign of the dinosaurs, this 12,900-year-old event currently has no hole or holes definitively linked to it. Kennett says it's still early, noting that it took nearly a decade for scientists to discover the dinosaur-ending impact crater after evidence for a cosmic collision 65 million years ago first turned up in sedimentary layers around the world. Then again, there may be no crater, Kennett says. He cites Tunguska: In 1908, an object that scholars believe was a meteor or comet exploded high above the Tunguska River in Siberia, leveling trees over 800 square miles but leaving no crater.

Critics also take issue with the black-mat evidence. Haynes (overkill) argues that the mat's charcoal-rich layer could as likely be from human-caused fires as from comet-caused wildfires, while Grayson (overchill) questions the purported collapse of Clovis populations, for which he and many other archeologists see very little evidence.

Finally, there are the extinctions themselves. Of the 35 extinct genera, 20 or so cannot be shown to have survived up to the Younger Dryas. The youngest date, for example, for fossils of Eremotherium, a giant ground sloth, is 28,000 years ago. "So the idea that this impact could have caused the extinctions of all these animals just does not make sense," Grayson says. In response, Kennett points out that the fossil record is imperfect, and one would not expect the most recent occurrences of rare forms like Eremotherium to extend right up to the Younger Dryas, as the remains of more common animals like mammoths, horses, and camels do.

Soldiering on

If there's one thing all scholars involved in this famously contentious debate would welcome, it's more data. For in science, as Kennett put it to me, "data eventually rules." Grayson, for one, feels the field would benefit from a better understanding of just when each of those 20 rarer genera of big beasts went extinct. "Until we know when these extinctions occurred, I think we're wasting our time in trying to explain them," he says.

In the meantime, the dearth of widely convincing evidence only serves as a spur. MacPhee may be speaking for all researchers working on this mystery when he says: "What's of interest here for me personally is that these Pleistocene extinctions have occupied the minds of some very able thinkers over the last half century or so, and nobody's come up with anything that's drop-dead decisive. So it's attractive as an intellectual problem."

Granted. But hey, aren't you just dying to know what happened?


*Gary Haynes offered this sobriquet when I asked him if a playful term for the comet hypothesis had caught on yet.

Monday, March 23, 2009

The “Ultimate Jurassic Predator” Could Crush a Hummer in Its Jaws

On a Norwegian island within the Arctic Circle, researchers have unearthed the fossilized remains of a marine monster they call “Predator X.” The 50-foot beast is a new species of pliosaur, and researchers say the enormous reptile ruled the Jurassic seas some 147 million years ago…. “Its anatomy, physiology and hunting strategy all point to it being the ultimate predator – the most dangerous creature to patrol the Earth’s oceans” [New Scientist], the Natural History Museum at the University of Oslo said in a breathless press release.

Predator X swept through the seas some 147 million years ago during the Jurassic Period, when dinosaurs walked the land. The creature swam with its four flippers, and relied on its crushing jaw power to bring down its prey–lead researcher Joern Hurum estimates that it had a bite force of 33,000 pounds per square inch. Says Hurum: “With a skull that’s more than 10 feet long you’d expect the bite to be powerful but this is off the scale…. It’s much more powerful than T-Rex” [Reuters]. Hurum has said that a previously discovered fossil pliosaur was big enough to chomp on a small car. He said the bite estimates for the latest fossil forced a rethink. “This one is more like it could crush a Hummer,” he said [Reuters]. Hurum theorizes that the 45-ton predator feasted on fish and marine reptiles, including ichthyosaurs and long-necked plesiosaurs.

Paleontologists dug up the partial skull and the fragmented skeleton of a giant pliosaur last summer on the island of Spitsbergen. Fossil hunters get used to working in the heat and cold, the dry and wet, but even without counting the polar bears nosing around their dig, Spitsbergen posed unusual challenges. It has only a three-week window for excavating, from the end of July through much of August. That is after the warmth of a brief summer has thawed upper layers of the ground and before the onset of the round-the-clock darkness of Arctic winter [The New York Times]. A documentary about the expedition will be shown on the History Channel later this month.

The researchers haven’t yet given the new species a scientific name, and although they’ve described their findings at scientific conferences, they have yet to publish their work in a peer-reviewed journal–they say that will happen later this year.

Wednesday, March 11, 2009

Trio of Galaxies Mix It Up

Release No. STScI-2009-10
Image Credit: NASA, ESA, and R. Sharples (University of Durham)

Though they are the largest and most widely scattered objects in the universe, galaxies do go bump in the night. The Hubble Space Telescope has photographed many pairs of galaxies colliding. Like snowflakes, no two examples look exactly alike. This is one of the most arresting galaxy smash-up images to date.

At first glance, it looks as if a smaller galaxy has been caught in a tug-of-war between a Sumo-wrestler pair of elliptical galaxies. The hapless, mangled galaxy may have once looked more like our Milky Way, a pinwheel-shaped galaxy. But now that it's caught in a cosmic Cuisinart, its dust lanes are being stretched and warped by the tug of gravity. Unlike the elliptical galaxies, the spiral is rich in dust and gas for the formation of new stars. It is the fate of the spiral galaxy to be pulled like taffy and then swallowed by the pair of elliptical galaxies. This will trigger a firestorm of new stellar creation.

If there are astronomers on any planets in this galaxy group, they will have a ringside seat for a flurry of starbirth unfolding over many millions of years to come. Eventually the ellipticals should merge too, creating a single super-galaxy many times larger than our Milky Way. This trio is part of a tight cluster of 16 galaxies, many of them dwarf galaxies. The galaxy cluster is called the Hickson Compact Group 90 and lies about 100 million light-years away in the direction of the constellation Piscis Austrinus, the Southern Fish.

Hubble imaged these galaxies with the Advanced Camera for Surveys in May 2006.

The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency (ESA) and is managed by NASA's Goddard Space Flight Center (GSFC) in Greenbelt, Md. The Space Telescope Science Institute (STScI) conducts Hubble science operations. The institute is operated for NASA by the Association of Universities for Research in Astronomy, Inc., Washington, D.C.

STScI is an International Year of Astronomy 2009 (IYA 2009) program partner.


Thursday, February 05, 2009

FORECAST: EARTH QUAKE

Fascinated by the implications of what were apparently man-made quakes, USGS scientists in 1969 set up their instruments at the Rangely oilfield in northwestern Colorado. There, Chevron was recovering oil from less productive wells by injecting water into them under great pressure. The recovery technique was setting off small quakes, the strongest near wells subjected to the greatest water pressure. If water was pumped out of the earth, the survey scientists wondered, would the quakes stop? In November 1972, they forced water into four of the Chevron wells. A series of minor quakes soon began, and did not stop until March 1973. Then the scientists pumped water out of the wells, reducing fluid pressure in the rock below. Almost immediately, earthquake activity ended. In a limited way, they had controlled an earthquake.

The results of the Rangely experiments led USGS Geophysicists Raleigh and James Dietrich to propose an ingenious scheme. They suggested drilling a row of three deep holes about 500 yds. apart, along a potentially dangerous fault. By pumping water out of the outer holes, they figured they could effectively strengthen the surrounding rock and lock the fault at each of those places. Then they would inject water into the middle hole, increasing fluid pressure in the nearby rocks and weakening them to the point of failure. A minor quake—contained between the locked areas—should result, relieving the dangerous stresses in the immediate vicinity. By repeating the procedure, the scientists could eventually relieve strains over a wide area. Other scientists feel that such experiments should be undertaken with caution, lest they trigger a large quake. Raleigh is more hopeful. In theory, he says, relatively continuous movement over the entire length of the San Andreas Fault could be maintained—and major earthquakes prevented—with a system of some 500 three-mile-deep holes evenly spaced along the fault. Estimated cost of the gigantic project: $1-$2 billion.

In a time of austerity, the possibility of such lavish financing is remote. As M.I.T.'s Press puts it: "How does one sell preventive medicine for a future affliction to Government agencies beleaguered with current illness?" Ironically, the one event that would release money for the study of earthquake prediction and control is the very disaster that scientists are trying to avert: a major quake striking a highly populated area without any warning. Tens of thousands of people living in the flood plain of the Van Norman Dam had a close call four years ago in the San Fernando Valley quake; had the tremor lasted a few more seconds, the dam might have given way. When the San Andreas Fault convulses again—as it surely must—or when another, less notorious fault elsewhere in the U.S. suddenly gives way, thousands of other Americans may not be so lucky.

The last large-scale killers occurred in Asia. One, last December in northern Pakistan, ravaged nine towns and took nearly 5,000 lives. The other, a February tremor in China, is believed to have killed hundreds. Indeed, not a day passes without earth tremors somewhere on the globe. Some of those quakes are too weak to be felt by humans; they can be detected only by sensitive seismographs. Others are more violent but occur on the ocean floor or in remote areas and do no harm. Some add to the long catalogue of destruction. Last week, for example, a magnitude-4.7 earthquake rocked lightly populated Kodiak Island, off the coast of Alaska. In July, a magnitude-6.8 quake struck Pagan, Burma, destroying or damaging half of the city's historic temples. Within the past several weeks, strong earthquakes struck Oroville, Calif., Mindanao in the Philippines, the Kamchatka Peninsula in Siberia and the southwest Pacific island of Bougainville.

With good reason, many primitive peoples regarded the terrible quakes they could not understand as the acts of a vengeful deity. As late as 1750, Thomas Sherlock, the Bishop of London, told his flock that two recent earthquakes were warnings that Londoners should atone for their sins. John Wesley agreed. In a 1777 letter to a friend, he wrote:

"There is no divine visitation which is likely to have so general an influence upon sinners as an earthquake." The ancient Japanese believed that the hundreds of quakes that shook (and still shake) their islands every year were caused by the casual movements of a great spider that carried the earth on its back. Natives of Siberia's quake-prone Kamchatka Peninsula blamed the tremors on a giant dog named Kosei tossing snow off his fur. Pythagoras, the Greek philosopher and mathematician, believed that earthquakes were caused by the dead fighting among themselves. Another ancient Greek, Aristotle, had a more scientific explanation. He contended that the earth's rumblings were the result of hot air masses trying to escape from the earth's interior.

In the past decade, the development of a bold new geological theory called plate tectonics—which offers an elegant, comprehensive explanation for continental drift, mountain building and volcanism—seems finally to have clarified the underlying cause of earthquakes. It holds that the surface of the earth consists of about a dozen giant, 70-mile-thick rock plates. Floating on the earth's semimolten mantle and propelled by as yet undetermined forces, the plates are in constant motion. Where they meet, friction sometimes temporarily locks them in place, causing stresses to build up near their edges. Eventually the rock fractures, allowing the plates to resume their motion. It is that sudden release of pent-up energy that causes earthquakes. Off Japan, for instance, the Pacific plate is thrusting under the Eurasian plate, causing the deep-seated quakes characteristic of the Japanese archipelago. In California, along the San Andreas Fault, two great plates are sliding past each other. The sliver west of the fault, which is located on the Pacific plate, is moving toward the northwest. The rest of the state is resting on the North American plate, which is moving westward. The sudden movement of a portion of the fault that had been locked in place for many years is thought to have caused the great San Francisco earthquake of 1906.

When quake centers are marked on a map of the world, it becomes clear that many earthquakes do indeed occur along plate boundaries. The earthquake-marked "ring of fire" around the Pacific Ocean, for example, neatly outlines the Pacific plate. But earthquakes can also occur well within a plate, possibly because the plate structure has been weakened in those places during periods of ancient volcanism. Charleston, S.C., for instance, is more than 1,000 miles away from the edge of the North American plate; yet it lies in a seismically active area (see map page 39) and was hit by a major quake that killed 27 people in 1886. New Madrid, Mo., near the middle of the plate, was the site of three huge quakes in 1811 and 1812. Wrote one resident of the then sparsely populated area: "The whole land was moved and waved like the waves of the sea. With the explosions and bursting of the ground, large fissures were formed, some of which closed immediately, while others were of varying widths, as much as 30 ft."

Long before the plate-tectonics theory was conceived, scientists were aware that rocks fracture only under extreme stress. As early as 1910, Johns Hopkins Geologist Harry Reid suggested that it should be possible to tell when and where quakes were likely to occur by keeping close tab on the buildup of stresses along a fault. But the knowledge, instruments and funds necessary to monitor many miles of fault line and interpret any findings simply did not exist. Earthquake prediction did not draw much attention until 1949, when a devastating quake struck the Garm region of Siberia, causing an avalanche that buried the village of Khait and killed 12,000 people. Stunned by the disaster, the Soviets organized a scientific expedition and sent it into the quake-prone area. Its mission: to discover any geologic changes—in effect, early warning signals—that might occur before future quakes. The expedition remained in Siberia far longer than anyone had expected. But it was time well spent. In 1971, at an international scientific meeting in Moscow, the Soviet scientists announced that they had achieved their goal: they had learned how to recognize the signs of impending quakes.

The most important signal, they said, was a change in the velocity of vibrations that pass through the earth's crust as a result of such disturbances as quakes, mining blasts or underground nuclear tests. Earth scientists have long known that tremors spread outward in two different types of seismic waves. P waves cause any rock in their path to compress and then expand in the same direction as the waves are traveling. S waves move the rock in a direction that is perpendicular to their path. Because P waves travel faster than S waves, they reach seismographs first. The Russian scientists found that the difference in the arrival times of P and S waves began to decrease markedly for days, weeks and even months before a quake. Then, shortly before the quake struck, the lead time mysteriously returned to normal. The Russians also learned that the longer the period of abnormal wave velocity before a quake, the larger the eventual tremor was likely to be.*
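The Russian signal reduces to simple arithmetic: for a tremor a distance d from a station, the P-S lead time is d/Vs - d/Vp, so anything that slows the P waves shrinks the lead. A minimal sketch in Python, using typical crustal velocities (illustrative values, not figures from the article):

```python
def ps_lead_time(distance_km, vp_km_s, vs_km_s):
    """Seconds between P-wave and S-wave arrival at a seismograph."""
    return distance_km / vs_km_s - distance_km / vp_km_s

d = 100.0        # km from tremor to station (assumed)
vp_normal = 6.0  # typical crustal P-wave velocity, km/s
vs = 3.5         # typical crustal S-wave velocity, km/s

normal = ps_lead_time(d, vp_normal, vs)
# Dilatancy: cracks slow the P waves, here by an assumed 10 percent,
# while the S waves are much less affected.
dilated = ps_lead_time(d, vp_normal * 0.9, vs)

print(f"normal lead: {normal:.1f} s, before a quake: {dilated:.1f} s")
```

With these numbers the lead drops from about 11.9 to about 10.1 seconds, the kind of shrinkage the Soviet seismographs picked up in the days to months before a shock.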

The implication of that information was not lost on visiting Westerners. As soon as he returned home from Moscow, Lynn Sykes, head of the seismology group of Columbia University's Lamont-Doherty Geological Observatory, urged one of his students, a young Indian doctoral candidate named Yash Aggarwal, to look for similar velocity shifts in records from Lamont-Doherty's network of seismographs in the Blue Mountain Lake region of the Adirondacks, in upper New York State, where tiny tremors occur frequently.

As it happens, a swarm of small earthquakes had taken place at approximately the time of the Moscow meeting. Aggarwal's subsequent analysis bore out the Russian claims: before each quake, there had been a distinct drop in the lead time of the P waves.

As significant as those changes seemed, U.S. seismologists felt that they could not be really dependable as a quake-prediction signal without a more fundamental understanding of what was causing them. That explanation was already available. In the 1960s, while studying the reaction of materials to great mechanical strains, a team of researchers under M.I.T. Geologist William Brace had discovered that as rock approaches its breaking point, there are unexpected changes in its properties. For one thing, its resistance to electricity increases; for another, the seismic waves passing through it slow down.

Both effects seemed related to a phenomenon called dilatancy—the opening of a myriad of tiny, often microscopic cracks in rock subjected to great pressure. Brace even suggested at the time that the physical changes associated with dilatancy might provide warning of an impending earthquake, but neither he nor anyone else was quite sure how to proceed with his proposal. Dilatancy was, in effect, put on the shelf.

The Russian discoveries reawakened interest in the subject. Geophysicist Christopher Scholz of Lamont-Doherty and Amos Nur at Stanford, both of whom had studied under Brace at M.I.T., independently published papers that used dilatancy to explain the Russian findings. Both reports pointed out an apparent paradox: when the cracks first open in the crustal rock, its strength increases. Temporarily, the rock resists fracturing and the quake is delayed. At the same time, seismic waves slow down because they do not travel as fast through the open spaces as they do through solid rock. Eventually ground water begins to seep into the new openings in the dilated rock. Then the seismic-wave velocity quickly returns to normal. The water also has another effect: it weakens the rock until it suddenly gives way, causing the quake.

Soon California Institute of Technology's James Whitcomb, Jan Garmany and Don Anderson weighed in with more evidence. In a search of past records, they found a distinct drop in the speed of P waves 3½ years before the 1971 San Fernando quake (58 deaths), the largest in California in recent years. The P waves had returned to their normal velocity a few months before the tremor. Besides providing what amounted to a retroactive prediction of that powerful quake, the Caltech researchers demonstrated that it was primarily the velocity of the P waves, not the S waves, that changed. Their figures were significant for another reason: the P-wave velocity change was not caused by a quirk of geology in the Garm region or even in the Adirondacks, but was apparently a common symptom of the buildup of dangerous stresses in the earth.

In fact, dilatancy seems to explain virtually all the strange effects observed prior to earthquakes. As cracks open in rock, the rock's electrical resistance rises because air is not a good conductor of electricity. The cracks also increase the surface area of rock exposed to water; the water thus comes in contact with more radioactive material and absorbs more radon—a radioactive gas that the Soviet scientists had noticed in increased quantities in Garm-area wells. In addition, because the cracking of the rock increases its volume, dilatancy can account for the crustal uplift and tilting that precede some quakes. The Japanese, for instance, noticed a 2-in. rise in the ground as long as five years before the major quake that rocked Niigata in 1964. Scientists are less certain about how dilatancy accounts for variations in the local magnetic field but think that the effect is related to changes in the rock's electrical resistance.

With their new knowledge, U.S. and Russian scientists cautiously began making private predictions of impending earthquakes. In 1973, after he had studied data from seven portable seismographs at the Blue Mountain Lake encampment, Columbia University's Aggarwal excitedly telephoned Lynn Sykes back at the laboratory. All signs, said Aggarwal, pointed to an imminent earthquake of magnitude 2.5 to 3. As Aggarwal was sitting down to dinner two days later, the earth rumbled under his feet. "I could feel the waves passing by," he recalls, "and I was jubilant." In November 1973, after observing changes in P-wave velocity, Caltech's Whitcomb predicted that there would be a shock near Riverside, Calif., within three months. Sure enough, a tremor did hit before his deadline—on Jan. 30. Whitcomb's successful prediction was particularly important. All previous forecasts had involved quakes along thrust faults, where rock on one side of a fault is pushing against rock on the other. The Riverside quake took place on a strike-slip fault, along which the adjoining sides are sliding past each other. Because most upheavals along the San Andreas Fault involve strike-slip quakes, Whitcomb's forecast raised hopes that seismologists could use their new techniques to predict the major earthquakes that are bound to occur along the San Andreas.

The Chinese, too, were making rapid progress in their earthquake-forecast studies. When a delegation of U.S. scientists headed by M.I.T. Geologist Frank Press toured Chinese earthquake-research centers in October 1974, they were astonished to learn that the country had some 10,000 trained earthquake specialists (more than ten times the American total). They were operating 17 major observation centers, which in turn received data from 250 seismic stations and 5,000 observation points (some of which are simply wells where the radon content of water is measured). In addition, thousands of dedicated amateurs, mainly high school students, regularly collect earthquake data.

The Chinese have good reason to be vigilant. Many of their people live in vulnerable adobe-type, tile-roofed homes that collapse easily during tremors. And the country shudders through a great number of earthquakes, apparently because of the northward push of the Indian plate against the Eurasian plate. Says Press: "It is probably the one country that could suffer a million dead in a single earthquake."

Chinese scientists read every scientific paper published by foreign earthquake researchers. They also pay close attention to exotic prequake signals—including oddities of animal behavior—so far largely overlooked by other nations. Before a quake in the summer of 1969, the Chinese observed that in the Tientsin zoo, the swans abruptly left the water, a Manchurian tiger stopped pacing in his cage, a Tibetan yak collapsed, and a panda held its head in its paws and moaned. On his return from the China tour, USGS's Barry Raleigh learned that horses had behaved skittishly in the Hollister area before the Thanksgiving Day quake. "We were very skeptical when we arrived in China regarding animal behavior," he says. "But there may be something in it."

Though the U.S. does not have the national commitment of the Chinese, there is no lack of urgency among American scientists. California has not had a great earthquake since the San Francisco disaster in 1906, and seismologists are warily eying at least two stretches of the San Andreas Fault that seem to be "locked." One segment, near Los Angeles, has apparently not budged, while other parts of the Pacific and North American plates have slid some 30 ft. past each other. Near San Francisco, there is another locked section. Sooner or later, such segments will have to catch up with the inexorable movement of the opposing plates. If they do so in one sudden jolt, the resulting earthquakes, probably in the 7- to 8-point Richter range and packing the energy of multimegaton hydrogen bombs, will cause widespread destruction in the surrounding areas.
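As a rough sanity check on the slip deficit described above, here is a sketch of the arithmetic. It assumes a long-term relative plate rate of about 1.5 inches per year, a figure not given in the article:

```python
# Back-of-envelope check: the article says adjoining plates have slid
# some 30 ft past each other while a locked segment stayed put. Assuming
# a long-term relative plate rate of about 1.5 in/yr (an assumed figure,
# not from the article), estimate the years of accumulated strain.

SLIP_DEFICIT_FT = 30      # accumulated offset quoted in the article
RATE_IN_PER_YR = 1.5      # assumed long-term plate rate

deficit_inches = SLIP_DEFICIT_FT * 12
years_of_strain = deficit_inches / RATE_IN_PER_YR
print(f"~{years_of_strain:.0f} years of accumulated strain")  # ~240 years
```

At that assumed rate, the locked segments are carrying on the order of two centuries of unreleased motion, which is consistent with the article's sense of inevitability.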

If one of those quakes occurs in the San Francisco area, the results will be far more calamitous than in 1906 (see box page 40). A comparable earthquake near Los Angeles could kill as many as 20,000 and injure nearly 600,000.

As a practical start toward earthquake prediction, USGS is constructing a prototype network of automated sensing stations equipped with magnetometers, tiltmeters and seismographs in California's Bear Valley. Survey scientists are also beginning to make measurements of radon in wells and electrical resistance in rock. Some of the data are already being fed into the USGS's central station at Menlo Park. But analysis is still being delayed by lack of adequate computer facilities.

Other seismic monitoring grids in the U.S. include a 45-station network in the Los Angeles area, operated jointly by the USGS and Caltech; smaller networks in the New York region under the Lamont-Doherty scientists; and those in the Charleston, S.C., area, operated by the University of South Carolina. When completed and computerized, these networks will provide two warnings of impending quakes. If scientists detect changes in P-wave velocities, magnetic field and other dilatancy effects that persist over a wide area, a large quake can be expected—but not for many months. If the dilatancy effects occur in a small area, the quake will be minor but will occur soon. The return to normal of the dilatancy effects provides the second warning. It indicates that the quake will occur in about one-tenth the time during which the changes were measured. If dilatancy changes have been recorded for 70 days and then suddenly return to normal, the quake should occur in about a week.
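The timing rule in the last few sentences can be sketched as a one-line calculation. This is just an illustration of the stated one-tenth rule, not an operational forecasting tool:

```python
# Sketch of the second-stage warning described above: once dilatancy
# anomalies return to normal, the quake is expected in roughly one-tenth
# of the time over which the anomalies were observed.

def expected_delay_days(anomaly_duration_days: float) -> float:
    """Rough delay between return-to-normal and the expected quake."""
    return anomaly_duration_days / 10

# The article's example: 70 days of anomalies -> about a week.
print(expected_delay_days(70))  # 7.0
```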

The networks are far from complete, progress in general has been slow, and seismologists blame inadequate Government funding. The USGS's annual quake budget has remained at about $11 million for the past few years, only about $3 million of it for research in the art of forecasting.

Once in operation, an earthquake warning system will bring with it a new set of problems. If a major quake is forecast for San Francisco, for example, should the Government shut down businesses and evacuate the populace? Where would evacuees be housed? If the quake does not occur, who will be responsible for the financial loss caused by the evacuation? Answers come more easily in totalitarian China. There, says Press, "if an actual quake does not take place, it is felt that the people will understand that the state is acting on their behalf and accept a momentary disruption in their normal lives."

Just such a disruption took place in many Chinese communities on Feb. 4, the day that an earthquake struck an industrialized area in Liaoning province. According to the Chinese publication Earthquake Frontiers, at 6 p.m. that day an announcement was made over the public-address system in the Kuan-t'un commune: "According to a prediction by the superior command, a strong earthquake will probably occur tonight. We insist that all people leave their homes and all animals leave their stables." As an added incentive for people to go outside, the commune leaders also announced that movies would be shown in an outdoor location.

"As soon as the announcement was finished," the article says, "many men and women members with their whole families gathered in the square in front of the detachment gate. The first film was barely finished when a strong earthquake, 7.3 on the magnitude scale, occurred. Lightning flashed and a great noise like thunder came from the earth. Many houses were destroyed at once. Of the 2,000 people in the commune, only the 'stubborn ones,' who ignored the mobilization order, were wounded or killed by the earthquake. All the others were safe and uninjured; not even one head of livestock was lost."

Convinced that "earthquake prediction is a fact at the present time," and worried about the effect of such forecasts, particularly in U.S. cities, the National Academy of Sciences this week released a massive study entitled "Earthquake Prediction and Public Policy." Prepared by a panel of experts headed by U.C.L.A. Sociologist Ralph Turner, the study takes strong issue with the politicians and the few scientists who believe that earthquake predictions and warnings would cause panic and economic paralysis, thus resulting in more harm than the tremors themselves. Forecasting would clearly save lives, the panel states, and that is the "highest priority." Because most casualties during a quake are caused by collapsing buildings, the report recommends stronger building codes in areas where earthquakes occur frequently, the allocation of funds for strengthening existing structures in areas where earthquakes have been forecast and even requiring some of the population to live in mobile homes and tents when a quake is imminent. Fearful that forecasting could become a political football and that some officials might try to suppress news of an impending quake, the panel recommends that warnings, which would cause disruption of daily routine when an earthquake threatens, should be issued by elected officials—but only after a public prediction has been made by a panel of scientists set up by a federal agency.

Other scientists are already looking ahead toward an even more remarkable goal than forecasting: earthquake control. What may become the basic technique for taming quakes was discovered accidentally in 1966 by earth scientists in the Denver area. They noted that the forced pumping of lethal wastes from the manufacture of nerve gases into deep wells at the Army's Rocky Mountain arsenal coincided with the occurrence of small quakes. After the Army suspended the waste-disposal program, the number of quakes declined sharply.

Fascinated by the implications of what were apparently man-made quakes, USGS scientists in 1969 set up their instruments at the Rangely oilfield in northwestern Colorado. There, Chevron was recovering oil from less productive wells by injecting water into them under great pressure. The recovery technique was setting off small quakes, the strongest near wells subjected to the greatest water pressure. If water was pumped out of the earth, the survey scientists wondered, would the quakes stop? In November 1972, they forced water into four of the Chevron wells. A series of minor quakes soon began, and did not stop until March 1973. Then the scientists pumped water out of the wells, reducing fluid pressure in the rock below. Almost immediately, earthquake activity ended. In a limited way, they had controlled an earthquake.

The results of the Rangely experiments led USGS Geophysicists Raleigh and James Dietrich to propose an ingenious scheme. They suggested drilling a row of three deep holes about 500 yds. apart, along a potentially dangerous fault. By pumping water out of the outer holes, they figured they could effectively strengthen the surrounding rock and lock the fault at each of those places. Then they would inject water into the middle hole, increasing fluid pressure in the nearby rocks and weakening them to the point of failure. A minor quake—contained between the locked areas—should result, relieving the dangerous stresses in the immediate vicinity. By repeating the procedure, the scientists could eventually relieve strains over a wide area. Other scientists feel that such experiments should be undertaken with caution, lest they trigger a large quake. Raleigh is more hopeful. In theory, he says, relatively continuous movement over the entire length of the San Andreas Fault could be maintained—and major earthquakes prevented—with a system of some 500 three-mile-deep holes evenly spaced along the fault. Estimated cost of the gigantic project: $1-$2 billion.

In a time of austerity, the possibility of such lavish financing is remote. As M.I.T.'s Press puts it: "How does one sell preventive medicine for a future affliction to Government agencies beleaguered with current illness?" Ironically, the one event that would release money for the study of earthquake prediction and control is the very disaster that scientists are trying to avert: a major quake striking a highly populated area without any warning. Tens of thousands of people living in the flood plain of the Van Norman Dam had a close call four years ago in the San Fernando Valley quake; had the tremor lasted a few more seconds, the dam might have given way. When the San Andreas Fault convulses again—as it surely must—or when another, less notorious fault elsewhere in the U.S. suddenly gives way, thousands of other Americans may not be so lucky.

* At the current rate of plate movement, Los Angeles will lie directly west of San Francisco in 10 million years.

* Used to measure the strength of earthquakes. Because the scale is logarithmic, each higher number represents a tenfold increase in the measured amplitude of the tremors and roughly a 30-fold increase in the energy released. Thus a 2-point quake is barely perceptible, a 5 may cause minor damage, a 7 is severe, and an 8 is a violent quake.
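A minimal sketch of the footnote's arithmetic, using the standard relation of roughly 10^1.5 ≈ 31.6 in energy per magnitude unit (which the footnote rounds to 30):

```python
# Richter-scale arithmetic: amplitude scales as 10**dM per magnitude
# unit, while radiated energy scales as roughly 10**(1.5 * dM).

def amplitude_ratio(m1: float, m2: float) -> float:
    """Ground-motion amplitude of an m2 quake relative to an m1 quake."""
    return 10 ** (m2 - m1)

def energy_ratio(m1: float, m2: float) -> float:
    """Approximate energy released by an m2 quake relative to an m1 quake."""
    return 10 ** (1.5 * (m2 - m1))

print(amplitude_ratio(5, 7))        # 100.0 -> a 7 shakes 100x harder than a 5
print(round(energy_ratio(6, 7)))    # ~32 -> roughly the footnote's "30-fold"
```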

* U.S. scientists now estimate that the change can occur as long as ten years before a magnitude 8 quake, a year before a 7-pointer and a few months before a magnitude 6.

Wednesday, January 28, 2009

Dinosaur Fossil Record Compiled, Analyzed; 500 Or More Dinosaurs Possible Yet To Be Discovered

ScienceDaily (Feb. 10, 2004) — A graduate student in earth and planetary sciences in Arts & Sciences at Washington University in St. Louis has combed the dinosaur fossil record from T. rex to songbirds and has compiled the first quantitative analysis of the quality and congruence of that record.

Julia Heathcote, whose advisor is Josh Smith, Ph.D., Washington University assistant professor of earth and planetary sciences, examined data on more than 250 dinosaur genera, as well as various clades, or family-tree branches, within the dinosaurs.

Heathcote found that the existing dinosaur data are between one-half and two-thirds complete, or of high quality. As a template, she used two published whole-dinosaur studies and compared them with smaller family trees within the context of the whole dinosaur data, commonly known as the Dinosauria. She also analyzed for congruence – the correlation between the fossil record and family-tree relationships. Heathcote found some of the clades both complete and congruent, while others are poor in both respects.

"The whole Dinosauria fossil record I would say is moderately good, which was a surprise, because I thought it would be much worse," Heathcote said. "It generally shows a low degree of completeness but a high degree of congruence with the existing phylogenies, or family trees, studied."

Her results are important for paleontologists who are especially focused on the evolution of dinosaurs. It is to the paleontologist what Beckett's Baseball Card Price Guide is to the baseball card collector, and more: Heathcote's analysis provides information on the relationships between classes and groups, whereas the Beckett guide draws no lineages for players, for instance.

Heathcote said that there have been many attempts to analyze evolutionary patterns using the fossil record, but that the patterns can only be interpreted usefully in the context of stratigraphy -- essentially how old the fossils are. It's important to know the quality of the fossil record to better assess whether an apparently large number of genera at any one time – say, the late Jurassic period – is due to genuine species diversity or just exceptionally good preservation. Congruence matters, too, to provide information on the adequacy of data and confidence to construct evolutionary relationships.

Heathcote presented her results at the annual meeting of the Geological Society of America, held Nov. 2-5, 2003, in Seattle.

Heathcote used three different calculations to achieve her results: the Stratigraphic Consistency Index, the Relative Completeness Index and the Gap Excess Ratio. The first is a measure of how well the relationships that have been proposed for dinosaurs fit the stratigraphic data, which contributes to a timeline for evolution. The Relative Completeness Index measures the ratio of how much data might be missing to how much researchers actually have. And the Gap Excess Ratio measures the ratio of the data actually missing to the minimum missing data possible if the genera were arranged in the family tree in order of age.

Heathcote said that the known number of dinosaurs now stands at slightly more than 250. But because her results give a maximum possible completeness value, there might be 500 or more yet to be discovered, and she hopes that with each discovery, the researchers will enter their data into her program so that all paleontologists can benefit by seeing how the new discovery relates to previous ones.

She called the work "a new tool that draws together all of the data of the past 150 years, all plotted out accurately for the first time. You can see how far back these dinosaurs go, see their relationships with each other."


Adapted from materials provided by Washington University in St. Louis.

Dinosaur Fossils Fit Perfectly Into The Evolutionary Tree Of Life

ScienceDaily (Jan. 26, 2009) — A recent study by researchers at the University of Bath and London’s Natural History Museum has found that scientists’ knowledge of the evolution of dinosaurs is remarkably complete.

Evolutionary biologists use two ways to study the evolution of prehistoric plants and animals: firstly they use radioactive dating techniques to put fossils in chronological order according to the age of the rocks in which they are found (stratigraphy); secondly they observe and classify the characteristics of fossilised remains according to their relatedness (morphology).

Dr Matthew Wills from the University of Bath’s Department of Biology & Biochemistry worked with Dr Paul Barrett from the Natural History Museum and Julia Heathcote at Birkbeck College (London) to analyse statistical data from fossils of the four major groups of dinosaur to see how closely they matched their trees of evolutionary relatedness.

The researchers found that the fossil record for the dinosaurs studied, ranging from gigantic sauropods to two-legged meat eaters such as T. rex, matched very well with the evolutionary tree, meaning that the current view of evolution of these creatures is very accurate.

Dr Matthew Wills explained: “We have two independent lines of evidence on the history of life: the chronological order of fossils in the rocks, and ‘trees’ of evolutionary relatedness.

“When the two tell the same story, the most likely explanation is that both reflect the truth. When they disagree, and the order of animals on the tree is out of whack with the order in the rocks, you either have a dodgy tree, lots of missing fossils, or both.

“What we’ve shown in this study is that the agreement for dinosaurs is remarkably good, meaning that we can have faith in both our understanding of their evolution, and the relative completeness of their fossil record.

“In other words, our knowledge of dinosaurs is very, very good.”

The researchers studied gaps in the fossil record, so-called ‘ghost ranges’, where the evolutionary tree indicates there should be fossils but where none have yet been found. They mapped these gaps onto the evolutionary tree and calculated statistical probabilities to find the closeness of the match.

Dr Wills said: “Gaps in the fossil record can occur for a number of reasons. Only a tiny minority of animals are preserved as fossils because exceptional geological conditions are needed. Other fossils may be difficult to classify because they are incomplete; others just haven’t been found yet.

“Pinning down an accurate date for some fossils can also prove difficult. For example, the oldest fossil may be so incomplete that it becomes uncertain as to which group it belongs. This is particularly true with fragments of bones. Our study made allowances for this uncertainty.

“We are excited that our data show an almost perfect agreement between the evolutionary tree and the ages of fossils in the rocks. This is because it confirms that the fossil record offers an extremely accurate account of how these amazing animals evolved over time and gives clues as to how mammals and birds evolved from them.”

The study, published in the peer-reviewed journal Systematic Biology, was part of a project funded by the Biotechnology & Biological Sciences Research Council (BBSRC) that aimed to combine different forms of evolutionary evidence to produce more accurate evolutionary trees.

----------------------------------------------------------

Journal reference:

  1. Wills et al. The Modified Gap Excess Ratio (GER*) and the Stratigraphic Congruence of Dinosaur Phylogenies. Systematic Biology, 2008; 57 (6): 891 DOI: 10.1080/10635150802570809
Adapted from materials provided by University of Bath

Wednesday, December 17, 2008

Two new dinosaurs found in ancient Saharan river system

Dinosaur hunters have discovered two new species while searching an ancient river in the Sahara desert.

Thursday, October 16, 2008

Dunes and Dust in Arabia Terra

The battered region of Arabia Terra is among the oldest terrain on Mars. A dense patchwork of craters from countless impacts testifies to the landscape's ancient age, dating back billions of years.

In eastern Arabia lies an unnamed crater, 120 kilometers (75 miles) across. The floor of this crater contains a large exposure of rocky material, a field of dark sand dunes, and numerous patches of finer-grained material, probably dust. The shape of the dunes hints that prevailing winds have come from different directions over the years.

This false-color image, made from frames taken by the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter, shows the center of the crater's floor. The image combines a daytime view at visible wavelengths with a nighttime view at infrared (heat-sensing) wavelengths, thus giving scientists clues to the physical nature of the surface.

Fine-grain materials, such as dust and the smallest sand particles, heat up quickly by day and cool off equally quickly at night. However, coarser materials - bigger sand particles, gravel, and rocks - respond more slowly to the same daily cycle.

This means that when THEMIS views these coarser materials late in the martian night, they appear warmer than the pools and patches of dust. In the image here, areas that are cold at night appear in blue tints, while the warmer areas show in yellows, oranges, and reds.

Location: 26.7N, 63.0E Released: 2006/02/07 Instrument: VIS Image Size: 37.7x34.2 km, 23.4x21.3mi, 2092x1907 pixels Resolution: 18m (59ft.)

Wednesday, October 08, 2008

Earth Science Literacy Initiative (ESLI)

PROJECT DESCRIPTION

The Earth Science Literacy Initiative (ESLI), funded by the National Science Foundation, aims to gather and codify the underlying understandings of Earth sciences into a succinct document that would have broad-reaching applications in both public and private arenas. It will establish the “Big Ideas” and supporting concepts that all Americans should know about Earth sciences. The resulting Earth Science Literacy framework will also become part of the foundation, along with similar documents from the Oceans, Atmospheres and Climate communities, of a larger geoscience Earth Systems Literacy effort.

The primary outcome of the Earth Science Literacy Initiative will be a community-based document that clearly and succinctly states the underlying principles and ideas of Earth science across a wide variety of research fields that are funded through the NSF-EAR program, including Geobiology and Low-Temperature Geochemistry, Geomorphology and Land-Use Dynamics, Geophysics, Hydrologic Sciences, Petrology and Geochemistry, Sedimentary Geology and Paleobiology, and Tectonics.

The Earth Science Literacy framework document of Big Ideas and supporting concepts will be a community effort representing the current state-of-the-art research in Earth sciences. It will be written, evaluated, shaped and revised by the top scientists working in Earth science. Because of its validity, authority and succinct format, the ESL framework will be influential in a wide variety of scientific, educational and political settings. Future governmental legislation will be guided by it, and future national and state educational standards will be based upon it.

Draft Document

Wednesday, September 24, 2008

Two Planets Suffer Violent Collision

ScienceDaily (Sep. 24, 2008) — Two terrestrial planets orbiting a mature sun-like star some 300 light-years from Earth recently suffered a violent collision, astronomers at UCLA, Tennessee State University and the California Institute of Technology will report in a December issue of the Astrophysical Journal.

"It's as if Earth and Venus collided with each other," said Benjamin Zuckerman, UCLA professor of physics and astronomy and a co-author on the paper. "Astronomers have never seen anything like this before. Apparently, major catastrophic collisions can take place in a fully mature planetary system."

"If any life was present on either planet, the massive collision would have wiped out everything in a matter of minutes — the ultimate extinction event," said co-author Gregory Henry, an astronomer at Tennessee State University (TSU). "A massive disk of infrared-emitting dust circling the star provides silent testimony to this sad fate."

Zuckerman, Henry and Michael Muno, an astronomer at Caltech at the time of the research, were studying a star known as BD+20 307, which is surrounded by a shocking 1 million times more dust than is orbiting our sun. The star is located in the constellation Aries. The astronomers gathered X-ray data using the orbiting Chandra X-ray Observatory and brightness data from one of TSU's automated telescopes in southern Arizona, hoping to measure the age of the star.

"We expected to find that BD+20 307 was relatively young, a few hundred million years old at most, with the massive dust ring signaling the final stages in the formation of the star's planetary system," Muno said.

Those expectations were shown to be premature, however, when Carnegie Institution of Washington astronomer Alycia Weinberger announced in the May 20, 2008, issue of the Astrophysical Journal that BD+20 307 is actually a close binary star — two stars orbiting around their common center of mass.

"That discovery radically revised the interpretation of the data and transformed the star into a unique and intriguing system," said TSU astronomer Francis Fekel who, along with TSU's Michael Williamson, was asked to provide additional spectroscopic data from another TSU automated telescope in Arizona to assist in comprehending this exceptional binary system.

The new spectroscopic data confirmed that BD+20 307 is composed of two stars, both very similar in mass, temperature and size to our own sun. They orbit about their common center of mass every 3.42 days.
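A rough consistency check on the quoted 3.42-day period, using Kepler's third law and assuming both stars have about one solar mass each (the article says only that they are similar to our sun):

```python
import math

# Kepler's third law: a^3 = G * (M1 + M2) * P^2 / (4 * pi^2)
# Estimates the separation of the BD+20 307 binary from its 3.42-day
# period, assuming two roughly solar-mass stars (an assumption; the
# article gives no exact masses).

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

period_s = 3.42 * 86400          # orbital period in seconds
total_mass = 2 * M_SUN           # assumed combined mass
a_cubed = G * total_mass * period_s**2 / (4 * math.pi**2)
a_au = a_cubed ** (1 / 3) / AU
print(f"separation ~ {a_au:.3f} AU")  # a few hundredths of an AU
```

The result, a few hundredths of an AU, confirms that the two stars orbit each other far inside the Earth-like distance at which the dust ring sits.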

"The patterns of element abundances in the stars show that they are much older than a few hundred million years, as originally thought," Fekel said. "Instead, the binary system appears to have an age of several billion years, comparable to our solar system."

"The planetary collision in BD+20 307 was not observed directly but rather was inferred from the extraordinary quantity of dust particles that orbit the binary pair at about the same distance as Earth and Venus are from our sun," Henry said. "If this dust does indeed point to the presence of terrestrial planets, then this represents the first known example of planets of any mass in orbit around a close binary star."

Zuckerman and colleagues first reported in the journal Nature in July 2005 that BD+20 307, then still thought to be a single star, was surrounded by more warm orbiting dust than any other sun-like star known to astronomers. The dust is orbiting the binary system very closely, where Earth-like planets are most likely to be and where dust typically cannot survive long. Small dust particles get pushed away by stellar radiation, while larger pieces get reduced to dust in collisions within the disk and are then whisked away. Thus, the dust-forming collision near BD+20 307 must have taken place rather recently, probably within the past few hundred thousand years and perhaps much more recently, the astronomers said.

"This poses two very interesting questions," Fekel said. "How do planetary orbits become destabilized in such an old, mature system, and could such a collision happen in our own solar system?"

"The stability of planetary orbits in our own solar system has been considered for nearly two decades by astronomer Jacques Laskar in France and, more recently, by Konstantin Batygin and Greg Laughlin in the U.S.A.," Henry noted. "Their computer models predict planetary motions into the distant future and they find a small probability for collisions of Mercury with Earth or Venus sometime in the next billion years or more. The small probability of this happening may be related to the rarity of very dusty planetary systems like BD+20 307."

"There is no question, however," Zuckerman said, "that major collisions have occurred in our solar system's past. Many astronomers believe our moon was formed from the grazing collision of two planetary embryos — the young Earth and a body about the size of Mars — a crash that created tremendous debris, some of which condensed to form the moon and some of which went into orbit around the young sun. By contrast with the massive crash in the BD+20 307 system, the collision of an asteroid with Earth 65 million years ago, the most favored explanation for the final demise of the dinosaurs, was a mere pipsqueak."

In their 1932 novel "When Worlds Collide," science fiction writers Philip Wylie and Edwin Balmer envisioned the destruction of Earth by a collision with a planet of a passing star. The 1951 classic movie based on the novel began a long line of adventure stories of space rocks apocalyptically plowing into Earth.

"But," Zuckerman noted, "there is no evidence near BD+20 307 of any such passing star."

This research is federally funded by the National Science Foundation and NASA and also by Tennessee State University and the state of Tennessee, through its Centers of Excellence program.

Thursday, August 21, 2008

Speed

Traveling Faster Than the Speed of Light: Two Baylor Physicists Have a New Idea That Could Make It Happen

Aug. 11, 2008

by Matt Pene

Two Baylor University scientists have come up with a new method to cause a spaceship to effectively travel faster than the speed of light, without breaking the laws of physics.

Dr. Gerald Cleaver, associate professor of physics at Baylor, and Richard Obousy, a Baylor graduate student, theorize that manipulating the extra spatial dimensions of string theory around a spaceship, using an extremely large amount of energy, would create a "bubble" that could carry the ship faster than the speed of light. To create this bubble, the Baylor physicists believe, manipulating the 10th spatial dimension would alter the dark energy in the three large spatial dimensions: height, width and length. Cleaver said positive dark energy is currently responsible for speeding up the expansion rate of our universe as time moves on, just as it did after the Big Bang, when the universe expanded much faster than the speed of light for a very brief time.

"Think of it like a surfer riding a wave," said Cleaver, who co-authored the paper with Obousy about the new method. "The ship would be pushed by the spatial bubble and the bubble would be traveling faster than the speed of light."

The method is based on the Alcubierre drive, which proposes expanding the fabric of space behind a ship and shrinking space-time in front of the ship. The ship would not actually move, rather the ship would sit in a bubble between the expanding and shrinking space-time dimensions. Since space would move around the ship, the theory does not violate Einstein's Theory of Relativity, which states that it would take an infinite amount of energy to accelerate a massive object to the speed of light.

String theory suggests the universe is made up of multiple dimensions. Height, width and length are three dimensions, and time is the fourth. String theorists used to believe that there were a total of 10 dimensions, with six dimensions that we cannot yet identify because of their incredibly small size. A newer theory, called M-theory, takes string theory one step further and states that the "strings" all things are made of actually vibrate in an additional spatial dimension, called the 10th dimension. It is by changing the size of this 10th spatial dimension, the Baylor researchers believe, that the strength of the dark energy could be altered in a manner that propels a ship faster than the speed of light.

The Baylor physicists estimate that the amount of energy needed to influence the extra dimension is equivalent to the entire mass of Jupiter being converted into pure energy for a ship measuring roughly 10 meters by 10 meters by 10 meters.
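A back-of-envelope sketch of the energy figure above, applying E = mc² to Jupiter's full mass (standard physical constants; the article gives no numbers):

```python
# Mass-energy of Jupiter via E = m * c^2, the conversion the Baylor
# estimate refers to.

M_JUPITER = 1.898e27   # mass of Jupiter, kg
C = 2.998e8            # speed of light, m/s

energy_joules = M_JUPITER * C**2
print(f"E ~ {energy_joules:.2e} J")  # on the order of 1e44 joules
```

For scale, that is tens of orders of magnitude beyond humanity's total annual energy use, which underlines Cleaver's remark that follows.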

"That is an enormous amount of energy," Cleaver said. "We are still a very long ways off before we could create something to harness that type of energy."

The paper appears in the Journal of the British Interplanetary Society.


For more information, contact Dr. Cleaver at (254) 710-2283.


Black holes 'dodge middle ground'

For black holes, there appears to be very little room for mediocrity, astronomers have found.

A study suggests they come in either small or large sizes, but medium-sized ones are very rare or non-existent.

A team of astronomers has examined one of the best hiding places for a middleweight black hole, and found that it cannot possibly host one.

Details of the research are to be published in the latest issue of the Astrophysical Journal.


Black holes are incredibly dense points of matter, whose gravity prevents even light from escaping.

The least massive black holes known are about 10 times the mass of our Sun and form when colossal stars explode as supernovas.

The heftiest black holes are billions of times the mass of the Sun and lie deep in the bellies of almost all galaxies.

That leaves black holes of intermediate mass, which were thought to be buried at the cores of globular clusters.

Full of stars

Globular clusters are dense collections of millions of stars, which reside within galaxies containing hundreds of billions of stars.

Theorists argue that these clusters should have a scaled-down version of a galactic black hole. Such objects would be about 1,000 to 10,000 times the mass of the Sun - medium-sized as far as black holes are concerned.
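To get a feel for these mass classes, the Schwarzschild radius r_s = 2GM/c² gives the size of each black hole's event horizon. The short sketch below is an illustration using rounded physical constants, not a calculation from the study itself.

```python
# Event-horizon (Schwarzschild) radius r_s = 2GM/c^2 for the three
# black-hole mass classes mentioned in the article.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def schwarzschild_radius_m(mass_kg: float) -> float:
    """Radius inside which not even light escapes, in metres."""
    return 2 * G * mass_kg / C**2

for label, solar_masses in [("stellar", 10), ("intermediate", 5_000),
                            ("supermassive", 1e9)]:
    r = schwarzschild_radius_m(solar_masses * M_SUN)
    print(f"{label:>12}: {r / 1e3:,.0f} km")
```

A 10-solar-mass black hole has a horizon only about 30 km across, while a billion-solar-mass one spans a region larger than the Solar System, which is part of why the intermediate class is so hard to detect.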

Now, a team of astronomers led by Stephen Zepf of Michigan State University, East Lansing, has carried out a detailed examination of a globular cluster called RZ2109.

The researchers' work led them to the conclusion that it could not possess a medium-sized black hole.

"Some theories say that small black holes in globular clusters should sink down to the centre and form a medium-sized one, but our discovery suggests this isn't true," said co-author Daniel Stern of Nasa's Jet Propulsion Laboratory in Pasadena, California.

In a previous study, Dr Zepf and his colleagues looked for evidence of a black hole in RZ2109, located 50 million light-years away in a nearby galaxy.

Elusive quarry

Using the European Space Agency's (Esa) XMM-Newton telescope, they discovered the telltale X-ray signature of an active, or "feeding", black hole. But, at that point, they still didn't know its size.

Stephen Zepf and Daniel Stern then teamed up with other researchers to obtain a chemical fingerprint, called a spectrum, of the globular cluster, using the W.M. Keck Observatory on Mauna Kea in Hawaii.

The spectrum revealed that the black hole is petite, with roughly 10 times the mass of the Sun.

According to theory, a cluster with a small black hole cannot have a medium one, too.

"If a medium black hole existed in a cluster, it would either swallow little black holes or kick them out of the cluster," said Dr Stern. In other words, the small black hole in RZ2109 rules out the possibility of a medium one being there, too.

The study does not quite represent the end of the road for medium-sized black holes.

Zepf said it was possible such objects were hiding in the outskirts of galaxies like our Milky Way, either in surrounding "dwarf galaxies" or in the remnants of dwarf galaxies being swallowed by a bigger one.

If so, he said, the black holes would be faint and difficult to find.

Story from BBC NEWS:
http://news.bbc.co.uk/go/pr/fr/-/1/hi/sci/tech/7573364.stm

Published: 2008/08/20 22:30:54 GMT

Thursday, February 28, 2008

Quakes Under Pacific Ocean Floor Reveal Unexpected Circulation System

Research upsets long-held view of volcanism-driven hydrothermal vents

Scientists have discovered a new way in which ocean water circulates through deep-sea vents.
January 11, 2008
Zigzagging some 60,000 kilometers across the ocean floor, Earth's system of mid-ocean ridges plays a pivotal role in many workings of the planet: plate-tectonic movements, heat flow from the interior, and the chemistry of rock, water and air.

Now, a team of seismologists working in 2,500 meters of water on the East Pacific Rise, some 565 miles southwest of Acapulco, Mexico, has made the first images of one of these systems--and it doesn't look the way most scientists had assumed. The results of the National Science Foundation (NSF)-supported research appear in this week's issue of the journal Nature.

It was not until the late 1970s that scientists discovered the existence of vast plumbing systems under the oceans called hydrothermal vents. The systems pull in cold water, superheat it, then spit it back out from seafloor vents--a process that brings up not only hot water, but dissolved substances from rocks below. Unique life-forms feed off the vents' stew, and valuable minerals, including gold, may pile up.

The traditional picture of a hydrothermal-vent system shows water forced down by overlying pressure through large faults along ridge flanks. The water is heated by shallow volcanism, then rises toward the ridges' middles, where vents (often called "black smokers" for the cloud of chemicals they exude) tend to cluster.

"The new images show a very different arrangement," said Rodey Batiza, marine geosciences section head in NSF's Division of Ocean Sciences.

The research team's calculations suggest that water moves a lot faster than previously thought--perhaps a billion gallons per year--through these systems. The water appears to descend instead through a buried 200-meter-wide chimney atop the ridge studied on the East Pacific Rise, run below the ridge along its axis through a tunnel just above a magma chamber, then bubble back up through a series of vents further along the ridge.
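
To put that flow rate into more familiar units, this quick conversion (an illustration; the gallon figure is the article's rough estimate) turns a billion US gallons per year into litres per second:

```python
# Convert the article's rough figure of one billion US gallons per year
# into litres per second.
GALLONS_PER_YEAR = 1e9
LITRES_PER_US_GALLON = 3.785
SECONDS_PER_YEAR = 365.25 * 24 * 3600

litres_per_second = GALLONS_PER_YEAR * LITRES_PER_US_GALLON / SECONDS_PER_YEAR
print(f"{litres_per_second:.0f} L/s")  # about 120 litres per second
```
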

"If you look at images of hydrothermal vents, you come up with cartoons that don't at all match what we see," said lead Nature paper author Maya Tolstoy, a marine geologist at Lamont-Doherty Earth Observatory in Palisades, N.Y.

The images were created using seismometers planted around the ridge to record tiny, shallow earthquakes--in this study, 7,000 of them over seven months in 2003 and 2004.

The shallow quakes cluster neatly, outlining the cold water's apparent entrance. It dives straight down about 700 meters, then fans out into a horizontal band about 200 meters wide, before bottoming out at about 1.5 kilometers just above the magma. Heated water rises back up through a dozen vents about 2 kilometers north along the ridge.

The researchers interpret the quakes as being the result of cold water passing through hot rocks and picking up their heat--a process that shrinks the rocks and cracks them, creating small quakes.

Seawater, forced down into the resulting space, eventually gets heated by the magma, then rises back to the seafloor--much the same process seen in a pot of boiling water. Tolstoy and co-authors believe the water travels not through large faults--the model previously favored by some scientists--but through systems of tiny cracks.

Their chart of the water's route is reinforced by biologists' observations from submersible dives that the area around the downflow chimney is more or less lifeless, while the surging vents are covered with bacterial mats, mussels, tubeworms and other creatures that thrive off the heat and chemicals.

It is a mystery where vent organisms originally came from--some evolutionary biologists believe that life on Earth began with them--and how they make their way from one isolated vent system to another.

These findings could add to an understanding of seafloor currents along which they may move, and of the nutrient flows that feed them, said Tolstoy. The work also has large-scale implications for how heat and chemicals are cycled to the seafloor and overlying waters, she said.

Scientists are still retrieving and analyzing data on the vents and their circulation. In 2006, an ocean-bottom volcanic eruption buried many of their instruments; most of the instruments were lost, but the "survivors" provided new information about how undersea eruptions work.

-- Cheryl Dybas, NSF (703) 292-7734 cdybas@nsf.gov
Kevin Krajick, LDEO (212) 854-9729 kkrajick@ei.columbia.edu

Wednesday, November 28, 2007

'Ultrasound' Of Earth's Crust Reveals Inner Workings Of A Tsunami

ScienceDaily (Nov. 27, 2007) — Research just announced by a team of U.S. and Japanese geoscientists may help explain why part of the seafloor near the southwest coast of Japan is particularly good at generating devastating tsunamis, such as the 1944 Tonankai event, which killed at least 1,200 people. The findings will help scientists assess the risk of giant tsunamis in other regions of the world.

Geoscientists from The University of Texas at Austin and colleagues used a commercial ship to collect three-dimensional seismic data that reveals the structure of Earth's crust below a region of the Pacific seafloor known as the Nankai Trough. The resulting images are akin to ultrasounds of the human body.
The results, published in the journal Science, address a long-standing mystery as to why earthquakes below some parts of the seafloor trigger large tsunamis while earthquakes in other regions do not.

The 3D seismic images allowed the researchers to reconstruct how layers of rock and sediment have cracked and shifted over time. They found two things that contribute to big tsunamis. First, they confirmed the existence of a major fault that runs from a region known to unleash earthquakes about 10 kilometers (6 miles) deep right up to the seafloor. When an earthquake happens, the fault allows it to reach up and move the seafloor up or down, carrying a column of water with it and setting up a series of tsunami waves that spread outward.
Second, and most surprising, the team discovered that the recent fault activity, probably including the slip that caused the 1944 event, has shifted to landward branches of the fault, becoming shallower and steeper than it was in the past.

"That leads to more direct displacement of the seafloor and a larger vertical component of seafloor displacement that is more effective in generating tsunamis," said Nathan Bangs, senior research scientist at the Institute for Geophysics at The University of Texas at Austin, who was co-principal investigator on the research project and co-author on the Science article.
The Nankai Trough is in a subduction zone, an area where two tectonic plates are colliding, pushing one plate down below the other. The grinding of one plate over the other in subduction zones leads to some of the world's largest earthquakes.


In 2002, a team of researchers led by Jin-Oh Park at Japan Marine Science and Technology Center (JAMSTEC) had identified the fault, known as a megathrust or megasplay fault, using less detailed two-dimensional geophysical methods. Based on its location, they suggested a possible link to the 1944 event, but they were unable to determine where faulting has been recently active.
"What we can now say is that slip has very recently propagated up to or near to the seafloor, and slip along these thrusts most likely caused the large tsunami during the magnitude-8.1 1944 Tonankai event," said Bangs.

The images produced in this project will be used by scientists in the Nankai Trough Seismogenic Zone Experiment (NanTroSEIZE), an international effort designed to, for the first time, "drill, sample and instrument the earthquake-causing, or seismogenic portion of Earth's crust, where violent, large-scale earthquakes have occurred repeatedly throughout history."
"The ultimate goal is to understand what's happening at different margins," said Bangs. "The 2004 Indonesian tsunami was a big surprise. It's still not clear why that earthquake created such a large tsunami. By understanding places like Nankai, we'll have more information and a better approach to looking at other places to determine whether they have potential. And we'll be less surprised in the future."
Bangs' co-principal investigator was Gregory Moore at JAMSTEC in Yokohama and the University of Hawaii, Honolulu. The other co-authors are Emily Pangborn at the Institute for Geophysics at The University of Texas at Austin, Asahiko Taira and Shin'ichi Kuramoto at JAMSTEC, and Harold Tobin at the University of Wisconsin-Madison. Funding for the project was provided by the National Science Foundation, the Ocean Drilling Program and the Japanese Ministry of Education, Culture, Sports, Science and Technology.

Adapted from materials provided by University of Texas at Austin

Thursday, April 19, 2007

385-million-year-old fossil reveals first tree

By William Atkins
Thursday, 19 April 2007
British scientists have discovered a 385-million-year-old fossil containing a fern-like frond, which shows the tree stood about eight meters (26 feet) tall. It is the oldest known tree in the fossil record.


The fossilized frond, which is similar to today’s fern, came from the Gilboa Forest in the Catskill Mountains in New York State. The Gilboa Forest contains the oldest tree fossils ever discovered on the Earth.

Rocks and tree-stump fossils some 370 million years old were first exposed when an 1869 flood washed away much of the forest's topsoil. Later, in the 1920s, other stump fossils were found when the Gilboa Reservoir was being built.

The Gilboa forest is well known to paleontologists (scientists who study prehistoric life through plant and animal fossils), who have been studying the area for years. Unfortunately, all they could find were fossilized stumps of trees. They couldn’t find any leaves—that is, until now.

British team members—headed by Chris Berry of Cardiff University, Wales—state that the tree is one of an extinct group of plants known as cladoxylopsids, which are closely related to some ferns and articulates (sphenopsids) that still live today. The cladoxylopsids have a central trunk with smaller, lateral branches coming off it.

All fossils of cladoxylopsids are from the Middle Devonian (417 to 354 million years ago) to Early Carboniferous (354 to 290 million years ago) periods.

Thursday, March 08, 2007

Species under threat: Honey, who shrunk the bee population?

Across America, millions of honey bees are abandoning their hives and flying off to die, leaving beekeepers facing ruin and US agriculture under threat. And to date, no one knows why. Michael McCarthy reports

Published: 01 March 2007

It has echoes of a murder mystery in polite society. There could hardly be a more sedate and unruffled world than beekeeping, but the beekeepers of the United States have suddenly encountered affliction, calamity and death on a massive scale. And they have not got a clue why it is happening.

Across the country, from the Atlantic coast to the Pacific, honey bee colonies have started to die off, abruptly and decisively. Millions of bees are abandoning their hives and flying off to die (they cannot survive as a colony without the queen, who is always left behind).

Some beekeepers, especially those with big portable apiaries, or bee farms, which are used for large-scale pollination of fruit and vegetable crops, are facing commercial ruin - and there is a growing threat that America's agriculture may be struck a mortal blow by the loss of the pollinators. Yet scientists investigating the problem have no idea what is causing it.

The phenomenon is recent, dating back to autumn, when beekeepers along the east coast of the US started to notice the die-offs. It was initially given the name fall dwindle disease, but it has since been renamed colony collapse disorder, to better reflect its dramatic nature.

It is swift in its effect. Over the course of a week the majority of the bees in an affected colony will flee the hive and disappear, going off to die elsewhere. The few remaining insects are then found to be enormously diseased - they have a "tremendous pathogen load", the scientists say. But why? No one yet knows.

The condition has been recorded in at least 24 states. It is having a major effect on the mobile apiaries which are transported across the US to pollinate large-scale crops, such as oranges in Florida or almonds in California. Some have lost up to 90 per cent of their bees.

A reliable estimate of the true extent of the problem will not be possible for another month or so, until winter comes to an end and the hibernating bee colonies in the northern American states wake up. But scientists are very worried, not least because, as there is no obvious cause for the disease as yet, there is no way of tackling it.

"We are extremely alarmed," said Diana Cox-Foster, professor of entomology at Penn State University and one of the leading members of a specially convened colony collapse disorder working group.

"It is one of the most alarming insect diseases ever to hit the US and it has the potential to devastate the US beekeeping industry. In some ways it may be to the insect world what foot-and-mouth disease was to livestock in England."

Most of the pollination for more than 90 commercial crops grown throughout the United States is provided by Apis mellifera, the honey bee, and the value of that pollination to the country's agricultural output is estimated at $14.6bn (£8bn) annually. Growers rent about 1.5 million colonies each year to pollinate crops - a colony usually being the group of bees in a hive.

California's almond crop, which is the biggest in the world, stretching over more than half a million acres over the state's central valley, now draws more than half of the mobile bee colonies in America at pollinating time - which is now. Some big commercial beekeeping operations which have been hit hard by the current disease have had to import millions of bees from Australia to enable the almond trees to be pollinated.

Some of these mobile apiaries have been losing 60 or 70 per cent of their insects, or even more. "A honey producer in Pennsylvania doing local pollination, Larry Curtis, has gone from 1,000 bee colonies to fewer than eight," said Professor Cox-Foster. The disease showed a completely new set of symptoms, "which does not seem to match anything in the literature", said the entomologist.

One was that the bees left the hive and flew away to die elsewhere, over about a week. Another was that the few bees left inside the hive were carrying "a tremendous number of pathogens" - virtually every known bee virus could be detected in the insects, she said, and some bees were carrying five or six viruses at a time, as well as fungal infections. Because of this it was assumed that the bees' immune systems were being suppressed in some way.

Professor Cox-Foster went on: "And another unusual symptom that we are seeing, which makes this very different, is that normally when a bee colony gets weak and its numbers are decreasing, other neighbouring bees will come and steal the resources - they will take away the honey and the pollen.

"Other insects like to take advantage too, such as the wax moth or the hive beetle. But none of this is happening. These insects are not coming in.

"This suggests that there is something toxic in the colony itself which is repelling them."

The scientists involved in the working group were surveying the dead colonies but did not think the cause of the deaths was anything brought in by beekeepers, such as pesticides, she said.

Another of the researchers studying the collapses, Dennis van Engelsdorp, a bee specialist with the State of Pennsylvania, said it was still difficult to gauge their full extent. It was possible that the bees were fleeing the colonies because they sensed they themselves were diseased or affected in some way, he said. This behaviour has been recorded in other social insects, such as ants.

The introduction of the parasitic bee mite Varroa in 1987 and the invasion of the Africanised honey bee in 1990 have threatened honey bee colonies in the US and in other parts of the world, but although serious, they were easily comprehensible; colony collapse disorder is a deep mystery.

One theory is that the bees may be suffering from stress as beekeepers increasingly transport them around the country, the hives stacked on top of each other on the backs of trucks, to carry out pollination contracts in orchard after orchard, in different states.

Tens of billions of bees are now involved in this "migratory" pollination. An operator might go from pollinating oranges in Florida, to apples in Pennsylvania, to blueberries in Maine, then back to Massachusetts to pollinate cranberries.

The business is so big that pollination is replacing honey-making as the main money earner at the top end of the beekeeping market, not least because in recent years the US has been flooded with cheap honey imports, mainly from Argentina and China.

A typical bee colony, which might be anything from 15,000 to 30,000 bees, would be rented out to a fruit grower for about $135 - a price that is up from $55 only three years ago. To keep the bees' energy up while they are pollinating, beekeepers feed them protein supplements and syrup carried around in large tanks.

It is in these migratory colonies where the biggest losses have been seen. But the stress theory is as much speculation as anything else. At the moment, the disappearance of America's bees is as big a mystery as the disappearance of London's sparrows.

Fish contaminated with mercury 'pose worldwide threat to health'

By Jeremy Laurance, Health Editor
Published: 08 March 2007


A worldwide warning about the risks of eating mercury-contaminated fish is to be issued by an international group of scientists today.

Three times more mercury is falling from the sky than before the Industrial Revolution 200 years ago, the scientists say.

Fish absorb the toxic chemical, which pollutes the seas, posing a risk especially to children and women of childbearing age. The role of low-level pollutants such as lead and mercury on the growing brain has been known for decades and measures have been taken to reduce exposure to a minimum. But the scientists say more must be done.

The warning, called the Madison Declaration on Mercury Pollution, is based on five papers by mercury specialists summarising the current state of knowledge on the chemical, published in the international science journal Ambio. It presents 33 key findings from four expert panels convened over the past year. Every member of the four panels backed the declaration, which was endorsed by more than 1,000 scientists at an international conference on mercury pollution in Madison, Wisconsin, in the US last August.

However, it runs counter to research published last month by British scientists, which found that pregnant women who ate the most fish had children who were more developmentally advanced, with higher IQs and better physical abilities.

The British researchers said that while mercury is known to harm brain development, fish also contain omega-3 fatty acids and other nutrients which are essential to brain development. They studied 9,000 families taking part in the Children of the 90s project at the University of Bristol and concluded, in The Lancet, that the risks of eating fish were outweighed by the benefits.

The US scientists focused on the risks of mercury which they say now constitute a "public health problem in most regions of the world". In addition to its toxic effects on the human foetus, new evidence indicates it may increase the risk of heart disease, particularly in adult men.

While developed countries have reduced mercury emissions over the past 30 years, these have been offset by increased emissions from developing nations.

The uncontrolled use of the metal in small-scale gold mining is contaminating thousands of sites around the world, putting 50 million inhabitants of mining regions at risk and contributing 10 per cent of the atmospheric burden of the pollutant attributable to human activities.

The global spread of the threat is revealed in increased mercury concentrations now being detected in fish-eating species in remote areas of the planet. The impact on marine eco-systems may lead to population declines in these species and in fish stocks.

Professor James Wiener, of the University of Wisconsin, said: "The policy implications of these findings are clear. Effective national and international policies are needed to combat this global problem."

In the US, official government advice is for pregnant women to limit their consumption of all seafood, including white fish, oily fish and shellfish, to no more than 12oz (340g) a week in order to limit their exposure to mercury.

In the UK, the Food Standards Agency advises expectant mothers to avoid shark, swordfish and marlin and to limit their consumption of tuna, because these are the fish with the highest levels of mercury.

The key findings

* Three times more mercury is falling from the sky today than before the Industrial Revolution

* Eating fish is the primary way most people are exposed to the toxic metal

* There is solid scientific evidence of the toxic effects of mercury on the developing foetus

* Mercury exposure now constitutes a public health problem in most regions of the world

* New evidence suggests exposure to mercury may increase the risk of heart disease and stroke in men

* Increased mercury emissions from developing countries over the past 30 years have outstripped declines in the developed world

* Increasing mercury concentrations are now being detected in fish-eating wildlife in remote areas of the planet