Walls between Cultures

Walls demarcate frontiers, deter predators, and protect people, their families, and their possessions. Stone walls punctuated by towers and gates have surrounded cities in the Middle East and North Africa since biblical times, when fortifications were often built in concentric rings and outlined by a ditch or a moat for added protection. Walls were a prominent feature of many medieval and early modern cities; from Baku to Delhi to York to San Juan, visitors still relive a past in which constant vigilance and strong defenses were a feature of daily life by walking these walls.

In their empire-building days, the Romans erected walls to keep out barbarians. Much of one of their best-known efforts, Hadrian's Wall, is still standing. Built across northern England at its narrowest point, the wall was intended to serve as a barrier against large, swift enemy assaults and to block small parties of raiders. Protection was afforded by ditches on both sides of the wall and by troops garrisoned in forts at intervals along its length. It was briefly superseded as a boundary marker of Roman imperialism by the Antonine Wall, 100 miles (160 km) to the north across southern Scotland; this subsequent effort was abandoned, however, after a revolt by lowland tribes forced a withdrawal of Roman forces back to Hadrian's Wall.

The longest and most famous defensive wall ever constructed, the Great Wall of China, was built in the 3rd century B.C. along China's northern boundary. Its rubble, tamped-earth, and masonry sections were intended to thwart raids by the nomadic horsemen of the northern steppes. In the succeeding century, China confirmed its supremacy in central Asia by extending the wall westward across the Gobi desert. The establishment of safe caravan routes, known as the Silk Road, soon followed, bringing Western traders to the market towns that grew up around the gates of the Great Wall and facilitating the exchange of cultural practices, knowledge, and ideas as well as material goods. Some historians contend that the Great Wall of China was also an indirect cause of the fall of Rome: the nomadic hordes repulsed by the wall pushed westward, crowding the Visigoths and leading the latter to invade Roman-occupied lands.

Walls are still built to bolster security and effect segregation during times of political upheaval. Morocco, which has occupied Western Sahara since 1976, emphasized its intent to annex the former Spanish colony by constructing a 1,550-mile (2,500-km) wall of sand, barbed wire, and land mines along the length of the border with Algeria. Besides enclosing approximately 80% of Western Sahara, including most of its towns, mineral resources, and coastal fisheries, the wall isolated guerrillas pursuing self-determination for Western Sahara from their alleged constituency, the Saharan tribes now living inside the wall under de facto Moroccan rule.

The summer of 2002 saw the beginning of yet another wall. Israel, a nation already set apart from its neighbors by religious and cultural differences, began construction of a proposed 215-mile (345-km) fence along its border with the Palestinian West Bank, the goal of which was to stop suicide bombers from entering Israel. This unilaterally determined border, if completed, may reduce the number of terrorist incidents, but its impact on long-term peace in the Middle East remains to be seen.

Modern-day warfare has made fortifications such as walls considerably less effective at keeping out intruders; however, the issues of peace and security remain pressing. Attempting to achieve protection by building a wall may now be compared to living in the eye of a hurricane—that is, living under a transient and dangerous illusion of calm and tranquility. Perhaps today, a more effective strategy might involve tearing down barriers to understanding rather than building walls.


Facts About the United Service Organizations

A touch of home for the U.S. military--that was the aim of Congress in creating the United Service Organizations (USO) in 1941 at the request of Pres. Franklin D. Roosevelt, who believed that private groups should handle the recreational needs of the country's fast-growing military. A touch of home and support for the military remain the USO's goals today.

Since its inception, the USO has served as a bridge between the American people and the nation's armed forces. A private nonprofit organization, the USO relies on donations and volunteers. Although it is not part of the U.S. government, it is endorsed by the Defense Department and the president, who has always served as the USO's honorary chair.

The USO began as a coalition of six civilian agencies: the National Catholic Community Service, National Jewish Welfare Board, National Travelers Aid Association, Salvation Army, Young Men's Christian Association, and Young Women's Christian Association. Their efforts solidified during World War II as the ranks of the military swelled.

From 1940 to 1944, U.S. troop strength increased from 50,000 to 12 million, and the need for services increased proportionately. To meet the needs, USO centers were set up in more than 3,000 communities worldwide to provide soldiers a "home away from home," and sprang up in venues that included barns, castles, churches, log cabins, museums, and yacht clubs. Soldiers visited these locales to dance, see movies, seek counsel, and just relax.

During the 1940s more than 428,000 USO performances were presented at outposts and more than 7,000 entertainers traveled overseas. But by 1947, with the war having ended, support for the USO waned. In December of that year Pres. Harry Truman thanked the USO for fulfilling its mission with "signal distinction" and granted the USO "an honorable discharge" from active service. But a civilian advisory committee recommended that the USO either be reactivated or another civilian agency be created for the same function.

When the United States entered the Korean War in 1950, the USO regrouped and opened 24 clubs worldwide, its entertainers performing daily for troops in Korea. After the war, more than 1 million service members remained abroad, and the need for USO services continued. The necessity prompted the U.S. Defense Department to direct the USO to expand globally.

It was amid the turbulent 1960s that the USO first set up centers in combat zones. Bob Hope took his USO Christmas show to Vietnam for the first time in 1964, and the shows continued through the next decade. When the draft ended in the early 1970s, the need for the USO was reassessed. Prompted by a United Way report, the Defense Department conducted a review of the USO's global operations in 1974. The study concluded, "If there were no USO, another organization would have to be created ... Isolation of the military from civilian influences is not, we believe, in the interest of this nation."

In peacetime the USO has helped service members become involved in their communities. For example, "The USO Virtues Project of USO Seoul and USO Camp Casey in Korea train service members who then volunteer to teach English in Korean elementary schools," said Donna St. John, the vice president for communications for USO World Headquarters. The USO has also helped troops make the transition to civilian life. In 1975 the organization made its own transition and moved its international headquarters from New York City to Washington, D.C.

In the early 1980s, the USO broadened its entertainment program and renewed its original mission to act as a liaison between the American people and the country's armed forces worldwide. With the Middle East wars of the early 1990s and the new millennium, U.S. troops were deployed in new areas, with little diversion or contact with home. The USO subsequently opened three centers in the Middle East and set up a Mobile Canteen program, whose volunteers drive refreshments, books, magazines, and videos to wherever troops are deployed.

Today the USO operates 123 centers, including 6 mobile canteens, with 73 located in the continental United States and 50 overseas. The organization relies on more than 33,500 volunteers, from its World Board of Governors who work at headquarters in Arlington, Va., to those who serve holiday dinners. These volunteers provide some 371,000 hours of service annually at a total contribution of more than $3 million. Globally, service members and their families visit USO centers more than 5 million times a year.

To help service personnel and their families acclimate to new environments, the USO offers services including counseling, housing assistance, and cultural seminars. The group also works with community groups and businesses. Ms. St. John cited some recent examples, including a holiday gift drive (a partnership between the USO of northern Ohio and the local media) that benefited more than 3,000 military children. In addition, USO centers at the Dallas/Fort Worth and Atlanta airports, which, according to the USO, are "the only two destination points in the United States for rest and recuperation flights from Iraq and Afghanistan," partner with local businesses to host meals for soldiers who must travel during the holidays.

In a tribute to U.S. troops, the USO held Patriotic Festival 2005: A Salute and Celebration of Our Military. The "welcome home" celebration, which was sponsored by Virginia-based groups and the USO of Hampton Roads, Va., was another example of how the USO continues to serve as a bridge between the American people and the nation's armed forces, "until everyone comes home."


Interesting Facts About Trick-or-Treat

When costumed children mark the evening of October 31 by going door to door begging for sweets, they are participating in rituals similar to those that have been practiced for centuries. Halloween, now so much a part of American tradition, has both pagan and Christian roots.

The Celtic festival of Samhain, which marked the beginning of the new year, was traditionally celebrated on November 1, when summer was over and the harvest gathered. In a time when gods and spirits were very much a part of everyday life, the Celts believed that on New Year's Eve the worlds of the living and the dead came together. The spirits of those who had died during the previous year wandered the earth making mischief and playing tricks on the living. In an effort to avoid persecution by evil spirits, villagers took to the streets dressed as supernatural beings themselves, in masks and frightening costumes.

In the 9th century, as part of an effort to shift people from pagan to Christian worship, November 1 was fixed as All Hallows or All Saints' Day. October 31 became All Hallows Eve, a name that was later corrupted to Halloween. In about 1000 A.D., November 2 became All Souls' Day. The church also encouraged the substitution of Christian practices for pagan traditions.

To discourage the pagan Celtic custom of leaving food and wine on the doorstep for wandering spirits, the church promoted the practice of "souling"—believed to be the precursor of modern trick-or-treating. On the day set to honor dead souls, Christian beggars wandered from village to village pleading for Soul Cakes, made from square pieces of bread and currants. For each cake collected, the beggar promised to say prayers for the giver's dead relatives. Thus, the more cakes given, the more prayers said. These prayers were more than tribute. It was believed that the souls of the dead remained in limbo for a time after death and that prayer could speed their way to heaven.

Over the years, British children began to go "a-souling" in their neighborhoods, asking for food, money, and even ale. In England, this tradition was eventually assimilated into the celebration of Guy Fawkes Day (November 5). In Scotland and Ireland, however, the Halloween custom of going door to door costumed as supernatural beings, to be placated with hospitality, continued.

The tradition came to America with settlers from the British Isles, especially with the massive influx of Irish immigrants during the 1840s potato famine. The American festival of Halloween commemorates both Celtic and early Christian heritage, and celebrates the integration of diverse religious and cultural traditions into a popular secular holiday.


The Secrets of Cival: How One Ancient City Is Rewriting Maya History

The ancient Maya city of Cival may represent that most tantalizing of archaeological prospects: a find that forces a sweeping reanalysis of conventional thinking about an ancient culture. Although the Maya left behind many fabled and enduring monuments, there are relatively few written records of their 2,000-year hold over modern-day Mexico and Central America. Consequently, archaeologists must decipher Maya history in blurry hindsight, with finds such as those made at Cival potentially forcing vast revisions of our image of the Maya.

Located in east-central Guatemala, Cival was not considered to be of any extraordinary scholarly or historical significance until a team of archaeologists, led by the Vanderbilt University researcher Francisco Estrada-Belli, uncovered a massive 15-by-9-foot (5-by-3-meter) stone mask abandoned by looters and hidden in a tunnel. The mask depicted a fanged deity likely associated with maize, the Maya's principal crop. This deity, in turn, is a symbol of Maya royalty, who typically claimed to be descended from the maize god.

The significance of the Cival mask--and its twin counterpart, found later in the same tunnel--was that Estrada-Belli dated the mask to 150 B.C., well before the Maya were thought to have developed a pronounced class system that included royalty. If Estrada-Belli's initial conclusions were correct, the Maya likely anointed theocratic kings centuries earlier than previously thought, making the Cival mask a discovery that could radically alter our understanding of the pace of Maya cultural development.

Historians have divided Maya history into three periods: the Preclassic (also called Formative), from 2500 B.C. to 300 A.D.; the Classic, from 300 to 1000 A.D.; and the Postclassic, which ran from 1000 A.D. until the Spanish conquistadors subjugated most of Latin America in the early 1500s. The Preclassic Period saw the Maya evolve from tribes of hunter-gatherers to village-centered farmers, with rudimentary forms of Maya pottery, sculpture, and architecture appearing in parallel with this transformation.

Conventional thinking held that, as villages grew in size, the Maya developed a complex class structure, with religious rulers establishing authority in response to the social pressures of a burgeoning population. Thus was born the Classic Period, when the ruling class began erecting the vast pyramid complexes and sacred monuments for which the Maya are best known, all while undertaking advanced astronomical and mathematical studies. The Postclassic Period saw Maya society inexplicably dissolve, with cities neglected or abandoned and royal authority discarded in a reversion to a rural, agrarian society.

According to the aforementioned chronology, the Cival masks, which are a symbol of a royal institution, should not have appeared for roughly another 400 years. Spurred on by this discrepancy, Estrada-Belli began to take a fresh look at the entire Cival site and found further evidence that a complex religious ruling class may have held power during the Preclassic Period.

Estrada-Belli's team eventually found a collection of buried offering jars filled with ornate jade axes and smaller carvings, indicative of more advanced dynastic rituals than should have been present during the Preclassic Period. Moreover, excavations of the surrounding area revealed that at its peak Cival was a city of 10,000 people, with an urban layout meticulously planned to give inhabitants an unobstructed view of the autumnal equinox. These features indicate an advanced society with knowledge of astronomy, architecture, and collective government, all of it tied to religious ritual. None of these accomplishments should have been present before the Classic Period, yet the foundations for many of Cival's earliest buildings were likely laid around 300 B.C., 600 years earlier than anyone thought possible.

As Estrada-Belli has begun to present his findings from Cival, all evidence suggests that this was a city ahead of its time and that the grandeur of Maya history began centuries earlier than conventional wisdom allows. If these findings are confirmed by the discovery of other artifacts, especially at other archaeological sites, the chronology of the Maya may have to be rewritten.


The Secret of Range Creek: Waldo Wilcox and the Fremont Indians

For more than 50 years, a man named Waldo Wilcox guarded a secret treasure on his ranch in eastern Utah: possibly the greatest single collection of artifacts belonging to the Fremont Indian tribe, a people who mysteriously disappeared 700 years ago. Today, the former Wilcox ranch represents one of the most significant archaeological treasure troves in North America, but one that is now imperiled by two simple facts: Waldo Wilcox is no longer standing guard, and the rest of the world knows what is there.

In 1951 Waldo Wilcox purchased 4,200 acres (1,700 ha) of hardscrabble land 130 miles (210 km) southeast of Salt Lake City---an area that includes Range Creek Canyon---for the purpose of raising cattle. Shortly thereafter, he and his family discovered dozens of perfectly preserved Indian encampments and hundreds of pristine artifacts---precisely the sort of treasures that archaeologists would seek to investigate and looters would try to steal. Waldo Wilcox decided to prevent the exploitation and destruction of these artifacts (a form of cultural property), as best he could, by actively defending his land against intruders and publicity seekers for more than five decades.

Fewer than 25 years before Wilcox purchased his ranch, an expedition from Harvard University had traveled through the same region, uncovering evidence of a theretofore unidentified Native American tribe. The expedition's leader, Noel Morss, dubbed these people the Fremont after the Fremont River that supplied water to the area. In the years between Morss's original identification and Wilcox's land purchase, precious little evidence was uncovered to further illuminate how the Fremont lived or why this centuries-old culture inexplicably vanished some time around the year 1300.

Indeed, there was very little evidence that the Fremont were a people distinct from their better-known contemporaries, the Anasazi tribe, who also mysteriously disappeared before European settlers ventured into the American Southwest. Even today, only four types of artifacts can be uniquely attributed to the Fremont: a rod-and-bundle type of basketry that no other Native American tribe employed; an unusual type of moccasin fashioned from deer hock; trapezoidal depictions of human beings in personal artifacts, pictographs, and petroglyphs; and a thin, gray-clay type of pottery. Of these artifacts, only the pottery is consistently sturdy enough to survive the harsh southwestern climate. Thus, concentrations of Fremont artifacts such as those found at Range Creek are of incalculable value.

What makes the Fremont so intriguing---besides their mysterious disappearance---are their attempts at agriculture in the arid Utah climate. While primarily a hunter-gatherer people, the Fremont in general and the Range Creek Fremont in particular sometimes cultivated a unique breed of "dented" corn that could withstand the rigors of the climate. Moreover, the Fremont often stored their hard-won corn harvests in heavily defended granaries atop sheer cliffs and mesas. Range Creek boasts several well-preserved Fremont granaries, some in nearly inaccessible nooks within the rocks, offering a glimpse into the Fremont's precarious life as embattled farmers, desperately working to protect their winter stores against rivals by hiding their harvests within natural geographic defenses.

Ignorance of the Range Creek Fremont encampments protected the artifacts for years, and for much of their known existence Waldo Wilcox stood guard, keeping the vast majority of the Range Creek archaeological cornucopia intact until his recent sale of the land to a charitable trust. In summer 2004, the state of Utah (which now holds the title to the Wilcox ranch) finally opened select portions of Range Creek to the press, displaying the quiet and fragile secrets that survive thanks to Waldo Wilcox's lifelong vigil. While the subsequent publicity has drawn increased attention from looters, scientists now at least have a fighting chance to preserve and record the Range Creek finds before, as happened with the Fremont tribe itself, they succumb to the forces of history.


The Preservation of Petra

The ancient Jordanian desert city of Petra regained fame as a filming location for the 1989 movie Indiana Jones and the Last Crusade. The restoration of the city, however, is still incomplete, and many of Petra's archaeological treasures have yet to be uncovered. The challenge in the meantime is for Petra to withstand the tests of time, tourism, and temperature, as well as the effects of a growing Bedouin population.

Petra is often referred to as the "rose-red city" because of the color of the sandstone from which it was built. The city came into being more than 2,000 years ago. At that time Arabian nomads called the Nabateans settled down with the wealth they had gained from trading spice and incense. From this prosperity they created Petra. The city flourished as the capital of the Nabatean kingdom from 300 B.C. until 106 A.D. It was then occupied by Rome. The city remained an important center until it was half destroyed by an earthquake in 363.

Petra faded from memory until the 12th-century Crusades. At that time King Baldwin I of Jerusalem recognized the strategic value of the site and established an outpost there. Later the Crusaders built a mountain fortress on the same spot, which was abandoned at the end of their campaign. During the ensuing centuries, sandstorms and wind eroded Petra's buildings, and sand and debris buried much of the city. Only Bedouin shepherds knew of Petra's existence until the Swiss explorer Johann Ludwig Burckhardt rediscovered it in 1812.

Archaeologists have shown an interest in the area since 1812. However, it was not until the 1960s that conservation of the site became a priority. Jordan's Queen Noor and the United Nations Educational, Scientific and Cultural Organization (UNESCO) took on Petra as a formal conservation project. Global attention became focused on the site's artifacts and concerns for their preservation. During excavations archaeologists found tombs and tomes and art and artifacts. A world-class water system that in its heyday sustained 20,000 people was also uncovered. However, the sandstone that made for a perfect aquifer and allowed memorials to be carved with relative ease had, over time, made Petra vulnerable. Wind and water have taken a toll on the fragile stone. The abandonment of the city's once flourishing water system had caused soil erosion and flooding.

From 1960 to 1970 the World Bank and UNESCO conducted a study of the site. In 1985 UNESCO added Petra to its World Heritage List. The Petra National Trust (PNT), a private, nonprofit organization, was founded in 1989 with the aid of Jordan's Queen Noor. She secured a team of UNESCO specialists who, along with Jordanian experts, produced a plan for Petra National Park. By 1993 Jordan had allocated some 100 square miles (259 sq km) of canyon country for the park.

Designation of Petra as a World Heritage site, however, was not enough to ensure the area's preservation; the site itself needed long-term protection. Queen Noor set up a committee to ensure that architectural development in the surrounding area would remain in keeping with that of Petra. She also allotted land to the Bedoul, a Bedouin tribe that some researchers believe is related to the Nabateans. The Bedoul had lived in the city's rock tombs until 1985. Still of concern, however, are population growth, conservation, and the management of the tourism that has brought both prosperity and problems to Petra.

Jordan's tourism trade generates more than 17,000 jobs and brings in about $1 billion a year. In the late 1980s an estimated 100,000 people visited Petra. By the early 1990s that number had risen to 400,000. The number of hotels in the area has risen from 5 in 1994 to some 60 in 2004. To manage this influx, the World Bank set up a $27 million plan in 1996 for the construction of a drainage system to prevent untreated waste from being discharged into the environment. In addition, terraces from the Nabatean period have been restored. There are plans to renovate the roads to accommodate increased traffic. Furthermore, trees are being planted to help absorb water during the rainy season and to limit building development.

Efforts to accommodate visitors have made the lives of the Bedoul easier, but they have also detracted from the beauty of the region. Concurrently, the area's Bedoul population, most of whom make a living from tourism, is growing. In 1985 there were 40 Bedoul families; now there are 350. As a result, housing construction is on the rise. Owing to the increasing cost of land, the Bedoul have added on to their existing homes, and they have also been given permission to expand into an area adjacent to Petra.

Some experts say that the issue of conservation may be even more immediate than increases in tourism and population. In 1993 Jordan's antiquities department and the German Agency for Technical Cooperation (GTZ) launched a joint conservation plan. It included the development of a natural mortar ideal for Petra's sandstone structures. Not content with restoring monuments, the GTZ called on Jordan to maintain restoration efforts. To that end the GTZ set up the Conservation and Restoration Center in Petra independent of Jordan's antiquities department and staffed it with experts. The GTZ has spent at least $3.5 million on the project.

Meanwhile, there are the usual bureaucratic problems. The tourism ministry is at odds with the department of antiquities over the unattractive appearance of the scaffolding used during renovations. Archaeologists have criticized the PNT for commissioning a Swiss firm to build dams in the valleys leading to the Siq, a narrow cleft that is the main entrance to Petra. The purpose of the dams is to avoid the recurrence of the 1963 tragedy in which 21 French tourists drowned in flash floods. Archaeologists argue, however, that the $1.5 million project financed by the Swiss government damages Petra's integrity. They also argue that the excavation process creates a powder that has coated the walls of the Siq, dulling the rocks' colors.

In the meantime archaeological excavations continue. In 1998 researchers found a pool complex near the Great Temple. In 2000 they unearthed a Nabatean villa near the Siq. In 2003 they found more tombs cut into the rock beneath the Khazneh, or Treasury, the most famous of Petra's monuments. Despite these finds, experts estimate that only about 5% of Petra has been unearthed. Archaeologists plan more digs at the Treasury, projects in which the PNT intends to remain involved. The trust is also keen to implement a system of watershed management and to create alternate trails to the site (visitors now enter Petra on horseback through the Siq, kicking up dust that is harmful to the monuments). It further hopes to establish procedures for community planning, conserve the Neolithic village of Beidha, and revive indigenous vegetation to prevent erosion. Throughout all this activity, the Petra National Trust remains committed to preserving this great rose-colored city of stone.


The Placebo Effect: Mind over Matter

An individual receives an injection from his or her doctor to reduce the discomfort caused by a recurrence of rheumatoid arthritis. The physician, in line with an ethical obligation, has already explained to the patient that he or she will be the beneficiary of either an effective new drug that is in the final stages of research or a placebo. After taking the medicine, the individual reports that the pain and stiffness have eased and that the swelling and inflammation have objectively improved. What the patient does not know is that the only ingredient in the shot was an inactive saline solution. The individual was given a placebo, and what he or she experienced was the placebo effect.

A placebo is an intervention (pill, injection, or treatment) that has no inherent healing or transforming properties. The placebo effect, the body's biochemical response to this inert intervention, is initiated by a suggestion made to the mind. In other words, the imagination has curative powers.

Although placebos have always had a place in medicine, ethical controversy has long surrounded their use. References to the healing power of symbols and belief systems predate Hippocrates, but it was not until 1811 that a definition of the word placebo appeared in medical literature. In 1834 placebos were first used as a control in a scientific study; this has since become their predominant role in medicine.

In research it is essential to separate any contaminating effects of the procedure from those of the active experimental (treatment) agent. In one commonly used method for controlling contaminants, the double-blind approach, suitable volunteers are assigned to either an experimental group or a placebo group. Neither the investigators nor the subjects are aware of individual group affiliations. The placebo subjects are exposed to the exact same protocol, receive the same attention, and have the same expectations as the experimental subjects. Any differences in outcome between the two groups can therefore be directly attributed to the treatment agent, since all other aspects of the experiment are identical.
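
To make the assignment and blinding logic concrete, here is a minimal illustrative sketch in Python. It is not drawn from any particular study; the subject names, the fixed seed, and the simple two-arm split are assumptions made purely for illustration of how neither party can infer group membership until the key is unsealed.

```python
import random

def double_blind_assign(subjects, seed=None):
    """Randomly split subjects into two arms hidden behind neutral codes.

    Returns (blinded, key):
      blinded -- dict mapping each subject to a neutral code ("A" or "B")
      key     -- dict mapping each code to its true arm; kept sealed until unblinding
    """
    rng = random.Random(seed)
    codes = ["A", "B"]
    rng.shuffle(codes)  # which code means "treatment" is itself decided at random
    key = {codes[0]: "treatment", codes[1]: "placebo"}

    shuffled = list(subjects)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    blinded = {s: codes[0] for s in shuffled[:half]}
    blinded.update({s: codes[1] for s in shuffled[half:]})
    return blinded, key

# Hypothetical usage: investigators and subjects see only the codes; outcomes
# are compared between code groups, and the sealed key is opened last.
volunteers = [f"subject_{i:02d}" for i in range(1, 21)]
blinded, sealed_key = double_blind_assign(volunteers, seed=42)
print(blinded["subject_01"])  # prints "A" or "B"; its meaning stays in sealed_key
```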

Although it has been established that placebos have been effective in reducing pain, lowering cholesterol and blood pressure, relieving insomnia and depression, and even shrinking tumors, the phenomenon can work in an undesirable manner as well (sometimes called the "nocebo" effect). Take, for example, the individual who develops symptoms of the rumored negative side effects of a drug that he or she has not actually received or the person who starts to hallucinate after falsely believing that a mind-altering substance has been ingested.

Clearly, expectations, setting, and past experiences are essential components in determining the strength and direction of the placebo effect.

What about individual differences? Are some people more susceptible to placebos than others? Is there a placebo-responder personality? Are there situations or states of mind that make people more vulnerable to the power of the placebo? The answer to all of these questions is yes. Studies indicate that the person who exhibits a profile that reflects the qualities of open-mindedness, trust, extroversion, compliance, and suggestibility will be more responsive. Moreover, individuals in an anxious or uninhibited state also are more sensitive to a placebo's effects.

In order to explain how placebos work on a cellular level, scientists turn to the branch of medicine known as psychoneuroimmunology, which, as its name implies, combines the disciplines of psychology, neurology, and immunology. The two basic axioms of this field are that every change in thought almost simultaneously creates variations in physiology, and that the body has an innate ability to spontaneously heal itself. An individual's expectations, social context, and past learning experiences induce a pattern of involuntary bodily responses based on classical conditioning that influence the autonomic, central nervous, and endocrine systems. If the mind believes that an intervention will produce a certain effect, the internal organs, including the immune system, or "internal pharmacy," will act in concert, in much the same way that Pavlov's dog salivated at the sound of a bell.


The Paradox of Women's Status in South Asia

South Asia has produced more female leaders than anywhere else in the world, yet it has one of the world's worst records on women's rights. This striking contradiction was noted in the 2000 "Human Development in South Asia" report, published by the United Nations Development Programme (UNDP) and devoted to the state of women in South Asia.

Indeed, it was Sirimavo Bandaranaike of Ceylon (now Sri Lanka) who in 1960 became the world's first female prime minister. Her daughter, Chandrika Kumaratunga, held national office from 1994 to 2005. In 1988 Pakistan's Benazir Bhutto became the first woman to head the government of an Islamic state. Indira Gandhi led India, the second most populous country in the world, for nearly two decades. In Bangladesh two women, Sheikh Hasina Wazed and Khaleda Zia, recently served as prime ministers.

The striking number of women in leadership roles, however, has had little, if any, influence on the status of South Asian women generally. According to the UNDP report, the region has the highest illiteracy rates in the world and the largest gap between male and female literacy rates. Women face numerous limitations and violations of basic human rights---from restrictions on inheriting and owning property to purdah (seclusion of women from public observation) to the outlawed but still widely observed taboo on the remarriage of widows. Women hardly have a voice in the region's decision-making forums; they make up only 7% of South Asian parliamentarians.

Is there really a contradiction between the incidence of women in power in South Asia and the disenfranchisement of women in the region? As paradoxical as it may seem, both stem from the same roots---the patriarchal structures and customs prevailing in South Asian societies. In these traditional, male-dominated societies, with their cultural preference for sons as inheritors of property and bearers of the family lineage, the contributions of women are often viewed and treated as marginal. At the same time the clan-based structure of these societies makes family succession in power a vital factor in preserving social stability. However able, strong, and charismatic each of the above-mentioned female leaders might be, none of them likely would have been elected had she not been a spouse or daughter of a deceased national leader. (Zulfikar Ali Bhutto, Jawaharlal Nehru, and Mujibur Rahman were all succeeded in national politics by their daughters, and Solomon West Ridgeway Dias Bandaranaike was succeeded by both his wife and his daughter.)

Given the hereditary, hierarchical political culture, it is not surprising that the illustrious female presidents and prime ministers are not easily perceived, or followed, as role models by the majority of women in their own countries. Even when free public education is available to girls (as in India), most female pupils drop out of school at an early age; the prevailing custom demands that they work to earn their dowry instead.

With half of the South Asian nations' populations---namely, women---relegated to low social status, including working as unpaid laborers in family businesses, these nations continue to be economically handicapped. The UNDP report concludes with an agenda for action to help South Asian women achieve gender parity in legal, political, and social arenas and urges the establishment of agencies empowered to be strong advocates for women's equality at both the national and the global level.


The Fight against Polio Is Now a Fight against Time

One of the primary goals of the World Health Organization (WHO) is to rid the world of polio. Some fear that the time to eradicate the disease may be now--or never. When WHO first declared war on polio, forming the Polio Eradication Initiative with partners Rotary International, the Centers for Disease Control and Prevention, and UNICEF (the United Nations Children's Fund) in 1988, there were approximately 350,000 cases of polio in 125 countries on five continents. After an international investment of U.S.$3 billion and the combined efforts of 20 million volunteers in more than 200 countries, just under 700 new cases were reported in 2003, and polio remained endemic in only 6 countries. Yet ultimate victory is at risk.

Polio begins as an intestinal virus. Its usual victims are young children living in unhygienic surroundings, drinking contaminated water. Not everyone who ingests the virus becomes seriously ill; in fact, most have relatively minor flu-like symptoms or none at all. Only the unlucky ones become paralyzed and, in some cases, die.

The original goal of the Polio Eradication Initiative was to wipe out polio worldwide by the year 2000--a goal that proved overly ambitious for several reasons. Polio is not easy to diagnose; many who have it display few signs of illness yet remain contagious for several weeks after all symptoms are gone, and the virus itself can live outside the human body for up to 60 days. By the time one case of polio is confirmed, a multitude of others may be incubating, so that a large number of children over a wide geographic area must be vaccinated in response to a single confirmed case. In addition one vaccination may not be sufficiently prophylactic; a child with an intestinal upset may pass the vaccine through his or her system too quickly for it to be effective. Therefore, repeat vaccinations of all children are necessary.

The cornerstones of the eradication initiative have been national immunization days, during which all young children in a target country are vaccinated whether or not they have been vaccinated before, and immediate response to actual outbreaks, targeting all children under the age of five in the surrounding area for immunization. WHO provides the expertise, UNICEF provides the vaccine, Rotary International provides local advertising and support, and government health officials arrange for the necessary thousands of trained vaccinators. In a representative response to a reported case in Karnataka, India, 4 million children in 13 districts were vaccinated in three days.

At the start of 2007, polio remained endemic only in India, Pakistan, Afghanistan, and Nigeria, with Nigeria representing over 70% of the total number of cases reported worldwide in 2006. The spread of the disease to areas that have reported no cases of infection in recent years is a major concern, because many countries that had eliminated the disease ended their routine immunization programs in order to redirect their limited financial resources to more critical health concerns. Hence the urgency in WHO's push to wipe out the disease once and for all.

The high incidence of polio outbreaks in Nigeria has been traced to the northern state of Kano. Muslim leaders in the area opposed the immunization campaign in the belief that the vaccine contained hormones that would sterilize the area's female population. As this erroneous information circulated, some parents refused to have their children immunized, and the incidence of new polio cases multiplied. The Nigerian government, however, has confirmed its trust in the purity of the UNICEF-provided vaccine and has publicly reasserted its commitment to the eradication of polio through immunization campaigns. Now it must convince Kano's state and local governments to support that commitment.

The final phase of polio eradication consists of three steps: containment of the remaining chains of polio transmission, certification by independent experts of the interruption of transmission region by region (and, ultimately, worldwide), and development of postcertification policies (determining whether there is a need for continuing immunization). Realization of WHO's goal will require the dedicated efforts of government officials in the nations where polio remains endemic and continuing financial help from wealthy nations, several of which have already pledged their assistance. The complete eradication of polio will save millions of children from a devastating disease and will also demonstrate the miracles attainable through global cooperation. The legacy of the world's largest health initiative will include valuable lessons in the fight against other vaccine-preventable diseases, and its success will provide confidence that other, even larger, battles can be won in the 21st century.


Sylvester Graham and His Crackers

By the 1830s the American diet was largely based on meat and white bread; fruits, vegetables, and coarse breads were not thought to contain much nutrition. So it is quite understandable that most of his contemporaries regarded Sylvester Graham as a pure eccentric. A Presbyterian minister, Graham was hated and sometimes attacked by butchers, bakers, and liquor and tobacco companies for railing against meat, potatoes, tobacco and alcohol, coffee and tea, and chocolate and pastries, and for preaching the consumption of pure water and coarse bread made of unsifted flour. At the same time Graham had a fair number of devoted followers, known as Grahamites, who founded "Graham boardinghouses" in Boston and New York City, where his dietary regimen was observed.

Beyond diet, Graham recommended hard beds, cold baths, open windows, loose clothing, vigorous exercise, and daily toothbrushing (then a revolutionary idea). Influenced by the perfectionist impulse of the Second Great Awakening, Graham cast sin in a physical framework and advocated fighting it through bodily self-restraint and suppression of libidinal impulses. At the height of the 1830s health craze he inspired, Graham lectured to huge audiences and wrote several books on temperance and nutrition, including the extremely popular Lectures on the Science of Human Life (1839). Graham's ideas influenced Amos Bronson Alcott's Fruitlands community, Brook Farm, and the Oneida Community. Graham himself suffered poor health throughout his life; although he strictly followed his own lifestyle recommendations, he died at the age of 57 after a round of failed Grahamite cures. Many of his ideas, however, have since been proven correct and have become widely accepted.

Graham's original whole-grain wheat bread recipe eventually found a more appealing successor in the Graham cracker (according to most sources, Graham invented the snack in 1829). Many commercial bakers tried to market the treat after Graham's death, but it was not until 1898 that the National Biscuit Company (now Nabisco) came up with its Nabisco Graham Crackers. Today Nabisco turns out some 50 million packages a year to meet the unwavering demand. The irony is that these Graham crackers are made with bleached white flour, a deviation that would have infuriated Sylvester Graham, who regarded refined flour as one of the greatest dietary evils.


Responding to SARS: The Reform of Canada's Health-care System

Severe acute respiratory syndrome (SARS), a disease caused by a newly identified coronavirus, was first recognized in China in November 2002. Within weeks the potentially fatal virus had spread throughout the world. According to the Swiss-based World Health Organization (WHO), a total of 8,098 people worldwide became infected with SARS during the outbreak, and 774 of them died. As of July 2003 no new cases of SARS had been reported, and the outbreak was believed to have been contained (although a lone case from December 2003 caused some concern among officials). Canada was hard hit by SARS, reporting more cases than any nation outside of Asia, with the highest concentration of cases occurring in and around Toronto, Ontario. According to the WHO, there were 251 probable cases of SARS in Canada, 43 of them fatal.

In April 2003 the Canadian federal government appointed an 11-member panel to examine the Canadian public health system's handling of the SARS outbreak. Called the National Advisory Committee on SARS and Public Health, the panel was headed by the University of Toronto's dean of medicine, Dr. David Naylor. In October the committee issued a 224-page report entitled "Learning from SARS: Renewal of Public Health in Canada," which detailed the events surrounding the SARS outbreak in Canada, provided a comprehensive analysis of the Canadian health system, and outlined a series of recommendations on ways in which Canada might handle future outbreaks of significant infectious diseases.

One of the numerous inadequacies cited in the report was the long-standing shortage of funding and personnel in the field of public health care. In addition, the panel noted that frontline health-care employees experienced a lack of governmental cooperation and received conflicting responses from the various levels of government. For example, the Ontario Ministry of Health and Long-Term Care (OMHLTC) did not share information with infectious disease experts at the National Microbiology Laboratory in Winnipeg, Manitoba, citing potential violations of patient confidentiality as the primary reason. Another problem with Ontario's provincial health-care system was the conflict between the offices of the province's chief medical officer and commissioner of public health, Dr. Colin D'Cunha, and the commissioner of public safety, Dr. James Young. Both officials later acknowledged that the dual leadership structure was confusing, and they advocated the position that a single official should have been in charge. The panel also found that the WHO-mandated airport screening for SARS was executed ineffectively by Canadian airports.

The committee called for a restructuring of the Canadian public health system with the aim of addressing these deficiencies. It advocated an overhaul that would include the creation of a central agency modeled on the U.S. Centers for Disease Control and Prevention (CDC). The aim of this body would be to provide information that enhances health decisions and to promote and facilitate the sharing of health information. At the time, Canada's disease control apparatus was divided between Winnipeg's National Microbiology Laboratory and the OMHLTC.

The new agency, the Public Health Agency of Canada, was established in 2004. It was headed by the Chief Public Health Officer of Canada, who reported directly to the Minister of Health. The agency was composed of four branches: Infectious Disease and Emergency Preparedness, Health Promotion and Chronic Disease Prevention, Public Health Practice and Regional Operations, and Strategic Policy, Communications and Corporate Services. The stated goal of the Public Health Agency of Canada was to "work closely with provinces and territories to keep Canadians healthy and help reduce pressures on the health care system."


Reindeer Husbandry

Reindeer, domesticated thousands of years ago by the indigenous peoples of Scandinavia and Siberia, served the Arctic tribes for centuries as a universal resource, much as the bison served the American Indians of the Great Plains. The tribes relied on the deer for milk and meat, for clothing and housing, for tools, for transportation, and as decoys to entice wild reindeer. In the 17th and 18th centuries, reindeer husbandry became the foundation of livelihood and the primary source of income for many of the northernmost peoples. Reindeer herding allowed the inhabitants of the Arctic regions of Russia, Finland, Norway, and Sweden to practice sustainable development in environmentally sensitive areas.

By the mid-20th century, however, it began to look as though herding as a full-time occupation might die out. The encroachment of civilization made the native herders an endangered species. Unless the destruction of pasture land could be controlled and the development of processing infrastructure hastened, reindeer husbandry was doomed. Since herding's demise was considered to have strong negative implications for both the native peoples and the land they traditionally occupied, central and local governments increased their involvement in its protection.

National objectives of cultural heritage preservation and ecosystem management inspired government programs for the development of modern processing facilities, transportation, and markets; a legal framework securing the rights of herders; formal vocational training courses; and mandated interaction on land-use matters (especially logging) between the herding cooperatives and the government agencies responsible for forest and park management. Although, despite this assistance, reindeer herding has continued to decline in some areas, in other places it appears to be holding its own.

Unlike reindeer, North American caribou (now classified with reindeer as a single species) have never been domesticated. Toward the end of the 19th century, the depletion of game in the Alaskan territory by U.S. hunters, trappers, and whalers was perceived by missionaries to be causing hardship to the Eskimos. The Rev. Sheldon Jackson, general agent for education in Alaska, presented a plan to Congress for the importation of domesticated reindeer from Siberia, as a means of providing a native domestic industry and a renewable food source for the Eskimos. Over the next decade (until the Russian government forbade it), 1,280 reindeer were imported. Under the tutelage of Lapp (Saami) herders, reindeer herds were established, and attendant schools taught English and instituted apprenticeship herding programs. For the next 40 years, reindeer sleds were used to carry supplies, passengers, and mail along the Yukon. (They were cheaper to operate than dog teams---reindeer graze, while dogs must be fed.) Incidentally, the introduction of reindeer (a mobile food source and emergency means of transport) also played a large part in the rescue of stranded miners and icebound whalers at the end of the 19th century. Today, although it has declined in most parts of Alaska, reindeer herding continues to be a central part of the economy of the Seward Peninsula.

In 1929 the Canadian government became similarly concerned that loss of northern caribou herds would result in widespread starvation of the Inuit of the Northwest Territories. Hoping to avoid a crisis, the government purchased a herd of 3,000 reindeer and hired the 62-year-old native Laplander and veteran herdsman Andy Bahr and a small team of Lapp herders and Inuit guides to bring the deer from Buckland, Alaska, to the Mackenzie Delta, a trip of 1,500 miles (2,414 km) across largely unmapped hostile territory. Projected to take approximately 18 months, the trip actually took almost six years. Many of the reindeer did not survive the trip, but enough endured to found the Delta herd that continues to prosper today.


Peace through Preservation: The Peace Parks Movement

Environmentalists and politicians are often at cross-purposes. But sometimes the two join forces to create a political, economic, and environmental entity that serves both. Such is the case with the cross-border peace park. More than a simple wildlife preserve, a cross-border peace park is designed to traverse political boundaries. These divisions, while relevant to humans, are usually meaningless and even harmful to regional ecology. By creating an environmental preserve independent of national boundaries, peace parks help foster common interests between neighboring countries. Such parks also provide a basis for political and environmental dialogue.

An example of this green diplomacy involves a multi-island oceanic region in a disputed area between Russia and Japan. The region encompasses Shiretoko National Park, on Hokkaido, the northernmost island of Japan, and a series of land and sea wildlife preserves in and around 4 of the more than 30 Russian-controlled Kuril Islands. In 2005 the Shiretoko Peninsula was among seven natural sites added to UNESCO's World Heritage List.

The proposed Shiretoko peace park is important because the ownership of 4 of the Kurils is disputed between Russia and Japan. Although the Soviet Union gained the islands from Japan in 1945, Japan never recognized Russian territorial claims. Though oblivious to the dispute, the area's wildlife is being affected by humans. Creating a jointly operated peace park would alleviate the encroachment on bird and fish habitat caused by development on Hokkaido. The park would also lower tensions over ownership of the largely pristine, Russian-controlled Kurils.

The cross-border peace park concept predates World War II. The model originated in 1925 with the Krakow Protocol between Poland and Czechoslovakia, under which twin national parks were created on the nations' border to symbolize peaceful coexistence. Some, however, argue that the first jointly operated, and perhaps the most famous, cross-border peace park is Waterton-Glacier International Peace Park, created by the United States and Canada in 1932. Combining Glacier National Park in Montana with Waterton Lakes National Park in Alberta, Canada, the park symbolizes sociopolitical cooperation. Its creation also serves as a reminder that humans may draw boundaries across the Rocky Mountains, but migratory, free-range wildlife do not.

Over the last 70 years, peace parks have become a global enviro-political phenomenon. More than 100 parks are in operation, and cross-border or transfrontier parks exist on five continents. Meanwhile, international organizations use environmentalism and wildlife tourism as means of defusing tensions and fostering relations. The peace parks movement has been particularly active and effective in sub-Saharan Africa, where nations beset by civil unrest, military conflict, and AIDS are working to stabilize territories and use wildlife resources to obtain park grants and increase tourism.

As evidenced by these projects, peace parks have value beyond diplomacy and conservation. They often serve as engines of economic renewal. National boundaries still serve numerous legitimate purposes. However, peace parks demonstrate that even the most fiercely maintained divisions can be overlooked for the common good.


Paleoethnobotany: Plant Use in Prehistoric Societies

Paleoethnobotany, also known as archaeobotany, is the subdiscipline of archaeology that studies how plants were used by prehistoric societies. Until the mid-20th century, paleoethnobotanists generally did not work at the site, relying instead on other archaeologists to gather materials in the field. As the discipline developed, however, the need to ensure that appropriate sampling strategies and analyses were used in collecting and organizing samples prompted archaeobotanists to go on-site to collect their own data. They could then more accurately interpret the uncovered organic remains to determine the importance of various plants to the area's former residents.

Under normal conditions, of course, plant remains tend to decay very quickly; but organic material that has been carbonized by charring may survive for extended periods of time. Consequently, paleoethnobotanists focus on sifting through hearths and middens (dunghills and refuse piles) in an effort to extract material that will provide clues as to how plants were utilized by the site's former occupants. The study of an individual fireplace is likely to reveal only what was burned there in the last fires or perhaps just the final fire (although study of multiple fireplaces may offer cumulative evidence); refuse dumps are able to provide a better indication of the lasting patterns of consumption and are therefore more likely to yield enough statistical data to allow the archaeologist to reach meaningful conclusions.

As a rule, material scooped up on-site is a mixed bag of plant remnants combined with soil, bone, and animal dung. The primary means of separating the plant remains from the other media is the flotation process. A simple but time-consuming and painstaking task, flotation relies on the assumption that organic material--seeds, fibers, and so on--generally floats, while inorganic material--sand, shells, or ceramics--generally does not. Using a series of graduated sieves, often nested together, archaeobotanists are able to filter plant remains from other debris. Because the collected material tends to be desiccated, charred, or waterlogged, it is extremely fragile, and tweezers or a small paintbrush may be needed to remove specimens from the sieve. It is dirty work, at best, but ultimately very revealing to the botanist.

Once sorted, organic remnants are generally examined using a process called reflected light microscopy. Specimens are cut with a razor to expose three anatomical planes (transverse, radial, and tangential) and studied under a microscope. Species are sorted, identified, and cataloged, and the accumulated data are then analyzed and interpreted--and often fiercely debated, reconsidered, and reinterpreted as additional material is gathered. Because in some areas of the world local customs have changed little over time, paleoethnobotanists may also look to contemporary village life for guidelines to assist them in interpreting their archaeological findings.

Within the field of paleoethnobotany is the subspecialty of palynology, or pollen analysis. Just as a seed is characteristic of a particular plant, so too is its pollen, and like charred seeds, pollen may be preserved under the proper conditions for long periods of time in soil and rock formations. Analysis of the pollen fossil record reveals what plants grew in a given area during a particular era, thus allowing palynologists to infer the climatic conditions that would have been necessary to support such plants (temperature, rainfall, growing season). Comparison of these findings with current climate conditions in various parts of the modern world provides evidence of environmental change, allowing palynologists to conclude, for example, that the climate prevailing in one area 5,000 years ago was similar to the current climate in an area 1,000 miles (1,600 km) to the north of the subject locale.

What, then, does paleoethnobotanical evaluation tell the archaeologist? First, it provides information about how local vegetation was used by inhabitants for fuel, food, and the construction of houses and boats. Further, the collection of plant material allows paleoethnobotanists to make anthropological deductions about multiple aspects of prehistoric culture and environment--including the process and timing of plant cultivation and animal domestication, the development of trade practices and local industry, the occurrence of demographic change, and alterations in climatic conditions--as well as to hypothesize about why these shifts occurred.

By enhancing our knowledge of how ancient peoples lived, paleoethnobotany helps us understand and appreciate the role humans play in altering their environment. For example, certain cultural changes once thought to be caused by climate shifts, such as drought, are now believed to have been caused by population growth and the resultant deforestation, overgrazing, or soil exhaustion of farmland. In cases such as these, the study of paleoethnobotany not only teaches us about the past but may also provide a lesson for the future.


Human Origins and the Neanderthal

The American Heritage Dictionary of the English Language, 4th edition, includes among its definitions of Neanderthal the slang meaning "crude, boorish or slow-witted person." Recent evidence, however, suggests that this usage perpetuates a misconception of these early hominids and their lifestyle. Neanderthals take their name from the Neander Valley in Germany, the location of an early anthropological find that provided evidence of their existence. They ranged and hunted for about 250,000 years during the late Pleistocene Epoch in the general area of modern Europe and west Asia, fading into extinction approximately 28,000 years ago. In contrast to their dull-witted reputation, Neanderthals were able to survive for a quarter-million years in harsh and changing environments, proof of their well-developed hunting skills. Our direct human ancestors, by comparison, probably did not move into and survive in colder climates until about 40,000 years ago.

Neanderthals were portrayed as subhuman brutes by most anthropologists of the 19th and early 20th centuries. This characterization may have been influenced by the hominid's decidedly nonhuman facial features, which included a large brow and nose, sloping forehead, and receding chin. The subspecies was generally depicted as having a hairy appearance, not unlike the mythical "Bigfoot." Seen as little more than crude scavengers, Neanderthals were assumed to have been incapable of language or finer tool skills.

Countering the "brute" representation of the Neanderthal is a 2003 study that found that the hominid's thumb and forefinger could oppose each other, probably giving the subspecies significant fine manual dexterity. In addition, recently discovered evidence suggests that Neanderthals deliberately butchered hunting kills with stone axes and knives and shared the proceeds of the hunt in an organized way with their clan members. Some of the unearthed cutting tools were fashioned to be hafted onto wooden shafts or handles, while others show evidence of use for scraping hides so that the skins could be made into clothes or shelters. Additionally, Neanderthals appear to have had a sense of style and color, using bone, ivory, and animal teeth as ornamentation.

Skeletal evidence suggests that Neanderthals generally lived about 45 years and probably looked a great deal like cold-adapted humans, such as Eskimos. Short in stature with long trunks and arms, the Neanderthal was stronger than a human of similar size, with a brain that was, on average, slightly larger than that of modern humans. Skeletal remains also indicate that multiple fractures were not uncommon among Neanderthals. Indeed, the fact that some of these hominids survived disabling bone injuries indicates that the subspecies was capable of sustaining long-term caring relationships. Neanderthals lived in close groups of 10 to 15, meaning that an individual could rely on the help of fellow group members during a period of recuperation. Pollen discovered near excavated remains indicates that the hominids buried their dead with care, deliberately incorporating symbolic objects and flowers into the burial.

Many have asked whether the Neanderthals are part of the human genetic line. Research has focused on the "out-of-Africa" versus "multiregional" theories of human development. Proponents of the out-of-Africa view see all modern humans as descendants of an original restricted gene pool in central Africa. The multiregionalists, on the other hand, see human origins as more diverse, resulting from genetic mixing among regional populations, including, they assume, Neanderthals. Researchers have been exploring Neanderthal mitochondrial DNA (deoxyribonucleic acid) for clues to any interbreeding, but there is considerable dispute as to whether or not these small samples of bone DNA support the theory.

Whether Neanderthals do or do not form a part of our modern gene pool, they do represent another strand in the complex evolution of modern humans. The Neanderthals in ice-age central Europe, eating meat the clan had hunted, butchered, and shared; living and caring in an extended group; and burying their dead, were not, perhaps, so very different from us.


Genealogical Research: A Family Key

People may be motivated to research their family tree for many reasons. Common incentives include a sense of history and identity, the need to prove inheritance or pedigree, ancestor worship or salvation, and simple curiosity regarding one’s background. Historically, familial memory and oral history were keys to the establishment of one’s family lineage. In modern times the proliferation of written records has largely replaced the need for dependence on such unsubstantiated sources—although not all written records are dependable, and a certain amount of skepticism ought to be employed in their evaluation.

Establishing one’s genealogy today still usually begins with the analysis of family folklore and personal mementos such as journals, family Bibles, or photo collections. However, in much of the world relationships can now be confirmed through written records. There are many sources of such records; vital statistics and civil registration offices, which maintain birth, marriage, and death records, are among the best. Although not infallible, these types of documents are generally considered primary sources of genealogical information because the data were recorded at the time of the noted event. Other good sources include:

  • houses of worship, which may have much earlier information than civil records offices and often have baptismal, confirmation, and burial records;
  • cemeteries, which provide both tombstone inscriptions (sometimes including references to family relationships, military service, and memberships in various organizations in addition to the deceased’s date and place of birth and death) and sextons’ records (which may reveal plot ownership and provide keys to family relationships);
  • census records, which, although they are sometimes flawed, may also provide many details about everyday life that are not otherwise available;
  • military service records, which provide dates of service and discharge details, and military pension records, which sometimes include details such as medical documents;
  • immigration and naturalization records, which include ship passenger lists and citizenship petitions;
  • probate records, which document estate inventories, real estate ownership, and guardians appointed for minor children;
  • local newspapers, which often carry obituaries as well as reports of social events, milestone anniversaries, and wedding or birth announcements.
Tracing one’s roots has become easier owing to the explosion in archival material available on the Internet. A wealth of genealogical Web sites facilitates the establishment of connections with distant relatives, as do online phone directories, where it is possible to search for others with the same family name. In many cases the ease of obtaining information electronically has eliminated the need to travel physically to the location of the source documents, making it feasible for nearly anyone to research his or her family background and steadily expanding the sum of genealogical knowledge that will be readily available to the next generation of researchers.


Emma Goldman, Anarchist and Feminist

The American anarchist and feminist Emma Goldman was arrested so often that she always carried a book to her public-speaking engagements, so that she would have something to read to help pass the time in jail. Around 1900 she lectured frequently on the social ills facing American workers, addressing topics that included union organizing, the eight-hour workday, equality for women, and free speech. Such issues were considered controversial at the time; they earned Goldman the appellation "Red Emma" and led many to perceive her as a threat to established society.

Goldman was born in 1869 in the city of Kovno (Kaunas), Lithuania. In order to avoid an arranged marriage, she immigrated to America with her older half-sister in 1885. An early sympathy with the martyrs of Chicago's Haymarket Riot led her to embrace the ideals of anarchism. She believed that large organizations--whether governmental, religious, or commercial--were inherently at odds with the interests of the people. Throughout her life, she publicly opposed any institution, including marriage, private property, and the military, that involved the exploitation of individuals or the suppression of personal freedom.

Anarchists generally maintained that the triumph of their movement would inevitably uphold the rights and improve the status of women. Goldman, however, did not agree. Trained as a nurse and midwife in Vienna, Austria, she had worked among the immigrant population of New York City's Lower East Side during the 1890s. Her experiences there convinced her that enforced childbearing restricted women's economic and sexual freedom; she saw firsthand the burdens that bearing too many children imposed on women, particularly those who needed to work for a living. No political solution, she came to believe, would be enough to overturn the inequality and subjugation of the sexes. Goldman argued that women's issues needed to be addressed separately and that progress would begin only when women took control of their sexual and reproductive rights.

Goldman was well known for incorporating sexual politics into her anarchism, and she invariably drew her biggest crowds when she lectured on birth control. She believed that women had the right to control their bodies without governmental interference and saw the choice of whether or not to bear a child as a matter of personal freedom.

Goldman first aired her views on birth control in September 1900, while in Paris for the International Anti-Parliamentary Congress. There she also attended a meeting of the Neo-Malthusian Congress, a secret organization that advocated birth control, and obtained birth-control literature and contraceptives, both of which she brought back to the United States. On her return, Goldman continued to lecture on birth control and was eventually arrested on Feb. 11, 1916, in New York City during a lecture on family planning. She was charged under the Comstock Act of 1873, which banned the "trade in and circulation of obscene literature and articles of immoral use." Convicted, she was given the choice of paying a $100 fine or serving 15 days in the workhouse; she chose incarceration, a decision that won her widespread support among progressive writers and journalists.

An early mentor of Margaret Sanger, Goldman was largely responsible for bringing Sanger into the struggle for the legalization of birth control. Their goals, however, were different. Sanger focused her efforts on achieving the legalization of contraceptives, whereas Goldman viewed birth control as one aspect of the larger struggle of women to overcome the political, economic, and social forces that oppressed them. Treating the restriction of birth control as just one instance of the suppression and exploitation of women, Goldman continued to address the wider issues of personal freedom and equality.

When the United States went to war in 1917, Goldman's public advocacy of anarchism alarmed the federal government. Convicted of conspiracy against the draft, she spent two years in jail and was deported to Soviet Russia in 1919. Goldman lived the remainder of her life outside the United States. After her death in 1940, she was buried in Chicago, near the graves of the executed Haymarket anarchists who had inspired her.


Draft Resistance in America: Moral Obedience and Civil Disobedience

With roots in early colonial times, the American antidraft movement has evolved from individual acts of resistance based on religious convictions to mass demonstrations and concerted acts of civil disobedience for political, economic, and ideological purposes as well. At the heart of resistance, however, are individuals who choose not to comply with compulsory military service and who suffer the legal and social consequences of their decisions.

From the early 1600s many American colonists struggled with the ideological implications of conscription and the use of a colonial militia to settle the New World. The objection on religious grounds of Quakers and other pacifist groups to military support or service came to be widely accepted and had an impact on colonial draft policies during the American Revolution. In 1775 the Continental Congress passed a statute exempting religious conscientious objectors (COs) from serving in the military.

Congress enacted the first federal conscription law during the Civil War. Under the Draft Act of 1863, all males between the ages of 20 and 45 were required to register. That same year violent draft riots erupted in New York City. Many rioters protested the exemptions the act offered to the wealthy: the substitution exemption allowed a draftee to hire another person to serve in his place, and the commutation exemption released a draftee from service if he could pay $300. Other protesters objected to being drafted to fight for the freedom of slaves. These riots differed from earlier antidraft protests in that they were not based on principles of pacifism or religious ideology but instead were born from economic inequality and the Northern protesters' hostility to the antislavery cause.

World War I saw the emergence of mass resistance to draft laws. The first national draft since the Civil War was enacted in 1917. Men were drafted directly into the armed services and were immediately subject to military law. Resisters surfaced from a variety of religious, political, and economic backgrounds and included Quakers, Mennonites, Hutterites, Molokans, socialists, anarchists, and some members of labor unions. For the first time in U.S. history, religious, ideological, and political resisters joined in direct action. Protests against conscription, the war, and U.S. economic policy erupted across the country, but few CO exemptions were granted. The draft was discontinued following the armistice in 1918.

In anticipation of U.S. involvement in World War II, Congress reinstituted conscription in the Selective Training and Service Act of 1940. Seen by most Americans as a "just" war, World War II was a challenge for the antidraft movement. Very few eligible men applied for CO status. During this time, however, antidraft submovements emerged. Members objected to perceived encroachments on the civil liberties of some cultural and racial groups in the United States. One group of resisters protested the armed services' Jim Crow segregation policy, Puerto Rican and Hopi groups emerged to protest their treatment by the U.S. government, and some Japanese Americans endorsed draft resistance to protest the violation of their civil rights while incarcerated in relocation camps. Most of the resisters were tried and found guilty of violating the law. Many were sent to prison, where they continued resistance efforts through hunger strikes and other nonviolent demonstrations.

The Vietnam War provided the backdrop for the largest and most organized antidraft demonstrations in U.S. history. Protestors practiced mass acts of resistance, including pickets and sit-ins at local draft boards, large demonstrations in cities throughout the country, and public draft-card burnings often staged for the media. This sustained effort used resistance tactics drawn from a maturing civil rights movement and combined the ideologies of religious, political, and moral opponents of the Vietnam War.

Seen at first as a "youth movement," antidraft efforts grew from small grassroots initiatives to large organizations, including the Chicago Area Draft Resisters (CADRE) and the New England Resistance. Student organizations sprang to life on campuses across the country, organizing resistance campaigns. But non-draft-age groups embraced draft resistance as well. The Catholic priest Philip Berrigan and three other protesters made a well-publicized antiwar statement when they poured blood over records at a Baltimore, Md., draft board office. A group including the peace activists Dr. Benjamin Spock and the Rev. William Sloane Coffin, Jr., supported the young draft resisters' efforts by publishing their "Call to Resist Illegitimate Authority" in 1967.

By the end of U.S. military involvement in Vietnam in 1973, the antidraft movement had gained considerable ground. President Richard Nixon called for the end of induction in late 1972, and in 1975 Pres. Gerald Ford proclaimed the end of the draft registration requirement. Perhaps the most telling evidence of the changed political climate in the post-Vietnam era was Pres. Jimmy Carter's pardon of convicted Vietnam War draft resisters in 1977.

The hiatus was short-lived, however. By 1980 President Carter had reintroduced a draft registration requirement, stating that reinstatement would show America's "resolve as a nation" in the face of Soviet aggression in Afghanistan. The new requirement spurred a large resistance movement across the country. Still in effect today, the 1980 registration proclamation has spawned an increasing number of draft-age resisters who have failed to register. There has been no actual induction since 1972.


Clandestine Cargo: Smuggling along the Southeast Coast of England

Lively stories of 18th- and 19th-century smuggling ventures along the south coast of England are easy to uncover--especially in Kent and Sussex, where the narrow English Channel offered ready access to the riches of continental Europe. Colorful tales of gangs, murder, and fortunes gained and lost are still part of the region's folklore.

Behind this smuggling heyday, however, lies a long history of hardship and bleak prospects for economic improvement that dogged the area during the 1700s and early 1800s, creating fertile ground for a culture of "free trade" between England and the Continent.

Smuggling along the coast may have begun as early as the 13th century, when King Edward I created one of the earliest known export taxes, imposed on wool sold across England's borders. This levy served to replenish the king's treasury, which had dwindled precipitously owing to internal strife during the reign of his father, King Henry III; but the tax also cut into the profits of the area's primary source of income, and soon cargoes of wool were stealthily shipped across the channel by bands of smugglers. The king recruited a small customs staff to collect the duty at officially appointed ports but paid little attention to the secret exports from the Kent and Sussex coastlines.

By the 1700s both import customs duties and export taxes were commonplace. These revenues were needed to bankroll wars and defense operations throughout Great Britain. The government also sought to protect home industry by restricting the export of certain items, including wool. As a result of the increasingly inflated price of imported commodities such as tea, liquor, and woven goods and the decreasing profits from wool and other local industries, smuggling became a full-blown business. Contraband operations crossed class lines, creating profits and a better standard of living for common people, merchants, and gentry alike, with the result that "free-traders" were often respected locally as public benefactors. Illegal trade became so widely accepted that whole villages were involved and roadways were apportioned for smugglers and their wares. While a hierarchy of customs officials and excisemen was created to stem the flow of illegal goods, some of these government officials were amenable to bribes from the smugglers---especially in areas where the pursuit of their duty was made difficult and hazardous by widespread local opposition to it. Not every resident in every area town supported the smugglers, but very few supported the excisemen.

Smuggling became a highly organized operation in the southeast, often financed by London merchants and powerful landowners. Because the stakes were high (during much of the 1700s and 1800s smuggling was punishable by death) and the cargoes costly, armed gangs often oversaw the transport and sale of the imported and exported goods. Large numbers of men with weapons would line the beaches as ships were unloaded, sometimes observed by the area's small revenue forces, who were powerless to stop the operation.

The Kent and Sussex gangs gained a fearsome reputation and were known for their violence toward all who opposed them. The Mayfield gang was one of the earliest known organized groups; it successfully ran contraband off the coast of East Sussex during the early 1700s (by one estimate, handling more than 11 tons of tea and coffee a year) until its leader, Gabriel Tomkins, was captured and convicted in 1721. Reputedly a wily character, Tomkins testified about his own smuggling operations before an official inquiry into corruption in the customs services and was rewarded with a post as a customs officer himself.

By 1730 many gangs of smugglers peppered the region. The Hawkhurst gang, which dominated the landscape during the 1740s, reportedly organized the forces of other regional gangs and aligned itself with area gentry who offered their land and buildings for use as the smugglers' base camps, landing sites, and storerooms.

Later, the impressive smuggling operations from the east Kent town of Deal became worrisome for customs officials. Deal boatbuilders perfected a style of lightweight galley, capable of making very quick runs across the channel and of negotiating waters too shallow for revenue cutters to follow. In January 1784 the Deal fleet was burned by order of Prime Minister William Pitt, but this proved to be only a brief solution to the area's illegal activity.

At the close of the Napoleonic Wars in 1815, the British government was free to turn its attention to thwarting the coastal smugglers. The Coast Blockade Service established a successful shore patrol in 1816 that substantially increased the difficulty of landing smuggled goods and the hazard to smugglers who persisted in trying. In 1831 the Coastguard Service took over from the Coast Blockade; by 1845 its vigilance, together with repealed or reduced import duties on many items, enabled the coast guard to report that smuggling in the region had been effectively eliminated.


Chinese Americans and Racism: The Rock Springs Massacre

By the middle of the 19th century, the United States was experiencing a large influx of Chinese immigrants. China's economy had taken a downturn, and many of its citizens had come to the United States in search of greater job opportunities and higher wages. The bulk of the Chinese immigrants worked on the transcontinental railroad and in the gold fields of California. Unfortunately, the boom in railroad construction was of limited duration, and opportunities in gold prospecting soon declined, leaving many of the Chinese unemployed. In their attempt to secure jobs in a nation in the midst of a broad economic depression, they began to compete with white Anglo-American workers for the few remaining job opportunities and soon were subjected to racism and violence.

Once the railroads were complete, the Union Pacific Railroad needed coal to fuel its locomotives and steam engines; to fulfill this need it established the Union Pacific Coal Company (UPCC). Mining in the mid-1800s was a labor-intensive and very dangerous occupation, but in order to supply the vast quantities of coal required while simultaneously reducing production costs, the UPCC in 1875 cut pay at its Rock Springs, Wyo., mine from five cents to four cents per bushel and increased the number of hours worked by 25%. The workers responded with a protest strike, which the UPCC broke by bringing in Chinese laborers as strikebreakers.

Over the next decade jobs became scarce, and resentment over the reduced pay and increased work hours continued to build. The UPCC nevertheless continued to import Chinese workers, and by 1885 they represented 65% of the population in Rock Springs. During that time the white miners tried on several occasions to convince the Chinese to join them in demanding better wages and working hours and striking if the demands were not met. On each occasion the Chinese refused, making them the focus of the white miners' resentment. Around 1883 the white workers in Rock Springs organized in an effort to get the Chinese expelled from the area.

On Sept. 2, 1885, the growing resentment erupted into violence. According to accounts given by the surviving Chinese workers in a memorial that they presented to the Chinese consul in New York City in 1885, the white miners were especially resentful of the Chinese working at Pit Number Six, which was considered a desirable assignment. On September 2 approximately ten armed white workers entered Pit Number Six demanding that the Chinese not be allowed to work there. When the Chinese tried to reason with the whites, they were attacked, and three of their number were injured. On hearing of the violence, the foreman closed down the pit. It is believed that the miners who carried out the attack belonged to the Knights of Labor, a union for workers of all trades that strongly backed laws protecting American workers against competition from foreigners. One such law was the Chinese Exclusion Act of 1882, which prohibited immigration from China for ten years. (After a decade, the policy was extended indefinitely, and in 1902 it became permanent. Only in 1943, when China became a key U.S. ally against Japan, was the act finally repealed.)

After Pit Number Six was closed down, the white miners regrouped in the "Whiteman's Town" area of Rock Springs. By 2 P.M. they had entered the city's Chinatown, and by 9 P.M. they had driven the Chinese out of Rock Springs and had looted and destroyed their homes. Twenty-eight Chinese were killed in the violence, many were wounded, and hundreds fled into the countryside.

The day after the incident, Francis E. Warren, the territorial governor of Wyoming, called on the federal government to help restore order in Rock Springs. In response Pres. Grover Cleveland deployed federal troops to the mining town, and one week after the massacre, the troops escorted the Chinese back to their homes. Although 16 men were tried for crimes against the Chinese, all of them were acquitted. The UPCC continued to employ immigrants at the Rock Springs mine, but as pressure from the white community increased, the company started to phase out the use of Chinese workers. The lack of jobs and growing local resentment--evidenced in headlines in the town's newspaper declaring "The Chinese Must Go"--eventually induced most of the Chinese to leave Rock Springs.

The Rock Springs massacre was not the only anti-Chinese incident to occur at the time. Similar acts of violence against this ethnic minority took place in Tacoma, Wash., and Los Angeles, Calif. Although neither was as deadly or as destructive as the Rock Springs massacre, these incidents, taken together, help to illustrate the theme of ethnic prejudice that runs through much of American history.


Breathless Exploration: Dr. Robert D. Ballard and the Black Sea

Dr. Robert D. Ballard is arguably the most acclaimed undersea explorer of his generation, and the Black Sea is potentially the richest hunting ground for marine archaeologists on the planet, so it is little wonder that Ballard has become absorbed in his work there. If his Black Sea explorations are successful, they could unlock secrets of human settlement, trade, and even mythology dating back 7,500 years.

Our story begins around 5500 B.C., when the Black Sea was most likely an inland freshwater lake separated from the Mediterranean Sea by an earthen dam. Glacial melting at the end of the Quaternary Ice Age caused the Mediterranean to overflow its boundaries and, after centuries of pressure, finally overcome this dam, flooding the lake and quickly expanding it into the saltwater body we now call the Black Sea. Some scientists theorize that this event was the basis for the "Great Flood" myths found in many cultures, including the Judeo-Christian story of Noah, the Mesopotamian epic of Gilgamesh, and the Greek myth of Deucalion.

Beyond its mythical implications, however, the formation of the Black Sea created an environment ripe with archaeological allure. First, if the flooding that created the Black Sea occurred with anything approaching the speed depicted in flood legends, it almost certainly overran and possibly preserved ancient human settlements. Moreover, the unique origins of the Black Sea are the most likely explanation for the presence of an anoxic, or "oxygenless," layer near its bottom. This anoxic layer is almost completely lifeless, meaning it lacks the organisms that aid in the decomposition of wood and cloth, which might have sunk to the bottom of the Black Sea. In other words, ancient wooden shipwrecks can remain intact on the Black Sea's floor thanks in part to the anoxic layer.

Enter Dr. Robert D. Ballard, perhaps the most acclaimed shipwreck hunter in the world. Ballard's résumé reads like a Who's Who of famous shipwrecks, including the 1985-1986 discovery and exploration of the doomed luxury liner Titanic; the 1989 discovery of the notorious World War II German battleship Bismarck; the 1998 discovery of the American World War II aircraft carrier Yorktown, which was lost at the Battle of Midway; and the 2002 discovery of the U.S. Navy patrol torpedo boat PT-109, which was commanded by the future U.S. president John F. Kennedy and sank in 1943.

Ballard's Black Sea discoveries, however, have been far more ancient than these high-profile 20th-century finds. Since the late 1990s, Ballard has been following the unmistakable presence of discarded amphorae--ancient clay jars used to ferry cargo on sailing vessels--that litter the floor of the Black Sea. Sailors often tossed the amphorae overboard to lighten the load of sinking vessels, which in turn created "breadcrumb trails" leading directly to ancient shipwrecks.

Among Ballard's amphora-guided Black Sea finds are three ancient shipwrecks and a possible site of preflood human settlement. The settlement area, called Site 82, may appear to the untrained eye to be nothing more than a collection of oddly shaped stones, but archaeologists have found similar artifacts in Neolithic and Bronze Age settlements, suggesting that humans once inhabited an area now covered by 490 feet (150 meters) of water.

As to the shipwrecks, they include a 2,300-year-old vessel--the oldest ever found in the Black Sea--and additional evidence that naval trade by Greek, Phoenician, and Roman vessels was common on the Black Sea and continued into the Byzantine period, around 450 A.D. Although analysis of these finds has barely begun, in time the Black Sea may come to be regarded as the archaeological equal of ancient Egypt, Greece, or Rome--but only, that is, if scientists like Dr. Ballard are there, beneath the waves, to prove it.


Amazon Warriors: Fact or Fiction?

Greek mythology includes many tales of the Amazons, a tribe of women who were reputed to be fierce fighters. They figure prominently in the epic poem the Iliad, Homer's 8th-century-B.C. account of the Trojan War. Homer depicted the Amazons as a race of female warriors who lived in a matriarchal society, keeping only their daughters and maiming or killing their sons. Some myths say that the Amazons performed mastectomies, cutting off one breast to allow them to shoot more effectively with bow and arrow.

Herodotus, the 5th-century-B.C. Greek historian, also mentioned the Amazons in his writings, reporting that when the Amazons lost the battle at Thermodon to the Greeks and were subsequently imprisoned on Greek sailing vessels, they murdered the ships' crews. Not knowing how to sail, the women drifted until they eventually landed on the shores of the Black Sea, where they came into contact with the nomadic Scythians. According to some researchers, the Amazons married into this nomadic culture and eventually moved to the Russian steppes, where the group developed what later came to be known as the Sauromatian culture.

Is it possible that the fierce warriors of Greek mythology really ended up in the grasslands of southern Russia? Some investigators deny that the Amazons of ancient Greece ever existed outside of mythology, but archaeologists have explored the notion that members of this mythical warrior tribe settled in the steppes. Digs at Pokrovka, Russia, near the Kazakhstan border, have furnished burial artifacts dating from the 6th century to the 4th century B.C. that indicate that the Sauromatians were a nomadic people with expertise in warfare and raising animals. The women of this culture were buried with a greater number and a wider variety of weapons than were the men. Excavations have produced one female skeleton whose legs suggest that she was a frequent horseback rider and another containing a bent arrowhead, an indication that the woman died in battle. Such evidence suggests that warrior women played a prominent role in the nomadic Sauromatian culture.

The Sauromatian culture evolved into the Sarmatian culture, whose members also were nomads skilled in animal husbandry and the art of warfare. According to Jeannine Davis-Kimball, director of the American-Eurasian Research Institute and the Center for the Study of Eurasian Nomads (CSEN), the Sarmatians began to expand their territory during the 4th century B.C., moving westward in order to trade with the Romans and settling in cities on the major trade routes. Archaeological evidence suggests that although the Sarmatian women maintained their power and increased their wealth during this time, their status had changed. Gold and highly ornamented burial artifacts suggest that the women were considered priestesses, but the artifacts of this period show fewer weapons, an indication that the women's role as warriors had waned. Between the 2d century B.C. and the 3d century A.D., the Sarmatians moved to the regions north and west of the Black Sea and invaded Dacia, an area that is now Romania. In 370 the Huns conquered the Sarmatians and either absorbed or eliminated them. Some archaeologists believe that the remnants of an assimilated Sarmatian culture can be found among the descendants of the Hun conquerors who today reside in western Mongolia.

Davis-Kimball and other researchers at CSEN continue to address the question of whether the Sauromatian and Sarmatian women were indeed the descendants of the Amazons of Greek mythology. According to Davis-Kimball, the Sarmatian women were mainly occupied with raising animals and fought only when they needed to safeguard their territory. While she acknowledges that archaeological findings refute the notion that in ancient times all women remained at home caring for their children, and while she has found evidence that women of high status were common in the nomadic societies of ancient Eurasia, she is not convinced that the excavated Sarmatian women were descendants of the fabled Amazon warriors. "I think the idea of the 'Amazon' was created by the Greeks for their own purposes," she says.

Over the past decade, however, other researchers have found archaeological evidence that might lend support to backers of the Amazon thesis. Female mummies from the Ukok Plateau in Eurasia have provided evidence of a matriarchal warrior culture in the Altai region, but the jury is still out on whether these women were descended from the Amazons of ancient Greece. In the meantime, research continues into these and other such finds.