
The Constructors (1950), by Fernand Léger

ics, Boyer and Smith point out, pioneers such as H. Greg Lewis used the new tools to look at traditional topics, including "the effects of unions in raising the wages of their members relative to those of nonunion workers."

As neoclassical economists became intimately involved in debates about government policies, "they were forced to give more attention" to institutionalist concerns, Boyer and Smith point out. "Seemingly small administrative details about how unemployment or workers’ compensation insurance premiums are set, for example, have huge implications for the layoff or safety behavior of employers; labor economists wanting the ear of policy-makers had to know these details." Today, say the authors, a permanent fusion of "the neoinstitutionalist interests . . . with the neoclassical approach" may be in the works.

The First Crash

"The First Bank of the United States and the Securities Market Crash of 1792" by David J. Cowen, in The Journal of Economic History (Dec. 2000), Social Science History Institute, Bldg. 200, Rm. 3, Stanford Univ., Stanford, Calif. 94305–2024.

The Panic of 1792 was America’s first market crash, and historians have usually blamed it on a speculator named William Duer and his confederates. Freshly assembled evidence, however, suggests a different culprit: the First Bank of the United States.

The brainchild of Secretary of the Treasury Alexander Hamilton, the semipublic national bank received a charter for 20 years in February 1791 and, with a colossal $10 million in capital, opened its doors in Philadelphia the following December. Its mission was to facilitate commerce by lending money, and, not incidentally, to strengthen the federal government.

Duer, working in secret with others, borrowed heavily in an effort to corner the markets in U.S. debt securities as well as the stocks of the national bank and the Bank of New York. "In the ensuing speculation," writes Cowen, a foreign currency trader and director of Deutsche Bank, "securities prices reached their peaks in late January 1792. Prices trended lower in February, [and] fell off sharply in March"—prompting the "Panic of 1792."

The speculator Duer, his credit exhausted, could not meet contracts he had made to buy securities and suspended payments on his obligations on March 9. His failure, a contemporary said that month, was "beyond all description—the sums he owes upon notes is unknown—the least supposition is half a Million dollars. Last night he went to [jail]." Historians blamed him for bringing the market down.

An 1833 fire at the Treasury Department destroyed most of the First Bank’s records. But its balance sheets for the 1790s were found by historian James Wettereau in the 1930s in the papers of Hamilton’s successor, and published in 1985. Together with other historical materials, Cowen says, they make it clear that the national bank, headed by Thomas Willing, was responsible for the March crash.

When it opened in December 1791, the bank "flooded the economy with credit." Some loans were for legitimate businesses, but others were made to speculators (apparently including Duer) who used them to buy securities. In February—a full month before Duer ran into trouble—the bank, realizing it had loaned so heavily that its bank notes were not being readily accepted everywhere, reversed course by sharply curtailing credit and calling in outstanding loans. Hamilton, worried about speculation and the state of the financial system, gave the reversal his blessing, Cowen says, and may even have initiated it. Suddenly, Duer and other speculators were called upon to repay their loans. Many dumped stocks to do so, and the market sank.

It was a classic "credit crunch." But no recession or depression followed. Like Federal Reserve Chairman Alan Greenspan after the 1987 market crash, Hamilton moved rapidly to have the central monetary authority act as lender of last resort, helping to avert a meltdown.


The Urban Myth

"Small Towns, Mass Society, and the 21st Century" by James D. Wright, in Society (Nov.–Dec. 2000), Rutgers—The State Univ., 35 Berrue Circle, Piscataway, N.J. 08854.

Over the past half-century it’s become conventional wisdom, reaffirmed at 10-year intervals by the Census Bureau, that the United States is becoming an ever more urban nation. Wright, a sociologist at Tulane University, paints a different picture.

If America is becoming more "urban," he says, isn’t it strange that "most of the really big American cities have been losing population for decades"? Of the 10 largest cities in 1970, seven—New York, Chicago, Philadelphia, Detroit, Baltimore, Washington, and Cleveland—were noticeably smaller two decades later. Of the 100 largest cities, 54—predominantly in the Northeast and Midwest—had fewer people.

Of course, if "urban" simply means "not rural," then, yes, more than three-fourths of the American populace is "urban." (The Census Bureau classifies as rural any place with fewer than 2,500 inhabitants.) But should a tiny burg of 3,000 really be considered "urban"? It’s an archaic definition, Wright says.

"Urban" is also often casually equated with what the Census Bureau calls "metropolitan areas." These have a "large population nucleus" of at least 50,000 people, located in a county of at least 100,000, and include any adjacent counties that seem economically or socially "integrated" with the nucleus. In 1990, nearly four out of five Americans lived in such areas. Does that really make all of them "urban" folk? Many

90 Wilson Quarterly
