State-Wrecked: The Corruption of Capitalism in America

David Stockman:

When it bursts, there will be no new round of bailouts like the ones the banks got in 2008. Instead, America will descend into an era of zero-sum austerity and virulent political conflict, extinguishing even today’s feeble remnants of economic growth.

THIS dyspeptic prospect results from the fact that we are now state-wrecked. With only brief interruptions, we’ve had eight decades of increasingly frenetic fiscal and monetary policy activism intended to counter the cyclical bumps and grinds of the free market and its purported tendency to underproduce jobs and economic output. The toll has been heavy.

As the federal government and its central-bank sidekick, the Fed, have groped for one goal after another — smoothing out the business cycle, minimizing inflation and unemployment at the same time, rolling out a giant social insurance blanket, promoting homeownership, subsidizing medical care, propping up old industries (agriculture, automobiles) and fostering new ones (“clean” energy, biotechnology) and, above all, bailing out Wall Street — they have now succumbed to overload, overreach and outside capture by powerful interests. The modern Keynesian state is broke, paralyzed and mired in empty ritual incantations about stimulating “demand,” even as it fosters a mutant crony capitalism that periodically lavishes the top 1 percent with speculative windfalls.

The culprits are bipartisan, though you’d never guess that from the blather that passes for political discourse these days. The state-wreck originated in 1933, when Franklin D. Roosevelt opted for fiat money (currency not fundamentally backed by gold), economic nationalism and capitalist cartels in agriculture and industry.

Under the exigencies of World War II (which did far more to end the Depression than the New Deal did), the state got hugely bloated, but remarkably, the bloat was put into brief remission during a midcentury golden era of sound money and fiscal rectitude with Dwight D. Eisenhower in the White House and William McChesney Martin Jr. at the Fed.

Why Innovators Get Better With Age

Tom Agan:

In reality, though, these examples are the exception and not the rule. Consider this: The directors of the five top-grossing films of 2012 are all in their 40s or 50s. And two of the biggest-selling authors of fiction for 2012 — Suzanne Collins and E. L. James — are around 50.

According to research by Alex Mesoudi of Durham University in England, the age of eventual Nobel Prize winners when making a discovery, and of inventors when making a significant breakthrough, averaged around 38 in 2000, an increase of about six years since 1900.

But there is another reason to keep innovators around longer: the time it takes between the birth of an idea and when its implications are broadly understood and acted upon. This education process is typically driven by the innovators themselves.

For Nobel Prize winners, this process usually takes about 20 years — meaning that someone who is 38 at the time of discovery will most likely be nearly 60 when he or she receives the prize. For most eventual laureates, that interval is spent attending and making presentations at conferences, networking with colleagues, writing additional papers, editing academic journals and talking with the press.

When Simplicity Is the Solution

Alan Siegel:

At the beginning of “Walden,” Henry David Thoreau makes a concise case against the complexity of modern life. “Our life is frittered away by detail. An honest man has hardly need to count more than his ten fingers, or in extreme cases he may add his ten toes, and lump the rest. Simplicity, simplicity, simplicity!” he writes. “[L]et your affairs be as two or three, and not a hundred or a thousand; instead of a million count half a dozen, and keep your accounts on your thumb-nail…. Simplify, simplify.”

That was the 19th century, though, and we live in the 21st. In a typical day, we encounter dozens—if not dozens upon dozens—of moments when we are delayed, frustrated or confused by complexity. Our lives are filled with gadgets we can’t use (automatic sprinklers, GPS devices, fancy blenders), instructions we can’t follow (labels on medicine bottles, directions for assembling toys or furniture) and forms we can’t decipher (tax returns, gym membership contracts, wireless phone bills).

How to Make a Computer from a Living Cell

Katherine Bourzac:

If biologists could put computational controls inside living cells, they could program them to sense and report on the presence of cancer, create drugs on site as they’re needed, or dynamically adjust their activities in fermentation tanks used to make drugs and other chemicals. Now researchers at Stanford University have developed a way to make genetic parts that can perform the logic calculations that might someday control such activities.

The Stanford researchers’ genetic logic gate can be used to perform the full complement of digital logic tasks, and it can store information, too. It works by making changes to the cell’s genome, creating a kind of transcript of the cell’s activities that can be read out later with a DNA sequencer. The researchers call their invention a “transcriptor” for its resemblance to the transistor in electronics. “We want to make tools to put computers inside any living cell—a little bit of data storage, a way to communicate, and logic,” says Drew Endy, the bioengineering professor at Stanford who led the work.
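As a rough software analogy (not the actual biochemistry, which relies on enzymes rearranging DNA between a promoter and a gene), the mechanism described above can be sketched as a one-bit element whose state is "written" into a genome and read back later. All class and method names here are hypothetical, invented for illustration:

```python
class ToyTranscriptor:
    """Toy model of a transcriptor-style gate: one bit of state stored
    by flipping the orientation of a DNA control segment."""

    def __init__(self):
        # False = segment in its original orientation, a terminator
        # blocks RNA polymerase (output off); True = segment flipped,
        # polymerase reads through (output on).
        self.segment_flipped = False

    def apply_set_enzyme(self):
        """Input signal A: an enzyme inverts the segment (sets the bit)."""
        self.segment_flipped = True

    def apply_reset_enzyme(self):
        """Input signal B: a second enzyme flips it back (resets the bit)."""
        self.segment_flipped = False

    def expression_on(self):
        """Gate output: can polymerase transcribe the downstream gene?"""
        return self.segment_flipped

    def sequence_readout(self):
        """Like reading the genome with a DNA sequencer: the stored
        state persists after the input signals are gone."""
        return "flipped" if self.segment_flipped else "original"


def and_gate(a, b):
    """Two elements in series behave like AND: polymerase must read
    through both segments for the output gene to be expressed."""
    g1, g2 = ToyTranscriptor(), ToyTranscriptor()
    if a:
        g1.apply_set_enzyme()
    if b:
        g2.apply_set_enzyme()
    return g1.expression_on() and g2.expression_on()
```

The point of the sketch is the one highlighted in the article: unlike a transistor, the gate's state lives in the DNA itself, so it doubles as storage that can be read out long after the computation ran.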

Timothy Lu, who leads the Synthetic Biology Group at MIT, is working on similar cellular logic tools. “You can’t deliver a silicon chip into cells inside the body, so you have to build circuits out of DNA and proteins,” Lu says. “The goal is not to replace computers, but to open up biological applications that conventional computing simply cannot address.”