14.4.21

24 Hummingbird Flowers to Attract Pollinators to Your Yard


When you start to design your garden or landscape, consider including hummingbird flowers. They’re bright and cheerful, and many produce large blooms throughout the spring and summer months.

They come in a wide range of colors, shapes, and styles that can fill in your landscape or serve as excellent walkway edging plants. As a bonus, many hummingbird flowers have a sweet scent that draws pollinators like bees to your yard and fills the air as you go out and enjoy the warm months. Many of them are also great for novice gardeners because they’re not difficult to maintain.

If you’ve never heard of hummingbird flowers before, or you want more variety to add to your landscape, this is for you. I’ve picked out several bright and cheerful hummingbird flowers you can plant around your yard. Some of them are larger bush-type plants, and many of them fit nicely in containers. Whatever design aesthetic you want, there are hummingbird flowers available for you. 


www.happydiyhome.com 

12.4.21

Brazil’s COVID-19 Crisis and Jair Bolsonaro’s Presidential Chaos

In the past few weeks, Brazil has had the world’s highest covid-19 death count, a predicament which seems to have been driven by Jair Bolsonaro’s response to the crisis. Photograph by Lalo de Almeida / Folhapress / Panos Pictures / Redux

Is the President’s do-nothing approach to the pandemic finally becoming a threat to his political future?

Among the images by the Brazilian photographer Mauricio Lima that accompanied a recent Times article about his country’s covid-19 crisis, two tell a story that should feel familiar to Americans. In one, supporters of the country’s populist right-wing leader, President Jair Bolsonaro, many of them draped in the colors of the national flag, protest against lockdown measures. In the other, health-care workers in hazmat suits demonstrate in support of such measures. Other photographs offer glimpses of a society overwhelmed by the pandemic—doctors tending to patients in an emergency field tent, a coffin-maker and a gravedigger at work.

Today, Brazil ranks second only to the United States in the total number of deaths from covid-19, with more than three hundred and fifty thousand fatalities. In the past few weeks, it has had the highest covid death count, and it is home to the most worrisome variant, P.1, which is now spreading through Brazil’s neighbors in Latin America and several other nations, including the United States. (P.1, which is sometimes called the Manaus variant, for the Amazonian city where it was first detected, last year, is thought to be up to two and a half times more transmissible than the other known covid variants. Thousands of people have already died of covid-19 in Manaus, from where it spread throughout the Amazon region.) A third of all covid-19 deaths worldwide are now occurring in Brazil, which has less than three per cent of the global population, and the country’s vaccination rollout has been slow—about twelve doses per hundred people. (Chile, by contrast, has delivered sixty-two doses per hundred.)

On April 5th, with close to four thousand Brazilians dying every day, some from asphyxiation due to a lack of oxygen supplies, and the I.C.U.s of many Brazilian hospitals at near-maximum capacity, an opinion piece published by the authoritative British Medical Journal argued that the colossal scale of Brazil’s health emergency could have been avoided. The authors, three Brazilian medical professionals, state that Bolsonaro’s negligence has been intentional, part of a strategy to “achieve herd immunity through contagion.” They conclude, “In our opinion, the federal government’s stance may constitute a crime against humanity.”

Brazil’s predicament does seem to have been driven by Bolsonaro’s responses, which have been imitative of those adopted by former President Donald Trump, whom he openly admires. From the outset of the crisis, Bolsonaro has waffled on mask-wearing, opposed lockdowns, promoted hydroxychloroquine as a preventative remedy, and eschewed a federal response to the pandemic. In public statements, he has derided covid-19 as “mere sniffles,” while telling Brazilians that “we all have to die sometime.” Even after he contracted the virus himself, he rarely wore a mask in public. Most recently, he berated Brazilians for “whining” and told them to stop being “sissies,” while discouraging them from getting vaccinations—and joking that, if they do, they might “turn into crocodiles.”

He has also inveighed against governors and mayors who sought to mandate lockdowns, on the ground that they violated individual freedoms and would harm the economy, and said that he would not deploy “his” troops to enforce such measures. And his government initially did nothing when pharmaceutical manufacturers started making vaccines available last year, rejecting an offer to buy tens of millions of doses from Pfizer and publicly ridiculing China’s vaccine program; the then foreign minister, Ernesto Araújo, accused China of intentionally spreading covid-19, which he called the “communavirus.”

Bolsonaro’s do-nothing approach to the pandemic notwithstanding, his popularity among his base, which accounts for some thirty per cent of the electorate, has remained steady. But, in recent weeks, other pillars of his support—including in the military and the powerful agribusiness sector, and also a right-of-center coalition in the National Congress—have begun expressing discomfort, leading to talk in political circles of possible impeachment proceedings against him. In a country where two Presidents have been impeached in the past thirty years, such talk has to be taken seriously. And it follows a Supreme Court decision last month to annul the criminal convictions of Bolsonaro’s nemesis, the former President Luiz Inácio Lula da Silva, who is now free to run for office again. All of which is said to have Bolsonaro extremely worried for his political survival. The next Presidential election is scheduled for October of 2022. Lula has not yet declared his candidacy, but it is widely assumed that he will do so; recent polls show him ahead of Bolsonaro.

Then came a stunning cabinet shakeup last month, which saw the replacement of Bolsonaro’s health minister (the fourth in a year) and the resignations of his foreign minister, Araújo, and defense minister, Fernando Azevedo e Silva, followed by those of the chiefs of the Air Force, the Navy, and the Army. (In all, six cabinet ministers left office.) There were rumors that Bolsonaro had attempted to involve the military in what is traditionally known in Latin America as an autogolpe—a self-coup—wherein leaders seize dictatorial powers in an effort to extend their authority.

It emerged that, in fact, Araújo was asked to resign because members of Congress, as well as figures in the influential agribusiness sector, had complained that his far-right, anti-Beijing rhetoric was upsetting Brazil’s principal customer for soy exports, and also complicating vaccine-purchase negotiations. Bolsonaro apparently fired Azevedo because he had refused to replace the Army commander, General Edson Pujol, who had stressed the need for the military to be independent from politics. In public comments that were seen as a rebuke of Bolsonaro, Pujol and another senior officer had also defended tougher measures against covid. The resignations of Pujol and the other two military chiefs, in solidarity with Azevedo, signified a clear breach between Bolsonaro and the senior military establishment. Azevedo, in his resignation letter, seemed to be speaking for all of them when he said that, during his year in the job, he had “preserved the institutional integrity of the armed forces.”

While Bolsonaro may have alienated some top military officials, he still has significant support among the rank and file, and military men continue to hold many posts in his government, including the Vice-Presidency, held by Hamilton Mourão. Bolsonaro also replaced the minister of justice with a federal police chief who has worked closely with the so-called Bullet Bench, a congressional lobby that supports a looser gun-ownership law that Bolsonaro has been trying to get approved. Analysts say that the appointment shows Bolsonaro’s intention to curry favor among the police forces and conservative law-enforcement circles more broadly.

Prominent observers, including Oliver Stuenkel, a political scientist at São Paulo’s Getulio Vargas Foundation, think that Bolsonaro is laying plans to stage his own “January 6th,” in order to stay in power, if next year’s elections don’t go well for him. (Already, Bolsonaro, echoing Trump, has been warning of election “fraud.”) Eduardo Bolsonaro, a member of the Chamber of Deputies (the lower house in Congress), who is the most hard-line and outspoken of the President’s four sons, publicly lauded the storming of the Capitol, saying that, if the insurrectionists had “been organized,” they could have kept Trump in the White House. (Eduardo is close to the former Trump adviser Steve Bannon, who named him to represent South America in the Movement, his mooted global organization of right-wing nationalist leaders.)

Stuenkel believes that Bolsonaro is working to shore up his support in the military—at least, among those who have not demonstrated a preference for working in a democratic framework—while also trying to insure that he would have the backup of the military police. “If the Army stands back during a Brazilian January 6th, and the military police are with him,” he said, “I think it could be enough for things to end his way.”

With the cabinet shakeup, then, Bolsonaro has secured some room for political maneuvering, and he is also showing an ability to alter course for survival’s sake. In the past few weeks (and after Lula told Brazilians to “get vaccinated”), Bolsonaro declared that he is in favor of vaccines, after all, even as he continues to promote a questionable “covid kit,” comprising a cocktail of hydroxychloroquine and other drugs, which hospital officials say has unproved benefits and possibly fatal consequences; several Brazilians have reportedly been hospitalized and died after taking it.

Richard Lapper, a longtime British observer of Brazilian politics and the author of the forthcoming book “Beef, Bible and Bullets: Brazil in the Age of Bolsonaro,” told me that, “if Bolsonaro continues with the existing covid policy, he is going to lose the more traditional conservative part of his base and be much more dependent on the hard-line ideological supporters, and that, in turn, sets the scene for much greater conflict.” Lapper predicts that there will be more external pressure on Bolsonaro, too, as the P.1 variant spreads further across Latin America; several neighboring states have already banned flights to and from Brazil.

I recently asked Lula how he views the situation. Last Tuesday, in a WhatsApp message, he replied, “I have said for many years, and history teaches, that when people negate politics, what comes next is always worse. And in Brazil there was a very violent campaign against politics, to take the left out of the government, which ended up resulting in Bolsonaro, in a phenomenon similar to Trump in the United States.” He added, “You overcame Trump, and Brazilian society will overcome this accident called Bolsonaro.”

In the meantime, he said, “We need to speed up vaccinations, provide economic assistance to those who are unemployed and starving, and create a credit line to help micro- and small business. President Bolsonaro needs to stop talking and doing nonsense. But the solution to the coronavirus problem can only be a global one. It is necessary for rich countries to forget geopolitical divergences in order to discuss the production of vaccines and the vaccination of all. What we are experiencing is a war of nature against humanity, and for the time being the only weapon is the vaccine. That is why it has to be transformed into a public good financed by the states, so that the vaccine is guaranteed to all the inhabitants of the planet. We will not beat covid with each country acting individually.” That day, forty-one hundred and ninety-five Brazilians died of covid-19, nearly three thousand more than had died the day before—with, as things currently stand, many more deaths to come.

11.4.21

Jaw-Dropping Video of the Moon Was Shot with Modular Leica DSLR Lens


A video showcasing the incredible optical quality of the Leica APO-Telyt-R 400mm f/2.8 lens recently surfaced, and the quality that this lens (which today retails for around $15,000 used) exhibits in 4K video is mind-blowing… especially considering the footage was shot from Earth.

Recently shared by Leica Rumors, the video was shot in 2015 and uploaded in 2016 by photographer Markus Stark. He asks — and the quality of the footage makes you wonder — if this is the sharpest super-telephoto ever made.

“The video clips were filmed in August 2015 at only 290 meters above sea level,” Stark writes, noting that he shot it while camping in Germany.

The edit features some zooming in and out and spinning that he added in post-production, but the video itself was shot using a Panasonic GH4 with a modified Leica 400mm f/2.8 with a 1.4x teleconverter and two 2x Leica Apo extenders.

“I wanted [to] make the viewer feel like [they were] observing the moon from a space craft,” he continues.

Mission accomplished.


The lens itself is unique because it features apochromatic correction, which — as DPReview notes — is unusual for a super-telephoto lens. The lens, which is no longer in production, features 10 elements in 8 groups and is extremely heavy: it weighs 13 pounds. Originally introduced in 1996 as part of Leica’s APO-Telyt Module system, the lens was designed to be used with Leica-R SLR cameras and all the lenses in the system were made to be modular and allow the photographer to combine them to create different focal lengths. This design let Stark use the two 2x extenders, for example.
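For a rough sense of the reach involved, some back-of-the-envelope math (assuming the converters’ nominal multipliers simply stack and using the Micro Four Thirds 2x crop factor, before any additional crop from the GH4’s 4K mode): 400mm × 1.4 × 2 × 2 = 2,240mm of actual focal length, or roughly a 4,480mm full-frame-equivalent field of view.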

DPReview shared an image that shows the full Leica APO-Telyt-R Module System in an older, low-resolution chart.

As the publication describes, the system included two Leica Heads and three Leica Focus Modules which could be combined to create six different manual focus super-telephoto prime lenses. The 400/560/800 is the larger head and is what was used to create the video above.

In the long telephoto range, the module system produces extraordinarily clear pictures with high contrast and accuracy in detail rendition and color reproduction. And because even the smallest mechanical weaknesses can significantly reduce the performance, especially in this range of focal lengths, the Leica module system is fabricated and assembled with extremely tight tolerances.

The Leica APO-Telyt-R Module System ceased production in 2009 but at the time, the entire module system could be purchased for around $39,000 for those looking to grab it before it was gone. Seeing the lens combined with a GH4 to make a video of this quality probably wasn’t on the minds of Leica’s lens designers back when the system was first introduced, but they would surely be happy to see that the quality of footage produced thanks to the lens is still top-tier.

5.4.21

Best professional camera 2021



Canon EOS 1D X III

Finding the best professional camera for you depends very much on the types of photos or video you're looking to take. Unless you can stretch to a landmark all-rounder like the Sony A1, specialist tools remain the best choice for professionals – and whether you're a landscape photographer, portrait snapper or hybrid shooter, we've rounded up all of the best ones here.


Our best cameras for professionals round-up mainly focuses on stills photography at the higher end of the mirrorless and DSLR market – though we have also included a couple of our favorite video-focused options and, yes, a Leica for those who want something that's just a little bit more than a photographic tool. If you're looking for cine camera comparisons, though, it's best to look elsewhere.


We've picked our favorite choices covering an array of photography disciplines, including action, landscapes and reportage. DSLR fan? You'll find our favorite option, the Nikon D850, in our list. Branching out to mirrorless and the hybrid world of photo and video? There's a bit more choice here, from the Canon EOS R5 to the more affordable Fujifilm X-T4.

With smartphones monopolizing beginner-level cameras, manufacturers are investing heavily in this pro-level part of the market – and professionals are well and truly enjoying the fruits.


As things stand, the Canon EOS R5 is our top pick as the best professional camera. Aside from its initial overheating issues and video recording limits, it's a formidable camera that has well and truly raised the bar for photo and video performance, all in a compact body.

We're treading on new and exciting ground and there's never been an easier time to raise your game as a pro – so make sure you check out our whole guide to find the right match for you. Whatever scenarios you find yourself making images in, you'll know how important a reliable camera system is – and all of the ones below will both support your craft and stand the test of time.



The Canon EOS 1D X III is the company's flagship DSLR, a camera typically seen in the hands of professional action photographers at big events. It's a substantial and rugged bit of kit, designed for speed and to withstand harsh conditions. They don't come tougher than this.


You'll be able to rattle off 20.1MP still images at a rate of 16fps until the memory card fills up. Truly, there is no limit to the camera's performance for action, and it is backed up by a staggering battery life of nearly 3,000 shots (which, in real-world continuous shooting, is much higher still). Subject-tracking autofocus performance is also simply jaw-dropping.


But this isn't just an action camera – the EOS 1D X III is a brilliant video tool too, with 5.5K RAW 10-bit video at up to 60fps. Beware, though: you'll need to save up for a handful of expensive CFexpress cards because those video files are huge. Unlike in competing DSLRs, Canon's Dual Pixel AF works exceptionally well in Live View, where you get virtually the same AF performance as you do through the bright optical viewfinder. The only real downside is the price hike that came with this third 1D X installment.

2.4.21

A Neuroscientist’s Poignant Study of How We Forget Most Things in Life


Any study of memory is, in the main, a study of its frailty. In “Remember,” an engrossing survey of the latest research, Lisa Genova explains that a healthy brain quickly forgets most of what passes into conscious awareness. The fragments of experience that do get encoded into long-term memory are then subject to “creative editing.” To remember an event is to reimagine it; in the reimagining, we inadvertently introduce new information, often colored by our current emotional state. A dream, a suggestion, and even the mere passage of time can warp a memory. It is sobering to realize that three out of four prisoners who are later exonerated through DNA evidence were initially convicted on the basis of eyewitness testimony. “You can be 100 percent confident in your vivid memory,” Genova writes, “and still be 100 percent wrong.”

Forgetfulness is our “default setting,” and that’s a good thing. The sixty or so members of our species whose brains are not sieves have their own diagnosis: highly superior autobiographical memory, or hyperthymesia. While the average person can list no more than ten events for any given year of life, people living with H.S.A.M. “remember in excruciatingly vivid detail the very worst, most painful days of their lives.” The most studied case concerns Solomon Shereshevsky, an early-twentieth-century Russian journalist who, like Borges’s Funes the Memorious, “felt burdened by excessive and often irrelevant information and had enormous difficulty filtering, prioritizing, and forgetting what he didn’t want or need.” Desperate to empty his mind, Shereshevsky practiced, with some success, various visualization exercises: he’d imagine setting fire to his memories or picture them scrawled on a giant chalkboard and then erased. (He also turned to the comforts of the bottle and died of complications from alcoholism, although Genova doesn’t mention this.)

An efficient memory system, Genova writes, involves “a finely orchestrated balancing act between data storage and data disposal.” To retain an encounter, deliberate attention alone will get you most of the way there. “If you don’t have Alzheimer’s and you pay attention to what your partner is saying, you’re going to remember what they said.” (Distracted spouses, take note.) Also, get enough sleep. (An exhausted Yo-Yo Ma once left his eighteenth-century Venetian cello, worth $2.5 million, in the trunk of a New York City yellow cab.) Other strategies include leaning on external cues, such as checklists—every year, U.S. surgeons collectively leave hundreds of surgical instruments inside their patients’ bodies—chunking information into meaningful units, and the method of loci, or visualizing information in a familiar environment. Joshua Foer employed the latter device, also known as a “memory palace,” to win the 2006 U.S. Memory Championship.

The business of “motivated forgetting” is more complicated. Genova advises aspiring amnesiacs to avoid anything that might trigger an unwanted memory. “The more you’re able to leave it alone, the more it will weaken and be forgotten,” she writes. Easier said than done, especially with respect to the recurring, sticky memories that characterize conditions such as P.T.S.D. Here, Genova points to promising therapies that take advantage of the brain’s natural tendency to edit episodic memories with every retrieval. In the safe keeping of a psychiatrist’s office (and sometimes with the benefit of MDMA), a patient deliberately revisits the painful memory “with the intention of introducing changes,” revising and gradually overwriting the panic-inducing memory with a “gentler, emotionally neutral version of what happened.” Not quite “Eternal Sunshine,” but if it works, it works.

Genova, a neuroscientist by training, has spent most of her working life writing fiction about characters with various neurological maladies. Her novel “Still Alice,” from 2007, centered on a Harvard psychology professor who is diagnosed with early-onset Alzheimer’s. In “Remember,” her first nonfiction work, Genova assures her readers that only two per cent of Alzheimer’s cases are of the strictly inherited, early-onset kind. For most of us, our chances of developing the disease are highly amenable to interventions, as it takes fifteen to twenty years for the amyloid plaque that is mounting in our brains to reach a tipping point, “triggering a molecular cascade that causes tangles, neuroinflammation, cell death, and pathological forgetting.” What do those interventions look like? Genova’s guidance is backed by current science, but is mostly just parental: exercise, avoid chronic stress, adopt a Mediterranean diet, and enjoy your morning coffee—but not so much as to compromise deep sleep, which is when “your glial cells flush away any metabolic debris that has accumulated in your synapses.”

One of the more interesting studies that Genova cites followed six hundred and seventy-eight elderly nuns over two decades, subjecting them to all manner of physical and cognitive tests. When a nun died, her brain was collected for autopsy. Curiously, a number of the nuns whose brains showed plaques, tangles, and shrinkage exhibited “no behavioral signs” of Alzheimer’s disease. The researchers theorized that these nuns had a high degree of “cognitive reserve”; they tended to have more years of formal education, active social lives, and mentally stimulating hobbies. Even as many old neural pathways collapsed, they were paving “new neural roads” and taking detours along as-yet undamaged connections, thereby masking, if not postponing, the onset of the disease. All pretty straightforward. Now all we have to do is build a society in which everyone has the time and resources for adequate sleep, exercise, nutrition, self-care, and a few good hobbies.

31.3.21

Why Computers Won’t Make Themselves Smarter


We fear and yearn for “the singularity.” But it will probably never come.

In the eleventh century, St. Anselm of Canterbury proposed an argument for the existence of God that went roughly like this: God is, by definition, the greatest being that we can imagine; a God that doesn’t exist is clearly not as great as a God that does exist; ergo, God must exist. This is known as the ontological argument, and there are enough people who find it convincing that it’s still being discussed, nearly a thousand years later. Some critics of the ontological argument contend that it essentially defines a being into existence, and that that is not how definitions work.

God isn’t the only being that people have tried to argue into existence. “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever,” the mathematician Irving John Good wrote, in 1965:

Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

The idea of an intelligence explosion was revived in 1993, by the author and computer scientist Vernor Vinge, who called it “the singularity,” and the idea has since achieved some popularity among technologists and philosophers. Books such as Nick Bostrom’s “Superintelligence: Paths, Dangers, Strategies,” Max Tegmark’s “Life 3.0: Being Human in the Age of Artificial Intelligence,” and Stuart Russell’s “Human Compatible: Artificial Intelligence and the Problem of Control” all describe scenarios of “recursive self-improvement,” in which an artificial-intelligence program designs an improved version of itself repeatedly.

I believe that Good’s and Anselm’s arguments have something in common, which is that, in both cases, a lot of the work is being done by the initial definitions. These definitions seem superficially reasonable, which is why they are generally accepted at face value, but they deserve closer examination. I think that the more we scrutinize the implicit assumptions of Good’s argument, the less plausible the idea of an intelligence explosion becomes.

What might recursive self-improvement look like for human beings? For the sake of convenience, we’ll describe human intelligence in terms of I.Q., not as an endorsement of I.Q. testing but because I.Q. represents the idea that intelligence can be usefully captured by a single number, this idea being one of the assumptions made by proponents of an intelligence explosion. In that case, recursive self-improvement would look like this: Once there’s a person with an I.Q. of, say, 300, one of the problems this person can solve is how to convert a person with an I.Q. of 300 into a person with an I.Q. of 350. And then a person with an I.Q. of 350 will be able to solve the more difficult problem of converting a person with an I.Q. of 350 into a person with an I.Q. of 400. And so forth.

Do we have any reason to think that this is the way intelligence works? I don’t believe that we do. For example, there are plenty of people who have I.Q.s of 130, and there’s a smaller number of people who have I.Q.s of 160. None of them have been able to increase the intelligence of someone with an I.Q. of 70 to 100, which is implied to be an easier task. None of them can even increase the intelligence of animals, whose intelligence is considered to be too low to be measured by I.Q. tests. If increasing someone’s I.Q. were an activity like solving a set of math puzzles, we ought to see successful examples of it at the low end, where the problems are easier to solve. But we don’t see strong evidence of that happening.

Maybe it’s because we’re currently too far from the necessary threshold; maybe an I.Q. of 300 is the minimum needed to increase anyone’s intelligence at all. But, even if that were true, we still don’t have good reason to believe that endless recursive self-improvement is likely. For example, it’s entirely possible that the best that a person with an I.Q. of 300 can do is increase another person’s I.Q. to 200. That would allow one person with an I.Q. of 300 to grant everyone around them an I.Q. of 200, which frankly would be an amazing accomplishment. But that would still leave us at a plateau; there would be no recursive self-improvement and no intelligence explosion.

The I.B.M. research engineer Emerson Pugh is credited with saying “If the human brain were so simple that we could understand it, we would be so simple that we couldn’t.” This statement makes intuitive sense, but, more importantly, we can point to a concrete example in support of it: the microscopic roundworm C. elegans. It is probably one of the best-understood organisms in history; scientists have sequenced its genome and know the lineage of cell divisions that give rise to each of the nine hundred and fifty-nine somatic cells in its body, and have mapped every connection between its three hundred and two neurons. But they still don’t completely understand its behavior. The human brain is estimated to have eighty-six billion neurons on average, and we will probably need most of them to comprehend what’s going on in C. elegans’s three hundred and two; this ratio doesn’t bode well for our prospects of understanding what’s going on within ourselves.

Some proponents of an intelligence explosion argue that it’s possible to increase a system’s intelligence without fully understanding how the system works. They imply that intelligent systems, such as the human brain or an A.I. program, have one or more hidden “intelligence knobs,” and that we only need to be smart enough to find the knobs. I’m not sure that we currently have many good candidates for these knobs, so it’s hard to evaluate the reasonableness of this idea. Perhaps the most commonly suggested way to “turn up” artificial intelligence is to increase the speed of the hardware on which a program runs. Some have said that, once we create software that is as intelligent as a human being, running the software on a faster computer will effectively create superhuman intelligence. Would this lead to an intelligence explosion?

Let’s imagine that we have an A.I. program that is just as intelligent and capable as the average human computer programmer. Now suppose that we increase its computer’s speed a hundred times and let the program run for a year. That’d be the equivalent of locking an average human being in a room for a hundred years, with nothing to do except work on an assigned programming task. Many human beings would consider this a hellish prison sentence, but, for the purposes of this scenario, let’s imagine that the A.I. doesn’t feel the same way. We’ll assume that the A.I. has all the desirable properties of a human being but doesn’t possess any of the other properties that would act as obstacles in this scenario, such as a need for novelty or a desire to make one’s own choices. (It’s not clear to me that this is a reasonable assumption, but we can leave that question for another time.)

So now we’ve got a human-equivalent A.I. that is spending a hundred person-years on a single task. What kind of results can we expect it to achieve? Suppose this A.I. could write and debug a thousand lines of code per day, which is a prodigious level of productivity. At that rate, a century would be almost enough time for it to single-handedly write Windows XP, which supposedly consisted of forty-five million lines of code. That’s an impressive accomplishment, but a far cry from its being able to write an A.I. more intelligent than itself. Creating a smarter A.I. requires more than the ability to write good code; it would require a major breakthrough in A.I. research, and that’s not something an average computer programmer is guaranteed to achieve, no matter how much time you give them.
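(Spelling out the arithmetic: a thousand lines a day for a hundred years comes to roughly 36.5 million lines, somewhat short of the forty-five million reported for Windows XP.)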

When you’re developing software, you typically use a program known as a compiler. The compiler takes the source code you’ve written, in a language such as C, and translates it into an executable program: a file consisting of machine code that the computer understands. Suppose you’re not happy with the C compiler you’re using—call it CompilerZero. CompilerZero takes a long time to process your source code, and the programs it generates take a long time to run. You’re confident that you can do better, so you write a new C compiler, one that generates more efficient machine code; this new one is known as an optimizing compiler.

You’ve written your optimizing compiler in C, so you can use CompilerZero to translate your source code into an executable program. Call this program CompilerOne. Thanks to your ingenuity, CompilerOne now generates programs that run more quickly. But CompilerOne itself still takes a long time to run, because it’s a product of CompilerZero. What can you do?

You can use CompilerOne to compile itself. You feed CompilerOne its own source code, and it generates a new executable file consisting of more efficient machine code. Call this CompilerTwo. CompilerTwo also generates programs that run very quickly, but it has the added advantage of running very quickly itself. Congratulations—you have written a self-improving computer program.

But this is as far as it goes. If you feed the same source code into CompilerTwo, all it does is generate another copy of CompilerTwo. It cannot create a CompilerThree and initiate an escalating series of ever-better compilers. If you want a compiler that generates programs that run insanely fast, you will have to look elsewhere to get it.
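To make the fixed point concrete, here is a minimal toy model in Python of the sequence just described. The two flags and the function are illustrative inventions, not real compiler machinery; an actual compiler is, of course, vastly more involved.

    # A toy model of the CompilerZero / CompilerOne / CompilerTwo sequence described above.
    # A "compiler" here is just a pair of flags: (emits_fast_code, runs_fast).

    def compile_with(compiler, source_is_optimizing):
        emits_fast_code, _ = compiler
        # The new binary emits fast code only if the source we fed in is the optimizing
        # compiler's source; it runs fast only if the compiler doing the compiling
        # already emits fast code.
        return (source_is_optimizing, emits_fast_code)

    compiler_zero = (False, False)                    # slow compiler that emits slow code
    compiler_one = compile_with(compiler_zero, True)  # (True, False): emits fast code, runs slowly
    compiler_two = compile_with(compiler_one, True)   # (True, True): emits fast code and runs fast
    compiler_next = compile_with(compiler_two, True)  # (True, True) again: just another CompilerTwo

    print(compiler_two == compiler_next)              # True: the process hits a fixed point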

The technique of having a compiler compile itself is known as bootstrapping, and it’s been employed since the nineteen-sixties. Optimizing compilers have come a long way since then, so the differences between a CompilerZero and a CompilerTwo can be much bigger than they used to be, but all of that progress was achieved by human programmers rather than by compilers improving themselves. And, although compilers are very different from artificial-intelligence programs, they offer a useful precedent for thinking about the idea of an intelligence explosion, because they are computer programs that generate other computer programs, and because when they do so optimization is often a priority.

The more you know about the intended use of a program, the better you can optimize its code. Human programmers sometimes hand-optimize sections of a program, meaning that they specify the machine instructions directly; the humans can write machine code that’s more efficient than what a compiler generates, because they know more about what the program is supposed to do than the compiler does. The compilers that do the best job of optimization are compilers for what are known as domain-specific languages, which are designed for writing narrow categories of programs. For example, there’s a programming language called Halide designed exclusively for writing image-processing programs. Because the intended use of these programs is so specific, a Halide compiler can generate code as good as or better than what a human programmer can write. But a Halide compiler cannot compile itself, because a language optimized for image processing doesn’t have all the features needed to write a compiler. You need a general-purpose language to do that, and general-purpose compilers have trouble matching human programmers when it comes to generating machine code.

A general-purpose compiler has to be able to compile anything. If you feed it the source code for a word processor, it will generate a word processor; if you feed it the source code for an MP3 player, it will generate an MP3 player; and so forth. If, tomorrow, a programmer invents a new kind of program, something as unfamiliar to us today as the very first Web browser was in 1990, she will feed the source code into a general-purpose compiler, which will dutifully generate that new program. So, although compilers are not in any sense intelligent, they have one thing in common with intelligent human beings: they are capable of handling inputs that they have never seen before.

Compare this with the way A.I. programs are currently designed. Take an A.I. program that is presented with chess moves and that, in response, needs only to spit out chess moves. Its job is very specific, and knowing that is enormously helpful in optimizing its performance. The same is true of an A.I. program that will be given only “Jeopardy!” clues and needs only to spit out answers in the form of a question. A few A.I. programs have been designed to play a handful of similar games, but the expected range of inputs and outputs is still extremely narrow. Now, alternatively, suppose that you’re writing an A.I. program and you have no advance knowledge of what type of inputs it can expect or of what form a correct response will take. In that situation, it’s hard to optimize performance, because you have no idea what you’re optimizing for.

How much can you optimize for generality? To what extent can you simultaneously optimize a system for every possible situation, including situations never encountered before? Presumably, some improvement is possible, but the idea of an intelligence explosion implies that there is essentially no limit to the extent of optimization that can be achieved. This is a very strong claim. If someone is asserting that infinite optimization for generality is possible, I’d like to see some arguments besides citing examples of optimization for specialized tasks.

Obviously, none of this proves that an intelligence explosion is impossible. Indeed, I doubt that one could prove such a thing, because such matters probably aren’t within the domain of mathematical proof. This isn’t a question of proving that something is impossible; it’s a question of what constitutes good justification for belief. The critics of Anselm’s ontological argument aren’t trying to prove that there is no God; they’re just saying that Anselm’s argument doesn’t constitute a good reason to believe that God exists. Similarly, a definition of an “ultraintelligent machine” is not sufficient reason to think that we can construct such a device.

There is one context in which I think recursive self-improvement is a meaningful concept, and it’s when we consider the capabilities of human civilization as a whole. Note that this is different from individual intelligence. There’s no reason to believe that humans born ten thousand years ago were any less intelligent than humans born today; they had exactly the same ability to learn as we do. But, nowadays, we have ten thousand years of technological advances at our disposal, and those technologies aren’t just physical—they’re also cognitive.

Let’s consider Arabic numerals as compared with Roman numerals. With a positional notation system, such as the one created by Arabic numerals, it’s easier to perform multiplication and division; if you’re competing in a multiplication contest, Arabic numerals provide you with an advantage. But I wouldn’t say that someone using Arabic numerals is smarter than someone using Roman numerals. By analogy, if you’re trying to tighten a bolt and use a wrench, you’ll do better than someone who has a pair of pliers, but it wouldn’t be fair to say you’re stronger. You have a tool that offers you greater mechanical advantage; it’s only when we give your competitor the same tool that we can fairly judge who is stronger. Cognitive tools such as Arabic numerals offer a similar advantage; if we want to compare individuals’ intelligence, they have to be equipped with the same tools.

Simple tools make it possible to create complex ones; this is just as true for cognitive tools as it is for physical ones. Humanity has developed thousands of such tools throughout history, ranging from double-entry bookkeeping to the Cartesian coördinate system. So, even though we aren’t more intelligent than we used to be, we have at our disposal a wider range of cognitive tools, which, in turn, enable us to invent even more powerful tools.

This is how recursive self-improvement takes place—not at the level of individuals but at the level of human civilization as a whole. I wouldn’t say that Isaac Newton made himself more intelligent when he invented calculus; he must have been mighty intelligent in order to invent it in the first place. Calculus enabled him to solve certain problems that he couldn’t solve before, but he was not the biggest beneficiary of his invention—the rest of humanity was. Those who came after Newton benefitted from calculus in two ways: in the short term, they could solve problems that they couldn’t solve before; in the long term, they could build on Newton’s work and devise other, even more powerful mathematical techniques.

This ability of humans to build on one another’s work is precisely why I don’t believe that running a human-equivalent A.I. program for a hundred years in isolation is a good way to produce major breakthroughs. An individual working in complete isolation can come up with a breakthrough but is unlikely to do so repeatedly; you’re better off having a lot of people drawing inspiration from one another. They don’t have to be directly collaborating; any field of research will simply do better when it has many people working in it.


Consider the study of DNA as an example. James Watson and Francis Crick were both active for decades after publishing, in 1953, their paper on the structure of DNA, but none of the major breakthroughs subsequently achieved in DNA research were made by them. They didn’t invent techniques for DNA sequencing; someone else did. They didn’t develop the polymerase chain reaction that made DNA amplification affordable; someone else did. This is in no way an insult to Watson and Crick. It just means that if you had A.I. versions of them and ran them at a hundred times normal speed, you probably wouldn’t get results as good as what we obtained with molecular biologists around the world studying DNA. Innovation doesn’t happen in isolation; scientists draw from the work of other scientists.

The rate of innovation is increasing and will continue to do so even without any machine able to design its successor. Some might call this phenomenon an intelligence explosion, but I think it’s more accurate to call it a technological explosion that includes cognitive technologies along with physical ones. Computer hardware and software are the latest cognitive technologies, and they are powerful aids to innovation, but they can’t generate a technological explosion by themselves. You need people to do that, and the more the better. Giving better hardware and software to one smart individual is helpful, but the real benefits come when everyone has them. Our current technological explosion is a result of billions of people using those cognitive tools.

Could A.I. programs take the place of those humans, so that an explosion occurs in the digital realm faster than it does in ours? Possibly, but let’s think about what it would require. The strategy most likely to succeed would be essentially to duplicate all of human civilization in software, with eight billion human-equivalent A.I.s going about their business. That’s probably cost-prohibitive, so the task then becomes identifying the smallest subset of human civilization that can generate most of the innovation you’re looking for. One way to think about this is to ask: How many people do you need to put together a Manhattan Project? Note that this is different from asking how many scientists actually worked on the Manhattan Project. The relevant question is: How large of a population do you need to draw from in order to recruit enough scientists to staff such an effort?

In the same way that only one person in several thousand can get a Ph.D. in physics, you might have to generate several thousand human-equivalent A.I.s in order to get one Ph.D.-in-physics-equivalent A.I. It took the combined populations of the U.S. and Europe in 1942 to put together the Manhattan Project. Nowadays, research labs don’t restrict themselves to two continents when recruiting, because building the best team possible requires drawing from the biggest pool of talent available. If the goal is to generate as much innovation as the entire human race, you might not be able to dramatically reduce that initial figure of eight billion after all.

We’re a long way off from being able to create a single human-equivalent A.I., let alone billions of them. For the foreseeable future, the ongoing technological explosion will be driven by humans using previously invented tools to invent new ones; there won’t be a “last invention that man need ever make.” In one respect, this is reassuring, because, contrary to Good’s claim, human intelligence will never be “left far behind.” But, in the same way that we needn’t worry about a superhumanly intelligent A.I. destroying civilization, we shouldn’t look forward to a superhumanly intelligent A.I. saving us in spite of ourselves. For better or worse, the fate of our species will depend on human decision-making.


Ted Chiang is an award-winning author of science fiction. In 2016, the title story from his first collection, “Stories of Your Life and Others,” was adapted into the film “Arrival.” He lives in Bellevue, Washington, where he works as a freelance technical writer.