Updates to our problem rankings of factory farming, climate change, and more
By the 80,000 Hours team — https://80000hours.org/2024/10/updates-to-our-problem-rankings-on-factory-farming-climate-change-and-more/ — Tue, 08 Oct 2024

At 80,000 Hours, we are interested in the question: “if you want to find the best way to have a positive impact with your career, what should you do on the margin?” The ‘on the margin’ qualifier is crucial. We are asking how you can have a bigger impact, given how the rest of society spends its resources.

To help our readers think this through, we publish a list of what we see as the world’s most pressing problems. We rank these issues by our assessment of where additional work and resources will have the greatest positive impact, considered impartially and in expectation.

Every problem on our list is there because we think it’s very important and a big opportunity for doing good. We’re excited for our readers to make progress on all of them, and think all of them would ideally get more resources and attention than they currently do from society at large.

The most pressing problems are those that have the greatest combination of being:

  • Large in scale: solving the issue improves more lives to a larger extent over the long run.
  • Neglected by others: the best interventions aren’t already being done.
  • Tractable: we can make progress if we try.
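As an illustration (this is not 80,000 Hours’ actual scoring methodology, and all the numbers below are hypothetical placeholders), the three factors are often thought of as combining multiplicatively into a single comparison score:

```python
# A toy sketch of combining scale, neglectedness, and tractability.
# All scores below are hypothetical placeholders for illustration only.

def pressingness(scale, neglectedness, tractability):
    """Combine the three factors multiplicatively into a single score.

    scale:         how much the problem matters if fully solved
    neglectedness: how few resources are already going to it
    tractability:  how much of the problem extra resources can solve
    """
    return scale * neglectedness * tractability

# Two made-up problems, scored on an arbitrary 0-10 scale per factor.
problem_a = pressingness(scale=9, neglectedness=8, tractability=4)
problem_b = pressingness(scale=7, neglectedness=2, tractability=8)

print(problem_a, problem_b)  # prints 288 112
```

On this toy model, a problem that is large and neglected can outscore one that is more tractable but crowded, which mirrors the marginal reasoning in the rest of this post.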

We’ve recently updated our list. Here are the biggest changes:

  • We now rank factory farming among the top problems in the world. (See why.)
  • We’ve simplified the list into three categories: the most pressing problems, a new category of ‘emerging challenges,’ and other pressing problems (issues we think are underrated by society as a whole but aren’t quite as pressing as our top issues, given the work already happening). (See more.)
  • We now rank climate change in the category of other pressing problems alongside global health, rather than among the most pressing problems in the world. (See why.)

New articles and updates:

We are also working on or thinking about publishing new articles on:

  • Global health
  • Building capacity for top world problems
  • Wild animal suffering
  • Invertebrate welfare
  • Global priorities research
  • Other sub-problems relating to transformative AI

We expect to continue updating our list as we learn more and our views evolve. We’re not confident that our ranking of global problems is right or that we’re including everything we should. In fact, we’re confident that we don’t have it right — comparing complex global issues is such a difficult research question that it’d be shocking if we did!

But when we make decisions about how to focus our resources — our time, our money, our careers — we can’t avoid prioritisation. We think there are lots of benefits to being explicit about our choices. And we hope this list gives our audience a jumping off point for deciding which problems they should focus on. Read more in our FAQ on ranking global issues.

You can see our full ranking of pressing global problems here. Click through to the articles to see the arguments for and against each problem being particularly pressing and the best ways we know for you to help.

We explain the biggest changes to the rankings below.

Factory farming

We have written a new, in-depth problem profile on factory farming. We now rank it among the top problems in the world to work on.

In our problem profile, Benjamin Hilton reports that there are around 1.6-4.5 trillion farmed animals killed every year. The vast majority of these are raised in factory farms. This causes a huge amount of suffering, and we expect these numbers to continue to grow in the coming decades.

As Benjamin explained in his article:

  • Around 24 billion chickens are alive in farms at any one time. We slaughter around 75 billion each year.
  • Around one billion pigs are alive in farms at any one time. We slaughter around 1.5 billion each year.
  • Around 1.5 billion cattle are alive in farms at any one time. We slaughter around 300 million per year.
  • Around 2.2 billion sheep and goats are alive in farms at any one time. We slaughter around 1.05 billion each year.
  • Around 100-180 billion fish are alive in farms at any one time. We kill around 100 billion farmed fish each year. (Many more fish are wild-caught.)
  • We kill around 255-605 billion farmed decapod crustaceans for food each year. That includes:
    • Crabs (5-16 billion slaughtered each year)
    • Crayfish and lobsters (37-60 billion slaughtered each year)
    • Shrimp (213-530 billion slaughtered each year)

Many of these animals suffer intensely for much of their lives on farms.

New research from Rethink Priorities suggests that while there’s a lot of uncertainty about the intensity of farmed animals’ experiences, it’s difficult to justify extremely low estimates of their capacities to suffer. Even significantly discounting the moral importance of animals compared to humans, which we think is reasonable, the scale of this suffering and death is still extreme. And we think most plausible moral views would put significant weight on mitigating such outcomes.

We also think it’s moderately tractable to make progress on this problem and that it’s highly neglected compared to many other issues, with only about $410 million a year currently being spent on it.

It’s hard to know how to compare this kind of problem to other global problems. We try to approach these questions from a standpoint of moral uncertainty, impartiality, and concern for future generations.

In general, because we try to take into account how issues will affect all future individuals, we focus a lot on reducing existential risks. We think society as a whole underrates these risks and that they are hugely important.

But one might also reasonably think that there are very few effective interventions with lasting impacts that we can pursue now, which will predictably influence the long-term future. Additionally, if you have doubts about whether the future will be net positive, this boosts the priority of working on issues that improve the quality of the future or relieve suffering in the present.

Given how tricky the empirical and philosophical questions involved here are, we think these considerations place mitigating factory farming roughly on par with some smaller existential risks, like the risks posed by nuclear weapons. That said, we still consider it less pressing than existential risks where there are plausibly single or even double-digit chances of an existential catastrophe this century — like from AI.

For much more detail, read our new problem profile on factory farming.

Emerging challenges

We’ve added a new category of problems called ‘emerging challenges.’ We think of this as a flexible category that lets us include problems we still have a lot of uncertainty about, but which could be extremely pressing and competitive with our top problems. It contains issues like the moral status of digital minds, space governance, and stable totalitarianism.

These issues don’t yet have well-developed fields built around them like biosecurity and AI safety do. The career paths within them may be less clearly defined, and overall, pursuing work on these problems should be thought of as high-variance.

Most of these issues are incredibly neglected. Some of them, like understanding the moral status of digital minds or invertebrate welfare, have only dozens of people working on them full time and only a few small funding sources. Meanwhile, hundreds, thousands, or even tens of thousands of people work on some of the issues we list on our page, with millions, billions, or even (in the case of climate change) over a trillion dollars in annual funding dedicated to solving them. And several issues we don’t list, like education in wealthy countries, have even more resources devoted to them.

We think this extreme neglectedness makes it particularly high impact to make progress on these emerging challenges — if you can.

This is balanced in part by the fact that it might be very hard to make progress on these issues (how does one increase society’s understanding of the moral status of digital minds?), or they may turn out to be much less pressing after further investigation. Since so little work has been done on them, our understanding of these issues is limited, and it will be harder to find collaborators or other support to work on them.

But even so, and partly because these fields aren’t well defined, we think people who are well-suited to working on them can have a really big impact by getting in on the ground floor. We recommend learning everything there is to learn about the topic (which often isn’t very much due to limited research), and then helping to shape the field and assess the pressingness of the issue with an insider’s perspective.

For example, we think of AI safety as having been in this category about a decade ago — and the people who pioneered the field have been disproportionately impactful in part because they were the only ones working on it.

Tackling these issues is not an easy path. There usually won’t be many organisations or jobs available, meaning you’ll probably have to chart your own course. This can be confusing and challenging, and there is often a much higher chance that it won’t work out, or that you might even do harm by shaping a burgeoning field in a negative way. This means it’s worth being extra careful, and it’s certainly not everyone’s best option to work on an emerging challenge.

But for those who have the right background and aptitudes to thrive in this kind of work, it can be extremely promising.

Climate change

Climate change is a very important issue, and we think, in general, more resources should be going toward addressing it. Humanity’s greenhouse gas emissions have triggered rising global temperatures, which are already impacting people’s lives. Projections suggest this will result in many millions of avoidable deaths and widespread disruption and harm in the coming decades.1 The harms of climate change are arguably particularly objectionable because they will generally most burden the populations who have contributed least to the problem.

We think, however, that given the work that is already happening to mitigate climate change, and considering the scale of other, more neglected issues, many people can do even more good tackling issues like nuclear weapons, catastrophic pandemics, factory farming, and risks from artificial intelligence.

The topline reasons for listing climate change near the top of our ‘other pressing problems’ section, rather than in our ‘most pressing problems’ section, are:

  • Climate change is significantly less neglected than other problems we focus on, and we expect that to continue.
  • Substantial progress has already been made in addressing climate change, which makes the most extreme global outcomes less likely than they might have been otherwise. We think it’s likely this progress will continue.
  • The most recent projections indicate that while the world is likely to miss the goal of keeping global warming below 2°C, it’s less likely than previously thought to exceed 4°C.
  • While lower levels of warming can still do a lot of damage, it’s much less likely to pose a risk of human extinction than some other threats, like AI, pandemics, and nuclear war.

Climate change is projected to have serious consequences for many of the most vulnerable populations, such as people in India who already face the challenges of extreme heat. But climate change is not a unique threat in this regard. Preventable diseases and premature deaths also disproportionately burden people in low-income countries, and we believe much more should be done to address this problem. We think climate change is roughly comparable to the general challenge of improving global health and wellbeing.

We will continue to list climate change roles on our job board, just as we do for roles focused on improving global health.

Below we’ll give more detail on recent research and our thinking about how the neglectedness, scale, and tractability of climate change compares to other problems on our list.

Neglectedness

Climate change is significantly less neglected than it was in the recent past and much less neglected than most of the other issues on our list.

When we published an updated article on climate change in 2022, we cited an estimate from the Climate Policy Initiative that global climate finance was around $640 billion annually in 2019/20. The most recent version of that estimate has nearly doubled to $1.265 trillion.

This is about 10 times the amount of global funding for biosecurity, according to a recent estimate. It’s more than 3,000 times the amount of funding going to factory farming.2
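As a rough sanity check (not part of the original analysis), these ratios can be reproduced from the two dollar figures the post cites:

```python
# Reproducing the funding comparisons cited in this post, using figures the
# post itself provides: ~$1.265 trillion in annual global climate finance,
# and ~$410 million per year currently spent on factory farming.

climate_finance = 1.265e12   # USD per year (Climate Policy Initiative estimate)
factory_farming = 410e6      # USD per year (figure cited earlier in the post)

ratio = climate_finance / factory_farming
print(f"Climate funding is roughly {ratio:,.0f}x factory farming funding")

# The post also says climate finance is about 10x biosecurity funding,
# which implies biosecurity funding of roughly $126.5 billion per year.
implied_biosecurity = climate_finance / 10
print(f"Implied biosecurity funding: ${implied_biosecurity / 1e9:.1f} billion/year")
```

The first ratio comes out at roughly 3,085, consistent with the post’s “more than 3,000 times” claim.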

Though we haven’t done nearly enough — and we should have done more much sooner — humanity has broadly recognised climate change as a major world problem and devoted significant resources to addressing it. For this reason, warming looks like it will be significantly less severe than it might have otherwise been.

Again, more climate change funding is still needed — the IPCC has called for 3-6x the current funding level — but the recent increase is a positive development.

And there’s widespread support for continuing action on climate change:

  • In opinion polls, climate change is consistently ranked as one of the most important global problems. A 2021 poll found that a plurality of respondents in the EU believed climate change was the most important single problem facing the world, ranking above poverty, infectious disease, and terrorism.
  • A 2022 poll of 21,000 people living in 22 countries found that 36% considered climate change and environmental protection to be among the top three problems facing the world.
  • Public support for climate policies is high worldwide. A 2024 survey published in Nature found that 89% of respondents wanted their governments to do more to tackle climate change.

The 2024 study found that people tend to underestimate how supportive others are of tackling climate change. Other studies have shown similar findings. As Hannah Ritchie notes:

“A study published in Nature Communications found that 80% to 90% of Americans underestimated public support for climate policies. And not by a small amount: they thought that just 37% to 43% were supporters, despite the actual number being 66% to 80%. In other words, they thought people in favour of climate policies were in the minority. In reality, the opposite was true: more than two-thirds of the country wants to see more action.”

Most discussions on climate change are now about the merits of various solutions and the scale of the problem, not whether it exists or is a problem at all.

The IPCC reported with medium confidence that, “By 2020, laws primarily focussed on reducing GHG emissions existed in 56 countries covering 53% of global emissions.” And: “More than 100 countries have either adopted, announced or are discussing net zero GHG or net zero CO2 emissions commitments, covering more than two-thirds of global GHG emissions.”

The future is uncertain, and it’s possible that investment in fighting climate change could stall or even reverse course. This could be due to a backlash against climate action or other shifts in the global political landscape. For example, if Donald Trump were elected president in November, the US federal government would be less likely to invest in ambitious climate change mitigation, and some progress on this issue may stall.

But even a significant slowdown in funding could leave climate change much better funded than our top-ranked problems, which could also be affected by shifting political winds. And it’s notable that climate funding has consistently increased since 2013/2014 despite many tumultuous political events. We also expect significant private investment in climate solutions to continue.

All this matters because we think neglectedness is a key factor in determining how pressing a problem is — in the sense of how much good you can do by working on it. The more work goes into a problem, the more likely it is to hit diminishing returns because the low-hanging fruit has been taken. In other words, if you’re the 100,000th person working on an issue, you’re likely to have a smaller impact, all else being equal, than if you’re the 100th person. (See more on this above.)

We think there is still a lot of good work to be done on climate change, and we hope to see much more investment in the most impactful solutions. That’s why we list it as a pressing problem. But there are other issues that are also large or even larger in scale, that have insufficient resources going into solving them, and which are not as widely recognised.

Scale

Recent climate change developments

Despite what some sceptics have tried to say over the years, climate change is real, it’s already happening, and it has serious impacts on the planet.

But the resources humanity has invested into counteracting climate change are starting to bear fruit. Due to progress in low-carbon technology and increasingly ambitious climate policy, overall warming is likely to be much lower than feared a decade ago.

From 2000 to 2010, global emissions increased by around 3% per year, and the world was tracking slightly above the highest emissions scenario considered by the IPCC, which implied warming of around 5°C by 2100. However, since then, global emissions have slowed considerably and appear to be reaching more of a plateau, making a 5°C warmer world look increasingly unlikely.

Progress has been driven by strengthening climate policy and the falling costs of low-carbon technologies. Over the last 30 years, the price of lithium-ion batteries has declined by 97%.

This means that batteries will play an increasing role in energy storage, as well as in transport. Some family electric cars now sell for $10,000, and electric cars cost less to maintain and run than petrol cars. In 2020, 4% of new cars sold were electric. That figure increased to 18% only three years later.

A similar trend is happening with solar panel prices, which have fallen by a factor of more than 500 over the last 50 years.

As a result, the share of global electricity production from solar has increased dramatically in recent years. While still only at 5%, it’s rising fast.3

Even without stringent climate policy, we expect low-carbon technology will play an increasing role in the global energy system.

In addition, climate policies across the world have strengthened in recent years following the 2015 Paris Agreement, which aimed to limit warming to 2°C. As a result, projections of future global temperatures have moderated substantially over the last decade. In 2014, current policies suggested a pathway to around 3.9°C of warming in 2100. However, more ambitious climate policy and improved low-carbon technology now place the world on a pathway to warming of around 2.7°C by the end of the century.

If climate policy continues to strengthen in the future, as we hope, warming could be reduced even further. Indeed, if governments stick to their pledges and targets, the most likely level of warming is around 2.1°C.

This is still far more warming than the world should want — 2.1°C would be majorly disruptive, and many of the harms will fall on the world’s most vulnerable people, who contributed least to the problem. But it’s far less warming and less harmful than it might have been had we not begun mitigating it.

Source: Climate Action Tracker • Values included for the global temperature increase projections are median, with extended ranges on either side.

This is just experts’ best guess of likely warming based on some key parameters, and there is uncertainty about how emissions will progress over the century and how sensitive the climate is to emissions. But there is broad consensus in the literature that current policies will likely result in warming between 2°C and 3.5°C by 2100, with the likelihood of higher temperatures decreasing over time as policy strengthens. Our children aren’t likely to face the world of 5°C warming that we once feared.

The IPCC has found that while our uncertainty about the range of future warming has decreased — making higher temperatures less likely — lower temperatures now look somewhat riskier for a range of impacts than previously believed. This somewhat diminishes the positive update from reduced uncertainty about the level of warming, but it appears unlikely to substantially change the general ballpark of the direct harms of climate change, which we turn to next.

Projected deaths

Despite progress, climate change is still on track to cause a huge amount of suffering and millions of deaths over the next century. That’s why we continue to think it makes sense to encourage more people to work towards making further progress.

Several projections have attempted to quantify the potential loss of life due to climate change:

  • The World Health Organization has projected 250,000 excess deaths annually between 2030 and 2050.
  • A Nature Communications study by R. Daniel Bressler estimated 83 million cumulative excess deaths by 2100 with 4.1°C of warming.4
    • In the most extreme scenario under this model, cumulative excess deaths reach nearly 300 million by 2100.
    • If warming is limited to 2.4°C by the end of the century, the model projects 9 million excess deaths.
  • The Climate Vulnerable Forum has projected 3.4 million deaths per year by 2100 from “unabated climate change”.5
  • While not giving precise projections of annual deaths, a 2023 IPCC report found that, “Depending on the level of global warming, the assessed long-term impacts will be up to multiple times higher than currently observed (high confidence) for 127 identified key risks, e.g., in terms of the number of affected people and species.”6

These projections are extremely difficult, so we shouldn’t place too much confidence in any particular estimate. But the scale of these projected effects is consistently disturbing and roughly on a par with the broad range of major world health challenges (largely reflecting global inequalities) like tuberculosis, diarrheal diseases, malaria, outdoor air quality, and HIV/AIDS, which cause many millions of deaths each year.7

While climate change is expected to increase the number of global deaths relative to what it would otherwise have been, the Institute for Health Metrics and Evaluation recently forecast in its Global Burden of Disease study that between 2022 and 2050, global life expectancy will increase by 4.6 years. This projection factors in the impact of rising temperatures and notes:

Our findings indicate that increases in life expectancy will be largest in countries where it is currently lower, and inequalities between countries will shrink.

Addressing climate change may synergise with other global health initiatives. For example, transitioning to green energy can mitigate climate change while also reducing air pollution, and eradicating diseases like malaria could make societies more resilient to climate-related challenges. Indeed, the Gates Foundation has said, “Malaria eradication may be one of the most cost-effective climate adaptations we can make.”

All things considered, while climate change is a large-scale problem, it seems less severe than threats that we think pose much more significant risks of human extinction, like nuclear war and engineered pandemics.

See more details in a footnote.8 For a related perspective, see David Wallace-Wells’ New York Times op-ed, “Just How Many People Will Die From Climate Change?”

Economic models

While ‘projected deaths’ are a useful proxy for understanding the impact of climate change and comparing its scale to other problems, it’s highly imperfect and doesn’t capture the full effects of warming. It’s helpful to look at other methods of assessing climate change’s impact to see if they suggest its scale may be more comparable to potentially extinction-level events, like the worst catastrophic pandemics.

In some ways, climate-economy models provide a broader picture of the impact, because they incorporate negative effects on people’s lives beyond disease and death. Experts in this area project the social impact of warming based on two different kinds of methods:

  • Bottom-up models estimate that 2°C to 3.5°C warming by 2100 could reduce global GDP by 1-10% compared to a world without climate change.
  • Top-down models suggest more pessimistic outcomes, with potential GDP reductions of around 20%.

Making these projections is exceedingly difficult, so again, we shouldn’t be overly confident in any particular model. It’s also important to note that the costs in these models are relative to a counterfactual future world without climate change, not today’s economy. (See more detail in this footnote.9) Even with climate change, average living standards are expected to rise significantly in the future due to ongoing economic growth.10

For an alternative take arguing that the worst-case effects of climate change remain underrated, even by the IPCC, we recommend “Climate endgame: Exploring catastrophic climate change scenarios” by Kemp et al.

Contributions to existential risk

We’ve argued that even if climate change turns out to be significantly worse than existing projections suggest, it is very unlikely to directly cause human extinction:

  • The IPCC and climate models account for various feedback loops and tipping points. The chance of runaway warming to uninhabitable global conditions is considered extremely low.
  • Humanity has shown the ability to adapt to climate changes in the past. With decades or centuries of warming, further adaptation is possible, even in extreme scenarios.

We think the direct extinction risk from climate change is less than 1 in 1,000,000. This is comparable to Toby Ord’s estimate of the risk of an existential catastrophe from an asteroid collision in the next century.

We also discuss the possibility of climate change indirectly increasing other catastrophic risks in our problem profile on climate change. Climate change-induced disasters or crises could, for example, fuel international conflict, perhaps increasing the risk of a great power war.11

While we think these indirect risks are real, we don’t think they significantly strengthen the case for working on climate change over other pressing global issues, all things considered.

One reason is that instead of working on climate change, you might instead work on reducing the risk of a great power war directly — for example, by working to reduce the risk of an accidental nuclear launch or fostering cooperation between great powers. It’s possible that working to reduce climate change is in fact more effective on the current margin at reducing the risk of great power war than either of those methods or any others. But our view is that tackling threats as directly as possible is usually a good heuristic, especially when the indirect method already receives a fair amount of resources, as is the case with climate change, as discussed above.

There are also indirect benefits from working on many problems, not just climate change. For example, reducing the risk of great power conflict could plausibly increase the chance that we effectively tackle climate change, because avoiding conflict makes it easier for countries to coordinate with one another to bring down carbon emissions. Similarly, mitigating factory farming could reduce the risk of pandemics, because factory farms increase the risk that a potential pandemic pathogen crosses over from animals to humans.

Tractability

While the neglectedness and scale of climate change tend to count in favour of prioritising it less than our top problems, it is plausibly more tractable than at least some of the other problems. This is a reason to prioritise it more on the margin, since it means that working on it in your career can make a bigger positive difference to solving it.

There are two main reasons to think that climate change is more tractable than other global catastrophic risks. These arguments are discussed in this article by Giving What We Can:

First, there is a clear success metric for climate change: we know we are winning if we reduce carbon emissions. Compared to other problems like AI safety and nuclear security, it is much clearer whether we are making progress on climate change.

Second, because success is relatively easy to measure, it is easier to identify the most promising ways forward. There are now several climate success stories which suggest that progress on climate change is possible if efforts are carefully designed.

Because climate change has such a clear success metric and different solutions are now so well-tested, it is one of the more tractable major global risks.

Despite the fact that climate change seems a more tractable problem, we think this does not outweigh the differences in neglectedness and scale between climate change and problems on the top of our list.

There’s still much more to do

There has been substantial progress on climate change, and the risks are now lower than they once were. We could instead have learned that the problem was getting worse and that the most extreme possibilities were looking more likely, but that hasn’t happened. That doesn’t mean that climate change is no longer a big problem. Under current policies, there is a non-negligible chance of 4°C warming, which would clearly be damaging to the world, and work still needs to be done to reduce emissions further. Even if climate change is less severe than that, many people will likely suffer as a result.

As we discussed at the start of this post and on our problem profiles page, we try to think about the difference we and our readers can make on the margin. We think about what they can do to help as much as they can, given how the rest of society spends resources. We are not saying that all resources directed to climate should instead go to AI and pandemics. In fact, we think that climate change should receive more resources than it does today, just as we think global health should. Our point is that, especially for many people starting their careers, you can probably do even more good by working on other problem areas.

We’re also not telling people currently working on climate change that they should change careers or suggesting their work isn’t valuable. It’s often very valuable, and personal career decisions must weigh many different factors. While the pressingness of the problem you work on is an important and underrated factor in our opinion, considerations like personal fit — and what you enjoy — are also relevant.

Some people argue that climate change should be prioritised in part because the harms it causes are particularly unjust. Many countries that will be most harmed have historically contributed least to greenhouse gas emissions. We don’t explicitly include these kinds of considerations in our rankings, instead focusing on total welfare impacts, but they may motivate many people in their work. Though note that considerations of justice may be relevant to other problems as well — e.g. factory farming or the threat of nuclear war.

Johannes Ackva, a grantmaker who works on climate change, told 80,000 Hours in an interview that early-career people might be advised not to work on the issue because much of the policy, technology, and emissions trajectories could be essentially “locked in” within the next 10-15 years or so. Since you’re most likely to be impactful after at least a decade in your field, a young person pursuing this path may find the most valuable years of their career don’t coincide with the best opportunities to mitigate the harms of climate change.

If you want to work on mitigating climate change, we list climate change roles on our job board and have guidance for what seem to be the best ways to help in our article on the problem. And if you want to donate to organisations that work on this topic, we’d recommend the Founders Pledge Climate Change Fund.

What if I disagree with 80,000 Hours about all of this?

We expect a lot of disagreement with these decisions. One reason you might disagree with our ranking of climate change is if you think it’s more likely to cause human extinction than we’ve argued is plausible, or if you think the risks from advanced AI, catastrophic pandemics, and nuclear weapons are significantly lower than we do.

Figuring out how to compare the impact of working on different problem areas is hard, and there will always be reasonable disagreement about how to do it. Members of our team disagree with one another on these topics all the time.

We also acknowledge that we may well be wrong about these new changes, but that’s also true for every other choice we make as an organisation.

We’re excited for people to engage with our ideas, propose counter-arguments, and develop their own views. We have an article that can help you compare problems for yourself if you’re interested in exploring this further. Many people in our audience and in the effective altruism community have differing views on what issues are most pressing – you can see some argument about these topics on the Effective Altruism Forum.

Much of our other content, such as our career guide, is also designed to be helpful regardless of your problem prioritisation. We think we can still be useful to people even if they totally disagree with us on what issues are most pressing.

Learn more

Factory farming

Problem choice

Climate change

The post Updates to our problem rankings of factory farming, climate change, and more appeared first on 80,000 Hours.

]]>
An apology for our mistake with the book giveaway https://80000hours.org/2024/01/an-apology-for-our-mistake-with-the-book-giveaway/ Fri, 05 Jan 2024 14:15:43 +0000 https://80000hours.org/?p=85320 The post An apology for our mistake with the book giveaway appeared first on 80,000 Hours.

]]>
80,000 Hours runs a programme where subscribers to our newsletter can order a free, paperback copy of a book to be sent to them in the mail. Readers choose between getting a copy of our career guide, Toby Ord’s The Precipice, and Will MacAskill’s Doing Good Better.

This giveaway has been open to all newsletter subscribers since early 2022. The number of orders we get depends on the number of new subscribers that day, but in general, we get around 150 orders a day.

Over the past week, however, we received an overwhelming number of orders. The offer of the free book appears to have been promoted by some very popular posts on Instagram, which generated an unprecedented amount of interest for us.

While we’re really grateful that these people were interested in what we have to offer, we couldn’t handle the massive uptick in demand. We’re a nonprofit funded by donations, and everything we provide is free. We had budgeted to run the book giveaway projecting the demand would be in line with what it’s been for the past two years. Instead, we had more than 20,000 orders in just a few days — which we anticipated would run through around six months of the book giveaway’s budget.

We’ve now paused taking new orders, and we’re unsure when we’ll be able to re-open them.

Also, because of this large spike in demand, we had to tell many people who subscribed to our newsletter hoping to get a physical book that we’re not able to complete their order.

We deeply regret this mistake. We should have had a better process in place to pause the book giveaway much sooner, so that no orders were placed that we couldn’t fulfil, and so no one signed up to the newsletter thinking they would get a physical copy of a book when they wouldn’t.

Our readers’ trust in our services is extremely important to us, and we’re very sorry to let down the people who won’t get the books they signed up for.

We understand that this might make some readers trust us less. All we can say is that we commit to doing better in the future. We’re reviewing our book giveaway processes so that going forward, we will be able to consistently fulfil all orders as expected.

If you’re reading this and you were one of the users affected:

  • Please accept our sincerest apologies for not being able to deliver on our promise to you.
  • You can still get access to the 80,000 Hours career guide in these ways:

We’d also like to address any concerns readers may have concerning the processing of user data that we obtained during this period:

If you’d like to unsubscribe from our newsletter, because of this or any other reason, you can do so at any time by clicking the ‘unsubscribe’ link in the footer of any email from us. If you unsubscribe, we won’t email you again.

User data collected by us will be processed in accordance with our privacy policy, which you can read on Effective Ventures’ website here.

We will never sell any user data, for any reason.

Users who ordered a book will also have provided some of their personal data to our distribution partner, Impact Books, such as the delivery address for their book and their email. You can read their privacy policy here. Like us, they will never sell your data, for any reason.

We’ve asked Impact Books to delete all the personal data they had gathered from any user whose order we did not fulfil, and they will do so. So you can be confident that we will not benefit in any way from your provision of this data.

We hope this clears up some potential concerns in this area.

We apologise once again for not sending out all the requested books, and we’re really sorry that we let people down.

We think our book giveaway is a valuable service, so we’re motivated to get it restarted in a sustainable way — and we will strive to make sure we avoid a mistake like this in the future. We also hope that some of those who are disappointed to not receive a paperback book can make use of other versions of our advice, which are (and will remain) available for free online.

Update — Book giveaway re-opened on January 26, 2024:

We have re-opened our book giveaway for free paperback orders! If you have already signed up to our newsletter, you can order a paperback book by emailing book.giveaway@80000hours.org. Otherwise, you can get your book by subscribing to our newsletter as normal.

We greatly appreciate the patience of our new subscribers while we prepared to re-open the giveaway.

While we may have to close orders if we get overwhelmed again in the future, we have made several changes to improve the process of the book giveaway to address this problem.

  1. We added new terms and conditions to the giveaway so new subscribers are better informed about the availability of books in certain formats, our data privacy policy, and the circumstances in which we may be unable to fulfil paperback orders.
  2. We improved the system that alerts us to unexpectedly high volumes of paperback book orders so that more of us are aware sooner.
  3. We developed clearer internal recommendations and procedures for when and how to pause the giveaway.

These changes will help us respond more quickly to these situations in the future, which we hope will limit the number of orders placed that we cannot fulfil.

The post An apology for our mistake with the book giveaway appeared first on 80,000 Hours.

]]>
Announcing our plan to become an independent organisation https://80000hours.org/2023/12/announcing-plan/ Fri, 29 Dec 2023 14:39:54 +0000 https://80000hours.org/?p=85263 The post Announcing our plan to become an independent organisation appeared first on 80,000 Hours.

]]>
We are excited to share that 80,000 Hours has officially decided to spin out as a project from our parent organisations and establish an independent legal structure.

80,000 Hours is a project of the Effective Ventures group — the umbrella term for Effective Ventures Foundation and Effective Ventures Foundation USA, Inc., which are two separate legal entities that work together. It also includes the projects Giving What We Can, the Centre for Effective Altruism, and others.

We’re incredibly grateful to the Effective Ventures leadership and team and the other orgs for all their support, particularly in the last year. They devoted countless hours and enormous effort to helping ensure that we and the other orgs could pursue our missions.

And we deeply appreciate Effective Ventures’ support in our spin-out. They recently announced that all of the other organisations under their umbrella will likewise become their own legal entities; we’re excited to continue to work alongside them to improve the world.

Back in May, we investigated whether it was the right time to spin out of our parent organisations. We’ve considered this option at various points in the last three years.

There have been many benefits to being part of a larger entity since our founding. But as 80,000 Hours and the other projects within Effective Ventures have grown, we concluded we can now best pursue our mission and goals independently. Effective Ventures leadership approved the plan.

Becoming our own legal entity will allow us to:

  • Match our governing structure to our function and purpose
  • Design operations systems that best meet our staff’s needs
  • Reduce interdependence with other entities that raises financial, legal, and reputational risks

There’s a lot for us to do to make this happen. We’re currently in the process of finding a new CEO to lead us in our next chapter. We’ll also need a new board to oversee our work, and new staff for our internal systems team and other growing programmes.

We’re excited to begin this next chapter and to continue providing research and support to help people have high-impact careers!

Join the 450,000 people aiming to have a greater impact with their careers.

Sign up and we’ll send you:

  • Weekly job opportunities
  • Opportunities to meet others
  • Details on how to get one-on-one coaching from our team

The post Announcing our plan to become an independent organisation appeared first on 80,000 Hours.

]]>
Preventing catastrophic pandemics https://80000hours.org/problem-profiles/preventing-catastrophic-pandemics/ Thu, 23 Apr 2020 13:57:25 +0000 https://80000hours.org/?page_id=69550 The post Preventing catastrophic pandemics appeared first on 80,000 Hours.

]]>
Some of the deadliest events in history have been pandemics. COVID-19 demonstrated that we’re still vulnerable to these events, and future outbreaks could be far more lethal.

In fact, we face the possibility of biological disasters that are worse than ever before due to developments in technology.

The chances of such catastrophic pandemics — bad enough to potentially derail civilisation and threaten humanity’s future — seem uncomfortably high. We believe this risk is one of the world’s most pressing problems.

And there are a number of practical options for reducing global catastrophic biological risks (GCBRs). So we think working to reduce GCBRs is one of the most promising ways to safeguard the future of humanity right now.

Summary

Scale

Pandemics — especially engineered pandemics — pose a significant risk to the existence of humanity. Though the risk is difficult to assess, some researchers estimate that there is a greater than 1 in 10,000 chance of a biological catastrophe leading to human extinction within the next 100 years, and potentially as high as 1 in 100. (See below.) And a biological catastrophe killing a large percentage of the population is even more likely — and could contribute to existential risk.

Neglectedness

Pandemic prevention is currently under-resourced. Even in the aftermath of the COVID-19 outbreak, spending on biodefense in the US, for instance, has only grown modestly — from an estimated $17 billion in 2019 to $24 billion in 2023.

And little of existing pandemic prevention funding is specifically targeted at preventing biological disasters that could be most catastrophic.

Solvability

There are promising approaches to improving biosecurity and reducing pandemic risk, including research, policy interventions, and defensive technology development.

Why focus your career on preventing severe pandemics?

COVID-19 highlighted our vulnerability to worldwide pandemics and revealed weaknesses in our ability to respond. Despite advances in medicine and public health, around seven million deaths worldwide from the disease have been recorded, and many estimates put the figure far higher.

Historical events like the Black Death and the 1918 flu show that pandemics can be some of the most damaging disasters for humanity, killing tens of millions of people and wiping out significant portions of the global population.

It is sobering to imagine the potential impact of a pandemic pathogen that is much more contagious and deadly than any we’ve seen so far.

Unfortunately, such a pathogen is possible in principle, particularly in light of advancing biotechnology. Researchers can design and create biological agents much more easily and precisely than before. (More on this below.) As the field advances, it may become increasingly feasible to engineer a pathogen that poses a major threat to all of humanity.

States or malicious actors with access to these pathogens could use them as offensive weapons or wield them as threats to obtain leverage over others.

Dangerous pathogens engineered for research purposes could also be released accidentally through a failure of lab safety.

Either scenario could result in a catastrophic ‘engineered pandemic,’ which we believe could pose an even greater threat to humanity than pandemics that arise naturally, as we argue below.

Thankfully, few people seek to use disease as a weapon, and even those willing to conduct such attacks may not aim to produce the most harmful pathogen possible. But the combined possibilities of accident, recklessness, desperation, and unusual malice suggest a disturbingly high chance of a pandemic pathogen being released that could kill a very large percentage of the population. The world might be especially at risk during great power conflicts.

But could an engineered pandemic pose an extinction threat to humanity?

There is reasonable debate here. In the past, societies have recovered from pandemics that killed as much as 50% of the population, and perhaps more.1

But we believe future pandemics may be one of the largest contributors to existential risk this century, because it now seems within the reach of near-term biological advances to create pandemics that would kill greater than 50% of the population — not just in a particular area, but globally. It’s possible they could be bad enough to drive humanity to extinction, or at least be so damaging that civilisation never recovers.

Reducing the risk of biological catastrophes by constructing safeguards against potential outbreaks and preparing to mitigate their worst effects therefore seems extremely important.

It seems relatively uncommon for people in the broader field of biosecurity and pandemic preparedness to work specifically on reducing catastrophic risks and engineered pandemics. Projects that reduce the risk of biological catastrophe also seem to receive a relatively small proportion of health security funding.2

In our view, the costs of biological disasters grow nonlinearly with severity because of the increasing potential for the event to contribute to existential risk. This suggests that projects to prevent the gravest outcomes in particular should receive more funding and attention than they currently do.

In the rest of this section, we’ll discuss how artificial pandemics compare to natural pandemic risks. Later on, we’ll discuss what kind of work can and should be done in this area to reduce the risks.

We also have a career review of biorisk research, strategy, and policy paths, which gives more specific and concrete advice about impactful roles to aim for and how to enter the field.

Natural pandemics show how destructive biological threats can be

Four of the worst pandemics in recorded history were:3

  1. The Plague of Justinian (541-542 CE) is thought to have arisen in Asia before spreading into the Byzantine Empire around the Mediterranean. The initial outbreak is thought to have killed around 6 million (about 3% of the world population)4 and contributed to reversing the territorial gains of the Byzantine empire.
  2. The Black Death (1335-1355 CE) is estimated to have killed 20–75 million people (about 10% of world population) and believed to have had profound impacts on the course of European history.
  3. The Columbian Exchange (1500-1600 CE) was a succession of pandemics, likely including smallpox and paratyphoid, brought by the European colonists that devastated Native American populations. It likely played a major role in the loss of around 80% of Mexico’s native population during the 16th century. Other groups in the Americas appear to have lost even greater proportions of their communities. Some groups may have lost as much as 98% of their people to these diseases.5
  4. The 1918 Influenza Pandemic (1918 CE) spread across almost the whole globe and killed 50–100 million people (2.5%–5% of the world population). It may have been deadlier than either world war.

These historical pandemics show the potential for mass destruction from biological threats, and they are a threat worth mitigating all on their own. They also show that the key features of a global catastrophe, such as high proportional mortality and civilisational collapse, can be driven by highly destructive pandemics.

But despite the horror of these past events, it seems unlikely that a natural pandemic could be bad enough on its own to drive humanity to total extinction in the foreseeable future, given what we know of events in natural history.6

As philosopher Toby Ord argues in the section on natural risks in his book The Precipice, history suggests humanity faces a very low baseline extinction risk — the chance of being wiped out in ordinary circumstances — from natural causes over the course of, say, 100 years.

That’s because if the baseline risk were around 10% per century, we’d have to conclude we’ve gotten very lucky for the 200,000 years or so of humanity’s existence. The fact of our existence is much less surprising if the risk has been about 0.001% per century.

None of the worst plagues we know about in history was enough to destabilise civilisation worldwide or clearly imperil our species’ future. And more broadly, pathogen-driven extinction events in nature appear to be relatively rare for animals.7

Is the risk from natural pandemics increasing or decreasing?

Are we safer from pandemics now than we used to be? Or do developments in human society actually put us at greater risk from natural pandemics?

Good data on these questions is hard to find. The burden of infectious disease generally in human society is on a downward trend, but this doesn’t tell us much about whether infrequent outbreaks of mass pandemics could be getting worse.

In the abstract, we can think of many reasons that the risk from naturally arising pandemics might be falling. They include:

  • We have better hygiene and sanitation than past eras, and these will likely continue to improve.
  • We can produce effective vaccinations and therapeutics.
  • We better understand disease transmission, infection, and effects on the body.
  • The human population is healthier overall.

On the other hand:

  • Trade and air travel allow much faster and wider transmission of disease.8 For example, air travel seems to have played a large role in the spread of COVID-19 from country to country.9 In previous eras, the difficulty of travelling over long distances likely kept disease outbreaks more geographically confined.
  • Climate change may increase the likelihood of new zoonotic diseases.
  • Greater human population density may increase the likelihood that diseases will spread rapidly.
  • Much larger populations of domestic animals can potentially pass diseases on to humans.

There are likely many other relevant considerations. Our guess is that the frequency of natural pandemics is increasing, but that they’ll be less bad on average.10 A further guess is that the reduction in average severity matters more than the increase in frequency, netting out to reduced overall danger. There remain many open questions.

Engineered pathogens could be even more dangerous

But even if natural pandemic risks are declining, the risks from engineered pathogens are almost certainly growing.

This is because advancing technology makes it increasingly feasible to create threatening viruses and infectious agents.11 Accidental and deliberate misuse of this technology is a credible global catastrophic risk and could potentially threaten humanity’s future.

One way this could play out is if some dangerous actor wanted to bring back catastrophic outbreaks of the past.

Polio, the 1918 pandemic influenza strain, and most recently horsepox (a close relative of smallpox) have all been recreated from scratch. The genetic sequence of all these pathogens and others are publicly available, and the progress and proliferation of biotechnology opens up terrifying opportunities.12

Beyond the resurrection of past plagues, advanced biotechnology could let someone engineer a pathogen more dangerous than those that have occurred in natural history.

When viruses evolve, they aren’t naturally selected to be as deadly or destructive as possible. But someone who is deliberately trying to cause harm could intentionally combine the worst features of possible viruses in a way that is very unlikely to happen naturally.

Gene sequencing, editing, and synthesis are now possible and becoming easier. We’re getting closer to being able to produce biological agents the way we design and produce computers or other products (though how long it takes remains unclear). This may allow people to design and create pathogens that are deadlier or more transmissible, or perhaps have wholly new features. (Read more.)

Scientists are also investigating what makes pathogens more or less lethal and contagious, which may help us better prevent and mitigate outbreaks.

But it also means that the information required to design more dangerous pathogens is increasingly available.

All the technologies involved have potential medical uses in addition to hazards. For example, viral engineering has been employed in gene therapy and vaccines (including some used to combat COVID-19).

Yet knowledge of how to engineer viruses to be better as vaccines or therapeutics could be misused to develop ‘better’ biological weapons. Properly handling these advances involves a delicate balancing act.

Hints of the dangers can be seen in the scientific literature. Gain-of-function experiments with influenza suggested that artificial selection could lead to pathogens with properties that enhance their danger.13

And the scientific community has yet to establish strong enough norms to discourage and prevent the unrestricted sharing of dangerous findings, such as methods for making a virus deadlier. That’s why we warn people going to work in this field that biosecurity involves information hazards. It’s essential for people handling these risks to have good judgement.

Scientists can make dangerous discoveries unintentionally in lab work. For example, vaccine research can uncover virus mutations that make a disease more infectious. And other areas of biology, such as enzyme research, show how our advancing technology can unlock new and potentially threatening capabilities that haven’t appeared before in nature.14

In a world of many ‘unknown unknowns,’ we may find many novel dangers.

So while the march of science brings great progress, it also brings the potential for bad actors to intentionally produce new or modified pathogens. Even with the vast majority of scientific expertise focused on benefiting humanity, a much smaller group can use the community’s advances to do great harm.

If someone or some group has enough motivation, resources, and sufficient technical skill, it’s difficult to place an upper limit on how catastrophic an engineered pandemic they might one day create. As technology progresses, the tools for creating a biological disaster will become increasingly accessible; the barriers to achieving terrifying results may get lower and lower — raising the risk of a major attack. The advancement of AI, in particular, may catalyse the risk. (See more about this below.)

Both accidental and deliberate misuse are threats

We can divide the risks of artificially created pandemics into accidental and deliberate misuse — roughly speaking, imagine a science experiment gone wrong compared to a bioterrorist attack.

The history of accidents and lab leaks which exposed people to dangerous pathogens is chilling:

  • In 1977, an unusual flu strain emerged that disproportionately sickened young people and was found to be genetically frozen in time from a 1950 strain, suggesting a lab origin from a faulty vaccine trial.
  • In 1978, a lab leak at a UK facility resulted in the last smallpox death.
  • In 1979, an apparent bioweapons lab in the USSR accidentally released anthrax spores that drifted over a town, sickening residents and animals, and killing about 60 people. Though initially covered up, Russian President Boris Yeltsin later revealed it was an airborne release from a military lab accident.
  • In 2014, dozens of CDC workers were potentially exposed to live anthrax after samples meant to be inactivated were improperly killed and shipped to lower-level labs that didn’t always use proper protective equipment.
  • We don’t really know how often this kind of thing happens because lab leaks are not consistently tracked. And there have been many more close calls.

And history has seen many terrorist attacks and state development of mass-casualty weapons. Incidents of bioterrorism and biological warfare include:

  • In 1763, British forces at Fort Pitt gave blankets from a smallpox ward to Native American tribes, aiming to spread the disease and weaken these communities. It’s unclear if this effort achieved its aims, though smallpox devastated many of these groups.
  • During World War II, the Japanese military’s Unit 731 conducted horrific human experiments and biological warfare in China. They used anthrax, cholera, and plague, killing thousands and potentially many more. The details of these events were only uncovered later.
  • In the 1960s and 1970s, the South African government developed a covert chemical and biological warfare program known as Project Coast. The program aimed to develop biological and chemical agents targeted at specific ethnic groups and political opponents, including efforts to develop sterilisation and infertility drugs.
  • In 1984, followers of the Rajneesh movement contaminated salad bars in Oregon with Salmonella, causing more than 750 infections. It was an attempt to influence an upcoming election.
  • In 2001, shortly after the September 11 attacks, anthrax spores were mailed to several news outlets and two U.S. Senators, causing 22 infections and five deaths.

So should we be more concerned about accidents or bioterrorism? We’re not sure. There’s not a lot of data to go on, and considerations pull in both directions.

It may seem that releasing a deadly pathogen on purpose is more concerning. As discussed above, the worst pandemics would most likely be intentionally created rather than emerge by chance. Plus, there are ways to make a pathogen’s release more or less harmful, and an accidental release probably wouldn’t be optimised for maximum damage.

On the other hand, many more people are well-intentioned and want to use biotechnology to help the world rather than harm it. And efforts to eliminate state bioweapons programs likely reduce the number of potential attackers. (But see more about the limits on these efforts below.) So it seems most plausible that there are more opportunities for a disastrous accident to occur than for a malicious actor to pull off a mass biological attack.

We guess that, all things considered, the considerations pointing to deliberate misuse carry more weight.15 So we suspect that deliberate misuse is more dangerous than accidental releases, though both are certainly worth guarding against.

This image is borrowed from Claire Zabel’s talk on biosecurity.16

Overall, the risk seems substantial

We’ve seen a variety of estimates regarding the chances of an existential biological catastrophe, including the possibility of engineered pandemics.17 Perhaps the best estimates come from the Existential Risk Persuasion Tournament (XPT).

This project involved getting groups of both subject matter experts and experienced forecasters to estimate the likelihood of extreme events. For biological risks, the median estimates from forecasters and domain experts spanned the following ranges:

  • Catastrophic event (meaning an event in which 10% or more of the human population dies) by 2100: ~1–3%
  • Human extinction event: 1 in 50,000 to 1 in 100
  • Genetically engineered pathogen killing more than 1% of the population by 2100: 4–10%18
  • Note: the forecasters tended to have lower estimates of the risk than domain experts.

Although they are the best available figures we’ve seen, these numbers have plenty of caveats. The main three are:

  1. There is little evidence that anyone can achieve long-term forecasting accuracy. Previous forecasting work has assessed performance for questions that would resolve in months or years, not decades.
  2. There was a lot of variation in estimates within and between groups — some individuals gave numbers many times, or even many orders of magnitude, higher or lower than one another.19
  3. The domain experts were selected for those already working on catastrophic risks — the typical expert in some areas of public health, for example, might generally rate extreme risks lower.

It’s hard to be confident about how to weigh up these different kinds of estimates and considerations, and we think reasonable people will come to different conclusions.

Our view is that, given how bad a catastrophic pandemic would be, the fact that there seem to be few limits on how destructive an engineered pandemic could be, and how broadly beneficial mitigation measures are, many more people should be working on this problem than currently are.

Reducing catastrophic biological risks is highly valuable according to a range of worldviews

Because we prioritise world problems that could have a significant impact on future generations, we care most about work that will reduce the biggest biological threats — especially those that could cause human extinction or derail civilisation.

But biosecurity and catastrophic risk reduction could be highly impactful for people with a range of worldviews, because:

  1. Catastrophic biological threats would harm near-term interests too. As COVID-19 showed, large pandemics can bring extraordinary costs to people today, and even more virulent or deadly diseases would cause even greater death and suffering.
  2. Interventions that reduce the largest biological risks are also often beneficial for preventing more common illnesses. Disease surveillance can detect both large and small outbreaks; counter-proliferation efforts can stop both higher- and lower-consequence acts of deliberate misuse; better PPE could prevent all kinds of infections; and so on.

There is also substantial overlap between biosecurity and other world problems, such as global health (e.g. the Global Health Security Agenda), factory farming (e.g. ‘One Health‘ initiatives), and AI.

How do catastrophic biorisks compare to AI risk?

Of those who study existential risks, many believe that biological risks and AI risks are the two biggest existential threats. Our guess is that threats from catastrophic pandemics are somewhat less pressing than threats stemming from advanced AI systems.

But they’re probably not massively less pressing.

One feature of a problem that makes it more pressing is whether there are tractable solutions to work on in the area. Many solutions in the biosecurity space seem particularly tractable because:

  • There are already large existing fields of public health and biosecurity to work within.
  • The sciences of disease and medicine are well-established.
  • There are many promising interventions and research ideas that people can pursue. (See the next section.)

We think there are also exciting opportunities to work on reducing risks from AI, but the field is much less developed than the science of medicine.

The existence of this infrastructure in the biosecurity field may make the work more tractable, but it also makes the field arguably less neglected — which would make it a less pressing problem. In part because AI risk has generally been seen as more speculative, and because it would represent an essentially novel threat, fewer people have been working in the area. This has made AI risk more neglected than biorisk.

In 2023, interest in AI safety and governance began to grow rather rapidly, making these fields somewhat less neglected than they had been previously. But they’re still quite new and so still relatively neglected compared to the field of biosecurity. Since we view more neglected problems as more pressing, this factor probably counts in favour of working on AI risk.

We also consider problems that are larger in scale to be more pressing. We might measure the scale of the problem purely in terms of the likelihood of causing human extinction or a comparably bad outcome. 80,000 Hours assesses the risk of an AI-caused existential catastrophe to be between 3% and 50% this century (though there's a lot of disagreement on this question). Few if any researchers we know believe comparable biorisk is that high.

At the same time, AI risk is more speculative than the risk from pandemics, because we know from direct experience that pandemics can be deadly on a large scale. So some people investigating these questions find biorisk to be a much more plausible threat.

But in most cases, which problem you choose to work on shouldn’t be determined solely by your view of how pressing it is (though this does matter a lot!). You should also take into account your personal fit and comparative advantage.

Finally, a note about how these issues relate:

  1. AI progress may be increasing catastrophic biorisk. Some researchers believe that advancing AI capabilities may increase the risk of a biological catastrophe. Jonas Sandbrink at Oxford University, for example, has argued that advanced large language models may lower the barriers to creating dangerous pathogens. AI biological design tools could also eventually enable sophisticated actors to cause even more harm than they otherwise would.
  2. There is overlap in the policy space between working to reduce biorisks and AI risks. Both require balancing the risk and reward of emerging technology, and the policy skills needed to succeed in these areas are similar. You can potentially pursue a career reducing risks from both frontier technologies.

If your work can reduce risks on both fronts, then you might view the problems as more similarly pressing.

There are clear actions we can take to reduce these risks

Biosecurity and pandemic preparedness are multidisciplinary fields. To address these threats effectively, we need a range of approaches, including:

  • Technical and biological researchers to investigate and develop tools for controlling outbreaks
  • Entrepreneurs and industry professionals to develop and implement these tools
  • Strategic researchers and forecasters to develop plans
  • People in government to pass and implement policies aimed at reducing biological threats

Specifically, you could:

  • Work with government, academia, industry, and international organisations to improve the governance of gain-of-function research involving potential pandemic pathogens, commercial DNA synthesis, and other research and industries that may enable the creation of (or expand access to) particularly dangerous engineered pathogens
  • Strengthen international commitments to not develop or deploy biological weapons, e.g. the Biological Weapons Convention (see below)
  • Develop new technologies that can mitigate or detect pandemics, or the use of biological weapons,20 including:
    • Broad-spectrum testing, therapeutics, and vaccines — and ways to develop, manufacture, and distribute all of these quickly in an emergency21
    • Detection methods, such as wastewater surveillance, that can find novel and dangerous outbreaks
    • Non-pharmaceutical interventions, such as better personal protective equipment
    • Other mechanisms for impeding high-risk disease transmission, such as anti-microbial far UVC light
  • Deploy and otherwise promote the above technologies to protect society against pandemics and to lower the incentives for trying to create one
  • Improve information security to protect biological research that could be dangerous in the wrong hands
  • Investigate whether advances in AI will exacerbate biorisks, and potential solutions to this challenge
  • For more discussion of biosecurity priorities, you can read our article on advice from biosecurity experts about the best way to fight the next pandemic.

The broader field of biosecurity and pandemic preparedness has made major contributions to reducing catastrophic risks. Many of the best ways to prepare for more probable but less severe outbreaks will also reduce the worst risks.

For example, if we develop broad-spectrum vaccines and therapeutics to prevent and treat a wide range of potential pandemic pathogens, this will be widely beneficial for public health and biosecurity. But it also likely decreases the risk of the worst-case scenarios we’ve been discussing — it’s harder to launch a catastrophic bioterrorist attack on a world that is prepared to protect itself against the most plausible disease candidates. And if any state or other actor who might consider manufacturing such a threat knows the world has a high chance of being protected against it, they have even less reason to try in the first place.

Similar arguments can be made about improved PPE, some forms of disease surveillance, and indoor air purification.

But if your focus is preventing the worst-case outcomes, you may want to focus on particular interventions within biosecurity and pandemic prevention over others.

Some experts in this area, such as MIT biologist Kevin Esvelt, believe that the best interventions for reducing the risk from human-made pandemics will come from the world of physics and engineering, rather than biology.

This is because for every biological countermeasure to reduce pandemic risk, such as vaccines, there may be tools in the biological sciences to overcome that countermeasure — just as viruses can evolve to evade vaccine-induced immunity.

And yet, there may be hard limits to the ability of biological threats to overcome physical countermeasures. For instance, it seems plausible that there may just be no viable way to design a virus that can penetrate sufficiently secure personal protective equipment or to survive under far-UVC light. If this argument is correct, then these or similar interventions could provide some of the strongest protection against the biggest pandemic threats.

Two example ways to reduce catastrophic biological risks

We illustrate two specific examples of work to reduce catastrophic biological risks below, though note that many other options are available (and may even be more tractable).

1. Strengthen the Biological Weapons Convention

The principal defence against proliferation of biological weapons among states is the Biological Weapons Convention. The vast majority of eligible states have signed or ratified it.

Yet some states that signed or ratified the convention have also covertly pursued biological weapons programmes. The leading example was the Biopreparat programme of the USSR,22 which at its height spent billions and employed tens of thousands of people across a network of secret facilities.23

Its activities are alleged to have included industrial-scale production of weaponised agents like plague, smallpox, and anthrax. They even reportedly succeeded in engineering pathogens for increased lethality, multi-resistance to therapeutics, evasion of laboratory detection, vaccine escape, and novel mechanisms of disease not observed in nature.24 Other past and ongoing violations in a number of countries are widely suspected.25

The Biological Weapons Convention faces ongoing difficulties:

  • The convention lacks verification mechanisms for countries to demonstrate their compliance, and the technical and political feasibility of verification is fraught.
  • It also lacks an enforcement mechanism, so there are no consequences even if a state were out of compliance.
  • The convention struggles for resources. It has only a handful of full-time staff, and many states do not fulfil their financial obligations. The 2017 meeting of states parties was only possible thanks to overpayment by some states, and the 2018 meeting had to be cut short by a day due to insufficient funds.26

Working to improve the convention’s effectiveness, increasing its funding, or promoting new international efforts that better achieve its aims could help reduce the risk of a major biological catastrophe.

2. Govern dual-use research of concern

As discussed above, some well-meaning research has the potential to increase catastrophic risks. Such research is often called ‘dual-use research of concern,’ since the research could be used in either beneficial or harmful ways.

The primary concerns are that dangerous pathogens could be accidentally released or dangerous specimens and information produced by the research could fall into the hands of bad actors.

Gain-of-function experiments by Yoshihiro Kawaoka and Ron Fouchier raised concerns in 2011. They published results showing they had modified avian flu to spread in ferrets — raising fears that it might also be enabled to spread to humans.

The synthesis of horsepox is a more recent case. Good governance of this kind of research remains more aspiration than reality.

Individual investigators often have a surprising amount of discretion when carrying out risky experiments. It’s plausible that typical scientific norms are not well-suited to appropriately managing the dangers intrinsic in some of this work.

Even in the best case, where the scientific community is composed solely of people who only perform work they sincerely believe is on balance good for the world, we might still face the unilateralist's curse. This occurs when a single individual mistakenly concludes that a dangerous course of action should be taken, even when all their peers have ruled it out. It only takes one person making an incorrect risk assessment to impose major costs on the rest of society, so the chance of disaster grows with the number of people in a position to act.
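To see why a single mistaken individual is enough, here is a rough back-of-the-envelope model (our illustration with hypothetical numbers, not from the article): if each of n researchers independently misjudges a dangerous experiment as safe with probability p, the chance that at least one of them proceeds is 1 - (1 - p)^n, which climbs quickly as n grows.

```python
# Rough sketch of the unilateralist's curse (hypothetical numbers):
# each of n well-meaning researchers independently misjudges a dangerous
# experiment as net-positive with probability p. One mistaken judgement
# is enough for the experiment to go ahead.
def p_someone_proceeds(n: int, p: float) -> float:
    """Probability that at least one of n actors makes the mistaken call."""
    return 1 - (1 - p) ** n

for n in (1, 10, 100):
    print(f"n={n:>3}: {p_someone_proceeds(n, 0.05):.1%}")
```

With p = 0.05, the chance rises from 5% for a single researcher to roughly 40% for ten and over 99% for a hundred — which is why broader cooperation and shared risk assessment, rather than individual discretion, matter so much here.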

And in reality, scientists are subject to other incentives besides the public good, such as publications, patents, and prestige. It would be better if safety-enhancing discoveries were made before easier-to-make dangerous discoveries arise. But the existing incentives may encourage researchers to conduct their work in ways that aren't always optimal for the social good.

Governance and oversight can mitigate risks posed by individual foibles or mistakes. The track record of such oversight bodies identifying concerns in advance is imperfect. The gain-of-function work on avian flu was initially funded by the NIH (the same body which would subsequently declare a moratorium on gain-of-function experiments), and passed institutional checks and oversight — concerns only began after the results of the work became known.

When reporting the horsepox synthesis to the WHO advisory committee on variola virus research, the scientists noted:

Professor Evans’ laboratory brought this activity to the attention of appropriate regulatory authorities, soliciting their approval to initiate and undertake the synthesis. It was the view of the researchers that these authorities, however, may not have fully appreciated the significance of, or potential need for, regulation or approval of any steps or services involved in the use of commercial companies performing commercial DNA synthesis, laboratory facilities, and the federal mail service to synthesise and replicate a virulent horse pathogen.

One challenge is there is no bright line one can draw to rule out all concerning research. List-based approaches, such as select agent lists or the seven experiments of concern, may increasingly be unsuited to current and emerging practice, particularly in such a dynamic field.

But it’s not clear what the alternative to necessarily incomplete lists would be. The consequences of scientific discovery are often not obvious ahead of time, so it may be difficult to say which kinds of experiments pose the greatest risks or in which cases the benefits outweigh the costs.

Even if a more reliable governance regime could be constructed, its geographic scope would remain a challenge. Practitioners inclined toward more concerning work could migrate to more permissive jurisdictions. And even if one journal declines to publish a new finding on public safety grounds, a researcher can resubmit to another journal with laxer standards.27

But we believe these challenges are surmountable.

Research governance can adapt to modern challenges. Greater awareness of biosecurity issues can be spread in the scientific community. We can construct better means of risk assessment than blacklists (cf. Lewis et al. (2019)). Broader cooperation can mitigate some of the dangers of the unilateralist’s curse. There is ongoing work in all of these areas, and we can continue to improve practices and policies.


What jobs are available?

For our full article on pursuing work in biosecurity, you can read our biosecurity research and policy career review.

If you want to focus on catastrophic pandemics in the biosecurity world, it might be easier to work on broader efforts that have more mainstream support first and then transition to more targeted projects later. If you are already working in biosecurity and pandemic preparedness (or a related field), you might want to advocate for a greater focus on measures that reduce risk robustly across the board, including in the worst-case scenarios.

The world could be doing a lot more to reduce the risk of natural pandemics on the scale of COVID-19. It might be easiest to push for interventions targeted at this threat before looking to address the less likely, but more catastrophic possibilities. On the other hand, potential attacks or perceived threats to national security often receive disproportionate attention from governments compared to standard public health threats, so there may be more opportunities to reduce risks from engineered pandemics under some circumstances.

To get a sense of what kinds of roles you might take on, you can check out our job board for openings related to reducing biological threats. This isn't comprehensive, but it's a good place to start.

    Want to work on reducing risks of the worst biological disasters? We want to help.

    We’ve helped people formulate plans, find resources, and put them in touch with mentors. If you want to work in this area, apply for our free one-on-one advising service.

    Apply for advising

    We thank Gregory Lewis for contributing to this article, and thank Anemone Franz and Elika Somani for comments on the draft.


    The post Preventing catastrophic pandemics appeared first on 80,000 Hours.

How 80,000 Hours has changed some of our advice after the collapse of FTX
https://80000hours.org/2023/05/how-80000-hours-has-changed-some-of-our-advice-after-the-collapse-of-ftx/ (12 May 2023)

Following the bankruptcy of FTX and the federal indictment of Sam Bankman-Fried, many members of the team at 80,000 Hours were deeply shaken. As we have said, we had previously featured Sam on our site as a positive example of earning to give, a mistake we now regret. We felt appalled by his conduct and by the harm done to the people who had relied on FTX.

These events were emotionally difficult for many of us on the team, and we were troubled by the implications they might have for our attempts to do good in the world. We had linked our reputation with his, and his conduct left us with serious questions about effective altruism and our approach to impactful careers.

    We reflected a lot, had many difficult conversations, and worked through a lot of complicated questions. There’s still a lot we don’t know about what happened, there’s a diversity of views within the 80,000 Hours team, and we expect the learning process to be ongoing.

    Ultimately, we still believe strongly in the principles that drive our work, and we stand by the vast majority of our advice. But we did make some significant updates in our thinking, and we’ve changed many parts of the site to reflect them. We wrote this post to summarise the site updates we’ve made and to explain the motivations behind them, for transparency purposes and to further highlight the themes that unify the changes.

    We also support many efforts to push for broader changes in the effective altruism community, like improved governance.1 But 80,000 Hours’ written advice is primarily aimed at personal career choices, so we focused on the actions and attitudes of individuals in these updates to the site’s content.

    The changes we made

While we think ambition in doing good is still underrated by many, it's more important now to emphasise the downsides of ambition. Our articles on being more ambitious and the potential for accidental harm had both mentioned the potential risks, but we've expanded on these discussions and made the warnings more salient for the reader.

    We expanded our discussion of the reasons against pursuing a harmful career. And we’ve added more discussion in many places, most notably our article on the definition of “social impact” and in a new blog post from Benjamin Todd on moderation, about why we don’t encourage people to focus solely, to the exclusion of all other values, on aiming at what they think is impartially good.

    We also used this round of updates to correct some other issues that came up during the reflections on our advice after the collapse of FTX.

    The project to make these website changes was implemented by Benjamin Todd, Cody Fenwick and Arden Koehler, with some input from the rest of the team.

    Here is a summary of all the changes we made:

    • We updated our advice on earning to give to include Sam as a negative example, and we discussed at more length the risks of harm or corruption. We express more scepticism about highly ambitious earning to give (though we don’t rule it out, and we think it can still be used for good with the right safeguards).
    • In our article on leverage, we added discussion of the downsides and responsibility that comes with having a lot of leverage, such as the importance of governance and accountability for influential people.
    • We clarified our views on risk and put more emphasis on how you should generally only seek upsides after limiting downsides, for both yourself and the world.
    • We put greater emphasis on respecting a range of values and cultivating character in addition to caring about impact, as well as not doing things that seem very wrong from a commonsense perspective for what one perceives as the “greater good.”
    • We added a lot more advice on how to avoid accidentally doing harm.
    • We took easy opportunities to tone down language around maximisation and optimisation. For instance, we talk about doing more good, or doing good as one important goal among several, rather than the most good. There’s a lot of room for debate about these issues, and we’re not in total agreement on the team about the precise details, but we generally think it’s plausible that Sam’s unusual willingness to fully embrace naive maximising contributed to the decision making behind FTX’s collapse.
    • We slightly reduced how much we emphasise the importance of getting involved with the effective altruism community, which now has a murkier historical impact compared to what we thought before the collapse. (To be clear, we still think there are tons of great things about the EA community, continue to encourage people to get involved in it, and continue to count ourselves as part of it!)
    • We released a newsletter about character virtue and a blog post about moderation.
    • We’ve started doing more vetting of the case studies we feature on the site.
    • We have moved the “Founder of new project tackling top problems” out of our priority paths and into the “high-impact but especially competitive” section on the career reviews page. This move was in part driven by the change in the funding landscape after the collapse of FTX — but also because the recent proliferation of new such projects likely reduces the marginal value of the typical additional project.

    We’re still considering some other changes, such as to our ranking of effective altruism community building and certain other careers, as well as doing even more to emphasise character, governance, oversight, and related issues. But we didn’t want to wait to be ‘done’ with these edits, to the degree we ever will be ‘done’ learning lessons from this episode, before sharing this interim update with readers.

    Some of the articles that saw the most changes were:

    We’ve also updated some of our marketing materials, mostly by toning down calls to “maximise impact.” We still think it’s really important to be scope sensitive, and helping more individuals is better than helping fewer — some of the core ideas of effective altruism. But handling these ideas in a naive way, as the maximising language may incline some toward, can be counterproductive and miss out on important considerations.

    We think there’s a lot more we can learn from what happened. Here are some of the reflections members of the 80k team have had:

    We think the edits we’ve made are only a small part of the response that’s needed, but hopefully they move things in the right direction.

80,000 Hours two-year review: 2021 and 2022
https://80000hours.org/2023/03/80000-hours-two-year-review-2021-and-2022/ (8 March 2023)

    We’ve released our review of our programmes for the years 2021 and 2022. The full document is available for the public, and we’re sharing the summary below.

    You can find our previous evaluations here. We have also updated our mistakes page.


    80,000 Hours delivers four programmes: website, job board, podcast, and one-on-one. We also have a marketing team that attracts users to these programmes, primarily by getting them to visit the website.

    Over the past two years, three of four programmes grew their engagement 2-3x:

    • Podcast listening time in 2022 was 2x higher than in 2020
    • Job board vacancy clicks in 2022 were 3x higher than in 2020
    • The number of one-on-one team calls in 2022 was 3x higher than in 2020

    Web engagement hours fell by 20% in 2021, then grew by 38% in 2022 after we increased investment in our marketing.

    From December 2020 to December 2022, the core team grew by 78% from 14 FTEs to 25 FTEs.

    Ben Todd stepped down as CEO in May 2022 and was replaced by Howie Lempel.

    The collapse of FTX in November 2022 caused significant disruption. As a result, Howie went on leave from 80,000 Hours to be Interim CEO of Effective Ventures Foundation (UK). Brenton Mayer took over as Interim CEO of 80,000 Hours. We are also spending substantially more time liaising with management across the Effective Ventures group, as we are a project of the group.

    We had previously held up Sam Bankman-Fried as a positive example of one of our highly rated career paths, a decision we now regret and feel humbled by. We are updating some aspects of our advice in light of our reflections on the FTX collapse and the lessons the wider community is learning from these events.

    In 2023, we will make improving our advice a key focus of our work. As part of this, we’re aiming to hire for a senior research role.

    We plan to continue growing our main four programmes and will experiment with additional projects, such as relaunching our headhunting service and creating a new, scripted podcast with a different host. We plan to grow the team by roughly 50% in 2023, adding an additional 12 people.

    Our baseline non-marketing budget is $8.8m for 2023 and $13.7m for 2024. We’re keen to fundraise above our baseline budget and also interested in expanding our runway – though expect that the amount we raise in practice will be heavily affected by the funding landscape.

    We would like to increase the number of people and organisations donating to 80,000 Hours, so if you would consider donating, please contact michelle.hutchinson@80000hours.org.

How to choose where to donate
https://80000hours.org/articles/best-charity/ (9 November 2016)

    If you want to make a difference, and are happy to give toward wherever you think you can do the most good (regardless of cause area), how do you choose where to donate? This is a brief summary of the most useful tips we have.

    How to choose an effective charity

    First, plan your research

    One big decision to make is whether to do your own research or delegate your decision to someone else. Below are some considerations.

    If you trust someone else’s recommendations, you can defer to them.

    If you know someone who shares your values and has already put a lot of thought into where to give, then consider simply going with their recommendations.

    But it can be better to do your own research if any of these apply to you:

    • You think you might find something higher impact according to your values than even your best advisor would find (because you have unique values, good research skills, or access to special information — e.g. knowing about a small project a large donor might not have looked into).
    • You think you might be able to productively contribute to the broader debate about which charities should be funded (producing research is a public good for other donors).
    • You want to improve your knowledge of effective altruism and charity evaluation.

    Consider entering a donor lottery.

    A donor lottery allows you to donate into a fund with other small donors, in exchange for a proportional chance to be able to choose where the whole fund gets donated. For example, you might put $20,000 into a fund in exchange for a 20% chance of being able to choose where $100,000 from that fund gets donated.

    Why might you want to do this? If you win the lottery, it’s worthwhile doing a great deal of research into where it’s best to give, to allocate that $100,000 as well as possible. If you don’t win, you don’t have to do any research, and whoever wins the lottery does it instead. In short, it’s probably more efficient for small donors to pool their funds, and for one of them to do in-depth research, rather than for each of them to do a small amount of research. This is because there are some fixed costs of understanding the landscape — it doesn’t generally become 100 times harder to figure out where to donate 100 times the funds.
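The arithmetic behind this is a quick sanity check (a sketch using the hypothetical $20,000 / $100,000 figures above, not part of the original article):

```python
# Donor lottery arithmetic, using the hypothetical figures above:
# a $20,000 contribution to a $100,000 pool buys a 20% chance of
# directing where the whole pool is donated.
contribution = 20_000
pool = 100_000

p_win = contribution / pool        # 0.2
expected_directed = p_win * pool   # equals the $20,000 contribution

# In expectation, each donor directs exactly what they put in. The gain
# is that the fixed cost of in-depth research is paid once, by the
# winner, instead of separately by every small donor.
print(f"win probability: {p_win:.0%}, expected amount directed: ${expected_directed:,.0f}")
```

The key design point is that the lottery leaves each donor's expected allocation unchanged while concentrating the research effort where it can be done most thoroughly.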

    Giving What We Can organises donor lotteries once a year.

    If you’re going to do your own research, decide how much you should do.

The more you’re giving as a percentage of your annual income, the more time it’s worth spending on research. Roughly speaking, a 1% donation might be worth a few hours of work, while a 50% donation could be worth a month of research. On the other hand, the more you earn per hour, the less time it may make sense to take off for independent research, since that time may be better spent simply earning and giving more.

    Another factor is how much you expect the research to affect your decisions. For example, if you haven’t thought about this much before, it’s worth doing more research. But even if you have thought about it a lot, bear in mind you could be overconfident in your current views (or things might have changed since you last looked into it), so a bit of research might be a good idea to ensure your donations are doing the most good.

    Finally, younger people should sometimes do more research, since it will help them learn about charity evaluation, which will inform their giving in future years (and perhaps their career decisions as well). As a young person, giving 1% per year and spending a weekend thinking about it is a great way to learn about effective giving. If you’re a bit older, giving 10%, and don’t expect your views to change, then perhaps one or two days of research is worth it. If you’re giving more than 10%, more time is probably justified.

    Second, choose an effective charity

    If you’re doing your own research, we recommend working through these steps:

    1. Decide which global problems you think are most pressing right now.

    You want to find charities that are working on big but neglected problems, and where there’s a clear route to progress — this is where it’s easiest to have a big impact. If you’re new to 80,000 Hours, learn about how we approach figuring out which global problems are most pressing, or see a list of problems we think especially need attention.

    2. Find the best organisations within your top 2–3 problem areas.

    Look for charities that are well-run, have a great team and potential to grow, and are working on a justified programme.

    Many charitable programmes don’t work, so focus on organisations that do at least one of the following:

    • Implement programmes that have been rigorously tested (most haven’t).
    • Are running pilot programmes that will be tested in the future.
    • Would be so valuable if they worked that it’s worth taking a chance on them — even if the likelihood of success is low. Organisations in this category have a ‘high-risk, high-reward’ proposition, such as scientific research, policy advocacy, or the potential to grow very rapidly.

    If you’re doing your own intensive research, then at this stage you typically need to talk to people in the area to figure out which organisations are doing good work. One starting point might be our lists of top-recommended organisations.

    3. If you have to break a tie, choose the one that’s furthest from meeting its funding needs.

    Some organisations already have a lot of funding, and may not have the capacity to effectively use additional funds. For instance, GiveWell has tried to find a good organisation that provides individuals with vaccines to fund, but funders like the Gates Foundation take most of the promising opportunities. You can assess an organisation’s room for more funding by looking at where they intend to spend additional donations, either by reading their plans or talking to them.

    This consideration is a bit less important than others: if you support a great organisation working on a neglected problem, then they’ll probably figure out a good way to use the money, even if they get a lot.

    Learn more about how to find effective charities

    • When can small donors make donations that are even more effective than large donors? This article lists situations when small donors have an advantage over large donors — ideally you’d choose one of these situations to focus on. It also includes more thoughts on whether to delegate your decision or do your own research.

    • Tips on how to evaluate charities from GiveWell. Bear in mind that the process for evaluating a large organisation is different from evaluating a startup. With large, stable organisations, you can extrapolate forward from their recent performance. With new and rapidly growing organisations, what matters is the long-term potential upside (and their chances of getting there), more than what they’ve accomplished in the past.

    We are not experts in charity evaluation — but there are people who are! Not every cause area has charity evaluators, but in global health and animal welfare the recommendations are more developed.

Good places to start are the following lists, which are updated annually.

    Donating to expert-led funds rather than directly to charities

    The best charity to give to is both hard to determine and constantly changing. So, we think a reasonable option for people who don’t have much time for their own research is to give to expert-managed funds that are aligned with your principles. (Our principles are broadly in line with effective altruism, which is why we highlight effective altruism funds below.)

    When donating to a fund, you choose how to split your giving across different focus areas — global health, animal welfare, community infrastructure, and the long-term future — and an expert committee in each area makes grants, with the aim of selecting the most effective charities. This is a great way to delegate your decision to people who might have a better view of the options, provided you feel reasonably aligned with the committees.

    EA Funds options:

    Founders Pledge also has an expert-led fund for climate change.

    The Giving What We Can donation platform lists more recommended effective altruism funds:

    Donate now
    (Note that EA Funds is a project of the Effective Ventures Foundation, our parent charity, and due to our similar views on how to do the most good, we have received grants from both funds in the past.)

    You can also see some notes from our president, Benjamin Todd, on how he would decide where to donate.

    Topping up grants from other donors you broadly agree with

    If you prefer to have more control over where your money is going, you could also directly ‘top up’ a particular past grant made by one of the funds you think is effective, or another large donor, such as Open Philanthropy — read more about this option here:

    We think the leading foundation that takes an effective altruism approach to giving is Open Philanthropy.1 (Disclosure: it is our largest funder.) You can learn more about Open Philanthropy’s mindset and research in our interviews with current and former research staff.

Open Philanthropy has far more research capacity than any individual donor, but you can roughly match the cost-effectiveness of its grants without needing to invest much effort at all. One way to do this is by co-funding the same projects, or giving based on what its analysts have learned.

    Open Philanthropy often doesn’t want to provide 100% of an organisation’s funding, so that organisations don’t become too dependent on it alone. This creates a need for smaller donors to ‘top up’ its funding.

    In light of the above, Open Philanthropy maintains a database of all its grants, which you can filter by year and focus area.

    Also, some grantmakers at Open Philanthropy offer annual giving suggestions for individual donors that you can follow.

    For instance, if you’re interested in giving to support pandemic preparedness, you can get a list of all its grants in that area, read through some recent ones, and donate to an organisation you find attractive and which still has room to absorb more funding.

    Below is a list of Open Philanthropy’s focus areas and associated grants.

    Our top-priority areas

    Other focus areas we’ve investigated

    Focus areas we know less about

    Reading the research conducted by other informed donors

    Here are some other resources you could draw on:

• Technical AI safety research: A contributor at the Effective Altruism Forum publishes a review of organisations most years — here's their December 2021 update.
• Global health and development: GiveWell identifies and recommends charities that are evidence-based, thoroughly vetted, and underfunded. Many of GiveWell's staff also write about where they are giving personally and make suggestions for the public. Here's their post from 2022.
    • Farmed animal welfare: Animal Charity Evaluators uses four criteria to recommend charities they believe most effectively help animals.
    • ‘S-risks’: The German Effective Altruism Foundation has launched its own expert-advised fund focused on the possibility that future technologies could lead to large amounts of suffering.
    • See all posts about where to donate on 80,000 Hours and on the EA Forum.

    Should you give now or later?

    It might be more effective to invest your money, grow it, and donate a larger sum later. We have an article on this, or you can read this more recent and technical exploration of the considerations. Here are all our resources on the ‘now vs later’ question.
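To see why this question is non-trivial, here is a toy model of the tradeoff. All the numbers are hypothetical assumptions for illustration (a 5% real investment return, and a 3% annual decline in the cost-effectiveness of the best remaining giving opportunities as other donors fund them); this is a sketch of the reasoning, not a recommendation:

```python
# Illustrative only: compare donating a sum now vs investing it and donating later.
def future_donation_value(amount, years, investment_return, impact_decay):
    """Impact-equivalent value (in today's terms) of investing for `years`,
    then donating.

    investment_return: assumed annual real return on investments.
    impact_decay: assumed annual rate at which the most cost-effective
                  giving opportunities are taken by other donors.
    """
    grown = amount * (1 + investment_return) ** years
    # Discount because the best opportunities may shrink over time.
    return grown * (1 - impact_decay) ** years

give_now = 1000
give_later = future_donation_value(1000, years=10,
                                   investment_return=0.05,
                                   impact_decay=0.03)
print(f"Donate now: {give_now:.0f} impact-equivalent dollars")
print(f"Invest 10 years, then donate: {give_later:.0f}")
```

With these particular numbers, investment returns slightly outpace the decay in opportunities, so waiting comes out ahead; reverse the assumptions and donating now wins. The real debate is largely about which rates are plausible.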

    How should you handle taxes and giving?

    If you’re in the US, here’s an introductory guide to giving, taxes, and personal finance, and a more advanced one. You may also be interested in this guide to choosing a donor-advised fund.

    If you’re in the UK, here’s a guide to income tax and donations.

    You can also see Giving What We Can’s article on tax deductibility of donations by country.

    Next steps

    The post How to choose where to donate appeared first on 80,000 Hours.

    ]]>
    What 80,000 Hours learned by anonymously interviewing people we respect https://80000hours.org/2020/06/lessons-from-anonymous-interviews/ Thu, 18 Jun 2020 14:48:27 +0000 https://80000hours.org/?p=69994 The post What 80,000 Hours learned by anonymously interviewing people we respect appeared first on 80,000 Hours.

    ]]>
    We recently released the fifteenth and final installment in our series of posts with anonymous answers.

    These are from interviews with people whose work we respect and whose answers we offered to publish without attribution.

It features answers to 23 different questions, including "How have you seen talented people fail in their work?" and "What's one way to be successful you don't think people talk about enough?"

    We thought a lot of the responses were really interesting; some were provocative, others just surprising. And as intended, they spanned a wide range of opinions.

    For example, one person had seen talented people fail by being too jumpy:

    “It seems particularly common in effective altruism for people to be happy to jump ship onto some new project that seems higher impact at the time. And I think that this tendency systematically underestimates the costs of switching, and systematically overestimates the benefits — so you get kind of a ‘grass is greener’ effect.

    In general, I think, if you’re taking a job, you should be imagining that you’re going to do that job for several years. If you’re in a job, and you’re not hating it, it’s going pretty well — and some new opportunity presents itself, I think you should be extremely reticent to jump ship.

    I think there are also a lot of gains from focusing on one activity or a particular set of activities; you get increasing returns for quite a while. And if you’re switching between things often, you lose that benefit.”

    But another thought that you should actually be pretty open to leaving a job after ~6 months:

    “Critically, once you do take a new job — immediately start thinking “is there something else that’s a better fit?” There’s still a taboo around people changing jobs quickly. I think you should maybe stay 6 months in a role just so they’re not totally wasting their time in training you — but the expectation should be that if someone finds out a year in that they’re not enjoying the work, or they’re not particularly suited to it, it’s better for everyone involved if they move on. Everyone should be actively helping them to find something else.

    Doing something you don’t enjoy or aren’t particularly good at for 1 or 2 years isn’t a tragedy — but doing it for 20 or 30 years is.”

    More broadly, the project emphasised the need for us to be careful when giving advice as 80,000 Hours.

    In the words of one guest:

    “trying to give any sort of general career advice — it’s a fucking nightmare. All of this stuff, you just kind of need to figure it out for yourself. Is this actually applying to me? Am I the sort of person who’s too eager to change jobs, or too hesitant? Am I the sort of person who works themselves too hard, or doesn’t work hard enough?”

    This theme was echoed in a bunch of responses (1, 2, 3, 4, 5, 6).

    And this wasn’t the only recurring theme — here are another 12:

    You can find the complete collection here.

    We’ve also released an audio version of some highlights of the series, which you can listen to here, or on the 80,000 Hours Podcast feed.

    These quotes don’t represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own.

    All entries in this series

    1. What’s good career advice you wouldn’t want to have your name on?
    2. How have you seen talented people fail in their work?
    3. What’s the thing people most overrate in their career?
    4. If you were at the start of your career again, what would you do differently this time?
    5. If you’re a talented young person how risk averse should you be?
    6. Among people trying to improve the world, what are the bad habits you see most often?
    7. What mistakes do people most often make when deciding what work to do?
    8. What’s one way to be successful you don’t think people talk about enough?
    9. How honest & candid should high-profile people really be?
    10. What’s some underrated general life advice?
    11. Should the effective altruism community grow faster or slower? And should it be broader, or narrower?
    12. What are the biggest flaws of 80,000 Hours?
    13. What are the biggest flaws of the effective altruism community?
    14. How should the effective altruism community think about diversity?
    15. Are there any myths that you feel obligated to support publicly? And five other questions.

    The post What 80,000 Hours learned by anonymously interviewing people we respect appeared first on 80,000 Hours.

    ]]>
    Policy and research ideas to reduce existential risk https://80000hours.org/2020/04/longtermist-policy-ideas/ Mon, 27 Apr 2020 22:46:38 +0000 https://80000hours.org/?p=69591 The post Policy and research ideas to reduce existential risk appeared first on 80,000 Hours.

    ]]>
    In his book The Precipice: Existential Risk and the Future of Humanity, 80,000 Hours trustee Dr Toby Ord suggests a range of research and practical projects that governments could fund to reduce the risk of a global catastrophe that could permanently limit humanity’s prospects.

    He compiles over 50 of these in an appendix, which we’ve reproduced below. You may not be convinced by all of these ideas, but they help to give a sense of the breadth of plausible longtermist projects available in policy, science, universities and business.

    There are many existential risks and they can be tackled in different ways, which makes it likely that great opportunities are out there waiting to be identified.

Many of these proposals are discussed in the body of The Precipice. We've got a 3-hour interview with Toby you could listen to, or you can get a copy of the book mailed to you for free by joining our newsletter:

    Policy and research recommendations

    Engineered Pandemics

    • Bring the Biological Weapons Convention into line with the Chemical Weapons Convention: taking its budget from $1.4 million up to $80 million, increasing its staff commensurately, and granting the power to investigate suspected breaches.
    • Strengthen the WHO’s ability to respond to emerging pandemics through rapid disease surveillance, diagnosis and control. This involves increasing its funding and powers, as well as R&D on the requisite technologies.
    • Ensure that all DNA synthesis is screened for dangerous pathogens. If full coverage can’t be achieved through self regulation by synthesis companies, then some form of international regulation will be needed.
    • Increase transparency around accidents in BSL-3 and BSL-4 laboratories.
    • Develop standards for dealing with information hazards, and incorporate these into existing review processes.
    • Run scenario-planning exercises for severe engineered pandemics.

    Unaligned Artificial Intelligence

    • Foster international collaboration on safety and risk management.
    • Explore options for the governance of advanced AI.
    • Perform technical research on aligning advanced artificial intelligence with human values.
    • Perform technical research on other aspects of AGI safety, such as secure containment and tripwires.

    Asteroids & Comets

• Research the deflection of 1 km+ asteroids and comets, perhaps restricted to methods that couldn't be weaponised, such as those that don't lead to accurate changes in trajectory.
    • Bring short-period comets into the same risk framework as near-Earth asteroids.
    • Improve our understanding of the risks from long-period comets.
    • Improve our modelling of impact winter scenarios, especially for 1–10 km asteroids. Work with experts in climate modelling and nuclear winter modelling to see what modern models say.

    Supervolcanic Eruptions

    • Find all the places where supervolcanic eruptions have occurred in the past.
    • Improve the very rough estimates on how frequent these eruptions are, especially for the largest eruptions.
    • Improve our modelling of volcanic winter scenarios to see what sizes of eruption could pose a plausible threat to humanity.
    • Liaise with leading figures in the asteroid community to learn lessons from them in their modelling and management.

    Stellar Explosions

• Build a better model for the threat, including known distributions of parameters instead of relying on representative examples. Then perform sensitivity analysis on that model — are there any plausible parameters that could make this as great a threat as asteroids?
    • Employ blue-sky thinking about any ways current estimates could be underrepresenting the risk by a factor of a hundred or more.

    Nuclear Weapons

    • Restart the Intermediate-Range Nuclear Forces Treaty (INF).
    • Renew the New START arms control treaty, due to expire in February 2026.
    • Take US ICBMs off hair-trigger alert (officially called Launch on Warning).
    • Increase the capacity of the International Atomic Energy Agency (IAEA) to verify nations are complying with safeguards agreements.
    • Work on resolving the key uncertainties in nuclear winter modelling.
    • Characterise the remaining uncertainties then use Monte Carlo techniques to show the distribution of outcome possibilities, with a special focus on the worst-case possibilities compatible with our current understanding.
    • Investigate which parts of the world appear most robust to the effects of nuclear winter and how likely civilisation is to continue there.

    Climate

    • Fund research and development of innovative approaches to clean energy.
    • Fund research into safe geoengineering technologies and geoengineering governance.
    • The US should re-join the Paris Agreement.
    • Perform more research on the possibilities of a runaway greenhouse effect or moist greenhouse effect. Are there any ways these could be more likely than is currently believed? Are there any ways we could decisively rule them out?
    • Improve our understanding of the permafrost and methane clathrate feedbacks.
    • Improve our understanding of cloud feedbacks.
• Better characterise our uncertainty about the climate sensitivity: what can and can't we say about the right-hand tail of the distribution?
    • Improve our understanding of extreme warming (e.g. 5–20 °C), including searching for concrete mechanisms through which it could pose a plausible threat of human extinction or the global collapse of civilisation.

    Environmental Damage

    • Improve our understanding of whether any kind of resource depletion currently poses an existential risk.
    • Improve our understanding of current biodiversity loss (both regional and global) and how it compares to that of past extinction events.
    • Create a database of existing biological diversity to preserve the genetic material of threatened species.

    General

    • Explore options for new international institutions aimed at reducing existential risk, both incremental and revolutionary.
    • Investigate possibilities for making the deliberate or reckless imposition of human extinction risk an international crime.
    • Investigate possibilities for bringing the representation of future generations into national and international democratic institutions.
    • Each major world power should have an appointed senior government position responsible for registering and responding to existential risks that can be realistically foreseen in the next 20 years.
    • Find the major existential risk factors and security factors — both in terms of absolute size and in the cost-effectiveness of marginal changes.
      • (Editor’s note: existential risk factors are problems, like a shortage of natural resources, that don’t directly risk extinction, but could nonetheless indirectly raise the risk of a disaster. Security factors are the reverse, and might include better mechanisms for resolving disputes between major military powers.)
    • Target efforts at reducing the likelihood of military conflicts between the US, Russia and China.
    • Improve horizon-scanning for unforeseen and emerging risks.
    • Investigate food substitutes in case of extreme and lasting reduction in the world’s ability to supply food.
    • Develop better theoretical and practical tools for assessing risks with extremely high stakes that are either unprecedented or thought to have extremely low probability.
    • Improve our understanding of the chance civilisation will recover after a global collapse, what might prevent this, and how to improve the odds.
    • Develop our thinking about grand strategy for humanity.
    • Develop our understanding of the ethics of existential risk and valuing the long-term future.

    Learn more

    The post Policy and research ideas to reduce existential risk appeared first on 80,000 Hours.

    ]]>
    Save time through smart buying https://80000hours.org/2013/01/save-time-through-smart-buying/ Fri, 11 Jan 2013 22:52:00 +0000 http://80000hours.org/2013/01/save-time-through-smart-buying/ When people say that time is money, they mostly mean that you can earn money with your time. But it works both ways. In a previous post I discussed how we can spend money on a virtual assistant to save time. Here I will discuss some ways that you can spend money on goods or services to save you time.

    The post Save time through smart buying appeared first on 80,000 Hours.

    ]]>
    When people say that time is money, they mostly mean that you can earn money with your time. But it works both ways. Here I will discuss some ways that you can spend money on goods or services to save you time.

    Food

I have found a few cheap places to eat in my neighbourhood, but my favourite is our college hall. The student cafeteria is an ideal place to eat, as food is generally cheaper than elsewhere and cooking costs are usually subsidised or paid entirely by the college. I try to arrange all of my meetings during meal times as a form of multitasking, and my college cafeteria is my favourite location. You'll want to factor in travel time to and from your local eatery when doing this calculation, but for me it is well worth doing.

    Shopping

When I do a weekly shop, it takes me about an hour all told. I am about to trial a shopping delivery service such as tesco.com to do the shopping for me. There is an initial setup cost in selecting which items I like (for example, which type of muesli I buy) and saving them in the system. But once I have done this, I can save time each week by taking a photo of my shopping list and sending it to my virtual assistant to order online for me. I have not tried this yet, but I'll let you know how it goes.

    Going digital

I have spent money on a number of services that have allowed me to go digital, saving me time and hassle. Everything here is worth doing in my opinion, though none are especially big time savers relative to many of the other items listed here. Some examples are below:

    • Movie rental – I use iTunes to rent or buy movies rather than going to a video store.
• Digitise your life – I use Evernote to eliminate most paper from my life. Most of my formerly paper files are now stored in Evernote. I take photos of them and use tags and optical character recognition for easy searching. This saves me time in searching through lab books, note pads, and files. An even better solution is to make notes digitally in the first place, but I still find myself wedded, perhaps irrationally, to pen and paper for many things. The main benefit of Evernote for me is that it makes me footloose, allowing me to have my office with me anytime, anywhere, turning more travel time into productive time. It also allows me to share my files with my virtual assistant, allowing me to delegate more.
• Good backup – The time cost of losing all your files is quite large, not to mention the associated psychological stress. Get yourself a good backup system. I use a 1TB hard disk in RAID-1 configuration (so it actually functions as a 500GB disk, with redundancy in case one of its two independent disks fails). To avoid the hassle of backing up regularly I use Time Machine, which does it automatically for me. I keep the hard disk in my office, so that whenever I go there I hook up to it at the same time as I plug in power and internet.

    Graphic design and presentations

There are now professional services that let you send edits marked on a printout of a document or presentation to a virtual business assistant, who puts the edits into the electronic copy. For example, it is much quicker to mark up a "deck" of printed-out PowerPoint slides and scan them to an assistant than it is to make the edits yourself.

When I am making a PowerPoint presentation that includes a complex diagram, I often sketch the diagram on a whiteboard or on paper and send it to a graphic designer to work up into an electronic version rather than doing it myself.

    Travel

    There are a number of ways that you can spend money to gain time when faced with a situation in which you must travel.

Let us begin with the simplifying assumption that the benefit gained from your time while travelling is zero. This makes the calculation quite simple: just work out the extra cost per hour of time saved by taking a faster route.

A more realistic calculation factors in the value of your time while you are travelling. If you are sitting down on a train or bus and can use your laptop, then the benefit gained from time spent travelling is close to the benefit of a spare hour, assuming you have things to do on your laptop that you would otherwise do in your spare time.

You can increase the benefits gained from time while travelling by doing something that you would have had to do anyway. For example, I use the iPhone app Pocket (formerly "Read It Later") to do reading on public transport that I would normally do at my computer. You could be reading this very blog post, or any other website for that matter, on the bus or while waiting for a train, instead of at your computer.

    This means that if you can cycle somewhere in 20 minutes, it might actually be worth your time to take the bus even if it takes 40 minutes, as you could work on the bus. This is of course complicated by waiting times, and the enjoyment and fitness benefits of cycling, but I’ll let you figure those out.
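The cycling-versus-bus comparison above can be sketched as a small calculation. The 80% figure for how productive bus time is, like the travel times, is an assumption you would set for yourself:

```python
# Rough sketch: compare travel options by the minutes effectively 'lost'
# after counting any work you can do en route.
def unproductive_minutes(travel_minutes, productive_fraction):
    """Minutes of a journey not recovered as useful time."""
    return travel_minutes * (1 - productive_fraction)

cycle = unproductive_minutes(20, productive_fraction=0.0)  # can't work while cycling
bus = unproductive_minutes(40, productive_fraction=0.8)    # laptop or reading on board

print(f"Cycling costs {cycle:.0f} unproductive minutes; the bus costs {bus:.0f}")
```

With these numbers the slower bus "costs" only 8 unproductive minutes against cycling's 20, before factoring in waiting times and the fitness and enjoyment benefits of cycling.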

    Summary

So those were a few ways to spend money to save time. Whether or not you should use these techniques depends on the marginal value of your time, but I'm saving that for another post. If you have any other ideas on how people can spend money to save time using goods and services, please do let us know below.

    The post Save time through smart buying appeared first on 80,000 Hours.

    ]]>