In 2008, Nebraska had a problem that it wanted to solve.
Like many other states before it, Nebraska wanted to reduce the tragic situations in which a distressed parent abandons a newborn, leaving the child dangerously alone, often on the steps of a building. So the state passed a law establishing firehouses as a safe harbor where individuals could drop off their children without risk of penalty. To keep babies from being harmed, the law promised that no questions would be asked about why a child was being left at a firehouse.
Then, people started abandoning their teenagers. Oops. The law, created with good intentions, led to unintended consequences. But were the consequences unforeseeable?
As the founder and director of All Tech Is Human, a non-profit focused on diversifying the tech pipeline to include those individuals best capable of foreseeing problems, I think about this incident often. And after spending a lot of my time considering ways to improve social media platforms, I am often struck by the myth that the consequences of social media (misinformation, hate speech, polarization, censorship by authoritarian regimes) were unforeseeable. These consequences may be unintended, but they were certainly not unforeseeable.
In order to reduce technology’s collateral damage, we need to change the types of individuals we recruit for the field. In my opinion, the tech industry has focused too much on hiring problem-solvers at the expense of problem-finders.
These are often two wildly different types of people, with different academic backgrounds and personality types. The industry's shortage of problem-finders has left us in a reactive, whack-a-mole state: tackling technology's major social impacts after they happen, rather than proactively anticipating and preparing for the potential trajectories of tech, both good and bad.