
His aha moment came when he realized Facebook had sent home most of its human overseers and put AI in charge of policing the social forum for inappropriate content. “The result has been systems that don’t work as well. They are taking down groups dedicated to sewing masks, just because they are falsely flagged,” said Calabrese, vice president of policy at the Center for Democracy and Technology. “That’s automation being used by one of the most influential companies in the world, and it’s still not up to snuff. That gives me a sense of how far we have to go.”

Facebook’s stuttering steps into automation reflect broader ethical challenges faced by public tech leaders as AI, biometrics and surveillance technologies increasingly enter the mainstream. CIOs are considering everything from the moral implications of cameras on light posts to the ethical fallout from allowing AI to set prisoners’ bail.

As New York has pursued its foray into issues of fairness and equality in an AI-driven world, various non-government entities have been pursuing a similar track. Drawing from academia, industry and the public sector, several groups are delving deep into these issues in an effort to guide effective policymaking.

At the Center for Democracy and Technology, Chris Calabrese and his colleagues work to understand the potential policy impacts of new technologies. The center has generally focused its efforts on understanding policy challenges surrounding the Internet. More recently, the group has taken a deep dive into the ethics of AI. “The impact of that across the board is going to be one of the guiding challenges of the 21st century,” Calabrese said.

The center’s thinkers are especially concerned about the algorithms that drive automated decision-making. “That means making sure facial recognition works as well on a white face as it does on an African-American face,” he said. They’re also exploring the implementation of these technologies. “Looking at facial recognition again, that can be used to track people in public, thanks to the power of AI. There are all sorts of issues that arise from that.”

More than just a theoretical exploration, the center is focused on the practical applications of new technology. There is, for example, an immediate concern around the growing use of commercial AI products to set bail levels and determine who should or should not be released on bond. Calabrese isn’t just worried about the error rate of these decisions, which the center’s findings put at about 30 percent. He’s also concerned about the nature of those errors. “What’s troubling from our perspective is that it tends to skew differently for different populations,” he said.

Berkman Klein Center for Internet and Society at Harvard University: $2 million to support a three-year institute, “Rebooting Social Media,” that will bring together diverse experts to develop tractable solutions to false information, radicalization and harassment online.

Joint Center for Political and Economic Studies: $1 million to support a Technology Policy Program, which conducts research and develops policy solutions to ensure that Black communities are not harmed by, and have an opportunity to benefit from, emerging technologies.
