People are coming together to call for safeguards against A.I. Last March, more than 1,000 technology leaders and researchers working in and around artificial intelligence signed an open letter warning that A.I. technologies present “profound risks to society and humanity.”
“Powerful A.I. systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.
In October, President Biden issued an executive order outlining the federal government’s first regulations on A.I. systems. They include requirements that the most advanced A.I. products be tested to assure that they can’t be used to produce biological or nuclear weapons, as well as recommendations that images, video, and audio developed by such systems be watermarked to make clear that they were created by A.I.
“Deepfakes use A.I.-generated audio and video to smear reputations, spread fake news, and commit fraud,” Biden said at the signing of the order at the White House. He described his concern that fraudsters could take just three seconds of a person’s recorded voice and manipulate it, turning an innocent comment into something more sinister that would quickly go viral.