Americans are concerned that artificial intelligence tools will worsen already high levels of hate, harassment and extremism, according to a study by the Anti-Defamation League (ADL) Center for Technology and Society.
According to the nonprofit's study, while some people are hopeful about certain potential uses of generative AI tools, there is overwhelming concern that these advances will worsen hate and harassment in society.
The study found that a majority of Americans are concerned that people will use the tools to carry out criminal activities (84 percent), spread false or misleading information (84 percent), radicalize people toward extremism (77 percent), and incite hate and harassment (75 percent).
It also highlights that 74 percent of respondents think the tools produce biased content, while 70 percent believe the tools worsen extremism, hate and/or antisemitism in the United States.
The study, which surveyed 1,007 people across the country between May 1 and 5 of this year, found that Americans overwhelmingly supported interventions to mitigate the risks posed by generative AI.
As for what companies creating generative AI tools should do, 89 percent of respondents believe companies should take steps to prevent their tools from creating hateful content and should not allow users to generate antisemitic or extremist images.
In terms of government or legal action, 87 percent of respondents supported Congressional efforts to require transparency and privacy, while 81 percent believe that creators of generative AI tools should be held accountable if their tools are used for illegal hate, harassment or extremism.
Finally, 85 percent supported civil society having the capacity to audit generative AI tools.
In light of this, the organization stated that, while the conversation about the potential benefits of generative AI can and should continue, all stakeholders - government, technology companies, social media users and civil society organizations - must put these risks at the forefront to ensure these new capabilities benefit society.