A recent experiment from a research team at BYU examined the ways in which artificial intelligence can predict how different demographics will vote in elections.
The study, conducted by a team of political science and computer science professors and graduate students at BYU, examined whether AI could stand in for human respondents in survey-style research.
To test this, the team measured how accurately a GPT-3 language model could mimic the relationships among the ideas, attitudes, and sociocultural contexts of various demographic groups.
In one experiment, the researchers created artificial personas, assigning attributes like race, age, ideology and religiosity. Then, using data from the American National Election Studies (ANES), the team tested to see whether their “personas” voted the same way people did in the 2012, 2016 and 2020 U.S. presidential elections.
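The persona approach described above can be sketched in a few lines: render each set of demographic attributes as a short first-person backstory, then pose the vote question as a text completion for the language model. This is an illustrative assumption of how such prompts might be built; the field names, wording, and `build_persona_prompt` helper are hypothetical, not the authors' actual prompt templates.

```python
# Hypothetical sketch of persona-conditioned prompting, in the spirit of
# the study described above. Attribute names and phrasing are illustrative
# assumptions, not the researchers' actual prompts.

def build_persona_prompt(persona: dict, election_year: int) -> str:
    """Render a demographic persona as a first-person backstory,
    ending with an open-ended vote-choice completion for the model."""
    backstory = (
        f"Racially, I am {persona['race']}. "
        f"I am {persona['age']} years old. "
        f"Ideologically, I am {persona['ideology']}. "
        f"Religion is {persona['religiosity']} in my life."
    )
    question = f"In the {election_year} U.S. presidential election, I voted for"
    return backstory + " " + question

# Example persona drawn from attributes like those listed in the article.
persona = {
    "race": "white",
    "age": 45,
    "ideology": "conservative",
    "religiosity": "very important",
}
print(build_persona_prompt(persona, 2016))
```

The completion the model produces for each persona would then be compared against how real ANES respondents with the same attributes reported voting.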
Ultimately, the researchers found a “high correspondence” between how the AI personas voted and how the American public voted in those elections.
David Wingate, a computer science professor and co-author on the study, said he was “absolutely surprised” to see how accurately the experiment matched up.
What was notable, Wingate said, was that the algorithm model hadn’t been trained to “do political science,” but on a “hundred billion words of text downloaded from the internet.”
“[T]he consistent information we got back was so connected to how people really voted,” he said.
Another experiment yielded highly similar patterns between human and AI responses to interview-style survey questions.
The team believes the results of their experiment hold prospects for researchers, marketers, and pollsters to craft better survey questions or “simulate populations that are difficult to reach.”
“We’re learning that AI can help us understand people better,” BYU political science professor Ethan Busby said. “It’s not replacing humans, but it is helping us more effectively study people. It’s about augmenting our ability rather than replacing it.
“It can help us be more efficient in our work with people by allowing us to pre-test our surveys and our messaging.”
The results of the study, “Out of One, Many: Using Language Models to Simulate Human Samples,” were published in the journal Political Analysis.