12 December 2018

Quartz: Amazon and Microsoft claim AI can read human emotions. Experts say the science is shaky

“The problem is now AI is being applied in a lot of social contexts. Anthropology, psychology, and philosophy are all incredibly relevant, but this is not the training of people who come from a technical [computer science] background,” says Kate Crawford, co-founder of AI Now, distinguished research professor at NYU, and principal researcher at Microsoft Research. “Essentially the narrowing of AI has produced a kind of guileless acceptance of particular strands of psychological literature that have been shown to be suspect.”

Crawford and the AI Now report refer to the system commonly used to codify facial expressions into seven core emotions, originating with psychologist Paul Ekman. His work studying facial expressions in communities separated from modern society suggested that facial expressions are universal. The idea of universal facial expressions is convenient for AI researchers, since much of the artificial intelligence in use today must categorize complex images or sounds. [...]
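The engineering pattern the article alludes to is straightforward: a model maps a face image to scores over a fixed list of emotion labels and reports the highest-scoring one. The sketch below is illustrative only, not any vendor's actual system; the EKMAN_LABELS list, the classify_expression function, and the linear weights stand-in are hypothetical, and a real product would use a trained deep network rather than random weights.

import numpy as np

# The seven categories such systems commonly target. (The exact list
# varies; contempt is sometimes dropped for a six-label set.)
EKMAN_LABELS = ["anger", "contempt", "disgust", "fear",
                "happiness", "sadness", "surprise"]

def softmax(scores: np.ndarray) -> np.ndarray:
    """Turn raw model scores into a probability distribution."""
    shifted = scores - scores.max()  # subtract max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

def classify_expression(face_image: np.ndarray, weights: np.ndarray) -> dict:
    """Score a flattened face image against each label and normalize.

    `weights` is a (7, n_pixels) matrix standing in for a trained model;
    this linear map is a placeholder for a deep network.
    """
    scores = weights @ face_image.ravel()
    probs = softmax(scores)
    return dict(zip(EKMAN_LABELS, probs.round(3)))

# Demo with random data: a 48x48 grayscale "face" and untrained weights.
rng = np.random.default_rng(0)
face = rng.random(48 * 48)
weights = rng.normal(scale=0.01, size=(7, 48 * 48))
print(classify_expression(face, weights))

The design choice worth noticing is the fixed output vocabulary: whatever the input, the model must answer with one of seven categories. That baked-in assumption of context-free, universal expressions is exactly what the critique below targets.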

But the way emotions are expressed on the face could be much more contextual than previously believed—calling the validity of Ekman’s categories into question. “A knitted brow may mean someone is angry, but in other contexts it means they are thinking, or squinting in bright light,” wrote psychologist Lisa Feldman Barrett (paywall) in the Financial Times in 2017. “A hypothetical emotion-reading robot would need tremendous knowledge and context to guess someone’s emotional experiences.”
