AI: Investors look between the lines of executives’ speeches – 11/14/2023 – Tech


In his last earnings call as chief executive of genetic sequencing company Illumina, Francis deSouza did his best to stay positive.

A controversial US$8 billion (R$39 billion) acquisition of cancer screening company Grail had sparked a campaign from activist investor Carl Icahn, battles with competition authorities on both sides of the Atlantic and criticism from Grail’s founding directors.

DeSouza told analysts that the drama was only affecting “a very small part of the company.”

But every time he was asked about Grail, there were changes in his speaking rate, pitch and volume, according to Speech Craft Analytics, which uses artificial intelligence to analyze audio recordings. There was also an increase in the use of filler words such as “um” and “ah,” and even an audible gulp.

The combination “reveals signs of anxiety and tension specifically when approaching this sensitive issue,” according to David Pope, chief data scientist at Speech Craft Analytics.

DeSouza resigned less than two months later.

The idea that audio recordings could provide clues about executives’ true emotions has caught the attention of some of the world’s biggest investors.

Many funds already use algorithms to comb through transcripts of earnings calls and company presentations to extract signals from executives’ word choices – a field known as “Natural Language Processing” or NLP. Now they are trying to find additional messages in the way these words are spoken.
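
To give a sense of what the transcript side of this involves, here is a minimal sketch in Python of lexicon-based sentiment scoring. The tiny word lists are illustrative stand-ins; in practice quant teams typically use finance-specific dictionaries such as Loughran-McDonald rather than these toy sets.

```python
# Minimal sketch of lexicon-based earnings-call sentiment scoring.
# The word lists below are toy stand-ins for illustration only;
# real systems use finance-specific lexicons (e.g., Loughran-McDonald).
import re

POSITIVE = {"growth", "strong", "improve", "record", "confident"}
NEGATIVE = {"decline", "weak", "risk", "loss", "uncertain"}

def sentiment_score(transcript: str) -> float:
    """Return (positive - negative) word count, normalized by length."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    if not tokens:
        return 0.0
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens)

print(sentiment_score("We delivered record growth despite some near-term risk."))
```

A score like this, computed call by call, is the kind of signal executives have learned to game by choosing upbeat words, which is precisely what pushes analysts toward audio.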

“The idea is that audio captures more than just what’s in the text,” said Mike Chen, head of alternative alpha research at asset manager Robeco. “Even if you have a sophisticated semantic machine, it only captures the semantics.”

Hesitation and filler words tend to be left out of transcriptions, and the AI can also pick up some “microtremors” imperceptible to the human ear.
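
The audio side can be sketched with off-the-shelf tools. Below is a hypothetical example using the open-source librosa library to pull simple prosodic features (pitch and loudness) from one answer; the file name is made up, and segmenting the call into answers, or counting filler words, would require a separate transcription step not shown here.

```python
# Sketch: extract simple prosodic features (pitch, loudness) from one
# pre-segmented executive answer, using librosa.
# "answer.wav" is a hypothetical clip; segmentation is assumed done upstream.
import numpy as np
import librosa

y, sr = librosa.load("answer.wav", sr=None)

# Fundamental frequency (pitch) track; unvoiced frames come back as NaN.
f0, voiced_flag, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
pitch_mean = np.nanmean(f0)  # average pitch over the answer
pitch_var = np.nanstd(f0)    # pitch instability, a crude tremor proxy

# Root-mean-square energy as a loudness (volume) proxy.
rms = librosa.feature.rms(y=y)[0]
loudness_mean = rms.mean()

print(f"pitch {pitch_mean:.1f} Hz (std {pitch_var:.1f}), loudness {loudness_mean:.4f}")
```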

Robeco, which manages more than US$80 billion in algorithmically driven funds, making it one of the world’s largest quantitative investors, began adding AI-captured audio signals to its strategies earlier this year. Chen said this has increased returns and that he expects more investors to follow suit.

The use of audio marks a new front in the cat-and-mouse game between fund managers and executives.

“We found tremendous value in transcripts,” said Yin Luo, head of quantitative research at Wolfe Research. “The problem this has created for us and many others is that the general sentiment is becoming increasingly positive [because] company management knows that their messages are being analyzed.”

Several research articles have found that presentations have become increasingly positive since the emergence of NLP, as companies adjust their language to manipulate the algorithms.

A paper co-written by Luo earlier this year found that combining traditional NLP with audio analytics was an effective way to differentiate between companies as their statements become increasingly “standardized.”

Although costs have decreased, the approach can still be relatively expensive. Robeco spent three years investing in new technology infrastructure before it even began incorporating audio analytics.

Chen spent years trying to use audio before joining Robeco, but found the technology wasn’t advanced enough. And while available insights have improved, there are still limitations.

To avoid drawing conclusions based on different personalities (some executives may be naturally more effusive than others), the most reliable analysis comes from comparing different speeches by the same individual over time. But this can make it difficult to evaluate a new leader, possibly just when such insight would be most useful.
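
In practice, this kind of baselining can be as simple as z-scoring a feature against the same speaker’s own history, a minimal sketch of which follows; the numbers are invented for illustration.

```python
# Sketch: flag whether today's pitch variability is unusual *for this
# speaker*, by z-scoring against their own prior calls rather than
# against other executives. All values are hypothetical.
from statistics import mean, stdev

past_calls = [18.2, 17.5, 19.0, 18.8, 17.9]  # pitch std-dev on prior calls
today = 24.6

z = (today - mean(past_calls)) / stdev(past_calls)
if abs(z) > 2:
    print(f"z = {z:.1f}: unusual for this speaker")  # a flag, not a verdict
```

A new CEO has no such history, which is exactly the limitation the next quote describes.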

“One limitation even in NLP is that a CEO change disrupts general sentiment [analysis],” said an executive at a company that provides NLP analytics. “This disruption effect should be even stronger with voice.”

Developers should also avoid adding their own biases into audio-based algorithms, where differences such as gender, class, or race may be more obvious than in text.

“We are very careful to ensure that conscious biases that we are aware of are not included, but there may still be subconscious biases,” Chen said. “Having a large, diverse research team at Robeco helps.”

Algorithms can provide misleading results if they try to analyze someone speaking a non-native language, and an interpretation that works in one language may not work in another.

Just as companies have tried to adapt to text analytics, Pope predicted that investor relations teams will begin training executives to monitor their tone of voice and other behaviors that transcripts don’t capture. Voice analysis can be fooled by trained actors who remain convincingly in character, but pulling that off may be easier said than done for executives.

“Very few of us are good at modulating our voice,” he said. “It’s much easier for us to choose our words carefully. We learn to do this from a young age to avoid getting into trouble.”
