Analyzing inflection points in the tone of earnings transcripts (widely regarded as a proxy signal for upcoming stock movement and an indicator of shifting corporate strategy) requires an AI model that can understand natural language.

AlphaSense sentiment analysis for transcripts uses natural language processing and machine learning to parse those documents, surfacing the most salient language and the most significant changes from previous quarters, down to the sentence level.

Scoring is an important part of our sentiment analysis model. This post will discuss in detail how sentiment scoring works in AlphaSense.

## What is sentiment scoring?

Sentiment scoring is enabled by algorithms that assess the tone of a transcript on a spectrum from positive to negative. It includes an overall score as well as the delta (Δ). The document-level score is derived from the ratio of positive to negative statements across the entire call, on a scale from -100 to 100, with zero being neutral.

The delta, Δ, shows the difference in overall sentiment since the previous quarter’s earnings transcript for that company.
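As a trivial illustration (the scores below are made up, not real platform output), the delta is just the difference between two consecutive quarterly scores:

```python
# Hypothetical normalized sentiment scores for two consecutive quarters,
# on the platform's -100..100 scale.
prev_quarter_score = 12.0   # e.g. Q1
curr_quarter_score = 27.5   # e.g. Q2

# A positive delta means tone improved quarter over quarter.
delta = curr_quarter_score - prev_quarter_score
print(f"Δ = {delta:+.1f}")  # → Δ = +15.5
```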

## How we calculate sentiment score

Calculating sentiment for a specific document first entails counting the raw inputs:

• Total negative statements in the document

• Total positive statements in the document

• Total statements in the document

The raw inputs drive the raw sentiment score for a document. Raw sentiment is the percent of the document that is positive minus the percent that is negative.
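As a sketch of that formula (the function and variable names are ours, not AlphaSense's), the raw score follows directly from the three counts:

```python
def raw_sentiment(positive: int, negative: int, total: int) -> float:
    """Percent of statements that are positive minus percent that are negative."""
    if total == 0:
        return 0.0
    return 100.0 * (positive - negative) / total

# e.g. 120 positive and 80 negative statements out of 400 total:
# 30% positive - 20% negative = a raw score of 10
print(raw_sentiment(120, 80, 400))  # → 10.0
```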

From there, the model calculates a normalized sentiment score. The normalized sentiment is the raw score normalized across all companies such that the mean sentiment is 0 and the scores are stretched between -99 and 99. This score matches what you see in the AlphaSense platform.

Each sentence in a transcript receives a prediction of positive, negative, or neutral, along with a confidence value (e.g., 99% confidence that a statement is positive). Our sentiment model, trained on a large body of financial documents, first forms a contextual understanding of each sentence based on the surrounding text and what is important in that sentence or paragraph. It then uses that understanding to make a sentence-level prediction and outputs a confidence value reflecting how certain the model is. The model also highlights the important phrases and sentences driving its predictions, generating a summary for our users.
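The model itself is proprietary, but the shape of its per-sentence output can be sketched as a simple record (the class and field names below are our illustration, not AlphaSense's API):

```python
from dataclasses import dataclass

@dataclass
class SentencePrediction:
    """Hypothetical per-sentence output: a label plus the model's confidence."""
    text: str
    label: str         # "positive", "negative", or "neutral"
    confidence: float  # how confident the model is in the label, 0.0-1.0

# Made-up example sentences from a call:
predictions = [
    SentencePrediction("Revenue grew 40% year over year.", "positive", 0.99),
    SentencePrediction("We expect continued headwinds in Europe.", "negative", 0.87),
    SentencePrediction("The Q&A portion of the call will now begin.", "neutral", 0.95),
]

# These per-sentence records are the raw inputs counted above.
positives = sum(p.label == "positive" for p in predictions)
print(positives)  # → 1
```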

For the document-level score, we take the count of positive statements minus the count of negative statements, divided by the total number of statements. We then normalize these scores by:

• the confidence of each prediction

• the mean and standard deviation of scores across all transcripts from the last 2 years, such that the average score is 0

Some examples of what this means:

• A score of 0 is average across all transcripts

• A score of 40 (or -40) is in the top 20% (or bottom 20%) of all transcripts

• A score of 99 (or -99) is in the top 2% (or bottom 2%) of all transcripts
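Putting the steps above together, the document-level calculation can be sketched as follows. This is a simplified illustration, not AlphaSense's actual implementation: the confidence weighting is one plausible reading of the first bullet, and the scale factor of 47.5 is our back-of-envelope choice (under a roughly normal distribution it places the top 20% near a score of 40 and the top 2% near 99, matching the examples above).

```python
import statistics

def doc_raw_score(labels: list[str], confidences: list[float]) -> float:
    """Confidence-weighted (positives - negatives) / total statements."""
    signed = {"positive": 1.0, "negative": -1.0, "neutral": 0.0}
    weighted = sum(signed[label] * conf for label, conf in zip(labels, confidences))
    return weighted / len(labels)

def normalize(score: float, population_scores: list[float], scale: float = 47.5) -> float:
    """Z-score against all transcripts in the population, stretched to -99..99.

    The mean and standard deviation stand in for "all transcripts in the
    last 2 years"; the scale factor is our assumption, as noted above.
    """
    mean = statistics.mean(population_scores)
    std = statistics.stdev(population_scores)
    z = (score - mean) / std
    return max(-99.0, min(99.0, z * scale))

# A document scoring exactly at the population mean normalizes to 0;
# an extreme outlier is clamped at ±99.
population = [0.0, 0.1, -0.1, 0.2, -0.2]
print(normalize(0.0, population))   # → 0.0
print(normalize(10.0, population))  # → 99.0
```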