Enhance your next speech

A coding exercise with an intriguing idea using AI, and a code snippet at the end

15 min read. 💡 20 min coding

Inspired by Cicero and Jeff Weiner

In a modern translation of De oratore, by Cicero, I read that “nothing is so unbelievable that oratory cannot make it acceptable.”

Powerful, isn’t it?! In principle yes, but Cicero also wrote that it takes constant exercise to become a good orator. Although these words are over 2000 years old, the world’s need for good orators has not changed. Nor has the value of Cicero’s advice.

What did change over time are the communication channels we use: there are more of them than ever, and using them effectively in a professional capacity is a real challenge for scientists like me! Public presentations, written articles, video conferencing and recorded videos streamed on YouTube. More channels, more tools and, in the end, less time to communicate.

If you share the same frustration and you are in a leadership position, I am sure that you will find the course on leadership by Jeff Weiner a masterpiece. If you haven’t taken that course yet, Google it now!

A short passage from the course: “For me, the definition of leadership was the ability to inspire others to achieve shared objectives. And had you asked me about leadership several years ago, this is the answer you would’ve gotten.” Jeff is an effective orator and I find him inspiring, but the question is: how can I get to that level?

In need of simple cognitive AI

It was the week of a big review with the client and I did not want to trade research time for speech-writing time. Because of Covid-19, the event was going to be held virtually and the plan was to pre-record a video to be streamed live. As usual: prepare slides, mic, camera and script.

It wasn’t my first pre-recorded video presentation, but my script was not going to be as *believable* as Cicero teaches, nor as vibrant as Jeff Weiner’s words. I searched online for a tool that could help me, but all I could find were spelling and grammar checkers. Occasionally some clever word counters. But all these writing assistants together were not enough: better grammar is not enough! I was after better words that would trigger sentiment in the audience.

While I could not find a good tool online, I discovered Natural Language Processing (NLP) artificial intelligence (AI) agents. These are basically algorithms that can extract analytics out of text. Yes, plain human-readable text. Powerful, isn’t it?! I decided to give it a go and build something with it.

Work it out with Amazon Comprehend

After a quick review, I picked Amazon Comprehend as my NLP tool because it offered the best compromise between cost and performance. While I do not intend to deliberately promote this service, I do want to point out that its ability to extract measurable data from text via a simple Python API served my purpose very well. If you want to hear more cheesy words on Amazon Comprehend, listen to the AWS CEO in this 3-minute video.

You will know by now that the algorithms behind Amazon Comprehend process text to extract data.

The data I am personally interested in are key phrases and sentiment. I will be using these to align my speech with Cicero’s guidelines on arrangement and, more importantly, to trigger human emotion so as to appeal to the audience.
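
The script in the next section focuses on sentiment, so here is a minimal sketch of the key-phrases call on its own (my illustration, assuming boto3 is installed and AWS credentials are configured; the sample sentence is just a placeholder):

import boto3

comprehend = boto3.client(service_name='comprehend')

# Placeholder sentence, purely for illustration
sample = "Leadership is the ability to inspire others to achieve shared objectives."

# Each detected key phrase comes back with its text and a confidence score
response = comprehend.detect_key_phrases(Text=sample, LanguageCode='en')
for phrase in response.get('KeyPhrases'):
    print(phrase['Text'], phrase['Score'])

Those confidence scores become useful again later, when building the word clouds.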

The code

I wanted to create a simple and quick script to do these three steps:

  1. Import a short text from a txt file.
  2. Pass the text to Amazon Comprehend for analysis via the Python API.
  3. Visualise data about the sentiment of the text.

I found this terribly easy to achieve.

import boto3

# Load file to analyse - - - - - - - - - -
with open("script.txt", "r") as file_opened:
    text = file_opened.read()

# Call Amazon Comprehend  - - - - - - - - - -
comprehend = boto3.client(service_name='comprehend')
compLang = comprehend.detect_dominant_language(Text=text)

# Detect sentiment for each identified language   - - - - - - - - - -
for lang in compLang.get('Languages'):
    sentiment = comprehend.detect_sentiment(LanguageCode=lang["LanguageCode"], Text=text)
    SentimentScores = sentiment.get('SentimentScore')
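
For reference, these are the shapes of the two responses the script relies on (the field names come from the Comprehend API; the values are elided here):

# detect_dominant_language returns a list of candidate languages, each with a confidence score:
# {'Languages': [{'LanguageCode': 'en', 'Score': ...}]}
#
# detect_sentiment returns an overall label plus one score per sentiment class;
# the 'SentimentScore' dictionary is what gets plotted in the next step:
# {'Sentiment': ...,
#  'SentimentScore': {'Positive': ..., 'Negative': ..., 'Neutral': ..., 'Mixed': ...}}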

I then used matplotlib’s pyplot to visualise the data - again, very simple:

import matplotlib.pyplot as plt

# One slice per sentiment class returned by Amazon Comprehend
labels = ['Positive', 'Negative', 'Neutral', 'Mixed']
sizes = [SentimentScores.get("Positive"), SentimentScores.get("Negative"), SentimentScores.get("Neutral"),
         SentimentScores.get("Mixed")]
colors = ['yellowgreen', 'gold', 'lightskyblue', 'lightcoral']
patches, texts = plt.pie(sizes, labels=labels, colors=colors, shadow=True, startangle=90)
plt.show()

Let’s now see the results coming out of this code.

The results

I compared the sentiment analysis of Jeff Weiner’s introduction to his course on leadership with my first speech draft. The analysis runs in milliseconds and the results are astounding: I thought I had prepared a decent draft, but the data proved me wrong. Jeff scores over 90% positive, leaving little space for neutral sentiment and nearly none for negative, whereas I leave nearly no space for positive sentiment, as over 90% of my talk was neutral. In Cicero’s terms neutral is even worse than negative, because it really means no emotion.


Sentiment analysis of Jeff Weiner’s speech (left) vs mine (right). Green is positive, light blue is neutral.

These results speak for themselves: **my speech needs rewriting**, and I started digging deeper into the analysis of words.

By adding a few lines of code I could visualise word clouds that, however simple, help explain the sentiment analysis. A sketch of those extra lines is shown below.
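
A minimal sketch of those extra lines, assuming the wordcloud package is installed and building on the detect_key_phrases call shown earlier (the key-phrase confidence scores drive the word sizes):

from collections import Counter

import matplotlib.pyplot as plt
from wordcloud import WordCloud

# Extract key phrases from the same text analysed above
key_phrases = comprehend.detect_key_phrases(Text=text, LanguageCode='en')

# Weight each phrase by its confidence score, summing repeated phrases
weights = Counter()
for phrase in key_phrases.get('KeyPhrases'):
    weights[phrase['Text'].lower()] += phrase['Score']

# Render the weighted phrases as a word cloud
cloud = WordCloud(width=800, height=400, background_color='white').generate_from_frequencies(weights)
plt.imshow(cloud, interpolation='bilinear')
plt.axis('off')
plt.show()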


Word analysis of Jeff Weiner’s speech (left) vs mine (right). No wonder my words are boring while Jeff speaks to people’s hearts.