> In 2010, the Academy sought to combat this verbosity with a new 45-second rule. In response, some winners sped through their acknowledgements, while others used humour or emotion to buy extra time before the music signalled them off. Occasionally, the orchestra was ignored entirely, with speeches like Adrien Brody’s 2003 win for The Pianist running well over the limit.
Brody is so clairvoyant that he can ignore limits that don't even exist yet.
Odd. The 45-second "rule" was not new to 2010. It apparently went back to the 1940s.
> The longest Oscar speech was given by Greer Garson at the 15th Academy Awards after she was named Best Actress for 1942 for Mrs. Miniver. Her speech ran for nearly six minutes.[11] It was shortly after this incident that the academy set forty-five seconds as the allotted time for an acceptance speech and began to cut the winners off after this time limit.
https://en.wikipedia.org/wiki/Oscar_speech
There's no citation for that claim though and I can't find anything definitive, but here's Denzel Washington referencing the limit in 1990:
https://aaspeechesdb.oscars.org/link/062-2/
Here's an article from 2001 referencing the 45-second limit again:
https://abcnews.go.com/Entertainment/story?id=108258
It seems like there are news articles every year claiming the limit is a new thing. Here's one from 2010:
https://www.theguardian.com/film/2010/feb/16/oscar-winners-s...
(Linked from the original article, but clearly the 45-second limit predated 2010.)
Regardless of when the limit first appeared, the orchestra did attempt to hurry Brody along after he was several minutes into his speech and well over 45 seconds:
https://www.americanrhetoric.com/speeches/adrienbrodyoscarsp...
https://www.youtube.com/watch?v=8HgWANva9Xk
This is really interesting, thanks for looking into it. I kind of just assumed that the author used AI and it mixed some stuff up.
Cool. As mentioned at the end, the Oscars have a site:
https://aaspeechesdb.oscars.org/
"This database contains more than 1,500 transcripts of onstage acceptance speeches given by Academy Award winners and acceptors."
This was an enjoyable article but the conclusion where he finds the most thanked woman in oscar speeches and gets a response from her puts it over the top. Amazing.
Very cool! I think mentions per word could be a good metric for some of these, otherwise a main takeaway is just "everyone crams in more stuff now."
> I have added more details in the notes at the end of the article to explain how I found God, but for now, just have faith that I did.
> God cannot give them their next job - Steven Spielberg can
What is the go-to toolbox set these days to make this kind of analysis?
How exactly was the data evaluated? I would assume that manually checking every speech would be too labor-intensive?
Not really? It's a lot of work, a multi-week project, but reading a couple-hundred-word speech can be done in 5 minutes; with a checklist in hand, probably 10 minutes. Times 12 categories and 80 years of history, that's a lot of time: 160 hours, a working month. A lot of effort, but humanly doable.
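A quick back-of-envelope check of that estimate (the per-speech time, category count, and year count are the assumptions stated above, not figures from the article):

```python
# Sanity-check the manual-reading estimate.
minutes_per_speech = 10   # read the speech and fill in a checklist
categories = 12
years = 80

total_minutes = minutes_per_speech * categories * years
total_hours = total_minutes / 60
print(total_hours)  # -> 160.0, i.e. roughly one working month at 40 h/week
```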
That's true, but assumes you have the checklist of what data to analyze in hand when you start out. If you only decide after the fact which familial relationships have interesting trends, you'd have to start over again. It seems more reasonable to start by transcribing everything to text, annotating that text, and then running a lot of scripting to automatically query that data.
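A minimal sketch of that "annotate once, query many times" idea — the speech snippets and keyword list here are invented for illustration, not taken from the article's data:

```python
import re

# Transcribed speeches, keyed by (year, category). Text is made up.
speeches = {
    ("1943", "Best Actress"): "I want to thank my mother, my agent, and God...",
    ("2003", "Best Actor"):   "Thank you to my mother and father, and the Academy...",
}

# One annotation pass: tag each speech with the keywords it mentions.
KEYWORDS = {"mother", "father", "agent", "god", "academy"}

annotations = {
    key: {word for word in re.findall(r"[a-z]+", text.lower()) if word in KEYWORDS}
    for key, text in speeches.items()
}

# Any later question becomes a cheap query over the annotations,
# with no need to reread a single speech.
mother_mentions = sum(1 for tags in annotations.values() if "mother" in tags)
print(mother_mentions)  # -> 2
```

The point is that a new question (say, counting "father") only touches the annotation table, which is why deciding the checklist up front matters so much less with this approach.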
They probably just used the speech database that the Academy hosts? https://aaspeechesdb.oscars.org/
Ok, obviously it's _doable_, but is it worth it? Using LLMs for this purpose would have been significantly cheaper, easier and with the right configuration just as reliable. Once the setup works, you could extend the analysis to all kinds of other interesting branches without having to look at a single speech by hand.
I would even go so far as to say that _not_ using LLMs for this task would be fairly odd, unless I'm missing something or the author really enjoys a month of manually classifying documents to write an interesting and well-written but not exceedingly outstanding article.
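One way the LLM setup suggested above might look — purely a sketch, with an invented prompt and category list; the actual model call is deliberately omitted since any chat-completion client would slot in:

```python
import json

# Hypothetical relationship categories to classify each speech against.
CATEGORIES = ["mother", "father", "spouse", "agent", "god", "academy"]

def build_prompt(speech: str) -> str:
    """Build one deterministic classification prompt per speech."""
    return (
        "Which of these were thanked in the speech below? "
        f"Answer as a JSON list drawn from {json.dumps(CATEGORIES)}.\n\n"
        f"Speech: {speech}"
    )

# The model call itself would go here, e.g. sending build_prompt(speech)
# as the user message of any chat-completion API and parsing the JSON reply.
prompt = build_prompt("I want to thank my mother and my agent.")
print(prompt.startswith("Which of these were thanked"))  # -> True
```

The "right configuration" caveat above is doing real work: you would still want to spot-check a sample of the model's answers against the transcripts before trusting the aggregate numbers.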
Some people like doing stuff.
Of course. It's just my opinion that this task would be perceived by most to be fairly repetitive and unfulfilling, but if the author thinks otherwise, great for him.
This is a very interesting project. I like it when technology is used to analyze cultural or political events.
The 50s must have been odd.
Awesome, what dedication!
Great dive into the nature of the speeches and some interesting tidbits.
Counting the instances of the word “amazing” would be a fun follow up. That was our drinking game cue word. We inevitably stopped at some point because…poisoning became likely.