TWIL 2024-02-23

[Graphic: GPT-4 Vision and ElevenLabs combined to narrate a meeting in the style of David Attenborough.]

Welcome to TWIL, our weekly knowledge-sharing voyage where we spotlight the latest lessons our team has gathered in the ever-evolving landscape of software development. This installment of TWIL brings you an exciting intersection of AI storytelling: GPT-4 Vision paired with narration from ElevenLabs. With just a little effort, the two bring a meeting snapshot to life, a testament to the playful might of today's technology.

GPT Vision and Narration with ElevenLabs

You might’ve seen a viral video earlier in the week in which Charlie Holtz, a developer at Replicate, wrote a script that takes a picture with his webcam every five seconds, asks GPT-4 Vision to describe it in the style of David Attenborough, and then uses a voice model from ElevenLabs to read the description aloud.

Sounds like a lot, so here’s the clip:

While amusing and terrifying, Charlie’s demonstration shows how these tools can be equally powerful and surprisingly simple. There are, of course, serious legal and ethical concerns about modeling someone’s voice without their permission (something ElevenLabs’ TOS outright prohibits), and one has to worry about how these capabilities will be used in the future.

But not before we have our own fun with them… I wanted to hear how GPT-4 Vision would invent an Attenborough-style narration of our team’s standup. So I provided this single screenshot and the results, well, they literally speak for themselves 😂.
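For the curious, the capture → describe → narrate loop Charlie demonstrated can be sketched in a page of Python. This is a minimal sketch, not his actual script: it assumes `OPENAI_API_KEY` and `ELEVEN_API_KEY` environment variables, a placeholder ElevenLabs voice ID, and the `opencv-python`, `openai`, and `requests` packages; the prompt text and helper names are our own.

```python
# Sketch: webcam frame -> GPT-4 Vision description -> ElevenLabs narration.
import base64
import os
import time

NARRATOR_PROMPT = (
    "You are a nature documentary narrator. Describe what you see in this "
    "image in the style of a wildlife documentary, in two or three sentences."
)


def build_vision_messages(b64_jpeg: str, prompt: str = NARRATOR_PROMPT) -> list:
    """Chat-completions payload pairing the prompt with a base64 JPEG frame."""
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64_jpeg}"}},
        ],
    }]


def describe_frame(b64_jpeg: str) -> str:
    """Ask GPT-4 Vision for an Attenborough-style description of one frame."""
    from openai import OpenAI  # imported here so the sketch runs without it
    client = OpenAI()          # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4-vision-preview",
        messages=build_vision_messages(b64_jpeg),
        max_tokens=200,
    )
    return resp.choices[0].message.content


def narrate(text: str, voice_id: str = "YOUR_VOICE_ID") -> bytes:
    """Return MP3 audio for `text` from ElevenLabs' text-to-speech endpoint."""
    import requests
    r = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        headers={"xi-api-key": os.environ["ELEVEN_API_KEY"]},
        json={"text": text, "model_id": "eleven_monolingual_v1"},
        timeout=60,
    )
    r.raise_for_status()
    return r.content


def main() -> None:
    import cv2  # pip install opencv-python
    cam = cv2.VideoCapture(0)
    try:
        while True:
            ok, frame = cam.read()
            if not ok:
                break
            _, jpeg = cv2.imencode(".jpg", frame)
            b64 = base64.b64encode(jpeg.tobytes()).decode("ascii")
            text = describe_frame(b64)
            with open("narration.mp3", "wb") as f:
                f.write(narrate(text))  # play with any audio player
            time.sleep(5)               # a new frame every five seconds
    finally:
        cam.release()


if __name__ == "__main__":
    main()
```

Keeping the third-party imports inside the functions means each piece (frame capture, vision call, speech call) can be swapped or stubbed independently, which is handy for experimenting.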

[Screenshot: Cuttlesoft's engineering standup meeting, used to demonstrate the GPT-4 Vision and ElevenLabs documentary-style narration.]

  • OpenAI
  • ElevenLabs
Frank Valcarcel

Cofounder, Director of Operations
