As humorous as it may sound to hear “David Attenborough” narrate everything from the mating habits of work colleagues to the peculiar behavior of household pets, the man himself is far from entertained. Sir David Attenborough, known for his soothing and captivating voice, has expressed his deep displeasure with AI-generated clones of his iconic tones.
“I am profoundly disturbed to find these days my identity is being stolen by others and greatly object to them using it to say whatever they wish,” Attenborough said, making it clear that these digital imitations cross a line.
The trigger for his frustration came after the BBC played him clips of his voice being mimicked by artificial intelligence. These AI versions, often designed to replicate the cadence, tone, and rhythm of Attenborough’s speech, have become increasingly common. While many have found the novelty amusing—whether it’s “David Attenborough” narrating an office meeting or riffing on pop culture—the legendary figure behind the voice is not laughing.
While the rise of AI voice cloning offers incredible possibilities, from improving accessibility for those with speech impairments to creating lifelike digital assistants, it also presents ethical dilemmas that need careful consideration. For Sir David Attenborough, and others like him, the question is no longer about the novelty of hearing their voices in new contexts—it’s about how these digital imitations are used and the implications for their identities.
So, next time you hear “David Attenborough” comment on the unusual habits of your coworkers, just remember: it might not be the man himself. And judging by his recent remarks, that’s something Sir David Attenborough wishes we all took more seriously.
How will the internet evolve in the coming decades?
Fiction writers have explored some possibilities.
In his 2019 novel “Fall,” science fiction author Neal Stephenson imagined a near future in which the internet still exists. But it has become so polluted with misinformation, disinformation and advertising that it is largely unusable.
Characters in Stephenson’s novel deal with this problem by subscribing to “edit streams” – human-selected news and information that can be considered trustworthy.
The drawback is that only the wealthy can afford such bespoke services, leaving most of humanity to consume low-quality, noncurated online content.
To some extent, this has already happened: Many news organizations, such as The New York Times and The Wall Street Journal, have placed their curated content behind paywalls. Meanwhile, misinformation festers on social media platforms like X and TikTok.
On the surface, chatbots seem to provide a solution to the misinformation epidemic. By dispensing factual content, chatbots could supply alternative sources of high-quality information that aren’t cordoned off by paywalls.
Ironically, however, the output of these chatbots may represent the greatest danger to the future of the web – one that was hinted at decades earlier by Argentine writer Jorge Luis Borges.
The rise of the chatbots
Today, a significant fraction of the internet still consists of factual and ostensibly truthful content, such as articles and books that have been peer-reviewed, fact-checked or vetted in some way.
The developers of large language models, or LLMs – the engines that power bots like ChatGPT, Copilot and Gemini – have taken advantage of this resource.
To perform their magic, however, these models must ingest immense quantities of high-quality text for training purposes. A vast amount of verbiage has already been scraped from online sources and fed to the fledgling LLMs.
The problem is that the web, enormous as it is, is a finite resource. High-quality text that hasn’t already been strip-mined is becoming scarce, leading to what The New York Times called an “emerging crisis in content.”
This has forced companies like OpenAI to enter into agreements with publishers to obtain even more raw material for their ravenous bots. But according to one prediction, a shortage of additional high-quality training data may strike as early as 2026.
As the output of chatbots ends up online, these second-generation texts – complete with made-up information called “hallucinations,” as well as outright errors, such as suggestions to put glue on your pizza – will further pollute the web.
And if a chatbot hangs out with the wrong sort of people online, it can pick up their repellent views. Microsoft discovered this the hard way in 2016, when it had to pull the plug on Tay, a bot that started repeating racist and sexist content.
Over time, all of these issues could make online content even less trustworthy and less useful than it is today. In addition, LLMs that are fed a diet of low-calorie content may produce even more problematic output that also ends up on the web.
An infinite – and useless – library
It’s not hard to imagine a feedback loop that results in a continuous process of degradation as the bots feed on their own imperfect output.
A July 2024 paper published in Nature explored the consequences of training AI models on recursively generated data. It showed that “irreversible defects” can lead to “model collapse” for systems trained in this way – much like a copy of an image, and a copy of that copy, and a copy of that copy, will progressively lose fidelity to the original.
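To make that feedback loop concrete, here is a toy simulation of the degradation (an illustrative sketch only, not the experimental setup from the Nature paper): a simple “model” repeatedly fits a Gaussian to data sampled from its own previous fit, so each generation trains only on the output of the one before it.

```python
# Toy "model collapse" simulation: each generation fits a Gaussian to
# samples produced by the previous generation's model, then generates
# the training data for the next generation. Hypothetical illustration,
# not the procedure used in the Nature paper.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0 trains on "real" data: mean 0, standard deviation 1.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(20):
    # "Training": estimate the distribution's parameters from the data.
    mu, sigma = data.mean(), data.std()
    print(f"generation {generation:2d}: mean = {mu:+.3f}, std = {sigma:.3f}")

    # The next generation never sees real data; it trains only on
    # synthetic samples drawn from the current model.
    data = rng.normal(loc=mu, scale=sigma, size=50)
```

Because every generation inherits the sampling error of the one before it, the estimated spread tends to shrink and the mean wanders away from zero; the information in the original data is progressively lost, a crude analogue of the collapse the paper describes.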
How bad might this get?
Consider Borges’ 1941 short story “The Library of Babel.” Fifty years before computer scientist Tim Berners-Lee created the architecture for the web, Borges had already imagined an analog equivalent.
In his 3,000-word story, the writer imagines a world consisting of an enormous and possibly infinite number of hexagonal rooms. The bookshelves in each room hold uniform volumes that must, its inhabitants intuit, contain every possible permutation of letters in their alphabet.
Initially, this realization sparks joy: By definition, there must exist books that detail the future of humanity and the meaning of life.
The inhabitants search for such books, only to discover that the vast majority contain nothing but meaningless combinations of letters. The truth is out there – but so is every conceivable falsehood. And all of it is embedded in an inconceivably vast amount of gibberish.
Even after centuries of searching, only a few meaningful fragments are found. And even then, there is no way to determine whether these coherent texts are truths or lies. Hope turns into despair.
Will the web become so polluted that only the wealthy can afford accurate and reliable information? Or will an infinite number of chatbots produce so much tainted verbiage that finding accurate information online becomes like searching for a needle in a haystack?
The internet is often described as one of humanity’s great achievements. But like any other resource, it’s important to give serious thought to how it is maintained and managed – lest we end up confronting the dystopian vision imagined by Borges.
The first trailer for Universal Pictures’ How to Train Your Dragon live-action adaptation is here!
Starring Mason Thames as young Viking Hiccup, the teaser gives us a glimpse of the journey ahead—focusing on the moment Hiccup first encounters Toothless. From battling humanity’s prejudice against dragons to dealing with loss and love, this film promises to capture the magic of the beloved franchise.
Mark your calendars! The movie is set to release on June 13th, 2025.
For today’s edition of “Deal of the Day,” here are some of the best deals we stumbled on while browsing the web this morning! Please note that Geeks are Sexy might get a small commission from qualifying purchases done through our posts. As an Amazon Associate, I earn from qualifying purchases.
– 1minAI: Lifetime Subscription – Why choose between ChatGPT, Midjourney, GoogleAI, and MetaAI when you could get them all in one tool? – $39.99 (reg. $234.00)
We’ve all enjoyed the sweet, vibrant taste of grape-flavored drinks, candies, and gum, but have you ever stopped to wonder: why doesn’t it actually taste like grapes? Despite being synonymous with the purple fruit, grape flavor in many of our favorite treats doesn’t resemble the real thing at all! That sweet, sugary taste we all associate with “grape” comes from methyl anthranilate—a synthetic compound that has zero interest in actually tasting like grapes. It was invented in the early 1900s, and instead of being inspired by fruit, it was just created in a lab. So when you’re sipping on that grape soda or munching on grape candy, just know you’re not actually tasting fruit—you’re tasting… science!
Want to learn more? Uncover the bizarre history behind this iconic flavor and how it came to dominate the world of sweets and drinks in the latest video from the Weird History Food channel!
Thirty years after Star Trek: Generations saw Captain Kirk’s heroic sacrifice, fans are finally getting the emotional farewell between Kirk and Spock they longed for. The short film 765874 – Unification, created by the Roddenberry Archive and OTOY, reimagines Kirk leaving the Nexus to reunite with his lifelong friend in Spock’s final moments. Directed by Carlos Baena, the poignant story features William Shatner, with CGI-enhanced portrayals of Leonard Nimoy’s Spock and other beloved characters.
Lawrence Selleck embodies Spock, with visual effects honoring Nimoy’s likeness, while Gary Lockwood reprises his role as Gary Mitchell, the godlike figure from Where No Man Has Gone Before. Mitchell uses his powers to help Kirk transcend time and space for this reunion. Easter eggs abound, with appearances by Robin Curtis as Lt. Saavik, Sam Witwer as a younger Kirk, and even a nod to Yor, the multiverse-traveling character from Star Trek: Discovery.
The film connects timelines, referencing Spock’s death in the Kelvin Universe from Star Trek Beyond while anchoring this moment in the Prime Universe. With involvement from Michael Giacchino and Picard production designer David Blass, the short is a masterful tribute to one of science fiction’s most iconic friendships.
Filled with nostalgia and reverence, Unification offers longtime fans the closure they’ve yearned for—a final, heartfelt goodbye between Kirk and Spock that bridges galaxies and timelines, celebrating their bond in a way only Star Trek can. Be sure to watch this one, and don’t forget to have a box of tissues on hand!
The long-standing joke “Where is Half-Life 3?” reared its cheeky head once again this week as Valve marked the 20th anniversary of Half-Life 2. Spoiler alert: no, Half-Life 3 isn’t happening (at least not yet), and no, Half-Life: Alyx doesn’t count. But while the dream of a trilogy capstone remains in limbo, fans of the franchise were treated to something almost as exciting: the announcement of Half-Life 2 RTX.
This remaster breathes new life into the 2004 masterpiece with cutting-edge visuals, courtesy of Nvidia. Following in the footsteps of Portal RTX, this project leverages Nvidia’s RTX Remix platform, transforming the original’s graphics into a jaw-dropping showcase of modern technology. Think real-time ray tracing, advanced lighting, and ultra-detailed textures that make City 17 feel more dystopian than ever.
The remaster will be completely free for those who already own the original. That’s right—if you’re among the PC gamers who somehow don’t have Half-Life 2 in your library, this is your chance to correct that glaring oversight. I know the remastered version was announced a few days ago, but the reason we’re posting this now is that today is the LAST DAY to get Half-Life 2 for free on Steam. If you’re planning on getting Half-Life 2 RTX, you need to already “own” the regular version.
One big question remains unanswered: will the remaster include Half-Life 2: Episode One and Episode Two? Valve hasn’t dropped any hints, but it’s hard to imagine revisiting the series without wrapping up those cliffhanger episodes.
For now, Half-Life 2 RTX will be the perfect excuse to dive back into one of gaming’s most beloved worlds once it’s released. As for Half-Life 3, well… maybe it’s stuck in another test chamber.