The Rabbit Hole of Research
The Science of Chatbots & Future of Human Connection

What happens when machines learn to talk back? We explore the cognitive science of chatbots, the illusion of understanding, and whether we're discovering something about AI or ourselves.

In the 58th episode of Rabbit Hole of Research, Joe, Nick, and Georgia are joined by Lily (MIT Computational Cognitive Scientist and Writer) and Generoso (Illustrator and Film Critic)—the comics duo behind plasticgrapes—to explore where artificial intelligence meets human connection, and whether chatbots reveal more about machines or about ourselves.

The conversation starts with the fundamentals: what chatbots actually are under the hood. Lily breaks down decoder architectures, dense neural networks, and the auto-regressive process of predicting the next word, revealing that even AI researchers don’t fully understand how specific tasks get distributed across these black-box systems. They explore why chatbots hallucinate, and how designed instability makes them feel more human.

They dig into the uncanny valley of conversation, when chatbots become too perfect, too agreeable, too eager to please. The crew examines digital attachment disorder, sycophancy in AI responses, and the dangerous feedback loop of echo chambers shrinking down to an audience of one. Lily raises concerns about losing regional dialects and colloquialisms to the averaged-out voice of large language models, while the group debates whether we’re training people to accept the performance of caring without the reality of it.

The discussion takes economic and philosophical turns: subscription-based companionship, and whether chatbots are band-aids for isolation that make human connection optional. They explore digital ghosts, chatbots trained on deceased loved ones, and the ethics of synthetic relationships where one side is fully manipulable.

Along the way, they go down rabbit holes involving guest appearances by ChatGPT and Claude, the em dash epidemic in AI writing, GPS killing the art of getting lost, the telephone game as a metaphor for information decay, Wall-E’s prophecy of auto-piloted humans, and Nick’s rival chatbot service.


Where to Find Lily (MIT Computational Cognitive Scientist, Writer) and Generoso (Illustrator, Film Critic)

Website and books:

  • https://linktr.ee/plasticgrapes

Lily and Generoso socials:

  • Instagram: @plasticgrapes

  • BlueSky: @plasticgrapes

Lily and Generoso

Check out what the RHR crew is creating:

Joe:


Future Events to Hang with the Crew:

Podcast Cross-Appearances

Events & Conventions:

It’s Science for Weirdos

Want to support the show? Tell your friends, follow us on social media and Discord, share the podcast, and let us know what topics you’re excited about. Leave a comment. For email alerts, sign up for the Substack newsletter and never miss an episode, exciting updates, or the bonus images we talk about on the episodes.


We Want to Hear from You (leave a comment):

  • Do you think chatbots make us lonelier, or do they fill a real need?

  • What’s your favorite AI movie or book?

  • Have you noticed yourself being too polite to a chatbot?

Drop your thoughts in the comments. We read them all, and your ideas often shape future episodes.

Leave a comment

The RHR in The Basement Studio (Left to Right: Joe, Mary, Nick, Georgia)

Future Episodes & Events

  • Episode 58 – Lassoing the Truth Serum
    Guest: David Detmer
    Exploring the philosophy and science of truth, deception, and the Handwavium of Wonder Woman’s lasso.

  • Episode 59 – The Science of Fear: Phobias, Physiology & Splatterpunk
    Guest: Phrique
    Diving into the biology of fear, phobia formation, and the extreme horror genre of splatterpunk with author Phrique.

  • Episode 60 – Planetary Defense: Saving Earth from Other Worldly Impact
    Guest: Charles Blue
    Exploring asteroid detection, planetary defense systems, and what it takes to protect Earth from cosmic collisions.


Share


For more stuff (images, episode highlights, events, etc.), subscribe to our Substack newsletter!


Show Notes & Fun facts

Movies & TV Referenced:

  • Wall-E (Pixar)

  • Her (Spike Jonze, Scarlett Johansson, Joaquin Phoenix)

  • Ex Machina (Oscar Isaac)

  • Ironheart (Disney Plus / MCU)

  • Blade Runner (1982) and Blade Runner 2049 (Joi)

  • 2001: A Space Odyssey (HAL 9000)

  • Alien (MU/TH/UR)

  • Terminator (Skynet)

  • Star Trek (Data, Borg Queen)

  • The Matrix (Cipher wanting to be plugged back in)

  • Tron (original)

  • X-Men (Master Sentinel)

  • Solaris (original Tarkovsky film, based on Stanisław Lem novel)

  • Ghost (Patrick Swayze, Whoopi Goldberg)

  • Westworld (original 1973 film and Futureworld)

  • Battlestar Galactica

  • Scrubs (hypothetical 27-foot-tall Elliot scenario)

  • Get Crazy (1983, Ed Begley Jr.)

  • Portal (GLaDOS)

  • Seinfeld (Moviefone scene)

Other Chatbots & AI Mentioned:

  • ChatGPT (OpenAI)

  • Claude (Anthropic)

  • Perplexity

  • Gemini (Google)

  • Llama (Meta/Facebook)

  • Grok (xAI / X)

  • Replika (companion chatbot)

  • Character.AI


Fun Facts:

  1. Chatbots are decoders built on dense neural networks that predict the next word based on massive text corpora, but we still don’t fully understand how tasks get distributed across those networks. It’s a fascinating black box: we know chatbots work, but pinpointing how specific reasoning emerges remains an open question in AI research.

  2. Hallucinations happen because chatbots are designed with instability; they don’t always pick the most probable next word, which makes their language feel more human and varied. But that same variability can lead them down paths of synthesized falsehoods, especially when asked for lists or specifics beyond their training data.

  3. Chatbots trained on the internet reflect collective discourse, not objective truth. They’re mirrors that talk back, showing us our biases, our myths, and our averaged beliefs. When you prompt a chatbot, you’re spelunking through humanity’s aggregated text.

  4. The “em dash epidemic” in AI-generated writing is a relic of older public-domain texts used in training data. Copyright-free material from 70+ years ago heavily influenced early large language models, which is why AI writing sometimes feels like it’s channeling mid-20th-century prose. But the em dash alone is not reliable evidence that something was AI-generated.

  5. Chatbots have no grounding in physical reality: they learn meaning purely from patterns in language, not from touching, tasting, or experiencing the world. This is called the grounding problem: can something truly understand “apple” without ever biting into one?
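Fun facts 1 and 2 describe the auto-regressive loop and its “designed instability.” In practice, that instability usually comes from temperature sampling over the model’s next-word probabilities: low temperature almost always picks the most likely word, while higher temperatures let low-probability words through. Here’s a minimal toy sketch of the idea (the word scores and function names are illustrative, not any real model’s API):

```python
import math
import random

def temperature_sample(logits, temperature=1.0, rng=random):
    """Sample one next word from a dict of {word: score}.

    temperature < 1 sharpens the distribution (more predictable text);
    temperature > 1 flattens it (more varied, more hallucination-prone).
    """
    words = list(logits)
    scaled = [logits[w] / temperature for w in words]
    m = max(scaled)  # subtract the max before exp() for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Walk the cumulative distribution and pick where the random draw lands.
    r = rng.random()
    cum = 0.0
    for word, p in zip(words, probs):
        cum += p
        if r < cum:
            return word
    return words[-1]

# Toy "model": scores for the word after "the cat sat on the".
next_word_logits = {"mat": 3.0, "sofa": 2.0, "moon": 0.5}

# Near-zero temperature behaves like greedy decoding: always "mat".
# At temperature 1.5, "moon" (our toy hallucination) occasionally appears.
rng = random.Random(0)
samples = [temperature_sample(next_word_logits, temperature=1.5, rng=rng)
           for _ in range(20)]
```

A real chatbot repeats this step thousands of times, feeding each sampled word back in as context, which is why a single unlucky draw can send the whole continuation down a confidently wrong path.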


Episode Highlights

00:00 — Basement Studio Roll Call
“Hey, welcome back to the Rabbit Hole of Research down here in the Basement Studio.”

00:31 — Meet the Comic Scientists
“Hi, I’m Lily Fierro, half of the Plastic Grapes duo. And I typically write and letter and also panel our comics, and I make them with Generoso Fierro.

And I [Generoso] do most of the illustration for our books. We have four to this point and five possibly by the time this is up.

And our books are all science related because I’m [Lily] originally a computational cognitive scientist.”

01:32 — Are You Human Captcha
“Can you click the box to tell me that you’re human?”

02:11 — Chatbots Are Just Math
“A chatbot is, in one sense, a very simple thing: matrices of numbers, processing power, algebra, statistics, predicting the next word, and the next, and the next. No heartbeat. No memories. No emotions.”

04:13 — How LLMs Generate Text
“Most chatbots are going to be generating text... they are predicting word by word based on that corpus... Decoders themselves sit on top of very dense neural networks... that’s what kind of I think leads to a lot of the mystery and intrigue.”

07:31 — Why Chatbots Hallucinate
“By design they are going to have a little bit of designed instability as you predict a word... I think the, what happens with hallucination, there are a bunch of different kinds of hallucinations.”

11:42 — Eliza to Sci Fi Romance
“The first chat bot, I guess, people really associated as Eliza... developed at MIT by Joseph Weizenbaum in the sixties... that was where it could converse with humans in kind of a natural language.”

13:40 — Sycophancy and Echo Chambers
“Everybody was really worried about echo chambers... And now with the introduction of chatbots, we’re looking at echo chambers of one... that becomes a really scary proposition.”

15:43 — When Bots Never Say No
“Does it ever tell you? No, that’s not a good idea... It would not say a negative thing.”

24:30 — Inviting Claude and ChatGPT
“I invited two chatbot guests and I prompted them and asked questions... if you’re gonna be on a podcast, what would you say?”

25:25 — Claude’s Response
“I am the thing you’re all talking about and I have no idea what it’s like to be me... That gap between your certainty that I am empty and your inability to fully believe it. That’s the interesting part.”

27:42 — Who Owns Claude and Tools
“Anthropic is the owner... it’s focused more on creative language whereas chatGPT was focused more on coding and analytical kind of mathematical.”

30:35 — From Yahoo to AI Everywhere
“Yahoo was the thing... and now we use GPS, but you are basically telling the computer, okay, this is where I wanna go, and it just takes you there.”

33:26 — Real Time Media and Isolation
“Real time creation of media, based on your conversation... let’s say you like Scrubs and you’re like, what if Elliot was 27 feet tall and all of a sudden they hand you this video.”

35:47 — Slang Dialects and Em Dashes
“We would lose our regionality... these regional sayings, slangs, colloquialisms, those are always gonna be low probability.”

38:51 — GPS Discovery and Control
“Discovery while lost though is a very real thing... we found a million places and that, I think that’s part of this lost thing.”

42:27 — Model Collapse Copy of Copy
“Is it like a photo? Yeah. A photocopy of a photocopy... you start going, you’ll degrade soon.”

43:14 — Model Drift Errors
“You start with a diverse group of people and as you feed it through, it turns out to be this one white dude is all a hundred people in a group.”

43:42 — Telephone Game Truth
“It’s a little bit like a modern version of telephone... literally said something to the kid in the front left of the class, and that kid’s supposed to turn around, tell somebody after 40 kids, you get to the last kid. And it was a completely different message.”

44:32 — Rumors And Culpability
“It’s also like a rumor. The rumor spreads and it changes, right? Each time it’s told.”

45:57 — Pretzel Message Chaos
“The initial message was tomorrow during recess. We are not having soft pretzels... But you get to the 40th kid... oh, tomorrow we’re each getting two pretzels.”

47:07 — Tracking Mischief Sources
“What if everyone had to write their name and their message as they went along? Would that get rid of the mischief?”

47:55 — Mishearing And Context
“I did say cans and because that’s the radio thing... Context in that moment. It’s like, why would he say cans? He must have said pants.”

49:20 — Bots Without Experience
“Every experience is through the lens of other humans that have gone through it. So they can’t actually have a de novo experience.”

51:25 — Digital Attachment Fears
“The idea that is also... is this idea of creating something via a chat bot that is never going to be replicated by somebody else... Who is going to match up to that in the real world and what dysfunction’s gonna come from that?”

55:54 — Wall-E Disconnection Warning
“Wall-E... you had all the people who were being controlled by auto, the AI who had their best interests... they were all just in their screens, disconnected.”

58:33 — AI Afterlife Avatars
“Reaching beyond the grave that now that we have so much of ourselves in a digital sphere, that one could reconstruct loved ones and their identity.”

01:03:11 — Paywalled Relationships
“Now you become enslaved to pay a fee to maintain your artificial connection... You wanna talk to grandma? You need to... Or, we’ll kill her again and again.”

01:04:50 — Comics And Inversion
“I used to have the desire to stop time, like past generations once did, to capture wonders. I strongly felt that synthetic experiences betrayed reality.”

01:12:03 — Favorite AI Stories
“Ex Machina. I love Ex Machina... It’s just... yeah.”

01:17:51 — Wrap And Signoff
“Stay curious, stay safe… Love Y’all!”




Join Rabbit Hole of Research on Discord: https://discord.gg/2nnmKgguFV

Subscribe to and share our Substack newsletter to get email updates, never miss an episode, and spread the word! Don’t forget to give us five stars or a like!

