[Gardner Analytics Apartment — January 2014, Evening]
The registration confirmation sat in the dead man's inbox between a spam email about Nigerian prince investments and a three-month-old Slack notification from a channel called #general that nobody had posted in since October.
Thank you for registering for TechCrunch Disrupt SF! Your observer badge will be available at check-in.
Fifty dollars. The first real money Ethan had spent in this life, and it went to a lanyard and a seat in the back row. He closed the email and opened a fresh browser tab.
Time to understand the world he'd landed in.
He started with arXiv. The machine learning section of arXiv in January 2014 was a graveyard compared to what he remembered. In 2025, the section averaged dozens of new papers daily — architecture innovations, training techniques, benchmark demolitions, a constant firehose of progress. Here, the pace was glacial. A handful of papers per week, most of them incremental. Convolutional neural networks were the hot topic. Recurrent networks were getting modest attention. The word "transformer" appeared exactly zero times in any ML context.
He pulled up the most-cited recent work. Krizhevsky's AlexNet paper from late 2012 — the ImageNet breakthrough that had reignited interest in deep learning. That was barely a year old, and the field was still processing its implications. Researchers were arguing about whether deep learning was a real paradigm shift or a clever trick that would hit diminishing returns.
They were arguing. In 2014. About whether neural networks had a future.
The irony tasted metallic.
Ethan switched to tech blogs. TechCrunch's AI coverage was sparse — a few articles about IBM Watson, a profile of a startup doing sentiment analysis, a think-piece titled "Will Robots Take Our Jobs?" that read like it had been written by someone who'd watched Terminator 2 and extrapolated. Hacker News was marginally better. A thread about Yann LeCun joining Facebook's AI research lab had seventeen comments, half of them skeptical.
The venture capital landscape was worse. He pulled up Crunchbase and filtered for AI-related investments in 2013. The numbers were anemic. A few million here, a few million there, mostly going to companies doing narrow applications — fraud detection, recommendation engines, image tagging. The word "generative" didn't appear in any funding announcement. Not once.
VCs were pouring billions into mobile apps. Social networks. The sharing economy. "Uber for laundry." "Airbnb for dogs." The investment thesis of Silicon Valley in early 2014 was built on platforms, marketplaces, and the assumption that software would eat the world through mobile-first consumer products.
Not through neural networks. Not through language models. Not through the thing sitting crystal-clear in Ethan's skull.
He leaned back in the chair. The apartment's overhead light buzzed — a cheap fluorescent tube that cast everything in the color of old milk. The desk was covered in printouts now, pages of funding announcements and research summaries and blog posts he'd been marking with a red pen from the desk drawer.
The market education problem was staggering. He wasn't just building a product nobody had seen before. He was building a product in a category nobody believed in. Pitching a Transformer-based AI in January 2014 would be like pitching a smartphone in 1990. The technology was real, but the audience lacked every piece of context needed to understand why it mattered.
How do you explain attention mechanisms to someone who thinks "neural network" is a metaphor?
---
[Same Apartment — Three Hours Later]
The dead man's email told a sad, familiar story.
Ethan scrolled through it methodically, building a picture of the life he'd inherited. Ethan Gardner — the original — had been a competent but unremarkable programmer who'd caught the startup bug in 2013. He'd quit a decent job at a mid-tier consultancy to found Gardner Analytics, a data visualization company that solved a problem nobody had. The pitch had been vague enough to sound promising and specific enough to be boring: "Turning your data into actionable insights."
The inbox contained the archaeological layers of a failing company. Early emails full of optimism — pitch deck drafts, advisor outreach, excited Slack threads about feature ideas. Then the slow deterioration. Unanswered investor emails. A polite rejection from Y Combinator. Client meetings that led nowhere. A gradual shift from building product to scrambling for any revenue at all.
The contacts were thin. Two hundred and thirty-seven LinkedIn connections. Ethan opened the profile and scanned the list. Fellow YC rejects. Meetup organizers. A handful of angels who invested in pre-seed rounds that never materialized. Three people had endorsed him for "Data Analysis." One had endorsed him for "Microsoft Excel."
But buried in the wreckage, two threads caught his attention.
The first was an email chain with someone named David Park, a former colleague from the consultancy. David had moved to a small VC fund called Basecamp Ventures — not a partner, just an associate. The last email, from November, was David asking if Ethan wanted to grab coffee and "catch up on what you're building." Ethan — the original Ethan — had never replied.
The second was a draft email to a Stanford professor named James Kuo, an expert in recurrent neural networks. The original Ethan had been trying to get a meeting, probably to add academic credibility to his dying company. The email had been started, abandoned, deleted, and re-drafted three times. None of the versions had been sent.
Warm leads. Barely warm. Lukewarm at best. But in a city where he knew nobody and had nothing, lukewarm was better than cold.
Ethan bookmarked both threads and opened a blank document.
PITCH DECK v1.
Slide 1: Gardner Analytics presents: The next generation of artificial intelligence.
He typed. Deleted. Typed again.
The problem was fundamental. The Transformer architecture in his head was revolutionary, but "revolutionary" meant nothing without context. He couldn't say "attention mechanism" — the term didn't exist yet in this usage. He couldn't say "language model" — in 2014, that meant statistical n-gram models, not neural networks that generate coherent text. He couldn't reference GPT or BERT or any of the landmarks that would eventually validate his approach, because none of them existed.
He tried analogies. "Like autocomplete, but for paragraphs." Too small. "Like a brain, but for text." Too sci-fi. "A new approach to machine learning that enables computers to understand and generate human language." Better, but it sounded exactly like what every failed NLP startup had promised for twenty years.
The cursor blinked. The fluorescent light buzzed.
By midnight, he had nine slides. He read them back. They were terrible. Technically accurate and completely unpersuasive. A 2014 VC reading this deck would see another academic project dressed up as a business. No traction, no market, no moat — just a guy claiming he could do something that the entire field considered either impossible or decades away.
He deleted the whole file.
Started over.
Slide 1: What if a computer could write?
That was the hook. Not the technology. Not the architecture. The output. Show them what it does, not how it works. In 2014, a computer that could generate coherent paragraphs would be magic. Nobody needed to understand multi-head self-attention to be amazed by the result.
He built the new deck around demonstrations, not explanations. Hypothetical examples of generated text. Comparisons to existing NLP tools that would make the gap obvious. Market sizing for content generation, automated customer service, language translation. The technology section was three slides — high-level, deliberately vague, enough to suggest depth without exposing the architecture.
The clock showed 2:47 AM when he finished. Fourteen slides. Not good — but less bad than the first attempt. The pitch still had a gaping hole: no demo. No working product. Just a promise from a failed startup founder that he could build something the world's best researchers hadn't.
Ethan saved the file. His neck ached from hunching over the laptop. His eyes burned from twelve hours of screen time. The apartment had gotten cold — the heating system ran on a timer and shut off at midnight.
He stood, stretched, and pulled the North Face jacket from the closet. Wrapped it around his shoulders like a blanket. The bed was six feet away but felt too far. The couch was closer but smelled like old pizza.
The laptop screen dimmed to sleep mode. The only light left was the power indicator on the router, blinking green in the dark.
Ethan put his head down on the desk, using his arms as a pillow. The last thing before sleep was the architecture, spinning slowly behind his eyes. Layers and attention heads and data flowing like water through a machine that didn't exist yet.
Two days until Disrupt.
