Why did Rows, the leading gen-AI Excel, not work?
February 26, 2026
This past week, Rows (the gen AI age's Excel) was acquired, and it genuinely surprised me.
Rows was a good product.
I used it. I liked it. I would have invested if I'd had the chance.
It raised $34M+ and built real traction: 2.2M users (though that figure likely counts signups or the free user base) and 17B functions executed. And it had a thesis that made sense: an AI-native spreadsheet could, with a simple command to an agent, pull in data from anywhere, analyze it, and visualize it.
And yet: full acqui-hire. Product sunset May 31, 2026. The founders join Superhuman to work on Coda's data layer.
So yeah, the product didn't work.
The obvious explanation is incumbent gravity — Excel and Google Sheets are too entrenched, the switching cost is too high.
That's true, but it's not sufficient.
If "incumbent AI killed the startup" were the explanation, Gamma and a bunch of other companies fighting super dominant incumbents should be folding too.
I've been reflecting on this, focusing especially on the distinctions between slides and sheets (since I'm a slides junkie), and the conclusions I'm landing on are more specific: Presentations and spreadsheets require AI to do fundamentally different things. One of those things AI was ready to do in 2022. The other it wasn't.
The rhetorical vs. computational output distinction
A presentation is a rhetorical output.
It needs to be coherent, visually compelling, and narratively plausible. It does not need to be mathematically correct. A slide with a wrong statistic is bad. But a slide that looks beautiful and sounds confident will still move people, get shared, open doors. The failure mode for an 80%-correct AI-generated slide deck is "nice work overall, now fix this and this part." The output is forgiving.
A spreadsheet is a computational output.
It doesn't just need to look right, it needs to calculate right. Formulas have to reference the correct cells. Data has to come from the correct source with the correct structure. A revenue model with a wrong formula, or a data integration pulling the wrong field, isn't just imprecise. It's actively dangerous: it gives you false confidence. The failure mode for an 80%-correct AI-generated spreadsheet is "this is total trash and I can't use it." The output is unforgiving.
This asymmetry maps directly onto what generative AI was actually good at when these products launched.
Language models in 2022 were extraordinarily good at generating coherent, plausible, beautiful text and visual content. They were not good at executing computationally correct data work, and in our experience they still produce error rates on the order of 1%, which remains unacceptable for, say, financial institutions.
Gamma's core product — supply a topic, get a pretty deck in 60 seconds — asked AI to do exactly what it could do. Rows' core product — analyze your data with AI — asked AI to do something it couldn't yet reliably do.
The time-to-magic-moment gap
I remember trying to connect Rows to stock market data. They only had US equities. Not relevant to me because our investments and benchmarks were 90% in Asia. I even emailed them asking whether we could have Asian stock data integrated. I didn't hear back.
So yes, I spent time, hit a wall, moved on. The magic moment never came.
That experience is structurally baked into what Rows was trying to build. The aha was conditional: it required you to have the right data, connected through an integration that existed, in a format Rows could work with. The moment you hit a missing integration (US stocks only, no international), you leave to do something else, and busy users rarely come back (I didn't).
And Rows was trying to build out hundreds of integrations against a nearly infinite surface of data sources.
Gamma's aha is unconditional: type a topic, see something impressive in 60 seconds. Zero prerequisites. Nothing to bring. The magic is built into the onboarding.
30% of it is wrong? Well, it still saved me 70% of the time, and the things I needed to fix could be fixed right there in the interface.
This isn't a difference in execution quality. It's a difference in what the product fundamentally requires of the user before the value lands.
Why slides are image-adjacent and spreadsheets aren't
There's a physical property to this distinction worth naming explicitly.
A presentation is, at its core, a visual artifact. There can be text and tables and icons and all that, but it's still just a visual. It doesn't compute.
This is why Google Gemini's latest capability is stunning and totally works: it can generate gorgeous presentations by essentially generating images with text.
It also means that decks created elsewhere (pptx, pdf, Google Slides) are more portable.
A spreadsheet is not an image. It's a data model. Cells reference other cells. Formulas chain through the sheet. Outputs depend on inputs being correctly sourced and correctly typed.
That's why the lock-in of Excel is different: it's the formulas, the cross-sheet references, the VBA macros, the pivot tables, the data models users have built over years. Even if Excel's AI features are bad, it's hard to leave, because the alternative means rebuilding years of computational work in a new environment.
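The "data model, not image" point can be made concrete with a minimal sketch. The cells and formulas below are hypothetical, not any real product's engine; the point is just that values chain through references, so one wrong reference silently corrupts everything downstream:

```python
# Minimal spreadsheet-style evaluation: a cell is either a literal
# value or a formula (a function that looks up other cells).
cells = {
    "revenue": 1000.0,
    "costs": 600.0,
    # Formula chain: margin depends on profit, which depends on both inputs.
    "profit": lambda get: get("revenue") - get("costs"),
    "margin": lambda get: get("profit") / get("revenue"),
}

def evaluate(name, cells):
    """Resolve a cell, recursively evaluating any formulas it references."""
    cell = cells[name]
    if callable(cell):
        return cell(lambda ref: evaluate(ref, cells))
    return cell

print(evaluate("margin", cells))  # 0.4

# Now one wrong reference ("revenue" where "costs" belongs), the kind
# of mistake an AI might make, and the output still looks plausible:
cells["profit"] = lambda get: get("revenue") - get("revenue")
print(evaluate("margin", cells))  # 0.0, confidently and silently wrong
```

A slide with the same mistake is just one bad bullet; here the error propagates through every cell that references it, which is the asymmetry the whole argument rests on.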
The timing gap and why the followers have more momentum now
Rows wasn't wrong about the category. It was too early for the capability the category required.
The new AI-native spreadsheet entrants — Shortcut AI, Paradigm, others — are launching into a different environment. Agents can now actually pull data from APIs on instruction, write correct formulas from natural language, run multi-step analysis without manual data wrangling. The prerequisite that killed Rows (bring your own data, find your own integration) is being abstracted away by the agent layer. The magic moment is now achievable at onboarding, and only gets better from there.
I write this, however, without knowing precisely how well Shortcut and Paradigm are doing. I guess we'll find out in the coming months.