The first thing I learned working with appse.ai

The first thing I learned working with appse.ai is that nobody really knows what they’re doing with AI yet, and the people pretending otherwise are mostly selling something.

I don’t mean that cynically. I mean it the way you’d say it about any new technology — radio in 1920, the internet in 1995, smartphones in 2008. There’s a moment with every big shift where the gap between what the technology can do and what people know how to do with it is enormous, and the most useful thing isn’t a clever new tool. It’s someone patient enough to figure out how to use the tools we already have without making things worse.

That’s most of what we do at work, as far as I can tell. Less invention, more orchestration. Less “what if AI did this whole job?” and more “which three steps of this job is AI actually good at, and what does the human do with the time that frees up?” It’s much less glamorous than the Twitter version. It’s also, I think, where the real work is.

The people I find most useful to listen to are the ones who say “I tried it, it didn’t work, here’s why” — not the ones who say “this changes everything.”

A few things I’ve noticed in my first stretch here:

Most “AI problems” are really process problems. When someone says “AI can’t do this thing,” nine times out of ten what they actually mean is “the workflow we tried to automate was unclear even when humans were doing it.” AI is a flashlight. It shows you what was already there.

Taste is a competitive advantage now in a way it wasn’t. Models can produce passable everything. They can write a passable email, design a passable landing page, draft a passable essay. The thing that’s getting more valuable, not less, is the editorial judgment to know what’s good. Knowing what to throw away. Knowing when “passable” is the wrong target.

Speed is a trap. AI makes everything feel faster, and the temptation is to ship more, more, more. The teams I’ve seen do best are the ones who got faster but kept their standards — not the ones who got faster and let standards drop because, hey, the model wrote it.

The interesting work is at the seams. Not the AI part. Not the human part. The handoff between them. How do you pass context from a model back to a person without losing it? How do you let a person review a model’s output without it becoming a tedious second job? Most of what I’m watching get built right now is plumbing for that handoff. It looks boring. It’s not.

I came into this expecting AI to be a story about machines getting smarter. It’s mostly turned out to be a story about people — what we want, what we’re willing to delegate, what we still want to do ourselves even when we don’t have to. The model is the easy part. The hard part is figuring out who we are now that the model exists.

Which, if you think about it, is the hard part of every technology, ever.

I’ll keep writing about this as I learn more. Some of what I think now will probably be wrong in six months, and I’m trying to make peace with that. The internet rewards confidence; I’d rather be slightly wrong out loud than confidently right alone.

More soon.

Tanishka
