Paul Ford on The Extremely Human Last Mile:

I think the answer is actually really simple, and it’s something I keep mumbling to myself: AI is great at the first mile because it seems human, but bad at the last mile because it’s not human.

First-mile tasks include “writing a summary,” or “looking for interesting companies in the cannabis space” or “reviewing inbound emails and classifying them in the CRM” or “finding the right SaaS tool for my church-membership drive” or “listing the kind of software components my new online hat store needs”—and for all of these, LLMs often work surprisingly well. They simulate basic human skill sets to differing degrees of talent, and they’re fast, too.

But then there’s the last mile. That includes “launching the app at the same time the marketing campaign rolls out in four languages” or “finishing the five-hundred-thousand line migration from COBOL to Java” or “completing the oral defense of your PhD thesis.” This set of skills might be in reach of AI, but I don’t buy it yet.

The last-mile analogy is nearly perfect! Highway driving is generally the most predictable and structured scenario for autonomous vehicles, but challenges abound in the final leg of the journey: parking in the right spot on a private driveway, navigating unpredictable pedestrians, or deferring to a traffic cop waving you through a red light. In these moments, context takes precedence over rigid rules, demanding a flexibility and intuition that favor humans over our nascent crop of agents. If you're in the business of building AI products, identifying the environments where this generation of agents can truly excel is the key to driving meaningful customer success.

[Photo: the platform of the Capitol South Washington Metro station, a train stopped beneath the station's Brutalist coffered concrete ceiling as passengers board and exit.]
