AI-assisted tooling · May 2026
AI tools need receipts
I trust AI-assisted work when it is easy to inspect.
I use AI tools a lot. I do not trust them because they sound confident. I trust them when they make their work easy to inspect.
That distinction matters most when the work gets close to production. Drafting a component is one thing. Debugging a failing job, editing a deploy path, or summarizing an incident demands a different standard.
For me, a receipt is the small body of evidence a human can review without re-doing the work. It might be a command transcript, a changed file, a test result, a before-and-after screenshot, or a note explaining why a risk was not taken. The format can be simple. The point is that judgment should not disappear into the tool.
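To make that concrete, here is a minimal sketch of a receipt as a structured artifact. The shape and field names are my own assumptions for illustration, not a schema any tool actually emits.

```python
# Sketch of a "receipt": every field points at something a reviewer can
# check independently. Field names here are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Receipt:
    """Evidence a human can review without re-doing the work."""
    summary: str                                             # one line: what was done
    commands: list[str] = field(default_factory=list)        # transcript of commands run
    changed_files: list[str] = field(default_factory=list)   # paths touched
    checks: dict[str, bool] = field(default_factory=dict)    # check name -> passed
    notes: list[str] = field(default_factory=list)           # e.g. "skipped restart: risky"
```

The useful property is not the exact fields; it is that each one is independently checkable, so review does not reduce to trusting the summary.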
The best AI-assisted workflows I have used have a few traits:
- They read the real repo before proposing changes.
- They name assumptions instead of burying them.
- They make small, reversible edits.
- They run the same checks a careful engineer would run.
- They separate local-only work from externally visible actions (sketched after this list).
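That last trait is the one I would make mechanical rather than leave to judgment. Below is a minimal sketch of the separation, assuming a hypothetical split of action names into local and external sets; none of this is a real tool's API.

```python
# Sketch: gate externally visible actions behind explicit human approval.
# The action names and the run/confirm callbacks are hypothetical.
LOCAL_ACTIONS = {"read_file", "search_logs", "run_tests", "draft_patch"}
EXTERNAL_ACTIONS = {"push_branch", "open_pr", "deploy", "post_comment"}


def execute(action, run, confirm):
    """Run local actions freely; require confirmation for external ones."""
    if action in LOCAL_ACTIONS:
        return run(action)
    if action in EXTERNAL_ACTIONS:
        if confirm(f"Externally visible action: {action}. Proceed?"):
            return run(action)
        return f"{action}: declined by reviewer"
    raise ValueError(f"unknown action: {action}")


# Stub callbacks to show the flow; a real tool would wire in its executor.
print(execute("run_tests", run=lambda a: f"{a}: ok", confirm=lambda m: False))
print(execute("open_pr", run=lambda a: f"{a}: done", confirm=lambda m: False))
```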
That is less flashy than the usual pitch about autonomous software engineering. It is also more useful. Most teams need tools that reduce the distance between evidence and action.
This is why I like local engineering tools. A good one can search logs, inspect a repo, draft a patch, run checks, and summarize what changed. The human still owns the decision. The tool makes the decision cheaper to make well.
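The "run checks and summarize what changed" step is small enough to sketch. Assuming a git repo and pytest as the check command, both placeholders for whatever the project actually uses:

```python
# Sketch: produce a short, reviewable summary of a drafted patch.
# Assumes git and pytest are on PATH; substitute your own checks.
import subprocess


def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True)


def patch_receipt():
    diff = run(["git", "diff", "--stat"]).stdout   # what changed, by file
    tests = run(["pytest", "-q"])                  # the checks a careful engineer would run
    status = "passed" if tests.returncode == 0 else "failed"
    return f"Changed files:\n{diff}\nChecks: pytest {status}\n{tests.stdout[-2000:]}"


print(patch_receipt())
```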
There is a design challenge here. If the interface rewards speed alone, the tool learns to look done. If the workflow rewards evidence, the tool becomes useful in a more durable way.
That is the version of AI-assisted engineering I want more of: tools that work inside real constraints, produce reviewable artifacts, and make the next human less blind.