Kevin Champlin

Agentic

Also called: agent, AI agent, autonomous agent

"Agentic" describes a system that can take multiple actions in sequence, use tools, observe outcomes, and adjust its plan to reach a goal. It is a descriptor of behavior, not a claim about consciousness, autonomy in any moral sense, or general intelligence.

A non-agentic chat is one where the model takes a prompt, returns a single response, and stops. An agentic system is one where the model can decide to call a tool (search, code execution, file system, API), receive the tool's output, decide what to do next, and repeat that loop until it judges the task complete. That capability changes what the model can do, but it does not change what the model is.
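The loop described above can be sketched in a few lines. Everything here is illustrative: `model` is a stub standing in for an LLM's structured output, and `search` is a stand-in tool, not a real API. Real agent frameworks differ in detail, but the shape (decide, act, observe, repeat) is the same.

```python
def search(query):
    # Stand-in tool: a real agent would call a search API, code runner, etc.
    return f"results for {query!r}"

TOOLS = {"search": search}

def model(messages):
    # Stub for an LLM. It "decides" to call a tool once, then finishes.
    # A real model would return this choice as structured output.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "search", "args": {"query": messages[0]["content"]}}
    return {"final": "Done: " + messages[-1]["content"]}

def run_agent(task, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):            # act, observe, decide again
        step = model(messages)
        if "final" in step:               # model judges the task complete
            return step["final"]
        output = TOOLS[step["tool"]](**step["args"])          # call the tool
        messages.append({"role": "tool", "content": output})  # observe result
    return "step budget exhausted"

print(run_agent("cheap flights to Lisbon"))
```

Note that nothing in the loop makes the model smarter; it only gives the same model more chances to act, which is the point the definition above is making.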

The word "agentic" is doing too much work in current AI marketing. It often gets used as a stand-in for "intelligent" or "autonomous in the way a person is autonomous." It is neither. An agentic system that books your flight has not gained understanding of travel; it has been given a set of tools and a tighter loop. The same underlying model that hallucinates a citation in a chat will also hallucinate a flight number when it is allowed to call a flight-booking API.

This site uses the word "agentic" precisely. A demo on /can might show an agent successfully completing a task across five tool calls. The matching demo on /cannot will show the same agent confidently completing the wrong task because the underlying limitations of the model still apply. Capability without consciousness is the editorial spine of this site, and the gap between "agentic" and "general" is exactly where the spine sits.

Related concepts

tool-use (awaiting authorship)
general-intelligence (awaiting authorship)
consciousness
