While agentic AI systems aim to autonomously set and achieve goals, practical limitations such as the absence of public APIs and mounting system complexity expose significant hurdles, highlighting the gap between the technology's promise and its current capabilities.
Agentic AI promises something more ambitious than a chatbot or an automated workflow. In theory, it is a system that can receive a goal, choose tools, make decisions and take actions with limited human input. Kobi Toueg’s Medium essay uses that idea to imagine an agent tasked with a simple business objective: making $1,000 on Medium. The thought experiment is appealing, but the article argues that the first obstacle is practical: Medium does not provide a public API for this kind of automation, and scraping the site would be brittle, difficult to maintain and ill-suited to a system that depends on repeated feedback loops.
That technical constraint reflects a broader pattern in agentic design. According to TechTarget, agentic systems are built around perception, decision-making and action execution, with the agent moving from one step to the next in pursuit of a goal. IBM describes the same architecture as a departure from non-agentic software, because the system is expected to act with some autonomy rather than wait for constant human instruction. In practice, that means an agent is not just a language model with a prompt; it is an orchestrated stack of planning, memory, tools and control logic.
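The perception, decision-making and action-execution loop described above can be sketched in deliberately simplified form. Everything here is illustrative: the class and method names, the revenue-per-action figure and the stubbed environment are assumptions for the sketch, not part of any framework the article names.

```python
# A minimal sketch of an agentic perceive -> decide -> act loop,
# using the article's thought experiment (earn $1,000) as the goal.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: int                                  # target revenue in dollars
    memory: list = field(default_factory=list)  # record of actions taken

    def perceive(self, environment: dict) -> int:
        """Read the current state of the world (here: revenue so far)."""
        return environment["revenue"]

    def decide(self, revenue: int) -> str:
        """Choose the next action based on progress toward the goal."""
        return "stop" if revenue >= self.goal else "publish_article"

    def act(self, action: str, environment: dict) -> None:
        """Execute the chosen action and log it in memory."""
        if action == "publish_article":
            environment["revenue"] += 50       # stubbed effect of one action
        self.memory.append(action)

def run(agent: Agent, environment: dict, max_steps: int = 100) -> dict:
    """Repeat perception, decision and action until the goal or step limit."""
    for _ in range(max_steps):
        revenue = agent.perceive(environment)
        action = agent.decide(revenue)
        if action == "stop":
            break
        agent.act(action, environment)
    return environment

agent = Agent(goal=1000)
final = run(agent, {"revenue": 0})
print(final["revenue"])  # reaches the $1,000 goal after 20 stubbed actions
```

Even this toy version shows where the real difficulty lies: the `act` step assumes a reliable interface to the environment, which is exactly what a missing public API takes away.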
The promise of that stack is also what makes it difficult to build well. Multiple explainers on agentic architecture, including those from Agentic AI Masters, CrossML, UpGrad and DigitalAPI, point to the same recurring problems: growing system complexity, fragmented data, fragile orchestration, weak API readiness and the risk of optimising for technology rather than a clearly defined business problem. As those components multiply, debugging becomes harder and maintaining alignment between the agent’s actions and the original objective becomes more precarious.
Toueg’s central argument is that the idea of a self-directed writer optimising for revenue exposes both the appeal and the unease of agentic AI. The system may be technically imaginable, but it depends on access, control and trust that do not yet exist in a clean, reliable form. That leaves agentic AI suspended between a compelling product vision and a set of unresolved engineering and ethical trade-offs.
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph:
- Paragraph 1: [2]
- Paragraph 2: [3], [6]
- Paragraph 3: [4], [5], [7]
- Paragraph 4: [2], [3], [5]
Source: Noah Wire Services