If you’ve been running an RPA program for more than two years, you probably already know the feeling. The bots work. Most of the time. Until the supplier portal redesigns a button, or a line item wraps to a second page, or someone types a field in Spanish. Then the queue backs up and a human has to babysit it until Wednesday.
That feeling is the real difference between RPA and agentic AI. And it’s the reason we keep both in every industrial deployment we do.
RPA is a script. Agents are a colleague.
Robotic Process Automation does exactly what you programmed it to do, in exactly the order you programmed it to do it. That’s a feature, not a bug. If your ERP produces a CSV on Tuesday and your warehouse system needs it imported by Wednesday at 6am, you want the deterministic bot. It’s cheap, it’s fast, it’s auditable, and it will run the exact same way 50,000 times without creativity.
Agents are different. An agent reasons about the task. Give it a PDF invoice that doesn’t match the template, and it will read the PDF, notice the mismatch, flag the confidence as 0.74 instead of 0.98, and decide whether to post it, hold it, or escalate to a human. That’s not a script running. That’s a colleague making a judgment.
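The post-hold-escalate judgment can be sketched as a confidence-gated router. Everything here is illustrative: the thresholds, the `ExtractionResult` shape, and the `decide` function are assumptions, not any particular vendor's API.

```python
from dataclasses import dataclass

@dataclass
class ExtractionResult:
    matches_template: bool   # did the invoice match a known layout?
    confidence: float        # model's self-reported confidence, 0.0-1.0

def decide(result: ExtractionResult,
           post_threshold: float = 0.95,
           hold_threshold: float = 0.70) -> str:
    """Route an extracted invoice: post it, hold it, or escalate to a human."""
    if result.matches_template and result.confidence >= post_threshold:
        return "post"       # high confidence, known format: straight through
    if result.confidence >= hold_threshold:
        return "hold"       # plausible but uncertain: park for review
    return "escalate"       # low confidence or unknown format: human, now

# The mismatched invoice from the example above lands in the hold queue:
decide(ExtractionResult(matches_template=False, confidence=0.74))  # → "hold"
```

The thresholds are the judgment knobs: tighten `post_threshold` and more work goes to humans; loosen it and more posts straight through.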
The CFOs who ask us whether RPA is dead are usually frustrated because they expected their bots to behave like colleagues, and bots are just scripts. The answer isn’t to rip out the scripts. The answer is to stop asking scripts to do judgment work.
Where to draw the line
Here’s the simple rule we use when we walk into a Forge deployment:
If the task is deterministic, high-volume, and stable, keep it on RPA. If the task requires context, judgment, or can fail in ways nobody has seen before, put an agent on it.
In practice that usually looks like this:
Keep on RPA
- EDI ingestion. Same format, same supplier, same fields. Bots eat this for breakfast.
- Portal scraping. If the portal doesn’t change twice a week, RPA is cheaper than an agent making the same HTTP calls.
- Single-step automations. Click button, copy value, paste value, press save. This is what RPA was built for.
- High-frequency batch jobs. Anything that runs 10,000+ times a day with sub-second latency requirements.
Move to agents
- Exception handling. Anything that ends with a human staring at an unexpected screen trying to figure out what’s wrong.
- Multi-system reconciliation. Finding and explaining a $2,400 discrepancy between your ERP and your bank feed is the textbook agent job.
- Unstructured input. PDFs, emails, chat messages, scanned documents, voicemails, anything where the exact format isn’t guaranteed.
- Policy-driven decisions. “Approve if the vendor is in the approved-supplier list AND the PO amount matches within 2% AND the receiving report is complete.” That’s a handful of conditions an agent can actually reason through.
- Escalation triage. Deciding who to escalate to, with what context, and when, is a full-time human job at most mid-market operators. Agents do it in seconds.
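The policy quoted in the approval bullet has a deterministic core that could, in principle, be written as a plain function. The sketch below is hypothetical, assuming clean inputs; the agent earns its keep precisely when the inputs are missing, ambiguous, or in the wrong format and this function can't run at all.

```python
def approve_invoice(vendor_id: str,
                    po_amount: float,
                    invoice_amount: float,
                    receiving_complete: bool,
                    approved_suppliers: set[str],
                    tolerance: float = 0.02) -> bool:
    """Mechanical core of: approved supplier AND PO match within 2% AND
    receiving report complete."""
    if vendor_id not in approved_suppliers:
        return False
    if abs(invoice_amount - po_amount) > tolerance * po_amount:
        return False
    return receiving_complete
```

If every invoice arrived with these five fields populated, this would be an RPA task. The reason it lives on the agent side of the line is that real invoices arrive as PDFs and emails, and extracting those fields reliably is the judgment work.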
The dirty secret: most teams need both
On our last four industrial deployments, we ended up running agents on top of the client’s existing RPA. Bots handled the portal scraping and EDI ingestion, then dropped files into a queue. Agents picked up the queue, reasoned about exceptions, and either posted them to the ERP or escalated them to a human with a pre-written summary.
The result: the RPA layer got more valuable, not less. The bots were no longer the bottleneck because they weren’t trying to do judgment work. They could focus on the deterministic heavy lifting they were always good at, and the agent layer soaked up everything that used to break them.
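The two-layer handoff described above can be sketched in a few lines. The `classify`, `post_to_erp`, and `escalate` callables are hypothetical stand-ins for the agent call and the downstream systems; the point is the shape, not the implementation.

```python
import queue

work_queue: "queue.Queue[dict]" = queue.Queue()

def bot_layer(raw_items: list[dict]) -> None:
    """Deterministic layer: scrape, ingest, hand off. No judgment here."""
    for item in raw_items:
        work_queue.put(item)

def agent_layer(classify, post_to_erp, escalate) -> None:
    """Probabilistic layer: reason about each item, then post or escalate
    with a pre-written summary for the human on the other end."""
    while not work_queue.empty():
        item = work_queue.get()
        verdict = classify(item)        # agent call: "post" or "escalate"
        if verdict == "post":
            post_to_erp(item)
        else:
            escalate(item, summary=f"Exception on {item.get('id')}: needs review")
```

The queue is the boundary. Everything upstream of it runs the same way every time; everything downstream of it is allowed to think.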
This is why the “RPA is dead” framing is lazy. The people saying it are usually selling either pure RPA or pure agents, and they need you to pick a side. The actual answer — messier and more useful — is that the next-generation automation stack has a deterministic layer and a probabilistic layer, and the boundary between them is the single most important design decision you’ll make.
How to get the boundary right
Three questions we ask at the start of every engagement:
- What breaks your current bots the most? That’s where agents earn their keep first. Don’t try to replace the bots that are working.
- Where are your humans doing judgment work that could be written down as a policy? Those are agent tasks. Not RPA tasks, not human-only tasks.
- What’s the cost of a bad decision vs. a slow decision? High-stakes, low-volume decisions — credit limits, supplier approvals, contract amendments — are perfect for an agent in recommend mode, where it drafts the decision and a human signs off.
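Recommend mode, from the last question above, has a simple shape: the agent drafts, a human gates, and nothing executes without sign-off. The three callables here are hypothetical placeholders, not a real framework's API.

```python
def recommend_mode(draft_decision, request_signoff, execute):
    """High-stakes flow: agent drafts a decision with its rationale, a human
    reviews it, and only an approved draft is ever executed."""
    rec = draft_decision()       # agent produces {"decision": ..., "rationale": ...}
    if request_signoff(rec):     # blocks on a human reviewer
        return execute(rec)
    return None                  # rejected drafts never touch a system of record
```

The design choice worth noting is that the gate sits between drafting and execution, not after: a slow decision costs you a few hours of reviewer latency, while a bad one costs you a credit limit or a supplier relationship.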
If you can answer those three honestly, you can draw the line between deterministic and probabilistic automation in about an afternoon. Most of the pain we see in the field comes from organizations that never drew it, and now have RPA bots trying to do exception handling and people trying to do bulk data entry.
So should we keep our RPA platform?
Almost certainly yes. If it’s deployed, working, and your team knows how to maintain it, ripping it out and replacing it with agents is a waste of money and a loss of institutional knowledge. Put agents on top. Let them handle what the bots could never handle. Measure the exception queue, because that’s where you’ll see the real ROI.
And the next time a vendor tells you to pick a side, ask them which of your current processes they’d leave on RPA. If they say “none,” they don’t understand your business.