The scenario that almost cost someone their job
I've seen this happen more than once. Someone builds a Make.com scenario that works perfectly in testing. It processes orders, sends emails, updates a spreadsheet. Clean, fast, reliable. They hand it off to a client, the client loves it.
Three months later, an API it depends on returns an unexpected error. The scenario fails. But because there's no error handler configured, Make just quietly logs the failure and moves on. Nobody gets notified. No Slack message, no email, no alert. For two weeks, the automation is silently doing nothing.
By the time anyone notices, hundreds of orders haven't been processed.
The scenario worked. It just wasn't built to handle real-world conditions. And the gap between those two things is exactly what we built the Scenario Analyzer to catch.
What the Scenario Analyzer actually does
You export your blueprint from Make.com (same process as the Documenter), paste or drop the file into the tool, and it runs a set of checks on the structure of your scenario. You get a health score out of 100 and a list of issues organized by severity.
It also calculates how many operations your scenario uses per run. Since Make bills by operations, this helps you understand what a scenario actually costs to run before you hit your monthly limit and get surprised by the bill.
The whole thing runs in your browser. No data leaves your device.
How to use it
Same process as the Documenter. Takes about 30 seconds.
Export your blueprint
Open your scenario in Make, click the three dots (...) in the bottom toolbar, and select Export Blueprint. You'll get a .json file.
Drop it in or paste the contents
Go to the Scenario Analyzer, drag your file onto the input area, or open the file and paste the JSON. The Browse file button works too.
Click Analyze Scenario
If you used drag-and-drop or the file browser, it analyzes automatically. Otherwise, hit the button.
Read the results
You'll see a health score, a metrics grid, and a list of issues with plain-language explanations and specific suggestions for each one.
What the tool checks for, and why each one matters
There are currently eight checks. Here's what each one is looking for and why it shows up in the report.
Errors
No modules found
This fires if the blueprint is empty or the flow contains nothing. Usually means you exported the wrong thing, or the file is corrupted. Deducts 30 points from the health score.
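For context, a blueprint is a JSON file whose modules live in a top-level flow array, so this check amounts to something like the sketch below. This is an illustrative reimplementation, not the tool's actual code, and the field name reflects Make's blueprint format as I understand it; treat it as an assumption.

```javascript
// Sketch of the "no modules found" check. Assumes the blueprint's
// modules sit in a top-level `flow` array (illustrative, not the
// tool's real code).
function hasNoModules(blueprint) {
  return !Array.isArray(blueprint.flow) || blueprint.flow.length === 0;
}
```

An empty object, or one with an empty flow array, would trip the check; a blueprint with at least one module in flow would pass.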
Warnings
No error handler configured
This is the big one. Without an Error Handler module, Make uses its default behavior when something fails, which is: stop the run and log it. You won't get notified. Your data won't be retried. The failure just sits in the history tab waiting for you to notice it.
An Error Handler lets you define what happens instead. Send yourself a Slack message, log the failed bundle to a Google Sheet, try again after a delay. Anything is better than silent failure. This check deducts 15 points.
Empty router branch
A Router branch with zero modules. When data matches the filter for that branch, Make routes it in, executes... nothing, and burns one operation doing it. If it's intentional (you plan to fill it in later), that's fine, but it's usually an oversight. Also deducts 15 points.
Suggestions
Router with only one branch
A Router that leads to a single branch doesn't route anything. It adds 1 op per run and makes the scenario harder to read. The modules inside should connect directly to the previous step. 5 points.
Multiple unfiltered router branches
Having one unfiltered branch on a router is normal and intentional (it's usually your catch-all). Having multiple unfiltered branches means everything flows into all of them, which may not be what you wanted. 5 points.
Iterator without an aggregator
An Iterator splits an array into individual bundles. Without an Aggregator downstream, those bundles never get collected back together. Sometimes that's intentional: you want to process each item separately and you're done. But if you expected a combined output, the aggregator is missing. 5 points.
25 or more modules
Large scenarios aren't inherently bad, but they're harder to debug, harder to hand off, and more expensive per run. At some point it's worth asking whether this should be two or three smaller scenarios that communicate via HTTP. 5 points.
Nested routers
A Router inside another Router's branch. This isn't wrong, but it adds complexity fast. If the inner logic is getting complicated, a sub-scenario can make things easier to reason about. 5 points.
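Taken together, the deductions above amount to a simple scoring scheme: start at 100 and subtract the weight of each issue found. Here's a minimal sketch; the weights come from the checks listed above, but the issue keys are made up for illustration, and flooring the score at zero is an assumption about edge cases, not something the tool documents.

```javascript
// Deduction per check, matching the point values listed in the text.
// Issue keys are illustrative, not the tool's internal names.
const DEDUCTIONS = {
  noModules: 30,            // error
  noErrorHandler: 15,       // warning
  emptyRouterBranch: 15,    // warning
  singleBranchRouter: 5,    // suggestion
  multipleUnfiltered: 5,    // suggestion
  iteratorNoAggregator: 5,  // suggestion
  largeScenario: 5,         // suggestion: 25+ modules
  nestedRouters: 5,         // suggestion
};

// Start at 100, subtract each issue's weight, and floor at 0
// (the floor is an assumption, not documented behavior).
function healthScore(issues) {
  const total = issues.reduce((sum, key) => sum + (DEDUCTIONS[key] || 0), 0);
  return Math.max(0, 100 - total);
}
```

Under this scheme, a scenario flagged for both a missing error handler and an empty router branch would score 70.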
The operations estimator
Make charges by operation. Each time a module executes in a run, it costs 1 op. That sounds simple, but the actual number gets tricky fast.
The tool shows you a range: a minimum and a maximum ops-per-run estimate. The minimum assumes your scenario always takes the shortest router branch. The maximum assumes it always takes the longest. Reality is usually somewhere in between, depending on which branches actually fire for a given piece of data.
If your scenario has an Iterator, you also get a calculator. You tell it how many items your iterator typically processes, and it shows you the projected total. The formula is straightforward: base ops (modules that run once) plus loop modules multiplied by your item count. A scenario with 5 base modules and 3 modules in a loop that processes 100 items per run costs 305 ops, not 8. That's the kind of thing that surprises people when they hit their plan limit.
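The arithmetic behind both numbers can be sketched in a few lines. This is an illustrative reimplementation of the logic described above, not the tool's actual code; all names are made up for the sketch.

```javascript
// Min/max ops-per-run across router branches: the minimum assumes the
// shortest branch always fires, the maximum assumes the longest.
function opsRange(sharedOps, branchLengths) {
  return {
    min: sharedOps + Math.min(...branchLengths),
    max: sharedOps + Math.max(...branchLengths),
  };
}

// Iterator projection: modules that run once, plus the modules inside
// the loop multiplied by how many items the iterator emits.
function projectedOps(baseOps, loopOps, itemsPerRun) {
  return baseOps + loopOps * itemsPerRun;
}

// The example from the text: 5 base modules, 3 loop modules, 100 items.
console.log(projectedOps(5, 3, 100)); // 305, not 8
```

The loop multiplier is where the surprises come from: doubling your item count doubles the loop cost while the base cost stays flat, so an iterator-heavy scenario scales with your data, not with your module count.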
What the tool can't tell you (and why)
The analyzer works from the blueprint file, which describes the structure of your scenario. It doesn't know what actually happens when your scenario runs. That creates some real blind spots.
It can't see inside your modules
The analyzer knows which modules exist and what order they're in. It doesn't know what field mappings, formulas, or conditions you've set inside each one. A filter condition that references a non-existent field, a formula that divides by zero, a mapping that pulls from the wrong bundle. None of that shows up here.
The ops estimate is an approximation
We count router branches correctly (only one fires per run), but Make's exact counting rules for certain built-in modules aren't publicly documented. The estimate is directionally accurate and useful for planning, but don't treat it as a precise billing number.
Iterator detection scans linearly
The tool finds modules that sit between an Iterator and an Aggregator in a flat flow. If your scenario has an unusual structure (an iterator with no aggregator at all, or multiple nested iterators), the loop module count may be off. We flag the situation, but the number should be treated as approximate.
It doesn't check connection health or rate limits
Whether your connections are authorized, whether you're approaching API rate limits, whether a module is using a deprecated version. None of that is in the blueprint. You'd need Make's API for that level of inspection.
It can't catch logic bugs
The analyzer checks structure, not intent. A scenario with no structural problems can still have completely wrong logic. If your router sends the right data to the wrong branch, or your aggregator is in the wrong place, or you're writing to the wrong sheet, the analyzer won't know. That's a human review job.
The limitations aren't reasons not to use it. They're reasons to treat the results as a starting point, not a final verdict. A clean score means your scenario is structurally sound. It doesn't mean it's correct. You still need to test it with real data.
When this is most useful
Run it before you hand off a scenario to a client. It takes 30 seconds and catches the kind of structural problems that are embarrassing to explain after the fact. No error handler is the big one. If a scenario you delivered breaks silently for two weeks, that's a difficult conversation.
Run it on old scenarios you've inherited or haven't touched in a while. Scenarios that were built quickly under deadline pressure often skip error handling. The analyzer surfaces that quickly.
Run it when you're getting close to your Make operations limit and don't know which scenario is eating the most. Export a few suspects, check their operations estimates, and you'll usually find one that's much more expensive than you expected.
Run it when you're learning Make. The suggestions section explains not just what's flagged but why it matters and what to do about it. It's a decent way to learn best practices without having to find them all through trial and error.
Run a scenario through it
Export any blueprint from Make.com and see what comes up. The no-error-handler warning fires on most scenarios people test for the first time.
Open Scenario Analyzer