1,000 records per run is a hard ceiling, not a guideline

The Free plan caps each run at 1,000 records. The pipeline does not chunk itself across runs. Discovering this at minute three of a 12,000-record copy is a bad day.


Every plan in DMM Infinity has two limits worth knowing before you build: an annual record allowance and a per-execution limit. On the Free plan those are 10,000 records/year and 1,000 records per run. Settings → Usage shows your current consumption against both, in real time.

The pitfall in one line
A 12,000-record entity will not process in a single run on Free — and the run does not pick up where the previous one stopped. It starts again from the top.

Why this matters

The per-execution cap is not a soft warning. It is enforced at run time: the run processes records until it reaches the cap, then stops, and every record processed still counts against your annual allowance. A selection at or under the cap completes in a single run. The pipeline has no concept of resume — it re-evaluates the source from scratch on every run.
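The no-resume behavior is the part that surprises people, so it is worth modeling. This is a hypothetical sketch, not the DMM Infinity API — the function name and record list are illustrative only:

```python
# Hypothetical model of the per-run cap. Names are illustrative,
# not a real DMM Infinity API.
def run_copy(selection, per_run_cap=1_000):
    """Model one run: it always starts from the top of the selection
    and stops once the per-run cap is reached. There is no cursor or
    checkpoint carried between runs."""
    return selection[:per_run_cap]  # no resume: same slice every run

selection = [f"rec-{i}" for i in range(12_000)]
first_run = run_copy(selection)
second_run = run_copy(selection)  # re-running does NOT continue
assert first_run == second_run    # same first 1,000 records both times
```

The assertion is the whole point: running the identical selection again reproduces the first run's output instead of advancing through the remaining 11,000 records.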

That changes how you plan large copies. Splitting work across runs means splitting the selection across runs (different entities, different filters), not running the same selection twice and hoping for progress. The same selection, run twice, copies the same first 1,000 records both times — and burns 2,000 against your annual allowance for 1,000 rows of value.
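Splitting the selection means planning disjoint filters up front so each run's selection fits under the cap. A minimal sketch, assuming you can express each chunk as a source filter (the index ranges here stand in for whatever filterable attribute you actually have, such as department):

```python
def plan_chunks(total_records, per_run_cap=1_000):
    """Plan disjoint [start, end) ranges so that each run's selection
    is at most per_run_cap records. In practice each range would be
    expressed as a source filter, not an index slice."""
    return [
        (start, min(start + per_run_cap, total_records))
        for start in range(0, total_records, per_run_cap)
    ]

chunks = plan_chunks(12_000)
print(len(chunks))  # 12 disjoint runs, each moving at most 1,000 records
```

The design point is that the chunks are disjoint and exhaustive: every record is selected by exactly one run, so no run repeats work and no allowance is spent twice on the same row.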

What to do instead

Sample first. Filter the source. Split entities across runs. Or upgrade — paid tiers raise both caps and change the math. Whichever path you take, do it before you click Run with a 12,000-row selection.

Do
  • Check Settings → Usage before launching a large run. The numbers tell you what is possible right now.
  • Sample with a small filter first to validate strategy and entity selection at low cost.
  • Split a wide selection into multiple smaller pipelines if you are near the per-run cap.
  • Upgrade once the steady-state volume is known — paid tiers raise both limits.
Don't
  • Run a brand-new full-module copy on Free without sampling first.
  • Re-run the same selection expecting it to "continue" — it will not.
  • Treat the annual allowance as a budget you can ignore; once it's spent, runs are rejected.
  • Upgrade reactively after burning the allowance on retries. Upgrade ahead of the work.

A concrete example

An HR migration on Free queues a Copy of 8,400 Employee records from QA to Sandbox. The run hits the per-execution cap and stops at 1,000 in / 7,400 left. The team re-runs hoping for progress. The next run re-evaluates the source, copies the same first 1,000 Employee records again — Insert strategy creates duplicates this time — and stops again at the cap.

Three runs in, the team has burned 3,000 records against the 10,000/year allowance and produced a target with 1,000 unique employees and 2,000 duplicates. The fix that should have happened first: filter the source to 1,000 rows per run by department, or upgrade to a tier whose per-run cap is comfortably above 8,400.

Rule of thumb
If your selection exceeds the per-run cap, either filter the source, split the entities, or upgrade. Do not "just hit Run again".