feat: expose function checkpoints (from/to) in Nango logs#5565
Conversation
  telemetryBag: { customLogs: 0, proxyCalls: 0, durationMs: 0, memoryGb: 0 },
- functionRuntime: 'lambda'
+ functionRuntime: 'lambda',
+ checkpoints: null
Unfortunately we lose the checkpoints when the lambda OOMs or times out (same as telemetryBag).
Not a deal breaker imho for now, but annoying nonetheless.
If this is something we want to fix, we could keep an execution log of some sort that we could interrogate at this point. For example, whenever a checkpoint is stored we could record it against the AWS request id (or task id). If it's not a deal breaker right now, maybe it's not worth pursuing in the short term, but if it becomes an issue we have options.
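A minimal sketch of that idea, assuming a hypothetical backend-side `executionLog` store keyed by the AWS request id (all names here are illustrative, not the actual Nango implementation):

```typescript
// Hypothetical execution log: checkpoint writes are recorded on the backend,
// keyed by the AWS request id, so the last checkpoint survives a runner
// OOM or timeout even though the runner never reports back.
type Checkpoint = Record<string, unknown>;

const executionLog = new Map<string, Checkpoint>();

// Called by the backend whenever a checkpoint is stored during an execution.
function recordCheckpoint(awsRequestId: string, checkpoint: Checkpoint): void {
    executionLog.set(awsRequestId, checkpoint);
}

// Called when the lambda dies without reporting: recover what was last saved.
function recoverLastCheckpoint(awsRequestId: string): Checkpoint | null {
    return executionLog.get(awsRequestId) ?? null;
}

recordCheckpoint('req-123', { cursor: 42 });
```

The same lookup works with a task id instead of the AWS request id; the point is only that the key is known to the backend before the runner dies.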
Yes, logs/proxy/checkpoints all touch the backend, so we could keep track of them there instead of relying on the runner to report them. Much more complicated, though.
Runner is now returning from/to checkpoints, which are exposed in the logs operation metadata. `from` is the checkpoint value at the beginning of the execution (aka: the first call to `getCheckpoint`); `to` is the last saved checkpoint.
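The from/to semantics can be sketched roughly like this (illustrative names only; this is not the actual runner code, just the capture rule described above):

```typescript
type Checkpoint = Record<string, unknown> | null;

// Tracks the checkpoint range for one execution:
// `from` is captured once, on the first getCheckpoint call;
// `to` is overwritten on every save, so it ends up as the last one.
class CheckpointTracker {
    private from: Checkpoint = null;
    private fromCaptured = false;
    private to: Checkpoint = null;

    getCheckpoint(stored: Checkpoint): Checkpoint {
        if (!this.fromCaptured) {
            this.from = stored; // value at the beginning of the execution
            this.fromCaptured = true;
        }
        return stored;
    }

    saveCheckpoint(checkpoint: Checkpoint): void {
        this.to = checkpoint; // last saved checkpoint wins
    }

    range(): { from: Checkpoint; to: Checkpoint } {
        return { from: this.from, to: this.to };
    }
}

const tracker = new CheckpointTracker();
tracker.getCheckpoint({ cursor: 10 }); // first read captures `from`
tracker.saveCheckpoint({ cursor: 20 });
tracker.saveCheckpoint({ cursor: 30 }); // `to` keeps only the latest save
```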
Force-pushed from d2454f9 to 85d0b9a
Review found no issues; changes appear well implemented.
Status: No Issues Found | Risk: Low
Review Details
📁 16 files reviewed | 💬 0 comments
Instruction Files
├── .claude/
│ ├── agents/
│ │ └── nango-docs-migrator.md
│ └── skills/
│ ├── agent-builder-skill/
│ │ ├── EXAMPLES.md
│ │ └── SKILL.md
│ ├── creating-integration-docs/
│ │ └── SKILL.md
│ └── creating-skills-skill/
│ └── SKILL.md
├── AGENTS.md
└── GEMINI.md
Runner is now returning from/to checkpoints, which are exposed in the logs operation metadata to make them visible to customers for monitoring/debugging purposes.
`from` is the checkpoint value at the beginning of the execution (aka: the last saved checkpoint of the previous execution); `to` is the last saved checkpoint of this execution.
Note: in a follow-up PR, the checkpoint info will be added to the sync webhook.
It also propagates these checkpoint ranges through execution results into the jobs service, updating the jobs API contract and task payloads so handlers capture checkpoints on both success and error paths for end-to-end logging and metadata.
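A sketch of what "checkpoints on both success and error paths" can look like in the result contract (hypothetical types, not the actual jobs API):

```typescript
type Checkpoint = Record<string, unknown> | null;
interface CheckpointRange { from: Checkpoint; to: Checkpoint }

// Hypothetical execution result: the checkpoint range is attached to both
// union members, so the jobs service can log it even when the run failed.
type ExecutionResult =
    | { success: true; output: unknown; checkpoints: CheckpointRange }
    | { success: false; error: string; checkpoints: CheckpointRange };

// Handlers can read checkpoints without narrowing, since both paths carry them.
function toLogMetadata(result: ExecutionResult) {
    return {
        success: result.success,
        checkpoints: result.checkpoints
    };
}

const failed: ExecutionResult = {
    success: false,
    error: 'boom',
    checkpoints: { from: { cursor: 1 }, to: { cursor: 2 } }
};
```

Putting the range on every union member (rather than only the success branch) is what makes the error-path logging fall out for free.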
Possible Issues
• If `getCheckpoint` is never called, `from` may remain `null` and logs may show an incomplete range; confirm that is acceptable for monitoring expectations.
This summary was automatically generated by @propel-code-bot
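One way a log consumer could surface that edge case explicitly instead of rendering a bare `null` (illustrative helper, not part of the PR):

```typescript
type Checkpoint = Record<string, unknown> | null;
interface CheckpointRange { from: Checkpoint; to: Checkpoint }

// Render a checkpoint range for logs; a null `from` (getCheckpoint never
// called) is shown as an explicit marker so "no prior checkpoint" is
// distinguishable from missing data.
function formatRange(range: CheckpointRange): string {
    const from = range.from === null ? '<none>' : JSON.stringify(range.from);
    const to = range.to === null ? '<none>' : JSON.stringify(range.to);
    return `${from} -> ${to}`;
}
```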