Looks like this is still happening 3 weeks after the initial report. Any updates on your promise to fix this in 10-14 days? This is preventing work for us.
Hey @Eyal_Bens! If you’re reporting an issue with a flow or an error in a run, please include the run link and make sure it’s shareable so we can take a look.
Find your run link on the history page. Format: https://www.gumloop.com/pipeline?run_id={{your_run_id}}&workbook_id={{workbook_id}}
Make it shareable by clicking "Share" → "Anyone with the link can view" in the top-left corner of the flow screen.
Provide details about the issue—more context helps us troubleshoot faster.
Hey Eyal, thank you for reaching out and for your patience as we work through this.
We’ve made a lot of progress on flow memory and stability, and in most cases flows now complete their runs without crashing. That said, there are still edge cases, especially with larger or more complex flows, where timeouts or errors can occur, as you’ve seen. This is a major area of focus for us right now, and we’re actively working on improving it further. I want to be transparent: even with the progress of the last few weeks, this is a substantial lift on our side, and I can’t give a specific date for when it will be fully resolved just yet.
For now, the best workaround is to break large runs into smaller files. I know this isn’t ideal, and I would feel exactly the same in your position, but it tends to keep runs under the current limits. I’d also recommend trying the flow I shared earlier, which uses a Run Code node to handle the filter operation in a single pass rather than looping through each row. This can save quite a bit of time compared to the standard setup.
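To give a rough idea of the pattern (not the exact flow from that earlier message), a single-pass filter inside a Run Code node might look something like the sketch below. The `main(input_csv)` signature, the `status` column, and the chunk size are all placeholders, not Gumloop’s actual node interface or your data:

```python
# Illustrative sketch only: function signature, column name, and chunk
# size are placeholders, not Gumloop's actual Run Code interface.
import io

import pandas as pd


def main(input_csv: str) -> str:
    # Load the whole sheet once instead of looping over rows one at a time.
    df = pd.read_csv(io.StringIO(input_csv))

    # Vectorized filter: a single pass over the data.
    filtered = df[df["status"] == "active"]

    return filtered.to_csv(index=False)


def split_into_chunks(input_csv: str, rows_per_chunk: int = 500) -> list[str]:
    # Optional helper for the "smaller files" workaround: split a large
    # CSV into pieces that each stay within the current run limits.
    df = pd.read_csv(io.StringIO(input_csv))
    return [
        df.iloc[i : i + rows_per_chunk].to_csv(index=False)
        for i in range(0, len(df), rows_per_chunk)
    ]
```

The vectorized filter is where the speedup comes from: the per-row loop in the standard setup is what eats most of the run time and drives the timeouts.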
If you’d like more detail on the work we’re doing, or want to discuss the technical side of the errors, I’m happy to go over that by email. Fixing this, along with platform speed, is our main priority right now.