Hey @Eyal_Bens! If you’re reporting an issue with a flow or an error in a run, please include the run link and make sure it’s shareable so we can take a look.
Find your run link on the history page. Format: https://www.gumloop.com/pipeline?run_id={{your_run_id}}&workbook_id={{workbook_id}}
Make it shareable by clicking "Share" → "Anyone with the link can view" in the top-left corner of the flow screen.
Provide details about the issue—more context helps us troubleshoot faster.
Hey @Eyal_Bens – this might be too much data for a single run at the moment. I’d recommend splitting it into batches of around 1,000 rows at a time.
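If it’s easier to pre-split the file outside of Gumloop before uploading, a quick local script can do it. This is just a rough sketch (the `urls.csv` file name and the batch size are placeholders, adjust to your data):

```python
# Hypothetical pre-processing step (not a Gumloop feature): split a large CSV
# into ~1,000-row chunks locally, then run each chunk as its own flow run.
import csv
from pathlib import Path

BATCH_SIZE = 1000
src = Path("urls.csv")  # assumed input file name

with src.open(newline="") as f:
    reader = csv.reader(f)
    header = next(reader)
    rows = list(reader)

for i in range(0, len(rows), BATCH_SIZE):
    out = src.with_name(f"{src.stem}_batch_{i // BATCH_SIZE + 1}.csv")
    with out.open("w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)            # keep the header in every chunk
        writer.writerows(rows[i:i + BATCH_SIZE])
```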
That said, the most robust solution here would be to use a Google Sheet Writer node instead of the CSV Writer, and build a subflow that processes a single URL – then loop it over your list of URLs. In your case, the subflow would include everything downstream of the Filter node: the scraping subflow, Combine Text, Extract Data, and finally the Sheet Writer (instead of CSV Writer).
The benefit of this setup is that each subflow run writes data to the sheet individually, so all URLs are processed in parallel and you get data showing up in real time, instead of all at once at the end.
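To illustrate the pattern outside of Gumloop, here’s a plain-Python sketch of the same idea. `process_url` and `append_row` are hypothetical stand-ins for the per-URL subflow and the Sheet Writer; the point is that each result is written as soon as that URL finishes, instead of being buffered until the whole list is done:

```python
# Sketch only: each URL is handled independently, and its row is appended the
# moment it completes, which is what gives you parallelism + real-time results.
import random
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def process_url(url: str) -> dict:
    # Stand-in for the scrape -> Combine Text -> Extract Data subflow.
    time.sleep(random.uniform(0.1, 0.5))  # simulate variable scrape time
    return {"url": url, "data": f"extracted from {url}"}

def append_row(row: dict) -> None:
    # Stand-in for the Google Sheet Writer: one append per finished URL.
    print(row)

def run(urls: list[str]) -> None:
    with ThreadPoolExecutor(max_workers=10) as pool:
        futures = [pool.submit(process_url, u) for u in urls]
        for fut in as_completed(futures):
            append_row(fut.result())  # rows land incrementally, in completion order

run([f"https://example.com/page/{i}" for i in range(25)])
```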