I’m scraping a list of URLs in a subflow, performing some summarisation, and passing the resulting lists to another subflow that writes to a Supabase table. One of the lists (site_contents) has a size mismatch with the other lists (although the error doesn’t specify which), and this causes the flow to fall over.
If I run the initial scraping/summarising subflow independently, it completes successfully, with lists of matching sizes (as far as I can tell - the logs are poor). I thought the scraper might be erroring (despite the successful independent execution), but even with the error shield, I get the same result.
With the limited logs, I’d appreciate any other insight into a root cause. Thanks.
Hey @dandug! If you’re reporting an issue with a flow or an error in a run, please include the run link and make sure it’s shareable so we can take a look.
Find your run link on the history page. Format: https://www.gumloop.com/pipeline?run_id={your_run_id}&workbook_id={workbook_id}
Make it shareable by clicking “Share” → “Anyone with the link can view” in the top-left corner of the flow screen.
Provide details about the issue; more context helps us troubleshoot faster.
Essentially, when using loop mode or a node expecting List inputs, all list inputs need to be the same size. Otherwise, the node won’t know how to pair them correctly.
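For intuition only (this is a conceptual sketch, not Gumloop’s actual implementation), index-based pairing behaves like Python’s zip: items are matched position by position, so a shorter list either silently drops the trailing items or triggers an error.

```python
# Conceptual illustration of index-based pairing (not Gumloop internals).
urls = ["a.com", "b.com", "c.com"]
site_contents = ["<p>A</p>", "<p>B</p>"]  # one item short

# Plain zip silently drops the unpaired third URL:
print(list(zip(urls, site_contents)))  # -> 2 pairs, not 3

# Strict pairing surfaces the mismatch instead (Python 3.10+):
# list(zip(urls, site_contents, strict=True))  # raises ValueError
```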
The best approach here is to create a “Master” subflow that works for a single input from start to finish. For example, in your case, that could mean taking an input from the RSS Feeder node, summarizing it, and posting to Supabase. Once that works smoothly, you can loop it over the list input, which keeps all list sizes intact.
This would also mean moving the Supabase writing subflow inside the feed list or summarize subflow to keep everything aligned.
Alternatively, you can create a Custom Node that takes in all the list inputs and pads the shorter lists with empty strings until they match the longest one. This is a simple and straightforward approach; a rough sketch follows.
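A minimal sketch of that padding logic, assuming a Python Custom Node whose inputs are the three lists from this thread (the function and parameter names are illustrative, not Gumloop’s node API):

```python
# Hypothetical padding helper for a Custom Node; names are illustrative.
def pad_to_longest(*lists):
    target = max(len(lst) for lst in lists)  # length of the longest list
    # Append empty strings so every list reaches the target length.
    return [lst + [""] * (target - len(lst)) for lst in lists]

urls = ["a.com", "b.com", "c.com"]
summaries = ["sum A", "sum B", "sum C"]
site_contents = ["<p>A</p>", "<p>B</p>"]  # scraper returned one short

urls, summaries, site_contents = pad_to_longest(urls, summaries, site_contents)
assert len(urls) == len(summaries) == len(site_contents) == 3
```

One caveat: padding keeps the flow running, but the padded rows will carry empty values, so you may want to filter those out downstream before they reach Supabase.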
Thanks. To be honest, I’m still unclear why the mismatch has occurred, but I do like your suggestion of breaking out the iteration process so I will give that a go. Thanks again!
Oh, I understand why a mismatch is an issue, but I’ve no idea how the mismatch is being generated. As I trace back through the nodes, the outputs are all as expected and matched. For some reason the Web Scraper Agent outputs a list smaller than the list of URLs it was given - but only when it’s run from the master Orchestrator flow. When I run it within the subflow, it works fine.
Anyway, I split out some of that logic into other subflows along the lines you suggested, and it has completed successfully a few times. I’ve burnt quite a few credits over the last few days, so I’ll be slowing the pace of execution, but hopefully my daily summary will just chug along.
Yeah, if the outputs of some of the subflows are larger than the column size in our database, the logs get truncated, which can make it harder to trace the list-size issue. We’re working on scaling this up.
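In the meantime, one way to see for yourself which list is off is a small Custom Node that checks sizes right before the Supabase write. A sketch, assuming Python and the list names from your flow (site_contents is the only name confirmed in this thread; the others are placeholders):

```python
# Hypothetical size check to run just before the Supabase writer.
# Only "site_contents" is a confirmed name; the rest are placeholders.
def check_sizes(urls, summaries, site_contents):
    sizes = {
        "urls": len(urls),
        "summaries": len(summaries),
        "site_contents": len(site_contents),
    }
    if len(set(sizes.values())) > 1:
        # Name the offending lists instead of a generic mismatch error.
        raise ValueError(f"List size mismatch: {sizes}")
    return urls, summaries, site_contents
```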
Meanwhile, let me know if you run into any issues with the subflow approach. I’ve also sent you some credits so you can continue testing.