I am encountering an issue where the web scraper is unable to find a specific link. In such cases, I would like to pass the value as None to prevent the data from getting mixed up. Could you please advise on how I can achieve this or if there are any settings I need to adjust?
Hey @Harbuzova! If you're reporting an issue with a flow or an error in a run, please include the run link and make sure it's shareable so we can take a look.
Find your run link on the history page. Format: https://www.gumloop.com/pipeline?run_id={{your_run_id}}&workbook_id={{workbook_id}}
Make it shareable by clicking "Share" → "Anyone with the link can view" in the top-left corner of the flow screen.
Provide details about the issue; more context helps us troubleshoot faster.
Hey @Harbuzova, you can use a Find & Replace node here and connect it to the Error output of the web agent scraper node. This lets you format a fallback value like "none" in case anything fails. Then, use a Join Paths node to merge the success and error paths, and write the result to Google Sheets.
Essentially, if the scraper works and returns the URL, it flows through Join Paths and writes that to the sheet. If the scraper fails, it goes through Find & Replace, formats the error output, and still writes to the sheet.
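Outside of Gumloop, the same pattern can be sketched in plain Python. This is only an illustration of the idea, not Gumloop's actual implementation; scrape_link and the example URLs are hypothetical stand-ins:

```python
from typing import Optional

def scrape_link(page_url: str) -> Optional[str]:
    # Hypothetical stand-in for the web agent scraper: returns the link
    # when found, raises when the page has no matching link.
    if "works" in page_url:
        return page_url + "/found-link"
    raise ValueError("link not found")

def scrape_with_fallback(page_url: str) -> str:
    # Error-path equivalent of the Find & Replace step: swap any failure
    # for a fixed "none" marker so the row is never left empty.
    try:
        link = scrape_link(page_url)
        return link if link else "none"
    except Exception:
        return "none"

# Join Paths equivalent: success and fallback values converge into one
# list, one entry per input, ready to be written to the sheet in order.
inputs = ["https://example.com/works", "https://example.com/broken"]
rows = [[url, scrape_with_fallback(url)] for url in inputs]
print(rows)
```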
The main problem is that the data is getting mixed up: the processed results are output in the wrong order, so they still end up mismatched.
Can you share the run link so I can view the inputs/outputs please? You can find the run link on the https://www.gumloop.com/history page or through the Previous Runs tab on the canvas.
Thanks for sharing the link and the sheet @Harbuzova. I'd recommend using the sheet updater node here so there's a 1:1 correspondence and the flow updates the same row it processes.
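A rough code analogue of that row-level update, reusing scrape_with_fallback from the sketch above: each result is written back into the row it came from (keyed by index) instead of being appended, so the order can never drift. The in-memory sheet here is a simplified stand-in for the real Google Sheet:

```python
# Simplified stand-in for the Google Sheet: one row per input URL,
# with a column reserved for the scraped link.
sheet = [
    ["https://example.com/works", ""],
    ["https://example.com/broken", ""],
]

# Sheet-updater style: write each result into the same row it was read
# from, rather than appending new rows at the bottom.
for row_index, row in enumerate(sheet):
    url = row[0]
    sheet[row_index][1] = scrape_with_fallback(url)

for row in sheet:
    print(row)
```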