Credit Burn on Website Scraper

Hi,

I am starting out and thought I would follow this tutorial:

AI Automation/Gumloop Live Workshop #1

The LinkedIn scraping node consumed nearly 1,200 credits yet retrieved headcount data for no more than four companies, and it exhausted all of my available credits before the run could complete.

Workbook:
https://www.gumloop.com/pipeline?workbook_id=oBVhnzxcSbPQpdmasxUpEr&tab=2&run_id=izKWgjuwv9EQEbqzQ5zsUb

Did I do something wrong?

Thanks

Hey @HGKhiz - I’ve requested access to view the flow. You can also enable ‘anyone with the link can view’ under the share button.

To clarify, the LinkedIn scraper nodes use ProxyCurl, so data availability depends on what's in ProxyCurl's database. It's usually very up to date, but for smaller or newer companies the data can be limited.
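For anyone curious what a single lookup looks like behind the scenes, here's a minimal sketch of a ProxyCurl-style company lookup. The endpoint, parameter names, environment variable, and response field are assumptions for illustration only; Gumloop's actual node implementation may differ.

```python
import os
import requests

# Hypothetical env var name for an API key -- an assumption, not Gumloop's setup.
API_KEY = os.environ["PROXYCURL_API_KEY"]


def get_company_headcount(linkedin_url: str) -> int | None:
    """Look up a company's employee count from a ProxyCurl-style company endpoint."""
    resp = requests.get(
        "https://nubela.co/proxycurl/api/linkedin/company",  # assumed endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"url": linkedin_url},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    # Smaller or newer companies may simply be missing from the database,
    # so this field can be absent or null -- but the call still costs credits.
    return data.get("company_size_on_linkedin")  # assumed field name


if __name__ == "__main__":
    print(get_company_headcount("https://www.linkedin.com/company/gumloop"))
```

The key point for credit usage: every company in your list triggers a lookup like this, whether or not any data comes back.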

Hi Wasay

Access has been given.

Thanks

Hey HGK, jumping in for Wasay – the LinkedIn scraping node can be expensive, especially if you're chaining that information into further AI steps (each of which costs credits).

I’ve gone ahead and sent some credits your way so you can keep testing, but be careful when running those types of flows!
