I am starting out and thought I would follow this tutorial:
AI Automation/Gumloop Live Workshop #1
The LinkedIn scraping node consumed nearly 1,200 credits yet retrieved headcount data for no more than four companies. It exhausted all of my available credits before the run could finish.
Hey @HGKhiz - I’ve requested access to view the flow. You can also enable ‘anyone with the link can view’ under the Share button.
To clarify, the LinkedIn scraper nodes use ProxyCurl, so data availability depends on what ProxyCurl has in its database. Their records are usually very up-to-date, but for smaller/newer companies the data can be limited.
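If you want to check coverage for a specific company before spending credits in a flow, here's a minimal sketch of querying ProxyCurl's company endpoint directly. The endpoint URL and the `company_size_on_linkedin` field are assumptions based on ProxyCurl's public docs, not anything Gumloop exposes, and the API key is a hypothetical placeholder:

```python
# Minimal sketch: check what headcount data ProxyCurl has for a company.
# Assumes ProxyCurl's public Company Profile endpoint; field names may differ.
import requests

PROXYCURL_API_KEY = "your-proxycurl-api-key"  # hypothetical placeholder

def fetch_company_headcount(linkedin_company_url: str):
    """Return the headcount ProxyCurl has on file, or None if missing."""
    response = requests.get(
        "https://nubela.co/proxycurl/api/linkedin/company",
        headers={"Authorization": f"Bearer {PROXYCURL_API_KEY}"},
        params={"url": linkedin_company_url},
        timeout=30,
    )
    response.raise_for_status()
    data = response.json()
    # Smaller/newer companies may simply not have this field populated.
    return data.get("company_size_on_linkedin")

if __name__ == "__main__":
    print(fetch_company_headcount("https://www.linkedin.com/company/gumloop/"))
```

If this returns `None` for a company, the scraper node likely won't find headcount data for it either, so it can be worth filtering those companies out before chaining further steps.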
Hey HGK, jumping in for Wasay – the LinkedIn scraping node can be expensive, especially if you're chaining that information into further AI steps (each of which costs credits).
I’ve gone ahead and sent some credits your way so you can keep testing, but be careful when running those types of flows!