What would be really cool is this workflow:
- Fetch the raw HTML content
- Let the end user define the scope of the HTML element where the data exists
- Run the LLM only on that short DOM element
Right now it looks like the LLM runs over the whole page content, which is probably a waste of tokens and would explain the high credit usage of the web agent scraper. A rough sketch of what I mean is below.
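Just to illustrate the idea outside the tool, here is a minimal Python sketch. The URL, CSS selector, and the `extract_with_llm` stub are all placeholders I made up, not the actual node internals; the point is only that the page gets narrowed to the user-scoped element before anything is sent to the model:

```python
import requests
from bs4 import BeautifulSoup


def fetch_scoped_html(url: str, css_selector: str) -> str:
    """Fetch the raw page, then keep only the element the user scoped."""
    raw_html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(raw_html, "html.parser")
    element = soup.select_one(css_selector)  # user-defined scope
    if element is None:
        raise ValueError(f"No element matched selector {css_selector!r}")
    return str(element)


def extract_with_llm(scoped_html: str, instruction: str) -> str:
    """Placeholder for the LLM step - swap in whatever model call the node uses."""
    prompt = f"{instruction}\n\nHTML:\n{scoped_html}"
    # return llm_client.complete(prompt)  # hypothetical call
    return prompt  # stub: just show what would actually be sent


if __name__ == "__main__":
    # Hypothetical example values
    url = "https://example.com/products"
    selector = "div#product-list"  # the user says: the data lives here
    scoped = fetch_scoped_html(url, selector)
    print(f"Scoped snippet is {len(scoped)} chars instead of the whole page")
    print(extract_with_llm(scoped, "Extract product names and prices as JSON"))
```

Even a rough selector like this should cut the token count dramatically compared to feeding the full page into the model.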
More on this here: Web scraping node - Return HTML - #7 by Shrikar