Sz945Kv commented Nov 23, 2025

Short description

Fixes an issue where ScrapeWithURL would crash and hang the scraping process whenever one of the many URLs did not return a valid result. Resolved by only printing the result when it is not None.
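The fix described above can be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual diff: the function names (`scrape_url`, `scrape_with_urls`) and the URL-matching logic are placeholders. The point is the guard at the end, which skips URLs whose scrape produced None instead of printing (and crashing on) them.

```python
import json

def scrape_url(url):
    # Placeholder for the real per-URL scraper dispatch; returns a dict
    # on a match, or None when no installed scraper matches the URL or
    # the slug yields no valid result.
    if "valid" in url:
        return {"url": url, "title": "Example"}
    return None

def scrape_with_urls(urls):
    for url in urls:
        result = scrape_url(url)
        # The fix: only emit the result when it is not None, so one
        # non-matching URL no longer aborts the rest of the scrape.
        if result is not None:
            print(json.dumps(result))

scrape_with_urls([
    "https://example.com/valid/slug",
    "https://example.com/broken/slug",
])
```

With the guard in place, the second URL is silently skipped instead of raising when its None result is serialized.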

feederbox826 (Collaborator) commented:

Not sure if this does what you intend for it, let me take some time to review and make sure

Sz945Kv (Author) commented Nov 24, 2025

> Not sure if this does what you intend for it, let me take some time to review and make sure

please do!

My tests were against a scene with a valid and an invalid URL, where the latter did not match any installed scraper. Another test used a scene with two URLs that both had matching scrapers, but one slug was modified so it no longer matched.
I'm unsure, however, whether this works because of some quirk in the spec or whether it's fully intentional.

