Check for cross submits #68
base: 2.x
Conversation
Signed-off-by: Julien Nioche <[email protected]>
…wl#65 Signed-off-by: Julien Nioche <[email protected]>
sebastian-nagel left a comment
Hi @silentninja, thanks for the contribution! The code looks good and the accompanying unit test is very appreciated.
I see that the PR is still marked as "draft". But here is one comment which might require a deeper change:
How are sitemap indexes handled? For example, in the following setup:
robots.txt -> sitemap-index.xml -> sitemap-news.xml.gz
                                -> sitemap-video.xml.gz
                                -> sitemap-books.xml.gz
Then looking into the robots.txt alone is not enough. A recursive lookup into a sitemap to check whether it's an index seems too expensive, esp. because the robots.txt is likely cached while sitemaps are definitely not.
What about keeping traces in the status index for every sitemap, recording from which robots.txt it was detected? A similar feature is already available in StormCrawler to trace the seed origin of URLs, see metadata.track.path in MetadataTransfer. Of course, it would be sufficient to track only the original robots.txt host name(s). Note: it's not uncommon for a sitemap to be referenced from multiple hosts. This way it would not even be necessary to fetch any robots.txt in case it is not found in the cache.
One real example of such a "(news) sitemap detection chain":
https://www.anews.com.tr/robots.txt
-> https://www.anews.com.tr/sitemap/index.xml
-> https://www.anews.com.tr/sitemap/news.xml
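The tracking idea above could be sketched roughly as follows. This is a minimal, self-contained illustration only; SitemapOriginTracker and its method names are hypothetical and not StormCrawler API:

```java
import java.util.*;

// Hypothetical sketch: remember from which robots.txt host(s) a sitemap URL
// was discovered (directly or via a sitemap index), so a later cross-submit
// check can consult this record instead of re-fetching robots.txt.
class SitemapOriginTracker {
    // sitemap URL -> host names of the robots.txt files that referenced it
    private final Map<String, Set<String>> origins = new HashMap<>();

    // Record that robotsHost's robots.txt led (possibly transitively) to
    // sitemapUrl. A sitemap may be referenced from multiple hosts.
    void recordOrigin(String sitemapUrl, String robotsHost) {
        origins.computeIfAbsent(sitemapUrl, k -> new HashSet<>()).add(robotsHost);
    }

    // Cross-submit check: is targetHost among the hosts whose robots.txt
    // referenced this sitemap?
    boolean isVerified(String sitemapUrl, String targetHost) {
        return origins.getOrDefault(sitemapUrl, Collections.emptySet())
                      .contains(targetHost);
    }
}
```

Because a sitemap may be referenced from several hosts, the tracker keeps a set of origin hosts per sitemap URL; the cross-submit check then only needs a lookup instead of a robots.txt fetch.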
Thanks for the review @sebastian-nagel.
Why do we need to keep traces in the status index if
Sitemap indexes are tricky, especially for sitemaps with an incomplete trace. For example, if the seed is the
@sebastian-nagel I made some commits to check the path using MetadataTransfer like you suggested. Can you see if that makes sense?
@sebastian-nagel I addressed most of your concerns except for filtering out the large hosting domains. The Apache HttpClient, which we use to get the root domain, does not differentiate these private domains. We would need to create a separate list of these large hosting domains from the Public Suffix List and filter them out. I will create an issue to track it.
This is already implemented in crawler-commons' EffectiveTldFinder. It's already a dependency, but we should upgrade it.
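For illustration, the public-suffix idea behind an effective-TLD finder can be sketched with a tiny hard-coded suffix set. This is not the crawler-commons implementation, which loads the full Public Suffix List, including the "private" suffixes registered by hosting providers:

```java
import java.util.*;

// Minimal illustration of public-suffix-aware domain extraction.
// The real Public Suffix List contains both ICANN suffixes (e.g. "com",
// "co.uk") and private hosting-provider suffixes (e.g. "github.io");
// the tiny set below is for demonstration only.
class DomainExtractor {
    private static final Set<String> SUFFIXES =
            new HashSet<>(Arrays.asList("com", "co.uk", "github.io"));

    // Return the registrable domain: the longest known suffix plus one label.
    static String registrableDomain(String host) {
        String[] labels = host.split("\\.");
        for (int i = 0; i < labels.length - 1; i++) {
            String candidate = String.join(".",
                    Arrays.copyOfRange(labels, i + 1, labels.length));
            if (SUFFIXES.contains(candidate)) {
                return labels[i] + "." + candidate;
            }
        }
        return host;
    }
}
```

With the private suffix "github.io" in the list, user.github.io and other-user.github.io resolve to different registrable domains, which is exactly the distinction missing from an ICANN-only view.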
…ttp.conn.util.PublicSuffixMatcher to get the hostnames when checking for cross submit
Neat! I made changes to the PR based on your suggestion, thanks!
sebastian-nagel left a comment
Very sorry, @silentninja, for the overlong delay... I'd totally understand if you cannot continue working on this PR now. In that case, I'd be happy to take it on my shoulders. Let me know... In any case, thank you very much for your work!
    // Cross-host checks
    Metadata targetMetadata = metadataTransfer.getMetaForOutlink(targetURL.toString(), sitemapURL.toString(), metadata);
My hint about the MetadataTransfer wasn't about using it for link checking, but to forward the robots.txt URL to the record of a sitemap URL.
This can be done by setting the configuration property metadata.track.path to true. The MetadataTransfer is called on all outlinks and also redirects from the StatusEmitterBolt.
Two caveats:
- it is not the robots.txt URL that is tracked, but the URL which was going to be fetched and which triggered the robots.txt to be fetched and checked. But this should be equivalent.
- in case there is a chain of redirects and/or a sitemap index record between the robots.txt and the news sitemap, all intermediate "hops" are recorded. Then the first element of Metadata.urlPathKeyName is the original URL. Note: better use the string constant, not the literal "url.path".
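Under the assumption that the tracked path is stored as a multi-valued metadata entry, reading back the original URL might look like this. This is a sketch with a plain map standing in for StormCrawler's Metadata class; only the key name "url.path" is taken from the discussion above:

```java
import java.util.*;

// Sketch: with metadata.track.path enabled, each record carries the list of
// "hops" that led to it (redirects, sitemap index entries, ...). The first
// element is the original URL, e.g. the one whose fetch triggered the
// robots.txt lookup. A plain multimap stands in for StormCrawler's Metadata.
class TrackedPath {
    // Mirrors the idea of Metadata.urlPathKeyName; prefer the constant
    // over the literal in real code.
    static final String URL_PATH_KEY = "url.path";

    static String originalUrl(Map<String, List<String>> metadata) {
        List<String> path = metadata.get(URL_PATH_KEY);
        return (path == null || path.isEmpty()) ? null : path.get(0);
    }
}
```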
    public String getHost(URI url) {
        if (this.crossSubmitLenient) {
👍 Nice. That's a good, lean solution.
    ol.getMetadata().setValue(Constants.STATUS_ERROR_MESSAGE, errorMessage);
    Values v = new Values(ol.getTargetURL(), ol.getMetadata(), Status.ERROR);
    collector.emit(StatusStreamName, tuple, v);
Good question whether URLs failing the cross-submit check should be emitted at all:
- it unnecessarily fills up the status index (these are URLs which are not considered to be fetched)
- the error status is not necessarily a permanent one: depending on the configuration, an error page is retried after some time. The cross-submit check would then be circumvented after a time delay.
     * URLs from example.net fail since their robots reference a different sitemap index.
     */
    @Test
    public void test_cross_host_submission_sitemaps() throws IOException, UnknownFormatException, URISyntaxException {
Not a canonical Java method name; the convention is camelCase, e.g. testCrossHostSubmissionSitemaps.
                SitemapType.SITEMAP, type);
    }

    @Test
👍 Good to have a unit test!
Thanks for reviewing the PR! I'm definitely still interested in working on it. I've been tied up with moving to a new house, but I'll be able to pick this back up next week.
Fixes #32
Skips unverified sitemap links by checking them against the robots.txt of the target link's host.