# WebCrawler

A simple C# console application that recursively crawls a website starting from a given URL, collects all discovered links, and saves them to a file. It is useful for site mapping, link analysis, and content discovery.

## Features

- Recursively visits links on a website (see the sketch after this list)
- Saves all discovered links to a text file
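
The repository's source isn't shown here, but a crawler matching this description might look roughly like the sketch below. This is a minimal illustration, not the project's actual code: the class name, the depth limit, and the regex-based link extraction are all assumptions (the real program may use an HTML parser and different prompts).

```csharp
// Minimal sketch of a recursive link crawler (illustrative, not the repo's code).
using System;
using System.Collections.Generic;
using System.IO;
using System.Net.Http;
using System.Text.RegularExpressions;
using System.Threading.Tasks;

class Crawler
{
    static readonly HttpClient Http = new();
    static readonly HashSet<string> Visited = new();

    static async Task Crawl(Uri url, int depth)
    {
        // Stop at the depth limit (assumed here) and skip pages already seen.
        if (depth <= 0 || !Visited.Add(url.ToString())) return;

        string html;
        try { html = await Http.GetStringAsync(url); }
        catch (HttpRequestException) { return; } // unreachable page: skip it

        // Pull href values out with a simple regex; a real HTML parser is more robust.
        foreach (Match m in Regex.Matches(html, "href=[\"'](?<u>[^\"'#]+)[\"']"))
        {
            if (Uri.TryCreate(url, m.Groups["u"].Value, out var link)
                && (link.Scheme == Uri.UriSchemeHttp || link.Scheme == Uri.UriSchemeHttps))
            {
                await Crawl(link, depth - 1);
            }
        }
    }

    static async Task Main()
    {
        Console.Write("Enter the starting URL: ");
        var start = new Uri(Console.ReadLine()!.Trim());
        await Crawl(start, depth: 2);
        File.WriteAllLines("crawled_links.txt", Visited);
        Console.WriteLine($"Saved {Visited.Count} links to crawled_links.txt");
    }
}
```

A `HashSet` of visited URLs is what keeps the recursion from looping on pages that link back to each other; without it, any cycle in the site graph would recurse forever.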

## Usage

1. Run the program.
2. Enter the starting URL.
3. The program will crawl the site and save the links to `crawled_links.txt`. A sample session is shown below.
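
Assuming the project is launched with `dotnet run` from the repository root, a session might look like the following (the prompt text and link count are illustrative):

```text
$ dotnet run
Enter the starting URL: https://example.com
Saved 42 links to crawled_links.txt
```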

## Requirements

- .NET 6 or newer
- Internet connection
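
To check whether a compatible SDK is installed, you can run `dotnet --version`; any output of 6.0 or higher should work (the version shown below is just an example):

```text
$ dotnet --version
6.0.428
```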

## License

MIT License
