A simple C# console application that crawls a website starting from a URL, collects all links, and saves them to a file.
- Recursively visits links on a website
- Saves all discovered links to a text file
- Run the program.
- Enter the starting URL.
- The program will crawl the site and save all discovered links to `crawled_links.txt`.
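The core crawl loop can be sketched as follows. This is a minimal illustration, not the project's actual source: the regex-based link extraction, same-host filter, and variable names are assumptions, and a production crawler would use a real HTML parser and respect `robots.txt`.

```csharp
// Minimal sketch of the crawler's flow (assumes .NET 6+ top-level statements).
using System.Text.RegularExpressions;

var visited = new HashSet<string>();
var queue = new Queue<string>();
using var http = new HttpClient();

Console.Write("Enter the starting URL: ");
var start = Console.ReadLine()!.Trim();
queue.Enqueue(start);

while (queue.Count > 0)
{
    var url = queue.Dequeue();
    if (!visited.Add(url)) continue;      // skip URLs we have already seen

    string html;
    try { html = await http.GetStringAsync(url); }
    catch { continue; }                   // skip unreachable or non-HTML pages

    // Pull href targets with a simple regex (good enough for a sketch;
    // an HTML parser is the robust choice).
    foreach (Match m in Regex.Matches(html, "href=[\"']([^\"'#]+)[\"']"))
    {
        // Resolve relative links and stay on the starting site's host.
        if (Uri.TryCreate(new Uri(url), m.Groups[1].Value, out var abs)
            && abs.Host == new Uri(start).Host)
            queue.Enqueue(abs.ToString());
    }
}

File.WriteAllLines("crawled_links.txt", visited);
Console.WriteLine($"Saved {visited.Count} links to crawled_links.txt");
```

Using a queue with a `visited` set makes the traversal breadth-first and prevents infinite loops when pages link back to each other.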
- .NET 6 or newer
- Internet connection
MIT License