Web Scraping Project: ESPN Cricinfo and Wikipedia

Project Overview

This project demonstrates how to use Python for web scraping to extract data from the ESPN Cricinfo and Wikipedia websites. The project uses two main libraries:

  • Pandas: For extracting tables from the ESPN Cricinfo webpage.

  • BeautifulSoup: For parsing and extracting specific information from a Wikipedia page.

Tools and Libraries

  • Python 3.x
  • Pandas
  • BeautifulSoup
  • Requests
  • lxml (HTML parser used by both scripts)

Installation

To run this project, you need to have Python installed along with the required libraries. You can install the libraries using pip.

pip install pandas
pip install beautifulsoup4
pip install requests
pip install lxml

Usage

1. Extracting Tables from ESPN Cricinfo

The script uses Pandas' read_html() function to extract all HTML tables from a specific ESPN Cricinfo webpage.

import pandas as pd
import requests
from io import StringIO

# URL of the webpage
url = "https://www.espncricinfo.com/series/ipl-2024"

# Fetch the page with a browser-like User-Agent;
# some sites reject requests from the default client
response = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
response.raise_for_status()

# Extract every table in the downloaded HTML as a list of DataFrames
tables = pd.read_html(StringIO(response.text))

# Display the first table
print(tables[0])
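Because read_html() returns a list of DataFrames, you can experiment with it offline before pointing it at a live page. Below is a minimal, self-contained sketch using an inline HTML table with made-up data (not the actual ESPN page):

```python
import pandas as pd
from io import StringIO

# A small inline HTML table standing in for a scraped page (hypothetical data)
html = """
<table>
  <tr><th>Team</th><th>Wins</th></tr>
  <tr><td>CSK</td><td>5</td></tr>
  <tr><td>MI</td><td>4</td></tr>
</table>
"""

# read_html() parses every <table> in the input and returns a list of DataFrames
tables = pd.read_html(StringIO(html))
df = tables[0]
print(df)
```

Each element of the list is an ordinary DataFrame, so the usual Pandas tools (to_csv(), filtering, sorting) apply directly.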
2. Extracting Information from Wikipedia

The script uses BeautifulSoup to parse and extract specific information from a Wikipedia page.

from bs4 import BeautifulSoup
import requests

# URL of the Wikipedia page
url = "https://en.wikipedia.org/wiki/Indian_Premier_League"

# Sending a request to the webpage
response = requests.get(url)

# Parsing the webpage content
soup = BeautifulSoup(response.content, 'lxml')

# Extract the content you want with find() and find_all(), specifying tags
first_paragraph = soup.find('p').get_text()


print(first_paragraph)
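The difference between find() and find_all() is that find() returns only the first matching tag while find_all() returns every match. A minimal, self-contained sketch on inline HTML (made-up content, not the actual Wikipedia page):

```python
from bs4 import BeautifulSoup

# Inline HTML standing in for a downloaded page (hypothetical content)
html = """
<html><body>
  <p>First paragraph.</p>
  <p>Second paragraph.</p>
  <a href="https://example.com">A link</a>
</body></html>
"""

# html.parser is built in; 'lxml' works the same way if installed
soup = BeautifulSoup(html, 'html.parser')

# find_all() returns every matching tag; find() returns only the first
paragraphs = [p.get_text() for p in soup.find_all('p')]
link = soup.find('a')['href']

print(paragraphs)  # ['First paragraph.', 'Second paragraph.']
print(link)        # https://example.com
```

Tag attributes such as href are read with dictionary-style access on the tag, as shown for the link above.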
