ONLY NEED UML!!!
From this website:
Scrape this table:
Table 2. Global Radiative Forcing, CO2-equivalent mixing ratio, and the AGGI 1979-2018
Scraping this website is significantly easier than scraping the one used in Lab 2. Store the data in a SQLite database.
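A minimal sketch of the scrape-and-store step. The page URL is left blank in the assignment, and the table/column names (`aggi`, seven numeric columns) are assumptions; adjust both to the actual Table 2 layout.

```python
import sqlite3

import requests
from bs4 import BeautifulSoup


def scrape_aggi(url):
    """Return (year, co2, ch4, n2o, cfc12, cfc11, minor) rows from the page's table."""
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    rows = []
    for tr in soup.find("table").find_all("tr")[1:]:  # skip the header row
        cells = [td.get_text(strip=True) for td in tr.find_all("td")]
        if len(cells) >= 7:
            rows.append((int(cells[0]),) + tuple(float(c) for c in cells[1:7]))
    return rows


def store(rows, db_path="aggi.db"):
    """Insert scraped rows into a SQLite table, one row per year."""
    con = sqlite3.connect(db_path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS aggi (
               year INTEGER PRIMARY KEY, co2 REAL, ch4 REAL, n2o REAL,
               cfc12 REAL, cfc11 REAL, minor REAL)"""
    )
    con.executemany("INSERT OR REPLACE INTO aggi VALUES (?,?,?,?,?,?,?)", rows)
    con.commit()
    con.close()
```

Keeping `year` as the primary key makes the later one-row-per-request lookups by year trivial.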
Create six threaded agents: CO2, CH4, N2O, CFC12, CFC11, and 15-minor. Each agent extracts the data for its respective column one year at a time over the range 1979 through 2018. When its data has been extracted, each agent plots a linear regression of its CO2-equivalent mixing ratio.
Only one agent may access the database at a time, and the database releases only one line per request, so the agents must make repeated requests. Once an agent has acquired all of its data, it plots the data.
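One way to sketch the agent pattern, assuming a hypothetical SQLite table named `aggi` keyed by `year` with one column per gas (the names here are placeholders). A `threading.Lock` enforces one-agent-at-a-time access, each request returns exactly one line, and `np.polyfit` supplies the linear regression:

```python
import sqlite3
import threading

import numpy as np

DB_LOCK = threading.Lock()  # only one agent may touch the database at a time


def fetch_row(db_path, column, year):
    """One request yields one line: the value of `column` for `year`."""
    with DB_LOCK:  # serialize database access across agents
        con = sqlite3.connect(db_path)
        # column names come from our own fixed tuple, so the f-string is safe here
        row = con.execute(f"SELECT {column} FROM aggi WHERE year = ?", (year,)).fetchone()
        con.close()
    return row[0] if row else None


def agent(db_path, column, results):
    """Pull one year per request, then fit a line through the series."""
    years = list(range(1979, 2019))                      # 1979 through 2018
    values = [fetch_row(db_path, column, y) for y in years]  # repeated requests
    slope, intercept = np.polyfit(years, values, 1)      # linear regression fit
    results[column] = (years, values, slope, intercept)


def run_agents(db_path, columns=("co2", "ch4", "n2o", "cfc12", "cfc11", "minor")):
    results = {}
    threads = [threading.Thread(target=agent, args=(db_path, c, results))
               for c in columns]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

One caveat: Matplotlib is not thread-safe, so it is safest to have each thread compute its series and fit, then draw the six figures in the main thread after `join()`.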
THIS IS LAB 2:
Lab 2 – Web Scraping
Using BeautifulSoup, scrape this website:
Scrape the “List of countries by carbon dioxide emissions” for the data. Store the scraped data in an object from your SQLite database class.
Using the database passed from the backend, sort the data by the Fossil CO2 Emissions 2017 (% of world) column. Extract the top 10 countries' data and plot it as a pie chart using Matplotlib.
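A hedged sketch of that plot step, assuming the scraped rows already sit in a SQLite table; the table name `emissions` and the column name `pct_world_2017` are placeholders for whatever your database class actually stores:

```python
import sqlite3

import matplotlib
matplotlib.use("Agg")  # headless backend so this also runs without a display
import matplotlib.pyplot as plt


def plot_top10(db_path, out_png="top10_co2.png"):
    """Sort by the 2017 share-of-world column, take the top 10, draw a pie chart."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT country, pct_world_2017 FROM emissions "
        "ORDER BY pct_world_2017 DESC LIMIT 10"
    ).fetchall()
    con.close()

    countries = [r[0] for r in rows]
    shares = [r[1] for r in rows]

    plt.figure(figsize=(8, 8))
    plt.pie(shares, labels=countries, autopct="%1.1f%%")
    plt.title("Top 10 countries by fossil CO2 emissions, 2017 (% of world)")
    plt.savefig(out_png)
    plt.close()
    return countries, shares
```

Letting SQLite do the `ORDER BY ... LIMIT 10` keeps the sorting in the database layer rather than in Python.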
Note: I haven’t found a CSV, HTML, or XML data file to scrape. If you can find one, you can use BeautifulSoup mixed with regex to locate the file on the website, download it, and insert it into a SQLite database. An online guide to scraping data files from webpages:
I did find a GreenHouse.csv file located here:
But I haven’t tried to scrape it. Also, the data for each country spans 1990-2014; if you select this file, use the 2014 value.
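The BeautifulSoup-plus-regex idea above can be sketched as a link hunt: parse the page, then keep only anchors whose `href` matches a data-file pattern. This is a generic sketch, not tied to any particular page:

```python
import re

from bs4 import BeautifulSoup


def find_data_links(html, pattern=r"\.csv$"):
    """Return hrefs on the page whose target matches `pattern` (e.g. CSV files)."""
    soup = BeautifulSoup(html, "html.parser")
    rx = re.compile(pattern, re.IGNORECASE)
    return [a["href"] for a in soup.find_all("a", href=True) if rx.search(a["href"])]
```

Any matching link could then be downloaded and fed into the same SQLite insert path as the scraped table data.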