Climate change and its impact on the global food supply is one of today's most pressing environmental issues. Increasingly frequent climate irregularities are already affecting lives around the world. This project focuses on quantifying how climate change will affect global food production.
The main aim of the project is to estimate the potential impact of climate change on staple crop production. It takes into account the implications of changes in temperature and precipitation, the effect of carbon dioxide on plant growth, and the uncertainty inherent in climate projections. The project therefore relies heavily on data visualization.
The biggest pain points I have identified are: finding the right data, getting access to it, understanding the tables and their purpose, cleaning the data, and explaining in lay terms how the data works and how it links to different areas and organizations.
Data is scattered across multiple sources, making it difficult to find the right asset. Part of the solution is to consolidate the information in a single place.
Here are a few options of where we might start looking for data:
Kaggle: Kaggle began as a website for data science competitions. Companies post a dataset and a question and usually offer a prize for the best answer.
APIs: APIs (application programming interfaces) are developer tools that allow us to access data directly from companies.
Government open data: a lot of government data is available online, so we can use census data, employment data, and tons of local government datasets such as New York City's 911 calls or traffic counts.
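As a minimal sketch of the API route, here is how a JSON response from an open-data endpoint could be fetched and parsed in Python using only the standard library. The endpoint URL, the crop-yield payload, and the field names are illustrative placeholders, not a real service.

```python
import json
from urllib.request import urlopen

def fetch_records(url):
    """Fetch a JSON payload from an open-data API endpoint."""
    with urlopen(url) as resp:
        return json.load(resp)

# Parsing works the same on any JSON payload; here we simulate a response
# from a hypothetical crop-yield endpoint instead of hitting the network.
sample_response = '[{"crop": "wheat", "year": 2020, "yield_t_per_ha": 3.4}]'
records = json.loads(sample_response)
print(records[0]["crop"])  # wheat
```

In practice, each API differs in authentication and rate limits, but the fetch-then-parse pattern stays the same.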
Unfortunately, real-life data is nothing like hackathon data or Kaggle data. It is much messier. The result? Data scientists spend most of their time pre-processing data to make it consistent before analyzing it, instead of building meaningful models. This tedious task involves cleaning the data, removing outliers, encoding variables, and so on. Although data pre-processing is often considered the worst part of a data scientist’s job, it is crucial that models are built on clean, high-quality data.
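A minimal sketch of what that pre-processing loop looks like in practice, using only the Python standard library. The toy yield readings, the 1.5×IQR outlier fence, and the crop categories are illustrative assumptions, not data from the project itself.

```python
import statistics

# Toy raw readings: crop yields with an unparseable entry and one obvious
# outlier (the numbers are illustrative, not real observations).
raw = ["0.4", "0.5", "0.5", "0.6", "n/a", "0.6", "0.7", "0.7", "0.8", "9.9"]

# 1. Cleaning: drop entries that cannot be parsed as numbers.
values = []
for v in raw:
    try:
        values.append(float(v))
    except ValueError:
        pass  # discard unparseable entries such as "n/a"

# 2. Outlier removal: keep values inside the 1.5 * IQR fences.
q1, _, q3 = statistics.quantiles(values, n=4)
iqr = q3 - q1
cleaned = [v for v in values if q1 - 1.5 * iqr <= v <= q3 + 1.5 * iqr]

# 3. Encoding: map a categorical variable to stable integer codes.
crops = ["wheat", "rice", "wheat", "maize"]
codes = {c: i for i, c in enumerate(sorted(set(crops)))}
encoded = [codes[c] for c in crops]
```

Each step here is a stand-in for what, on real data, can take dozens of rules per column; the point is that every one of them must run before any modeling starts.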
Another big challenge is communicating results to business executives. Managers and other stakeholders are often unfamiliar with the tools and the work behind the models, so they have to base their decisions on data scientists' explanations. If the latter can't explain how their model will affect the organization's performance, the solution is unlikely to be adopted.