This is a big data analysis problem: a single task that requires data analysis using Python, Apache Spark, and Hadoop. All the requirements are in this notebook.
Please read through the whole notebook carefully. You have to write a Python script using Spark to analyze big data. Your script should produce 9 CSV files, one for each of 9 store categories, by analyzing some input CSV files. At the link below you will find a zip file containing 4 CSV files:
1. `2020-01-06-weekly-patterns.csv`
2. `2019-01-07-weekly-patterns.csv`
3. `core-places-nyc.csv`
4. `fast_food_chains.csv`
You will work with files 1, 2, and 3 to generate 9 CSV files that look like file 4, `fast_food_chains.csv`. That file is provided only to show how the output files should be formatted and what data they should contain.
The actual data set is much larger than the sample provided, but all of it is similar to files 1 and 2, so your Python script should run on other, similar data and still generate the 9 CSV files.
Below you will see a plot created from file 4, `fast_food_chains.csv`.
The 9 CSV files your script creates should be able to generate 9 similar plots, which are shown at the end along with the plotting code. Since you do not have the full data, your plots will not match those below exactly, but they should be similar.
This is the link for the data:
https://drive.google.com/file/d/1nZ9AZAxQ0evkZEpTa9G1WhUhpLH_WgRs/view?usp=sharing
You will submit the Python script and the 9 generated CSV files.
You are required to turn in a working Python script that runs on the Hadoop cluster. This notebook is only for developing and testing your code. We still use the SafeGraph data to better understand how NYC responded to the COVID-19 pandemic. If you have any doubts about the data, please consult SafeGraph's documentation for the Places Schema and Weekly Patterns.
Problem Description
To assess the food access problem in NYC before and during the COVID-19 pandemic, we would like to plot the visit patterns for all food stores (including restaurants, groceries, delis, etc.), such as the one shown below.
However, we suspect that the visit patterns may vary across different types of stores. Our hypothesis is that we changed our shopping behavior during the pandemic; for example, we visited fast food restaurants and wholesale stores more often compared to full-service restaurants and typical supermarkets. In particular, we are interested in the following store categories with their NAICS codes:
Big Box Grocers: 452210 and 452311
Convenience Stores: 445120
Drinking Places: 722410
Full-Service Restaurants: 722511
Limited-Service Restaurants: 722513
Pharmacies and Drug Stores: 446110 and 446191
Snack and Bakeries: 311811 and 722515
Specialty Food Stores: 445210, 445220, 445230, 445291, 445292, and 445299
Supermarkets (except Convenience Stores): 445110
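The category list above amounts to a NAICS-code-to-category lookup. As an illustrative sketch (not part of the provided notebook), it can be expressed as a plain Python dictionary, with category names matching the output folder names required later:

```python
# Map each NAICS code from the list above to its category name.
# The names match the 9 output folder names specified below.
CATEGORY_BY_NAICS = {
    "452210": "big_box_grocers", "452311": "big_box_grocers",
    "445120": "convenience_stores",
    "722410": "drinking_places",
    "722511": "full_service_restaurants",
    "722513": "limited_service_restaurants",
    "446110": "pharmacies_and_drug_stores", "446191": "pharmacies_and_drug_stores",
    "311811": "snack_and_bakeries", "722515": "snack_and_bakeries",
    "445210": "specialty_food_stores", "445220": "specialty_food_stores",
    "445230": "specialty_food_stores", "445291": "specialty_food_stores",
    "445292": "specialty_food_stores", "445299": "specialty_food_stores",
    "445110": "supermarkets_except_convenience_stores",
}

def category_of(naics_code):
    """Return the category for a NAICS code, or None if it is not of interest."""
    return CATEGORY_BY_NAICS.get(str(naics_code))
```

A dictionary like this can be broadcast to Spark workers or used in a column expression to filter Core Places down to the 9 categories.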
The plot above was created by the `linePlot()` function (defined later), which takes a pandas DataFrame consisting of 5 columns as follows:
* `year`: used to choose the trend line category (orange or blue).
* `date`: the day of the year for each data point, which we project to year 2020. We chose 2020 as the base year because it is a leap year and has all possible dates (i.e., month + day combinations). The actual date for a data point is the month and day from `date` combined with the year in `year`.
* `median`: used to draw the solid line showing the median visit count across all stores for that date.
* `low`: the lower bound of the "confidence interval". In our plot, it is the median minus the standard deviation, but kept at 0 or above.
* `high`: the upper bound of the "confidence interval". In our plot, it is the median plus the standard deviation, but kept at 0 or above.
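The projection described for the `date` column can be sketched with a small helper (an illustrative example, assuming dates arrive as ISO `YYYY-MM-DD` strings):

```python
import datetime

def project_to_2020(date_str):
    """Project an ISO date onto the base year 2020, keeping month and day.

    2020 is a leap year, so every month/day combination that occurs in the
    source data maps to a valid 2020 date (including Feb 29).
    """
    d = datetime.date.fromisoformat(date_str)
    return d.replace(year=2020).isoformat()
```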
**NOTES**
* The `low` and `high` values are used to create the transparent band seen in the plot.
* `low`, `median`, and `high` should be computed not only for stores that had visits but for all stores in Core Places that fit the category. As we learned previously, restaurants with no visits are not reported in the Weekly Patterns data set.
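Putting the two notes above together, the statistics for a single date can be sketched in plain Python/NumPy (in the actual script these would be computed per date inside Spark; the store-count parameter here is a hypothetical stand-in for a count derived from Core Places):

```python
import numpy as np

def visit_stats(visits, n_stores_in_category):
    """Compute (low, median, high) of visit counts for one date.

    `visits` holds counts only for stores that appear in the Weekly
    Patterns data; stores with no reported visits are padded in as zeros
    so the statistics cover every Core Places store in the category.
    """
    padded = list(visits) + [0] * (n_stores_in_category - len(visits))
    median = float(np.median(padded))
    std = float(np.std(padded))
    low = max(0.0, median - std)   # clamp the band at 0 or above
    high = max(0.0, median + std)
    return low, median, high
```

Note how the zero-padding can pull the median down sharply: if only 1 of 5 stores reported visits, the median of the padded counts is 0 even though the reporting store was busy.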
Objective
Your task is to produce the visit pattern data for each of the store categories above so that we can plot them in a similar way to our first plot, for comparison. You must process the 2 years' worth of pattern data on the cluster and produce 9 CSV-formatted folders (headers must be included), one for each category.
#### **OUTPUT DATA (on HDFS)**
Your code must create the following 9 sub-folders (corresponding to 9 categories) under the `OUTPUT_PREFIX` provided in the command line argument:
* `big_box_grocers`
* `convenience_stores`
* `drinking_places`
* `full_service_restaurants`
* `limited_service_restaurants`
* `pharmacies_and_drug_stores`
* `snack_and_bakeries`
* `specialty_food_stores`
* `supermarkets_except_convenience_stores`
Each folder contains the CSV records for its category, with the same schema specified above, **sorted by `year` and `date`**. For example, if I run your code with the following command:
I should have the following 9 folders populated with the expected output:
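On the cluster, the natural way to produce one such folder is Spark's writer, e.g. `df.sort("year", "date").write.csv(path, header=True)`. As a local, illustrative stand-in (the function name, file name, and sample records are made up for this sketch), the same layout can be mimicked with pandas:

```python
import os
import pandas as pd

def write_category(output_prefix, category, records):
    """Write one category's records into OUTPUT_PREFIX/<category>/,
    sorted by year and date, with a header row.

    Local stand-in for Spark's df.write.csv on HDFS, which writes a
    folder of part files rather than a single CSV file.
    """
    df = pd.DataFrame(records, columns=["year", "date", "median", "low", "high"])
    df = df.sort_values(["year", "date"])
    folder = os.path.join(output_prefix, category)
    os.makedirs(folder, exist_ok=True)
    path = os.path.join(folder, "part-00000.csv")
    df.to_csv(path, index=False)
    return path
```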