I'm currently doing a project that uses R to process some large CSV files saved in my local directory, which is linked to my repo.
So far, I have managed to create the R project and commit and push R scripts to the repo with no problem.
However, the scripts read in the data from the CSV files saved in my local directory, so the code takes the form
df <- read.csv("mylocaldirectorylink")
However, this is not helpful, because my partner and I, who are working on the same project, have to change that path to our own local directory every time we pull from the repo. So I was thinking that maybe we could upload the CSV files to the GitHub repo and let the R script refer directly to the CSV files online.
So my questions are:
- Why can't I upload CSV files to GitHub? It keeps saying that my file is too large.
- If I can upload the CSV files, how do I read the data from them?
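For context, reading a CSV hosted on GitHub is just a matter of pointing `read.csv()` at the file's raw URL. A minimal sketch, where the user name, repo name, and file path are all hypothetical placeholders:

```r
# Hypothetical raw-file URL; substitute your own user, repo, branch, and path.
url <- "https://raw.githubusercontent.com/user/repo/main/data/mydata.csv"

# read.csv() accepts a URL as well as a local file path
df <- read.csv(url)
```

Note this only works for files small enough to be committed in the first place.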

(a) `df` is a bad variable name; (b) if you're getting that error then your CSV is huge and you should consider migrating to RDS files with `xz` compression. That will get you around the size limits. It's a bad idea to refer to GitHub URLs for data, but if the repo is cloned you can use the `rprojroot` package to ensure you are both using the local copies. If you're stuck with CSV (ugh), use Amazon S3, Google Drive, Dropbox, or some other similar service (as Jake suggested).
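A sketch of both suggestions, assuming the data sits in a `data/` folder at the project root (the file names here are hypothetical):

```r
library(rprojroot)  # locate files relative to the project root, not the working directory

# find_root() walks up from the current directory until it finds an .Rproj file,
# so the same relative path works on both collaborators' machines
root <- find_root(is_rstudio_project)
flights <- read.csv(file.path(root, "data", "mydata.csv"))

# One-time conversion to an xz-compressed RDS file, typically far smaller than the CSV
saveRDS(flights, file.path(root, "data", "mydata.rds"), compress = "xz")

# From then on, load the compact binary version instead
flights <- readRDS(file.path(root, "data", "mydata.rds"))
```

The RDS file may shrink below GitHub's size limit; if not, the `rprojroot` approach still lets each of you keep the large file locally while sharing identical code.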