The Moore Institute Open Data Portal
GIS Standardization of Microplastic Pollution Research
The Moore Institute for Plastic Pollution Research
Leading the Charge in Microplastic Research and Education
Founder Charles Moore Making Waves by Raising Awareness of Microplastics in Our Oceans and Environment
A Microplastics Breakdown
Working Together on Microplastics
California is breaking ground in the US by facilitating collaboration on research and education and by taking action to mitigate and find solutions to microplastic pollution.
CA Legislation on Plastics and Microplastics
Microplastics research is still considered young (roughly 20 years old), so the field lacks uniform methods and standards, which holds back large-scale analysis.
Contributing To The Development Of The TMIODPMPP
Open Source Principles
Our Open Source Tools and Resources
Our Project Plan
In order to visualize and analyze microplastic pollution data with GIS programs and R Shiny, we first needed to create a high-quality microplastics dataset with samples collected closer to California. Working alongside our clients, we collected, cleaned, and assessed that data and developed solutions for visualizing and analyzing microplastics (a minimal mapping sketch appears after the script below).
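As a sketch of the cleaning step, the snippet below standardizes reported concentrations to a common unit. The column names (latitude, longitude, units, particle_count) are hypothetical placeholders for illustration, not the portal's actual schema.

# A minimal cleaning sketch; the column names are assumptions,
# not the portal's actual schema
library(dplyr)
raw_samples <- read.csv("path_to_your_samples.csv")
clean_samples <- raw_samples %>%
  # Drop records that cannot be mapped
  filter(!is.na(latitude), !is.na(longitude)) %>%
  # Convert concentrations reported per liter to particles per cubic meter
  mutate(particles_per_m3 = ifelse(units == "particles/L",
                                   particle_count * 1000,
                                   particle_count))

One concrete assessment step was to gauge how consistently density-separation methods are reported in the literature by scanning a corpus of manuscripts for method keywords: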
# Load the necessary libraries
library(pdftools)
library(stringr)
# Define the list of keywords
keywords <- c("SPT", "Hyper-saline solution", "glass funnels", "clamped flexible tubing",
"NaCl", "Sodium polytungsenate", "Nal saline", "density separation")
# Convert all keywords to lower case for case insensitive search
keywords <- tolower(keywords)
# Define the directory where the manuscripts are located
manuscripts_dir <- "path_to_your_manuscripts_directory"
# List the PDF files in the manuscripts directory
manuscripts_files <- list.files(manuscripts_dir, pattern = "\\.pdf$")
# Initialize a counter for the number of papers that contain the keywords
counter <- 0
# Iterate over the files
for (file_name in manuscripts_files) {
  # Define the file path
  file_path <- file.path(manuscripts_dir, file_name)
  # Read the content of the file and lower-case it so the match
  # is case insensitive, mirroring the lower-cased keywords
  content <- tolower(pdf_text(file_path))
  # Check whether any page contains any of the keywords
  if (any(str_detect(content, paste(keywords, collapse = "|")))) {
    counter <- counter + 1
  }
}
# Calculate the percentage of papers that contain at least one keyword
percentage <- (counter / length(manuscripts_files)) * 100
print(paste0(round(percentage, 1), "% of papers contain at least one of the keywords in any section of the paper."))
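With a cleaned, standardized dataset in hand, the mapping goal from our project plan can be sketched in a few lines of R Shiny. This is an illustration only: it assumes the hypothetical clean_samples data frame from the cleaning sketch above and is not the portal's production app.

# A minimal R Shiny map sketch; clean_samples and its columns are
# hypothetical assumptions carried over from the cleaning sketch
library(shiny)
library(leaflet)
ui <- fluidPage(
  titlePanel("Microplastic Samples"),
  leafletOutput("sample_map")
)
server <- function(input, output, session) {
  output$sample_map <- renderLeaflet({
    leaflet(clean_samples) %>%
      addTiles() %>%
      # Scale each marker by the standardized concentration
      addCircleMarkers(lng = ~longitude, lat = ~latitude,
                       radius = ~sqrt(particles_per_m3))
  })
}
shinyApp(ui, server)

From here, the same data frame can also feed GIS exports (for example, writing GeoJSON with the sf package) for analysis outside of R.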