Often when citizen scientists view an image, they want context. Where is this? Am I really seeing kelp, or is this sand or mudflats? Fortunately, we have you covered. In the video below, I show you how to view the metadata for each individual image, including how to view the area pictured in Google Maps. Now, the map on Google Maps isn’t going to be from the same time as the Landsat image, so there may or may not be kelp in the same places. But you can at least get a better high-resolution view of the area to inform your classifications if you want to.
One question that has come up a few times with our consensus classifications is: does the consensus threshold really matter when it comes to looking at change in kelp forests over the long term?
While our data isn’t quite up to looking at large-scale timeseries yet (we’re still digging through a few thorny methodological issues), I grabbed the complete dataset we have for the Landsat scene around Los Angeles from work Isaac is doing and decided to take a look. Note, this is totally raw and I haven’t done any quality control checking on it, but the lessons are so clear I wanted to share them with you, our citizen scientists.
After aggregating to quarterly data and then selecting the scene that had the highest kelp abundance for that quarter (i.e., probably the fewest clouds obscuring the view), we can see a few things. First, yeah, a single classifier alone is never good.
Note, I haven’t transformed the data to area covered, instead we’re just going with number of pixels (1 pixel = 30 x 30m). But, wow, we need consensus!
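To make the pixel-to-area arithmetic and the pick-the-best-scene-per-quarter step concrete, here’s a minimal sketch in Python (our own scripts are in R). The column names and numbers are made up for illustration, not the project’s actual schema.

```python
# Hypothetical sketch of the quarterly aggregation described above.
# Column names (scene_timestamp, kelp_pixels) are assumptions.
import pandas as pd

# Toy classification totals: one row per Landsat scene
scenes = pd.DataFrame({
    "scene_timestamp": pd.to_datetime(
        ["2008-01-10", "2008-02-20", "2008-04-05", "2008-05-15"]),
    "kelp_pixels": [120, 340, 90, 410],
})

# Each Landsat pixel is 30 m x 30 m = 900 square meters
scenes["kelp_area_m2"] = scenes["kelp_pixels"] * 30 * 30

# Aggregate to quarters, keeping the scene with the most kelp
# (i.e., probably the fewest clouds obscuring the view)
scenes["quarter"] = scenes["scene_timestamp"].dt.to_period("Q")
best_per_quarter = (scenes.sort_values("kelp_pixels")
                          .groupby("quarter")
                          .tail(1))
print(best_per_quarter[["quarter", "kelp_pixels", "kelp_area_m2"]])
```

The same grouping logic would run over the full dataset; only the toy input changes.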
But what if we impose a more conservative filter? Say, a 4-10 user agreement threshold? What does that show us?
What I find remarkable about this image is that while detections drop as more and more citizen scientists have to agree on a pixel, the trends remain the same. This means that while we will try to choose the threshold that gives us the closest estimate of true kelp area, there are multiple intermediate thresholds that will give the same qualitative answers to any future questions we ask of this data set.
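One quick way to check a “same qualitative trend” claim like this is to compare the rank ordering of quarterly kelp counts at two different thresholds. Here’s a toy Python sketch; the numbers are invented, and a real analysis would use the full dataset.

```python
# Toy check of the "same qualitative trend" claim: compare the rank
# ordering of quarterly kelp pixel counts at two consensus thresholds.

def ranks(values):
    """Rank each value (0 = smallest), assuming no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

# Quarterly kelp pixel counts at a 4-user and an 8-user threshold:
# absolute numbers shrink as the threshold rises...
pixels_t4 = [500, 800, 300, 900]
pixels_t8 = [200, 350, 120, 400]

# ...but the quarters rank the same way, so the trend is preserved
same_trend = ranks(pixels_t4) == ranks(pixels_t8)
print(same_trend)  # True for this toy example
```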
This is a huge relief. It means that, as long as we stay within a window where we are comfortable with consensus values, this data set is going to be able to tell us a lot about kelp around the world. It means that citizen science with consensus classifications is robust to even some of the choices we’re going to have to make as we move forward with this data.
It means you all have done some amazing work here! And we can be incredibly confident in how it will help us learn more about the future of our world’s Floating Forests!
In working on a recent submission for a renewal grant to the NASA Citizen Science program, I whipped up a quick script that takes the data posted and overlays it with the actual image. I really like the results, so here’s one. Feel free to grab the script, data, and play along at home!
Some of you have noted in earlier posts of this preliminary dataset that some classifications show up on land – particularly at low thresholds. This is likely due to some images being served up that shouldn’t have been (we’ve fixed this in the new pipeline), and to the zeal of some classifiers. Regardless, we can crop out those areas, as we know there’s no real kelp there. But to do it, we need some very, very good maps of the coastline. Fortunately, there’s a solution!
The Global Self-consistent, Hierarchical, High-resolution Geography Database is an incredible resource, with some coastline data files that are remarkable in their detail. The data is also, of course, huge. So, for anyone playing along at home, we’ve subset it down to a few files for your delectation. These are all in the common ESRI Shapefile format, but if folks want them in another format, we’re happy to provide it. Here’s what we’ve created for you. Click on the names of the areas below to download the zip files.
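As a toy illustration of what coastal cropping does, here’s a minimal Python sketch that drops any classified kelp pixel whose center falls inside a land polygon. A real workflow would use the GSHHG shapefiles with proper GIS tools; the square “island” and the ray-casting test below are stand-ins.

```python
# Minimal sketch of coastal cropping: drop any classified kelp pixel
# whose center falls inside a land polygon. The square "island" here
# is made up; real coastlines come from the GSHHG shapefiles.

def point_in_polygon(x, y, poly):
    """Ray-casting test: is (x, y) inside the polygon (list of vertices)?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Toy land mass: a unit square "island"
land = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

# Classified kelp pixel centers: two offshore, one (wrongly) on land
pixels = [(1.5, 0.5), (0.5, 0.5), (2.0, 2.0)]

# Keep only pixels that are not on land
cropped = [p for p in pixels if not point_in_polygon(*p, land)]
print(cropped)  # the on-land pixel (0.5, 0.5) is removed
```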
And last, the absolutely stunning Falkland Islands
When we used them for coastal cropping, they worked great – we’ll show some timeseries with cropped data next week!
At Floating Forests, we get confronted with two issues a lot. First, how good is the data? A lot of scientists are still skeptical of citizen science (wrongly!) Second, many citizen scientists worry that they need to achieve a level of accuracy that would require near-pixel levels of zooming. We approach both of these issues with consensus classification – namely, that every image is seen by 15 people as soon as it’s noted that it has kelp in it. We can then build up a picture of kelp forests where, for each pixel, we note how many users selected it as indeed having kelp. You can read an initial entry about this here.
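The consensus idea boils down to stacking each user’s binary “kelp” mask and counting votes per pixel. Here’s a small Python sketch with made-up 3×3 masks standing in for real classifications.

```python
# Sketch of consensus classification: stack each user's binary "kelp"
# mask and count, for every pixel, how many of the 15 users selected it.
# The masks here are tiny random grids, not real Floating Forests data.
import numpy as np

rng = np.random.default_rng(42)

# 15 users, each producing a 3x3 binary mask (1 = "this pixel is kelp")
user_masks = rng.integers(0, 2, size=(15, 3, 3))

# Per-pixel agreement: how many users marked each pixel as kelp
agreement = user_masks.sum(axis=0)

# Apply a consensus threshold, e.g. "at least 6 users agree"
consensus_kelp = agreement >= 6
print(agreement)
print(consensus_kelp)
```

Sweeping that threshold from 1 to 15 is exactly what generates the polygon layers discussed in these posts.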
So, how does this pan out in the data I posted a few days back? Let’s explore, and I’ll link to code in our github repo for any who want to play along at home if you’re using R – I’d love to see things generated in other platforms!
I love this: at the 1-user threshold you can see that at least one person was super generous in selecting kelp, while at the 10-user threshold (unclear why we’re maxing out at 10 here – likely nothing was higher!) super-picky classifiers end up conflicting with each other, so it looks like there’s no kelp at all.
How does this play out over the entire coast? Let’s take a look with some animations. First, the big kahuna – the whole coast! Here are all of the classifications from 2008-2012 overlapping (I’ll cover timeseries another time).
I love this, because you can really see how crazy things are at a single-user threshold, but then how fast things tamp down. You can also see, given that we have the whole coastline, how, well, coastal kelp is! Relative to the entire state, the polygons are not very large. It’s a bit hard to see.
Let’s look at the coastline north of San Francisco Bay, from Tomales to roughly Point Arena.
Definitely clearer. You can also see we accidentally served up some lake photos, and folks probably circled plankton blooms. Oops! Now the question becomes, what is the right upper threshold? Time (and some ongoing analysis that suggests somewhere between 6 and 8) will tell!
If you have ideas for other visualizations you want to see, queries for the data that you want us to make, or more, let us know in the comments! If you want to see some other visualizations we’ve been whipping up, see here!
The day has come – we’re finalizing our data pipeline to return data to you, our citizen scientists! It’s been a twisty road, and we’re still tweaking, but we’ve begun to build some usable products for your delectation and exploration!
We want to know more from **you** about what you want and what is interesting for you to explore. So, today, I’m going to post some demo data for you to look at and give us feedback and comments on. This is a data file from our California project that consists of polygons for each kelp forest at different levels of user agreement on whether pixels are kelp or not. So, first, here’s the file in three formats, depending on what you want (we can also add more if asked for).
You can do a lot with these in whatever GIS software you prefer, and if anyone has examples, we’d love to post them! For now, here’s a quick and dirty visualization of the whole shebang at the threshold of 6 users agreeing on a pixel (source).
Neat, huh? You can even see where something in one image was confusing (no kelp on land!) which now I’m *very* curious about.
So, what’s in this dataset? There’s a lot, but here are the fields most relevant to you:
- **threshold** – the number of users who agree that the pixels in a given polygon are kelp
- **zooniverse_id** – the subject (i.e., tile) ID of a given image; to look at just a single image, subset to that ID
- **scene** – individual Landsat “images” are called scenes, and every subject we served to users was carved out of a scene. You can look at a whole scene by subsetting on this column. For more about what a scene name means, see here
- **classification_count** – the number of users who looked at a given subject
- **image_url** – to pull up the subject as seen on Floating Forests
- **scene_timestamp** – when the image was taken by the satellite
- **activated_at** – when we posted this subject to Floating Forests
There’s a lot of other info regarding subject corner geospatial locations. We might or might not trim this out in future versions, although for now it helps us locate missing data and see what has actually been sampled.
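For anyone poking at the demo file, subsetting by these fields is straightforward. Here’s a hypothetical Python example; the IDs, scene name, and values below are made up for illustration, and in practice you’d load the real file with `pd.read_csv`.

```python
# Hypothetical example of subsetting the demo data described above.
# Column names follow the field list in the post; the values are
# invented stand-ins for pd.read_csv("your_demo_file.csv").
import pandas as pd

df = pd.DataFrame({
    "zooniverse_id": ["AKP0001", "AKP0001", "AKP0002"],
    "scene": ["LT50420362008123PAC01"] * 3,
    "threshold": [1, 6, 6],
    "classification_count": [15, 15, 12],
})

# Look at a single subject (tile) at the 6-user consensus threshold
one_tile = df[(df["zooniverse_id"] == "AKP0001") & (df["threshold"] == 6)]
print(one_tile)
```

Swapping `zooniverse_id` for `scene` in the filter pulls up every subject carved out of one Landsat scene instead.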
So, take a gander, enjoy, and if you have any comments, fire them off to us! This is just a sample, and there’s more to come!
Thanks to all of our great citizen scientists! I loved this Tweet from Trine Bekkby and the Norwegian Blue Forests Network so much that I thought I’d post it. Look at that *Laminaria hyperborea*! SO GORGEOUS!
From our kelp to your kelp, happy holidays!
As I’ve been browsing through these beautiful images of classifications in the Falklands, I realized something. One of the reasons to explore the Falklands is that there aren’t too many studies looking at more long-term kelp dynamics there. Now, I’m a Northern Hemisphere kelp forest ecologist. We know that typically many types of kelp forests start to boom in the spring, get to peak biomass in the late summer/early fall, and then get whacked back by fall/winter storms before booming again in the spring.
One of the first questions I have as a scientist, then, is do we see the same seasonal trends in the Falklands? I’m very curious what y’all are seeing, so I started a thread on talk asking y’all to note any observations. Please also tag very kelpy images with the month they were taken (click the (i) for information) as well as the #sokelpy hashtag, so I can do a quick search by hashtag to see the frequency of when #sokelpy occurred. I’ll post the resulting data after we get a decent set of tagged images.
And talk about what you’re seeing – month by month, or whether you’re noticing certain years have more or less kelp – over in the thread!
(And, heck, we haven’t even talked about north v. south side of the islands – but that’s for another time!)
If you are reading this post, it means the Floating Forests relaunch is live – thanks to all of your hard work we were able to get through over 20 years’ worth of data! Special thanks to the beta testers who gave us tons of feedback on the new site. We are busy on our end calibrating the results from the first round of data, and it’s looking great. I don’t want to spill the beans on a future blog post, but working with this dataset has already led us down a new path with some unexpected collaborators!
As exciting as calibration models are, today’s main event is even better! Welcome to Floating Forests 2.0! We have been hard at work with Zooniverse to make your experience even better. In addition to a shiny new website, we’ll be taking you to a new part of the world – The Falkland Islands!
The Falkland Islands are an often overlooked ecological treasure. From land they appear a windswept grassland dominated by birds and insects, one of only a handful of places on Earth with no native trees. The coastal waters, however, are a different story altogether. You’ve probably guessed where this is going – kelp! Lots of kelp! The expansive kelp forests ringing the islands more than make up for the lack of terrestrial trees. Kelp forests around the world are a haven for wildlife, and these are no different. They are an irreplaceable resource for elephant seals, fur seals, sea lions, multiple penguin species, two types of dolphins, and a huge number of fish and invertebrates. A recent report[1] has listed lack of awareness and information as one of the biggest threats to the Falkland Islands’ marine biodiversity, so let’s generate data and get aware!
Before you dive in, let’s take a quick tour of the new website – if you’re familiar with our old site you’ll already know the drill, but some things have been moved around!
As you can see, there are two buttons at the bottom. “Classify Kelp” brings you to our shiny new version of the kelp tracing you all know and love. “Kelp presence/absence” brings you to a new feature – a simplified, mobile-friendly task that can be done quickly and easily! This allows anyone who wants to check out the project to do so even if they don’t have access to a full computer. On the research side of things, it allows us to squeeze every last drop of data out of these satellite images. To make a long story short, images from different satellites are different, and these differences make it somewhat difficult to automate a filter that boots out bad images. Just like with kelp classifications, our brains are much more useful here than computers. Once enough people have tagged an image as “kelp”, into the main workflow it goes to be classified!
Across the top, you will see a number of headings.
About: Learn about kelp, the project, and the team behind the research!
Classify: Get right to the action and start classifying kelp.
Talk: This links to our talk forum where you can discuss particular images, ask science questions, get technical help, and more! We will be very active here, so don’t hesitate to post!
Collect: More on this later, but this is where collections of images are found.
Recents: Link to your most recent classifications.
Blog: Direct link to the blog you are currently reading.
The classification should feel pretty familiar. The field guide tab on the far right has been overhauled and contains many examples of phenomena you could find in these images – refer to it often! It is constantly being updated, and if you have suggestions for additions, let us know in talk!
Beneath the image are three buttons.
From the left:
Metadata: Click this to view metadata (location, time/date, satellite number), as well as a link to the image on Google Maps.
Favorite: Click this to add the image to your favorites, allowing you to quickly find it again.
Collect: Similar to adding an image to your favorites, you can add an image to a collection. This way we can collaboratively sort through images, keeping track of those that contain loads of kelp, cities, or any other identifiable feature.
Once you complete an image and have clicked the green “Done” button, you will see the following information:
Here you will see a summary of the number of patches you marked, as well as the blue “Talk” button. If you had any questions about the image, this button will create a discussion thread linked back to the image. Use this space to ask the science team any questions you might have about the image. Don’t be shy, we love to talk!
In addition to these front-end changes, there have been some under-the-hood updates as well that make it much easier for us to add images or collections and even create new workflows – stay tuned for future happenings with these features, but for now go check out the new site!
1. Otley H. Falkland Islands Species Action Plan for Cetaceans 2008–2018; 2008.
Hello! You may have noticed that things have slowed down on the website. To make a long story short, thanks to all of your help we are down to the last handful of images from California and Tasmania! We have been busy cleaning the data up and getting it ready to go. This milestone has given us a chance to reflect on the first phase of the project and to get ready for some exciting next steps – more on this later!
In January, science team members Jarrett Byrnes, Kyle Cavanaugh, and Isaac Rosenthal traveled to Chicago to meet with the Zooniverse team. We were hosted at the amazing Adler Planetarium, and had an unbelievable week of planning and collaboration (and eating!). By getting the science and development teams into the same room, we were able to work through a few issues that have been nagging the project since its inception, fixing some geo-referencing issues and streamlining the post-processing of the data (in other words, what happens to the data after the kelp is classified). It was truly amazing to spend a week surrounded by talent from so many disciplines, ranging from educators to back-end web developers. I think I speak for all of us when I say that it was a unique and deeply inspiring experience!
While still under construction, hopefully most of this is a familiar sight. Our goal with this relaunch is to make YOUR jobs easier! The tracing tool has been upgraded, and we will be able to spruce up the field guide. The under-the-hood flexibility of the new system is incredible and leaves the future of Floating Forests wide open. Custom datasets and modular workflows mean that the sky is the limit! Something that I am personally excited about is the opportunity to use these tools to ask new questions, broadening horizons for research and education. This relaunch will also feature an overhauled talk section so that we can continue to communicate with all of you!
Stay tuned for more information as we begin beta testing of the new website!