With the Falklands classifications wrapping up, it’s time to move on to our next phase of global kelp mapping – kelp on the edge! Giant kelp is a cold-water species – warm, nutrient-poor water is a definite no. Wherever giant kelp is found, its range extends toward the equator until it hits a wall of warm water. There they shall stop, and no further.
But what about when that wall – that range edge – starts heating up? That’s when you’ve got…
We’ve witnessed a variety of other kelps die back when things got too hot, and our own Jorge Assis has shown projected major range shifts of kelps in the future.
With the data from Floating Forests, one question we want to ask is, how have kelps on the edge been faring? Over the past 35 years, have we seen kelps on warm water range edges dwindling? Have any of those populations blinked out? Have the ranges of giant kelp actually been on the move?
The great thing about this project is the simplicity of the question. Rather than circling kelp beds (although we’ll get there), we want to begin by looking at the range edges of giant kelp around the planet, and the area up to 500 km beyond them, and ask you just to note: do you see any kelp? Yes or no? That’s it!
The first set of images is up – from Baja California using Landsat 7 and 8. Landsat 4 and 5 imagery will come in the next few days, with more of Baja and then New Zealand next week.
So what are you waiting for? Pull out your phones and get swiping! (Or click here.)
Also… the music we can’t get out of our heads
There’s somethin’ wrong with the kelp today
I don’t know what it is
Something you’ll see with your eyes
We’re classifyin’ things in a different way
Just swipe fast left or right
The data will surprise
Kelp’s livin’ on the edge!
Often when citizen scientists view an image, they want context. Where is this? Am I really seeing kelp, or is this sand or mudflats? Fortunately, we have you covered. In the video below, I show you how you can view the metadata about each individual image, including how to view the area pictured in Google Maps. Now, the Google Maps imagery isn’t going to be from the same time as the Landsat image, so there may or may not be kelp in the same places. But you can at least get a higher resolution view of the area to inform your classifications if you want it.
One question that has come up a few times with our consensus classifications is: does the consensus level – the number of users who must agree that a pixel is kelp – really matter when it comes to looking at change in kelp forests over the long term?
While our data isn’t quite up to looking at large-scale time series yet (we’re still digging through a few thorny methodological issues), I grabbed the complete dataset we have for the Landsat scene around Los Angeles from work Isaac is doing and decided to take a look. Note, this is totally raw and I haven’t done any quality control on it, but the lessons are so clear I wanted to share them with you, our citizen scientists.
After aggregating to quarterly data and then selecting the scene that had the highest kelp abundance for that quarter (i.e., probably the fewest clouds obscuring the view), we can see a few things. First, yeah, 1 classifier alone is never good.
Note, I haven’t transformed the data to area covered; instead we’re just going with number of pixels (1 pixel = 30 x 30 m). But, wow, we need consensus!
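As a rough sketch of that aggregation step (the column names and numbers below are invented for illustration; our actual pipeline differs in its details):

```python
from datetime import date

# Each record is one Landsat scene's consensus result:
# (acquisition date, number of pixels classified as kelp).
records = [
    (date(1999, 1, 12), 4200),   # winter scene, some cloud
    (date(1999, 2, 3), 5100),    # clearer winter scene
    (date(1999, 4, 20), 7300),
    (date(1999, 6, 1), 6900),
]

def quarter(d):
    """Map a date to a (year, quarter) key."""
    return (d.year, (d.month - 1) // 3 + 1)

# For each quarter, keep the scene with the most kelp pixels --
# likely the least cloud-obscured view of that quarter.
best = {}
for d, kelp_px in records:
    q = quarter(d)
    if q not in best or kelp_px > best[q][1]:
        best[q] = (d, kelp_px)

# Convert pixel counts to area: 1 Landsat pixel = 30 m x 30 m = 900 m^2.
for q, (d, kelp_px) in sorted(best.items()):
    print(q, d, kelp_px, "pixels =", kelp_px * 900 / 1e6, "km^2")
```

The “max per quarter” choice is a cheap cloud filter: clouds can only hide kelp, never add it, so the clearest scene tends to show the most.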
But what if we impose a more conservative filter? Say, a 4-10 user agreement threshold? What does that show us?
What I find remarkable about this image is that while we see detections decrease as more and more citizen scientists have to agree on a pixel, the trends remain the same. This means that while we will try to choose the threshold that gives us the closest estimate of true kelp area, there will be multiple intermediate thresholds that give the same qualitative results for any future questions we ask of this data set.
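The thresholding itself is simple to sketch (toy numbers below, not real classifications):

```python
# Each pixel (row, col) carries the count of users who marked it as kelp.
# A consensus threshold keeps only pixels with at least that many votes.
votes = {
    (10, 4): 12, (10, 5): 11, (11, 4): 9,   # core of a bed: strong agreement
    (11, 5): 6, (12, 5): 4,                 # fuzzy edge: moderate agreement
    (30, 2): 1,                             # stray click: one user only
}

def kelp_pixels(votes, threshold):
    """Pixels where at least `threshold` users agreed."""
    return {px for px, n in votes.items() if n >= threshold}

# Raising the threshold shrinks the detected area, but the same beds
# survive -- which is why the trends are stable across thresholds.
for t in (1, 4, 8):
    print(t, len(kelp_pixels(votes, t)))
```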
This is a huge relief. It means that, as long as we stay within a window where we are comfortable with consensus values, this data set is going to be able to tell us a lot about kelp around the world. It means that citizen science with consensus classifications is robust to even some of the choices we’re going to have to make as we move forward with this data.
It means you all have done some amazing work here! And we can be incredibly confident in how it will help us learn more about the future of our world’s Floating Forests!
In working on a recent submission for a renewal grant to the NASA Citizen Science program, I whipped up a quick script that takes the data posted and overlays it with the actual image. I really like the results, so here’s one. Feel free to grab the script, data, and play along at home!
The day has come – we’re finalizing our data pipeline to return data to you, our citizen scientists! It’s been a twisty road, and we’re still tweaking, but we’ve begun to build some usable products for your delectation and exploration!
We want to know more from **you** about what you want and what is interesting for you to explore, so, today, I’m going to post some demo data for you to look at and give us feedback and comments on. This is a data file from our California project that consists of polygons for each kelp forest at different levels of user agreement on whether pixels are kelp or not. So, first, here’s the file in three formats, depending on your preference (we can also add more if asked):
You can do a lot with these in whatever GIS software is your preference, and if anyone has examples, we’d love to post them! For now, here’s a quick and dirty visualization of the whole shebang at a threshold of 6 users agreeing on a pixel (source).
Neat, huh? You can even see where something in one image was confusing (no kelp on land!) which now I’m *very* curious about.
So, what’s in this dataset? There’s a lot, but here are the things most relevant to you:
threshold – the number of users who agree that the pixels in a given polygon are kelp
zooniverse_id – the subject (i.e., tile) id of a given image, if you want to just look at a single image, subset to that id
scene – Individual Landsat “images” are called scenes. So, every subject that we served to users was carved out of a scene. You can look at a whole scene by subsetting on this column. For more about what a scene name means, see here
classification_count – number of users who looked at a given subject
image_url – to pull up the subject as seen on Floating Forests
scene_timestamp – when was an image taken by the satellite?
activated_at – when did we post this to Floating Forests?
There’s a lot of other info regarding subject corner geospatial locations. We might or might not trim this out in future versions, although for now it helps us locate missing data and see what has actually been sampled.
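As a quick illustration of subsetting on those columns (the rows below are invented, but mimic the columns described above):

```python
# Hypothetical rows mimicking the demo file's columns (values invented).
rows = [
    {"threshold": 6, "zooniverse_id": "AKP0001",
     "scene": "scene_A", "classification_count": 14},
    {"threshold": 8, "zooniverse_id": "AKP0001",
     "scene": "scene_A", "classification_count": 14},
    {"threshold": 6, "zooniverse_id": "AKP0002",
     "scene": "scene_B", "classification_count": 12},
]

def subset(rows, **criteria):
    """Keep rows matching every column=value pair given."""
    return [r for r in rows
            if all(r[k] == v for k, v in criteria.items())]

# All polygons from one scene at the 6-user agreement level:
one_scene = subset(rows, scene="scene_A", threshold=6)
print(len(one_scene))
```

In practice you’d do the same thing with a filter in your GIS software or a `subset()` call in R, but the idea is identical: pick one threshold and one scene (or subject) and look at just those polygons.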
So, take a gander, enjoy, and if you have any comments, fire them off to us! This is just a sample, and there’s more to come!
It’s been a bit since I promised calibration info, but we’ve hit a minor (almost solved) projection issue in comparing our data to some gold-standard data we have. So, to stave off boredom while the real geographers on our team do the heavy lifting, I’ve been futzing about with making the generation of overall indices easier. I arrived at a neat solution using Spatial Grids in R that was much faster than switching back and forth between rasters. The biggest bonus is that the default plotting of results with color as number of people selecting an area is *purty!*
Or at least, I think so.
How does this kelp forest look to you?
Well, I woke up this morning, fired up Floating Forests, as is my wont, and saw this! I thought it would be a few more days, and was even going to post some exhortation, but you guys have been too awesome and brought us to 2 million classifications yourselves!
Nice work, all! And now it looks like we’re going to need to throw some new regions your way soon!
A lot of what we’ll be working on to determine area of beds are heatmaps of users selecting a pixel as kelp. This sounds somewhat abstract, so I wanted to operationalize it for you with some images. Let’s start with a single image from Floating Forests chosen because it has been flagged as having kelp. It has 13 classifications, so, one more and it is ‘complete’ – unless we decide to lower the classification threshold. The image is
So, what would it look like if we overlaid all of the user outlines of kelp from the other day on the image?
You can see, to some extent, folks circling the same areas with varying degrees of specificity. What does this give us if we build a heatmap of the number of users selecting each pixel – the map on which we’ll do our analysis? Well, here you go!
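For the curious, the heatmap step can be sketched in a few lines (toy pixel sets stand in for real rasterized outlines here):

```python
from collections import Counter

# Each user's outline, rasterized to the set of (row, col) pixels it
# covers. Real outlines are polygons; these sets are invented examples.
user_outlines = [
    {(5, 5), (5, 6), (6, 5), (6, 6)},          # user A: tight circle
    {(5, 5), (5, 6), (6, 5), (6, 6), (7, 6)},  # user B: slightly larger
    {(6, 6), (7, 6), (7, 7)},                  # user C: offset to one side
]

# Count, per pixel, how many users selected it. This count is exactly
# what gets colored in the heatmap image.
heatmap = Counter()
for outline in user_outlines:
    for px in outline:
        heatmap[px] += 1

# (6, 6) was selected by all three users; fringe pixels by fewer.
print(heatmap[(6, 6)], heatmap[(7, 7)])
```

The consensus thresholds discussed elsewhere on this blog are then just cuts through this per-pixel count.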
Next time, a more quantitative look.
For the next post or three, I’m going to talk about what I see when I look at the data from one image. In the coming weeks, I hope to get at putting together bigger spatial or temporal results. But for the moment, I’m going to begin with what we see when we look at user classifications of one image. I’m going to begin with something beautiful – human variation.
This is the variability from person to person that we see in circling the same set of beds. I just find it striking and lovely.
If you have been classifying images in California over the past few months, you may have come across an array of square kelp forests and wondered, “How did those get there?!” The story behind this amazing man-made kelp forest involves a nuclear power plant, a state agency, and some remarkable researchers.
In the early 1970s, the San Onofre Nuclear Generating Station (SONGS) proposed adding two additional reactor units to increase its power generation capacity. The California Coastal Commission (CCC) granted the permit in 1974, but as a condition of the expansion, a Marine Review Committee was established to direct impact assessment studies on nearby coastal ecosystems that could be negatively affected by the additional reactor units. As a result of these studies, the CCC added new conditions for the mitigation of identified impacts; one of these conditions was the construction of an artificial reef to replace kelp bed resources lost as a result of SONGS’ cooling water discharge.
The additional reactors are cooled by a single-pass seawater system. As the warm water is discharged back to the environment, it is cooled with additional seawater using diffusers. This process draws in ambient seawater at a rate about 10x the discharge flow; sediment is swept up along with it and transported offshore. This warm, sediment-laden plume led to substantial reductions in the abundance and density of kelp plants within the San Onofre kelp bed, as well as reductions in many kelp bed fish and invertebrate species.
The mandated artificial reef had to be large enough to sustain 150 acres of kelp forest as compensation for the loss of 179 acres within the San Onofre kelp bed. This process began with a 5-year experimental phase that entailed building a smaller 22.4 acre reef to determine the substrate types and configurations that would support a giant kelp forest and associated biota during the later mitigation phase. The plan involved testing eight different reef designs that varied in substrate composition, substrate coverage, and the presence of transplanted kelp. Reef designs were implemented as 56 (40 m x 40 m) modules (7 replicates of the 8 designs), with construction completed in 1999. These are the squares seen on your images! Results from monitoring the 5-year experiment showed that all reef designs had a near-equally high tendency to meet the performance standards established for the mitigation phase, and the final recommendation was to build out the reef using low-relief quarry rock or concrete rubble covering between 42% and 86% of the bottom.
Construction of the full artificial reef was completed in 2008 with the use of approximately 126,000 tons of boulder-sized quarry rocks, deposited into 18 polygons. When combined with the experimental reef, these areas provide 174.4 acres of hard substrate for the growth of giant kelp and associated species. The reef was named after the late Dr. Wheeler North, a pioneer in the understanding of kelp forest ecology. The coastal development permit to operate SONGS requires ongoing monitoring of the artificial reef, which is led by UCSB researchers Dan Reed, Steve Schroeter, and Mark Page. These efforts evaluate whether the reef is meeting performance standards and, if necessary, determine why standards are not being met and recommend remedial measures.
Another amazing story behind the green blobs on your computer screen!
For more information about the Wheeler North Reef click here!