Floating Forests Goes to Washington

Last week I had the opportunity to take part in a citizen science forum organized by the White House. It was inspiring to see how committed the White House is to harnessing the power of citizen science. A number of exciting announcements were made during the event. For one, the Federal Citizen Science and Crowdsourcing Toolkit was officially released. This toolkit, developed with the support and collaboration of over 25 federal agencies, provides step-by-step instructions, case studies, and other resources to help scientists use citizen science in their research. As you might imagine, Zooniverse projects are well represented in the successful case studies section! Then John Holdren, the Director of the Office of Science and Technology Policy, gave a talk in which he announced the release of a memorandum promoting the use of citizen science by federal agencies. Towards the end of the forum, Senator Chris Coons (D-DE) announced a new bill authorizing citizen science and crowdsourcing. The bill is co-sponsored by Senator Steve Daines (R-MT), making it bipartisan! During his talk Senator Coons described how he and his family are citizen scientists themselves and have spent many evenings collecting data for a wide variety of Zooniverse projects! So next time you are chatting with someone on Talk, know that he or she could very well be a senator or representative. Perhaps even President Obama has a Zooniverse account?

In between these exciting announcements there were panels on Community Science Leaders, Oceans and Coasts, Democratized Tools, Water and Agriculture, and Communities and Health. A number of really exciting citizen science projects were highlighted during these panels. These ranged from investigations of the impact of aggressive policing to surfboards that collect oceanographic data to the development of methods for utilizing indigenous traditional knowledge to our own Floating Forests! You can watch the entire forum here.

I had the honor to serve on the Oceans and Coasts panel with some HUGE names in the marine science world: Dr. Alex Dehgan, Dr. Sylvia Earle (aka Her Deepness), Dr. Daniel Pauly, and Dr. Janet Coffey. During the panel we talked about the importance of the ocean and how little we know about it. The oceans play a central role in supporting human life. Yet we’ve mapped less of the ocean floor at high resolution than the surface of Mars, Venus, and the Moon combined. We have limited information about the changes that coastal ecosystems like coral reefs, mangroves, and giant kelp have been experiencing in recent decades. Citizen science provides a powerful method for collecting data that will allow us to better understand and protect these critical ecosystems.

Floating Forests at the White House!

We were delighted to have our own Kyle Cavanaugh presenting Floating Forests at the White House Citizen Science Forum earlier this week. Check out the video of his presentation below!

Kelp Forest Heatmap: Nightvision Edition

It’s been a bit since I promised calibration info, but we’ve hit a minor (almost solved) projection issue in comparing our data to some gold-standard data we have. So, to stave off boredom while the real geographers on our team do the heavy lifting, I’ve been futzing about with making the generation of overall indices easier. I arrived at a neat solution using Spatial Grids in R that was much faster than switching back and forth between rasters. The biggest bonus is that the default plotting of results, with color showing the number of people who selected each area, is *purty!*

Or at least, I think so.

How does this kelp forest look to you?
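For anyone who wants to poke at the idea themselves: our version uses SpatialGrid objects in R, but here’s a minimal sketch of the same gridded-count trick in Python with numpy. The function name and grid size are made up, purely to illustrate binning everyone’s selections into cells and counting.

```python
import numpy as np

def grid_counts(points, width, height, cell=10):
    """Bin user-selected pixel coordinates into a coarse grid.

    points: (N, 2) array of x, y coordinates pooled across all users'
    selections for one image. Each cell of the returned array holds the
    number of selections falling inside it.
    """
    xs, ys = points[:, 0], points[:, 1]
    counts, _, _ = np.histogram2d(
        ys, xs,                                  # rows are y, columns are x
        bins=[height // cell, width // cell],
        range=[[0, height], [0, width]],
    )
    return counts
```

Plot the resulting array with your favorite heatmap palette and you get pictures like the one above.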


2 Million Kelp Classifications at Floating Forests!

2 Million Classifications!


Well, I woke up this morning, fired up Floating Forests, as is my wont, and saw this! I thought it would be a few more days, and was even going to post some exhortation, but you guys have been too awesome and brought us to 2 million classifications yourselves!


Nice work, all! And now it looks like we’re going to need to throw some new regions your way soon!

Heatmap of Kelp Selection Overlap

A lot of what we’ll be working on to determine the area of kelp beds are heatmaps of users selecting a pixel as kelp. This sounds somewhat abstract, so I wanted to operationalize it for you with some images. Let’s start with a single image from Floating Forests, chosen because it has been flagged as having kelp. It has 13 classifications, so one more and it is ‘complete’, unless we decide to lower the classification threshold. Here is the image:

Demo image

So, what would it look like if we overlaid all of the kelp outlines that users drew the other day onto this image?


You can see, to some extent, folk circling the same areas, with varying degrees of specificity. What does this produce if we want a heatmap of the number of users selecting each pixel, the surface on which we’ll do our analysis? Well, here you go!

Selection heatmap with background image
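If you’d like to build one of these yourself, here’s a minimal sketch of the per-pixel counting in Python, assuming each classification arrives as a list of polygon vertices. The function name and data layout are hypothetical, not the actual Floating Forests export format.

```python
import numpy as np
from PIL import Image, ImageDraw

def selection_counts(polygons, width, height):
    """Sum users' kelp outlines into a per-pixel count image.

    polygons: one entry per classification, each a list of (x, y) vertex
    tuples tracing that user's outline. Each pixel of the returned array
    holds the number of users who included it in a selection.
    """
    counts = np.zeros((height, width), dtype=np.int32)
    for poly in polygons:
        mask = Image.new("1", (width, height), 0)          # blank binary mask
        ImageDraw.Draw(mask).polygon(poly, outline=1, fill=1)
        counts += np.asarray(mask, dtype=np.int32)          # this user's pixels
    return counts
```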

Next time, a more quantitative look.

Variation in Kelp Selection is Beautiful

For the next post or three, I’m going to talk about what I see when I look at the data from one image. In the coming weeks, I hope to get at putting together bigger spatial or temporal results. But for the moment, I’m going to begin with what we see when we look at user classifications of one image. I’m going to begin with something beautiful – human variation.

This is the variability from person to person that we see in circling the same set of beds. I just find it striking and lovely.

User Variation in Selection

Who’s Getting Kelpy at Floating Forests?

Well, we’ve finally hit a critical mass of classifications (well, blown past it) and other projects by science team members have boiled down (we’ll be posting about them – they’re kelpy!), so we’ve begun to dig into the data. For anyone who wants to follow along at home, all code that we talk about will be posted in this github repository.

I thought I’d begin by telling you all about how *you* have been interacting with Floating Forests. Namely, how much effort do the ~5,100 users of Floating Forests put into the project?

Many Zooniverse projects do well from a lot of people doing just a few images each. We’re no different. We have a nice distribution of folk, with many doing few images (~1,500 have done just one classification), but with a looong tail of users in the 100 to 1,000 classification range. See below, but note the log10 scale on the x-axis.


The average user, though, does ~125 classifications. If we put it together and look at the cumulative percentage of classifications done by users who classify different numbers of images, we see that ~25% is done by users who classify fewer than ~250 images. So, our ‘super-users’ are incredibly important! Heck, we have one user who has contributed 5.15% of the classifications. The top 10 have contributed 18% of classifications.

cumulative percent classifications
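For those following along in the github repository, here’s roughly how numbers like these come together, sketched in Python with pandas. The file and column names below are placeholders, and the repo’s actual code may differ.

```python
import pandas as pd

# Hypothetical export: one row per classification with a user_name column.
df = pd.read_csv("classifications.csv")

per_user = df["user_name"].value_counts().sort_values()  # per-user totals, ascending
cum_pct = per_user.cumsum() / per_user.sum() * 100       # cumulative % of all work

print(per_user.mean())   # average classifications per user
print(cum_pct.tail(10))  # share accounted for once the top users are included
```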

It may still be difficult to see just how much those users are doing in comparison to users classifying only a few images. So, we’ve done what many other Zooniverse projects have done and made a treemap!

It’s not only incredibly informative – with the size of each square being proportional to the contribution of an individual user – but, oh, pretty data! Enjoy!
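If you want to roll your own, treemaps are easy to make. Here’s one way in Python using the third-party squarify package; this is just one option, not necessarily how ours was made.

```python
import matplotlib.pyplot as plt
import squarify  # third-party treemap helper: pip install squarify

# per_user is the classifications-per-user series from the last sketch
squarify.plot(sizes=per_user.values, alpha=0.8)  # one rectangle per user
plt.axis("off")
plt.show()
```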


Let There Be Coast!

Note: This post is from Briana Harder, our newest Science Team member! We encountered Briana in Talk where she not only noted some issues, but then wrote code to reprocess images to fix them! Needless to say, we were impressed. What emerged was a wonderful dialogue between Briana, members of the science team, and the folk at Zooniverse. She’s made some large changes to our image processing pipeline and helped us all learn a lot about how to use Landsat for kelp in places *other* than California. As such, we asked Briana if she wanted to take her involvement to the next level, and join the Science Team. And we were delighted when she accepted! So, here are her comments on the awesome work she did and how our image processing has changed.

Hello, I’m Briana Harder; I’ve recently joined the science team to work on the processing pipeline that creates the images we classify. Some of you may know me from Talk as Quia. Back in November 2014, there was a blog post that ended with a shout-out to anyone who wanted to collaborate on making the image selection process better. I’m not a PhD in anything, but learning something new to solve a real-life problem sounds like my idea of a fun weekend, and I have some experience with image processing. It turns out I’ve spent a wee bit more than a weekend on this, and I did make progress on the problem, so here I am to explain how my ‘fun project’ turned into improving the Floating Forests image pipeline!

The first thing to do upon finding an interesting problem is to find out if anyone else has solved it already. So I searched for research in the areas of image analysis, coastlines, and satellite imagery. The majority of the papers were far too detail-oriented to be very helpful; the problems in tracking the month-to-month changes of the coastline of a small island are wildly different from sorting coast from non-coast for FF! But I did find a fascinating paper on using Landsat data to build a highly accurate waterline database for all of Europe. They clearly solved the problem of finding ocean coastline, and then went a lot further!

The technique they used was to take a cloudless mosaic of the region (lots of preprocessing there!) and separate the image into three classes: water and land, each selected with simple pixel-value thresholds, plus the unassigned pixels left over. They then ran a region-growing algorithm to add the unassigned pixels to one of the two areas.
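To make that concrete, here’s a rough sketch of what thresholds-plus-region-growing looks like in Python. The threshold values are made-up placeholders, not the paper’s actual numbers.

```python
import numpy as np
from scipy import ndimage

def three_region_split(red, water_hi=25, land_lo=60):
    """Split a band into water / land / unassigned, then grow the regions.

    Labels: 1 = water, 2 = land, 0 = not yet assigned. Unassigned pixels
    are absorbed by whichever labeled region reaches them first.
    """
    labels = np.zeros(red.shape, dtype=np.uint8)
    labels[red <= water_hi] = 1                  # confidently water
    labels[red >= land_lo] = 2                   # confidently land
    if not labels.any():                          # nothing to grow from
        return labels
    while (labels == 0).any():
        for lab in (1, 2):
            grown = ndimage.binary_dilation(labels == lab)
            labels[(labels == 0) & grown] = lab   # claim adjacent pixels
    return labels
```

Growing the regions one ring of pixels at a time is also a good hint of why, as I say below, region growing is slow and expensive.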

This was a good find for me, because they’re solving a very similar problem, and I know how to implement both of those things! Unfortunately, region growing is relatively slow and expensive, and it probably wouldn’t play nice with cloudy images. I did more digging over the next week without finding anything more promising. So I sat down and wrote a little program.

Simplicity is important when you’re working with a lot of data; if the running time of the algorithm is longer than a person would take to do the same task, something has gone horribly wrong! I went through a couple iterations on how to find water, but in the end, this is what I ended up with.

Water is any pixel where the red value is between 1 and 25. Water’s very dark in all the bands, but it’s darkest in red, so that’s the best way to find it. If we’re clever about it, we only need to read the pixel values once, and perform some simple math operations, which means it should hardly take longer than opening up the image to view it. From there, the check goes like this:

– Count all the pixels that are water.
– Count all the pixels that are black, value 0. This ensures it’s not biased to throw out images that are on the edges of the Landsat scene.
– Calculate the percentage of non-black pixels that are water.
– If that percentage is above a certain threshold, we’re good to go, keep this image. I picked 5% as the threshold, based on a little trial and error. (A code sketch of the whole check follows below.)
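Putting those steps together, here’s a minimal sketch of the whole check in Python with numpy and Pillow. The function name, and the choice to count black pixels in just the red band, are illustrative choices for this sketch rather than my production code.

```python
import numpy as np
from PIL import Image

def enough_water(path, water_lo=1, water_hi=25, min_water_pct=5.0):
    """Keep an image only if enough of its non-black pixels look like water.

    Follows the rules above: 'water' is any pixel whose red value is
    between 1 and 25, black (value 0) pixels are scene-edge padding and
    don't count, and 5% water is the keep/discard cutoff.
    """
    red = np.asarray(Image.open(path))[:, :, 0].astype(int)  # red band only
    water = np.count_nonzero((red >= water_lo) & (red <= water_hi))
    black = np.count_nonzero(red == 0)        # edge-of-scene padding
    valid = red.size - black                  # pixels actually in the scene
    return valid > 0 and 100.0 * water / valid >= min_water_pct
```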

And that’s it! It by no means gets rid of ALL the non-coast images; for example, it does absolutely nothing for the abundance of partially cloudy ocean images. It also gets tripped up by dark shadows on land, whether from clouds or mountains, as shadows are just dark enough to fall within that threshold. Lakes are also selected, if they’re big enough.

The more complicated part comes after the algorithms are made and tested: building them into the existing image processing pipeline. I wrote my algorithm in Python, making use of a few key libraries to do all the image processing; the pipeline is in Ruby, and uses a tool called ImageMagick for its image processing. I’m good at programming in Python, but I’d never touched Ruby until working on this project! And ImageMagick does seem quite ‘magical’ to someone who hasn’t used it before.

After reducing the problem of non-coast images, there’s the problem of the dark and red images that are especially common in the Tasmania dataset. The red part has been solved, but the darkness is still there for a lot of images. I have more work to do! But for now, we can say goodbye to a big chunk of the non-coast images in the next data set. No more bright blue snow-capped mountains, or solid fluffy cloud tops, or endless squares of farmland.

I’ll see you on Talk!

Tasmania Images Back!

You may have noticed that for the past few months Floating Forests has only been serving up California images. The Tasmania images that we were analyzing last year have been offline due to some image quality issues. However, thanks to a lot of hard work by the Zooniverse team and power user/image processing magician Briana Harder, we are happy to announce that the Tasmania images are back! Not only are the image quality issues fixed, but Briana has also helped us implement an algorithm to filter out cloudy images and images that don’t contain any coastline (see upcoming post for more details). As a result, hopefully you will be spending less time skipping bad images and more time outlining those beautiful kelp forests!


Less of this…


and more of this!


We have also improved the way in which we obtain images from USGS/NASA. This will make it easier for us to introduce data from other regions, so expect to see some images from Baja California, Chile, South Africa, and other temperate coastlines soon.

In the meantime, have fun with these Tasmania images. We’ve already started to document some major declines in kelp abundance in Tasmania over the past few decades thanks to your classifications. We are eager to obtain a better picture of these changes, but we need your help!