Hello! You may have noticed that things have slowed down on the website. To make a long story short, thanks to all of your help we are down to the last handful of images from California and Tasmania! We have been busy cleaning the data up and getting it ready to go. This milestone has given us a chance to reflect on the first phase of the project and to get ready for some exciting next steps – more on this later!
In January, science team members Jarrett Byrnes, Kyle Cavanaugh, and Isaac Rosenthal traveled to Chicago to meet with the Zooniverse team. We were hosted at the amazing Adler Planetarium, and had an unbelievable week of planning and collaboration (and eating!). By getting the science and development teams into the same room, we were able to work through a few issues that have been nagging the project since its inception, fixing some geo-referencing issues and streamlining the post-processing of the data (in other words, what happens to the data after the kelp is classified). It was truly amazing to spend a week surrounded by talent from so many disciplines, ranging from educators to back-end web developers. I think I speak for all of us when I say that it was a unique and deeply inspiring experience!
While the site is still under construction, hopefully most of this is a familiar sight. Our goal with this relaunch is to make YOUR jobs easier! The tracing tool has been upgraded, and we will be able to spruce up the field guide. The under-the-hood flexibility of the new system is incredible and leaves the future of Floating Forests wide open. Custom datasets and modular workflows mean that the sky is the limit! Something that I am personally excited about is the opportunity to use these tools to ask new questions, broadening horizons for research and education. This relaunch will also feature an overhauled talk section so that we can continue to communicate with all of you!
Stay tuned for more information as we begin beta testing of the new website!
Last week I had the opportunity to take part in a citizen science forum organized by the White House. It was inspiring to see how committed the White House is to harnessing the power of citizen science. A number of exciting announcements were made during the event. For one, the Federal Citizen Science and Crowdsourcing Toolkit was officially released. This toolkit, developed with the support and collaboration of over 25 federal agencies, provides step-by-step instructions, case studies, and other resources to help scientists use citizen science in their research. As you might imagine, Zooniverse projects are well represented in the successful case studies section! Then John Holdren, the Director of the Office of Science and Technology Policy, gave a talk where he announced the release of a memorandum promoting the use of citizen science by Federal Agencies. Towards the end of the forum Senator Chris Coons (D-DE) announced a new bill authorizing citizen science and crowdsourcing. This bill is co-sponsored by Senator Steve Daines (R-MT), making it bi-partisan! During his talk Senator Coons described how he and his family were citizen scientists themselves and have spent many evenings collecting data for a wide variety of different Zooniverse projects! So next time you are chatting with someone on Talk, know that he or she could very well be a senator or representative. Perhaps even President Obama has a Zooniverse account?
In between these exciting announcements there were panels on Community Science Leaders, Oceans and Coasts, Democratized Tools, Water and Agriculture, and Communities and Health. A number of really exciting citizen science projects were highlighted during these panels. These ranged from investigations of the impact of aggressive policing to surfboards that collect oceanographic data to the development of methods for utilizing indigenous traditional knowledge to our own Floating Forests! You can watch the entire forum here.
I had the honor to serve on the Oceans and Coasts panel with some HUGE names in the marine science world: Dr. Alex Dehgan, Dr. Sylvia Earle (aka Her Deepness), Dr. Daniel Pauly, and Dr. Janet Coffey. During the panel we talked about the importance of the ocean and how little we know about it. The oceans play a central role in supporting human life. Yet we’ve mapped less of the ocean floor at high resolution than the surface of Mars, Venus, and the Moon combined. We have limited information about the changes that coastal ecosystems like coral reefs, mangroves, and giant kelp have been experiencing in recent decades. Citizen science provides a powerful method for collecting data that will allow us to better understand and protect these critical ecosystems.
It’s been a bit since I promised calibration info, but we’ve hit a minor (almost solved) projection issue in comparing our data to some gold-standard data we have. So, to stave off boredom while the real geographers on our team do the heavy lifting, I’ve been futzing about with making the generation of overall indices easier. I arrived at a neat solution using Spatial Grids in R that was much faster than switching back and forth between rasters. The biggest bonus is that the default plotting of results with color as number of people selecting an area is *purty!*
Or at least, I think so.
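For anyone who wants to peek under the hood, here is a rough Python sketch of the grid-counting idea (our real code uses Spatial Grids in R; the coordinates, cell size, and points below are entirely made up):

```python
import numpy as np

# Bin every kelp-marked coordinate straight into a shared grid of counts,
# instead of building and merging one raster per user.
cell = 30.0  # hypothetical grid resolution

# (x, y) points marked as kelp by different users (invented values)
points = np.array([[10.0, 10.0], [12.0, 11.0], [95.0, 40.0], [11.0, 12.0]])

cols = (points[:, 0] // cell).astype(int)
rows = (points[:, 1] // cell).astype(int)
counts = np.zeros((rows.max() + 1, cols.max() + 1), dtype=int)
np.add.at(counts, (rows, cols), 1)  # accumulates correctly on repeated cells
# counts[0, 0] == 3: three marks fell in the same cell
```

Plotting `counts` with color mapped to the number of people selecting each cell gives exactly the kind of heatmap shown above.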
How does this kelp forest look to you?
Well, I woke up this morning, fired up Floating Forests, as is my wont, and saw this! I thought it would be a few more days, and was even going to post some exhortation, but you guys have been too awesome and brought us to 2 million classifications yourselves!
Nice work, all! And now it looks like we’re going to need to throw some new regions your way soon!
A lot of what we’ll be working on to determine area of beds are heatmaps of users selecting a pixel as kelp. This sounds somewhat abstract, so I wanted to operationalize it for you with some images. Let’s start with a single image from Floating Forests chosen because it has been flagged as having kelp. It has 13 classifications, so, one more and it is ‘complete’ – unless we decide to lower the classification threshold. The image is
So, what would it look like if we overlaid all of the users’ kelp outlines from the other day on the image?
You can see, to some extent, folk circling the same areas, with varying degrees of specificity. What does this look like as a heatmap of the number of users selecting each pixel – the layer we’ll do our analysis on? Well, here you go!
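If you’d like to experiment with this yourself, here is a small Python sketch of turning user outlines into a per-pixel count (a hypothetical stand-in for our actual pipeline; the outlines below are invented):

```python
import numpy as np
from matplotlib.path import Path

def polygon_to_mask(vertices, shape):
    """Rasterize one user's outline (a list of (x, y) vertices) into a
    boolean mask over a (rows, cols) pixel grid."""
    rows, cols = shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    centers = np.column_stack([xs.ravel() + 0.5, ys.ravel() + 0.5])
    return Path(vertices).contains_points(centers).reshape(shape)

# two made-up user outlines around roughly the same patch of "kelp"
outlines = [
    [(1, 1), (6, 1), (6, 6), (1, 6)],
    [(2, 2), (7, 2), (7, 7), (2, 7)],
]
heat = sum(polygon_to_mask(v, (10, 10)).astype(int) for v in outlines)
# heat[y, x] is the number of users whose outline covers that pixel
```

Where outlines overlap, the counts stack up – that is all a selection heatmap is.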
Next time, a more quantitative look.
For the next post or three, I’m going to talk about what I see when I look at the data from one image. In the coming weeks, I hope to get at putting together bigger spatial or temporal results. But for the moment, I’m going to begin with what we see when we look at user classifications of one image. I’m going to begin with something beautiful – human variation.
This is the variability from person to person that we see in circling the same set of beds. I just find it striking and lovely.
Well, we’ve finally hit a critical mass of classifications (well, blown past it) and other projects by science team members have boiled down (we’ll be posting about them – they’re kelpy!), so we’ve begun to dig into the data. For anyone who wants to follow along at home, all code that we talk about will be posted in this github repository.
I thought I’d begin by telling you all about how *you* have been interacting with Floating Forests. Namely, how much effort do the ~5,100 users of FF put into the project?
Many Zooniverse projects do well from a lot of people doing just a few images each. We’re no different. We have a nice distribution of folk, with many doing few images (~1,500 have done just one classification), but a looong tail of users in the 100 to 1,000 range. See below, but note the log10 scale on the x-axis.
The average user, though, does ~125 classifications. If we put it together and look at the cumulative percentage of classifications done by users who classify different numbers of images, we see that ~25% is done by those users who classify fewer than ~250 images. So, our ‘super-users’ are incredibly important! Heck, we have one user who has contributed 5.15% of the classifications. The top 10 have contributed 18% of classifications.
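The bookkeeping behind those percentages is simple to reproduce; here is a toy Python sketch with made-up per-user counts (the real dataset has ~5,100 users):

```python
import numpy as np

# Invented per-user classification counts, from casual users to super-users.
counts = np.array([1, 1, 1, 2, 5, 10, 40, 100, 340, 500])

share = np.sort(counts)[::-1] / counts.sum()  # busiest users first
top1 = share[0]          # fraction contributed by the single busiest user
cum = np.cumsum(share)   # cumulative share of all classifications
# in this toy dataset the top user alone accounts for half the effort
```

Running the same few lines on the real classification export is how we arrived at the numbers above.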
It may still be difficult to see just how much those users are doing in comparison to users classifying only a few images. So, we’ve done what many other Zooniverse projects have done and made a treemap!
It’s not only incredibly informative – with the size of each square being proportional to the contribution of an individual user – but, oh, pretty data! Enjoy!
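If you are curious how a treemap is laid out, here is a minimal Python sketch of the simplest “slice” layout, where each rectangle’s area is proportional to a user’s contribution (real treemaps, including the one above, typically use the squarified layout, which keeps the rectangles closer to squares):

```python
def slice_treemap(sizes, x, y, w, h):
    """Split rectangle (x, y, w, h) into vertical strips whose areas are
    proportional to sizes -- the simplest possible treemap layout."""
    total = float(sum(sizes))
    rects, offset = [], 0.0
    for s in sizes:
        strip = w * (s / total)
        rects.append((x + offset, y, strip, h))  # (x, y, width, height)
        offset += strip
    return rects

# three hypothetical users: 500, 300, and 200 classifications
rects = slice_treemap([500, 300, 200], 0.0, 0.0, 1.0, 1.0)
# rects[0] spans half the unit square: 500 of 1,000 total classifications
```

Squarified treemaps just apply this same proportional-area idea recursively while alternating split directions.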
Note: This post is from Briana Harder, our newest Science Team member! We encountered Briana in Talk where she not only noted some issues, but then wrote code to reprocess images to fix them! Needless to say, we were impressed. What emerged was a wonderful dialogue between Briana, members of the science team, and the folk at Zooniverse. She’s made some large changes to our image processing pipeline and helped us all learn a lot about how to use Landsat for kelp in places *other* than California. As such, we asked Briana if she wanted to take her involvement to the next level, and join the Science Team. And we were delighted when she accepted! So, here are her comments on the awesome work she did and how our image processing has changed.
The first thing to do upon finding an interesting problem is to find out if anyone else has solved it already. So I searched for research in the areas of image analysis and coastlines and satellite imagery. The majority of the papers were far too detail-oriented to be very helpful; the problems in tracking the month-to-month changes of the coastline of a small island are wildly different from sorting coast from non-coast for FF! But I did find a fascinating paper on using Landsat data to build a highly accurate waterline database for all of Europe. They clearly solved the problem of finding ocean coastline, and then went a lot further!
The technique they used was to take a cloudless mosaic of the region (lots of preprocessing there!) and separate the image into three regions: water and land, selected with simple pixel-value thresholds, and unassigned pixels. They then ran a region-growing algorithm to add the unassigned pixels to either area.
This was a good find for me, because they’re solving a very similar problem, and I know how to implement both of those things! Unfortunately, region growing is relatively slow and expensive, and it probably wouldn’t play nice with cloudy images. I did more digging over the next week, without finding anything else that was more promising. So I sat down, and wrote a little program.
Simplicity is important when you’re working with a lot of data; if the running time of the algorithm is longer than a person would take to do the same task, something has gone horribly wrong! I went through a couple iterations on how to find water, but in the end, this is what I ended up with.
Water is any pixel where the red value is between 1 and 25. Water’s very dark in all the bands, but it’s darkest in red, so that’s the best way to find it. If we’re clever about it, we only need to read the pixel values once, and perform some simple math operations, which means it should hardly take longer than opening up the image to view it.
– Count all the pixels that are water.
– Count all the pixels that are black, value 0. This ensures it’s not biased to throw out images that are on the edges of the Landsat scene.
– Calculate the percentage of non-black pixels that are water.
– If that percentage is above a certain threshold, we’re good to go, keep this image. I picked 5% as the threshold, based on a little trial and error.
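Put together, the steps above look roughly like this Python sketch (not the actual pipeline code; the tile below is a toy example):

```python
import numpy as np

def keep_image(red_band, threshold=0.05):
    """Decide whether a Landsat tile likely contains coastline.

    red_band: 2D array of 8-bit red-channel pixel values.
    """
    water = np.count_nonzero((red_band >= 1) & (red_band <= 25))
    black = np.count_nonzero(red_band == 0)  # scene-edge fill pixels
    valid = red_band.size - black            # pixels that actually hold data
    if valid == 0:
        return False
    return water / valid >= threshold

# toy tile: bright land, one column of scene-edge fill, one strip of water
tile = np.full((10, 10), 120, dtype=np.uint8)
tile[:, 0] = 0    # black edge, excluded from the percentage
tile[:, 9] = 12   # dark "water" pixels
keep_image(tile)  # True: 10 water pixels out of 90 valid is ~11%, above 5%

land = np.full((10, 10), 120, dtype=np.uint8)  # no dark pixels at all
keep_image(land)  # False: rejected as non-coast
```

Because it only reads the pixel values once and does a few comparisons, it runs in a blink even over a whole dataset.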
And that’s it! It by no means gets rid of ALL the non-coast images; for example, this does absolutely nothing for the abundance of partially cloudy ocean images. It also gets tripped up by dark shadows on land, either from clouds or mountains, as shadows are just dark enough to fall within that threshold. Lakes are also selected, if they’re big enough.
The more complicated part comes after algorithms are made and tested: building them into the existing image processing pipeline. I wrote my algorithm in Python, making use of a few key libraries to do all the image processing; the pipeline is in Ruby, and uses a tool called ImageMagick for its image processing. I’m good at programming in Python, but I’d never touched Ruby until working on this project! And ImageMagick does seem quite ‘magical’ to someone who hasn’t used it before.
After reducing the problem of non-coast images, there’s the problem of the dark and red images that are especially common in the Tasmania dataset. The red part has been solved, but the darkness is still there for a lot of images. I have more work to do! But for now, we can say goodbye to a big chunk of the non-coast images in the next data set. No more bright blue snow-capped mountains, or solid fluffy cloud tops, or endless squares of farmland.
I’ll see you on Talk!