Leaf Litter

Expert Q&A: Dr. Serge Wich

Interview by Amy Nelson

Serge Wich is a professor and leader of the MSc Wildlife Conservation and UAV Technology programs at Liverpool John Moores University’s School of Biological and Environmental Sciences. His research focuses on primate behavioral ecology, tropical rainforest ecology, and the conservation of primates and their habitats, and since 2011, he has been using drones to support conservation work. Serge is a co-founder of two non-profits that continue to advance the field of conservation technology: ConservationDrones.org, which shares knowledge about building and using low-cost unmanned aerial vehicles with conservation workers and researchers worldwide, and Conservation AI, a platform that uses machine learning techniques to analyze vast quantities of data and provide conservation practitioners with invaluable insights. Along with his colleague Lian Pin Koh, Serge is the co-author of Conservation Drones, the first book to provide guidance on drone usage specifically for the conservation and ecology communities. He is also the co-editor of Conservation Technology, the first book to provide guidance and insight on a broad range of technology usage for the conservation community.

 

Serge Wich ©Jeff Kerby

How did you become interested in technology and when did it first intersect with your work in biology?
I have always been interested in technology, but for the past 10 years or so, it has really become an important part of my research. The interest mainly came out of a discussion I had with Lian Pin Koh, the other co-founder of ConservationDrones.org, about monitoring orangutans in tropical rainforests. Generally, this is difficult, because the terrain is often hilly, it is warm, and there are so many trees in the way that it takes a long time to walk through these forests. It can also be very costly.

Orangutans, like all great apes, make sleeping platforms at night. We knew that we could see these platforms from single-engine aircraft and helicopter flights, but we weren’t sure if we could see them from a drone. So we built a drone and then found out that you can. That really got me very interested in technology for conservation.

Wich (L) and Lian Pin Koh (R) with their early drone ©ConservationDrones

As drones became more popular and prices began dropping, it became even more of an interest point for me. If we want to conserve biodiversity, we need to monitor it in an efficient and affordable way that will allow us to be better at conservation management.

©ConservationDrones

You co-edited, along with Alex Piel (of University College London and the Greater Mahale Ecosystem Research and Conservation Project in Tanzania), the book Conservation Technology. Why was it important for you and Dr. Piel to create that book?
There are a lot of publications out there on specific technologies, like drones and camera traps, and there are a lot of papers about eDNA. But there wasn’t a book that put all these technologies together within a conservation context, and I know from colleagues and students that there is a lot of interest in this. We thought, if we ask experts around the world to contribute a chapter, we can create a book that is useful for students and other interested academics or non-academics.

Based on what you know from experience, and from what you learned in pulling the book together, how would you describe the state of conservation technology today? Is it merely in its infancy?
The developments are going very fast, but I think it is still in a very early state. I wouldn’t say it is in its infancy, but –

It is like a toddler?
… maybe. It’s making its first moves, but it’s not completely comfortable yet. It doesn’t really know what its role is in the grand scheme of monitoring. The sensors are out there and there is a lot of data being collected, but to make the step from data to metrics and information that is useful for policymakers and other decision-makers… we are still a little bit away from that.

I, along with many others, would like to see us automate as much as we can—use automated computer algorithms to go through the data, detect objects or calls in the data, and then turn that into something useful for a decision-maker. That is a process, and it involves stepping out of our comfort zones and talking to decision-makers and understanding what they need. It’s very easy for me to do work based on what I think they need, but that’s a little bit pretentious and probably off the mark. We need to find a way to talk to national park managers, politicians, and others and ask what data they need to inform their decisions and then try to develop pipelines for providing those data quickly.

That is particularly troublesome because as great as science is, it is also quite slow. For example, a publication recently came out on the decline of orangutans in a particular national park, but data collection for the study ended in 2014. Our colleagues and decision-makers need information sooner, so we need to accelerate that process and get it out to policy people faster.

Let’s talk about drones. I understand that the use of drones in conservation really began in the late 2000s. What are some of the most important ways that the use of drones in conservation has evolved since then?

Drones have improved dramatically in terms of ease of operation and the stability of the systems. The duration that they can fly has also improved. The sensors and cameras have evolved and improved as well: there are now standard optical cameras, thermal infrared cameras that detect heat radiation, and multispectral cameras that sample different bands of the electromagnetic spectrum. Prices have dropped and access has gotten easier. But at the same time, there are a lot of users or potential users who still struggle to use drones even though they want to. You can buy drones easily in the West, but they’re much harder to buy in many developing nations. Prices may be affordable in wealthier nations, but that’s not the case for our colleagues working in conservation in certain countries. Similarly, for local communities, drones can be important tools for managing their own areas, but access is also not so easy for them because of cost, language, and internet access.

©ConservationDrones

There are still a lot of hurdles to overcome before drones and other technologies are accessible to everyone in the world. I work mostly in tropical conservation, generally in less developed countries, so it is particularly important that we solve this problem so that everyone there has access to drones and to the methods to analyze data. There is also still a hurdle in ensuring that drones will not have a negative social impact. In many parts of the world, when you start flying drones, people might think about military drones, or about their privacy being impacted. Those are real questions, fears, and worries that people have. When we operate drones, we have to make sure that we discuss this with the people in and near the communities where we’re flying so that they understand what we are doing. If they don’t want it, it doesn’t happen. We shouldn’t be flying over people and their land if there is no consent. Conservation biologists might not always be thinking about those issues. That is changing, but there’s more work to do.

An unmanned military drone

Indeed, drones are often associated with surveillance or warfare, and that could really affect the dynamics of the relationships you have with local communities. Can you share an example of how you handled communication with a community in an area in which you wanted to fly drones for conservation?
We always work with colleagues from the country—and ideally, the region—in which we’re flying, so that they can help us overcome any language barriers. Often, people that we work with have been working with communities already. In Madagascar for instance, before flying in a certain area, our colleagues already had discussions with the local community members. When we arrived, we had more discussions.

©Mike Hudson

We showed community members the drone and example images, and explained that we wanted to be sure that we had their approval to fly. Once we did, and we had a couple of days of data collected, we organized an evening in the village with a big screen and showed our results. There were a couple of hundred people there watching the footage. Those things really help and are important to try to do. It is not always possible, but this is a nice example of how we try not to just fly over an area and leave people wondering what they’re seeing and hearing in the sky, but instead involve them from the first step through to seeing the results. We’re not collecting data, running away, and never sharing the results. When people can see what we’re seeing, that helps build a better understanding.

©Serge Wich

How can conservation and restoration practitioners know whether drone or satellite imagery is best for a particular project? Is it always an either/or situation?
They both have their advantages and disadvantages. Satellites have the great advantage that they cover expansive areas, which is very difficult to do with drones. However, satellites have a lower resolution, so if you need a lot of detail, then drones are the way to go. With drones, you cover less ground, but you can fly whenever you want to. Satellites have certain repeat times: for some, it’s six days, for others it’s a month or a day. But if there are clouds [during those repeat times], then most sensors on the satellite won’t be so useful. With a drone, on the other hand, you fly under the cloud, so you can essentially take the data whenever you like.

I see them as two different kinds of tools that we often use in harmony in the same area. We use satellites to cover larger areas and look for things like trends in deforestation. If we want to go into detail in a specific area, or if we want to know something particular about certain tree species, we fly a drone. If we want to look at tree mortality, for instance, a drone would be very good to fly, while a satellite would struggle to identify single trees, particularly when they’re small.

Let’s shift to artificial intelligence. What is the difference between deep learning and machine learning?
In simple terms, you can think of machine learning as a broader category that includes various methods for teaching computers to learn from data. Deep learning is a specialized subset of machine learning that excels at learning complex patterns from large amounts of data. It does this by using deep neural networks with multiple layers to automatically extract relevant features from the data. We do have to train such networks, and we can do this by drawing a box around a certain animal species in an image and then indicating to the computer which animal species it is. If we do this with a large number of animals per species, the deep learning model can learn how to differentiate between species and then do so automatically.
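
As a rough illustration of the workflow Wich describes, here is a minimal sketch of fine-tuning an off-the-shelf object detector on hand-drawn bounding boxes, using the open-source PyTorch and torchvision libraries. This is not Conservation AI’s actual code; the class count, image, and box coordinates below are placeholders.

```python
# Minimal sketch: fine-tuning an object detector on bounding-box
# annotations ("draw a box around the animal, tell the computer which
# species it is"). Not Conservation AI's code; dataset values are
# placeholders you would replace with your own annotated images.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 4  # assumption: background + 3 animal species

# Start from a detector pre-trained on everyday images, then swap in a
# new prediction head sized for our own species list.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()

# One illustrative training step: `images` is a list of image tensors,
# `targets` holds the hand-drawn boxes plus the species label for each box.
images = [torch.rand(3, 480, 640)]                               # placeholder image
targets = [{"boxes": torch.tensor([[100., 120., 300., 360.]]),   # x1, y1, x2, y2
            "labels": torch.tensor([1])}]                        # 1 = species ID

loss_dict = model(images, targets)   # in train mode the detector returns its losses
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```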

In general, where are we in terms of the application of artificial intelligence to biodiversity conservation?
I think it is still in its very early days. We are using it for camera trap data. Quite a few species are covered by camera trap algorithms, and some of those algorithms are getting quite good, but we’re still early in terms of identifying different individuals of the same species. The use of AI with drone imagery is also in its early days. There are people doing it, but not many. A lot of this is still in the academic research domain, but there are attempts, by Wildlife Insights, ConservationAI, and others, to open it up more. For acoustic data, where people make long recordings and then listen, or have a computer listen, for the sound of a particular animal, deep learning is also still in its early days. The exception is birds, where there are good, well-developed apps out there, like Merlin and eBird, which allow a lot of people who probably could not identify birds themselves to do so. Overall, it’s early, but it’s also developing rapidly, so I suspect that it will change and become more accessible to everyone quickly.

How important is citizen science to the development of AI and its application to conservation?
It is very important. Citizen science helps with data collection and with corrections when identifications are not correct. Well-developed tools that can be used all around the world are so interesting because everybody can collect data.

You can now analyze the data that is in iNaturalist (the platform where anyone can take a photo of an animal or plant and upload it), for instance. You can see bird migration patterns just by looking at the changing locations of people observing the birds. We can use the data to see patterns and changes in those patterns, so all the people involved are part of a global monitoring scheme. I think that is super fascinating and super necessary.

Can you tell me about your platform, Conservation AI, and how it can help us learn about trends in biodiversity?
The platform is fairly young. It has been developed by two computer scientists, an astrophysicist, and me, and the aim is to work closely with users to develop the object detection models that they require. These can be built from drone or camera trap images, and we are also working on acoustic data. The main aim at the moment is to provide a platform [on which users can] upload their data and then, on the other side, get information on which animals, cars, or people—whatever the user is interested in—have been detected.

©ConservationAI

On the platform, we can also build tools that show changes over time. For instance, with cameras we have in South Africa, you can see at which cameras zebras are found more often throughout the different months of the year. That might tell the managers there how zebras are moving throughout the reserve. Conservation AI is very user-driven. We try to listen to the people we work with and then develop the tools that are useful for them.
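
A hypothetical sketch of that kind of seasonal summary, assuming detections can be exported as rows of camera ID, species, and timestamp; the file name and column names here are placeholders, not Conservation AI’s actual schema.

```python
# Hypothetical sketch: counting zebra detections per camera per month,
# the kind of seasonal-movement summary described above. Column names
# and the input file are assumptions, not the platform's actual export.
import pandas as pd

detections = pd.read_csv("detections.csv", parse_dates=["timestamp"])
zebras = detections[detections["species"] == "zebra"]

monthly = (zebras
           .assign(month=zebras["timestamp"].dt.to_period("M"))
           .groupby(["camera_id", "month"])
           .size()
           .unstack(fill_value=0))   # rows: cameras, columns: months

print(monthly)  # which cameras record zebras in which months
```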

Is the platform mainly for monitoring fauna?
At the moment it’s for fauna, but we are certainly thinking about tree species as well, particularly from drone data.

I have read about your team’s use of drones and thermal imagery to detect spider monkeys in Mexico, and I have seen your unique footage of the endangered giant ground pangolin (Smutsia gigantea) on the Conservation AI blog. Can you share another case study that helps illustrate the outcomes and benefits of your platform, Conservation AI?
The pangolin work is important for us because it relates to the most traded animal in the world, and they need all the help they can get. If we can use camera traps to find out where they are or when poachers are in those areas, that might help.

We’re working a lot with orangutans, but that is still a work in progress. The key aim is to fully develop an automated method where you can fly over a tropical rainforest, have the algorithm determine where nests are in the video images, and then have a statistical step that turns that into animal density. We are now working to get the algorithm to find orangutan nests in the videos. That is one example.

We are also working with Audubon Canyon Ranch in the American West. They are interested in mountain lions, and they have a lot of camera traps and large amounts of images. We have analyzed several million images for them, which really saves them time, and that is important for any organization. It is an enormous hurdle to manually go through all those images. The first few hundred are kind of fun, and then you realize that it would actually be really nice if a computer could do it. If you have a few million images, which many projects now have, it becomes unmanageable to go through them manually, and that’s where these algorithms really help. So now this organization can just upload those images [to Conservation AI] and get back a list of the file names in which something was detected, along with the probability that it is a mountain lion.
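
The kind of output described here (a file name plus a probability) lends itself to simple triage. Below is a hypothetical sketch that splits a detection report into confident hits and images to check by hand; the CSV format and column names are assumptions, not Conservation AI’s actual export.

```python
# Hypothetical sketch: triaging a detection report of the kind described
# above (file name + detected label + probability). The CSV columns are
# assumptions, not Conservation AI's actual export format.
import csv

CONFIDENCE_THRESHOLD = 0.8  # review anything the model is less sure about

likely_lions, needs_review = [], []
with open("detections.csv", newline="") as f:
    for row in csv.DictReader(f):   # expects columns: filename, label, confidence
        if row["label"] != "mountain_lion":
            continue
        if float(row["confidence"]) >= CONFIDENCE_THRESHOLD:
            likely_lions.append(row["filename"])
        else:
            needs_review.append(row["filename"])

print(f"{len(likely_lions)} confident mountain lion images")
print(f"{len(needs_review)} images to check by hand")
```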

Can anyone with data from a camera trap or drone or acoustic recordings use Conservation AI? What would they need to do?
It’s always good to email us first and explain who you are and what you’re interested in. You need to register on the website, and if, for instance, you have camera trap images of North American mammals, you can just use that model, upload your images, and get results. Then it would be great if you let us know which ones were not correct, because that’s how we can continue to improve the model. We need to know, for example, when a mountain lion has been classified as something else, or when something else has been incorrectly classified as a mountain lion.

The North American mammal model currently has about 10 species in it, but that will grow over time. We are developing similar models for Europe, the Asian region, and South America, so people interested in those regions can certainly contact us.

If you are interested in animals that we do not yet have a model for, we can see if we can develop one. We’re developing new models all the time. If people have images that they can share, so we can train our models before we make them available, then that’s great as well. We are always looking for new camera trap images for the models we’re training.

©ConservationAI

What sort of costs are involved to the user to do something like this, or is that arranged on a case-by-case basis? Are any aspects of it free?
Everything is free at the moment. How that will be in the future, I don’t know. We get a lot of requests, so at some stage we may have to hire more people and then ask for a small fee, but we’re trying to do as much as we can for free.

What are some of the major challenges or kinks that you’re noticing and having to work through?
The major challenge is data. To train these models and to train them well, you need large amounts of data, and that is not always available. There are a lot of images of certain animals out there, for instance, but those are not always representative of the images you get on a camera trap. Camera trap images often capture only the animal’s behind and tail, a head from a particular angle, or the animal at a particular distance. A bunch of perfect images of an elephant will help identify an elephant standing clearly and beautifully in front of a camera trap, but it becomes much more difficult if the elephant is much further away, or if the image only captures the elephant’s ear, for example.

These algorithms do not generalize very well. If you only train an algorithm on elephants from the front and then show it an image of an elephant from the back, it will have no idea what it is. These algorithms really need to be trained on images with as much variation as possible, and it’s not always possible to get those images.
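
One partial mitigation, not mentioned in the interview but commonly used, is to synthetically vary the training images through augmentation: random flips, crops, and brightness shifts that mimic some of the variation a camera trap produces. A minimal sketch with the open-source torchvision library follows (the file name is a placeholder); as Wich notes, this cannot replace genuinely varied field images.

```python
# Minimal sketch: adding synthetic variation to camera trap training
# images with torchvision transforms. Augmentation only partially
# compensates for a lack of genuinely varied field images (different
# angles, distances, partial views).
from torchvision import transforms
from PIL import Image

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),                 # animal facing either way
    transforms.RandomResizedCrop(224, scale=(0.3, 1.0)),    # partial views, varying distance
    transforms.ColorJitter(brightness=0.4, contrast=0.4),   # lighting differences, shade
    transforms.ToTensor(),
])

image = Image.open("elephant_0001.jpg")  # placeholder camera trap image
augmented_tensor = augment(image)        # a new, randomly varied training sample
```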

I do have one more question on machine learning and AI. Where is it in terms of monitoring aquatic or marine species?
People are using underwater camera traps and AI to try to detect different fish species. Passive acoustic monitoring—recorders that you leave out for a long time—is quite far advanced for underwater studies. It is also difficult underwater because of the three dimensions, the distances, and all the other sounds out there, but there is a lot of effort happening, particularly in acoustic monitoring of the larger mammals, dolphins, and whales.

You’ve mentioned how drones and AI are automating labor-intensive remote sensing work and analysis. What exciting things are being done with the time that researchers used to spend doing transects or combing through data or watching hours and hours of footage or sifting through photographs? Is anyone studying how the use of these technologies is impacting the nature of conservation biology or the job of a conservation biologist?
The time savings allow people to be in the field more, perhaps doing more work with local communities and potentially patrolling more, and that should improve conservation.

©Mike Hudson

My hope is not that this will lead to job losses in conservation, but rather that people can do other things. A local colleague whose conservation work used to involve a lot of survey work on foot, for example, might now be retrained to fly a drone. Loss of jobs is a justified concern, but I don’t see it happening at the moment. I have seen that people who were previously going on foot into the forest often become drone pilots. We spend quite a lot of time training people how to fly drones and deal with the data, because it needs to be the same local people working on these conservation projects.

Wich shows drone to children in Tanzania ©Jeff Kerby

That’s why it is so important that the technologies become available throughout the world. Otherwise, conservation relies on a system where somebody from the UK or the US comes in with the drones, flies them, and then either takes them back or leaves them there without anyone who can fix them if something breaks. That all of course needs to change.

In addition to you and your colleagues at Liverpool John Moores University, where are you seeing exciting leaders in the field? Who is inspiring you, when it comes to groundbreaking advances in the application of technology to conservation?
Most of my inspiration comes from colleagues from habitat countries, like Brazil and Vietnam. They are often young people, like PhD students or local conservationists who are using this technology and trying to look at novel ways to do it. They are often really good people who make others enthusiastic about technology. Because they are in the field, they use the technology in innovative ways and really try to push boundaries. There are people in Brazil, like Fabiano de Melo, doing amazing things with drones, and people in Vietnam studying monkeys with drones. I am also inspired by local community members who use drones to map their own areas and try to use those data to prevent mining companies from going in. I see their work and I think, “Wow, this technology is really helping and empowering people.” That inspires me to do more.

An AudioMoth acoustic recorder (under a cover to protect it from the rain) in Mexico ©Serge Wich

I read somewhere that drones can be used for sampling water. Is that true?
Yes. People are using them to sample environmental DNA from water. One way is to have a winch underneath, or a rope with a bucket, dip it in the water, and then fly up. A slightly more advanced approach is what the UK company NatureMetrics has developed: a system that lowers a plastic case containing a water pump into the water. It pumps a few liters of water through a filter, and the environmental DNA sticks to the filter. Once the filter is full, the rope on the winch (which I think is 60 meters long) goes up again and the drone can fly away. You can hover over a river or lake, for example—even in a forested area—and collect a water sample. It’s amazing technology.

Are there any other applications of technology for conservation—whether it’s drone, eDNA, AI or a combination—that people may not know about?
Airborne DNA sampling is starting, where drones carry sampling mechanisms to monitor all the DNA floating in the air from pollen, trees, plants, and animals.

What’s on the horizon?
Drones currently fly over areas. I think this will completely change. Once drones are more developed, equipped with more sensors, and can essentially pilot themselves, I think they will be flying in the forest, into and through these immensely complex, three-dimensional worlds where a human drone pilot struggles. With our eyes, it’s difficult to estimate the distance between branches and the drone, so you can very easily crash. If we could develop drones that can navigate through those systems and sample, that would be amazing. There are people working on this. There are drones [in development] that can perch on a tree, make recordings, and then fly away, or perch on a tree, take a DNA sample of a plant, and then fly away. If that gets developed further, it will allow for amazing sampling and monitoring of things within the forest and not only above it.

Someone developed a drone that can fly up and shoot an arrow with a little acoustic recorder in it into a tree. The recorder then records for a couple of days, after which a small rope drops down and the recorder can be pulled back down. This enables us to have recordings from high up in the canopy without anyone having to climb those tall trees. It’s super interesting to see all of these innovative ideas.

Technology has greatly advanced remote sensing, data collection, mapping, spatial analysis, and modeling, but what aspects of conservation, where it has potential, has technology not yet powerfully impacted? I wonder about human behavior change, for example.
[The application of technology to] human behavior change is of course a big one, but also a somewhat worrisome one. We know how technology can be used to change people’s behavior, and we’ve seen that done in some contexts where it shouldn’t have happened, like with elections and the Cambridge Analytica scandal. We see it all the time with social media. There is a lot of knowledge out there on how to use technology for behavioral change, but I’m not sure that that’s the way we should be doing it. I find it a bit manipulative. I think we need to see if we can use technology to influence decision-making in a collaborative, rather than manipulative, way. I’m not sure how that would work, but I think we can certainly still improve upon monitoring and awareness raising.

What about gaming? Is there a place for gaming in conservation tech?
I don’t know much about it, but I think gamifying things is interesting to explore because people can be competitive. If I could automatically see the carbon, environmental, or social footprints on my shopping list, for example, maybe I would game with myself to try to improve that. Maybe you could have an app that you game with your friends and see who does the most environmentally sustainable shopping. I think there could be cool ways to try to do this that are not dark, but fun… where we would learn and improve at the same time.

 

Note: Learn more about Serge Wich’s research and projects, and check out ConservationAI and ConservationDrones.
