
Mosquito control: targeting breeding sites using street view images


In addition to personal protection against mosquito bites and the chemical control of mosquitoes and their larvae, the removal of common breeding sites is one of the most important and effective steps in controlling mosquitoes and the diseases they transmit. Recent research has sought to use geotagged images obtained through Google Street View to map the most common types of open containers, in order to facilitate and accelerate the detection of likely breeding sites and to provide a decision support tool. In this Infectious Thoughts interview, we speak to Prof. Peter Haddawy of Mahidol University's Faculty of ICT and the University of Bremen's Spatial Cognition Center about the benefits and cost-effectiveness of developing such tools as part of a broader vector- and disease-control effort, the potential to improve the model by integrating additional data streams, and the applications of this approach to other diseases or issues where the environment plays an important role.

What have been some of the main advantages of using Google Street View images to detect likely mosquito breeding sites?

Using this approach, we are able to provide information on outdoor container counts on a scale not possible through manual surveys and at very low cost. The cost of obtaining the Google Street View images is minimal. For the processing, the only requirement is a high-end PC with a fast graphics card and a good amount of storage.
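The interview does not spell out how the images are retrieved, but a minimal sketch of how geotagged street-level imagery can be collected is shown below, assuming Google's Street View Static API is used. The coordinates, output filenames, and API key are placeholders, not details from the study.

```python
# Minimal sketch (assumed workflow, not the study's exact pipeline):
# download street-level crops at a sampled road point via the
# Google Street View Static API.
import requests

STREETVIEW_URL = "https://maps.googleapis.com/maps/api/streetview"
API_KEY = "YOUR_API_KEY"  # placeholder; a billing-enabled Google Maps key is required


def fetch_streetview_image(lat: float, lng: float, heading: int, out_path: str) -> None:
    """Request one 640x640 crop looking in a given compass direction."""
    params = {
        "size": "640x640",           # maximum resolution for the static API
        "location": f"{lat},{lng}",
        "heading": heading,          # 0-360 degrees, clockwise from north
        "fov": 90,                   # field of view covered by each crop
        "pitch": 0,
        "key": API_KEY,
    }
    resp = requests.get(STREETVIEW_URL, params=params, timeout=30)
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)


# Example: four crops covering a full panorama at one (hypothetical) sample point
for heading in (0, 90, 180, 270):
    fetch_streetview_image(13.7563, 100.5018, heading, f"sv_{heading}.jpg")
```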

Your approach relies on detecting eight of the most common container types using convolutional neural network transfer learning - how does this approach ensure that containers are accurately identified and classified?

We specifically evaluated the detection accuracy in our study. This is the first thing you need to look at because if the detection accuracy is not good, you can stop right there. For the binary problem of determining whether an image contains a breeding site or not, the F-score is 0.91. For the problem of detecting and then classifying into one of the eight container types, the F-scores range from a low of 0.37 for bins to a high of 0.92 for old tires. Other than bins, the F-scores are all above 0.8. The value for bins is a bit low because they get confused with buckets.
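For readers unfamiliar with the metric, the F-score reported here is the harmonic mean of precision and recall. A small illustration follows; the detection counts in it are invented for the example and are not taken from the study.

```python
# Illustrative only: the F-score as the harmonic mean of precision and recall.
# The counts below are made up, not the study's actual results.
def f_score(true_pos: int, false_pos: int, false_neg: int) -> float:
    precision = true_pos / (true_pos + false_pos)  # fraction of detections that are correct
    recall = true_pos / (true_pos + false_neg)     # fraction of real objects that are found
    return 2 * precision * recall / (precision + recall)


# e.g. 92 correct tire detections, 10 false alarms, 6 missed tires
print(round(f_score(92, 10, 6), 2))  # ~0.92
```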

How easily could this approach be scaled up to different countries or continents?

Scaling up to larger regions is partly a matter of computation, which scales linearly with the number of images. The more significant issue is deciding on the types of containers to detect. The prevalence and importance of different types of containers as potential breeding sites vary from country to country and even among different regions within a country. So one would first need to identify which containers are the most important in the area to be studied. If one is lucky, the containers are already in the COCO dataset. If not, one would need to train the neural network to recognize these containers, which involves collecting and labeling enough images of them. We were able to use only about 500 training examples per container category because we used transfer learning with a network originally trained on the COCO dataset. I would expect this to be the case for a large range of container types.
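The study's exact detector architecture is not described in this interview, so the sketch below is only one way to realize the transfer learning step: fine-tuning a COCO-pretrained object detector (here torchvision's Faster R-CNN, an assumed stand-in) so that its classification head covers the eight container types.

```python
# Hedged sketch: transfer learning for container detection by fine-tuning a
# COCO-pretrained detector. Architecture and hyperparameters are illustrative
# assumptions, not the study's published configuration.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CONTAINER_CLASSES = 8  # the eight common container types


def build_container_detector() -> torch.nn.Module:
    # Start from weights learned on COCO, so only a few hundred labeled
    # examples per container category are needed for fine-tuning.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # Swap the COCO classification head for one covering our classes (+1 for background).
    model.roi_heads.box_predictor = FastRCNNPredictor(
        in_features, NUM_CONTAINER_CLASSES + 1
    )
    return model


model = build_container_detector()
# Fine-tune on the newly labeled container images; the pretrained backbone
# supplies general object features, which is what makes ~500 examples per class workable.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=0.005, momentum=0.9
)
```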