Classification of Underwater Fish Images and Videos via Very Small Convolutional Neural Networks
The automatic classification of fish species appearing in images and videos from underwater cameras is a challenging task, albeit one with a large potential impact on environmental conservation, marine fauna health assessment, and fishing policy. Deep neural network models, such as convolutional neural networks, are a popular solution to image recognition problems. However, such models typically require very large datasets to train their millions of parameters. Because underwater fish image and video datasets are scarce, non-uniform, and often extremely unbalanced, deep neural networks may be inadequately trained and run a much higher risk of overfitting. In this paper, we propose small convolutional neural networks as a practical engineering solution to fish image classification. "Small" refers to the number of parameters of the resulting models: smaller models are lighter to run on low-power devices and drain fewer resources per execution. This is especially relevant for fish recognition systems that run unattended on offshore platforms, often on embedded hardware, where established deep neural network models would require too many computational resources. We show that even networks with little more than 12,000 parameters provide an acceptable working degree of accuracy in the classification task (almost 42% for six fish species), even when trained on small and unbalanced datasets. When the fish images come from videos, we augment the data via a low-complexity object tracking algorithm, increasing the accuracy to almost 49% for six fish species. We tested the networks on images obtained from deployments of an experimental system in the Mediterranean Sea, showing a good level of accuracy given the low quality of the dataset.
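To make the "little more than 12,000 parameters" figure concrete, the sketch below tallies parameter counts for a hypothetical small CNN of the kind described. The layer sizes (three 3x3 convolutions, two dense layers, six output classes) are illustrative assumptions, not the paper's actual architecture; the counting formulas themselves are standard: a convolution holds (kh * kw * in_ch + 1) * out_ch parameters and a dense layer (n_in + 1) * n_out, each including one bias per output.

```python
def conv_params(kh, kw, in_ch, out_ch):
    """Parameters of a conv layer: kernel weights plus one bias per output channel."""
    return (kh * kw * in_ch + 1) * out_ch

def dense_params(n_in, n_out):
    """Parameters of a fully connected layer: weights plus one bias per output."""
    return (n_in + 1) * n_out

# Hypothetical layout for a 32x32x3 input, with 2x2 pooling after each conv,
# so the 4x4x32 feature map flattens to 512 before the dense layers.
layers = [
    ("conv 3x3,  3->8 ", conv_params(3, 3, 3, 8)),    # 224
    ("conv 3x3,  8->16", conv_params(3, 3, 8, 16)),   # 1168
    ("conv 3x3, 16->32", conv_params(3, 3, 16, 32)),  # 4640
    ("dense 512->12   ", dense_params(512, 12)),      # 6156
    ("dense  12->6    ", dense_params(12, 6)),        # 78 (six fish species)
]

total = sum(p for _, p in layers)
for name, p in layers:
    print(f"{name} {p:6d}")
print(f"total parameters:  {total}")  # 12266 -- little more than 12,000
```

A budget of this size is small enough to fit comfortably in the memory of embedded hardware, which is the deployment scenario the abstract emphasizes.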