List of USV analysis methods
MSA (Mouse Song Analyzer: Arriaga et al., 2012; Chabout et al., 2015)
VoICE (Burkett et al., 2015, Sci Rep)
MUSE (Neunuebel et al., 2015)
Ax (Seagraves et al., 2016)
A-MUD (Zala et al., 2017, PLoS ONE)
MUPET (Van Segbroeck et al., 2017)
USVSEG (Tachibana et al., 2020, PLoS ONE)
DAS (Steinfath et al., 2021)
AMVOC (Stoumpou et al., 2021)
table:Feature comparison
 System | Noise reduction | Syllable classification | Notes
 USVSEG | Preprocessing with multitaper windows and spectral flattening | - |
Miscellaneous comments
General
USVSEG's distinguishing feature is essentially its preprocessing, so if other algorithms adopted it, their noise robustness should improve. The USVSEG authors are basically not interested in the classification step that follows, but since some people will want to classify, it would probably be good either to provide something plugin-like or to package the preprocessing functions as small, separate modules and release them (a Python version by Matsumoto already exists).
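For reference, here is a minimal sketch of what such small, reusable preprocessing functions could look like (multitaper spectrogram, spectral flattening, per-frame thresholding). This is not the published USVSEG pipeline or the existing Python port; the taper count, FFT size, median-based flattening, and threshold are illustrative placeholders.
```python
# Minimal sketch of a USVSEG-like front end, split into small reusable
# functions. NOT the published USVSEG pipeline or the existing Python port:
# taper count, FFT size, the median-based flattening, and the threshold
# are illustrative placeholders.
import numpy as np
from scipy.signal import stft
from scipy.signal.windows import dpss

def multitaper_spectrogram(x, fs, nfft=512, n_tapers=4):
    """Average STFT power over several DPSS tapers to reduce variance."""
    tapers = dpss(nfft, NW=(n_tapers + 1) / 2, Kmax=n_tapers)
    power = None
    for w in tapers:
        f, t, Z = stft(x, fs=fs, window=w, nperseg=nfft, noverlap=nfft // 2)
        p = np.abs(Z) ** 2
        power = p if power is None else power + p
    return f, t, power / n_tapers

def flatten_spectrogram(power):
    """Crude flattening: divide each frequency bin by its median over time,
    which suppresses stationary background such as fan noise."""
    baseline = np.median(power, axis=1, keepdims=True) + 1e-12
    return power / baseline

def detect_frames(flat, threshold_db=10.0):
    """Flag frames whose peak flattened power exceeds a dB threshold."""
    peak_db = 10.0 * np.log10(flat.max(axis=0) + 1e-12)
    return peak_db > threshold_db
```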
It seems to be under continuous, rapid improvement, so it is hard to keep track of its current state.
How is it in terms of usability?
As far as I recall, it uses a standard Hanning window and removes noise by spectral subtraction. That can only remove stationary noise (e.g., fan noise?).
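As a point of comparison, a plain spectral-subtraction front end of that kind might look roughly like the following. The function name and parameters are hypothetical; only a stationary noise profile, estimated from a silent clip, is removed.
```python
# Rough sketch of plain spectral subtraction on a Hanning-window STFT:
# a stationary noise profile estimated from a silent clip is subtracted
# from every frame, so non-stationary noise passes through untouched.
# Function name and parameters are hypothetical.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(x, noise_clip, fs, nfft=512):
    f, t, X = stft(x, fs=fs, window="hann", nperseg=nfft)
    _, _, N = stft(noise_clip, fs=fs, window="hann", nperseg=nfft)
    noise_mag = np.abs(N).mean(axis=1, keepdims=True)  # average noise spectrum
    mag = np.maximum(np.abs(X) - noise_mag, 0.0)       # subtract, floor at zero
    X_clean = mag * np.exp(1j * np.angle(X))           # keep the original phase
    _, y = istft(X_clean, fs=fs, window="hann", nperseg=nfft)
    return y
```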
Reportedly not robust to noise.
Specialized for rats. Presumably it uses an image-recognition model such as YOLO; the idea is to detect rectangular bounding boxes on the spectrogram.
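To make the "bounding boxes on a spectrogram" idea concrete, here is a hedged sketch using torchvision's off-the-shelf Faster R-CNN. This is not the actual implementation of any tool listed here; the pretrained COCO weights stand in for a detector that would really need fine-tuning on annotated USV spectrograms.
```python
# Illustration only: call detection framed as object detection on a
# spectrogram image, using torchvision's off-the-shelf Faster R-CNN.
# This is NOT the implementation of any tool listed here; the pretrained
# COCO weights are a stand-in for a detector fine-tuned on annotated USVs.
import numpy as np
import torch
from scipy.signal import spectrogram
from torchvision.models.detection import fasterrcnn_resnet50_fpn

def detect_calls(audio, fs, score_threshold=0.5):
    # Render the spectrogram as a 3-channel float image in [0, 1].
    f, t, S = spectrogram(audio, fs=fs, nperseg=512, noverlap=256)
    img = np.log1p(S)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)
    img = torch.tensor(img, dtype=torch.float32).unsqueeze(0).repeat(3, 1, 1)

    model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
    with torch.no_grad():
        out = model([img])[0]
    keep = out["scores"] > score_threshold
    # Boxes are (x_min, y_min, x_max, y_max) in pixels: x = time bins, y = frequency bins.
    return out["boxes"][keep], out["scores"][keep]
```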
USVSEG
A study that tried various classification methods: SVM, random forest, CNN. No software has been released. Is the dataset also inaccessible?
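For the classical-classifier part of such a comparison, a minimal scikit-learn sketch could look like this. The feature matrix X (flattened, fixed-size syllable spectrograms) and labels y are assumed to come from your own segmentation step, and the hyperparameters are placeholders.
```python
# Minimal scikit-learn sketch of comparing classical classifiers on
# segmented syllables. X (flattened fixed-size spectrogram patches) and
# y (syllable labels) are assumed to come from your own segmentation;
# hyperparameters are placeholders.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def compare_classifiers(X, y, cv=5):
    models = {
        "svm_rbf": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
        "random_forest": RandomForestClassifier(n_estimators=300, random_state=0),
    }
    # Mean cross-validated accuracy per model.
    return {name: cross_val_score(model, X, y, cv=cv).mean()
            for name, model in models.items()}
```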
The VAE-based one.
References and repositories
Arriaga, G., Zhou, E. P., & Jarvis, E. D. (2012). Of mice, birds, and men: the mouse ultrasonic song system has some features similar to humans and song-learning birds. PLoS ONE, 7, e46610. https://doi.org/10.1371/journal.pone.0046610
Chabout, J., Sarkar, A., Dunson, D. B., & Jarvis, E. D. (2015). Male mice song syntax depends on social contexts and influences female preferences. Frontiers in Behavioral Neuroscience, 9, 76. https://doi.org/10.3389/fnbeh.2015.00076
Neunuebel, J. P., Taylor, A. L., Arthur, B. J., & Egnor, S. R. (2015). Female mice ultrasonically interact with males during courtship displays. eLife, 4, e06203. https://doi.org/10.7554/eLife.06203
Burkett, Z. D., Day, N. F., Peñagarikano, O., Geschwind, D. H., & White, S. A. (2015). VoICE: A semi-automated pipeline for standardizing vocal analysis across models. Scientific Reports, 5, 10237. https://www.nature.com/articles/srep10237
Seagraves, K. M., Arthur, B. J., & Egnor, S. R. (2016). Evidence for an audience effect in mice: male social partners alter the male vocal response to female cues. Journal of Experimental Biology, 219(10), 1437-1448. https://doi.org/10.1242/jeb.129361
Zala, S. M., Reitschmidt, D., Noll, A., Balazs, P., & Penn, D. J. (2017). Automatic mouse ultrasound detector (A-MUD): A new tool for processing rodent vocalizations. PLoS ONE, 12(7), e0181200. https://doi.org/10.1371/journal.pone.0181200
Van Segbroeck, M., Knoll, A. T., Levitt, P., & Narayanan, S. (2017). MUPET—mouse ultrasonic profile extraction: a signal processing tool for rapid and unsupervised analysis of ultrasonic vocalizations. Neuron, 94(3), 465-485. https://doi.org/10.1016/j.neuron.2017.04.005
Coffey, K. R., Marx, R. G., & Neumaier, J. F. (2019). DeepSqueak: a deep learning-based system for detection and analysis of ultrasonic vocalizations. Neuropsychopharmacology, 44(5), 859-868. https://doi.org/10.1038/s41386-018-0303-6
Tachibana, R. O., Kanno, K., Okabe, S., Kobayasi, K. I., & Okanoya, K. (2020). USVSEG: A robust method for segmentation of ultrasonic vocalizations in rodents. PLoS ONE, 15(2), e0228907. https://doi.org/10.1371/journal.pone.0228907
Fonseca, A. H., Santana, G. M., Ortiz, G. M. B., Bampi, S., & Dietrich, M. O. (2021). Analysis of ultrasonic vocalizations from mice using computer vision and machine learning. eLife, 10, e59161. https://doi.org/10.7554/eLife.59161
Steinfath, E., Palacios-Muñoz, A., Rottschäfer, J. R., Yuezak, D., & Clemens, J. (2021). Fast and accurate annotation of acoustic signals with deep neural networks. eLife, 10, e68837. https://doi.org/10.7554/eLife.68837
Premoli, M., Baggi, D., Bianchetti, M., Gnutti, A., Bondaschi, M., Mastinu, A., ... & Bonini, S. A. (2021). Automatic classification of mice vocalizations using Machine Learning techniques and Convolutional Neural Networks. PLoS ONE, 16(1), e0244636. https://doi.org/10.1371/journal.pone.0244636
Goffinet, J., Brudner, S., Mooney, R., & Pearson, J. (2021). Low-dimensional learned feature spaces quantify individual and group differences in vocal repertoires. eLife, 10, e67855. https://doi.org/10.7554/eLife.67855
Goussha, Y., Bar, K., Netser, S., Cohen, L., Hel-Or, Y., & Wagner, S. (2022). HybridMouse: A hybrid convolutional-recurrent neural network-based model for identification of mouse ultrasonic vocalizations. Frontiers in Behavioral Neuroscience, 15, 810590. https://doi.org/10.3389/fnbeh.2021.810590
Netser, S., Nahardiya, G., Weiss-Dicker, G., Dadush, R., Goussha, Y., John, S. R., ... & Wagner, S. (2022). TrackUSF, a novel tool for automated ultrasonic vocalization analysis, reveals modified calls in a rat model of autism. BMC Biology, 20(1), 1-20. https://doi.org/10.1186/s12915-022-01299-y
Abbasi, R., Balazs, P., Marconi, M. A., Nicolakis, D., Zala, S. M., & Penn, D. J. (2022). Capturing the songs of mice with an improved detection and classification method for ultrasonic vocalizations (BootSnap). PLoS Computational Biology, 18(5), e1010049. https://doi.org/10.1371/journal.pcbi.1010049
Stoumpou, V., Vargas, C. D., Schade, P. F., Giannakopoulos, T., & Jarvis, E. D. (2021). Analysis of Mouse Vocal Communication (AMVOC): A deep, unsupervised method for rapid detection, analysis, and classification of ultrasonic vocalizations. bioRxiv. https://doi.org/10.1101/2021.08.13.456283