Hyperparameter Importance Across Datasets
With the advent of automated machine learning, automated hyperparameter optimization methods are by now routinely used in data mining. However, this progress is not yet matched by equal progress on automatic analyses that yield information beyond performance-optimizing hyperparameter settings.
In this work, we aim to answer the following two questions:
Given an algorithm, what are generally its most important hyperparameters, and what are typically good values for these?
We present methodology and a framework to answer these questions based on meta-learning across many datasets. We apply this methodology using the experimental meta-data available on OpenML to determine the most important hyperparameters of support vector machines, random forests and Adaboost, and to infer priors for all their hyperparameters. The results, obtained fully automatically, provide a quantitative basis to focus efforts in both manual algorithm design and in automated hyperparameter optimization.
The conducted experiments confirm that the hyperparameters selected by the proposed method are indeed the most important ones and that the obtained priors also lead to statistically significant improvements in hyperparameter optimization.
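The "priors over hyperparameters" idea can be illustrated with a minimal sketch, under stated assumptions: a hypothetical pandas DataFrame `runs` holds one row per evaluated SVM configuration per dataset (columns `dataset`, `C`, `accuracy`); the best observed `C` on each dataset is collected and a kernel density estimate in log space serves as a data-driven prior for later hyperparameter optimization. This is illustrative only, not the paper's exact estimator.

```python
# Minimal sketch, not the paper's implementation. Assumes a hypothetical
# DataFrame `runs` with columns ["dataset", "C", "accuracy"], one row per
# evaluated SVM configuration on one dataset.
import numpy as np
import pandas as pd
from scipy.stats import gaussian_kde


def fit_log_prior(runs: pd.DataFrame, param: str = "C") -> gaussian_kde:
    """Fit a KDE prior over the per-dataset best values of `param` (log space)."""
    best_rows = runs.loc[runs.groupby("dataset")["accuracy"].idxmax()]
    return gaussian_kde(np.log(best_rows[param].to_numpy()))


# Sample candidate values of C from the learned prior instead of a flat range,
# e.g. to seed random search or warm-start hyperparameter optimization:
# prior = fit_log_prior(runs)
# candidates = np.exp(prior.resample(size=10, seed=0)).ravel()
```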
https://www.youtube.com/watch?v=mS4vL7_rSWQ
Notes
Hyperparameters
E.g., an SVM's regularization parameter and kernel width
Independently of the learning method, the strategy for imputing missing values can also be regarded as a hyperparameter
Which hyperparameters are important?
Experiments on 100 OpenML datasets (analysis via functional ANOVA); a rough sketch follows below
As a result, the findings match folk wisdom, e.g., that the regularization parameter and kernel width matter most for SVMs
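A hedged sketch of the per-dataset importance analysis, not the authors' pipeline: the paper applies functional ANOVA (the fanova library) to OpenML meta-data, whereas here a random-forest surrogate with a marginal-variance proxy stands in for it, and the DataFrame `runs` with columns `dataset`, `C`, `gamma`, `tol`, `accuracy` is hypothetical.

```python
# Hedged sketch: a marginal-variance proxy for functional ANOVA, computed on a
# hypothetical DataFrame `runs` with columns ["dataset", "C", "gamma", "tol",
# "accuracy"] (one row per evaluated SVM configuration on one dataset).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

HYPERPARAMS = ["C", "gamma", "tol"]  # assumed SVM hyperparameter columns


def marginal_importance(df: pd.DataFrame, n_samples: int = 200, seed: int = 0) -> dict:
    """Fraction of surrogate-predicted performance variance explained by each
    hyperparameter's marginal effect, estimated on a single dataset."""
    rng = np.random.default_rng(seed)
    X, y = df[HYPERPARAMS].to_numpy(), df["accuracy"].to_numpy()
    surrogate = RandomForestRegressor(n_estimators=100, random_state=seed).fit(X, y)

    # Monte-Carlo sample of configurations drawn from the observed runs.
    base = X[rng.integers(0, len(X), size=n_samples)]
    total_var = surrogate.predict(base).var()

    importances = {}
    for j, name in enumerate(HYPERPARAMS):
        # Vary hyperparameter j over its observed range, average out the others,
        # and measure how much the marginal prediction moves.
        marginal_means = []
        for value in np.quantile(X[:, j], np.linspace(0.05, 0.95, 20)):
            perturbed = base.copy()
            perturbed[:, j] = value
            marginal_means.append(surrogate.predict(perturbed).mean())
        importances[name] = float(np.var(marginal_means) / max(total_var, 1e-12))
    return importances


# Average per-dataset importances across all datasets to get an overall ranking:
# per_dataset = {d: marginal_importance(g) for d, g in runs.groupby("dataset")}
# ranking = pd.DataFrame(per_dataset).T.mean().sort_values(ascending=False)
```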