GridSearchCV is a technique for optimizing machine learning models by systematically searching for the best hyperparameters for a given model. Hyperparameters are parameters that are set prior to training a model, and they can have a significant impact on its performance. By using grid search, we can optimize these hyperparameters and improve the performance of our model. In this short article, we will learn how to use GridSearchCV for hyperparameter tuning of machine learning models. We will understand how it works and how to apply it to various machine learning models, including decision trees, random forests, SVM, KNN, linear regression, and many more.
Basics of GridSearchCV for hyperparameter tuning
GridSearchCV for hyperparameter tuning is a method in scikit-learn that is used to search for the best combination of hyperparameters for a given machine learning model. It is called a “grid search” because it exhaustively searches through a specified parameter grid, evaluating the model performance at each combination of parameters using cross-validation.

To use GridSearchCV, you need to specify the following:
- The model that you want to use (e.g. a decision tree, a linear regression, etc.)
- The hyperparameters that you want to tune. For example, if you are using a decision tree, you might want to tune the maximum depth of the tree or the minimum number of samples required to split a node.
- The range of values for each hyperparameter that you want to search over. For example, if you want to tune the maximum depth of the tree, you might specify a range of values from 1 to 10.
- The scoring metric that you want to use to evaluate the model. This could be accuracy, F1 score, AUC, etc.
Once you have specified these things, you can use the fit method of the GridSearchCV object to perform the search. The fit method will train and evaluate a model for each combination of hyperparameters and return the combination that results in the best performance according to the specified scoring metric.
Here is an example of how you might use GridSearchCV in scikit-learn to tune the hyperparameters of a decision tree:
# importing the modules
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Create a decision tree classifier
model = DecisionTreeClassifier()

# Define the hyperparameter search space
param_grid = {'max_depth': [1, 2, 3, 4, 5],
              'min_samples_split': [2, 5, 10, 20]}

# Create a grid search object
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=5, scoring='accuracy')

# Fit the grid search object to the training data
grid_search.fit(X_train, y_train)

# Print the best hyperparameters
print(grid_search.best_params_)
In this example, the fit method will train and evaluate a decision tree model with a maximum depth of 1, 2, 3, 4, or 5 and a minimum number of samples required to split a node of 2, 5, 10, or 20. It will use 5-fold cross-validation to evaluate the model at each combination of hyperparameters and return the combination that results in the highest accuracy.
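After fitting, the GridSearchCV object exposes attributes such as best_score_, best_estimator_, and cv_results_ that let you inspect the outcome of the search. Here is a minimal sketch continuing the example above (X_train, y_train, and a held-out X_test are assumed to exist in your code):

# Best cross-validated accuracy found during the search
print(grid_search.best_score_)

# The model refitted on the full training data with the best hyperparameters
# (refit=True is the default), ready to make predictions
best_model = grid_search.best_estimator_
predictions = best_model.predict(X_test)  # X_test is assumed to exist

# Full results for every hyperparameter combination, as a readable table
import pandas as pd
results = pd.DataFrame(grid_search.cv_results_)
print(results[['params', 'mean_test_score', 'std_test_score']])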
Advanced Techniques for Grid Search:
Here are some advanced techniques that you can use to improve the effectiveness of your grid search:
- Nested cross-validation: Instead of using a single split of the data into training and validation sets, you can use nested cross-validation to more accurately evaluate the model. In nested cross-validation, you split the data into an outer training and validation set, and then within the training set, you further split it into an inner training and validation set. This allows you to use the inner validation set to tune the hyperparameters and the outer validation set to evaluate the performance of the model with the tuned hyperparameters.
- Randomized search: Instead of exhaustively searching through the entire hyperparameter grid, you can use a randomized search to sample a random subset of the grid. This can be useful if the grid is very large and a full grid search would be computationally expensive.
- Bayesian optimization: Instead of searching through a predefined grid of hyperparameters, you can use Bayesian optimization to iteratively update an estimate of the optimal hyperparameters based on the performance of the model at each iteration. This can be more efficient than a grid search, especially for high-dimensional hyperparameter spaces.
- Ensemble methods: Instead of tuning the hyperparameters of a single model, you can use ensemble methods such as boosting or bagging to combine multiple models and tune the hyperparameters of the ensemble; a short sketch follows this list. Check the article on boosting algorithms to understand ensemble methods.
- Hyperparameter tuning with early stopping: Instead of using a fixed number of iterations or a fixed amount of time to train the model, you can use early stopping to terminate the training process when the model performance on the validation set stops improving. This can be useful if the model is prone to overfitting and you want to avoid wasting computational resources on unnecessary training.
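As a quick illustration of the ensemble point above, here is a minimal sketch of tuning a random forest (a bagging ensemble) with GridSearchCV; the parameter values are just examples, and X_train and y_train are assumed to be your training data:

# grid search over the hyperparameters of a random forest ensemble
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

ensemble = RandomForestClassifier(random_state=42)

# tune both the number of trees and the depth of each tree
param_grid = {'n_estimators': [50, 100, 200],
              'max_depth': [3, 5, None]}

grid_search = GridSearchCV(estimator=ensemble, param_grid=param_grid,
                           cv=5, scoring='accuracy')
grid_search.fit(X_train, y_train)  # assumed training data
print(grid_search.best_params_)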
Now let us explain each of these techniques one by one with Python examples.
What is nested cross-validation?
In machine learning, cross-validation is a resampling procedure used to evaluate the performance of a model on a dataset. It involves splitting the dataset into a set of folds, and training and evaluating the model on each fold. The performance of the model is then averaged across all the folds to give an estimate of the model’s generalization performance on the dataset.
Nested cross-validation is a variant of cross-validation in which the resampling procedure is repeated multiple times. It involves two levels of resampling: an outer loop and an inner loop. The outer loop splits the dataset into a set of folds, and the inner loop uses a different set of folds for each iteration of the outer loop. The performance of the model is then averaged across all the iterations of the outer loop.
Nested cross-validation is useful when you want to tune the hyperparameters of a model using cross-validation. The inner loop is used to tune the hyperparameters, and the outer loop is used to evaluate the model performance with the tuned hyperparameters.
Here is an example of nested cross-validation in Python using scikit-learn:
# importing modules
from sklearn.model_selection import KFold, GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Create a pipeline with an SVC classifier and a StandardScaler
model = Pipeline([('scaler', StandardScaler()), ('svc', SVC())])

# Define the hyperparameter search space
param_grid = {'svc__C': [0.001, 0.01, 0.1, 1, 10, 100],
              'svc__gamma': [0.001, 0.01, 0.1, 1, 10, 100]}

# Create the inner loop cross-validation object
inner_cv = KFold(n_splits=5, shuffle=True, random_state=42)

# Create the outer loop cross-validation object
outer_cv = KFold(n_splits=5, shuffle=True, random_state=42)

# Create the grid search object
clf = GridSearchCV(estimator=model, param_grid=param_grid, cv=inner_cv)

# Use nested cross-validation to evaluate the model performance
scores = cross_val_score(clf, X, y, cv=outer_cv)
Now you can print the accuracy scores of the model. Note that X and y here stand for an imaginary training dataset; you should provide your own training data in this part.
# Print the mean and standard deviation of the scores
print(f'Mean score: {scores.mean():.3f}')
print(f'Standard deviation: {scores.std():.3f}')
In this example, we are using nested cross-validation to tune the hyperparameters C and gamma of an SVC classifier using a grid search. The inner loop is used to tune the hyperparameters, and the outer loop is used to evaluate the model performance with the tuned hyperparameters. The performance of the model is then averaged across all the iterations of the outer loop.
What is a Randomized search?
Randomized search is a method for searching a parameter space that involves randomly sampling from the space and evaluating a function at each point in the space. It is often used as an alternative to grid search, which involves evaluating the function at a fixed set of points in the space.
Here is an example of how you might use randomized search in Python to optimize a function:
# import the random module
import random

def optimize_function(param1, param2, param3):
    # placeholder objective: replace this with the function you actually want to minimize
    return (param1 - 2) ** 2 + (param2 - 20) ** 2 + (param3 - 200) ** 2

# define the parameter space
param1_space = [0, 1, 2, 3, 4]
param2_space = [10, 20, 30]
param3_space = [100, 200, 300]

# number of iterations for the search
n_iter = 10

# store the best result and parameters
best_result = float('inf')
best_params = None

for i in range(n_iter):
    # randomly sample a set of parameters
    param1 = random.choice(param1_space)
    param2 = random.choice(param2_space)
    param3 = random.choice(param3_space)

    # evaluate the function
    result = optimize_function(param1, param2, param3)

    # update the best result and parameters if necessary
    if result < best_result:
        best_result = result
        best_params = (param1, param2, param3)

# print the best result and parameters
print(f'Best result: {best_result}')
print(f'Best params: {best_params}')
This code will randomly sample sets of parameters from the defined parameter space and evaluate the optimize_function at each set of parameters. It will keep track of the best result and the corresponding parameters, and print them out at the end of the search.
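For model hyperparameters specifically, scikit-learn provides RandomizedSearchCV, which follows the same interface as GridSearchCV but samples a fixed number of parameter settings instead of trying every combination. Here is a minimal sketch, reusing the decision tree and the X_train/y_train data assumed in the earlier GridSearchCV example:

# randomized search over decision tree hyperparameters with scikit-learn
from sklearn.model_selection import RandomizedSearchCV
from sklearn.tree import DecisionTreeClassifier

model = DecisionTreeClassifier()

# lists (or distributions) to sample parameter values from
param_distributions = {'max_depth': [1, 2, 3, 4, 5, None],
                       'min_samples_split': [2, 5, 10, 20]}

# evaluate 10 random combinations instead of the full grid
random_search = RandomizedSearchCV(estimator=model,
                                   param_distributions=param_distributions,
                                   n_iter=10, cv=5, scoring='accuracy',
                                   random_state=42)

random_search.fit(X_train, y_train)  # X_train, y_train as in the GridSearchCV example
print(random_search.best_params_)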
What is Bayesian optimization in Python?
Bayesian optimization is a method for global optimization of black-box functions that are expensive to evaluate. It is a probabilistic model-based approach that uses Bayesian inference to construct a posterior distribution over possible function evaluations, given the data that has been collected so far. It then uses an acquisition function to select the next point to evaluate, in an effort to find the global minimum of the function.
In Python, there are several libraries that can be used for Bayesian optimization, including scikit-optimize, GPyOpt, and Hyperopt.
Here is an example of how to use scikit-optimize for Bayesian optimization in Python:
# importing python modules
from skopt import BayesSearchCV
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

opt = BayesSearchCV(
    SVC(),
    {
        'C': (1e-6, 1e+6, 'log-uniform'),
        'gamma': (1e-6, 1e+1, 'log-uniform'),
        'degree': (1, 8),
        'kernel': ['linear', 'poly', 'rbf']
    },
    n_iter=32
)

opt.fit(X, y)
print(opt.best_params_)
In this example, we are using Bayesian optimization to search for the optimal hyperparameters for a support vector machine (SVC) classifier, using the iris dataset. The BayesSearchCV function from scikit-optimize is used to perform the optimization, and the fit method is used to fit the model to the data. The best_params_ attribute of the BayesSearchCV object returns the best hyperparameters found by the optimization process.
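Because BayesSearchCV follows the usual scikit-learn search interface, you can also inspect the cross-validated score of the best configuration and use the fitted search object directly for prediction; a small follow-up to the example above:

# cross-validated score of the best hyperparameters found
print(opt.best_score_)

# BayesSearchCV refits the best model on the full data by default,
# so the fitted search object can be used directly for prediction
print(opt.predict(X[:5]))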
Hyperparameter tuning with early stopping in Python
Early stopping is a method of hyperparameter tuning that involves training a model for a specified number of epochs, and then monitoring the performance of the model on a separate validation set at regular intervals. If the performance on the validation set does not improve for a certain number of epochs (called the “patience”), training is stopped and the model is returned.
Here is an example of how to implement early stopping in Python using the Keras library:
from keras.callbacks import EarlyStopping

# create the model
model = create_model()

# compile the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# create a callback for early stopping
callback = EarlyStopping(monitor='val_loss', patience=5)

# fit the model with the callback
model.fit(X_train, y_train, epochs=100, validation_data=(X_val, y_val), callbacks=[callback])
In this example, we create an EarlyStopping callback object and pass it to the fit method of the model. The monitor parameter specifies which metric to use for early stopping (in this case, it is the validation loss), and the patience parameter specifies the number of epochs to wait before stopping training if the performance on the validation set does not improve.
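One detail worth knowing: by default the model keeps the weights from the last epoch it trained, not the best one seen. Keras's EarlyStopping callback has a restore_best_weights argument for that; a minimal variation of the callback above:

# stop after 5 epochs without improvement and roll back to the best weights seen
callback = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)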
Summary
Grid search is a method for hyperparameter optimization that involves specifying a grid of hyperparameter values and training a model for each combination of these values. The goal is to find the combination of hyperparameter values that results in the best performance of the model.
To use grid search, you need to specify the hyperparameter values to search over, as well as a performance metric to evaluate the model. You also need to specify a model to train, and a cross-validation strategy to use to evaluate the model.
Once the grid search is set up, it will train a model for each combination of hyperparameter values, evaluate the model using cross-validation, and return the hyperparameter values that result in the best performance according to the specified performance metric.
In this article, we discussed how we can use GridSearchCV for the hyperparameter tuning of machine learning models.