dc.description.abstract | This thesis investigated the Anholt offshore wind farm, developed three distinct Deep Learning models for power prediction, and used SHAP values to explain the models’ predictions.
Model 1 developed a submodel for each turbine in the wind farm and summed their predictions to obtain the
total power output. Model 2, on the other hand, treated the wind farm as a single object and correlated the
regional LIDAR measurements with the total power output of the wind farm. Lastly, Model 3 attempted to improve on Model 1 by introducing cascading input features, in which the power outputs of upstream turbines served as additional inputs.
It was found that implementing cascading inputs for modeling individual turbines improved performance,
lowering the MAE and RMSE by 9.46 % and 6.37 %, respectively. Still, the most accurate model for predicting the wind farm's total power output was Model 2, whose MAE and RMSE were 28.81 % and 24.39 % lower than those of Model 1. However, since Model 2 did not model the performance of individual turbines, only Model 1 and Model 3 could predict the wake loss patterns in the wind farm, which both models did with reasonable accuracy.
SHAP values were then calculated for all models to explain how variations in the feature values impact the
models’ predictions. It was found that the SHAP values for wind speed resembled a turbine’s power curve. The
SHAP values for wind direction showed which directions were associated with above- or below-average predictions. For the third approach, it was found that the power output of upstream turbines was more important for predicting a turbine’s power than wind speed. Furthermore, the importance of an upstream turbine decreased with its distance from the turbine being modeled.
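As a minimal illustration of how SHAP values attribute a prediction to individual input features, the sketch below computes exact Shapley values by enumerating feature coalitions. The toy power model and baseline values are hypothetical stand-ins, not the thesis models; real SHAP implementations approximate this computation for high-dimensional inputs.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x, relative to a baseline.
    Features outside a coalition S are held at their baseline values."""
    n = len(x)

    def v(S):
        # Value of coalition S: evaluate f with features in S set to x,
        # all other features set to the baseline.
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size k out of n features
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += w * (v(set(S) | {i}) - v(set(S)))
        phis.append(phi)
    return phis

# Toy "power" model with two features (wind speed, wind direction) --
# a hypothetical example, not the Deep Learning models from the thesis.
def toy_power(z):
    wind_speed, wind_dir = z
    return 0.5 * wind_speed ** 3 + 10.0 * wind_dir

x = [10.0, 1.0]        # instance to explain
base = [8.0, 0.0]      # baseline (e.g. average conditions)
phi = shapley_values(toy_power, x, base)

# Efficiency property: attributions sum to f(x) - f(baseline)
assert abs(sum(phi) - (toy_power(x) - toy_power(base))) < 1e-9
```

Because the toy model is additive, each feature's Shapley value equals its individual contribution (244.0 for wind speed, 10.0 for wind direction); for the thesis's non-additive Deep Learning models, interactions between features are shared among the participating features.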
This study has demonstrated that Deep Learning is a powerful tool for developing accurate prediction models
for wind farms. While Deep Learning models are often considered black-box models because their complex,
layered structure makes it difficult to understand how they arrive at their predictions, SHAP values have
proven to be an excellent tool for interpreting and understanding the predictions made by these models,
which enhances trust and confidence in the models. | |