# Penman Monteith equation

## Introduction

The Penman-Monteith equation is widely used for modelling evapotranspiration of the soil-crop system. It is a comprehensive, physics-based equation (Figure 1).
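
For reference, the FAO-56 formulation of the Penman-Monteith reference evapotranspiration (Allen et al., 1998) reads:

$$
ET_o = \frac{0.408\,\Delta\,(R_n - G) + \gamma\,\dfrac{900}{T + 273}\,u_2\,(e_s - e_a)}{\Delta + \gamma\,(1 + 0.34\,u_2)}
$$

where $R_n$ is net radiation, $G$ the soil heat flux, $T$ the mean air temperature, $u_2$ the wind speed at 2 m height, $e_s - e_a$ the vapour pressure deficit, $\Delta$ the slope of the saturation vapour pressure curve, and $\gamma$ the psychrometric constant.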

In this notebook, we will approximate the Penman-Monteith equation by a relatively simple Artificial Neural Network (ANN).

## Data preparation

We use the Julia language for data analysis and modelling. First, we load some useful packages.

```julia
using PenmanMonteith
using Markdown
using CSV
using DataFrames
using Dates
using Unitful
using Flux
using WGLMakie
using AlgebraOfGraphics
using Random
```

We seed the pseudo-random number generator to make the results reproducible.

`Random.seed!(42);`

Next, we calculate the reference crop evapotranspiration for meteorological station De Bilt (The Netherlands) by means of the Penman-Monteith equation. We use our own Julia package ‘PenmanMonteith’ for this. This package follows the implementation in Allen et al. (1998).

All daily observations needed to calculate the reference crop evapotranspiration are downloaded from the KNMI website.

```julia
d_meteo = CSV.read(
    "./data/etmgeg-260.csv", DataFrame,
    header=47, skipto=48,
    select=[:YYYYMMDD, :FG, :TN, :TX, :TG, :Q, :PG, :UX, :UN],
    normalizenames=true, missingstring=[" ", ""]);
```

Some conversions have to be done before we can use the KNMI data.

```julia
transform!(d_meteo,
    :YYYYMMDD => ByRow(x -> Date(string(x), dateformat"yyyymmdd")) => :date)
filter!(:date => x -> Date("1983-01-01") ≤ x ≤ Date("2022-12-31"), d_meteo)
select!(d_meteo,
    :date,
    :date => ByRow(x -> Date("1983-01-01") ≤ x ≤ Date("2012-12-31") ? "training" : "testing") => :type,
    :TN => ByRow(x -> uconvert(u"K", 0.1x*u"°C")) => :T_min,
    :TG => ByRow(x -> uconvert(u"K", 0.1x*u"°C")) => :T_avg,
    :TX => ByRow(x -> uconvert(u"K", 0.1x*u"°C")) => :T_max,
    :UN => ByRow(x -> 1.0x) => :RH_min,
    :UX => ByRow(x -> 1.0x) => :RH_max,
    :Q => ByRow(x -> uconvert(u"MJ/m^2", 1.0x*u"J/cm^2")) => :Rₛ,
    :FG => ByRow(x -> 0.1x*u"m/s") => :u,
    :PG => ByRow(x -> 0.01x*u"kPa") => :P);
```

Finally, we can compute the reference evapotranspiration as follows:

```julia
d_meteo.ETₒ = reference_evapotranspiration.(
    d_meteo.T_avg,    # mean temperature
    d_meteo.T_min,    # minimum temperature
    d_meteo.T_max,    # maximum temperature
    actual_vapour_pressure.(d_meteo.T_min, d_meteo.T_max, d_meteo.RH_min, d_meteo.RH_max),
    d_meteo.Rₛ,       # global radiation
    d_meteo.P,        # daily mean sea-level pressure
    d_meteo.date,     # date
    52.1u"°",         # latitude
    d_meteo.u,        # daily mean wind speed
    z = 1.90u"m",     # altitude
    zₘ = 10.0u"m")    # altitude of the wind measurements
```

The resulting data set is given in Table 1.

Table 1: Daily KNMI observations for De Bilt and the computed reference crop evapotranspiration ETₒ.

| Row | date | type | T_min | T_avg | T_max | RH_min | RH_max | Rₛ | u | P | ETₒ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| | Date | String | Quantity… | Quantity… | Quantity… | Float64 | Float64 | Quantity… | Quantity… | Quantity… | Quantity… |
| 1 | 1983-01-01 | training | 271.25 K | 273.95 K | 276.25 K | 76.0 | 99.0 | 2.74 MJ m^-2 | 5.1 m s^-1 | 102.3 kPa | 0.275534 kg kPa^-1 s^-2 |
| 2 | 1983-01-02 | training | 275.85 K | 276.95 K | 277.75 K | 91.0 | 100.0 | 1.45 MJ m^-2 | 3.6 m s^-1 | 102.55 kPa | 0.209148 kg kPa^-1 s^-2 |
| 3 | 1983-01-03 | training | 276.55 K | 281.85 K | 284.15 K | 89.0 | 99.0 | 0.27 MJ m^-2 | 6.7 m s^-1 | 101.59 kPa | 0.527503 kg kPa^-1 s^-2 |
| 4 | 1983-01-04 | training | 277.05 K | 281.75 K | 284.85 K | 65.0 | 97.0 | 0.89 MJ m^-2 | 7.2 m s^-1 | 101.1 kPa | 1.11914 kg kPa^-1 s^-2 |
| 5 | 1983-01-05 | training | 277.95 K | 282.95 K | 286.05 K | 82.0 | 97.0 | 0.96 MJ m^-2 | 6.2 m s^-1 | 101.24 kPa | 0.6717 kg kPa^-1 s^-2 |
| 6 | 1983-01-06 | training | 278.15 K | 283.45 K | 286.05 K | 66.0 | 93.0 | 1.99 MJ m^-2 | 8.2 m s^-1 | 101.44 kPa | 1.14383 kg kPa^-1 s^-2 |
| 7 | 1983-01-07 | training | 276.25 K | 279.35 K | 280.95 K | 71.0 | 87.0 | 2.69 MJ m^-2 | 5.7 m s^-1 | 102.24 kPa | 0.65378 kg kPa^-1 s^-2 |
| 8 | 1983-01-08 | training | 275.25 K | 278.35 K | 280.75 K | 69.0 | 95.0 | 3.48 MJ m^-2 | 4.1 m s^-1 | 102.91 kPa | 0.370785 kg kPa^-1 s^-2 |
| 9 | 1983-01-09 | training | 276.55 K | 279.35 K | 282.15 K | 75.0 | 96.0 | 0.69 MJ m^-2 | 5.7 m s^-1 | 102.84 kPa | 0.751661 kg kPa^-1 s^-2 |
| 10 | 1983-01-10 | training | 275.65 K | 279.95 K | 282.35 K | 86.0 | 97.0 | 2.26 MJ m^-2 | 3.6 m s^-1 | 103.17 kPa | 0.277468 kg kPa^-1 s^-2 |
| 11 | 1983-01-11 | training | 279.65 K | 281.35 K | 283.15 K | 82.0 | 96.0 | 2.04 MJ m^-2 | 3.6 m s^-1 | 103.16 kPa | 0.395708 kg kPa^-1 s^-2 |
| 12 | 1983-01-12 | training | 277.25 K | 279.05 K | 280.65 K | 80.0 | 89.0 | 0.67 MJ m^-2 | 5.1 m s^-1 | 102.7 kPa | 0.697853 kg kPa^-1 s^-2 |
| 13 | 1983-01-13 | training | 273.75 K | 277.35 K | 280.05 K | 74.0 | 95.0 | 0.57 MJ m^-2 | 5.7 m s^-1 | 101.49 kPa | 0.730486 kg kPa^-1 s^-2 |
| ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ |
| 14599 | 2022-12-20 | testing | 281.45 K | 283.15 K | 284.35 K | 75.0 | 97.0 | 0.55 MJ m^-2 | 4.6 m s^-1 | 101.0 kPa | 0.743504 kg kPa^-1 s^-2 |
| 14600 | 2022-12-21 | testing | 279.65 K | 281.05 K | 282.35 K | 85.0 | 98.0 | 1.37 MJ m^-2 | 3.3 m s^-1 | 100.83 kPa | 0.336518 kg kPa^-1 s^-2 |
| 14601 | 2022-12-22 | testing | 280.05 K | 281.85 K | 283.25 K | 83.0 | 99.0 | 2.14 MJ m^-2 | 2.9 m s^-1 | 100.34 kPa | 0.20909 kg kPa^-1 s^-2 |
| 14602 | 2022-12-23 | testing | 281.55 K | 282.65 K | 284.25 K | 81.0 | 98.0 | 0.43 MJ m^-2 | 3.1 m s^-1 | 100.27 kPa | 0.571801 kg kPa^-1 s^-2 |
| 14603 | 2022-12-24 | testing | 280.05 K | 282.25 K | 284.15 K | 78.0 | 95.0 | 2.39 MJ m^-2 | 3.9 m s^-1 | 101.12 kPa | 0.360101 kg kPa^-1 s^-2 |
| 14604 | 2022-12-25 | testing | 280.75 K | 281.75 K | 282.45 K | 94.0 | 98.0 | 0.55 MJ m^-2 | 2.1 m s^-1 | 101.07 kPa | 0.355915 kg kPa^-1 s^-2 |
| 14605 | 2022-12-26 | testing | 277.05 K | 280.15 K | 282.95 K | 66.0 | 96.0 | 1.96 MJ m^-2 | 4.6 m s^-1 | 101.41 kPa | 0.654877 kg kPa^-1 s^-2 |
| 14606 | 2022-12-27 | testing | 277.05 K | 278.85 K | 280.45 K | 78.0 | 89.0 | 2.64 MJ m^-2 | 4.7 m s^-1 | 102.11 kPa | 0.393862 kg kPa^-1 s^-2 |
| 14607 | 2022-12-28 | testing | 279.15 K | 282.35 K | 284.55 K | 81.0 | 93.0 | 0.65 MJ m^-2 | 7.1 m s^-1 | 100.63 kPa | 0.789226 kg kPa^-1 s^-2 |
| 14608 | 2022-12-29 | testing | 279.65 K | 282.35 K | 284.85 K | 75.0 | 88.0 | 1.28 MJ m^-2 | 6.9 m s^-1 | 100.22 kPa | 0.939953 kg kPa^-1 s^-2 |
| 14609 | 2022-12-30 | testing | 277.75 K | 281.65 K | 286.05 K | 78.0 | 95.0 | 0.95 MJ m^-2 | 6.2 m s^-1 | 100.29 kPa | 0.796519 kg kPa^-1 s^-2 |
| 14610 | 2022-12-31 | testing | 283.55 K | 287.15 K | 289.05 K | 58.0 | 97.0 | 0.73 MJ m^-2 | 6.6 m s^-1 | 100.4 kPa | 1.46106 kg kPa^-1 s^-2 |

## Modelling

We use the Flux package to train an Artificial Neural Network (ANN). First, we divide the data into a training set (1983 to 2012) and a test set (2013 to 2022).

```julia
d_train = subset(d_meteo, :type => ByRow(x -> x == "training"))
d_test  = subset(d_meteo, :type => ByRow(x -> x == "testing"))

x_train = ustrip.(select(d_train, :T_min, :T_max, :RH_min, :RH_max, :Rₛ, :u))
x_test  = ustrip.(select(d_test, :T_min, :T_max, :RH_min, :RH_max, :Rₛ, :u))
x_train = transpose(Matrix(x_train))
x_test  = transpose(Matrix(x_test))

y_train = ustrip.(d_train.ETₒ)
y_test  = ustrip.(d_test.ETₒ)
y_train .= max.(0.0, y_train)
y_test  .= max.(0.0, y_test);

x_train = Float32.(x_train)
y_train = Float32.(y_train)
x_test  = Float32.(x_test)
y_test  = Float32.(y_test);
```

We keep our ANN model as simple as possible to prevent overfitting and to reduce computational demand: an input layer of six neurons, a single hidden layer of four neurons, and an output layer of one neuron. We also add a nonnegativity constraint to the output layer to guarantee that the predicted evapotranspiration is never negative. In Flux, this ANN is given by:

```julia
model = Chain(
    Dense(6 => 4),
    Dense(4 => 1),
    x -> abs2.(x)
)
```

```
Chain(
  Dense(6 => 4),    # 28 parameters
  Dense(4 => 1),    # 5 parameters
  var"#37#38"(),
)                   # Total: 4 arrays, 33 parameters, 388 bytes.
```

The ANN is visualized in Figure 2.
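
As a sanity check, the 33 parameters reported by Flux can be counted by hand: a `Dense(n => m)` layer stores an m×n weight matrix plus m biases.

```julia
# Parameter count of a Dense(n => m) layer: m×n weights plus m biases.
dense_params(n, m) = m*n + m

layer1 = dense_params(6, 4)   # input (6 neurons) → hidden (4 neurons)
layer2 = dense_params(4, 1)   # hidden (4 neurons) → output (1 neuron)
println(layer1, " ", layer2, " ", layer1 + layer2)   # → 28 5 33
```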

Next, we define the loss function (in our case, the Mean Squared Error), which quantifies the difference between observations and predictions.

`loss(ŷ, y) = Flux.Losses.mse(vec(ŷ), vec(y));`
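
The MSE is simply the mean of the squared differences between predictions and observations; a plain-Julia sketch with made-up numbers:

```julia
# Mean squared error, equivalent to what Flux.Losses.mse computes
mse(ŷ, y) = sum(abs2, ŷ .- y) / length(y)

ŷ = [0.2, 0.5, 1.0]   # hypothetical predictions
y = [0.0, 0.5, 0.7]   # hypothetical observations
println(round(mse(ŷ, y), digits = 4))   # → 0.0433
```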

Next, we select and initialize an optimizer (in our case, ADAM). The optimizer adjusts the model parameters to reduce the value of the loss function: by minimizing the loss, the difference between observations and predictions shrinks, yielding a better approximation.
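
To illustrate the idea behind loss minimization (plain gradient descent here, not ADAM itself, which adds momentum and adaptive per-parameter step sizes), consider a toy quadratic loss:

```julia
# Toy quadratic loss with its minimum at w = 3, and its analytic gradient
loss_toy(w) = (w - 3.0)^2
grad_toy(w) = 2 * (w - 3.0)

function minimize(w; η = 0.01, steps = 1_000)
    for _ in 1:steps
        w -= η * grad_toy(w)   # step against the gradient
    end
    return w
end

println(round(minimize(0.0), digits = 2))   # → 3.0
```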

`optimizer = Flux.setup(ADAM(), model);`

Now we are ready to train our model. In Flux this can be achieved by running the following code:

```julia
for epoch ∈ 1:100_000
    Flux.train!(model, [(x_train, y_train)], optimizer) do m, x, y
        loss(m(x), y)
    end
end
```

## Results

The resulting value of the loss function for the training period is 0.047 (RMSE = 0.22 mm/d). More importantly, the value of the loss function for the testing period is 0.058 (RMSE = 0.24 mm/d).
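
The mm/d figures follow directly from the MSE losses, since RMSE = √MSE:

```julia
# Root mean squared errors corresponding to the reported MSE losses
println(round(sqrt(0.047), digits = 2))   # training → 0.22
println(round(sqrt(0.058), digits = 2))   # testing  → 0.24
```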

Figure 3 and Figure 4 give scatter plots of the reference evapotranspiration computed by means of Penman-Monteith against that of the ANN, for the training period and the testing period respectively. The agreement is good in both cases.