The Implicit
Nonlinear PLS Regression (INLR) [237],[238]
is also called Nonlinear PLS in many publications. The INLR introduces nonlinearities
into the regression model by adding squared terms ($x_i^2$) and optionally the
cross-product terms ($x_i x_j$) to the set of "original" independent
variables ($x_i$) [239].
For this study, only the squared terms were added, as the addition of the cross-product
terms would have increased the number of independent variables to an unmanageable
level. PLS models were built for the increased number
of 100 independent variables, with the optimal number of principal components
selected by the minimum cross-validation criterion.

The prediction
of R22 by the optimal model with 16 principal components showed a relative RMSE
of 2.25% for the calibration data and 2.81% for the validation data. For R134a,
the optimal model with 17 principal components predicted the calibration data
with a relative RMSE of 3.47% and the validation data with a relative RMSE of 4.02%.
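For concreteness, one common definition of the relative RMSE quoted above is the RMSE normalized by the mean of the observed values, expressed in percent; whether the study used this exact normalization (rather than, e.g., the observed range) is an assumption here.

```python
# Relative RMSE under one common convention (an assumption, not
# necessarily the study's exact definition): RMSE divided by the
# mean of the observed values, in percent.
import numpy as np

def rel_rmse(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / np.mean(y_true)
```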
The addition of the squared variables can also be seen as a polynomial approach,
which might explain why a rather large number of principal components is needed. This
high number of principal components increases the relative gap between the error
for the calibration data and the error for the validation data compared with the Box-Cox Transformation
and PLS, due to the increased number of parameters (see also section
2.8.1). Yet, the INLR compensates for the nonlinearities better than these
two methods, as only for R22 are the Wald-Wolfowitz runs test and the Durbin-Watson
statistic significant.

Figure 35: True-predicted plots of the INLR for the validation data.