Article 78 of the CRD requires competent authorities to conduct, at least annually, a supervisory benchmarking assessment of internal approaches for the calculation of own funds requirements except for operational risk.
The concept of benchmarking, or comparing the output of one model against another, is meaningless unless the objectives of the two models are similar. The EBF notes that any two models can differ significantly in respect of (i) their model philosophy (through-the-cycle, hybrid, or point-in-time); and (ii) the length of the data series used to calibrate them. Benchmarks constructed from aggregates of models that differ on these two points, or fixed benchmarks, cannot account for these differences.
At issue here is the lack of regulatory consistency across jurisdictions, and within jurisdictions, on what a PD model output must represent. For example, rules could be set to determine model philosophy, or to ensure that model calibrations represent a full-economic-cycle view of the risk.
To benchmark models successfully and meaningfully in a “perfect world”, this lack of consistency would first have to be remediated, banks given time to adapt their models, and only then could sensible and meaningful benchmarks be constructed. Such a sequence would be unlikely to be compatible with the timeframes envisaged for completing the benchmarking exercise and ultimately reporting to the European Commission.
Given that models must and should reflect banks’ own experience, processes, risk policies and risk-management performance, benchmarks should provide a range of acceptable outputs rather than a single point of comparison.
This would preserve the risk-sensitivity objective of the previous Basel accord, and would also remind supervisors and other stakeholders that benchmarking is no substitute for the internal back-testing and validation of models, which must remain the key factors in assessing model performance.
An additional concern is that banks’ models operate statistically: an acceptable model is one that performs effectively on the bulk of that individual bank’s borrowers, and errors are tolerated for ‘outlier’ borrowers. Benchmarking approaches that do not take account of this run the risk of rejecting model outputs because an outlier population was inadvertently selected for the comparison. It is possible, and perfectly acceptable, for two banks to rate the same borrower completely differently using two powerful and correctly calibrated statistical models. The clustering approach may mitigate this concern to some degree.
The EBF’s members share the EBA’s interest in building confidence in banks’ risk-weighting approaches. Notwithstanding the difficulties described above, the EBF welcomes the intent of the benchmarking concept and intends to examine the benchmarking problem with a view to presenting a workable solution to the EBA in the near future.
© EBF