How do you measure the profitability of lines of business, products, or customers? The simplest form is in dollars. But does that measure the profits against the investment required to generate those profits? For example, the riskiest banking products most likely deliver the greatest dollar profits, all other things being equal. At least until the risk comes to roost.
Another means to measure profits is the ratio of profits to the size of the portfolio being measured. In banking, we call it Return on Assets (ROA). For example, a $500 million commercial loan portfolio will generate a greater dollar profit than a $100 million consumer loan portfolio, all things being equal, yet the ROA of the consumer loan portfolio could be greater. But ROA does not measure the risk of what is being measured, only the relative profit contribution regardless of size.
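To make that concrete, here is a minimal sketch comparing the two portfolios above. The balances come from the example; the profit figures are hypothetical, chosen only to show dollars and ROA pointing in opposite directions:

```python
# Hypothetical profit figures for the two portfolios described above.
commercial = {"assets": 500_000_000, "profit": 5_500_000}
consumer = {"assets": 100_000_000, "profit": 1_300_000}

def roa(portfolio):
    """Return on Assets: annual profit divided by portfolio size."""
    return portfolio["profit"] / portfolio["assets"]

# Commercial wins on dollars, but consumer wins on ROA.
print(f"Commercial: ${commercial['profit']:,} profit, ROA {roa(commercial):.2%}")  # 1.10%
print(f"Consumer:   ${consumer['profit']:,} profit, ROA {roa(consumer):.2%}")      # 1.30%
```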
Banking is a risk business, with many internal and external factors that affect the amount of risk in each activity. This is one reason financial institutions are getting serious about viewing risk across the entire franchise instead of in the organizational silos where it typically resides. See my post on Enterprise Wide Risk Management on this subject here.
But through it all, risk requires capital. Consider the capital, supplied by either private investors or the US Treasury, needed to absorb credit losses through the recent recessionary period. Regulators assign risk weightings as a proxy for how much capital is needed based on the perceived risk of the asset. For example, a security in the investment portfolio may be 20% risk-weighted, versus a loan that is 100% risk-weighted. The total risk-based capital ratio required by regulators to maintain "well capitalized" status formally remains 10%. So $100 million of 20% risk-weighted bonds requires $2 million of capital ($100 million x 20% x 10%), versus $100 million of 100% risk-weighted loans requiring $10 million of capital.
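The arithmetic is simple enough to sketch in a few lines of Python. The 20% and 100% risk weights and the 10% well-capitalized threshold come straight from the example above; nothing else is assumed:

```python
WELL_CAPITALIZED_RATIO = 0.10  # total risk-based capital ratio for "well capitalized" status

def required_capital(asset_balance, risk_weight):
    """Capital required = balance x risk weight x required capital ratio."""
    return asset_balance * risk_weight * WELL_CAPITALIZED_RATIO

bonds = required_capital(100_000_000, 0.20)  # 20% risk-weighted securities
loans = required_capital(100_000_000, 1.00)  # 100% risk-weighted loans
print(f"Bonds: ${bonds:,.0f}")  # $2,000,000
print(f"Loans: ${loans:,.0f}")  # $10,000,000
```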
If the amount of capital needed for a line of business, a product, or a customer varies based on risk, then it makes sense to measure profitability against the capital required. In industry parlance, we call that Risk Adjusted Return on Capital, or RAROC.
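In sketch form, RAROC simply swaps the denominator from assets to risk-based capital. Reusing the capital figures above with hypothetical profit numbers, the riskier loan portfolio can earn more dollars yet deliver the lower risk-adjusted return:

```python
def raroc(risk_adjusted_profit, allocated_capital):
    """Risk Adjusted Return on Capital: profit over the capital the risk consumes."""
    return risk_adjusted_profit / allocated_capital

# Hypothetical profits on the $100 million bond and loan portfolios above.
print(f"Bond portfolio RAROC: {raroc(1_000_000, 2_000_000):.1%}")   # 50.0%
print(f"Loan portfolio RAROC: {raroc(3_000_000, 10_000_000):.1%}")  # 30.0%
```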
Financial institutions should independently assess the amount of capital needed per product based on past experience, perceived risk, and potential loss. The table below shows how capital is allocated to each loan category based on a limited risk spectrum (credit, interest rate, and liquidity). Although these are probably the greatest risks for loans, a financial institution may quantify other risks, such as operational (could it lose money due to fraud, hacking, etc.) or pricing (does the value of the asset fluctuate in the market, causing volatility and therefore risk).
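A sketch of how such an allocation might be structured is below. The loan categories and every percentage are entirely hypothetical stand-ins for the kind of table described above; each FI would calibrate its own figures from its own experience:

```python
# Hypothetical capital allocations by risk type, as a percent of balance.
CAPITAL_ALLOCATION = {
    "commercial_real_estate": {"credit": 0.06, "interest_rate": 0.02, "liquidity": 0.01},
    "home_equity":            {"credit": 0.03, "interest_rate": 0.02, "liquidity": 0.01},
    "consumer_installment":   {"credit": 0.04, "interest_rate": 0.01, "liquidity": 0.01},
}

def allocated_capital(category, balance):
    """Sum the capital required for each quantified risk on a loan category."""
    return balance * sum(CAPITAL_ALLOCATION[category].values())

for name in CAPITAL_ALLOCATION:
    print(f"{name}: ${allocated_capital(name, 100_000_000):,.0f} per $100 million")
```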
I perform such an analysis for an ABA School of Bank Marketing Management course that I teach every May. For my efforts I have been criticized: the topic is too complicated, some say, and I should remove it. But the course is on product profitability. How should I measure the relative profitability of business checking versus a home equity loan? Risk, and the capital to support that risk, should be the denominator in that equation. Agree? I say the topic stays. I'll take my lumps in the course evaluations.
What does your FI use as the profitability denominator?
~ Jeff