Disparate markets and complex trading call for smarter exception management, argues Donal O’Brien, business development director of Coexis.
If Societe Generale’s reference data systems had been a bit wiser to the activities of its users, could the actions of Jerome Kerviel and the French bank’s subsequent £3.7bn loss have been avoided? The need for increasingly complex interactions between central reference data and distributed transaction processing systems is driving many banks and brokers to re-evaluate their processes. But it’s only through the application of intelligent exception management, sometimes known as straight through exception processing (STEP), that they can really grasp the nettle.
Kerviel is said to have hacked into Soc Gen’s systems to cover his tracks as he set up false accounts and traded billions of euros of the bank’s money. According to Donal O’Brien, business development director at Coexis, the possibility that this sort of thing could happen has had banks running scared for some time, as they try to figure out a way of automating exception management so that staff are not conducting manual overrides or, where they are, a transparent audit trail exists.
“People are realising their reputations depend on being able to monitor upfront whether this sort of thing is happening,” says O’Brien. “Most firms handle exceptions directly within the transaction engine itself, so the trade breaks and someone in the middle or back office goes in and changes the data there locally. That’s a compliance nightmare because it means someone is changing transactions and there’s little or no central check or validation. Pre-Mifid, compliance departments tolerated this kind of process. However, it has become an increasingly pressing issue.”
The solution to this problem is to manage exceptions centrally, not just with the transaction engine initiating the workflow but also by tracking any manual overrides on the central reference data store. “Where you change an SSI centrally, any trades that are going to use that data have to be held up for authorisation within the transaction engine itself,” says O’Brien. “That’s a new interaction – it means that just because I’ve recently touched some data within a reference data application, I now need to initiate further authorisation steps within the transaction engine.”
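In outline, the interaction looks something like the sketch below. This is a minimal Python mock-up under stated assumptions, not Coexis code: the class names, the audit-log shape and the hold_for_authorisation call are all hypothetical stand-ins for whatever a real installation uses.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    user: str          # who made the override
    action: str        # what was changed
    timestamp: datetime

class TransactionEngine:
    """Stand-in for the downstream engine holding unsettled trades."""
    def __init__(self):
        self.pending = []  # e.g. {"id": ..., "counterparty": ..., "status": ...}

    def hold_for_authorisation(self, counterparty):
        # Trades about to use the changed SSI are parked until re-authorised.
        for trade in self.pending:
            if trade["counterparty"] == counterparty:
                trade["status"] = "HELD_PENDING_AUTHORISATION"

class CentralReferenceStore:
    """Central SSI store: every override is audited and triggers
    re-authorisation inside the transaction engine."""
    def __init__(self, engine):
        self.ssis = {}
        self.audit_log = []
        self.engine = engine

    def override_ssi(self, counterparty, instruction, user):
        # The transparent audit trail: who changed what, and when.
        self.audit_log.append(AuditEntry(
            user, f"SSI override for {counterparty}", datetime.now(timezone.utc)))
        self.ssis[counterparty] = instruction
        self.engine.hold_for_authorisation(counterparty)

engine = TransactionEngine()
engine.pending.append({"id": "T1", "counterparty": "BANK_A", "status": "READY"})
store = CentralReferenceStore(engine)
store.override_ssi("BANK_A", "EUROCLEAR/ACCT456", user="ops_user_7")
print(engine.pending[0]["status"])  # HELD_PENDING_AUTHORISATION
```

The point of the design is that the override itself, rather than a compliance officer after the fact, is what triggers the authorisation step.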
What about the exceptions?
Exception management is a process that falls outside the remit of most reference data applications, which typically concentrate on three areas. First, they handle the assimilation of data, capturing it from various sources and transforming the messages. They then scrub the data, applying anything from simple comparisons between different sources down to complex field-level comparisons, with prioritisations and overrides based on the type of data and its source. Second, they handle lifecycle management, which is usually rules-based authorisation or manual augmentation of the data in some way. This differs depending on the type of data: for instrument data it is relatively straightforward, but for counterparty or institutional accounts various parts of the organisation add pieces of data manually and authorise them.
Third, a typical reference data system handles distribution to downstream systems. A key element here is the tracking of return values and return data sets from those systems, anything from status tracking to taking the actual return references that come back and compiling them into the overall golden copy.
The fourth area, which is really not handled well by most reference data applications, is on-line interaction with transactional systems – whether they’re in the front, middle or back office – using reference data for look-ups, calculations or validations.
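To make the division of labour concrete, here is a hedged skeleton of those four areas in Python. Every name in it, from the class to the methods to the priority scheme, is illustrative rather than a description of any particular product.

```python
class ReferenceDataApp:
    def __init__(self):
        self.golden_copy = {}  # key -> authorised record

    # 1. Assimilation and scrubbing: merge records from several feeds field
    #    by field, least trusted source first, so the most trusted source's
    #    values win any conflict.
    def assimilate(self, source_records, priority):
        merged = {}
        for source in sorted(source_records, key=lambda s: priority[s]):
            merged.update(source_records[source])
        return merged

    # 2. Lifecycle: rules-based authorisation. Instrument data is largely
    #    automatic; counterparty data waits for manual enrichment.
    def authorise(self, record):
        auto = record.get("type") == "instrument"
        record["status"] = "AUTHORISED" if auto else "PENDING_MANUAL_ENRICHMENT"
        return record

    # 3. Distribution: push to downstream systems and fold the return
    #    references they send back into the golden copy.
    def distribute(self, key, record, downstream):
        record["returns"] = {name: send(record) for name, send in downstream.items()}
        self.golden_copy[key] = record

    # 4. The weak spot: online look-ups from front, middle and back office
    #    systems against the golden copy.
    def lookup(self, key, field):
        return self.golden_copy.get(key, {}).get(field)

app = ReferenceDataApp()
record = app.assimilate(
    {"vendor_a": {"type": "instrument", "isin": "GB00TEST0001"},
     "vendor_b": {"isin": "GB00TEST0001", "currency": "GBP"}},
    priority={"vendor_a": 2, "vendor_b": 1})
app.distribute("TEST.L", app.authorise(record),
               {"settlement": lambda rec: "LOCAL_REF_001"})
print(app.lookup("TEST.L", "currency"))  # GBP
```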
“It’s easy if you’ve got the perfect model, where information cascades down through the reference data to all these transactional systems. But when you look at what actually happens in real life, it becomes increasingly complicated,” explains O’Brien. “The transaction gets processed in various systems and some piece of data is missing, or it’s badly set up, or incoherent – then what do you do? The challenges are all about the breaks in the STP and solving real-life problems.”
Breaking point
There are obviously lots of points at which transactions can break, but three areas crop up regularly: contracting rules, fee calculations and the management of standard settlement instructions (SSIs). Contracting rules detail the structure of the financial entity and its counterparties; newer front and middle office applications need to refer to these rules to understand how transactions should be managed. Managing fees, meanwhile, has long been an issue: firms want to manage and store charges, commissions and brokerage fees centrally, but these then need to be distributed to downstream systems. Finally, problems emerge in SSI management when calculating default set-ups for particular transaction types.
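Underneath, the default-SSI problem is essentially a most-specific-match lookup. The fragment below is a hedged sketch of that idea; the key structure, field names and fallback order are assumptions made for illustration, not a description of how any given system works.

```python
def resolve_default_ssi(ssi_table, counterparty, market, instrument_type):
    """Return the most specific SSI set-up on file for this transaction."""
    for key in [
        (counterparty, market, instrument_type),  # exact match first
        (counterparty, market, "*"),              # any instrument in that market
        (counterparty, "*", "*"),                 # counterparty-wide default
    ]:
        if key in ssi_table:
            return ssi_table[key]
    return None  # no default found: the trade breaks and becomes an exception

ssis = {("BANK_A", "XLON", "EQUITY"): "CREST/ACCT123",
        ("BANK_A", "*", "*"): "EUROCLEAR/ACCT456"}
print(resolve_default_ssi(ssis, "BANK_A", "XPAR", "BOND"))    # EUROCLEAR/ACCT456
print(resolve_default_ssi(ssis, "BANK_B", "XLON", "EQUITY"))  # None: exception
```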
If this all sounds straightforward enough, O’Brien explains where the issues arise. “Information is collated centrally and then distributed, as close to real time as you can, to the processing systems. That’s fine where you are modelling things upfront and you want to pass things downstream. Increasingly, what’s happening in disparate markets and complex trading is that the first thing you know about these exceptions is when the trade breaks. So the whole model of collation and distribution of data doesn’t work anymore.”
While Coexis Syn~ takes a rules-based approach to managing exceptions, O’Brien says rival systems fall short. “The model of assimilation, lifecycle and distribution doesn’t quite hack it because, no matter how real-time data distribution is, it’s the transaction processing systems making the request for information and it’s coming the other way.”
Problems in these areas are exacerbated by the pace of change in the industry. It is no longer acceptable for brokers to offer utility processing only for generic markets; increasingly they need to provide a service in niche markets. “Firms have to make stuff up as they go along. Some of our clients are actually adding markets on a daily or weekly basis. That used to be a huge IT task that happened over six months. Now someone says you’ve got to trade in this new market and you have to be able to set it up in hours or days.”
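That kind of turnaround is only plausible if a market is a row of configuration rather than a development project. A toy illustration, with every field name invented for the purpose:

```python
# Market rules held as data: adding one is a configuration change.
MARKETS = {
    "XLON": {"settlement_cycle": "T+3", "currency": "GBP", "csd": "CREST"},
}

def add_market(mic, settlement_cycle, currency, csd):
    """Register a new market in hours rather than months."""
    MARKETS[mic] = {"settlement_cycle": settlement_cycle,
                    "currency": currency, "csd": csd}

add_market("XWAR", "T+3", "PLN", "KDPW")  # a newly demanded niche market
print(sorted(MARKETS))  # ['XLON', 'XWAR']
```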
In such circumstances, the reference data team doesn’t necessarily know beforehand what’s going to happen – where a trade might break, or what sort of exception to expect. So it has to be ready for any eventuality, then start requesting information and initiating workflow around it. The workflow is not just about finding the right information to repair the transaction; it is also about ensuring the reference data is in place for the next transaction.
O’Brien explains how the process is being implemented at multiple Coexis client sites: “This is where complex data interactions between transaction engines and central data sources come in. In true STEP, the transaction engine pushes a request for the latest information from the central reference data store. If it can find it, it repairs the exception automatically itself. If not, it raises tasks within the reference data application, initiating live, outstanding tasks that ask for items to be researched and set up by the reference data people.”
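Reduced to its essentials, that loop might look like the following. It is a minimal sketch assuming a hypothetical store and task queue, not the actual Syn~ interfaces.

```python
class TaskQueue:
    """Live, outstanding research tasks for the reference data team."""
    def __init__(self):
        self.tasks = []

    def raise_task(self, trade_id, fields):
        self.tasks.append({"trade": trade_id, "research": fields})

def process_exception(trade, required, store, tasks):
    missing = [f for f in required if trade.get(f) is None]
    # Pull the latest data from the central reference store...
    for f in missing:
        trade[f] = store.get((trade["counterparty"], f))
    still_missing = [f for f in missing if trade.get(f) is None]
    if not still_missing:
        return "REPAIRED"  # ...and repair the break automatically.
    # Otherwise raise a task; the trade waits, and once the data has been
    # researched and set up it is in place for the next transaction too.
    tasks.raise_task(trade["id"], still_missing)
    return "PENDING_RESEARCH"

store = {("BANK_A", "ssi"): "CREST/ACCT123"}
tasks = TaskQueue()
trade = {"id": "T42", "counterparty": "BANK_A", "ssi": None, "fee_schedule": None}
print(process_exception(trade, ["ssi", "fee_schedule"], store, tasks))
# PENDING_RESEARCH: the SSI was repaired automatically; the fee schedule
# becomes a research task: [{'trade': 'T42', 'research': ['fee_schedule']}]
```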
A growing problem
Exception management is clearly a big issue, which raises the question of why firms haven’t tackled it already. O’Brien reckons people assume that exceptions, by their very nature, will never amount to much of a volume problem. Increasingly, they are wrong. “Although their volumes might be scaling, they think their exceptions are not – that’s actually not true. The more cross-border trading is executed, the more exceptions will be created.”
In addition, companies tend to hide behind manual workarounds, so the scale of the issue and its productivity cost to the business are not immediately obvious. And O’Brien says that’s understandable, because it’s no easy task. “As far as I can see, when people put in transaction systems it’s difficult enough to make sure that data gets distributed correctly to downstream systems, and for most projects that’s all they get. If they have the data in real time they think they’ve achieved quite a result, and they remain tolerant of manual intervention. They hide the fact they don’t have more automation by calling someone in reference data or sending an e-mail. People are getting by.”
Automating these kinds of interactions not only takes huge pressure off the teams running around fixing transactions; it also means that when things go wrong and transactions break – as they inevitably will – the firm can be sure it is taking the most efficient route to fixing them, and can be confident the problems will not come back to haunt it.