The collision of ineptitude and avarice is a frequent occurrence; sometimes it's on purpose. And in cases of financial malfeasance and extreme noncompliance, it always involves IT systems. People drive the fraud. Processes can either support or hide the crime, depending on the quality of back-office controls, but the proof lies in understanding the data, the software, and how they can be manipulated. Application software, tables within the applications, and discrete data elements comprise the fundamentals: the ABCs of understanding how financial crimes are committed and detected. The following two stories are based on true events and are emblematic of how financial crimes and noncompliance are enabled. One was committed by a single person, the other by a cadre within the bank. Both matched the MO of the recent HSBC anti-money laundering (AML) debacle: a revolving door of under-qualified people in Compliance and Audit, and an active intent to obfuscate by continually changing systems.
In a small international bank in a very large city, one person ran the IT division. It had few employees, all of them young and inexperienced. Of particular interest, the IT chief had implemented a commercial off-the-shelf (COTS) core banking system not only "vanilla" (out of the box, without bank-specific configuration) but also without data entry controls. He, and only he, was entitled to create the following codes and their descriptions: account type, account class, and transaction type. These would prove central to the defalcation. In addition, he controlled the project management process, which meant he held the keys to all the artifacts of current- and future-state documentation, modifications to the software and tables, and the code itself. He also sat on the compliance and audit committees, and he was the only member of executive management who was not IT-phobic.
The alarm bells went off when an already-implemented AML transaction monitoring/detection system (TMS) was put under review by an outside consultant. The assessment revealed that, despite apparent compliance with policies and procedures, the analytic results of the TMS were rendered moot by continually changing transaction codes and descriptions. At first count there were 200 transaction codes, and they were not unique. For example, a "CO" could mean "Cash from Vault" or "Miscellaneous Cash Out." The TMS, like all detection systems, required a one-to-one mapping between code and description; it therefore could not differentiate between the two (or three, or four...). There were over 50 descriptions (manually entered, free-form) for "Miscellaneous Credit." The 200 codes kept growing despite advice to normalize and limit them.
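The one-to-one problem described above can be shown in a few lines. This is a minimal sketch, not any bank's actual code table; only the "CO" example comes from the story, and the other codes are hypothetical:

```python
# Sketch of the code-ambiguity problem. A TMS lookup table needs a
# one-to-one mapping from code to description; non-unique codes make
# the transaction's meaning unrecoverable downstream.

core_transaction_codes = [
    ("CO", "Cash from Vault"),
    ("CO", "Miscellaneous Cash Out"),   # same code, different meaning
    ("WD", "Withdrawal"),               # hypothetical, for contrast
]

def find_ambiguous_codes(code_pairs):
    """Return codes that map to more than one description."""
    seen = {}
    for code, desc in code_pairs:
        seen.setdefault(code, set()).add(desc)
    return {code: descs for code, descs in seen.items() if len(descs) > 1}

ambiguous = find_ambiguous_codes(core_transaction_codes)
print(sorted(ambiguous))   # ['CO']
```

A check like this, run against the full code table, is what surfaced the 200 non-unique codes at first count.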
All detection systems, into which transactions and accounts are mapped from core systems, use risk assessments, profiles, and analytic algorithms to determine whether an alert is generated. Alerts are computer-generated flags on potentially suspicious transactions that may warrant the filing of a Suspicious Activity Report (SAR). By constantly changing codes, the IT chief created analytic "dust" that hid his embezzlement. He had the access rights to redefine transactions at will, and because he knew how transactions were mapped to the TMS, his own were not.
Clouds of obfuscation are also created when core banking and detection systems are changing. In this case, a community bank in a very large city bought, in rapid succession, a small international bank (with a large volume of wire transfers) and a small commercial bank, then decided to port the three legacy core banking systems into a new system.
Midway through this complex undertaking, they decided to purchase a new transaction monitoring system, despite having inherited two well-known systems, one of which handled wire monitoring very well. This was surprising because one of the banks was operating under a Memorandum of Understanding (MOU) with federal regulators over its sloppy handling of wire transfers.
In addition to the systems changes, compliance staff was let go and replaced by a skeleton crew of people unfamiliar with the new software. Even more alarming, the executive in charge of IT and Operations informed the outside consultant hired to help with the conversion that he only wanted a "C" job. A "C" grade in compliance, that is, as opposed to an "A." The leadership of this bank was known to be connected through family ties, and the person put in charge of the transaction monitoring system was one of them.
Critical information needed to run the new detection system was withheld from the team attempting to configure it to produce meaningful, actionable intelligence. In addition, the bank planned to have exceptions (no monitoring at all) for certain account classes and specific individuals. Through selective mapping from the core system, specific groups, individuals, account types, and transaction types can be dropped from the file sent to the detection system, rendering them exempt from oversight. These mappings can also be changed by any person with access rights to the systems. While exemptions are often necessary to avoid deluging the TMS with "noise," they can also mask intentional fraud.
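Selective mapping of this kind amounts to a filter in the staging extract. The sketch below is a hypothetical illustration (the account classes, records, and amounts are invented) of how one line in a staging program can exempt whole account classes from monitoring:

```python
# Hypothetical staging extract: whichever records survive this filter are
# the only ones the TMS will ever see. Everything else is invisible to it.

EXEMPT_ACCOUNT_CLASSES = {"EMPLOYEE", "CORRESPONDENT"}  # set by whoever controls the table

core_transactions = [
    {"account": "1001", "account_class": "RETAIL",        "amount": 9_500},
    {"account": "2002", "account_class": "EMPLOYEE",      "amount": 250_000},
    {"account": "3003", "account_class": "CORRESPONDENT", "amount": 1_200_000},
]

def stage_for_tms(transactions, exempt_classes):
    """Return only the records that will be sent to the detection system."""
    return [t for t in transactions if t["account_class"] not in exempt_classes]

sent = stage_for_tms(core_transactions, EXEMPT_ACCOUNT_CLASSES)
print(len(sent))   # 1 -- only the small retail transaction reaches the TMS
```

Note that the largest movements are exactly the ones that vanish, and nothing in the TMS output ever hints that they existed.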
To understand how easy it is to commit intentional fraud or noncompliance, despite the best implementation rigor, an important concept to grasp is that changes to systems can be made immediately after implementation. When software is modified (a seemingly small change to a table, for example), the change is often not documented, disseminated, or understood. The staff tasked with reviewing the analytic output of the detection system may never figure out that, because of the exemptions, the system is producing little of analytic interest and/or not picking up the specific fraud patterns promised at implementation.
Exemptions can be built into the system in several ways. A staging program is typically used to identify which customers, accounts, and transactions to send to the detection system from the core financial system(s). This program also performs the field-to-field mapping of each data element. Any field not mapped properly, or not mapped at all, means that data cannot be analyzed.
Dropping or changing the meaning of any key field (account type, account class, or transaction type, among others) means that analytic detection has been compromised. When whole classes of accounts are not sent to the TMS, risk ratings, profiles, and analytic detection scenarios are ineffective because the data are insufficient to yield results. SARs are therefore not filed, and fraud is enabled.
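The field-level version of the same problem is just as quiet. In this hypothetical sketch (all field names are invented for illustration), one source field is simply missing from the mapping table, so it arrives at the TMS empty and every scenario that depends on it can never fire:

```python
# Hypothetical field-to-field mapping from a core record to the TMS layout.
# A field with no mapping entry silently arrives as None.

FIELD_MAP = {
    "acct_no": "account_id",
    "amt":     "amount",
    # "txn_code" is deliberately left unmapped
}

core_record = {"acct_no": "1001", "amt": 9_500, "txn_code": "CO"}

def map_to_tms(record, field_map):
    """Build the TMS record; unmapped target fields stay None."""
    tms_record = {"account_id": None, "amount": None, "transaction_code": None}
    for src, dst in field_map.items():
        tms_record[dst] = record.get(src)
    return tms_record

mapped = map_to_tms(core_record, FIELD_MAP)
print(mapped["transaction_code"])   # None -- code-based scenarios see nothing
```

The record still arrives, counts still look plausible, and only a field-by-field comparison against the source reveals the gap.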
It is very difficult to determine what isn't happening. Because systems change, sometimes suddenly and frequently, an examiner needs to compare all feeder system tables to what is actually mapped into the detection system tables. Reverse engineering, with an emphasis on the data models, will show what is actually occurring. The results are then compared with the policies, procedures, and documentation that profess what the system is doing.
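At its core, that comparison is a set difference: which codes live in the feeder system but never appear in the detection system's mapping tables? A minimal sketch, with hypothetical code values:

```python
# Sketch of the examiner's comparison: feeder (core) system codes versus
# the codes actually mapped into the TMS. Anything in the difference is
# effectively unmonitored.

feeder_codes = {"CO", "WD", "DEP", "WIRE-IN", "WIRE-OUT", "MISC-CR"}
tms_mapped_codes = {"CO", "WD", "DEP"}

unmonitored = feeder_codes - tms_mapped_codes
print(sorted(unmonitored))   # ['MISC-CR', 'WIRE-IN', 'WIRE-OUT']
```

In practice each side of the comparison is extracted from the live tables, not from documentation, precisely because the documentation may describe a mapping that no longer exists.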
Over time even the most well-designed detection system can lose its efficacy and lead to undetected fraud. Identifying the people who control the systems, and who are charged with maintaining their integrity, is key to understanding how financial fraud is accomplished. In addition to understanding the systems' architecture, it is critical to know who can change the systems, and whether logs (who changed what, and when they did it) are kept for every change, even the seemingly smallest.
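The who-changed-what-and-when log need not be elaborate to be useful. This is a hypothetical sketch of a minimal append-only change log for code tables (the structure, field names, and the logged change are all invented for illustration):

```python
# Minimal append-only change log: every table change records who, what,
# the old and new values, and when.
from datetime import datetime, timezone

change_log = []

def log_change(user, table, old, new):
    change_log.append({
        "who":  user,
        "what": table,
        "old":  old,
        "new":  new,
        "when": datetime.now(timezone.utc).isoformat(),
    })

log_change("it_chief", "transaction_codes",
           ("CO", "Cash from Vault"), ("CO", "Miscellaneous Cash Out"))

# An examiner can then ask: who changed transaction codes, and when?
suspects = [e["who"] for e in change_log if e["what"] == "transaction_codes"]
print(suspects)   # ['it_chief']
```

With a log like this in place (and out of reach of the people it monitors), the pattern of continual code redefinitions described earlier would have been visible at a glance.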
Inside jobs are not rare.
Marie G. Kerr specializes in Financial Fraud. She is a Certified Financial Crime Specialist, Certified Anti-Money Laundering Specialist (CAMS), and Project Management Professional (PMP). Ms. Kerr is a financial industry veteran with a deep understanding of how financial institutions work. She has served as a Homeland Security Program Advisor and Fraud Detection Subject Matter Expert (SME) and an IT and AML Advisor for a three-bank merger.
©Copyright - All Rights Reserved
DO NOT REPRODUCE WITHOUT WRITTEN PERMISSION BY AUTHOR.