Over the past decade, the field of Artificial Intelligence (AI) has made significant progress and has evolved to the point where it is now being embraced across multiple industries. To clarify, AI is not a single technology but a set of related disciplines that includes Natural Language Processing (NLP), data mining and text analytics, semantic technologies (ontologies, RDF stores, linked data, sentiment analysis) and machine learning.

Since these technologies were first introduced in the corporate world, their primary use has been to improve the quality of investment decision support. However, they offer many other uses and benefits that companies can embrace throughout the client life cycle.

Onboarding and Verification

One of the first use cases for leveraging AI technologies in the client life cycle is onboarding and verification. Client onboarding was traditionally a matter of filling in forms and transferring assets. Today, however, things are considerably more complex. Know Your Customer (KYC) and Anti-Money Laundering (AML) regulations mandate that companies perform detailed due diligence on their clients and maintain a file of information documenting that due diligence.

The initial form filled in by a client is far more extensive than in the past. In addition, clients must provide supporting documents that substantiate the information disclosed on the form. AI can assist with this process by extracting data from the supporting documents and comparing it against the disclosed information. The greater concern, however, is what was not disclosed.
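The reconciliation step described above can be sketched in a few lines. This is a hedged, minimal illustration only: the field names and the simple equality check are invented for the example, and a real system would first run document extraction (OCR, NLP) to produce the `extracted` values.

```python
# Sketch: compare data disclosed on an onboarding form against fields
# extracted from supporting documents. All field names are hypothetical.

def reconcile(disclosed: dict, extracted: dict) -> dict:
    """Return fields that conflict, and fields found only in documents."""
    mismatches = {
        field: (disclosed[field], extracted[field])
        for field in disclosed.keys() & extracted.keys()
        if disclosed[field] != extracted[field]
    }
    # Items present in supporting documents but never disclosed on the
    # form are often the greater concern for a KYC reviewer.
    undisclosed = {
        field: extracted[field]
        for field in extracted.keys() - disclosed.keys()
    }
    return {"mismatches": mismatches, "undisclosed": undisclosed}

form = {"name": "Acme Ltd", "country": "US"}
docs = {"name": "Acme Ltd", "country": "GB", "beneficial_owner": "J. Doe"}
result = reconcile(form, docs)
```

Here the country mismatch and the previously undisclosed beneficial owner would both be flagged for human review.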

Client Search

The next area in which AI and linked data can be leveraged is performing client information searches. Performing these tasks manually is time-consuming even for a single client, and creates opportunities for inaccuracies and risk as a result of human error. Employing AI technologies can greatly reduce the manual effort involved in client onboarding, freeing up staff to focus on improving business performance and operations. The resulting data garnered from both free and paid sites can be analyzed, categorized, and filtered to identify any issues of concern.

In the future, traditional keyword-based client searches will move toward a natural language search model, where users will be able to question documents directly to find the required information. The answers to these search questions will be provided from fully integrated databases that contain structured and semantically tagged data. To achieve this, information will need to be semantically tagged using NLP, and ontologies will need to be built to support the linked data, enhancing search results with information available inside and outside the enterprise.

Ontologies

Ontologies are structured data models ‘on steroids’, and are a standard way of defining the knowledge about a domain. The World Wide Web Consortium (W3C) defines the Web Ontology Language (OWL) as “a Semantic Web language designed to represent rich and complex knowledge about things, groups of things, and relations between things”. Using a computational logic-based language, information expressed in OWL can be leveraged by computer programs to verify consistency or to make implicit knowledge explicit. OWL documents (i.e., ontologies) can be published on the World Wide Web and may refer to, or be referred to from, other OWL ontologies.

Ontologies can be used to support KYC/AML in the following areas:

a) Ontology-based information extraction – extracting relevant information from unstructured documents (for example, monitoring a website to detect people involved in money laundering).

b) Ontology-based information discovery through inference – detecting money laundering schemes, or establishing connections between people and organizations that appear on criminal watch lists.

c) Ontology-based compliance rule verification.

d) Seamless integration of external and internal data – for example, integrating internal watch lists across businesses.

Creating and sharing standard ontologies across countries and business entities can aid with implementation of global KYC/AML regulations. However, one of the dangers of trying to create perfect ontologies is a typical analysis-paralysis pattern, which we have previously seen in similar exercises in the past during the creation of enterprise data models. Every company should start with small core ontologies and develop them as the needs grow.

As an alternative to developing individualized ontologies, the industry has developed schema.org, “an open-source collaborative project to create, maintain, and promote schemas for structured data on the Internet, on web pages, in email messages, and beyond.” These schemas use simpler metadata to represent knowledge. The approach is simpler, requires less theoretical knowledge, and in some instances (for example, information extraction) replaces the complexity of full ontologies. Schema.org has grown quickly thanks to the community that enhances it.
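Schema.org data is commonly published as JSON-LD, which needs nothing beyond a JSON library to produce. Below is a hedged sketch: the types `Organization` and `PostalAddress` and the properties `legalName` and `addressCountry` are real schema.org terms, but the entity described is fictional.

```python
# Sketch: describe an entity using schema.org vocabulary as JSON-LD.
import json

org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "legalName": "Acme Ltd",            # fictional example entity
    "address": {
        "@type": "PostalAddress",
        "addressCountry": "GB",
    },
}

# Serialize for embedding in a web page or exchanging between systems.
doc = json.dumps(org, indent=2)
```

Because the vocabulary is shared, any consumer that understands schema.org can interpret this record without a bespoke, company-specific ontology.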

One of the important factors to keep in mind is that using shared ontologies must be associated with a very significant dose of “trust”. According to semantic data technologist Barbara Starr, “Trust is achieved through ascertaining the reliability of data sources and using formal logic when deriving new information.”

Conclusion

The AI technology stack mentioned above can offer enterprises great value if implemented wisely. These technologies can be used to:

a) Reduce manual labor and the errors it creates – for example, researching a business entity manually can be a very time-costly process, versus extracting relevant data from all available sources using ontology-based extraction.

b) Ensure optimal compliance monitoring – by defining compliance ontologies for each country and using ontology-based rule engines to verify transactions, a very efficient compliance verification engine can be created.

c) Discover new information – the most important benefit is the discovery of new information: detecting money laundering schemes and creating new methods to prevent them in the future.

Items a) and b) fall into the category of “you know what you don’t know”; item c) falls into the category of “you don’t know what you don’t know” – that is, enhancing institutional knowledge through AI-based discovery.
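The country-specific rule verification mentioned above can be sketched as rules grouped by jurisdiction and applied to each transaction. This is a hedged illustration: the $10,000 figure echoes the well-known U.S. currency transaction reporting threshold, but the rule names, the EU cash limit, and the transaction fields are all invented for the example; real rules would be derived from each country's compliance ontology.

```python
# Sketch: jurisdiction-specific compliance rules as named predicates.
# Each predicate returns True when the transaction passes the rule.

RULES = {
    "US": [
        # Large cash-equivalent transactions must be reported.
        ("ctr_threshold", lambda tx: tx["amount"] < 10_000 or tx["reported"]),
    ],
    "EU": [
        # Illustrative cap on cash payments.
        ("cash_limit", lambda tx: tx["method"] != "cash" or tx["amount"] <= 3_000),
    ],
}

def verify(tx):
    """Return the names of rules the transaction violates."""
    return [name for name, passes in RULES.get(tx["country"], []) if not passes(tx)]

tx = {"country": "US", "amount": 12_000, "reported": False, "method": "wire"}
violations = verify(tx)
# violations is ["ctr_threshold"]
```

Keeping the rules as data rather than hard-coded logic is what makes it practical to maintain one verification engine across many jurisdictions.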

(About the author: George Roth is the president and CEO of Recognos Inc. and also functions as the chief architect for Recognos Financial. In 1999, Roth and his partner Ken Rogers founded Recognos Inc., a Silicon Valley-based firm specializing in systems development. Roth has extensive experience in the financial services industry and in semantic technology. He has previously worked for Citibank, State Street, The Northern Trust Company, Merrill Lynch, BGI, Charles Schwab, and Fisher Investments.)
