Data Science & Analytics - A-Team

Duco Unveils AI-Powered Reconciliation Product for Unstructured Data

Duco, a data management automation specialist and recent A-Team Group RegTech Insight Awards winner, has launched an artificial intelligence-powered end-to-end reconciliation capability for unstructured data.

The Adaptive Intelligent Document Processing product will enable financial institutions to automate the extraction of unstructured data for ingestion into their systems. The London-based company said this will let market participants automate a choke-point that is often addressed through error-prone manual processes.

Duco’s AI can be trained on clients’ specific documents, learning how to interpret layout and text in order to replicate data gathering procedures with ever-greater accuracy. It will work within Duco’s SaaS-based, no-code platform.
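
Duco has not published the internals of the new product, but the general pattern it automates, turning free-text documents into structured records a reconciliation engine can match, can be illustrated with a toy sketch. The document text, field names and patterns below are invented, and in practice a trained model rather than hand-written rules would perform the extraction:

```python
import re

# Toy extractor: pull structured fields out of a free-text trade
# confirmation so the result can be ingested by a reconciliation engine.
# The document, field names and patterns are all hypothetical.
CONFIRMATION = """
Trade Confirmation
Trade Date: 2024-07-01
ISIN: US0378331005
Quantity: 1,500
Price: USD 192.35
"""

PATTERNS = {
    "trade_date": r"Trade Date:\s*(\d{4}-\d{2}-\d{2})",
    "isin": r"ISIN:\s*([A-Z0-9]{12})",
    "quantity": r"Quantity:\s*([\d,]+)",
    "price": r"Price:\s*[A-Z]{3}\s*([\d.]+)",
}

def extract(document: str) -> dict:
    """Map unstructured text to one structured record, field by field."""
    record = {}
    for field, pattern in PATTERNS.items():
        match = re.search(pattern, document)
        record[field] = match.group(1) if match else None
    return record

record = extract(CONFIRMATION)
record["quantity"] = int(record["quantity"].replace(",", ""))
print(record)  # a structured row, ready for matching against internal books
```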

The company won the award for Best Transaction Reporting Solution in A-Team Group’s RegTech Insight Awards Europe 2024 in May.

Managing unstructured data has become a key goal of capital markets participants as they take on new use cases, such as private market access and sustainability reporting. These domains are largely built on datasets that lack the structure of the reference, pricing and other data with which they must be amalgamated in firms’ systems.

“Our integrated platform strategy will unlock significant value for our clients,” said Duco chief executive Michael Chin. “We’re solving a huge problem for the industry, one that clients have repeatedly told us lacks a robust and efficient solution on the market. They can now ingest, transform, normalise, enrich and reconcile structured and unstructured data in Duco, automating data processing throughout its lifecycle.”

Practicalities of Implementing GenAI in Capital Markets

Following the opening keynote of A-Team Group’s AI in Capital Markets Summit (AICMS), a panel of expert speakers focused on the practicalities of implementing GenAI. The panel agreed that industry hype is waning and there is enthusiasm for GenAI with firms beginning to develop use cases, although one speaker noted: “People understand the risks and costs involved, but they were initially underestimated, I would say dramatically in some cases.”

The panel was moderated by Nicola Poole, formerly at Citi, and joined by Dara Sosulski, head of AI and model management markets and securities services at HSBC; Dr. Paul Dongha, group head of data and AI ethics at Lloyds Banking Group; Fatima Abukar, data, algorithms and AI ethics lead at the Financial Conduct Authority (FCA); Nathan Marlor, head of data and AI at Version 1; and Vahe Andonians, founder, chief product officer and chief technology officer at Cognaize.

Considering the use of GenAI, an early audience poll question asked to what extent organisations are committed to GenAI applications. Some 46% said they are testing GenAI apps, 24% are using one or two apps, and 20% are using a number of apps. Nine percent are researching GenAI and 2% say there is nothing in the technology for them.

Value of GenAI applications

A second poll questioned which GenAI applications would be of most value to a delegate’s organisation. In this case, 53% of respondents cited predictive analytics, 39% risk assessment, 39% KYC automation, 28% fraud detection and 19% portfolio management.

The panel shared their own use cases, with one member experimenting with GenAI to produce programming code and create an internal chatbot for data migration, as well as scanning data to surface information that can be categorised, sorted, filtered and summarised to create ‘kind of conversational extracts that can be used’.

All agreed that GenAI offers some low-hanging fruit, particularly in operational activities such as KYC automation, but that the technology is too young for many applications. This is leading firms to build capability internally before unleashing GenAI apps for customers, as there is still work to do around issues such as risk integration and ensuring copyright and data protection are not compromised. One speaker said: “There is a lot of experimentation and some research to do before we’re confident that we can use this at scale.” Another added: “There are just not enough skilled people to allow us to push hard, even if we wanted to. There’s a real pinch point in terms of skills here.”

Risks of adopting GenAI

Turning to risk, a third poll asked the audience what it considered to be the biggest risk around adopting GenAI. Here data quality was a clear leader, followed by lack of explainability, hallucinations, data privacy and potential misuse. Considering these results, a speaker commented: “We’ve already got existing policies and governance frameworks to manage traditional AI. We should be using those to better effect, perhaps in response to people identifying data quality as one of the key risks.”

The benefits of AI and GenAI include personalisation that can deliver better products to consumers and improve the way in which they interact with technology. From a regulatory perspective, the technologies are focused on reducing financial crime and money laundering, and on strengthening enforcement against fraudulent activity.

On the downside, the challenges that come with AI technologies are many and include ethical risk and bias, which need to be addressed and mitigated. One speaker explained: “We have a data science lifecycle. At the beginning of this we have a piece around the ethical risk of problem conception. Throughout the lifecycle stages our data scientists, machine learning engineers and feature engineers have access to Python libraries so that when they test models, things like bias and fairness are surfaced. We can then see and remediate any issues during the development phase so that by the time models come to validation and risk management we can demonstrate all the good stuff we’ve done.” This points to the need, at least in the short term, for a human element to verify and quality-assure GenAI models in their infancy.

Getting skills right

Skills were also discussed, with one panel member saying: “We are living in a constantly more complex world; no organisation can claim that all its workforce has the skill set necessary for AI and GenAI, but ultimately I am hopeful that we are going to create more jobs than we are going to destroy, although the shift is not going to be easy.” Another said: “In compliance, we will be able to move people away from being manual gatherers and assessors of data and documents towards understanding risk models, having better capabilities and playing a more interesting part.”

Taking a step back and a final look at the potential of GenAI, a speaker concluded: “Figuring out how to make safe products that we can offer to our customers is the only way we have a chance of reaching any sort of utopian conclusion. We must chart the right course for society and for people at work, because we’re all going to be affected by generative AI.”

AI in Capital Markets Summit Tracks Evolution of GenAI and Value Creation

Generative AI (GenAI) took the world by storm in November 2022 when OpenAI introduced ChatGPT. It has since become a talking point across capital markets as financial institutions review its potential to deliver value, consider the challenges it raises, and question whether they have the data foundation in place to deliver meaningful, unbiased and ethical results from GenAI applications. While applications have yet to be implemented to any significant extent in the market, financial institutions are running internal proofs of concept.

The potential and problems of AI and GenAI were the subject of lively discussion at A-Team Group’s inaugural AI in Capital Markets Summit (AICMS) in London last week, with speakers exploring current and emerging trends in AI, the potential of GenAI and large language models (LLMs), and how AI can be applied to achieve efficiencies and business value across the organisation. With a note of caution, the conversation also covered the risks and challenges of adopting AI and the foundational technologies and data management capabilities that underpin successful deployment.

Opening the summit and introduced by A-Team president and chief content officer Andrew Delaney, Edward J. Achter from the office of applied AI at HSBC set the scene for the day, noting the need to build AI and GenAI products that are responsible and ethical and can be scaled, and describing the importance of educating and engaging the workforce to ensure solutions are used effectively and ethically.

In more detail, the keynote speaker explained the explosion of interest in AI and GenAI following the release of ChatGPT, and the resulting change in conversation at financial institutions. He also warned of risks inherent to the technology, including fairness and bias, data privacy, and the deliberate spread of false information. To mitigate risk and create value, Achter emphasised the need for firms to get their data house in order and, perhaps answering a long-standing request, pay attention to data leaders, as data is the lifeblood of AI and GenAI applications.

Also important to consider are regulatory requirements around AI and GenAI, addressing the carbon emission costs of using LLMs and, perhaps most importantly, writing a clear company policy that can be shared with all stakeholders. Demonstrating the benefits of AI and GenAI products, including measurable productivity gains, can turn scepticism into understanding and change negative perspectives into positive approaches to doing more with the technology.

Ultimately, a skilled workforce, educated customers, technology used in the right context of conduct, and confidence across the organisation will result in value creation.

Datactics Enhances Augmented Data Quality Solution with Magic Wand and Rule Wizard

Datactics has enhanced the Augmented Data Quality Solution (ADQ) it brought to market in November 2023 with the addition of an AI magic wand, Snowflake connectivity and an SQL rule wizard in ADQ v1.4. The company is also working towards the release of ADQ v1.5 that will include generative AI (GenAI) rules and predictive remediation suggestions based on machine learning (ML).

ADQ was born out of market interest in a data quality solution designed not only for dedicated data quality experts and data stewards, but also for non-tech users who could write their own data quality rules. A user-friendly experience, automation and a reduction in manual processes were also top of mind.

Kieran Seaward, head of sales and business development at Datactics, explains: “Customers said their challenges with data quality were the time it took to stand up solutions and enable users to manage data quality across various use cases. There were also motivational challenges around tasks associated with data ownership and data quality. We took all this on board and built ADQ.”

ADQ v1.4

ADQ made a strong start, and v1.4 is also a response to customer interests, this time in automation, reduced manual intervention, improved data profiling and exception management, increased connectivity, predictive data quality analytics, and more.

Accelerating automation, ADQ v1.4 offers enhanced out-of-the-box data quality rules that ease the burden for non-tech users. The AI magic wand includes reworked AI and ML features and an icon showing where users can benefit from Datactics ML in ADQ. Data quality process automation also accelerates the assignment of issues to nominated data users.

Increased connectivity features the ability to configure a Snowflake connection straight through the ADQ user interface, eliminating the need to set this up in the backend. The company is working on additional integrations as it moves towards v1.5.

Predictive data quality analytics monitor data quality and alert data stewards to breaks and other issues. Stewards can then view the problems and ADQ v1.4 will suggest solutions. Based on a breakage table of historical data from data quality rules, ADQ v1.4 can also predict why data quality will fail in the future. Seaward comments: “Data quality is usually reactive but now we can put preventative processes in place. Predictive data quality is very safe to use as the ML does not change the data, instead providing helpful suggestions based on pattern recognition.”
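
Datactics has not detailed the model behind this capability, but the general idea, learning from a historical breakage table so that feeds resembling past failures can be flagged before they break, can be sketched with scikit-learn. The features and figures below are hypothetical:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical breakage table: one row per historical rule run, with simple
# features and a label recording whether the run produced a break.
breaks = pd.DataFrame({
    "null_rate":       [0.01, 0.20, 0.02, 0.35, 0.03, 0.40],
    "row_count_delta": [0.00, 0.50, 0.05, 0.60, 0.02, 0.70],
    "late_arrival":    [0, 1, 0, 1, 0, 1],
    "rule_failed":     [0, 1, 0, 1, 0, 1],
})

X = breaks.drop(columns="rule_failed")
y = breaks["rule_failed"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

# Pattern recognition only: the model never changes the data, it just flags
# feeds whose recent behaviour resembles past failures.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(model.predict_proba(X_test)[:, 1])  # probability each run will break
```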

The SQL rule wizard allows data quality authors to build SQL rules in ADQ, performing data quality checks in-situ to optimise processing time.
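
The SQL generated by the wizard is not public, but the in-situ principle is straightforward: push the check down to the database so that only the result travels back. A minimal sketch using the Snowflake Python connector, with placeholder credentials and a hypothetical instruments table:

```python
import snowflake.connector  # pip install snowflake-connector-python

# Placeholder connection details and table/column names.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password",
    warehouse="ANALYTICS_WH", database="MARKET_DATA", schema="PUBLIC",
)

# The rule runs where the data lives: only the failure count comes back,
# not the failing rows, which keeps processing time and egress down.
RULE_SQL = """
SELECT COUNT(*) AS failures
FROM instruments
WHERE isin IS NULL
   OR NOT REGEXP_LIKE(isin, '^[A-Z]{2}[A-Z0-9]{9}[0-9]$')
"""

cur = conn.cursor()
failures = cur.execute(RULE_SQL).fetchone()[0]
print(f"ISIN rule violations: {failures}")
conn.close()
```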

ADQ v1.5

Moving on to ADQ v1.5 and the integration of GenAI, users will be able to query the model, write a rule for business logic specific to their domain, and test the rule to see if it produces the desired results. Datactics is currently using OpenAI’s ChatGPT to explore the potential of GenAI, but acknowledges that financial institutions are likely to have their own take on LLMs and will point its solution at these internal models.

Other developments include a data readiness solution with preconfigured rules that can check data quality and allow remedial action before regulatory data submissions are made for regulations including EMIR Refit, MiFID III and MiFIR II, the US Data Transparency Act, and SEC rule 10c-1.

Criticality rules that will help data stewards prioritise data problems and solutions are also being prototyped, along with improved dashboards and permissioning. As it began, next-stage development will continue to make ADQ more friendly for business users.

Global Indices Launch Marks SIX’s Latest Expansion of ETF Business

SIX, the data aggregator and operator of Swiss and Spanish stock exchanges, has expanded its indexing business with the creation of two families of global equities indices that can be used by the company’s retail, private banking and asset management clients.

The SIX World Indices provide a broad view of global markets through a diversified and variable roster of stocks traded across major markets. Meanwhile, the SIX Broad & Blue-Chip Indices present views into a fixed array of equities that constitute the most representative companies within geographic markets and regions.

The indices are aimed at helping clients streamline their data operations by providing direct access to information on the stocks in which they are invested without having to subscribe to costly third-party services and products.

Further Expansion

The new indices represent the latest step in SIX’s plan to become a one-stop-shop for exchange-traded funds (ETFs), providing fund manufacturers with the tools to create and list their products and take advantage of SIX’s trading, custody, index and data services.

It already publishes indices around its Swiss and Spanish stock exchange operations, and has also launched Nordic and ESG gauges. The company signalled its intention to build out its index business earlier this year when it made a strategic investment in BITA, a provider of indexing technology and services used by exchanges, delta one desks and asset management firms. The cash injection cemented a relationship begun two years ago, one that has aided SIX’s move into the cryptocurrency sector.

In an interview with Data Management Insight, which will be published in full next week, SIX head of financial information Marion Leslie explained that the BITA investment provides the opportunity “to do so much more”.

“The ability to start running global indices, custom indices, thematic indices – which we are investing in – is a great asset to have,” Leslie said.

SIX head of index services, financial information Christian Bahr said that the company’s index business was responding to rising demand for benchmarks, especially from passive funds, and that he expected more to be created soon.

“Establishing a strong presence across the banking sector is paramount for recognition as a key player in global indices and global market data,” Bahr said. “Many financial institutions are in the process of renewing their online banking products, including their own websites and apps.”

Fast Connections

The indices also provide an API connection to data on each component, enabling clients to access real-time pricing and granular performance data, as well as other datasets. This, said Bahr, provides a more “sophisticated overview of market performance for their investment-savvy customers, and our combined proposition around API delivery will make accessing this data faster, simpler, and more cost-effective for financial institutions”.

All indices are priced in dollars, euros and Swiss francs and components are weighted by free-float market capitalisation. The SIX World Indices will be reviewed each June and December, while those in the SIX Broad and Blue-Chip family will be updated individually.
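
Free-float market-capitalisation weighting is a standard scheme: each constituent’s weight is its price multiplied by its free-float share count, divided by the total across the index. A minimal sketch with invented constituents:

```python
# Free-float market-cap weighting with invented constituents and figures.
constituents = {
    #           (price, free-float shares)
    "STOCK_A": (100.0, 5_000_000),
    "STOCK_B": (40.0, 20_000_000),
    "STOCK_C": (250.0, 2_000_000),
}

free_float_cap = {t: price * shares for t, (price, shares) in constituents.items()}
total_cap = sum(free_float_cap.values())

weights = {t: cap / total_cap for t, cap in free_float_cap.items()}
for ticker, weight in weights.items():
    print(f"{ticker}: {weight:.2%}")  # STOCK_A: 27.78%, STOCK_B: 44.44%, ...
```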

TurinTech innovates with Artemis code optimisation

TurinTech, a London-based technology vendor, plans to revolutionise code optimisation with its GenAI Artemis solution. Artemis is based on a proprietary large language model (LLM) – although it can be used with other LLMs – that is trained to help financial firms optimise software code, speed up execution, reduce cloud costs and lower carbon emissions. To date, Artemis has been implemented by investment banks in the UK, France and US.

The company was set up in 2019 by co-founders who met at University College London while doing PhD research. They went on to work in financial institutions, where they experienced the problems of getting code into production at any speed, internal bottlenecks holding up developers, and the pain points of code reviews.

“There had to be a better way of doing things and a way to resolve these problems,” says Leslie Kanthan, CEO and co-founder of TurinTech. He notes that while financial institutions tend not to have code optimisation teams, Artemis code optimisation can help them improve code quality and make developers more efficient. For firms spending vast amounts of money on cloud, optimised code can cut those bills by about 10%, a potentially huge saving.

As well as optimising code and reducing costs, Artemis plays well into financial institutions’ sustainability goals by running better code faster, decreasing compute usage and providing energy savings.

Artemis scans software code on-premises or in the cloud. It uses TurinTech’s LLM, which has been trained on millions of lines of code and informed by the team’s proprietary knowledge, although it can also be used with other LLMs, perhaps less effectively. It also takes hardware into consideration to allow legacy systems to perform to the best of their ability.

Use cases of the solution include identifying weaknesses in code and providing recommendations for optimal changes that enhance performance, noting code that could be sped up or improved by modifying particular lines, and analysing code bases to predict their efficiency – all with a human in the loop but reducing resource requirements overall.
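
TurinTech’s model and pipeline are proprietary, but the human-in-the-loop pattern described here, asking an LLM for line-level optimisation suggestions that a developer then reviews, can be sketched with a generic client. The client, model name and prompt below are assumptions, not Artemis internals:

```python
from openai import OpenAI  # generic stand-in client; Artemis uses its own LLM

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SOURCE = '''
def moving_average(xs, n):
    out = []
    for i in range(len(xs) - n + 1):
        out.append(sum(xs[i:i + n]) / n)  # O(n) per window, O(len*n) overall
    return out
'''

prompt = (
    "Suggest line-level optimisations for this Python function. "
    "Keep behaviour identical and explain the expected speed-up:\n" + SOURCE
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

# Human in the loop: suggestions are reviewed by a developer, never auto-applied.
print(response.choices[0].message.content)
```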

Kanthan concludes: “Everyone wants to use AI, but will it add value to the business? LLMs are just another form of data, so you need apps for use cases. TurinTech has an app for code optimisation and is, at the moment, leading the market.”

GoldenSource Partners Snowflake to Deliver Omni Data Management App for the Buy-Side

GoldenSource has partnered Snowflake to deliver GoldenSource Omni, a native application that deploys the company’s data model on the Snowflake Data Cloud and combines data from multiple sources to centralise all processing and provide analytics and reporting of investment data within the cloud.

As well as combining operational and analytical investment data in a unified data model, GoldenSource Omni allows investment managers to view datasets including securities, prices, listed and private portfolios, transactions and ESG data within a single, easy-to-understand format on the Snowflake platform. They can then analyse data more effectively and accelerate the application of generative AI, including training AI and machine learning models. They can also analyse portfolio holdings and exposures in a timely manner, drill into specific attributes of a portfolio and automate attribution reporting.
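
GoldenSource has not published the Omni schema, but the value of a unified data model is easy to illustrate: when securities, holdings and ESG data share identifiers, portfolio analytics reduce to simple joins. A sketch with invented tables and figures:

```python
import pandas as pd

# Invented extracts from a unified investment data model: because the
# datasets share identifiers, exposure and ESG views are plain joins.
securities = pd.DataFrame({
    "isin": ["US0378331005", "US5949181045"],
    "name": ["Apple", "Microsoft"],
    "price": [192.35, 447.67],
})
holdings = pd.DataFrame({
    "isin": ["US0378331005", "US5949181045"],
    "quantity": [1_500, 800],
})
esg = pd.DataFrame({
    "isin": ["US0378331005", "US5949181045"],
    "esg_score": [71, 78],
})

view = securities.merge(holdings, on="isin").merge(esg, on="isin")
view["exposure"] = view["price"] * view["quantity"]
view["weight"] = view["exposure"] / view["exposure"].sum()
print(view[["name", "exposure", "weight", "esg_score"]])
```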

“Historically, buy-side participants have struggled with the management of disparate datasets. This was exacerbated by the absence of a comprehensive data model,” says Jeremy Katzeff, head of buy-side solutions at GoldenSource. “With the release of GoldenSource Omni, datasets for different asset classes and functions can be integrated and analysed within a modern cloud-native environment. Firms can replace outdated legacy systems with centralised, cloud-based enterprise data management that is far more efficient and cost effective.”

Rinesh Patel, global head of industry, financial services at Snowflake, adds: “GoldenSource is an ideal partner for us with its market experience and data model that links data across domains, providing a more efficient way for joint customers to run analytics, derive insights and train AI models within the Snowflake platform.”

AI Integration in Capital Markets: Current Trends and Future Directions

Although artificial intelligence (AI) and machine learning (ML) have been widely used in the capital markets sector since the 2000s, the emergence of generative AI (GenAI) within the last 18 months has spurred a significant increase in investment in AI tools and technologies. This trend is set to continue as AI is deployed and utilised in increasingly innovative ways across the front, middle and back office.

In this article, we explore the current state, challenges, and future directions of AI and ML in capital markets, with insights from four industry experts who will be speaking at A-Team Group’s upcoming AI in Capital Markets Summit London.

The Current Landscape: GenAI and Internal Efficiency

GenAI – and large language models (LLMs) such as GPT-4 in particular – has certainly made a significant impact on the direction of technology projects in capital markets firms. But where is the work actually being focused?

“Many individuals face pressure from senior management to take action in response to the wave of GenAI, as no one wants to be left behind,” says Theo Bell, Head of AI Product at Rimes, a provider of enterprise data management and investment management solutions. “However, the levels of maturity vary significantly. Most firms are focusing on internal efficiency rather than the investment process itself, utilising tools like Microsoft 365 and GitHub co-pilots to streamline workflows, whether that’s for writing code, emails, or generating documents.”

Bell indicates that some firms are further ahead than others in their utilisation of GenAI, having developed front-office co-pilots for broker research, for example. “Some are automating tasks such as market analysis and performance attribution reporting, or – like us – are building co-pilots using their own data to answer questions like, ‘What is the top ETF by AUM?’ or ‘What are my holdings in Apple?’”

Nathan Marlor, Head of Data and AI at Version 1, an IT services and solutions vendor specialising in digital transformation, agrees that the current focus of GenAI is primarily on productivity gains. “It automates the mundane aspects of our jobs,” he says. “Within financial services, GenAI’s impact might not be as significant as in other industries as the use cases differ. The primary application of AI in financial services remains as it has always been,” he says, “using machine learning models to make market predictions, understand trends, and provide insights by combining numerous data sources swiftly.”

Machine Learning vs GenAI

The distinction between machine learning and GenAI is an important one, given their different use cases within capital markets. While they are both subsets of artificial intelligence, they focus on different aspects of AI applications. ML involves algorithms that learn from and make predictions or decisions based on data, improving their performance as they are exposed to more data over time. In contrast, GenAI and LLMs often leverage complex models such as deep neural networks to generate new content that mimics real-world examples. While ML can include predictive analytics, GenAI is distinguished by its ability to create novel, coherent outputs that resemble human-like interactions.
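
The distinction can be made concrete in a few lines: a predictive model learns a mapping from data and outputs an estimate, while a generative model produces novel content. A minimal sketch, with the small GPT-2 model chosen purely for illustration:

```python
from sklearn.linear_model import LinearRegression
from transformers import pipeline  # pip install transformers

# Predictive ML: learn a mapping from data and output an estimate.
model = LinearRegression().fit([[1.0], [2.0], [3.0]], [2.1, 3.9, 6.2])
print(model.predict([[4.0]]))  # roughly 8: a number, not new content

# GenAI: a (small, illustrative) language model generating novel text.
generator = pipeline("text-generation", model="gpt2")
print(generator("The bond market today", max_new_tokens=20)[0]["generated_text"])
```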

“It’s important to remember that AI and ML have been used in this industry for years and will continue to be valuable, especially in predictive AI. Large language models are not always necessary,” says Bell. “We are beginning to better understand which use cases are suitable for GenAI and which are better served by more traditional methods.”

She continues: “GenAI models are adept at converting text to SQL and running data queries on things like holdings, benchmarks and so on. But I’m keen to discover the next step in workflow automation, not just asking questions about ETFs or holdings for example, but exploring how to enhance the subsequent stages in the trading workflow and progress things up the value chain.”
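
The text-to-SQL pattern Bell describes can be sketched in a few lines. The schema, model choice and example output below are assumptions, and any generated SQL should be validated before it is executed:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SCHEMA = "holdings(ticker TEXT, asset_class TEXT, aum_usd NUMERIC)"
question = "What is the top ETF by AUM?"

# Constrain the model to a known schema and ask for SQL only.
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": f"Schema: {SCHEMA}\n"
                   f"Write one SQL query answering: {question}\n"
                   "Return SQL only, no commentary.",
    }],
)

sql = completion.choices[0].message.content
print(sql)  # e.g. SELECT ticker FROM holdings WHERE asset_class = 'ETF'
            #      ORDER BY aum_usd DESC LIMIT 1  -- validate before running
```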

Reena Raichura, ex-head of product solutions at interop.io and now the founder of Finergise, a fintech advisory company, outlines some concerns regarding the focus (hype?) around GenAI and LLMs over traditional ML methods in the capital markets sector.

“Currently, much attention is on GenAI and LLMs. However, many real-world applications of AI and ML are unrelated to GenAI,” she says. “For instance, behavioural analysis, bond pricing, and liquidity discovery can all utilise machine learning without involving GenAI or ChatGPT. It seems we’re jumping ahead to GenAI & LLMs without fully leveraging machine learning in the trading space. It feels like we’ve skipped a step. There’s still much to achieve with ML for financial institutions and vendors alike. When rolling out new applications or functionality, analysing user behaviour and workflow is essential to ensure effective utilisation and that you are meeting the needs of your clients/users. Understanding what traders and operations staff want is crucial. By analysing user behaviour on platforms, understanding workflows, and identifying the information users seek, we can intelligently prompt users with the right actions and data at the right time as well as better build the GenAI tools.”

Infrastructure and Data Strategy

One factor behind GenAI’s rapid take-up is that its infrastructure requirements are markedly different from those required for traditional ML techniques, explains Raichura. “People are quick to adopt GenAI because it doesn’t necessarily require modernising the trading stack; it’s about managing vast amounts of data. In contrast, to get the most value from machine learning, modernisation is required and the ability to analyse user interactions across applications. While the enthusiasm for GenAI is understandable, it’s important to consider the foundational work required for effective machine learning.”

Why is it so important to prioritise data quality and management in AI development?

“Data and AI are intrinsically linked, and the adage ‘garbage in, garbage out’ still applies,” says Marlor. “If your data lacks quality and proper lineage, developing any AI solution will be challenging. Positioning data as a first-class citizen is crucial as it underpins all subsequent AI developments. Without the right data at the appropriate sampling rate, you can’t extract relevant features or train models effectively.”

Raichura agrees. “You definitely can’t have an AI strategy without a data strategy. It’s crucial to address how you structure your data. The two strategies must go hand in hand.”

Challenges and Risks: Price, Performance, and Privacy

Various risk factors should be taken into consideration when undertaking a new GenAI project, explains Marlor. “Price, performance, and privacy are the primary risk factors to consider,” he says.

“Regarding price, implementing a new GenAI solution using GPT-4 Turbo or other advanced models may result in unexpectedly high bills if not managed correctly. Costs can quickly escalate, particularly if models are not right-sized for performance. This is a key risk, as the price tag of these solutions is not always fully understood. Deploying open-source models on static cloud infrastructure can help control costs, though it might slow down performance, which is another risk consideration. What do you need for your specific use case? From an engineering perspective, it’s easy to become engrossed in leaderboards and benchmarks. However, benchmarks can sometimes be misleading, so testing a particular model for your specific use case can provide a better understanding of its performance.”

There are also privacy risks associated with using public GenAI models in a business environment, says Marlor. “Public models can retrain based on your prompts, potentially exposing your data to the public domain. Private data needs to be managed differently, possibly using local or cloud-based open-source models, or small language models to reduce costs.”
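
One mitigation Marlor points to, running an open-weights model locally so that prompts never leave the firm, might look like the following sketch. The model named is just one open-weights option and suitable hardware is assumed:

```python
from transformers import pipeline  # pip install transformers accelerate

# Local inference: prompts and outputs stay on the firm's infrastructure,
# so a public service can never retrain on them. Model choice is an example.
generate = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",
    device_map="auto",  # expects a suitably sized GPU; falls back to CPU
)

prompt = "Summarise the counterparty exposure policy in two sentences:\n..."
print(generate(prompt, max_new_tokens=120)[0]["generated_text"])
```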

Explainability and Responsible AI

Explainability in AI models is another important consideration, particularly when deploying ‘black box’ type AI/ML models, suggests Harsh Prasad, a Senior Quantitative Researcher in AI/ML.

“When making investment decisions and finding any kind of alpha, random correlations are insufficient; they must be linked to causality,” he contends. “Essentially, when examining your decision-making process, the question becomes: how do you explain the patterns you observe? Do you have adequate tools to provide such explanations? This becomes even more complex in a multi-dimensional context, with numerous variables at play. Determining the exact contribution of each variable becomes crucial, especially when adopting more sophisticated, high-dimensional models. Without being able to identify the source of value, it will be challenging to convince management, regulators, or other stakeholders. Therefore, a discussion about how to explain observed patterns is essential.”

He continues: “Viewing everything through the lens of explainability makes things both interesting and challenging, particularly in terms of making a ‘black box model’ more accessible. In the context of GenAI, this raises questions about how models are validated. It’s fascinating to examine specific use cases, understand what exactly is being done, which techniques are employed, and why. While these techniques might be best-in-class at the moment, they are not necessarily perfect. So, what is the next step in research? Where is further work needed?”
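
One widely used technique for attributing a model’s output to individual variables, though not necessarily the one Prasad has in mind, is SHAP, which decomposes each prediction into per-feature contributions that sum back to the model output. A minimal sketch on a toy model:

```python
import numpy as np
import shap  # pip install shap
from sklearn.ensemble import GradientBoostingRegressor

# Toy model: predict a return-like target from three features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * rng.normal(size=500)
model = GradientBoostingRegressor().fit(X, y)

# SHAP attributes each prediction to per-feature contributions; together
# with the base value they reconstruct the model output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values)  # one contribution per feature per row
print(shap_values.sum(axis=1) + explainer.expected_value)  # ≈ model.predict(X[:5])
```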

In summary, it’s clear that while GenAI and LLMs like GPT-4 present a wide range of opportunities for advancing trading operations, the foundational elements of machine learning and robust data strategies are critical components for firms who want to realise the full potential of AI technologies. Moving forward, the industry will need to balance the allure of new AI tools with the practical aspects of data quality, model explainability, and infrastructural readiness, not only to enhance operational efficiencies, but also to pave the way for sustainable, value-driven innovations.

To learn more about how AI could benefit your organisation, where you need to invest from a data, technology and workforce perspective, and how to ensure your AI strategy will set your organisation on a path to future success, book your place at the A-Team Group’s AI in Capital Markets Summit today, using the link below.

AI in Capital Markets Summit London

FactSet Expands GenAI Offerings with Portfolio Commentary

FactSet, provider of a financial digital platform and enterprise solutions, has expanded its GenAI offerings with Portfolio Commentary. The solution complements the company’s performance attribution capabilities with detailed and source-linked commentary allowing buy-side and wealth managers to understand key drivers of portfolio performance more holistically.

Powered by large language models (LLMs), Portfolio Commentary also reduces the time-consuming and nuanced process of writing attribution summaries manually. Instead, FactSet claims, users can generate baseline and source-linked portfolio commentary for any attribution report in about 30 to 60 seconds.

“For over 20 years, FactSet has delivered ever-improving solutions for attribution analysis. We are now raising the bar with the introduction of Portfolio Commentary,” says Chris Ellis, executive vice president, head of strategic initiatives at FactSet. “Based on internal testing, we anticipate the solution will enable asset managers, asset owners and wealth managers to reduce the time spent writing portfolio commentary by a factor of eight and focus on the more high-value, strategic priorities of improving performance and strengthening client relationships.”

Key features of Portfolio Commentary include a deep understanding of relative performance through four types of insight: an executive summary; sub-period analysis of trends; ‘stepping back’ to explain relative portfolio performance starting from the benchmark; and ‘stepping in’ to highlight the most influential securities within the analysis. The solution also uses a proprietary algorithm to highlight companies with the most significant impact, and offers direct source linking, with each sentence in the commentary including auditable links to the numbers behind the given assertions, allowing users to navigate between the explanation and the supporting data.

“Our goal at FactSet is to make portfolio managers and research professionals over 50% more efficient at their jobs,” says Rob Robie, executive vice president, head of institutional buy-side at FactSet. “Building on our performance and attribution systems, Portfolio Commentary advances this mission by generating attribution-based insights designed to better evaluate performance, contextualise portfolio attributions and deliver tangible results.”

Portfolio Commentary can be accessed via FactSet’s Portfolio Analysis solution within the FactSet Workstation.

Snowflake Releases Arctic Open-Source LLM for Complex Enterprise Workloads

Snowflake has released Arctic, an open-source large language model (LLM) designed to deliver top-tier intelligence and efficiency at scale, and optimised for complex enterprise workloads. The company’s debut of its own open-source LLM takes it into competition with the likes of OpenAI’s GPT-4, Google’s Gemini, Meta Platforms’ Llama 2 and Mistral AI’s Mixtral.

Snowflake Arctic is based on a ‘mixture-of-experts’ architecture and is part of the company’s LLM family that includes practical text-embedding models for retrieval use cases. Underlining the openness of the LLM, Snowflake is releasing Arctic’s weights under an Apache 2.0 license along with details of the research behind how it was trained.

“By delivering intelligence and efficiency in a truly open way to the AI community, we are furthering what open-source AI can do. Our research with Arctic will enhance our capability to deliver reliable, efficient AI to our customers,” says Sridhar Ramaswamy, CEO at Snowflake.

Snowflake Arctic includes code templates and flexible inference and training options so users can get started quickly with deploying and customising Arctic using their preferred frameworks. These will include NVIDIA NIM with NVIDIA TensorRT-LLM, vLLM, and Hugging Face, a machine learning and data science platform and community that helps users build, deploy and train machine learning models, and provides infrastructure to demo, run and deploy AI in live applications.

For immediate use, Arctic is available for serverless inference in Snowflake Cortex, Snowflake’s fully managed service that offers machine learning and AI solutions in the Data Cloud. It will also be available on Amazon Web Services, Microsoft Azure, Hugging Face, Lamini, NVIDIA API catalog, Perplexity and Together AI. When accessed in Snowflake Cortex, Arctic will accelerate customers’ ability to build production-grade AI apps at scale, within the security and governance perimeter of the Data Cloud.
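
At the time of writing, Snowflake Cortex exposes its hosted models through SQL functions such as SNOWFLAKE.CORTEX.COMPLETE. A sketch of calling Arctic serverlessly from Python, with placeholder credentials:

```python
import snowflake.connector  # pip install snowflake-connector-python

# Placeholder connection details.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password",
    warehouse="COMPUTE_WH",
)

# Serverless inference: Cortex runs the model inside Snowflake, so the
# prompt and response never leave the Data Cloud's governance perimeter.
cur = conn.cursor()
cur.execute(
    "SELECT SNOWFLAKE.CORTEX.COMPLETE('snowflake-arctic', %s)",
    ("Classify the sentiment of this filing excerpt: ...",),
)
print(cur.fetchone()[0])
conn.close()
```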

Clement Delangue, CEO and co-founder at Hugging Face, concludes: “There has been a massive wave of open-source AI in the past few months. Snowflake is contributing significantly with this release not only of the model with an Apache 2.0 license but also with details on how it was trained. It gives the necessary transparency and control for enterprises to build AI and for the field as a whole to break new ground.”
