
Global Asset Management Firm Builds an AI-based ETF-like Investing Product to Drive Performance

Client Global Asset Management Firm

Overview

Our client is an asset management and investment advisory firm known for its global reach and local presence. The company focuses on providing optimal asset allocation solutions, continuously evaluating and selecting management companies and funds, and constructing specialized portfolios selected for quality from a global perspective.

This project involved deploying a highly secure, scalable, and efficient cloud architecture on AWS to enhance the client's financial strategies. Neurons Lab was tasked with improving out-of-sample performance, reducing the variance of data estimates, and speeding up the development of financial strategies.

Challenges

  • Performance Improvement: The need to improve the out-of-sample performance of their portfolios, ensuring more reliable and consistent results.
  • Speed of Development: The client aimed to accelerate the strategy development process, reducing the time taken from ideation to implementation.
  • Risk Management: There was a critical need to enhance risk management practices, particularly in the context of backtesting and scenario analysis.
  • Infrastructure Scalability: The client required a scalable and secure cloud environment capable of handling large volumes of financial data and complex computations.

Solution

Neurons Lab developed a customized solution leveraging AWS’s robust cloud infrastructure. Key components of the solution included:

  • Advanced Backtesting Framework: A sophisticated backtesting environment using scenario-based and cross-validation techniques to improve confidence in out-of-sample performance.
  • Market Structure Algorithms: Implementation of market structure algorithms using hierarchical clustering, which provided a more reliable estimation of market data and fundamentals.
  • Scalable Cloud Architecture: Deployment on AWS using services like AWS Fargate for API hosting, SageMaker for real-time inference, and additional AWS services for security and monitoring.
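To make the market structure idea concrete, the sketch below applies single-linkage agglomerative clustering to a correlation-derived distance matrix, the standard transform d = sqrt(2(1 - ρ)). This is an illustrative toy example, not the client's production code; the correlation matrix and cluster count are invented for the demonstration:

```python
import math

def corr_to_distance(corr):
    """Map a correlation matrix to the distance d = sqrt(2 * (1 - rho))."""
    n = len(corr)
    return [[math.sqrt(2.0 * (1.0 - corr[i][j])) for j in range(n)] for i in range(n)]

def single_linkage(dist, n_clusters):
    """Naive agglomerative (single-linkage) clustering over a distance matrix."""
    clusters = [{i} for i in range(len(dist))]
    while len(clusters) > n_clusters:
        best = (float("inf"), None, None)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Single linkage: distance between the closest pair of members.
                d = min(dist[i][j] for i in clusters[a] for j in clusters[b])
                if d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] |= clusters[b]
        del clusters[b]
    return clusters

# Toy correlation matrix for four assets: two equity-like, two bond-like.
corr = [
    [1.0, 0.9, 0.1, 0.2],
    [0.9, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.8],
    [0.2, 0.1, 0.8, 1.0],
]
groups = single_linkage(corr_to_distance(corr), n_clusters=2)
print(sorted(sorted(g) for g in groups))  # the two correlated pairs: [[0, 1], [2, 3]]
```

In practice a library such as SciPy would replace the hand-rolled merge loop, but the distance transform and linkage choice are the substance of the technique.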

LLM Architecture

The solution integrated several AWS services to handle large language models (LLMs), which are pivotal for real-time inference and strategy optimization:

  • Frontend Layer: An Application Load Balancer (ALB) manages incoming traffic, distributing it across the API layer.
  • API Layer: Hosted on AWS Fargate, ensuring scalable and secure execution of API requests.
  • Real-time Inference Layer: Managed by a SageMaker Endpoint, enabling real-time predictions and model deployment.
  • LLM Support: Amazon SageMaker JumpStart and Amazon Bedrock, including the Amazon Titan model family, provide the backbone for advanced AI/ML capabilities.
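A minimal sketch of how an application in the API layer might call the real-time inference layer. The endpoint name and request schema below are assumptions for illustration (the actual schema depends on the deployed model); the AWS call itself requires boto3 and valid credentials:

```python
import json

def build_payload(prompt, max_tokens=256, temperature=0.2):
    """Serialize an inference request; the exact schema depends on the deployed model."""
    return json.dumps({
        "inputs": prompt,
        "parameters": {"max_new_tokens": max_tokens, "temperature": temperature},
    })

def invoke(prompt, endpoint_name="llm-strategy-endpoint"):  # hypothetical endpoint name
    """Call a real-time SageMaker endpoint (requires AWS credentials and boto3)."""
    import boto3
    client = boto3.client("sagemaker-runtime")
    response = client.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=build_payload(prompt),
    )
    return json.loads(response["Body"].read())
```

Keeping payload construction separate from the network call makes the request format testable without touching the endpoint.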

High-Level Design

The architecture was designed to ensure high availability, security, and scalability across multiple AWS availability zones. The key elements include:

  • Amazon VPC: A secure virtual private cloud that segregates public and private subnets across availability zones.
  • AWS Fargate: Used for deploying microservices in private subnets, ensuring secure and isolated execution environments.
  • Elastic Load Balancing: Distributes incoming traffic across multiple availability zones to ensure high availability.
  • NAT Gateways: Facilitate secure outbound internet traffic from private subnets.
  • AWS Security Services: Including AWS CloudTrail, AWS Config, and AWS Secrets Manager, to ensure compliance, security, and proper configuration management.


Neurons Lab implemented the LLM pipeline with LangChain, connecting user inputs to model outputs, and exposed it through LangServe, a Python framework for serving LangChain chains as APIs.

The main chain validates the input configuration, while sub-chains handle tone, channel augmentation, and text customization; the results are returned as suggestions.
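The chain layout described above can be sketched in plain Python. The production system composes these steps with LangChain runnables; the field names, channel limits, and customization logic here are invented purely to illustrate the main-chain/sub-chain structure:

```python
def validate_config(config):
    """Main chain entry point: reject requests missing required fields."""
    required = {"text", "tone", "channel"}
    missing = required - config.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return config

def adjust_tone(config):
    """Sub-chain: annotate the text with the requested tone."""
    return {**config, "text": f"[{config['tone']}] {config['text']}"}

def augment_for_channel(config):
    """Sub-chain: enforce per-channel constraints (illustrative limits)."""
    limits = {"email": 500, "sms": 160}
    limit = limits.get(config["channel"], 280)
    return {**config, "text": config["text"][:limit]}

def customize(config):
    """Sub-chain: produce candidate suggestions as the final output."""
    return {"suggestions": [config["text"], config["text"].upper()]}

def run_chain(config):
    """Compose the steps; LangChain's pipe operator plays this role in production."""
    for step in (validate_config, adjust_tone, augment_for_channel, customize):
        config = step(config)
    return config

result = run_chain({"text": "quarterly rebalance note", "tone": "formal", "channel": "email"})
print(result["suggestions"][0])  # [formal] quarterly rebalance note
```

The point of the structure is that each sub-chain is a pure function over the shared configuration, so steps can be reordered, traced, or swapped independently.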

To deploy the API solution, Neurons Lab used Amazon Elastic Container Service (ECS) alongside LangSmith for prompt storage, logging, and evaluation.

The LangSmith dashboard supports prompt management and an interactive playground while providing detailed log and chain tracing.

The Results

  • Enhanced Financial Performance: Achieved through advanced algorithms and a robust backtesting framework, leading to improved Sharpe ratios and reduced drawdowns.
  • Faster Strategy Development: The scalable infrastructure and advanced ML models reduced the time taken to develop and deploy new financial strategies.
  • Improved Risk Management: The sophisticated risk management pipeline provided more reliable estimates of market risks and potential scenarios.
  • Scalable and Secure Deployment: The AWS architecture ensured that the solution could scale according to demand while maintaining high levels of security and compliance.
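For reference, the two headline metrics, Sharpe ratio and maximum drawdown, can be computed as below. This is a generic sketch with an invented daily return series, assuming a zero risk-free rate; it is not derived from the client's actual results:

```python
import math
import statistics

def sharpe_ratio(returns, periods_per_year=252):
    """Annualized Sharpe ratio of periodic returns (risk-free rate assumed 0)."""
    mean = statistics.mean(returns)
    stdev = statistics.stdev(returns)
    return (mean / stdev) * math.sqrt(periods_per_year)

def max_drawdown(returns):
    """Largest peak-to-trough decline of the compounded equity curve."""
    equity, peak, worst = 1.0, 1.0, 0.0
    for r in returns:
        equity *= 1.0 + r
        peak = max(peak, equity)
        worst = max(worst, (peak - equity) / peak)
    return worst

daily = [0.01, -0.02, 0.015, 0.005, -0.01, 0.02]  # invented example series
print(round(sharpe_ratio(daily), 2))
print(round(max_drawdown(daily), 4))
```

A higher Sharpe ratio means more return per unit of volatility, while a smaller maximum drawdown means shallower worst-case losses, which is why the two are quoted together.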

Your Path to Enterprise AI Starts Here

info@neurons-lab.com +44 20 3769 4201
International House
64 Nile St, London N1 7SR, United Kingdom