Building an Agentic Trading Solution: Lessons from the Second Week
May 11, 2026
Overview
In this week's update on the development of my agentic trading solution, I'm diving into the challenges and progress made during the second week. This project aims to create an AI-driven system that trades, learns, and improves over time. Last week, I discussed the foundational elements and initial setup. This week, I focused on refining the architecture, particularly with Amazon OpenSearch Service, and enhancing the trading journal to better inform the Large Language Model (LLM).
3 Big Ideas
1. Transitioning from Serverless to Resource-Based OpenSearch
- Explanation: After discovering that Amazon OpenSearch Serverless was exceeding my budget, I moved to a provisioned, resource-based deployment. This meant understanding the service's requirements, working through its documentation, and reconfiguring my stacks to better manage deployment and indexing.
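To make the switch concrete, here is a minimal sketch of what a small, budget-conscious provisioned domain request looks like, shaped like the payload boto3's `create_domain` API expects. The domain name, instance type, and volume sizes are illustrative assumptions, not the values used in the project.

```python
# Sketch of a provisioned Amazon OpenSearch Service domain request,
# shaped like the boto3 `create_domain` payload. All names and sizes
# below are illustrative placeholders.

def provisioned_domain_config(domain_name: str) -> dict:
    """Build a small, budget-conscious provisioned-domain request."""
    return {
        "DomainName": domain_name,
        "EngineVersion": "OpenSearch_2.11",
        "ClusterConfig": {
            "InstanceType": "t3.small.search",  # smallest general-purpose tier
            "InstanceCount": 1,
            "DedicatedMasterEnabled": False,    # single node keeps costs down
            "ZoneAwarenessEnabled": False,
        },
        "EBSOptions": {
            "EBSEnabled": True,
            "VolumeType": "gp3",
            "VolumeSize": 10,  # GiB
        },
    }

config = provisioned_domain_config("trading-journal")
print(config["ClusterConfig"]["InstanceType"])
```

Deploying would then be a matter of passing this to `boto3.client("opensearch").create_domain(**config)` (credentials and access policies omitted here for brevity).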
2. Stack Organization and Orchestration
- Explanation: I broke down the services into three distinct stacks: one for OpenSearch, another for Lambda functions and DynamoDB, and a third for orchestration. This approach aimed to segregate services based on their update frequency and dependencies, improving manageability and deployment efficiency.
3. Enhanced Trading Journal for Better LLM Context
- Explanation: To improve the LLM's ability to make discretionary trading decisions, I detailed my trading journal with more context, images, and commentary. This additional information helps the LLM understand the nuances of trading patterns, such as rounded tops and bottoms, which are critical for making informed decisions.
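One way to structure such a journal so it can be fed to an LLM is sketched below. The field names and the example entry are hypothetical; the real journal's schema may differ, but the idea is the same: capture the pattern, the discretionary commentary, and references to chart images in one record that can be flattened into prompt context.

```python
# A hedged sketch of a structured trading-journal entry that can be
# rendered into LLM prompt context. Field names and the sample entry
# are illustrative, not the project's actual schema.
from dataclasses import dataclass, field

@dataclass
class JournalEntry:
    symbol: str
    pattern: str                 # e.g. "rounded bottom"
    commentary: str              # the discretionary reasoning
    image_refs: list = field(default_factory=list)  # chart screenshots

    def to_prompt_context(self) -> str:
        """Flatten the entry into a text block for the LLM's context."""
        lines = [
            f"Symbol: {self.symbol}",
            f"Pattern: {self.pattern}",
            f"Commentary: {self.commentary}",
        ]
        if self.image_refs:
            lines.append("Charts: " + ", ".join(self.image_refs))
        return "\n".join(lines)

entry = JournalEntry(
    symbol="AAPL",
    pattern="rounded bottom",
    commentary="Volume dried up at the lows; accumulation before the turn.",
    image_refs=["charts/aapl-2026-05-08.png"],
)
print(entry.to_prompt_context())
```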
Why It Matters
The progress made this week is crucial for the overall success of the agentic trading solution. Transitioning from a serverless to a resource-based OpenSearch setup ensures cost-efficiency and better control over the deployment process. Organizing the stacks more effectively allows for smoother orchestration and interaction between services. Finally, enhancing the trading journal with detailed context and imagery provides the LLM with the necessary information to make more accurate and human-like trading decisions.
How to Apply It
1. Evaluate and Transition Serverless Solutions
- Explanation: Assess the cost and efficiency of your current serverless solutions. If they are proving to be too expensive or inefficient, consider transitioning to a resource-based approach to gain better control and cost management.
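A quick back-of-the-envelope break-even check is often enough to make this call. The hourly rates below are placeholders, not current AWS pricing; substitute your region's actual rates (e.g. per-OCU-hour for serverless, per-instance-hour for provisioned).

```python
# Back-of-the-envelope monthly cost comparison between an always-on
# serverless baseline and a single provisioned instance. The rates are
# illustrative placeholders, NOT real AWS pricing.

HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate: float, units: float = 1.0) -> float:
    """Cost of running `units` billing units at `hourly_rate` all month."""
    return hourly_rate * units * HOURS_PER_MONTH

# Assumed placeholder rates:
serverless = monthly_cost(hourly_rate=0.24, units=2)   # e.g. a 2-unit minimum
provisioned = monthly_cost(hourly_rate=0.036)          # e.g. one small instance

print(f"serverless:  ${serverless:,.2f}/mo")
print(f"provisioned: ${provisioned:,.2f}/mo")
print(f"monthly saving: ${serverless - provisioned:,.2f}")
```

The key insight for an always-on, low-traffic workload is that serverless minimum capacity charges accrue around the clock, so a small provisioned instance can win by a wide margin.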
2. Organize Services into Logical Stacks
- Explanation: Group your cloud services into stacks based on their update frequency and dependencies. This will make your architecture more manageable and improve deployment efficiency.
3. Enhance Contextual Information for AI Models
- Explanation: Provide your AI models with rich, detailed context. For trading algorithms, this means including detailed journals, images, and commentary to help the model understand and make better decisions.
Key Takeaways
- Transition to Resource-Based Solutions: Moving from serverless to resource-based OpenSearch saved costs and improved control.
- Effective Stack Organization: Breaking down services into distinct stacks enhanced manageability and deployment efficiency.
- Rich Contextual Information: Detailed trading journals with images and commentary improved the LLM's decision-making capabilities.
If you found these insights valuable and are interested in following along, consider signing up for our newsletter at The Independent Quant. You'll receive exclusive content and updates directly in your inbox. Until next time, keep it green!
Start your quant journey with the TIQ Mini-Course — Free.
8 short lessons to help you trade smarter, test better, and build a system that works.