{"id":4455,"date":"2026-02-11T16:41:00","date_gmt":"2026-02-11T16:41:00","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2026\/02\/11\/how-linqalpha-assesses-investment-theses-using-devils-advocate-on-amazon-bedrock\/"},"modified":"2026-02-11T16:41:00","modified_gmt":"2026-02-11T16:41:00","slug":"how-linqalpha-assesses-investment-theses-using-devils-advocate-on-amazon-bedrock","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2026\/02\/11\/how-linqalpha-assesses-investment-theses-using-devils-advocate-on-amazon-bedrock\/","title":{"rendered":"How LinqAlpha assesses investment theses using Devil\u2019s Advocate on Amazon Bedrock"},"content":{"rendered":"<div id=\"\">\n<p><em> This is a guest post by Suyeol Yun, Jaeseon Ha, Subeen Pang and Jacob (Chanyeol) Choi at LinqAlpha, in partnership with AWS. <\/em><\/p>\n<p><a href=\"https:\/\/linqalpha.com\/\" target=\"_blank\" rel=\"noopener noreferrer\">LinqAlpha<\/a> is a Boston-based multi-agent AI system built specifically for institutional investors. Over 170 hedge funds and asset managers worldwide use LinqAlpha to streamline their investment research for public equities and other liquid securities, transforming hours of manual diligence into structured insights with multi-agent <a href=\"https:\/\/aws.amazon.com\/what-is\/large-language-model\/\" target=\"_blank\" rel=\"noopener noreferrer\">large language model<\/a> (LLM) systems. The system supports and streamlines agentic workflows across company screening, primer generation, stock price catalyst mapping, and now, pressure-testing investment ideas through a new AI agent called Devil\u2019s Advocate.<\/p>\n<p>In this post, we share how LinqAlpha uses Amazon Bedrock to build and scale Devil\u2019s Advocate.<\/p>\n<h2>The Challenge<\/h2>\n<p>Conviction drives investment decisions, but an unexamined investment thesis can introduce risk. 
Before allocating capital, investors often ask, \u201cWhat am I overlooking?\u201d Identifying blind spots usually involves time-consuming cross-referencing of expert calls, broker reports, and filings. Confirmation bias and scattered workflows make it hard to challenge one\u2019s own ideas objectively. Consider the example thesis, \u201cABCD will be a generative AI beneficiary with successful AI monetization and competitive positioning.\u201d The thesis seems sound until you probe whether open source alternatives could erode pricing power or if monetization mechanisms are fully understood across the product stack. These nuances often get missed. This is where a devil\u2019s advocate comes in, a role or mindset that deliberately challenges the thesis to uncover hidden risks and weak assumptions. For investors, this kind of structured skepticism is essential to avoiding blind spots and making higher-conviction decisions.<\/p>\n<p>Investors have traditionally engaged in devil\u2019s advocate thinking through manual processes, debating ideas in team meetings, or mapping out pros and cons through informal scenario analysis. LinqAlpha set out to structure this manual and improvised process with AI.<\/p>\n<h2>The solution<\/h2>\n<p>Devil\u2019s Advocate is an AI research agent purpose-built to help investors systematically pressure-test their investment theses using their own trusted sources at 5\u201310 times the speed of traditional review. 
To help investors test their investment theses more rigorously, the Devil\u2019s Advocate agent in LinqAlpha follows a structured four-step process from thesis definition and document ingestion to automated assumption analysis and structured counterargument generation:<\/p>\n<ol>\n<li>Define your thesis<\/li>\n<li>Upload reference documents<\/li>\n<li>AI-driven thesis analysis<\/li>\n<li>Structured critique and counterarguments<\/li>\n<\/ol>\n<p>This section outlines how the system works from end to end: how investors interact with the agent, how the AI parses and challenges assumptions using trusted evidence, and how the results are presented. In particular, we highlight how the system decomposes theses into assumptions, links each critique to source materials, and scales this process efficiently using <a href=\"https:\/\/aws.amazon.com\/bedrock\/anthropic\/\" target=\"_blank\" rel=\"noopener noreferrer\">Claude Sonnet 4.0 by Anthropic in Amazon Bedrock<\/a>. <a href=\"https:\/\/aws.amazon.com\/bedrock\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Bedrock<\/a> is a fully managed service that makes high-performing <a href=\"https:\/\/aws.amazon.com\/what-is\/foundation-models\/\" target=\"_blank\" rel=\"noopener noreferrer\">foundation models<\/a> (FMs) from leading AI companies and Amazon available for your use through a unified API.<\/p>\n<h3>Define your thesis<\/h3>\n<p>Investors articulate their thesis as a core assertion supported by underlying reasoning. For example, <code>ABCD will be a GenAI beneficiary with successful AI monetization and competitive positioning<\/code>. 
They enter this thesis in Devil\u2019s Advocate in the <strong>Investment Thesis<\/strong> field, as shown in the following screenshot.<\/p>\n<p><em><img decoding=\"async\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2026\/02\/04\/image-1.png\" alt=\"Screenshot of the Devil\u2019s Advocate Investment Thesis input field.\"><\/em><\/p>\n<h3>Upload reference documents<\/h3>\n<p>Investors upload research such as broker reports, expert calls, and public filings in the <strong>Upload Files<\/strong> field, as shown in the following screenshot. The system parses, chunks, and indexes this content into a structured evidence repository.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2026\/02\/04\/image-2.png\" alt=\"Screenshot of the Upload Files field for adding reference documents.\"><\/p>\n<h3>AI-driven thesis analysis<\/h3>\n<p>Devil\u2019s Advocate deconstructs the thesis into explicit assertions and implicit assumptions. It scans the evidence base to find content that challenges or contradicts those assumptions.<\/p>\n<h3>Structured critique and counterarguments<\/h3>\n<p>The system generates a structured critique where each assumption is restated and directly challenged. Every counterpoint is sourced and linked to specific excerpts from the uploaded materials. The following screenshot shows how the system produces a structured, evidence-linked critique. Starting from the investor\u2019s thesis, it extracts assumptions, challenges them, and anchors each counterpoint to a specific source. In this case, the claim that ABCD will benefit from generative AI is tested against two core weaknesses: a lack of a proven monetization path despite new features such as Product, and a track record of avoiding price increases due to customer sensitivity. 
Each argument is grounded in uploaded research, such as expert calls and analyst commentary, with clickable citations. Investors can trace each challenge back to its source and evaluate whether their thesis still holds under pressure.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2026\/02\/04\/image-3.png\" alt=\"The LinqAlpha interface produces a detailed answer presenting the evidence for and against the thesis.\" width=\"958\" height=\"706\"><\/p>\n<h2>Application flow<\/h2>\n<p>The Devil\u2019s Advocate agent is a <strong>multi-agent system<\/strong> that orchestrates specialized agents for document parsing, retrieval, and rebuttal generation. Unlike a fixed pipeline, these agents interact iteratively: the analysis agent decomposes assumptions, the retrieval agent queries sources, and the synthesis agent generates counterarguments before looping back for refinement. This iterative back-and-forth is what makes the system agentic rather than a static workflow. The architecture follows a four-stage flow from data ingestion to critique delivery.<\/p>\n<h3>Enter thesis<\/h3>\n<p>Users submit an investment thesis, often as an investment committee (IC) memo. The input is received by a custom application running in an <a href=\"https:\/\/aws.amazon.com\/ec2\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Elastic Compute Cloud<\/a> (Amazon EC2) instance, which routes the request to Amazon Bedrock. Claude Sonnet 4 by Anthropic in Amazon Bedrock interprets the statement and decomposes it into core assumptions. 
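<\/p>
<p>As an illustrative sketch only (the model ID, prompt wording, and function names here are our assumptions, not LinqAlpha\u2019s actual code), a decomposition call of this kind can be expressed against the Amazon Bedrock Converse API:<\/p>

```python
# Hypothetical sketch of an assumption-decomposition request sent through
# the Amazon Bedrock Converse API. Prompt text, model ID, and function
# names are illustrative assumptions, not LinqAlpha's implementation.

DECOMPOSE_PROMPT = (
    'Decompose the following investment thesis into explicit and implicit '
    'assumptions, labeled A1, A2, and so on. Thesis: {thesis}'
)

def build_decompose_request(thesis, model_id='anthropic.claude-sonnet-4-20250514-v1:0'):
    # The Converse API takes a model ID, a list of role/content messages,
    # and an inference configuration.
    return {
        'modelId': model_id,
        'messages': [{
            'role': 'user',
            'content': [{'text': DECOMPOSE_PROMPT.format(thesis=thesis)}],
        }],
        'inferenceConfig': {'maxTokens': 2048, 'temperature': 0.2},
    }

def decompose_thesis(thesis):
    # Requires AWS credentials and Amazon Bedrock model access; shown for shape only.
    import boto3
    client = boto3.client('bedrock-runtime')
    response = client.converse(**build_decompose_request(thesis))
    return response['output']['message']['content'][0]['text']
```

<p>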
Amazon EC2 runs a Python-based orchestration layer built by LinqAlpha, which coordinates API calls, manages logging, and controls agent execution.<\/p>\n<h3>Upload documents<\/h3>\n<p>Uploaded documents are handled by a <strong>preprocessing pipeline running in an EC2 instance<\/strong>, which extracts raw data and converts it into structured chunks. The EC2 instance runs LinqAlpha\u2019s parsing application written in Python and integrated with <a href=\"https:\/\/aws.amazon.com\/textract\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Textract<\/a> for document parsing. <a href=\"https:\/\/aws.amazon.com\/lambda\/\" target=\"_blank\" rel=\"noopener noreferrer\">AWS Lambda<\/a> or <a href=\"https:\/\/aws.amazon.com\/fargate\/\" target=\"_blank\" rel=\"noopener noreferrer\">AWS Fargate<\/a> could have been alternatives, but Amazon EC2 was selected because customers in regulated finance environments required <strong>persistent compute with auditable logs and strict control over networking<\/strong>. Raw files are stored in <a href=\"https:\/\/aws.amazon.com\/s3\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Simple Storage Service<\/a> (Amazon S3), structured outputs go into <a href=\"https:\/\/aws.amazon.com\/rds\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Relational Database Service<\/a> (Amazon RDS), and parsed content is indexed by <a href=\"https:\/\/aws.amazon.com\/opensearch-service\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon OpenSearch Service<\/a> for retrieval.<\/p>\n<h3>Analyze thesis<\/h3>\n<p>Claude Sonnet 4 by Anthropic in Amazon Bedrock issues <strong>targeted retrieval queries<\/strong> across Amazon OpenSearch Service and aggregates counter-evidence from Amazon RDS and Amazon S3. A structured prompt template enforces consistency in the rebuttal output. 
For example, the agent receives prompts like:<\/p>\n<div class=\"hide-language\">\n<pre><code class=\"lang-python\">You are an institutional research assistant designed to act as a Devil\u2019s Advocate. \nYour task is to challenge investment theses with structured, evidence-linked counterarguments. \nAlways use provided documents (expert calls, broker reports, 10-Ks, transcripts). \nIf no relevant evidence exists, clearly state \"no counter-evidence found\".\nThesis: {user_thesis}\nStep 1. Identify Assumptions\n- Extract all explicit assumptions (stated directly in the thesis).\n- Extract implicit assumptions (unstated but required for the thesis to hold).\n- Label each assumption with an ID (A1, A2, A3...).\nStep 2. Retrieve and Test\n- For each assumption, issue retrieval queries against uploaded sources (OpenSearch index, RDS, S3).\n- Prioritize authoritative sources in this order:\n   1. SEC filings (10-K, 10-Q, 8-K)\n   2. Expert call transcripts\n   3. Broker\/analyst reports\n- Identify passages that directly weaken, contradict, or raise uncertainty about the assumption.\nStep 3. Structured Output\nFor each assumption, output in JSON with the following fields:\n{\n  \"assumption_id\": \"A1\",\n  \"assumption\": \"&lt;concise restatement of assumption&gt;\",\n  \"counter_argument\": \"&lt;evidence-backed critique, phrased in analyst style&gt;\",\n  \"citation\": {\n       \"doc_type\": \"10-K\",\n       \"doc_id\": \"ABCD_10K_2023\",\n       \"page\": \"47\",\n       \"excerpt\": \"Management noted that monetization of Product features remains exploratory, with no committed pricing model.\"\n  },\n  \"risk_flag\": \"&lt;High | Medium | Low&gt; (relative importance of this counterpoint to the thesis)\"\n}\nStep 4. 
Output Formatting\n- Return all assumptions and critiques as a JSON array.\n- Ensure every counter_argument has at least one citation.\n- If no evidence found, set counter_argument = \"No counter-evidence found in provided sources\" and citation = null.\n- Keep tone factual and neutral (avoid speculation).\n- Avoid duplication of evidence across assumptions unless highly relevant.\nStep 5. Analyst Voice Calibration\n- Write counter_arguments in the style of an institutional equity research analyst. \n- Be concise (2\u20133 sentences per counter_argument).\n- Focus on material risks to the investment case (competitive dynamics, regulation, margin compression, technology adoption).<\/code><\/pre>\n<\/p><\/div>\n<p>The following is a sample output:<\/p>\n<div class=\"hide-language\">\n<pre><code class=\"lang-json\">[\n  {\n    \"assumption_id\": \"A1\",\n    \"assumption\": \"ABCD will successfully monetize GenAI features like Product\",\n    \"counter_argument\": \"Recent disclosures suggest Product monetization is still experimental, with management highlighting uncertainty around pricing models. 
This raises questions about near-term revenue contribution.\",\n    \"citation\": {\n      \"doc_type\": \"10-K\",\n      \"doc_id\": \"ABCD_10K_2023\",\n      \"page\": \"47\",\n      \"excerpt\": \"Management noted that monetization of Product features remains exploratory, with no committed pricing model.\"\n    },\n    \"risk_flag\": \"High\"\n  },\n  {\n    \"assumption_id\": \"A2\",\n    \"assumption\": \"Open-source competitors will not significantly erode ABCD's pricing power\",\n    \"counter_argument\": \"Expert commentary indicates increasing adoption of open-source alternatives for creative workflows, which could pressure ABCD\u2019s ability to sustain premium pricing.\",\n    \"citation\": {\n      \"doc_type\": \"Expert Call\",\n      \"doc_id\": \"EC_DesignAI_2024\",\n      \"page\": \"3\",\n      \"excerpt\": \"Clients are experimenting with Stable Diffusion-based plugins as lower-cost substitutes for ABCD Product.\"\n    },\n    \"risk_flag\": \"Medium\"\n  }\n]<\/code><\/pre>\n<\/p><\/div>\n<h3>Review output<\/h3>\n<p>The final critique is returned to the user interface, showing a list of challenged assumptions and supporting evidence. Each counterpoint is linked to original materials for traceability. This end-to-end flow enables scalable, auditable, and high-quality pressure-testing of investment ideas.<\/p>\n<p><strong><img decoding=\"async\" loading=\"lazy\" class=\"\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2026\/02\/04\/image-4.png\" alt=\"Architecture diagram of the Devil\u2019s Advocate application flow, from thesis input and document upload to analysis and critique delivery.\" width=\"1131\" height=\"781\"><\/strong><\/p>\n<h3>System components<\/h3>\n<p>The Devil\u2019s Advocate agent operates as a multi-agent system that orchestrates parsing, retrieval, and rebuttal generation across AWS services. Specialized agents work iteratively, with each stage feeding back into the next, facilitating both document fidelity and reasoning depth. 
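<\/p>
<p>To illustrate how citation-linked output of this shape can be checked downstream, here is a hedged sketch: the field names follow the prompt template\u2019s JSON schema, but the validation logic itself is our assumption, not LinqAlpha\u2019s code.<\/p>

```python
import json

# Fields required of every critique item, per the prompt template's schema.
REQUIRED_FIELDS = ('assumption_id', 'assumption', 'counter_argument', 'citation', 'risk_flag')

def validate_critique(raw_json):
    # Parse the model's JSON array and enforce the rule that every
    # counterargument carries a citation unless it explicitly states
    # that no counter-evidence was found.
    items = json.loads(raw_json)
    for item in items:
        for field in REQUIRED_FIELDS:
            if field not in item:
                raise ValueError('missing field: ' + field)
        no_evidence = item['counter_argument'].startswith('No counter-evidence')
        if item['citation'] is None and not no_evidence:
            raise ValueError(item['assumption_id'] + ': counterargument lacks a citation')
    return items
```

<p>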
Investors interact with the system in two ways, forming the foundation for downstream processing. They can enter their thesis as a natural language statement of their investment view, often in the form of an IC memo, or upload finance-specific materials such as earnings transcripts, 10-Ks, broker reports, or expert call notes.<\/p>\n<p>Uploaded materials are parsed into structured text and enriched with semantic structure before indexing:<\/p>\n<ul>\n<li><strong>Amazon Textract<\/strong> \u2013 Extracts raw text from PDFs and scanned documents<\/li>\n<li><strong>Claude Sonnet 3.7 vision-language model (VLM)<\/strong> \u2013 Enhances Amazon Textract outputs by reconstructing tables, interpreting visual content, and segmenting document structures (headers, footnotes, charts)<\/li>\n<li><strong>Amazon EC2 orchestration layer<\/strong> \u2013 Runs LinqAlpha\u2019s Python-based pipeline that coordinates Amazon Textract, Amazon Bedrock calls, and data routing<\/li>\n<\/ul>\n<p>Processed data is stored and indexed for fast retrieval and reproducibility:<\/p>\n<ul>\n<li><strong>Amazon S3<\/strong> \u2013 Stores raw source files for auditability<\/li>\n<li><strong>Amazon RDS<\/strong> \u2013 Maintains structured content outputs<\/li>\n<li><strong>Amazon OpenSearch Service<\/strong> \u2013 Indexes parsed and enriched content for targeted retrieval<\/li>\n<\/ul>\n<p>Reasoning and rebuttal generation are powered by Claude Sonnet 4 by Anthropic in Amazon Bedrock. 
It decomposes the thesis into explicit and implicit assumptions, issues targeted retrieval queries against OpenSearch Service with counterevidence aggregated from Amazon RDS and Amazon S3, and produces structured, citation-linked rebuttals that the Amazon EC2 orchestration layer returns to the user interface.<\/p>\n<p>The LinqAlpha Devil\u2019s Advocate agent uses a modular multi-agent design where different Claude models specialize in distinct roles:<\/p>\n<ul>\n<li><strong>Parsing agent<\/strong> \u2013 Combines Amazon Textract for OCR with Claude Sonnet 3.7 VLM for structural enrichment of documents. This stage ensures tables, charts, and section hierarchies are faithfully reconstructed before indexing.<\/li>\n<li><strong>Retrieval agent<\/strong> \u2013 Powered by Claude Sonnet 4, formulates retrieval queries against OpenSearch Service and aggregates counterevidence from Amazon RDS and Amazon S3 with long-context reasoning.<\/li>\n<li><strong>Synthesis agent<\/strong> \u2013 Also using Claude Sonnet 4, composes structured rebuttals, citation-linked to original sources, and formats outputs in machine-readable JSON for auditability.<\/li>\n<\/ul>\n<p>These agents run iteratively: the Parsing agent enriches documents, the Retrieval agent surfaces potential counter-evidence, and the Synthesis agent generates critiques that might trigger additional retrieval passes. 
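<\/p>
<p>The iteration just described can be sketched as a simple control loop. This is a hedged, minimal sketch: the agent callables and the stopping criterion are illustrative assumptions rather than LinqAlpha\u2019s implementation.<\/p>

```python
# Illustrative control loop for the retrieval/synthesis refinement cycle.
# The decompose, retrieve, and synthesize callables stand in for the
# specialized agents; the 'needs_more_evidence' flag is an assumed signal.

def pressure_test(thesis, decompose, retrieve, synthesize, max_passes=3):
    # Decompose the thesis, then critique each assumption, looping back
    # to retrieval whenever synthesis reports it needs more evidence.
    critiques = []
    for assumption in decompose(thesis):
        evidence = retrieve(assumption)
        critique = synthesize(assumption, evidence)
        for _ in range(max_passes - 1):
            if not critique.get('needs_more_evidence'):
                break
            evidence = evidence + retrieve(critique['followup_query'])
            critique = synthesize(assumption, evidence)
        critiques.append(critique)
    return critiques
```

<p>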
This back-and-forth orchestration, managed by a Python-based service on Amazon EC2, makes the system genuinely multi-agentic rather than a linear pipeline.<\/p>\n<h2>Implementing Claude Sonnet 3.7 and 4.0 in Amazon Bedrock<\/h2>\n<p>The LinqAlpha Devil\u2019s Advocate agent employs a <strong>hybrid approach<\/strong> on Amazon Bedrock, combining Claude Sonnet 3.7 for document parsing with vision-language support and Claude Sonnet 4.0 for reasoning and rebuttal generation. This separation provides both document fidelity and analytical rigor. Key capabilities include:<\/p>\n<ul>\n<li><strong>Enhanced parsing with Claude Sonnet 3.7 VLM<\/strong> \u2013 Sonnet 3.7\u2019s multimodal capabilities augment Amazon Textract by reconstructing tables, charts, and section hierarchies that plain OCR often distorts. This ensures that financial filings, broker reports, and scanned transcripts maintain structural integrity before entering retrieval workflows.<\/li>\n<li><strong>Advanced reasoning with Claude Sonnet 4.0<\/strong> \u2013 Sonnet 4.0 delivers stronger chain-of-thought reasoning, sharper assumption decomposition, and more reliable generation of structured counterarguments. Compared to prior versions, it aligns more closely with financial analyst workflows, producing rebuttals that are both rigorous and citation-linked.<\/li>\n<li><strong>Scalable agent deployment on AWS<\/strong> \u2013 Running on Amazon Bedrock allows LinqAlpha to scale dozens of agents in parallel across large volumes of investment materials. The orchestration layer on Amazon EC2 coordinates Amazon Bedrock calls, enabling fast iteration under real-time analyst workloads while minimizing infrastructure overhead.<\/li>\n<li><strong>Large context and output windows<\/strong> \u2013 With a 1M-token context window and support for outputs up to 64,000 tokens, Sonnet 4.0 can analyze entire 10-K filings, multi-hour expert call transcripts, and long-form IC memos without truncation. 
This enables document-level synthesis that was previously infeasible with shorter-context models.<\/li>\n<li><strong>Integration with AWS services<\/strong> \u2013 Through Amazon Bedrock, the solution integrates with Amazon S3 for raw storage, Amazon RDS for structured outputs, and OpenSearch Service for retrieval. This provides LinqAlpha with more secure deployment, full control over customer data, and the elastic scalability required by institutional finance clients.<\/li>\n<\/ul>\n<p>For hedge funds, asset managers, and research teams, the choice of Amazon Bedrock with Anthropic models is not merely about technology; it directly addresses <strong>core operational pain points<\/strong> in investment research:<\/p>\n<ul>\n<li><strong>Auditability and compliance<\/strong> \u2013 Every counterargument is linked back to its source document (10-K, broker note, transcript), creating an auditable trail that meets institutional governance standards.<\/li>\n<li><strong>Data control<\/strong> \u2013 The Amazon Bedrock integration with private S3 buckets and <a href=\"https:\/\/aws.amazon.com\/vpc\/\" target=\"_blank\" rel=\"noopener noreferrer\">Amazon Virtual Private Cloud<\/a> (Amazon VPC)-deployed EC2 instances keeps sensitive documents within the firm\u2019s secure AWS environment, a critical requirement for regulated investors.<\/li>\n<li><strong>Workflow speed<\/strong> \u2013 By scaling agentic workflows in parallel, analysts save hours during earnings season or IC prep, compressing review cycles from days to minutes without sacrificing depth.<\/li>\n<li><strong>Decision quality<\/strong> \u2013 Sonnet 3.7 preserves document fidelity, and Sonnet 4.0 adds financial reasoning strength, together helping investors uncover blind spots that would otherwise remain hidden in traditional workflows.<\/li>\n<\/ul>\n<p>This combination of AWS-based <strong>multi-agent orchestration and LLM scalability<\/strong> makes the LinqAlpha Devil\u2019s Advocate agent uniquely suited to 
institutional finance, where <strong>speed, compliance, and analytical rigor must coexist<\/strong>. With Amazon Bedrock, the solution achieved managed orchestration and built-in integration with AWS services such as Amazon S3, Amazon EC2, and OpenSearch Service, which provided fast deployment, full control over data, and elastic scale.<\/p>\n<blockquote>\n<p><em>\u201cThis helped me objectively gut-check my bullish thesis ahead of IC. Instead of wasting hours stuck in my own confirmation bias, I quickly surfaced credible pushbacks, making my pitch tighter and more balanced.\u201d <\/em><\/p>\n<p>\u2014 PM at Tiger Cub Hedge Fund<\/p>\n<\/blockquote>\n<h2>Conclusion<\/h2>\n<p>Devil\u2019s Advocate is one of over 50 intelligent agents in LinqAlpha\u2019s multi-agent research system, each designed to address a distinct step of the institutional investment workflow. Traditional processes often emphasize consensus building, but Devil\u2019s Advocate extends research into the critical stage of <strong>structured dissent<\/strong>, challenging assumptions, surfacing blind spots, and providing auditable counterarguments linked directly to source materials.<\/p>\n<p>By combining <strong>Claude Sonnet 3.7 (for document parsing with VLM support) <\/strong>and<strong> Claude Sonnet 4.0 (for reasoning and rebuttal generation)<\/strong> on Amazon Bedrock, the system facilitates both document fidelity and analytical depth. Integration with <strong>Amazon S3, Amazon EC2, Amazon RDS, and OpenSearch Service <\/strong>enables more secure and scalable deployment within investor-controlled AWS environments.<\/p>\n<p>For institutional clients, the impact is meaningful. By automating repetitive diligence tasks, the Devil\u2019s Advocate agent frees analysts to spend more time on higher-order investment debates and judgment-driven analysis. 
IC memos and stock pitches can benefit from structured, source-grounded skepticism, supporting clearer reasoning and more disciplined decision-making.<\/p>\n<p>LinqAlpha\u2019s agentic architecture shows how <strong>multi-agent LLM systems on Amazon Bedrock<\/strong> can transform investment research from fragmented, manual processes into workflows that are scalable, auditable, and decision-grade, tailored specifically for the demands of research on public equities and other liquid securities.<\/p>\n<p>To learn more about Devil\u2019s Advocate and LinqAlpha, visit <a href=\"https:\/\/linqalpha.com\/\" target=\"_blank\" rel=\"noopener noreferrer\">linqalpha.com<\/a>.<\/p>\n<hr>\n<h3>About the authors<\/h3>\n<footer>\n<div class=\"blog-author-box\">\n<div class=\"blog-author-image\">\n          <img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-4650 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2026\/02\/04\/image-5.png\" alt=\"\" width=\"100\" height=\"125\">\n         <\/div>\n<h3 class=\"lb-h4\">Suyeol Yun<\/h3>\n<p>Suyeol Yun is a Principal AI Engineer at LinqAlpha, where he designs the computing and contextualization infrastructure that powers multi-agent systems for institutional investors. He studied political science at MIT and mathematics at Seoul National University. 
His AI journey spans from computer vision for facial reenactment, through graph neural networks for the US lobbying industry and congressional stock trading, to building infrastructure for capable AI agents.<\/p>\n<\/p><\/div>\n<div class=\"blog-author-box\">\n<div class=\"blog-author-image\">\n          <img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-4651 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2026\/02\/04\/image-6.png\" alt=\"\" width=\"100\" height=\"125\">\n         <\/div>\n<h3 class=\"lb-h4\">Jaeseon Ha<\/h3>\n<p>Jaeseon Ha is a Product Developer and AI Strategist at LinqAlpha, where she codifies complex analyst workflows into LLM-based agents. Her designs automate the extraction of critical signals from both structured and unstructured data, allowing institutional investors to delegate exhaustive data synthesis to multi-agent systems. Drawing on her experience as an equity analyst at Goldman Sachs and Hana Securities, Jaeseon ensures LinqAlpha\u2019s products are built for high-conviction decision-making. She also contributes to the firm\u2019s research on multi-agent systems, specifically focusing on the automated extraction and querying of financial KPIs and guidance at scale.<\/p>\n<\/p><\/div>\n<div class=\"blog-author-box\">\n<div class=\"blog-author-image\">\n          <img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-4652 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2026\/02\/04\/image-7.jpeg\" alt=\"\" width=\"100\" height=\"125\">\n         <\/div>\n<h3 class=\"lb-h4\">Subeen Pang<\/h3>\n<p>Subeen Pang, Ph.D. is a Co-founder of LinqAlpha, where he develops AI-driven research workflows for institutional investors. He specializes in building agentic systems that help analysts structure and interpret data from earnings calls, filings, and financial reports. He earned his Ph.D. 
from MIT in Computational Science and Engineering. With a background in mathematical optimization and computational optics, Subeen applies rigorous applied math to AI design. At LinqAlpha, he led the development of a finance-specific retrieval system using query augmentation and entity normalization to ensure high-precision search results for professional analysts.<\/p>\n<\/p><\/div>\n<div class=\"blog-author-box\">\n<div class=\"blog-author-image\">\n          <img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-4653 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2026\/02\/04\/image-8.png\" alt=\"\" width=\"100\" height=\"125\">\n         <\/div>\n<h3 class=\"lb-h4\">Jacob (Chanyeol) Choi<\/h3>\n<p>Jacob (Chanyeol) Choi is the Co-founder and CEO of LinqAlpha, where he leads the development of domain-specialized, multi-agent AI systems that streamline institutional investment research and market intelligence workflows. He earned an M.S.\/Ph.D. in Electrical Engineering and Computer Science from MIT and a B.S. in Electrical and Electronic Engineering from Yonsei University. His research journey spans from AI hardware and neuromorphic computing to building reliable, finance-native agentic systems, including work on bias and responsible agent deployment in institutional settings. He was recognized on Forbes\u2019 2021 30 Under 30 (Science) list.<\/p>\n<\/p><\/div>\n<div class=\"blog-author-box\">\n<div class=\"blog-author-image\">\n          <img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-4654 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2026\/02\/04\/image-9.png\" alt=\"\" width=\"100\" height=\"125\">\n         <\/div>\n<h3 class=\"lb-h4\">Joungwon Yoon<\/h3>\n<p>Joungwon Yoon is a Senior Venture Capital Manager at AWS, based in Seoul, South Korea. 
She partners with leading investors and founders to help startups scale on AWS, connecting high-potential companies with the technology, resources, and global networks they need to grow. She focuses on generative AI startups and supports Korean founders in expanding into the US and Japan.<\/p>\n<\/p><\/div>\n<div class=\"blog-author-box\">\n<div class=\"blog-author-image\">\n          <img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-4655 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2026\/02\/04\/image-10.jpeg\" alt=\"\" width=\"100\" height=\"125\">\n         <\/div>\n<h3 class=\"lb-h4\">Sungbae Park<\/h3>\n<p>Sungbae Park is a Senior Account Manager on the AWS Startup team, helping strategic AI startups grow and succeed with AWS. He previously worked as a Partner Development Manager, establishing partnerships with various MSP, SI, and ISV companies.<\/p>\n<\/p><\/div>\n<div class=\"blog-author-box\">\n<div class=\"blog-author-image\">\n          <img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-4656 alignleft\" src=\"https:\/\/d2908q01vomqb2.cloudfront.net\/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59\/2026\/02\/04\/image-11.png\" alt=\"\" width=\"100\" height=\"125\">\n         <\/div>\n<h3 class=\"lb-h4\">YongHwan Yoo<\/h3>\n<p>YongHwan Yoo is a GenAI Solutions Architect on the AWS Startup team. He helps customers effectively adopt generative AI and machine learning technologies into their businesses by providing architecture design and optimization support, focusing on infrastructure for large-scale model training. 
He is also an active member of the AI\/ML Technical Field Community (TFC) at AWS.<\/p>\n<\/p><\/div>\n<\/footer>\n<p>       <!-- '\"` -->\n      <\/div>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/aws.amazon.com\/blogs\/machine-learning\/how-linqalpha-assesses-investment-theses-using-devils-advocate-on-amazon-bedrock\/<\/p>\n","protected":false},"author":0,"featured_media":4456,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[3],"tags":[],"_links":{"self":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/4455"}],"collection":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/comments?post=4455"}],"version-history":[{"count":0,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/posts\/4455\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media\/4456"}],"wp:attachment":[{"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/media?parent=4455"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/categories?post=4455"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/salarydistribution.com\/machine-learning\/wp-json\/wp\/v2\/tags?post=4455"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}