
Blog

Some light reading on AI. If you are looking for more technical content, check out the experiments section.

Complex AI Agents

Model Mafia

In the world of AI dev, there’s a lot of excitement around multi-agent frameworks—swarms, supervisors, crews, committees, and all the buzzwords that come with them. These systems promise to break down complex tasks into manageable pieces, delegating work to specialized agents that plan, execute, and summarize on your behalf. Picture this: you hand a task to a “supervisor” agent, it spins up a team of smaller agents to tackle subtasks, and then another agent compiles the results into a neat little package. It’s a beautiful vision, almost like a corporate hierarchy with you at the helm. And right now, these architectures and their frameworks are undeniably cool. They’re also solving real problems as benchmarks show that iterative, multi-step workflows can significantly boost performance over single-model approaches.

But these frameworks are a temporary fix, a clever workaround for the limitations of today’s AI models. As models get smarter, faster, and more capable, the need for this intricate scaffolding will fade. We’re building hammers and hunting for nails, when the truth is that the nail (the problem itself) might not even exist in a year. Let me explain why.

Where Are All the Swarms?

Complex agent architectures are brittle. Every step in the process—every agent, every handoff—introduces a potential failure point. Unlike traditional software, where errors can often be isolated and debugged, AI workflows compound mistakes exponentially. If one agent misinterprets a task or hallucinates a detail, the downstream results may not be trustworthy. The more nodes in your graph, the higher the odds of something going wrong. That’s why, despite all the hype, we rarely see swarm-based products thriving in production. They’re high-latency, fragile, and tough to maintain.

Let's use software development as an example, since it is what I am most familiar with. Today's agent workflows often look like this: a search/re-ranking agent scours your code repo for relevant files to include in the context window, a smart planning agent comes up with the approach and breaks it into tasks, one or more coding agents write the code, a testing agent writes the tests, and a PR agent submits the pull request (maybe with a PR review agent thrown in for good measure). It's a slick assembly line, but every step exists because current models can't handle the whole job alone.

  • Search and re-ranking: This is only necessary because context windows are too small and it is too expensive to ingest an entire repo. This is also the step that is most susceptible to failures, because the model that is smart enough to plan the task should also be the one deciding which files are relevant. A context window increase and a price decrease will make this step obsolete.
  • Planning and task breakdown: The main value of this step is that you can have your smartest model give direction to the smaller, less capable, but cheaper and faster models. There's no need for a formalized plan when models can perform all planning inside of their own reasoning process. The only other reason I can think of to have subtasks here would be that a model can't output enough tokens to solve the entire problem in one go. An output token limit increase and price decrease will make this step obsolete.
  • Testing and PRs: Why separate these? A model that's capable of planning is capable of writing the code to test that plan, as long as it fits inside the output token limit. This step would be replaced by simply returning the test results to the single agent so that it can make decisions based on them (see the sketch after this list). This is feasible today! But it could be pretty expensive to have an agent loop with the entire codebase as context.
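
To make that concrete, here is a minimal sketch of such a single-agent loop. The llm_write_code and apply_patch helpers are hypothetical stand-ins for a large-context model call and a file writer; this is an illustration of the pattern, not a production implementation.

import subprocess

def single_agent_loop(task: str, repo_context: str, max_iterations: int = 5):
    """One agent plans, writes code, and reacts to test results in a loop."""
    history = f"Task: {task}\n\nRepository context:\n{repo_context}"
    for _ in range(max_iterations):
        patch = llm_write_code(history)   # hypothetical large-context model call
        apply_patch(patch)                # hypothetical helper that writes files to disk

        # Feed the raw test output straight back to the same agent
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode == 0:
            return patch                  # tests pass; no separate testing or PR agent needed
        history += f"\n\nTest output:\n{result.stdout}\n{result.stderr}"
    return None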

The root issue isn’t the workflow, and in most cases, it's not even the model intelligence. Limited context windows, high-priced top-tier models, and token output caps force us to chop tasks into bite-sized pieces. But what happens when those limits start to fade? Imagine even a modest 3x-5x improvement in context window size, price, and output token limits. Suddenly, you don’t need all of your tools, frameworks, and subagents.

Tech Debt

And those constraints are eroding fast. Last year, OpenAI's Assistants API launched with built-in RAG, web search, and conversation memory. It didn't gain a ton of traction for RAG—mostly because RAG is not really a one-size-fits-all solution and devs needed control over their pipelines. Back then, RAG was an exacting science: tiny context windows, dumb and expensive models, and high hallucination risks meant you had to tune your RAG pipeline obsessively to get good results. Nowadays that stuff is much less of an issue. Chunking strategy? Throw in a whole document, and let the model sort it out. Top K? F*#% it, make it 20 since prices dropped last month. Bigger context windows, lower prices, caching, and better models have made simplicity king again. Problems I've wrestled with in my own agents sometimes vanish overnight with a model update. That's not an edge case; it's a pattern.

The Shelf Life of Agent Architectures

Complex agent architectures don't last. If you build a six-step swarm today, a single model update could make three of those steps obsolete by year's end. Then what? AI isn't like traditional software, where architectures endure for decades. Six months in AI is an eternity—updates hit fast, and they hit hard. Why sink time perfecting fickle but beautiful multi-agent masterpieces when the next AI lab release might collapse them into a single prompt? LangChain, Crew, Swarm—all these tools are racing against a convergence point where raw model power outstrips their utility.

I’m not saying agent architectures are useless now—they’re critical for squeezing the most out of today’s tech. But they’re not evergreen. Simplicity is the smarter bet. Lean on the optimism that models will improve (they will), and design systems that don’t overcommit to brittle complexity. In my experience, the best architecture is the one that solves the problem with the fewest moving parts—especially when the parts you’re replacing get smarter every day.

The "Idea Guy" Delusion: Why No One Is Safe from AI

Knowledge Workers

As AI continues to evolve, many professionals (especially software developers like myself) are coming to terms with the reality that their jobs will eventually be automated. Maybe in two years, maybe in five. But it's happening.

Yet, amidst this shift, a certain group seems oddly confident in their immunity to AI-driven disruption: the idea guys.

These are the people who believe that once AI automates programming and other forms of technical labor, the true value will shift to those who can generate great ideas. But I don’t buy it. Sure, there’s a timeline where this could be true. But in most cases, the idea guy is just as doomed as the software developer, if not more so.

AI Won't Struggle with Ideas

There's a misconception that while AI might be able to code, it won't be able to come up with good ideas. But this doesn't hold up under scrutiny. Idea generation isn't some mystical human trait; it's just a research problem.

If I wanted to generate 15 startup ideas right now, I wouldn’t meditate in a cabin and wait for inspiration. I’d scroll Reddit for 20 minutes and see what people are complaining about. AI can do that faster, better, and across a wider range of sources.

And filtering good ideas? That’s not some sacred human skill either. A good idea guy isn’t someone who magically comes up with better ideas; it’s someone who avoids bad ideas. But AI doesn’t need a filter, since it can pursue every idea in parallel. If it launches 10 projects and one succeeds, is it a genius idea guy?

AI as CEO

AI isn’t just stopping at coding. Software development isn’t just writing code! It's provisioning environments, debugging, testing, scaling, deploying, architecting, and integrating systems. AI is already creeping into these domains, and eventually, it will handle them in ways that don’t require human oversight.

At that point, what’s stopping AI from also iterating on product-market fit? If it can build a full-stack application, why wouldn’t it also build in user feedback loops, run A/B tests, and continuously optimize the product itself? If it can automate deployment, it can automate iteration. If it can iterate, it can validate its own ideas.

Eventually, users themselves will be the ones proposing ideas by leaving feedback, which the AI will then solve for. At that point, what exactly does the human “idea guy” contribute?

But What About Sales and Marketing?

There's another flawed assumption: that AI can build, but it won't be able to sell. That's just false. The same AI that can launch products can also launch A/B-tested marketing campaigns, generate optimized ad copy, and personalize sales pitches at a scale humans can't compete with. Marketers are already prompting AI to generate content, optimize ads, and personalize sales pitches. How far away are we from automating the prompting?

And it’s not just about generative AI—classic machine learning is already better than humans at optimizing recommendations, ads, and conversion rates. These models will only improve. When that happens, an AI-driven product won’t just sell itself—it will continuously optimize its sales approach better than any human could.

Who Actually Survives?

If anyone has a shot at surviving, it’s not the idea guy. Potentially, it’s the entrepreneur who becomes an intern for the AI.

Someone will still be needed to rig up AI systems, configure automations, and handle anything in the physical world—incorporating businesses, making legal decisions, or doing things that require human interaction. But beyond that? Their role will be minimal.

If we ever reach the point where AI can handle full unsupervised software development, then no job is safe. Not developers, not marketers, not CEOs. Not even scientists, doctors, or lawyers. Because an AI that can reason through the entire software lifecycle without human intervention is smart enough to disrupt every knowledge-based profession. Just as mathematicians are not safe even though LLMs are bad at arithmetic (writing code lets the models perform extremely difficult calculations), the same workaround logic will apply to every other knowledge-based profession.

Final Thoughts: No One Is Safe

I don’t feel secure in my role as a software developer. But I don’t think idea guys should feel secure, either. If we ever reach the point where AI is developing software without supervision, it will be smart enough to do much more than just code.

At that point, every knowledge worker is at risk—lawyers, scientists, doctors, and executives included. If AI is smart enough to replace programmers, it’s smart enough to replace idea guys, too. And if you’re betting on the latter being the safer role, you’re in for a rude awakening.

Do First, Optimize Later: Breaking the Cycle of Over-Optimization

I've come to a realization: I spend too much time planning and optimizing rather than actually doing. AI and automation have fueled my obsession with optimization, making me believe that if I refine a system enough, I’ll be more productive. But the truth is, optimization is only valuable when applied to something that already exists.

The problem is, I often optimize before I start. I think, “I need to make a to-do list,” but instead of actually making one and using it, I get lost in finding the best way to structure a to-do list, the best app, or the best workflow. Even right now, instead of writing down what I need to do, I’m writing a blog post about how I should be writing things down. This is the exact loop I need to escape.

Optimization feels like progress. It gives me the illusion that I’m working towards something, but in reality, I’m just postponing action. The efficiency of a to-do list doesn’t matter if I’m not using one. The best UX for adding tasks doesn’t matter if I never add tasks. The friction in a system isn’t relevant if I’m not engaging with the system at all.

The real issue isn’t inefficiency—it’s a lack of discipline. I tell myself I’m not doing things because the process isn’t optimized enough, but the truth is simpler: I just haven’t done them. My focus should be on building the habit of doing, not perfecting the process before I even begin.

The New Rule: Action Before Optimization

Going forward, I want to adopt a new mindset—do first, optimize later. If I find that something is difficult or inefficient while actively doing it, then I can optimize. But I won’t let optimization be the barrier to starting in the first place.

I’ll collect real data from actually engaging in the tasks I want to improve. If my to-do list system feels clunky after I’ve been using it consistently, then I’ll refine it. If I struggle to keep up with a workflow, then I’ll tweak it. But I won’t waste time optimizing something that isn’t even in effect yet.

Optimization should be a tool for improvement, not an excuse for inaction. The first step is always to start. Only then does optimization become valuable.

Handing Over My Wallet to AI: Which Model Gave the Best Financial Advice?

AI Financial Advisors

Ever looked at your bank account and thought "I should probably talk to a personal financial advisor" — but then remembered that good advisors charge anywhere from $150 to $300 per hour? For most of us, professional financial advice feels like a luxury we can't justify. But we shouldn't have to wait until we're rich to get good financial advice!

That's where AI might change everything. Instead of paying hundreds per hour for financial advice, what if you could get personalized insights for the cost of a ChatGPT Plus subscription? To test this possibility, I connected RocketMoney to all my accounts—checking, credit cards, investments, the works—and exported 90 days of transaction data. Then I fed this financial snapshot to three AI heavyweights: ChatGPT, Claude, and Gemini.

But this isn't just about which AI is "smarter." Each platform brings different tools and features that the model can use to analyze the data. I asked each to analyze my spending and create a comprehensive financial plan and report, just like a human advisor would.

I kept it simple. Each AI received the same prompt:

You are an expert personal finance manager and wealth advisor. I have included my last 90 days of transactions. I need you to do an analysis of my current financial situation and give me a report and wealth plan. Keep in mind this csv is a consolidation for all of my accounts and includes transfers and credit card payments provided by RocketMoney.

The results? Let's just say one AI saved me more money in potential insights than a year's worth of its subscription costs—while another couldn't even handle the basics. Here's what happened when I turned my finances over to the machines.

ChatGPT: A Mixed Bag of Analysis

Everybody knows ChatGPT already, so I'll skip the introduction. ChatGPT allows you to upload files, but their strongest model (o1) doesn't support file uploads yet, so I opted for GPT-4o as my model of choice.

The analysis began promisingly. ChatGPT used Python to quickly parse my CSV file and showed me its understanding of the data structure by listing the columns.

It then produced a "Financial Summary" with my total income, expenses, and net balance. The numbers were significantly off base—likely because it counted both credit card charges and their payments as separate expenses.

Next came a breakdown of spending by category, presented in an interactive spreadsheet. The spreadsheet even had an Expand feature for a full-screen view that let you select and reference specific cells:

Expanded view of spending categories

I do love the way ChatGPT presents the data in a spreadsheet-like format; it's easy to read and easy to interact with. The data itself wasn't particularly useful, though, since I already have RocketMoney for category breakdowns, and it still counts credit card payments and transfers. ChatGPT wrapped up with some suggestions for next steps:

I've provided you with a summary of your financial situation over the last 90 days, including total income, expenses, net balance, and a detailed breakdown of spending by category.

Here's what I'll do next for your wealth plan:

  1. Spending Analysis:

    • Dive deeper into your largest expense categories and provide insights on where you could cut back to save more.
  2. Savings and Investment Strategy:

    • Based on your net balance, I'll suggest a strategy for increasing savings or investments to help grow your wealth.
  3. Debt Management:

    • I'll assess any debt-related payments, such as credit card expenses, to determine the best approach to reduce interest payments and manage liabilities effectively.

Would you like me to focus on any specific area first, or do you have particular goals (e.g., saving for a big purchase, paying down debt, growing investments) in mind for your wealth plan?

Wanting to dig deeper, I prompted:

"Where should I start to reduce my spending? I spend too much!"

Without doing any further Python analysis, ChatGPT jumped straight into suggestions—and this is where things got interesting, and not in a good way.

First, it identified my biggest expense category as "Credit Card Payments," which shows a fundamental misunderstanding—credit card payments aren't really an expense category, they're just transfers to pay down credit balances. It was essentially double-counting expenses: once when I used the credit card, and again when I paid the bill.

Next, it focused on bills and utilities as my second-highest expense category. While technically true, the advice was comically generic—suggesting I switch to LED lightbulbs and turn off appliances when leaving rooms. Not exactly the sophisticated financial analysis I was hoping for!

Then came the food spending advice:

3. Dining & Drinks - $XXXX
  • What's happening: This is one of the most common areas where expenses accumulate because of convenience.
  • What to do:
    • Cut Back on Eating Out: Allocate a fixed budget for eating out each week. Consider cooking at home or meal prepping more often, especially given your goal of saving by meal prepping.
    • Plan Meals: Develop simple meal plans with affordable ingredients, focusing on dishes that offer variety yet are easy to prepare. This can help avoid impulse restaurant spending.
    • Limit High-Cost Visits: Target reducing high-cost visits (e.g., fine dining) and focus on affordable alternatives like picnics or home-prepared meals.

The advice I received was surprisingly generic. ChatGPT seems to deliberately limit the amount of data it processes (maybe to conserve tokens?), which leads to surface-level analysis and cookie-cutter suggestions like:

  1. Focusing on "Credit Card Payments" as an expense category (which isn't really an expense at all)
  2. Suggesting I switch to LED lightbulbs and turn off appliances to reduce my bills spending
  3. Making generic food spending suggestions without any reference to my actual dining patterns

ChatGPT didn't dig deep enough to notice that my real issue was excessive UberEats spending and fancy restaurant visits. Instead, it gave blanket advice about cooking at home more—true, but not particularly insightful given the data available.
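
For contrast, here is the kind of analysis I was hoping for, sketched with pandas. The column names (Category, Amount, Name) are assumptions about the RocketMoney export, not its actual schema.

import pandas as pd

# Load the consolidated RocketMoney export
df = pd.read_csv("transactions.csv")

# Credit card payments and inter-account transfers are not spending;
# including them double-counts every purchase made on a credit card.
expenses = df[~df["Category"].isin({"Credit Card Payment", "Transfer"})]

# Spending by category, then by merchant -- the merchant view is where
# a pile of Uber Eats charges would jump out immediately
print(expenses.groupby("Category")["Amount"].sum().sort_values())
print(expenses.groupby("Name")["Amount"].sum().sort_values().head(15))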

Gemini: A Comedy of Errors

I had high hopes for Gemini Advanced, given Google's claims about its enhanced analytical capabilities and extremely large context window. Those hopes were quickly dashed.

Gemini's apologetic response

My first attempt resulted in Gemini apologizing and refusing to help—a strange response given that financial analysis is hardly controversial. On my second try, it analyzed the columns without any error but went off on a bizarre tangent about tax-deductible expenses (which I hadn't asked about) and attempted to analyze a non-existent data column named "Tax Deductible", which obviously failed.

Gemini's confused tax analysis

This experience was very disappointing. I am still bullish on Gemini's potential, but it seems completely unusable for this kind of task in its current state.

Claude: The Financial Advisor We Need

Where the others stumbled, Claude shined. Using Claude 3.5 Sonnet in a dedicated project workspace, I got what felt like genuinely professional financial advice.

🚨 Be sure to enable the "Analysis Tool" in the beta features if you want to try this out yourself.

What set Claude apart was its comprehensive approach. It started with a thorough financial report that broke down both total income and expenses. While it did make some mistakes (like double-counting my income due to a recent job change), it showed remarkable attention to detail in its analysis.

Next, it provided a detailed spending breakdown, avoiding the common pitfall of counting transfers and credit card payments as expenses. It provided clear percentage breakdowns—for instance, noting that bills made up 28% of my expenses and food was 13%, further split between groceries and dining out.

Here's when Claude started to impress me—in the details:

Spending Patterns:

  • Essential Spending: $XXXX (58%)
  • Discretionary Spending: $XXXX (42%)
  • Heavy reliance on food delivery services (frequent Uber Eats charges)
  • Multiple subscription services totaling approximately $XXX/month

Claude understood what was essential and what was discretionary, which shows a level of understanding that goes beyond surface-level categories. It immediately called out my frequent Uber Eats charges and didn't include credit card payments or transfers in the spending breakdown.

It even called out specific "Red Flags":

RED FLAGS:

  • High Food Costs: Spending $XXXX/month on food, with about 67% on dining out/delivery
  • Numerous Subscriptions: Multiple overlapping services (Apple, Claude, OpenAI, etc.)
  • High discretionary spending ratio

It's not wrong here; I do sign up for every AI product that I hear about. Busted!

Claude then provided a detailed action plan:

RECOMMENDATIONS:

Immediate Actions:

  • Audit Subscriptions:

    • Consolidate AI services (currently using Claude, OpenAI, and others)
    • Review multiple Apple subscriptions
    • Evaluate necessity of multiple cloud services
  • Food Budget Optimization:

    • Reduce Uber Eats usage (currently spending heavily here)
    • Increase grocery shopping vs. delivery
    • Target reducing food expenses to $XXXX/month

Medium-term Financial Goals:

  • Build Emergency Fund:

    • Target: 6 months of essential expenses ($XXXX)
    • Current high income provides good opportunity for savings
  • Implement 50/30/20 Budget:

    • 50% needs ($XXXX)
    • 30% wants ($XXXX)
    • 20% savings ($XXXX)

Investment Strategy:

  • Maximize Tax-Advantaged Accounts:

    • 401(k) if available through employer
    • Roth IRA ($XXXX/year)
    • HSA if eligible
  • Consider diversifying investments:

    • Currently see some crypto investments
    • Recommend broader diversification into index funds
    • Consider real estate investment given high income

I then asked for visualizations, and Claude delivered with beautiful charts showing my spending patterns:

Claude's food spending analysis visualizations

I then challenged Claude to show me potential savings from cutting out UberEats and limiting dining out. It responded with a detailed plan and an interactive visualization that you can check out here!

I followed up with a few more requests:

  • Identifying other areas for spending cuts
  • Configuring automatic savings and investments
  • Analyzing the rent vs. buy decision

Each response was thoughtful, data-driven, and genuinely helpful! By the time I hit my messaging limits, I had a comprehensive financial plan and a clear path forward, which I saved to the Claude project so I can reference it whenever I want.

What This Means for the Future of AI Financial Advice

This experiment revealed both the potential and current limitations of AI in personal finance:

  1. Data Processing Matters: The ability to correctly interpret financial data—including understanding what to ignore—is crucial. This is where Claude excelled and others faltered.

  2. Context is King: Generic financial advice is everywhere. What's valuable is AI that can understand your specific situation and habits.

  3. Visualization + Analysis: The most helpful insights came when AI could both crunch the numbers AND present them in an intuitive way.

The Bottom Line

While all three AI models showed promise, only Claude provided the kind of comprehensive, personalized financial analysis that could genuinely help someone improve their financial health. It's a glimpse of what's possible when AI moves beyond simple number-crunching to deliver truly personalized financial guidance.

Does this mean AI is ready to replace human financial advisors? Not quite. But it suggests that AI could become an invaluable tool for both individuals managing their finances and professionals looking to provide better service to their clients.

For now, if you're considering using AI for financial analysis, my experiment suggests Claude is your best bet—just don't expect it to make the hard decisions for you. After all, choosing between that fancy dinner and your savings goal is still up to you.

Should You Even Trust Gemini’s Million-Token Context Window?

Haystack Made with GPT-4o

📖 Read On Medium

Imagine you're tasked with analyzing your company's entire database — millions of customer interactions, years of financial data, and countless product reviews — to extract meaningful insights. You turn to AI for help. You shove all of the data into Google Gemini 1.5, with its new 1-million-token context window, and start making requests, which it seems to handle. But a nagging question persists: Can you trust the AI to accurately process and understand all of this information? How confident can you be in its analysis when it's dealing with such a vast amount of data? Are you going to have to dig through a million tokens' worth of data to validate each answer?

Traditional AI tests, like the well-known "needle-in-a-haystack" tests, fall short in truly assessing an AI's ability to reason across large, cohesive bodies of information. These tests often involve hiding unrelated information (needles) in an otherwise homogeneous context (haystack). The problem is that this focuses the evaluation on information retrieval and anomaly detection rather than comprehensive understanding and synthesis. Our goal wasn't just to see if Gemini could find a needle in a haystack, but to evaluate whether it could understand the entire haystack itself.

Using a real-world dataset of App Store information, we systematically tested Gemini 1.5 Flash across increasing context lengths. We asked it to compare app prices, recall specific privacy policy details, and evaluate app ratings — tasks that required both information retrieval and reasoning capabilities. For our evaluation platform, we used LangSmith by LangChain, which proved to be an invaluable tool in this experiment.

The results were nothing short of amazing! Let's dive in.

Setting Up the Experiment

You can follow along with our full experiment in this Jupyter notebook.

Choosing Our Datasets

We need 3 datasets for our experiment:

  1. App Data Dataset: We used the Apple App Store Data Set, a real-world collection of information about 10,000 apps. This dataset provided rich, varied information for us to analyze.
  2. Golden Dataset: We arbitrarily selected 5 apps to craft our Evaluation Dataset questions and ground-truth answers from. These 5 apps need to be included in the context at every step of the experiment.
  3. Evaluation Dataset: We crafted a set of three questions and answers based on the Golden Dataset. These are the questions we will ask Gemini Flash, and we will evaluate its answers against the ground-truth answers we have written:
examples = [
    {
        "question": "Do the 'Sago Mini Superhero' and 'Disney Channel  Watch Full Episodes Movies  TV' apps require internet connection?",
        "answer": "You can play Sago Mini Superhero without wi-fi or internet. Internet is required for Disney Channel  Watch Full Episodes Movies  TV"
    },
    {
        "question": "Where can I find the privacy policy for the 'Disney Channel  Watch Full Episodes Movies  TV' app?",
        "answer": "http://disneyprivacycenter.com/"
    },
    {
        "question": "Which one costs less? The 'KQ MiniSynth' app or the 'Sago Mini Superhero' app?",
        "answer": "The 'KQ MiniSynth' app costs $5.99, the 'Sago Mini Superhero' app costs $2.99. So 'Sago Mini Superhero' is cheaper"
    }
]

Leveraging Gemini 1.5 Flash

For our AI model, we utilized Google's Gemini 1.5 Flash. This model allows up to 1 million tokens in its context window, which is roughly 700,000 words! At the time of writing this, Gemini 1.5 Flash costs ~$0.70/million input tokens, and that's without caching. That is comparable to GPT-3.5 or Claude Haiku pricing.
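
A quick note on setup: the llm handle used in the code later in this post can be created through LangChain's Google GenAI integration. Here is a minimal sketch, assuming the langchain-google-genai package and a GOOGLE_API_KEY in the environment:

from langchain_google_genai import ChatGoogleGenerativeAI

# Gemini 1.5 Flash via LangChain (assumes GOOGLE_API_KEY is set in the environment)
llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash", temperature=0)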

LangSmith: Our Evaluation Platform

For managing our experiment and evaluating results, we turned to LangSmith by LangChain. LangSmith gives us access to a few features that are perfect for this kind of experiment:

  1. When we upload our evaluation dataset to LangSmith, we can version control, split, and even generate synthetic data, all from the LangSmith console (a minimal upload sketch follows this list).
  2. All of our experimentation results and traces are tracked in LangSmith for every dataset. The dashboard allows us to visualize the performance across different context lengths.
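
As a concrete example, uploading the three question/answer pairs from the examples list above might look roughly like this (the dataset name is just an illustration):

from langsmith import Client

client = Client()  # assumes LANGCHAIN_API_KEY is set in the environment

dataset = client.create_dataset(
    "gemini-1m-context-eval",  # illustrative name
    description="QA pairs built from the 5 golden apps",
)
client.create_examples(
    inputs=[{"question": ex["question"]} for ex in examples],
    outputs=[{"answer": ex["answer"]} for ex in examples],
    dataset_id=dataset.id,
)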

Using automatic evaluation with an LLM as a judge is not required for running our experiment, but it is quite convenient. In our case that means for each question in our dataset:

  1. Gemini Flash takes a whack at answering the question
  2. We have GPT-4o score whether Flash's answer is correct based on the ground truth answer we have written in the dataset.

This functionality happens through LangSmith Custom Evaluators, which are simple python functions that return a score for the evaluation:

# Imports (paths may vary slightly depending on your LangChain version)
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage
from langsmith.schemas import Run, Example

# We define the scoring schema for the LLM to respond in
# using Pydantic
class EvaluationSchema(BaseModel):
    """An evaluation schema for assessing the correctness of an answer"""
    reasoning: str = Field(
        description="Detailed reasoning for the evaluation score")
    correct: bool = Field(
        description="Whether the user's answer is correct or not")

# Our evaluation function
def qa_eval(root_run: Run, example: Example):
    """Evaluate the correctness of an answer to a given question"""
    # The question from the dataset example
    question = example.inputs["question"]

    # Gemini's answer
    flash_answer = root_run.outputs["output"]

    # Ground truth answer from the dataset
    correct_answer = example.outputs["answer"]

    # Force GPT-4o to respond in the scoring schema
    llm = ChatOpenAI(model="gpt-4o", temperature=0.4).with_structured_output(EvaluationSchema)

    system_prompt = f"""You are a judge tasked with evaluating a user's answer to a given question.
You will be provided with the question, the correct answer, and the user's thought process and answer.

Question:
{question}

Correct Answer:
{correct_answer}

Your job is to assess the user's answer and provide:
1. Detailed reasoning for your evaluation, comparing the user's answer to the correct answer
2. A boolean judgment on whether the user's answer is correct or not

Be thorough in your reasoning and accurate in your judgment. Consider partial correctness and any nuances in the answers."""

    # Invoke the model with all of the context
    evaluation: EvaluationSchema = llm.invoke(
        [SystemMessage(content=system_prompt),
         HumanMessage(content=flash_answer)]
    )

    score = 1 if evaluation.correct else 0

    return {
        "score": score,
        "key": "correctness",
        "comment": evaluation.reasoning
    }

If you're not familiar with LangChain or Python: we are simply writing a function (qa_eval) that takes the question from the dataset, Flash's answer, and the correct answer, and puts them all into a prompt for GPT-4o. We use .with_structured_output to force the LLM to respond in a specific schema, which we then convert into the score format that LangSmith expects.

Running the Experiment

We gradually increased the context length up to the full million-token capacity in 50,000-token increments. To generate these varying context lengths, we wrote a function get_context (sketched after the list) that would:

  1. Start with our “golden dataset” of 5 apps
  2. Add additional app data until we reached the desired token count
  3. Randomize the order of apps in the context to avoid any positional bias
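
Here is a simplified sketch of get_context along those lines. The token counting uses a rough 4-characters-per-token approximation rather than the model's real tokenizer, and golden_apps / other_apps are assumed to be lists of serialized app records loaded elsewhere from the dataset.

import random

def estimate_tokens(text: str) -> int:
    # Rough approximation (~4 characters per token); the real experiment
    # relies on the model's own token counts.
    return len(text) // 4

def get_context(target_tokens: int) -> str:
    # 1. Always start with the 5 golden apps so every question stays answerable
    selected = list(golden_apps)

    # 2. Pad with additional app data until we reach the target token count
    for app in other_apps:
        if estimate_tokens("\n".join(selected)) >= target_tokens:
            break
        selected.append(app)

    # 3. Shuffle so the golden apps aren't always at the start (positional bias)
    random.shuffle(selected)
    return "\n".join(selected)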

Target Function

We use this get_context function in our "target function", i.e. the function that produces the output to be evaluated. In our case the target function:

  1. Fills up the context window with app data up to the number of tokens we are testing in that step
  2. Puts the context into a prompt for Gemini Flash
  3. Queries Gemini Flash with the question from the dataset and returns the model’s response

Here’s a simplified version of our target function:

def predict(inputs: dict):
    # Number of context tokens for the current step of the experiment
    tokens = (max_context_limit / total_steps) * current_step
    context = get_context(tokens)

    # The generated app data is placed inside the <APP STORE DATA> tags
    system_prompt = f"""You are tasked with answering user questions based on the App Store data inside <APP STORE DATA>.
    Use ONLY the information provided in the context. Be as accurate as possible.

    <APP STORE DATA>
    {context}
    </APP STORE DATA>"""

    response = llm.invoke([
        SystemMessage(content=system_prompt),
        HumanMessage(content=inputs["question"])
    ])

    return {"output": response.content}

We wrap all of this up in a custom Python class, just to make it easy to keep track of the steps and control each experiment, and then we are ready to run:

result = evaluate(
    self.predict, # Our predict function
    data=client.list_examples(dataset_name=dataset_name), # Our evaluation dataset
    evaluators=[qa_eval], # Our custom evaluator
    experiment_prefix=f"{self.model}-{tokens}" # Prefixes the experiments in Langsmith for readability
)
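
For completeness, the wrapper class is roughly shaped like the sketch below. The names are illustrative, and run_single_evaluation stands in for the evaluate(...) call shown above.

class ContextLengthExperiment:
    """Steps the context length from 50k tokens up to the full 1M-token window."""

    def __init__(self, model: str, dataset_name: str,
                 max_context_limit: int = 1_000_000, total_steps: int = 20):
        self.model = model
        self.dataset_name = dataset_name
        self.max_context_limit = max_context_limit
        self.total_steps = total_steps
        self.current_step = 0

    def run(self):
        for step in range(1, self.total_steps + 1):
            self.current_step = step
            tokens = int((self.max_context_limit / self.total_steps) * step)
            # One LangSmith experiment per context length (50k, 100k, ..., 1M)
            self.run_single_evaluation(tokens)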

Results

The results of our experiment with Gemini 1.5 Flash were nothing short of amazing! Across all context lengths, from 50,000 tokens all the way up to the full million-token capacity, Gemini 1.5 Flash achieved 100% accuracy in answering our test questions!

Experiment Results in LangSmith

You can view the full test results on LangSmith here.

Let’s break down what this means:

  1. 🔬 Perfect Accuracy: Regardless of whether we gave Gemini 1.5 Flash 50,000 tokens of context or 1,000,000 tokens, it consistently provided correct answers to our questions. This level of consistency is impressive and suggests that the model can effectively process and utilize information from extremely large contexts.
  2. 🧠 Information Synthesis: Our questions required more than just information retrieval. They involved comparing data points from different parts of the context. Gemini 1.5 Flash’s perfect score indicates a strong ability to understand information across a huge context, not just locate specific strings.

To put this in perspective, at the maximum context length, Gemini 1.5 Flash was accurately answering questions while processing roughly 700,000 words (several full-length books) in a single query. This is a significant leap beyond traditional document analysis capabilities.

However, it’s important to note some limitations of our experiment:

  1. Question Complexity: Our questions, while requiring synthesis, were relatively straightforward. We deliberately avoided questions requiring complex numerical reasoning or identifying trends across the entire dataset.
  2. Limited Question Set: We used a small set of questions for each evaluation. A larger, more diverse set of questions could provide even more robust insights into the model’s capabilities.

Despite these limitations, the results are extremely promising. They suggest that Gemini 1.5 Flash can maintain high accuracy and information synthesis capabilities with very low costs, even when dealing with massive amounts of context. This opens up exciting possibilities for applications in various fields, from business analytics to scientific research.