Building Tactile AI: Optimistic UI and the Vercel AI SDK
The feeling of "someone is typing"—streaming tokens, optimistic UI updates, clear loading states—makes the assistant feel responsive. We use the Vercel AI SDK (streamText, useChat) for consistent s...
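The state updates behind that "someone is typing" feel can be sketched without the SDK. This is not the Vercel AI SDK's implementation, just an illustrative reducer (all names here are hypothetical): the user's message appears instantly, and streamed tokens are folded into an assistant placeholder as they arrive.

```typescript
// Illustrative sketch, not the Vercel AI SDK itself.
type Message = { role: "user" | "assistant"; content: string };

// Optimistic update: show the user's message immediately, plus an empty
// assistant placeholder that the stream will fill in.
function sendOptimistic(messages: Message[], text: string): Message[] {
  return [
    ...messages,
    { role: "user", content: text },
    { role: "assistant", content: "" },
  ];
}

// Fold each streamed token into the last (assistant) message, immutably,
// so a React state setter can consume the result directly.
function appendToken(messages: Message[], token: string): Message[] {
  const last = messages[messages.length - 1];
  return [...messages.slice(0, -1), { ...last, content: last.content + token }];
}
```

In a real app, useChat manages this state for you; the sketch only shows why the UI never waits for the full response before rendering.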
Transient File I/O: Parsing massive CSVs in the browser without server storage
The user can drop a CSV or text file into the chat to ask "analyze this" or "what do you see?" We parse the file in the browser (e.g., with PapaParse), extract text or a ta...
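The transient-import idea can be sketched in a few lines. This toy parser is an assumption-laden stand-in (a real implementation would use PapaParse, which handles quoting, escapes, and streaming); the point is that the file's contents stay in browser memory and never hit a server.

```typescript
// Toy CSV parser for illustration only: splits unquoted fields.
// In the browser you'd get `text` from `await file.text()` on a dropped File.
function parseCsv(text: string): { headers: string[]; rows: string[][] } {
  const lines = text.trim().split(/\r?\n/);
  const [headerLine, ...rest] = lines;
  return {
    headers: headerLine.split(",").map((h) => h.trim()),
    rows: rest.map((line) => line.split(",").map((c) => c.trim())),
  };
}
```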
The Interface of Uncertainty: Designing Human-in-the-Loop
When the heuristic or LLM isn't confident, we don't guess. We show the mapping UI and let the user confirm or correct. That's REQUIRES_MAPPING—the interface of uncertainty.
Production touchp...
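The confidence gate behind that state can be sketched as follows. The names and the 0.9 threshold are assumptions (the threshold echoes the one mentioned later for the heuristic-first pipeline); the real app's identifiers may differ.

```typescript
// Hedged sketch of the human-in-the-loop gate: below the confidence
// threshold we stop guessing and surface the mapping UI instead.
type MappingResult =
  | { state: "AUTO_MAPPED"; mapping: Record<string, string> }
  | { state: "REQUIRES_MAPPING"; suggestion: Record<string, string> };

function gateMapping(
  mapping: Record<string, string>,
  confidence: number,
  threshold = 0.9
): MappingResult {
  return confidence >= threshold
    ? { state: "AUTO_MAPPED", mapping }
    // Low confidence: return the best guess as a *suggestion* the user
    // confirms or corrects in the mapping UI.
    : { state: "REQUIRES_MAPPING", suggestion: mapping };
}
```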
Prompt Guardrails: Forcing an LLM to only talk about finance
The system prompt defines who the assistant is and what it can do. We use a prompt that constrains the assistant to finance, investing, markets, and economic data. It states: "You are the...
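The exact prompt is not shown in full above, so the wording below is a hypothetical stand-in. It only illustrates the pattern: the system prompt pins identity, allowed topics, and a refusal rule, and is sent with every request so user messages alone cannot steer the model off-topic.

```typescript
// Hypothetical guardrail prompt (illustrative wording, not the app's
// actual prompt): identity + topic scope + explicit refusal behavior.
const SYSTEM_PROMPT = [
  "You are a finance assistant for a portfolio tracking app.",
  "You only discuss finance, investing, markets, and economic data.",
  "If asked about anything else, briefly decline and steer back to finance.",
].join("\n");
```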
Google Drive as Dumb Storage
Google Drive isn't your backend. We use it as file storage for a single export file. The app creates/updates that file and can read it back. Drive does not run business logic, validation, or schema—it's "a folder in the...
AI Grounding: Connecting local data to live stock prices using Gemini 1.5
Users ask "What's the current price of AAPL?" or "Any news on TSLA?" The model must not guess; it must use live, authoritative data. Gemini's native grounding, e.g. Google Sea...
Data Normalization: Solving the Date/Locale Nightmare
03/04/2024 is March 4 in the US and April 3 in the UK. Get the locale wrong and you silently corrupt trade dates. We make locale explicit and use deterministic, locale-aware parsers for every va...
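The fix is to make the locale an explicit, required parameter rather than something inferred from the environment. A minimal sketch of a deterministic, locale-aware parser (supporting just the two locales from the example):

```typescript
// Deterministic locale-aware date parsing: the locale is passed in
// explicitly, never inferred, so "03/04/2024" resolves unambiguously.
function parseTradeDate(value: string, locale: "en-US" | "en-GB"): string {
  const m = /^(\d{1,2})\/(\d{1,2})\/(\d{4})$/.exec(value.trim());
  if (!m) throw new Error(`Unparseable date: ${value}`);
  const [a, b, year] = [Number(m[1]), Number(m[2]), m[3]];
  // en-US is month/day; en-GB is day/month.
  const [month, day] = locale === "en-US" ? [a, b] : [b, a];
  const pad = (n: number) => String(n).padStart(2, "0");
  return `${year}-${pad(month)}-${pad(day)}`; // emit ISO 8601, locale-free
}
```

Once parsed, dates are stored in ISO 8601 so the ambiguity can never reappear downstream.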
The Context Engine: Squashing 10,000 trades into 4,000 tokens
The context engine is the code that maps the user's portfolio state to a string the LLM can use. In our implementation it is a single function: buildPortfolioContext(trades, positions) in ...
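The squashing idea can be sketched as below. Field names are assumptions, and the signature is simplified to a single trades argument; the core move is aggregating the raw trade log into per-ticker positions and serializing a compact summary instead of 10,000 rows.

```typescript
// Sketch of context squashing (simplified signature; field names assumed):
// aggregate trades into positions, then emit a short summary string.
type Trade = { ticker: string; quantity: number; price: number };

function buildPortfolioContext(trades: Trade[]): string {
  const positions = new Map<string, { qty: number; cost: number }>();
  for (const t of trades) {
    const p = positions.get(t.ticker) ?? { qty: 0, cost: 0 };
    p.qty += t.quantity;
    p.cost += t.quantity * t.price;
    positions.set(t.ticker, p);
  }
  // One line per position instead of one line per trade: the token cost
  // now scales with distinct tickers, not with trade count.
  const lines = [...positions.entries()].map(
    ([ticker, p]) => `${ticker}: ${p.qty} @ avg ${(p.cost / p.qty).toFixed(2)}`
  );
  return `Portfolio (${positions.size} positions):\n${lines.join("\n")}`;
}
```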
The 3-Row Snapshot: Privacy-Preserving Inference
Sending the full CSV to an API would be a privacy and cost disaster. We send only headers and three sample rows. That's enough for the model to infer which column is date, ticker, quantity, price—wit...
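The snapshot itself is almost trivially small, which is the point. A hedged sketch (helper name is hypothetical; a production version might additionally mask obviously sensitive cells):

```typescript
// Privacy-preserving snapshot: only headers plus the first few data rows
// ever leave the browser; the rest of the ledger stays local.
function buildSnapshot(headers: string[], rows: string[][], sampleSize = 3) {
  return { headers, sample: rows.slice(0, sampleSize) };
}
```

Three rows of realistic values are enough for schema inference ("which column is the date?") while keeping the actual ledger out of the request entirely.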
Architecting a Local-First Hybrid RAG for Finance
Server: Next.js App Router, Vercel AI SDK streamText, useChat, Gemini 1.5 Flash by default with optional Pro for paid tiers. The API route /api/ai/chat is the gatekeeper: it receives sanitized context,...
The Bifurcated Pipeline: Heuristics + LLMs
You don't need an LLM for every import. We run the heuristic first. Only when confidence is below 0.9 and the feature is enabled do we send headers and three sample rows to an API. The rest—synonym-based m...
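The heuristic half of that pipeline can be sketched as a synonym table plus a coverage score. The table below is abbreviated and illustrative (the real synonym list is presumably much longer); only when the score falls below 0.9 would the headers-plus-three-rows snapshot go to the LLM.

```typescript
// Heuristic-first column mapping: match headers against known synonyms,
// score coverage, and only flag the LLM fallback on low confidence.
const SYNONYMS: Record<string, string[]> = {
  date: ["date", "trade date", "deal date", "execution date"],
  ticker: ["ticker", "symbol", "epic"],
  quantity: ["quantity", "qty", "shares"],
  price: ["price", "unit price"],
};

function heuristicMap(headers: string[]) {
  const mapping: Record<string, string> = {};
  for (const [field, names] of Object.entries(SYNONYMS)) {
    const hit = headers.find((h) => names.includes(h.trim().toLowerCase()));
    if (hit) mapping[field] = hit;
  }
  // Confidence = fraction of required fields the synonyms resolved.
  const confidence =
    Object.keys(mapping).length / Object.keys(SYNONYMS).length;
  return { mapping, confidence, needsLlm: confidence < 0.9 };
}
```

In this sketch a clean export from a known broker never touches the network at all; only genuinely unfamiliar headers trigger the snapshot-to-LLM path.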
The Privacy Gap: Why sending financial ledgers to OpenAI is broken
Financial data is the most sensitive data users own. Transaction histories, account balances, and position-level detail are the crown jewels of personal finance. Sending raw ledgers...
Building Local-First: The Browser as the Server
Your users' trade history shouldn't touch your server. We run the entire import pipeline in the browser. File read, CSV parse, heuristic mapping, and—when heuristics aren't confident enough—only heade...
Why We Bet on CSV over APIs
Integrating Plaid—or any broker API—is a nightmare. OAuth changes, rate limits, schema updates, deprecations, or the provider shutting the integration down. Every integration is a long-term liability. A small team cannot...
The Fragmentation Problem: Why Financial Data is Broken
Every broker has a different CSV format. "Deal Date" vs "Trade Date" vs "Execution Date." "Epic" vs "Symbol" vs "Ticker." Supporting one broker means writing a parser; supporting ten means mai...
Building a Sovereign Portfolio Risk Calculator: Why We Ditched the Backend
Client-side financial modeling. React risk calculator. TypeScript finance tooling.
This post is about why we deliberately killed the backend for portfolio risk analysis — a...
We stopped reading the news. We built an AI to read it for us. Meet Pulitzer v2
Last week, we open-sourced our editorial team:
Read the original post: “Meet Pulitzer” (LINKTOCODERLEGIONPOST1)
Today, I want to talk about why we handed the keys to ...
We open-sourced our editorial team. Meet Pulitzer.
Fellow builders,
We all hate SEO spam. You know the type:
“How to install Node.js” articles written by bots that have never opened a terminal.
We built a system to kill the noise.
We call it...