Posts

Building a Financial Agent with OpenClaw

In the previous article, we built a FinChat-style financial research agent using LangGraph and LangSmith. That system established a clean baseline: structured data, explicit workflows, deterministic operations, and observable traces. It deliberately avoided embeddings and retrieval in order to keep reasoning and execution transparent. That baseline is useful - but it is also fragile. This article examines where the initial design breaks down as the system grows, and how introducing OpenClaw changes execution from "some code ran" into a formal, auditable system. The goal is not to add new capabilities, but to make correctness and failure explicit properties of the system.

Recap

The first version of the agent had several strong properties:

- Natural language queries were mapped to a structured QueryPlan
- All financial logic was deterministic and testable
- Agent behavior was modeled as an explicit graph
- Traces and lightweight evaluation were available via La...
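To make the "natural language to structured QueryPlan" property concrete, here is a minimal Python sketch. The field names and the hard-coded mapping are illustrative assumptions, not the post's actual schema; a real system would have an LLM emit this structure under a constrained output format.

```python
from dataclasses import dataclass

# Hypothetical shape of the structured plan; illustrative only.
@dataclass
class QueryPlan:
    tickers: list[str]
    metrics: list[str]
    period: str = "latest_quarter"

def plan_from_query(query: str) -> QueryPlan:
    """Toy deterministic mapper: recognizes one query shape so the
    downstream pipeline stays fully testable."""
    q = query.lower()
    tickers = [t for t in ("NVDA", "AMD") if t.lower() in q]
    metrics = ["gross_margin"] if "margin" in q else []
    return QueryPlan(tickers=tickers, metrics=metrics)

plan = plan_from_query(
    "Compare NVDA and AMD margins using their latest quarterly reports."
)
print(plan.tickers, plan.metrics)  # ['NVDA', 'AMD'] ['gross_margin']
```

Because the plan is a plain dataclass, it can be asserted on in unit tests without any model in the loop.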

Building a Financial Agent with LangGraph and LangSmith

Financial LLMs Overview

Financial LLM applications fail for predictable reasons: ungrounded numbers, opaque reasoning, and workflows that are hard to debug or evaluate. In this article we'll try to overcome some of these issues and build a tool-first, graph-based agent for financial research - similar in spirit to FinChat or Koyfin - but one that is actually useful ;) The goal is not to predict prices or trade, but to answer structured financial questions such as: "Compare NVDA and AMD margins using their latest quarterly reports." This problem is representative of real production constraints: structured data, deterministic calculations, clear provenance, and observable failures.

Scope and assumptions

We assume that quarterly financial statements for different companies are already available locally on disk in some structured form. The process that produces these files is treated as an upstream data engineering concern and is explicitly out of scope. This will be somewh...
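The "deterministic calculations" constraint can be sketched in a few lines of Python. The statement fields and the figures below are assumptions for shape only, not real filings data:

```python
# Illustrative deterministic metric calculation; the statement schema
# and the numbers are hypothetical.
def gross_margin(revenue: float, cost_of_revenue: float) -> float:
    if revenue <= 0:
        raise ValueError("revenue must be positive")
    return (revenue - cost_of_revenue) / revenue

# Hypothetical latest-quarter figures, for shape only.
statements = {
    "NVDA": {"revenue": 100.0, "cost_of_revenue": 25.0},
    "AMD":  {"revenue": 80.0,  "cost_of_revenue": 40.0},
}

comparison = {
    ticker: round(gross_margin(s["revenue"], s["cost_of_revenue"]), 3)
    for ticker, s in statements.items()
}
print(comparison)  # {'NVDA': 0.75, 'AMD': 0.5}
```

Every number the agent reports can be traced back to a pure function over on-disk inputs, which is exactly what makes failures observable.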

Memory-Safe Until It Isn’t: The Rust Kernel Bug That Broke Linux

The disclosure of CVE-2025-68260, the first publicly assigned CVE affecting Rust code in the Linux kernel, triggered a disproportionate level of attention compared to its immediate technical impact. Headlines framed it as a symbolic failure: "Rust breaks," "memory safety promises collapse," or "Linux's Rust experiment backfires." These interpretations obscure what actually happened and, more importantly, what the event teaches about systems programming, concurrency, and language guarantees. This article examines three tightly related topics:

- What CVE-2025-68260 actually was, technically
- The goals and constraints of the Rust-for-Linux initiative
- Why race conditions remain a hard problem even in Rust, especially in kernel code

The goal is not to defend Rust, nor to criticize Linux developers, but to clarify where responsibility lies: in invariants, concurrency design, and the unavoidable complexity of kernel-level programming.

Background: The Rust-for-Linux Initiative

The Linux ker...
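The distinction the post draws - memory safety versus race freedom - can be illustrated outside the kernel. Rust's compiler rules out data races, but a logical race (a check-then-act pattern split across two steps) is a design property, not a language one. The Python sketch below shows the same shape of bug and its fix; it is a stand-in illustration, not the kernel code involved in the CVE:

```python
import threading

class Counter:
    """Toy shared state; the racy/safe pair mirrors the difference
    between per-step atomicity and a correct critical section."""

    def __init__(self):
        self.value = 0
        self.lock = threading.Lock()

    def increment_racy(self):
        # Read and write are two separate steps; another thread can
        # interleave between them and updates get lost. Memory-safe,
        # yet still wrong.
        v = self.value
        self.value = v + 1

    def increment_safe(self):
        # Holding the lock across the whole read-modify-write restores
        # the invariant.
        with self.lock:
            self.value += 1

def run(method, n_threads=4, n_iters=10_000):
    c = Counter()
    threads = [
        threading.Thread(target=lambda: [method(c) for _ in range(n_iters)])
        for _ in range(n_threads)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return c.value

print(run(Counter.increment_safe))  # always 40000
```

The racy variant may or may not lose updates on a given run, which is precisely why such bugs survive review: correctness depends on an invariant no type system checks for you.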

Real Chaos, Real Security: A Physical Approach to Blockchain Randomness

Why Strong Randomness Matters

Every secure cryptographic system relies on a single principle: some values must be impossible for an attacker to predict. Randomness matters because it prevents attackers from predicting secrets. Many cryptographic operations depend on values that must remain entirely unpredictable; even a slight bias shrinks the search space and makes attacks easier. A predictable random number generator functions like an unlocked door. Randomness also protects protocols from replay and forgery: exchanging unpredictable nonces proves freshness. If the "random" numbers behind keys, nonces, or challenges are even slightly predictable, attackers gain a dangerous advantage: they can impersonate devices (see the PS3 hack), forge sessions, or inject replayed messages into secure channels.

The challenge is that computers are inherently deterministic. Given the same input, they always produce the same output. That property is perfect for reproducible computation but ter...
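The nonce-freshness argument can be sketched as a minimal challenge-response exchange. The key and the protocol details below are illustrative, not taken from any specific standard; the point is that an unpredictable nonce plus replay tracking defeats message reuse:

```python
import hashlib
import hmac
import secrets

# Demo-only shared secret; a real system would provision keys properly.
KEY = b"shared-secret-for-demo-only"

def respond(nonce: bytes) -> bytes:
    """The prover's answer: a MAC over the verifier's challenge."""
    return hmac.new(KEY, nonce, hashlib.sha256).digest()

class Verifier:
    def __init__(self):
        self.used = set()

    def challenge(self) -> bytes:
        # Unpredictable nonce: an attacker cannot precompute responses.
        return secrets.token_bytes(16)

    def verify(self, nonce: bytes, response: bytes) -> bool:
        if nonce in self.used:
            return False  # replayed nonce: reject even a valid MAC
        self.used.add(nonce)
        return hmac.compare_digest(response, respond(nonce))

v = Verifier()
n = v.challenge()
r = respond(n)
first = v.verify(n, r)     # fresh nonce, correct MAC
replayed = v.verify(n, r)  # same message again
print(first, replayed)     # True False
```

If the nonce source were predictable, an attacker could compute responses in advance or bias the verifier toward nonces it has already seen - the "unlocked door" from above.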

Bitcoin Layer 2 Wars: Lightning, Liquid, and Stacks

Introduction

Bitcoin was designed as a secure, decentralized ledger for peer-to-peer value transfer. Its conservative approach to scalability and limited scripting language make it exceptionally secure, but also restrict throughput and programmability. To overcome these limits without changing Bitcoin's base protocol, developers have built a growing ecosystem of Layer 2 (L2) solutions - protocols that extend Bitcoin's functionality while inheriting its security. This first part of the Bitcoin L2 series provides a technical and economic overview of Lightning, Liquid, and Stacks - three of the most established Bitcoin extensions. We will cover their underlying technologies, use cases, security models, protocol dependencies, tokenomics, and associated risks.

1. The Lightning Network

Technology and architecture

The Lightning Network is an off-chain payment network built on Bitcoin's existing scripting capabilities. It uses hashed timelock contracts (HTLCs) to establish payme...
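The HTLC mechanic mentioned above reduces to two rules: the recipient can claim with the hash preimage before a deadline, and the sender can reclaim after it. On Bitcoin these rules are enforced by script (hash checks plus timelocks); the Python sketch below expresses the same state machine for illustration only, with a simplified timestamp in place of block height:

```python
import hashlib

class HTLC:
    """Toy hashed-timelock contract: claim-with-preimage before expiry,
    refund-to-sender after. Not real Bitcoin script, just its logic."""

    def __init__(self, payment_hash: bytes, expiry: float):
        self.payment_hash = payment_hash
        self.expiry = expiry
        self.settled = None  # None | "claimed" | "refunded"

    def claim(self, preimage: bytes, now: float) -> bool:
        if (self.settled is None and now < self.expiry
                and hashlib.sha256(preimage).digest() == self.payment_hash):
            self.settled = "claimed"
            return True
        return False

    def refund(self, now: float) -> bool:
        if self.settled is None and now >= self.expiry:
            self.settled = "refunded"
            return True
        return False

secret = b"preimage"
h = HTLC(hashlib.sha256(secret).digest(), expiry=100.0)
bad = h.claim(b"wrong", now=10.0)    # wrong preimage
ok = h.claim(secret, now=10.0)       # correct preimage, before expiry
late = h.refund(now=200.0)           # already settled, refund fails

h2 = HTLC(hashlib.sha256(secret).digest(), expiry=100.0)
refunded = h2.refund(now=200.0)      # expired and unclaimed
print(bad, ok, late, refunded)       # False True False True
```

Chaining such contracts across payment channels is what lets Lightning route a payment through intermediaries without any of them being able to steal it.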