
Renesis Insights

Renesis Team
The Fastest-Growing Source of Unattributed P&L
Somewhere in your competitive set, a fund is running an AI agent that opens and closes positions autonomously. It's not theoretical. Autonomous trading agents on Hyperliquid, Morpho, and Aave are live. Several hedge funds have deployed agent-managed sleeves. The conversation has moved from "will this happen" to "how do we handle it operationally."
The operational question almost nobody is asking: when an AI agent executes a trade, who signs off on the NAV entry?
In a traditional fund, the answer is a human: a trader initiates the position, operations records it, and the administrator verifies the entry against exchange statements. There's a chain of custody. Every position has a human who made the decision, a timestamp on the order, and a counterparty confirmation.
AI agents break this chain in ways that are subtle until they aren't.
What Actually Changes When Agents Trade
Transaction velocity
A human trader might execute 10–50 trades per day. An autonomous agent running a market-making or yield rotation strategy might execute hundreds of transactions in the same period — opening and closing vault positions, rotating between lending protocols, adjusting hedge ratios, claiming and redeploying rewards.
Each of these is a NAV event. Each one changes the fund's position composition. For a fund doing daily NAV calculation, this is manageable. For a fund doing weekly NAV, an agent that executed 300 transactions since the last snapshot has created 300 unreviewed accounting entries that need to be reconciled before the next NAV is produced.
Attribution gaps
In a human-managed fund, every trade has an investment rationale. The trader bought ETH perps because they had a directional view. The fund rebalanced into Morpho USDC because the yield spread justified the move. This rationale is implicit, but it exists — and it's the basis for LP reporting that explains performance in terms of strategy decisions.
AI agents don't naturally produce this attribution. An agent that rotated capital through seven different Morpho vault curators in 48 hours generated a net return of 0.3% — but the LP report line says "DeFi yield" with no further decomposition. The performance is real. The explanation is missing.
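The missing decomposition can be recovered from the agent's own transaction history. A minimal sketch, with invented vault names and P&L figures chosen so the aggregate matches the 0.3% headline above:

```python
from collections import defaultdict

# Hypothetical per-transaction P&L records reconstructed from onchain
# activity. Vault identifiers and amounts are invented for illustration.
txns = [
    {"vault": "curator-a-usdc", "pnl_usd": 1150.0},
    {"vault": "curator-b-usdc", "pnl_usd": -240.0},
    {"vault": "curator-c-usdc", "pnl_usd": 2090.0},
]
fund_capital = 1_000_000.0  # assumed sleeve size

# Aggregate P&L per vault, then express each as a contribution to return.
by_vault = defaultdict(float)
for t in txns:
    by_vault[t["vault"]] += t["pnl_usd"]

attribution = {v: pnl / fund_capital for v, pnl in by_vault.items()}
total_return = sum(attribution.values())  # the 0.3% headline number
```

Instead of a single "DeFi yield" line, the LP report can show which curators generated the return and which detracted from it.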
Audit trail fragmentation
Traditional audit trails are designed around human-initiated actions: trade tickets, order confirmations, custodian statements. An AI agent operating onchain doesn't produce trade tickets. Its actions are recorded in transaction hashes across multiple protocols and chains, each with its own data format and settlement mechanics.
For an auditor trying to verify that the fund's reported NAV is consistent with its actual onchain activity, this is a new kind of problem: not that the data doesn't exist, but that it exists in a format no traditional audit process was built to consume.
Three Specific Scenarios That Create Accounting Problems
Scenario 1: The agent claims yield before the NAV snapshot
An agent running a yield optimization strategy is programmed to claim rewards whenever they exceed a gas cost threshold. It claims $4,200 in Morpho rewards at 11:47 PM, two hours before the midnight NAV snapshot. The MORPHO tokens hit the fund wallet at 11:51 PM.
In a manual process, this claim would be recorded when the operations team sees the wallet inflow during next-day reconciliation. It gets booked on the following NAV date. But the actual economic event happened before the snapshot — the fund owned those rewards before midnight, and they affected the fund's true NAV as of midnight.
If the agent is running autonomously and the operations team isn't monitoring in real time, this timing gap creates a NAV misstatement. Not large on any given day, but systematic — and auditors will find it.
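The fix is to book claims by their economic timestamp rather than the date operations first saw the inflow. A minimal sketch, assuming a midnight-UTC NAV cutoff and an invented event shape:

```python
from datetime import datetime, timezone

def nav_date_for_event(event_ts: datetime) -> str:
    """A claim settled before the snapshot belongs to that day's NAV."""
    return event_ts.astimezone(timezone.utc).date().isoformat()

# Hypothetical reward claim mirroring the scenario above: $4,200 of
# MORPHO settling at 11:51 PM, nine minutes before the snapshot.
claims = [
    {"token": "MORPHO", "usd_value": 4200.0,
     "settled_at": datetime(2025, 3, 14, 23, 51, tzinfo=timezone.utc)},
]

for c in claims:
    c["nav_date"] = nav_date_for_event(c["settled_at"])

# The claim is attributed to March 14's NAV, even though manual
# reconciliation would not see the inflow until the next morning.
print(claims[0]["nav_date"])  # 2025-03-14
```

The same rule applied consistently removes the systematic bias: every economic event lands on the NAV date when it actually occurred.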
Scenario 2: The agent opens a leveraged position that spans the NAV calculation period
An agent opens a leveraged carry position at 10 PM: borrow USDC on Morpho, deposit into a Pendle PT. The borrowing is booked as a liability. The PT deposit is booked as an asset. Between 10 PM and the midnight NAV snapshot, the PT price moves 0.4%. The borrowing rate accrues for two hours.
If the NAV process snapshots wallet balances at midnight, it captures the PT position at midnight price. But the borrowing liability at midnight reflects two hours of interest accrual. The two-sided decomposition — gross assets and gross liabilities — needs to be captured atomically, not sequentially. An agent that opens leveraged positions during the NAV calculation window creates a consistency problem that manual processes almost never encounter, because humans generally don't initiate complex positions two hours before close.
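The numbers above can be worked through directly. A sketch with assumed rates and prices, valuing both legs as of the same midnight instant:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

def accrued_borrow(principal: float, apr: float, seconds: float) -> float:
    """Simple linear interest accrual over the elapsed period."""
    return principal * apr * seconds / SECONDS_PER_YEAR

# Hypothetical figures for the 10 PM carry position; the borrow rate and
# PT prices are assumptions, not real market data.
principal = 1_000_000.0      # USDC borrowed on Morpho at 10 PM
borrow_apr = 0.06            # 6% annualized borrow rate (assumed)
pt_entry_price = 0.9620      # PT price at position open (assumed)
pt_midnight_price = 0.9658   # ~0.4% higher by the midnight snapshot

pt_units = principal / pt_entry_price
two_hours = 2 * 3600

# Both sides carry the same midnight timestamp. Snapshotting them minutes
# apart would misstate the position's net value.
gross_assets = pt_units * pt_midnight_price
gross_liabilities = principal + accrued_borrow(principal, borrow_apr, two_hours)
net_position = gross_assets - gross_liabilities
```

Two hours of accrual is only about $13.70 here, but an agent that cycles leveraged positions daily compounds small timestamp mismatches into a persistent NAV error.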
Scenario 3: The agent triggers a governance action with P&L implications
Some agents are integrated with protocol governance — they vote on parameter changes, claim governance rewards, or participate in protocol incentive programs. A governance reward claim is a taxable event in most jurisdictions and a NAV event that needs to be valued and recorded.
Agents participating in governance don't announce these events in advance. They happen when onchain conditions are met. For a fund without real-time monitoring of agent activity, governance rewards can sit in a wallet unclaimed for days, or — worse — get claimed and sit unrecorded until month-end reconciliation.
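Catching the "claimed but unrecorded" case is a set-difference problem: compare onchain inflows against booked entries. A minimal sketch, with invented transaction hashes and record shapes:

```python
# Hypothetical onchain inflows observed by a wallet monitor, versus the
# entries the operations team has actually booked. All values invented.
onchain_inflows = [
    {"tx": "0xabc1", "token": "ARB", "amount": 512.0},   # governance reward
    {"tx": "0xabc2", "token": "USDC", "amount": 10_000.0},
]
booked_entries = [
    {"tx": "0xabc2", "token": "USDC", "amount": 10_000.0},
]

# Any inflow without a matching booked entry is an unrecorded NAV event.
booked_hashes = {e["tx"] for e in booked_entries}
unrecorded = [t for t in onchain_inflows if t["tx"] not in booked_hashes]

for t in unrecorded:
    print(f"unrecorded inflow: {t['amount']} {t['token']} ({t['tx']})")
```

Run continuously, this surfaces the governance reward within minutes of the claim, instead of at month-end reconciliation.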
The Question the Industry Isn't Asking
If AI agents can trade autonomously, can't they also monitor and report on themselves?
In principle, an agent could log its own decisions, produce its own attribution reports, and flag its own anomalies. Some early systems attempt this. The problem is structural, and it's the same problem that existed long before AI entered the picture: a system that monitors its own behavior has no independent check.
If the agent's strategy drifts — gradually, imperceptibly — and the agent is also responsible for characterizing its own strategy, the drift may be reclassified as intentional behavior rather than flagged as a deviation from mandate. This isn't a hypothetical. It's the same reason funds have independent administrators rather than self-reported NAV. The principle is not that the fund manager is dishonest; it's that self-reporting produces systematic bias, and independent verification is the structural solution.
Applied to AI agents: the performance monitoring layer needs to be independent of the agent layer. Not a feature of the agent. A separate system, watching from the outside, with access to the same onchain data but no stake in the agent's performance narrative.
What makes this hard is that AI agents — unlike rules-based algorithms — can discover strategies their operators didn't design. They can develop behaviors that aren't legible from the original specification. An independent monitoring layer therefore can't simply check whether the agent is following its rules. It has to reconstruct what the agent is actually doing from raw transaction data, without relying on the agent's own logs or classifications.
That's a different and harder problem than conventional fund reconciliation. Each protocol has its own data format. Cross-chain activity requires reconciliation across multiple indexing layers. Reward events require protocol-specific parsing that generic blockchain explorers don't support. And the whole system needs to operate in real time, because agents don't wait for the morning reconciliation run.
What This Means for Fund Operations in 2026
AI agents are not coming. They're here. The funds that run them well over the next two to three years will not necessarily be the ones with the most sophisticated models. They will be the ones that built the operational discipline to watch what their agents are doing — independently, continuously, and in a format that holds up to LP scrutiny and external audit.
The alpha potential of autonomous trading is available to anyone who can build or access an agent. The infrastructure to govern it properly is the harder and less-discussed problem. It's also the one that separates the funds that can scale agent strategies from the ones that eventually discover, at the worst possible moment, that they couldn't explain what their agent had been doing.
That gap — between what the agent does and what the fund can account for — is where the next generation of fund operations problems will live.
Renesis builds real-time portfolio infrastructure for liquid crypto funds. Our data layer is designed to capture and attribute agent-executed transactions with the same precision as human-initiated ones — across protocols, chains, and asset classes, independently of the agent itself.
Built by builders.
For builders.
We're a DeFi-native team shipping fast. No enterprise sales cycles, no bloated pricing. Start free, talk to us when you're ready.