
Renesis Insights

Renesis Team
The Problem With How Most Funds Use Onchain Data
Most liquid crypto funds have access to good onchain data. Nansen is a subscription away. DefiLlama is free. Dune has thousands of public dashboards. The data is not the constraint.
The constraint is workflow: how onchain data gets from a raw signal to a portfolio decision, a risk flag, or a line in an LP report. Most fund teams have never built this workflow explicitly. They have individual analysts who check individual dashboards on individual schedules, and a PM who may or may not see the synthesis before making a decision.
The result is that most funds are data-rich and insight-poor. They have access to everything, but no process for turning it into anything.
This guide is about the workflow layer — how the funds that use onchain data well have actually integrated it into how they run.
The Four Jobs Onchain Data Does in a Fund
Before building a workflow, it helps to be precise about what job onchain data is actually doing. There are four distinct functions, each requiring different tools, different cadences, and different personnel.
1. Pre-trade research — building conviction
This is onchain data in service of an investment thesis. Before entering a position, a fund manager wants to understand: is there genuine on-chain activity supporting the narrative? What does smart money positioning look like? Is supply pressure building from unlocks or large holder distribution?
The tools here are primarily Nansen (wallet behaviour), Glassnode (BTC/ETH holder dynamics), Tokenomist (unlock schedules), and Token Terminal (protocol revenue and growth). The output is a qualitative judgment: does the on-chain picture support or undermine the thesis?
Frequency: per investment decision. Depth: high. Owner: analyst or PM.
2. Position monitoring — ongoing risk
Once in a position, onchain data shifts to a monitoring function: is anything changing that should affect the thesis? This includes protocol TVL changes, large wallet movements into or out of a position, funding rate dynamics on leveraged positions, and governance events that could affect token supply or protocol revenue.
The tools here are Coinglass (funding rates, open interest), Nansen alerts (large wallet movements), DefiLlama (TVL and protocol health), and CryptoQuant (exchange flows for macro assets). The output is a flag: does anything here warrant action?
Frequency: daily or real-time depending on position size and strategy. Depth: moderate. Owner: analyst with PM escalation path.
3. Portfolio attribution — explaining performance
At month end, every fund needs to explain what drove returns. Onchain data is increasingly necessary for this in DeFi strategies: which protocol generated which yield, how funding rate accrual contributed to P&L, which LP position experienced impermanent loss. Without onchain attribution data, DeFi returns are a black box.
The tools here are Dune (custom attribution queries), DefiLlama (historical yield data), and protocol-specific dashboards. The output is a structured decomposition of P&L by source — the raw material for LP reporting.
Frequency: monthly (minimum), ideally continuous. Depth: high. Owner: operations or dedicated data analyst.
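As a concrete illustration of the decomposition step, here is a minimal Python sketch. The event records and source labels are assumptions for illustration — in practice they would come from your Dune queries or reconciliation system — but the aggregation logic is the shape of the output an LP report needs.

```python
from collections import defaultdict

# Hypothetical event records an attribution pipeline might produce.
# Each event tags a P&L amount with its source.
events = [
    {"source": "protocol_yield",   "asset": "USDC",       "pnl": 1200.0},
    {"source": "funding_accrual",  "asset": "ETH-PERP",   "pnl": -340.0},
    {"source": "impermanent_loss", "asset": "ETH/USDC LP", "pnl": -510.0},
    {"source": "price",            "asset": "ETH",        "pnl": 8750.0},
]

def attribute_pnl(events):
    """Aggregate P&L by source — the structured decomposition
    that becomes the raw material for LP reporting."""
    by_source = defaultdict(float)
    for e in events:
        by_source[e["source"]] += e["pnl"]
    total = sum(by_source.values())
    return dict(by_source), total

by_source, total = attribute_pnl(events)
```

Run continuously, this turns the month-end black box into a running ledger: each source total is already there when the report is due.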
4. LP reporting and narrative — communicating performance
The final function is translating what happened into a format that makes sense to LPs. This is where onchain data meets institutional communication: performance attribution, risk-adjusted returns, strategy-level decomposition, and benchmark comparison.
The tools here are GMCI (benchmark indices), Coin Metrics (research-grade data with documented methodology), and internal reconciliation systems. The output is the investor report — the document that either builds or erodes LP confidence.
Frequency: monthly. Depth: high on presentation, backed by rigorous data. Owner: PM with operations support.
The Workflow Most Funds Are Missing: The Daily Intelligence Brief
The highest-leverage operational change most funds could make is to formalise a daily intelligence brief: a structured, 15-minute synthesis of overnight data that every relevant person on the team sees before the trading day starts.
Most funds don't have this. They have a Telegram group where people drop links. The difference is significant.
A well-designed daily brief for a liquid CeFi + DeFi fund covers:
Market structure (5 minutes): Overnight price action on core positions. Funding rates and open interest changes on perp positions. Any significant liquidations or volatility events. Source: Coinglass, TradingView alerts.
Onchain positioning (5 minutes): Large wallet movements on core holdings. Exchange inflows/outflows for BTC and ETH as macro indicators. Any protocol-level changes affecting DeFi positions (TVL changes, rate changes, unusual transaction patterns). Source: Nansen alerts, Glassnode daily update, DefiLlama.
Narrative and sentiment (3 minutes): What's the dominant narrative on CT and in the institutional research feeds? Any governance votes, protocol announcements, or regulatory news that could affect positioning? Source: Kaito, The TIE, The Block.
Yield and DeFi (2 minutes): Current yield rates on active DeFi positions vs alternatives. Any material changes in the protocols the fund is using. Source: DefiLlama, protocol dashboards.
The output is not a report — it's a shared mental model that the whole team starts the day with. Decisions made from a shared context are better than decisions made from fragmented individual information.
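One way to keep the brief disciplined is to encode the template itself, so the time budget and sources are fixed and only the items change each morning. A minimal sketch (the section names and sources are taken from the structure above; the `BriefSection` type is our own illustration):

```python
from dataclasses import dataclass, field

@dataclass
class BriefSection:
    title: str
    minutes: int                # time budget for the morning read
    sources: list               # where the underlying data comes from
    items: list = field(default_factory=list)  # filled in each morning

# The four sections described above, as a reusable daily template.
DAILY_BRIEF = [
    BriefSection("Market structure", 5, ["Coinglass", "TradingView alerts"]),
    BriefSection("Onchain positioning", 5, ["Nansen alerts", "Glassnode", "DefiLlama"]),
    BriefSection("Narrative and sentiment", 3, ["Kaito", "The TIE", "The Block"]),
    BriefSection("Yield and DeFi", 2, ["DefiLlama", "protocol dashboards"]),
]

# The whole brief should stay a 15-minute read.
assert sum(s.minutes for s in DAILY_BRIEF) == 15
```

The point of the template is the constraint: if a section can't be filled in its time budget, the inputs feeding it need fixing, not the meeting.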
How to Build a Smart Money Monitoring System
One of the most valuable onchain workflows a fund can build is systematic monitoring of wallets known to belong to sophisticated or well-informed market participants. The premise: if wallets that have historically moved ahead of significant price moves are accumulating or distributing, that's a signal worth integrating.
This is not about blindly copying trades. It's about having a structured, documented system for observing market behaviour and incorporating it as one input among many.
Step 1: Build a wallet watchlist. Nansen provides labelled wallet categories (smart money, whale wallets, exchange wallets). Add any additional wallets you've identified through your own research or network. A starting watchlist might be 20–50 wallets.
Step 2: Set up alerts, not dashboards. The failure mode of wallet monitoring is checking a dashboard occasionally and seeing noise. The correct setup is automated alerts triggered by material movements: threshold-based (wallet moves >$500K), asset-specific (any movement in assets you're currently holding or watching), or behaviour-based (wallet begins accumulating after a period of inactivity).
Step 3: Log and review. Keep a simple log of significant wallet movements and what, if anything, the asset's price did in the subsequent days or weeks. Over time this tells you which wallets are actually signal and which are noise. Most are noise. A handful are consistently worth watching.
Step 4: Integrate into pre-trade research, not into execution. The smart money signal should be one input into a thesis, not a trigger for immediate action. The funds that get burned by wallet monitoring are the ones that react to moves without understanding the context.
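Steps 2 and 3 can be sketched in a few lines of Python. The threshold and watchlist values follow the text; the data shapes (`movement` dicts, the review log) are assumptions standing in for whatever your alerting provider emits:

```python
THRESHOLD_USD = 500_000            # material-movement threshold from step 2
WATCHED_ASSETS = {"ETH", "ARB"}    # hypothetical: assets currently held or watched

def should_alert(movement):
    """Step 2: alert on material movements only — threshold-based
    or asset-specific — instead of watching a dashboard."""
    return (movement["usd_value"] >= THRESHOLD_USD
            or movement["asset"] in WATCHED_ASSETS)

def wallet_hit_rate(log):
    """Step 3: per-wallet fraction of logged moves that price action
    subsequently confirmed — a crude signal-vs-noise score."""
    rates = {}
    for wallet in {e["wallet"] for e in log}:
        entries = [e for e in log if e["wallet"] == wallet]
        hits = sum(1 for e in entries if e["price_followed"])
        rates[wallet] = hits / len(entries)
    return rates

# A toy review log: over time, this is what separates signal wallets from noise.
log = [
    {"wallet": "0xabc", "price_followed": True},
    {"wallet": "0xabc", "price_followed": True},
    {"wallet": "0xdef", "price_followed": False},
]
```

Even this crude hit rate, reviewed quarterly, is enough to prune the watchlist down to the handful of wallets that are consistently worth watching.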
The Unlock Schedule Workflow
Token unlock events are one of the most predictable sources of selling pressure in crypto. A fund that systematically tracks upcoming unlocks and incorporates them into position sizing has a structural edge over one that finds out about them from a tweet.
Tokenomist (formerly TokenUnlocks) is the primary tool for this. A practical workflow:
Monthly review: At the beginning of each month, pull the unlock schedule for all assets in the portfolio and watchlist for the next 60 days. Flag any unlock that represents more than 5% of circulating supply — these warrant specific attention.
Position sizing adjustment: For positions held into a major unlock, consider reducing size in the two weeks before the event. The pattern is consistent enough to warrant systematic treatment.
Re-entry monitoring: Post-unlock, watch for on-chain evidence that distributed tokens are being sold (exchange inflows from vesting addresses) vs. held (tokens moving to cold storage or staking). The selling pressure from an unlock that's already been absorbed is different from one that's still working through.
Documentation: Log your unlock monitoring in a way that can be referenced in LP reports. Being able to say "we reduced exposure ahead of the $40M Arbitrum cliff unlock and re-entered after on-chain evidence showed distribution had completed" is exactly the kind of specific, data-backed narrative that builds LP confidence.
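The monthly review step is mechanical enough to script. A minimal sketch, assuming the unlock schedule has been exported into simple records (the token, amounts, and dates below are invented for illustration; the 60-day horizon and 5% threshold come from the workflow above):

```python
from datetime import date, timedelta

def flag_unlocks(unlocks, circulating, today, horizon_days=60, threshold=0.05):
    """Flag unlocks in the next 60 days exceeding 5% of circulating supply."""
    cutoff = today + timedelta(days=horizon_days)
    flagged = []
    for u in unlocks:
        if today <= u["date"] <= cutoff:
            share = u["amount"] / circulating[u["token"]]
            if share > threshold:
                flagged.append({**u, "pct_of_supply": round(share * 100, 2)})
    return flagged

# Hypothetical schedule: an 8% cliff inside the window, a later one outside it.
unlocks = [
    {"token": "XYZ", "date": date(2025, 7, 15), "amount": 8_000_000},
    {"token": "XYZ", "date": date(2025, 9, 1),  "amount": 20_000_000},
]
circulating = {"XYZ": 100_000_000}
flagged = flag_unlocks(unlocks, circulating, today=date(2025, 7, 1))
```

Running this at the start of each month produces the flag list that drives the position-sizing and re-entry steps.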
The DeFi Protocol Health Checklist
For funds running DeFi yield strategies, protocol health monitoring is a critical risk management function that most teams do informally at best. A systematic checklist run monthly (or weekly for larger positions) covers:
TVL trend: Is TVL stable, growing, or declining? Sustained TVL decline without a corresponding rate improvement is a warning sign of capital rotation out of the protocol.
Yield sustainability: What is the source of the yield — organic protocol revenue, token incentives, or both? Token-incentive yield is structurally unsustainable. Understanding the yield composition matters for holding duration decisions.
Smart contract audit status: Has the protocol been audited? When? By whom? Has the code been materially changed since the last audit? Tools: the protocol's own documentation, DefiLlama's protocol pages, and direct GitHub monitoring.
Governance activity: Is there any active governance discussion that could affect protocol parameters, fee structures, or supported assets? Governance votes that change borrowing rates or collateral requirements can materially affect DeFi position P&L.
Curator risk (for Morpho vaults): Which curator is managing the vault? What is their track record? What are the current risk parameters they've set? Curator decisions directly affect the risk-return profile of a Morpho position in ways that aren't always visible from the APY number.
The output is a simple health score per protocol — green, amber, red — with a one-sentence rationale. Reviewed monthly by the risk owner and included in the monthly portfolio review.
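The green/amber/red collapse can be as simple as counting failed checks. The scoring rule below is an illustrative assumption, not a standard — the value is in answering the checklist honestly, not in the arithmetic:

```python
def protocol_health(checks):
    """Collapse checklist answers (item -> pass/fail) into a
    green/amber/red status with a one-sentence rationale."""
    failed = [name for name, ok in checks.items() if not ok]
    if not failed:
        return "green", "all checks pass"
    if len(failed) == 1:
        return "amber", f"watch: {failed[0]}"
    return "red", "multiple checks failing: " + ", ".join(failed)

# Hypothetical monthly review of one protocol.
status, rationale = protocol_health({
    "tvl_trend_stable": True,
    "yield_organic": False,        # yield mostly from token incentives
    "audit_current": True,
    "no_adverse_governance": True,
})
```

The one-sentence rationale is the part that matters at review time: "amber — watch: yield_organic" is actionable in a way a bare score is not.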
Connecting Onchain Intelligence to LP Reporting
The final workflow gap most funds have is the connection between their onchain monitoring and their LP reports. These are typically handled by different people using different tools, and the synthesis is done manually under time pressure at month end.
The funds that close this gap have built a data trail that runs continuously from on-chain event to LP report line. Every significant on-chain observation is logged. Every position entry or exit that was informed by on-chain data is documented. Every DeFi yield event is captured at the time it occurs.
The result is a month-end report that doesn't need to be reconstructed from memory — it's assembled from a continuous log. More importantly, it's a report that can withstand LP scrutiny: every performance attribution claim can be traced back to a specific on-chain event with a timestamp and a transaction hash.
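The continuous log doesn't require heavy infrastructure to start. A minimal sketch of the data trail — every entry carries a timestamp and a transaction hash, and month-end reporting becomes a filter rather than a reconstruction (the class and example entries are our own illustration):

```python
from datetime import datetime, timezone

class OnchainLog:
    """Continuous log of onchain observations; the month-end report
    is assembled from entries rather than reconstructed from memory."""
    def __init__(self):
        self.entries = []

    def record(self, event, tx_hash, timestamp=None):
        self.entries.append({
            "event": event,
            "tx_hash": tx_hash,   # traceable under LP scrutiny
            "timestamp": timestamp or datetime.now(timezone.utc).isoformat(),
        })

    def month_end(self, month_prefix):
        """Pull every logged entry for a given month, e.g. '2025-07'."""
        return [e for e in self.entries if e["timestamp"].startswith(month_prefix)]

log = OnchainLog()
log.record("Reduced ARB ahead of cliff unlock", "0x1a2b...", "2025-07-10T09:00:00")
log.record("Re-entered after vesting outflows slowed", "0x3c4d...", "2025-07-28T14:30:00")
july = log.month_end("2025-07")
```

Even a shared spreadsheet with the same three columns captures most of the benefit; the discipline of logging at the time of the event is what makes the month-end report assemble itself.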
This is the standard that institutional LPs are beginning to expect. It's also the standard that separates funds that can scale from funds that hit an operational ceiling when their DeFi complexity outgrows their manual processes.
Renesis provides real-time portfolio infrastructure for liquid crypto funds — a unified data layer that aggregates onchain and CeFi data, automates reconciliation, and generates LP-ready reporting without the manual synthesis layer most funds are currently relying on.
Built by builders.
For builders.
We're a DeFi-native team shipping fast. No enterprise sales cycles, no bloated pricing. Start free, talk to us when you're ready.