@bananapus/suckers-v6 0.0.26 → 0.0.27

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/ADMINISTRATION.md CHANGED
@@ -4,82 +4,23 @@
 
  | Item | Details |
  | --- | --- |
- | Scope | Registry-managed sucker deployment plus project-local bridge mapping, deprecation, and safety control |
- | Control posture | Mixed registry-owner, project-owner, and one-time deployer-configurator control |
- | Highest-risk actions | Wrong token mapping, emergency hatch activation, unsafe deprecation handling, and misconfigured bridge constants |
- | Recovery posture | Recovery usually means replacement sucker paths or deployers rather than in-place reversal |
+ | Scope | Cross-chain claim movement, token mapping, fees, and deprecation controls |
+ | Control posture | Mixed registry-owner, project-permission, and bridge-specific trust |
+ | Highest-risk actions | Wrong token mapping, wrong peer assumptions, and bad emergency or deprecation handling |
+ | Recovery posture | Often one-way; many recovery paths are intentionally irreversible |
 
  ## Purpose
 
- `nana-suckers-v6` has a layered control plane: registry ownership, project-local permissioned actions, and one-time deployer configuration for each bridge family. The most dangerous admin actions are token mapping, deprecation, emergency hatch activation, and deployer bridge-constant setup.
+ This repo controls the shared lifecycle around bridging project positions, not just the transport call itself.
 
  ## Control Model
 
- - `JBSuckerRegistry` is globally `Ownable`.
- - Project-local authority flows through `JBPermissions`.
- - `MAP_SUCKER_TOKEN`, `DEPLOY_SUCKERS`, `SUCKER_SAFETY`, and `SET_SUCKER_DEPRECATION` are the critical project-level permissions.
- - Bridge deployers have a one-time configurator role for singleton and chain constants.
-
- ## Roles
-
- | Role | How Assigned | Scope | Notes |
- | --- | --- | --- | --- |
- | Registry owner | `Ownable(initialOwner)` | Global | Controls approved deployers and global `toRemoteFee` |
- | Project owner | `JBProjects.ownerOf(projectId)` | Per project | May delegate project-local sucker permissions |
- | Project operator | `JBPermissions` grant | Per project | Typically `DEPLOY_SUCKERS`, `MAP_SUCKER_TOKEN`, `SUCKER_SAFETY`, `SET_SUCKER_DEPRECATION` |
- | Deployer configurator | Constructor `configurator` | Per deployer | One-time setup role for chain constants and singleton |
-
- ## Privileged Surfaces
-
- | Contract | Function | Who Can Call | Effect |
- | --- | --- | --- | --- |
- | `JBSuckerRegistry` | `allowSuckerDeployer(...)`, `removeSuckerDeployer(...)`, `setToRemoteFee(...)` | Registry owner | Controls global deployer allowlist and fee |
- | `JBSuckerRegistry` | `deploySuckersFor(...)` | Project owner or `DEPLOY_SUCKERS` delegate | Deploys sucker pairs for a project |
- | `JBSucker` | `mapToken(...)`, `mapTokens(...)` | Project owner or `MAP_SUCKER_TOKEN` delegate | Sets or disables token mappings |
- | `JBSucker` | `enableEmergencyHatchFor(...)` | Project owner or `SUCKER_SAFETY` delegate | Irreversibly opens emergency exit for tokens |
- | `JBSucker` | `setDeprecation(...)` | Project owner or `SET_SUCKER_DEPRECATION` delegate | Starts or cancels deprecation while allowed |
- | `JBSuckerDeployer` variants | `configureSingleton(...)`, `setChainSpecificConstants(...)` | Configurator | One-time deployer setup |
-
- ## Immutable And One-Way
-
- - Emergency hatch is irreversible for the affected token mapping.
- - Deployer singleton and chain-constant setup are one-time.
- - Deprecation becomes irreversible once the sucker reaches the disabled phase.
- - Token mapping is constrained once outbox activity exists for that token.
-
- ## Operational Notes
-
- - Map remote tokens carefully before meaningful bridge traffic accumulates.
- - Use deprecation to create a controlled shutdown window instead of abrupt disablement.
- - Treat emergency hatch as a last resort.
- - Verify deployer singleton and chain constants before approving or using a deployer operationally.
- - Treat fee-payment and bridge-send paths as best-effort in some variants; certain failures degrade into retained funds or local fallback claims rather than clean global rollback.
-
- ## Machine Notes
-
- - Do not assume registry ownership implies control over project-local mapping or emergency actions.
- - Treat `src/JBSucker.sol`, `src/JBSuckerRegistry.sol`, and `src/deployers/` as the minimum admin source set.
- - If live leaves, token mappings, or deprecation phase disagree with the planned action, stop and re-evaluate the recovery path.
- - If a sucker variant uses try/catch around fee payment or inbound swaps, inspect the variant-specific recovery behavior before assuming failed bridge-side actions fully reverted.
+ - registry owner controls shared fee settings and deployer allowlists
+ - project-level permissions control token mapping and safety paths
+ - bridge-specific implementations inherit external trust assumptions
 
  ## Recovery
 
- - The normal recovery path is a new sucker path or a new deployer, not trying to re-enable an unsafe one.
- - Emergency-hatched tokens recover through the defined local exit flow.
- - Bad bridge-constant configuration generally means replacement deployers or replacement sucker instances.
- - Some failure modes intentionally preserve liveness over strict rollback, so recovery may mean reconciling retained funds or retryable local claims rather than undoing the original send.
-
- ## Admin Boundaries
-
- - Registry owners cannot override project-local mapping or safety decisions directly.
- - Project operators cannot reverse an emergency hatch.
- - Project operators cannot force already sent leaves through the emergency hatch path.
- - Nobody can mutate constructor immutables on live suckers or deployers.
-
- ## Source Map
+ - emergency hatch and deprecation are the main recovery tools
+ - both are intentionally conservative and often one-way
 
- - `src/JBSucker.sol`
- - `src/JBSuckerRegistry.sol`
- - `src/deployers/`
- - `src/utils/MerkleLib.sol`
- - `test/`
package/ARCHITECTURE.md CHANGED
@@ -2,89 +2,33 @@
 
  ## Purpose
 
- `nana-suckers-v6` moves Juicebox project-token value across chains. A sucker pair lets a holder destroy or consume a local project-token position, bridge the corresponding terminal-side value plus a Merkle root, and later claim equivalent value on the remote chain.
+ `nana-suckers-v6` bridges Juicebox project positions across chains by turning local burns into claimable remote mints.
 
  ## System Overview
 
- `JBSucker` defines the chain-agnostic prepare, relay, and claim lifecycle. Chain-specific implementations such as `JBOptimismSucker`, `JBArbitrumSucker`, `JBCCIPSucker`, `JBSwapCCIPSucker`, `JBBaseSucker`, and `JBCeloSucker` handle transport-specific details. `JBSuckerRegistry` governs deployment inventory and shared policy, while deployers create deterministic clones for each supported transport family.
+ `JBSucker` handles prepare, relay, claim, token mapping, deprecation, and emergency exits. `JBSuckerRegistry` tracks deployments, deployer allowlists, and shared fee settings. Bridge-specific implementations handle transport details.
 
  ## Core Invariants
 
- - Inbox and outbox trees must remain append-only and proof-compatible across chains.
- - Token mapping is part of economic correctness; a wrong mapping is a value-loss bug.
- - Deprecation or emergency controls must not break already-bridged claims.
- - Roots may arrive out of order. Newer nonces replace older inbox roots, so claims must stay provable against the latest append-only tree.
- - Root reception and token mapping are intentionally decoupled. Accepting a root for an unmapped token is valid if later mapping is what makes the claim redeemable.
- - Transport-specific implementations may differ operationally, but they must preserve the same logical prepare-to-claim lifecycle.
-
- ## Modules
-
- | Module | Responsibility | Notes |
- | --- | --- | --- |
- | `JBSucker` | Prepare, root management, claim verification, token mapping, deprecation | Chain-agnostic base |
- | chain-specific suckers | Transport details for OP Stack, Arbitrum, CCIP, Base, and Celo | Bridge-specific subclasses |
- | `JBSuckerRegistry` | Deployer allowlist, inventory, and global bridge-fee policy | Shared policy surface |
- | deployers | Deterministic clone deployment and initialization | One per transport family |
- | `MerkleLib` and helper libraries | Incremental tree logic and chain constants | Proof-critical |
+ - Merkle trees stay append-only
+ - nonce progression stays monotonic
+ - token mapping stays coherent across peers
+ - claims and emergency exits do not double-spend
+ - outbox balance accounting stays consistent through send and recovery flows
 
  ## Trust Boundaries
 
- - Project-token semantics and local terminal accounting remain rooted in `nana-core-v6`.
- - Transport assumptions come from native bridge infrastructure for each chain family.
- - Permission IDs come from `nana-permission-ids-v6`.
-
- ## Critical Flows
-
- ### Prepare, Relay, Claim
-
- ```text
- holder prepares a bridge
- -> sucker cashes out or consumes the local project-token position
- -> sucker inserts a Merkle leaf into the outbox tree
- -> someone relays funds and the latest root to the remote sucker
- -> remote sucker may accept a later nonce before an earlier one, updating shared cross-chain snapshots to the freshest project-wide message
- -> claimant proves inclusion against the remote inbox tree
- -> remote sucker releases or remints destination-side value
- ```
-
- ## Accounting Model
-
- The repo does not replace local treasury accounting. It owns bridge-specific claim accounting: outbox leaves, inbox roots, token mappings, replay protection, and the transition from local destruction to remote claimability.
-
- `JBSwapCCIPSucker` adds another accounting layer on top of the base lifecycle: nonce-indexed conversion rates. A claim can be temporarily blocked while a failed swap is pending retry, and successful claims are scaled against the conversion rate recorded for that batch's nonce.
+ - shared logic lives in `JBSucker`
+ - transport security lives in the bridge-specific implementation and external bridge counterparties
+ - registry decisions can widen or constrain the allowed deployment surface
 
  ## Security Model
 
- - Tree state, token mapping, and replay protection have cross-chain blast radius.
- - Each transport backend has distinct failure modes and native-token quirks.
- - Out-of-order message delivery is part of the trust model, not an exception path. Proof generation and monitoring must tolerate stale-root rejection and regenerated proofs against the newest root.
- - Emergency hatch and deprecation flows are designed to preserve already-bridged exits. Post-deprecation root acceptance is intentional so in-flight messages do not strand users.
- - Registry policy matters because bad deployments are hard to repair once pairs exist on multiple chains.
-
- ## Safe Change Guide
-
- - Review every cross-chain change from both sides of the pair.
- - Do not change Merkle leaf encoding casually.
- - Keep registry policy, deployer configuration, and singleton initialization aligned.
- - If you change root or snapshot nonce handling, re-check out-of-order delivery behavior and whether older claims remain provable against the newest root.
- - If you change CCIP swap handling, re-check pending-swap claim blocking and per-batch conversion-rate lookups together.
- - Test chain-specific wrapping and native-token handling separately from the abstract lifecycle.
-
- ## Canonical Checks
-
- - peer snapshot and remote-state synchronization:
- `test/audit/codex-PeerSnapshotDesync.t.sol`
- - deprecation and stranded-destination handling:
- `test/audit/DeprecatedSuckerDestination.t.sol`
- - peer-chain state accounting:
- `test/unit/peer_chain_state.t.sol`
+ - the biggest risks are non-atomic cross-chain state, bad token mapping, and broken peer assumptions
+ - bridge liveness and correct peer identity are real trust assumptions
 
  ## Source Map
 
  - `src/JBSucker.sol`
  - `src/JBSuckerRegistry.sol`
- - `src/deployers/`
  - `src/utils/MerkleLib.sol`
- - `test/audit/codex-PeerSnapshotDesync.t.sol`
- - `test/audit/DeprecatedSuckerDestination.t.sol`
- - `test/unit/peer_chain_state.t.sol`
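
The append-only tree invariant above can be modeled compactly. The following is a minimal Python sketch of an eth2-deposit-style incremental Merkle tree; the package's actual `MerkleLib` is Solidity, depth 32, and gas-optimized, so `DEPTH = 4` and SHA-256 here are illustrative simplifications, not the on-chain hashing scheme:

```python
import hashlib

DEPTH = 4  # illustrative; the on-chain MerkleLib uses depth 32
ZERO_LEAF = b"\x00" * 32

def h(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

# Precompute the root of an all-zero subtree at each height.
ZERO_HASHES = [ZERO_LEAF]
for _ in range(DEPTH):
    ZERO_HASHES.append(h(ZERO_HASHES[-1], ZERO_HASHES[-1]))

class IncrementalTree:
    """Append-only tree storing only one cached node per level plus a leaf count."""

    def __init__(self):
        self.branch = [ZERO_LEAF] * DEPTH
        self.count = 0

    def insert(self, leaf: bytes):
        node, size = leaf, self.count + 1
        for height in range(DEPTH):
            if size % 2 == 1:
                # This node is a left child at this height: cache it for later siblings.
                self.branch[height] = node
                break
            node = h(self.branch[height], node)
            size //= 2
        self.count += 1

    def root(self) -> bytes:
        node, size = ZERO_LEAF, self.count
        for height in range(DEPTH):
            if size % 2 == 1:
                node = h(self.branch[height], node)
            else:
                node = h(node, ZERO_HASHES[height])
            size //= 2
        return node

def naive_root(leaves):
    """Full-tree reference computation for checking the incremental version."""
    level = list(leaves) + [ZERO_LEAF] * (2**DEPTH - len(leaves))
    for _ in range(DEPTH):
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

Because only `branch` and `count` are ever stored, leaves can never be removed or edited after insertion, which is what makes outbox and inbox roots safely append-only across chains.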
@@ -1,110 +1,30 @@
  # Audit Instructions
 
- This repo bridges Juicebox project tokens and associated terminal assets across chains. Audit it as a conservation and replay-prevention system.
+ Audit this repo as cross-chain claim and recovery logic, not as a generic ERC-20 bridge.
 
  ## Audit Objective
 
  Find issues that:
- - allow double claim, replay, or claim on the wrong destination
- - lose or strand bridged backing assets
- - let deprecated or emergency paths violate intended safety rules
- - mis-handle root ordering, especially across asynchronous bridge transports
- - grant mapping or safety privileges more broadly than intended
+
+ - break Merkle-root or nonce progression
+ - allow bad token mapping or peer assumptions
+ - permit double-claim or bad emergency exit behavior
+ - make non-atomic bridge semantics unsafe
 
  ## Scope
 
  In scope:
- - all Solidity under `src/`
- - deployer contracts under `src/deployers/`
+
+ - `src/JBSucker.sol`
+ - `src/JBSuckerRegistry.sol`
+ - bridge-specific implementations and deployers
  - `src/utils/MerkleLib.sol`
- - libraries, enums, interfaces, and structs under `src/`
- - deployment scripts in `script/`
 
  ## Start Here
 
- Read in this order:
- - the shared flow in `JBSucker`
- - claim validation and execution tracking
- - token mapping and emergency-hatch logic
- - one native bridge implementation
- - `JBCCIPSucker`
- - deployers and registry assumptions
-
- That order gets you from the shared conservation model to the transport-specific deviations.
-
- ## Security Model
-
- The bridge flow is:
- - burn or prepare project-token value on source chain
- - record a leaf into an outbox tree
- - send a merkle root and backing assets over a chain-specific transport
- - receive the root on the remote chain
- - claim by proving inclusion against the current inbox root
-
- This repo supports multiple transport implementations:
- - OP Stack variants
- - Arbitrum
- - CCIP
- - related deployers and registries
-
- One non-obvious property to audit explicitly:
- - roots and assets do not always arrive in a perfectly ordered, synchronous way
- - the system is intentionally designed to survive some transport mismatch without deadlocking
- - those recovery choices are exactly where conservation bugs tend to hide
-
- ## Roles And Privileges
-
- | Role | Powers | How constrained |
- |------|--------|-----------------|
- | Source-side caller | Prepare and bridge value to a remote chain | Must not create more claimable value than was prepared |
- | Remote peer and messenger | Install new roots and deliver assets | Must be authenticated per transport |
- | Emergency authority | Deprecate paths or enable recovery exits | Must not be able to steal in-flight funds |
-
- ## Integration Assumptions
-
- | Dependency | Assumption | What breaks if wrong |
- |------------|------------|----------------------|
- | Bridge transport | Delivers only authenticated peer messages | Anyone can spoof remote state |
- | Token mapping and registry state | Remote asset identity stays stable | Users claim the wrong asset or wrong meaning |
-
- ## Critical Invariants
-
- 1. Cross-chain conservation
- For any prepared transfer, destination claimable value must not exceed what the source side actually prepared and backed.
-
- 2. Single execution
- Each bridged leaf must be claimable at most once on the destination and at most once via emergency exit.
-
- 3. Peer authenticity
- Only the intended remote peer and messenger path may update inbox roots.
-
- 4. Deprecation safety
- Deprecation and emergency-hatch controls must not let callers bypass intended restrictions or steal in-flight funds.
-
- 5. Token mapping integrity
- Remote token mappings must be immutable or mutable only exactly where the design allows.
-
- 6. Nonce progression is monotonic in the way each transport expects
- Later roots must not silently invalidate earlier user claims unless the protocol explicitly intends that recovery path.
-
- ## Attack Surfaces
-
- - `prepare`, `toRemote`, `fromRemote`, and `claim`
- - bitmap execution tracking
- - root and nonce handling
- - token mapping and registry trust
- - chain-specific messenger authentication
- - deployer address derivation and clone setup
-
- Replay these sequences:
- 1. prepare multiple leaves, send multiple roots, receive them out of order, and attempt each claim
- 2. prepare, deprecate or enable emergency hatch, then race claim and exit paths
- 3. map a token, prepare a transfer, then attempt remap or peer mismatch after value is in flight
- 4. replay the same logical transfer across different sucker implementations
-
- ## Accepted Risks Or Behaviors
-
- - Out-of-order arrival is part of the intended model, not an edge case.
+ 1. `src/JBSucker.sol`
+ 2. `src/JBSuckerRegistry.sol`
+ 3. the relevant bridge-specific implementation
 
  ## Verification
 
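
The removed attack-surface list mentions "bitmap execution tracking", which is a single-execution guard: each leaf index maps to one bit that can be set exactly once, so a claim and an emergency exit for the same leaf cannot both pay out. A minimal Python model of the idea (the real contract packs bits into 256-bit storage words in Solidity; class and method names here are illustrative):

```python
class ExecutedBitmap:
    """One bit per leaf index, packed into 256-bit words, settable exactly once."""

    def __init__(self):
        self.words = {}  # word index -> 256-bit integer, like a Solidity mapping

    def is_executed(self, index: int) -> bool:
        word = self.words.get(index // 256, 0)
        return (word >> (index % 256)) & 1 == 1

    def mark_executed(self, index: int):
        if self.is_executed(index):
            # Second attempt on the same leaf: reject, whichever path it came from.
            raise ValueError(f"leaf {index} already executed")
        self.words[index // 256] = self.words.get(index // 256, 0) | (1 << (index % 256))
```

Routing both the claim path and the emergency-exit path through the same bit is what makes the "at most once on the destination and at most once via emergency exit" invariant a single check rather than two.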
package/README.md CHANGED
@@ -24,7 +24,7 @@ The base implementation is extended for multiple bridge families so the same pro
 
  Use this repo when the requirement is canonical project-token movement across chains. Do not use it if the project is single-chain or if the bridge assumptions for the target networks are unacceptable.
 
- The main idea is not "bridge the token contract." The main idea is "bridge a Juicebox cash-out claim plus enough information to recreate the project-token position on the remote chain."
+ The main idea is not "bridge the token contract." The main idea is "bridge a Juicebox claim plus enough information to recreate the project-token position on the remote chain."
 
  ## Key Contracts
 
@@ -32,11 +32,7 @@ The main idea is not "bridge the token contract." The main idea is "bridge a Jui
  | --- | --- |
  | `JBSucker` | Base bridge logic for prepare, relay, claim, token mapping, and lifecycle controls. |
  | `JBSuckerRegistry` | Registry for per-project sucker deployments, deployer allowlists, and shared bridge fee settings. |
- | `JBOptimismSucker` | OP Stack bridge implementation. |
- | `JBBaseSucker` | Base-flavored OP Stack implementation. |
- | `JBCeloSucker` | OP Stack implementation adapted for Celo's native asset behavior. |
- | `JBArbitrumSucker` | Arbitrum bridge implementation. |
- | `JBCCIPSucker` | Chainlink CCIP-based implementation for CCIP-connected chains. |
+ | Chain-specific suckers | Transport-specific implementations for OP Stack, Arbitrum, CCIP, and related environments. |
 
  ## Mental Model
 
@@ -50,28 +46,6 @@ That means every bridge path has two trust surfaces:
  - the shared sucker accounting and Merkle logic
  - the bridge-specific transport implementation
 
- The shortest useful reading order is:
-
- | Contract | Description |
- |----------|-------------|
- | [`JBSucker`](src/JBSucker.sol) | Abstract base. Manages outbox/inbox merkle trees, `prepare`/`toRemote`/`claim` lifecycle, token mapping, deprecation, and emergency hatch. Deployed as clones via `Initializable`. Uses `ERC2771Context` for meta-transactions. Has immutable `FEE_PROJECT_ID` (typically project ID 1) and immutable `REGISTRY` reference. Reads the `toRemoteFee` from the registry via `REGISTRY.toRemoteFee()` on each `toRemote()` call. |
- | [`JBCCIPSucker`](src/JBCCIPSucker.sol) | Extends `JBSucker`. Bridges via Chainlink CCIP (`ccipSend`/`ccipReceive`) for chain pairs whose router, selector, and token mapping are configured. Wraps native ETH to WETH before bridging (CCIP only transports ERC-20s) and unwraps on the receiving end. Can map `NATIVE_TOKEN` to ERC-20 addresses on the remote chain (unlike OP/Arbitrum suckers). |
- | [`JBOptimismSucker`](src/JBOptimismSucker.sol) | Extends `JBSucker`. Bridges via OP Standard Bridge + OP Messenger. No `msg.value` required for transport. |
- | [`JBBaseSucker`](src/JBBaseSucker.sol) | Thin wrapper around `JBOptimismSucker` with Base chain IDs (Ethereum 1 <-> Base 8453, Sepolia 11155111 <-> Base Sepolia 84532). |
- | [`JBCeloSucker`](src/JBCeloSucker.sol) | Extends `JBOptimismSucker` for Celo (OP Stack, custom gas token CELO). Wraps native ETH → WETH before bridging as ERC-20. Unwraps received WETH → native ETH via `_addToBalance` override. Removes `NATIVE_TOKEN → NATIVE_TOKEN` restriction. Sends messenger messages with `nativeValue = 0` (Celo's native token is CELO, not ETH). |
- | [`JBArbitrumSucker`](src/JBArbitrumSucker.sol) | Extends `JBSucker`. Bridges via Arbitrum Inbox + Gateway Router. Uses `unsafeCreateRetryableTicket` for L1->L2 (to avoid address aliasing of refund address) and `ArbSys.sendTxToL1` for L2->L1. Requires `msg.value` for L1->L2 transport payment. |
- | [`JBSuckerRegistry`](src/JBSuckerRegistry.sol) | Tracks all suckers per project. Manages deployer allowlist (owner-only). Entry point for `deploySuckersFor`. Can remove deprecated suckers via `removeDeprecatedSucker`. Owns the global `toRemoteFee` (ETH fee in wei, capped at `MAX_TO_REMOTE_FEE` = 0.001 ether), adjustable by the registry owner via `setToRemoteFee()`. All sucker clones read this fee from the registry. Existing-project deployments are deploy-and-map operations, so the registry also needs to be arranged as an authorized `MAP_SUCKER_TOKEN` operator for those projects. |
- | [`JBSuckerDeployer`](src/JBSuckerDeployer.sol) | Abstract base deployer. Clones a singleton sucker via `LibClone.cloneDeterministic` and initializes it. Two-phase setup: `setChainSpecificConstants` then `configureSingleton`. |
- | [`JBCCIPSuckerDeployer`](src/deployers/JBCCIPSuckerDeployer.sol) | Deployer for `JBCCIPSucker`. Stores CCIP router, remote chain ID, and CCIP chain selector. |
- | [`JBOptimismSuckerDeployer`](src/deployers/JBOptimismSuckerDeployer.sol) | Deployer for `JBOptimismSucker`. Stores OP Messenger and OP Bridge addresses. |
- | [`JBBaseSuckerDeployer`](src/deployers/JBBaseSuckerDeployer.sol) | Thin wrapper around `JBOptimismSuckerDeployer` for Base. |
- | [`JBCeloSuckerDeployer`](src/deployers/JBCeloSuckerDeployer.sol) | Deployer for `JBCeloSucker`. Extends `JBOptimismSuckerDeployer` with `wrappedNative` (`IWrappedNativeToken`) storage for the local chain's WETH address. |
- | [`JBArbitrumSuckerDeployer`](src/deployers/JBArbitrumSuckerDeployer.sol) | Deployer for `JBArbitrumSucker`. Stores Arbitrum Inbox, Gateway Router, and layer (`JBLayer.L1` or `JBLayer.L2`). |
- | [`MerkleLib`](src/utils/MerkleLib.sol) | Incremental merkle tree (depth 32, max ~4 billion leaves, modeled on eth2 deposit contract). Used for outbox/inbox trees. `insert` and `root` operate directly on `Tree storage` (not memory copies) to avoid redundant SLOAD/SSTORE round-trips. Gas-optimized with inline assembly for `root()` and `branchRoot()`. |
- | [`CCIPHelper`](src/libraries/CCIPHelper.sol) | CCIP router addresses, chain selectors, and WETH addresses for the chain set currently encoded in this repo. |
- | [`ARBAddresses`](src/libraries/ARBAddresses.sol) | Arbitrum bridge contract addresses (Inbox, Gateway Router) for mainnet and Sepolia. |
- | [`ARBChains`](src/libraries/ARBChains.sol) | Arbitrum chain ID constants. |
-
  ## Read These Files First
 
  1. `src/JBSucker.sol`
@@ -82,18 +56,16 @@ The shortest useful reading order is:
 
  ## Integration Traps
 
- - do not reason about suckers as if they were generic ERC-20 bridges; they are project-token plus treasury-state bridges
- - root ordering and message delivery semantics matter as much as the claim proof format
- - token mapping is part of the economic invariant, not just a convenience config
- - emergency and deprecation paths are not edge tooling; they are part of normal operational safety
+ - do not reason about suckers as if they were generic ERC-20 bridges
+ - root ordering and message delivery semantics matter as much as proof format
+ - token mapping is part of the economic invariant
+ - emergency and deprecation paths are part of normal operational safety
 
  ## Where State Lives
 
- - per-claim and tree progression state live in the sucker pair itself
- - deployment inventory and shared operational config live in `JBSuckerRegistry`
- - bridge transport assumptions live in the chain-specific implementation and its external counterparties
-
- When reviewing a bridge incident, check local state transition correctness before blaming the transport layer.
+ - per-claim and tree progression state: the sucker pair
+ - deployment inventory and shared operational config: `JBSuckerRegistry`
+ - bridge transport assumptions: the chain-specific implementation and its external counterparties
 
  ## High-Signal Tests
 
@@ -149,15 +121,13 @@ script/
 
  ## Risks And Notes
 
- - out-of-order root delivery can make some claims unclaimable until an operator uses an emergency path
+ - out-of-order root delivery can make some claims unavailable until an operator uses an emergency path
  - bridge-specific transport assumptions matter as much as the shared sucker logic
  - token mapping and deprecation controls are governance-sensitive surfaces
  - a bridge that stays live operationally still may not be economically safe for every asset or chain pair
 
- When debugging a bad cross-chain outcome, first decide whether the failure is in claim construction, message transport, inbox/outbox root progression, or remote settlement. Those are different bug classes.
-
  ## For AI Agents
 
- - Do not summarize this repo as a generic token bridge; it bridges Juicebox project positions plus transported value.
- - Always separate shared sucker logic from bridge-specific transport behavior in your explanation.
- - Use the chain-specific implementation and the matching deployer together when answering operational questions.
+ - Do not summarize this repo as a generic token bridge.
+ - Always separate shared sucker logic from bridge-specific transport behavior.
+ - Use the chain-specific implementation and matching deployer together when answering operational questions.
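
The out-of-order root delivery mentioned in the risks above reduces to a newest-nonce-wins rule: a stale root is ignored rather than treated as an error, and claimants must regenerate proofs against whichever root is freshest. A hypothetical Python sketch of that acceptance rule (a model of the behavior described here, not the actual Solidity):

```python
from dataclasses import dataclass

@dataclass
class InboxRoot:
    nonce: int
    root: bytes

class Inbox:
    """Keeps whichever root carries the highest nonce; stale arrivals are no-ops."""

    def __init__(self):
        self.current = InboxRoot(nonce=0, root=b"\x00" * 32)

    def receive_root(self, nonce: int, root: bytes) -> bool:
        if nonce <= self.current.nonce:
            return False  # stale or duplicate message; keep the fresher snapshot
        self.current = InboxRoot(nonce=nonce, root=root)
        return True
```

Under this rule a proof built against an older, replaced root stops verifying, which is why some claims become unavailable until they are regenerated or routed through an emergency path.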
package/SKILLS.md CHANGED
@@ -2,49 +2,24 @@
2
2
 
3
3
  ## Use This File For
4
4
 
5
- - Use this file when the task involves cross-chain project-token bridging, token mapping, Merkle claim flow, bridge-specific transport logic, or sucker registry behavior.
6
- - Start here, then decide whether the issue is in shared accounting, message authentication, token mapping, or operator/deprecation controls. Those concerns live in different layers.
5
+ - Use this file when the task involves cross-chain project-token movement, Merkle-root progression, token mapping, or emergency and deprecation flows.
6
+ - Start here, then decide whether the issue is in shared sucker logic or in a bridge-specific transport implementation.
7
7
 
8
8
  ## Read This Next
9
9
 
10
10
  | If you need... | Open this next |
11
11
  |---|---|
12
- | Repo overview and bridge model | [`README.md`](./README.md), [`ARCHITECTURE.md`](./ARCHITECTURE.md) |
12
+ | Repo overview and architecture | [`README.md`](./README.md), [`ARCHITECTURE.md`](./ARCHITECTURE.md) |
13
13
  | Shared bridge logic | [`src/JBSucker.sol`](./src/JBSucker.sol), [`src/JBSuckerRegistry.sol`](./src/JBSuckerRegistry.sol) |
14
- | Chain-specific transport behavior | [`src/JBArbitrumSucker.sol`](./src/JBArbitrumSucker.sol), [`src/JBOptimismSucker.sol`](./src/JBOptimismSucker.sol), [`src/JBCCIPSucker.sol`](./src/JBCCIPSucker.sol), [`src/JBCeloSucker.sol`](./src/JBCeloSucker.sol) |
15
- | Deployer and transport setup | [`src/deployers/`](./src/deployers/) |
16
- | Merkle and helper logic | [`src/utils/`](./src/utils/), [`src/libraries/`](./src/libraries/) |
17
- | Interop and chain-specific fork coverage | [`test/ForkMainnet.t.sol`](./test/ForkMainnet.t.sol), [`test/ForkArbitrum.t.sol`](./test/ForkArbitrum.t.sol), [`test/ForkCelo.t.sol`](./test/ForkCelo.t.sol), [`test/ForkOPStack.t.sol`](./test/ForkOPStack.t.sol), [`test/InteropCompat.t.sol`](./test/InteropCompat.t.sol) |
18
- | Swap, claim, attack, and regression coverage | [`test/ForkSwap.t.sol`](./test/ForkSwap.t.sol), [`test/ForkClaimMainnet.t.sol`](./test/ForkClaimMainnet.t.sol), [`test/SuckerAttacks.t.sol`](./test/SuckerAttacks.t.sol), [`test/SuckerDeepAttacks.t.sol`](./test/SuckerDeepAttacks.t.sol), [`test/SuckerRegressions.t.sol`](./test/SuckerRegressions.t.sol), [`test/TestAuditGaps.sol`](./test/TestAuditGaps.sol) |
-
- ## Repo Map
-
- | Area | Where to look |
- |---|---|
- | Base contracts | [`src/JBSucker.sol`](./src/JBSucker.sol), [`src/JBSuckerRegistry.sol`](./src/JBSuckerRegistry.sol) |
- | Chain-specific implementations and deployers | [`src/`](./src/), [`src/deployers/`](./src/deployers/) |
- | Libraries, utils, and types | [`src/libraries/`](./src/libraries/), [`src/utils/`](./src/utils/), [`src/interfaces/`](./src/interfaces/), [`src/structs/`](./src/structs/), [`src/enums/`](./src/enums/) |
- | Scripts | [`script/`](./script/) |
- | Tests | [`test/`](./test/) |
+ | Merkle logic | [`src/utils/MerkleLib.sol`](./src/utils/MerkleLib.sol) |
+ | Bridge-specific behavior | the matching implementation and deployer under [`src/`](./src/) and [`src/deployers/`](./src/deployers/) |
 
  ## Purpose
 
- Cross-chain bridge layer for Juicebox project tokens and the terminal assets that back them. Suckers package local burn or claim state into Merkle roots, relay those roots across bridge transports, and let users recreate the position on the remote chain.
-
- ## Reference Files
-
- - Open [`references/runtime.md`](./references/runtime.md) when you need the base claim flow, registry role, token mapping model, or the main bridge invariants.
- - Open [`references/operations.md`](./references/operations.md) when you need deployer and transport-selection guidance, deprecation and emergency behavior, or the common stale-data traps around bridge configuration.
+ Canonical cross-chain movement layer for Juicebox project positions.
 
  ## Working Rules
 
- - Start in [`src/JBSucker.sol`](./src/JBSucker.sol) for shared accounting and claim flow, then move to the chain-specific implementation only after you know the base path is correct.
- - `JBSucker` explicitly does not support fee-on-transfer or rebasing tokens. If a bug report involves those assets, treat it as an unsupported-path question first.
- - Root progression, peer supply, and peer surplus snapshots are part of economic correctness, not just bridge bookkeeping.
- - Token mapping is intentionally one-way after real activity starts. Disabling a mapping is allowed; remapping to a different remote asset is not.
- - Peer symmetry depends on deployer and salt assumptions as well as runtime code. A bridge bug can start in deployment shape before it appears in message flow.
- - Treat token mapping, root progression, and emergency/deprecation controls as first-class runtime behavior, not admin-only side tooling.
- - When debugging a bridge incident, separate accounting correctness from transport correctness before patching.
- - Message authentication is delegated to bridge-specific subclasses. When reviewing a new transport, `_isRemotePeer` is one of the first things to inspect.
- - Emergency exit and deprecation behavior are intentionally conservative. Some failure modes lock funds rather than risking double-spend.
- - If a task touches project deployment shape, check whether the real source is `nana-omnichain-deployers-v6` or `revnet-core-v6` instead of the sucker implementation itself.
+ - Start in `JBSucker` for shared lifecycle logic.
+ - Separate Merkle bookkeeping from bridge-specific transport assumptions.
+ - Treat token mapping, deprecation, and emergency hatch behavior as core safety surfaces.
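The rules above lean on the outbox/inbox Merkle lifecycle: a prepared claim becomes a leaf, the root is relayed, and the remote side verifies an inclusion proof. As an illustrative model only (this is not the repo's `MerkleLib`, which uses its own keccak-based incremental tree; `sha256` stands in for the hash), the shape looks like:

```python
import hashlib

def h(data: bytes) -> bytes:
    # Illustrative hash only; the on-chain tree uses its own keccak-based scheme.
    return hashlib.sha256(data).digest()

def parent(left: bytes, right: bytes) -> bytes:
    return h(left + right)

def root_of(leaves: list[bytes]) -> bytes:
    # Outbox side: every prepared claim is a leaf; the root summarizes all of them.
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # pad odd levels by duplicating the last node
        level = [parent(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def proof_for(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    # Sibling path from the leaf up to the root; the bool marks a left-side sibling.
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        level = [parent(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    # Inbox side: recompute the root from the leaf and its sibling path.
    node = h(leaf)
    for sibling, is_left in proof:
        node = parent(sibling, node) if is_left else parent(node, sibling)
    return node == root

leaves = [b"claim:alice:100", b"claim:bob:250", b"claim:carol:7"]
root = root_of(leaves)  # this is what gets relayed to the peer chain
assert verify(b"claim:bob:250", proof_for(leaves, 1), root)
assert not verify(b"claim:bob:999", proof_for(leaves, 1), root)
```

The point of the sketch: the relayed root commits to every outstanding claim at once, so relaying is cheap and per-claim verification happens lazily on the destination chain.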
package/USER_JOURNEYS.md CHANGED
@@ -2,141 +2,49 @@
 
  ## Repo Purpose
 
- This repo bridges Juicebox project-token positions and their treasury-backed claim semantics across chains.
- It is not a generic proxy terminal and not a generic ERC-20 bridge. The important unit is the project position and the
- explicit bridge lifecycle around `prepare`, `toRemote`, and claim.
+ This repo lets a Juicebox project move a claimable position from one chain to another.
 
  ## Primary Actors
 
- - projects that want canonical cross-chain movement of project-token positions
- - operators deploying and registering sucker pairs on supported bridge families
- - users bridging a project position from one chain to another
- - teams responsible for bridge fees, token mappings, deprecation, and emergency controls
+ - users bridging project positions
+ - operators relaying roots and managing emergency or deprecation flows
+ - auditors checking Merkle progression and token mapping correctness
 
- ## Key Surfaces
+ ## Journey 1: Prepare And Relay A Claim
 
- - `JBSucker`: shared base lifecycle for preparing, relaying, and claiming bridge leaves
- - `JBSuckerRegistry`: registry for sucker deployments, deployer allowlists, and shared fee settings
- - `JBOptimismSucker`, `JBBaseSucker`, `JBCeloSucker`, `JBArbitrumSucker`, `JBCCIPSucker`, `JBSwapCCIPSucker`: bridge-family implementations
+ **Actor:** user or relayer.
 
- ## Journey 1: Launch A Cross-Chain Sucker Pair For A Project
-
- **Actor:** operator or deployer.
-
- **Intent:** deploy and register the paired bridge surfaces a project will rely on across chains.
-
- **Preconditions**
- - the project exists on multiple chains or plans to
- - the team has chosen the bridge family it trusts
-
- **Main Flow**
- 1. Choose the chain-specific sucker implementation and deployer, such as Arbitrum, OP Stack, Celo, or CCIP.
- 2. Configure token mappings, bridge counterparties, and per-project registry state in `JBSuckerRegistry`.
- 3. Deploy the pair so each side knows its remote peer and expected transport assumptions.
- 4. Frontends and operators can now reason about the bridge as a known project surface instead of ad hoc per-transfer logic.
-
- **Failure Modes**
- - paired deployments disagree about counterparties or token mappings
- - teams deploy the right contracts but never register the resulting pair coherently
-
- **Postconditions**
- - paired suckers are deployed, registered, and ready to transport claims between the chains they serve
-
- ## Journey 2: Bridge A Position From One Chain To Another
-
- **Actor:** user bridging a position.
-
- **Intent:** move project-token exposure from the source chain to the destination chain.
-
- **Preconditions**
- - a user holds project-token exposure on the source chain
- - the project has a supported destination-side sucker path
-
- **Main Flow**
- 1. The user calls `prepare` on the source-chain sucker to burn or lock the relevant local position into a claimable leaf.
- 2. The source sucker appends that leaf into its Merkle outbox tree.
- 3. Someone relays the new root to the remote chain using `toRemote`.
- 4. The claimant proves inclusion against the remote inbox tree and receives the recreated project-token position there.
-
- **Failure Modes**
- - token mappings are wrong for the project or chain pair
- - transport-layer fees are missing and roots never arrive
- - operators assume the bridge is generic ERC-20 transport rather than project-position transport
-
- **Postconditions**
- - the source position becomes a claim, the claim is relayed, and the destination position is minted after proof verification
-
- ## Journey 3: Map Treasury Assets And Project Tokens Correctly Across Chains
-
- **Actor:** operator mapping assets and wrappers.
-
- **Intent:** preserve economic meaning across chains instead of bridging into the wrong wrapped exposure.
-
- **Preconditions**
- - the project supports multiple assets or wrappers across chains
- - users should be able to bridge without silent economic mismatch
+ **Intent:** burn locally and make the position claimable remotely.
 
  **Main Flow**
- 1. Configure remote token metadata and mapping with the sucker pair.
- 2. Make sure the destination chain can mint or settle the project-token representation the bridge expects.
- 3. Audit chain-specific native-asset handling, especially on Celo or other non-identical environments.
-
- **Failure Modes**
- - local and remote wrappers look similar but settle into different economics
- - chain-specific native-asset assumptions are copied across environments where they do not hold
-
- **Postconditions**
- - the remote claim recreates the intended exposure instead of a superficially similar but economically different asset
+ 1. A user calls `prepare`.
+ 2. The claim enters the local outbox tree.
+ 3. A relayer sends the current root to the peer chain with `toRemote`.
 
- ## Journey 4: Operate The Bridge Safely Over Time
+ ## Journey 2: Claim Remotely
 
- **Actor:** bridge operator.
+ **Actor:** claimant.
 
- **Intent:** keep registry config, fees, deprecation, and bridge-family assumptions coherent after launch.
-
- **Preconditions**
- - the bridge is live and now needs operational stewardship rather than just deployment
+ **Intent:** prove inclusion on the remote chain and mint the corresponding position.
 
  **Main Flow**
- 1. Use `JBSuckerRegistry` to manage deployer allowlists and shared operational config.
- 2. Watch fee fallback paths and transport assumptions because delivery failure is part of the intended threat model.
- 3. Use deprecation or emergency surfaces when a bridge family or remote destination should no longer be used.
-
- **Failure Modes**
- - fee policy drifts from actual transport costs and claims stop delivering
- - bridge-family deprecation is delayed even after counterparties or fees become unsafe
-
- **Postconditions**
- - fee policy, deprecation, trusted counterparties, and emergency paths remain coherent as conditions change
-
- ## Journey 5: Recover Value Through The Emergency Hatch When Normal Delivery Breaks
+ 1. Fetch a proof against the current inbox root.
+ 2. Call the remote claim path.
+ 3. The remote side verifies the proof and recreates the intended position.
 
- **Actor:** user or responder handling a broken delivery path.
+ ## Journey 3: Use Emergency Or Deprecation Paths
 
- **Intent:** recover value when the normal bridge delivery path is unavailable.
+ **Actor:** operator or project authority.
 
- **Preconditions**
- - a claim cannot complete through the normal inbox or remote-delivery path
+ **Intent:** recover from broken or deprecated bridge conditions.
 
  **Main Flow**
- 1. Enable or enter the emergency mode the sucker pair exposes for the affected path.
- 2. Use `exitThroughEmergencyHatch(...)` with the relevant claim data.
- 3. Treat emergency execution slots as distinct state that still must not allow the same economic position to be claimed twice.
-
- **Failure Modes**
- - teams use the emergency path prematurely instead of as a documented recovery mode
- - claim state is not checked carefully and responders risk inconsistent double-claim assumptions
-
- **Postconditions**
- - users can recover through the explicit emergency mechanism without double-spending the same claim
+ 1. Enable the relevant emergency or deprecation path.
+ 2. Stop relying on the broken route.
+ 3. Recover only through the allowed recovery surface.
 
  ## Trust Boundaries
 
- - this repo trusts both the shared sucker accounting logic and the selected bridge-family transport
- - token mapping and registry governance are part of the economic safety model
- - emergency and deprecation controls are operationally important, not just last-resort tooling
-
- ## Hand-Offs
+ - shared claim logic and transport behavior are separate concerns
+ - non-atomic cross-chain flows are normal, not exceptional
 
- - Use [nana-omnichain-deployers-v6](../nana-omnichain-deployers-v6/USER_JOURNEYS.md) when a project wants suckers packaged into its launch flow instead of deployed separately.
- - Use [nana-core-v6](../nana-core-v6/USER_JOURNEYS.md) or [revnet-core-v6](../revnet-core-v6/USER_JOURNEYS.md) for the treasury and runtime project behavior that suckers transport across chains.
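Both the normal claim path and the emergency path in the journeys above share one invariant: a leaf may execute at most once, whichever route it takes. A toy replay-protection model (illustrative names only, not the contracts' storage layout) captures the rule:

```python
class InboxSketch:
    """Toy model of claim replay protection: each leaf executes at most once,
    whether through the normal claim path or an emergency hatch.
    Illustrative only; names and shapes do not come from the contracts."""

    def __init__(self):
        self.executed = set()  # leaf hashes that have already paid out

    def claim(self, leaf_hash: bytes, proof_ok: bool) -> bool:
        # Reject bad proofs and any leaf that has already executed.
        if not proof_ok or leaf_hash in self.executed:
            return False
        self.executed.add(leaf_hash)
        return True

inbox = InboxSketch()
assert inbox.claim(b"leaf-1", proof_ok=True)      # first claim succeeds
assert not inbox.claim(b"leaf-1", proof_ok=True)  # replay is rejected
assert not inbox.claim(b"leaf-2", proof_ok=False) # bad proof is rejected
```

This is why the emergency hatch is described as a distinct recovery surface rather than a bypass: it must consult the same executed-leaf state as the normal path.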
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@bananapus/suckers-v6",
-   "version": "0.0.26",
+   "version": "0.0.27",
    "license": "MIT",
    "repository": {
      "type": "git",
@@ -327,13 +327,11 @@ library JBSwapPoolLib {
             config: config, normalizedTokenIn: normalizedTokenIn, normalizedTokenOut: normalizedTokenOut
         });
 
-        // Prefer V4 if it has more liquidity, but only if V4 has a hook or V3 had no liquidity.
+        // Select the V4 pool if it has strictly more liquidity than the best V3 pool.
         if (v4Liquidity > bestLiquidity) {
-            if (address(v4Candidate.hooks) != address(0) || bestLiquidity == 0) {
-                isV4 = true;
-                v3Pool = IUniswapV3Pool(address(0));
-                v4Key = v4Candidate;
-            }
+            isV4 = true;
+            v3Pool = IUniswapV3Pool(address(0));
+            v4Key = v4Candidate;
         }
     }
 }
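The hunk above replaces a compound guard with a plain strict comparison. A small sketch of the before and after predicates (hypothetical helper names, not from the library) makes the unblocked case explicit:

```python
def select_v4_old(v4_liquidity: int, best_v3_liquidity: int, v4_has_hook: bool) -> bool:
    # Pre-fix: a hookless V4 pool was blocked unless V3 had no liquidity at all.
    return v4_liquidity > best_v3_liquidity and (v4_has_hook or best_v3_liquidity == 0)

def select_v4_new(v4_liquidity: int, best_v3_liquidity: int, v4_has_hook: bool) -> bool:
    # Post-fix: strictly deeper V4 liquidity wins, hook or not.
    return v4_liquidity > best_v3_liquidity

# The case the fix unblocks: hookless V4 with deep liquidity vs. V3 dust.
assert select_v4_old(10**24, 1, False) is False
assert select_v4_new(10**24, 1, False) is True
# Tie-break unchanged: equal liquidity still selects V3, since V4 needs strictly more.
assert select_v4_new(500, 500, True) is False
```

The strict `>` (rather than `>=`) is what preserves the V3-wins-on-tie behavior the new test file checks below.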
@@ -0,0 +1,366 @@
+ // SPDX-License-Identifier: MIT
+ pragma solidity 0.8.28;
+
+ // forge-lint: disable-next-line(unaliased-plain-import)
+ import "forge-std/Test.sol";
+
+ import {IUniswapV3Factory} from "@uniswap/v3-core/contracts/interfaces/IUniswapV3Factory.sol";
+ import {IUniswapV3Pool} from "@uniswap/v3-core/contracts/interfaces/IUniswapV3Pool.sol";
+ import {IUniswapV3PoolState} from "@uniswap/v3-core/contracts/interfaces/pool/IUniswapV3PoolState.sol";
+ import {IPoolManager} from "@uniswap/v4-core/src/interfaces/IPoolManager.sol";
+ import {Currency} from "@uniswap/v4-core/src/types/Currency.sol";
+ import {IHooks} from "@uniswap/v4-core/src/interfaces/IHooks.sol";
+ import {PoolId, PoolIdLibrary} from "@uniswap/v4-core/src/types/PoolId.sol";
+ import {PoolKey} from "@uniswap/v4-core/src/types/PoolKey.sol";
+ import {StateLibrary} from "@uniswap/v4-core/src/libraries/StateLibrary.sol";
+
+ import {JBSwapPoolLib} from "../../src/libraries/JBSwapPoolLib.sol";
+
+ /// @notice Mock V4 PoolManager that stores slot data for pool state queries via extsload.
+ contract MockPoolManager {
+     using PoolIdLibrary for PoolKey;
+
+     /// @dev Storage slot for pools mapping (matches StateLibrary.POOLS_SLOT).
+     bytes32 private constant POOLS_SLOT = bytes32(uint256(6));
+     /// @dev Offset for liquidity within Pool.State (matches StateLibrary.LIQUIDITY_OFFSET).
+     uint256 private constant LIQUIDITY_OFFSET = 3;
+
+     /// @dev Arbitrary storage mapping: slot => value.
+     mapping(bytes32 => bytes32) private _slots;
+
+     /// @notice Configure a pool's slot0 (sqrtPriceX96, tick, protocolFee, lpFee) and liquidity.
+     /// @param key The pool key to configure.
+     /// @param sqrtPriceX96 The sqrt price (non-zero means initialized).
+     /// @param liquidity The in-range liquidity.
+     function setPool(PoolKey memory key, uint160 sqrtPriceX96, uint128 liquidity) external {
+         PoolId id = key.toId();
+         bytes32 stateSlot = keccak256(abi.encodePacked(PoolId.unwrap(id), POOLS_SLOT));
+
+         // Pack slot0: sqrtPriceX96 in bottom 160 bits, tick=0 in next 24, fees=0 in upper.
+         _slots[stateSlot] = bytes32(uint256(sqrtPriceX96));
+
+         // Pack liquidity at offset 3.
+         bytes32 liquiditySlot = bytes32(uint256(stateSlot) + LIQUIDITY_OFFSET);
+         _slots[liquiditySlot] = bytes32(uint256(liquidity));
+     }
+
+     /// @notice Implements IExtsload.extsload for StateLibrary compatibility.
+     function extsload(bytes32 slot) external view returns (bytes32) {
+         return _slots[slot];
+     }
+
+     /// @notice Multi-slot extsload (not used in pool discovery but required by interface).
+     function extsload(bytes32 startSlot, uint256 nSlots) external view returns (bytes32[] memory values) {
+         values = new bytes32[](nSlots);
+         for (uint256 i; i < nSlots; i++) {
+             values[i] = _slots[bytes32(uint256(startSlot) + i)];
+         }
+     }
+
+     /// @notice Array extsload (not used in pool discovery but required by interface).
+     function extsload(bytes32[] calldata slots) external view returns (bytes32[] memory values) {
+         values = new bytes32[](slots.length);
+         for (uint256 i; i < slots.length; i++) {
+             values[i] = _slots[slots[i]];
+         }
+     }
+ }
+
+ /// @notice Harness contract that exposes JBSwapPoolLib.discoverPool for unit testing.
+ contract PoolDiscoveryHarness {
+     function discoverPool(
+         JBSwapPoolLib.SwapConfig memory config,
+         address normalizedTokenIn,
+         address normalizedTokenOut
+     )
+         external
+         view
+         returns (bool isV4, IUniswapV3Pool v3Pool, PoolKey memory v4Key)
+     {
+         return JBSwapPoolLib.discoverPool(config, normalizedTokenIn, normalizedTokenOut);
+     }
+ }
+
+ /// @title JBSwapPoolLib_PoolDiscoveryTest
+ /// @notice Unit tests for the M-1 audit fix: V3/V4 pool preference logic in _discoverPool.
+ /// @dev The fix removed the V3 preference guard that blocked hookless V4 pools from being selected
+ /// even when they had deeper liquidity than any V3 pool.
+ contract JBSwapPoolLib_PoolDiscoveryTest is Test {
+     using PoolIdLibrary for PoolKey;
+
+     // Test addresses.
+     address constant TOKEN_A = address(0xA);
+     address constant TOKEN_B = address(0xB);
+     address constant WETH = address(0xC);
+     address constant HOOK_ADDR = address(0xD);
+
+     // Mock contracts.
+     address v3Factory;
+     MockPoolManager poolManager;
+     PoolDiscoveryHarness harness;
+
+     // Precomputed V3 pool addresses (one per fee tier).
+     address v3Pool3000;
+     address v3Pool500;
+     address v3Pool10000;
+     address v3Pool100;
+
+     function setUp() public {
+         v3Factory = makeAddr("v3Factory");
+         poolManager = new MockPoolManager();
+         harness = new PoolDiscoveryHarness();
+
+         // Create V3 pool addresses.
+         v3Pool3000 = makeAddr("v3Pool3000");
+         v3Pool500 = makeAddr("v3Pool500");
+         v3Pool10000 = makeAddr("v3Pool10000");
+         v3Pool100 = makeAddr("v3Pool100");
+
+         // Default: all V3 factory getPool calls return address(0) (no pool).
+         vm.mockCall(v3Factory, abi.encodeWithSelector(IUniswapV3Factory.getPool.selector), abi.encode(address(0)));
+     }
+
+     // =========================================================================
+     // Helpers
+     // =========================================================================
+
+     /// @dev Configure a V3 pool at a specific fee tier with given liquidity.
+     function _setupV3Pool(address pool, uint24 fee, uint128 liquidity) internal {
+         // Mock the factory to return this pool for the given fee tier.
+         vm.mockCall(
+             v3Factory,
+             abi.encodeWithSelector(IUniswapV3Factory.getPool.selector, TOKEN_A, TOKEN_B, fee),
+             abi.encode(pool)
+         );
+         // Also mock the reverse token ordering (factory is commutative).
+         vm.mockCall(
+             v3Factory,
+             abi.encodeWithSelector(IUniswapV3Factory.getPool.selector, TOKEN_B, TOKEN_A, fee),
+             abi.encode(pool)
+         );
+         // Mock the pool's liquidity.
+         vm.mockCall(pool, abi.encodeWithSelector(IUniswapV3PoolState.liquidity.selector), abi.encode(liquidity));
+     }
+
+     /// @dev Configure a V4 pool (hookless) with given liquidity.
+     function _setupV4HooklessPool(uint24 fee, int24 tickSpacing, uint128 liquidity) internal {
+         // Sort tokens for V4 convention (no WETH conversion needed here since neither is WETH).
+         (address sorted0, address sorted1) = TOKEN_A < TOKEN_B ? (TOKEN_A, TOKEN_B) : (TOKEN_B, TOKEN_A);
+
+         PoolKey memory key = PoolKey({
+             currency0: Currency.wrap(sorted0),
+             currency1: Currency.wrap(sorted1),
+             fee: fee,
+             tickSpacing: tickSpacing,
+             hooks: IHooks(address(0))
+         });
+
+         // Set a non-zero sqrtPriceX96 to indicate the pool is initialized, and set liquidity.
+         poolManager.setPool(key, 1 << 96, liquidity); // sqrtPriceX96 = 2^96 (price = 1)
+     }
+
+     /// @dev Configure a V4 pool with a hook and given liquidity.
+     function _setupV4HookedPool(address hook, uint24 fee, int24 tickSpacing, uint128 liquidity) internal {
+         (address sorted0, address sorted1) = TOKEN_A < TOKEN_B ? (TOKEN_A, TOKEN_B) : (TOKEN_B, TOKEN_A);
+
+         PoolKey memory key = PoolKey({
+             currency0: Currency.wrap(sorted0),
+             currency1: Currency.wrap(sorted1),
+             fee: fee,
+             tickSpacing: tickSpacing,
+             hooks: IHooks(hook)
+         });
+
+         poolManager.setPool(key, 1 << 96, liquidity);
+     }
+
+     /// @dev Build a SwapConfig pointing at our mocks.
+     function _config() internal view returns (JBSwapPoolLib.SwapConfig memory) {
+         return JBSwapPoolLib.SwapConfig({
+             v3Factory: IUniswapV3Factory(v3Factory),
+             poolManager: IPoolManager(address(poolManager)),
+             univ4Hook: HOOK_ADDR,
+             weth: WETH
+         });
+     }
+
+     // =========================================================================
+     // Test 1: V3 dust liquidity, V4 deep liquidity => V4 selected
+     // (This was the broken case before the M-1 fix)
+     // =========================================================================
+
+     /// @notice When V3 has dust liquidity (1 wei) and hookless V4 has deep liquidity,
+     /// V4 should be selected. Before the fix, V3 would win because hookless V4 was blocked.
+     function test_poolDiscovery_v4HooklessBeatsV3Dust() public {
+         // V3 has dust liquidity at 0.3% fee tier.
+         _setupV3Pool(v3Pool3000, 3000, 1);
+
+         // Hookless V4 has deep liquidity at 0.3% fee tier (fee=3000, tickSpacing=60).
+         _setupV4HooklessPool(3000, 60, 1_000_000e18);
+
+         (bool isV4, IUniswapV3Pool v3Pool,) = harness.discoverPool(_config(), TOKEN_A, TOKEN_B);
+
+         assertTrue(isV4, "V4 hookless pool with deep liquidity should be selected over V3 dust");
+         assertEq(address(v3Pool), address(0), "V3 pool should be cleared when V4 wins");
+     }
+
+     // =========================================================================
+     // Test 2: V3 deeper liquidity than V4 => V3 still selected
+     // =========================================================================
+
+     /// @notice When V3 has deeper liquidity than V4, V3 should still be selected.
+     function test_poolDiscovery_v3DeeperThanV4() public {
+         // V3 has deep liquidity.
+         _setupV3Pool(v3Pool3000, 3000, 1_000_000e18);
+
+         // Hookless V4 has less liquidity.
+         _setupV4HooklessPool(3000, 60, 500_000e18);
+
+         (bool isV4, IUniswapV3Pool v3Pool,) = harness.discoverPool(_config(), TOKEN_A, TOKEN_B);
+
+         assertFalse(isV4, "V3 should be selected when it has deeper liquidity");
+         assertEq(address(v3Pool), v3Pool3000, "Best V3 pool should be returned");
+     }
+
+     // =========================================================================
+     // Test 3: Equal liquidity => V3 wins (tie-break behavior)
+     // =========================================================================
+
+     /// @notice When V3 and V4 have equal liquidity, V3 wins because V4 requires
+     /// strictly greater liquidity (> not >=).
+     function test_poolDiscovery_equalLiquidity_v3Wins() public {
+         uint128 sameLiquidity = 500_000e18;
+
+         // V3 and V4 both at same liquidity.
+         _setupV3Pool(v3Pool3000, 3000, sameLiquidity);
+         _setupV4HooklessPool(3000, 60, sameLiquidity);
+
+         (bool isV4, IUniswapV3Pool v3Pool,) = harness.discoverPool(_config(), TOKEN_A, TOKEN_B);
+
+         assertFalse(isV4, "V3 should win on equal liquidity (V4 needs strictly more)");
+         assertEq(address(v3Pool), v3Pool3000, "V3 pool should be returned on tie");
+     }
+
+     // =========================================================================
+     // Test 4: V3 zero liquidity, V4 has liquidity => V4 selected
+     // (Was already working before the fix, regression check)
+     // =========================================================================
+
+     /// @notice When V3 has zero liquidity and V4 has liquidity, V4 is selected.
+     function test_poolDiscovery_v3ZeroLiquidity_v4Selected() public {
+         // V3 pool exists but has zero liquidity.
+         _setupV3Pool(v3Pool3000, 3000, 0);
+
+         // V4 hookless has some liquidity.
+         _setupV4HooklessPool(3000, 60, 100e18);
+
+         (bool isV4,,) = harness.discoverPool(_config(), TOKEN_A, TOKEN_B);
+
+         assertTrue(isV4, "V4 should be selected when V3 has zero liquidity");
+     }
+
+     // =========================================================================
+     // Test 5: Hookless V4 with more liquidity than V3 (the broken case)
+     // =========================================================================
+
+     /// @notice Edge case: hookless V4 pool with strictly more liquidity than V3 should
+     /// be selected. This was the exact scenario broken before the M-1 fix — the old
+     /// code required V4 to have a hook OR V3 to have zero liquidity.
+     function test_poolDiscovery_hooklessV4MoreLiquidityThanV3() public {
+         // V3 has moderate liquidity.
+         _setupV3Pool(v3Pool500, 500, 100_000e18);
+
+         // Hookless V4 at 0.05% tier has more liquidity.
+         _setupV4HooklessPool(500, 10, 200_000e18);
+
+         (bool isV4, IUniswapV3Pool v3Pool,) = harness.discoverPool(_config(), TOKEN_A, TOKEN_B);
+
+         assertTrue(isV4, "Hookless V4 with more liquidity must beat V3 (M-1 fix)");
+         assertEq(address(v3Pool), address(0), "V3 pool should be zeroed when V4 wins");
+     }
+
+     // =========================================================================
+     // Additional coverage: hooked V4 pool with more liquidity
+     // =========================================================================
+
+     /// @notice A hooked V4 pool with more liquidity than V3 should also be selected
+     /// (this already worked before the fix, but verify it still works).
+     function test_poolDiscovery_hookedV4BeatsV3() public {
+         // V3 has some liquidity.
+         _setupV3Pool(v3Pool3000, 3000, 50_000e18);
+
+         // Hooked V4 has much more liquidity.
+         _setupV4HookedPool(HOOK_ADDR, 3000, 60, 500_000e18);
+
+         (bool isV4,,) = harness.discoverPool(_config(), TOKEN_A, TOKEN_B);
+
+         assertTrue(isV4, "Hooked V4 with more liquidity should beat V3");
+     }
+
+     // =========================================================================
+     // No V4 pool manager configured => V3 always wins
+     // =========================================================================
+
+     /// @notice When no V4 pool manager is configured, V3 is always selected.
+     function test_poolDiscovery_noPoolManager_v3Only() public {
+         _setupV3Pool(v3Pool3000, 3000, 100e18);
+
+         JBSwapPoolLib.SwapConfig memory config = JBSwapPoolLib.SwapConfig({
+             v3Factory: IUniswapV3Factory(v3Factory),
+             poolManager: IPoolManager(address(0)), // No V4.
+             univ4Hook: HOOK_ADDR,
+             weth: WETH
+         });
+
+         (bool isV4, IUniswapV3Pool v3Pool,) = harness.discoverPool(config, TOKEN_A, TOKEN_B);
+
+         assertFalse(isV4, "Without pool manager, V3 should always be selected");
+         assertEq(address(v3Pool), v3Pool3000, "V3 pool should be returned");
+     }
+
+     // =========================================================================
+     // No pools at all => reverts with NoPool in executeSwap (but discoverPool returns zeros)
+     // =========================================================================
+
+     /// @notice When neither V3 nor V4 has any pools, discoverPool returns zeros.
+     function test_poolDiscovery_noPools_returnsZeros() public {
+         // No pools configured (default setUp has all V3 returning address(0)).
+         (bool isV4, IUniswapV3Pool v3Pool,) = harness.discoverPool(_config(), TOKEN_A, TOKEN_B);
+
+         assertFalse(isV4, "Should not select V4 when no pools exist");
+         assertEq(address(v3Pool), address(0), "No V3 pool should be found");
+     }
+
+     // =========================================================================
+     // Multi-tier: V4 wins on a different tier than V3's best
+     // =========================================================================
+
+     /// @notice V3 best pool is on 0.3% tier, but hookless V4 on 0.05% tier has more liquidity.
+     function test_poolDiscovery_v4WinsOnDifferentTier() public {
+         // V3 at 0.3% has moderate liquidity.
+         _setupV3Pool(v3Pool3000, 3000, 100_000e18);
+
+         // Hookless V4 at 0.05% tier (fee=500, tickSpacing=10) has more.
+         _setupV4HooklessPool(500, 10, 200_000e18);
+
+         (bool isV4,,) = harness.discoverPool(_config(), TOKEN_A, TOKEN_B);
+
+         assertTrue(isV4, "V4 should win even on a different fee tier");
+     }
+
+     // =========================================================================
+     // V4 barely beats V3 (boundary: V4 has 1 more liquidity)
+     // =========================================================================
+
+     /// @notice V4 with exactly 1 more unit of liquidity than V3 should win.
+     function test_poolDiscovery_v4BeatsV3ByOne() public {
+         uint128 v3Liq = 1_000_000;
+
+         _setupV3Pool(v3Pool3000, 3000, v3Liq);
+         _setupV4HooklessPool(3000, 60, v3Liq + 1);
+
+         (bool isV4,,) = harness.discoverPool(_config(), TOKEN_A, TOKEN_B);
+
+         assertTrue(isV4, "V4 should win with strictly more liquidity (by 1 wei)");
+     }
+ }
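The `MockPoolManager` in the new test file answers `extsload` reads by deriving each pool's state slot from the pool id and a base mapping slot, then offsetting to reach the liquidity word. The same storage model can be sketched in Python; note that `hashlib.sha3_256` is only a stand-in for the EVM's `keccak256` (they use different padding), so the byte values differ on-chain while the lookup shape is the same:

```python
import hashlib

# Mirrors the mock's constants: pools mapping at slot 6, liquidity at state offset 3.
POOLS_SLOT = (6).to_bytes(32, "big")
LIQUIDITY_OFFSET = 3

def h(data: bytes) -> bytes:
    # Stand-in hash: sha3_256 is NOT the EVM's keccak256, but the
    # mapping-slot derivation shape is identical.
    return hashlib.sha3_256(data).digest()

class SlotStore:
    """Dict-backed model of the mock's extsload storage."""

    def __init__(self):
        self.slots = {}

    def set_pool(self, pool_id: bytes, sqrt_price_x96: int, liquidity: int) -> None:
        # stateSlot = hash(poolId ++ POOLS_SLOT), as in Solidity's
        # keccak256(abi.encodePacked(...)) mapping-slot derivation.
        state_slot = h(pool_id + POOLS_SLOT)
        self.slots[state_slot] = sqrt_price_x96.to_bytes(32, "big")
        # Liquidity lives a fixed word offset past the state slot.
        liq = (int.from_bytes(state_slot, "big") + LIQUIDITY_OFFSET) % (1 << 256)
        self.slots[liq.to_bytes(32, "big")] = liquidity.to_bytes(32, "big")

    def extsload(self, slot: bytes) -> bytes:
        # Unset slots read as zero, like EVM storage.
        return self.slots.get(slot, b"\x00" * 32)

store = SlotStore()
pool_id = h(b"example-pool-key")
store.set_pool(pool_id, 1 << 96, 1_000_000)

state_slot = h(pool_id + POOLS_SLOT)
liq_slot = ((int.from_bytes(state_slot, "big") + 3) % (1 << 256)).to_bytes(32, "big")
assert int.from_bytes(store.extsload(state_slot), "big") == 1 << 96
assert int.from_bytes(store.extsload(liq_slot), "big") == 1_000_000
```

Because the mock only has to satisfy reads routed through `StateLibrary`, storing two words per pool is enough for the discovery tests above; everything else reads back as zero.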