Skip to content

feat(merge): bitmask-OR mergeable channels for registry concurrency#491

Open
spreston8 wants to merge 21 commits intorust/stagingfrom
feat/bitmask-or-mergeable-channels
Open

feat(merge): bitmask-OR mergeable channels for registry concurrency#491
spreston8 wants to merge 21 commits intorust/stagingfrom
feat/bitmask-or-mergeable-channels

Conversation

@spreston8
Copy link
Copy Markdown
Collaborator

@spreston8 spreston8 commented Apr 29, 2026

Summary

  • Adds MergeType::BitmaskOr for the registry's TreeHashMap interior-node bitmaps. Without it, two registry inserts from sibling blocks that touch the same interior node are treated as conflicting at multi-parent merge — even when the inserts are at different keys and logically commute. One deploy gets rejected. With BitmaskOr, the bitmaps are OR-merged.
  • Fixes the bridge-v2.rho test fixture's missing SystemVault.findOrCreate(bridgeVaultAddr) call at bridge init. Without it, transfers to the bridge orphan their _deposit send and the response chain hangs forever. Required for integration test test_multi_block_state_evolution.
  • Relocates mergeable-tag identity (BITMASK_OR_TAG_* and NON_NEGATIVE_NUMBER_* constants + tag-construction primitives) from casper::genesis into rholang::interpreter::merging::mergeable_tags as a single source of truth, so both production runtime and test runtimes (with_runtime) construct the same tag Par values without duplication or sync tests. Casper genesis methods become 1-line delegates.
  • Documents the orphan-send pitfall, vault-registration pattern, block-report tracing, and the new MergeType::BitmaskOr across docs/rholang/ and docs/casper/README.md.

Context

This PR is Phase J in the merge-fix lifecycle (mergeable TreeHashMap as new mechanism), the optimization that PR #488's design doc deferred as "approved future work." #488 ships the broad fix — conservative DAG rejection + buffer recovery — which makes any rejected deploy eventually recoverable. Phase J / this PR ships the targeted optimization that prevents the most common false-positive conflict (registry interior bitmap writes are commutative) so registry-heavy workloads don't have to round-trip through buffer recovery on every concurrent deploy.

Without Phase J, registry contention under sustained parallel deploy load (e.g. multi-validator concurrent contract deploys, the integration test's test_contract_lifecycle workload) can keep deploys cycling through rejection→buffer→re-execute→rejection past deployLifespan, causing them to expire. Phase J breaks that cycle structurally by recognizing the bitmap commutativity at merge time.

Commits

Test plan

  • cargo check --workspace --all-targets clean (no warnings, no errors)
  • Full workspace tests: 2103 passed, 0 failed, 0 panics (post-merge with rust/staging)
  • bridge_contract_concurrent_merge regression test (added in 577cd123) passes
  • Integration suite test_contract_lifecycle: 7/7 passing, stable across 2 consecutive runs
  • genesis_default_tags_match_rholang_test_utils not needed — single-source refactor eliminates the duplication this would have guarded
  • RUST_LOG=f1r3fly.merge.tag_check=trace confirms BitmaskOr fires on registry interior-node channels during concurrent bridge deployment
  • Merged with rust/staging (post-fix(merge): close merge-stale-diff bug class #488 squash-merge); all conflicts resolved with explicit decisions documented in commit messages

Co-Authored-By: Claude noreply@anthropic.com

spreston8 and others added 18 commits April 22, 2026 13:07
When conflict resolution rejected a deploy chain, diffs from descendant
blocks (computed against the rejected chain's post-state) were still
applied to the LCA base, producing internally inconsistent merged state.
Reproduced at code level via stale_diff_application_corrupts_merged_state.

Rejection expansion: after conflict resolution, walk DAG descendants of
rejected blocks within merge scope and reject affected branches whole.
Conservative-only — no event-log refinement, since event logs miss the
indirect dependencies that cause the bug.

Deploy de-duplication: preemptive dedup on (source_block_number desc,
source_block_hash byte-lex asc). Dormant until the rejected-deploy
recovery mechanism ships.

Foundations:
- source_block_hash and source_block_number on DeployChainIndex
- block_number threaded through BlockIndex::new and its callers
- ConflictSetMerger::merge split into resolve_conflicts +
  compute_merged_state so DagMerger can interpose expansion

Also:
- Hand-rolled Hash impl on DeployChainIndex matching PartialEq (the
  derived Hash covered all fields, violating the hash/eq contract)
- Removed now-dead hash_code and pre_state_hash fields
- KeyValueRejectedDeployBuffer skeleton (will be wired in a follow-up)
- Two pre-existing proof tests marked #[ignore]:
  * concurrent_registry_inserts_should_not_conflict — assertion
    contradicts multi-parent DAG semantics; awaits rewrite
  * finalization_does_not_guarantee_canonical_state — flaky
    precondition under the two-bridge merge setup

Co-Authored-By: Claude <noreply@anthropic.com>
When the merge algorithm drops a deploy from the canonical merged state,
its data is now placed in a new RejectedDeployBuffer so the block creator
can re-propose it in a subsequent block. Previously rejected deploys were
silently lost even though their effects never made it into canonical state.

Buffer: KeyValueRejectedDeployBuffer mirrors KeyValueDeployStorage in
shape and LMDB backing (new "rejected_deploy_buffer" store registered in
RNodeKeyValueStoreManager; shares deploy_storage sizing).

Merge-time populate: dag_merger::merge now returns (sig, source_block_hash)
pairs. compute_parents_post_state groups by source block, fetches each
block once, extracts the Signed<DeployData>, and inserts into the buffer.

Scope awareness: CasperSnapshot carries a new rejected_in_scope DashSet,
populated alongside deploys_in_scope during the ancestor BFS. The cache
key covers both sets under one (generation, LFB) tuple. A lightweight
rejected_deploy_sigs decoder on KeyValueBlockStore returns the sig list
without decoding the full block body.

Re-inclusion filter: prepare_user_deploys unions DeployStorage with
RejectedDeployBuffer and re-includes any valid deploy that is both in
deploys_in_scope and rejected_in_scope — its effects never landed, so
proposing it again is correct.

Finalization cleanup: record_directly_finalized purges from both pools.
Sigs in body.deploys of a finalized block are removed from both storage
and buffer; sigs in body.rejected_deploys of a finalized block are also
removed from the buffer (definitively lost, not recoverable from here).

Co-Authored-By: Claude <noreply@anthropic.com>
…dator justifications

Four divergences from the source-of-truth Scala implementation had
disabled slashing visibility in the Rust node:

- new_latest_messages gated on !invalid, so equivocation blocks never
  became a sender's latest message.
- The sender-advance branch gated on !invalid for the same reason.
- Block-creator justifications used valid_latest_metas (filtered),
  excluding equivocators from the justification set and causing
  justification_follows to reject otherwise-valid blocks.
- max_seq_nums used the filtered set too, omitting equivocators' sequence
  numbers downstream.

With these restored, invalid_latest_messages fires as intended,
prepare_slashing_deploys issues slashes for equivocators, and the
pre-existing multi_parent_casper_should_succeed_at_slashing test passes.

Flips dag_storage_should_not_replace_latest_message_with_invalid_block_from_same_sender
to dag_storage_should_advance_latest_message_to_invalid_block_from_same_sender
with inverted assertions reflecting the corrected behavior.

Co-Authored-By: Claude <noreply@anthropic.com>
When the merge rejects a deploy chain that contains a slash, the slash
effect is silently lost to cost-optimal rejection — SYS_SLASH_DEPLOY_COST
is 0 so any conflicting chain with cost >0 wins, and the equivocator
remains bonded. Attackers can sustain cheap conflicts to starve slashing
indefinitely.

The fix surfaces the rejected slash metadata from the merge step and has
the block creator re-issue any slash not already covered by its own
invalid_latest_messages view. The slash then lands in the merge block's
own body.system_deploys, bypassing cost-optimal rejection on the parents.

The merge pipeline stays pure — no runtime threading, no new validation
surface. Slash re-issuance flows through the existing SlashDeploy
execution path, so determinism invariants are unchanged.

- dag_merger::merge now returns (state, rejected_user_pairs, rejected_slash_pairs),
  splitting rejected pairs by is_slash_deploy_id. Close-block and heartbeat
  system deploys remain intentionally dropped.
- compute_parents_post_state extracts RejectedSlash metadata by reading
  each distinct source block's body.system_deploys once. All slashes within
  a block share a synthetic sig, so one rejected chain represents every
  slash in the source block — iterating body.system_deploys produces the
  right recovery set.
- New casper/src/rust/merging/rejected_slash.rs defines RejectedSlash and
  filter_recoverable, with the dedup key being
  (invalid_block_hash, issuer_public_key). Unit tests cover: own-slash
  covers merge-rejected duplicate (Attack 6), merge-rejected survives when
  uncovered by own (Attack 1), mixed coverage with multiple equivocators
  (Attack 4), issuer discrimination on same equivocator (Attack 7), and
  empty-input regression guard.
- block_creator::create calls compute_parents_post_state once before
  system-deploy construction to surface the rejected slashes, dedups
  against own slashing_deploys, and appends non-duplicates as fresh
  SlashDeploys signed under the proposer's identity. The downstream
  compute_deploys_checkpoint call hits the parents-post-state cache so
  the merge is not re-run.
- ParentsPostStateCacheVal extended to (StateHash, Vec<Bytes>, Vec<RejectedSlash>)
  so cache hits return the full 3-tuple.
- Regression assertion in bridge_query_survives_multi_parent_merge
  confirms non-slash merges surface an empty rejected_slashes list.

Co-Authored-By: Claude <noreply@anthropic.com>
Adds a canonical-state finalization status API for deploys, replacing
block-hash polling. After the merge fix, a block can finalize while some
of its deploys' effects were dropped by merge rejection — polling by
block hash returns true even though canonical state disagrees. Polling
by deploy sig via this API correctly reports the effect's presence in
canonical state.

States follow the design decision:

- Finalized — sig in a finalized block's body.deploys with is_failed=false,
  and not in any finalized descendant's body.rejected_deploys
- Failed    — sig in a finalized block with is_failed=true (explicit
  runtime failure)
- Pending   — sig alive: in deploy storage, in a non-finalized block, in
  the rejected-deploy buffer awaiting re-proposal, or rejected after
  finalization and awaiting canonical recovery
- Expired   — valid_after_block_number + deployLifespan elapsed without
  canonical inclusion

Response carries `state`, `rejection_count`, and `latest_block_hash`
(optional).

Architecture: single-pass canonical-chain walk from LFB backward for
deployLifespan blocks. For each block: check body.deploys for a clean or
failed match, check body.rejected_deploys for a sig match. Track the
highest-height observation for `latest_block_hash`, count rejection
occurrences for `rejection_count`, and resolve the terminal state from
the observations. Uses the lightweight rejected_deploy_sigs decoder to
avoid full body decode on the rejection-check arm.

Defensive error handling:

- Storage errors during first-seen block fetch → propagated as API error
- Missing block body when sig is indexed → warn log + Pending_unknown
- Sig indexed but absent from body.deploys → API error (state inconsistency)
- LFB with no block_number entry → API error (invariant violation)
- Blocks missing from store during scan → warn log + continue (scan
  robustness over hard failure; result may be incomplete)

Trait addition: `Casper::casper_shard_conf() -> &CasperShardConf` to give
BlockAPI access to deployLifespan. Impls added on MultiParentCasperImpl
and both NoOpsCasperEffect test stubs.

gRPC surface:
- DeployServiceCommon.proto: DeployFinalizationStatusQuery message,
  DeployFinalizationStateProto enum, DeployFinalizationStatusInfo message
  (with optional latestBlockHash for explicit absent/present)
- DeployServiceV1.proto: rpc deployFinalizationStatus +
  DeployFinalizationStatusResponse
- node/src/rust/api/deploy_grpc_service_v1.rs: server handler delegating
  to BlockAPI

HTTP surface:
- node/src/rust/api/web_api.rs: WebApi trait method +
  DeployFinalizationStatusJson with Option<String> for latest_block_hash
  so JSON serializes null when absent
- node/src/rust/web/web_api_routes.rs: GET
  /api/deploy-finalization-status/{deploy_sig_hex}

Tests:
- casper lib tests (2): state enum construction, state distinctness
- casper integration smoke test (1): unknown_sig_returns_pending_with_empty_fields
  exercises the full EngineCell → BlockAPI path

Performance: zero background cost; O(deployLifespan) block-sig reads per
query, dominated by proto decode on the lightweight rejected_deploy_sigs
decoder. Sub-millisecond for typical lifespans.

Consensus safety: read-only API, no new attack surface, no new storage,
no new trait methods beyond the shard_conf getter.

Deep end-to-end tests (Finalized, Failed, Expired, nonzero rejection
count) require real equivocation + merge-rejection fixtures and are
deferred.

Co-Authored-By: Claude <noreply@anthropic.com>
…status

Catching-up validators replay historical blocks to get to the current
tip. For each block with non-empty body.rejected_deploys, the buffer-
population path extracts the rejected sigs' DeployData and adds them to
the local rejected-deploy buffer for re-proposal. Without a status
check, this admits sigs that have already been re-proposed and
finalized elsewhere in the chain, or sigs past their deployLifespan.

Two failure modes:

- Double-execution of already-finalized work. A rejected sig is added to
  the local buffer; on the validator's next proposal round, the buffer
  read includes the deploy; the new block contains the deploy; dedup
  picks the new proposal over the older finalized copy within merge
  scope; the merge produces a re-execution of canonical work against a
  different pre-state. Effects diverge. Consensus forks.

- Past-lifespan noise. The buffer read filter drops past-lifespan sigs
  at proposal time, but the entries still accumulate and churn through
  storage.

Fix: before admitting each sig to the buffer, run the deploy
finalization status resolver. Admit only if the state is Pending. Skip
Finalized / Failed / Expired — those sigs are terminally resolved in
the local canonical view and must not be re-proposed.

The gate is unconditional — not "catchup mode" flagged. A live merge
that re-emits a canonically-finalized sig would be equally unsafe; the
same gate defends against both.

Implementation:

- Extracted BlockAPI::deploy_finalization_status's algorithm into a
  pure function `deploy_finalization_status::resolve(dag, block_store,
  deploy_lifespan, sig)`. The async BlockAPI method now reduces to a
  thin wrapper that unwraps the engine cell and delegates. This makes
  the resolver callable from compute_parents_post_state without
  threading an EngineCell through the merge layer.

- Added should_admit_to_rejected_buffer helper in interpreter_util.rs
  that calls resolve and applies the admit rule. Conservative
  skip-on-error: transient storage failures skip the sig with a warn
  log; consistency errors skip with a warn log. Never admit on error —
  admit-on-error would reintroduce the double-execution bug under
  flaky storage.

- Wired the helper into compute_parents_post_state's buffer-populate
  block as a single predicate call, replacing the direct push.

Tests:
- Pure-resolver direct call: resolve_pure_function_returns_pending_for_unknown_sig
  verifies the extracted function is callable from a non-engine-cell
  context.

Deferred to later test work:
- Integration test exercising the gate-skips-finalized path (needs a
  fixture that produces merge rejection AND later finalization of the
  same sig — overlaps with equivocation + merge-rejection work).
- Full multi-node catchup simulation.

Consensus safety: the gate is a strict reduction of what enters the
buffer. Never adds sigs that weren't there; only drops sigs with a
terminal status in the current canonical view. Deterministic per
validator's DAG view.

Performance: O(deployLifespan) block reads per admit decision. For
typical rejection rates (0-3 per merge, lifespan ~50) this is sub-ms.
Full catchup of 1000 historical blocks with average 2 rejections each
adds ~100K block reads cumulatively — seconds of wall time.

Co-Authored-By: Claude <noreply@anthropic.com>
…expansion

Integrates rust/staging API redesign (isFinalized, unified DeployResponse
with ViewMode, high-level query endpoints, removal of deprecated
transactions + listenForDataAtName) with the Phase A-G merge-stale-diff fix.

Conflict resolutions:
- block_api.rs: dropped unused MAX_FAULT_TOLERANCE import
- deploy_grpc_service_v1.rs: dropped listen_for_data_at_name handler,
  kept Phase F deploy_finalization_status handler
- web_api.rs: dropped get_transaction and DeployDetailResponse, took
  unified find_deploy refactor, kept Phase F additions alongside
  rust/staging's new query methods
- web_api_routes.rs: dropped /transactions route, kept Phase F
  /deploy-finalization-status route

Suite green post-merge: 48 block-storage + 411 casper + 101 node passed,
0 failed. Node count dropped from 108 to 101 due to rust/staging's
removal of transaction_api_test.rs and related unit tests.

Co-Authored-By: Claude <noreply@anthropic.com>
The resolver walked `main_parent_chain` from LFB backward — a linear
walk that only visits a block's first (main) parent at each step. In
a multi-parent DAG, a deploy's effects can reach canonical state via
a secondary-parent merge; the main-parent chain alone misses those
blocks, so the sig is reported Pending even after it finalized.

Fix: BFS from LFB through every parent slot (main + secondary) bounded
by deploy_lifespan depth. `visited` dedups the frontier because
multi-parent ancestries share common ancestors.

Phase G's catchup gate uses the same resolver, so it inherits the fix
automatically.

Regression test: `resolve_finds_sig_in_secondary_parent_branch`
builds a minimal DAG (genesis → A, B siblings → C with A as main,
B as secondary) and places the deploy sig only in B. The test fails
with Pending on the main-parent walk and passes with Finalized on
the BFS, locking in the semantics.

Co-Authored-By: Claude <noreply@anthropic.com>
The repeat-deploy check rejected any block whose body.deploys contained
a sig already present in an ancestor's body.deploys. This predates the
rejected-deploy-buffer recovery pipeline (Phase D): when a deploy is
rejected by a descendant merge within deploy_lifespan, the buffer
re-proposes it in a later block — a legitimate re-inclusion, not a
repeat. Without this exemption, every recovery-path block fails
validation with InvalidRepeatDeploy, the proposer retries the same
deploys, and the shard deadlocks on heartbeat propose attempts under
any merge-rejection workload.

Fix: filter sigs present in s.rejected_in_scope out of the check set
before the BFS. CasperSnapshot already computes rejected_in_scope by
walking body.rejected_deploys in the current proposal's parent scope;
prepare_user_deploys uses the same signal on the proposer side. The
validator now mirrors the proposer.

Regression test: repeat_deploy_validation_allows_recovered_deploy_from_\
rejected_in_scope builds the exact DAG shape the existing
"should not accept" test uses, then pre-populates rejected_in_scope
with the deploy's sig. Pre-fix returns Invalid(InvalidRepeatDeploy);
post-fix returns Valid.

Co-Authored-By: Claude <noreply@anthropic.com>
When dag_merger's deploy de-duplication discards a chain because some
deploy in it has a fresher copy elsewhere, deploys unique to the
discarded chain were silently dropped — not added to the rejected-deploy
buffer, not in rejected_in_scope, and the deployer had no signal.

Collect collateral-lost deploys (those unique to a dropped chain) into
the rejected-user list so the buffer can recover them in a subsequent
block, mirroring how conflict-rejected deploys recover.

Co-Authored-By: Claude <noreply@anthropic.com>
…inclusion

deploy_finalization_status::resolve was invalidating a clean finalized
inclusion if any rejection at a strictly higher height was observed. In
multi-parent DAGs, a rejection in a sibling block at the same or higher
height does not affect a deploy's effects in a canonical block on a
different chain. Recovery cycles via the rejected-deploy buffer can also
produce rejection events in non-canonical sibling blocks (validators
racing to recover the same deploy), and the height-only check turned
those into a positive feedback loop where the deploy stayed Pending
while the buffer kept re-proposing.

Track each rejection's block hash alongside its height and require the
rejection block to be a canonical-chain descendant of the clean block
(via is_in_main_chain) before invalidating. Same-block rejections (the
clean inclusion and rejection share a block — e.g., a recovery proposal
whose merge step also dedup-rejected an older copy in scope) are
excluded explicitly.

Co-Authored-By: Claude <noreply@anthropic.com>
casper/tests/mod.rs defines an init_logger() guarded by Once, but it had
no callers in the test tree. Production code with tracing::debug!/info!/
warn! calls produced no output during tests, making diagnostic logs
useless when investigating failures.

Wire init_logger() into TestNode::create_network so any test that builds
a network gets a tracing subscriber wired up with EnvFilter respecting
RUST_LOG. Behavior is unchanged when RUST_LOG is unset (default ERROR
level filter).

Co-Authored-By: Claude <noreply@anthropic.com>
Generalize the mergeable-channel mechanism from a single integer-add tag
to a typed registry of `(Par, MergeType)` pairs. Adds a second tag with
bitmask-OR semantics, exposed to system contracts via
`rho:system:bitmaskMergeableTag`, and rewires Registry.rho's TreeHashMap
interior-node channels to use it. Two concurrent `insertArbitrary` calls
that previously conflicted on the registry's bitmap now merge cleanly.

Resolves the `test_contract_lifecycle` integration regression where
bridge2 was rejected at multi-parent merge with `rejection_count >= 2`,
state Pending. With this change the integration suite goes from 0/7
(all errored at fixture setup because bridge2 never finalized) to 6/7
stable across two consecutive runs.

Architecture:

- `MergeType { IntegerAdd, BitmaskOr }` lives in rspace++ alongside
  the merger logic. `combine_mergeable_value` dispatches on type:
  IntegerAdd uses wrapping addition, BitmaskOr uses `(a as u64) | (b as u64)`.
- `NumberChannelsDiff`, `NumberChannelsEndVal`, and `NumberChannel`
  carry `(i64, MergeType)` so the merge type travels with the value
  through every aggregation site.
- `EventLogIndex::combine` and `cal_merged_result` in `conflict_set_merger`
  use `combine_mergeable_value` to dispatch correctly when a chain
  index combines diffs across deploys; assert that branches agree on
  merge_type per channel.
- `Reduce.mergeable_tags: Arc<HashMap<Par, MergeType>>` replaces the
  single-tag `mergeable_tag_name: Par`. `is_mergeable_channel` returns
  `Option<MergeType>`, looked up by tuple-channel-head Par.
- `try_get_number_with_rnd` returns `Option` for non-numeric channel
  values; the read path skips them gracefully so TreeHashMap leaf
  Maps tagged via the parent registry tuple don't crash the merger.
- Genesis exposes `default_mergeable_tags()` containing both standard
  tags. Setup paths that previously passed
  `Genesis::non_negative_mergeable_tag_name()` now pass an Arc of this
  map; tests follow the same pattern.

Registry.rho:

- Outer `new`-block binds `bitmaskTag(`rho:system:bitmaskMergeableTag`)`.
- TreeHashMap channel construction switches from
  `@[node, *storeToken]` (Rholang list, never matched by the tuple
  detector) to `@(*bitmaskTag, node, *storeToken)` (genuine tuple
  with the mergeable-tag prefix). Same change for `MakeNode`,
  `nodeGet`, `TreeHashMapSetter`, and the bitmap-update sites.

Test infrastructure:

- New `casper/tests/multi_node/bridge_contract_concurrent_merge.rs`
  reproduces the integration failure mode at unit level: a 3-validator
  in-process network deploys two bridge-v2.rho contracts plus a third
  sibling, syncs, and proposes a multi-parent merge block. Asserts
  `merge_block.body.rejected_deploys` does not contain bridge1 or
  bridge2 sigs. Verified to FAIL when the Registry.rho tuple syntax is
  reverted (bridge2 rejected) and PASS with the fix in place.
- Bridge-v2.rho copied from the integration test resources verbatim.

Diagnostic tracing:

- `f1r3fly.merge.tag_check` target. INFO log on URI binding insertion
  (one-shot at runtime startup) and on URI lookup at deploy. TRACE log
  per `is_mergeable_channel` call distinguishing match vs miss with
  hex bytes for both the channel head and registered tags. Used to
  confirm the fix engages: 2437 BitmaskOr hits per multi-validator
  bridge merge in the unit-level repro.

Out of scope:

- The `test_multi_block_state_evolution` integration test still fails
  with `empty par list from deployId channel` after this fix. That is
  a separate, previously-tracked consensus bug (memory:
  "Empty deployId after finalization") affecting per-bridge-instance
  state channels, not the registry. PR #483 fixed the deterministic
  case; the integration-level case remains.

Co-Authored-By: Claude <noreply@anthropic.com>
Without findOrCreate the bridge's _deposit contract is never installed,
so transfers to the bridge orphan their deposit send and the response
chain hangs. Required for integration test_multi_block_state_evolution.

Co-Authored-By: Claude <noreply@anthropic.com>
… merge type

Documents lessons from the bridge findOrCreate orphan-deposit investigation
and the bitmask-OR mergeable channel addition: silent failure on sends to
unregistered receivers, why both vault endpoints need findOrCreate before
transfer, the self-registering contract pattern, the block report API for
tracing tuplespace events, and the new MergeType::BitmaskOr for registry
TreeHashMap concurrency.

Co-Authored-By: Claude <noreply@anthropic.com>
… of truth

Test runtimes built via with_runtime were passing an empty mergeable_tags
map, so the rho:system:bitmaskMergeableTag URI never bound and any test
loading Registry.rho panicked with "No value set for ..." at evaluation
time. Production runtime was unaffected because it constructs tags via
Genesis::default_mergeable_tags().

Move the constants, derivation primitives, and tag-table aggregator from
casper into rholang/src/rust/interpreter/merging/mergeable_tags.rs.
Casper genesis methods become 1-line delegates. NON_NEGATIVE_NUMBER_PK
and NON_NEGATIVE_NUMBER_TIMESTAMP are also used to sign NonNegativeNumber.rho
at genesis, so they're re-exported from rholang via pub use to keep
casper's deploy-builder call sites unchanged.

Single source of truth — no duplication, no sync test, test runtimes use
the same tag identities production uses.

Co-Authored-By: Claude <noreply@anthropic.com>
…or-mergeable-channels

# Conflicts:
#	casper/src/rust/api/deploy_finalization_status.rs
#	casper/src/rust/blocks/proposer/block_creator.rs
#	casper/src/rust/merging/dag_merger.rs
#	casper/src/rust/merging/rejected_slash.rs
#	casper/src/rust/multi_parent_casper_impl.rs
#	casper/src/rust/util/rholang/interpreter_util.rs
#	casper/src/rust/validate.rs
#	casper/tests/api/deploy_finalization_status_test.rs
#	casper/tests/batch2/validate_test.rs
#	casper/tests/compute_parents_post_state_regression_spec.rs
#	casper/tests/util/rholang/runtime_manager_test.rs
@spreston8 spreston8 marked this pull request as ready for review May 2, 2026 02:23
@spreston8 spreston8 requested a review from metaweta May 2, 2026 02:23
spreston8 added 2 commits May 2, 2026 13:26
…exemption

prepare_user_deploys exempts deploys in `rejected_in_scope` from the
in-scope filter so genuinely rejected deploys can be re-proposed. Without
a canonical-descendant gate, the exemption also fires when the rejection
sits in a non-canonical sibling while the deploy's effects are already in
canonical state — producing a recovery block that downstream validators
correctly flag as `InvalidRepeatDeploy`. On FTT=0 shards this triggers
mutual slashing.

Mirror the validator-side `repeat_deploy` gate at the proposer: resolve
the candidate sigs in batch and decline the exemption when status is
`Finalized`. Resolver failure → decline conservatively.

Tests:
  - validator-side defense regression (already passes pre-fix)
  - proposer-side gate (RED pre-fix, GREEN post-fix)
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant